ocnn 2.2.4.tar.gz → 2.2.6.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (44)
  1. {ocnn-2.2.4/ocnn.egg-info → ocnn-2.2.6}/PKG-INFO +33 -22
  2. ocnn-2.2.6/README.md +72 -0
  3. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/__init__.py +1 -1
  4. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/autoencoder.py +2 -1
  5. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/ounet.py +1 -2
  6. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/octree/octree.py +24 -3
  7. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/octree/points.py +1 -1
  8. {ocnn-2.2.4 → ocnn-2.2.6/ocnn.egg-info}/PKG-INFO +33 -22
  9. {ocnn-2.2.4 → ocnn-2.2.6}/setup.py +1 -1
  10. ocnn-2.2.4/README.md +0 -61
  11. {ocnn-2.2.4 → ocnn-2.2.6}/LICENSE +0 -0
  12. {ocnn-2.2.4 → ocnn-2.2.6}/MANIFEST.in +0 -0
  13. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/dataset.py +0 -0
  14. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/__init__.py +0 -0
  15. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/hrnet.py +0 -0
  16. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/image2shape.py +0 -0
  17. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/lenet.py +0 -0
  18. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/resnet.py +0 -0
  19. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/segnet.py +0 -0
  20. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/unet.py +0 -0
  21. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/modules/__init__.py +0 -0
  22. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/modules/modules.py +0 -0
  23. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/modules/resblocks.py +0 -0
  24. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/__init__.py +0 -0
  25. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree2col.py +0 -0
  26. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree2vox.py +0 -0
  27. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_align.py +0 -0
  28. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_conv.py +0 -0
  29. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_drop.py +0 -0
  30. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_dwconv.py +0 -0
  31. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_gconv.py +0 -0
  32. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_interp.py +0 -0
  33. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_norm.py +0 -0
  34. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_pad.py +0 -0
  35. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/nn/octree_pool.py +0 -0
  36. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/octree/__init__.py +0 -0
  37. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/octree/shuffled_key.py +0 -0
  38. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn/utils.py +0 -0
  39. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn.egg-info/SOURCES.txt +0 -0
  40. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn.egg-info/dependency_links.txt +0 -0
  41. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn.egg-info/not-zip-safe +0 -0
  42. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn.egg-info/requires.txt +0 -0
  43. {ocnn-2.2.4 → ocnn-2.2.6}/ocnn.egg-info/top_level.txt +0 -0
  44. {ocnn-2.2.4 → ocnn-2.2.6}/setup.cfg +0 -0
{ocnn-2.2.4/ocnn.egg-info → ocnn-2.2.6}/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: ocnn
-Version: 2.2.4
+Version: 2.2.6
 Summary: Octree-based Sparse Convolutional Neural Networks
 Home-page: https://github.com/octree-nn/ocnn-pytorch
 Author: Peng-Shuai Wang
@@ -28,30 +28,41 @@ Requires-Dist: packaging
 
 This repository contains the **pure PyTorch**-based implementation of
 [O-CNN](https://wang-ps.github.io/O-CNN.html). The code has been tested with
-`Pytorch>=1.6.0`, and `Pytorch>=1.9.0` is preferred.
+`Pytorch>=1.6.0`, and `Pytorch>=1.9.0` is preferred. The *original*
+implementation of O-CNN is based on C++ and CUDA and can be found
+[here](https://github.com/Microsoft/O-CNN), which has received
+[![stars - O-CNN](https://img.shields.io/github/stars/microsoft/O-CNN?style=social)](https://github.com/microsoft/O-CNN) and
+[![forks - O-CNN](https://img.shields.io/github/forks/microsoft/O-CNN?style=social)](https://github.com/microsoft/O-CNN).
 
-O-CNN is an octree-based sparse convolutional neural network framework for 3D
-deep learning. O-CNN constrains the CNN storage and computation into non-empty
-sparse voxels for efficiency and uses the `octree` data structure to organize
-and index these sparse voxels.
 
-The concept of sparse convolution in O-CNN is the same with
-[H-CNN](https://ieeexplore.ieee.org/abstract/document/8580422),
+O-CNN is an octree-based 3D convolutional neural network framework for 3D data.
+O-CNN constrains the CNN storage and computation into non-empty sparse voxels
+for efficiency and uses the `octree` data structure to organize and index these
+sparse voxels. Currently, this type of 3D convolution is known as Sparse
+Convolution in the research community.
+
+
+The concept of Sparse Convolution in O-CNN is the same with
 [SparseConvNet](https://openaccess.thecvf.com/content_cvpr_2018/papers/Graham_3D_Semantic_Segmentation_CVPR_2018_paper.pdf),
-and
-[MinkowskiNet](https://openaccess.thecvf.com/content_CVPR_2019/papers/Choy_4D_Spatio-Temporal_ConvNets_Minkowski_Convolutional_Neural_Networks_CVPR_2019_paper.pdf).
-The key difference is that our O-CNN uses the `octree` to index the sparse
-voxels, while these 3 works use the `Hash Table`.
-
-Our O-CNN is published in SIGGRAPH 2017, H-CNN is published in TVCG 2018,
-SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
-CVPR 2019. Actually, our O-CNN was submitted to SIGGRAPH in the end of 2016 and
-was officially accepted in March, 2017. The camera-ready version of our O-CNN was
-submitted to SIGGRAPH in April, 2017. We just did not post our paper on Arxiv
-during the review process of SIGGRAPH. Therefore, **the idea of constraining CNN
-computation into sparse non-emtpry voxels is first proposed by our O-CNN**.
-Currently, this type of 3D convolution is known as Sparse Convolution in the
-research community.
+[MinkowskiNet](https://github.com/NVIDIA/MinkowskiEngine), and
+[SpConv](https://github.com/traveller59/spconv).
+The key difference is that our O-CNN uses `octrees` to index the sparse voxels,
+while these works use `Hash Tables`. However, I believe that `octrees` may be
+the right choice for Sparse Convolution. With `octrees`, I can implement the
+Sparse Convolution with pure PyTorch. More importantly, with `octrees`, I can
+also build efficient transformers for 3D data --
+[OctFormer](https://github.com/octree-nn/octformer), which is extremely hard
+with `Hash Tables`.
+
+
+Our O-CNN is published in SIGGRAPH 2017, SparseConvNet is published in CVPR
+2018, and MinkowskiNet is published in CVPR 2019. Actually, our O-CNN was
+submitted to SIGGRAPH in the end of 2016 and was officially accepted in March,
+2017. <!-- The camera-ready version of our O-CNN was submitted to SIGGRAPH in April, 2018. -->
+We just did not post our paper on Arxiv during the review process of SIGGRAPH.
+Therefore, **the idea of constraining CNN computation into sparse non-emtpry
+voxels, i.e. Sparse Convolution, is first proposed by our O-CNN**.
+
 
 ## Key benefits of ocnn-pytorch
 
ocnn-2.2.6/README.md ADDED
@@ -0,0 +1,72 @@
+# O-CNN
+
+**[Documentation](https://ocnn-pytorch.readthedocs.io)**
+
+[![Documentation Status](https://readthedocs.org/projects/ocnn-pytorch/badge/?version=latest)](https://ocnn-pytorch.readthedocs.io/en/latest/?badge=latest)
+[![Downloads](https://static.pepy.tech/badge/ocnn)](https://pepy.tech/project/ocnn)
+[![Downloads](https://static.pepy.tech/badge/ocnn/month)](https://pepy.tech/project/ocnn)
+[![PyPI](https://img.shields.io/pypi/v/ocnn)](https://pypi.org/project/ocnn/)
+
+This repository contains the **pure PyTorch**-based implementation of
+[O-CNN](https://wang-ps.github.io/O-CNN.html). The code has been tested with
+`Pytorch>=1.6.0`, and `Pytorch>=1.9.0` is preferred. The *original*
+implementation of O-CNN is based on C++ and CUDA and can be found
+[here](https://github.com/Microsoft/O-CNN), which has received
+[![stars - O-CNN](https://img.shields.io/github/stars/microsoft/O-CNN?style=social)](https://github.com/microsoft/O-CNN) and
+[![forks - O-CNN](https://img.shields.io/github/forks/microsoft/O-CNN?style=social)](https://github.com/microsoft/O-CNN).
+
+
+O-CNN is an octree-based 3D convolutional neural network framework for 3D data.
+O-CNN constrains the CNN storage and computation into non-empty sparse voxels
+for efficiency and uses the `octree` data structure to organize and index these
+sparse voxels. Currently, this type of 3D convolution is known as Sparse
+Convolution in the research community.
+
+
+The concept of Sparse Convolution in O-CNN is the same with
+[SparseConvNet](https://openaccess.thecvf.com/content_cvpr_2018/papers/Graham_3D_Semantic_Segmentation_CVPR_2018_paper.pdf),
+[MinkowskiNet](https://github.com/NVIDIA/MinkowskiEngine), and
+[SpConv](https://github.com/traveller59/spconv).
+The key difference is that our O-CNN uses `octrees` to index the sparse voxels,
+while these works use `Hash Tables`. However, I believe that `octrees` may be
+the right choice for Sparse Convolution. With `octrees`, I can implement the
+Sparse Convolution with pure PyTorch. More importantly, with `octrees`, I can
+also build efficient transformers for 3D data --
+[OctFormer](https://github.com/octree-nn/octformer), which is extremely hard
+with `Hash Tables`.
+
+
+Our O-CNN is published in SIGGRAPH 2017, SparseConvNet is published in CVPR
+2018, and MinkowskiNet is published in CVPR 2019. Actually, our O-CNN was
+submitted to SIGGRAPH in the end of 2016 and was officially accepted in March,
+2017. <!-- The camera-ready version of our O-CNN was submitted to SIGGRAPH in April, 2018. -->
+We just did not post our paper on Arxiv during the review process of SIGGRAPH.
+Therefore, **the idea of constraining CNN computation into sparse non-emtpry
+voxels, i.e. Sparse Convolution, is first proposed by our O-CNN**.
+
+
+## Key benefits of ocnn-pytorch
+
+- **Simplicity**. The ocnn-pytorch is based on pure PyTorch, it is portable and
+  can be installed with a simple command:`pip install ocnn`. Other sparse
+  convolution frameworks heavily rely on C++ and CUDA, and it is complicated to
+  configure the compiling environment.
+
+- **Efficiency**. The ocnn-pytorch is very efficient compared with other sparse
+  convolution frameworks. It only takes 18 hours to train the network on
+  ScanNet for 600 epochs with 4 V100 GPUs. For reference, under the same
+  training settings, MinkowskiNet 0.4.3 takes 60 hours and MinkowskiNet 0.5.4
+  takes 30 hours.
+
+## Citation
+
+```bibtex
+@article {Wang-2017-ocnn,
+  title   = {{O-CNN}: Octree-based Convolutional Neural Networksfor {3D} Shape Analysis},
+  author  = {Wang, Peng-Shuai and Liu, Yang and Guo, Yu-Xiao and Sun, Chun-Yu and Tong, Xin},
+  journal = {ACM Transactions on Graphics (SIGGRAPH)},
+  volume  = {36},
+  number  = {4},
+  year    = {2017},
+}
+```
{ocnn-2.2.4 → ocnn-2.2.6}/ocnn/__init__.py
@@ -12,7 +12,7 @@ from . import models
 from . import dataset
 from . import utils
 
-__version__ = '2.2.4'
+__version__ = '2.2.6'
 
 __all__ = [
   'octree',
{ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/autoencoder.py
@@ -33,8 +33,9 @@ class AutoEncoder(torch.nn.Module):
     self.full_depth = full_depth
     self.feature = feature
     self.resblk_num = 2
-    self.code_channel = 64  # dim-of-code = code_channel * 2**(3*full_depth)
     self.channels = [512, 512, 256, 256, 128, 128, 32, 32, 16, 16]
+    # dim-of-code = code_channel * 2**(3*full_depth)
+    self.code_channel = self.channels[full_depth]
 
     # encoder
     self.conv1 = ocnn.modules.OctreeConvBnRelu(
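
Note: the hunk above replaces the hard-coded `code_channel = 64` with `channels[full_depth]`, so the latent code width now follows the channel count of the level at which encoding stops. A small sketch of the resulting code dimension, reusing the `channels` list and the `dim-of-code` comment from the diff (the helper and values are illustrative, not part of ocnn):

```python
channels = [512, 512, 256, 256, 128, 128, 32, 32, 16, 16]

def code_dim(full_depth: int) -> int:
  # dim-of-code = code_channel * 2**(3*full_depth), per the comment in the diff
  code_channel = channels[full_depth]   # 2.2.4 hard-coded 64 here
  return code_channel * 2 ** (3 * full_depth)

print(code_dim(2))  # 256 * 8**2 = 16384 (with the old fixed 64: 64 * 64 = 4096)
```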
{ocnn-2.2.4 → ocnn-2.2.6}/ocnn/models/ounet.py
@@ -18,8 +18,7 @@ class OUNet(AutoEncoder):
 
   def __init__(self, channel_in: int, channel_out: int, depth: int,
                full_depth: int = 2, feature: str = 'ND'):
-    super().__init__(channel_in, channel_out, depth, full_depth, feature,
-                     code_channel=-1)  # !set code_channe=-1
+    super().__init__(channel_in, channel_out, depth, full_depth, feature)
     self.proj = None  # remove this module used in AutoEncoder
 
   def encoder(self, octree):
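
Note: this fix follows from the autoencoder change above — `AutoEncoder.__init__` no longer accepts a `code_channel` argument, so the old call would raise a `TypeError` in 2.2.6. A hedged construction sketch; the signature comes from the diff context, and the argument values are illustrative only:

```python
import ocnn

# channel_in=4 is assumed here to match the default feature string 'ND'.
model = ocnn.models.OUNet(channel_in=4, channel_out=4, depth=6, full_depth=2)

# The removed keyword would now fail on the parent class:
# ocnn.models.AutoEncoder(4, 4, 6, 2, 'ND', code_channel=-1)
# -> TypeError: __init__() got an unexpected keyword argument 'code_channel'
```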
{ocnn-2.2.4 → ocnn-2.2.6}/ocnn/octree/octree.py
@@ -35,7 +35,9 @@ class Octree:
   and :obj:`points`, contain only non-empty nodes.
 
   .. note::
-    The point cloud must be in range :obj:`[-1, 1]`.
+    The point cloud must be strictly in range :obj:`[-1, 1]`. A good practice
+    is to normalize it into :obj:`[-0.99, 0.99]` or :obj:`[0.9, 0.9]` to retain
+    some margin.
   '''
 
   def __init__(self, depth: int, full_depth: int = 2, batch_size: int = 1,
@@ -151,7 +153,9 @@
       point_cloud (Points): The input point cloud.
 
     .. note::
-      Currently, the batch size of the point cloud must be 1.
+      The point cloud must be strictly in range :obj:`[-1, 1]`. A good practice
+      is to normalize it into :obj:`[-0.99, 0.99]` or :obj:`[0.9, 0.9]` to retain
+      some margin.
     '''
 
     self.device = point_cloud.device
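
Note: both docstring hunks above add the same requirement — points must be normalized before building an octree (`:obj:`[0.9, 0.9]`` in the new text is presumably a typo for `[-0.9, 0.9]`). A minimal normalization sketch, assuming a raw `(N, 3)` float tensor; the helper name is mine, not part of ocnn:

```python
import torch

def normalize_points(pts: torch.Tensor, margin: float = 0.99) -> torch.Tensor:
  # Center the cloud, then scale its largest extent so every coordinate
  # lies strictly inside [-margin, margin], i.e. within [-1, 1] with slack.
  center = (pts.max(dim=0).values + pts.min(dim=0).values) / 2.0
  pts = pts - center
  return pts * (margin / pts.abs().max())

raw = torch.rand(1000, 3) * 10 - 5   # arbitrary raw coordinates
pts = normalize_points(raw)          # strictly inside [-0.99, 0.99]
```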
@@ -176,7 +180,7 @@
     for d in range(self.depth, self.full_depth, -1):
       # compute parent key, i.e. keys of layer (d-1)
       pkey = node_key >> 3
-      pkey, pidx, pcounts = torch.unique_consecutive(
+      pkey, pidx, _ = torch.unique_consecutive(
           pkey, return_inverse=True, return_counts=True)
 
       # augmented key
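
Note: the `pcounts` → `_` change above is a cleanup only; the counts returned by `torch.unique_consecutive` were never used in this loop. For reference, a small standalone example of the three return values:

```python
import torch

# Six parent keys as produced by the right-shift above (already sorted).
pkeys = torch.tensor([8, 8, 9, 12, 12, 12])
out, inverse, counts = torch.unique_consecutive(
    pkeys, return_inverse=True, return_counts=True)
print(out)      # tensor([ 8,  9, 12])        deduplicated keys
print(inverse)  # tensor([0, 0, 1, 2, 2, 2])  child-to-parent index (pidx)
print(counts)   # tensor([2, 1, 3])           unused by the loop, hence `_`
```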
@@ -287,6 +291,23 @@
       update_neigh (bool): If True, construct the neighborhood indices.
     '''
 
+    # increase the octree depth if required
+    if depth > self.depth:
+      assert depth == self.depth + 1
+      self.depth = depth
+      self.keys.append(None)
+      self.children.append(None)
+      self.neighs.append(None)
+      self.features.append(None)
+      self.normals.append(None)
+      self.points.append(None)
+      zero = torch.zeros(1, dtype=torch.long)
+      self.nnum = torch.cat([self.nnum, zero])
+      self.nnum_nempty = torch.cat([self.nnum_nempty, zero])
+      zero = zero.view(1, 1)
+      self.batch_nnum = torch.cat([self.batch_nnum, zero], dim=0)
+      self.batch_nnum_nempty = torch.cat([self.batch_nnum_nempty, zero], dim=0)
+
     # node number
     nnum = self.nnum_nempty[depth-1] * 8
     self.nnum[depth] = nnum
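
Note: the new block above lets the growth method extend an octree one level past its current depth, appending empty per-level storage (`keys`, `children`, `neighs`, feature/normal/point slots) and zero node counts before the usual logic runs. A hedged usage sketch — the method name `octree_grow` matches the ocnn API, but the call sequence is illustrative only:

```python
from ocnn.octree import Octree

octree = Octree(depth=4, full_depth=2)
# ... assume the octree has been built from a point cloud up to depth 4 ...

# In 2.2.6, growing one level past the current depth first appends the empty
# slots shown in the diff, then fills in the node numbers for the new level:
octree.octree_grow(5)      # allowed: 5 == octree.depth + 1 before the call

# Growing by more than one level at a time still trips the assertion:
# octree.octree_grow(7)    # AssertionError: depth must increase by exactly 1
```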
{ocnn-2.2.4 → ocnn-2.2.6}/ocnn/octree/points.py
@@ -63,9 +63,9 @@ class Points:
     if self.batch_id is not None:
       assert self.batch_id.dim() == 2 or self.batch_id.dim() == 1
       assert self.batch_id.size(0) == self.points.size(0)
-      assert self.batch_id.size(1) == 1
       if self.batch_id.dim() == 1:
         self.batch_id = self.batch_id.unsqueeze(1)
+      assert self.batch_id.size(1) == 1
 
   @property
   def npt(self):
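
Note: this reorder fixes a real crash rather than style. Calling `size(1)` on a 1-D tensor raises `IndexError`, so in 2.2.4 a per-point 1-D `batch_id` failed on the assertion before the `unsqueeze` could reshape it. A minimal repro of the difference:

```python
import torch

batch_id = torch.zeros(100, dtype=torch.long)  # 1-D: one id per point
# 2.2.4 order -- validate the column count first:
# batch_id.size(1)                   # IndexError: dimension out of range
# 2.2.6 order -- reshape first, then validate:
if batch_id.dim() == 1:
  batch_id = batch_id.unsqueeze(1)   # shape (100, 1)
assert batch_id.size(1) == 1         # now well-defined
```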
{ocnn-2.2.4 → ocnn-2.2.6/ocnn.egg-info}/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: ocnn
-Version: 2.2.4
+Version: 2.2.6
 Summary: Octree-based Sparse Convolutional Neural Networks
 Home-page: https://github.com/octree-nn/ocnn-pytorch
 Author: Peng-Shuai Wang
@@ -28,30 +28,41 @@ Requires-Dist: packaging
 
 This repository contains the **pure PyTorch**-based implementation of
 [O-CNN](https://wang-ps.github.io/O-CNN.html). The code has been tested with
-`Pytorch>=1.6.0`, and `Pytorch>=1.9.0` is preferred.
+`Pytorch>=1.6.0`, and `Pytorch>=1.9.0` is preferred. The *original*
+implementation of O-CNN is based on C++ and CUDA and can be found
+[here](https://github.com/Microsoft/O-CNN), which has received
+[![stars - O-CNN](https://img.shields.io/github/stars/microsoft/O-CNN?style=social)](https://github.com/microsoft/O-CNN) and
+[![forks - O-CNN](https://img.shields.io/github/forks/microsoft/O-CNN?style=social)](https://github.com/microsoft/O-CNN).
 
-O-CNN is an octree-based sparse convolutional neural network framework for 3D
-deep learning. O-CNN constrains the CNN storage and computation into non-empty
-sparse voxels for efficiency and uses the `octree` data structure to organize
-and index these sparse voxels.
 
-The concept of sparse convolution in O-CNN is the same with
-[H-CNN](https://ieeexplore.ieee.org/abstract/document/8580422),
+O-CNN is an octree-based 3D convolutional neural network framework for 3D data.
+O-CNN constrains the CNN storage and computation into non-empty sparse voxels
+for efficiency and uses the `octree` data structure to organize and index these
+sparse voxels. Currently, this type of 3D convolution is known as Sparse
+Convolution in the research community.
+
+
+The concept of Sparse Convolution in O-CNN is the same with
 [SparseConvNet](https://openaccess.thecvf.com/content_cvpr_2018/papers/Graham_3D_Semantic_Segmentation_CVPR_2018_paper.pdf),
-and
-[MinkowskiNet](https://openaccess.thecvf.com/content_CVPR_2019/papers/Choy_4D_Spatio-Temporal_ConvNets_Minkowski_Convolutional_Neural_Networks_CVPR_2019_paper.pdf).
-The key difference is that our O-CNN uses the `octree` to index the sparse
-voxels, while these 3 works use the `Hash Table`.
-
-Our O-CNN is published in SIGGRAPH 2017, H-CNN is published in TVCG 2018,
-SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
-CVPR 2019. Actually, our O-CNN was submitted to SIGGRAPH in the end of 2016 and
-was officially accepted in March, 2017. The camera-ready version of our O-CNN was
-submitted to SIGGRAPH in April, 2017. We just did not post our paper on Arxiv
-during the review process of SIGGRAPH. Therefore, **the idea of constraining CNN
-computation into sparse non-emtpry voxels is first proposed by our O-CNN**.
-Currently, this type of 3D convolution is known as Sparse Convolution in the
-research community.
+[MinkowskiNet](https://github.com/NVIDIA/MinkowskiEngine), and
+[SpConv](https://github.com/traveller59/spconv).
+The key difference is that our O-CNN uses `octrees` to index the sparse voxels,
+while these works use `Hash Tables`. However, I believe that `octrees` may be
+the right choice for Sparse Convolution. With `octrees`, I can implement the
+Sparse Convolution with pure PyTorch. More importantly, with `octrees`, I can
+also build efficient transformers for 3D data --
+[OctFormer](https://github.com/octree-nn/octformer), which is extremely hard
+with `Hash Tables`.
+
+
+Our O-CNN is published in SIGGRAPH 2017, SparseConvNet is published in CVPR
+2018, and MinkowskiNet is published in CVPR 2019. Actually, our O-CNN was
+submitted to SIGGRAPH in the end of 2016 and was officially accepted in March,
+2017. <!-- The camera-ready version of our O-CNN was submitted to SIGGRAPH in April, 2018. -->
+We just did not post our paper on Arxiv during the review process of SIGGRAPH.
+Therefore, **the idea of constraining CNN computation into sparse non-emtpry
+voxels, i.e. Sparse Convolution, is first proposed by our O-CNN**.
+
 
 ## Key benefits of ocnn-pytorch
 
{ocnn-2.2.4 → ocnn-2.2.6}/setup.py
@@ -7,7 +7,7 @@
 
 from setuptools import setup, find_packages
 
-__version__ = '2.2.4'
+__version__ = '2.2.6'
 
 with open("README.md", "r", encoding="utf-8") as fid:
   long_description = fid.read()
ocnn-2.2.4/README.md DELETED
@@ -1,61 +0,0 @@
-# O-CNN
-
-**[Documentation](https://ocnn-pytorch.readthedocs.io)**
-
-[![Documentation Status](https://readthedocs.org/projects/ocnn-pytorch/badge/?version=latest)](https://ocnn-pytorch.readthedocs.io/en/latest/?badge=latest)
-[![Downloads](https://static.pepy.tech/badge/ocnn)](https://pepy.tech/project/ocnn)
-[![Downloads](https://static.pepy.tech/badge/ocnn/month)](https://pepy.tech/project/ocnn)
-[![PyPI](https://img.shields.io/pypi/v/ocnn)](https://pypi.org/project/ocnn/)
-
-This repository contains the **pure PyTorch**-based implementation of
-[O-CNN](https://wang-ps.github.io/O-CNN.html). The code has been tested with
-`Pytorch>=1.6.0`, and `Pytorch>=1.9.0` is preferred.
-
-O-CNN is an octree-based sparse convolutional neural network framework for 3D
-deep learning. O-CNN constrains the CNN storage and computation into non-empty
-sparse voxels for efficiency and uses the `octree` data structure to organize
-and index these sparse voxels.
-
-The concept of sparse convolution in O-CNN is the same with
-[H-CNN](https://ieeexplore.ieee.org/abstract/document/8580422),
-[SparseConvNet](https://openaccess.thecvf.com/content_cvpr_2018/papers/Graham_3D_Semantic_Segmentation_CVPR_2018_paper.pdf),
-and
-[MinkowskiNet](https://openaccess.thecvf.com/content_CVPR_2019/papers/Choy_4D_Spatio-Temporal_ConvNets_Minkowski_Convolutional_Neural_Networks_CVPR_2019_paper.pdf).
-The key difference is that our O-CNN uses the `octree` to index the sparse
-voxels, while these 3 works use the `Hash Table`.
-
-Our O-CNN is published in SIGGRAPH 2017, H-CNN is published in TVCG 2018,
-SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
-CVPR 2019. Actually, our O-CNN was submitted to SIGGRAPH in the end of 2016 and
-was officially accepted in March, 2017. The camera-ready version of our O-CNN was
-submitted to SIGGRAPH in April, 2017. We just did not post our paper on Arxiv
-during the review process of SIGGRAPH. Therefore, **the idea of constraining CNN
-computation into sparse non-emtpry voxels is first proposed by our O-CNN**.
-Currently, this type of 3D convolution is known as Sparse Convolution in the
-research community.
-
-## Key benefits of ocnn-pytorch
-
-- **Simplicity**. The ocnn-pytorch is based on pure PyTorch, it is portable and
-  can be installed with a simple command:`pip install ocnn`. Other sparse
-  convolution frameworks heavily rely on C++ and CUDA, and it is complicated to
-  configure the compiling environment.
-
-- **Efficiency**. The ocnn-pytorch is very efficient compared with other sparse
-  convolution frameworks. It only takes 18 hours to train the network on
-  ScanNet for 600 epochs with 4 V100 GPUs. For reference, under the same
-  training settings, MinkowskiNet 0.4.3 takes 60 hours and MinkowskiNet 0.5.4
-  takes 30 hours.
-
-## Citation
-
-```bibtex
-@article {Wang-2017-ocnn,
-  title   = {{O-CNN}: Octree-based Convolutional Neural Networksfor {3D} Shape Analysis},
-  author  = {Wang, Peng-Shuai and Liu, Yang and Guo, Yu-Xiao and Sun, Chun-Yu and Tong, Xin},
-  journal = {ACM Transactions on Graphics (SIGGRAPH)},
-  volume  = {36},
-  number  = {4},
-  year    = {2017},
-}
-```