ocnn 2.2.2.tar.gz → 2.2.3.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. {ocnn-2.2.2/ocnn.egg-info → ocnn-2.2.3}/PKG-INFO +15 -3
  2. {ocnn-2.2.2 → ocnn-2.2.3}/README.md +14 -2
  3. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/__init__.py +1 -1
  4. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/lenet.py +1 -1
  5. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/__init__.py +2 -1
  6. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_conv.py +18 -0
  7. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_dwconv.py +18 -0
  8. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/octree/octree.py +2 -2
  9. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/octree/points.py +7 -2
  10. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/utils.py +7 -4
  11. {ocnn-2.2.2 → ocnn-2.2.3/ocnn.egg-info}/PKG-INFO +15 -3
  12. {ocnn-2.2.2 → ocnn-2.2.3}/setup.py +1 -1
  13. {ocnn-2.2.2 → ocnn-2.2.3}/LICENSE +0 -0
  14. {ocnn-2.2.2 → ocnn-2.2.3}/MANIFEST.in +0 -0
  15. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/dataset.py +0 -0
  16. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/__init__.py +0 -0
  17. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/autoencoder.py +0 -0
  18. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/hrnet.py +0 -0
  19. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/image2shape.py +0 -0
  20. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/ounet.py +0 -0
  21. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/resnet.py +0 -0
  22. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/segnet.py +0 -0
  23. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/models/unet.py +0 -0
  24. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/modules/__init__.py +0 -0
  25. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/modules/modules.py +0 -0
  26. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/modules/resblocks.py +0 -0
  27. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree2col.py +0 -0
  28. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree2vox.py +0 -0
  29. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_align.py +0 -0
  30. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_drop.py +0 -0
  31. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_gconv.py +0 -0
  32. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_interp.py +0 -0
  33. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_norm.py +0 -0
  34. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_pad.py +0 -0
  35. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/nn/octree_pool.py +0 -0
  36. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/octree/__init__.py +0 -0
  37. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn/octree/shuffled_key.py +0 -0
  38. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn.egg-info/SOURCES.txt +0 -0
  39. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn.egg-info/dependency_links.txt +0 -0
  40. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn.egg-info/not-zip-safe +0 -0
  41. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn.egg-info/requires.txt +0 -0
  42. {ocnn-2.2.2 → ocnn-2.2.3}/ocnn.egg-info/top_level.txt +0 -0
  43. {ocnn-2.2.2 → ocnn-2.2.3}/setup.cfg +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: ocnn
- Version: 2.2.2
+ Version: 2.2.3
  Summary: Octree-based Sparse Convolutional Neural Networks
  Home-page: https://github.com/octree-nn/ocnn-pytorch
  Author: Peng-Shuai Wang
@@ -43,14 +43,14 @@ The key difference is that our O-CNN uses the `octree` to index the sparse
  voxels, while these 3 works use the `Hash Table`.

  Our O-CNN is published in SIGGRAPH 2017, H-CNN is published in TVCG 2018,
- SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
+ SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
  CVPR 2019. Actually, our O-CNN was submitted to SIGGRAPH at the end of 2016 and
  was officially accepted in March, 2017. The camera-ready version of our O-CNN was
  submitted to SIGGRAPH in April, 2017. We just did not post our paper on Arxiv
  during the review process of SIGGRAPH. Therefore, **the idea of constraining CNN
  computation into sparse non-empty voxels is first proposed by our O-CNN**.
  Currently, this type of 3D convolution is known as Sparse Convolution in the
- research community.
+ research community.

  ## Key benefits of ocnn-pytorch

@@ -65,3 +65,15 @@ research community.
  training settings, MinkowskiNet 0.4.3 takes 60 hours and MinkowskiNet 0.5.4
  takes 30 hours.

+ ## Citation
+
+ ```bibtex
+ @article {Wang-2017-ocnn,
+   title = {{O-CNN}: Octree-based Convolutional Neural Networks for {3D} Shape Analysis},
+   author = {Wang, Peng-Shuai and Liu, Yang and Guo, Yu-Xiao and Sun, Chun-Yu and Tong, Xin},
+   journal = {ACM Transactions on Graphics (SIGGRAPH)},
+   volume = {36},
+   number = {4},
+   year = {2017},
+ }
+ ```
@@ -24,14 +24,14 @@ The key difference is that our O-CNN uses the `octree` to index the sparse
  voxels, while these 3 works use the `Hash Table`.

  Our O-CNN is published in SIGGRAPH 2017, H-CNN is published in TVCG 2018,
- SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
+ SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
  CVPR 2019. Actually, our O-CNN was submitted to SIGGRAPH at the end of 2016 and
  was officially accepted in March, 2017. The camera-ready version of our O-CNN was
  submitted to SIGGRAPH in April, 2017. We just did not post our paper on Arxiv
  during the review process of SIGGRAPH. Therefore, **the idea of constraining CNN
  computation into sparse non-empty voxels is first proposed by our O-CNN**.
  Currently, this type of 3D convolution is known as Sparse Convolution in the
- research community.
+ research community.

  ## Key benefits of ocnn-pytorch

@@ -46,3 +46,15 @@ research community.
  training settings, MinkowskiNet 0.4.3 takes 60 hours and MinkowskiNet 0.5.4
  takes 30 hours.

+ ## Citation
+
+ ```bibtex
+ @article {Wang-2017-ocnn,
+   title = {{O-CNN}: Octree-based Convolutional Neural Networks for {3D} Shape Analysis},
+   author = {Wang, Peng-Shuai and Liu, Yang and Guo, Yu-Xiao and Sun, Chun-Yu and Tong, Xin},
+   journal = {ACM Transactions on Graphics (SIGGRAPH)},
+   volume = {36},
+   number = {4},
+   year = {2017},
+ }
+ ```
@@ -12,7 +12,7 @@ from . import models
  from . import dataset
  from . import utils

- __version__ = '2.2.2'
+ __version__ = '2.2.3'

  __all__ = [
    'octree',
@@ -26,7 +26,7 @@ class LeNet(torch.nn.Module):
    self.convs = torch.nn.ModuleList([ocnn.modules.OctreeConvBnRelu(
        channels[i], channels[i+1], nempty=nempty) for i in range(stages)])
    self.pools = torch.nn.ModuleList([ocnn.nn.OctreeMaxPool(
-       nempty) for i in range(stages)])
+       nempty) for _ in range(stages)])
    self.octree2voxel = ocnn.nn.Octree2Voxel(self.nempty)
    self.header = torch.nn.Sequential(
        torch.nn.Dropout(p=0.5), # drop1
@@ -15,6 +15,7 @@ from .octree_pool import (octree_max_pool, OctreeMaxPool,
                            octree_global_pool, OctreeGlobalPool,
                            octree_avg_pool, OctreeAvgPool,)
  from .octree_conv import OctreeConv, OctreeDeconv
+ from .octree_gconv import OctreeGroupConv
  from .octree_dwconv import OctreeDWConv
  from .octree_norm import OctreeBatchNorm, OctreeGroupNorm, OctreeInstanceNorm
  from .octree_drop import OctreeDropPath
@@ -32,7 +33,7 @@ __all__ = [
    'OctreeMaxPool', 'OctreeMaxUnpool',
    'OctreeGlobalPool', 'OctreeAvgPool',
    'OctreeConv', 'OctreeDeconv',
-   'OctreeDWConv',
+   'OctreeGroupConv', 'OctreeDWConv',
    'OctreeInterp', 'OctreeUpsample',
    'OctreeInstanceNorm', 'OctreeBatchNorm', 'OctreeGroupNorm',
    'OctreeDropPath',
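Note: the two hunks above confirm that 2.2.3 re-exports the group convolution at package level. A minimal sketch of the new import path; the constructor arguments of `OctreeGroupConv` are not shown in this diff, so only the import itself is demonstrated:

```python
# Both spellings resolve to the same class as of 2.2.3. The direct module
# import already worked in 2.2.2 (octree_gconv.py is unchanged in this
# release); the package-level re-export via ocnn.nn is what is new.
from ocnn.nn import OctreeGroupConv
from ocnn.nn.octree_gconv import OctreeGroupConv as OctreeGroupConvDirect

assert OctreeGroupConv is OctreeGroupConvDirect
```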
@@ -109,6 +109,12 @@ class OctreeConvBase:
    r''' Performs the forward pass of octree-based convolution.
    '''

+   # Type check
+   if data.dtype != out.dtype:
+     data = data.to(out.dtype)
+   if weights.dtype != out.dtype:
+     weights = weights.to(out.dtype)
+
    # Initialize the buffer
    buffer = data.new_empty(self.buffer_shape)

@@ -139,6 +145,12 @@ class OctreeConvBase:
    r''' Performs the backward pass of octree-based convolution.
    '''

+   # Type check
+   if grad.dtype != out.dtype:
+     grad = grad.to(out.dtype)
+   if weights.dtype != out.dtype:
+     weights = weights.to(out.dtype)
+
    # Loop over each sub-matrix
    for i in range(self.buffer_n):
      start = i * self.buffer_h
@@ -165,6 +177,12 @@ class OctreeConvBase:
    r''' Computes the gradient of the weight matrix.
    '''

+   # Type check
+   if data.dtype != out.dtype:
+     data = data.to(out.dtype)
+   if grad.dtype != out.dtype:
+     grad = grad.to(out.dtype)
+
    # Record the shape of out
    out_shape = out.shape
    out = out.flatten(0, 1)
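The three new guards above (and the matching ones in `octree_dwconv.py` below) follow one pattern: cast `data`, `grad`, and `weights` to the dtype of the output tensor before the buffered matmul loops. A plausible motivation is mixed-precision training, where activations can arrive as `float16` while parameters stay `float32`; that reading is an inference, not something stated in the diff. A self-contained sketch of the pattern, with a hypothetical helper name:

```python
import torch

def cast_like(t: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
  # Hypothetical helper (not part of ocnn) mirroring the new guards:
  # cast only when the dtypes differ, so the common case stays a no-op.
  return t if t.dtype == ref.dtype else t.to(ref.dtype)

# Example: half-precision activations meeting float32 weights and output,
# as can happen under torch autocast.
out = torch.empty(8, 4)                         # float32 reference tensor
data = torch.randn(8, 16, dtype=torch.float16)  # float16 activations
weights = torch.randn(16, 4)                    # float32 parameters, untouched
data = cast_like(data, out)
weights = cast_like(weights, out)
torch.matmul(data, weights, out=out)            # dtypes now agree
```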
@@ -32,6 +32,12 @@ class OctreeDWConvBase(OctreeConvBase):
    r''' Performs the forward pass of octree-based convolution.
    '''

+   # Type check
+   if data.dtype != out.dtype:
+     data = data.to(out.dtype)
+   if weights.dtype != out.dtype:
+     weights = weights.to(out.dtype)
+
    # Initialize the buffer
    buffer = data.new_empty(self.buffer_shape)

@@ -62,6 +68,12 @@ class OctreeDWConvBase(OctreeConvBase):
    r''' Performs the backward pass of octree-based convolution.
    '''

+   # Type check
+   if grad.dtype != out.dtype:
+     grad = grad.to(out.dtype)
+   if weights.dtype != out.dtype:
+     weights = weights.to(out.dtype)
+
    # Loop over each sub-matrix
    for i in range(self.buffer_n):
      start = i * self.buffer_h
@@ -88,6 +100,12 @@ class OctreeDWConvBase(OctreeConvBase):
    r''' Computes the gradient of the weight matrix.
    '''

+   # Type check
+   if data.dtype != out.dtype:
+     data = data.to(out.dtype)
+   if grad.dtype != out.dtype:
+     grad = grad.to(out.dtype)
+
    # Record the shape of out
    out_shape = out.shape
    out = out.flatten(0, 1)
@@ -274,7 +274,7 @@ class Octree:
    children[0] = 0

    # update octree
-   self.children[depth] = children
+   self.children[depth] = children.int()
    self.nnum_nempty[depth] = nnum_nempty

  def octree_grow(self, depth: int, update_neigh: bool = True):
@@ -498,7 +498,7 @@ class Octree:
    # normalize xyz to [-1, 1] since the average points are in range [0, 2^d]
    if rescale:
      scale = 2 ** (1 - depth)
-     xyz = self.points[depth] * scale - 1.0
+     xyz = xyz * scale - 1.0

    # construct Points
    out = Points(xyz, self.normals[depth], self.features[depth],
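The second hunk fixes the point-extraction path: when `rescale` is set, the previously computed averaged coordinates `xyz` are now normalized, rather than the raw `self.points[depth]`. The mapping itself is unchanged and easy to verify from the comment in the hunk:

```python
# The normalization maps averaged coordinates from [0, 2^depth] to [-1, 1]
# via x * 2**(1 - depth) - 1; a quick check at depth 5:
depth = 5
scale = 2 ** (1 - depth)                 # 1/16
assert 0 * scale - 1.0 == -1.0           # lower corner maps to -1
assert 2 ** depth * scale - 1.0 == 1.0   # upper corner maps to +1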
@@ -56,11 +56,16 @@ class Points:
      assert self.features.dim() == 2
      assert self.features.size(0) == self.points.size(0)
    if self.labels is not None:
-     assert self.labels.dim() == 2
+     assert self.labels.dim() == 2 or self.labels.dim() == 1
      assert self.labels.size(0) == self.points.size(0)
+     if self.labels.dim() == 1:
+       self.labels = self.labels.unsqueeze(1)
    if self.batch_id is not None:
-     assert self.batch_id.dim() == 2 and self.batch_id.size(1) == 1
+     assert self.batch_id.dim() == 2 or self.batch_id.dim() == 1
      assert self.batch_id.size(0) == self.points.size(0)
+     assert self.batch_id.size(1) == 1
+     if self.batch_id.dim() == 1:
+       self.batch_id = self.batch_id.unsqueeze(1)

  @property
  def npt(self):
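With this change, `Points` accepts 1-D `labels` tensors and reshapes them to column vectors internally. A minimal sketch under two assumptions: the keyword name `labels` matches the attribute name, and the other per-point arguments are optional (their checks in the surrounding code are guarded by `is not None`, which suggests but does not prove optionality):

```python
import torch
import ocnn

pts = torch.rand(100, 3) * 2 - 1              # coordinates in [-1, 1]
labels = torch.randint(0, 4, (100,)).float()  # 1-D; 2.2.2 required shape (100, 1)

# Hypothetical call: keyword name and optionality assumed as noted above.
cloud = ocnn.octree.Points(pts, labels=labels)
assert cloud.labels.shape == (100, 1)         # unsqueezed internally by 2.2.3
```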
@@ -173,7 +173,7 @@ def resize_with_last_val(list_in: list, num: int = 3):
    '''

    assert (type(list_in) is list and len(list_in) < num + 1)
-   for i in range(len(list_in), num):
+   for _ in range(len(list_in), num):
      list_in.append(list_in[-1])
    return list_in

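The rename from `i` to `_` is cosmetic, but the helper is small enough to check standalone. The body below is copied from the hunk; the inputs are illustrative (e.g. expanding a scalar kernel-size spec to three spatial dimensions):

```python
def resize_with_last_val(list_in: list, num: int = 3):
  # Pad a short list with its last element until it has `num` entries.
  assert (type(list_in) is list and len(list_in) < num + 1)
  for _ in range(len(list_in), num):
    list_in.append(list_in[-1])
  return list_in

print(resize_with_last_val([2]))     # [2, 2, 2]
print(resize_with_last_val([4, 2]))  # [4, 2, 2]
```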
@@ -186,15 +186,18 @@ def list2str(list_in: list):
    return ''.join(out)


- def build_example_octree(depth: int = 5, full_depth: int = 2):
-   r''' Builds an example octree on CPU from 3 points.
+ def build_example_octree(depth: int = 5, full_depth: int = 2, pt_num: int = 3):
+   r''' Builds an example octree on CPU from at most 3 points.
    '''
    # initialize the point cloud
    points = torch.Tensor([[-1, -1, -1], [0, 0, -1], [0.0625, 0.0625, -1]])
    normals = torch.Tensor([[1, 0, 0], [-1, 0, 0], [0, 1, 0]])
    features = torch.Tensor([[1, -1], [2, -2], [3, -3]])
    labels = torch.Tensor([[0], [2], [2]])
-   point_cloud = ocnn.octree.Points(points, normals, features, labels)
+
+   assert pt_num <= 3 and pt_num > 0
+   point_cloud = ocnn.octree.Points(
+       points[:pt_num], normals[:pt_num], features[:pt_num], labels[:pt_num])

    # build octree
    octree = ocnn.octree.Octree(depth, full_depth)
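The new `pt_num` argument limits the example cloud to the first `pt_num` of the three hard-coded points. A usage sketch, assuming the helper still returns the constructed octree as in 2.2.2 (the tail of the function lies outside this hunk):

```python
import ocnn

# Build the example octree from only the first two hard-coded points.
octree = ocnn.utils.build_example_octree(depth=5, full_depth=2, pt_num=2)
```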
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: ocnn
- Version: 2.2.2
+ Version: 2.2.3
  Summary: Octree-based Sparse Convolutional Neural Networks
  Home-page: https://github.com/octree-nn/ocnn-pytorch
  Author: Peng-Shuai Wang
@@ -43,14 +43,14 @@ The key difference is that our O-CNN uses the `octree` to index the sparse
  voxels, while these 3 works use the `Hash Table`.

  Our O-CNN is published in SIGGRAPH 2017, H-CNN is published in TVCG 2018,
- SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
+ SparseConvNet is published in CVPR 2018, and MinkowskiNet is published in
  CVPR 2019. Actually, our O-CNN was submitted to SIGGRAPH at the end of 2016 and
  was officially accepted in March, 2017. The camera-ready version of our O-CNN was
  submitted to SIGGRAPH in April, 2017. We just did not post our paper on Arxiv
  during the review process of SIGGRAPH. Therefore, **the idea of constraining CNN
  computation into sparse non-empty voxels is first proposed by our O-CNN**.
  Currently, this type of 3D convolution is known as Sparse Convolution in the
- research community.
+ research community.

  ## Key benefits of ocnn-pytorch

@@ -65,3 +65,15 @@ research community.
  training settings, MinkowskiNet 0.4.3 takes 60 hours and MinkowskiNet 0.5.4
  takes 30 hours.

+ ## Citation
+
+ ```bibtex
+ @article {Wang-2017-ocnn,
+   title = {{O-CNN}: Octree-based Convolutional Neural Networks for {3D} Shape Analysis},
+   author = {Wang, Peng-Shuai and Liu, Yang and Guo, Yu-Xiao and Sun, Chun-Yu and Tong, Xin},
+   journal = {ACM Transactions on Graphics (SIGGRAPH)},
+   volume = {36},
+   number = {4},
+   year = {2017},
+ }
+ ```
@@ -7,7 +7,7 @@

  from setuptools import setup, find_packages

- __version__ = '2.2.2'
+ __version__ = '2.2.3'

  with open("README.md", "r", encoding="utf-8") as fid:
    long_description = fid.read()