multipers-2.2.3-cp310-cp310-win_amd64.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. It is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.

This version of multipers might be problematic.

Files changed (189)
  1. multipers/__init__.py +31 -0
  2. multipers/_signed_measure_meta.py +430 -0
  3. multipers/_slicer_meta.py +212 -0
  4. multipers/data/MOL2.py +458 -0
  5. multipers/data/UCR.py +18 -0
  6. multipers/data/__init__.py +1 -0
  7. multipers/data/graphs.py +466 -0
  8. multipers/data/immuno_regions.py +27 -0
  9. multipers/data/minimal_presentation_to_st_bf.py +0 -0
  10. multipers/data/pytorch2simplextree.py +91 -0
  11. multipers/data/shape3d.py +101 -0
  12. multipers/data/synthetic.py +111 -0
  13. multipers/distances.py +198 -0
  14. multipers/filtration_conversions.pxd +229 -0
  15. multipers/filtration_conversions.pxd.tp +84 -0
  16. multipers/filtrations.pxd +224 -0
  17. multipers/function_rips.cp310-win_amd64.pyd +0 -0
  18. multipers/function_rips.pyx +105 -0
  19. multipers/grids.cp310-win_amd64.pyd +0 -0
  20. multipers/grids.pyx +350 -0
  21. multipers/gudhi/Persistence_slices_interface.h +132 -0
  22. multipers/gudhi/Simplex_tree_interface.h +245 -0
  23. multipers/gudhi/Simplex_tree_multi_interface.h +561 -0
  24. multipers/gudhi/cubical_to_boundary.h +59 -0
  25. multipers/gudhi/gudhi/Bitmap_cubical_complex.h +450 -0
  26. multipers/gudhi/gudhi/Bitmap_cubical_complex_base.h +1070 -0
  27. multipers/gudhi/gudhi/Bitmap_cubical_complex_periodic_boundary_conditions_base.h +579 -0
  28. multipers/gudhi/gudhi/Debug_utils.h +45 -0
  29. multipers/gudhi/gudhi/Fields/Multi_field.h +484 -0
  30. multipers/gudhi/gudhi/Fields/Multi_field_operators.h +455 -0
  31. multipers/gudhi/gudhi/Fields/Multi_field_shared.h +450 -0
  32. multipers/gudhi/gudhi/Fields/Multi_field_small.h +531 -0
  33. multipers/gudhi/gudhi/Fields/Multi_field_small_operators.h +507 -0
  34. multipers/gudhi/gudhi/Fields/Multi_field_small_shared.h +531 -0
  35. multipers/gudhi/gudhi/Fields/Z2_field.h +355 -0
  36. multipers/gudhi/gudhi/Fields/Z2_field_operators.h +376 -0
  37. multipers/gudhi/gudhi/Fields/Zp_field.h +420 -0
  38. multipers/gudhi/gudhi/Fields/Zp_field_operators.h +400 -0
  39. multipers/gudhi/gudhi/Fields/Zp_field_shared.h +418 -0
  40. multipers/gudhi/gudhi/Flag_complex_edge_collapser.h +337 -0
  41. multipers/gudhi/gudhi/Matrix.h +2107 -0
  42. multipers/gudhi/gudhi/Multi_critical_filtration.h +1038 -0
  43. multipers/gudhi/gudhi/Multi_persistence/Box.h +171 -0
  44. multipers/gudhi/gudhi/Multi_persistence/Line.h +282 -0
  45. multipers/gudhi/gudhi/Off_reader.h +173 -0
  46. multipers/gudhi/gudhi/One_critical_filtration.h +1431 -0
  47. multipers/gudhi/gudhi/Persistence_matrix/Base_matrix.h +769 -0
  48. multipers/gudhi/gudhi/Persistence_matrix/Base_matrix_with_column_compression.h +686 -0
  49. multipers/gudhi/gudhi/Persistence_matrix/Boundary_matrix.h +842 -0
  50. multipers/gudhi/gudhi/Persistence_matrix/Chain_matrix.h +1350 -0
  51. multipers/gudhi/gudhi/Persistence_matrix/Id_to_index_overlay.h +1105 -0
  52. multipers/gudhi/gudhi/Persistence_matrix/Position_to_index_overlay.h +859 -0
  53. multipers/gudhi/gudhi/Persistence_matrix/RU_matrix.h +910 -0
  54. multipers/gudhi/gudhi/Persistence_matrix/allocators/entry_constructors.h +139 -0
  55. multipers/gudhi/gudhi/Persistence_matrix/base_pairing.h +230 -0
  56. multipers/gudhi/gudhi/Persistence_matrix/base_swap.h +211 -0
  57. multipers/gudhi/gudhi/Persistence_matrix/boundary_cell_position_to_id_mapper.h +60 -0
  58. multipers/gudhi/gudhi/Persistence_matrix/boundary_face_position_to_id_mapper.h +60 -0
  59. multipers/gudhi/gudhi/Persistence_matrix/chain_pairing.h +136 -0
  60. multipers/gudhi/gudhi/Persistence_matrix/chain_rep_cycles.h +190 -0
  61. multipers/gudhi/gudhi/Persistence_matrix/chain_vine_swap.h +616 -0
  62. multipers/gudhi/gudhi/Persistence_matrix/columns/chain_column_extra_properties.h +150 -0
  63. multipers/gudhi/gudhi/Persistence_matrix/columns/column_dimension_holder.h +106 -0
  64. multipers/gudhi/gudhi/Persistence_matrix/columns/column_utilities.h +219 -0
  65. multipers/gudhi/gudhi/Persistence_matrix/columns/entry_types.h +327 -0
  66. multipers/gudhi/gudhi/Persistence_matrix/columns/heap_column.h +1140 -0
  67. multipers/gudhi/gudhi/Persistence_matrix/columns/intrusive_list_column.h +934 -0
  68. multipers/gudhi/gudhi/Persistence_matrix/columns/intrusive_set_column.h +934 -0
  69. multipers/gudhi/gudhi/Persistence_matrix/columns/list_column.h +980 -0
  70. multipers/gudhi/gudhi/Persistence_matrix/columns/naive_vector_column.h +1092 -0
  71. multipers/gudhi/gudhi/Persistence_matrix/columns/row_access.h +192 -0
  72. multipers/gudhi/gudhi/Persistence_matrix/columns/set_column.h +921 -0
  73. multipers/gudhi/gudhi/Persistence_matrix/columns/small_vector_column.h +1093 -0
  74. multipers/gudhi/gudhi/Persistence_matrix/columns/unordered_set_column.h +1012 -0
  75. multipers/gudhi/gudhi/Persistence_matrix/columns/vector_column.h +1244 -0
  76. multipers/gudhi/gudhi/Persistence_matrix/matrix_dimension_holders.h +186 -0
  77. multipers/gudhi/gudhi/Persistence_matrix/matrix_row_access.h +164 -0
  78. multipers/gudhi/gudhi/Persistence_matrix/ru_pairing.h +156 -0
  79. multipers/gudhi/gudhi/Persistence_matrix/ru_rep_cycles.h +376 -0
  80. multipers/gudhi/gudhi/Persistence_matrix/ru_vine_swap.h +540 -0
  81. multipers/gudhi/gudhi/Persistent_cohomology/Field_Zp.h +118 -0
  82. multipers/gudhi/gudhi/Persistent_cohomology/Multi_field.h +173 -0
  83. multipers/gudhi/gudhi/Persistent_cohomology/Persistent_cohomology_column.h +128 -0
  84. multipers/gudhi/gudhi/Persistent_cohomology.h +745 -0
  85. multipers/gudhi/gudhi/Points_off_io.h +171 -0
  86. multipers/gudhi/gudhi/Simple_object_pool.h +69 -0
  87. multipers/gudhi/gudhi/Simplex_tree/Simplex_tree_iterators.h +463 -0
  88. multipers/gudhi/gudhi/Simplex_tree/Simplex_tree_node_explicit_storage.h +83 -0
  89. multipers/gudhi/gudhi/Simplex_tree/Simplex_tree_siblings.h +106 -0
  90. multipers/gudhi/gudhi/Simplex_tree/Simplex_tree_star_simplex_iterators.h +277 -0
  91. multipers/gudhi/gudhi/Simplex_tree/hooks_simplex_base.h +62 -0
  92. multipers/gudhi/gudhi/Simplex_tree/indexing_tag.h +27 -0
  93. multipers/gudhi/gudhi/Simplex_tree/serialization_utils.h +62 -0
  94. multipers/gudhi/gudhi/Simplex_tree/simplex_tree_options.h +157 -0
  95. multipers/gudhi/gudhi/Simplex_tree.h +2794 -0
  96. multipers/gudhi/gudhi/Simplex_tree_multi.h +163 -0
  97. multipers/gudhi/gudhi/distance_functions.h +62 -0
  98. multipers/gudhi/gudhi/graph_simplicial_complex.h +104 -0
  99. multipers/gudhi/gudhi/persistence_interval.h +253 -0
  100. multipers/gudhi/gudhi/persistence_matrix_options.h +170 -0
  101. multipers/gudhi/gudhi/reader_utils.h +367 -0
  102. multipers/gudhi/mma_interface_coh.h +255 -0
  103. multipers/gudhi/mma_interface_h0.h +231 -0
  104. multipers/gudhi/mma_interface_matrix.h +282 -0
  105. multipers/gudhi/naive_merge_tree.h +575 -0
  106. multipers/gudhi/scc_io.h +289 -0
  107. multipers/gudhi/truc.h +888 -0
  108. multipers/io.cp310-win_amd64.pyd +0 -0
  109. multipers/io.pyx +711 -0
  110. multipers/ml/__init__.py +0 -0
  111. multipers/ml/accuracies.py +90 -0
  112. multipers/ml/convolutions.py +520 -0
  113. multipers/ml/invariants_with_persistable.py +79 -0
  114. multipers/ml/kernels.py +176 -0
  115. multipers/ml/mma.py +714 -0
  116. multipers/ml/one.py +472 -0
  117. multipers/ml/point_clouds.py +346 -0
  118. multipers/ml/signed_measures.py +1589 -0
  119. multipers/ml/sliced_wasserstein.py +461 -0
  120. multipers/ml/tools.py +113 -0
  121. multipers/mma_structures.cp310-win_amd64.pyd +0 -0
  122. multipers/mma_structures.pxd +127 -0
  123. multipers/mma_structures.pyx +2746 -0
  124. multipers/mma_structures.pyx.tp +1085 -0
  125. multipers/multi_parameter_rank_invariant/diff_helpers.h +93 -0
  126. multipers/multi_parameter_rank_invariant/euler_characteristic.h +97 -0
  127. multipers/multi_parameter_rank_invariant/function_rips.h +322 -0
  128. multipers/multi_parameter_rank_invariant/hilbert_function.h +769 -0
  129. multipers/multi_parameter_rank_invariant/persistence_slices.h +148 -0
  130. multipers/multi_parameter_rank_invariant/rank_invariant.h +369 -0
  131. multipers/multiparameter_edge_collapse.py +41 -0
  132. multipers/multiparameter_module_approximation/approximation.h +2295 -0
  133. multipers/multiparameter_module_approximation/combinatory.h +129 -0
  134. multipers/multiparameter_module_approximation/debug.h +107 -0
  135. multipers/multiparameter_module_approximation/euler_curves.h +0 -0
  136. multipers/multiparameter_module_approximation/format_python-cpp.h +286 -0
  137. multipers/multiparameter_module_approximation/heap_column.h +238 -0
  138. multipers/multiparameter_module_approximation/images.h +79 -0
  139. multipers/multiparameter_module_approximation/list_column.h +174 -0
  140. multipers/multiparameter_module_approximation/list_column_2.h +232 -0
  141. multipers/multiparameter_module_approximation/ru_matrix.h +347 -0
  142. multipers/multiparameter_module_approximation/set_column.h +135 -0
  143. multipers/multiparameter_module_approximation/structure_higher_dim_barcode.h +36 -0
  144. multipers/multiparameter_module_approximation/unordered_set_column.h +166 -0
  145. multipers/multiparameter_module_approximation/utilities.h +419 -0
  146. multipers/multiparameter_module_approximation/vector_column.h +223 -0
  147. multipers/multiparameter_module_approximation/vector_matrix.h +331 -0
  148. multipers/multiparameter_module_approximation/vineyards.h +464 -0
  149. multipers/multiparameter_module_approximation/vineyards_trajectories.h +649 -0
  150. multipers/multiparameter_module_approximation.cp310-win_amd64.pyd +0 -0
  151. multipers/multiparameter_module_approximation.pyx +217 -0
  152. multipers/pickle.py +53 -0
  153. multipers/plots.py +334 -0
  154. multipers/point_measure.cp310-win_amd64.pyd +0 -0
  155. multipers/point_measure.pyx +320 -0
  156. multipers/simplex_tree_multi.cp310-win_amd64.pyd +0 -0
  157. multipers/simplex_tree_multi.pxd +133 -0
  158. multipers/simplex_tree_multi.pyx +10335 -0
  159. multipers/simplex_tree_multi.pyx.tp +1935 -0
  160. multipers/slicer.cp310-win_amd64.pyd +0 -0
  161. multipers/slicer.pxd +2371 -0
  162. multipers/slicer.pxd.tp +214 -0
  163. multipers/slicer.pyx +15467 -0
  164. multipers/slicer.pyx.tp +914 -0
  165. multipers/tbb12.dll +0 -0
  166. multipers/tbbbind_2_5.dll +0 -0
  167. multipers/tbbmalloc.dll +0 -0
  168. multipers/tbbmalloc_proxy.dll +0 -0
  169. multipers/tensor/tensor.h +672 -0
  170. multipers/tensor.pxd +13 -0
  171. multipers/test.pyx +44 -0
  172. multipers/tests/__init__.py +57 -0
  173. multipers/tests/test_diff_helper.py +73 -0
  174. multipers/tests/test_hilbert_function.py +82 -0
  175. multipers/tests/test_mma.py +83 -0
  176. multipers/tests/test_point_clouds.py +49 -0
  177. multipers/tests/test_python-cpp_conversion.py +82 -0
  178. multipers/tests/test_signed_betti.py +181 -0
  179. multipers/tests/test_signed_measure.py +89 -0
  180. multipers/tests/test_simplextreemulti.py +221 -0
  181. multipers/tests/test_slicer.py +221 -0
  182. multipers/torch/__init__.py +1 -0
  183. multipers/torch/diff_grids.py +217 -0
  184. multipers/torch/rips_density.py +304 -0
  185. multipers-2.2.3.dist-info/LICENSE +21 -0
  186. multipers-2.2.3.dist-info/METADATA +134 -0
  187. multipers-2.2.3.dist-info/RECORD +189 -0
  188. multipers-2.2.3.dist-info/WHEEL +5 -0
  189. multipers-2.2.3.dist-info/top_level.txt +1 -0
multipers/torch/diff_grids.py
@@ -0,0 +1,217 @@
+ from typing import Literal
+
+ import numpy as np
+ import torch
+ from pykeops.torch import LazyTensor
+
+
+ def get_grid(strategy: Literal["exact", "regular_closest", "regular_left", "quantile"]):
+     """
+     Given a strategy, returns a function of signature
+     `(num_pts, num_parameter), int --> Iterable[1d array]`
+     that generates a torch-differentiable grid from a set of points
+     and a resolution.
+     """
+     match strategy:
+         case "exact":
+             return _exact_grid
+         case "regular_closest":
+             return _regular_closest_grid
+         case "regular_left":
+             return _regular_left_grid
+         case "quantile":
+             return _quantile_grid
+         case _:
+             raise ValueError(
+                 f"""
+                 Unimplemented strategy {strategy}.
+                 Available ones: exact, regular_closest, regular_left, quantile.
+                 """
+             )
+
+
+ def todense(grid: list[torch.Tensor]):
+     return torch.cartesian_prod(*grid)
+
+
+ def _exact_grid(filtration_values, r=None):
+     grid = tuple(_unique_any(f) for f in filtration_values)
+     return grid
+
+
+ def _regular_closest_grid(filtration_values, r: int):
+     grid = tuple(_regular_closest(f, r) for f in filtration_values)
+     return grid
+
+
+ def _regular_left_grid(filtration_values, r: int):
+     grid = tuple(_regular_left(f, r) for f in filtration_values)
+     return grid
+
+
+ def _quantile_grid(filtration_values, r: int):
+     qs = torch.linspace(0, 1, r)
+     grid = tuple(_unique_any(torch.quantile(f, q=qs)) for f in filtration_values)
+     return grid
+
+
+ def _unique_any(x, assume_sorted=False, remove_inf: bool = True):
+     if not assume_sorted:
+         x, _ = x.sort()
+     if remove_inf and x[-1] == torch.inf:
+         x = x[:-1]
+     with torch.no_grad():
+         y = x.unique()
+         idx = torch.searchsorted(x, y)
+     x = torch.cat([x, torch.tensor([torch.inf])])
+     return x[idx]
+
+
+ def _regular_left(f, r: int, unique: bool = True):
+     f = _unique_any(f)
+     with torch.no_grad():
+         f_regular = torch.linspace(f[0].item(), f[-1].item(), r, device=f.device)
+         idx = torch.searchsorted(f, f_regular)
+     f = torch.cat([f, torch.tensor([torch.inf])])
+     if unique:
+         return _unique_any(f[idx])
+     return f[idx]
+
+
+ def _regular_closest(f, r: int, unique: bool = True):
+     f = _unique_any(f)
+     with torch.no_grad():
+         f_reg = torch.linspace(
+             f[0].item(), f[-1].item(), steps=r, dtype=f.dtype, device=f.device
+         )
+         _f = LazyTensor(f[:, None, None])
+         _f_reg = LazyTensor(f_reg[None, :, None])
+         indices = (_f - _f_reg).abs().argmin(0).ravel()
+     f = torch.cat([f, torch.tensor([torch.inf])])
+     f_regular_closest = f[indices]
+     if unique:
+         f_regular_closest = _unique_any(f_regular_closest)
+     return f_regular_closest
+
+
+ def evaluate_in_grid(pts, grid):
+     """Evaluates points (assumed to be grid coordinates) in this grid.
+
+     Input
+     -----
+     - pts: (num_points, num_parameters) array
+     - grid: Iterable of 1d arrays, one per parameter
+
+     Returns
+     -------
+     - array with the shape of `pts` and the dtype of `grid`.
+     """
+     # grid = [torch.cat([g, torch.tensor([torch.inf])]) for g in grid]
+     # new_pts = torch.empty(pts.shape, dtype=grid[0].dtype, device=grid[0].device)
+     # for parameter, pt_of_parameter in enumerate(pts.T):
+     #     new_pts[:, parameter] = grid[parameter][pt_of_parameter]
+     return torch.cat(
+         [
+             grid[parameter][pt_of_parameter][:, None]
+             for parameter, pt_of_parameter in enumerate(pts.T)
+         ],
+         dim=1,
+     )
+
+
+ def evaluate_mod_in_grid(mod, grid, box=None):
+     """Given an MMA module, pushes it into the specified grid.
+     Useful, e.g., to make it differentiable.
+
+     Input
+     -----
+     - mod: PyModule
+     - grid: Iterable of 1d arrays, one per parameter
+
+     Output
+     ------
+     torch-compatible module in the format:
+     (num_degrees) x (num_intervals of degree) x ((num_birth, num_parameters), (num_death, num_parameters))
+     """
+     if box is not None:
+         grid = tuple(
+             torch.cat(
+                 [
+                     box[0][[i]],
+                     _unique_any(
+                         grid[i].clamp(min=box[0][i], max=box[1][i]), assume_sorted=True
+                     ),
+                     box[1][[i]],
+                 ]
+             )
+             for i in range(len(grid))
+         )
+     (birth_sizes, death_sizes), births, deaths = mod.to_flat_idx(grid)
+     births = evaluate_in_grid(births, grid)
+     deaths = evaluate_in_grid(deaths, grid)
+     diff_mod = tuple(
+         zip(
+             births.split_with_sizes(birth_sizes.tolist()),
+             deaths.split_with_sizes(death_sizes.tolist()),
+         )
+     )
+     return diff_mod
+
+
+ def evaluate_mod_in_grid__old(mod, grid, box=None):
+     """Given an MMA module, pushes it into the specified grid.
+     Useful, e.g., to make it differentiable.
+
+     Input
+     -----
+     - mod: PyModule
+     - grid: Iterable of 1d arrays, one per parameter
+
+     Output
+     ------
+     torch-compatible module in the format:
+     (num_degrees) x (num_intervals of degree) x ((num_birth, num_parameters), (num_death, num_parameters))
+     """
+     from pykeops.numpy import LazyTensor
+
+     with torch.no_grad():
+         if box is None:
+             # box = mod.get_box()
+             box = np.asarray([[g[0] for g in grid], [g[-1] for g in grid]])
+         S = mod.dump()[1]
+
+         def get_idx_parameter(A, G, p):
+             g = G[p].numpy() if isinstance(G[p], torch.Tensor) else np.asarray(G[p])
+             la = LazyTensor(np.asarray(A, dtype=g.dtype)[None, :, [p]])
+             lg = LazyTensor(g[:, None, None])
+             return (la - lg).abs().argmin(0)
+
+         Bdump = np.concatenate([s[0] for s in S], axis=0).clip(box[[0]], box[[1]])
+         B = np.concatenate(
+             [get_idx_parameter(Bdump, grid, p) for p in range(mod.num_parameters)],
+             axis=1,
+             dtype=np.int64,
+         )
+         Ddump = np.concatenate([s[1] for s in S], axis=0, dtype=np.float32).clip(
+             box[[0]], box[[1]]
+         )
+         D = np.concatenate(
+             [get_idx_parameter(Ddump, grid, p) for p in range(mod.num_parameters)],
+             axis=1,
+             dtype=np.int64,
+         )
+
+     BB = evaluate_in_grid(B, grid)
+     DD = evaluate_in_grid(D, grid)
+
+     b_idx = tuple(len(s[0]) for s in S)
+     d_idx = tuple(len(s[1]) for s in S)
+     BBB = BB.split_with_sizes(b_idx)
+     DDD = DD.split_with_sizes(d_idx)
+
+     splits = np.concatenate([[0], mod.degree_splits(), [len(BBB)]])
+     splits = torch.from_numpy(splits)
+     out = [
+         list(zip(BBB[splits[i] : splits[i + 1]], DDD[splits[i] : splits[i + 1]]))
+         for i in range(len(splits) - 1)
+     ]  ## For some reason this kills the gradient ???? pytorch bug
+     return out
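
For orientation, here is a minimal usage sketch of the grid helpers defined in the hunk above. The random filtration values, the choice of the `quantile` strategy, and the resolution are illustrative assumptions rather than anything prescribed by the package.

```python
# Sketch only: random filtration values and an arbitrary strategy/resolution.
import torch
from multipers.torch.diff_grids import evaluate_in_grid, get_grid, todense

filtrations = torch.rand(100, 2, requires_grad=True)   # (num_pts, num_parameters)
grid_fn = get_grid("quantile")                         # strategy -> grid-building function
grid = grid_fn(tuple(f for f in filtrations.T), 20)    # one differentiable 1d grid per parameter
dense_grid = todense(grid)                             # cartesian product of the 1d grids

# Integer grid coordinates can be pushed back to filtration values while keeping gradients.
coords = torch.stack([torch.randint(0, len(g), (5,)) for g in grid], dim=1)
values = evaluate_in_grid(coords, grid)
values.sum().backward()                                # gradients flow back to `filtrations`
```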
multipers/torch/rips_density.py
@@ -0,0 +1,304 @@
+ from typing import Callable, Literal, Optional
+
+ import numpy as np
+ import torch
+ from gudhi.rips_complex import RipsComplex
+
+ import multipers as mp
+ from multipers.ml.convolutions import DTM, KDE
+ from multipers.simplex_tree_multi import _available_strategies
+ from multipers.torch.diff_grids import get_grid
+
+
+ def function_rips_signed_measure_old(
+     x,
+     theta: Optional[float] = None,
+     function: Literal["dtm", "gaussian", "exponential"] | Callable = "dtm",
+     threshold: float = np.inf,
+     grid_strategy: _available_strategies = "regular_closest",
+     resolution: int = 100,
+     return_original: bool = False,
+     return_st: bool = False,
+     safe_conversion: bool = False,
+     num_collapses: int = -1,
+     expand_collapse: bool = False,
+     dtype=torch.float32,
+     **sm_kwargs,
+ ):
+     """
+     Computes a torch-differentiable function-Rips signed measure.
+
+     Input
+     -----
+     - x (num_pts, dim): the point cloud
+     - theta: for density-like functions, the bandwidth
+     - threshold: Rips threshold
+     - function: either "dtm", "gaussian", "exponential", or a Callable.
+       Function used to compute the second parameter.
+     - grid_strategy: grid-coarsening strategy.
+     - resolution: when coarsening, the target resolution.
+     - return_original: also returns the non-differentiable signed measure.
+     - safe_conversion: activate this if you encounter crashes.
+     - **kwargs: forwarded to the signed measure computation.
+     """
+     assert isinstance(x, torch.Tensor)
+     if function == "dtm":
+         assert theta is not None, "Provide a theta to compute DTM"
+         codensity = DTM(masses=[theta]).fit(x).score_samples_diff(x)[0].type(dtype)
+     elif function in ["gaussian", "exponential"]:
+         assert theta is not None, "Provide a theta to compute density estimation"
+         codensity = (
+             -KDE(
+                 bandwidth=theta,
+                 kernel=function,
+                 return_log=True,
+             )
+             .fit(x)
+             .score_samples(x)
+             .type(dtype)
+         )
+     else:
+         assert callable(function), "Function has to be callable"
+         if theta is None:
+             codensity = function(x).type(dtype)
+         else:
+             codensity = function(x, theta=theta).type(dtype)
+
+     distance_matrix = torch.cdist(x, x).type(dtype)
+     if threshold < np.inf:
+         distance_matrix[distance_matrix > threshold] = np.inf
+
+     st = RipsComplex(
+         distance_matrix=distance_matrix.detach(), max_edge_length=threshold
+     ).create_simplex_tree()
+     # detach makes a new (reference) tensor, without tracking the gradient
+     st = mp.SimplexTreeMulti(st, num_parameters=2, safe_conversion=safe_conversion)
+     st.fill_lowerstar(
+         codensity.detach(), parameter=1
+     )  # fills the codensity in the second parameter of the simplextree
+
+     # simplifies the simplextree for computation; the signed measure is recovered from the copy afterwards
+     st_copy = st.grid_squeeze(
+         grid_strategy=grid_strategy, resolution=resolution, coordinate_values=True
+     )
+     if sm_kwargs.get("degree", None) is None and sm_kwargs.get("degrees", [None]) == [
+         None
+     ]:
+         expansion_degree = st.num_vertices
+     else:
+         expansion_degree = (
+             max(np.max(sm_kwargs.get("degrees", 1)), sm_kwargs.get("degree", 1)) + 1
+         )
+     st.collapse_edges(num=num_collapses)  # edge collapse
+     if not expand_collapse:
+         st.expansion(expansion_degree)  # expands the flag complex up to the needed dimension
+     sms = mp.signed_measure(st, **sm_kwargs)  # computes the signed measure
+     del st
+
+     simplices_list = tuple(
+         s for s, _ in st_copy.get_simplices()
+     )  # not optimal, we may want to do that in cython to get edges and nodes
+     sms_diff = []
+     for sm, weights in sms:
+         indices, not_found_indices = st_copy.pts_to_indices(
+             sm, simplices_dimensions=[1, 0]
+         )
+         if sm_kwargs.get("verbose", False):
+             print(
+                 f"Found {(1 - (indices == -1).mean()).round(2)} indices. \
+                 Out : {(indices == -1).sum()}, {len(not_found_indices)}"
+             )
+         sm_diff = torch.empty(sm.shape).type(dtype)
+         # sm_dim = sm_diff.shape[1] // 2
+
+         # fills the Rips filtrations of the signed measure;
+         # the loop is for the rank invariant
+         for i in range(0, sm_diff.shape[1], 2):
+             idxs = indices[:, i]
+             if (idxs == -1).all():
+                 continue
+             useful_idxs = idxs != -1
+             # retrieves the differentiable values from the distance_matrix
+             if useful_idxs.size > 0:
+                 edges_filtrations = torch.cat(
+                     [
+                         distance_matrix[*simplices_list[idx], None]
+                         for idx in idxs[useful_idxs]
+                     ]
+                 )
+                 # fills these values into the signed measure
+                 sm_diff[:, i][useful_idxs] = edges_filtrations
+         # same for the other axis
+         for i in range(1, sm_diff.shape[1], 2):
+             idxs = indices[:, i]
+             if (idxs == -1).all():
+                 continue
+             useful_idxs = idxs != -1
+             if useful_idxs.size > 0:
+                 nodes_filtrations = torch.cat(
+                     [codensity[simplices_list[idx]] for idx in idxs[useful_idxs]]
+                 )
+                 sm_diff[:, i][useful_idxs] = nodes_filtrations
+
+         # fills not-found values as constants
+         if len(not_found_indices) > 0:
+             not_found_indices = indices == -1
+             sm_diff[indices == -1] = torch.from_numpy(sm[indices == -1]).type(dtype)
+
+         sms_diff.append((sm_diff, torch.from_numpy(weights)))
+     flags = [True, return_original, return_st]
+     if np.sum(flags) == 1:
+         return sms_diff
+     return tuple(stuff for stuff, flag in zip([sms_diff, sms, st_copy], flags) if flag)
+
+
+ def function_rips_signed_measure(
+     x,
+     theta: Optional[float] = None,
+     function: Literal["dtm", "gaussian", "exponential"] | Callable = "gaussian",
+     threshold: Optional[float] = None,
+     grid_strategy: Literal[
+         "regular_closest", "exact", "quantile", "regular_left"
+     ] = "exact",
+     complex: Literal["rips", "delaunay", "weak_delaunay"] = "rips",
+     resolution: int = 100,
+     safe_conversion: bool = False,
+     num_collapses: Optional[int] = None,
+     expand_collapse: bool = False,
+     dtype=torch.float32,
+     plot=False,
+     # return_st: bool = False,
+     *,
+     log_density: bool = True,
+     vineyard: bool = False,
+     pers_backend=None,
+     **sm_kwargs,
+ ):
+     """
+     Computes a torch-differentiable function-Rips signed measure.
+
+     Input
+     -----
+     - x (num_pts, dim): the point cloud
+     - theta: for density-like functions, the bandwidth
+     - threshold: Rips threshold
+     - function: either "dtm", "gaussian", "exponential", a Callable, or a 1d tensor of
+       values over x. Function used to compute the second parameter.
+     - grid_strategy: grid-coarsening strategy.
+     - resolution: when coarsening, the target resolution.
+     - complex: the underlying complex, either "rips" or "delaunay".
+     - plot: if True, plots the resulting signed measures.
+     - log_density: if True, the Gaussian/exponential codensity is minus the log-density.
+     - vineyard, pers_backend: forwarded to `mp.Slicer`.
+     - safe_conversion: activate this if you encounter crashes.
+     - **kwargs: forwarded to the signed measure computation.
+     """
+     if num_collapses is None:
+         num_collapses = -1 if complex == "rips" else None
+     assert isinstance(x, torch.Tensor)
+     if function == "dtm":
+         assert theta is not None, "Provide a theta to compute DTM"
+         codensity = DTM(masses=[theta]).fit(x).score_samples_diff(x)[0].type(dtype)
+     elif function in ["gaussian", "exponential"]:
+         assert theta is not None, "Provide a theta to compute density estimation"
+         codensity = (
+             -KDE(
+                 bandwidth=theta,
+                 kernel=function,
+                 return_log=log_density,
+             )
+             .fit(x)
+             .score_samples(x)
+             .type(dtype)
+         )
+     elif isinstance(function, torch.Tensor):
+         assert (
+             function.ndim == 1 and function.shape[0] == x.shape[0]
+         ), """
+         When function is a tensor, it is interpreted as the values of some function over x.
+         """
+         codensity = function
+     else:
+         assert callable(function), "Function has to be callable"
+         if theta is None:
+             codensity = function(x).type(dtype)
+         else:
+             codensity = function(x, theta=theta).type(dtype)
+
+     distance_matrix = torch.cdist(x, x).type(dtype)
+     distances = distance_matrix.ravel()
+     if complex == "rips":
+         threshold = (
+             distance_matrix.max(axis=1).values.min() if threshold is None else threshold
+         )
+         distances = distances[distances <= threshold]
+     elif complex in ["delaunay", "weak_delaunay"]:
+         complex = "delaunay"
+         distances /= 2
+     else:
+         raise ValueError(
+             f"Unimplemented with complex {complex}. You can use rips or delaunay for the moment."
+         )
+
+     # builds the coarsened (differentiable) grid on which the signed measure will be computed
+     reduced_grid = get_grid(strategy=grid_strategy)((distances, codensity), resolution)
+
+     degrees = sm_kwargs.pop("degrees", [])
+     if sm_kwargs.get("degree", None) is not None:
+         degrees = [sm_kwargs.pop("degree", None)] + degrees
+     if complex == "rips":
+         st = RipsComplex(
+             distance_matrix=distance_matrix.detach(), max_edge_length=threshold
+         ).create_simplex_tree()
+         # detach makes a new (reference) tensor, without tracking the gradient
+         st = mp.SimplexTreeMulti(st, num_parameters=2, safe_conversion=safe_conversion)
+         st.fill_lowerstar(
+             codensity.detach(), parameter=1
+         )  # fills the codensity in the second parameter of the simplextree
+         st = st.grid_squeeze(reduced_grid)
+         st.filtration_grid = []
+         if None in degrees:
+             expansion_degree = st.num_vertices
+         else:
+             expansion_degree = max(degrees) + 1
+         st.collapse_edges(num=num_collapses)  # edge collapse
+         if not expand_collapse:
+             st.expansion(expansion_degree)
+
+         s = mp.Slicer(st, vineyard=vineyard, backend=pers_backend)
+     elif complex == "delaunay":
+         s = mp.slicer.from_function_delaunay(
+             x.detach().numpy(), codensity.detach().numpy()
+         )
+         st = mp.slicer.to_simplextree(s)
+         st.flagify(2)
+         s = mp.Slicer(st, vineyard=vineyard, backend=pers_backend).grid_squeeze(
+             reduced_grid
+         )
+
+     s.filtration_grid = []  ## to enforce minpres to be reasonable
+     if None not in degrees:
+         s = s.minpres(degrees=degrees)
+     else:
+         from joblib import Parallel, delayed
+
+         s = tuple(
+             Parallel(n_jobs=-1, backend="threading")(
+                 delayed(lambda d: s if d is None else s.minpres(degree=d))(d)
+                 for d in degrees
+             )
+         )
+     ## fix previous hack
+     for stuff in s:
+         # stuff.filtration_grid = reduced_grid  ## not necessary
+         stuff.filtration_grid = [[1]] * stuff.num_parameters
+
+     sms = tuple(
+         sm
+         for slicer_of_degree, degree in zip(s, degrees)
+         for sm in mp.signed_measure(
+             slicer_of_degree, grid=reduced_grid, degree=degree, **sm_kwargs
+         )
+     )  # computes the signed measure
+     if plot:
+         mp.plots.plot_signed_measures(
+             tuple((sm.detach().numpy(), w.detach().numpy()) for sm, w in sms)
+         )
+     return sms
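
A rough usage sketch for `function_rips_signed_measure` as defined above. The point cloud, bandwidth, homology degree, and grid settings are illustrative assumptions; only the keyword names shown in the signature come from the diff, and the exact shape of the returned measures may differ from this sketch.

```python
# Sketch only: toy point cloud and arbitrary parameter choices.
import torch
from multipers.torch.rips_density import function_rips_signed_measure

x = torch.rand(200, 2, requires_grad=True)   # (num_pts, dim) point cloud
sms = function_rips_signed_measure(
    x,
    theta=0.2,                 # KDE bandwidth for the Gaussian codensity
    function="gaussian",
    complex="rips",
    degree=1,                  # homology degree, forwarded to mp.signed_measure
    grid_strategy="quantile",
    resolution=50,
)
pts, weights = sms[0]          # support points and signed weights of the first measure
pts.sum().backward()           # toy loss; gradients are expected to flow back to x
```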
multipers-2.2.3.dist-info/LICENSE
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2023 David Loiseaux
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
multipers-2.2.3.dist-info/METADATA
@@ -0,0 +1,134 @@
+ Metadata-Version: 2.2
+ Name: multipers
+ Version: 2.2.3
+ Summary: Multiparameter Topological Persistence for Machine Learning
+ Author-email: David Loiseaux <david.lapous@proton.me>, Hannah Schreiber <hannah.schreiber@inria.fr>
+ Maintainer-email: David Loiseaux <david.lapous@proton.me>
+ License: MIT License
+
+ Copyright (c) 2023 David Loiseaux
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Project-URL: source, https://github.com/DavidLapous/multipers
+ Project-URL: download, https://pypi.org/project/multipers/#files
+ Project-URL: tracker, https://github.com/DavidLapous/multipers/issues
+ Project-URL: release notes, https://github.com/DavidLapous/multipers/releases
+ Keywords: TDA,Persistence,Multiparameter,sklearn
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Classifier: Programming Language :: Python :: Implementation :: CPython
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
+ Classifier: Topic :: Scientific/Engineering :: Mathematics
+ Classifier: Topic :: Scientific/Engineering :: Visualization
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
+ Classifier: License :: OSI Approved :: MIT License
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy
+ Requires-Dist: gudhi>=3.8
+ Requires-Dist: tqdm
+ Requires-Dist: scipy
+ Requires-Dist: joblib
+ Requires-Dist: matplotlib
+ Requires-Dist: scikit-learn
+ Requires-Dist: filtration-domination
+ Requires-Dist: pykeops
+ Requires-Dist: pot
+
+ # multipers : Multiparameter Persistence for Machine Learning
+ [![Documentation](https://img.shields.io/badge/Documentation-website-blue)](https://davidlapous.github.io/multipers)
+ [![DOI](https://joss.theoj.org/papers/10.21105/joss.06773/status.svg)](https://doi.org/10.21105/joss.06773)
+ [![PyPI](https://img.shields.io/pypi/v/multipers?color=green)](https://pypi.org/project/multipers)
+ [![Build, test](https://github.com/DavidLapous/multipers/actions/workflows/python_PR.yml/badge.svg)](https://github.com/DavidLapous/multipers/actions/workflows/python_PR.yml)
+ [![Downloads](https://static.pepy.tech/badge/multipers)](https://pepy.tech/project/multipers)
+ <br>
+ Scikit-style, PyTorch-autodiff multiparameter persistent homology Python library.
+ This library aims to provide easy-to-use and performant strategies for applied multiparameter topology.
+ <br> Meant to be integrated into [the Gudhi library](https://gudhi.inria.fr/).
+
+ ## Quick start
+ This library allows computing several representations from "geometrical datasets" that have multiple scales, e.g., point clouds, images, graphs.
+ We provide some *nice* pictures in the [documentation](https://davidlapous.github.io/multipers/index.html).
+ A non-exhaustive list of features can be found in the **Features** section.
+
+ This library is available [on PyPI](https://pypi.org/project/multipers/) for (reasonably up-to-date) Linux, macOS and Windows, via
+ ```sh
+ pip install multipers
+ ```
+
+ Windows support is experimental, and some core dependencies are not available on Windows.
+ We hence recommend Windows users to use [WSL](https://learn.microsoft.com/en-us/windows/wsl/).
+ <br>
+ Documentation and build instructions are available
+ [here](https://davidlapous.github.io/multipers/compilation.html).
+
+
+ ## Features, and linked projects
+ This library features a number of different functions and helpers. See below for a non-exhaustive list.
+ <br>A filled box refers to implemented or interfaced code.
+ - [x] [[Multiparameter Module Approximation]](https://arxiv.org/abs/2206.02026) provides the multiparameter simplicial structure, as well as techniques for approximating modules via interval-decomposable modules. It is also very useful for visualization.
+ - [x] [[Stable Vectorization of Multiparameter Persistent Homology using Signed Barcodes as Measures, NeurIPS2023]](https://proceedings.neurips.cc/paper_files/paper/2023/hash/d75c474bc01735929a1fab5d0de3b189-Abstract-Conference.html) provides fast representations of multiparameter persistence modules, by using their signed barcode decompositions encoded into signed measures. Implemented decompositions: Euler surfaces, Hilbert function, rank invariant (i.e., rectangles). It also provides representation techniques for Machine Learning, i.e., Sliced Wasserstein kernels and vectorizations.
+ - [x] [[A Framework for Fast and Stable Representations of Multiparameter Persistent Homology Decompositions, NeurIPS2023]](https://proceedings.neurips.cc/paper_files/paper/2023/hash/702b67152ec4435795f681865b67999c-Abstract-Conference.html) Provides a vectorization framework for interval-decomposable modules, for Machine Learning. Currently implemented as an extension of MMA.
+ - [x] [[Differentiability and Optimization of Multiparameter Persistent Homology, ICML2024]](https://proceedings.mlr.press/v235/scoccola24a.html) An approach to compute a (Clarke) gradient for any reasonable multiparameter persistent invariant. Currently, any `multipers` computation is auto-differentiable using this strategy, provided that the inputs are PyTorch gradient-capable tensors.
+ - [x] [[Multiparameter Persistence Landscapes, JMLR]](https://jmlr.org/papers/v21/19-054.html) A vectorization technique for multiparameter persistence modules.
+ - [x] [[Filtration-Domination in Bifiltered Graphs, ALENEX2023]](https://doi.org/10.1137/1.9781611977561.ch3) Allows for 2-parameter edge collapses for 1-critical clique complexes. Very useful to speed up, e.g., Rips-Codensity bifiltrations.
+ - [x] [[Chunk Reduction for Multi-Parameter Persistent Homology, SOCG2019]](https://doi.org/10.4230/LIPIcs.SoCG.2019.37) Multi-filtration preprocessing algorithm for homology computations.
+ - [x] [[Computing Minimal Presentations and Bigraded Betti Numbers of 2-Parameter Persistent Homology, JAAG]](https://doi.org/10.1137/20M1388425) Minimal presentations of multiparameter persistence modules, using [mpfree](https://bitbucket.org/mkerber/mpfree/src/master/). Hilbert, Rank Decomposition Signed Measures, and MMA decompositions can be computed using the mpfree backend.
+ - [x] [[Delaunay Bifiltrations of Functions on Point Clouds, SODA2024]](https://epubs.siam.org/doi/10.1137/1.9781611977912.173) Provides an alternative to function-Rips bifiltrations, using Delaunay complexes. A very good alternative to Rips-Density-like bifiltrations.
+ - [x] [[Rivet]](https://github.com/rivetTDA/rivet) Interactive two-parameter persistence.
+ - [x] [[Kernel Operations on the GPU, with Autodiff, without Memory Overflows, JMLR]](http://jmlr.org/papers/v22/20-275.html) Although not linked, at first glance, to persistence in any way, this library allows computing blazingly fast signed measure convolutions (and more!) with custom kernels.
+ - [ ] [Backend only] [[Projected distances for multi-parameter persistence modules]](https://arxiv.org/abs/2206.08818) Provides a strategy to estimate the convolution distance between multiparameter persistence modules using projected barcodes. Implementation is a WIP.
+ - [ ] [Partial, and experimental] [[Efficient Two-Parameter Persistence Computation via Cohomology, SoCG2023]](https://doi.org/10.4230/LIPIcs.SoCG.2023.15) Minimal presentations for the 2-parameter persistence algorithm.
+
+ If I missed something, or you want to add something, feel free to open an issue.
+
+ ## Authors
+ [David Loiseaux](https://www-sop.inria.fr/members/David.Loiseaux/index.html),<br>
+ [Hannah Schreiber](https://github.com/hschreiber) (persistence backend code),<br>
+ [Luis Scoccola](https://luisscoccola.com/)
+ (Möbius inversion in Python, degree-Rips using [persistable](https://github.com/LuisScoccola/persistable) and [RIVET](https://github.com/rivetTDA/rivet/)),<br>
+ [Mathieu Carrière](https://www-sop.inria.fr/members/Mathieu.Carriere/) (Sliced Wasserstein)<br>
+
+ ## Citation
+ Please cite this library when using it in scientific publications;
+ you can use the following journal BibTeX entry:
+ ```bib
+ @article{multipers,
+   title = {Multipers: {{Multiparameter Persistence}} for {{Machine Learning}}},
+   shorttitle = {Multipers},
+   author = {Loiseaux, David and Schreiber, Hannah},
+   year = {2024},
+   month = nov,
+   journal = {Journal of Open Source Software},
+   volume = {9},
+   number = {103},
+   pages = {6773},
+   issn = {2475-9066},
+   doi = {10.21105/joss.06773},
+   langid = {english},
+ }
+ ```
+ ## Contributions
+ Feel free to contribute, report a bug on a pipeline, or ask for documentation by opening an issue.<br>
+ In particular, if you have a nice example or application that is not covered in the documentation (see the ./docs/notebooks/ folder), please contact me to add it there.
+
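
To complement the quick start in the README above, here is a small end-to-end sketch assembled only from calls that appear elsewhere in this diff (RipsComplex, SimplexTreeMulti, fill_lowerstar, collapse_edges, signed_measure, plots). The point cloud, the constant codensity, and all parameter values are illustrative assumptions, and defaults may need adjusting against the project documentation.

```python
# Sketch only: toy data and guessed defaults; consult the documentation for the canonical pipeline.
import numpy as np
import multipers as mp
from gudhi.rips_complex import RipsComplex

x = np.random.uniform(size=(200, 2))                     # toy point cloud
st = RipsComplex(points=x, max_edge_length=0.5).create_simplex_tree()
st = mp.SimplexTreeMulti(st, num_parameters=2)           # promote to a 2-parameter filtration
codensity = np.zeros(len(x))                             # placeholder second parameter (e.g. a codensity)
st.fill_lowerstar(codensity, parameter=1)
st.collapse_edges(num=-1)                                # filtration-domination edge collapses
st.expansion(2)                                          # expand the flag complex up to dimension 2
sms = mp.signed_measure(st, degree=1)                    # degree-1 signed measure
mp.plots.plot_signed_measures(sms)
```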