curlew-1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
curlew-1.0/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Sam Thiele
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
curlew-1.0/PKG-INFO ADDED
@@ -0,0 +1,19 @@
+ Metadata-Version: 2.4
+ Name: curlew
+ Version: 1.0
+ Summary: A python package for constructing complex geological models from various types of neural fields.
+ Home-page: https://github.com/samthiele/curlew
+ Author: Sam Thiele
+ Author-email: s.thiele@hzdr.de
+ License: MIT
+ License-File: LICENSE
+ Requires-Dist: numpy
+ Requires-Dist: torch
+ Requires-Dist: tqdm
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: home-page
+ Dynamic: license
+ Dynamic: license-file
+ Dynamic: requires-dist
+ Dynamic: summary
curlew-1.0/README.md ADDED
@@ -0,0 +1,47 @@
+ # curlew
+
+ A toolkit for building 2- and 3-dimensional geological models using neural fields.
+
+ <img src="./icon.png" width="200">
+
+ ## Getting started
+
+ ### Installation
+
+ To install directly from GitHub, try: `pip install git+https://github.com/samthiele/curlew.git`.
+
+ This should run on most systems: `numpy`, `torch` and `tqdm` are the only required dependencies. `matplotlib` is handy too, but not required.
+
+ ### Tutorials
+
+ To help you get up to speed with `curlew`, we maintain a set of Colab tutorial notebooks [here](https://drive.google.com/drive/folders/14OPpL2-zKuJSd2Hh7jobnIYPnxzl0wCI?usp=sharing).
+ Additional examples (used to make figures in the paper listed below) can be found [here](https://github.com/k4m4th/curlew_tutorials).
+
+ ### Documentation
+
+ Documentation is automatically built and served through [GitHub Pages](https://samthiele.github.io/curlew/).
+
+ ## Support
+
+ Please use [GitHub issues](https://github.com/samthiele/curlew/issues) to report bugs.
+
+ ## Contributing and appreciation
+
+ Please star this repository if you found it useful. If you have fixed bugs or added new features, we welcome pull requests.
+
+ ## Authors and acknowledgment
+
+ `curlew` has been developed by Sam Thiele and Akshay Kamath, with valuable input from
+ Mike Hillier, Lachlan Grose, Richard Gloaguen and Florian Wellmann.
+
+ If you use `curlew`, we would appreciate it if you:
+
+ 1) Cite the following paper (for academic work):
+
+ ```
+ Kamath, A.V., Thiele, S.T., Moulard, M., Grose, L., Tolosana-Delgado, R., Hillier, M.J., Wellmann, R., & Gloaguen, R. Curlew 1.0: Implicit geological modelling with neural fields in python. Geoscientific Model Development (preprint online soon)
+ ```
+
+ 2) Star this repository so that we get a rough idea of our user base.
+
+ 3) Leave a [GitHub issue](https://github.com/samthiele/curlew/issues) if you have questions or comments (issues do not strictly need to be related to bug reports).
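The "handy but not required" status of `matplotlib` reflects how the package guards optional extras internally. A minimal sketch of that optional-import pattern (a stand-alone illustration, not curlew code):

```python
# Sketch of an optional-dependency guard: the same pattern curlew uses to
# build its custom colormap only when matplotlib is installed.
try:
    import matplotlib.colors as mcolors  # optional extra
    HAS_MPL = True
except ImportError:
    mcolors = None
    HAS_MPL = False

# Downstream code can branch on availability instead of crashing at import.
print(type(HAS_MPL).__name__)
```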
@@ -0,0 +1,88 @@
+ """
+
+ A toolkit for building 2- and 3-dimensional geological models using neural fields.
+
+ <img src="https://github.com/samthiele/curlew/blob/main/icon.png?raw=true" width="200">
+
+ ## Getting started
+
+ ### Installation
+
+ To install directly from GitHub, try: `pip install git+https://github.com/samthiele/curlew.git`.
+
+ This should run on most systems: `numpy`, `torch` and `tqdm` are the only required dependencies. `matplotlib` is handy too, but not required.
+
+ ### Tutorials
+
+ To help you get up to speed with `curlew`, we maintain a set of Colab tutorial notebooks [here](https://drive.google.com/drive/folders/14OPpL2-zKuJSd2Hh7jobnIYPnxzl0wCI?usp=sharing).
+ Additional examples (used to make figures in the paper listed below) can be found [here](https://github.com/k4m4th/curlew_tutorials).
+
+ ### Support
+
+ Please use [GitHub issues](https://github.com/samthiele/curlew/issues) to report bugs.
+
+ ## Contributing and appreciation
+
+ Please star this repository if you found it useful. If you have fixed bugs or added new features, we welcome pull requests.
+
+ ## Authors and acknowledgment
+
+ `curlew` has been developed by Sam Thiele and Akshay Kamath, with valuable input from
+ Mike Hillier, Lachlan Grose, Richard Gloaguen and Florian Wellmann.
+
+ If you use `curlew`, we would appreciate it if you:
+
+ 1) Cite the following paper (for academic work):
+
+ ```
+ Kamath, A.V., Thiele, S.T., Moulard, M., Grose, L., Tolosana-Delgado, R., Hillier, M.J., Wellmann, R., & Gloaguen, R. Curlew 1.0: Implicit geological modelling with neural fields in python. Geoscientific Model Development (preprint online soon)
+ ```
+
+ 2) Star this repository so that we get a rough idea of our user base.
+
+ 3) Leave a [GitHub issue](https://github.com/samthiele/curlew/issues) if you have questions or comments (issues do not strictly need to be related to bug reports).
+
+ """
+ import torch
+ from curlew.fields import NF
+ from curlew.geology.model import GeoModel
+ from curlew.geology.SF import SF
+
+ device = 'cpu' # change to e.g. 'cuda' to run on a GPU
+ """The device used to compute operations with pytorch tensors. Change to allow e.g. GPU parallelisation."""
+
+ dtype = torch.float64
+ """The precision used during pytorch computations. Lower to float32 to save RAM."""
+
+ ccmap = None
+ """A colourful (custom) matplotlib colormap tailored for `curlew`. Will only be set if `matplotlib` is installed."""
+
+ try:
+     # Define curlew colormap :-)
+     import matplotlib.colors as mcolors
+
+     # Define the colors extracted manually from the provided logo image
+     colors = [
+         "#A6340B", # rich red (not darkest)
+         "#E35B0E", # vibrant orange-red
+         "#F39C12", # medium orange
+         "#F0C419", # bright orange-yellow
+         "#FAE8B6", # soft pale orange (close to white but not pure white)
+         "#8CD9E0", # light cyan blue
+         "#31B4C2", # medium cyan-blue
+         "#1B768F", # medium blue
+         "#054862", # deeper blue (not darkest)
+     ]
+     ccmap = mcolors.ListedColormap(colors)
+ except ImportError:
+     pass
+
+ # import things we want to expose under the `curlew` namespace
+ from curlew import core
+ from curlew import data
+ from curlew import geology
+ from curlew import geometry
+ from curlew import visualise
+
+ from curlew.core import CSet, HSet
+ from curlew.geology import fault, strati, sheet
@@ -0,0 +1,309 @@
+ """
+ Define several core curlew types for storing data and hyperparameters.
+ """
+ import numpy as np
+ import torch
+ from dataclasses import dataclass, field
+ import copy
+ import curlew
+
+
+ @dataclass
+ class CSet:
+     """
+     Set of local constraints used when fitting a specific NF. Note that in the descriptions below *i* refers to the
+     relevant NF's input dimensions, and *o* refers to its output dimensions. N refers to an arbitrary number of
+     constraints, which must be equal for each "position" and "value" pair. Constraints left as None (default) will
+     not be included during training. For most applications it is expected that many types of constraints will not be defined.
+
+     Attributes:
+         vp (torch.Tensor or np.ndarray): (N,i) array of value constraint positions (in modern-day coordinates).
+         vv (torch.Tensor or np.ndarray): (N,o) array of value constraint values.
+         gp (torch.Tensor or np.ndarray): (N,i) array of gradient constraint positions (in modern-day coordinates).
+         gv (torch.Tensor or np.ndarray): (N,i) array of gradient value vectors.
+         gop (torch.Tensor or np.ndarray): (N,i) array of gradient orientation constraint positions (in modern-day coordinates).
+         gov (torch.Tensor or np.ndarray): (N,i) array of gradient orientation value vectors. These differ from `gv` in that the
+             gradient (younging) direction is not enforced; only the orientation is considered.
+         pp (torch.Tensor or np.ndarray): (N,i) array of property position vectors.
+         pv (torch.Tensor or np.ndarray): (N,q) array of property value vectors.
+         iq (tuple): Inequality constraints. Should be a tuple `(N, [(P1, P2, iq), ...])`, where each P1 and P2 are (N,d) arrays or tensors
+             defining positions at which to evaluate inequality constraints such as `P1 > P2`. `iq` defines the inequality to evaluate, and can be `<`, `=` or `>`.
+             Note that this inequality is computed for a random set of `N` pairs sampled from `P1` and `P2`.
+         grid (tuple, torch.Tensor or np.ndarray): Either a tuple `(N, [[xmin,xmax],[ymin,ymax],...])` to use random grid points during each epoch, or an
+             (N,i) array of positions (in modern-day coordinates) defining specific points. These points are used to define
+             where "global" constraints are enforced.
+         delta (float): The step size used when computing numerical derivatives at the grid points. Default (None) is to initialise
+             as half the distance between the first and second points listed in `grid`. Larger values of delta result
+             in gradients that represent larger-scale trends.
+         trend (torch.Tensor or np.ndarray): an (i,) vector defining a globally preferential gradient direction.
+     """
+
+     # local constraints
+     vp : torch.Tensor = None
+     vv : torch.Tensor = None
+     gp : torch.Tensor = None
+     gv : torch.Tensor = None
+     gop : torch.Tensor = None
+     gov : torch.Tensor = None
+     pp : torch.Tensor = None
+     pv : torch.Tensor = None
+     iq : tuple = None # inequality constraints
+
+     # global constraints
+     grid : torch.Tensor = None # predefined grid, or params for sampling random ones
+     sgrid : torch.Tensor = None # tensor or array containing the last-used (random) grid
+     delta : float = None # step to use when computing numerical derivatives
+     trend : torch.Tensor = None # global preferential gradient direction vector
+     # axis: an (i,) vector defining a globally preferential axis direction.
+
+     # place to store offset vectors based on delta used for numerical gradient computation.
+     _offset : torch.Tensor = field(init=False, default=None)
+
+     def torch(self):
+         """
+         Return a copy of these constraints cast to pytorch tensors with the specified
+         data type and hosted on the specified device.
+         """
+         args = {}
+         for k in dir(self):
+             if '_' not in k and not callable(getattr(self, k)):
+                 attr = getattr(self, k)
+                 if attr is None: continue # easy
+                 if k == 'iq': # inequalities are special
+                     o = (attr[0], [])
+                     for i in range(len(attr[1])):
+                         # convert P1 and P2 to tensors (unless they already are)
+                         if not isinstance(attr[1][i][0], torch.Tensor):
+                             o[1].append((torch.tensor(attr[1][i][0], device=curlew.device, dtype=curlew.dtype),
+                                          torch.tensor(attr[1][i][1], device=curlew.device, dtype=curlew.dtype),
+                                          attr[1][i][2]))
+                         else:
+                             o[1].append((attr[1][i][0], attr[1][i][1], attr[1][i][2])) # already tensors
+                     attr = o
+                 else:
+                     if isinstance(attr, np.ndarray) or isinstance(attr, list): # convert ndarray or list types to tensor
+                         attr = torch.tensor(attr, device=curlew.device, dtype=curlew.dtype)
+                 args[k] = attr
+         return CSet(**args)
+
+     def numpy(self):
+         """
+         Return a copy of these constraints cast to numpy arrays if necessary.
+         """
+         args = {}
+         for k in dir(self):
+             if '_' not in k and not callable(getattr(self, k)):
+                 attr = getattr(self, k)
+                 if attr is None: continue # easy
+                 if k == 'iq': # inequalities are special
+                     o = (attr[0], [])
+                     for i in range(len(attr[1])):
+                         # convert P1 and P2 to numpy arrays
+                         if isinstance(attr[1][i][0], torch.Tensor):
+                             o[1].append((attr[1][i][0].cpu().detach().numpy(),
+                                          attr[1][i][1].cpu().detach().numpy(),
+                                          attr[1][i][2]))
+                         else:
+                             o[1].append((attr[1][i][0], attr[1][i][1], attr[1][i][2])) # already numpy
+                     attr = o
+                 else:
+                     if isinstance(attr, torch.Tensor):
+                         attr = attr.cpu().detach().numpy()
+                 args[k] = attr
+         return CSet(**args)
+
+     def toPLY(self, path):
+         """Save these constraints as PLY point clouds in the specified directory."""
+         from curlew.io import savePLY
+         from pathlib import Path
+         path = Path(path)
+         C = self.numpy()
+         if self.vp is not None: savePLY(path / 'value.ply', xyz=C.vp, attr=C.vv[:,None])
+         if self.gp is not None: savePLY(path / 'gradient.ply', xyz=C.gp, attr=C.gv)
+         if self.gop is not None: savePLY(path / 'orientation.ply', xyz=C.gop, attr=C.gov)
+         if self.iq is not None:
+             lkup = {'=':'eq','<':'lt','>':'gt'}
+             for i,iq in enumerate(C.iq[1]):
+                 savePLY(path / f'iq_{i}_{lkup[iq[2]]}/lhs.ply', xyz=iq[0], rgb=[(255,0,0) for i in range(len(iq[0]))])
+                 savePLY(path / f'iq_{i}_{lkup[iq[2]]}/rhs.ply', xyz=iq[1], rgb=[(0,0,255) for i in range(len(iq[1]))])
+
+     def toCSV(self, path):
+         """Save these constraints as CSV files in the specified directory."""
+         from pathlib import Path
+         path = Path(path)
+         C = self.numpy()
+         def saveCSV(path, xyz, attr=None, names=[], rgb=None):
+             import pandas as pd
+             cols = ['x','y','z'] + names
+             if rgb is not None:
+                 cols += ['r','g','b']
+             vals = xyz
+             if attr is not None:
+                 vals = np.hstack([vals, attr])
+             if rgb is not None:
+                 vals = np.hstack([vals, rgb])
+             df = pd.DataFrame(vals, columns=cols)
+             df.to_csv(path)
+
+         if self.vp is not None: saveCSV(path / 'value.csv', xyz=C.vp, attr=C.vv[:,None], names=['value'])
+         if self.gp is not None: saveCSV(path / 'gradient.csv', xyz=C.gp, attr=C.gv, names=['gx','gy','gz'])
+         if self.gop is not None: saveCSV(path / 'orientation.csv', xyz=C.gop, attr=C.gov, names=['gox','goy','goz'])
+         if self.iq is not None:
+             lkup = {'=':'eq','<':'lt','>':'gt'}
+             for i,iq in enumerate(C.iq[1]):
+                 saveCSV(path / f'iq_{i}_{lkup[iq[2]]}/lhs.csv', xyz=iq[0], rgb=[(255,0,0) for i in range(len(iq[0]))])
+                 saveCSV(path / f'iq_{i}_{lkup[iq[2]]}/rhs.csv', xyz=iq[1], rgb=[(0,0,255) for i in range(len(iq[1]))])
+
+     def copy(self):
+         """Creates a copy of this CSet instance."""
+         out = copy.deepcopy(self)
+         if self.grid is not None:
+             out.grid = self.grid.copy() # ensure grid is a copy
+         return out
+
+     def transform(self, f, batch=50000):
+         """
+         Apply the specified function to each position stored in this constraint set.
+
+         Parameters
+         ----------
+         f : callable
+             A function taking a set of points as input, such that `f(x)` returns the transformed positions.
+         batch : int
+             The batch size to use for reconstructing grids (as these can be quite large).
+
+         Returns
+         -------
+         A copy of this CSet instance with all positions transformed.
+         """
+         out = self.copy()
+         if out.vp is not None: out.vp = f(out.vp)
+         if out.gp is not None: out.gp = f(out.gp)
+         if out.gop is not None: out.gop = f(out.gop)
+         if out.pp is not None: out.pp = f(out.pp)
+         if out.iq is not None:
+             for i in range(len(out.iq[1])):
+                 out.iq[1][i] = (f(out.iq[1][i][0]), # LHS
+                                 f(out.iq[1][i][1]), # RHS
+                                 out.iq[1][i][2])    # relation
+         if self.grid is not None:
+             from curlew.utils import batchEval
+             out.grid._setCache(batchEval(self.grid.coords(), f))
+
+         # TODO -- use autodiff to rotate gradient constraints??
+
+         return out
+
+     def filter(self, f):
+         """
+         Apply the specified filter to each position stored in this constraint set.
+
+         Parameters
+         ----------
+         f : callable
+             A function taking a set of positions as input, such that `f(x)` returns True if the point should be retained, and False otherwise.
+
+         Returns
+         -------
+         A copy of this CSet instance with the filter applied to potentially remove points.
+         """
+         out = self.copy()
+         def e(arr):
+             mask = f(arr)
+             if isinstance(arr, torch.Tensor): mask = torch.tensor(mask, device=curlew.device, dtype=torch.bool)
+             if isinstance(arr, np.ndarray): mask = np.array(mask, dtype=bool)
+             return mask
+         if out.vp is not None:
+             mask = e(out.vp)
+             out.vp = out.vp[mask,:]
+             out.vv = out.vv[mask]
+         if out.gp is not None:
+             mask = e(out.gp)
+             out.gp = out.gp[mask,:]
+             out.gv = out.gv[mask,:]
+         if out.gop is not None:
+             mask = e(out.gop)
+             out.gop = out.gop[mask,:]
+             out.gov = out.gov[mask,:]
+         if out.pp is not None:
+             mask = e(out.pp)
+             out.pp = out.pp[mask,:]
+             out.pv = out.pv[mask,...]
+         if out.iq is not None:
+             for i in range(len(out.iq[1])):
+                 out.iq[1][i] = (out.iq[1][i][0][e(out.iq[1][i][0]),:], # LHS
+                                 out.iq[1][i][1][e(out.iq[1][i][1]),:], # RHS
+                                 out.iq[1][i][2])                       # relation
+         return out
+
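The `(N, [(P1, P2, iq), ...])` layout expected by `CSet.iq` is easiest to see built by hand. A minimal sketch with hypothetical 2D points (plain lists here for brevity; real use would pass arrays or tensors):

```python
# Build an inequality constraint set following the (N, [(P1, P2, relation), ...])
# layout from the CSet docstring. All values below are illustrative only.
P1 = [[0.0, 0.0], [1.0, 0.0]]  # field values here should be greater ...
P2 = [[0.0, 1.0], [1.0, 1.0]]  # ... than the field values here
iq = (100, [(P1, P2, '>')])    # evaluate 100 random pairs per epoch

n_pairs, relations = iq
for lhs, rhs, rel in relations:
    print(len(lhs), len(rhs), rel)  # prints: 2 2 >
```

Several `(P1, P2, relation)` triples can be appended to the inner list to mix `<`, `=` and `>` constraints in one set.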
+ @dataclass
+ class HSet:
+     """
+     Set of hyperparameters used for training one or several NFs. Values can be 0 to completely disable a
+     loss function, or a string such as "1.0" or "0.1" to initialise the hyperparameter as the specified fraction of `1 / initial_loss`. Note that
+     simpler loss functions (i.e. with most of the loss components set to 0) can be much easier to optimise, so try to keep things simple.
+
+     Attributes:
+         value_loss : float | str
+             Factor applied to value losses. Default is 1.
+         grad_loss : float | str
+             Factor applied to gradient losses. Default is 1 (as this loss generally ranges between 0 and 1).
+         ori_loss : float | str
+             Factor applied to orientation losses (which are fixed to the range 0 to 1). Default is 0.
+         thick_loss : float | str
+             Factor applied to thickness loss. Default is 1 (as this loss is also generally small).
+         mono_loss : float | str
+             Factor applied to monotonicity (divergence) loss. Default is "0.01" (initialise automatically).
+         flat_loss : float | str
+             Factor applied to global trend misfit. Default is 0.1 (as this shouldn't be too strongly applied).
+         prop_loss : float | str
+             Factor applied to scale the loss resulting from reconstructed property fields (i.e. forward model misfit). Default is "1.0" (initialise automatically).
+         iq_loss : float | str
+             Factor applied to scale the loss resulting from any provided inequality constraints. Default is 0.
+         use_dynamic_loss_weighting : bool
+             Enables dynamic task loss weighting based on real-time loss values. Default is False.
+             This approach ensures that each task contributes equally in magnitude (≈1)
+             while still allowing non-zero gradients. It effectively adjusts the relative
+             gradient scale of each task based on its current loss.
+         one_hot : bool
+             Enables one-hot encoding of the scalar field value according to the event-ID. Only works with property field HSet()s.
+     """
+
+     value_loss : float = 1
+     grad_loss : float = 1
+     ori_loss : float = 0
+     thick_loss : float = 1
+     mono_loss : float = "0.01"
+     flat_loss : float = 0.1
+     prop_loss : float = "1.0"
+     iq_loss : float = 0
+     use_dynamic_loss_weighting : bool = False
+     one_hot : bool = False
+
+     def copy(self, **kwargs):
+         """
+         Creates a copy of this HSet instance. Pass keywords to update specific parts of the copy.
+
+         Keywords
+         --------
+         Keywords can be provided to adjust hyperparameters after making the copy.
+         """
+         out = copy.deepcopy(self)
+         for k,v in kwargs.items():
+             setattr(out, k, v)
+         return out
+
+     def zero(self, **kwargs):
+         """
+         Set all hyperparameters in this HSet to zero and return it. Useful to disable all losses before setting a few relevant ones.
+
+         Keywords
+         --------
+         Any non-zero hyperparameters can be passed as keywords along with their desired value.
+         """
+         for k in dir(self):
+             if '__' not in k:
+                 if not callable(getattr(self, k)):
+                     setattr(self, k, kwargs.get(k, 0))
+         return self
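The reflection trick in `HSet.zero()` (walk `dir()`, skip dunders and callables, zero everything else) generalises to any flat dataclass of hyperparameters. A stdlib-only sketch with a simplified stand-in class (`MiniHSet` is illustrative, not part of curlew):

```python
from dataclasses import dataclass

@dataclass
class MiniHSet:
    value_loss: float = 1.0
    grad_loss: float = 1.0
    flat_loss: float = 0.1

    def zero(self, **kwargs):
        # Zero every public, non-callable attribute unless a replacement
        # value was supplied as a keyword argument.
        for k in dir(self):
            if '__' not in k and not callable(getattr(self, k)):
                setattr(self, k, kwargs.get(k, 0))
        return self

h = MiniHSet().zero(value_loss=2.0)
print(h.value_loss, h.grad_loss, h.flat_loss)  # prints: 2.0 0 0
```

This makes "disable everything except one loss" a one-liner, at the cost of relying on attribute-name reflection rather than an explicit field list.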