pyopenrivercam 0.8.12__py3-none-any.whl → 0.9.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
pyopenrivercam-0.8.12.dist-info/METADATA → pyopenrivercam-0.9.1.dist-info/METADATA
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: pyopenrivercam
- Version: 0.8.12
+ Version: 0.9.1
  Summary: pyorc: free and open-source image-based surface velocity and discharge.
  Author-email: Hessel Winsemius <winsemius@rainbowsensing.com>
  Requires-Python: >=3.9
@@ -21,23 +21,25 @@ Requires-Dist: click
  Requires-Dist: cython; platform_machine == 'armv7l'
  Requires-Dist: dask
  Requires-Dist: descartes
- Requires-Dist: ffpiv>=0.1.4
+ Requires-Dist: ffpiv>=0.2.0
  Requires-Dist: flox
  Requires-Dist: geojson
  Requires-Dist: geopandas
  Requires-Dist: matplotlib
  Requires-Dist: netCDF4
- Requires-Dist: numba
- Requires-Dist: numpy>=1.23, <2
+ Requires-Dist: numba>0.56
+ Requires-Dist: numpy==1.26.4; python_version == '3.9'
+ Requires-Dist: numpy>=2; python_version >= '3.10'
  Requires-Dist: opencv-python
- Requires-Dist: openpiv
  Requires-Dist: packaging; platform_machine == 'armv7l'
+ Requires-Dist: pillow==9.5.0; python_version == '3.9'
+ Requires-Dist: pillow; python_version >= '3.10'
  Requires-Dist: pip
  Requires-Dist: pyproj
  Requires-Dist: pythran; platform_machine == 'armv7l'
  Requires-Dist: pyyaml
- Requires-Dist: rasterio<1.4.0
- Requires-Dist: scikit-image
+ Requires-Dist: rasterio<1.4.0; python_version <= '3.12'
+ Requires-Dist: rasterio; python_version > '3.12'
  Requires-Dist: scipy
  Requires-Dist: shapely
  Requires-Dist: tqdm
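
The dependency set above now uses PEP 508 environment markers so that Python 3.9 stays on the last known-good pins (numpy 1.26.4, pillow 9.5.0) while Python 3.10+ gets numpy 2 and a current pillow, and the `rasterio<1.4.0` cap only applies up to Python 3.12. A minimal sketch of how such markers resolve at install time (using the `packaging` library, which pip builds on; the markers are copied from the entries above):

```python
from packaging.markers import Marker

# Each marker is evaluated against the running interpreter, so exactly one of the
# two numpy requirements (and one of the two pillow requirements) applies per install.
print(Marker("python_version == '3.9'").evaluate())   # True only on Python 3.9
print(Marker("python_version >= '3.10'").evaluate())  # True on Python 3.10 and newer
print(Marker("python_version <= '3.12'").evaluate())  # governs the rasterio<1.4.0 cap
```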
@@ -112,11 +114,15 @@ We are seeking funding for the following frequently requested functionalities:
 
  If you wish to fund this or other work on features, please contact us at info@rainbowsensing.com.
 
- > **_note:_** For instructions how to get Anaconda (with lots of pre-installed libraries) or Miniconda (light weight) installed, please go to https://docs.conda.io/projects/conda/en/latest/
+ > [!NOTE]
+ > For instructions how to get Anaconda (with lots of pre-installed libraries) or Miniconda (light weight) installed,
+ > please go to https://docs.conda.io/projects/conda/en/latest/
 
- > **_manual:_** Please go to https://localdevices.github.io/pyorc for the latest documentation
+ > [!TIP]
+ > Please go to https://localdevices.github.io/pyorc for the latest documentation
 
- > **_compatibility:_** At this moment **pyorc** works with any video compatible with OpenCV as long as it has proper metadata.
+ > [!IMPORTANT]
+ > At this moment **pyorc** works with any video compatible with OpenCV as long as it has proper metadata.
 
  ## Installation
  You need a python environment. We recommend using the Miniforge project. Download
@@ -158,6 +164,18 @@ pip install pyopenrivercam[extra]
  The `[extra]` section ensures that also geographical plotting is supported, which we recommend especially for the
  set up of a camera configuration with RTK-GPS measured control points.
 
+ > [!NOTE]
+ >
+ > Most of the heavy lifting is done while deriving cross-correlations for estimation of velocity vectors with Particle
+ > Image Velocimetry. You can speed up this process (x2) by installing `rocket-fft`. With `python <= 3.12` this
+ > is automatically included. With higher versions, you need, for the moment, to install it separately as follows:
+ >
+ > ```shell
+ > pip install git+https://github.com/localdevices/rocket-fft.git
+ > ```
+ >
+ > Once rocket-fft gets updated in PyPi you will no longer need this separate installation procedure.
+
  ### Upgrading from pypi with pip
 
  Did you read about a new version and you want to upgrade? Simply activate your virtual environment, type
@@ -175,6 +193,8 @@ If you use `mamba` as a package manager, then the steps are the same, except for
  ```shell
  mamba install pyopenrivercam
  ```
+ The version installed may not have the latest underlying libraries and therefore may be slower than the latest PyPi
+ version. We therefore recommend using `pip` for installation (see above).
 
  ### Installation from latest code base
 
pyopenrivercam-0.8.12.dist-info/RECORD → pyopenrivercam-0.9.1.dist-info/RECORD
@@ -1,16 +1,16 @@
- pyorc/__init__.py,sha256=osrK3G2RA5IsF1ka-QHSPa7gXf7JCXQuUWK8RnR1I1A,524
+ pyorc/__init__.py,sha256=xEUNlptqQPG7rmdsLoKbE67WfOF1TsCLPi5I3SQgv9U,523
  pyorc/const.py,sha256=Ia0KRkm-E1lJk4NxQVPDIfN38EBB7BKvxmwIHJrGPUY,2597
- pyorc/cv.py,sha256=fXGqT8vBn9-z6UxS5ho7thP9VQll9RrYHJW5KnUJQjo,50250
- pyorc/helpers.py,sha256=90TDtka0ydAydv3g5Dfc8MgtuSt0_9D9-HOtffpcBds,30636
+ pyorc/cv.py,sha256=t2ZR4eyGbiwlIaGHysOheWdaDQuqpWLKjcTiAUzWAR0,50261
+ pyorc/helpers.py,sha256=jed0YyywnpvsZS-8mcA7Lfzn9np1MTlmVLE_PDn2QY0,30454
  pyorc/plot_helpers.py,sha256=gLKslspsF_Z4jib5jkBv2wRjKnHTbuRFgkp_PCmv-uU,1803
  pyorc/project.py,sha256=CGKfICkQEpFRmh_ZeDEfbQ-wefJt7teWJd6B5IPF038,7747
  pyorc/pyorc.sh,sha256=-xOSUNnMAwVbdNkjKNKMZMaBljWsGLhadG-j0DNlJP4,5
- pyorc/sample_data.py,sha256=53NVnVmEksDw8ilbfhFFCiFJiGAIpxdgREbA_xt8P3o,2508
+ pyorc/sample_data.py,sha256=_yxtjhHc1sjHXZJRWQgBNOOn0Qqs2A5CavyFOQX5p8U,3241
  pyorc/api/__init__.py,sha256=k2OQQH4NrtXTuVm23d0g_SX6H5DhnKC9_kDyzJ4dWdk,428
  pyorc/api/cameraconfig.py,sha256=NP9F7LhPO3aO6FRWkrGl6XpX8O3K59zfTtaYR3Kujqw,65419
  pyorc/api/cross_section.py,sha256=MH0AEw5K1Kc1ClZeQRBUYkShZYVk41fshLn6GzCZAas,65212
- pyorc/api/frames.py,sha256=Kls4mpU_4hoUaXs9DJf2S6RHyp2D5emXJkAQWvvT39U,24300
- pyorc/api/mask.py,sha256=-owe66kte2ob3_Zndf21JR-LyEX_1mECbHOuqNfzuMc,16507
+ pyorc/api/frames.py,sha256=BnglhmHdbKlIip5tym3x-aICOpQRmm853109A7JkWk8,23189
+ pyorc/api/mask.py,sha256=A3TRMqi30L4N491C4FoYY0zvV1GwQ1U31OEkCJp_Nzc,16698
  pyorc/api/orcbase.py,sha256=C23QTKOyxHUafyJsq_t7xn_BzAEvf4DDfzlYAopons8,4189
  pyorc/api/plot.py,sha256=MxIEIS8l46bUaca0GtMazx8-k2_TbfQLrPCPAjuWos8,31082
  pyorc/api/transect.py,sha256=wENKWt0u0lHtT0lPYv47faHf_iAN9Mmeev-vwWjnz6E,13382
@@ -24,11 +24,10 @@ pyorc/cli/main.py,sha256=qhAZkUuAViCpHh9c19tpcpbs_xoZJkYHhOsEXJBFXfM,12742
  pyorc/service/__init__.py,sha256=vPrzFlZ4e_GjnibwW6-k8KDz3b7WpgmGcwSDk0mr13Y,55
  pyorc/service/camera_config.py,sha256=OsRLpe5jd-lu6HT4Vx5wEg554CMS-IKz-q62ir4VbPo,2375
  pyorc/service/velocimetry.py,sha256=bPI1OdN_fi0gZES08mb7yqCS_4I-lKSZ2JvWSGTRD1E,34434
- pyorc/velocimetry/__init__.py,sha256=lYM7oJZWxgJ2SpE22xhy7pBYcgkKFHMBHYmDvvMbtZk,148
- pyorc/velocimetry/ffpiv.py,sha256=CYUjgwnB5osQmJ83j3E00B9P0_hS-rFuhyvufxKXtag,17487
- pyorc/velocimetry/openpiv.py,sha256=6BxsbXLzT4iEq7v08G4sOhVlYFodUpY6sIm3jdCxNMs,13149
- pyopenrivercam-0.8.12.dist-info/entry_points.txt,sha256=Cv_WI2Y6QLnPiNCXGli0gS4WAOAeMoprha1rAR3vdRE,44
- pyopenrivercam-0.8.12.dist-info/licenses/LICENSE,sha256=DZak_2itbUtvHzD3E7GNUYSRK6jdOJ-GqncQ2weavLA,34523
- pyopenrivercam-0.8.12.dist-info/WHEEL,sha256=G2gURzTEtmeR8nrdXUJfNiB3VYVxigPQ-bEQujpNiNs,82
- pyopenrivercam-0.8.12.dist-info/METADATA,sha256=hU0j9dsG6ksjRTM6UdMLVJ15TGzGQ9lW1mHhxDKxJy0,11641
- pyopenrivercam-0.8.12.dist-info/RECORD,,
+ pyorc/velocimetry/__init__.py,sha256=5oShoMocCalcCZuIsBqlZlqQuKJgDDBUvXQIo-uqFPA,88
+ pyorc/velocimetry/ffpiv.py,sha256=92XDgzCW4mEZ5ow82zV0APOhfDc1OVftBjKqYdw1zzc,17494
+ pyopenrivercam-0.9.1.dist-info/entry_points.txt,sha256=Cv_WI2Y6QLnPiNCXGli0gS4WAOAeMoprha1rAR3vdRE,44
+ pyopenrivercam-0.9.1.dist-info/licenses/LICENSE,sha256=DZak_2itbUtvHzD3E7GNUYSRK6jdOJ-GqncQ2weavLA,34523
+ pyopenrivercam-0.9.1.dist-info/WHEEL,sha256=G2gURzTEtmeR8nrdXUJfNiB3VYVxigPQ-bEQujpNiNs,82
+ pyopenrivercam-0.9.1.dist-info/METADATA,sha256=grSrBhgs9_bH4u-R_J8xHM4bHKiyJwDXOK_kckDN8ng,12566
+ pyopenrivercam-0.9.1.dist-info/RECORD,,
pyorc/__init__.py CHANGED
@@ -1,6 +1,6 @@
  """pyorc: free and open-source image-based surface velocity and discharge."""
 
- __version__ = "0.8.12"
+ __version__ = "0.9.1"
 
  from .api import CameraConfig, CrossSection, Frames, Transect, Velocimetry, Video, get_camera_config, load_camera_config # noqa
  from .project import * # noqa
pyorc/api/frames.py CHANGED
@@ -12,7 +12,7 @@ from matplotlib.animation import FuncAnimation
  from tqdm import tqdm
 
  from pyorc import const, cv, helpers, project
- from pyorc.velocimetry import ffpiv, openpiv
+ from pyorc.velocimetry import ffpiv
 
  from .orcbase import ORCBase
  from .plot import _frames_plot
@@ -124,14 +124,12 @@ class Frames(ORCBase):
  overlap : (int, int), optional
      amount of overlap between interrogation windows in pixels (y, x)
  engine : str, optional
-     select the compute engine, can be "openpiv" (default), "numba", or "numpy". "numba" will give the fastest
-     performance but is still experimental. It can boost performance by almost an order of magnitude compared
-     to openpiv or numpy. both "numba" and "numpy" use the FF-PIV library as back-end.
+     select the compute engine, can be "numba" (default), or "numpy". "numba" will give the fastest
+     performance. It can boost performance by almost an order of magnitude compared
+     to numpy. both "numba" and "numpy" use the FF-PIV library as back-end.
  ensemble_corr : bool, optional
-     only used with `engine="numba"` or `engine="numpy"`.
      If True, performs PIV by first averaging cross-correlations across all frames and then deriving velocities.
      If False, computes velocities for each frame pair separately. Default is True.
-
  **kwargs : dict
      keyword arguments to pass to the piv engine. For "numba" and "numpy" the argument `chunks` can be provided
      with an integer defining in how many batches of work the total velocimetry problem should be subdivided.
@@ -143,7 +141,6 @@
 
  See Also
  --------
- OpenPIV project: https://github.com/OpenPIV/openpiv-python
  FF-PIV project: https://github.com/localdevices/ffpiv
 
  """
@@ -167,43 +164,19 @@
  # get all required coordinates for the PIV result
  coords, mesh_coords = self.get_piv_coords(window_size, search_area_size, overlap)
  # provide kwargs for OpenPIV analysis
- if engine == "openpiv":
-     # thresholds are not used.
-
-     import warnings
-
-     warnings.warn(
-         '"openpiv" is deprecated, please use "numba" or "numpy" as engine',
-         DeprecationWarning,
-         stacklevel=2,
-     )
-     # Remove threshold parameters from kwargs
-     kwargs.pop("corr_min", None)
-     kwargs.pop("s2n_min", None)
-     kwargs.pop("count_min", None)
-     kwargs = {
-         **kwargs,
-         "search_area_size": search_area_size[0],
-         "window_size": window_size[0],
-         "overlap": overlap[0],
-         "res_x": camera_config.resolution,
-         "res_y": camera_config.resolution,
-     }
-     ds = openpiv.get_openpiv(self._obj, coords["y"], coords["x"], dt, **kwargs)
- elif engine in ["numba", "numpy"]:
-     kwargs = {
-         **kwargs,
-         "search_area_size": search_area_size,
-         "window_size": window_size,
-         "overlap": overlap,
-         "res_x": camera_config.resolution,
-         "res_y": camera_config.resolution,
-     }
-     ds = ffpiv.get_ffpiv(
-         self._obj, coords["y"], coords["x"], dt, engine=engine, ensemble_corr=ensemble_corr, **kwargs
-     )
- else:
+ if engine not in ["numba", "numpy"]:
      raise ValueError(f"Selected PIV engine {engine} does not exist.")
+ kwargs = {
+     **kwargs,
+     "search_area_size": search_area_size,
+     "window_size": window_size,
+     "overlap": overlap,
+     "res_x": camera_config.resolution,
+     "res_y": camera_config.resolution,
+ }
+ ds = ffpiv.get_ffpiv(
+     self._obj, coords["y"], coords["x"], dt, engine=engine, ensemble_corr=ensemble_corr, **kwargs
+ )
  # add all 2D-coordinates
  ds = ds.velocimetry.add_xy_coords(mesh_coords, coords, {**const.PERSPECTIVE_ATTRS, **const.GEOGRAPHICAL_ATTRS})
  # ensure all metadata is transferred
@@ -359,7 +332,7 @@
      keep_attrs=True,
  )
 
- def minmax(self, min=-np.Inf, max=np.Inf):
+ def minmax(self, min=-np.inf, max=np.inf):
      """Minimum / maximum intensity filter.
 
      All pixels will be thresholded to a minimum and maximum value.
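
With the OpenPIV code path removed, `get_piv` now only accepts the FF-PIV engines described in the docstring above. A minimal usage sketch, assuming a video and a matching camera configuration are already available (the file names are placeholders and the call sequence follows the pyorc accessor API, so treat it as illustrative rather than authoritative):

```python
import pyorc

# Placeholder inputs: any OpenCV-readable video plus a previously created camera configuration.
cam_config = pyorc.load_camera_config("camera_config.json")
video = pyorc.Video("river_clip.mp4", camera_config=cam_config, start_frame=0, end_frame=125)

da = video.get_frames()                   # frames as an xarray DataArray
da_proj = da.frames.project()             # ".frames" accessor provided by the Frames class above
piv = da_proj.frames.get_piv(engine="numba", ensemble_corr=True)  # "numba" is now the default engine
```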
pyorc/api/mask.py CHANGED
@@ -254,14 +254,17 @@ class _Velocimetry_MaskMethods:
  be within tolerance.
 
  """
- x_std = self[v_x].std(dim="time")
- y_std = self[v_y].std(dim="time")
- x_mean = np.maximum(self[v_x].mean(dim="time"), 1e30)
- y_mean = np.maximum(self[v_y].mean(dim="time"), 1e30)
- x_var = np.abs(x_std / x_mean)
- y_var = np.abs(y_std / y_mean)
- x_condition = x_var < tolerance
- y_condition = y_var < tolerance
+ with warnings.catch_warnings():
+     # suppress warnings on all-NaN slices
+     warnings.simplefilter("ignore", category=RuntimeWarning)
+     x_std = self[v_x].std(dim="time")
+     y_std = self[v_y].std(dim="time")
+     x_mean = np.maximum(self[v_x].mean(dim="time"), 1e30)
+     y_mean = np.maximum(self[v_y].mean(dim="time"), 1e30)
+     x_var = np.abs(x_std / x_mean)
+     y_var = np.abs(y_std / y_mean)
+     x_condition = x_var < tolerance
+     y_condition = y_var < tolerance
  if mode == "or":
      mask = x_condition | y_condition
  else:
pyorc/cv.py CHANGED
@@ -243,7 +243,8 @@ def _get_perpendicular_distance(point, line):
  perpendicular_distance = np.linalg.norm(perpendicular_vector)
 
  # Use cross product to calculate side
- cross_product = np.cross(line_vector, point_vector)
+ # cross_product = np.cross(line_vector, point_vector)
+ cross_product = line_vector[0] * point_vector[1] - line_vector[1] * point_vector[0]
 
  # Determine the sign of the perpendicular distance
  return perpendicular_distance if cross_product > 0 else -perpendicular_distance
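
Replacing `np.cross` with an explicit 2-D determinant sidesteps NumPy's deprecation of cross products for 2-element vectors, which matters now that the metadata allows `numpy>=2` on Python 3.10+. The scalar is the z-component of the corresponding 3-D cross product, so the sign test is unchanged; a quick check with made-up vectors:

```python
import numpy as np

line_vector = np.array([3.0, 1.0])    # illustrative 2-D vectors
point_vector = np.array([1.0, 2.0])

# Signed area spanned by the two vectors (z-component of the 3-D cross product).
cross_product = line_vector[0] * point_vector[1] - line_vector[1] * point_vector[0]
print(cross_product)  # 5.0 -> positive, so the point lies on the positive side of the line
```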
@@ -1070,12 +1071,7 @@ def get_polygon_pixels(img, pol, reverse_y=False):
  if 0 in mask.shape:
      # no shape in mask, so return empty array instantly
      return np.array([], dtype=np.uint8)
- try:
-     cv2.fillPoly(mask, [np.array(cropped_polygon_coords, dtype=np.int32)], color=255)
- except Exception:
-     import pdb
-
-     pdb.set_trace()
+ cv2.fillPoly(mask, [np.array(cropped_polygon_coords, dtype=np.int32)], color=255)
  return numba_extract_pixels(cropped_img, mask)
 
 
pyorc/helpers.py CHANGED
@@ -189,15 +189,6 @@ def get_geo_axes(tiles=None, extent=None, zoom_level=19, **kwargs):
 
  """
  _check_cartopy_installed()
- #
- # try:
- #     import cartopy
- #     import cartopy.crs as ccrs
- #     import cartopy.io.img_tiles as cimgt
- # except ModuleNotFoundError:
- #     raise ModuleNotFoundError(
- #         'Geographic plotting requires cartopy. Please install it with "conda install cartopy" and try ' "again."
- #     )
  ccrs, cimgt = _import_cartopy_modules()
  if tiles is not None:
      tiler = getattr(cimgt, tiles)(**kwargs)
@@ -299,6 +290,9 @@ def get_xs_ys(cols, rows, transform):
  """
  xs, ys = xy(transform, rows, cols)
  xs, ys = np.array(xs), np.array(ys)
+ # Ensure data is reshaped (later versions of rasterio return a 1D array only)
+ xs = xs.reshape(rows.shape)
+ ys = ys.reshape(rows.shape)
  return xs, ys
 
 
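The added reshape guards against rasterio versions whose `transform.xy` returns flat sequences rather than arrays shaped like the input row/column grids. A small sketch of the defensive pattern on a made-up transform (grid spacing and extents are illustrative):

```python
import numpy as np
from rasterio.transform import from_origin, xy

transform = from_origin(0.0, 0.0, 0.01, 0.01)        # hypothetical 0.01 m resolution grid
cols, rows = np.meshgrid(np.arange(4), np.arange(3))

# Pass flattened indices, then restore the grid shape; this works whether xy()
# hands back plain lists or arrays matching the input shape.
xs, ys = xy(transform, rows.ravel(), cols.ravel())
xs = np.asarray(xs).reshape(rows.shape)
ys = np.asarray(ys).reshape(rows.shape)
print(xs.shape, ys.shape)  # (3, 4) (3, 4)
```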
pyorc/sample_data.py CHANGED
@@ -1,6 +1,7 @@
  """Retrieval of sample dataset."""
 
  import os
+ import time
  import zipfile
 
 
@@ -13,7 +14,7 @@ def get_hommerich_dataset():
 
  # Define the DOI link
  filename = "20241010_081717.mp4"
- base_url = "doi:10.5281/zenodo.15002591"
+ base_url = "https://zenodo.org/records/15002591/files"
  url = base_url + "/" + filename
  print(f"Retrieving or providing cached version of dataset from {url}")
  # Create a Pooch registry to manage downloads
@@ -26,7 +27,16 @@ def get_hommerich_dataset():
      registry={filename: None},
  )
  # Fetch the dataset
- file_path = registry.fetch(filename, progressbar=True)
+ for attempt in range(5):
+     try:
+         file_path = registry.fetch(filename, progressbar=True)
+         break
+     except Exception as e:
+         if attempt == 4:
+             raise f"Download failed with error: {e}."
+         else:
+             print(f"Download failed with error: {e}. Retrying...")
+             time.sleep(1)
  print(f"Hommerich video is available in {file_path}")
  return file_path
 
@@ -40,7 +50,7 @@ def get_hommerich_pyorc_zip():
 
  # Define the DOI link
  filename = "hommerich_20241010_081717_pyorc_data.zip.zip"
- base_url = "doi:10.5281/zenodo.15002591"
+ base_url = "https://zenodo.org/records/15002591/files"
  url = base_url + "/" + filename
  print(f"Retrieving or providing cached version of dataset from {url}")
  # Create a Pooch registry to manage downloads
@@ -54,6 +64,17 @@
  )
  # Fetch the dataset
  file_path = registry.fetch(filename, progressbar=True)
+ # Fetch the dataset
+ for attempt in range(5):
+     try:
+         file_path = registry.fetch(filename, progressbar=True)
+         break
+     except Exception as e:
+         if attempt == 4:
+             raise f"Download failed with error: {e}."
+         else:
+             print(f"Download failed with error: {e}. Retrying...")
+             time.sleep(1)
  print(f"Hommerich video is available in {file_path}")
  return file_path
 
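The new retry loop re-raises on the fifth failed attempt, but `raise f"..."` raises a plain string, which Python rejects with `TypeError: exceptions must derive from BaseException`; note also that `get_hommerich_pyorc_zip` keeps the original unconditional `registry.fetch` call in front of the loop, so the file is requested twice. A corrected sketch of the same pattern (the helper name and retry/back-off values are illustrative, not part of the package):

```python
import time


def fetch_with_retries(registry, filename, attempts=5, delay=1.0):
    """Fetch a file through a pooch registry, retrying a few times on transient errors."""
    for attempt in range(attempts):
        try:
            return registry.fetch(filename, progressbar=True)
        except Exception as e:
            if attempt == attempts - 1:
                # Raise a real exception object instead of a plain string.
                raise RuntimeError(f"Download failed after {attempts} attempts: {e}") from e
            print(f"Download failed with error: {e}. Retrying...")
            time.sleep(delay)
```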
pyorc/velocimetry/__init__.py CHANGED
@@ -1,6 +1,5 @@
  """pyorc velocimetry methods."""
 
  from .ffpiv import get_ffpiv
- from .openpiv import get_openpiv, piv
 
- __all__ = ["get_ffpiv", "piv", "get_openpiv"]
+ __all__ = ["get_ffpiv"]
pyorc/velocimetry/ffpiv.py CHANGED
@@ -412,12 +412,11 @@ def _get_uv_timestep(da, n_cols, n_rows, window_size, overlap, search_area_size,
      verbose=False,
  )
 
- # get the maximum correlation per interrogation window
- corr_max = np.nanmax(corr, axis=(-1, -2))
-
  # get signal-to-noise, whilst suppressing nanmean over empty slice warnings
  with warnings.catch_warnings():
+     # get the maximum correlation per interrogation window
      warnings.simplefilter("ignore", category=RuntimeWarning)
+     corr_max = np.nanmax(corr, axis=(-1, -2))
      s2n = corr_max / np.nanmean(corr, axis=(-1, -2))
 
  # reshape corr / s2n to the amount of expected rows and columns
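
This mirrors the mask.py change above: reductions over interrogation windows that are NaN everywhere trigger NumPy RuntimeWarnings ("All-NaN slice encountered", "Mean of empty slice"), so both the peak and the mean are now computed inside the `catch_warnings` block. A minimal sketch of the peak-to-mean signal-to-noise computation on a toy correlation stack (the array shapes are illustrative):

```python
import warnings

import numpy as np

corr = np.random.rand(10, 3, 3)   # 10 interrogation windows of 3x3 correlation values
corr[0] = np.nan                  # a window where no valid correlation was found

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    corr_max = np.nanmax(corr, axis=(-1, -2))         # peak correlation per window
    s2n = corr_max / np.nanmean(corr, axis=(-1, -2))  # peak-to-mean signal-to-noise ratio

print(s2n.shape)  # (10,); the all-NaN window yields NaN instead of emitting a warning
```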
pyorc/velocimetry/openpiv.py DELETED
@@ -1,331 +0,0 @@
- """PIV processing wrappers for OpenPIV."""
-
- from typing import List, Optional, Tuple, Union
-
- import numpy as np
- import openpiv.pyprocess
- import openpiv.tools
- import xarray as xr
-
- __all__ = [
-     "get_openpiv",
-     "piv",
- ]
-
-
- def get_openpiv(frames, y, x, dt, **kwargs):
-     """Compute time-resolved Particle Image Velocimetry (PIV) using Fast Fourier Transform (FFT) within OpenPIV.
-
-     Calculates velocity using the OpenPIV algorithms by processing sequential frames
-     from a dataset and returning the velocity components, signal-to-noise ratio, and
-     correlation values. The function shifts frames in time and applies the PIV algorithm
-     to compute flow fields over the specified spatial axes.
-
-     Parameters
-     ----------
-     frames : xarray.Dataset
-         The input dataset containing time-dependent frames with coordinates.
-     y : array-like
-         The spatial coordinates along the y-axis where the outputs should be interpolated.
-     x : array-like
-         The spatial coordinates along the x-axis where the outputs should be interpolated.
-     dt : float
-         The time step between consecutive frames (used to go from per-frame to per-second displacement).
-     **kwargs : dict
-         Additional keyword arguments to be passed to the PIV function.
-
-     Returns
-     -------
-     xarray.Dataset
-         A dataset containing computed velocity components `v_x` and `v_y`,
-         signal-to-noise ratios `s2n`, and correlation values `corr`. The dataset
-         includes updated x and y coordinates representing the flow field grid.
-
-     """
-     # first get rid of coordinates that need to be recalculated
-     coords_drop = list(set(frames.coords) - set(frames.dims))
-     frames = frames.drop_vars(coords_drop)
-     # get frames and shifted frames in time
-     frames1 = frames.shift(time=1)[1:].chunk({"time": 1})
-     frames2 = frames[1:].chunk({"time": 1})
-     # retrieve all data arrays
-     v_x, v_y, s2n, corr = xr.apply_ufunc(
-         piv,
-         frames1,
-         frames2,
-         dt,
-         kwargs=kwargs,
-         input_core_dims=[["y", "x"], ["y", "x"], []],
-         output_core_dims=[["new_y", "new_x"]] * 4,
-         dask_gufunc_kwargs={
-             "output_sizes": {"new_y": len(y), "new_x": len(x)},
-         },
-         output_dtypes=[np.float32] * 4,
-         vectorize=True,
-         keep_attrs=True,
-         dask="parallelized",
-     )
-     # merge all DataArrays in one Dataset
-     ds = xr.merge([v_x.rename("v_x"), v_y.rename("v_y"), s2n.rename("s2n"), corr.rename("corr")]).rename(
-         {"new_x": "x", "new_y": "y"}
-     )
-     # add y and x-axis values
-     ds["y"] = y
-     ds["x"] = x
-     return ds
-
-
- def piv(
-     frame_a,
-     frame_b,
-     dt,
-     res_x=0.01,
-     res_y=0.01,
-     search_area_size=30,
-     window_size=None,
-     overlap=None,
-     **kwargs,
- ):
-     """Perform PIV analysis on two sequential frames following keyword arguments from openpiv.
-
-     This function also computes the correlations per interrogation window, so that poorly correlated values can be
-     filtered out. Furthermore, the resolution is used to convert pixel per second velocity estimates, into meter per
-     second velocity estimates. The centre of search area columns and rows are also returned so that a georeferenced
-     grid can be written from the results.
-
-     Note: Typical openpiv kwargs are for instance
-     window_size=60, overlap=30, search_area_size=60, dt=1./25
-
-     Parameters
-     ----------
-     frame_a: np.ndarray (2D)
-         first frame
-     frame_b: np.ndarray (2D)
-         second frame
-     dt : float
-         time resolution in seconds.
-     res_x: float, optional
-         resolution of x-dir pixels in a user-defined unit per pixel (e.g. m pixel-1) Default: 0.01
-     res_y: float, optional
-         resolution of y-dir pixels in a user-defined unit per pixel (e.g. m pixel-1) Default: 0.01
-     search_area_size: int, optional
-         length of subsetted matrix to search for correlations (default: 30)
-     window_size: int, optional
-         size of interrogation window in amount of pixels. If not set, it is set equal to search_area_size
-         (default: None).
-     overlap: int, optional
-         length of overlap between interrogation windows. If not set, this defaults to 50% of the window_size parameter
-         (default: None).
-     **kwargs: dict
-         keyword arguments related to openpiv. See openpiv manual for further information
-
-     Returns
-     -------
-     v_x: np.ndarray(2D)
-         raw x-dir velocities [m s-1] in interrogation windows (requires filtering to get valid velocities)
-     v_y: np.ndarray (2D)
-         raw y-dir velocities [m s-1] in interrogation windows (requires filtering to get valid velocities)
-     s2n: np.ndarray (2D)
-         signal to noise ratio, measured as maximum correlation found divided by the mean correlation
-         (method="peak2mean") or second to maximum correlation (method="peak2peak") found within search area
-     corr: np.ndarray (2D)
-         correlation values in interrogation windows
-
-     """
-     window_size = search_area_size if window_size is None else window_size
-     overlap = int(round(window_size) / 2) if overlap is None else overlap
-     # modified version of extended_search_area_piv to accomodate exporting corr
-     v_x, v_y, s2n, corr = extended_search_area_piv(
-         frame_a, frame_b, dt=dt, search_area_size=search_area_size, overlap=overlap, window_size=window_size, **kwargs
-     )
-     return v_x * res_x, v_y * res_y, s2n, corr
-
-
- def extended_search_area_piv(
-     frame_a: np.ndarray,
-     frame_b: np.ndarray,
-     window_size: int,
-     overlap: int = 0,
-     dt: float = 1.0,
-     search_area_size: Optional[Union[Tuple[int, int], List[int], int]] = None,
-     correlation_method: str = "circular",
-     subpixel_method: str = "gaussian",
-     sig2noise_method: Optional[str] = "peak2mean",
-     width: int = 2,
-     normalized_correlation: bool = True,
-     use_vectorized: bool = False,
- ) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray]:
-     """Perform PIV cross-correlation analysis.
-
-     Extended area search can be used to increased dynamic range. The search region
-     in the second frame is larger than the interrogation window size in the
-     first frame. For Cython implementation see
-     openpiv.process.extended_search_area_piv
-
-     This is a pure python implementation of the standard PIV cross-correlation
-     algorithm. It is a zero order displacement predictor, and no iterative
-     process is performed.
-
-     Parameters
-     ----------
-     frame_a : 2d np.ndarray
-         an two dimensions array of integers containing grey levels of
-         the first frame.
-
-     frame_b : 2d np.ndarray
-         an two dimensions array of integers containing grey levels of
-         the second frame.
-
-     window_size : int
-         the size of the (square) interrogation window, [default: 32 pix].
-
-     overlap : int
-         the number of pixels by which two adjacent windows overlap
-         [default: 16 pix].
-
-     dt : float
-         the time delay separating the two frames [default: 1.0].
-
-     correlation_method : string
-         one of the two methods implemented: 'circular' or 'linear',
-         default: 'circular', it's faster, without zero-padding
-         'linear' requires also normalized_correlation = True (see below)
-
-     subpixel_method : string
-         one of the following methods to estimate subpixel location of the
-         peak:
-         'centroid' [replaces default if correlation map is negative],
-         'gaussian' [default if correlation map is positive],
-         'parabolic'.
-
-     sig2noise_method : string
-         defines the method of signal-to-noise-ratio measure,
-         ('peak2peak' or 'peak2mean'. If None, no measure is performed.)
-
-     width : int
-         the half size of the region around the first
-         correlation peak to ignore for finding the second
-         peak. [default: 2]. Only used if ``sig2noise_method==peak2peak``.
-
-     search_area_size : int
-         the size of the interrogation window in the second frame,
-         default is the same interrogation window size and it is a
-         fallback to the simplest FFT based PIV
-
-     normalized_correlation: bool
-         if True, then the image intensity will be modified by removing
-         the mean, dividing by the standard deviation and
-         the correlation map will be normalized. It's slower but could be
-         more robust
-
-     use_vectorized : bool
-         If set, vectorization is used to speed up analysis.
-
-     Returns
-     -------
-     u : 2d np.ndarray
-         a two dimensional array containing the u velocity component,
-         in pixels/seconds.
-
-     v : 2d np.ndarray
-         a two dimensional array containing the v velocity component,
-         in pixels/seconds.
-
-     sig2noise : 2d np.ndarray ( optional: only if sig2noise_method != None )
-         a two dimensional array the signal to noise ratio for each
-         window pair.
-
-     corr : 2d np.ndarray
-         a two dimensional array with the maximum correlation values found in each interrogation window.
-
-     The implementation of the one-step direct correlation with different
-     size of the interrogation window and the search area. The increased
-     size of the search areas cope with the problem of loss of pairs due
-     to in-plane motion, allowing for a smaller interrogation window size,
-     without increasing the number of outlier vectors.
-
-     See:
-
-     Particle-Imaging Techniques for Experimental Fluid Mechanics
-
-     Annual Review of Fluid Mechanics
-     Vol. 23: 261-304 (Volume publication date January 1991)
-     DOI: 10.1146/annurev.fl.23.010191.001401
-
-     originally implemented in process.pyx in Cython and converted to
-     a NumPy vectorized solution in pyprocess.py
-
-     """
-     if search_area_size is not None:
-         if isinstance(search_area_size, tuple) == False and isinstance(search_area_size, list) == False:
-             search_area_size = [search_area_size, search_area_size]
-     if isinstance(window_size, tuple) == False and isinstance(window_size, list) == False:
-         window_size = [window_size, window_size]
-     if isinstance(overlap, tuple) == False and isinstance(overlap, list) == False:
-         overlap = [overlap, overlap]
-
-     # check the inputs for validity
-     search_area_size = window_size if search_area_size is None else search_area_size
-
-     if overlap[0] >= window_size[0] or overlap[1] >= window_size[1]:
-         raise ValueError("Overlap has to be smaller than the window_size")
-
-     if search_area_size[0] < window_size[0] or search_area_size[1] < window_size[1]:
-         raise ValueError("Search size cannot be smaller than the window_size")
-
-     if (window_size[1] > frame_a.shape[0]) or (window_size[0] > frame_a.shape[1]):
-         raise ValueError("window size cannot be larger than the image")
-
-     # get field shape
-     n_rows, n_cols = openpiv.pyprocess.get_field_shape(frame_a.shape, search_area_size, overlap)
-
-     # We implement the new vectorized code
-     aa = openpiv.pyprocess.sliding_window_array(frame_a, search_area_size, overlap)
-     bb = openpiv.pyprocess.sliding_window_array(frame_b, search_area_size, overlap)
-
-     # for the case of extended seearch, the window size is smaller than
-     # the search_area_size. In order to keep it all vectorized the
-     # approach is to use the interrogation window in both
-     # frames of the same size of search_area_asize,
-     # but mask out the region around
-     # the interrogation window in the frame A
-
-     if search_area_size > window_size:
-         # before masking with zeros we need to remove
-         # edges
-
-         aa = openpiv.pyprocess.normalize_intensity(aa)
-         bb = openpiv.pyprocess.normalize_intensity(bb)
-
-         mask = np.zeros((search_area_size[0], search_area_size[1])).astype(aa.dtype)
-         pady = int((search_area_size[0] - window_size[0]) / 2)
-         padx = int((search_area_size[1] - window_size[1]) / 2)
-         mask[slice(pady, search_area_size[0] - pady), slice(padx, search_area_size[1] - padx)] = 1
-         mask = np.broadcast_to(mask, aa.shape)
-         aa *= mask
-
-     corr = openpiv.pyprocess.fft_correlate_images(
-         aa, bb, correlation_method=correlation_method, normalized_correlation=normalized_correlation
-     )
-     if use_vectorized == True:
-         u, v = openpiv.pyprocess.vectorized_correlation_to_displacements(
-             corr, n_rows, n_cols, subpixel_method=subpixel_method
-         )
-     else:
-         u, v = openpiv.pyprocess.correlation_to_displacement(corr, n_rows, n_cols, subpixel_method=subpixel_method)
-
-     # return output depending if user wanted sig2noise information
-     sig2noise = np.zeros_like(u) * np.nan
-     if sig2noise_method is not None:
-         if use_vectorized == True:
-             sig2noise = openpiv.pyprocess.vectorized_sig2noise_ratio(
-                 corr, sig2noise_method=sig2noise_method, width=width
-             )
-         else:
-             sig2noise = openpiv.pyprocess.sig2noise_ratio(corr, sig2noise_method=sig2noise_method, width=width)
-
-     sig2noise = sig2noise.reshape(n_rows, n_cols)
-     # extended code for exporting the maximum found value for corr
-     corr = corr.max(axis=-1).max(axis=-1).reshape((n_rows, n_cols))
-
-     return u / dt, v / dt, sig2noise, corr