lbm_suite2p_python 2.0.5__tar.gz → 2.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (28)
  1. lbm_suite2p_python-2.2.0/PKG-INFO +98 -0
  2. lbm_suite2p_python-2.2.0/README.md +82 -0
  3. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/__init__.py +4 -0
  4. lbm_suite2p_python-2.2.0/lbm_suite2p_python/default_ops.py +214 -0
  5. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/merging.py +83 -2
  6. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/postprocessing.py +21 -6
  7. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/run_lsp.py +359 -108
  8. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/volume.py +29 -6
  9. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/zplane.py +141 -42
  10. lbm_suite2p_python-2.2.0/lbm_suite2p_python.egg-info/PKG-INFO +98 -0
  11. lbm_suite2p_python-2.2.0/lbm_suite2p_python.egg-info/requires.txt +1 -0
  12. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/pyproject.toml +2 -2
  13. lbm_suite2p_python-2.0.5/PKG-INFO +0 -84
  14. lbm_suite2p_python-2.0.5/README.md +0 -68
  15. lbm_suite2p_python-2.0.5/lbm_suite2p_python/default_ops.py +0 -42
  16. lbm_suite2p_python-2.0.5/lbm_suite2p_python.egg-info/PKG-INFO +0 -84
  17. lbm_suite2p_python-2.0.5/lbm_suite2p_python.egg-info/requires.txt +0 -1
  18. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/LICENSE.md +0 -0
  19. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/MANIFEST.in +0 -0
  20. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/__main__.py +0 -0
  21. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/_benchmarking.py +0 -0
  22. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python/utils.py +0 -0
  23. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python.egg-info/SOURCES.txt +0 -0
  24. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python.egg-info/dependency_links.txt +0 -0
  25. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python.egg-info/entry_points.txt +0 -0
  26. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/lbm_suite2p_python.egg-info/top_level.txt +0 -0
  27. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/setup.cfg +0 -0
  28. {lbm_suite2p_python-2.0.5 → lbm_suite2p_python-2.2.0}/tests/test_run_volume.py +0 -0
@@ -0,0 +1,98 @@
+ Metadata-Version: 2.4
+ Name: lbm_suite2p_python
+ Version: 2.2.0
+ Summary: Light Beads Microscopy Pipeline using Suite2p
+ License-Expression: BSD-3-Clause
+ Project-URL: homepage, https://github.com/MillerBrainObservatory/LBM-Suite2p-Python
+ Keywords: Pipeline,Numpy,Microscopy,ScanImage,Suite2p,tiff
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Intended Audience :: Science/Research
+ Classifier: Programming Language :: Python :: 3 :: Only
+ Requires-Python: <3.12.10,>=3.12.7
+ Description-Content-Type: text/markdown
+ License-File: LICENSE.md
+ Requires-Dist: mbo_utilities>=2.1.1
+ Dynamic: license-file
+
+ # LBM-Suite2p-Python
+
+ > **Status:** Late-beta stage of development
+
+ [![Documentation](https://img.shields.io/badge/Documentation-blue?style=for-the-badge&logo=readthedocs&logoColor=white)](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/index.html)
+
+ [![PyPI - Version](https://img.shields.io/pypi/v/lbm-suite2p-python)](https://pypi.org/project/lbm-suite2p-python/)
+ [![DOI](https://zenodo.org/badge/DOI/10.1007/978-3-319-76207-4_15.svg)](https://doi.org/10.1038/s41592-021-01239-8)
+
+ A volumetric 2-photon calcium imaging processing pipeline for Light Beads Microscopy (LBM) datasets, built on Suite2p.
+
+ A GUI is available via [mbo_utilities](https://millerbrainobservatory.github.io/mbo_utilities/index.html#gui) (GUI functionality will lag behind this pipeline).
+
+ ## Quick Start
+
+ See the [installation documentation](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/install.html) for GUI dependencies and troubleshooting.
+
+ ```bash
+ uv pip install lbm_suite2p_python
+ ```
+
+ ### Basic Usage
+
+ ```python
+ import lbm_suite2p_python as lsp
+
+ ops = {"two_step_registration": 1}
+ files = [
+     r"D://demo//plane05_stitched.zarr",
+     r"D://demo//plane06_stitched.zarr",
+ ]
+
+ # Process entire volume
+ output_ops = lsp.run_volume(
+     input_files=files,
+     save_path=None,      # save next to tiffs
+     ops=ops,
+     keep_reg=True,       # Keep registered binaries
+     force_reg=False,     # Skip if already registered
+     force_detect=False   # Skip if stat.npy exists
+ )
+ ```
+
+ **Process a single plane:**
+ ```python
+ ops_file = lsp.run_plane(
+     input_path=files[0],
+     save_path=None,
+     ops=ops,
+     keep_raw=False,  # Delete data_raw.bin after processing
+     keep_reg=True    # Keep data.bin (registered binary)
+ )
+ ```
+
+ ## Documentation
+
+ - **[Installation Guide](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/install.html)**
+ - **[User Guide](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/user_guide.html)** - Complete usage examples
+ - **[API Reference](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/api.html)**
+
+ ## Built With
+
+ This pipeline integrates several open-source tools:
+
+ - **[Suite2p](https://github.com/MouseLand/suite2p)** - Core registration and segmentation
+ - **[Cellpose](https://github.com/MouseLand/cellpose)** - Anatomical segmentation (optional)
+ - **[Rastermap](https://github.com/MouseLand/rastermap)** - Activity clustering (optional)
+ - **[mbo_utilities](https://github.com/MillerBrainObservatory/mbo_utilities)** - ScanImage I/O and metadata
+ - **[scanreader](https://github.com/atlab/scanreader)** - ScanImage metadata parsing
+
+ ## Issues & Support
+
+ - **Bug reports:** [GitHub Issues](https://github.com/MillerBrainObservatory/LBM-Suite2p-Python/issues)
+ - **Questions:** See [Suite2p documentation](https://suite2p.readthedocs.io/) for Suite2p-specific questions
+ - **Known issues:** Widgets may throw "Invalid Rect" errors ([upstream issue](https://github.com/pygfx/wgpu-py/issues/716#issuecomment-2880853089))
+
+ ## Contributing
+
+ Contributions are welcome! This project follows Suite2p's conventions and uses:
+ - **Ruff** for linting and formatting (line length: 88, numpy docstring style)
+ - **pytest** for testing
+ - **Sphinx** for documentation
@@ -0,0 +1,82 @@
+ # LBM-Suite2p-Python
+
+ > **Status:** Late-beta stage of development
+
+ [![Documentation](https://img.shields.io/badge/Documentation-blue?style=for-the-badge&logo=readthedocs&logoColor=white)](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/index.html)
+
+ [![PyPI - Version](https://img.shields.io/pypi/v/lbm-suite2p-python)](https://pypi.org/project/lbm-suite2p-python/)
+ [![DOI](https://zenodo.org/badge/DOI/10.1007/978-3-319-76207-4_15.svg)](https://doi.org/10.1038/s41592-021-01239-8)
+
+ A volumetric 2-photon calcium imaging processing pipeline for Light Beads Microscopy (LBM) datasets, built on Suite2p.
+
+ A GUI is available via [mbo_utilities](https://millerbrainobservatory.github.io/mbo_utilities/index.html#gui) (GUI functionality will lag behind this pipeline).
+
+ ## Quick Start
+
+ See the [installation documentation](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/install.html) for GUI dependencies and troubleshooting.
+
+ ```bash
+ uv pip install lbm_suite2p_python
+ ```
+
+ ### Basic Usage
+
+ ```python
+ import lbm_suite2p_python as lsp
+
+ ops = {"two_step_registration": 1}
+ files = [
+     r"D://demo//plane05_stitched.zarr",
+     r"D://demo//plane06_stitched.zarr",
+ ]
+
+ # Process entire volume
+ output_ops = lsp.run_volume(
+     input_files=files,
+     save_path=None,      # save next to tiffs
+     ops=ops,
+     keep_reg=True,       # Keep registered binaries
+     force_reg=False,     # Skip if already registered
+     force_detect=False   # Skip if stat.npy exists
+ )
+ ```
+
+ **Process a single plane:**
+ ```python
+ ops_file = lsp.run_plane(
+     input_path=files[0],
+     save_path=None,
+     ops=ops,
+     keep_raw=False,  # Delete data_raw.bin after processing
+     keep_reg=True    # Keep data.bin (registered binary)
+ )
+ ```
+
+ ## Documentation
+
+ - **[Installation Guide](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/install.html)**
+ - **[User Guide](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/user_guide.html)** - Complete usage examples
+ - **[API Reference](https://millerbrainobservatory.github.io/LBM-Suite2p-Python/api.html)**
+
+ ## Built With
+
+ This pipeline integrates several open-source tools:
+
+ - **[Suite2p](https://github.com/MouseLand/suite2p)** - Core registration and segmentation
+ - **[Cellpose](https://github.com/MouseLand/cellpose)** - Anatomical segmentation (optional)
+ - **[Rastermap](https://github.com/MouseLand/rastermap)** - Activity clustering (optional)
+ - **[mbo_utilities](https://github.com/MillerBrainObservatory/mbo_utilities)** - ScanImage I/O and metadata
+ - **[scanreader](https://github.com/atlab/scanreader)** - ScanImage metadata parsing
+
+ ## Issues & Support
+
+ - **Bug reports:** [GitHub Issues](https://github.com/MillerBrainObservatory/LBM-Suite2p-Python/issues)
+ - **Questions:** See [Suite2p documentation](https://suite2p.readthedocs.io/) for Suite2p-specific questions
+ - **Known issues:** Widgets may throw "Invalid Rect" errors ([upstream issue](https://github.com/pygfx/wgpu-py/issues/716#issuecomment-2880853089))
+
+ ## Contributing
+
+ Contributions are welcome! This project follows Suite2p's conventions and uses:
+ - **Ruff** for linting and formatting (line length: 88, numpy docstring style)
+ - **pytest** for testing
+ - **Sphinx** for documentation
@@ -5,6 +5,7 @@ from lbm_suite2p_python.run_lsp import *
  from lbm_suite2p_python.utils import *
  from lbm_suite2p_python.volume import *
  from lbm_suite2p_python.zplane import *
+ from lbm_suite2p_python.postprocessing import *
 
  try:
      __version__ = version("lbm_suite2p_python")
@@ -15,6 +16,7 @@ except PackageNotFoundError:
  __all__ = [
      "run_volume",
      "run_plane",
+     "run_grid_search",
      "plot_traces",
      "plot_masks",
      "plot_rastermap",
@@ -24,6 +26,8 @@ __all__ = [
      "plot_execution_time",
      "plot_noise_distribution",
      "dff_rolling_percentile",
+     "dff_median_filter",
+     "dff_shot_noise",
      "load_ops",
      "load_planar_results",
      "default_ops",
@@ -0,0 +1,214 @@
+ """MBO Default ops"""
+
+ def s2p_ops():
+     """ default options to run pipeline """
+     return {
+         # file input/output settings
+         "look_one_level_down":
+             False,  # whether to look in all subfolders when searching for tiffs
+         "fast_disk": [],  # used to store temporary binary file, defaults to save_path0
+         "delete_bin": False,  # whether to delete binary file after processing
+         "mesoscan": False,  # for reading in scanimage mesoscope files
+         "bruker": False,  # whether or not single page BRUKER tiffs!
+         "bruker_bidirectional":
+             False,  # bidirectional multiplane in bruker: 0, 1, 2, 2, 1, 0 (True) vs 0, 1, 2, 0, 1, 2 (False)
+         "h5py": [],  # take h5py as input (deactivates data_path)
+         "h5py_key": "data",  # key in h5py where data array is stored
+         "nwb_file": "",  # take nwb file as input (deactivates data_path)
+         "nwb_driver": "",  # driver for nwb file (nothing if file is local)
+         "nwb_series":
+             "",  # TwoPhotonSeries name, defaults to first TwoPhotonSeries in nwb file
+         "save_path0": '',  # pathname where you'd like to store results, defaults to first item in data_path
+         "save_folder": [],  # directory you"d like suite2p results to be saved to
+         "subfolders": [
+         ],  # subfolders you"d like to search through when look_one_level_down is set to True
+         "move_bin":
+             False,  # if 1, and fast_disk is different than save_disk, binary file is moved to save_disk
+
+         # main settings
+         "nplanes": 1,  # each tiff has these many planes in sequence
+         "nchannels": 1,  # each tiff has these many channels per plane
+         "functional_chan":
+             1,  # this channel is used to extract functional ROIs (1-based)
+         "tau": 1.3,  # this is the main parameter for deconvolution
+         "fs":
+             10.,  # sampling rate (PER PLANE e.g. for 12 plane recordings it will be around 2.5)
+         "force_sktiff": False,  # whether or not to use scikit-image for tiff reading
+         "frames_include": -1,
+         "multiplane_parallel": False,  # whether or not to run on server
+         "ignore_flyback": [],
+
+         # output settings
+         "preclassify":
+             0.0,  # apply classifier before signal extraction with probability 0.3
+         "save_mat": False,  # whether to save output as matlab files
+         "save_NWB": False,  # whether to save output as NWB file
+         "combined":
+             True,  # combine multiple planes into a single result /single canvas for GUI
+         "aspect":
+             1.0,  # um/pixels in X / um/pixels in Y (for correct aspect ratio in GUI)
+
+         # bidirectional phase offset
+         "do_bidiphase":
+             False,  # whether or not to compute bidirectional phase offset (applies to 2P recordings only)
+         "bidiphase":
+             0,  # Bidirectional Phase offset from line scanning (set by user). Applied to all frames in recording.
+         "bidi_corrected":
+             False,  # Whether to do bidirectional correction during registration
+
+         # registration settings
+         "do_registration": True,  # whether to register data (2 forces re-registration)
+         "two_step_registration":
+             False,
+         # whether or not to run registration twice (useful for low SNR data). Set keep_movie_raw to True if setting this parameter to True.
+         "keep_movie_raw":
+             False,  # whether to keep binary file of non-registered frames.
+         "nimg_init": 300,  # subsampled frames for finding reference image
+         "batch_size": 500,  # number of frames per batch
+         "maxregshift":
+             0.1,  # max allowed registration shift, as a fraction of frame max(width and height)
+         "align_by_chan":
+             1,  # when multi-channel, you can align by non-functional channel (1-based)
+         "reg_tif": False,  # whether to save registered tiffs
+         "reg_tif_chan2": False,  # whether to save channel 2 registered tiffs
+         "subpixel": 10,  # precision of subpixel registration (1/subpixel steps)
+         "smooth_sigma_time": 0,  # gaussian smoothing in time
+         "smooth_sigma":
+             1.15,  # ~1 good for 2P recordings, recommend 3-5 for 1P recordings
+         "th_badframes":
+             1.0,
+         # this parameter determines which frames to exclude when determining cropping - set it smaller to exclude more frames
+         "norm_frames": True,  # normalize frames when detecting shifts
+         "force_refImg": False,  # if True, use refImg stored in ops if available
+         "pad_fft": False,  # if True, pads image during FFT part of registration
+
+         # non rigid registration settings
+         "nonrigid": True,  # whether to use nonrigid registration
+         "block_size": [128,
+                        128],  # block size to register (** keep this a multiple of 2 **)
+         "snr_thresh":
+             1.2,
+         # if any nonrigid block is below this threshold, it gets smoothed until above this threshold. 1.0 results in no smoothing
+         "maxregshiftNR":
+             5,  # maximum pixel shift allowed for nonrigid, relative to rigid
+
+         # 1P settings
+         "1Preg": False,  # whether to perform high-pass filtering and tapering
+         "spatial_hp_reg":
+             42,  # window for spatial high-pass filtering before registration
+         "pre_smooth":
+             0,  # whether to smooth before high-pass filtering before registration
+         "spatial_taper":
+             40,
+         # how much to ignore on edges (important for vignetted windows, for FFT padding do not set BELOW 3*ops["smooth_sigma"])
+
+         # cell detection settings with suite2p
+         "roidetect": True,  # whether or not to run ROI extraction
+         "spikedetect": True,  # whether or not to run spike deconvolution
+         "sparse_mode": True,  # whether or not to run sparse_mode
+         "spatial_scale":
+             1,  # 0: multi-scale; 1: 6 pixels, 2: 12 pixels, 3: 24 pixels, 4: 48 pixels
+         "connected":
+             True,  # whether or not to keep ROIs fully connected (set to 0 for dendrites)
+         "nbinned": 5000,  # max number of binned frames for cell detection
+         "max_iterations": 20,  # maximum number of iterations to do cell detection
+         "threshold_scaling":
+             1.0,  # adjust the automatically determined threshold by this scalar multiplier
+         "max_overlap":
+             0.75,  # cells with more overlap than this get removed during triage, before refinement
+         "high_pass":
+             100,  # running mean subtraction across bins with a window of size "high_pass" (use low values for 1P)
+         "spatial_hp_detect":
+             25,  # window for spatial high-pass filtering for neuropil subtraction before detection
+         "denoise": False,  # denoise binned movie for cell detection in sparse_mode
+
+         # cell detection settings with cellpose (used if anatomical_only > 0)
+         "anatomical_only":
+             3,
+         # run cellpose to get masks on 1: max_proj / mean_img; 2: mean_img; 3: mean_img enhanced, 4: max_proj
+         "diameter": 6,  # use diameter for cellpose, if 0 estimate diameter
+         "cellprob_threshold": -6,  # cellprob_threshold for cellpose
+         "flow_threshold": 0,  # flow_threshold for cellpose
+         "spatial_hp_cp": 0.5,  # high-pass image spatially by a multiple of the diameter
+         "pretrained_model":
+             "cpsam",  # path to pretrained model or model type string in Cellpose (can be user model)
+
+         # classification parameters
+         "soma_crop":
+             True,  # crop dendrites for cell classification stats like compactness
+         # ROI extraction parameters
+         "neuropil_extract":
+             True,  # whether or not to extract neuropil; if False, Fneu is set to zero
+         "inner_neuropil_radius":
+             2,  # number of pixels to keep between ROI and neuropil donut
+         "min_neuropil_pixels": 350,  # minimum number of pixels in the neuropil
+         "lam_percentile":
+             50.,  # percentile of lambda within area to ignore when excluding cell pixels for neuropil extraction
+         "allow_overlap":
+             False,  # pixels that are overlapping are thrown out (False) or added to both ROIs (True)
+         "use_builtin_classifier":
+             False,  # whether or not to use built-in classifier for cell detection (overrides
+         # classifier specified in classifier_path if set to True)
+         "classifier_path": "",  # path to classifier
+
+         # channel 2 detection settings (stat[n]["chan2"], stat[n]["not_chan2"])
+         "chan2_thres": 0.65,  # minimum for detection of brightness on channel 2
+
+         # deconvolution settings
+         "baseline": "maximin",  # baselining mode (can also choose "prctile")
+         "win_baseline": 60.,  # window for maximin
+         "sig_baseline": 10.,  # smoothing constant for gaussian filter
+         "prctile_baseline": 8.,  # optional (whether to use a percentile baseline)
+         "neucoeff": 0.7,  # neuropil coefficient
+     }
+
+
+ def default_ops(metadata=None, ops=None):
+     """
+     Returns default ops for Suite2P processing on Light Beads Microscopy datasets.
+
+     Main changes to defaults:
+
+     anatomical_only=3
+     diameter=6
+     spatial_hp_cp=0.5
+     cellprob_threshold=-6
+     flow_threshold=0
+     spatial_scale=1
+     tau=1.3
+
+     Parameters
+     ----------
+     metadata : dict, optional
+         Metadata dictionary containing information about the dataset.
+     ops : dict, str or Path, optional
+         Path to or dict of suite2p ops.
+
+     Returns
+     -------
+     dict
+         Default ops for Suite2P processing.
+
+     Examples
+     --------
+     >>> import lbm_suite2p_python as lsp
+     >>> metadata = mbo.get_metadata("D://demo//raw_data//raw_file_00001.tif")  # noqa
+     >>> lsp.run_plane(
+     >>>     ops=ops,
+     >>>     input_tiff="D://demo//raw_data//raw_file_00001.tif",
+     >>>     save_path="D://demo//results",
+     >>>     save_folder="v1"
+     >>> )
+     """
+     if ops is None:
+         print("Importing suite2p packages...")
+         ops = s2p_ops()
+
+     if metadata is not None:
+         ops["fs"] = metadata["frame_rate"]
+         ops["dx"] = [metadata["pixel_resolution"][0]]
+         ops["dy"] = [metadata["pixel_resolution"][1]]
+
+     ops["nplanes"] = 1
+     ops["nchannels"] = 1
+     return ops
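The metadata handling at the end of `default_ops` above is small enough to mirror standalone. This sketch reproduces just that mapping (the `apply_metadata` name is ours, for illustration; the field names `frame_rate` and `pixel_resolution` come from the diff):

```python
def apply_metadata(ops, metadata):
    """Mirror of default_ops' metadata mapping: frame rate and per-axis
    pixel resolution flow into Suite2p's fs/dx/dy fields, and LBM data
    is always treated as one plane / one channel per run."""
    ops = dict(ops)  # don't mutate the caller's dict
    ops["fs"] = metadata["frame_rate"]
    ops["dx"] = [metadata["pixel_resolution"][0]]
    ops["dy"] = [metadata["pixel_resolution"][1]]
    ops["nplanes"] = 1
    ops["nchannels"] = 1
    return ops

ops = apply_metadata(
    {"fs": 10.0},  # stand-in for the s2p_ops() defaults
    {"frame_rate": 9.6, "pixel_resolution": (2.0, 2.0)},
)
```

Note that `default_ops` overrides `fs` whenever metadata is supplied, so a user-set sampling rate in `ops` is replaced by the recording's actual frame rate.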
@@ -87,8 +87,89 @@ def _merge_images(
 
  def merge_mrois(input_dir, output_dir, overwrite=True):
      """
-     Merge Suite2p outputs from multiple ROIs into per-plane outputs.
-     Will attempt to merge everything available; skips missing files gracefully.
+     Merge Suite2p outputs from multiple ROIs per plane into unified per-plane outputs.
+
+     This function is called automatically by run_volume() when multi-ROI data is detected
+     (filenames containing "roi"). It performs horizontal stitching of images and concatenation
+     of ROI statistics and traces.
+
+     Parameters
+     ----------
+     input_dir : str or Path
+         Directory containing per-ROI subdirectories with naming pattern "planeXX_roiYY/".
+         Each subdirectory should contain Suite2p outputs (ops.npy, stat.npy, F.npy, etc.).
+     output_dir : str or Path
+         Directory where merged outputs will be saved. Creates subdirectories named "planeXX/"
+         for each unique plane number found in input_dir.
+     overwrite : bool, default True
+         If True, overwrites existing merged outputs. If False, skips planes that already
+         have merged ops.npy files.
+
+     Notes
+     -----
+     **Merging Process:**
+
+     1. **ROI Grouping**: Groups directories by plane number
+        - Input: "plane01_roi01/", "plane01_roi02/", "plane02_roi01/"
+        - Groups: {"plane01": [roi01, roi02], "plane02": [roi01]}
+
+     2. **Image Stitching**: Horizontally concatenates images
+        - Full-FOV images (refImg, meanImg, meanImgE): Simple horizontal stacking
+        - Cropped images (max_proj, Vcorr): Placed at yrange/xrange with horizontal offset
+        - Final dimensions: Ly = max(ROI heights), Lx = sum(ROI widths)
+
+     3. **ROI Coordinate Adjustment**: Updates stat array pixel coordinates
+        - Adds horizontal offset to stat["xpix"] for each ROI
+        - Updates stat["med"] centroid positions
+        - Recalculates stat["ipix_neuropil"] linear indices
+
+     4. **Trace Concatenation**: Vertically stacks fluorescence traces
+        - Concatenates F, Fneu, spks arrays along axis 0 (ROI dimension)
+        - Preserves temporal dimension (axis 1)
+        - Result shape: (total_neurons, nframes)
+
+     5. **Binary Stitching**: Creates horizontally stitched binary files
+        - Reads frame-by-frame from each ROI binary
+        - Horizontally stacks frames using np.hstack()
+        - Writes to merged data.bin
+        - Handles frame count mismatches by using minimum nframes
+
+     6. **Metadata Merging**: Combines ops dictionaries
+        - Uses first ROI's ops as base
+        - Updates Lx to sum of ROI widths
+        - Updates xrange to [0, total_Lx]
+        - Preserves registration offsets if identical across ROIs
+
+     **Output Structure:**
+
+     For input::
+
+         input_dir/
+         ├── plane01_roi01/
+         │   ├── ops.npy (Lx=512)
+         │   ├── stat.npy (100 neurons)
+         │   ├── F.npy (100, 5000)
+         │   └── data_raw.bin
+         └── plane01_roi02/
+             ├── ops.npy (Lx=512)
+             ├── stat.npy (120 neurons)
+             ├── F.npy (120, 5000)
+             └── data_raw.bin
+
+     Creates output::
+
+         output_dir/
+         └── plane01/
+             ├── ops.npy (Lx=1024, nrois=2)
+             ├── stat.npy (220 neurons with adjusted xpix)
+             ├── F.npy (220, 5000)
+             ├── data.bin (horizontally stitched, shape: nframes × Ly × 1024)
+             └── [merged visualization PNGs]
+
+     See Also
+     --------
+     merge_zarr_rois : Similar merging for ZARR format
+     run_volume : Automatically calls this function when multi-ROI data detected
      """
      input_dir = Path(input_dir)
      output_dir = Path(output_dir)
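The core of the merge_mrois docstring above (steps 2-4: horizontal image stitching, xpix offset adjustment, trace concatenation) can be sketched in a few lines of numpy. This is an illustrative reduction, not the package's implementation; the `stitch_rois` helper and its inputs are hypothetical:

```python
import numpy as np

def stitch_rois(images, stats_per_roi, traces_per_roi):
    """Sketch of multi-ROI merging: hstack full-FOV images, shift each
    ROI's xpix by the cumulative width of the ROIs to its left, and
    concatenate traces along the neuron axis. Hypothetical illustration."""
    stitched = np.hstack(images)  # Lx becomes the sum of ROI widths
    merged_stat, offset = [], 0
    for img, stats in zip(images, stats_per_roi):
        for s in stats:
            s = dict(s)
            s["xpix"] = s["xpix"] + offset  # shift into stitched coordinates
            merged_stat.append(s)
        offset += img.shape[1]
    # (total_neurons, nframes): neurons stack, time is preserved
    merged_f = np.concatenate(traces_per_roi, axis=0)
    return stitched, merged_stat, merged_f

left, right = np.zeros((4, 3)), np.ones((4, 5))
img, stat, f = stitch_rois(
    [left, right],
    [[{"xpix": np.array([0, 1])}], [{"xpix": np.array([2])}]],
    [np.zeros((1, 10)), np.zeros((1, 10))],
)
```

Centroid (`med`) and neuropil-index updates follow the same pattern: any x-indexed quantity gets the same per-ROI horizontal offset.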
@@ -395,14 +395,29 @@ def load_planar_results(ops: dict | str | Path, z_plane: list | int = None) -> d
 
      save_path = Path(output_ops["save_path"])
 
-     F = np.load(save_path.joinpath("F.npy"))
-     Fneu = np.load(save_path.joinpath("Fneu.npy"))
-     spks = np.load(save_path.joinpath("spks.npy"))
-     stat = np.load(save_path.joinpath("stat.npy"), allow_pickle=True)
-     iscell = np.load(save_path.joinpath("iscell.npy"), allow_pickle=True)[:, 0].astype(
+     # Check all required files exist
+     required_files = {
+         "F.npy": save_path / "F.npy",
+         "Fneu.npy": save_path / "Fneu.npy",
+         "spks.npy": save_path / "spks.npy",
+         "stat.npy": save_path / "stat.npy",
+         "iscell.npy": save_path / "iscell.npy",
+     }
+
+     missing_files = [name for name, path in required_files.items() if not path.exists()]
+     if missing_files:
+         raise FileNotFoundError(
+             f"Missing required files in {save_path}: {', '.join(missing_files)}"
+         )
+
+     F = np.load(required_files["F.npy"])
+     Fneu = np.load(required_files["Fneu.npy"])
+     spks = np.load(required_files["spks.npy"])
+     stat = np.load(required_files["stat.npy"], allow_pickle=True)
+     iscell = np.load(required_files["iscell.npy"], allow_pickle=True)[:, 0].astype(
          bool
      )
-     cellprob = np.load(save_path.joinpath("iscell.npy"), allow_pickle=True)[:, 1]
+     cellprob = np.load(required_files["iscell.npy"], allow_pickle=True)[:, 1]
      # model = np.load(save_path.joinpath("model.npy"), allow_pickle=True).item()
 
      n_neurons = spks.shape[0]
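The `iscell.npy` convention relied on in the hunk above (column 0 is the cell/not-cell decision, column 1 the classifier probability) is Suite2p's standard layout and can be exercised in isolation with synthetic data:

```python
import numpy as np

# Synthetic stand-in for iscell.npy: shape (n_rois, 2),
# column 0 = accept/reject flag, column 1 = classifier probability.
iscell_arr = np.array([[1.0, 0.92],
                       [0.0, 0.11],
                       [1.0, 0.67]])

is_cell = iscell_arr[:, 0].astype(bool)   # boolean mask, as in the diff
cellprob = iscell_arr[:, 1]               # per-ROI probability
n_accepted = int(is_cell.sum())
```

This split is why the diff loads `iscell.npy` twice: once for the boolean mask and once for the probability column.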