Rhapso 0.1.94-py3-none-any.whl → 0.1.96-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,13 +1,11 @@
  Metadata-Version: 2.4
  Name: Rhapso
- Version: 0.1.94
+ Version: 0.1.96
  Summary: A python package for aligning and stitching light sheet fluorescence microscopy images together
- Home-page: https://github.com/AllenNeuralDynamics/Rhapso
  Author: ND
  Author-email: sean.fite@alleninstitute.org
  Project-URL: Source, https://github.com/AllenNeuralDynamics/Rhapso
- Project-URL: Bug Tracker, https://github.com/AllenNeuralDynamics/Rhapso/issues
- Project-URL: Changelog, https://github.com/AllenNeuralDynamics/Rhapso/releases
+ Project-URL: Roadmap, https://github.com/AllenNeuralDynamics/Rhapso/issues
  Classifier: Development Status :: 3 - Alpha
  Classifier: Intended Audience :: Developers
  Classifier: Natural Language :: English
@@ -40,7 +38,6 @@ Dynamic: author-email
  Dynamic: classifier
  Dynamic: description
  Dynamic: description-content-type
- Dynamic: home-page
  Dynamic: license-file
  Dynamic: project-url
  Dynamic: requires-dist
@@ -49,10 +46,10 @@ Dynamic: summary

  # Rhapso

- **Rhapso** is a modular Python toolkit for interest point based registration, alignment, and fusing of large-scale microscopy datasets.
+ This is the code base for **Rhapso**, a modular Python toolkit for the alignment and stitching of large-scale microscopy datasets.

  [![License](https://img.shields.io/badge/license-MIT-brightgreen)](LICENSE)
- [![Python Version](https://img.shields.io/badge/python-3.11-blue.svg)](https://www.python.org/downloads/release/python-3110/)
+ [![Python Version](https://img.shields.io/badge/python-3.10-blue.svg)](https://www.python.org/downloads/release/python-3100/)
  [![Documentation](https://img.shields.io/badge/docs-wiki-blue)](https://github.com/AllenNeuralDynamics/Rhapso/wiki)

  <!-- ## Example Usage Media Content Coming Soon....
@@ -63,7 +60,7 @@ Dynamic: summary
  ## Table of Contents
  - [Summary](#summary)
  - [Contact](#contact)
- - [Features](#features)
+ - [Supported Features](#supported-features)
  - [Performance](#performance)
  - [Layout](#layout)
  - [Installation](#installation)
@@ -80,7 +77,7 @@ Dynamic: summary

  <br>

- **Update 11/26/25**
+ **Update 1/12/26**
  --------
  Rhapso is still loading... and while we wrap up development, a couple things to know if you are outside the Allen Institute:
  - This process requires a very specific XML structure to work.
@@ -89,11 +86,15 @@ Rhapso is still loading... and while we wrap up development, a couple things to
  <br>

  ## Summary
- Rhapso is a set of Python components for registration, alignment, and stitching of large-scale, 3D, overlapping tile-based, multiscale microscopy datasets.
+ Rhapso is a set of Python components used to register, align, and stitch large-scale, 3D, overlapping, tile-based, multiscale microscopy datasets. Its stateless components can run on a single machine or scale out across cloud-based clusters.

- Rhapso was developed by the Allen Institute for Neural Dynamics. Rhapso is comprised of stateless components. You can call these components using a pipeline script, with the option to run on a single machine or scale out with Ray to cloud based (currently only supporting AWS) clusters.
+ Rhapso is published on PyPI and can be installed with:

- Current data loaders support Zarr and Tiff.
+ ```bash
+ pip install Rhapso
+ ```
+
+ Rhapso was developed by the Allen Institute for Neural Dynamics.

  <br>

@@ -102,11 +103,15 @@ Questions or want to contribute? Please open an issue..

  <br>

- ## Features
- - **Interest Point Detection** - using DOG based feature detection
- - **Interest Point Matching** - using descriptor based RANSAC to match feature points
- - **Global Optimization** - aligning matched features per tile, globally
- - **Validation and Visualization Tools** - validate component specific results for the best output
+ ## Supported Features
+ - **Interest Point Detection** - DOG based feature detection
+ - **Interest Point Matching** - Descriptor based RANSAC to match feature points
+ - **Global Optimization** - Align matched features between tile pairs globally
+ - **Validation and Visualization Tools** - Validate component specific results for the best output
+ - **ZARR** - Zarr data as input
+ - **TIFF** - Tiff data as input
+ - **AWS** - AWS S3 based input/output and Ray based EC2 instances
+ - **Scale** - Tested on 200 TB of data without downsampling

  ---

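To make the feature list above concrete, here is a rough sketch of what a difference-of-Gaussians (DoG) detector does. This is an illustration only, built on NumPy/SciPy; the function name, sigma values, and threshold are assumptions and not Rhapso's actual API.

```python
# Illustrative DoG interest point detector (NOT Rhapso's API; the names and
# parameter values here are assumptions). `tile` is a 3D NumPy array with
# bright features on a darker background.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def detect_dog_points(tile, sigma=2.0, k=1.6, threshold=0.05):
    """Return voxel coordinates of blob-like features in `tile`."""
    tile = tile.astype(np.float32)
    # Difference of two Gaussian blurs responds strongly to blobs near the `sigma` scale.
    dog = gaussian_filter(tile, sigma) - gaussian_filter(tile, sigma * k)
    # Keep voxels that are local maxima of the DoG response...
    local_max = dog == maximum_filter(dog, size=3)
    # ...and whose response clears a relative threshold.
    strong = dog > threshold * dog.max()
    return np.argwhere(local_max & strong)

# Example: points = detect_dog_points(np.random.rand(32, 64, 64))
```

Descriptor-based RANSAC matching and the global solve then operate on points like these, which is why the guidance later in the README stresses finding enough high-quality points in the tile overlaps.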
@@ -114,18 +119,31 @@ Questions or want to contribute? Please open an issue..

  ## High Level Approach to Registration, Alignment, and Fusion

- We first run **interest point detection** to capture feature points in the dataset, focusing on overlapping regions between tiles. These points drive all downstream alignment.
+ This process has a lot of knobs and variations, and when used correctly, it can work for a broad range of datasets.
+
+ **First, figure out what type of alignment you need.**
+ - Are there translational shifts to correct for?
+ - If so, you’ll likely want to start with a rigid alignment and double-check that the required translations do not span more than the overlapping distance.

- Next, we perform **alignment** in two-three stages, with regularized models:
+ **A very important thing to keep in mind:** interest-point–based alignment will not work well if you don’t find enough high-quality points that can be matched.
+ - Too few, even if they’re very good, will lead to poor alignment.
+ - The same is true if you have lots of low-quality matches.

- 1. **Rigid matching + solver** – Match interest points with a rigid model and solve for globally consistent rigid transforms between all tiles.
- 2. **Affine matching + solver** – Starting from the rigid solution, repeat matching with an affine model to recover more precise tile transforms.
- 3. **Split affine matching + solver** For very large z-stacks, we recommend first running the split dataset component to chunk tiles into smaller Z-bounds, then repeating affine matching and solving in “split affine” mode to refine local alignment.
+ Once you’ve run the rigid step, how does your data look?
+ - Did the required translations shrink to an acceptable level?
+ - If not, try again with new parameters, keeping the questions above in mind.

- All resulting transforms are written back into the input XML.
+ At this point, the translational part of your alignment should be in good shape. Now ask: **are additional transformations needed?** If so, you likely need an affine alignment next.

- Whether you split or not, once the XML contains your final transforms, you are ready for **fusion**. We recommend viewing the aligned XML in FIJI/BDV to visually confirm alignment quality before running fusion.
+ Your dataset should be correctly aligned at this point. If not, there are a number of possible reasons; we have listed some common causes below and will keep the list up to date.

+ There is a special case in some datasets where the z-stack is very large. In this case, you can use the split-dataset utility, which splits each tile into multiple tiles of your choosing. Then you can run split-affine alignment, allowing for more precise transformations without such imposing global constraints.
+
+ **Common Causes of Poor Alignment**
+ - Not enough quality matches (adjust the sigma threshold until you do)
+ - Data does not look consistent (we take a global approach to params)
+ - Large translations needed (extend the search radius)
+ - Translations that extend beyond the overlapping span (increase overlap)

  ---

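The rigid-then-affine progression described above can be read as fitting progressively less constrained transform models to the matched interest points. As a hypothetical illustration (not Rhapso's solver), a rigid transform between two tiles' matched points can be estimated with the Kabsch algorithm, and an affine transform with ordinary least squares:

```python
# Hypothetical sketch of the rigid -> affine idea (not Rhapso's solver).
# `src` and `dst` are (N, 3) arrays of matched interest point coordinates.
import numpy as np

def fit_rigid(src, dst):
    """Rotation R and translation t so that R @ p + t maps src points onto dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def fit_affine(src, dst):
    """Full 3x4 affine transform via least squares (more degrees of freedom than rigid)."""
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                    # maps [x, y, z, 1] to [x', y', z']
```

The split-affine mode mentioned above applies the same idea to each split sub-tile, so the transforms can vary locally instead of being fixed to a single global fit per tile.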
@@ -180,6 +198,19 @@ Rhapso/

  ## Installation

+ ### Option 1: Install from PyPI (recommended)
+
+ ```bash
+ # create and activate a virtual environment
+ python -m venv .venv && source .venv/bin/activate
+ # or: conda create -n rhapso python=3.10 && conda activate rhapso
+
+ # install Rhapso from PyPI
+ pip install Rhapso
+ ```
+
+ ### Option 2: Install from GitHub (developers)
+
  ```sh
  # clone the repo
  git clone https://github.com/AllenNeuralDynamics/Rhapso.git
@@ -271,21 +302,11 @@ with open("Rhapso/pipelines/ray/param/your_param_file.yml", "r") as file:
  Rhapso/pipelines/ray/aws/config/
  ```

- ### 4. Update config file to point to whl location in setup_commands
- ```python
- - aws s3 cp s3://rhapso-whl-v2/Rhapso-0.1.8-py3-none-any.whl /tmp/Rhapso-0.1.8-py3-none-any.whl
- ```
-
  ### 5. Update alignment pipeline script to point to config file
  ```python
  unified_yml = "your_cluster_config_file_name.yml"
  ```

- ### 6. Create whl file and upload to s3
- ```python
- python setup.py sdist bdist_wheel
- ```
-
  ### 7. Run AWS alignment pipeline script
  ```python
  python Rhapso/pipelines/ray/aws/alignment_pipeline.py
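The remaining steps amount to pointing the pipeline script at the right YAML files: the hunk header quotes it opening Rhapso/pipelines/ray/param/your_param_file.yml, and step 5 names a cluster config under Rhapso/pipelines/ray/aws/config/. Below is a minimal sketch of that wiring, assuming PyYAML; only `unified_yml` and the paths are taken from the README, and the other names are illustrative.

```python
# Sketch of the YAML wiring implied above (variable names other than
# `unified_yml` are assumptions for illustration, not Rhapso's code).
import yaml

# Pipeline parameters: a param file under Rhapso/pipelines/ray/param/
with open("Rhapso/pipelines/ray/param/your_param_file.yml", "r") as file:
    params = yaml.safe_load(file)

# Cluster config (step 5): a Ray cluster YAML under Rhapso/pipelines/ray/aws/config/
unified_yml = "your_cluster_config_file_name.yml"
with open(f"Rhapso/pipelines/ray/aws/config/{unified_yml}", "r") as file:
    cluster_config = yaml.safe_load(file)

print(sorted(params), sorted(cluster_config))   # inspect top-level keys
```

With both files in place, step 7 runs python Rhapso/pipelines/ray/aws/alignment_pipeline.py.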
@@ -90,12 +90,12 @@ Rhapso/split_dataset/save_points.py,sha256=k-jH-slmxkbrxDl-uJvDkwOedi6cg7md3kg_a
  Rhapso/split_dataset/save_xml.py,sha256=Iq1UdFa8sdnWGygfIpDi4F5In-SCWggpl7lnuDTxkHE,14280
  Rhapso/split_dataset/split_images.py,sha256=2RzAi0btV1tmh4le9QotRif1IYUU6_4pLcGGpFBM9zk,22434
  Rhapso/split_dataset/xml_to_dataframe_split.py,sha256=ByaLzJ4sqT417UiCQU31_CS_V4Jms7pjMbBl0ZdSNNA,8570
- rhapso-0.1.94.dist-info/licenses/LICENSE,sha256=U0Y7B3gZJHXpjJVLgTQjM8e_c8w4JJpLgGhIdsoFR1Y,1092
+ rhapso-0.1.96.dist-info/licenses/LICENSE,sha256=U0Y7B3gZJHXpjJVLgTQjM8e_c8w4JJpLgGhIdsoFR1Y,1092
  tests/__init__.py,sha256=LYf6ZGyYRcduFFSaOLmnw3rTyfS3XLib0dsTHDWH0jo,37
  tests/test_detection.py,sha256=NtFYR_du9cbKrclQcNiJYsKzyqly6ivF61pw6_NICcM,440
  tests/test_matching.py,sha256=QX0ekSdyIkPpAsXHfSMqJUUlNZg09caSlhhUM63MduM,697
  tests/test_solving.py,sha256=t8I9XPV_4ZFM-DJpgvdYXxkG2_4DQgqs-FFyE5w8Nfg,695
- rhapso-0.1.94.dist-info/METADATA,sha256=RxnCqnOZgjl4wZTyJiIPUah6KBMySSFyCD4mdl34NZA,16989
- rhapso-0.1.94.dist-info/WHEEL,sha256=SmOxYU7pzNKBqASvQJ7DjX3XGUF92lrGhMb3R6_iiqI,91
- rhapso-0.1.94.dist-info/top_level.txt,sha256=NXvsrsTfdowWbM7MxEjkDZE2Jo74lmq7ruWkp70JjSw,13
- rhapso-0.1.94.dist-info/RECORD,,
+ rhapso-0.1.96.dist-info/METADATA,sha256=-q4TsuDbH67FN4eXEhxwprGeEFLit211AOXdxnCKYNg,17741
+ rhapso-0.1.96.dist-info/WHEEL,sha256=SmOxYU7pzNKBqASvQJ7DjX3XGUF92lrGhMb3R6_iiqI,91
+ rhapso-0.1.96.dist-info/top_level.txt,sha256=NXvsrsTfdowWbM7MxEjkDZE2Jo74lmq7ruWkp70JjSw,13
+ rhapso-0.1.96.dist-info/RECORD,,