rhapso-0.1.98-py3-none-any.whl → rhapso-0.1.991-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {rhapso-0.1.98.dist-info → rhapso-0.1.991.dist-info}/METADATA +100 -51
- {rhapso-0.1.98.dist-info → rhapso-0.1.991.dist-info}/RECORD +5 -5
- {rhapso-0.1.98.dist-info → rhapso-0.1.991.dist-info}/WHEEL +0 -0
- {rhapso-0.1.98.dist-info → rhapso-0.1.991.dist-info}/licenses/LICENSE +0 -0
- {rhapso-0.1.98.dist-info → rhapso-0.1.991.dist-info}/top_level.txt +0 -0
@@ -1,7 +1,7 @@
 Metadata-Version: 2.4
 Name: Rhapso
-Version: 0.1.98
-Summary: A python package for aligning and stitching light sheet fluorescence microscopy images
+Version: 0.1.991
+Summary: A python package for aligning and stitching light sheet fluorescence microscopy images
 Author: ND
 Author-email: sean.fite@alleninstitute.org
 Project-URL: Source, https://github.com/AllenNeuralDynamics/Rhapso
@@ -46,7 +46,7 @@ Dynamic: summary
 
 # Rhapso
 
-This is the code base for **Rhapso**, a modular Python toolkit for the alignment and stitching of large-scale microscopy datasets.
+This is the official code base for **Rhapso**, a modular Python toolkit for the alignment and stitching of large-scale microscopy datasets.
 
 [](LICENSE)
 [](https://www.python.org/downloads/release/python-3100/)
@@ -64,6 +64,8 @@ This is the code base for **Rhapso**, a modular Python toolkit for the alignment
 - [Performance](#performance)
 - [Layout](#layout)
 - [Installation](#installation)
+- [How To Start](#how-to-start)
+- [Try Rhapso on Sample Data](#try-rhapso-on-sample-data)
 - [Ray](#ray)
 - [Run Locally w/ Ray](#run-locally-with-ray)
 - [Run on AWS Cluster w/ Ray](#run-on-aws-cluster-with-ray)
@@ -88,11 +90,7 @@ Rhapso is still loading... and while we wrap up development, a couple things to
 ## Summary
 Rhapso is a set of Python components used to register, align, and stitch large-scale, overlapping, tile-based, multiscale microscopy datasets. Its stateless components can run on a single machine or scale out across cloud-based clusters.
 
-Rhapso is published on PyPI
-
-```bash
-pip install Rhapso
-```
+Rhapso is published on PyPI.
 
 Rhapso was developed by the Allen Institute for Neural Dynamics.
 
@@ -117,49 +115,6 @@ Questions or want to contribute? Please open an issue..
 
 <br>
 
-## High Level Approach to Registration, Alignment, and Fusion
-
-This process has a lot of knobs and variations, and when used correctly, can work for a broad range of datasets.
-
-**First, figure out what type of alignment you need.**
-- Are there translations to shift to?
-  - If so, you’ll likely want to start with a rigid alignment.
-
-Once you’ve run the rigid step, how does your data look?
-- Did the required translations shrink to an acceptable level?
-- If not, try again with new parameters, keeping the questions above in mind.
-
-At this point, the translational part of your alignment should be in good shape. Now ask: **are transformations needed?** If so, you likely need an affine alignment next.
-
-Your dataset should be correctly aligned at this point. If not, there are a number of reasons why, and we have listed some common recurrences and will keep this up to date.
-
-There is a special case in some datasets where the z-stack is very large. In this case, you can use the split-dataset utility, which splits each tile into chunks. Then you can run split-affine alignment, allowing for more precise transformations without such imposing global rails.
-
-**Common Causes of Poor Alignment**
-- Not enough quality matches (adjust sigma threshold until you do)
-- Data is not consistent looking (we take a global approach to params)
-- Large translations needed (extend search radius)
-- Translations that extend beyond overlapping span (increase overlap)
-
----
-
-<br>
-
-## Performance
-
-**Interest Point Detection Performance Example (130TB Zarr dataset)**
-
-| Environment          | Resources           | Avg runtime |
-|:---------------------|:--------------------|:-----------:|
-| Local single machine | 10 CPU, 10 GB RAM   |  ~120 min   |
-| AWS Ray cluster      | 560 CPU, 4.4 TB RAM |   ~30 min   |
-
-<br>
-*Actual times vary by pipeline components, dataset size, tiling, and parameter choices.*
-
----
-
-<br>
 
 ## Layout
 
@@ -222,6 +177,100 @@ pip install -r requirements.txt
 
 <br>
 
+## How to Start
+
+Rhapso is driven by **pipeline scripts**.
+
+- Each pipeline script has at minimum an associated **param file** (e.g. in `Rhapso/pipelines/ray/param/`).
+- If you are running on a cluster, you’ll also have a **Ray cluster config** (e.g. in `Rhapso/pipelines/ray/aws/config/`).
+
+A good way to get started:
+
+1. **Pick a template pipeline script**
+   For example:
+   - `Rhapso/pipelines/ray/local/alignment_pipeline.py` (local)
+   - `Rhapso/pipelines/ray/aws/alignment_pipeline.py` (AWS/Ray cluster)
+
+2. **Point it to your param file**
+   Update the `with open("...param.yml")` line so it reads your own parameter YAML.
+   - [Run Locally w/ Ray](#run-locally-with-ray)
+
+3. **(Optional) Point it to your cluster config**
+   If you’re using AWS/Ray, update the cluster config path.
+   - [Run on AWS Cluster w/ Ray](#run-on-aws-cluster-with-ray)
+
+4. **Edit the params to match your dataset**
+   Paths, downsampling, thresholds, matching/solver settings, etc.
+
+5. **Run the pipeline**
+   The pipeline script will call the Rhapso components (detection, matching, solver, fusion) in the order defined in the script, using the parameters you configured.
+
+---
+
+<br>
+
+## Try Rhapso on Sample Data
+
+The quickest way to get familiar with Rhapso is to run it on a real dataset. We host a small (10 GB) Z1 example in a public S3 bucket, so you can access it without special permissions. It’s a good starting point to copy and adapt for your own alignment workflows.
+
+XML (input)
+- s3://aind-open-data/HCR_802704_2025-08-30_02-00-00_processed_2025-10-01_21-09-24/image_tile_alignment/single_channel_xmls/channel_488.xml
+
+Image prefix (referenced by the XML)
+- s3://aind-open-data/HCR_802704_2025-08-30_02-00-00_processed_2025-10-01_21-09-24/image_radial_correction/
+
+<br>
+
+**Note:** Occasionally we clean up our aind-open-data bucket. If you find this dataset no longer exists, please create an issue and we will replace it.
+
+---
+
+<br>
+
+## High Level Approach to Registration, Alignment, and Fusion
+
+This process has many knobs and variations and, when used correctly, can work for a broad range of datasets.
+
+**First, figure out what type of alignment you need.**
+- Are there translations needed to shift tiles into place?
+  - If so, you’ll likely want to start with a rigid alignment.
+
+Once you’ve run the rigid step, how does your data look?
+- Did the required translations shrink to an acceptable level?
+- If not, try again with new parameters, keeping the questions above in mind.
+
+At this point, the translational part of your alignment should be in good shape. Now ask: **are further transformations needed?** If so, you likely need an affine alignment next.
+
+Your dataset should be correctly aligned at this point. If not, there are a number of possible reasons; we list some common causes below and will keep this up to date.
+
+There is a special case for datasets with very large z-stacks. Here you can use the split-dataset utility, which splits each tile into chunks, and then run split-affine alignment, allowing more precise transformations without such imposing global constraints.
+
+**Common Causes of Poor Alignment**
+- Not enough quality matches (adjust the sigma threshold until you have enough)
+- Inconsistent-looking data (parameters are applied globally)
+- Large translations needed (extend the search radius)
+- Translations that extend beyond the overlapping span (increase the overlap)
+
+---
+
+<br>
+
+## Performance
+
+**Interest Point Detection Performance Example (130 TB Zarr dataset)**
+
+| Environment          | Resources           | Avg runtime |
+|:---------------------|:--------------------|:-----------:|
+| Local single machine | 10 CPU, 10 GB RAM   |  ~120 min   |
+| AWS Ray cluster      | 560 CPU, 4.4 TB RAM |   ~30 min   |
+
+<br>
+*Actual times vary by pipeline components, dataset size, tiling, and parameter choices.*
+
+---
+
+<br>
+
 ## Ray
 
 **Ray** is a Python framework for parallel and distributed computing. It lets you run regular Python functions in parallel on a single machine **or** scale them out to a cluster (e.g., AWS) with minimal code changes. In Rhapso, we use Ray to process large scale datasets.
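The "How to Start" steps added in this release describe a pipeline-driver pattern: a script loads a param file, then calls the Rhapso components (detection, matching, solver, fusion) in order. A minimal sketch of that pattern follows; the function names, param keys, and return values here are illustrative placeholders, not Rhapso's actual API, and a real pipeline script would read the params from a YAML file instead of an inline dict.

```python
# Minimal sketch of a Rhapso-style pipeline driver.
# NOTE: all stage functions and param keys below are hypothetical
# stand-ins for the real components, used only to show the flow.

def detect_interest_points(params):
    # Placeholder for the detection component.
    return {"n_points": 1000,
            "sigma": params["detection"]["sigma_threshold"]}

def match_points(points, params):
    # Placeholder for the matching component.
    return {"n_matches": points["n_points"] // 4,
            "search_radius": params["matching"]["search_radius"]}

def solve_transforms(matches, params):
    # Placeholder for the solver component (rigid or affine model).
    return {"model": params["solver"]["model"], "tiles": 9}

def fuse_tiles(transforms):
    # Placeholder for the fusion component.
    return f"fused {transforms['tiles']} tiles with a {transforms['model']} model"

# In a real pipeline script these values would come from the param YAML
# referenced by the `with open("...param.yml")` line.
params = {
    "detection": {"sigma_threshold": 1.8},
    "matching": {"search_radius": 100},
    "solver": {"model": "rigid"},
}

points = detect_interest_points(params)
matches = match_points(points, params)
transforms = solve_transforms(matches, params)
print(fuse_tiles(transforms))
```

The point of the sketch is only the ordering: each stage consumes the previous stage's output plus the shared params, which is what makes the components stateless and easy to run locally or on a Ray cluster.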
@@ -90,12 +90,12 @@ Rhapso/split_dataset/save_points.py,sha256=k-jH-slmxkbrxDl-uJvDkwOedi6cg7md3kg_a
 Rhapso/split_dataset/save_xml.py,sha256=Iq1UdFa8sdnWGygfIpDi4F5In-SCWggpl7lnuDTxkHE,14280
 Rhapso/split_dataset/split_images.py,sha256=2RzAi0btV1tmh4le9QotRif1IYUU6_4pLcGGpFBM9zk,22434
 Rhapso/split_dataset/xml_to_dataframe_split.py,sha256=ByaLzJ4sqT417UiCQU31_CS_V4Jms7pjMbBl0ZdSNNA,8570
-rhapso-0.1.
+rhapso-0.1.991.dist-info/licenses/LICENSE,sha256=U0Y7B3gZJHXpjJVLgTQjM8e_c8w4JJpLgGhIdsoFR1Y,1092
 tests/__init__.py,sha256=LYf6ZGyYRcduFFSaOLmnw3rTyfS3XLib0dsTHDWH0jo,37
 tests/test_detection.py,sha256=NtFYR_du9cbKrclQcNiJYsKzyqly6ivF61pw6_NICcM,440
 tests/test_matching.py,sha256=QX0ekSdyIkPpAsXHfSMqJUUlNZg09caSlhhUM63MduM,697
 tests/test_solving.py,sha256=t8I9XPV_4ZFM-DJpgvdYXxkG2_4DQgqs-FFyE5w8Nfg,695
-rhapso-0.1.
-rhapso-0.1.
-rhapso-0.1.
-rhapso-0.1.
+rhapso-0.1.991.dist-info/METADATA,sha256=QjjEf8EIF1t2I1mvBmN22MSawRvJNQLWV_pnh08YlZ0,19294
+rhapso-0.1.991.dist-info/WHEEL,sha256=SmOxYU7pzNKBqASvQJ7DjX3XGUF92lrGhMb3R6_iiqI,91
+rhapso-0.1.991.dist-info/top_level.txt,sha256=NXvsrsTfdowWbM7MxEjkDZE2Jo74lmq7ruWkp70JjSw,13
+rhapso-0.1.991.dist-info/RECORD,,
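Each RECORD line above follows the standard wheel convention `path,sha256=<digest>,size`, where the digest is the unpadded URL-safe base64 encoding of the file's SHA-256 hash (the RECORD file itself carries no hash, hence the trailing `,,`). A small stdlib-only sketch of how such an entry is computed; this illustrates the wheel format generally and is not Rhapso-specific code:

```python
import base64
import hashlib

def record_entry(path: str, data: bytes) -> str:
    """Build a wheel RECORD line: path,sha256=<urlsafe-b64 digest, unpadded>,size."""
    digest = hashlib.sha256(data).digest()
    # RECORD uses URL-safe base64 with the trailing '=' padding stripped.
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return f"{path},sha256={b64},{len(data)}"

print(record_entry("rhapso/example.py", b"print('hello')\n"))
```

A 32-byte SHA-256 digest always encodes to 43 base64 characters after padding is stripped, which matches the digest lengths visible in the RECORD diff above.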
File without changes
File without changes
File without changes