Rhapso 0.1.98-py3-none-any.whl → 0.1.99-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/METADATA +80 -45
- {rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/RECORD +5 -5
- {rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/WHEEL +0 -0
- {rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/licenses/LICENSE +0 -0
- {rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/top_level.txt +0 -0
{rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/METADATA

@@ -1,7 +1,7 @@
 Metadata-Version: 2.4
 Name: Rhapso
-Version: 0.1.98
-Summary: A python package for aligning and stitching light sheet fluorescence microscopy images
+Version: 0.1.99
+Summary: A python package for aligning and stitching light sheet fluorescence microscopy images
 Author: ND
 Author-email: sean.fite@alleninstitute.org
 Project-URL: Source, https://github.com/AllenNeuralDynamics/Rhapso
@@ -94,6 +94,8 @@ Rhapso is published on PyPI and can be installed with:
 pip install Rhapso
 ```
 
+<br>
+
 Rhapso was developed by the Allen Institute for Neural Dynamics.
 
 <br>
@@ -117,49 +119,6 @@ Questions or want to contribute? Please open an issue.
 
 <br>
 
-## High Level Approach to Registration, Alignment, and Fusion
-
-This process has a lot of knobs and variations and, when used correctly, can work for a broad range of datasets.
-
-**First, figure out what type of alignment you need.**
-- Are there translational shifts to correct for?
-  - If so, you'll likely want to start with a rigid alignment.
-
-Once you've run the rigid step, how does your data look?
-- Did the required translations shrink to an acceptable level?
-- If not, try again with new parameters, keeping the questions above in mind.
-
-At this point, the translational part of your alignment should be in good shape. Now ask: **are further transformations (scaling, shearing, rotation) needed?** If so, you likely need an affine alignment next.
-
-Your dataset should be correctly aligned at this point. If not, there are a number of reasons why; we have listed some common causes below and will keep the list up to date.
-
-There is a special case in some datasets where the z-stack is very large. In this case, you can use the split-dataset utility, which splits each tile into chunks. Then you can run split-affine alignment, allowing for more precise transformations without such imposing global constraints.
-
-**Common Causes of Poor Alignment**
-- Not enough quality matches (adjust the sigma threshold until you have enough)
-- Data is inconsistent in appearance (parameters are applied globally)
-- Large translations needed (extend the search radius)
-- Translations that extend beyond the overlapping span (increase overlap)
-
----
-
-<br>
-
-## Performance
-
-**Interest Point Detection Performance Example (130 TB Zarr dataset)**
-
-| Environment          | Resources           | Avg runtime |
-|:---------------------|:--------------------|:-----------:|
-| Local single machine | 10 CPU, 10 GB RAM   |  ~120 min   |
-| AWS Ray cluster      | 560 CPU, 4.4 TB RAM |   ~30 min   |
-
-<br>
-*Actual times vary by pipeline components, dataset size, tiling, and parameter choices.*
-
----
-
-<br>
 
 ## Layout
 
@@ -222,6 +181,82 @@ pip install -r requirements.txt
 
 <br>
 
+## How to Start
+
+Rhapso is driven by **pipeline scripts**.
+
+- Each pipeline script has at minimum an associated **param file** (e.g. in `Rhapso/pipelines/ray/param/`).
+- If you are running on a cluster, you'll also have a **Ray cluster config** (e.g. in `Rhapso/pipelines/ray/aws/config/`).
+
+A good way to get started:
+
+1. **Pick a template pipeline script**
+   For example:
+   - `Rhapso/pipelines/ray/local/alignment_pipeline.py` (local)
+   - `Rhapso/pipelines/ray/aws/alignment_pipeline.py` (AWS/Ray cluster)
+
+2. **Point it to your param file**
+   Update the `with open("...param.yml")` line so it reads your own parameter YAML (see the sketch after this list).
+   - [Run Locally w/ Ray](#run-locally-with-ray)
+
+3. **(Optional) Point it to your cluster config**
+   If you're using AWS/Ray, update the cluster config path.
+   - [Run on AWS Cluster w/ Ray](#run-on-aws-cluster-with-ray)
+
+4. **Edit the params to match your dataset**
+   Paths, downsampling, thresholds, matching/solver settings, etc.
+
+5. **Run the pipeline**
+   The pipeline script will call the Rhapso components (detection, matching, solver, fusion) in the order defined in the script, using the parameters you configured.
+
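For reference, a minimal sketch of the param-loading step described in step 2 above; the file name and keys are hypothetical, not Rhapso's actual schema (see the templates in `Rhapso/pipelines/ray/param/` for real examples):

```python
# Hypothetical sketch: read a parameter YAML the way a pipeline script might.
import yaml  # PyYAML

with open("my_param.yml") as f:  # replace with your own param file path
    params = yaml.safe_load(f)

print(params)  # a dict of paths, thresholds, matching/solver settings, etc.
```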
+---
+
+<br>
+
+## High Level Approach to Registration, Alignment, and Fusion
+
+This process has a lot of knobs and variations and, when used correctly, can work for a broad range of datasets.
+
+**First, figure out what type of alignment you need.**
+- Are there translational shifts to correct for?
+  - If so, you'll likely want to start with a rigid alignment.
+
+Once you've run the rigid step, how does your data look?
+- Did the required translations shrink to an acceptable level?
+- If not, try again with new parameters, keeping the questions above in mind.
+
+At this point, the translational part of your alignment should be in good shape. Now ask: **are further transformations (scaling, shearing, rotation) needed?** If so, you likely need an affine alignment next.
+
+Your dataset should be correctly aligned at this point. If not, there are a number of reasons why; we have listed some common causes below and will keep the list up to date.
+
+There is a special case in some datasets where the z-stack is very large. In this case, you can use the split-dataset utility, which splits each tile into chunks. Then you can run split-affine alignment, allowing for more precise transformations without such imposing global constraints.
+
+**Common Causes of Poor Alignment**
+- Not enough quality matches (adjust the sigma threshold until you have enough)
+- Data is inconsistent in appearance (parameters are applied globally)
+- Large translations needed (extend the search radius)
+- Translations that extend beyond the overlapping span (increase overlap)
+
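For intuition about the rigid and affine steps above, a toy sketch in plain NumPy (not Rhapso's API); the coordinates, translation, and matrix are all made up:

```python
# Toy illustration of what each alignment step can correct.
import numpy as np

# Hypothetical interest-point coordinates (x, y, z) in one tile.
points = np.array([[0.0, 0.0, 0.0],
                   [10.0, 5.0, 2.0]])

# Rigid-style correction (simplest case): every point shifts by the same translation.
t = np.array([1.5, -2.0, 0.5])
shifted = points + t

# Affine correction: a 3x3 matrix adds scaling/shearing/rotation on top of translation.
A = np.array([[1.01, 0.00, 0.02],
              [0.00, 0.99, 0.00],
              [0.00, 0.01, 1.00]])
transformed = points @ A.T + t

print(shifted)
print(transformed)
```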
+---
+
+<br>
+
+## Performance
+
+**Interest Point Detection Performance Example (130 TB Zarr dataset)**
+
+| Environment          | Resources           | Avg runtime |
+|:---------------------|:--------------------|:-----------:|
+| Local single machine | 10 CPU, 10 GB RAM   |  ~120 min   |
+| AWS Ray cluster      | 560 CPU, 4.4 TB RAM |   ~30 min   |
+
+<br>
+*Actual times vary by pipeline components, dataset size, tiling, and parameter choices.*
+
+---
+
+<br>
+
 ## Ray
 
 **Ray** is a Python framework for parallel and distributed computing. It lets you run regular Python functions in parallel on a single machine **or** scale them out to a cluster (e.g., AWS) with minimal code changes. In Rhapso, we use Ray to process large scale datasets.
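To illustrate that "minimal code changes" claim, a generic Ray sketch (not taken from Rhapso's source); `process_tile` is a hypothetical stand-in for per-tile work:

```python
import ray

ray.init()  # on a cluster, connect with ray.init(address="auto") instead

@ray.remote
def process_tile(tile_id: int) -> int:
    # hypothetical per-tile work (detection, matching, ...)
    return tile_id * tile_id

# The same function now runs in parallel across available CPUs/nodes.
futures = [process_tile.remote(i) for i in range(8)]
print(ray.get(futures))
```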
{rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/RECORD

@@ -90,12 +90,12 @@ Rhapso/split_dataset/save_points.py,sha256=k-jH-slmxkbrxDl-uJvDkwOedi6cg7md3kg_a
 Rhapso/split_dataset/save_xml.py,sha256=Iq1UdFa8sdnWGygfIpDi4F5In-SCWggpl7lnuDTxkHE,14280
 Rhapso/split_dataset/split_images.py,sha256=2RzAi0btV1tmh4le9QotRif1IYUU6_4pLcGGpFBM9zk,22434
 Rhapso/split_dataset/xml_to_dataframe_split.py,sha256=ByaLzJ4sqT417UiCQU31_CS_V4Jms7pjMbBl0ZdSNNA,8570
-rhapso-0.1.
+rhapso-0.1.99.dist-info/licenses/LICENSE,sha256=U0Y7B3gZJHXpjJVLgTQjM8e_c8w4JJpLgGhIdsoFR1Y,1092
 tests/__init__.py,sha256=LYf6ZGyYRcduFFSaOLmnw3rTyfS3XLib0dsTHDWH0jo,37
 tests/test_detection.py,sha256=NtFYR_du9cbKrclQcNiJYsKzyqly6ivF61pw6_NICcM,440
 tests/test_matching.py,sha256=QX0ekSdyIkPpAsXHfSMqJUUlNZg09caSlhhUM63MduM,697
 tests/test_solving.py,sha256=t8I9XPV_4ZFM-DJpgvdYXxkG2_4DQgqs-FFyE5w8Nfg,695
-rhapso-0.1.
-rhapso-0.1.
-rhapso-0.1.
-rhapso-0.1.
+rhapso-0.1.99.dist-info/METADATA,sha256=kqyfZB6PEVsMDtjj-8QH_P1VxJvAQxMG4wUdmvVXeYY,18488
+rhapso-0.1.99.dist-info/WHEEL,sha256=SmOxYU7pzNKBqASvQJ7DjX3XGUF92lrGhMb3R6_iiqI,91
+rhapso-0.1.99.dist-info/top_level.txt,sha256=NXvsrsTfdowWbM7MxEjkDZE2Jo74lmq7ruWkp70JjSw,13
+rhapso-0.1.99.dist-info/RECORD,,
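For reference, each RECORD row above follows the standard wheel format `path,sha256=<digest>,<size>`, where the digest is the unpadded urlsafe-base64 SHA-256 of the file. A minimal sketch of recomputing one such row (the path is illustrative):

```python
import base64
import hashlib
from pathlib import Path

path = Path("tests/__init__.py")
data = path.read_bytes()
# Unpadded urlsafe-base64 SHA-256, as used in wheel RECORD files.
digest = base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=")
print(f"{path},sha256={digest.decode()},{len(data)}")
```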
{rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/WHEEL

File without changes

{rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/licenses/LICENSE

File without changes

{rhapso-0.1.98.dist-info → rhapso-0.1.99.dist-info}/top_level.txt

File without changes