hyperresashs-1.0.0.tar.gz

Metadata-Version: 2.4
Name: hyperresashs
Version: 1.0.0
Summary: Isotropic segmentation pipeline for MTL subregions from multi-modality 3T MRI
Author: Yue Li, PATCHLab, University of Pennsylvania
Author-email: Paul Yushkevich <pyushkevich@gmail.com>
Project-URL: repository, https://github.com/liyue3780/HyperResASHS
Project-URL: homepage, https://github.com/liyue3780/HyperResASHS
Keywords: deep learning,image segmentation,medical image analysis,medical image segmentation,MTL subregions,hippocampus segmentation,nnU-Net,hyperresashs,ASHS,Automatic Segmentation of Hippocampal Subfields
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Healthcare Industry
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Scientific/Engineering :: Medical Science Apps.
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: torch==2.5.1
Requires-Dist: torchvision==0.20.1
Requires-Dist: torchaudio==2.5.1
Requires-Dist: acvl-utils>=0.2
Requires-Dist: dynamic-network-architectures>=0.4
Requires-Dist: tqdm
Requires-Dist: dicom2nifti
Requires-Dist: scipy
Requires-Dist: batchgenerators>=0.25
Requires-Dist: numpy
Requires-Dist: scikit-learn
Requires-Dist: scikit-image>=0.19.3
Requires-Dist: SimpleITK>=2.2.1
Requires-Dist: pandas
Requires-Dist: graphviz
Requires-Dist: tifffile
Requires-Dist: requests
Requires-Dist: nibabel
Requires-Dist: matplotlib
Requires-Dist: seaborn
Requires-Dist: imagecodecs
Requires-Dist: yacs
Requires-Dist: PyYAML>=6.0.3
Requires-Dist: huggingface_hub>=0.20.0
Requires-Dist: picsl_greedy>=0.0.11
Requires-Dist: picsl_c3d>=1.4.6.0
Requires-Dist: tensorboard==2.20.0
Requires-Dist: lpips==0.1.4

# HyperResASHS

[![arXiv](https://img.shields.io/badge/arXiv-2508.17171-b31b1b.svg)](https://doi.org/10.48550/arXiv.2508.17171)

**HyperResASHS** is a deep learning pipeline for isotropic segmentation of medial temporal lobe (MTL) subregions from multi-modality 3T MRI (T1w and T2w). This repository implements the method described in our paper for achieving high-resolution, isotropic segmentation of brain structures.

## Overview

This project addresses the challenge of segmenting MTL subregions from anisotropic MRI data by:
1. Building an isotropic atlas using Implicit Neural Representations (INR)
2. Training a multi-modality segmentation model with nnU-Net
3. Performing inference on test data with automatic preprocessing

The pipeline handles the entire workflow from raw multi-modality MRI images to final segmentation results, including registration, ROI extraction, upsampling, and model inference.

## Setup

Follow these steps to set up the repository in a fresh environment:

### 1. Create a Conda Environment

Create a new conda environment with Python 3.10 or higher:

```bash
conda create -n hyperresashs python=3.10
conda activate hyperresashs
```

### 2. Clone the Repository

Clone the repository with submodules:

```bash
git clone --recursive https://github.com/liyue3780/HyperResASHS.git
cd HyperResASHS
```

If you've already cloned without submodules, initialize them with:

```bash
git submodule update --init --recursive
```

### 3. Install Python Dependencies

**Important**: PyTorch version compatibility is critical. This pipeline requires PyTorch 2.5.x (tested with 2.5.1). Newer versions (e.g., 2.9) may cause compatibility issues.

```bash
# First, install PyTorch with CUDA support (adjust CUDA version as needed)
# For CUDA 11.8:
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu118

# For CPU only:
# pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1

# Then install the package and remaining dependencies
pip install -e .
```
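
As a quick sanity check of the version requirement, you can compare the installed version string against the supported 2.5.x range. The helper below is illustrative only (not part of the package); it merely parses the version string:

```python
def is_supported_torch(version: str) -> bool:
    """Return True for PyTorch 2.5.x version strings (e.g. '2.5.1', '2.5.1+cu118')."""
    base = version.split("+")[0]           # drop a local build tag like '+cu118'
    major, minor = base.split(".")[:2]
    return (int(major), int(minor)) == (2, 5)

# Example:
# import torch
# assert is_supported_torch(torch.__version__), "expected PyTorch 2.5.x"
```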

**Note**: Additional dependencies may be required by the submodules. See the submodule setup instructions below.

### 4. Set Up Submodules

This repository uses git submodules for dependencies:

- **`submodules/multi_contrast_inr`**: INR repository (tracking `main` branch)
- **`submodules/nnUNet`**: Modified nnUNet repository (tracking `mmseg` branch) - [https://github.com/liyue3780/nnUNet/tree/mmseg](https://github.com/liyue3780/nnUNet/tree/mmseg)

**Install nnUNet submodule:**

This step is only needed if you will be training new HyperResASHS models; it is not necessary for inference.

```bash
cd submodules/nnUNet
pip install -e .
cd ../..
```

**Set nnU-Net Environment Variables:**

After installing nnUNet, you must set the following environment variables:

```bash
export nnUNet_raw="/path/to/nnUNet_raw"
export nnUNet_preprocessed="/path/to/nnUNet_preprocessed"
export nnUNet_results="/path/to/nnUNet_results"
```

For detailed setup instructions (including Linux, macOS, and Windows), see the [nnU-Net environment variables documentation](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/set_environment_variables.md).
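
Before launching training, it can be worth confirming the variables are actually visible to Python. A minimal check (illustrative, not part of the package):

```python
import os

REQUIRED_VARS = ["nnUNet_raw", "nnUNet_preprocessed", "nnUNet_results"]

def missing_nnunet_vars(env=os.environ):
    """Return the nnU-Net environment variables that are not set."""
    return [v for v in REQUIRED_VARS if not env.get(v)]

# Example:
# if missing_nnunet_vars():
#     raise SystemExit(f"Set these variables first: {missing_nnunet_vars()}")
```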

**Install INR submodule dependencies:**

This step is only needed if you will be training new HyperResASHS models; it is not necessary for inference.

```bash
pip install tensorboard==2.20.0 lpips==0.1.4
```

Refer to the INR repository's documentation for specific installation requirements. The INR submodule may require additional dependencies such as PyTorch, nibabel, and other packages.

**Note**: The modified nnUNet includes Modality Augmentation methods for multi-modality brain MRI segmentation. Make sure to use the `mmseg` branch when running nnU-Net training.

### 5. Verify Installation

Verify that the main pipeline can be imported:

```bash
python -c "from hyperresashs.preprocessing import PreprocessorInVivo; from hyperresashs.testing import ModelTester; from hyperresashs.prepare_inr import INRPreprocess; print('Installation successful!')"
```

## Configuration

For detailed configuration information, including the config file format and cross-validation file format, see the [Configuration Guide](docs/configuration.md).

## Pipeline Details

The pipeline consists of six main steps that run in linear order. Each step assumes the previous steps have been completed:

1. **Prepare** → 2. **Prepare INR** → 3. **Run INR Upsampling** → 4. **Preprocess** → 5. **Train** → 6. **Test**

**Note**: Steps 2-3 are only needed if using INR upsampling. For other upsampling methods (e.g., `GreedyUpsampling` or `None`), you can skip Steps 2-3 and go directly from Step 1 to Step 4.
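
The linear order can be sketched as a small driver loop. This is illustrative only; `python main.py -s <stage> -c <config>` is the real entry point, and the loop simply chains the stage names used in this README:

```python
import subprocess

# Training-side stages in their required linear order (Step 6, test, runs separately).
STAGES = ["prepare", "prepare_inr", "run_inr", "preprocess", "train"]

def stage_command(stage: str, config_id: int) -> list[str]:
    """Build the main.py invocation for one pipeline stage."""
    return ["python", "main.py", "-s", stage, "-c", str(config_id)]

def run_pipeline(config_id: int) -> None:
    # Each stage assumes the previous one completed, so run sequentially
    # and stop on the first failure (check=True raises on a non-zero exit).
    for stage in STAGES:
        subprocess.run(stage_command(stage, config_id), check=True)
```

If you are not using INR upsampling, drop `prepare_inr` and `run_inr` from the list, per the note above.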

### Step 1: Prepare Patch Data (`stage = prepare`)

Run this step first to create the experiment folder by copying images and segmentations from the two atlas folders (T1w and T2w ASHS atlases). This step:
- Copies primary and secondary modality images from ASHS packages
- Copies segmentation files
- Performs coordinate system transformations (swapdim RPI)
- Creates the folder structure in `{PREPARE_RAW_PATH}/{EXP_NUM}{MODEL_NAME}/images/`

**Usage:**
```bash
python main.py -s prepare -c {CONFIG_ID}
```
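
Note that the experiment folder name concatenates `EXP_NUM` and `MODEL_NAME` with no separator. A small helper to build the expected path (illustrative, not part of the package):

```python
from pathlib import Path

def prepare_images_dir(prepare_raw_path: str, exp_num: int, model_name: str) -> Path:
    """Path where the prepare stage places the copied images."""
    return Path(prepare_raw_path) / f"{exp_num}{model_name}" / "images"
```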

### Step 2: Prepare INR Data (`stage = prepare_inr`)

This step prepares the data for INR upsampling. It requires the prepared patch data from Step 1. The `prepare_inr` stage will:
- Prepare the data in the format expected by the INR submodule
- Generate INR configuration files for each case
- Create a shell script `shell/run_inr_upsampling_{EXP_NUM}{MODEL_NAME}.sh` with paths automatically filled from your config

**Usage:**
```bash
python main.py -s prepare_inr -c {CONFIG_ID}
```

**Folder structure created:**

The `prepare_inr` stage creates the following folder structure under `{INR_PATH}/{EXP_NUM}{MODEL_NAME}/`:
- `preprocess/`: Contains case folders with input data and config files for INR
- `training_preparation/`: Contains case folders with prepared data ready for INR training
- `training_output/`: Will contain INR training outputs (created after INR training completes)

### Step 3: Run INR Upsampling (`stage = run_inr`)

After preparing INR data, you have two options to run INR upsampling:

#### Option 1: Run via Python (Recommended for single GPU)

Run INR upsampling directly using Python:

```bash
python main.py -s run_inr -c {CONFIG_ID}
```

This will process all cases sequentially. The script automatically:
- Finds all cases in the `training_preparation` folder (created in Step 2)
- Runs INR training for each case using the generated config files from the `preprocess` folder

#### Option 2: Run via Shell Script (Recommended for multi-GPU)

For multi-GPU setups, you can use the generated shell script and modify it to run different batches on different GPUs. See the [INR Upsampling Shell Script Guide](docs/inr_upsampling_shell_script.md) for detailed instructions.

### Step 4: Complete Preprocessing (`stage = preprocess`)

After INR upsampling is finished, run the preprocessing stage to complete all remaining steps. This step assumes the patch data preparation (Step 1) was already completed. It will:
- Copy INR upsampled results (if using the INR upsampling method)
- Perform resampling/upsampling based on the configured method
- Register the secondary modality (T1w) to the primary (T2w)
- Prepare the nnU-Net dataset
- Remove outer segmentation artifacts
- Convert labels to continuous format
- Create cross-validation splits
- Run nnU-Net experiment planning
- Generate the nnU-Net training script: `shell/train_nnunet_{EXP_NUM}{MODEL_NAME}.sh`

**Usage:**
```bash
python main.py -s preprocess -c {CONFIG_ID}
```

**Outputs from Step 4:**
- nnU-Net dataset in `{NNUNET_RAW_PATH}/Dataset{EXP_NUM}_{MODEL_NAME}/`
- Preprocessed data in `{NNUNET_RAW_PATH}/../nnUNet_preprocessed/Dataset{EXP_NUM}_{MODEL_NAME}/`
- Cross-validation splits file: `splits_final.json`
- Training script: `shell/train_nnunet_{EXP_NUM}{MODEL_NAME}.sh`
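
Note the naming difference: the experiment folder uses `{EXP_NUM}{MODEL_NAME}` with no separator, while the nnU-Net dataset folder inserts an underscore (`Dataset{EXP_NUM}_{MODEL_NAME}`). An illustrative helper (not part of the package):

```python
from pathlib import Path

def nnunet_dataset_dir(nnunet_raw_path: str, exp_num: int, model_name: str) -> Path:
    """nnU-Net raw dataset folder created by the preprocess stage."""
    return Path(nnunet_raw_path) / f"Dataset{exp_num}_{model_name}"
```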

**Note**: If you're using a non-INR upsampling method (e.g., `GreedyUpsampling` or `None`), you can skip Steps 2-3 and go directly from Step 1 to Step 4.

### Step 5: nnU-Net Training (`stage = train`)

Step 4 creates the nnU-Net dataset, runs experiment planning, and creates five-fold cross-validation splits. A training script is automatically generated for convenience.

You have two options to run nnU-Net training:

#### Option 1: Run via Python (Recommended)

Run nnU-Net training directly using Python:

```bash
python main.py -s train -c {CONFIG_ID}
```

This will train all 5 folds (folds 0-4) sequentially using the `TRAINER` specified in your configuration file (e.g., `ModAugUNetTrainer`).

**Note**: Ensure you're using the modified nnUNet from `submodules/nnUNet` (`mmseg` branch), which includes the Modality Augmentation methods. The `nnUNetv2_train` command should be available after installing the modified nnUNet.
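
Under the standard nnU-Net v2 CLI, training the five folds amounts to one `nnUNetv2_train` call per fold. A sketch of those invocations (the `3d_fullres` configuration name is an assumption here; use whatever configuration the planning step produced, and note that `main.py -s train` already handles this for you):

```python
def fold_commands(dataset_id: int, configuration: str = "3d_fullres",
                  trainer: str = "ModAugUNetTrainer") -> list[list[str]]:
    """One nnUNetv2_train command per cross-validation fold (0-4)."""
    return [
        ["nnUNetv2_train", str(dataset_id), configuration, str(fold), "-tr", trainer]
        for fold in range(5)
    ]
```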

#### Option 2: Run via Shell Script

Alternatively, you can run the generated training script manually. See the [nnU-Net Training Shell Script Guide](docs/nnunet_training_shell_script.md) for detailed instructions.

### Step 6: Testing (`stage = test`)

Testing is independent of Steps 1-5 and uses its own configuration files in the `config_test/` directory. Each test configuration has its own ID that links to a trained model.

**Test Configuration ID Convention:**
- The test config ID links to the training model ID. For example:
  - `2921` links to model `292` (the trailing `1` denotes the first test set)
  - `2922`, `2923`, `2924`, etc. can be used for different test sets of the same model
- The leading digits match the `EXP_NUM` of the trained model you want to use

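In code terms, the convention simply splits off the last digit (an illustrative helper, not the package's API):

```python
def split_test_config_id(config_id: str) -> tuple[str, str]:
    """Split a test config ID into (model EXP_NUM, test-set index)."""
    return config_id[:-1], config_id[-1]
```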
**Usage:**
```bash
python main.py -s test -c {TEST_CONFIG_ID}
# For example: python main.py -s test -c 2921
```

**Configuration Requirements:**
- `EXP_NUM`: Must match the training model's `EXP_NUM` (e.g., `292`)
- `MODEL_NAME`: Must match the training model's `MODEL_NAME` (e.g., `TestPipeline`)
- `TRAINER`: Must match the training model's `TRAINER`
- `CONDITION`: Must match the training model's `CONDITION` (e.g., `in_vivo`)
- `UPSAMPLING_METHOD`: Must match the training model's `UPSAMPLING_METHOD`
- `TEST_PATH`: Path to test data (see the [Configuration Guide](docs/configuration.md) for structure)
- `TEMPLATE_PATH`: Path to the ASHS template for MTL ROI cropping (downloadable from [DOI: 10.5061/dryad.k6djh9wmn](https://doi.org/10.5061/dryad.k6djh9wmn))
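
The five "must match" fields can be checked mechanically before running a test. A minimal sketch (the field names come from the list above; the function itself is hypothetical, not part of the package):

```python
MUST_MATCH = ["EXP_NUM", "MODEL_NAME", "TRAINER", "CONDITION", "UPSAMPLING_METHOD"]

def mismatched_fields(test_cfg: dict, train_cfg: dict) -> list[str]:
    """Fields whose values differ between the test and training configs."""
    return [k for k in MUST_MATCH if test_cfg.get(k) != train_cfg.get(k)]
```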

**This stage performs:**
- Whole-brain registration (T1w to T2w)
- ROI extraction using the ASHS template
- Patch cropping and upsampling
- Local registration for fine alignment
- nnU-Net inference for segmentation
- Output of segmentation results

For detailed test configuration information, see the [Configuration Guide](docs/configuration.md).

## Citation

If you use this code in your research, please cite our paper:

```bibtex
@article{hyperresashs2025,
  title={HyperResASHS: Isotropic Segmentation of MTL Subregions from Multi-modality 3T MRI},
  author={[Authors]},
  journal={arXiv preprint arXiv:2508.17171},
  year={2025},
  url={https://doi.org/10.48550/arXiv.2508.17171}
}
```

## Changelog

### 01/14/2026
- Replaced the `trim_neck.sh` shell script with a Python implementation using the `picsl_c3d` package
- Removed the ITK-SNAP installation requirement (no longer needed)
- Updated the `multi_contrast_inr` submodule to the latest version
- Modified the `--config_id` argument to accept both an integer ID and a full file path
- Added config validation checks: ID consistency, conflict detection, and nnUNet dataset existence
- Made `FILE_NAME_CONFIG` optional, with automatic defaults based on stage (test vs. other stages)
- Renamed the `scripts/` folder to `shell/` for better clarity
- Added an optional `--subject_id` argument to the test stage for testing specific subjects
- Updated `.gitignore` to exclude generated config and script files while keeping template files tracked
- Fixed config validation to skip checks when the stage is `test`

### 01/07/2026
- Added `requirements.txt` and `setup.py` with pinned package versions for reproducible installation
- Added nnU-Net environment variable setup instructions
- Added Python stages for INR upsampling (`stage = run_inr`) and nnU-Net training (`stage = train`)
- Created comprehensive documentation in the `docs/` folder:
  - Configuration guide with training and test config details
  - INR upsampling shell script guide
  - nnU-Net training shell script guide
- Updated the README to reference the documentation files for better organization
- Added the `trim_neck.sh` script for neck trimming
- Updated the testing documentation with `config_test` details and the test data structure
- Changed the default pipeline stage from `prepare_inr` to `prepare`

### 01/04/2026
- Refactored the pipeline to support linear execution order
- Added a separate `prepare` stage for patch data preparation
- Simplified the `execute()` method in preprocessing to remove conditional checks
- Updated the pipeline documentation to reflect the linear execution flow
- Added `.gitignore` to exclude Python cache files and build artifacts

### 12/24/2025
- Added INR and the modified nnUNet as git submodules
- Added the INR preparation module with config generation
- Added the INR upsampling script template and generation
- Added the nnUNet training script template and generation
- Updated the README with submodule setup and pipeline documentation

### 12/22/2025
- Updated README.md with comprehensive documentation
- Added data structure documentation slots for atlas and test data

### 11/20/2025
- Added the main preprocessing pipeline

### 10/27/2025
- Initial release of the isotropic segmentation pipeline for MTL subregions
- Support for 3T-T2w and 3T-T1w multi-modality MRI

## Contact

For questions or support, please open an issue or contact [liyue3780@gmail.com](mailto:liyue3780@gmail.com).