chhavi 1.0.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
chhavi-1.0.0/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Hemangi C. Varkal
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
chhavi-1.0.0/PKG-INFO ADDED
@@ -0,0 +1,275 @@
+ Metadata-Version: 2.4
+ Name: chhavi
+ Version: 1.0.0
+ Summary: RAMSES to VTKHDF conversion tool for AMR datasets
+ Author-email: Hemangi Varkal <hemangivarkal1612@gmail.com>
+ License: MIT License
+
+ Copyright (c) 2026 Hemangi C. Varkal
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy>=1.20
+ Requires-Dist: h5py>=3.6
+ Requires-Dist: osyris>=1.6
+ Requires-Dist: vtk>=9.2
+ Requires-Dist: tqdm>=4.60
+ Dynamic: license-file
+
+ # Chhavi: A Python tool for converting RAMSES outputs to VTKHDF
+
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
+ [![Python Version](https://img.shields.io/badge/python-3.9%2B-brightgreen)]()
+ [![Build Status](https://img.shields.io/badge/tests-passing-brightgreen)]()
+
+ ## Overview
+
+ **Chhavi** is a Python package that converts **[RAMSES](https://ramses-organisation.readthedocs.io/en/latest/)** simulation outputs into the **[VTKHDF](https://vtk.org/documentation/)** **OverlappingAMR** format.
+
+ It provides both a **command-line interface (CLI)** and a **Python API**, making it easy to visualize and analyze data in **[ParaView](https://docs.paraview.org/en/latest/)** and other compatible tools.
+
+ The package also includes test coverage, example scripts, a profile analysis suite for **Osyris** vs **VTKHDF** validation, and clear documentation, making it reproducible and accessible for scientific use.
+
+ ---
+
+ ## Features
+
+ - Convert **RAMSES AMR** outputs into **VTKHDF OverlappingAMR** files
+ - Uses **Osyris** for data analysis and extraction
+ - Support for both **scalar fields** (density, pressure, grav_potential) and
+   **vector fields** (velocity, magnetic_field, grav_acceleration)
+ - Dry-run mode to preview what would be written without creating files
+ - CLI and Python API for flexible use
+ - Parallel conversion support with a configurable number of workers (`--nproc`)
+ - Customizable output directory through the `--output-dir` option
+ - Profile analysis: **Osyris** vs **VTKHDF** validation (concordance correlation coefficient, **CCC** > 0.99)
+ - Fully tested with `pytest`
+
+ ---
+
+ ## Installation
+
+ Clone the repository and install dependencies:
+
+ ```bash
+ git clone https://github.com/HemangiVarkal/Chhavi.git
+ cd Chhavi
+ pip install -r requirements.txt
+ ```
+
+ ---
+
+ ## Usage
+
+ ### Command-Line Interface (CLI)
+
+ Example:
+
+ ```bash
+ python -m chhavi.cli --base-dir ramses_outputs/ \
+     --folder-name sedov_3d/ -n 1 \
+     --output-prefix sedov_test \
+     --fields density,velocity,pressure \
+     --dry-run \
+     --output-dir ./vtk_outputs \
+     --nproc 1
+ ```
+
+ Key options:
+ - `--base-dir` → Parent directory containing the simulation folder
+ - `--folder-name` → Folder containing RAMSES outputs
+ - `-n` → Snapshot number(s)
+ - `--output-prefix` → Prefix for generated `.vtkhdf` files
+ - `--fields` → Comma-separated list of fields (scalars/vectors)
+ - `--dry-run` → Run without writing files
+ - `--output-dir` → Directory to store `.vtkhdf` output files; defaults to the input folder (`<base-dir>/<folder-name>`)
+ - `--nproc` → Number of CPU cores to use for parallel conversion (default: 1); falls back to serial if unavailable
+
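The `-n` specifier accepts a single number, a comma-separated list, or a range. As a rough illustration of how such a spec can be expanded (a hypothetical standalone helper, not necessarily how Chhavi's `parse_output_numbers` is implemented):

```python
def parse_numbers(spec: str) -> list:
    """Expand a spec like '1', '1,3,5' or '2-7' into a sorted list of ints."""
    numbers = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:                      # range form, e.g. '2-7' (inclusive)
            lo, hi = part.split("-", 1)
            numbers.update(range(int(lo), int(hi) + 1))
        elif part:                           # single number, e.g. '4'
            numbers.add(int(part))
    return sorted(numbers)

print(parse_numbers("1,3,5-7"))  # → [1, 3, 5, 6, 7]
```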
+ ---
+
+ ### Python API
+
+ ```python
+ from chhavi.converter import ChhaviConverter
+
+ converter = ChhaviConverter(
+     input_folder="ramses_outputs/sedov_3d",
+     output_prefix="sedov_test",
+     fields=["density", "velocity"],
+     dry_run=True,
+     output_directory="./vtk_outputs"
+ )
+
+ converter.process_output(1)
+ ```
+
+ ---
+
+ ### Profile Analysis Workflow
+
+ Profile analysis tools are provided in the `profiles/` directory:
+
+ ```bash
+ cd profiles/
+ python compute_osyris_profile.py --base-dir ../ramses_outputs --folder-name sedov_3d --numbers 4
+ python compute_vtk_profile.py --base-dir ../vtk_outputs --folder-name sedov_3d --numbers 4
+ python analyzing_profiles.py -n 4
+ ```
+
+ This suite:
+
+ 1. Generates radial density profiles from **Osyris** (RAMSES native) and **VTKHDF** outputs, saved as CSV files in the `profile_outputs/` folder:
+    `osyris_profile_00002.csv` [radius, mean, std, min, max] <br>
+    `vtk_profile_00002.csv` [radius, mean, std, min, max]
+ 2. Computes the **CCC** validation metric between the CSV profiles (**CCC** > 0.99 confirms equivalence)
+ 3. Creates publication-quality comparison plots (`profile_comparison_00002.png`) with error bands
+ 4. `test_profile_analysis.py` in `tests/` validates snapshot `output_00004` with 7 automated tests
+
+ ---
+
+ ### Example Script
+
+ An example is provided in `examples/example_usage.py`:
+
+ ```bash
+ python -m examples.example_usage
+ ```
+
+ This script:
+ 1. Lists available fields in snapshots
+ 2. Prints dataset info
+ 3. Performs a dry-run conversion
+ 4. Displays scalar/vector fields to be written
+
+ You can specify an output directory in the example script by setting `OUTPUT_DIR='your/path'`.
+
+ ---
+
+ ## Output File Structure
+
+ ```text
+ <output-prefix>_00001.vtkhdf
+ <output-prefix>_00002.vtkhdf
+ ...
+ ```
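The five-digit zero padding mirrors RAMSES's own `output_00001` directory naming, so filenames sort in snapshot order. Forming such a name is a one-liner (hypothetical helper, shown for illustration):

```python
def vtkhdf_name(prefix: str, number: int) -> str:
    """Build a zero-padded output filename, e.g. sedov_test_00001.vtkhdf."""
    return f"{prefix}_{number:05d}.vtkhdf"

print(vtkhdf_name("sedov_test", 1))  # → sedov_test_00001.vtkhdf
```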
+
+ ---
+
+ ## Tests
+
+ Run the test suite with:
+
+ ```bash
+ pytest tests/
+ ```
+
+ ---
+
+ ## Repository Structure
+
+ ```
+ Chhavi/
+ ├── chhavi/                  # Core Python package
+ │   ├── __init__.py
+ │   ├── cli.py
+ │   ├── converter.py
+ │   └── parallel.py
+
+ ├── tests/                   # Unit tests
+ │   ├── __init__.py
+ │   ├── test_cli.py
+ │   ├── test_converter.py
+ │   ├── test_import.py
+ │   ├── test_parallel.py
+ │   ├── test_parser.py
+ │   └── test_profile_analysis.py
+
+ ├── examples/                # Example usage scripts
+ │   └── example_usage.py
+
+ ├── profiles/                # Profile analysis suite
+ │   ├── compute_osyris_profile.py
+ │   ├── compute_vtk_profile.py
+ │   └── analyzing_profiles.py
+
+ ├── papers/                  # JOSS submission papers
+ │   ├── paper.md
+ │   └── paper.bib
+
+ ├── ramses_outputs/          # Sample real RAMSES outputs
+ │   └── sedov_3d/
+ │       ├── output_00001/
+ │       ├── output_00002/
+ │       ├── output_00003/
+ │       ├── output_00004/
+ │       └── output_00005/
+
+ ├── LICENSE
+ ├── README.md
+ ├── requirements.txt
+ └── .gitignore
+ ```
+
+ ---
+
+ ## Notes & Best Practices
+
+ - Default fields if `--fields` is not specified: `density`, `pressure`, `velocity`
+ - Profile validation confirms that **Osyris** and **VTKHDF** produce equivalent results
+ - Verbose mode (`--verbose`) provides step-by-step information, including the number of cells retained per level
+ - Parallel execution uses the specified number of CPU cores (`--nproc`) and falls back to serial execution if needed
+ - The output directory is auto-created if it does not exist, so no manual setup is required
+ - If no cells survive filtering or requested fields are missing, the output file is skipped and warnings are logged
+
+ ---
+
+ ## License
+
+ This project is licensed under the terms of the **MIT License**.
+ See the [LICENSE](LICENSE) file for details.
+
+ ---
+
+ ## Authors
+
+ - **[Hemangi C. Varkal](https://github.com/HemangiVarkal)** — Developer
+ - **Shubhankar R. Gharote** — Space Applications Centre (SAC), ISRO
+ - **Dr. Munn Vinayak Shukla** — Space Applications Centre (SAC), ISRO
+ - **Dr. Mehul Pandya** — Space Applications Centre (SAC), ISRO
+
+ ---
+
+ ## Acknowledgements
+
+ - This work was carried out at the **Space Applications Centre (SAC), Indian Space Research Organisation (ISRO), Ahmedabad, India**.
+ - The author expresses sincere appreciation to **Dr. Rashmi Sharma (DD, EPSA)** for continuous encouragement and institutional support.
+ - Special thanks are due to **Dr. Mehul Pandya (Group Director, SESG/EPSA)**, **Dr. Munn Vinayak Shukla (Head, SSD/SESG/EPSA)** and **Shubhankar R. Gharote (Scientist, SSD/SESG/EPSA)** for their invaluable guidance, technical insights, and collaboration throughout the development of this work.
+ - Computations were performed using the **SAGAR High Performance Computing (HPC) Facility** of **SAC**.
+ - The implementation follows **ParaView VTKHDF OverlappingAMR** conventions.
+
+ ---
+
+ ## Citation
+
+ If you use **Chhavi** in your work, please cite it as:
+
+ > Varkal, H. (2026). *Chhavi: A Python tool for converting RAMSES outputs to VTKHDF*.
+ > GitHub repository: [https://github.com/HemangiVarkal/Chhavi](https://github.com/HemangiVarkal/Chhavi)
@@ -0,0 +1,37 @@
+ # -*- coding: utf-8 -*-
+
+ """
+ Chhavi: RAMSES → VTKHDF (Overlapping AMR) Converter
+ ===================================================
+
+ ──────────────────────────────────────────────────────────────────────────────
+ Description
+ ──────────────────────────────────────────────────────────────────────────────
+ Chhavi converts RAMSES simulation outputs into the VTKHDF format that
+ ParaView and other VTK-based visualization tools can read.
+
+ ──────────────────────────────────────────────────────────────────────────────
+ Why this exists
+ ──────────────────────────────────────────────────────────────────────────────
+ - RAMSES uses an Adaptive Mesh Refinement (AMR) grid suitable for HPC, but not
+   directly compatible with most visualization pipelines.
+ - VTKHDF's OverlappingAMR format stores data by refinement level, with
+   cell-centered fields and AMR indexing metadata, ideal for ParaView.
+ """
+
+ from .converter import (
+     ChhaviConverter,
+     parse_output_numbers,
+     parse_norm_range,
+     parse_fields_arg,
+     list_fields_for_snapshot,
+ )
+
+ from .parallel import (
+     process_single_output,
+     run_parallel_conversion,
+ )
+
+ __version__ = "1.0.0"
@@ -0,0 +1,247 @@
+ #!/usr/bin/env python3
+ # -*- coding: utf-8 -*-
+
+ """
+ ──────────────────────────────────────────────────────────────────────────────
+ CLI QUICK START (copy–paste, then tweak)
+ ──────────────────────────────────────────────────────────────────────────────
+ Example run (filters a subvolume, includes extra fields):
+
+     python3 cli.py \
+         --base-dir ./simulations \
+         --folder-name output_dir \
+         --numbers 1,3,5-7 \
+         --output-prefix overlapping_amr \
+         --level-start 2 --level-end 5 \
+         --x-range 0.0:1.0 --y-range 0.0:1.0 --z-range 0.0:1.0 \
+         --fields density,pressure \
+         --verbose \
+         --nproc 4 \
+         --output-dir ./vtk_outputs
+
+ Exploration mode:
+
+     # Lists fields Osyris sees in the mesh (no conversion happens)
+     python3 cli.py --base-dir ./simulations --folder-name output_dir \
+         -n 5 --list-fields
+
+     # Dry-run: show counts after filters, but don't write .vtkhdf
+     python3 cli.py --base-dir ./simulations --folder-name output_dir \
+         -n 5 --level-start 1 --dry-run --verbose
+
+ Required args:
+
+     --base-dir         Path to your RAMSES run root directory.
+     --folder-name      Subfolder inside base-dir containing outputs.
+     -n / --numbers     Output numbers to process. Formats:
+                        "7" or "3,5,9" or "10-15"
+
+ Optional args:
+
+     --output-prefix / -o               Prefix for output files (default: overlapping_amr)
+     --level-start / --level-end        AMR level filter (inclusive)
+     --x-range / --y-range / --z-range  Normalized ranges [0,1] over the box length
+     --fields                           Physical fields to include (default: density, pressure, velocity)
+     --verbose                          Step-by-step narration
+     --list-fields                      Only list available mesh fields and exit
+     --dry-run                          Run everything except the actual write step
+     --nproc                            Number of CPU cores to use for parallel conversion (default: 1)
+     --output-dir                       Directory to store .vtkhdf output files (default: input directory)
+ ──────────────────────────────────────────────────────────────────────────────
+ Tip on "normalized ranges":
+     RAMSES coordinates are in code/physical units. We divide by the simulation
+     box length (from metadata) so [0,1] always spans the full domain, regardless
+     of units.
+ ──────────────────────────────────────────────────────────────────────────────
+ """
+
+ import os
+ import argparse
+ import logging
+
+ from .converter import (
+     parse_output_numbers,
+     parse_norm_range,
+     parse_fields_arg,
+     list_fields_for_snapshot,
+ )
+ from .parallel import run_parallel_conversion, setup_logging
+
+ logger = logging.getLogger("chhavi")
+
+
+ def main() -> None:
+     """
+     Parse CLI args and run the conversion pipeline.
+     """
+
+     parser = argparse.ArgumentParser(
+         description="VTKHDF AMR Generator from RAMSES Data (refactored)"
+     )
+
+     # Required inputs
+     parser.add_argument(
+         "--base-dir",
+         type=str,
+         required=True,
+         help="Base directory containing simulation folders (REQUIRED)",
+     )
+     parser.add_argument(
+         "--folder-name",
+         type=str,
+         required=True,
+         help="Folder inside base_dir to process (REQUIRED)",
+     )
+     parser.add_argument(
+         "-n",
+         "--numbers",
+         type=parse_output_numbers,
+         required=True,
+         help="Output numbers like '1', '1,3,5' or '2-7' (REQUIRED)",
+     )
+
+     # Output and level selection
+     parser.add_argument(
+         "-o",
+         "--output-prefix",
+         dest="output_prefix",
+         default="overlapping_amr",
+         help="Output file prefix (default: overlapping_amr)",
+     )
+     parser.add_argument(
+         "--level-start",
+         type=int,
+         default=None,
+         help="Minimum AMR level to include (inclusive). Optional.",
+     )
+     parser.add_argument(
+         "--level-end",
+         type=int,
+         default=None,
+         help="Maximum AMR level to include (inclusive). Optional.",
+     )
+
+     # Normalized ranges (single arg per axis)
+     parser.add_argument(
+         "--x-range",
+         type=parse_norm_range,
+         default=None,
+         help="Normalized x range 'min:max' (e.g., 0.2:0.8, :0.7, 0.1:, :).",
+     )
+     parser.add_argument(
+         "--y-range", type=parse_norm_range, default=None, help="Normalized y range 'min:max'."
+     )
+     parser.add_argument(
+         "--z-range", type=parse_norm_range, default=None, help="Normalized z range 'min:max'."
+     )
+
+     # Field selection (replaces per-field enable flags with a single argument)
+     parser.add_argument(
+         "--fields",
+         type=parse_fields_arg,
+         default=None,
+         help="Comma-separated list of fields to include (e.g. density,velocity,magnetic_field). If omitted, sensible defaults are used.",
+     )
+
+     parser.add_argument(
+         "--list-fields",
+         action="store_true",
+         help="List available fields in the first requested snapshot and exit.",
+     )
+
+     # Utility flags
+     parser.add_argument(
+         "--dry-run", action="store_true", help="Print plan without writing files."
+     )
+     parser.add_argument("--verbose", action="store_true", help="Verbose logging.")
+     parser.add_argument(
+         "--nproc",
+         type=int,
+         default=None,
+         help="Number of CPU cores to use for parallel conversion (default: 1)",
+     )
+     parser.add_argument(
+         "--output-dir",
+         type=str,
+         default=None,
+         help="Directory to store .vtkhdf output files (default: input directory)",
+     )
170
+ args = parser.parse_args()
171
+
172
+ # Configure logging early
173
+ setup_logging(args.verbose)
174
+
175
+ # Build absolute input folder path and validate
176
+ input_folder = os.path.join(os.path.abspath(args.base_dir), args.folder_name)
177
+
178
+ if not os.path.exists(input_folder):
179
+ logger.error("Input folder not found: %s", input_folder)
180
+ raise FileNotFoundError(f"Input folder not found: {input_folder}")
181
+
182
+ # Validate nproc argument (positive integer if provided)
183
+ if args.nproc is not None and args.nproc <= 0:
184
+ parser.error(f"Invalid --nproc value: {args.nproc}. Must be a positive integer.")
185
+
186
+ # Validate output directory existence or create it
187
+ output_dir = args.output_dir if args.output_dir is not None else input_folder
188
+ if not os.path.exists(output_dir):
189
+ try:
190
+ os.makedirs(output_dir, exist_ok=True)
191
+ logger.info("Created output directory: %s", output_dir)
192
+ except Exception as e:
193
+ parser.error(f"Failed to create output directory '{output_dir}': {e}")
194
+
195
+ # Check level ranges
196
+ if args.level_start is not None and args.level_end is not None:
197
+ if args.level_end < args.level_start:
198
+ parser.error(
199
+ f"Invalid level range: end ({args.level_end}) < start ({args.level_start})."
200
+ )
201
+
202
+ # Normalize default ranges: None -> (None,None)
203
+ def norm_default(r):
204
+ return (None, None) if r is None else r
205
+
206
+ x_range = norm_default(args.x_range)
207
+ y_range = norm_default(args.y_range)
208
+ z_range = norm_default(args.z_range)
209
+
210
+ # If user requested to list fields, inspect the first snapshot in args.numbers
211
+ if args.list_fields:
212
+ first_num = args.numbers[0]
213
+ logger.info("Listing fields for snapshot %s in folder '%s'...", first_num, input_folder)
214
+ fields = list_fields_for_snapshot(input_folder, first_num)
215
+
216
+ if fields:
217
+ print("Available fields (best-effort):")
218
+ for f in fields:
219
+ print(" -", f)
220
+ else:
221
+ print("No fields discovered (see logs for details).")
222
+ return
223
+
224
+ # Run conversion (fields argument may be None meaning use defaults+auto-detect)
225
+ try:
226
+ run_parallel_conversion(
227
+ output_numbers=args.numbers,
228
+ input_folder=input_folder,
229
+ output_prefix=args.output_prefix,
230
+ fields=args.fields,
231
+ level_start=args.level_start,
232
+ level_end=args.level_end,
233
+ x_range_norm=x_range,
234
+ y_range_norm=y_range,
235
+ z_range_norm=z_range,
236
+ dry_run=args.dry_run,
237
+ verbose=args.verbose,
238
+ nproc=args.nproc,
239
+ output_directory=output_dir,
240
+ )
241
+ except Exception as e:
242
+ logger.exception("FATAL: Unexpected error: %s", e)
243
+ raise
244
+
245
+
246
+ if __name__ == "__main__":
247
+ main()