hyper-py-photometry 0.1.6__tar.gz → 1.0.1__tar.gz

Files changed (80)
  1. {hyper_py_photometry-0.1.6/src/hyper_py_photometry.egg-info → hyper_py_photometry-1.0.1}/PKG-INFO +1 -1
  2. hyper_py_photometry-1.0.1/paper/paper.md +72 -0
  3. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/data_output.py +1 -1
  4. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/detection.py +2 -3
  5. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/map_io.py +4 -2
  6. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/single_map.py +45 -18
  7. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1/src/hyper_py_photometry.egg-info}/PKG-INFO +1 -1
  8. hyper_py_photometry-0.1.6/paper/paper.md +0 -73
  9. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/.github/workflows/Hyper-py_paper.yml +0 -0
  10. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/.github/workflows/pypi-publish.yml +0 -0
  11. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/.gitignore +0 -0
  12. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/.vscode/launch.json +0 -0
  13. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/Gaussian_comparison_Hyper_py_IDL_ALL.py +0 -0
  14. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/ellipses_map_500_Gaussians_1_centroids.reg +0 -0
  15. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/ellipses_map_500_Gaussians_1_ellipses.reg +0 -0
  16. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/ellipses_map_500_Gaussians_2_centroids.reg +0 -0
  17. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/ellipses_map_500_Gaussians_2_ellipses.reg +0 -0
  18. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/hyper_output_map_500_Gaussians_1.txt +0 -0
  19. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/hyper_output_map_500_Gaussians_2.txt +0 -0
  20. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/map_500_Gaussians_1.fits +0 -0
  21. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/map_500_Gaussians_2.fits +0 -0
  22. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/matched_flux_comparison_table.txt +0 -0
  23. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_Int_1.png +0 -0
  24. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_Int_2.png +0 -0
  25. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_Peak_1.png +0 -0
  26. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_Peak_2.png +0 -0
  27. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_vs_py_Int_1.png +0 -0
  28. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_vs_py_Int_2.png +0 -0
  29. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_vs_py_Peak_1.png +0 -0
  30. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_IDL_vs_py_Peak_2.png +0 -0
  31. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_py_Int_1.png +0 -0
  32. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_py_Int_2.png +0 -0
  33. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_py_Peak_1.png +0 -0
  34. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Comparison_Hyper_py_Peak_2.png +0 -0
  35. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Diff_Histogram_Int.png +0 -0
  36. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/Flux_Diff_Histogram_Peak.png +0 -0
  37. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/combined_source_counts_comparison.txt +0 -0
  38. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/output/matched_flux_comparison_table.txt +0 -0
  39. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/photometry_sources_1300_ellipses_1300_polynomial_background_4sigma_ipac_1.txt +0 -0
  40. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/photometry_sources_1300_ellipses_1300_polynomial_background_4sigma_ipac_2.txt +0 -0
  41. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/table_500_Gaussians_1.txt +0 -0
  42. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/IDL_comparison/table_500_Gaussians_2.txt +0 -0
  43. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/LICENSE +0 -0
  44. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/README.md +0 -0
  45. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/environment.yml +0 -0
  46. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/paper/Figures/Flux_Diff_Histogram_Int.png +0 -0
  47. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/paper/Figures/Flux_Diff_Histogram_Peak.png +0 -0
  48. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/paper/paper.bib +0 -0
  49. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/pyproject.toml +0 -0
  50. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/requirements.txt +0 -0
  51. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/setup.cfg +0 -0
  52. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/__init__.py +0 -0
  53. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/__main__.py +0 -0
  54. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/assets/default_config.yaml +0 -0
  55. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/bkg_multigauss.py +0 -0
  56. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/bkg_single.py +0 -0
  57. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/config.py +0 -0
  58. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/create_background_slices.py +0 -0
  59. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/extract_cubes.py +0 -0
  60. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/fitting.py +0 -0
  61. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/gaussfit.py +0 -0
  62. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/groups.py +0 -0
  63. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/hyper.py +0 -0
  64. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/logger.py +0 -0
  65. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/paths_io.py +0 -0
  66. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/photometry.py +0 -0
  67. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/run_hyper.py +0 -0
  68. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/survey.py +0 -0
  69. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py/visualization.py +0 -0
  70. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py_photometry.egg-info/SOURCES.txt +0 -0
  71. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py_photometry.egg-info/dependency_links.txt +0 -0
  72. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py_photometry.egg-info/entry_points.txt +0 -0
  73. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py_photometry.egg-info/requires.txt +0 -0
  74. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/src/hyper_py_photometry.egg-info/top_level.txt +0 -0
  75. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/test/maps/test_2d_map_1.fits +0 -0
  76. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/test/maps/test_2d_map_2.fits +0 -0
  77. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/test/maps/test_2dmap.txt +0 -0
  78. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/test/maps/test_datacube.fits +0 -0
  79. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/test/maps/test_datacube.txt +0 -0
  80. {hyper_py_photometry-0.1.6 → hyper_py_photometry-1.0.1}/test/test_hyper.py +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: hyper-py-photometry
-Version: 0.1.6
+Version: 1.0.1
 Summary: HYPER: Hybrid Photometry and Extraction Routine
 Author-email: Alessio Traficante <alessio.traficante@inaf.it>
 Project-URL: Homepage, https://github.com/alessio-traficante/hyper-py
@@ -0,0 +1,72 @@
+---
+title: 'HYPER-PY: HYbrid Photometry and Extraction Routine in PYthon'
+tags:
+  - Python
+  - astronomy
+  - photometry
+  - astronomy tools
+authors:
+  - name: Alessio Traficante
+    orcid: 0000-0003-1665-6402
+    affiliation: 1
+    corresponding: true
+  - name: Fabrizio De Angelis
+    affiliation: 1
+  - name: Alice Nucara
+    affiliation: 1
+  - name: Milena Benedettini
+    affiliation: 1
+affiliations:
+  - name: INAF-IAPS, Via Fosso del Cavaliere, 100, 00133 Rome (IT)
+    index: 1
+date: 02 September 2025
+repository: https://github.com/Alessio-Traficante/hyper-py
+bibliography: paper.bib
+---
+
+
+
+# Summary
+
+Source extraction and photometry of compact objects are key tasks in observational astronomy. Numerous tools have been developed to tackle the complexities of astronomical data, especially for precise background estimation and source deblending, which are essential for reliable flux measurements across wavelengths (e.g., Cutex: @Molinari11; getsources: @Menshinkov12; Fellwalker: @Berry15; Astrodendro). These challenges are particularly significant in star-forming regions, best observed in the far-infrared (FIR), sub-millimeter, and millimeter bands, where cold, dense compact sources emit strongly. To address this, several software packages have been designed for handling the structured backgrounds and blended sources typical of observations by instruments like Herschel (70–500 μm) and ALMA (1–3 mm). These tools differ in detection and flux estimation approaches. Within this context, we developed HYPER (HYbrid Photometry and Extraction Routine, @Traficante15), originally implemented in IDL, aiming to deliver robust and reproducible photometry of compact sources in FIR/sub-mm/mm maps. HYPER combines source detection via high-pass filtering, background estimation through local polynomial fitting, and source modeling with 2D elliptical Gaussians, simultaneously fitting multiple Gaussians to deblend overlapping sources.
+
+Aperture photometry in **HYPER** is performed on background- and companion-subtracted images, using footprints defined by the source’s 2D Gaussian models, ensuring robust flux measurements in crowded or structured environments (@Traficante15; @Traficante23). The hybrid approach combines parametric Gaussian modeling with classical aperture photometry.
+Here, we present **Hyper-Py**, a fully restructured and extended Python implementation of **HYPER**. **Hyper-Py** preserves the original logic while offering improvements in performance, configurability, and background modeling capabilities, making it a flexible modern tool for source extraction and photometry across diverse datasets. Notably, **Hyper-Py** enables background estimation and subtraction across individual slices of 3D datacubes, allowing consistent background modeling along the spectral axis for line or continuum studies in spectrally resolved observations.
+
+
+
+# Statement of need
+*Hyper-Py* is an open-source Python package freely available to the community. This new implementation builds upon and improves the original IDL **HYPER** by incorporating several major advancements:
+
+**Parallel execution for multi-map analysis**. *Hyper-Py* employs built-in parallelization in which each input map is assigned to its own processing core on multi-core systems, so that the complete photometric pipeline runs concurrently on different maps. This parallel framework dramatically increases computational efficiency without altering the results for any individual map.
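
For illustration, a minimal sketch of this map-level parallelism, with a hypothetical `process_map` function standing in for the full per-map pipeline (this is not Hyper-Py's actual API):

```python
# Minimal sketch of map-level parallelism; process_map is a hypothetical
# stand-in for the full per-map pipeline (detection, background fitting,
# Gaussian modeling, aperture photometry).
from concurrent.futures import ProcessPoolExecutor

def process_map(fits_path: str) -> str:
    return f"photometry table for {fits_path}"  # placeholder result

if __name__ == "__main__":
    maps = ["map_1.fits", "map_2.fits", "map_3.fits"]
    with ProcessPoolExecutor() as pool:          # one map per worker process
        results = list(pool.map(process_map, maps))
    print(results)
```
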
+
+**Native support for FITS datacubes**. The software treats each slice along the third axis as an independent 2D map, compatible with parallel processing, allowing simultaneous background subtraction per slice. The output is a 3D background cube matching the input cube’s shape, configurable for targeted regions or full spatial coverage through a user-friendly configuration file. This capability provides flexibility for both line-specific and broader continuum background modeling.
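
A sketch of the per-slice idea under simplified assumptions (a toy cube and a median background per slice, rather than Hyper-Py's polynomial fits):

```python
# Toy illustration: each slice along the spectral axis is treated as an
# independent 2D map, and the per-slice backgrounds are reassembled into a
# cube with the same shape as the input.
import numpy as np

def fit_slice_background(slice_2d: np.ndarray) -> np.ndarray:
    # Stand-in background model: a constant (the slice median).
    return np.full_like(slice_2d, np.nanmedian(slice_2d))

cube = np.random.default_rng(0).normal(size=(5, 64, 64))     # (n_chan, ny, nx)
background_cube = np.empty_like(cube)
for k in range(cube.shape[0]):          # slices are independent, so this loop
    background_cube[k] = fit_slice_background(cube[k])       # parallelizes trivially
print(background_cube.shape)            # (5, 64, 64), matching the input cube
```
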
+
+**Improved source detection reliability**. Source detection has been enhanced with a robust sigma-clipping algorithm that iteratively estimates the root mean square (*rms*) noise of the input maps, excluding outliers so that background fluctuations are characterized accurately even when bright sources or structures are present. This *rms* serves as a threshold reference for detecting compact sources exceeding a configurable significance level (*n_sigma* × *rms*), settable via the config file. This refinement increases detection reliability and reproducibility across heterogeneous datasets.
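
The recipe matches the `single_map.py` hunk further down; a self-contained version on synthetic data (the `n_sigma` value here is an arbitrary example):

```python
# Sigma-clipped rms and detection threshold on a toy map with one bright source.
import numpy as np
from astropy.stats import SigmaClip

rng = np.random.default_rng(0)
real_map = rng.normal(0.0, 1.0, size=(128, 128))
real_map[60:68, 60:68] += 25.0                     # bright structure to be clipped

sigma_clip = SigmaClip(sigma=3.0, maxiters=10)
map_zero_mean = real_map - np.nanmean(real_map)    # remove any non-zero background level
clipped = sigma_clip(map_zero_mean, masked=False)  # clipped pixels become NaN
real_rms = np.sqrt(np.nanmean(clipped**2))

n_sigma = 5.0                                      # example value; configurable
threshold = n_sigma * real_rms
print(f"rms = {real_rms:.3f}, threshold = {threshold:.3f}")
```
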
+
+**Advanced background estimation strategy**. Unlike the original IDL implementation [@Traficante15], which modeled the background separately from source fitting, *Hyper-Py* supports multiple statistical fitting techniques, namely least-squares, Huber, and Theil–Sen regressions, applied to masked cutouts around each source. Least-squares performs well in regions dominated by Gaussian noise; Huber regression balances L2 and L1 losses to reduce outlier effects via a tunable parameter ε (huber_epsilons in the config file); and Theil–Sen is a non-parametric, robust method well suited to non-Gaussian noise or contamination. When multiple methods are enabled, *Hyper-Py* selects the best model by minimizing the residuals, ensuring accurate background reconstruction even in the presence of gradients or faint extended emission. Furthermore, an optional joint fit of the background and 2D Gaussian models with L2 (ridge) regularization stabilizes fits in regions with strong gradients, preventing the background from being overfitted at the expense of the source flux.
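
A minimal sketch of the multi-method strategy, using scikit-learn's `HuberRegressor` and `TheilSenRegressor` as stand-ins for the regressions named above (first-order planar background on a toy cutout; Hyper-Py's internals may differ):

```python
# Fit a planar background to the unmasked pixels with three estimators and
# keep the one with the smallest residuals on those pixels.
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor, TheilSenRegressor

rng = np.random.default_rng(1)
ny, nx = 40, 40
yy, xx = np.mgrid[0:ny, 0:nx]
data = 0.05 * xx - 0.02 * yy + 1.0 + rng.normal(0.0, 0.1, size=(ny, nx))

source_mask = np.zeros((ny, nx), dtype=bool)
source_mask[15:25, 15:25] = True                   # pixels excluded from the fit

X = np.column_stack([xx[~source_mask], yy[~source_mask]])  # first-order terms
z = data[~source_mask]

best_name, best_rss = None, np.inf
for name, model in [("least_squares", LinearRegression()),
                    ("huber", HuberRegressor(epsilon=1.35)),
                    ("theil_sen", TheilSenRegressor(random_state=0))]:
    model.fit(X, z)
    rss = np.sum((z - model.predict(X))**2)        # residuals on unmasked pixels
    if rss < best_rss:
        best_name, best_rss = name, rss
print("selected background model:", best_name)
```
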
+
+**Gaussian plus background model optimization strategy**. *Hyper-Py* performs the combined fit with the lmfit package’s “least_squares” minimizer, a wrapper around scipy.optimize.least_squares, which allows control over the cost function’s residual weighting through different loss models. The default “cauchy” loss diminishes the influence of outliers, improving robustness to data artifacts, unmasked sources, and non-Gaussian noise. Alternatives such as “soft_l1” and “huber” are also available for dataset-specific optimization.
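
A minimal sketch of this fitting setup with a 1-D Gaussian-plus-offset toy model (the loss choice is the point; the model and data are illustrative):

```python
# lmfit "least_squares" minimizer with a robust "cauchy" loss; outliers are
# down-weighted relative to a plain linear loss.
import numpy as np
import lmfit

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 200)
y = 3.0 * np.exp(-0.5 * (x / 0.8)**2) + 0.2 + rng.normal(0.0, 0.05, x.size)
y[::37] += 2.0                                   # artificial outliers

def residual(params, x, y):
    v = params.valuesdict()
    model = v["amp"] * np.exp(-0.5 * (x / v["sigma"])**2) + v["offset"]
    return model - y

params = lmfit.Parameters()
params.add("amp", value=1.0, min=0.0)
params.add("sigma", value=1.0, min=0.1)
params.add("offset", value=0.0)

# loss="cauchy" as described above; "soft_l1", "huber", or "linear" also work.
result = lmfit.minimize(residual, params, args=(x, y),
                        method="least_squares", loss="cauchy")
print(lmfit.fit_report(result))
```
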
+
+**Model selection criteria for background fitting**. The criterion used to select the best background model is configurable: Normalized Mean Squared Error (NMSE), reduced chi-squared ($\chi^2_\nu$), or the Bayesian Information Criterion (BIC), chosen via the config file. NMSE is the default because it is robust, unitless, and independent of scale and weighting, which makes it reliable under varying masking or pixel-weighting schemes. Reduced chi-squared depends on the assumed noise model and on the number of contributing pixels, and can therefore bias the selection. BIC penalizes model complexity, favoring the simpler model when residuals are comparable. These options allow users to select the criterion best suited to their scientific aims and to the noise properties of their data.
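
For concreteness, the three metrics can be computed as below over the valid pixels of one background fit (the exact normalizations inside Hyper-Py may differ in detail):

```python
# NMSE, reduced chi-squared, and BIC for a single background model.
import numpy as np

def nmse(data, model):
    resid = data - model
    return np.nansum(resid**2) / np.nansum(data**2)

def reduced_chi2(data, model, sigma, n_params):
    resid = (data - model) / sigma
    dof = np.sum(np.isfinite(resid)) - n_params    # valid pixels minus parameters
    return np.nansum(resid**2) / dof

def bic(data, model, n_params):
    resid = data - model
    n = np.sum(np.isfinite(resid))
    rss = np.nansum(resid**2)
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(3)
data = 1.0 + rng.normal(0.0, 0.1, size=(32, 32))
model = np.full_like(data, 1.0)                    # zeroth-order background
print(nmse(data, model), reduced_chi2(data, model, 0.1, 1), bic(data, model, 1))
```
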
+
+**Improved user configurability**. *Hyper-Py* is designed to be more user-friendly, featuring a clear and well-documented configuration file. This allows users to adapt the full photometric workflow to a wide range of observational conditions and scientific goals by modifying only a minimal set of parameters.
+
+We assessed *Hyper-Py* performance using extensive simulations. Starting from a noise-only map based on ALMA program #2022.1.0917.S, we generated two maps with reference headers and superimposed varying backgrounds plus 500 synthetic 2D Gaussian sources. These sources mimic real compact objects, with integrated fluxes spanning 8–20 times the map *rms* (peak fluxes of roughly 1–1.5 × *rms*) and FWHM sizes of 0.5–1.5 times the beam size, so as to include both unresolved and moderately extended sources (@Elia21). Random position angles and a minimum 30% overlap ensured realistic blending, providing a rigorous test of the code under realistic and challenging conditions.
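
A sketch of this kind of source injection, with illustrative parameters rather than the exact simulation setup:

```python
# Inject elliptical 2-D Gaussians with random position angles into a noise map.
import numpy as np
from astropy.modeling.models import Gaussian2D

rng = np.random.default_rng(4)
ny, nx = 256, 256
rms = 1.0
noise_map = rng.normal(0.0, rms, size=(ny, nx))
yy, xx = np.mgrid[0:ny, 0:nx]

beam_sigma_pix = 3.0                          # beam FWHM / 2.355, in pixels (example)
for _ in range(50):
    g = Gaussian2D(
        amplitude=rng.uniform(1.0, 1.5) * rms,        # peaks ~1-1.5 x rms
        x_mean=rng.uniform(10, nx - 10),
        y_mean=rng.uniform(10, ny - 10),
        x_stddev=rng.uniform(0.5, 1.5) * beam_sigma_pix,  # 0.5-1.5 x beam
        y_stddev=rng.uniform(0.5, 1.5) * beam_sigma_pix,
        theta=rng.uniform(0, np.pi),                  # random position angle
    )
    noise_map += g(xx, yy)
```
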
+We compared the original IDL **HYPER** and *Hyper-Py* under equivalent configurations, with the latter benefiting from improved background estimation, optional regularization, and parallel processing. The key results are presented in **Table 1**, detailing differences in source identification and false positives between the codes.
+
+
+| Catalog | Source Type | Total | Matched | False | False Percentage |
+|--------:|:------------|------:|--------:|------:|------------------:|
+| 1 | *Hyper-Py* | 500 | 490 | 4 | 0.8% |
+| 1 | **HYPER (IDL)** | 500 | 493 | 73 | 12.9% |
+| 2 | *Hyper-Py* | 500 | 487 | 4 | 0.8% |
+| 2 | **HYPER (IDL)** | 500 | 487 | 46 | 8.6% |
+
+In addition, Figure 1 and Figure 2 show, for both *Hyper-Py* and HYPER, the differences between the recovered peak fluxes (Figure 1) and integrated fluxes (Figure 2) and the reference values of the simulated input sources.
+
+*Hyper-Py* is freely available for download via its [GitHub repository](https://github.com/Alessio-Traficante/hyper-py), and can also be installed using pip, as described in the accompanying README file.
+
+![Histogram of the differences between the peak fluxes of the sources as recovered by *Hyper-Py* and HYPER, respectively, and the reference values of the simulated input sources.](Figures/Flux_Diff_Histogram_Peak.png)
+
+![Histogram of the differences between the integrated fluxes of the sources as recovered by *Hyper-Py* and HYPER, respectively, and the reference values of the simulated input sources.](Figures/Flux_Diff_Histogram_Int.png)
@@ -23,7 +23,7 @@ def write_tables(data_dict, output_dir, config, sigma_thres, real_rms, base_file
         flux_units_beam = 'mJy/beam'
     else:
         flux_units_beam = 'Jy/beam'
-    flux_units = 'Jy/beam'
+    flux_units = 'Jy'
 
     units = {
         'MAP_ID': '', 'HYPER_ID': '', 'BAND': 'GHz',
@@ -3,8 +3,6 @@ from astropy.stats import sigma_clipped_stats
 from photutils.detection import DAOStarFinder
 from scipy.ndimage import convolve
 from astropy.table import Table
-from astropy.wcs import WCS
-
 
 
 def select_channel_map(map_struct):
@@ -52,7 +50,8 @@ def estimate_rms(image, sigma_clip=3.0):
     return sigma
 
 
-def detect_peaks(filtered_image, threshold, fwhm_pix, roundlim=(-1.0, 1.0), sharplim=(-1.0, 2.0)):
+def detect_peaks(filtered_image, threshold, fwhm_pix, roundlim=(-1.0, 1.0), sharplim=(-1.0, 2.0)):
+
     finder = DAOStarFinder(
         threshold=threshold,
         fwhm=fwhm_pix,
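
For context, a self-contained sketch of how a `DAOStarFinder`-based detection like the one configured above is typically run (toy image; the threshold follows the *n_sigma* × *rms* convention from the paper text, and `roundlim`/`sharplim` are assumed to map onto DAOStarFinder's `roundlo`/`roundhi` and `sharplo`/`sharphi` keywords):

```python
# Minimal DAOStarFinder run on a synthetic image with one compact source.
import numpy as np
from photutils.detection import DAOStarFinder

rng = np.random.default_rng(5)
image = rng.normal(0.0, 1.0, size=(100, 100))
yy, xx = np.mgrid[0:100, 0:100]
image += 8.0 * np.exp(-((xx - 50)**2 + (yy - 50)**2) / (2 * 2.0**2))  # one source

rms, fwhm_pix, n_sigma = 1.0, 4.7, 5.0
finder = DAOStarFinder(threshold=n_sigma * rms, fwhm=fwhm_pix,
                       roundlo=-1.0, roundhi=1.0, sharplo=-1.0, sharphi=2.0)
sources = finder(image)                          # astropy Table, or None
print(sources["xcentroid", "ycentroid", "peak"] if sources else "no detections")
```
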
@@ -50,8 +50,10 @@ def read_and_prepare_map(filepath, beam, beam_area_arcsec2, beam_area_sr, conver
 
 
     # --- Unit conversions ---
-    bunit = header.get('BUNIT')
-    if bunit == 'MJy /sr':
+    header_for_units = hdul[0].header
+    bunit = header_for_units.cards['BUNIT'].value
+
+    if bunit == 'MJy/sr' or bunit == 'MJy / sr':
         arcsec_to_rad = np.pi / (180.0 * 3600.0)
         pix_area_sr = (pix_dim * arcsec_to_rad)**2
         image_data *= 1e6 * pix_area_sr  # MJy/sr to Jy/pixel
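
As a worked example of the conversion applied above (the pixel size is an assumed example value):

```python
# A map in MJy/sr is scaled to Jy/pixel using the solid angle of one pixel.
import numpy as np

pix_dim = 2.0                                    # pixel size in arcsec (example value)
arcsec_to_rad = np.pi / (180.0 * 3600.0)
pix_area_sr = (pix_dim * arcsec_to_rad)**2       # pixel solid angle in steradians

value_mjy_sr = 10.0                              # one pixel, in MJy/sr
value_jy_pix = value_mjy_sr * 1e6 * pix_area_sr  # 1 MJy = 1e6 Jy
print(f"{value_jy_pix:.3e} Jy/pixel")
```
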
@@ -95,7 +95,7 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
     flux_err = []
 
 
-    fit_statuts_val = []
+    fit_status_val = []
     deblend_val = []
     cluster_val = []
 
@@ -143,14 +143,16 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
 
 
     # --- map rms used to define real sources in the map - accounting for non-zero background --- #
+    map_zero_mean_detect = real_map - np.nanmean(real_map)
+
     use_maual_rms = cfg.get("detection", "use_manual_rms", False)
     if use_maual_rms == True:
         real_rms = cfg.get("detection", "rms_value", False)
     else:
         sigma_clip = SigmaClip(sigma=3.0, maxiters=10)
-        map_zero_mean_detect = real_map - np.nanmean(real_map)
         clipped = sigma_clip(map_zero_mean_detect)
         real_rms = np.sqrt(np.nanmean(clipped**2))
+
 
 
     # --- run sources identification --- #
@@ -234,12 +236,24 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
     # Convert to sky coordinates
     x_pixels = np.array(xcen, dtype=np.float64)
     y_pixels = np.array(ycen, dtype=np.float64)
-    ra, dec = wcs.wcs_pix2world(x_pixels, y_pixels, 0)
-    ra_save = np.where(ra < 0., ra + 360., ra)
-
-    skycoords = SkyCoord(ra=ra, dec=dec, unit='deg', frame='icrs')
-    glon = skycoords.galactic.l.deg
-    glat = skycoords.galactic.b.deg
+
+    ctype1 = header.get('CTYPE1', '').upper()
+    ctype2 = header.get('CTYPE2', '').upper()
+
+    if 'GLON' in ctype1 and 'GLAT' in ctype2:
+        # Input coordinates are Galactic
+        glon, glat = wcs.wcs_pix2world(x_pixels, y_pixels, 0)
+        skycoords = SkyCoord(l=glon, b=glat, unit='deg', frame='galactic')
+        ra = skycoords.fk5.ra.deg
+        dec = skycoords.fk5.dec.deg
+        ra_save = np.where(ra < 0., ra + 360., ra)
+    else:
+        # Assume input coordinates are Equatorial (FK5/ICRS)
+        ra, dec = wcs.wcs_pix2world(x_pixels, y_pixels, 0)
+        ra_save = np.where(ra < 0., ra + 360., ra)
+        skycoords = SkyCoord(ra=ra, dec=dec, unit='deg', frame='fk5')
+        glon = skycoords.galactic.l.deg
+        glat = skycoords.galactic.b.deg
 
     # Prepare zeroed output table
     N = len(xcen)
@@ -386,7 +400,7 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
         redchi_val.append(final_redchi)
         bic_val.append(final_bic)
 
-        fit_statuts_val.append(fit_status)
+        fit_status_val.append(fit_status)
         deblend_val.append(0)  # not deblended
         cluster_val.append(1)  # only one source
 
@@ -568,7 +582,7 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
         redchi_val.append(final_redchi)
         bic_val.append(final_bic)
 
-        fit_statuts_val.append(fit_status)
+        fit_status_val.append(fit_status)
         deblend_val.append(1)  # multi-Gaussian fit
         cluster_val.append(len(group_indices))  # number of sources in the group
 
@@ -586,18 +600,31 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
 
     # Assuming you have your WCS object (usually from your FITS header)
     header = map_struct["header"]
-    # Convert pixel coordinates to sky coordinates (RA, Dec)
     x_pixels = np.array(updated_xcen, dtype=np.float64)
     y_pixels = np.array(updated_ycen, dtype=np.float64)
 
     # Initialize WCS from header
     wcs = WCS(header)
-    ra, dec = wcs.wcs_pix2world(x_pixels, y_pixels, 0)
-    ra_save = np.where(ra < 0., ra + 360., ra)
-
-    skycoords = SkyCoord(ra=ra, dec=dec, unit='deg', frame='icrs')
-    glon = skycoords.galactic.l.deg
-    glat = skycoords.galactic.b.deg
+
+    # Detect coordinate type from header
+    ctype1 = header.get('CTYPE1', '').upper()
+    ctype2 = header.get('CTYPE2', '').upper()
+
+    if 'GLON' in ctype1 and 'GLAT' in ctype2:
+        # Input coordinates are Galactic
+        glon, glat = wcs.wcs_pix2world(x_pixels, y_pixels, 0)
+        # Convert to FK5 (RA/Dec)
+        skycoords = SkyCoord(l=glon, b=glat, unit='deg', frame='galactic')
+        ra = skycoords.fk5.ra.deg
+        dec = skycoords.fk5.dec.deg
+        ra_save = np.where(ra < 0., ra + 360., ra)
+    else:
+        # Assume input coordinates are Equatorial (FK5/ICRS)
+        ra, dec = wcs.wcs_pix2world(x_pixels, y_pixels, 0)
+        ra_save = np.where(ra < 0., ra + 360., ra)
+        skycoords = SkyCoord(ra=ra, dec=dec, unit='deg', frame='fk5')
+        glon = skycoords.galactic.l.deg
+        glat = skycoords.galactic.b.deg
 
 
     ######################## Write Table after photometry ########################
@@ -639,7 +666,7 @@ def main(map_name=None, cfg=None, dir_root=None, logger=None, logger_file_only=N
         "FWHM_1": list(radius_val_1),
         "FWHM_2": list(radius_val_2),
         "PA": list(PA_val),
-        "STATUS": list(fit_statuts_val),
+        "STATUS": list(fit_status_val),
         "GLON": list(glon),
         "GLAT": list(glat),
         "RA": list(ra_save),
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: hyper-py-photometry
-Version: 0.1.6
+Version: 1.0.1
 Summary: HYPER: Hybrid Photometry and Extraction Routine
 Author-email: Alessio Traficante <alessio.traficante@inaf.it>
 Project-URL: Homepage, https://github.com/alessio-traficante/hyper-py
@@ -1,73 +0,0 @@
----
-title: 'HYPER-PY: HYbrid Photometry and Extraction Routine in PYthon'
-tags:
-  - Python
-  - astronomy
-  - photometry
-  - astronomy tools
-authors:
-  - name: Alessio Traficante
-    orcid: 0000-0003-1665-6402
-    affiliation: 1
-    corresponding: true
-  - name: Fabrizio De Angelis
-    affiliation: 1
-  - name: Alice Nucara
-    affiliation: 1
-  - name: Milena Benedettini
-    affiliation: 1
-affiliations:
-  - name: INAF-IAPS, Via Fosso del Cavaliere, 100, 00133 Rome (IT)
-    index: 1
-date: 02 September 2025
-repository: https://github.com/Alessio-Traficante/hyper-py
-bibliography: paper.bib
----
-
-
-
-# Summary
-
-Source extraction and photometry of compact objects are fundamental tasks in observational astronomy. Over the years, various tools have been developed to address the inherent complexity of astronomical data—particularly for accurate background estimation and removal, and for deblending nearby sources to ensure reliable flux measurements across multiple wavelengths (e.g. Cutex: @Molinari11; getsources: @Menshinkov12; Fellwalker: @Berry15; [Astrodendro](http://www.dendrograms.org/)). These challenges are especially pronounced in star-forming regions, which are best observed in the far-infrared (FIR), sub-millimeter, and millimeter regimes, where the cold, dense envelopes of compact sources emit most strongly.
-To address these needs, several software packages have been designed to handle the structured backgrounds and blended source populations typical of observations with instruments such as Herschel (70–500 μm) and ALMA (1–3 mm). These packages differ significantly in their detection strategies and flux estimation methods. In this framework, we developed **HYPER** (HYbrid Photometry and Extraction Routine, @Traficante15), originally implemented in Interactive Data Language (IDL), with the goal of providing robust and reproducible photometry of compact sources in FIR/sub-mm/mm maps. **HYPER** combines: (1) source detection via high-pass filtering; (2) background estimation and removal through local polynomial fitting; and (3) source modeling using 2D elliptical Gaussians. For blended regions, HYPER fits multiple Gaussians simultaneously to deblend overlapping sources, subtracting companions before performing photometry.
-Aperture photometry in **HYPER** is then carried out on the background-subtracted, companion-subtracted images, using the footprint defined by each source’s 2D Gaussian model. This ensures a consistent and robust integrated flux measurement, even in crowded or strongly structured environments @Traficante15; @Traficante23.
-The hybrid nature of **HYPER** lies in its combined approach: using 2D Gaussian modeling, while retaining classical aperture photometry techniques.
-In this work, we present **Hyper-Py**, a fully restructured and extended version of **HYPER** developed entirely in Python. *Hyper-Py* not only replicates the core logic of the original IDL implementation, but introduces multiple improvements in performance, configurability, and background modeling capabilities—making it a modern and flexible tool for source extraction and photometric analysis across a wide range of datasets. Notably, *Hyper-Py* also introduces the ability to estimate and subtract the background emission across individual slices of 3D datacubes, enabling consistent background modeling along the spectral axis for line or continuum studies in spectrally resolved observations.
-
-
-
-# Statement of need
-*Hyper-Py* is a Python package freely accessible to the community. This new Python implementation is a conversion of the IDL version **HYPER** which includes several
-improvements to the original package:
-
-**Parallel execution for multi-map analysis**. *Hyper-Py* introduces built-in parallelization for the analysis of multiple input maps. On multi-core systems, each map is independently assigned to a dedicated core, allowing concurrent execution of the full photometric pipeline—source detection, background fitting, Gaussian modeling, and aperture photometry—for each map. This parallel framework substantially improves computational efficiency without altering the scientific output for individual maps.
-
-**Native support for the analysis of FITS datacubes**. The code enables users to treat each slice along the third axis as an independent 2D map. This functionality is fully compatible with the existing parallelization framework, allowing multiple slices to be processed simultaneously across different cores. The primary goal of this mode is to estimate the spatially-varying background emission on a per-slice basis. The final output is a reconstructed 3D background datacube with the same shape as the input cube. This background cube can either be focused on a specific region or line of sight—leaving all other voxels as NaNs—or computed across the full spatial extent of the cube. The region where to perform the extraction and related parameters are configurable via the config.yaml file, offering flexibility for both targeted and global background modeling.
-
-**Improved source detection reliability**. The source detection module has been significantly improved through the implementation of a more robust sigma-clipping algorithm for estimating the root mean square (*rms*) noise of the input map. This enhancement ensures a more accurate characterization of the background fluctuations by iteratively excluding outliers and recalculating the noise level, even in the presence of bright sources or residual structures. The resulting *rms* value is then used as a reference threshold to identify compact sources with peak intensities exceeding a user-defined significance level, set as *n_sigma* × *rms*, where *n_sigma* is configurable via the config.yaml file. This refinement improves the reliability and reproducibility of the source detection process across heterogeneous datasets.
-
-**Advanced background estimation strategy**. *Hyper-Py* introduces a robust and flexible background estimation framework designed to improve subtraction accuracy in complex regions and in case of blended sources. Unlike the original IDL version, which estimated and subtracted the background independently of source modeling @Traficante15, *Hyper-Py* supports multiple statistical methods for background fitting, applied to masked cutouts around each source. Users can select from least-squares regression, Huber regression, and Theil–Sen regression, either individually or in combination. The least-squares method is optimal in regions dominated by Gaussian noise. The Huber regressor provides robustness against outliers by interpolating between L2 and L1 loss functions, with the tuning parameter ε (huber_epsilons in the config file) controlling the transition. The Theil–Sen estimator is a non-parametric, highly robust approach particularly suited for non-Gaussian noise or residual contamination. When multiple methods are enabled, *Hyper-Py* evaluates all and selects the background model that minimizes the residuals within the unmasked region, ensuring accurate reconstruction even in the presence of variable gradients or faint extended emission. In addition, *Hyper-Py* offers an optional joint fit of the background and 2D elliptical Gaussians, which may improve stability or convergence in specific cases. When this combined fit is used, the background polynomial terms can be regularized using L2 (ridge) regression, helping suppress unphysically large coefficients. This constraint enhances the robustness of the background model in regions with strong intensity gradients or spatially variable emission, reducing the risk of overfitting the background at the expenses of the source flux estimation.
-
-**Gaussian + background model optimization strategy**. The combined fitting is optimized using the Levenberg–Marquardt algorithm, implemented via the "least_squares" minimizer from the lmfit package (equivalent to the scipy.optimize.least_squares function). This method allows flexible control over the cost function via the loss parameter, which modifies the residuals used during minimization.
-Specifically, we adopt a robust loss function, setting loss = "cauchy". This choice reduces the influence of outliers by scaling the residuals in a non-linear way, effectively down-weighting pixels that deviate significantly from the model. Compared to the standard least-squares loss ("linear"), robust losses like "cauchy" are better suited to data affected by small-scale artifacts, unmasked sources, or non-Gaussian noise features. Alternative loss functions such as "soft_l1" and "huber" are also available and may offer improved convergence speed in some cases, particularly when dealing with moderately noisy data.
-
-**Model selection criteria for background fitting**. In *Hyper-Py*, the optimal background model—i.e., the best combination of box size and polynomial order—is determined by evaluating the residuals between the observed data and the fitted model. The framework offers a configurable choice of model selection criteria, specified via the config.yaml file. Users can select from three statistical metrics: Normalized Mean Squared Error (NMSE), reduced chi-squared ($\chi^2_\nu$), or the Bayesian Information Criterion (BIC). By default, *Hyper-Py* adopts NMSE, a robust, unitless, and scale-independent metric that quantifies the fraction of residual power relative to the total signal power in the cutout. Unlike reduced chi-squared, NMSE is not sensitive to changes in pixel weighting schemes or the number of valid (unmasked) pixels, making it particularly reliable when background fits are performed under varying masking conditions, inverse-rms weighting, or SNR-based weighting. In contrast, the reduced chi-squared statistic depends directly on the assumed noise model and weighting, and may therefore be biased when the number or distribution of contributing pixels changes. The BIC criterion offers an alternative that penalizes overfitting by incorporating the number of model parameters and the sample size, favoring simpler models when multiple fits achieve similar residuals. This flexibility allows users to tailor the model selection strategy to the scientific context or noise characteristics of their data, ensuring that the chosen background model is both statistically sound and physically meaningful.
-
-**Improved user configurability**. *Hyper-Py* is designed to be more user-friendly, featuring a clear and well-documented configuration file. This allows users to adapt the full photometric workflow to a wide range of observational conditions and scientific goals by modifying only a minimal set of parameters. The modular structure of the configuration also enhances transparency and reproducibility in all stages of the analysis.
-
-We assessed the performance of the *Hyper-Py* pipeline using a dedicated suite of simulations. Specifically, we adopted a noise-only map derived from the ALMA program #2022.1.0917.S and produced two maps with this reference header. In each of these maps we injected a variable background and 500 synthetic 2D Gaussian sources into it. These sources were designed to emulate the properties of real, compact astronomical objects: integrated fluxes ranged between 8 and 20 times the map *rms* (corresponding to peak fluxes of approximately 1–1.5 times the *rms*), and FWHMs spanned from 0.5 to 1.5 times the beam size, as computed from the FITS header, to simulate both unresolved and moderately extended sources @Elia21. The source position angles were randomly assigned, and a minimum overlap fraction of 30% was imposed to ensure a significant level of blending, thus providing a rigorous test of the code under realistic and challenging conditions. We then compared the performance of the original IDL implementation of **HYPER** with the new Python-based *Hyper-Py* version. Both codes were run using equivalent configurations, with *Hyper-Py* additionally benefiting from its extended capabilities—such as improved background estimation, optional L2 regularization, and multi-core parallel processing. The main results of this comparison are presented in **Table 1**, where we show the differences between the number of identified sources and the false positives by *Hyper-Py* and **HYPER**, respectively.
-
-| Catalog | Source Type | Total | Matched | False | False Percentage |
-|--------:|:------------|------:|--------:|------:|------------------:|
-| 1 | *Hyper-py* | 500 | 490 | 4 | 0.8% |
-| 1 | **HYPER (IDL)** | 500 | 493 | 73 | 12.9% |
-| 2 | *Hyper-py* | 500 | 487 | 4 | 0.8% |
-| 2 | **HYPER (IDL)** | 500 | 487 | 46 | 8.6% |
-
-In addition, Figure 1 and Figure 2 show the differences between the peak fluxes and the integrated fluxes of the sources with respect to the reference values of the simulated input sources as estimated by *Hyper-Py* and HYPER, respectively.
-
-*Hyper-Py* is freely available for download via its [GitHub repository](https://github.com/Alessio-Traficante/hyper-py), and can also be installed using pip, as described in the accompanying README file.
-
-![histogram of the differences between the peak fluxes of the sources as recovered by *Hyper-Py* and HYPER, respectively, with respect to the reference values of the simulated input sources](Figures/Flux_Diff_Histogram_Peak.png)
-
-![histogram of the differences between the peak fluxes of the sources as recovered by *Hyper-Py* and HYPER, respectively, with respect to the reference values of the simulated input sources](Figures/Flux_Diff_Histogram_Int.png)