ExoIris 0.21.0__tar.gz → 0.22.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. {exoiris-0.21.0 → exoiris-0.22.0}/CHANGELOG.md +16 -0
  2. {exoiris-0.21.0 → exoiris-0.22.0}/ExoIris.egg-info/PKG-INFO +2 -1
  3. {exoiris-0.21.0 → exoiris-0.22.0}/ExoIris.egg-info/requires.txt +1 -0
  4. {exoiris-0.21.0 → exoiris-0.22.0}/PKG-INFO +2 -1
  5. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/exoiris.py +176 -54
  6. exoiris-0.22.0/exoiris/loglikelihood.py +139 -0
  7. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/tslpf.py +70 -9
  8. {exoiris-0.21.0 → exoiris-0.22.0}/requirements.txt +2 -1
  9. exoiris-0.21.0/exoiris/loglikelihood.py +0 -144
  10. {exoiris-0.21.0 → exoiris-0.22.0}/.github/workflows/python-package.yml +0 -0
  11. {exoiris-0.21.0 → exoiris-0.22.0}/.gitignore +0 -0
  12. {exoiris-0.21.0 → exoiris-0.22.0}/.readthedocs.yaml +0 -0
  13. {exoiris-0.21.0 → exoiris-0.22.0}/CODE_OF_CONDUCT.md +0 -0
  14. {exoiris-0.21.0 → exoiris-0.22.0}/ExoIris.egg-info/SOURCES.txt +0 -0
  15. {exoiris-0.21.0 → exoiris-0.22.0}/ExoIris.egg-info/dependency_links.txt +0 -0
  16. {exoiris-0.21.0 → exoiris-0.22.0}/ExoIris.egg-info/top_level.txt +0 -0
  17. {exoiris-0.21.0 → exoiris-0.22.0}/LICENSE +0 -0
  18. {exoiris-0.21.0 → exoiris-0.22.0}/README.md +0 -0
  19. {exoiris-0.21.0 → exoiris-0.22.0}/doc/Makefile +0 -0
  20. {exoiris-0.21.0 → exoiris-0.22.0}/doc/make.bat +0 -0
  21. {exoiris-0.21.0 → exoiris-0.22.0}/doc/requirements.txt +0 -0
  22. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/_static/css/custom.css +0 -0
  23. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/api/binning.rst +0 -0
  24. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/api/exoiris.rst +0 -0
  25. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/api/tsdata.rst +0 -0
  26. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/conf.py +0 -0
  27. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/01a_not_so_short_intro.ipynb +0 -0
  28. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/01b_short_intro.ipynb +0 -0
  29. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/02_increasing_knot_resolution.ipynb +0 -0
  30. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/03_increasing_data_resolution.ipynb +0 -0
  31. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/04_gaussian_processes.ipynb +0 -0
  32. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/05a_ldtkldm.ipynb +0 -0
  33. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/A2_full_data_resolution.ipynb +0 -0
  34. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/appendix_1_data_preparation.ipynb +0 -0
  35. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/data/README.txt +0 -0
  36. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/data/nirHiss_order_1.h5 +0 -0
  37. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/data/nirHiss_order_2.h5 +0 -0
  38. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/example1.png +0 -0
  39. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/e01/plot_1.ipynb +0 -0
  40. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/figures.ipynb +0 -0
  41. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/friendly_introduction.ipynb +0 -0
  42. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/index.rst +0 -0
  43. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/k_knot_example.svg +0 -0
  44. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/examples/setup_multiprocessing.py +0 -0
  45. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/index.rst +0 -0
  46. {exoiris-0.21.0 → exoiris-0.22.0}/doc/source/install.rst +0 -0
  47. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/__init__.py +0 -0
  48. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/binning.py +0 -0
  49. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/ephemeris.py +0 -0
  50. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/ldtkld.py +0 -0
  51. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/spotmodel.py +0 -0
  52. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/tsdata.py +0 -0
  53. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/tsmodel.py +0 -0
  54. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/util.py +0 -0
  55. {exoiris-0.21.0 → exoiris-0.22.0}/exoiris/wlpf.py +0 -0
  56. {exoiris-0.21.0 → exoiris-0.22.0}/pyproject.toml +0 -0
  57. {exoiris-0.21.0 → exoiris-0.22.0}/setup.cfg +0 -0
  58. {exoiris-0.21.0 → exoiris-0.22.0}/tests/test_binning.py +0 -0
@@ -5,6 +5,22 @@ All notable changes to ExoIris will be documented in this file.
 
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [0.22.0] - 2025-12-13
+
+### Added
+- Added support for free-k knot configuration in spline models.
+- Added SVD solver options in the `loglikelihood.LogLikelihood` class.
+- Added `bspline-quadratic` interpolation option.
+
+### Improved
+- Improved transmission spectrum and log-likelihood methods for robustness and performance.
+- Cleaned up and refactored the `LogLikelihood` class.
+- Updated limb darkening parameter plotting.
+
+### Fixed
+- Added a safety check for uninitialized `white_gp_models` in white
+  light curve processing.
+
 ## [0.21.0] - 2025-11-24
 
 ### Added
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: ExoIris
-Version: 0.21.0
+Version: 0.22.0
 Summary: Easy and robust exoplanet transmission spectroscopy.
 Author-email: Hannu Parviainen <hannu@iac.es>
 License: GPLv3
@@ -29,6 +29,7 @@ Requires-Dist: xarray
 Requires-Dist: seaborn
 Requires-Dist: astropy
 Requires-Dist: uncertainties
+Requires-Dist: scikit-learn
 Dynamic: license-file
 
 # ExoIris: Fast and Flexible Transmission Spectroscopy in Python
@@ -11,3 +11,4 @@ xarray
 seaborn
 astropy
 uncertainties
+scikit-learn
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: ExoIris
-Version: 0.21.0
+Version: 0.22.0
 Summary: Easy and robust exoplanet transmission spectroscopy.
 Author-email: Hannu Parviainen <hannu@iac.es>
 License: GPLv3
@@ -29,6 +29,7 @@ Requires-Dist: xarray
 Requires-Dist: seaborn
 Requires-Dist: astropy
 Requires-Dist: uncertainties
+Requires-Dist: scikit-learn
 Dynamic: license-file
 
 # ExoIris: Fast and Flexible Transmission Spectroscopy in Python
@@ -32,7 +32,7 @@ from emcee import EnsembleSampler
 from matplotlib.pyplot import subplots, setp, figure, Figure, Axes
 from numpy import (any, where, sqrt, clip, percentile, median, squeeze, floor, ndarray, isfinite,
                    array, inf, arange, argsort, concatenate, full, nan, r_, nanpercentile, log10,
-                   ceil, unique, zeros)
+                   ceil, unique, zeros, cov)
 from numpy.typing import ArrayLike
 from numpy.random import normal
 from pytransit import UniformPrior, NormalPrior
@@ -133,6 +133,12 @@ def load_model(fname: Path | str, name: str | None = None):
     for i in range(hdr['NSPOTS']):
         a.add_spot(hdr[f'SP{i+1:02d}_EG'])
 
+    # Read the free k knot indices if they exist.
+    # ==========================================
+    if 'N_FREE_K' in hdr and hdr['N_FREE_K'] > 0:
+        n_free_k = hdr['N_FREE_K']
+        a._tsa.set_free_k_knots([int(hdr[f'KK_IX_{i:03d}']) for i in range(n_free_k)])
+
     # Read the priors.
     # ================
     priors = pickle.loads(codecs.decode(json.loads(hdul['PRIORS'].header['PRIORS']).encode(), "base64"))
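The restored `KK_IX_nnn` cards come from the matching block added to `save()` further down. A hypothetical round-trip of such indices through an `astropy.io.fits` header (not package code; note that nine-character keywords like `KK_IX_000` are stored by astropy as HIERARCH cards):

```python
from astropy.io import fits

# Hypothetical round-trip of free-knot indices through FITS header cards,
# mirroring the N_FREE_K / KK_IX_nnn convention used by save() and load_model().
ids = [3, 7, 12]
hdr = fits.Header()
hdr['N_FREE_K'] = len(ids)
for i, ix in enumerate(ids):
    # Nine characters, so astropy falls back to the HIERARCH convention.
    hdr[f'KK_IX_{i:03d}'] = ix

# Read the indices back exactly as load_model() does.
restored = [int(hdr[f'KK_IX_{i:03d}']) for i in range(hdr['N_FREE_K'])]
assert restored == ids
```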
@@ -154,7 +160,7 @@ class ExoIris:
 
     def __init__(self, name: str, ldmodel, data: TSDataGroup | TSData, nk: int = 50, nldc: int = 10, nthreads: int = 1,
                  tmpars: dict | None = None, noise_model: Literal["white", "fixed_gp", "free_gp"] = 'white',
-                 interpolation: Literal['bspline', 'pchip', 'makima', 'nearest', 'linear'] = 'makima'):
+                 interpolation: Literal['nearest', 'linear', 'pchip', 'makima', 'bspline', 'bspline-quadratic'] = 'makima'):
        """
        Parameters
        ----------
@@ -211,6 +217,7 @@ class ExoIris:
         self._white_fluxes: None | list[ndarray] = None
         self._white_errors: None | list[ndarray] = None
         self._white_models: None | list[ndarray] = None
+        self.white_gp_models: None | list[ndarray] = None
 
     def lnposterior(self, pvp: ndarray) -> ndarray:
         """Calculate the log posterior probability for a single parameter vector or an array of parameter vectors.
@@ -653,6 +660,9 @@ class ExoIris:
         the data and the number of columns specified (ncol). If the provided axes array (axs)
         does not accommodate all the subplots, the behavior is undefined.
         """
+        if self.white_gp_models is None:
+            raise ValueError("White light curve GP predictions are not available. Run 'optimize_gp_hyperparameters' first.")
+
         ndata = self.data.size
 
         if axs is None:
@@ -871,13 +881,13 @@
 
         if result == 'fit':
             pv = self._tsa._de_population[self._tsa._de_imin]
-            ks = self._tsa._eval_k(pv[self._tsa._sl_rratios])
+            ks = self._tsa._eval_k(pv)
             ar = 1e2 * concatenate([squeeze(k) for k in ks]) ** 2
             ax.plot(wavelength[ix], ar[ix], c='k')
             ax.plot(self._tsa.k_knots, 1e2 * pv[self._tsa._sl_rratios] ** 2, 'k.')
         else:
             df = pd.DataFrame(self._tsa._mc_chains.reshape([-1, self._tsa.ndim]), columns=self._tsa.ps.names)
-            ks = self._tsa._eval_k(df.iloc[:, self._tsa._sl_rratios])
+            ks = self._tsa._eval_k(df.values)
             ar = 1e2 * concatenate(ks, axis=1) ** 2
             ax.fill_between(wavelength[ix], *percentile(ar[:, ix], [16, 84], axis=0), alpha=0.25)
             ax.plot(wavelength[ix], median(ar, 0)[ix], c='k')
@@ -893,7 +903,11 @@
         ax.set_xticks(xticks, labels=xticks)
         return ax.get_figure()
 
-    def plot_limb_darkening_parameters(self, result: Optional[str] = None, axs: Optional[tuple[Axes, Axes]] = None) -> None | Figure:
+    def plot_limb_darkening_parameters(
+        self,
+        result: None | Literal["fit", "mcmc"] = None,
+        axs: None | tuple[Axes, Axes] = None,
+    ) -> None | Figure:
         """Plot the limb darkening parameters.
 
         Parameters
@@ -922,56 +936,68 @@
         This method plots the limb darkening parameters for two-parameter limb darkening models. It supports only
         quadratic, quadratic-tri, power-2, and power-2-pm models.
         """
-        if not self._tsa.ldmodel in ('quadratic', 'quadratic-tri', 'power-2', 'power-2-pm'):
+        if not self._tsa.ldmodel in (
+            "quadratic",
+            "quadratic-tri",
+            "power-2",
+            "power-2-pm",
+        ):
             return None
 
         if axs is None:
-            fig, axs = subplots(1, 2, sharey='all', figsize=(13,4))
+            fig, axs = subplots(1, 2, sharey="all", figsize=(13, 4))
         else:
             fig = axs[0].get_figure()
 
         if result is None:
-            result = 'mcmc' if self._tsa.sampler is not None else 'fit'
-        if result not in ('fit', 'mcmc'):
+            result = "mcmc" if self._tsa.sampler is not None else "fit"
+        if result not in ("fit", "mcmc"):
             raise ValueError("Result must be either 'fit', 'mcmc', or None")
-        if result == 'mcmc' and self._tsa.sampler is None:
-            raise ValueError("Cannot plot posterior solution before running the MCMC sampler.")
+        if result == "mcmc" and not (
+            self._tsa.sampler is not None or self.mcmc_chains is not None
+        ):
+            raise ValueError(
+                "Cannot plot posterior solution before running the MCMC sampler."
+            )
 
         wavelength = concatenate(self.data.wavelengths)
         ix = argsort(wavelength)
 
-        if result == 'fit':
+        if result == "fit":
             pv = self._tsa._de_population[self._tsa._de_imin]
             ldc = squeeze(concatenate(self._tsa._eval_ldc(pv), axis=1))
-            axs[0].plot(self._tsa.ld_knots, pv[self._tsa._sl_ld][0::2], 'ok')
-            axs[0].plot(wavelength[ix], ldc[:,0][ix])
-            axs[1].plot(self._tsa.ld_knots, pv[self._tsa._sl_ld][1::2], 'ok')
-            axs[1].plot(wavelength[ix], ldc[:,1][ix])
+            axs[0].plot(self._tsa.ld_knots, pv[self._tsa._sl_ld][0::2], "ok")
+            axs[0].plot(wavelength[ix], ldc[:, 0][ix])
+            axs[1].plot(self._tsa.ld_knots, pv[self._tsa._sl_ld][1::2], "ok")
+            axs[1].plot(wavelength[ix], ldc[:, 1][ix])
         else:
-            pvp = self._tsa._mc_chains.reshape([-1, self._tsa.ndim])
-            ldc = pvp[:,self._tsa._sl_ld]
+            if self._tsa.sampler is not None:
+                pvp = self._tsa._mc_chains.reshape([-1, self._tsa.ndim])
+            else:
+                pvp = self.mcmc_chains.reshape([-1, self._tsa.ndim])
+            ldc = pvp[:, self._tsa._sl_ld]
 
-            ld1m = median(ldc[:,::2], 0)
-            ld1e = ldc[:,::2].std(0)
-            ld2m = median(ldc[:,1::2], 0)
-            ld2e = ldc[:,1::2].std(0)
+            ld1m = median(ldc[:, ::2], 0)
+            ld1e = ldc[:, ::2].std(0)
+            ld2m = median(ldc[:, 1::2], 0)
+            ld2e = ldc[:, 1::2].std(0)
 
             ldc = concatenate(self._tsa._eval_ldc(pvp), axis=1)
-            ld1p = percentile(ldc[:,:,0], [50, 16, 84], axis=0)
-            ld2p = percentile(ldc[:,:,1], [50, 16, 84], axis=0)
+            ld1p = percentile(ldc[:, :, 0], [50, 16, 84], axis=0)
+            ld2p = percentile(ldc[:, :, 1], [50, 16, 84], axis=0)
 
             axs[0].fill_between(wavelength[ix], ld1p[1, ix], ld1p[2, ix], alpha=0.5)
-            axs[0].plot(wavelength[ix], ld1p[0][ix], 'k')
+            axs[0].plot(wavelength[ix], ld1p[0][ix], "k")
             axs[1].fill_between(wavelength[ix], ld2p[1, ix], ld2p[2, ix], alpha=0.5)
-            axs[1].plot(wavelength[ix], ld2p[0][ix], 'k')
+            axs[1].plot(wavelength[ix], ld2p[0][ix], "k")
 
-            axs[0].errorbar(self._tsa.ld_knots, ld1m, ld1e, fmt='ok')
-            axs[1].errorbar(self._tsa.ld_knots, ld2m, ld2e, fmt='ok')
+            axs[0].errorbar(self._tsa.ld_knots, ld1m, ld1e, fmt="ok")
+            axs[1].errorbar(self._tsa.ld_knots, ld2m, ld2e, fmt="ok")
 
         ldp = full((self.nldp, 2, 2), nan)
         for i in range(self.nldp):
             for j in range(2):
-                p = self.ps[self._tsa._sl_ld][i*2+j].prior
+                p = self.ps[self._tsa._sl_ld][i * 2 + j].prior
                 if isinstance(p, UniformPrior):
                     ldp[i, j, 0] = p.a
                     ldp[i, j, 1] = p.b
@@ -981,11 +1007,15 @@
 
         for i in range(2):
             for j in range(2):
-                axs[i].plot(self._tsa.ld_knots, ldp[:, i, j], ':', c='C0')
-
-        setp(axs, xlim=(wavelength.min(), wavelength.max()), xlabel=r'Wavelength [$\mu$m]')
-        setp(axs[0], ylabel='Limb darkening coefficient 1')
-        setp(axs[1], ylabel='Limb darkening coefficient 2')
+                axs[i].plot(self._tsa.ld_knots, ldp[:, i, j], ":", c="C0")
+
+        setp(
+            axs,
+            xlim=(wavelength.min(), wavelength.max()),
+            xlabel=r"Wavelength [$\mu$m]",
+        )
+        setp(axs[0], ylabel="Limb darkening coefficient 1")
+        setp(axs[1], ylabel="Limb darkening coefficient 2")
         return fig
 
     def plot_residuals(self, result: Optional[str] = None, ax: None | Axes | Sequence[Axes] = None,
@@ -1109,8 +1139,8 @@
         return fig
 
     @property
-    def transmission_spectrum(self) -> Table:
-        """Get the posterior transmission spectrum as a Pandas DataFrame.
+    def transmission_spectrum_table(self) -> Table:
+        """Get the posterior transmission spectrum as an Astropy Table.
 
         Raises
         ------
@@ -1122,7 +1152,7 @@
 
         pvp = self.posterior_samples
         wls = concatenate(self.data.wavelengths)
-        ks = concatenate(self._tsa._eval_k(pvp.values[:, self._tsa._sl_rratios]), axis=1)
+        ks = concatenate(self._tsa._eval_k(pvp.values), axis=1)
         ar = ks**2
         ix = argsort(wls)
         return Table(data=[wls[ix]*u.micrometer,
@@ -1130,21 +1160,100 @@
                            median(ar, 0)[ix], ar.std(0)[ix]],
                      names=['wavelength', 'radius_ratio', 'radius_ratio_e', 'area_ratio', 'area_ratio_e'])
 
-    def radius_ratio_spectrum(self, wavelengths: ArrayLike, knot_samples: ArrayLike | None = None) -> ndarray:
-        if knot_samples is None:
-            knot_samples = self.posterior_samples.iloc[:, self._tsa._sl_rratios].values
-        k_posteriors = zeros((knot_samples.shape[0], wavelengths.size))
-        for i, ks in enumerate(knot_samples):
-            k_posteriors[i, :] = self._tsa._ip(wavelengths, self._tsa.k_knots, ks)
-        return k_posteriors
-
-    def area_ratio_spectrum(self, wavelengths: ArrayLike, knot_samples: ArrayLike | None = None) -> ndarray:
-        if knot_samples is None:
-            knot_samples = self.posterior_samples.iloc[:, self._tsa._sl_rratios].values
-        d_posteriors = zeros((knot_samples.shape[0], wavelengths.size))
-        for i, ks in enumerate(knot_samples):
-            d_posteriors[i, :] = self._tsa._ip(wavelengths, self._tsa.k_knots, ks) ** 2
-        return d_posteriors
+    def transmission_spectrum_samples(self, wavelengths: ndarray | None = None,
+                                      kind: Literal['radius_ratio', 'depth'] = 'depth',
+                                      samples: ndarray | None = None) -> tuple[ndarray, ndarray]:
+        """Calculate posterior transmission spectrum samples.
+
+        This method computes the posterior samples of the transmission spectrum,
+        either as radius ratios or as transit depths, depending on the specified
+        kind. It interpolates the data for given wavelengths or uses the
+        instrumental wavelength grid if none is provided. Requires that MCMC
+        sampling has been performed prior to calling this method.
+
+        Parameters
+        ----------
+        wavelengths
+            The array of wavelengths at which the spectrum should be sampled.
+            If None, the default wavelength grid defined by the instrumental data
+            will be used.
+        kind
+            Specifies the desired representation of the transmission spectrum.
+            'radius_ratio' returns the spectrum in radius ratio units, while
+            'depth' returns the spectrum in transit depth units. Default is 'depth'.
+        samples
+            Array of posterior samples to use for calculation. If None,
+            the method will use previously stored posterior samples.
+
+        Returns
+        -------
+        tuple[ndarray, ndarray]
+            The wavelengths and the transmission spectrum samples evaluated at
+            them. The representation (radius ratio or depth) depends on the
+            specified `kind`.
+        """
+        if self.mcmc_chains is None:
+            raise ValueError("Cannot calculate posterior transmission spectrum before running the MCMC sampler.")
+
+        if kind not in ('radius_ratio', 'depth'):
+            raise ValueError("Invalid value for `kind`. Must be either 'radius_ratio' or 'depth'.")
+
+        if samples is None:
+            samples = self.posterior_samples.values
+
+        if wavelengths is None:
+            wavelengths = concatenate(self.data.wavelengths)
+            wavelengths.sort()
+
+        k_posteriors = zeros((samples.shape[0], wavelengths.size))
+        for i, pv in enumerate(samples):
+            k_posteriors[i, :] = self._tsa._ip(wavelengths, self._tsa.k_knots, pv[self._tsa._sl_rratios])
+
+        if kind == 'radius_ratio':
+            return wavelengths, k_posteriors
+        else:
+            return wavelengths, k_posteriors**2
+
+    def transmission_spectrum(self, wavelengths: ndarray | None = None, kind: Literal['radius_ratio', 'depth'] = 'depth', samples: ndarray | None = None, return_cov: bool = True) -> tuple[ndarray, ndarray]:
+        """Compute the transmission spectrum.
+
+        This method calculates the mean transmission spectrum values and the covariance matrix
+        (or standard deviations) for the given parameter set. The mean represents the average
+        transmission spectrum, and the covariance provides information on the uncertainties and
+        correlations between wavelengths or samples.
+
+        Parameters
+        ----------
+        wavelengths
+            Array of wavelength values at which to calculate the transmission spectrum.
+            If None, the default grid will be used.
+        kind
+            Specifies the method to represent the spectrum. 'radius_ratio' computes the
+            spectrum in terms of the planet-to-star radius ratio, while 'depth' computes
+            the spectrum in terms of transit depth.
+        samples
+            Array of samples used to compute the spectrum uncertainties. If None, previously
+            stored samples will be utilized.
+
+        return_cov
+            Indicates whether to return the covariance matrix of the computed transmission
+            spectrum. If True, the covariance matrix is returned along with the mean spectrum.
+            If False, the standard deviation of the spectrum is returned.
+
+        Returns
+        -------
+        tuple[ndarray, ndarray]
+            A tuple containing two arrays:
+            - The mean transmission spectrum.
+            - The covariance matrix of the spectrum (if `return_cov` is True), or the
+              standard deviation (if `return_cov` is False).
+        """
+        sp_samples = self.transmission_spectrum_samples(wavelengths, kind, samples)[1]
+        mean = sp_samples.mean(0)
+        if return_cov:
+            return mean, cov(sp_samples, rowvar=False)
+        else:
+            return mean, sp_samples.std(0)
 
     def save(self, overwrite: bool = False) -> None:
         """Save the ExoIris analysis to a FITS file.
@@ -1163,6 +1272,13 @@
         pri.header['interp'] = self._tsa.interpolation
         pri.header['noise'] = self._tsa.noise_model
 
+        if self._tsa.free_k_knot_ids is None:
+            pri.header['n_free_k'] = 0
+        else:
+            pri.header['n_free_k'] = len(self._tsa.free_k_knot_ids)
+            for i, ix in enumerate(self._tsa.free_k_knot_ids):
+                pri.header[f'kk_ix_{i:03d}'] = ix
+
         # Priors
         # ======
         pr = pf.ImageHDU(name='priors')
@@ -1246,7 +1362,9 @@
 
         hdul.writeto(f"{self.name}.fits", overwrite=True)
 
-    def create_loglikelihood_function(self, wavelengths: ArrayLike, kind: Literal['radius_ratio', 'depth'] = 'depth') -> LogLikelihood:
+    def create_loglikelihood_function(self, wavelengths: ndarray, kind: Literal['radius_ratio', 'depth'] = 'depth',
+                                      method: Literal['svd', 'randomized_svd', 'eigh'] = 'svd',
+                                      n_max_samples: int = 10000) -> LogLikelihood:
         """Create a reduced-rank Gaussian log-likelihood function for retrieval.
 
         Parameters
@@ -1265,7 +1383,11 @@
         """
         if self.mcmc_chains is None:
             raise ValueError("Cannot create log-likelihood function before running the MCMC sampler.")
-        return LogLikelihood(self, wavelengths, kind)
+        return LogLikelihood(wavelengths,
+                             self.transmission_spectrum_samples(wavelengths, kind)[1],
+                             method=method,
+                             n_max_samples=n_max_samples,
+                             nk=self.nk)
 
     def create_initial_population(self, n: int, source: str, add_noise: bool = True) -> ndarray:
         """Create an initial parameter vector population for the DE optimisation.
@@ -0,0 +1,139 @@
+# ExoIris: fast, flexible, and easy exoplanet transmission spectroscopy in Python.
+# Copyright (C) 2025 Hannu Parviainen
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
+from typing import Literal
+
+from numpy import full, cov, sum, ndarray, log, pi, asarray
+from numpy.linalg import eigh, svd
+from sklearn.utils.extmath import randomized_svd
+
+
+class LogLikelihood:
+    def __init__(self, wavelength: ndarray, spectra: None | ndarray = None, spmean: None | ndarray = None,
+                 spcov: None | ndarray = None, eps: float = 1e-10, method: Literal['svd', 'randomized_svd', 'eigh'] = 'svd',
+                 n_max_samples: int = 10000, nk: int | None = None):
+        r"""Reduced-rank Normal log-likelihood.
+
+        This class constructs a statistically robust log-likelihood function for
+        comparing a theoretical transmission spectrum to the posterior distribution
+        inferred by ExoIris.
+
+        Because the posterior samples are generated from a spline with $K$ knots
+        but evaluated on $M$ wavelengths ($M \gg K$), the empirical covariance
+        matrix is singular or strongly ill-conditioned. This class solves the
+        rank-deficiency problem by projecting the model into the principal
+        subspace of the posterior (Karhunen-Loève compression).
+
+        Parameters
+        ----------
+        wavelength
+            The wavelength grid with a shape (M,) on which the posterior samples and theoretical
+            spectra are evaluated.
+        spectra
+            The posterior spectrum samples with shape (N_samples, M_wavelengths).
+            If provided, ``spmean`` and ``spcov`` are computed automatically.
+            Mutually exclusive with ``spmean`` and ``spcov``.
+        spmean
+            The pre-computed mean spectrum with shape (M,). Must be provided
+            along with ``spcov`` if ``spectra`` is None.
+        spcov
+            The pre-computed covariance matrix with shape (M, M). Must be provided
+            along with ``spmean`` if ``spectra`` is None.
+        eps
+            Relative tolerance factor used to determine which eigenvalues of
+            the covariance matrix are considered significant. Eigenvalues smaller
+            than ``eps * max_eigenvalue`` are discarded. Default is ``1e-10``.
+
+        Notes
+        -----
+        This implementation follows the "Signal-to-Noise Eigenmode" formalism
+        described by Tegmark et al. (1997) for analyzing rank-deficient
+        cosmological datasets.
+
+        The log-likelihood is evaluated as:
+
+        .. math:: \ln \mathcal{L} = -\frac{1}{2} \left[ \sum_{i=1}^{K} \frac{p_i^2}{\lambda_i} + \sum_{i=1}^{K} \ln(\lambda_i) + K \ln(2\pi) \right]
+
+        where $\lambda_i$ are the significant eigenvalues of the covariance
+        matrix, and $p_i$ are the projections of the model residuals onto the
+        corresponding eigenvectors (principal components).
+
+        References
+        ----------
+        Tegmark, M., Taylor, A. N., & Heavens, A. F. (1997). Karhunen-Loève
+        eigenvalue problems in cosmology: how should we tackle large data sets?
+        *The Astrophysical Journal*, 480(1), 22.
+        """
+        self.wavelength = wavelength
+        self.eps = eps
+
+        if spectra is not None and (spmean is not None or spcov is not None):
+            raise ValueError("Cannot specify both `spectra` and `spmean` and `spcov`.")
+
+        if spectra is None and (spmean is None or spcov is None):
+            raise ValueError("Must specify either `spectra` or both `spmean` and `spcov`.")
+
+        if spectra is not None:
+            spectra = spectra[:n_max_samples, :]
+            self.spmean = spectra.mean(axis=0)
+
+        if method == 'svd':
+            _, sigma, evecs = svd(spectra - spectra.mean(0), full_matrices=False)
+            evals = (sigma**2) / (spectra.shape[0] - 1)
+            evecs = evecs.T
+        elif method == 'randomized_svd':
+            if nk is None:
+                raise ValueError("Must specify `nk` when using `method='randomized_svd'`.")
+            _, sigma, evecs = randomized_svd(spectra - spectra.mean(0), n_components=nk, n_iter=5, random_state=0)
+            evals = (sigma ** 2) / (spectra.shape[0] - 1)
+            evecs = evecs.T
+        elif method == 'eigh' or (spmean is not None and spcov is not None):
+            if spectra is not None:
+                self.spcov = cov(spectra, rowvar=False)
+            else:
+                self.spmean = spmean
+                self.spcov = spcov
+            evals, evecs = eigh(self.spcov)
+
+        keep = evals > eps * evals.max()
+        self.eigenvalues, self.eigenvectors = evals[keep], evecs[:, keep]
+        self.log_det = sum(log(self.eigenvalues))
+        self.log_twopi = self.eigenvalues.size * log(2*pi)
+
+    def __call__(self, model: ndarray | float) -> float:
+        r"""Evaluate the log-likelihood of a model spectrum.
+
+        Parameters
+        ----------
+        model : float or ndarray
+            The theoretical model spectrum. If a float is provided, it is
+            broadcast to a flat spectrum. If an array, it must match the
+            wavelength grid size used during initialization.
+
+        Returns
+        -------
+        float
+            The natural log-likelihood $\ln \mathcal{L}$.
+        """
+        if isinstance(model, float):
+            model = full(self.wavelength.size, model)
+        else:
+            model = asarray(model)
+
+        # Project the residuals onto the eigenvectors (basis rotation)
+        # and compute the Mahalanobis distance (chi-squared in the subspace).
+        p = (self.spmean - model) @ self.eigenvectors
+        chisq = sum(p**2 / self.eigenvalues)
+        return -0.5 * (chisq + self.log_det + self.log_twopi)
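The `'svd'` branch of the constructor can be illustrated outside the package with plain NumPy on synthetic, deliberately rank-deficient samples; everything below (sample counts, knot count, amplitudes, the `lnlike` helper) is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "posterior spectrum samples": M wavelengths generated from a
# K-dimensional basis, so the sample covariance has rank K << M.
n_samples, n_knots, n_wl = 4000, 5, 80
basis = rng.normal(size=(n_knots, n_wl))          # K fixed basis spectra
weights = rng.normal(size=(n_samples, n_knots))   # posterior "knot" samples
spectra = 1e-2 + 1e-4 * (weights @ basis)

# Build the reduced-rank log-likelihood following the 'svd' branch above.
spmean = spectra.mean(0)
_, sigma, vt = np.linalg.svd(spectra - spmean, full_matrices=False)
evals = sigma**2 / (n_samples - 1)                # eigenvalues of the covariance
evecs = vt.T                                      # eigenvectors as columns
keep = evals > 1e-10 * evals.max()                # eps cut on the spectrum
evals, evecs = evals[keep], evecs[:, keep]
log_det = np.sum(np.log(evals))
log_twopi = evals.size * np.log(2 * np.pi)

def lnlike(model):
    p = (spmean - model) @ evecs      # project residuals onto the eigenmodes
    chisq = np.sum(p**2 / evals)      # Mahalanobis distance in the subspace
    return -0.5 * (chisq + log_det + log_twopi)

# Only the K significant modes survive the cut, and the likelihood
# peaks at the posterior mean.
assert evals.size == n_knots
assert lnlike(spmean) > lnlike(spmean + 2e-4)
```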
@@ -20,7 +20,7 @@ from typing import Optional, Literal
 from ldtk import BoxcarFilter, LDPSetCreator  # noqa
 from numba import njit, prange
 from numpy import zeros, log, pi, linspace, inf, atleast_2d, newaxis, clip, arctan2, ones, floor, sum, concatenate, \
-    sort, ndarray, zeros_like, array, tile, arange, squeeze, dstack
+    sort, ndarray, zeros_like, array, tile, arange, squeeze, dstack, nan, diff, all
 from numpy.random import default_rng
 from celerite2 import GaussianProcess as GP, terms
 
@@ -85,6 +85,10 @@ def ip_bspline(x, xk, yk):
     return splev(x, splrep(xk, yk))
 
 
+def ip_bspline_quadratic(x, xk, yk):
+    return splev(x, splrep(xk, yk, k=2))
+
+
 def ip_makima(x, xk, yk):
     return Akima1DInterpolator(xk, yk, method='makima', extrapolate=True)(x)
 
@@ -101,11 +105,11 @@ def add_knots(x_new, x_old):
     return sort(concatenate([x_new, x_old]))
 
 
-interpolator_choices = ("bspline", "pchip", "makima", "nearest", "linear")
+interpolator_choices = ("bspline", "pchip", "makima", "nearest", "linear", "bspline-quadratic")
 
 
-interpolators = {'bspline': ip_bspline, 'pchip': ip_pchip, 'makima': ip_makima,
-                 'nearest': ip_nearest, 'linear': ip_linear}
+interpolators = {'bspline': ip_bspline, 'bspline-quadratic': ip_bspline_quadratic, 'pchip': ip_pchip,
+                 'makima': ip_makima, 'nearest': ip_nearest, 'linear': ip_linear}
 
 
 def clean_knots(knots, min_distance, lmin=0, lmax=inf):
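The new `'bspline-quadratic'` interpolator differs from `'bspline'` only in the spline degree passed to `splrep` (`k=2` instead of the default cubic `k=3`). A quick standalone comparison with made-up knot values:

```python
import numpy as np
from scipy.interpolate import splev, splrep

xk = np.linspace(0.6, 5.0, 12)        # hypothetical knot wavelengths [um]
yk = 0.1 + 0.002 * np.sin(3 * xk)     # hypothetical radius-ratio values at the knots
x = np.linspace(0.6, 5.0, 200)

k_cubic = splev(x, splrep(xk, yk))        # 'bspline': default cubic, k=3
k_quad = splev(x, splrep(xk, yk, k=2))    # 'bspline-quadratic': k=2

# With no weights, splrep defaults to s=0, so both splines interpolate
# the knots exactly; they differ only between the knots.
assert np.allclose(splev(xk, splrep(xk, yk, k=2)), yk)
```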
@@ -149,19 +153,19 @@ def clean_knots(knots, min_distance, lmin=0, lmax=inf):
 class TSLPF(LogPosteriorFunction):
     def __init__(self, runner, name: str, ldmodel, data: TSDataGroup, nk: int = 50, nldc: int = 10, nthreads: int = 1,
                  tmpars=None, noise_model: Literal["white", "fixed_gp", "free_gp"] = 'white',
-                 interpolation: Literal['bspline', 'pchip', 'makima', 'nearest', 'linear'] = 'makima'):
+                 interpolation: Literal['nearest', 'linear', 'pchip', 'makima', 'bspline', 'bspline-quadratic'] = 'makima'):
         super().__init__(name)
         self._runner = runner
         self._original_data: TSDataGroup | None = None
         self.data: TSDataGroup | None = None
         self.npb: list[int] | None = None
         self.npt: list[int] | None = None
-        self.ndim: int | None = None
         self._baseline_models: list[ndarray] | None = None
         self.interpolation: str = interpolation
 
         if interpolation not in interpolator_choices:
             raise ValueError(f'interpolation must be one of {interpolator_choices}')
+
         self._ip = interpolators[interpolation]
         self._ip_ld = interpolators['bspline']
@@ -187,6 +191,7 @@ class TSLPF(LogPosteriorFunction):
          self.nk = nk

          self.k_knots = linspace(data.wlmin, data.wlmax, self.nk)
+         self.free_k_knot_ids = None

          if isinstance(ldmodel, LDTkLD):
              self.ld_knots = array([])
@@ -217,6 +222,10 @@ class TSLPF(LogPosteriorFunction):
      def errors(self) -> list[ndarray]:
          return self.data.errors

+     @property
+     def ndim(self) -> int:
+         return len(self.ps)
+
      def set_data(self, data: TSDataGroup):
          self._original_data = deepcopy(data)
          self.data = data
@@ -238,7 +247,6 @@ class TSLPF(LogPosteriorFunction):
          self._init_p_baseline()
          self._init_p_bias()
          self.ps.freeze()
-         self.ndim = len(self.ps)

      def initialize_spots(self, tstar: float, wlref: float, include_tlse: bool = True) -> None:
          self.spot_model = SpotModel(self, tstar, wlref, include_tlse)
@@ -531,6 +539,56 @@ class TSLPF(LogPosteriorFunction):
          self._mc_chains = fmcn.reshape([mco.shape[0], mco.shape[1], ndn])
          self.sampler = None

+     def set_free_k_knots(self, ids):
+         self.free_k_knot_ids = ids
+
+         # Remove the existing parameter block if one exists
+         block_names = [b.name for b in self.ps.blocks]
+         try:
+             bid = block_names.index('free_k_knot_locations')
+             del self.ps[self.ps.blocks[bid].slice]
+             del self.ps.blocks[bid]
+         except ValueError:
+             pass
+
+         # Calculate the distance from each knot to its nearest neighbour
+         min_distances = zeros(self.nk)
+         min_distances[0] = self.k_knots[1] - self.k_knots[0]
+         min_distances[self.nk-1] = self.k_knots[self.nk-1] - self.k_knots[self.nk-2]
+         for i in range(1, self.nk-1):
+             min_distances[i] = min(self.k_knots[i] - self.k_knots[i-1], self.k_knots[i+1] - self.k_knots[i])
+
+         # Create a new parameter block
+         ps = []
+         for kid in ids:
+             sigma = min_distances[kid]/6 if (kid+1 in ids or kid-1 in ids) else min_distances[kid]/4
+             ps.append(GParameter(f'kl_{kid:04d}', f'k knot {kid} location', 'um', NP(self.k_knots[kid], sigma), [0, inf]))
+         self.ps.thaw()
+         self.ps.add_global_block('free_k_knot_locations', ps)
+         self.ps.freeze()
+         self._start_kloc = self.ps.blocks[-1].start
+         self._sl_kloc = self.ps.blocks[-1].slice
+
+         # Remove a previously added knot-order prior if one exists
+         try:
+             pid = [p.__name__ for p in self._additional_log_priors].index('k_knot_order_prior')
+             del self._additional_log_priors[pid]
+         except ValueError:
+             pass
+
+         # Add a soft prior that penalizes knot separations below 25% of their original values
+         def k_knot_order_prior(pv):
+             pv = atleast_2d(pv)
+             logp = zeros(pv.shape[0])
+             k_knots = self.k_knots.copy()
+             original_separations = diff(self.k_knots)
+             for i in range(pv.shape[0]):
+                 k_knots[self.free_k_knot_ids] = pv[i, self._sl_kloc]
+                 current_separations = diff(k_knots)
+                 logp[i] = 1e2*(clip(current_separations / original_separations / 0.25, -inf, 1.0) - 1.).sum()
+             return logp
+         self._additional_log_priors.append(k_knot_order_prior)
+

      def add_ld_knots(self, knot_wavelengths) -> None:
          """Add limb darkening knots to the model.
@@ -598,9 +656,12 @@ class TSLPF(LogPosteriorFunction):
          """
          pvp = atleast_2d(pvp)
          ks = [zeros((pvp.shape[0], npb)) for npb in self.npb]
+         k_knots = self.k_knots.copy()
          for ids in range(self.data.size):
              for ipv in range(pvp.shape[0]):
-                 ks[ids][ipv,:] = self._ip(self.wavelengths[ids], self.k_knots, pvp[ipv])
+                 if self.free_k_knot_ids is not None:
+                     k_knots[self.free_k_knot_ids] = pvp[ipv, self._sl_kloc]
+                 ks[ids][ipv,:] = self._ip(self.wavelengths[ids], k_knots, pvp[ipv, self._sl_rratios])
          return ks

      def _eval_ldc(self, pvp):
  def _eval_ldc(self, pvp):
@@ -648,7 +709,7 @@ class TSLPF(LogPosteriorFunction):
          pv = atleast_2d(pv)
          ldp = self._eval_ldc(pv)
          t0s = pv[:, self._sl_tcs]
-         k = self._eval_k(pv[:, self._sl_rratios])
+         k = self._eval_k(pv)
          p = pv[:, 1]
          aor = as_from_rhop(pv[:, 0], p)
          inc = i_from_ba(pv[:, 2], aor)
@@ -10,4 +10,5 @@ pandas
  xarray
  seaborn
  astropy
- uncertainties
+ uncertainties
+ scikit-learn
@@ -1,144 +0,0 @@
- # ExoIris: fast, flexible, and easy exoplanet transmission spectroscopy in Python.
- # Copyright (C) 2025 Hannu Parviainen
- #
- # This program is free software: you can redistribute it and/or modify
- # it under the terms of the GNU General Public License as published by
- # the Free Software Foundation, either version 3 of the License, or
- # (at your option) any later version.
- #
- # This program is distributed in the hope that it will be useful,
- # but WITHOUT ANY WARRANTY; without even the implied warranty of
- # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- # GNU General Public License for more details.
- #
- # You should have received a copy of the GNU General Public License
- # along with this program. If not, see <https://www.gnu.org/licenses/>.
- from typing import Literal
-
- from numpy import full, cov, sqrt, sum
- from numpy.linalg import eigh
- from numpy.typing import ArrayLike
-
-
- class LogLikelihood:
-     def __init__(self, exoiris, wavelength: ArrayLike, kind: Literal['radius_ratio', 'depth'] = 'depth', eps: float = 1e-10):
-         """Reduced-rank Gaussian log-likelihood.
-
-         This class constructs a statistically correct reduced-rank Gaussian
-         log-likelihood function for comparing a theoretical transmission spectrum to the
-         posterior distribution inferred by ExoIris. Because the posterior
-         samples of the transmission spectrum are generated from a spline with
-         far fewer independent parameters than the number of wavelength bins, the
-         empirical covariance matrix is rank-deficient or strongly ill-conditioned.
-         Direct inversion of the full covariance is therefore numerically unstable
-         and produces incorrect likelihoods.
-
-         Parameters
-         ----------
-         exoiris
-             The ExoIris model object from which posterior samples of the knot
-             values and spline interpolation functions are obtained.
-
-         wavelength
-             Wavelength grid on which the radius ratio posterior samples and the
-             theoretical spectra will be evaluated.
-
-         kind
-             The type of the spectrum. Can be either ``radius_ratio`` for a radius ratio
-             spectrum, or ``depth`` for a transit depth spectrum.
-
-         eps
-             Relative tolerance factor used to determine which eigenvalues of
-             the covariance matrix are considered significant. Eigenvalues smaller
-             than ``eps * max_eigenvalue`` are discarded. Default is ``1e-10``.
-
-         Attributes
-         ----------
-         k_posteriors : ndarray of shape (n_samples, n_wavelengths)
-             Radius-ratio posterior samples evaluated on the wavelength grid.
-
-         k_mean : ndarray of shape (n_wavelengths,)
-             Posterior mean radius-ratio spectrum.
-
-         k_cov : ndarray of shape (n_wavelengths, n_wavelengths)
-             Empirical covariance matrix of the posterior samples.
-
-         lambda_r : ndarray of shape (k,)
-             Significant eigenvalues of the covariance matrix (``k`` = reduced
-             dimensionality).
-
-         u_r : ndarray of shape (n_wavelengths, k)
-             Eigenvectors corresponding to the significant eigenvalues.
-
-         sqrt_inv_lambda_r : ndarray of shape (k,)
-             Factors used to whiten the reduced-rank representation.
-
-         y_data : ndarray of shape (k,)
-             Whitened reduced-rank representation of the posterior mean spectrum.
-
-         Notes
-         -----
-         The class implements the reduced-rank likelihood method:
-
-         1. Posterior samples of the spline knot values are evaluated on the
-            user-specified wavelength grid to produce a matrix of radius-ratio
-            samples, ``k_posteriors``.
-
-         2. The empirical mean spectrum and covariance matrix are computed over
-            these samples.
-
-         3. An eigendecomposition of the covariance matrix is performed. All
-            eigenvalues below ``eps * max(eigenvalue)`` are discarded, ensuring that
-            only statistically meaningful directions (i.e., those supported by the
-            spline parameterization and the data) are retained.
-
-         4. The retained eigenvectors form an orthonormal basis for the true
-            low-dimensional subspace in which the posterior distribution lives.
-            Projection onto this basis, followed by whitening with
-            ``lambda_r**(-1/2)``, yields a representation where the posterior is a
-            standard multivariate normal with identity covariance.
-
-         5. The log-likelihood of a theoretical spectrum ``x`` is evaluated in this
-            reduced space as:
-
-            log L = -0.5 * sum_i (y_data[i] - y_model[i])^2
-
-            where ``y_data`` is the whitened reduced-space representation of the
-            posterior mean spectrum, and ``y_model`` is the whitened projection of
-            the model spectrum.
-
-         - This reduced-rank formulation is mathematically equivalent to computing
-           the Gaussian likelihood in knot space, and avoids the numerical
-           instabilities associated with inverting a nearly singular covariance
-           matrix in the oversampled wavelength space.
-
-         - If ``x`` is provided as a scalar, it is broadcast to a constant spectrum
-           over the wavelength grid. Otherwise, it must be an array of the same
-           length as the wavelength grid.
-         """
-         self.model = m = exoiris
-         self.wavelength = wavelength
-         self.eps = eps
-
-         if kind == 'radius_ratio':
-             self.spectrum = m.radius_ratio_spectrum(wavelength)
-         elif kind == 'depth':
-             self.spectrum = m.area_ratio_spectrum(wavelength)
-         else:
-             raise ValueError('Unknown spectrum type: {}'.format(kind))
-
-         self.spmean = self.spectrum.mean(0)
-         self.spcov = cov(self.spectrum, rowvar=False)
-
-         evals, u = eigh(self.spcov)
-         tol = eps * evals.max()
-         keep = evals > tol
-         self.lambda_r, self.u_r = evals[keep], u[:, keep]
-         self.sqrt_inv_lambda_r = 1.0 / sqrt(self.lambda_r)
-         self.y_data = (self.u_r.T @ self.spmean) * self.sqrt_inv_lambda_r
-
-     def __call__(self, x):
-         if isinstance(x, float):
-             x = full(self.wavelength.size, x)
-         y_model = (self.u_r.T @ x) * self.sqrt_inv_lambda_r
-         return -0.5*sum((self.y_data - y_model)**2)
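The reduced-rank likelihood described in the removed module's docstring can be reproduced outside ExoIris. The sketch below simulates rank-deficient posterior samples (a spline with few knots evaluated on many wavelength bins), truncates the eigendecomposition, and evaluates the whitened Gaussian log-likelihood; all names and numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate posterior spectra that live in a 5-dimensional subspace of a
# 40-bin wavelength grid, mimicking a spline with 5 free knots.
n_samples, n_knots, n_wl = 2000, 5, 40
basis = rng.normal(size=(n_knots, n_wl))
samples = 0.1 + 1e-3 * rng.normal(size=(n_samples, n_knots)) @ basis

spmean = samples.mean(0)
spcov = np.cov(samples, rowvar=False)     # rank-deficient: rank ~ n_knots

# Keep only eigenvalues above eps * max(eigenvalue), then whiten.
eps = 1e-10
evals, u = np.linalg.eigh(spcov)
keep = evals > eps * evals.max()
lambda_r, u_r = evals[keep], u[:, keep]
sqrt_inv_lambda_r = 1.0 / np.sqrt(lambda_r)
y_data = (u_r.T @ spmean) * sqrt_inv_lambda_r

def log_likelihood(x):
    # Scalars broadcast to a flat spectrum, mirroring LogLikelihood.__call__.
    x = np.full(n_wl, x) if np.isscalar(x) else np.asarray(x)
    y_model = (u_r.T @ x) * sqrt_inv_lambda_r
    return -0.5 * np.sum((y_data - y_model) ** 2)
```

The retained dimensionality matches the number of independent knot directions, so the nearly singular 40x40 covariance never has to be inverted.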