mini_pole-0.3.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
mini_pole-0.3/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 lzphy
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
mini_pole-0.3/PKG-INFO ADDED
@@ -0,0 +1,156 @@
+ Metadata-Version: 2.2
+ Name: mini_pole
+ Version: 0.3
+ Summary: The Python code provided implements the matrix-valued version of the Minimal Pole Method (MPM) as described in Phys. Rev. B 110, 235131 (2024).
+ Home-page: https://github.com/Green-Phys/MiniPole
+ Author: Lei Zhang
+ Author-email: lzphy@umich.edu
+ License: MIT
+ Classifier: Programming Language :: Python :: 3
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: OS Independent
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy>=1.21.0
+ Requires-Dist: scipy>=1.7.0
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: license
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary
+
+ # Minimal Pole Method (MPM)
+ The Python code provided implements the matrix-valued version of the Minimal Pole Method (MPM) as described in [Phys. Rev. B 110, 235131 (2024)](https://doi.org/10.1103/PhysRevB.110.235131), extending the scalar-valued method introduced in [Phys. Rev. B 110, 035154 (2024)](https://doi.org/10.1103/PhysRevB.110.035154).
+
+ The input of the simulation is the Matsubara data $G(i \omega_n)$ sampled on a uniform grid $\lbrace i\omega_{0}, i\omega_{1}, \cdots, i\omega_{n_{\omega}-1} \rbrace$, where $\omega_n=\frac{(2n+1)\pi}{\beta}$ for fermions and $\frac{2n\pi}{\beta}$ for bosons, and $n_{\omega}$ is the total number of sampling points.
+
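The uniform sampling grid above can be generated in a few lines; a minimal numpy sketch (the `matsubara_grid` helper is illustrative and not part of the package):

```python
import numpy as np

def matsubara_grid(beta, n_w, statistics="fermion"):
    """Uniform Matsubara grid: omega_n = (2n+1)*pi/beta (fermions) or 2n*pi/beta (bosons)."""
    n = np.arange(n_w)
    if statistics == "fermion":
        return (2 * n + 1) * np.pi / beta
    return 2 * n * np.pi / beta

beta = 10.0
w_f = matsubara_grid(beta, 5, "fermion")  # first 5 fermionic frequencies
w_b = matsubara_grid(beta, 5, "boson")    # first 5 bosonic frequencies
```

The real-valued grid `w_f` is what the `w` argument of `MiniPole` expects; the data themselves are evaluated at `1j * w_f`.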
+ ## 1. Installation
+
+ ### Dependencies
+ - `numpy`
+ - `scipy`
+ - `matplotlib`
+
+ ### Installation Commands
+ 1. Via `setup.py`:
+ ```bash
+ python3 setup.py install
+ ```
+
+ 2. Or via `pip`:
+ ```bash
+ pip install mini_pole
+ ```
+
+ ## 2. Usage
+ ### i) The standard MPM is performed using the following command:
+
+ **p = MiniPole(G_w, w, n0 = "auto", n0_shift = 0, err = None, err_type = "abs", M = None, symmetry = False, G_symmetric = False, compute_const = False, plane = None, include_n0 = True, k_max = 999, ratio_max = 10)**
+
+ Parameters
+ ----------
+ 1. G_w : ndarray
+ An (n_w, n_orb, n_orb) or (n_w,) array containing the Matsubara data.
+ 2. w : ndarray
+ An (n_w,) array containing the corresponding real-valued Matsubara grid.
+ 3. n0 : int or str, default="auto"
+ If "auto", n0 is automatically selected with an additional shift specified by n0_shift.
+ If a non-negative integer is provided, n0 is fixed at that value.
+ 4. n0_shift : int, default=0
+ The shift applied to the automatically determined n0.
+ 5. err : float, optional
+ Error tolerance for calculations.
+ 6. err_type : str, default="abs"
+ Specifies the type of error: "abs" for absolute error or "rel" for relative error.
+ 7. M : int, optional
+ The number of poles in the final result. If not specified, the precision from the first ESPRIT is used to extract poles in the second ESPRIT.
+ 8. symmetry : bool, default=False
+ Determines whether to preserve up-down symmetry.
+ 9. G_symmetric : bool, default=False
+ If True, the Matsubara data will be symmetrized such that G_{ij}(z) = G_{ji}(z).
+ 10. compute_const : bool, default=False
+ Determines whether to compute the constant term in G(z) = sum_l Al / (z - xl) + const.
+ If False, the constant term is fixed at 0.
+ 11. plane : str, optional
+ Specifies whether to use the original z-plane or the mapped w-plane to compute pole weights.
+ 12. include_n0 : bool, default=True
+ Determines whether to include the first n0 input points when weights are calculated in the z-plane.
+ 13. k_max : int, default=999
+ The maximum number of contour integrals.
+ 14. ratio_max : float, default=10
+ The maximum ratio of oscillation when automatically choosing n0.
+
+ Returns
+ -------
+ Minimal pole representation of the given data.
+ Pole weights are stored in p.pole_weight, a numpy array of shape (M, n_orb, n_orb).
+ Shared pole locations are stored in p.pole_location, a numpy array of shape (M,).
+
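Once the poles are found, the representation G(z) = sum_l Al / (z - xl) + const can be evaluated anywhere off the real axis; a minimal numpy sketch using hypothetical recovered values (stand-ins for `p.pole_weight` and `p.pole_location` from a scalar-valued run):

```python
import numpy as np

# Hypothetical output of a scalar-valued run with M = 2 poles.
pole_weight = np.array([0.4, 0.6])      # A_l
pole_location = np.array([-1.0, 2.0])   # x_l

def eval_pole_rep(z, weight, location, const=0.0):
    """Evaluate G(z) = sum_l A_l / (z - x_l) + const on an array of points z."""
    z = np.atleast_1d(z)
    return (weight[None, :] / (z[:, None] - location[None, :])).sum(axis=1) + const

beta = 10.0
iw = 1j * (2 * np.arange(5) + 1) * np.pi / beta  # fermionic Matsubara points
G_w = eval_pole_rep(iw, pole_weight, pole_location)
```

The same formula applies in the matrix-valued case with `weight` of shape (M, n_orb, n_orb), summed over the pole index.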
+ ### ii) The MPM-DLR algorithm is performed using the following command:
+
+ **p = MiniPoleDLR(Al_dlr, xl_dlr, beta, n0, nmax = None, err = None, err_type = "abs", M = None, symmetry = False, k_max=200, Lfactor = 0.4)**
+
+ Parameters
+ ----------
+ 1. Al_dlr (numpy.ndarray): DLR coefficients, either of shape (r,) or (r, n_orb, n_orb).
+ 2. xl_dlr (numpy.ndarray): DLR grid for the real frequency, an array of shape (r,).
+ 3. beta (float): Inverse temperature of the system (1/kT).
+ 4. n0 (int): Number of initial points to discard, typically in the range (0, 10).
+ 5. nmax (int): Cutoff for the Matsubara frequency when symmetry is False.
+ 6. err (float): Error tolerance for calculations.
+ 7. err_type (str): Specifies the type of error, "abs" for absolute error or "rel" for relative error.
+ 8. M (int): Specifies the number of poles to be recovered.
+ 9. symmetry (bool): Whether to impose up-down symmetry (True or False).
+ 10. k_max (int): Number of moments to be calculated.
+ 11. Lfactor (float): Ratio of L/N in the ESPRIT algorithm.
+
+ Returns
+ -------
+ Minimal pole representation of the given data.
+ Pole weights are stored in p.pole_weight, a numpy array of shape (M, n_orb, n_orb).
+ Shared pole locations are stored in p.pole_location, a numpy array of shape (M,).
+
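The DLR inputs are simply pole weights on real-frequency nodes; a minimal sketch of assembling matrix-valued inputs with the documented shapes and evaluating the corresponding Matsubara data (all values are illustrative, not taken from the package):

```python
import numpy as np

r, n_orb = 4, 2
rng = np.random.default_rng(0)

xl_dlr = np.linspace(-5.0, 5.0, r)                    # (r,) real-frequency DLR nodes
Al_dlr = rng.normal(size=(r, n_orb, n_orb))           # (r, n_orb, n_orb) DLR coefficients
Al_dlr = 0.5 * (Al_dlr + Al_dlr.transpose(0, 2, 1))   # symmetrize orbital indices
beta = 100.0                                          # inverse temperature 1/kT

# The DLR representation evaluated on the fermionic Matsubara grid,
# G(i w_n) = sum_l Al / (i w_n - xl), gives data of shape (n_w, n_orb, n_orb).
iw = 1j * (2 * np.arange(6) + 1) * np.pi / beta
G_w = (Al_dlr[None, :, :, :] / (iw[:, None, None, None] - xl_dlr[None, :, None, None])).sum(axis=1)
```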
+ ## 3. Examples
+
+ The scripts in the *examples* folder demonstrate the usage of MPM and MPM-DLR.
+
+ ### i) MPM Algorithm
+
+ The *examples/MPM* folder includes a Jupyter notebook that demonstrates how to use `MiniPole` to recover synthetic spectral functions. You can modify the lambda expression in the `GreenFunc` class to recover a different spectrum, but please remember to update the lower and upper bounds (x_min and x_max) of the spectrum accordingly. Additional details will be provided in the future.
+
+ ### ii) MPM-DLR Algorithm
+
+ The *examples/MPM_DLR* folder contains scripts to recover the band structure of Si, as shown in the middle panel of Fig. 9 in [Phys. Rev. B 110, 235131 (2024)](https://doi.org/10.1103/PhysRevB.110.235131).
+
+ #### Steps:
+
+ a) Download the input data file [Si_dlr.h5](https://drive.google.com/file/d/1_bNvbgOHewiujHYEcf-CCpGxlZP9cRw_/view?usp=drive_link) to the *examples/MPM_DLR/* directory.
+
+ b) Obtain the recovered poles by running **python3 cal_band_dlr.py --obs=`<option>`**, where **`<option>`** can be "S" (self-energy), "Gii" (scalar-valued Green's function), or "G" (matrix-valued Green's function).
+
+ c) Plot the band structure by running **python3 plt_band_dlr.py --obs=`<option>`**.
+
+ #### Note:
+
+ a) Reference runtime on a single core of a laptop (an Apple M1 Max chip, for example): 13 seconds for "Gii" and 160 seconds for both "G" and "S".
+
+ b) Parallel computation is supported in **cal_band_dlr.py** to speed up the process on multiple cores. Use the following command: **mpirun -n `<num_cores>` python3 cal_band_dlr.py --obs=`<option>`**, where **`<num_cores>`** is the number of cores and **`<option>`** is "S", "Gii", or "G".
+
+ c) Full parameters for **cal_band_dlr.py**:
+
+ - `--obs` (str): Observation type used in the script. Default is `"S"`.
+ - `--n0` (int): Parameter $n_0$ as described in [Phys. Rev. B 110, 235131 (2024)](https://doi.org/10.1103/PhysRevB.110.235131).
+ - `--err` (float): Error tolerance for computations. Default is `1.e-10`.
+ - `--symmetry` (bool): Specifies whether to preserve up-down symmetry in calculations.
+
+ d) Full parameters for **plt_band_dlr.py**:
+
+ - `--obs` (str): Observation type used in the script. Default is `"S"`.
+ - `--w_min` (float): Lower bound of the real frequency in eV. Default is `-12`.
+ - `--w_max` (float): Upper bound of the real frequency in eV. Default is `12`.
+ - `--n_w` (int): Number of frequencies between `w_min` and `w_max`. Default is `200`.
+ - `--eta` (float): Broadening parameter. Default is `0.005`.
@@ -0,0 +1,5 @@
+ from .con_map import *
+ from .esprit import *
+ from .green_func import *
+ from .mini_pole_dlr import *
+ from .mini_pole import *
@@ -0,0 +1,167 @@
+ import numpy as np
+
+ class ConMapGeneric:
+     '''
+     Generic holomorphic mapping which works for any case.
+     '''
+     def __init__(self, w_m, dw_h, branch_in = True):
+         r'''
+         Initialize the class with w_m and dw_h, which correspond to $\omega_{\rm m}$ and $\Delta \omega_{\rm h}$ in the paper, respectively.
+         Points in the z plane are mapped to the inside (outside) of the unit disk in the w plane when branch_in is True (False).
+         '''
+         assert dw_h.real > 0.0 and dw_h.imag == 0.0
+         self.w_m = w_m
+         self.dw_h = dw_h
+         self.branch_in = branch_in
+         self.w_inf = [0.0]
+
+     def cal_z(self, w):
+         '''
+         Intermediate function of z(w) which only works for a single point.
+         '''
+         if w in self.w_inf:
+             return np.inf
+         else:
+             return 0.5 * self.dw_h * (w - 1.0 / w) + 1j * self.w_m
+
+     def cal_w(self, z):
+         '''
+         Intermediate function of w(z) which only works for a single point.
+         '''
+         x = (z - 1j * self.w_m) / self.dw_h
+         w = x - np.sqrt(x ** 2.0 + 1.0)
+         if self.branch_in:
+             if np.absolute(w) > 1.0:
+                 w = 2.0 * x - w
+         else:
+             if np.absolute(w) < 1.0:
+                 w = 2.0 * x - w
+         return w
+
+     def cal_dz(self, w):
+         '''
+         Intermediate function of dz/dw(w) which only works for a single point.
+         '''
+         if w in self.w_inf:
+             return np.inf
+         else:
+             return 0.5 * self.dw_h * (1.0 + 1.0 / w ** 2.0)
+
+     def z(self, w):
+         '''
+         Calculate z from w.
+         '''
+         return np.vectorize(self.cal_z)(w)
+
+     def w(self, z):
+         '''
+         Calculate w from z.
+         '''
+         return np.vectorize(self.cal_w)(z)
+
+     def dz(self, w):
+         '''
+         Calculate dz/dw at value w.
+         '''
+         return np.vectorize(self.cal_dz)(w)
+
+ class ConMapGapless:
+     '''
+     Conformal mapping which works for both gapless and gapped cases.
+     '''
+     def __init__(self, w_min):
+         assert w_min > 0.0
+         self.w_min = w_min
+         self.w_inf = [-1.0, 1.0]
+
+     def cal_z(self, w):
+         assert np.abs(w) < 1.0 + 1.e-15
+         if w in self.w_inf:
+             return np.inf
+         else:
+             return 2.0 * self.w_min * w / (1.0 - w * w)
+
+     def cal_w(self, z):
+         if z == 0.0:
+             w = 0.0
+         else:
+             w = self.w_min * (np.sqrt(1.0 / (z * z) + 1.0 / (self.w_min * self.w_min)) - 1.0 / z)
+             if np.absolute(w) > 1.0:
+                 w = -w - 2.0 * self.w_min / z
+         return w
+
+     def cal_dz(self, w):
+         assert np.abs(w) < 1.0 + 1.e-15
+         if w in self.w_inf:
+             return np.inf
+         else:
+             return 2.0 * self.w_min * (1.0 + w ** 2) / (1.0 - w ** 2) ** 2
+
+     def z(self, w):
+         return np.vectorize(self.cal_z)(w)
+
+     def w(self, z):
+         return np.vectorize(self.cal_w)(z)
+
+     def dz(self, w):
+         return np.vectorize(self.cal_dz)(w)
+
+ class ConMapRet:
+     '''
+     Holomorphic mapping for the retarded Green's function.
+     '''
+     def __init__(self, w_m, dw_h):
+         '''
+         Initialize the class with w_m and dw_h.
+         Points in the z plane are mapped to the inside of the unit disk in the w plane.
+         '''
+         assert dw_h.real > 0.0 and dw_h.imag == 0.0
+         self.w_m = w_m
+         self.dw_h = dw_h
+         self.w_inf = [0.0]
+
+     def cal_z(self, w):
+         '''
+         Intermediate function of z(w) which only works for a single point.
+         '''
+         if w in self.w_inf:
+             return np.inf
+         else:
+             return 0.5 * self.dw_h * (w + 1.0 / w) + self.w_m
+
+     def cal_w(self, z):
+         '''
+         Intermediate function of w(z) which only works for a single point.
+         '''
+         x = (z - self.w_m) / self.dw_h
+         w = x + np.sqrt(x ** 2.0 - 1.0 + 0j)
+         if np.absolute(w) > 1.0:
+             w = 2.0 * x - w
+         return w
+
+     def cal_dz(self, w):
+         '''
+         Intermediate function of dz/dw(w) which only works for a single point.
+         '''
+         if w in self.w_inf:
+             return np.inf
+         else:
+             return 0.5 * self.dw_h * (1.0 - 1.0 / w ** 2.0)
+
+     def z(self, w):
+         '''
+         Calculate z from w.
+         '''
+         return np.vectorize(self.cal_z)(w)
+
+     def w(self, z):
+         '''
+         Calculate w from z.
+         '''
+         return np.vectorize(self.cal_w)(z)
+
+     def dz(self, w):
+         '''
+         Calculate dz/dw at value w.
+         '''
+         return np.vectorize(self.cal_dz)(w)
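The mappings above can be sanity-checked numerically; a self-contained sketch of the ConMapGeneric round trip w(z(w)) with illustrative parameters (the formulas are restated inline so the snippet runs without the package):

```python
import numpy as np

w_m, dw_h = 0.5, 2.0  # illustrative map parameters

def z_of_w(w):
    # z(w) = 0.5 * dw_h * (w - 1/w) + i * w_m, as in ConMapGeneric.cal_z
    return 0.5 * dw_h * (w - 1.0 / w) + 1j * w_m

def w_of_z(z):
    # Inverse branch chosen inside the unit disk, as in ConMapGeneric.cal_w
    x = (z - 1j * w_m) / dw_h
    w = x - np.sqrt(x ** 2 + 1.0 + 0j)
    if abs(w) > 1.0:
        w = 2.0 * x - w  # the two roots of w^2 - 2xw - 1 = 0 sum to 2x
    return w

w0 = 0.3 * np.exp(0.7j)             # a point inside the unit disk
roundtrip = w_of_z(z_of_w(w0))      # should return w0 up to roundoff
```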
@@ -0,0 +1,154 @@
+ import numpy as np
+ from kneed import KneeLocator
+ import warnings
+
+ class ESPRIT:
+     '''
+     Matrix version of the ESPRIT method for approximating functions with complex exponentials.
+     '''
+     def __init__(self, h_k, x_min = 0, x_max = 1, err = None, err_type = "abs", M = None, Lfactor = 0.4, tol = 1.e-15, ctrl_ratio = 10):
+         '''
+         Initialize with function values sampled on a uniform grid from x_min to x_max.
+         '''
+         h_k = h_k.reshape(h_k.shape[0], -1)
+         self.N = h_k.shape[0]
+         self.dim = h_k.shape[1]
+         if Lfactor < 1.0 / 3.0 or Lfactor > 0.5:
+             warnings.warn("It is suggested to set 1 / 3 <= Lfactor <= 1 / 2.")
+         self.L = int(Lfactor * (self.N - 1))
+         assert (self.N - self.L) >= (self.L + 1)
+         assert x_min < x_max
+         assert err_type in ["abs", "rel"]
+
+         if np.max(np.abs(h_k.imag)) < tol:
+             self.type = "real"
+             self.h_k = np.array(h_k.real)
+         elif np.max(np.abs(h_k.real)) < tol:
+             self.type = "imag"
+             self.h_k = np.array(1j * h_k.imag)
+         else:
+             self.type = "cplx"
+             self.h_k = np.array(h_k)
+         self.x_min = x_min
+         self.x_max = x_max
+         self.x_k = np.linspace(self.x_min, self.x_max, self.N)
+         self.err = err
+         self.err_type = err_type
+         self.M = M
+         self.tol = tol
+
+         # Note: the data type must be complex even if the input is real; otherwise the result might be unstable.
+         self.H = np.zeros((self.dim * (self.N - self.L), self.L + 1), dtype=np.complex128)
+         for l in range(self.N - self.L):
+             self.H[(self.dim * l):(self.dim * (l + 1)), :] = self.h_k[l:(l + self.L + 1)].T
+
+         # For some specific versions of numpy, there is a small chance that the SVD does not converge.
+         while True:
+             try:
+                 _, self.S, self.W = np.linalg.svd(self.H, full_matrices=False)
+                 break
+             except np.linalg.LinAlgError:
+                 # Reconstruct the Hankel matrix with a smaller L and retry.
+                 self.L -= 1
+                 self.H = np.zeros((self.dim * (self.N - self.L), self.L + 1), dtype=np.complex128)
+                 for l in range(self.N - self.L):
+                     self.H[(self.dim * l):(self.dim * (l + 1)), :] = self.h_k[l:(l + self.L + 1)].T
+
+         if self.M is None:
+             if self.err is not None:
+                 self.find_M_with_err()
+             else:
+                 self.find_M_with_exp_decay()
+             if self.S[self.M] / self.S[0] < 1.e-14:
+                 self.err = 1.e-14
+                 self.err_type = "rel"
+                 self.find_M_with_err()
+         else:
+             self.M = min(self.M, self.S.size - 1)
+         while True:
+             self.sigma = self.S[self.M]
+             self.W_0 = self.W[:self.M, :-1]
+             self.W_1 = self.W[:self.M, 1:]
+             self.F_M = np.linalg.pinv(self.W_0.T) @ self.W_1.T
+
+             self.gamma = np.linalg.eigvals(self.F_M)
+             self.find_omega()
+             self.cal_err()
+
+             if self.err_max < max(ctrl_ratio * self.sigma, 1.e-14 * self.S[0]):
+                 break
+             else:
+                 self.M -= 1
+                 if self.M == 0:
+                     raise Exception("Could not find a controlled approximation!")
+
+     def find_M_with_err(self):
+         '''
+         Find the rank M for the given error tolerance.
+         '''
+         cutoff = self.err if self.err_type == "abs" else self.S[0] * self.err
+         for idx in range(self.S.size):
+             if self.S[idx] < cutoff:
+                 break
+         if self.S[idx] >= cutoff:
+             warnings.warn("err is set too small for the given data!")
+         self.M = idx
+
+     def find_M_with_exp_decay(self):
+         '''
+         Find the maximum index for the exponentially decaying region.
+         '''
+         kneedle = KneeLocator(np.arange(self.S.size), np.log(self.S), S=1, curve='convex', direction='decreasing')
+         self.dlogS = np.abs(np.diff(np.log(self.S[:(kneedle.knee + 1)]), n=1))
+         self.M = np.where(self.dlogS > self.dlogS.max() / 3)[0][-1] + 1
+
+     def find_omega(self):
+         '''
+         Find the weights of the corresponding nodes gamma.
+         '''
+         V = np.zeros((self.h_k.shape[0], self.M), dtype=np.complex128)
+         for i in range(V.shape[0]):
+             V[i, :] = self.gamma ** i
+         # Using the least-squares solution is more stable than using the pseudo-inverse.
+         # Setting rcond=None (the default) sometimes leads to incorrect results for high-precision input.
+         self.omega, residuals, rank, s = np.linalg.lstsq(V, self.h_k, rcond=-1)
+         self.lstsq_quality = (residuals, rank, s)
+
+     def cal_err(self):
+         h_k_approx = self.get_value(self.x_k)
+         self.err_max = np.abs(h_k_approx - self.h_k).max(axis=0).max()
+         self.err_ave = np.abs(h_k_approx - self.h_k).mean(axis=0).max()
+
+     def get_value_indiv(self, x, col):
+         '''
+         Get the approximated function value at point x for column col.
+         '''
+         assert col >= 0 and col < self.dim
+         x0 = (x - self.x_min) / (self.x_max - self.x_min)
+         if np.any(x0 < -1.e-12) or np.any(x0 > 1.0 + 1.e-12):
+             warnings.warn("This approximation only has error control for x in [x_min, x_max]!")
+
+         if np.isscalar(x0):
+             V = self.gamma ** ((self.h_k.shape[0] - 1) * x0)
+             value = np.dot(V, self.omega[:, col])
+         else:
+             V = np.zeros((x0.size, self.gamma.size), dtype=np.complex128)
+             for i in range(V.shape[0]):
+                 V[i, :] = self.gamma ** ((self.h_k.shape[0] - 1) * x0[i])
+             value = np.dot(V, self.omega[:, col])
+         return value if self.type == "cplx" else value.real if self.type == "real" else 1j * value.imag
+
+     def get_value(self, x):
+         '''
+         Get the approximated function value at point x.
+         '''
+         x0 = (x - self.x_min) / (self.x_max - self.x_min)
+         if np.any(x0 < -1.e-12) or np.any(x0 > 1.0 + 1.e-12):
+             warnings.warn("This approximation only has error control for x in [x_min, x_max]!")
+
+         if np.isscalar(x0):
+             V = self.gamma ** ((self.h_k.shape[0] - 1) * x0)
+             value = np.dot(V, self.omega)
+         else:
+             V = np.zeros((x0.size, self.gamma.size), dtype=np.complex128)
+             for i in range(V.shape[0]):
+                 V[i, :] = self.gamma ** ((self.h_k.shape[0] - 1) * x0[i])
+             value = np.dot(V, self.omega)
+         return value if self.type == "cplx" else value.real if self.type == "real" else 1j * value.imag
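The core subspace step above (SVD of the Hankel matrix, then eigenvalues of the shift operator between the two windows of the signal subspace) can be illustrated on a toy scalar signal; a self-contained sketch, independent of the class:

```python
import numpy as np

# Toy signal: h_k = 2 * 0.5^k + 3 * 0.8^k, a sum of two decaying exponentials.
N, L = 20, 8
k = np.arange(N)
h = 2.0 * 0.5 ** k + 3.0 * 0.8 ** k

# Hankel matrix H[l, j] = h[l + j], mirroring the construction in ESPRIT.__init__.
H = np.array([h[l:l + L + 1] for l in range(N - L)], dtype=np.complex128)

# Signal subspace from the top M right singular vectors.
M = 2
_, S, W = np.linalg.svd(H, full_matrices=False)
W_0, W_1 = W[:M, :-1], W[:M, 1:]
F = np.linalg.pinv(W_0.T) @ W_1.T

# Eigenvalues of F recover the exponential nodes {0.5, 0.8}.
gamma = np.sort(np.linalg.eigvals(F).real)
```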