pyerrors 2.13.0__tar.gz → 2.14.0__tar.gz

This diff shows the changes between publicly available package versions as they appear in their respective public registries. It is provided for informational purposes only.
Files changed (35)
  1. {pyerrors-2.13.0 → pyerrors-2.14.0}/PKG-INFO +15 -8
  2. {pyerrors-2.13.0 → pyerrors-2.14.0}/README.md +1 -6
  3. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/__init__.py +9 -9
  4. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/correlators.py +72 -72
  5. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/dirac.py +3 -3
  6. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/fits.py +3 -1
  7. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/__init__.py +8 -8
  8. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/dobs.py +4 -3
  9. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/hadrons.py +2 -2
  10. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/json.py +9 -5
  11. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/openQCD.py +1 -1
  12. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/sfcf.py +10 -9
  13. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/obs.py +39 -31
  14. pyerrors-2.14.0/pyerrors/version.py +1 -0
  15. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors.egg-info/PKG-INFO +15 -8
  16. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyproject.toml +3 -0
  17. pyerrors-2.13.0/pyerrors/version.py +0 -1
  18. {pyerrors-2.13.0 → pyerrors-2.14.0}/LICENSE +0 -0
  19. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/covobs.py +0 -0
  20. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/bdio.py +0 -0
  21. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/misc.py +0 -0
  22. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/pandas.py +0 -0
  23. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/input/utils.py +0 -0
  24. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/integrate.py +0 -0
  25. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/linalg.py +0 -0
  26. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/misc.py +0 -0
  27. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/mpm.py +0 -0
  28. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/roots.py +0 -0
  29. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors/special.py +0 -0
  30. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors.egg-info/SOURCES.txt +0 -0
  31. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors.egg-info/dependency_links.txt +0 -0
  32. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors.egg-info/requires.txt +0 -0
  33. {pyerrors-2.13.0 → pyerrors-2.14.0}/pyerrors.egg-info/top_level.txt +0 -0
  34. {pyerrors-2.13.0 → pyerrors-2.14.0}/setup.cfg +0 -0
  35. {pyerrors-2.13.0 → pyerrors-2.14.0}/setup.py +0 -0
@@ -1,6 +1,6 @@
- Metadata-Version: 2.1
+ Metadata-Version: 2.2
  Name: pyerrors
- Version: 2.13.0
+ Version: 2.14.0
  Summary: Error propagation and statistical analysis for Monte Carlo simulations
  Home-page: https://github.com/fjosw/pyerrors
  Author: Fabian Joswig
@@ -39,8 +39,20 @@ Requires-Dist: pytest-benchmark; extra == "test"
  Requires-Dist: hypothesis; extra == "test"
  Requires-Dist: nbmake; extra == "test"
  Requires-Dist: flake8; extra == "test"
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: license
+ Dynamic: project-url
+ Dynamic: provides-extra
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary

- [![pytest](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml/badge.svg)](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml) [![](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![arXiv](https://img.shields.io/badge/arXiv-2209.14371-b31b1b.svg)](https://arxiv.org/abs/2209.14371) [![DOI](https://img.shields.io/badge/DOI-10.1016%2Fj.cpc.2023.108750-blue)](https://doi.org/10.1016/j.cpc.2023.108750)
+ [![](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![arXiv](https://img.shields.io/badge/arXiv-2209.14371-b31b1b.svg)](https://arxiv.org/abs/2209.14371) [![DOI](https://img.shields.io/badge/DOI-10.1016%2Fj.cpc.2023.108750-blue)](https://doi.org/10.1016/j.cpc.2023.108750)
  # pyerrors
  `pyerrors` is a python framework for error computation and propagation of Markov chain Monte Carlo data from lattice field theory and statistical mechanics simulations.

@@ -56,11 +68,6 @@ Install the most recent release using pip and [pypi](https://pypi.org/project/py
  python -m pip install pyerrors # Fresh install
  python -m pip install -U pyerrors # Update
  ```
- Install the most recent release using conda and [conda-forge](https://anaconda.org/conda-forge/pyerrors):
- ```bash
- conda install -c conda-forge pyerrors # Fresh install
- conda update -c conda-forge pyerrors # Update
- ```

  ## Contributing
  We appreciate all contributions to the code, the documentation and the examples. If you want to get involved please have a look at our [contribution guideline](https://github.com/fjosw/pyerrors/blob/develop/CONTRIBUTING.md).
@@ -1,4 +1,4 @@
- [![pytest](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml/badge.svg)](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml) [![](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![arXiv](https://img.shields.io/badge/arXiv-2209.14371-b31b1b.svg)](https://arxiv.org/abs/2209.14371) [![DOI](https://img.shields.io/badge/DOI-10.1016%2Fj.cpc.2023.108750-blue)](https://doi.org/10.1016/j.cpc.2023.108750)
+ [![](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![arXiv](https://img.shields.io/badge/arXiv-2209.14371-b31b1b.svg)](https://arxiv.org/abs/2209.14371) [![DOI](https://img.shields.io/badge/DOI-10.1016%2Fj.cpc.2023.108750-blue)](https://doi.org/10.1016/j.cpc.2023.108750)
  # pyerrors
  `pyerrors` is a python framework for error computation and propagation of Markov chain Monte Carlo data from lattice field theory and statistical mechanics simulations.

@@ -14,11 +14,6 @@ Install the most recent release using pip and [pypi](https://pypi.org/project/py
  python -m pip install pyerrors # Fresh install
  python -m pip install -U pyerrors # Update
  ```
- Install the most recent release using conda and [conda-forge](https://anaconda.org/conda-forge/pyerrors):
- ```bash
- conda install -c conda-forge pyerrors # Fresh install
- conda update -c conda-forge pyerrors # Update
- ```

  ## Contributing
  We appreciate all contributions to the code, the documentation and the examples. If you want to get involved please have a look at our [contribution guideline](https://github.com/fjosw/pyerrors/blob/develop/CONTRIBUTING.md).
@@ -481,12 +481,12 @@ from .obs import *
  from .correlators import *
  from .fits import *
  from .misc import *
- from . import dirac
- from . import input
- from . import linalg
- from . import mpm
- from . import roots
- from . import integrate
- from . import special
-
- from .version import __version__
+ from . import dirac as dirac
+ from . import input as input
+ from . import linalg as linalg
+ from . import mpm as mpm
+ from . import roots as roots
+ from . import integrate as integrate
+ from . import special as special
+
+ from .version import __version__ as __version__
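The redundant `import x as x` spelling introduced above is the PEP 484 convention for marking intentional re-exports; at runtime it is a no-op. A minimal sketch, using the stdlib `json` module as a stand-in for the pyerrors submodules:

```python
import importlib

# "import json as json" and "import json" bind the same module object at
# runtime; the redundant alias only tells type checkers (e.g. mypy with
# --no-implicit-reexport) that the name is re-exported on purpose.
import json as json

assert importlib.import_module("json") is json
```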
@@ -101,7 +101,7 @@ class Corr:
  self.N = 1
  elif all([isinstance(item, np.ndarray) or item is None for item in data_input]) and any([isinstance(item, np.ndarray) for item in data_input]):
  self.content = data_input
- noNull = [a for a in self.content if not (a is None)] # To check if the matrices are correct for all undefined elements
+ noNull = [a for a in self.content if a is not None] # To check if the matrices are correct for all undefined elements
  self.N = noNull[0].shape[0]
  if self.N > 1 and noNull[0].shape[0] != noNull[0].shape[1]:
  raise ValueError("Smearing matrices are not NxN.")
@@ -141,7 +141,7 @@ class Corr:
  def gamma_method(self, **kwargs):
  """Apply the gamma method to the content of the Corr."""
  for item in self.content:
- if not (item is None):
+ if item is not None:
  if self.N == 1:
  item[0].gamma_method(**kwargs)
  else:
@@ -159,7 +159,7 @@ class Corr:
  By default it will return the lowest source, which usually means unsmeared-unsmeared (0,0), but it does not have to
  """
  if self.N == 1:
- raise Exception("Trying to project a Corr, that already has N=1.")
+ raise ValueError("Trying to project a Corr, that already has N=1.")

  if vector_l is None:
  vector_l, vector_r = np.asarray([1.] + (self.N - 1) * [0.]), np.asarray([1.] + (self.N - 1) * [0.])
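The `not (a is None)` → `a is not None` changes above are pure style fixes; both forms are equivalent, with the latter being the PEP 8 spelling. A quick sketch of the filtering pattern, with made-up data:

```python
# Filtering undefined timeslices, as in the noNull comprehension in the diff.
content = [None, [1.0], None, [2.0]]
noNull = [a for a in content if a is not None]  # PEP 8 spelling
assert noNull == [[1.0], [2.0]]
assert noNull == [a for a in content if not (a is None)]  # equivalent form
```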
@@ -167,16 +167,16 @@ class Corr:
  vector_r = vector_l
  if isinstance(vector_l, list) and not isinstance(vector_r, list):
  if len(vector_l) != self.T:
- raise Exception("Length of vector list must be equal to T")
+ raise ValueError("Length of vector list must be equal to T")
  vector_r = [vector_r] * self.T
  if isinstance(vector_r, list) and not isinstance(vector_l, list):
  if len(vector_r) != self.T:
- raise Exception("Length of vector list must be equal to T")
+ raise ValueError("Length of vector list must be equal to T")
  vector_l = [vector_l] * self.T

  if not isinstance(vector_l, list):
  if not vector_l.shape == vector_r.shape == (self.N,):
- raise Exception("Vectors are of wrong shape!")
+ raise ValueError("Vectors are of wrong shape!")
  if normalize:
  vector_l, vector_r = vector_l / np.sqrt((vector_l @ vector_l)), vector_r / np.sqrt(vector_r @ vector_r)
  newcontent = [None if _check_for_none(self, item) else np.asarray([vector_l.T @ item @ vector_r]) for item in self.content]
@@ -201,7 +201,7 @@ class Corr:
  Second index to be picked.
  """
  if self.N == 1:
- raise Exception("Trying to pick item from projected Corr")
+ raise ValueError("Trying to pick item from projected Corr")
  newcontent = [None if (item is None) else item[i, j] for item in self.content]
  return Corr(newcontent)

@@ -212,8 +212,8 @@ class Corr:
  timeslice and the error on each timeslice.
  """
  if self.N != 1:
- raise Exception("Can only make Corr[N=1] plottable")
- x_list = [x for x in range(self.T) if not self.content[x] is None]
+ raise ValueError("Can only make Corr[N=1] plottable")
+ x_list = [x for x in range(self.T) if self.content[x] is not None]
  y_list = [y[0].value for y in self.content if y is not None]
  y_err_list = [y[0].dvalue for y in self.content if y is not None]

@@ -222,9 +222,9 @@ class Corr:
  def symmetric(self):
  """ Symmetrize the correlator around x0=0."""
  if self.N != 1:
- raise Exception('symmetric cannot be safely applied to multi-dimensional correlators.')
+ raise ValueError('symmetric cannot be safely applied to multi-dimensional correlators.')
  if self.T % 2 != 0:
- raise Exception("Can not symmetrize odd T")
+ raise ValueError("Can not symmetrize odd T")

  if self.content[0] is not None:
  if np.argmax(np.abs([o[0].value if o is not None else 0 for o in self.content])) != 0:
@@ -237,7 +237,7 @@ class Corr:
  else:
  newcontent.append(0.5 * (self.content[t] + self.content[self.T - t]))
  if (all([x is None for x in newcontent])):
- raise Exception("Corr could not be symmetrized: No redundant values")
+ raise ValueError("Corr could not be symmetrized: No redundant values")
  return Corr(newcontent, prange=self.prange)

  def anti_symmetric(self):
@@ -245,7 +245,7 @@ class Corr:
  if self.N != 1:
  raise TypeError('anti_symmetric cannot be safely applied to multi-dimensional correlators.')
  if self.T % 2 != 0:
- raise Exception("Can not symmetrize odd T")
+ raise ValueError("Can not symmetrize odd T")

  test = 1 * self
  test.gamma_method()
@@ -259,7 +259,7 @@ class Corr:
  else:
  newcontent.append(0.5 * (self.content[t] - self.content[self.T - t]))
  if (all([x is None for x in newcontent])):
- raise Exception("Corr could not be symmetrized: No redundant values")
+ raise ValueError("Corr could not be symmetrized: No redundant values")
  return Corr(newcontent, prange=self.prange)

  def is_matrix_symmetric(self):
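The symmetrized and anti-symmetrized correlators above combine each timeslice with its mirror image, C_sym(t) = (C(t) ± C(T − t)) / 2. A toy check with plain floats and a hypothetical T = 4:

```python
T = 4
c = [1.0, 2.0, 3.0, 2.5]  # hypothetical periodic correlator values

# Symmetric and anti-symmetric combinations for t = 1 .. T-1, as in the diff.
c_sym = [c[0]] + [0.5 * (c[t] + c[T - t]) for t in range(1, T)]
c_asym = [c[0]] + [0.5 * (c[t] - c[T - t]) for t in range(1, T)]

assert c_sym == [1.0, 2.25, 3.0, 2.25]
assert c_asym == [1.0, -0.25, 0.0, 0.25]
```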
@@ -292,7 +292,7 @@ class Corr:
  def matrix_symmetric(self):
  """Symmetrizes the correlator matrices on every timeslice."""
  if self.N == 1:
- raise Exception("Trying to symmetrize a correlator matrix, that already has N=1.")
+ raise ValueError("Trying to symmetrize a correlator matrix, that already has N=1.")
  if self.is_matrix_symmetric():
  return 1.0 * self
  else:
@@ -336,10 +336,10 @@ class Corr:
  '''

  if self.N == 1:
- raise Exception("GEVP methods only works on correlator matrices and not single correlators.")
+ raise ValueError("GEVP methods only works on correlator matrices and not single correlators.")
  if ts is not None:
  if (ts <= t0):
- raise Exception("ts has to be larger than t0.")
+ raise ValueError("ts has to be larger than t0.")

  if "sorted_list" in kwargs:
  warnings.warn("Argument 'sorted_list' is deprecated, use 'sort' instead.", DeprecationWarning)
@@ -371,9 +371,9 @@ class Corr:

  if sort is None:
  if (ts is None):
- raise Exception("ts is required if sort=None.")
+ raise ValueError("ts is required if sort=None.")
  if (self.content[t0] is None) or (self.content[ts] is None):
- raise Exception("Corr not defined at t0/ts.")
+ raise ValueError("Corr not defined at t0/ts.")
  Gt = _get_mat_at_t(ts)
  reordered_vecs = _GEVP_solver(Gt, G0, method=method, chol_inv=chol_inv)
  if kwargs.get('auto_gamma', False) and vector_obs:
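The GEVP code above solves the generalized eigenvalue problem C(ts) v = λ C(t0) v, which is why `ts > t0` is required. A numpy-only toy sketch (pyerrors uses its own `_GEVP_solver`; the matrices here are made up):

```python
import numpy as np

# Toy 2x2 problem: with C(t0) equal to the identity the GEVP reduces to an
# ordinary eigenvalue problem, so the eigenvalues are just those of C(ts).
C_t0 = np.eye(2)
C_ts = np.array([[2.0, 0.0], [0.0, 1.0]])
evals = np.linalg.eigvals(np.linalg.solve(C_t0, C_ts))
assert np.allclose(sorted(evals.real), [1.0, 2.0])
```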
@@ -391,14 +391,14 @@ class Corr:
  all_vecs.append(None)
  if sort == "Eigenvector":
  if ts is None:
- raise Exception("ts is required for the Eigenvector sorting method.")
+ raise ValueError("ts is required for the Eigenvector sorting method.")
  all_vecs = _sort_vectors(all_vecs, ts)

  reordered_vecs = [[v[s] if v is not None else None for v in all_vecs] for s in range(self.N)]
  if kwargs.get('auto_gamma', False) and vector_obs:
  [[[o.gm() for o in evn] for evn in ev if evn is not None] for ev in reordered_vecs]
  else:
- raise Exception("Unknown value for 'sort'. Choose 'Eigenvalue', 'Eigenvector' or None.")
+ raise ValueError("Unknown value for 'sort'. Choose 'Eigenvalue', 'Eigenvector' or None.")

  if "state" in kwargs:
  return reordered_vecs[kwargs.get("state")]
@@ -435,7 +435,7 @@ class Corr:
  """

  if self.N != 1:
- raise Exception("Multi-operator Prony not implemented!")
+ raise NotImplementedError("Multi-operator Prony not implemented!")

  array = np.empty([N, N], dtype="object")
  new_content = []
@@ -502,7 +502,7 @@ class Corr:
  correlator or a Corr of same length.
  """
  if self.N != 1:
- raise Exception("Only one-dimensional correlators can be safely correlated.")
+ raise ValueError("Only one-dimensional correlators can be safely correlated.")
  new_content = []
  for x0, t_slice in enumerate(self.content):
  if _check_for_none(self, t_slice):
@@ -516,7 +516,7 @@ class Corr:
  elif isinstance(partner, Obs): # Should this include CObs?
  new_content.append(np.array([correlate(o, partner) for o in t_slice]))
  else:
- raise Exception("Can only correlate with an Obs or a Corr.")
+ raise TypeError("Can only correlate with an Obs or a Corr.")

  return Corr(new_content)

@@ -583,7 +583,7 @@ class Corr:
  Available choice: symmetric, forward, backward, improved, log, default: symmetric
  """
  if self.N != 1:
- raise Exception("deriv only implemented for one-dimensional correlators.")
+ raise ValueError("deriv only implemented for one-dimensional correlators.")
  if variant == "symmetric":
  newcontent = []
  for t in range(1, self.T - 1):
@@ -592,7 +592,7 @@ class Corr:
  else:
  newcontent.append(0.5 * (self.content[t + 1] - self.content[t - 1]))
  if (all([x is None for x in newcontent])):
- raise Exception('Derivative is undefined at all timeslices')
+ raise ValueError('Derivative is undefined at all timeslices')
  return Corr(newcontent, padding=[1, 1])
  elif variant == "forward":
  newcontent = []
@@ -602,7 +602,7 @@ class Corr:
  else:
  newcontent.append(self.content[t + 1] - self.content[t])
  if (all([x is None for x in newcontent])):
- raise Exception("Derivative is undefined at all timeslices")
+ raise ValueError("Derivative is undefined at all timeslices")
  return Corr(newcontent, padding=[0, 1])
  elif variant == "backward":
  newcontent = []
@@ -612,7 +612,7 @@ class Corr:
  else:
  newcontent.append(self.content[t] - self.content[t - 1])
  if (all([x is None for x in newcontent])):
- raise Exception("Derivative is undefined at all timeslices")
+ raise ValueError("Derivative is undefined at all timeslices")
  return Corr(newcontent, padding=[1, 0])
  elif variant == "improved":
  newcontent = []
@@ -622,7 +622,7 @@ class Corr:
  else:
  newcontent.append((1 / 12) * (self.content[t - 2] - 8 * self.content[t - 1] + 8 * self.content[t + 1] - self.content[t + 2]))
  if (all([x is None for x in newcontent])):
- raise Exception('Derivative is undefined at all timeslices')
+ raise ValueError('Derivative is undefined at all timeslices')
  return Corr(newcontent, padding=[2, 2])
  elif variant == 'log':
  newcontent = []
@@ -632,11 +632,11 @@ class Corr:
  else:
  newcontent.append(np.log(self.content[t]))
  if (all([x is None for x in newcontent])):
- raise Exception("Log is undefined at all timeslices")
+ raise ValueError("Log is undefined at all timeslices")
  logcorr = Corr(newcontent)
  return self * logcorr.deriv('symmetric')
  else:
- raise Exception("Unknown variant.")
+ raise ValueError("Unknown variant.")

  def second_deriv(self, variant="symmetric"):
  r"""Return the second derivative of the correlator with respect to x0.
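The `deriv` variants touched above are standard finite differences; for instance the default "symmetric" variant is the central difference (C(t+1) − C(t−1)) / 2, which is exact on quadratics:

```python
# Central-difference check on f(t) = t**2, where f'(t) = 2 t exactly.
f = [float(t * t) for t in range(6)]
deriv = [0.5 * (f[t + 1] - f[t - 1]) for t in range(1, 5)]
assert deriv == [2.0, 4.0, 6.0, 8.0]
```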
@@ -656,7 +656,7 @@ class Corr:
  $$f(x) = \tilde{\partial}^2_0 log(f(x_0))+(\tilde{\partial}_0 log(f(x_0)))^2$$
  """
  if self.N != 1:
- raise Exception("second_deriv only implemented for one-dimensional correlators.")
+ raise ValueError("second_deriv only implemented for one-dimensional correlators.")
  if variant == "symmetric":
  newcontent = []
  for t in range(1, self.T - 1):
@@ -665,7 +665,7 @@ class Corr:
  else:
  newcontent.append((self.content[t + 1] - 2 * self.content[t] + self.content[t - 1]))
  if (all([x is None for x in newcontent])):
- raise Exception("Derivative is undefined at all timeslices")
+ raise ValueError("Derivative is undefined at all timeslices")
  return Corr(newcontent, padding=[1, 1])
  elif variant == "big_symmetric":
  newcontent = []
@@ -675,7 +675,7 @@ class Corr:
  else:
  newcontent.append((self.content[t + 2] - 2 * self.content[t] + self.content[t - 2]) / 4)
  if (all([x is None for x in newcontent])):
- raise Exception("Derivative is undefined at all timeslices")
+ raise ValueError("Derivative is undefined at all timeslices")
  return Corr(newcontent, padding=[2, 2])
  elif variant == "improved":
  newcontent = []
@@ -685,7 +685,7 @@ class Corr:
  else:
  newcontent.append((1 / 12) * (-self.content[t + 2] + 16 * self.content[t + 1] - 30 * self.content[t] + 16 * self.content[t - 1] - self.content[t - 2]))
  if (all([x is None for x in newcontent])):
- raise Exception("Derivative is undefined at all timeslices")
+ raise ValueError("Derivative is undefined at all timeslices")
  return Corr(newcontent, padding=[2, 2])
  elif variant == 'log':
  newcontent = []
@@ -695,11 +695,11 @@ class Corr:
  else:
  newcontent.append(np.log(self.content[t]))
  if (all([x is None for x in newcontent])):
- raise Exception("Log is undefined at all timeslices")
+ raise ValueError("Log is undefined at all timeslices")
  logcorr = Corr(newcontent)
  return self * (logcorr.second_deriv('symmetric') + (logcorr.deriv('symmetric'))**2)
  else:
- raise Exception("Unknown variant.")
+ raise ValueError("Unknown variant.")

  def m_eff(self, variant='log', guess=1.0):
  """Returns the effective mass of the correlator as correlator object
@@ -728,7 +728,7 @@ class Corr:
  else:
  newcontent.append(self.content[t] / self.content[t + 1])
  if (all([x is None for x in newcontent])):
- raise Exception('m_eff is undefined at all timeslices')
+ raise ValueError('m_eff is undefined at all timeslices')

  return np.log(Corr(newcontent, padding=[0, 1]))

@@ -742,7 +742,7 @@ class Corr:
  else:
  newcontent.append(self.content[t - 1] / self.content[t + 1])
  if (all([x is None for x in newcontent])):
- raise Exception('m_eff is undefined at all timeslices')
+ raise ValueError('m_eff is undefined at all timeslices')

  return np.log(Corr(newcontent, padding=[1, 1])) / 2

@@ -767,7 +767,7 @@ class Corr:
  else:
  newcontent.append(np.abs(find_root(self.content[t][0] / self.content[t + 1][0], root_function, guess=guess)))
  if (all([x is None for x in newcontent])):
- raise Exception('m_eff is undefined at all timeslices')
+ raise ValueError('m_eff is undefined at all timeslices')

  return Corr(newcontent, padding=[0, 1])

@@ -779,11 +779,11 @@ class Corr:
  else:
  newcontent.append((self.content[t + 1] + self.content[t - 1]) / (2 * self.content[t]))
  if (all([x is None for x in newcontent])):
- raise Exception("m_eff is undefined at all timeslices")
+ raise ValueError("m_eff is undefined at all timeslices")
  return np.arccosh(Corr(newcontent, padding=[1, 1]))

  else:
- raise Exception('Unknown variant.')
+ raise ValueError('Unknown variant.')

  def fit(self, function, fitrange=None, silent=False, **kwargs):
  r'''Fits function to the data
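The `m_eff` variants touched above include the "log" effective mass, m_eff(t) = log(C(t) / C(t+1)), which is exact for a single decaying exponential:

```python
import math

# Hypothetical single-exponential correlator C(t) = A * exp(-m * t).
m, A = 0.5, 3.0
C = [A * math.exp(-m * t) for t in range(5)]
m_eff = [math.log(C[t] / C[t + 1]) for t in range(4)]
assert all(abs(x - m) < 1e-12 for x in m_eff)
```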
@@ -801,7 +801,7 @@ class Corr:
  Decides whether output is printed to the standard output.
  '''
  if self.N != 1:
- raise Exception("Correlator must be projected before fitting")
+ raise ValueError("Correlator must be projected before fitting")

  if fitrange is None:
  if self.prange:
@@ -810,12 +810,12 @@ class Corr:
  fitrange = [0, self.T - 1]
  else:
  if not isinstance(fitrange, list):
- raise Exception("fitrange has to be a list with two elements")
+ raise TypeError("fitrange has to be a list with two elements")
  if len(fitrange) != 2:
- raise Exception("fitrange has to have exactly two elements [fit_start, fit_stop]")
+ raise ValueError("fitrange has to have exactly two elements [fit_start, fit_stop]")

- xs = np.array([x for x in range(fitrange[0], fitrange[1] + 1) if not self.content[x] is None])
- ys = np.array([self.content[x][0] for x in range(fitrange[0], fitrange[1] + 1) if not self.content[x] is None])
+ xs = np.array([x for x in range(fitrange[0], fitrange[1] + 1) if self.content[x] is not None])
+ ys = np.array([self.content[x][0] for x in range(fitrange[0], fitrange[1] + 1) if self.content[x] is not None])
  result = least_squares(xs, ys, function, silent=silent, **kwargs)
  return result

@@ -840,9 +840,9 @@ class Corr:
  else:
  raise Exception("no plateau range provided")
  if self.N != 1:
- raise Exception("Correlator must be projected before getting a plateau.")
+ raise ValueError("Correlator must be projected before getting a plateau.")
  if (all([self.content[t] is None for t in range(plateau_range[0], plateau_range[1] + 1)])):
- raise Exception("plateau is undefined at all timeslices in plateaurange.")
+ raise ValueError("plateau is undefined at all timeslices in plateaurange.")
  if auto_gamma:
  self.gamma_method()
  if method == "fit":
@@ -854,16 +854,16 @@ class Corr:
  return returnvalue

  else:
- raise Exception("Unsupported plateau method: " + method)
+ raise ValueError("Unsupported plateau method: " + method)

  def set_prange(self, prange):
  """Sets the attribute prange of the Corr object."""
  if not len(prange) == 2:
- raise Exception("prange must be a list or array with two values")
+ raise ValueError("prange must be a list or array with two values")
  if not ((isinstance(prange[0], int)) and (isinstance(prange[1], int))):
- raise Exception("Start and end point must be integers")
- if not (0 <= prange[0] <= self.T and 0 <= prange[1] <= self.T and prange[0] < prange[1]):
- raise Exception("Start and end point must define a range in the interval 0,T")
+ raise TypeError("Start and end point must be integers")
+ if not (0 <= prange[0] <= self.T and 0 <= prange[1] <= self.T and prange[0] <= prange[1]):
+ raise ValueError("Start and end point must define a range in the interval 0,T")

  self.prange = prange
  return
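`set_prange` now distinguishes a wrong argument *type* (TypeError) from a wrong *value* (ValueError), and the loosened `prange[0] <= prange[1]` check additionally permits a single-point range. A standalone sketch of the new validation (hypothetical `T = 16`):

```python
def set_prange(prange, T=16):
    # Mirrors the validation from the diff above (standalone sketch).
    if not len(prange) == 2:
        raise ValueError("prange must be a list or array with two values")
    if not (isinstance(prange[0], int) and isinstance(prange[1], int)):
        raise TypeError("Start and end point must be integers")
    if not (0 <= prange[0] <= T and 0 <= prange[1] <= T and prange[0] <= prange[1]):
        raise ValueError("Start and end point must define a range in the interval 0,T")
    return prange

assert set_prange([2, 10]) == [2, 10]
assert set_prange([3, 3]) == [3, 3]  # now allowed: start == end
```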
@@ -900,7 +900,7 @@ class Corr:
  Optional title of the figure.
  """
  if self.N != 1:
- raise Exception("Correlator must be projected before plotting")
+ raise ValueError("Correlator must be projected before plotting")

  if auto_gamma:
  self.gamma_method()
@@ -941,7 +941,7 @@ class Corr:
  hide_from = None
  ax1.errorbar(x[:hide_from], y[:hide_from], y_err[:hide_from], label=corr.tag, mfc=plt.rcParams['axes.facecolor'])
  else:
- raise Exception("'comp' must be a correlator or a list of correlators.")
+ raise TypeError("'comp' must be a correlator or a list of correlators.")

  if plateau:
  if isinstance(plateau, Obs):
@@ -950,14 +950,14 @@ class Corr:
  ax1.axhline(y=plateau.value, linewidth=2, color=plt.rcParams['text.color'], alpha=0.6, marker=',', ls='--', label=str(plateau))
  ax1.axhspan(plateau.value - plateau.dvalue, plateau.value + plateau.dvalue, alpha=0.25, color=plt.rcParams['text.color'], ls='-')
  else:
- raise Exception("'plateau' must be an Obs")
+ raise TypeError("'plateau' must be an Obs")

  if references:
  if isinstance(references, list):
  for ref in references:
  ax1.axhline(y=ref, linewidth=1, color=plt.rcParams['text.color'], alpha=0.6, marker=',', ls='--')
  else:
- raise Exception("'references' must be a list of floating pint values.")
+ raise TypeError("'references' must be a list of floating pint values.")

  if self.prange:
  ax1.axvline(self.prange[0], 0, 1, ls='-', marker=',', color="black", zorder=0)
@@ -991,7 +991,7 @@ class Corr:
  if isinstance(save, str):
  fig.savefig(save, bbox_inches='tight')
  else:
- raise Exception("'save' has to be a string.")
+ raise TypeError("'save' has to be a string.")

  def spaghetti_plot(self, logscale=True):
  """Produces a spaghetti plot of the correlator suited to monitor exceptional configurations.
@@ -1002,7 +1002,7 @@ class Corr:
  Determines whether the scale of the y-axis is logarithmic or standard.
  """
  if self.N != 1:
- raise Exception("Correlator needs to be projected first.")
+ raise ValueError("Correlator needs to be projected first.")

  mc_names = list(set([item for sublist in [sum(map(o[0].e_content.get, o[0].mc_names), []) for o in self.content if o is not None] for item in sublist]))
  x0_vals = [n for (n, o) in zip(np.arange(self.T), self.content) if o is not None]
@@ -1044,7 +1044,7 @@ class Corr:
  elif datatype == "pickle":
  dump_object(self, filename, **kwargs)
  else:
- raise Exception("Unknown datatype " + str(datatype))
+ raise ValueError("Unknown datatype " + str(datatype))

  def print(self, print_range=None):
  print(self.__repr__(print_range))
@@ -1094,7 +1094,7 @@ class Corr:
  def __add__(self, y):
  if isinstance(y, Corr):
  if ((self.N != y.N) or (self.T != y.T)):
- raise Exception("Addition of Corrs with different shape")
+ raise ValueError("Addition of Corrs with different shape")
  newcontent = []
  for t in range(self.T):
  if _check_for_none(self, self.content[t]) or _check_for_none(y, y.content[t]):
@@ -1122,7 +1122,7 @@ class Corr:
  def __mul__(self, y):
  if isinstance(y, Corr):
  if not ((self.N == 1 or y.N == 1 or self.N == y.N) and self.T == y.T):
- raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
+ raise ValueError("Multiplication of Corr object requires N=N or N=1 and T=T")
  newcontent = []
  for t in range(self.T):
  if _check_for_none(self, self.content[t]) or _check_for_none(y, y.content[t]):
@@ -1193,7 +1193,7 @@ class Corr:
  def __truediv__(self, y):
  if isinstance(y, Corr):
  if not ((self.N == 1 or y.N == 1 or self.N == y.N) and self.T == y.T):
- raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
+ raise ValueError("Multiplication of Corr object requires N=N or N=1 and T=T")
  newcontent = []
  for t in range(self.T):
  if _check_for_none(self, self.content[t]) or _check_for_none(y, y.content[t]):
@@ -1207,16 +1207,16 @@ class Corr:
  newcontent[t] = None

  if all([item is None for item in newcontent]):
- raise Exception("Division returns completely undefined correlator")
+ raise ValueError("Division returns completely undefined correlator")
  return Corr(newcontent)

  elif isinstance(y, (Obs, CObs)):
  if isinstance(y, Obs):
  if y.value == 0:
- raise Exception('Division by zero will return undefined correlator')
+ raise ValueError('Division by zero will return undefined correlator')
  if isinstance(y, CObs):
  if y.is_zero():
- raise Exception('Division by zero will return undefined correlator')
+ raise ValueError('Division by zero will return undefined correlator')

  newcontent = []
  for t in range(self.T):
@@ -1228,7 +1228,7 @@ class Corr:
1228
1228
 
1229
1229
  elif isinstance(y, (int, float)):
1230
1230
  if y == 0:
1231
- raise Exception('Division by zero will return undefined correlator')
1231
+ raise ValueError('Division by zero will return undefined correlator')
1232
1232
  newcontent = []
1233
1233
  for t in range(self.T):
1234
1234
  if _check_for_none(self, self.content[t]):
@@ -1284,7 +1284,7 @@ class Corr:
1284
1284
  if np.isnan(tmp_sum.value):
1285
1285
  newcontent[t] = None
1286
1286
  if all([item is None for item in newcontent]):
1287
- raise Exception('Operation returns undefined correlator')
1287
+ raise ValueError('Operation returns undefined correlator')
1288
1288
  return Corr(newcontent)
1289
1289
 
1290
1290
  def sin(self):
@@ -1392,13 +1392,13 @@ class Corr:
1392
1392
  '''
1393
1393
 
1394
1394
  if self.N == 1:
1395
- raise Exception('Method cannot be applied to one-dimensional correlators.')
1395
+ raise ValueError('Method cannot be applied to one-dimensional correlators.')
1396
1396
  if basematrix is None:
1397
1397
  basematrix = self
1398
1398
  if Ntrunc >= basematrix.N:
1399
- raise Exception('Cannot truncate using Ntrunc <= %d' % (basematrix.N))
1399
+ raise ValueError('Cannot truncate using Ntrunc <= %d' % (basematrix.N))
1400
1400
  if basematrix.N != self.N:
1401
- raise Exception('basematrix and targetmatrix have to be of the same size.')
1401
+ raise ValueError('basematrix and targetmatrix have to be of the same size.')
1402
1402
 
1403
1403
  evecs = basematrix.GEVP(t0proj, tproj, sort=None)[:Ntrunc]
1404
1404
 
@@ -34,7 +34,7 @@ def epsilon_tensor(i, j, k):
34
34
  """
35
35
  test_set = set((i, j, k))
36
36
  if not (test_set <= set((1, 2, 3)) or test_set <= set((0, 1, 2))):
37
- raise Exception("Unexpected input", i, j, k)
37
+ raise ValueError("Unexpected input", i, j, k)
38
38
 
39
39
  return (i - j) * (j - k) * (k - i) / 2
40
40
 
@@ -52,7 +52,7 @@ def epsilon_tensor_rank4(i, j, k, o):
52
52
  """
53
53
  test_set = set((i, j, k, o))
54
54
  if not (test_set <= set((1, 2, 3, 4)) or test_set <= set((0, 1, 2, 3))):
55
- raise Exception("Unexpected input", i, j, k, o)
55
+ raise ValueError("Unexpected input", i, j, k, o)
56
56
 
57
57
  return (i - j) * (j - k) * (k - i) * (i - o) * (j - o) * (o - k) / 12
58
58
 
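The `epsilon_tensor` hunks above only swap the exception type; the closed-form expressions themselves are unchanged. As a standalone cross-check (a hypothetical sketch with the input validation omitted, not the packaged code), the rank-3 formula reproduces the Levi-Civita symbol:

```python
from itertools import permutations

def epsilon_tensor(i, j, k):
    # Closed form used in pyerrors.dirac for the rank-3 Levi-Civita symbol
    return (i - j) * (j - k) * (k - i) / 2

# Cross-check against the sign of each permutation of (0, 1, 2)
even = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
for p in permutations((0, 1, 2)):
    assert epsilon_tensor(*p) == (1 if p in even else -1)

# Any repeated index yields zero
assert epsilon_tensor(1, 1, 2) == 0
```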
@@ -92,5 +92,5 @@ def Grid_gamma(gamma_tag):
  elif gamma_tag == 'SigmaZT':
  g = 0.5 * (gamma[2] @ gamma[3] - gamma[3] @ gamma[2])
  else:
- raise Exception('Unkown gamma structure', gamma_tag)
+ raise ValueError('Unkown gamma structure', gamma_tag)
  return g
@@ -293,7 +293,7 @@ def least_squares(x, y, func, priors=None, silent=False, **kwargs):
  if len(key_ls) > 1:
  for key in key_ls:
  if np.asarray(yd[key]).shape != funcd[key](np.arange(n_parms), xd[key]).shape:
- raise ValueError(f"Fit function {key} returns the wrong shape ({funcd[key](np.arange(n_parms), xd[key]).shape} instead of {xd[key].shape})\nIf the fit function is just a constant you could try adding x*0 to get the correct shape.")
+ raise ValueError(f"Fit function {key} returns the wrong shape ({funcd[key](np.arange(n_parms), xd[key]).shape} instead of {np.asarray(yd[key]).shape})\nIf the fit function is just a constant you could try adding x*0 to get the correct shape.")

  if not silent:
  print('Fit with', n_parms, 'parameter' + 's' * (n_parms > 1))
@@ -365,6 +365,8 @@ def least_squares(x, y, func, priors=None, silent=False, **kwargs):
  if (chol_inv[1] != key_ls):
  raise ValueError('The keys of inverse covariance matrix are not the same or do not appear in the same order as the x and y values.')
  chol_inv = chol_inv[0]
+ if np.any(np.diag(chol_inv) <= 0) or (not np.all(chol_inv == np.tril(chol_inv))):
+ raise ValueError('The inverse covariance matrix inv_chol_cov_matrix[0] has to be a lower triangular matrix constructed from a Cholesky decomposition.')
  else:
  corr = covariance(y_all, correlation=True, **kwargs)
  inverrdiag = np.diag(1 / np.asarray(dy_f))
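The new `least_squares` guard rejects user-supplied matrices that are not valid Cholesky factors. A minimal sketch of the same test, assuming only numpy:

```python
import numpy as np

def is_cholesky_factor(m):
    # Mirrors the new guard: a valid Cholesky factor is lower triangular
    # with a strictly positive diagonal.
    return not (np.any(np.diag(m) <= 0) or not np.all(m == np.tril(m)))

cov = np.array([[4.0, 2.0], [2.0, 3.0]])
chol = np.linalg.cholesky(cov)         # lower triangular L with cov = L @ L.T
assert is_cholesky_factor(chol)
assert not is_cholesky_factor(chol.T)  # an upper triangular matrix is rejected
```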
@@ -5,11 +5,11 @@ r'''
  For comparison with other analysis workflows `pyerrors` can also generate jackknife samples from an `Obs` object or import jackknife samples into an `Obs` object.
  See `pyerrors.obs.Obs.export_jackknife` and `pyerrors.obs.import_jackknife` for details.
  '''
- from . import bdio
- from . import dobs
- from . import hadrons
- from . import json
- from . import misc
- from . import openQCD
- from . import pandas
- from . import sfcf
+ from . import bdio as bdio
+ from . import dobs as dobs
+ from . import hadrons as hadrons
+ from . import json as json
+ from . import misc as misc
+ from . import openQCD as openQCD
+ from . import pandas as pandas
+ from . import sfcf as sfcf
@@ -79,7 +79,7 @@ def _dict_to_xmlstring_spaces(d, space=' '):
  o += space
  o += li + '\n'
  if li.startswith('<') and not cm:
- if not '<%s' % ('/') in li:
+ if '<%s' % ('/') not in li:
  c += 1
  cm = False
  return o
@@ -529,7 +529,8 @@ def import_dobs_string(content, full_output=False, separator_insertion=True):
  deltas.append(repdeltas)
  idl.append(repidl)

- res.append(Obs(deltas, obs_names, idl=idl))
+ obsmeans = [np.average(deltas[j]) for j in range(len(deltas))]
+ res.append(Obs([np.array(deltas[j]) - obsmeans[j] for j in range(len(obsmeans))], obs_names, idl=idl, means=obsmeans))
  res[-1]._value = mean[i]
  _check(len(e_names) == ne)

@@ -671,7 +672,7 @@ def _dobsdict_to_xmlstring_spaces(d, space=' '):
  o += space
  o += li + '\n'
  if li.startswith('<') and not cm:
- if not '<%s' % ('/') in li:
+ if '<%s' % ('/') not in li:
  c += 1
  cm = False
  return o
@@ -113,7 +113,7 @@ def read_hd5(filestem, ens_id, group, attrs=None, idl=None, part="real"):
  infos = []
  for hd5_file in files:
  h5file = h5py.File(path + '/' + hd5_file, "r")
- if not group + '/' + entry in h5file:
+ if group + '/' + entry not in h5file:
  raise Exception("Entry '" + entry + "' not contained in the files.")
  raw_data = h5file[group + '/' + entry + '/corr']
  real_data = raw_data[:].view("complex")
@@ -186,7 +186,7 @@ def _extract_real_arrays(path, files, tree, keys):
  for hd5_file in files:
  h5file = h5py.File(path + '/' + hd5_file, "r")
  for key in keys:
- if not tree + '/' + key in h5file:
+ if tree + '/' + key not in h5file:
  raise Exception("Entry '" + key + "' not contained in the files.")
  raw_data = h5file[tree + '/' + key + '/data']
  real_data = raw_data[:].astype(np.double)
@@ -133,10 +133,11 @@ def create_json_string(ol, description='', indent=1):
  names = []
  idl = []
  for key, value in obs.idl.items():
- samples.append([np.nan] * len(value))
+ samples.append(np.array([np.nan] * len(value)))
  names.append(key)
  idl.append(value)
- my_obs = Obs(samples, names, idl)
+ my_obs = Obs(samples, names, idl, means=[np.nan for n in names])
+ my_obs._value = np.nan
  my_obs._covobs = obs._covobs
  for name in obs._covobs:
  my_obs.names.append(name)
@@ -331,7 +332,8 @@ def _parse_json_dict(json_dict, verbose=True, full_output=False):
  cd = _gen_covobsd_from_cdatad(o.get('cdata', {}))

  if od:
- ret = Obs([[ddi[0] + values[0] for ddi in di] for di in od['deltas']], od['names'], idl=od['idl'])
+ r_offsets = [np.average([ddi[0] for ddi in di]) for di in od['deltas']]
+ ret = Obs([np.array([ddi[0] for ddi in od['deltas'][i]]) - r_offsets[i] for i in range(len(od['deltas']))], od['names'], idl=od['idl'], means=[ro + values[0] for ro in r_offsets])
  ret._value = values[0]
  else:
  ret = Obs([], [], means=[])
@@ -356,7 +358,8 @@ def _parse_json_dict(json_dict, verbose=True, full_output=False):
  taglist = o.get('tag', layout * [None])
  for i in range(layout):
  if od:
- ret.append(Obs([list(di[:, i] + values[i]) for di in od['deltas']], od['names'], idl=od['idl']))
+ r_offsets = np.array([np.average(di[:, i]) for di in od['deltas']])
+ ret.append(Obs([od['deltas'][j][:, i] - r_offsets[j] for j in range(len(od['deltas']))], od['names'], idl=od['idl'], means=[ro + values[i] for ro in r_offsets]))
  ret[-1]._value = values[i]
  else:
  ret.append(Obs([], [], means=[]))
@@ -383,7 +386,8 @@ def _parse_json_dict(json_dict, verbose=True, full_output=False):
  taglist = o.get('tag', N * [None])
  for i in range(N):
  if od:
- ret.append(Obs([di[:, i] + values[i] for di in od['deltas']], od['names'], idl=od['idl']))
+ r_offsets = np.array([np.average(di[:, i]) for di in od['deltas']])
+ ret.append(Obs([od['deltas'][j][:, i] - r_offsets[j] for j in range(len(od['deltas']))], od['names'], idl=od['idl'], means=[ro + values[i] for ro in r_offsets]))
  ret[-1]._value = values[i]
  else:
  ret.append(Obs([], [], means=[]))
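The `dobs.py` and `json.py` hunks above all follow the same pattern: instead of passing absolute samples, the importers now pass mean-centered deltas plus an explicit `means` argument. The decomposition can be sketched in plain numpy (hypothetical data, no pyerrors dependency):

```python
import numpy as np

# Hypothetical raw samples for two replica of one ensemble
deltas = [np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0])]

# Same centering as in the updated importers: subtract the per-replicum
# mean and hand the means to the Obs constructor separately.
r_offsets = [np.average(d) for d in deltas]
centered = [deltas[j] - r_offsets[j] for j in range(len(deltas))]

for j in range(len(deltas)):
    # Each centered series averages to zero ...
    assert np.isclose(np.average(centered[j]), 0.0)
    # ... and offset + centered deltas reproduce the raw samples exactly.
    assert np.allclose(centered[j] + r_offsets[j], deltas[j])
```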
@@ -47,7 +47,7 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
  Reweighting factors read
  """
  known_oqcd_versions = ['1.4', '1.6', '2.0']
- if not (version in known_oqcd_versions):
+ if version not in known_oqcd_versions:
  raise Exception('Unknown openQCD version defined!')
  print("Working with openQCD version " + version)
  if 'postfix' in kwargs:
@@ -127,7 +127,8 @@ def read_sfcf_multi(path, prefix, name_list, quarks_list=['.*'], corr_type_list=
  check_configs: list[list[int]]
  list of list of supposed configs, eg. [range(1,1000)]
  for one replicum with 1000 configs
-
+ rep_string: str
+ Separator of ensemble name and replicum. Example: In "ensAr0", "r" would be the separator string.
  Returns
  -------
  result: dict[list[Obs]]
@@ -199,9 +200,9 @@
  else:
  ens_name = kwargs.get("ens_name")
  if not appended:
- new_names = _get_rep_names(ls, ens_name)
+ new_names = _get_rep_names(ls, ens_name, rep_sep=(kwargs.get('rep_string', 'r')))
  else:
- new_names = _get_appended_rep_names(ls, prefix, name_list[0], ens_name)
+ new_names = _get_appended_rep_names(ls, prefix, name_list[0], ens_name, rep_sep=(kwargs.get('rep_string', 'r')))
  new_names = sort_names(new_names)

  idl = []
@@ -646,22 +647,22 @@ def _read_append_rep(filename, pattern, b2b, cfg_separator, im, single):
  return T, rep_idl, data


- def _get_rep_names(ls, ens_name=None):
+ def _get_rep_names(ls, ens_name=None, rep_sep='r'):
  new_names = []
  for entry in ls:
  try:
- idx = entry.index('r')
+ idx = entry.index(rep_sep)
  except Exception:
  raise Exception("Automatic recognition of replicum failed, please enter the key word 'names'.")

  if ens_name:
- new_names.append('ens_name' + '|' + entry[idx:])
+ new_names.append(ens_name + '|' + entry[idx:])
  else:
  new_names.append(entry[:idx] + '|' + entry[idx:])
  return new_names


- def _get_appended_rep_names(ls, prefix, name, ens_name=None):
+ def _get_appended_rep_names(ls, prefix, name, ens_name=None, rep_sep='r'):
  new_names = []
  for exc in ls:
  if not fnmatch.fnmatch(exc, prefix + '*.' + name):
@@ -670,12 +671,12 @@ def _get_appended_rep_names(ls, prefix, name, ens_name=None):
  for entry in ls:
  myentry = entry[:-len(name) - 1]
  try:
- idx = myentry.index('r')
+ idx = myentry.index(rep_sep)
  except Exception:
  raise Exception("Automatic recognition of replicum failed, please enter the key word 'names'.")

  if ens_name:
- new_names.append('ens_name' + '|' + entry[idx:])
+ new_names.append(ens_name + '|' + entry[idx:])
  else:
  new_names.append(myentry[:idx] + '|' + myentry[idx:])
  return new_names
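With the new `rep_sep` keyword (fed from the `rep_string` option), replica names are split at a configurable separator; the hunk also fixes the branch that previously appended the literal string `'ens_name'` instead of the variable. A simplified sketch of the renaming logic (hypothetical helper, error handling omitted):

```python
def get_rep_names(ls, ens_name=None, rep_sep='r'):
    # Simplified version of _get_rep_names after the change: the part of a
    # file name before the first rep_sep is the ensemble, the rest the replicum.
    new_names = []
    for entry in ls:
        idx = entry.index(rep_sep)
        if ens_name:
            # Fixed branch: use the ens_name variable, not the literal 'ens_name'
            new_names.append(ens_name + '|' + entry[idx:])
        else:
            new_names.append(entry[:idx] + '|' + entry[idx:])
    return new_names

assert get_rep_names(['ensAr0', 'ensAr1']) == ['ensA|r0', 'ensA|r1']
assert get_rep_names(['ensAr0'], ens_name='myens') == ['myens|r0']
assert get_rep_names(['runAx0'], rep_sep='x') == ['runA|x0']
```

The last line shows why the separator is now configurable: a name like `runAx0` would be mis-split at the `r` in `run` with the old hard-coded `'r'`.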
@@ -82,6 +82,8 @@ class Obs:
  raise ValueError('Names are not unique.')
  if not all(isinstance(x, str) for x in names):
  raise TypeError('All names have to be strings.')
+ if len(set([o.split('|')[0] for o in names])) > 1:
+ raise ValueError('Cannot initialize Obs based on multiple ensembles. Please average separate Obs from each ensemble.')
  else:
  if not isinstance(names[0], str):
  raise TypeError('All names have to be strings.')
@@ -222,7 +224,7 @@ class Obs:
  tmp = kwargs.get(kwarg_name)
  if isinstance(tmp, (int, float)):
  if tmp < 0:
- raise Exception(kwarg_name + ' has to be larger or equal to 0.')
+ raise ValueError(kwarg_name + ' has to be larger or equal to 0.')
  for e, e_name in enumerate(self.e_names):
  getattr(self, kwarg_name)[e_name] = tmp
  else:
@@ -291,7 +293,7 @@ class Obs:
  texp = self.tau_exp[e_name]
  # Critical slowing down analysis
  if w_max // 2 <= 1:
- raise Exception("Need at least 8 samples for tau_exp error analysis")
+ raise ValueError("Need at least 8 samples for tau_exp error analysis")
  for n in range(1, w_max // 2):
  _compute_drho(n + 1)
  if (self.e_rho[e_name][n] - self.N_sigma[e_name] * self.e_drho[e_name][n]) < 0 or n >= w_max // 2 - 2:
@@ -620,7 +622,7 @@ class Obs:
  if not hasattr(self, 'e_dvalue'):
  raise Exception('Run the gamma method first.')
  if np.isclose(0.0, self._dvalue, atol=1e-15):
- raise Exception('Error is 0.0')
+ raise ValueError('Error is 0.0')
  labels = self.e_names
  sizes = [self.e_dvalue[name] ** 2 for name in labels] / self._dvalue ** 2
  fig1, ax1 = plt.subplots()
@@ -659,7 +661,7 @@ class Obs:
  with open(file_name + '.p', 'wb') as fb:
  pickle.dump(self, fb)
  else:
- raise Exception("Unknown datatype " + str(datatype))
+ raise TypeError("Unknown datatype " + str(datatype))

  def export_jackknife(self):
  """Export jackknife samples from the Obs
@@ -676,7 +678,7 @@ class Obs:
  """

  if len(self.names) != 1:
- raise Exception("'export_jackknife' is only implemented for Obs defined on one ensemble and replicum.")
+ raise ValueError("'export_jackknife' is only implemented for Obs defined on one ensemble and replicum.")

  name = self.names[0]
  full_data = self.deltas[name] + self.r_values[name]
@@ -711,7 +713,7 @@ class Obs:
  should agree with samples from a full bootstrap analysis up to O(1/N).
  """
  if len(self.names) != 1:
- raise Exception("'export_boostrap' is only implemented for Obs defined on one ensemble and replicum.")
+ raise ValueError("'export_boostrap' is only implemented for Obs defined on one ensemble and replicum.")

  name = self.names[0]
  length = self.N
@@ -856,15 +858,12 @@ class Obs:

  def __pow__(self, y):
  if isinstance(y, Obs):
- return derived_observable(lambda x: x[0] ** x[1], [self, y])
+ return derived_observable(lambda x, **kwargs: x[0] ** x[1], [self, y], man_grad=[y.value * self.value ** (y.value - 1), self.value ** y.value * np.log(self.value)])
  else:
- return derived_observable(lambda x: x[0] ** y, [self])
+ return derived_observable(lambda x, **kwargs: x[0] ** y, [self], man_grad=[y * self.value ** (y - 1)])

  def __rpow__(self, y):
- if isinstance(y, Obs):
- return derived_observable(lambda x: x[0] ** x[1], [y, self])
- else:
- return derived_observable(lambda x: y ** x[0], [self])
+ return derived_observable(lambda x, **kwargs: y ** x[0], [self], man_grad=[y ** self.value * np.log(y)])

  def __abs__(self):
  return derived_observable(lambda x: anp.abs(x[0]), [self])
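The rewritten `__pow__`/`__rpow__` pass analytic gradients via `man_grad` instead of relying on automatic differentiation. The power-rule expressions used above can be checked against central finite differences (standalone sketch, numpy only):

```python
import numpy as np

def pow_grads(x, y):
    # Analytic gradients used as man_grad in the new __pow__:
    # d(x**y)/dx = y * x**(y-1),  d(x**y)/dy = x**y * log(x)
    return y * x ** (y - 1), x ** y * np.log(x)

x, y, h = 1.7, 2.3, 1e-6
gx, gy = pow_grads(x, y)

# Compare with central finite differences
fd_x = ((x + h) ** y - (x - h) ** y) / (2 * h)
fd_y = (x ** (y + h) - x ** (y - h)) / (2 * h)
assert np.isclose(gx, fd_x)
assert np.isclose(gy, fd_y)
```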
@@ -1270,7 +1269,7 @@ def derived_observable(func, data, array_mode=False, **kwargs):
  if 'man_grad' in kwargs:
  deriv = np.asarray(kwargs.get('man_grad'))
  if new_values.shape + data.shape != deriv.shape:
- raise Exception('Manual derivative does not have correct shape.')
+ raise ValueError('Manual derivative does not have correct shape.')
  elif kwargs.get('num_grad') is True:
  if multi > 0:
  raise Exception('Multi mode currently not supported for numerical derivative')
@@ -1336,7 +1335,7 @@ def derived_observable(func, data, array_mode=False, **kwargs):
  new_covobs = {name: Covobs(0, allcov[name], name, grad=new_grad[name]) for name in new_grad}

  if not set(new_covobs.keys()).isdisjoint(new_deltas.keys()):
- raise Exception('The same name has been used for deltas and covobs!')
+ raise ValueError('The same name has been used for deltas and covobs!')
  new_samples = []
  new_means = []
  new_idl = []
@@ -1377,7 +1376,7 @@ def _reduce_deltas(deltas, idx_old, idx_new):
  Has to be a subset of idx_old.
  """
  if not len(deltas) == len(idx_old):
- raise Exception('Length of deltas and idx_old have to be the same: %d != %d' % (len(deltas), len(idx_old)))
+ raise ValueError('Length of deltas and idx_old have to be the same: %d != %d' % (len(deltas), len(idx_old)))
  if type(idx_old) is range and type(idx_new) is range:
  if idx_old == idx_new:
  return deltas
@@ -1385,7 +1384,7 @@ def _reduce_deltas(deltas, idx_old, idx_new):
  return deltas
  indices = np.intersect1d(idx_old, idx_new, assume_unique=True, return_indices=True)[1]
  if len(indices) < len(idx_new):
- raise Exception('Error in _reduce_deltas: Config of idx_new not in idx_old')
+ raise ValueError('Error in _reduce_deltas: Config of idx_new not in idx_old')
  return np.array(deltas)[indices]


@@ -1407,12 +1406,14 @@ def reweight(weight, obs, **kwargs):
  result = []
  for i in range(len(obs)):
  if len(obs[i].cov_names):
- raise Exception('Error: Not possible to reweight an Obs that contains covobs!')
+ raise ValueError('Error: Not possible to reweight an Obs that contains covobs!')
  if not set(obs[i].names).issubset(weight.names):
- raise Exception('Error: Ensembles do not fit')
+ raise ValueError('Error: Ensembles do not fit')
+ if len(obs[i].mc_names) > 1 or len(weight.mc_names) > 1:
+ raise ValueError('Error: Cannot reweight an Obs that contains multiple ensembles.')
  for name in obs[i].names:
  if not set(obs[i].idl[name]).issubset(weight.idl[name]):
- raise Exception('obs[%d] has to be defined on a subset of the configs in weight.idl[%s]!' % (i, name))
+ raise ValueError('obs[%d] has to be defined on a subset of the configs in weight.idl[%s]!' % (i, name))
  new_samples = []
  w_deltas = {}
  for name in sorted(obs[i].names):
@@ -1445,18 +1446,21 @@ def correlate(obs_a, obs_b):
  -----
  Keep in mind to only correlate primary observables which have not been reweighted
  yet. The reweighting has to be applied after correlating the observables.
- Currently only works if ensembles are identical (this is not strictly necessary).
+ Only works if a single ensemble is present in the Obs.
+ Currently only works if ensemble content is identical (this is not strictly necessary).
  """

+ if len(obs_a.mc_names) > 1 or len(obs_b.mc_names) > 1:
+ raise ValueError('Error: Cannot correlate Obs that contain multiple ensembles.')
  if sorted(obs_a.names) != sorted(obs_b.names):
- raise Exception(f"Ensembles do not fit {set(sorted(obs_a.names)) ^ set(sorted(obs_b.names))}")
+ raise ValueError(f"Ensembles do not fit {set(sorted(obs_a.names)) ^ set(sorted(obs_b.names))}")
  if len(obs_a.cov_names) or len(obs_b.cov_names):
- raise Exception('Error: Not possible to correlate Obs that contain covobs!')
+ raise ValueError('Error: Not possible to correlate Obs that contain covobs!')
  for name in obs_a.names:
  if obs_a.shape[name] != obs_b.shape[name]:
- raise Exception('Shapes of ensemble', name, 'do not fit')
+ raise ValueError('Shapes of ensemble', name, 'do not fit')
  if obs_a.idl[name] != obs_b.idl[name]:
- raise Exception('idl of ensemble', name, 'do not fit')
+ raise ValueError('idl of ensemble', name, 'do not fit')

  if obs_a.reweighted is True:
  warnings.warn("The first observable is already reweighted.", RuntimeWarning)
@@ -1558,7 +1562,7 @@ def invert_corr_cov_cholesky(corr, inverrdiag):

  condn = np.linalg.cond(corr)
  if condn > 0.1 / np.finfo(float).eps:
- raise Exception(f"Cannot invert correlation matrix as its condition number exceeds machine precision ({condn:1.2e})")
+ raise ValueError(f"Cannot invert correlation matrix as its condition number exceeds machine precision ({condn:1.2e})")
  if condn > 1e13:
  warnings.warn("Correlation matrix may be ill-conditioned, condition number: {%1.2e}" % (condn), RuntimeWarning)
  chol = np.linalg.cholesky(corr)
@@ -1639,7 +1643,7 @@ def _smooth_eigenvalues(corr, E):
  Number of eigenvalues to be left substantially unchanged
  """
  if not (2 < E < corr.shape[0] - 1):
- raise Exception(f"'E' has to be between 2 and the dimension of the correlation matrix minus 1 ({corr.shape[0] - 1}).")
+ raise ValueError(f"'E' has to be between 2 and the dimension of the correlation matrix minus 1 ({corr.shape[0] - 1}).")
  vals, vec = np.linalg.eigh(corr)
  lambda_min = np.mean(vals[:-E])
  vals[vals < lambda_min] = lambda_min
@@ -1758,7 +1762,11 @@ def import_bootstrap(boots, name, random_numbers):


  def merge_obs(list_of_obs):
- """Combine all observables in list_of_obs into one new observable
+ """Combine all observables in list_of_obs into one new observable.
+ This allows to merge Obs that have been computed on multiple replica
+ of the same ensemble.
+ If you like to merge Obs that are based on several ensembles, please
+ average them yourself.

  Parameters
  ----------
@@ -1771,9 +1779,9 @@ def merge_obs(list_of_obs):
  """
  replist = [item for obs in list_of_obs for item in obs.names]
  if (len(replist) == len(set(replist))) is False:
- raise Exception('list_of_obs contains duplicate replica: %s' % (str(replist)))
+ raise ValueError('list_of_obs contains duplicate replica: %s' % (str(replist)))
  if any([len(o.cov_names) for o in list_of_obs]):
- raise Exception('Not possible to merge data that contains covobs!')
+ raise ValueError('Not possible to merge data that contains covobs!')
  new_dict = {}
  idl_dict = {}
  for o in list_of_obs:
@@ -1824,7 +1832,7 @@ def cov_Obs(means, cov, name, grad=None):
  for i in range(len(means)):
  ol.append(covobs_to_obs(Covobs(means[i], cov, name, pos=i, grad=grad)))
  if ol[0].covobs[name].N != len(means):
- raise Exception('You have to provide %d mean values!' % (ol[0].N))
+ raise ValueError('You have to provide %d mean values!' % (ol[0].N))
  if len(ol) == 1:
  return ol[0]
  return ol
@@ -1840,7 +1848,7 @@ def _determine_gap(o, e_content, e_name):

  gap = min(gaps)
  if not np.all([gi % gap == 0 for gi in gaps]):
- raise Exception(f"Replica for ensemble {e_name} do not have a common spacing.", gaps)
+ raise ValueError(f"Replica for ensemble {e_name} do not have a common spacing.", gaps)

  return gap

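`_determine_gap` now raises a `ValueError` when replica spacings are incompatible; the underlying check is that the smallest observed gap must divide all others. A standalone sketch (hypothetical helper, not the packaged function):

```python
import numpy as np

def determine_gap(gaps):
    # Sketch of the common-spacing check in _determine_gap: the smallest
    # gap between configurations must divide every other gap.
    gap = min(gaps)
    if not np.all([gi % gap == 0 for gi in gaps]):
        raise ValueError("Replica do not have a common spacing.", gaps)
    return gap

assert determine_gap([2, 4, 8]) == 2   # compatible spacings
incompatible = False
try:
    determine_gap([2, 3])              # 3 is not a multiple of 2
except ValueError:
    incompatible = True
assert incompatible
```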
@@ -0,0 +1 @@
+ __version__ = "2.14.0"
@@ -1,6 +1,6 @@
- Metadata-Version: 2.1
+ Metadata-Version: 2.2
  Name: pyerrors
- Version: 2.13.0
+ Version: 2.14.0
  Summary: Error propagation and statistical analysis for Monte Carlo simulations
  Home-page: https://github.com/fjosw/pyerrors
  Author: Fabian Joswig
@@ -39,8 +39,20 @@ Requires-Dist: pytest-benchmark; extra == "test"
  Requires-Dist: hypothesis; extra == "test"
  Requires-Dist: nbmake; extra == "test"
  Requires-Dist: flake8; extra == "test"
+ Dynamic: author
+ Dynamic: author-email
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: license
+ Dynamic: project-url
+ Dynamic: provides-extra
+ Dynamic: requires-dist
+ Dynamic: requires-python
+ Dynamic: summary

- [![pytest](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml/badge.svg)](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml) [![](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![arXiv](https://img.shields.io/badge/arXiv-2209.14371-b31b1b.svg)](https://arxiv.org/abs/2209.14371) [![DOI](https://img.shields.io/badge/DOI-10.1016%2Fj.cpc.2023.108750-blue)](https://doi.org/10.1016/j.cpc.2023.108750)
+ [![](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![arXiv](https://img.shields.io/badge/arXiv-2209.14371-b31b1b.svg)](https://arxiv.org/abs/2209.14371) [![DOI](https://img.shields.io/badge/DOI-10.1016%2Fj.cpc.2023.108750-blue)](https://doi.org/10.1016/j.cpc.2023.108750)
  # pyerrors
  `pyerrors` is a python framework for error computation and propagation of Markov chain Monte Carlo data from lattice field theory and statistical mechanics simulations.

@@ -56,11 +68,6 @@ Install the most recent release using pip and [pypi](https://pypi.org/project/py
  python -m pip install pyerrors # Fresh install
  python -m pip install -U pyerrors # Update
  ```
- Install the most recent release using conda and [conda-forge](https://anaconda.org/conda-forge/pyerrors):
- ```bash
- conda install -c conda-forge pyerrors # Fresh install
- conda update -c conda-forge pyerrors # Update
- ```

  ## Contributing
  We appreciate all contributions to the code, the documentation and the examples. If you want to get involved please have a look at our [contribution guideline](https://github.com/fjosw/pyerrors/blob/develop/CONTRIBUTING.md).
@@ -1,3 +1,6 @@
  [build-system]
  requires = ["setuptools >= 63.0.0", "wheel"]
  build-backend = "setuptools.build_meta"
+
+ [tool.ruff.lint]
+ ignore = ["F403"]
@@ -1 +0,0 @@
- __version__ = "2.13.0"