xrtm-eval 0.1.1__tar.gz → 0.1.2__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (28)
  1. {xrtm_eval-0.1.1/src/xrtm_eval.egg-info → xrtm_eval-0.1.2}/PKG-INFO +24 -2
  2. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/README.md +23 -1
  3. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/pyproject.toml +1 -1
  4. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2/src/xrtm_eval.egg-info}/PKG-INFO +24 -2
  5. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/LICENSE +0 -0
  6. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/setup.cfg +0 -0
  7. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/__init__.py +0 -0
  8. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/core/__init__.py +0 -0
  9. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/core/epistemics.py +0 -0
  10. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/core/eval/__init__.py +0 -0
  11. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/core/eval/aggregation.py +0 -0
  12. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/core/eval/bayesian.py +0 -0
  13. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/core/eval/definitions.py +0 -0
  14. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/__init__.py +0 -0
  15. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/analytics.py +0 -0
  16. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/bias.py +0 -0
  17. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/epistemic_evaluator.py +0 -0
  18. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/intervention.py +0 -0
  19. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/metrics.py +0 -0
  20. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/resilience.py +0 -0
  21. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/kit/eval/viz.py +0 -0
  22. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/schemas/__init__.py +0 -0
  23. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm/eval/schemas/forecast.py +0 -0
  24. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm_eval.egg-info/SOURCES.txt +0 -0
  25. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm_eval.egg-info/dependency_links.txt +0 -0
  26. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm_eval.egg-info/requires.txt +0 -0
  27. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/src/xrtm_eval.egg-info/top_level.txt +0 -0
  28. {xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/tests/test_metrics.py +0 -0
{xrtm_eval-0.1.1/src/xrtm_eval.egg-info → xrtm_eval-0.1.2}/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: xrtm-eval
-Version: 0.1.1
+Version: 0.1.2
 Summary: The Judge/Scoring engine for XRTM.
 Author-email: XRTM Team <moy@xrtm.org>
 License: Apache-2.0
@@ -23,15 +23,27 @@ Dynamic: license-file
 
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Python](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
+[![PyPI](https://img.shields.io/pypi/v/xrtm-eval.svg)](https://pypi.org/project/xrtm-eval/)
 
 **The Judge for XRTM.**
 
 `xrtm-eval` is the rigorous scoring engine used to grade probabilistic forecasts. It operates independently of the inference engine to ensure objective evaluation.
 
+## Part of the XRTM Ecosystem
+
+```
+Layer 4: xrtm-train → (imports all)
+Layer 3: xrtm-forecast → (imports eval, data)
+Layer 2: xrtm-eval → (imports data) ← YOU ARE HERE
+Layer 1: xrtm-data → (zero dependencies)
+```
+
+`xrtm-eval` provides scoring metrics AND trust primitives used by the forecast engine.
+
 ## Installation
 
 ```bash
-uv pip install xrtm-eval
+pip install xrtm-eval
 ```
 
 ## Core Primitives
@@ -54,6 +66,16 @@ score = evaluator.score(prediction=0.7, ground_truth=1)
 ### 2. Expected Calibration Error (ECE)
 Use the `ExpectedCalibrationErrorEvaluator` to measure the gap between confidence and accuracy across bin buckets.
 
+### 3. Epistemic Trust Primitives (v0.1.1+)
+`xrtm-eval` now includes trust scoring infrastructure:
+
+```python
+from xrtm.eval.core.epistemics import IntegrityGuardian, SourceTrustRegistry
+
+registry = SourceTrustRegistry()
+guardian = IntegrityGuardian(registry)
+```
+
 ## Development
 
 Prerequisites:
{xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/README.md
@@ -2,15 +2,27 @@
 
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Python](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
+[![PyPI](https://img.shields.io/pypi/v/xrtm-eval.svg)](https://pypi.org/project/xrtm-eval/)
 
 **The Judge for XRTM.**
 
 `xrtm-eval` is the rigorous scoring engine used to grade probabilistic forecasts. It operates independently of the inference engine to ensure objective evaluation.
 
+## Part of the XRTM Ecosystem
+
+```
+Layer 4: xrtm-train → (imports all)
+Layer 3: xrtm-forecast → (imports eval, data)
+Layer 2: xrtm-eval → (imports data) ← YOU ARE HERE
+Layer 1: xrtm-data → (zero dependencies)
+```
+
+`xrtm-eval` provides scoring metrics AND trust primitives used by the forecast engine.
+
 ## Installation
 
 ```bash
-uv pip install xrtm-eval
+pip install xrtm-eval
 ```
 
 ## Core Primitives
@@ -33,6 +45,16 @@ score = evaluator.score(prediction=0.7, ground_truth=1)
 ### 2. Expected Calibration Error (ECE)
 Use the `ExpectedCalibrationErrorEvaluator` to measure the gap between confidence and accuracy across bin buckets.
 
+### 3. Epistemic Trust Primitives (v0.1.1+)
+`xrtm-eval` now includes trust scoring infrastructure:
+
+```python
+from xrtm.eval.core.epistemics import IntegrityGuardian, SourceTrustRegistry
+
+registry = SourceTrustRegistry()
+guardian = IntegrityGuardian(registry)
+```
+
 ## Development
 
 Prerequisites:
{xrtm_eval-0.1.1 → xrtm_eval-0.1.2}/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "xrtm-eval"
-version = "0.1.1"
+version = "0.1.2"
 description = "The Judge/Scoring engine for XRTM."
 readme = "README.md"
 requires-python = ">=3.11"
{xrtm_eval-0.1.1 → xrtm_eval-0.1.2/src/xrtm_eval.egg-info}/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: xrtm-eval
-Version: 0.1.1
+Version: 0.1.2
 Summary: The Judge/Scoring engine for XRTM.
 Author-email: XRTM Team <moy@xrtm.org>
 License: Apache-2.0
@@ -23,15 +23,27 @@ Dynamic: license-file
 
 [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
 [![Python](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
+[![PyPI](https://img.shields.io/pypi/v/xrtm-eval.svg)](https://pypi.org/project/xrtm-eval/)
 
 **The Judge for XRTM.**
 
 `xrtm-eval` is the rigorous scoring engine used to grade probabilistic forecasts. It operates independently of the inference engine to ensure objective evaluation.
 
+## Part of the XRTM Ecosystem
+
+```
+Layer 4: xrtm-train → (imports all)
+Layer 3: xrtm-forecast → (imports eval, data)
+Layer 2: xrtm-eval → (imports data) ← YOU ARE HERE
+Layer 1: xrtm-data → (zero dependencies)
+```
+
+`xrtm-eval` provides scoring metrics AND trust primitives used by the forecast engine.
+
 ## Installation
 
 ```bash
-uv pip install xrtm-eval
+pip install xrtm-eval
 ```
 
 ## Core Primitives
@@ -54,6 +66,16 @@ score = evaluator.score(prediction=0.7, ground_truth=1)
 ### 2. Expected Calibration Error (ECE)
 Use the `ExpectedCalibrationErrorEvaluator` to measure the gap between confidence and accuracy across bin buckets.
 
+### 3. Epistemic Trust Primitives (v0.1.1+)
+`xrtm-eval` now includes trust scoring infrastructure:
+
+```python
+from xrtm.eval.core.epistemics import IntegrityGuardian, SourceTrustRegistry
+
+registry = SourceTrustRegistry()
+guardian = IntegrityGuardian(registry)
+```
+
 ## Development
 
 Prerequisites:
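For reference, the two metrics the diffed README names have standard definitions. The sketch below is plain Python illustrating the math only; it is not the `BrierScoreEvaluator`/`ExpectedCalibrationErrorEvaluator` API (the diff shows only `evaluator.score(prediction=0.7, ground_truth=1)`), and the function names here are hypothetical:

```python
def brier_score(prediction: float, ground_truth: int) -> float:
    """Squared error between a forecast probability and a binary outcome."""
    return (prediction - ground_truth) ** 2

def expected_calibration_error(preds, labels, n_bins=10):
    """Weighted average |confidence - accuracy| gap over equal-width bins."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, labels):
        # p == 1.0 would index past the last bin, so clamp it.
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    ece, total = 0.0, len(preds)
    for bucket in bins:
        if not bucket:
            continue
        conf = sum(p for p, _ in bucket) / len(bucket)
        acc = sum(y for _, y in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(conf - acc)
    return ece
```

With the README's example inputs, `brier_score(0.7, 1)` is `(0.7 - 1)^2 = 0.09`: a confident correct forecast is penalized lightly, a confident wrong one heavily.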
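The diff reveals only the constructors of the new epistemics primitives (`SourceTrustRegistry()` and `IntegrityGuardian(registry)`), not their behavior. As a toy illustration of what such a pair could do, here is a sketch in which a registry maps sources to trust scores and a guardian gates admission on a threshold; every method name and default below is an assumption, not the package's API:

```python
from dataclasses import dataclass, field

@dataclass
class SourceTrustRegistry:
    """Hypothetical registry mapping source identifiers to trust in [0, 1]."""
    scores: dict = field(default_factory=dict)

    def register(self, source: str, trust: float) -> None:
        # Clamp into [0, 1] so downstream comparisons stay well-defined.
        self.scores[source] = max(0.0, min(1.0, trust))

    def trust(self, source: str) -> float:
        # Unseen sources get a neutral prior of 0.5 (an assumption).
        return self.scores.get(source, 0.5)

@dataclass
class IntegrityGuardian:
    """Hypothetical guardian that admits evidence only from trusted sources."""
    registry: SourceTrustRegistry
    threshold: float = 0.6

    def admit(self, source: str) -> bool:
        return self.registry.trust(source) >= self.threshold
```

Under these assumptions, a registered source with trust 0.9 clears the 0.6 threshold, while an unknown source sitting at the 0.5 prior does not.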