explainiverse-0.7.0-py3-none-any.whl → explainiverse-0.8.0-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
--- explainiverse-0.7.0.dist-info/METADATA
+++ explainiverse-0.8.0.dist-info/METADATA
@@ -1,7 +1,7 @@
  Metadata-Version: 2.1
  Name: explainiverse
- Version: 0.7.0
- Summary: Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more
+ Version: 0.8.0
+ Summary: Unified, extensible explainability framework supporting 18 XAI methods including LIME, SHAP, LRP, TCAV, GradCAM, and more
  Home-page: https://github.com/jemsbhai/explainiverse
  License: MIT
  Keywords: xai,explainability,interpretability,machine-learning,lime,shap,anchors
@@ -35,7 +35,7 @@ Description-Content-Type: text/markdown
  [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **17 state-of-the-art explanation methods** across local, global, gradient-based, concept-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.
+ **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **18 state-of-the-art explanation methods** across local, global, gradient-based, concept-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.

  ---

@@ -43,7 +43,7 @@ Description-Content-Type: text/markdown

  | Feature | Description |
  |---------|-------------|
- | **17 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++, TCAV, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
+ | **18 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++, LRP, TCAV, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
  | **8 Evaluation Metrics** | Faithfulness (PGI, PGU, Comprehensiveness, Sufficiency, Correlation) and Stability (RIS, ROS, Lipschitz) |
  | **Unified API** | Consistent `BaseExplainer` interface with standardized `Explanation` output |
  | **Plugin Registry** | Filter explainers by scope, model type, data type; automatic recommendations |
@@ -66,6 +66,7 @@ Description-Content-Type: text/markdown
  | **SmoothGrad** | Gradient | [Smilkov et al., 2017](https://arxiv.org/abs/1706.03825) |
  | **Saliency Maps** | Gradient | [Simonyan et al., 2014](https://arxiv.org/abs/1312.6034) |
  | **GradCAM / GradCAM++** | Gradient (CNN) | [Selvaraju et al., 2017](https://arxiv.org/abs/1610.02391) |
+ | **LRP** | Decomposition | [Bach et al., 2015](https://doi.org/10.1371/journal.pone.0130140) |
  | **TCAV** | Concept-Based | [Kim et al., 2018](https://arxiv.org/abs/1711.11279) |
  | **Anchors** | Rule-Based | [Ribeiro et al., 2018](https://ojs.aaai.org/index.php/AAAI/article/view/11491) |
  | **Counterfactual** | Contrastive | [Mothilal et al., 2020](https://arxiv.org/abs/1905.07697) |
@@ -143,7 +144,7 @@ adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())
  # List all available explainers
  print(default_registry.list_explainers())
  # ['lime', 'shap', 'treeshap', 'integrated_gradients', 'deeplift', 'deepshap',
- # 'smoothgrad', 'saliency', 'gradcam', 'tcav', 'anchors', 'counterfactual',
+ # 'smoothgrad', 'saliency', 'gradcam', 'lrp', 'tcav', 'anchors', 'counterfactual',
  # 'protodash', 'permutation_importance', 'partial_dependence', 'ale', 'sage']

  # Create an explainer via registry
@@ -211,6 +212,70 @@ print(f"Attributions: {explanation.explanation_data['feature_attributions']}")
  print(f"Convergence δ: {explanation.explanation_data['convergence_delta']:.6f}")
  ```

+ ### Layer-wise Relevance Propagation (LRP)
+
+ ```python
+ from explainiverse.explainers.gradient import LRPExplainer
+
+ # LRP - Decomposition-based attribution with conservation property
+ explainer = LRPExplainer(
+     model=adapter,
+     feature_names=feature_names,
+     class_names=class_names,
+     rule="epsilon",   # Propagation rule: epsilon, gamma, alpha_beta, z_plus, composite
+     epsilon=1e-6      # Stabilization constant
+ )
+
+ # Basic explanation
+ explanation = explainer.explain(X[0], target_class=0)
+ print(explanation.explanation_data["feature_attributions"])
+
+ # Verify conservation property (sum of attributions ≈ target output)
+ explanation = explainer.explain(X[0], return_convergence_delta=True)
+ print(f"Conservation delta: {explanation.explanation_data['convergence_delta']:.6f}")
+
+ # Compare different LRP rules
+ comparison = explainer.compare_rules(X[0], rules=["epsilon", "gamma", "z_plus"])
+ for rule, result in comparison.items():
+     print(f"{rule}: top feature = {result['top_feature']}")
+
+ # Layer-wise relevance analysis
+ layer_result = explainer.explain_with_layer_relevances(X[0])
+ for layer, relevances in layer_result["layer_relevances"].items():
+     print(f"{layer}: sum = {sum(relevances):.4f}")
+
+ # Composite rules: different rules for different layers
+ explainer_composite = LRPExplainer(
+     model=adapter,
+     feature_names=feature_names,
+     class_names=class_names,
+     rule="composite"
+ )
+ explainer_composite.set_composite_rule({
+     0: "z_plus",    # Input layer: focus on what's present
+     2: "epsilon",   # Middle layers: balanced
+     4: "epsilon"    # Output layer
+ })
+ explanation = explainer_composite.explain(X[0])
+ ```
+
+ **LRP Propagation Rules:**
+
+ | Rule | Description | Use Case |
+ |------|-------------|----------|
+ | `epsilon` | Adds stabilization constant | General purpose (default) |
+ | `gamma` | Enhances positive contributions | Image classification |
+ | `alpha_beta` | Separates pos/neg (α-β=1) | Fine-grained control |
+ | `z_plus` | Only positive weights | Input layers, what's present |
+ | `composite` | Different rules per layer | Best practice for deep nets |
+
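For reference, the `epsilon` entry above corresponds to the standard LRP-ε propagation rule from the literature (Bach et al., 2015), quoted here in its textbook form rather than as the exact implementation in `lrp.py`:

$$
R_j = \sum_k \frac{a_j w_{jk}}{z_k + \epsilon \cdot \operatorname{sign}(z_k)} \, R_k,
\qquad z_k = \sum_j a_j w_{jk} + b_k
$$

where $a_j$ are the layer inputs, $w_{jk}$ the weights, and $R_k$ the relevance received from the layer above. Conservation means $\sum_j R_j \approx \sum_k R_k$ at every layer (exact for $\epsilon = 0$ and zero biases), which is why the summed input attributions approximately recover the target output. The `gamma` rule substitutes $w_{jk} + \gamma w_{jk}^{+}$ for $w_{jk}$, and `z_plus` keeps only the positive contributions $(a_j w_{jk})^{+}$.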
+ **Supported Layers:**
+ - Linear, Conv2d
+ - BatchNorm1d, BatchNorm2d
+ - ReLU, LeakyReLU, ELU, Tanh, Sigmoid, GELU
+ - MaxPool2d, AvgPool2d, AdaptiveAvgPool2d
+ - Flatten, Dropout
+
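To make the rule choice concrete, here is a minimal sketch (illustrative only, not part of the packaged README) that loops over the single-layer rules from the table and checks the conservation delta for each, reusing `adapter`, `feature_names`, `class_names`, and `X` from the example above:

```python
from explainiverse.explainers.gradient import LRPExplainer

# Sketch: compare conservation behaviour across the rules listed above.
# `adapter`, `feature_names`, `class_names`, and `X` come from the LRP example.
for rule in ["epsilon", "gamma", "z_plus"]:
    explainer = LRPExplainer(
        model=adapter,
        feature_names=feature_names,
        class_names=class_names,
        rule=rule,
    )
    explanation = explainer.explain(X[0], return_convergence_delta=True)
    delta = explanation.explanation_data["convergence_delta"]
    # A delta near zero means the attributions sum to (approximately) the target output
    print(f"{rule}: conservation delta = {delta:.6f}")
```

Whichever rule conserves best can then be cross-checked with `compare_rules` to see whether the rules also agree on the top-ranked features.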
  ### DeepLIFT and DeepSHAP

  ```python
@@ -602,7 +667,7 @@ explainiverse/
  │ └── pytorch_adapter.py # With gradient support
  ├── explainers/
  │ ├── attribution/ # LIME, SHAP, TreeSHAP
- │ ├── gradient/ # IG, DeepLIFT, DeepSHAP, SmoothGrad, Saliency, GradCAM, TCAV
+ │ ├── gradient/ # IG, DeepLIFT, DeepSHAP, SmoothGrad, Saliency, GradCAM, LRP, TCAV
  │ ├── rule_based/ # Anchors
  │ ├── counterfactual/ # DiCE-style
  │ ├── global_explainers/ # Permutation, PDP, ALE, SAGE
@@ -626,10 +691,10 @@ poetry run pytest
  poetry run pytest --cov=explainiverse --cov-report=html

  # Run specific test file
- poetry run pytest tests/test_smoothgrad.py -v
+ poetry run pytest tests/test_lrp.py -v

  # Run specific test class
- poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
+ poetry run pytest tests/test_lrp.py::TestLRPConv2d -v
  ```

  ---
@@ -640,6 +705,7 @@ poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
  - [x] Core framework (BaseExplainer, Explanation, Registry)
  - [x] Perturbation methods: LIME, KernelSHAP, TreeSHAP
  - [x] Gradient methods: Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++
+ - [x] Decomposition methods: Layer-wise Relevance Propagation (LRP) with ε, γ, αβ, z⁺, composite rules
  - [x] Concept-based: TCAV (Testing with Concept Activation Vectors)
  - [x] Rule-based: Anchors
  - [x] Counterfactual: DiCE-style
@@ -649,9 +715,6 @@ poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
  - [x] Evaluation: Stability metrics (RIS, ROS, Lipschitz)
  - [x] PyTorch adapter with gradient support

- ### In Progress 🚧
- - [ ] Layer-wise Relevance Propagation (LRP)
-
  ### Planned 📋
  - [ ] Attention-based explanations (for Transformers)
  - [ ] TensorFlow/Keras adapter
@@ -671,7 +734,7 @@ If you use Explainiverse in your research, please cite:
  author = {Syed, Muntaser},
  year = {2025},
  url = {https://github.com/jemsbhai/explainiverse},
- version = {0.7.0}
+ version = {0.8.0}
  }
  ```

@@ -699,5 +762,5 @@ MIT License - see [LICENSE](LICENSE) for details.

  ## Acknowledgments

- Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, GradCAM, TCAV, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.
+ Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, LRP, GradCAM, TCAV, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.

--- explainiverse-0.7.0.dist-info/RECORD
+++ explainiverse-0.8.0.dist-info/RECORD
@@ -1,23 +1,23 @@
- explainiverse/__init__.py,sha256=u67gp9ZbzMDs2ECjXvs4aZUbjELRPfx32jGszJhhZ7U,1612
+ explainiverse/__init__.py,sha256=CKFagMPKfejfb6ejGktuiLwfo8j92jNAhzmKJkyYWnI,1694
  explainiverse/adapters/__init__.py,sha256=HcQGISyp-YQ4jEj2IYveX_c9X5otLcTNWRnVRRhzRik,781
  explainiverse/adapters/base_adapter.py,sha256=Nqt0GeDn_-PjTyJcZsE8dRTulavqFQsv8sMYWS_ps-M,603
- explainiverse/adapters/pytorch_adapter.py,sha256=GTilJAR1VF_OgWG88qZoqlqefHaSXB3i9iOwCJkyHTg,13318
+ explainiverse/adapters/pytorch_adapter.py,sha256=DLQKJ7gB0foPwAmcrru7QdZnPRnhqDKpFCT-EaD3420,15612
  explainiverse/adapters/sklearn_adapter.py,sha256=pzIBtMuqrG-6ZbUqUCMt7rSk3Ow0FgrY268FSweFvw4,958
  explainiverse/core/__init__.py,sha256=P3jHMnH5coFqTTO1w-gT-rurkCM1-9r3pF-055pbXMg,474
  explainiverse/core/explainer.py,sha256=Z9on-9VblYDlQx9oBm1BHpmAf_NsQajZ3qr-u48Aejo,784
- explainiverse/core/explanation.py,sha256=6zxFh_TH8tFHc-r_H5-WHQ05Sp1Kp2TxLz3gyFek5jo,881
- explainiverse/core/registry.py,sha256=BAqk2FKqbrZcoLqlODXRCOolb57DBgS-Kxs_CCtngvw,26376
+ explainiverse/core/explanation.py,sha256=498BbRYrNR-BOql78sENOsyWxgqLsBVZXn14lh-bhww,6241
+ explainiverse/core/registry.py,sha256=6HttL27Ty4jYtugRf-EDIKPy80M8BfvUppAKwwGDyQ8,27207
  explainiverse/engine/__init__.py,sha256=1sZO8nH1mmwK2e-KUavBQm7zYDWUe27nyWoFy9tgsiA,197
- explainiverse/engine/suite.py,sha256=sq8SK_6Pf0qRckTmVJ7Mdosu9bhkjAGPGN8ymLGFP9E,4914
+ explainiverse/engine/suite.py,sha256=G-7OjESisSTaQ1FQrlPl4YydX13uz8Bb70hJZNlcl2M,8918
  explainiverse/evaluation/__init__.py,sha256=ePE97KwSjg_IChZ03DeQax8GruTjx-BVrMSi_nzoyoA,1501
  explainiverse/evaluation/_utils.py,sha256=ej7YOPZ90gVHuuIMj45EXHq9Jx3QG7lhaj5sk26hRpg,10519
  explainiverse/evaluation/faithfulness.py,sha256=_40afOW6vJ3dQguHlJySlgWqiJF_xIvN-uVA3nPKRvI,14841
- explainiverse/evaluation/metrics.py,sha256=tSBXtyA_-0zOGCGjlPZU6LdGKRH_QpWfgKa78sdlovs,7453
+ explainiverse/evaluation/metrics.py,sha256=snNK9Ua1VzHDT6DlrhYL4m2MmRF3X15vuuVXiHbeicU,9944
  explainiverse/evaluation/stability.py,sha256=q2d3rpxpp0X1s6ADST1iZA4tzksLJpR0mYBnA_U5FIs,12090
  explainiverse/explainers/__init__.py,sha256=-ncRXbFKahH3bR0oXM2UQM4LtTdTlvdeprL6cHeqNBs,2549
  explainiverse/explainers/attribution/__init__.py,sha256=YeVs9bS_IWDtqGbp6T37V6Zp5ZDWzLdAXHxxyFGpiQM,431
- explainiverse/explainers/attribution/lime_wrapper.py,sha256=OnXIV7t6yd-vt38sIi7XmHFbgzlZfCEbRlFyGGd5XiE,3245
- explainiverse/explainers/attribution/shap_wrapper.py,sha256=tKie5AvN7mb55PWOYdMvW0lUAYjfHPzYosEloEY2ZzI,3210
+ explainiverse/explainers/attribution/lime_wrapper.py,sha256=yexy8m4VoVbsIGaMIcwU41ChWZvd-Y_vadtfyarwE4k,5518
+ explainiverse/explainers/attribution/shap_wrapper.py,sha256=M6t-W9S0czZsOFXDsCn9uMW1wMoVO97ISDpiNjN0TsU,5902
  explainiverse/explainers/attribution/treeshap_wrapper.py,sha256=LcBjHzQjmeyWCwLXALJ0WFQ9ol-N_8dod577EDxFDKY,16758
  explainiverse/explainers/counterfactual/__init__.py,sha256=gEV6P8h2fZ3-pv5rqp5sNDqrLErh5ntqpxIIBVCMFv4,247
  explainiverse/explainers/counterfactual/dice_wrapper.py,sha256=PyJYF-z1nyyy0mFROnkJqPtcuT2PwEBARwfh37mZ5ew,11373
@@ -28,16 +28,17 @@ explainiverse/explainers/global_explainers/ale.py,sha256=tgG3XTppCf8LiD7uKzBt4DI
  explainiverse/explainers/global_explainers/partial_dependence.py,sha256=dH6yMjpwZads3pACR3rSykTbssLGHH7e6HfMlpl-S3I,6745
  explainiverse/explainers/global_explainers/permutation_importance.py,sha256=bcgKz1S_D3lrBMgpqEF_Z6qw8Knxl_cfR50hrSO2tBc,4410
  explainiverse/explainers/global_explainers/sage.py,sha256=57Xw1SK529x5JXWt0TVrcFYUUP3C65LfUwgoM-Z3gaw,5839
- explainiverse/explainers/gradient/__init__.py,sha256=Tkf9jiXVfjVVDAhBocDc2tzFJK8RZv8H1pN8J0Ha53o,1362
+ explainiverse/explainers/gradient/__init__.py,sha256=BAMGRO-UsGJx7lD3XG34mJAGHFWPX2AJsIdaf_8gZsg,1498
  explainiverse/explainers/gradient/deeplift.py,sha256=MWOlslizUeoZs31moy2iBgp02N08nBsVU-RoEpODg3M,27775
  explainiverse/explainers/gradient/gradcam.py,sha256=ywW_8PhALwegkpSUDQMFvvVFkA5NnMMW6BB5tb3i8bw,13721
- explainiverse/explainers/gradient/integrated_gradients.py,sha256=feBgY3Vw2rDti7fxRZtLkxse75m2dbP_R05ARqo2BRM,13367
+ explainiverse/explainers/gradient/integrated_gradients.py,sha256=EfIX4TMwfSx7Tl_2efy-oLFt9Xx7byJPV8DxbtFIeKw,18098
+ explainiverse/explainers/gradient/lrp.py,sha256=hr7-VJQiicCSsPc-408N1Mhd10XrlxQHdyeYsaHOfQo,45161
  explainiverse/explainers/gradient/saliency.py,sha256=pcimyuSqKzsIR1yCMNWfH2M7T_vcDKkwcVv0zQlPL3w,10305
  explainiverse/explainers/gradient/smoothgrad.py,sha256=COIKZSFcApmMkA62M0AForHiYlQ6hSFx5hZIabRXGlM,15727
  explainiverse/explainers/gradient/tcav.py,sha256=zc-8wMsc2ZOhUeSZNBJ6H6BPXlVMJ9DRcAMiL25wU9I,32242
  explainiverse/explainers/rule_based/__init__.py,sha256=gKzlFCAzwurAMLJcuYgal4XhDj1thteBGcaHWmN7iWk,243
  explainiverse/explainers/rule_based/anchors_wrapper.py,sha256=ML7W6aam-eMGZHy5ilol8qupZvNBJpYAFatEEPnuMyo,13254
- explainiverse-0.7.0.dist-info/LICENSE,sha256=28rbHe8rJgmUlRdxJACfq1Sj-MtCEhyHxkJedQd1ZYA,1070
- explainiverse-0.7.0.dist-info/METADATA,sha256=DoZnwq5ew4n_xKcuoWu7Xi6uPb386yUTeDMPHWsWg1s,21352
- explainiverse-0.7.0.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
- explainiverse-0.7.0.dist-info/RECORD,,
+ explainiverse-0.8.0.dist-info/LICENSE,sha256=28rbHe8rJgmUlRdxJACfq1Sj-MtCEhyHxkJedQd1ZYA,1070
+ explainiverse-0.8.0.dist-info/METADATA,sha256=KquLWUS2Rz3N6uR4WxnTaf5coczo8X0vxLPSW1Mdkak,23770
+ explainiverse-0.8.0.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
+ explainiverse-0.8.0.dist-info/RECORD,,