explainiverse 0.5.0__py3-none-any.whl → 0.7.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: explainiverse
- Version: 0.5.0
+ Version: 0.7.0
  Summary: Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more
  Home-page: https://github.com/jemsbhai/explainiverse
  License: MIT
@@ -35,7 +35,7 @@ Description-Content-Type: text/markdown
  [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
- **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **15 state-of-the-art explanation methods** across local, global, gradient-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.
+ **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **17 state-of-the-art explanation methods** across local, global, gradient-based, concept-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.
 
  ---
 
@@ -43,7 +43,7 @@ Description-Content-Type: text/markdown
 
  | Feature | Description |
  |---------|-------------|
- | **15 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, GradCAM/GradCAM++, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
+ | **17 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++, TCAV, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
  | **8 Evaluation Metrics** | Faithfulness (PGI, PGU, Comprehensiveness, Sufficiency, Correlation) and Stability (RIS, ROS, Lipschitz) |
  | **Unified API** | Consistent `BaseExplainer` interface with standardized `Explanation` output |
  | **Plugin Registry** | Filter explainers by scope, model type, data type; automatic recommendations |
@@ -64,7 +64,9 @@ Description-Content-Type: text/markdown
  | **DeepLIFT** | Gradient | [Shrikumar et al., 2017](https://arxiv.org/abs/1704.02685) |
  | **DeepSHAP** | Gradient + Shapley | [Lundberg & Lee, 2017](https://arxiv.org/abs/1705.07874) |
  | **SmoothGrad** | Gradient | [Smilkov et al., 2017](https://arxiv.org/abs/1706.03825) |
+ | **Saliency Maps** | Gradient | [Simonyan et al., 2014](https://arxiv.org/abs/1312.6034) |
  | **GradCAM / GradCAM++** | Gradient (CNN) | [Selvaraju et al., 2017](https://arxiv.org/abs/1610.02391) |
+ | **TCAV** | Concept-Based | [Kim et al., 2018](https://arxiv.org/abs/1711.11279) |
  | **Anchors** | Rule-Based | [Ribeiro et al., 2018](https://ojs.aaai.org/index.php/AAAI/article/view/11491) |
  | **Counterfactual** | Contrastive | [Mothilal et al., 2020](https://arxiv.org/abs/1905.07697) |
  | **ProtoDash** | Example-Based | [Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212) |
@@ -141,8 +143,8 @@ adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())
  # List all available explainers
  print(default_registry.list_explainers())
  # ['lime', 'shap', 'treeshap', 'integrated_gradients', 'deeplift', 'deepshap',
- # 'smoothgrad', 'gradcam', 'anchors', 'counterfactual', 'protodash',
- # 'permutation_importance', 'partial_dependence', 'ale', 'sage']
+ # 'smoothgrad', 'saliency', 'gradcam', 'tcav', 'anchors', 'counterfactual',
+ # 'protodash', 'permutation_importance', 'partial_dependence', 'ale', 'sage']
 
  # Create an explainer via registry
  explainer = default_registry.create(
@@ -233,6 +235,41 @@ deepshap = DeepLIFTShapExplainer(
  explanation = deepshap.explain(X[0])
  ```
 
+ ### Saliency Maps
+
+ ```python
+ from explainiverse.explainers.gradient import SaliencyExplainer
+
+ # Saliency Maps - simplest and fastest gradient method
+ explainer = SaliencyExplainer(
+     model=adapter,
+     feature_names=feature_names,
+     class_names=class_names,
+     absolute_value=True  # Default: absolute gradient magnitudes
+ )
+
+ # Standard saliency (absolute gradients)
+ explanation = explainer.explain(X[0], method="saliency")
+
+ # Input × Gradient (gradient scaled by input values)
+ explanation = explainer.explain(X[0], method="input_times_gradient")
+
+ # Signed saliency (keep gradient direction)
+ explainer_signed = SaliencyExplainer(
+     model=adapter,
+     feature_names=feature_names,
+     class_names=class_names,
+     absolute_value=False
+ )
+ explanation = explainer_signed.explain(X[0])
+
+ # Compare all variants
+ variants = explainer.compute_all_variants(X[0])
+ print(variants["saliency_absolute"])
+ print(variants["saliency_signed"])
+ print(variants["input_times_gradient"])
+ ```
+
  ### SmoothGrad
 
  ```python
@@ -281,6 +318,56 @@ heatmap = explanation.explanation_data["heatmap"]
  overlay = explainer.get_overlay(original_image, heatmap, alpha=0.5)
  ```
 
+ ### TCAV (Concept-Based Explanations)
+
+ ```python
+ from explainiverse.explainers.gradient import TCAVExplainer
+
+ # For neural network models with concept examples
+ adapter = PyTorchAdapter(model, task="classification", class_names=class_names)
+
+ # Create TCAV explainer targeting a specific layer
+ explainer = TCAVExplainer(
+     model=adapter,
+     layer_name="layer3",  # Target layer for concept analysis
+     class_names=class_names
+ )
+
+ # Learn a concept from examples (e.g., "striped" pattern)
+ explainer.learn_concept(
+     concept_name="striped",
+     concept_examples=striped_images,  # Images with stripes
+     negative_examples=random_images,  # Random images without stripes
+     min_accuracy=0.6                  # Minimum CAV classifier accuracy
+ )
+
+ # Compute TCAV score: fraction of inputs where concept positively influences prediction
+ tcav_score = explainer.compute_tcav_score(
+     test_inputs=test_images,
+     target_class=0,  # e.g., "zebra"
+     concept_name="striped"
+ )
+ print(f"TCAV score: {tcav_score:.3f}")  # >0.5 means concept positively influences class
+
+ # Statistical significance testing against random concepts
+ result = explainer.statistical_significance_test(
+     test_inputs=test_images,
+     target_class=0,
+     concept_name="striped",
+     n_random=10,
+     negative_examples=random_images
+ )
+ print(f"p-value: {result['p_value']:.4f}, significant: {result['significant']}")
+
+ # Full explanation with multiple concepts
+ explanation = explainer.explain(
+     test_inputs=test_images,
+     target_class=0,
+     run_significance_test=True
+ )
+ print(explanation.explanation_data["tcav_scores"])
+ ```
+
  ---
 
  ## Example-Based Explanations
@@ -515,7 +602,7 @@ explainiverse/
  │   └── pytorch_adapter.py      # With gradient support
  ├── explainers/
  │   ├── attribution/            # LIME, SHAP, TreeSHAP
- │   ├── gradient/               # IG, DeepLIFT, DeepSHAP, SmoothGrad, GradCAM
+ │   ├── gradient/               # IG, DeepLIFT, DeepSHAP, SmoothGrad, Saliency, GradCAM, TCAV
  │   ├── rule_based/             # Anchors
  │   ├── counterfactual/         # DiCE-style
  │   ├── global_explainers/      # Permutation, PDP, ALE, SAGE
@@ -552,7 +639,8 @@ poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
  ### Completed ✅
  - [x] Core framework (BaseExplainer, Explanation, Registry)
  - [x] Perturbation methods: LIME, KernelSHAP, TreeSHAP
- - [x] Gradient methods: Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, GradCAM/GradCAM++
+ - [x] Gradient methods: Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++
+ - [x] Concept-based: TCAV (Testing with Concept Activation Vectors)
  - [x] Rule-based: Anchors
  - [x] Counterfactual: DiCE-style
  - [x] Global: Permutation Importance, PDP, ALE, SAGE
@@ -562,8 +650,6 @@ poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
  - [x] PyTorch adapter with gradient support
 
  ### In Progress 🚧
- - [ ] Saliency Maps (vanilla gradients)
- - [ ] TCAV (Testing with Concept Activation Vectors)
  - [ ] Layer-wise Relevance Propagation (LRP)
 
  ### Planned 📋
@@ -585,7 +671,7 @@ If you use Explainiverse in your research, please cite:
    author = {Syed, Muntaser},
    year = {2025},
    url = {https://github.com/jemsbhai/explainiverse},
-   version = {0.5.0}
+   version = {0.7.0}
  }
  ```
591
677
 
@@ -613,5 +699,5 @@ MIT License - see [LICENSE](LICENSE) for details.
 
  ## Acknowledgments
 
- Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, GradCAM, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.
+ Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, GradCAM, TCAV, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.
 
@@ -1,4 +1,4 @@
- explainiverse/__init__.py,sha256=LB5iTelrUHMZa5ctl2MMei54U6Ir_CBqlxi8yY5kHJY,1612
+ explainiverse/__init__.py,sha256=u67gp9ZbzMDs2ECjXvs4aZUbjELRPfx32jGszJhhZ7U,1612
  explainiverse/adapters/__init__.py,sha256=HcQGISyp-YQ4jEj2IYveX_c9X5otLcTNWRnVRRhzRik,781
  explainiverse/adapters/base_adapter.py,sha256=Nqt0GeDn_-PjTyJcZsE8dRTulavqFQsv8sMYWS_ps-M,603
  explainiverse/adapters/pytorch_adapter.py,sha256=GTilJAR1VF_OgWG88qZoqlqefHaSXB3i9iOwCJkyHTg,13318
@@ -6,7 +6,7 @@ explainiverse/adapters/sklearn_adapter.py,sha256=pzIBtMuqrG-6ZbUqUCMt7rSk3Ow0Fgr
  explainiverse/core/__init__.py,sha256=P3jHMnH5coFqTTO1w-gT-rurkCM1-9r3pF-055pbXMg,474
  explainiverse/core/explainer.py,sha256=Z9on-9VblYDlQx9oBm1BHpmAf_NsQajZ3qr-u48Aejo,784
  explainiverse/core/explanation.py,sha256=6zxFh_TH8tFHc-r_H5-WHQ05Sp1Kp2TxLz3gyFek5jo,881
- explainiverse/core/registry.py,sha256=VK_a2C_ff7HspyR0MhprQPPHYLaN9H7q2F3gofXe0bo,24754
+ explainiverse/core/registry.py,sha256=BAqk2FKqbrZcoLqlODXRCOolb57DBgS-Kxs_CCtngvw,26376
  explainiverse/engine/__init__.py,sha256=1sZO8nH1mmwK2e-KUavBQm7zYDWUe27nyWoFy9tgsiA,197
  explainiverse/engine/suite.py,sha256=sq8SK_6Pf0qRckTmVJ7Mdosu9bhkjAGPGN8ymLGFP9E,4914
  explainiverse/evaluation/__init__.py,sha256=ePE97KwSjg_IChZ03DeQax8GruTjx-BVrMSi_nzoyoA,1501
@@ -28,14 +28,16 @@ explainiverse/explainers/global_explainers/ale.py,sha256=tgG3XTppCf8LiD7uKzBt4DI
  explainiverse/explainers/global_explainers/partial_dependence.py,sha256=dH6yMjpwZads3pACR3rSykTbssLGHH7e6HfMlpl-S3I,6745
  explainiverse/explainers/global_explainers/permutation_importance.py,sha256=bcgKz1S_D3lrBMgpqEF_Z6qw8Knxl_cfR50hrSO2tBc,4410
  explainiverse/explainers/global_explainers/sage.py,sha256=57Xw1SK529x5JXWt0TVrcFYUUP3C65LfUwgoM-Z3gaw,5839
- explainiverse/explainers/gradient/__init__.py,sha256=HEVotHgwUi731Y_jzSz4_URAGFtjebyU1uk1RHunF4c,707
+ explainiverse/explainers/gradient/__init__.py,sha256=Tkf9jiXVfjVVDAhBocDc2tzFJK8RZv8H1pN8J0Ha53o,1362
  explainiverse/explainers/gradient/deeplift.py,sha256=MWOlslizUeoZs31moy2iBgp02N08nBsVU-RoEpODg3M,27775
  explainiverse/explainers/gradient/gradcam.py,sha256=ywW_8PhALwegkpSUDQMFvvVFkA5NnMMW6BB5tb3i8bw,13721
  explainiverse/explainers/gradient/integrated_gradients.py,sha256=feBgY3Vw2rDti7fxRZtLkxse75m2dbP_R05ARqo2BRM,13367
+ explainiverse/explainers/gradient/saliency.py,sha256=pcimyuSqKzsIR1yCMNWfH2M7T_vcDKkwcVv0zQlPL3w,10305
  explainiverse/explainers/gradient/smoothgrad.py,sha256=COIKZSFcApmMkA62M0AForHiYlQ6hSFx5hZIabRXGlM,15727
+ explainiverse/explainers/gradient/tcav.py,sha256=zc-8wMsc2ZOhUeSZNBJ6H6BPXlVMJ9DRcAMiL25wU9I,32242
  explainiverse/explainers/rule_based/__init__.py,sha256=gKzlFCAzwurAMLJcuYgal4XhDj1thteBGcaHWmN7iWk,243
  explainiverse/explainers/rule_based/anchors_wrapper.py,sha256=ML7W6aam-eMGZHy5ilol8qupZvNBJpYAFatEEPnuMyo,13254
- explainiverse-0.5.0.dist-info/LICENSE,sha256=28rbHe8rJgmUlRdxJACfq1Sj-MtCEhyHxkJedQd1ZYA,1070
- explainiverse-0.5.0.dist-info/METADATA,sha256=ZYzZiwm33v08PA3Ihl5rwhuWieLzRrFgoR6WmPFABsc,18511
- explainiverse-0.5.0.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
- explainiverse-0.5.0.dist-info/RECORD,,
+ explainiverse-0.7.0.dist-info/LICENSE,sha256=28rbHe8rJgmUlRdxJACfq1Sj-MtCEhyHxkJedQd1ZYA,1070
+ explainiverse-0.7.0.dist-info/METADATA,sha256=DoZnwq5ew4n_xKcuoWu7Xi6uPb386yUTeDMPHWsWg1s,21352
+ explainiverse-0.7.0.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
+ explainiverse-0.7.0.dist-info/RECORD,,