explainiverse 0.6.0-py3-none-any.whl → 0.7.1-py3-none-any.whl

This diff compares the published contents of two package versions as released to a supported public registry. It is provided for informational purposes only and reflects the packages as they appear in that registry.
@@ -1,6 +1,6 @@
  Metadata-Version: 2.1
  Name: explainiverse
- Version: 0.6.0
+ Version: 0.7.1
  Summary: Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more
  Home-page: https://github.com/jemsbhai/explainiverse
  License: MIT
@@ -35,7 +35,7 @@ Description-Content-Type: text/markdown
  [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **16 state-of-the-art explanation methods** across local, global, gradient-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.
+ **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI). It provides a standardized interface for **17 state-of-the-art explanation methods** across local, global, gradient-based, concept-based, and example-based paradigms, along with **comprehensive evaluation metrics** for assessing explanation quality.

  ---

@@ -43,7 +43,7 @@ Description-Content-Type: text/markdown

  | Feature | Description |
  |---------|-------------|
- | **16 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
+ | **17 Explainers** | LIME, KernelSHAP, TreeSHAP, Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++, TCAV, Anchors, Counterfactual, Permutation Importance, PDP, ALE, SAGE, ProtoDash |
  | **8 Evaluation Metrics** | Faithfulness (PGI, PGU, Comprehensiveness, Sufficiency, Correlation) and Stability (RIS, ROS, Lipschitz) |
  | **Unified API** | Consistent `BaseExplainer` interface with standardized `Explanation` output |
  | **Plugin Registry** | Filter explainers by scope, model type, data type; automatic recommendations |
@@ -66,6 +66,7 @@ Description-Content-Type: text/markdown
  | **SmoothGrad** | Gradient | [Smilkov et al., 2017](https://arxiv.org/abs/1706.03825) |
  | **Saliency Maps** | Gradient | [Simonyan et al., 2014](https://arxiv.org/abs/1312.6034) |
  | **GradCAM / GradCAM++** | Gradient (CNN) | [Selvaraju et al., 2017](https://arxiv.org/abs/1610.02391) |
+ | **TCAV** | Concept-Based | [Kim et al., 2018](https://arxiv.org/abs/1711.11279) |
  | **Anchors** | Rule-Based | [Ribeiro et al., 2018](https://ojs.aaai.org/index.php/AAAI/article/view/11491) |
  | **Counterfactual** | Contrastive | [Mothilal et al., 2020](https://arxiv.org/abs/1905.07697) |
  | **ProtoDash** | Example-Based | [Gurumoorthy et al., 2019](https://arxiv.org/abs/1707.01212) |
@@ -142,8 +143,8 @@ adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())
  # List all available explainers
  print(default_registry.list_explainers())
  # ['lime', 'shap', 'treeshap', 'integrated_gradients', 'deeplift', 'deepshap',
- #  'smoothgrad', 'gradcam', 'anchors', 'counterfactual', 'protodash',
- #  'permutation_importance', 'partial_dependence', 'ale', 'sage']
+ #  'smoothgrad', 'saliency', 'gradcam', 'tcav', 'anchors', 'counterfactual',
+ #  'protodash', 'permutation_importance', 'partial_dependence', 'ale', 'sage']

  # Create an explainer via registry
  explainer = default_registry.create(
@@ -317,6 +318,56 @@ heatmap = explanation.explanation_data["heatmap"]
  overlay = explainer.get_overlay(original_image, heatmap, alpha=0.5)
  ```

+ ### TCAV (Concept-Based Explanations)
+
+ ```python
+ from explainiverse.explainers.gradient import TCAVExplainer
+
+ # For neural network models with concept examples
+ adapter = PyTorchAdapter(model, task="classification", class_names=class_names)
+
+ # Create TCAV explainer targeting a specific layer
+ explainer = TCAVExplainer(
+     model=adapter,
+     layer_name="layer3",  # Target layer for concept analysis
+     class_names=class_names
+ )
+
+ # Learn a concept from examples (e.g., "striped" pattern)
+ explainer.learn_concept(
+     concept_name="striped",
+     concept_examples=striped_images,   # Images with stripes
+     negative_examples=random_images,   # Random images without stripes
+     min_accuracy=0.6                   # Minimum CAV classifier accuracy
+ )
+
+ # Compute TCAV score: fraction of inputs where concept positively influences prediction
+ tcav_score = explainer.compute_tcav_score(
+     test_inputs=test_images,
+     target_class=0,  # e.g., "zebra"
+     concept_name="striped"
+ )
+ print(f"TCAV score: {tcav_score:.3f}")  # >0.5 means concept positively influences class
+
+ # Statistical significance testing against random concepts
+ result = explainer.statistical_significance_test(
+     test_inputs=test_images,
+     target_class=0,
+     concept_name="striped",
+     n_random=10,
+     negative_examples=random_images
+ )
+ print(f"p-value: {result['p_value']:.4f}, significant: {result['significant']}")
+
+ # Full explanation with multiple concepts
+ explanation = explainer.explain(
+     test_inputs=test_images,
+     target_class=0,
+     run_significance_test=True
+ )
+ print(explanation.explanation_data["tcav_scores"])
+ ```
+
  ---

  ## Example-Based Explanations
@@ -551,7 +602,7 @@ explainiverse/
  │   └── pytorch_adapter.py     # With gradient support
  ├── explainers/
  │   ├── attribution/           # LIME, SHAP, TreeSHAP
- │   ├── gradient/              # IG, DeepLIFT, DeepSHAP, SmoothGrad, GradCAM
+ │   ├── gradient/              # IG, DeepLIFT, DeepSHAP, SmoothGrad, Saliency, GradCAM, TCAV
  │   ├── rule_based/            # Anchors
  │   ├── counterfactual/        # DiCE-style
  │   ├── global_explainers/     # Permutation, PDP, ALE, SAGE
@@ -589,6 +640,7 @@ poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
  - [x] Core framework (BaseExplainer, Explanation, Registry)
  - [x] Perturbation methods: LIME, KernelSHAP, TreeSHAP
  - [x] Gradient methods: Integrated Gradients, DeepLIFT, DeepSHAP, SmoothGrad, Saliency Maps, GradCAM/GradCAM++
+ - [x] Concept-based: TCAV (Testing with Concept Activation Vectors)
  - [x] Rule-based: Anchors
  - [x] Counterfactual: DiCE-style
  - [x] Global: Permutation Importance, PDP, ALE, SAGE
@@ -598,7 +650,6 @@ poetry run pytest tests/test_smoothgrad.py::TestSmoothGradBasic -v
  - [x] PyTorch adapter with gradient support

  ### In Progress 🚧
- - [ ] TCAV (Testing with Concept Activation Vectors)
  - [ ] Layer-wise Relevance Propagation (LRP)

  ### Planned 📋
@@ -620,7 +671,7 @@ If you use Explainiverse in your research, please cite:
    author = {Syed, Muntaser},
    year = {2025},
    url = {https://github.com/jemsbhai/explainiverse},
-   version = {0.6.0}
+   version = {0.7.1}
  }
  ```

@@ -648,5 +699,5 @@ MIT License - see [LICENSE](LICENSE) for details.

  ## Acknowledgments

- Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, GradCAM, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.
+ Explainiverse builds upon the foundational work of many researchers in the XAI community. We thank the authors of LIME, SHAP, Integrated Gradients, DeepLIFT, GradCAM, TCAV, Anchors, DiCE, ALE, SAGE, and ProtoDash for their contributions to interpretable machine learning.

@@ -1,23 +1,23 @@
- explainiverse/__init__.py,sha256=oVivthTc0ts34DyEPWhilq7YOGYDjLLRvWxoKYHsIlo,1612
+ explainiverse/__init__.py,sha256=hkP-f-GcO7dKhO6otGj63cuqwFRiXBYAYZr4wrim4fY,1612
  explainiverse/adapters/__init__.py,sha256=HcQGISyp-YQ4jEj2IYveX_c9X5otLcTNWRnVRRhzRik,781
  explainiverse/adapters/base_adapter.py,sha256=Nqt0GeDn_-PjTyJcZsE8dRTulavqFQsv8sMYWS_ps-M,603
- explainiverse/adapters/pytorch_adapter.py,sha256=GTilJAR1VF_OgWG88qZoqlqefHaSXB3i9iOwCJkyHTg,13318
+ explainiverse/adapters/pytorch_adapter.py,sha256=DLQKJ7gB0foPwAmcrru7QdZnPRnhqDKpFCT-EaD3420,15612
  explainiverse/adapters/sklearn_adapter.py,sha256=pzIBtMuqrG-6ZbUqUCMt7rSk3Ow0FgrY268FSweFvw4,958
  explainiverse/core/__init__.py,sha256=P3jHMnH5coFqTTO1w-gT-rurkCM1-9r3pF-055pbXMg,474
  explainiverse/core/explainer.py,sha256=Z9on-9VblYDlQx9oBm1BHpmAf_NsQajZ3qr-u48Aejo,784
- explainiverse/core/explanation.py,sha256=6zxFh_TH8tFHc-r_H5-WHQ05Sp1Kp2TxLz3gyFek5jo,881
- explainiverse/core/registry.py,sha256=ZIq9Jzrr_oXi5tvGmY0k9gVNA34EtGIeTpWO5hQ3iVg,25527
+ explainiverse/core/explanation.py,sha256=498BbRYrNR-BOql78sENOsyWxgqLsBVZXn14lh-bhww,6241
+ explainiverse/core/registry.py,sha256=BAqk2FKqbrZcoLqlODXRCOolb57DBgS-Kxs_CCtngvw,26376
  explainiverse/engine/__init__.py,sha256=1sZO8nH1mmwK2e-KUavBQm7zYDWUe27nyWoFy9tgsiA,197
- explainiverse/engine/suite.py,sha256=sq8SK_6Pf0qRckTmVJ7Mdosu9bhkjAGPGN8ymLGFP9E,4914
+ explainiverse/engine/suite.py,sha256=G-7OjESisSTaQ1FQrlPl4YydX13uz8Bb70hJZNlcl2M,8918
  explainiverse/evaluation/__init__.py,sha256=ePE97KwSjg_IChZ03DeQax8GruTjx-BVrMSi_nzoyoA,1501
  explainiverse/evaluation/_utils.py,sha256=ej7YOPZ90gVHuuIMj45EXHq9Jx3QG7lhaj5sk26hRpg,10519
  explainiverse/evaluation/faithfulness.py,sha256=_40afOW6vJ3dQguHlJySlgWqiJF_xIvN-uVA3nPKRvI,14841
- explainiverse/evaluation/metrics.py,sha256=tSBXtyA_-0zOGCGjlPZU6LdGKRH_QpWfgKa78sdlovs,7453
+ explainiverse/evaluation/metrics.py,sha256=snNK9Ua1VzHDT6DlrhYL4m2MmRF3X15vuuVXiHbeicU,9944
  explainiverse/evaluation/stability.py,sha256=q2d3rpxpp0X1s6ADST1iZA4tzksLJpR0mYBnA_U5FIs,12090
  explainiverse/explainers/__init__.py,sha256=-ncRXbFKahH3bR0oXM2UQM4LtTdTlvdeprL6cHeqNBs,2549
  explainiverse/explainers/attribution/__init__.py,sha256=YeVs9bS_IWDtqGbp6T37V6Zp5ZDWzLdAXHxxyFGpiQM,431
- explainiverse/explainers/attribution/lime_wrapper.py,sha256=OnXIV7t6yd-vt38sIi7XmHFbgzlZfCEbRlFyGGd5XiE,3245
- explainiverse/explainers/attribution/shap_wrapper.py,sha256=tKie5AvN7mb55PWOYdMvW0lUAYjfHPzYosEloEY2ZzI,3210
+ explainiverse/explainers/attribution/lime_wrapper.py,sha256=yexy8m4VoVbsIGaMIcwU41ChWZvd-Y_vadtfyarwE4k,5518
+ explainiverse/explainers/attribution/shap_wrapper.py,sha256=M6t-W9S0czZsOFXDsCn9uMW1wMoVO97ISDpiNjN0TsU,5902
  explainiverse/explainers/attribution/treeshap_wrapper.py,sha256=LcBjHzQjmeyWCwLXALJ0WFQ9ol-N_8dod577EDxFDKY,16758
  explainiverse/explainers/counterfactual/__init__.py,sha256=gEV6P8h2fZ3-pv5rqp5sNDqrLErh5ntqpxIIBVCMFv4,247
  explainiverse/explainers/counterfactual/dice_wrapper.py,sha256=PyJYF-z1nyyy0mFROnkJqPtcuT2PwEBARwfh37mZ5ew,11373
@@ -28,15 +28,16 @@ explainiverse/explainers/global_explainers/ale.py,sha256=tgG3XTppCf8LiD7uKzBt4DI
  explainiverse/explainers/global_explainers/partial_dependence.py,sha256=dH6yMjpwZads3pACR3rSykTbssLGHH7e6HfMlpl-S3I,6745
  explainiverse/explainers/global_explainers/permutation_importance.py,sha256=bcgKz1S_D3lrBMgpqEF_Z6qw8Knxl_cfR50hrSO2tBc,4410
  explainiverse/explainers/global_explainers/sage.py,sha256=57Xw1SK529x5JXWt0TVrcFYUUP3C65LfUwgoM-Z3gaw,5839
- explainiverse/explainers/gradient/__init__.py,sha256=-A6hk5k9kbSOf8iTkzWvpnQ6SRVc72KvpVUH_1fuFNI,805
+ explainiverse/explainers/gradient/__init__.py,sha256=Tkf9jiXVfjVVDAhBocDc2tzFJK8RZv8H1pN8J0Ha53o,1362
  explainiverse/explainers/gradient/deeplift.py,sha256=MWOlslizUeoZs31moy2iBgp02N08nBsVU-RoEpODg3M,27775
  explainiverse/explainers/gradient/gradcam.py,sha256=ywW_8PhALwegkpSUDQMFvvVFkA5NnMMW6BB5tb3i8bw,13721
- explainiverse/explainers/gradient/integrated_gradients.py,sha256=feBgY3Vw2rDti7fxRZtLkxse75m2dbP_R05ARqo2BRM,13367
+ explainiverse/explainers/gradient/integrated_gradients.py,sha256=EfIX4TMwfSx7Tl_2efy-oLFt9Xx7byJPV8DxbtFIeKw,18098
  explainiverse/explainers/gradient/saliency.py,sha256=pcimyuSqKzsIR1yCMNWfH2M7T_vcDKkwcVv0zQlPL3w,10305
  explainiverse/explainers/gradient/smoothgrad.py,sha256=COIKZSFcApmMkA62M0AForHiYlQ6hSFx5hZIabRXGlM,15727
+ explainiverse/explainers/gradient/tcav.py,sha256=zc-8wMsc2ZOhUeSZNBJ6H6BPXlVMJ9DRcAMiL25wU9I,32242
  explainiverse/explainers/rule_based/__init__.py,sha256=gKzlFCAzwurAMLJcuYgal4XhDj1thteBGcaHWmN7iWk,243
  explainiverse/explainers/rule_based/anchors_wrapper.py,sha256=ML7W6aam-eMGZHy5ilol8qupZvNBJpYAFatEEPnuMyo,13254
- explainiverse-0.6.0.dist-info/LICENSE,sha256=28rbHe8rJgmUlRdxJACfq1Sj-MtCEhyHxkJedQd1ZYA,1070
- explainiverse-0.6.0.dist-info/METADATA,sha256=KLQeJ57yUbYptTo5uS7IaVA3i6I60n3qXS7vfbW8vaU,19569
- explainiverse-0.6.0.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
- explainiverse-0.6.0.dist-info/RECORD,,
+ explainiverse-0.7.1.dist-info/LICENSE,sha256=28rbHe8rJgmUlRdxJACfq1Sj-MtCEhyHxkJedQd1ZYA,1070
+ explainiverse-0.7.1.dist-info/METADATA,sha256=pWUARu9qMLDBnOTcuxumoo9KC0WaCLFh7DdJHdpcDvw,21352
+ explainiverse-0.7.1.dist-info/WHEEL,sha256=sP946D7jFCHeNz5Iq4fL4Lu-PrWrFsgfLXbbkciIZwg,88
+ explainiverse-0.7.1.dist-info/RECORD,,