explainiverse 0.4.0__py3-none-any.whl → 0.6.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,391 +0,0 @@
- Metadata-Version: 2.1
- Name: explainiverse
- Version: 0.4.0
- Summary: Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more
- Home-page: https://github.com/jemsbhai/explainiverse
- License: MIT
- Keywords: xai,explainability,interpretability,machine-learning,lime,shap,anchors
- Author: Muntaser Syed
- Author-email: jemsbhai@gmail.com
- Requires-Python: >=3.10,<3.13
- Classifier: Development Status :: 4 - Beta
- Classifier: Intended Audience :: Developers
- Classifier: Intended Audience :: Science/Research
- Classifier: License :: OSI Approved :: MIT License
- Classifier: Programming Language :: Python :: 3
- Classifier: Programming Language :: Python :: 3.10
- Classifier: Programming Language :: Python :: 3.11
- Classifier: Programming Language :: Python :: 3.12
- Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
- Provides-Extra: torch
- Requires-Dist: lime (>=0.2.0.1,<0.3.0.0)
- Requires-Dist: numpy (>=1.24,<2.0)
- Requires-Dist: pandas (>=1.5,<3.0)
- Requires-Dist: scikit-learn (>=1.1,<1.6)
- Requires-Dist: scipy (>=1.10,<2.0)
- Requires-Dist: shap (>=0.48.0,<0.49.0)
- Requires-Dist: torch (>=2.0) ; extra == "torch"
- Requires-Dist: xgboost (>=1.7,<3.0)
- Project-URL: Repository, https://github.com/jemsbhai/explainiverse
- Description-Content-Type: text/markdown
-
- # Explainiverse
-
- **Explainiverse** is a unified, extensible Python framework for Explainable AI (XAI).
- It provides a standardized interface for model-agnostic explainability with 11 state-of-the-art XAI methods, evaluation metrics, and a plugin registry for easy extensibility.
-
- ---
-
- ## Features
-
- ### 🎯 Comprehensive XAI Coverage
-
- **Local Explainers** (instance-level explanations):
- **LIME** - Local Interpretable Model-agnostic Explanations ([Ribeiro et al., 2016](https://arxiv.org/abs/1602.04938))
- **SHAP** - SHapley Additive exPlanations via KernelSHAP ([Lundberg & Lee, 2017](https://arxiv.org/abs/1705.07874))
- **TreeSHAP** - Exact SHAP values for tree models, typically 10x+ faster than model-agnostic KernelSHAP ([Lundberg et al., 2018](https://arxiv.org/abs/1802.03888))
- **Integrated Gradients** - Axiomatic attributions for neural networks ([Sundararajan et al., 2017](https://arxiv.org/abs/1703.01365))
- **GradCAM/GradCAM++** - Visual explanations for CNNs ([Selvaraju et al., 2017](https://arxiv.org/abs/1610.02391))
- **Anchors** - High-precision rule-based explanations ([Ribeiro et al., 2018](https://ojs.aaai.org/index.php/AAAI/article/view/11491))
- **Counterfactual** - DiCE-style diverse counterfactual explanations ([Mothilal et al., 2020](https://arxiv.org/abs/1905.07697))
-
- **Global Explainers** (model-level explanations):
- **Permutation Importance** - Feature importance via performance degradation ([Breiman, 2001](https://link.springer.com/article/10.1023/A:1010933404324)); see the sketch after this list
- **Partial Dependence (PDP)** - Marginal feature effects ([Friedman, 2001](https://projecteuclid.org/euclid.aos/1013203451))
- **ALE** - Accumulated Local Effects, unbiased for correlated features ([Apley & Zhu, 2020](https://academic.oup.com/jrsssb/article/82/4/1059/7056085))
- **SAGE** - Shapley Additive Global importancE ([Covert et al., 2020](https://arxiv.org/abs/2004.00668))
-
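- The idea behind permutation importance fits in a few lines; this is a conceptual sketch in plain scikit-learn, not the Explainiverse implementation:
-
- ```python
- # Conceptual sketch of permutation importance, not the Explainiverse code.
- import numpy as np
- from sklearn.datasets import load_iris
- from sklearn.ensemble import RandomForestClassifier
- from sklearn.model_selection import train_test_split
-
- iris = load_iris()
- X_train, X_test, y_train, y_test = train_test_split(
-     iris.data, iris.target, random_state=0
- )
- model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
- baseline = model.score(X_test, y_test)
-
- # Shuffling a column breaks its association with the target;
- # the resulting drop in accuracy is that feature's importance.
- rng = np.random.default_rng(0)
- for i, name in enumerate(iris.feature_names):
-     X_perm = X_test.copy()
-     X_perm[:, i] = rng.permutation(X_perm[:, i])
-     print(f"{name}: {baseline - model.score(X_perm, y_test):.3f}")
- ```
-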
- ### 🔌 Extensible Plugin Registry
- Register custom explainers with rich metadata
- Filter by scope (local/global), model type, data type
- Automatic recommendations based on use case
-
- ### 📊 Evaluation Metrics
- **AOPC** (Area Over Perturbation Curve); a minimal sketch of the computation follows this list
- **ROAR** (Remove And Retrain)
- Multiple baseline options and curve generation
-
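- AOPC measures explanation faithfulness: perturb the top-ranked features one at a time (most important first) and average the resulting drops in the predicted probability. A minimal sketch with a hypothetical `aopc` helper, independent of the Explainiverse API:
-
- ```python
- # Illustrative helper, not part of the explainiverse package.
- import numpy as np
-
- def aopc(predict_proba, x, ranking, baseline, target):
-     """Average probability drop as top-ranked features are replaced by a baseline."""
-     p0 = predict_proba(x[None, :])[0, target]
-     x_pert = x.copy()
-     drops = []
-     for feature in ranking:                  # most important feature first
-         x_pert[feature] = baseline[feature]  # e.g. the training-set mean
-         drops.append(p0 - predict_proba(x_pert[None, :])[0, target])
-     return float(np.mean(drops))
- ```
-
- Here `ranking` would come from an explanation's feature attributions, e.g. `np.argsort(attributions)[::-1]`.
-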
- ### 🧪 Standardized Interface
- Consistent `BaseExplainer` API
- Unified `Explanation` output format
- Model adapters for sklearn and PyTorch
-
- ---
-
- ## Installation
-
- From PyPI:
-
- ```bash
- pip install explainiverse
- ```
-
- With PyTorch support (for neural network explanations):
-
- ```bash
- pip install "explainiverse[torch]"
- ```
-
- For development:
-
- ```bash
- git clone https://github.com/jemsbhai/explainiverse.git
- cd explainiverse
- poetry install
- ```
-
- ---
-
- ## Quick Start
-
- ### Using the Registry (Recommended)
-
- ```python
- from explainiverse import default_registry, SklearnAdapter
- from sklearn.ensemble import RandomForestClassifier
- from sklearn.datasets import load_iris
-
- # Train a model
- iris = load_iris()
- model = RandomForestClassifier().fit(iris.data, iris.target)
- adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())
-
- # List available explainers
- print(default_registry.list_explainers())
- # ['lime', 'shap', 'treeshap', 'integrated_gradients', 'gradcam', 'anchors', 'counterfactual', 'permutation_importance', 'partial_dependence', 'ale', 'sage']
-
- # Create and use an explainer
- explainer = default_registry.create(
-     "lime",
-     model=adapter,
-     training_data=iris.data,
-     feature_names=iris.feature_names,
-     class_names=iris.target_names.tolist()
- )
- explanation = explainer.explain(iris.data[0])
- print(explanation.explanation_data["feature_attributions"])
- ```
-
- ### Filter Explainers by Criteria
-
- ```python
- # Find local explainers for tabular data
- local_tabular = default_registry.filter(scope="local", data_type="tabular")
- print(local_tabular)  # ['lime', 'shap', 'treeshap', 'integrated_gradients', 'anchors', 'counterfactual']
-
- # Find explainers for images/CNNs
- image_explainers = default_registry.filter(data_type="image")
- print(image_explainers)  # ['lime', 'integrated_gradients', 'gradcam']
-
- # Get recommendations
- recommendations = default_registry.recommend(
-     model_type="any",
-     data_type="tabular",
-     scope_preference="local"
- )
- ```
-
- ### TreeSHAP for Tree Models (10x+ Faster)
-
- ```python
- from explainiverse.explainers import TreeShapExplainer
- from sklearn.ensemble import RandomForestClassifier
-
- # Train a tree-based model
- model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
-
- # TreeSHAP works directly with the model (no adapter needed)
- explainer = TreeShapExplainer(
-     model=model,
-     feature_names=feature_names,
-     class_names=class_names
- )
-
- # Single instance explanation
- explanation = explainer.explain(X_test[0])
- print(explanation.explanation_data["feature_attributions"])
-
- # Batch explanations (efficient)
- explanations = explainer.explain_batch(X_test[:10])
-
- # Feature interactions
- interactions = explainer.explain_interactions(X_test[0])
- print(interactions.explanation_data["interaction_matrix"])
- ```
-
- ### PyTorch Adapter for Neural Networks
-
- ```python
- from explainiverse import PyTorchAdapter
- import torch.nn as nn
-
- # Define a PyTorch model
- model = nn.Sequential(
-     nn.Linear(10, 64),
-     nn.ReLU(),
-     nn.Linear(64, 3)
- )
-
- # Wrap with adapter
- adapter = PyTorchAdapter(
-     model,
-     task="classification",
-     class_names=["cat", "dog", "bird"]
- )
-
- # Use with any explainer
- predictions = adapter.predict(X)  # Returns numpy array
-
- # Get gradients for attribution methods
- predictions, gradients = adapter.predict_with_gradients(X)
-
- # Access intermediate layers
- activations = adapter.get_layer_output(X, layer_name="0")
- ```
-
- ### Integrated Gradients for Neural Networks
-
- ```python
- from explainiverse.explainers import IntegratedGradientsExplainer
- from explainiverse import PyTorchAdapter
-
- # Wrap your PyTorch model
- adapter = PyTorchAdapter(model, task="classification", class_names=class_names)
-
- # Create IG explainer
- explainer = IntegratedGradientsExplainer(
-     model=adapter,
-     feature_names=feature_names,
-     class_names=class_names,
-     n_steps=50  # more steps give a closer approximation of the path integral
- )
-
- # Explain a prediction
- explanation = explainer.explain(X_test[0])
- print(explanation.explanation_data["feature_attributions"])
-
- # Check convergence (sum of attributions ≈ F(x) - F(baseline))
- explanation = explainer.explain(X_test[0], return_convergence_delta=True)
- print(f"Convergence delta: {explanation.explanation_data['convergence_delta']}")
- ```
-
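- For reference, the attribution Integrated Gradients assigns to feature *i* is the path integral from the cited paper, with `n_steps` controlling the Riemann-sum approximation:
-
- ```latex
- \mathrm{IG}_i(x) = (x_i - x'_i) \int_0^1
-     \frac{\partial F\big(x' + \alpha (x - x')\big)}{\partial x_i} \, d\alpha
- ```
-
- By the completeness axiom the attributions sum to `F(x) - F(x')`, where `x'` is the baseline; the convergence delta above reports how far the discrete approximation deviates from that identity.
-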
- ### GradCAM for CNN Visual Explanations
-
- ```python
- from explainiverse.explainers import GradCAMExplainer
- from explainiverse import PyTorchAdapter
-
- # Wrap your CNN model
- adapter = PyTorchAdapter(cnn_model, task="classification", class_names=class_names)
-
- # Find the last convolutional layer
- layers = adapter.list_layers()
- target_layer = "layer4"  # Adjust based on your model architecture
-
- # Create GradCAM explainer
- explainer = GradCAMExplainer(
-     model=adapter,
-     target_layer=target_layer,
-     class_names=class_names,
-     method="gradcam"  # or "gradcam++" for improved version
- )
-
- # Explain an image prediction
- explanation = explainer.explain(image)  # image shape: (C, H, W) or (N, C, H, W)
- heatmap = explanation.explanation_data["heatmap"]
-
- # Create overlay visualization
- overlay = explainer.get_overlay(original_image, heatmap, alpha=0.5)
- ```
-
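- The overlay can then be displayed with standard tooling; a quick matplotlib sketch (assuming `get_overlay` returns an HWC-format image array):
-
- ```python
- import matplotlib.pyplot as plt
-
- plt.imshow(overlay)  # heatmap blended onto the original image
- plt.axis("off")
- plt.show()
- ```
-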
- ### Using Specific Explainers
-
- ```python
- # Anchors - Rule-based explanations
- from explainiverse.explainers import AnchorsExplainer
-
- anchors = AnchorsExplainer(
-     model=adapter,
-     training_data=X_train,
-     feature_names=feature_names,
-     class_names=class_names
- )
- explanation = anchors.explain(instance)
- print(explanation.explanation_data["rules"])
- # ['petal length (cm) > 2.45', 'petal width (cm) <= 1.75']
-
- # Counterfactual - What-if explanations
- from explainiverse.explainers import CounterfactualExplainer
-
- cf = CounterfactualExplainer(
-     model=adapter,
-     training_data=X_train,
-     feature_names=feature_names
- )
- explanation = cf.explain(instance, num_counterfactuals=3)
- print(explanation.explanation_data["changes"])
-
- # SAGE - Global Shapley importance
- from explainiverse.explainers import SAGEExplainer
-
- sage = SAGEExplainer(
-     model=adapter,
-     X=X_train,
-     y=y_train,
-     feature_names=feature_names
- )
- explanation = sage.explain()
- print(explanation.explanation_data["feature_attributions"])
- ```
-
- ### Explanation Suite (Multi-Explainer Comparison)
-
- ```python
- from explainiverse import ExplanationSuite
-
- suite = ExplanationSuite(
-     model=adapter,
-     explainer_configs=[
-         ("lime", {"training_data": X_train, "feature_names": feature_names, "class_names": class_names}),
-         ("shap", {"background_data": X_train[:50], "feature_names": feature_names, "class_names": class_names}),
-     ]
- )
-
- results = suite.run(instance)
- suite.compare()
- ```
-
- ---
-
- ## Registering Custom Explainers
-
- ```python
- from explainiverse import default_registry, ExplainerMeta, BaseExplainer, Explanation
-
- @default_registry.register_decorator(
-     name="my_explainer",
-     meta=ExplainerMeta(
-         scope="local",
-         model_types=["any"],
-         data_types=["tabular"],
-         description="My custom explainer",
-         paper_reference="Author et al., 2024"
-     )
- )
- class MyExplainer(BaseExplainer):
-     def explain(self, instance, **kwargs):
-         # Your implementation
-         return Explanation(...)
- ```
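-
- Once registered, the explainer is created through the same registry call used in the Quick Start (a sketch; it assumes `MyExplainer.__init__` accepts the keyword arguments passed through `create`):
-
- ```python
- # Hypothetical usage: assumes MyExplainer takes a `model` keyword argument.
- explainer = default_registry.create("my_explainer", model=adapter)
- explanation = explainer.explain(instance)
- ```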
-
- ---
-
- ## Running Tests
-
- ```bash
- # Run all tests
- poetry run pytest
-
- # Run with coverage
- poetry run pytest --cov=explainiverse
-
- # Run specific test file
- poetry run pytest tests/test_new_explainers.py -v
- ```
-
- ---
-
- ## Roadmap
-
- [x] LIME, SHAP (KernelSHAP)
- [x] TreeSHAP (optimized for tree models)
- [x] Anchors, Counterfactuals
- [x] Permutation Importance, PDP, ALE, SAGE
- [x] Explainer Registry with filtering
- [x] PyTorch Adapter
- [x] Integrated Gradients
- [x] GradCAM/GradCAM++ for CNNs (new)
- [ ] TensorFlow adapter
- [ ] Interactive visualization dashboard
-
- ---
-
- ## Citation
-
- If you use Explainiverse in your research, please cite:
-
- ```bibtex
- @software{explainiverse2024,
-   title = {Explainiverse: A Unified Framework for Explainable AI},
-   author = {Syed, Muntaser},
-   year = {2024},
-   url = {https://github.com/jemsbhai/explainiverse}
- }
- ```
-
- ---
-
- ## License
-
- MIT License - see [LICENSE](LICENSE) for details.
-