ai-critic 1.0.0.tar.gz → 1.1.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (24)
  1. {ai_critic-1.0.0 → ai_critic-1.1.0}/PKG-INFO +56 -24
  2. {ai_critic-1.0.0 → ai_critic-1.1.0}/README.md +55 -23
  3. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/critic.py +17 -5
  4. ai_critic-1.1.0/ai_critic/evaluators/adapters.py +84 -0
  5. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/PKG-INFO +56 -24
  6. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/SOURCES.txt +1 -0
  7. {ai_critic-1.0.0 → ai_critic-1.1.0}/pyproject.toml +1 -1
  8. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/__init__.py +0 -0
  9. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/__init__.py +0 -0
  10. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/config.py +0 -0
  11. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/data.py +0 -0
  12. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/performance.py +0 -0
  13. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/robustness.py +0 -0
  14. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/scoring.py +0 -0
  15. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/summary.py +0 -0
  16. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/evaluators/validation.py +0 -0
  17. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/sessions/__init__.py +0 -0
  18. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/sessions/store.py +0 -0
  19. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/dependency_links.txt +0 -0
  20. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/requires.txt +0 -0
  21. {ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/top_level.txt +0 -0
  22. {ai_critic-1.0.0 → ai_critic-1.1.0}/setup.cfg +0 -0
  23. {ai_critic-1.0.0 → ai_critic-1.1.0}/test/test_in_ia.py +0 -0
  24. {ai_critic-1.0.0 → ai_critic-1.1.0}/test/test_model.py +0 -0

{ai_critic-1.0.0 → ai_critic-1.1.0}/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: ai-critic
- Version: 1.0.0
+ Version: 1.1.0
  Summary: Fast AI evaluator for scikit-learn models
  Author-email: Luiz Seabra <filipedemarco@yahoo.com>
  Requires-Python: >=3.9
@@ -10,7 +10,7 @@ Requires-Dist: scikit-learn

  # ai-critic 🧠: The Quality Gate for Machine Learning Models

- **ai-critic** is a specialized **decision-making** tool designed to audit the reliability and readiness for deployment of scikit-learn–compatible Machine Learning models.
+ **ai-critic** is a specialized **decision-making** tool designed to audit the reliability and readiness for deployment of **scikit-learn**, **PyTorch**, and **TensorFlow** models.

  Instead of merely measuring performance (accuracy, F1 score), **ai-critic** acts as a **Quality Gate**, actively probing the model to uncover *hidden risks* that commonly cause production failures — such as **data leakage**, **structural overfitting**, and **fragility under noise**.

@@ -19,11 +19,11 @@ Instead of merely measuring performance (accuracy, F1 score), **ai-critic** acts

  ---

- ## 🚀 Getting Started (The Basics)
+ ## 🚀 Getting Started (The Basics)

  This section is ideal for beginners who need a **fast and reliable verdict** on a trained model.

- ### Installation
+ ### Installation

  Install directly from PyPI:

@@ -33,7 +33,7 @@ pip install ai-critic

  ---

- ### The Quick Verdict
+ ### The Quick Verdict

  With just a few lines of code, you obtain an **executive-level assessment** and a **deployment recommendation**.

@@ -70,13 +70,13 @@ If **ai-critic** recommends deployment, it means meaningful risks were *not* det

  ---

- ## 💡 Understanding the Critique (The Intermediary)
+ ## 💡 Understanding the Critique (The Intermediary)

  For data scientists who want to understand **why** the model received a given verdict and **how to improve it**.

  ---

- ### The Four Pillars of the Audit
+ ### The Four Pillars of the Audit

  **ai-critic** evaluates models across four independent risk dimensions:

@@ -91,7 +91,7 @@ Each pillar contributes signals used later in the **deployment gate**.

  ---

- ### Full Technical & Visual Analysis
+ ### Full Technical & Visual Analysis

  To access **all internal diagnostics**, including plots and recommendations, use `view="all"`.

@@ -117,7 +117,7 @@ Generated plots may include:

  ---

- ### Robustness Test (Noise Injection)
+ ### Robustness Test (Noise Injection)

  A model that collapses under small perturbations is **not production-safe**.

@@ -139,13 +139,52 @@ print(f"Verdict: {robustness['verdict']}")

  ---

- ## ⚙️ Integration and Governance (The Advanced)
+ ## ⚙️ Integration and Governance (The Advanced)

  This section targets **MLOps engineers**, **architects**, and teams operating automated pipelines.

  ---

- ### The Deployment Gate (`deploy_decision`)
+ ### Multi-Framework Support
+
+ **ai-critic 1.0+** supports models from multiple frameworks with the **same API**:
+
+ ```python
+ # PyTorch Example
+ import torch
+ import torch.nn as nn
+ from ai_critic import AICritic
+
+ X = torch.randn(1000, 20)
+ y = torch.randint(0, 2, (1000,))
+
+ model = nn.Sequential(
+     nn.Linear(20, 32),
+     nn.ReLU(),
+     nn.Linear(32, 2)
+ )
+
+ critic = AICritic(model, X, y, framework="torch", adapter_kwargs={"epochs":5, "batch_size":64})
+ report = critic.evaluate(view="executive")
+ print(report)
+
+ # TensorFlow Example
+ import tensorflow as tf
+
+ model = tf.keras.Sequential([
+     tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
+     tf.keras.layers.Dense(2)
+ ])
+ critic = AICritic(model, X.numpy(), y.numpy(), framework="tensorflow", adapter_kwargs={"epochs":5})
+ report = critic.evaluate(view="executive")
+ print(report)
+ ```
+
+ > No need to rewrite evaluation code — **one Critic API works for sklearn, PyTorch, or TensorFlow**.
+
+ ---
+
+ ### The Deployment Gate (`deploy_decision`)

  The `deploy_decision()` method aggregates *all detected risks* and produces a final gate decision.

@@ -173,7 +212,7 @@ for issue in decision["blocking_issues"]:

  ---

- ### Modes & Views (API Design)
+ ### Modes & Views (API Design)

  The `evaluate()` method supports **multiple modes** via the `view` parameter:

@@ -193,7 +232,7 @@ critic.evaluate(view=["executive", "performance"])

  ---

- ### Session Tracking & Model Comparison (New in 1.0.0)
+ ### Session Tracking & Model Comparison

  You can persist evaluations and compare model versions over time.

@@ -216,7 +255,7 @@ This enables:

  ---

- ### Best Practices & Use Cases
+ ### Best Practices & Use Cases

  | Scenario | Recommended Usage |
  | ----------------------- | -------------------------------------- |
@@ -226,11 +265,14 @@ This enables:
  | **Stakeholder Reports** | Share executive summaries |

  ---
+
  ## 🔒 API Stability

  Starting from version **1.0.0**, the public API of **ai-critic** follows semantic versioning.
  Breaking changes will only occur in major releases.

+ ---
+
  ## 📄 License

  Distributed under the **MIT License**.
@@ -245,13 +287,3 @@ Distributed under the **MIT License**.
  A failed audit does **not** mean the model is bad — it means the model **is not ready to be trusted**.

  The purpose of **ai-critic** is to introduce *structured skepticism* into machine learning workflows — exactly where it belongs.
-
- ---
-
- Se quiser, próximo passo posso:
-
- * gerar o **CHANGELOG.md oficial do 1.0.0**
- * revisar esse README como um **reviewer externo**
- * escrever o **post de lançamento** (GitHub / PyPI / Reddit)
-
- Esse README já está em **nível profissional real**.
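
For orientation, the quality-gate flow referenced in the README hunks above fits together roughly as follows. This is an illustrative sketch, not code shipped in the package: `AICritic(model, X, y)`, `evaluate(view="executive")`, `deploy_decision()`, and the `blocking_issues` key all appear in the diff, but the dataset, the model, and the full structure of the returned decision are assumptions here.

```python
# Illustrative sketch of the documented quality-gate flow; dataset and model are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from ai_critic import AICritic

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

critic = AICritic(model, X, y, random_state=0)
report = critic.evaluate(view="executive")   # executive-level verdict
print(report)

decision = critic.deploy_decision()          # aggregated deployment gate
for issue in decision["blocking_issues"]:    # key shown in the diff context line above
    print("Blocking issue:", issue)
```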

{ai_critic-1.0.0 → ai_critic-1.1.0}/README.md

@@ -1,6 +1,6 @@
  # ai-critic 🧠: The Quality Gate for Machine Learning Models

- **ai-critic** is a specialized **decision-making** tool designed to audit the reliability and readiness for deployment of scikit-learn–compatible Machine Learning models.
+ **ai-critic** is a specialized **decision-making** tool designed to audit the reliability and readiness for deployment of **scikit-learn**, **PyTorch**, and **TensorFlow** models.

  Instead of merely measuring performance (accuracy, F1 score), **ai-critic** acts as a **Quality Gate**, actively probing the model to uncover *hidden risks* that commonly cause production failures — such as **data leakage**, **structural overfitting**, and **fragility under noise**.

@@ -9,11 +9,11 @@ Instead of merely measuring performance (accuracy, F1 score), **ai-critic** acts

  ---

- ## 🚀 Getting Started (The Basics)
+ ## 🚀 Getting Started (The Basics)

  This section is ideal for beginners who need a **fast and reliable verdict** on a trained model.

- ### Installation
+ ### Installation

  Install directly from PyPI:

@@ -23,7 +23,7 @@ pip install ai-critic

  ---

- ### The Quick Verdict
+ ### The Quick Verdict

  With just a few lines of code, you obtain an **executive-level assessment** and a **deployment recommendation**.

@@ -60,13 +60,13 @@ If **ai-critic** recommends deployment, it means meaningful risks were *not* det

  ---

- ## 💡 Understanding the Critique (The Intermediary)
+ ## 💡 Understanding the Critique (The Intermediary)

  For data scientists who want to understand **why** the model received a given verdict and **how to improve it**.

  ---

- ### The Four Pillars of the Audit
+ ### The Four Pillars of the Audit

  **ai-critic** evaluates models across four independent risk dimensions:

@@ -81,7 +81,7 @@ Each pillar contributes signals used later in the **deployment gate**.

  ---

- ### Full Technical & Visual Analysis
+ ### Full Technical & Visual Analysis

  To access **all internal diagnostics**, including plots and recommendations, use `view="all"`.

@@ -107,7 +107,7 @@ Generated plots may include:

  ---

- ### Robustness Test (Noise Injection)
+ ### Robustness Test (Noise Injection)

  A model that collapses under small perturbations is **not production-safe**.

@@ -129,13 +129,52 @@ print(f"Verdict: {robustness['verdict']}")

  ---

- ## ⚙️ Integration and Governance (The Advanced)
+ ## ⚙️ Integration and Governance (The Advanced)

  This section targets **MLOps engineers**, **architects**, and teams operating automated pipelines.

  ---

- ### The Deployment Gate (`deploy_decision`)
+ ### Multi-Framework Support
+
+ **ai-critic 1.0+** supports models from multiple frameworks with the **same API**:
+
+ ```python
+ # PyTorch Example
+ import torch
+ import torch.nn as nn
+ from ai_critic import AICritic
+
+ X = torch.randn(1000, 20)
+ y = torch.randint(0, 2, (1000,))
+
+ model = nn.Sequential(
+     nn.Linear(20, 32),
+     nn.ReLU(),
+     nn.Linear(32, 2)
+ )
+
+ critic = AICritic(model, X, y, framework="torch", adapter_kwargs={"epochs":5, "batch_size":64})
+ report = critic.evaluate(view="executive")
+ print(report)
+
+ # TensorFlow Example
+ import tensorflow as tf
+
+ model = tf.keras.Sequential([
+     tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
+     tf.keras.layers.Dense(2)
+ ])
+ critic = AICritic(model, X.numpy(), y.numpy(), framework="tensorflow", adapter_kwargs={"epochs":5})
+ report = critic.evaluate(view="executive")
+ print(report)
+ ```
+
+ > No need to rewrite evaluation code — **one Critic API works for sklearn, PyTorch, or TensorFlow**.
+
+ ---
+
+ ### The Deployment Gate (`deploy_decision`)

  The `deploy_decision()` method aggregates *all detected risks* and produces a final gate decision.

@@ -163,7 +202,7 @@ for issue in decision["blocking_issues"]:

  ---

- ### Modes & Views (API Design)
+ ### Modes & Views (API Design)

  The `evaluate()` method supports **multiple modes** via the `view` parameter:

@@ -183,7 +222,7 @@ critic.evaluate(view=["executive", "performance"])

  ---

- ### Session Tracking & Model Comparison (New in 1.0.0)
+ ### Session Tracking & Model Comparison

  You can persist evaluations and compare model versions over time.

@@ -206,7 +245,7 @@ This enables:

  ---

- ### Best Practices & Use Cases
+ ### Best Practices & Use Cases

  | Scenario | Recommended Usage |
  | ----------------------- | -------------------------------------- |
@@ -216,11 +255,14 @@ This enables:
  | **Stakeholder Reports** | Share executive summaries |

  ---
+
  ## 🔒 API Stability

  Starting from version **1.0.0**, the public API of **ai-critic** follows semantic versioning.
  Breaking changes will only occur in major releases.

+ ---
+
  ## 📄 License

  Distributed under the **MIT License**.
@@ -235,13 +277,3 @@ Distributed under the **MIT License**.
  A failed audit does **not** mean the model is bad — it means the model **is not ready to be trusted**.

  The purpose of **ai-critic** is to introduce *structured skepticism* into machine learning workflows — exactly where it belongs.
-
- ---
-
- Se quiser, próximo passo posso:
-
- * gerar o **CHANGELOG.md oficial do 1.0.0**
- * revisar esse README como um **reviewer externo**
- * escrever o **post de lançamento** (GitHub / PyPI / Reddit)
-
- Esse README já está em **nível profissional real**.

{ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic/critic.py

@@ -2,7 +2,8 @@ from ai_critic.evaluators import (
      robustness,
      config,
      data,
-     performance
+     performance,
+     adapters  # <- novo import
  )
  from ai_critic.evaluators.summary import HumanSummary
  from ai_critic.sessions import CriticSessionStore
@@ -11,7 +12,7 @@ from ai_critic.evaluators.scoring import compute_scores

  class AICritic:
      """
-     Automated reviewer for scikit-learn models.
+     Automated reviewer for scikit-learn, PyTorch, or TensorFlow models.

      Produces a multi-layered risk assessment including:
      - Data integrity analysis
@@ -21,11 +22,12 @@ class AICritic:
      - Human-readable executive and technical summaries
      """

-     def __init__(self, model, X, y, random_state=None, session=None):
+     def __init__(self, model, X, y, random_state=None, session=None, framework="sklearn", adapter_kwargs=None):
          """
          Parameters
          ----------
-         model : sklearn-compatible estimator
+         model : object
+             scikit-learn estimator, torch.nn.Module, or tf.keras.Model
          X : np.ndarray
              Feature matrix
          y : np.ndarray
@@ -34,8 +36,18 @@ class AICritic:
              Global seed for reproducibility (optional)
          session : str or None
              Optional session name for longitudinal comparison
+         framework : str
+             "sklearn" (default), "torch", or "tensorflow"
+         adapter_kwargs : dict
+             Extra kwargs para o adaptador (ex: epochs, lr, batch_size)
          """
-         self.model = model
+         adapter_kwargs = adapter_kwargs or {}
+         self.framework = framework.lower()
+         if self.framework != "sklearn":
+             self.model = adapters.ModelAdapter(model, framework=self.framework, **adapter_kwargs)
+         else:
+             self.model = model
+
          self.X = X
          self.y = y
          self.random_state = random_state
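
The constructor change above routes every non-sklearn model through the new adapter before any evaluator runs. A minimal sketch of that routing follows; the dataset and estimator are placeholders, not code from the package, and only the `framework` / `adapter_kwargs` behaviour mirrors the diff.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from ai_critic import AICritic

X = np.random.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)

# framework defaults to "sklearn": the estimator is stored unwrapped
sk_critic = AICritic(LogisticRegression(), X, y)
print(type(sk_critic.model).__name__)   # LogisticRegression

# With framework="torch" or "tensorflow", the same call wraps the model in
# evaluators.adapters.ModelAdapter and forwards adapter_kwargs to training:
# torch_critic = AICritic(torch_model, X, y, framework="torch",
#                         adapter_kwargs={"epochs": 5, "lr": 1e-3})
# print(type(torch_critic.model).__name__)   # ModelAdapter
```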

ai_critic-1.1.0/ai_critic/evaluators/adapters.py

@@ -0,0 +1,84 @@
+ # evaluators/adapters.py
+ import numpy as np
+
+ try:
+     import torch
+     import torch.nn as nn
+ except ImportError:
+     torch = None
+
+ try:
+     import tensorflow as tf
+ except ImportError:
+     tf = None
+
+ class ModelAdapter:
+     """
+     Wraps scikit-learn, PyTorch, or TensorFlow models to provide a
+     unified fit/predict interface for AICritic.
+     """
+
+     def __init__(self, model, framework="sklearn", **kwargs):
+         """
+         Parameters
+         ----------
+         model : object
+             The original model (sklearn estimator, torch.nn.Module, or tf.keras.Model)
+         framework : str
+             One of "sklearn", "torch", "tensorflow"
+         kwargs : dict
+             Extra hyperparameters for training (epochs, batch_size, optimizer, etc)
+         """
+         self.model = model
+         self.framework = framework.lower()
+         self.kwargs = kwargs
+
+         if self.framework not in ("sklearn", "torch", "tensorflow"):
+             raise ValueError(f"Unsupported framework: {framework}")
+
+         # PyTorch default settings
+         if self.framework == "torch":
+             self.epochs = kwargs.get("epochs", 5)
+             self.lr = kwargs.get("lr", 1e-3)
+             self.loss_fn = kwargs.get("loss_fn", nn.MSELoss())
+             self.optimizer_class = kwargs.get("optimizer", torch.optim.Adam)
+             self.device = kwargs.get("device", "cpu")
+             self.model.to(self.device)
+
+         # TensorFlow default settings
+         if self.framework == "tensorflow":
+             self.epochs = kwargs.get("epochs", 5)
+             self.batch_size = kwargs.get("batch_size", 32)
+             self.loss_fn = kwargs.get("loss_fn", "mse")
+             self.optimizer = kwargs.get("optimizer", "adam")
+             self.model.compile(optimizer=self.optimizer, loss=self.loss_fn)
+
+     def fit(self, X, y):
+         if self.framework == "sklearn":
+             self.model.fit(X, y)
+         elif self.framework == "torch":
+             X_tensor = torch.tensor(X, dtype=torch.float32).to(self.device)
+             y_tensor = torch.tensor(y, dtype=torch.float32).to(self.device).view(-1, 1)
+             optimizer = self.optimizer_class(self.model.parameters(), lr=self.lr)
+
+             self.model.train()
+             for epoch in range(self.epochs):
+                 optimizer.zero_grad()
+                 output = self.model(X_tensor)
+                 loss = self.loss_fn(output, y_tensor)
+                 loss.backward()
+                 optimizer.step()
+         elif self.framework == "tensorflow":
+             self.model.fit(X, y, epochs=self.epochs, batch_size=self.batch_size, verbose=0)
+         return self
+
+     def predict(self, X):
+         if self.framework == "sklearn":
+             return self.model.predict(X)
+         elif self.framework == "torch":
+             self.model.eval()
+             with torch.no_grad():
+                 X_tensor = torch.tensor(X, dtype=torch.float32).to(self.device)
+                 return self.model(X_tensor).cpu().numpy().flatten()
+         elif self.framework == "tensorflow":
+             return self.model.predict(X).flatten()
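
The adapter above can also be exercised on its own, independent of AICritic. A hedged sketch, assuming PyTorch is installed; the toy data and network are illustrative, and note that the torch branch shown above trains full-batch with `nn.MSELoss` by default, so a regression target is the most natural fit.

```python
import numpy as np
import torch.nn as nn
from ai_critic.evaluators.adapters import ModelAdapter

# Toy regression problem (the torch branch defaults to nn.MSELoss and full-batch updates)
X = np.random.rand(256, 10).astype("float32")
y = X.sum(axis=1)

net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

adapter = ModelAdapter(net, framework="torch", epochs=20, lr=1e-2)
adapter.fit(X, y)             # full-batch gradient steps; batch_size only applies to the TensorFlow path
preds = adapter.predict(X)    # returns a flattened NumPy array
print(preds.shape)            # (256,)
```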

{ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: ai-critic
- Version: 1.0.0
+ Version: 1.1.0
  Summary: Fast AI evaluator for scikit-learn models
  Author-email: Luiz Seabra <filipedemarco@yahoo.com>
  Requires-Python: >=3.9
@@ -10,7 +10,7 @@ Requires-Dist: scikit-learn

  # ai-critic 🧠: The Quality Gate for Machine Learning Models

- **ai-critic** is a specialized **decision-making** tool designed to audit the reliability and readiness for deployment of scikit-learn–compatible Machine Learning models.
+ **ai-critic** is a specialized **decision-making** tool designed to audit the reliability and readiness for deployment of **scikit-learn**, **PyTorch**, and **TensorFlow** models.

  Instead of merely measuring performance (accuracy, F1 score), **ai-critic** acts as a **Quality Gate**, actively probing the model to uncover *hidden risks* that commonly cause production failures — such as **data leakage**, **structural overfitting**, and **fragility under noise**.

@@ -19,11 +19,11 @@ Instead of merely measuring performance (accuracy, F1 score), **ai-critic** acts

  ---

- ## 🚀 Getting Started (The Basics)
+ ## 🚀 Getting Started (The Basics)

  This section is ideal for beginners who need a **fast and reliable verdict** on a trained model.

- ### Installation
+ ### Installation

  Install directly from PyPI:

@@ -33,7 +33,7 @@ pip install ai-critic

  ---

- ### The Quick Verdict
+ ### The Quick Verdict

  With just a few lines of code, you obtain an **executive-level assessment** and a **deployment recommendation**.

@@ -70,13 +70,13 @@ If **ai-critic** recommends deployment, it means meaningful risks were *not* det

  ---

- ## 💡 Understanding the Critique (The Intermediary)
+ ## 💡 Understanding the Critique (The Intermediary)

  For data scientists who want to understand **why** the model received a given verdict and **how to improve it**.

  ---

- ### The Four Pillars of the Audit
+ ### The Four Pillars of the Audit

  **ai-critic** evaluates models across four independent risk dimensions:

@@ -91,7 +91,7 @@ Each pillar contributes signals used later in the **deployment gate**.

  ---

- ### Full Technical & Visual Analysis
+ ### Full Technical & Visual Analysis

  To access **all internal diagnostics**, including plots and recommendations, use `view="all"`.

@@ -117,7 +117,7 @@ Generated plots may include:

  ---

- ### Robustness Test (Noise Injection)
+ ### Robustness Test (Noise Injection)

  A model that collapses under small perturbations is **not production-safe**.

@@ -139,13 +139,52 @@ print(f"Verdict: {robustness['verdict']}")

  ---

- ## ⚙️ Integration and Governance (The Advanced)
+ ## ⚙️ Integration and Governance (The Advanced)

  This section targets **MLOps engineers**, **architects**, and teams operating automated pipelines.

  ---

- ### The Deployment Gate (`deploy_decision`)
+ ### Multi-Framework Support
+
+ **ai-critic 1.0+** supports models from multiple frameworks with the **same API**:
+
+ ```python
+ # PyTorch Example
+ import torch
+ import torch.nn as nn
+ from ai_critic import AICritic
+
+ X = torch.randn(1000, 20)
+ y = torch.randint(0, 2, (1000,))
+
+ model = nn.Sequential(
+     nn.Linear(20, 32),
+     nn.ReLU(),
+     nn.Linear(32, 2)
+ )
+
+ critic = AICritic(model, X, y, framework="torch", adapter_kwargs={"epochs":5, "batch_size":64})
+ report = critic.evaluate(view="executive")
+ print(report)
+
+ # TensorFlow Example
+ import tensorflow as tf
+
+ model = tf.keras.Sequential([
+     tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
+     tf.keras.layers.Dense(2)
+ ])
+ critic = AICritic(model, X.numpy(), y.numpy(), framework="tensorflow", adapter_kwargs={"epochs":5})
+ report = critic.evaluate(view="executive")
+ print(report)
+ ```
+
+ > No need to rewrite evaluation code — **one Critic API works for sklearn, PyTorch, or TensorFlow**.
+
+ ---
+
+ ### The Deployment Gate (`deploy_decision`)

  The `deploy_decision()` method aggregates *all detected risks* and produces a final gate decision.

@@ -173,7 +212,7 @@ for issue in decision["blocking_issues"]:

  ---

- ### Modes & Views (API Design)
+ ### Modes & Views (API Design)

  The `evaluate()` method supports **multiple modes** via the `view` parameter:

@@ -193,7 +232,7 @@ critic.evaluate(view=["executive", "performance"])

  ---

- ### Session Tracking & Model Comparison (New in 1.0.0)
+ ### Session Tracking & Model Comparison

  You can persist evaluations and compare model versions over time.

@@ -216,7 +255,7 @@ This enables:

  ---

- ### Best Practices & Use Cases
+ ### Best Practices & Use Cases

  | Scenario | Recommended Usage |
  | ----------------------- | -------------------------------------- |
@@ -226,11 +265,14 @@ This enables:
  | **Stakeholder Reports** | Share executive summaries |

  ---
+
  ## 🔒 API Stability

  Starting from version **1.0.0**, the public API of **ai-critic** follows semantic versioning.
  Breaking changes will only occur in major releases.

+ ---
+
  ## 📄 License

  Distributed under the **MIT License**.
@@ -245,13 +287,3 @@ Distributed under the **MIT License**.
  A failed audit does **not** mean the model is bad — it means the model **is not ready to be trusted**.

  The purpose of **ai-critic** is to introduce *structured skepticism* into machine learning workflows — exactly where it belongs.
-
- ---
-
- Se quiser, próximo passo posso:
-
- * gerar o **CHANGELOG.md oficial do 1.0.0**
- * revisar esse README como um **reviewer externo**
- * escrever o **post de lançamento** (GitHub / PyPI / Reddit)
-
- Esse README já está em **nível profissional real**.

{ai_critic-1.0.0 → ai_critic-1.1.0}/ai_critic.egg-info/SOURCES.txt

@@ -8,6 +8,7 @@ ai_critic.egg-info/dependency_links.txt
  ai_critic.egg-info/requires.txt
  ai_critic.egg-info/top_level.txt
  ai_critic/evaluators/__init__.py
+ ai_critic/evaluators/adapters.py
  ai_critic/evaluators/config.py
  ai_critic/evaluators/data.py
  ai_critic/evaluators/performance.py

{ai_critic-1.0.0 → ai_critic-1.1.0}/pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "ai-critic"
- version = "1.0.0"
+ version = "1.1.0"
  description = "Fast AI evaluator for scikit-learn models"
  readme = "README.md"
  authors = [