m8flow 1.1.2 → 1.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,15 +1,62 @@
- # M8Flow CLI
+ <p align="center">
+   <img src="https://cdn.jsdelivr.net/npm/m8flow@latest/bundled/frontend-dist/assets/logo_full.png" alt="M8Flow" height="60" />
+ </p>

- Visual ML Pipeline Builder install globally, run anywhere.
+ <h3 align="center">AI-powered visual machine learning workflow builder with local execution, pipeline orchestration, and zero-config setup.</h3>

- ```
+ <p align="center">
+   <a href="https://www.npmjs.com/package/m8flow"><img src="https://img.shields.io/npm/v/m8flow?color=6366f1&label=npm" alt="npm version" /></a>
+   <a href="https://www.npmjs.com/package/m8flow"><img src="https://img.shields.io/npm/dm/m8flow?color=22c55e&label=downloads" alt="downloads" /></a>
+   <a href="https://www.npmjs.com/package/m8flow"><img src="https://img.shields.io/npm/l/m8flow?color=f59e0b" alt="license" /></a>
+   <img src="https://img.shields.io/badge/Python-3.8%2B-blue" alt="python" />
+   <img src="https://img.shields.io/badge/Node.js-18%2B-green" alt="node" />
+ </p>
+
+ ---
+
+ ## Quick Start
+
+ ```bash
  npm install -g m8flow
  m8flow run
  ```

+ Your browser opens automatically. Start building ML pipelines in seconds — no Docker, no config files, no cloud accounts.
+
+ ---
+
+ ## Why M8Flow?
+
+ Most ML tooling forces you to write boilerplate, manage environments manually, or pay for cloud compute. M8Flow takes a different approach:
+
+ | Problem | M8Flow Solution |
+ |---|---|
+ | Writing pipeline code from scratch | Visual drag-and-drop node editor |
+ | Complex environment setup | Automatic Python venv + dependency install |
+ | No AI assistance for ML workflows | Built-in AI pipeline generation (OpenRouter / Gemini / Mistral) |
+ | Vendor lock-in and cloud costs | 100% local-first — your machine, your data |
+ | Slow iteration cycles | Live pipeline execution with real-time logs |
+
+ ---
+
+ ## Features
+
+ - **Visual drag-and-drop pipeline builder** — connect nodes with edges, no boilerplate
+ - **AI-assisted workflow generation** — describe your pipeline in plain English
+ - **Multi-provider AI support** — OpenRouter, Google Gemini, Mistral La Plateforme
+ - **Local-first architecture** — all data stays on your machine
+ - **Automatic environment setup** — Python virtualenv + dependencies on first run
+ - **60+ built-in ML nodes** — preprocessing, models, evaluation, visualization
+ - **Custom Python components** — add your own nodes with AI code generation
+ - **CSV dataset handling** — upload files, inspect schemas, feed into pipelines
+ - **Pipeline versioning** — save checkpoints, compare versions, roll back
+ - **Real-time execution monitoring** — live logs, node status, output previews
+ - **Cross-browser state sync** — flows persist across tabs and browsers via local server
+ - **Port conflict auto-resolution** — never manually kill a port again
+
  ---

- ## System requirements
+ ## Requirements

  | Tool | Minimum |
  |--------|---------|
@@ -23,38 +70,33 @@ m8flow run

  ### `m8flow run`

- Starts the full stack and opens the browser.
+ Launches the complete local M8Flow environment.

  ```
  Options:
    -p, --port <n>       Frontend port (default: 3000)
    -b, --backend <n>    Backend port (default: 8000)
-   --no-browser         Don't open the browser automatically
-   --verbose            Stream backend and frontend logs
+   --no-browser         Skip opening the browser automatically
+   --verbose            Stream full backend logs
    -h, --help           Show help
  ```

- **Examples**
-
  ```bash
- m8flow run                           # defaults (3000 / 8000)
- m8flow run --port 4000               # custom frontend port
- m8flow run --backend 9000 --verbose  # custom backend + full logs
- m8flow run --no-browser              # headless / CI mode
+ m8flow run                           # standard launch
+ m8flow run --port 4000               # custom port
+ m8flow run --no-browser --verbose    # headless with logs
  ```

  ### `m8flow doctor`

- Checks system requirements before you try to run.
+ Diagnoses your environment before running.

  ```bash
  m8flow doctor
  ```

- Output example:
-
  ```
- M8Flow Doctor v1.0.0
+ M8Flow Doctor v1.1.3

  ✔ Node.js (20.11.0)
  ✔ Python (Python 3.11.5)
@@ -62,77 +104,109 @@ Output example:
  ✔ Python dependencies
  ✔ Bundled assets
  ✔ Port Frontend (3000) — free
- Port Backend (8000) — busy — will auto-shift to 8001
+ Port Backend (8000) — free
  ```

  ---

- ## What `m8flow run` does
-
- | Step | Action |
- |------|--------|
- | 1 | Detect Python 3.8+ on PATH (`python3`, `python`, `py`) |
- | 2 | Create `~/.m8flow/{uploads,models,pipelines}` |
- | 3 | Create Python virtualenv at `~/.m8flow/venv` |
- | 4 | `pip install -r requirements.txt` (only when file changes) |
- | 5 | Auto-resolve port conflicts with `detect-port` |
- | 6 | Spawn `uvicorn main:app` via **execa** |
- | 7 | Serve React build via built-in Node HTTP server |
- | 8 | TCP-poll the backend until it accepts connections |
- | 9 | Open `http://localhost:3000` with **open** |
- | 10 | Clean `SIGINT` / `SIGTERM` shutdown |
+ ## What happens when you run `m8flow run`
+
+ | Step | What M8Flow does |
+ |------|-----------------|
+ | 1 | Locates Python 3.8+ on your PATH |
+ | 2 | Creates `~/.m8flow/` storage directories |
+ | 3 | Creates an isolated Python virtualenv |
+ | 4 | Installs all ML dependencies (only once; hash-cached) |
+ | 5 | Auto-resolves port conflicts |
+ | 6 | Starts FastAPI backend via uvicorn |
+ | 7 | Serves the React frontend via Node HTTP server |
+ | 8 | Waits for backend health check (up to 120 s) |
+ | 9 | Opens `http://localhost:3000` in your browser |

  ---

- ## Data storage
+ ## Architecture

- Everything stays on your machine:
+ ```
+ m8flow run
+ ├── Node.js CLI (bin/m8flow.js)
+ │   ├── Python venv setup  → ~/.m8flow/venv/
+ │   ├── Frontend server    → localhost:3000 (Node static)
+ │   └── Backend server     → localhost:8000 (uvicorn)
+ │
+ ├── Frontend — React + XYFlow canvas, Zustand state
+ │   └── Visual node editor, AI chat, dataset manager
+ │
+ └── Backend — FastAPI + Python runtime
+     ├── Pipeline executor (topological DAG execution)
+     ├── LLM service (OpenRouter / Gemini / Mistral)
+     ├── Self-healer (AI auto-fix on node errors)
+     └── Template library (60+ pre-built ML components)
+ ```

- | Path | Purpose |
- |------|---------|
- | `~/.m8flow/uploads/` | Uploaded CSV files |
- | `~/.m8flow/models/` | Trained models |
- | `~/.m8flow/pipelines/`| Saved pipelines |
- | `~/.m8flow/venv/` | Python virtualenv |
- | `~/.m8flow/.env` | API keys (optional) |
+ | Layer | Technology |
+ |---|---|
+ | Frontend | React, XYFlow, Zustand, Vite |
+ | Backend | FastAPI, Python, uvicorn |
+ | Runtime | Isolated Python virtualenv |
+ | CLI | Node.js 18+, ESM, execa |
+ | AI Layer | OpenRouter · Google Gemini · Mistral La Plateforme |
+ | Storage | Local filesystem (`~/.m8flow/`) |

- ### Setting your OpenRouter API key
+ ---

- ```bash
- echo "OPENROUTER_API_KEY=sk-or-..." >> ~/.m8flow/.env
- ```
+ ## AI Keys

- The backend loads this file on every startup.
+ M8Flow works without any API key (demo mode with limited models). For full AI pipeline generation, add at least one key in **Settings → API Keys**:
+
+ | Provider | Key prefix | Models |
+ |---|---|---|
+ | OpenRouter | `sk-or-...` | Llama 3.3, NVIDIA Nemotron, Gemma, and 200+ more |
+ | Google Gemini | `AIza...` | Gemini 2.5 Flash, Flash Lite, Pro |
+ | Mistral | any | Codestral, Mistral Small, Mixtral |
+
+ Keys are stored locally in your browser and on your machine — never sent to M8Flow servers.

  ---

- ## Build from source
+ ## Data & Storage
+
+ Everything lives on your machine:
+
+ | Path | Contents |
+ |------|---------|
+ | `~/.m8flow/uploads/` | Uploaded CSV datasets |
+ | `~/.m8flow/models/` | Trained model files |
+ | `~/.m8flow/pipelines/` | Saved pipeline state |
+ | `~/.m8flow/venv/` | Python virtualenv |
+ | `~/.m8flow/app_state.json` | Flows, projects, settings |
+
+ ---
+
+ ## Build from Source

  ```bash
- git clone https://github.com/your-org/m8flow
+ git clone https://github.com/mursaleen231213/m8flow
  cd m8flow

- # 1. Bundle backend + frontend
+ # Build React frontend + bundle Python backend
  node cli/scripts/build.js

- # 2. Install globally from local build
+ # Install locally
  cd cli && npm install -g .

- # 3. Run
+ # Launch
  m8flow run
  ```

- ---
-
- ## Publish to npm
+ ### Push updates to npm

  ```bash
- # Inside cli/
- npm publish
+ cd cli
+ npm version patch          # bump version
+ npm publish --otp=<code>   # publish (OTP from authenticator)
  ```

- `prepublishOnly` runs `build.js` automatically.
-
  ---

  ## Uninstall
@@ -140,6 +214,13 @@ npm publish

  ```bash
  npm uninstall -g m8flow

- # Optional remove all data:
- rm -rf ~/.m8flow
+ # Remove all local data (optional):
+ rm -rf ~/.m8flow                                         # macOS/Linux
+ Remove-Item -Recurse -Force "$env:USERPROFILE\.m8flow"   # Windows
  ```
+
+ ---
+
+ <p align="center">
+   Built with ❤️ for ML engineers who value speed and simplicity.
+ </p>
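The doctor output and feature list above mention automatic port-shifting when a requested port is busy. The CLI does this in Node via `detect-port`; below is a minimal Python sketch of the same idea (the `find_free_port` helper is hypothetical, not part of the package):

```python
import socket

def find_free_port(preferred: int, max_shift: int = 10) -> int:
    """Return `preferred` if it is free, otherwise the next free port above it.

    Mirrors the doctor behaviour ("busy — will auto-shift to 8001").
    """
    for port in range(preferred, preferred + max_shift + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is free
            except OSError:
                continue  # busy, try the next one
    raise RuntimeError(f"No free port in {preferred}..{preferred + max_shift}")
```

Probing by attempting a bind is the same strategy `detect-port` uses under the hood; it avoids the race-prone "parse netstat output" approach.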
@@ -1,3 +1,4 @@
+ import json
  from fastapi import APIRouter, HTTPException, Request
  from pydantic import BaseModel
  from domain.models import FlowSchema
@@ -86,9 +87,20 @@ def _to_canvas_node(node: dict) -> dict:


  def _inject_api_key(http_request: Request) -> None:
-     """Read X-OpenRouter-Key header and set it as the per-request ContextVar."""
-     key = http_request.headers.get("X-OpenRouter-Key")
-     llm_service._request_api_key.set(key or None)
+     """Read provider key + per-agent model headers and set per-request ContextVars."""
+     llm_service._request_api_key.set(http_request.headers.get("X-OpenRouter-Key") or None)
+     llm_service._request_gemini_key.set(http_request.headers.get("X-Gemini-Key") or None)
+     llm_service._request_mistral_key.set(http_request.headers.get("X-Mistral-Key") or None)
+
+     # Parse per-agent model matrix sent as JSON: {"architect":"gemini-2.5-flash", ...}
+     raw_agents = http_request.headers.get("X-Agent-Models")
+     if raw_agents:
+         try:
+             llm_service._request_agent_models.set(json.loads(raw_agents))
+         except Exception:
+             llm_service._request_agent_models.set(None)
+     else:
+         llm_service._request_agent_models.set(None)


  @router.post("/generate")
@@ -257,6 +269,34 @@ async def ask_flow(http_request: Request, req: AskRequest):
      }


+ class InterviewRequest(BaseModel):
+     """Trigger Phase 1 analysis when a CSV is first uploaded."""
+     context: str  # dataset summary string from the upload response
+
+
+ @router.post("/interview")
+ async def interview_flow(http_request: Request, req: InterviewRequest):
+     """
+     Phase 1 — Interactive interview entry point.
+
+     Called immediately after a CSV upload. Returns a conversational
+     analysis text (with [PLANNING], [ANALYSIS], [DEDUCTION],
+     [AWAITING CONFIRMATION] status labels) WITHOUT generating any nodes.
+     The frontend displays this as an assistant message and waits for user
+     confirmation before proceeding to pipeline generation.
+     """
+     _inject_api_key(http_request)
+     if not req.context.strip():
+         raise HTTPException(status_code=422, detail="Context cannot be empty")
+     try:
+         text = await llm_service.interview_dataset(req.context)
+         return {"result_type": "interview", "message": text}
+     except RuntimeError as exc:
+         raise HTTPException(status_code=503, detail=str(exc))
+     except Exception as exc:
+         raise HTTPException(status_code=500, detail=f"{type(exc).__name__}: {exc}")
+
+
  @router.post("/execute")
  def execute_flow(flow: FlowSchema):
      try:
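The `/interview` endpoint above follows the same error-mapping convention as the other routes: `RuntimeError` (typically a missing or rejected API key) becomes 503, anything else 500 with the exception type in the detail. Sketched standalone (the `map_error` helper is hypothetical):

```python
def map_error(exc: Exception) -> tuple[int, str]:
    """Map a caught exception to an (HTTP status, detail) pair.

    RuntimeError signals a service-level problem (503 Service Unavailable);
    every other exception is an unexpected server error (500).
    """
    if isinstance(exc, RuntimeError):
        return 503, str(exc)
    return 500, f"{type(exc).__name__}: {exc}"
```

Keeping the mapping in one place would let every route raise `HTTPException(*map_error(exc))` instead of repeating the two `except` clauses.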
@@ -77,9 +77,9 @@ async def generate_node_code_route(http_request: Request, req: GenerateCodeReque
      """Generate M8Flow-compatible Python node code from a plain-English description."""
      from services import llm_service

-     # Honour the per-request OpenRouter key set by the frontend
-     key = http_request.headers.get("X-OpenRouter-Key")
-     llm_service._request_api_key.set(key or None)
+     llm_service._request_api_key.set(http_request.headers.get("X-OpenRouter-Key") or None)
+     llm_service._request_gemini_key.set(http_request.headers.get("X-Gemini-Key") or None)
+     llm_service._request_mistral_key.set(http_request.headers.get("X-Mistral-Key") or None)

      if not req.description.strip():
          raise HTTPException(status_code=422, detail="Description cannot be empty")
@@ -10,9 +10,11 @@ if _global_env.exists():
      load_dotenv(_global_env, override=False)  # don't override already-set vars

  class Config:
-     MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY", "")
-     OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
+     MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY", "")
+     OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
      OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY", "")
+     GEMINI_API_KEY = os.getenv("GEMINI_API_KEY", "")     # Google AI Studio key
+     MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY", "")   # Mistral La Plateforme key
      STORAGE_DIR = os.getenv("M8FLOW_UPLOAD_DIR", os.getenv("STORAGE_DIR", "./storage"))

  config = Config()
@@ -41,7 +41,7 @@ ALLOWED_IMPORTS = frozenset({
      "numpy", "pandas", "scipy", "sklearn", "xgboost", "lightgbm",
      "statsmodels", "imblearn",
      # Plotting
-     "matplotlib", "seaborn", "plotly",
+     "matplotlib", "seaborn", "plotly", "mpl_toolkits",
      # Standard safe libs
      "math", "statistics", "itertools", "functools", "collections",
      "json", "re", "datetime", "typing",
@@ -62,6 +62,49 @@ def _serialize_value(val: Any) -> Any:
          "shape": list(val.shape),
          "dtype": str(val.dtype),
      }
+
+     # ── sklearn displays ─────────────────────────────────────────────────────
+     is_display = hasattr(val, "figure_") and hasattr(val.figure_, "savefig")
+     if is_display:
+         try:
+             import io
+             import base64
+             buf = io.BytesIO()
+             val.figure_.savefig(buf, format="png", bbox_inches="tight")
+             b64 = base64.b64encode(buf.getvalue()).decode("utf-8")
+             return {
+                 "image_base64": b64,
+                 "title": getattr(val, "estimator_name", type(val).__name__)
+             }
+         except Exception:
+             pass
+
+     # ── matplotlib / seaborn ─────────────────────────────────────────────────
+     is_figure = hasattr(val, "savefig") and type(val).__module__.startswith("matplotlib")
+     if is_figure:
+         try:
+             import io
+             import base64
+             buf = io.BytesIO()
+             val.savefig(buf, format="png", bbox_inches="tight")
+             b64 = base64.b64encode(buf.getvalue()).decode("utf-8")
+             return {
+                 "image_base64": b64,
+                 "title": "Matplotlib / Seaborn Plot"
+             }
+         except Exception:
+             pass
+
+     # ── plotly ───────────────────────────────────────────────────────────────
+     if hasattr(val, "to_html") and type(val).__module__.startswith("plotly"):
+         try:
+             return {
+                 "plotly_html": val.to_html(full_html=False, include_plotlyjs="cdn"),
+                 "title": "Plotly Visualization"
+             }
+         except Exception:
+             pass
+
      # sklearn / any estimator
      if hasattr(val, "predict") and hasattr(val, "fit"):
          return {
@@ -75,9 +118,33 @@ def _serialize_value(val: Any) -> Any:
      if isinstance(val, (np.bool_,)):
          return bool(val)
      if isinstance(val, dict):
+         if "data" in val and isinstance(val.get("data"), list) and "layout" in val:
+             try:
+                 import json
+                 import plotly.io as pio
+                 val_json = json.dumps(val)
+                 fig = pio.from_json(val_json)
+                 return {
+                     "plotly_html": fig.to_html(full_html=False, include_plotlyjs="cdn"),
+                     "title": "Plotly Visualization"
+                 }
+             except Exception:
+                 pass
          return {k: _serialize_value(v) for k, v in val.items()}
      if isinstance(val, (list, tuple)):
          return [_serialize_value(v) for v in val]
+     if isinstance(val, str):
+         if val.startswith('{') and '"data":' in val and '"layout":' in val:
+             try:
+                 import plotly.io as pio
+                 fig = pio.from_json(val)
+                 return {
+                     "plotly_html": fig.to_html(full_html=False, include_plotlyjs="cdn"),
+                     "title": "Plotly Visualization"
+                 }
+             except Exception:
+                 pass
+         return val
      return val


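The new string branch in `_serialize_value` above guards the relatively expensive `plotly.io.from_json` call with a cheap textual check. That detection heuristic can be exercised on its own, without plotly installed (`looks_like_plotly_json` is an illustrative name):

```python
def looks_like_plotly_json(val: str) -> bool:
    """Cheap pre-check before attempting plotly.io.from_json:
    the string must be a JSON object containing both "data" and "layout" keys.
    Matches the condition used in the string branch of _serialize_value."""
    return val.startswith('{') and '"data":' in val and '"layout":' in val
```

False positives are harmless here because the real code still wraps the parse in `try/except` and falls back to returning the raw string.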
@@ -3,6 +3,13 @@ import inspect
  import functools
  from typing import Any

+ # Ensure matplotlib uses a non-interactive backend
+ try:
+     import matplotlib
+     matplotlib.use('Agg')
+ except ImportError:
+     pass
+

  @functools.lru_cache(maxsize=128)
  def _compile(code: str):
@@ -45,5 +52,33 @@ def execute_node_code(code: str, inputs: dict[str, Any]) -> dict[str, Any]:

      result = run_fn(**filtered)
      if not isinstance(result, dict):
-         return {"output": result}
+         result = {"output": result}
+
+     # ── Automatically capture unreturned matplotlib figures ──────────────────
+     import sys
+     if "matplotlib.pyplot" in sys.modules:
+         import matplotlib.pyplot as plt
+         figs = plt.get_fignums()
+         if figs:
+             # Identify figures already returned explicitly to avoid duplicates
+             returned_fig_ids = {id(v) for v in result.values() if hasattr(v, "savefig")}
+
+             import io, base64
+             for i, num in enumerate(figs):
+                 fig = plt.figure(num)
+                 if id(fig) in returned_fig_ids:
+                     continue
+
+                 buf = io.BytesIO()
+                 fig.savefig(buf, format="png", bbox_inches="tight")
+                 b64 = base64.b64encode(buf.getvalue()).decode("utf-8")
+                 # Avoid overwriting explicit returns
+                 key = f"plot_{i}" if i > 0 else "plot"
+                 if key not in result:
+                     result[key] = {
+                         "image_base64": b64,
+                         "title": f"Figure {num}"
+                     }
+             plt.close("all")
+
      return result
@@ -1,6 +1,16 @@
  from fastapi import FastAPI, Request
  from fastapi.middleware.cors import CORSMiddleware
  from fastapi.responses import JSONResponse
+
+ # Force matplotlib to use a non-interactive backend (Agg).
+ # This prevents "main thread is not in main loop" errors when running
+ # inside background threads (like FastAPI/uvicorn).
+ try:
+     import matplotlib
+     matplotlib.use('Agg')
+ except ImportError:
+     pass
+
  from api.routes import flows, nodes, appstate
  import os
  import time
@@ -60,6 +70,17 @@ app.include_router(nodes.router, prefix="/api/nodes", tags=["Nodes"])
  app.include_router(appstate.router, prefix="/api/app/state", tags=["AppState"])


+ @app.get("/v1/models")
+ def openai_compat_models():
+     """
+     OpenAI-compatible /v1/models stub.
+     External tools (VS Code extensions, Cursor, Continue.dev …) probe this
+     endpoint to check if the server speaks the OpenAI protocol.
+     Return a minimal valid response so they get a 200 instead of a noisy 404.
+     """
+     return {"object": "list", "data": []}
+
+
  @app.get("/api/health")
  def health_check():
      from config import config
@@ -1,35 +1,60 @@
  # ── Core API ─────────────────────────────────────────────────────────────────
- fastapi
+ fastapi==0.136.1
  uvicorn[standard]
- python-multipart
- pydantic
- python-dotenv
- httpx
-
+ starlette==1.0.0
+ python-multipart==0.0.27
+ python-dotenv==1.2.2
+ pydantic==2.13.3
+ pydantic_core==2.46.3
+ annotated-types==0.7.0
+ # ── HTTP ─────────────────────────────────────────────────────────────────────
+ httpx==0.28.1
+ httpcore==1.0.9
+ h11==0.16.0
+ anyio==4.13.0
+ sniffio==1.3.1
+ certifi==2026.4.22
+ idna==3.13
  # ── Data ─────────────────────────────────────────────────────────────────────
- pandas
- numpy
- scipy
- joblib
-
+ pandas==3.0.2
+ numpy==2.4.4
+ scipy==1.17.1
+ joblib==1.5.3
+ threadpoolctl==3.6.0
+ python-dateutil==2.9.0.post0
+ six==1.17.0
+ tzdata==2026.2
  # ── ML ───────────────────────────────────────────────────────────────────────
- scikit-learn
- xgboost
- lightgbm
- statsmodels
-
- # ── Imbalanced learning ───────────────────────────────────────────────────────
- imbalanced-learn
-
- # ── Explainability ────────────────────────────────────────────────────────────
- shap
-
- # ── Dimensionality reduction ──────────────────────────────────────────────────
- umap-learn
-
- # ── Visualisation (server-side, not imported at runtime) ─────────────────────
- matplotlib
- seaborn
-
- # ── Legacy / compat ───────────────────────────────────────────────────────────
- openai
+ scikit-learn==1.8.0
+ # ── Visualisation ────────────────────────────────────────────────────────────
+ matplotlib==3.10.9
+ seaborn==0.13.2
+ plotly==6.7.0
+ pillow==12.2.0
+ contourpy==1.3.3
+ cycler==0.12.1
+ fonttools==4.62.1
+ kiwisolver==1.5.0
+ pyparsing==3.3.2
+ narwhals==2.21.0
+ # ── AI / LLM clients ─────────────────────────────────────────────────────────
+ openai==2.33.0
+ mistralai==2.4.4
+ httpx==0.28.1
+ jiter==0.14.0
+ distro==1.9.0
+ tqdm==4.67.3
+ # ── Telemetry ────────────────────────────────────────────────────────────────
+ opentelemetry-api==1.39.1
+ opentelemetry-semantic-conventions==0.60b1
+ # ── Utils ────────────────────────────────────────────────────────────────────
+ click==8.3.3
+ colorama==0.4.6
+ annotated-doc==0.0.4
+ eval_type_backport==0.3.1
+ importlib_metadata==8.7.1
+ jsonpath-python==1.1.5
+ packaging==26.2
+ typing_extensions==4.15.0
+ typing-inspection==0.4.2
+ zipp==3.23.1