agentji 0.10.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. agentji-0.10.0/PKG-INFO +361 -0
  2. agentji-0.10.0/README.md +328 -0
  3. agentji-0.10.0/agentji/__init__.py +3 -0
  4. agentji-0.10.0/agentji/builtins.py +186 -0
  5. agentji-0.10.0/agentji/cli.py +504 -0
  6. agentji-0.10.0/agentji/config.py +417 -0
  7. agentji-0.10.0/agentji/executor.py +89 -0
  8. agentji-0.10.0/agentji/improver.py +183 -0
  9. agentji-0.10.0/agentji/logger.py +290 -0
  10. agentji-0.10.0/agentji/loop.py +922 -0
  11. agentji-0.10.0/agentji/mcp_bridge.py +182 -0
  12. agentji-0.10.0/agentji/memory.py +45 -0
  13. agentji-0.10.0/agentji/router.py +148 -0
  14. agentji-0.10.0/agentji/run_context.py +113 -0
  15. agentji-0.10.0/agentji/server.py +583 -0
  16. agentji-0.10.0/agentji/skill_converter.py +214 -0
  17. agentji-0.10.0/agentji/skill_translator.py +287 -0
  18. agentji-0.10.0/agentji/studio/index.html +1164 -0
  19. agentji-0.10.0/agentji.egg-info/PKG-INFO +361 -0
  20. agentji-0.10.0/agentji.egg-info/SOURCES.txt +41 -0
  21. agentji-0.10.0/agentji.egg-info/dependency_links.txt +1 -0
  22. agentji-0.10.0/agentji.egg-info/entry_points.txt +2 -0
  23. agentji-0.10.0/agentji.egg-info/requires.txt +18 -0
  24. agentji-0.10.0/agentji.egg-info/top_level.txt +1 -0
  25. agentji-0.10.0/pyproject.toml +63 -0
  26. agentji-0.10.0/setup.cfg +4 -0
  27. agentji-0.10.0/tests/test_builtins.py +237 -0
  28. agentji-0.10.0/tests/test_call_agent.py +296 -0
  29. agentji-0.10.0/tests/test_cli.py +214 -0
  30. agentji-0.10.0/tests/test_config.py +295 -0
  31. agentji-0.10.0/tests/test_e2e_session.py +391 -0
  32. agentji-0.10.0/tests/test_executor.py +78 -0
  33. agentji-0.10.0/tests/test_improvement_e2e.py +180 -0
  34. agentji-0.10.0/tests/test_improver.py +264 -0
  35. agentji-0.10.0/tests/test_integration_dashscope.py +336 -0
  36. agentji-0.10.0/tests/test_integration_local.py +121 -0
  37. agentji-0.10.0/tests/test_logger.py +192 -0
  38. agentji-0.10.0/tests/test_loop_unit.py +387 -0
  39. agentji-0.10.0/tests/test_mcp_bridge.py +312 -0
  40. agentji-0.10.0/tests/test_router.py +223 -0
  41. agentji-0.10.0/tests/test_run_context.py +105 -0
  42. agentji-0.10.0/tests/test_serve.py +320 -0
  43. agentji-0.10.0/tests/test_skill_translator.py +300 -0
@@ -0,0 +1,361 @@
1
+ Metadata-Version: 2.4
2
+ Name: agentji
3
+ Version: 0.10.0
4
+ Summary: Universal configuration and execution layer for AI agents
5
+ Author: Winston Wang Qi
6
+ License-Expression: MIT
7
+ Keywords: agent,llm,mcp,qwen,litellm,langgraph,openai,chinese-llm
8
+ Classifier: Development Status :: 4 - Beta
9
+ Classifier: Intended Audience :: Developers
10
+ Classifier: Programming Language :: Python :: 3
11
+ Classifier: Programming Language :: Python :: 3.10
12
+ Classifier: Programming Language :: Python :: 3.11
13
+ Classifier: Programming Language :: Python :: 3.12
14
+ Classifier: Topic :: Software Development :: Libraries :: Python Modules
15
+ Requires-Python: >=3.10
16
+ Description-Content-Type: text/markdown
17
+ Requires-Dist: litellm
18
+ Requires-Dist: langgraph
19
+ Requires-Dist: fastmcp
20
+ Requires-Dist: pydantic>=2.0
21
+ Requires-Dist: pyyaml
22
+ Requires-Dist: typer
23
+ Requires-Dist: rich
24
+ Provides-Extra: serve
25
+ Requires-Dist: fastapi; extra == "serve"
26
+ Requires-Dist: uvicorn[standard]; extra == "serve"
27
+ Provides-Extra: dev
28
+ Requires-Dist: pytest; extra == "dev"
29
+ Requires-Dist: pytest-asyncio; extra == "dev"
30
+ Requires-Dist: python-dotenv; extra == "dev"
31
+ Requires-Dist: yfinance; extra == "dev"
32
+ Requires-Dist: mcp-weather-server; extra == "dev"
33
+
34
+ # agentji
35
+
36
+ Run any agent skill on any model. One YAML file.
37
+
38
+ Anthropic's official skills and ClawHub skills — `docx`, `brand-guidelines`, `data-analysis` — work here unchanged on Qwen, Kimi, MiniMax, or a local Ollama model. Swap the model with one config line. No code changes.
39
+
40
+ ```yaml
41
+ agents:
42
+   orchestrator:
43
+     model: moonshot/kimi-k2.5 # change this line to switch providers
44
+     agents: [analyst, reporter]
45
+
46
+   analyst:
47
+     model: qwen/MiniMax/MiniMax-M2.7
48
+     skills: [sql-query, data-analysis]
49
+
50
+   reporter:
51
+     model: qwen/glm-5
52
+     skills: [docx-template]
53
+     builtins: [bash, write_file]
54
+     max_iterations: 20
55
+ ```
56
+
57
+ *Orchestrated by Kimi K2.5 · Analysed by MiniMax M2.7 · Reported by GLM-5 · Zero Claude.*
58
+
59
+ ---
60
+
61
+ ![Python](https://img.shields.io/badge/python-3.10+-blue)
62
+ ![License](https://img.shields.io/badge/license-MIT-green)
63
+ ![Status](https://img.shields.io/badge/status-beta-orange)
64
+
65
+ ```bash
66
+ pip install agentji
67
+ ```
68
+
69
+ ---
70
+
71
+ ## Quickstart
72
+
73
+ Three paths. Pick the one that fits.
74
+
75
+ **Path A — free, offline, no API keys**
76
+ Uses a local Ollama model. You get a working weather agent in a browser UI.
77
+
78
+ ```bash
79
+ pip install "agentji[serve]" mcp-weather-server
80
+ ollama pull qwen3:4b
81
+ cd examples/weather-reporter
82
+ agentji serve --studio
83
+ ```
84
+
85
+ Open [http://localhost:8000](http://localhost:8000) → ask: *"Weather in Seoul, Tokyo, London?"*
86
+
87
+ ---
88
+
89
+ **Path B — cloud models, multi-agent pipeline**
90
+ Three providers, one pipeline. You get a Word document with a full market analysis.
91
+
92
+ ```bash
93
+ pip install "agentji[serve]" python-docx matplotlib
94
+ export MOONSHOT_API_KEY=your_key
95
+ export DASHSCOPE_API_KEY=your_key
96
+ cd examples/data-analyst && python data/download_chinook.py
97
+ agentji serve --studio
98
+ ```
99
+
100
+ Open [http://localhost:8000](http://localhost:8000) → ask: *"Which markets should we prioritise for growth? Full report."*
101
+ → `output/growth_strategy.docx` is written to disk when the run completes.
102
+
103
+ ---
104
+
105
+ **Path C — CLI, no server**
106
+ No browser, no server. Pipe it into a script or run it headless.
107
+
108
+ ```bash
109
+ agentji run --config examples/data-analyst/agentji.yaml \
110
+   --agent orchestrator \
111
+   --prompt "Which genres are high-margin but low-volume?"
112
+ ```
113
+
114
+ ---
115
+
116
+ ## Skills
117
+
118
+ A skill is a directory with a `SKILL.md`. Skills from any registry work without modification:
119
+
120
+ | Skill | Source | Type |
121
+ |---|---|---|
122
+ | `sql-query` | Bundled (agentji) | Tool skill |
123
+ | `data-analysis` | [ClawHub — ivangdavila](https://clawhub.ai/ivangdavila/data-analysis) | Prompt skill |
124
+ | Any Claude Code skill | [Anthropic official](https://github.com/anthropics/skills) | Prompt skill |
125
+
126
+ Claude Code's Anthropic-format skills work here unchanged. The model is a config line.
127
+
128
+ ### Two skill types
129
+
130
+ **Prompt skills** — the SKILL.md body is injected into the agent's system prompt. Anthropic's official skills (`brand-guidelines`, `docx`, `data-analysis`) are all prompt skills. They work on any model because they're instructions, not code.
131
+
132
+ **Tool skills** — a `skill.yaml` sidecar alongside SKILL.md adds the tool config: script path, parameters, timeout. SKILL.md stays in pure Anthropic format; `skill.yaml` is the agentji extension.
133
+
134
+ ```
135
+ skills/sql-query/
136
+ ├── SKILL.md     ← pure Anthropic format: name + description + body
137
+ ├── skill.yaml   ← agentji tool config: scripts.execute + parameters
138
+ └── scripts/
139
+     └── run_query.py
140
+ ```
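For illustration, a `skill.yaml` for this layout might look roughly like the sketch below. Only `scripts.execute`, the parameters, and the timeout are named in the text above; every other detail (field nesting, the JSON-Schema-style parameter shape) is an assumption, not the documented schema.

```yaml
# Hypothetical skill.yaml sketch; the exact schema may differ.
scripts:
  execute: scripts/run_query.py   # script path, as described above
timeout: 60                       # per-tool timeout, assumed shape
parameters:                       # assumed JSON-Schema-style parameters
  type: object
  properties:
    query:
      type: string
      description: SQL to run against the configured database
  required: [query]
```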
141
+
142
+ ### Skill converter
143
+
144
+ If a skill has callable scripts but no `skill.yaml`, agentji detects it and offers to auto-generate one using the active agent's model. No separate setup.
145
+
146
+ ---
147
+
148
+ ## Multi-agent orchestration
149
+
150
+ Set `agents:` on any agent to make it an orchestrator. agentji injects a `call_agent(agent, prompt)` tool whose `enum` constraint limits delegation to declared sub-agents — no hallucinated agent names.
151
+
152
+ ```yaml
153
+ agents:
154
+   orchestrator:
155
+     model: moonshot/kimi-k2.5
156
+     agents: [analyst, reporter] # call_agent tool added automatically
157
+
158
+   analyst:
159
+     model: qwen/MiniMax/MiniMax-M2.7
160
+     skills: [sql-query, data-analysis]
161
+
162
+   reporter:
163
+     model: qwen/glm-5
164
+     skills: [docx-template]
165
+     builtins: [bash, write_file]
166
+ ```
167
+
168
+ Sub-agent calls appear in the same log file — the entire pipeline in one JSONL, linked by a shared `pipeline_id`.
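In OpenAI function-calling terms, the injected tool plausibly looks like the sketch below. The description text and exact field layout are assumptions; the point is the `enum` restricted to declared sub-agents.

```python
# Sketch of the call_agent tool schema agentji could inject for the
# orchestrator above. The enum lists only declared sub-agents, so the
# model cannot delegate to an agent name that does not exist.
sub_agents = ["analyst", "reporter"]

call_agent_tool = {
    "type": "function",
    "function": {
        "name": "call_agent",
        "description": "Delegate a task to a declared sub-agent.",  # assumed wording
        "parameters": {
            "type": "object",
            "properties": {
                "agent": {"type": "string", "enum": sub_agents},
                "prompt": {"type": "string"},
            },
            "required": ["agent", "prompt"],
        },
    },
}

print(call_agent_tool["function"]["parameters"]["properties"]["agent"]["enum"])
```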
169
+
170
+ ---
171
+
172
+ ## MCP servers
173
+
174
+ Declare an MCP server in YAML; agentji connects via FastMCP and exposes its tools to the agent automatically.
175
+
176
+ ```yaml
177
+ mcps:
178
+   - name: weather
179
+     command: python
180
+     args: [-m, mcp_weather_server] # launched as subprocess, stdio transport
181
+
182
+ agents:
183
+   weather-reporter:
184
+     model: ollama/qwen3:4b
185
+     mcps: [weather] # tools discovered at runtime
186
+ ```
187
+
188
+ ---
189
+
190
+ ## agentji serve
191
+
192
+ ```bash
193
+ pip install "agentji[serve]"
194
+
195
+ # API only (default) — suitable for production, CI, headless deployments
196
+ agentji serve --config agentji.yaml --port 8000
197
+
198
+ # API + Studio browser UI
199
+ agentji serve --config agentji.yaml --port 8000 --studio
200
+ ```
201
+
202
+ | Endpoint | Description |
203
+ |---|---|
204
+ | `POST /v1/chat/completions` | OpenAI-compatible, streaming, returns `X-Agentji-Run-Id` header |
205
+ | `GET /v1/events/{run_id}` | SSE stream of all agent events (tool calls, sub-agent delegations) |
206
+ | `GET /v1/pipeline` | Pipeline topology JSON |
207
+ | `POST /v1/sessions/{id}/end` | End a session and trigger skill improvement extraction |
208
+ | `GET /` | agentji Studio (served only when the `--studio` flag is set) |
209
+
210
+ ### Sessions
211
+
212
+ Pass `X-Agentji-Session-Id` to track a conversation across turns. Control history per request:
213
+
214
+ ```json
215
+ { "messages": [...], "stateful": true, "improve": true }
216
+ ```
217
+
218
+ Or configure defaults in YAML:
219
+
220
+ ```yaml
221
+ studio:
222
+   stateful: true # carry conversation history across turns
223
+   max_turns: 20
224
+ ```
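Putting the header and per-request flags together, a client request can be sketched as follows. This only builds the payload; the body would be POSTed to the `/v1/chat/completions` endpoint above, and using the agent name as the `model` value is an assumption.

```python
import json

# Sketch of a stateful request to a running `agentji serve` instance.
# Header and flag names come from this README; the session id value
# is arbitrary and `model` naming the target agent is an assumption.
headers = {
    "Content-Type": "application/json",
    "X-Agentji-Session-Id": "demo-session-1",  # reuse the same id on every turn
}
payload = {
    "model": "orchestrator",
    "messages": [{"role": "user", "content": "Weather in Seoul?"}],
    "stateful": True,   # carry history across turns
    "improve": True,    # opt this session into improvement extraction
}
body = json.dumps(payload)
# POST `body` with `headers` to http://localhost:8000/v1/chat/completions
```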
225
+
226
+ ### agentji Studio
227
+
228
+ ```
229
+ ┌──────────────┬────────────────────────┬─────────────┐
230
+ │ agent graph  │ chat + thinking cards  │ live log    │
231
+ │ skill badges │ streaming response     │ SSE events  │
232
+ │ status dots  │ file download links    │ stats bar   │
233
+ └──────────────┴────────────────────────┴─────────────┘
234
+ ```
235
+
236
+ - Parallel tool calls grouped with a left border
237
+ - `context_write` / `context_read` events in amber — file handoffs between agents
238
+ - Orchestrator step tracker — live phase list with pending → running → done status
239
+ - Iteration limit banner with **Continue** button — never lose work at `max_iterations`
240
+ - **■ Stop** button — cancel a run at the next iteration boundary
241
+ - File download links — `.docx`, `.csv`, `.md` paths become clickable
242
+ - **Stateful toggle** — switch between stateful and stateless sessions in the header
243
+ - **Skill improvement checkbox** — opt individual sessions in/out of improvement extraction
244
+
245
+ ---
246
+
247
+ ## Skill improvement
248
+
249
+ At session end, agentji uses the configured model to review the conversation and extract three types of learning signals — corrections, affirmations, and hints — then appends them to each skill's `improvements.jsonl`:
250
+
251
+ ```yaml
252
+ improvement:
253
+   enabled: true
254
+   model: null # null = inherit default agent model
255
+   skills: [] # empty = all loaded skills
256
+ ```
257
+
258
+ Signal types written to `skills/sql-query/improvements.jsonl`:
259
+ ```json
260
+ {"type": "correction", "skill": "sql-query", "learning": "Use InvoiceLine.UnitPrice * Quantity for revenue, not Invoice.Total.", "context": "User corrected a query that used Invoice.Total which includes tax adjustments."}
261
+ {"type": "hint", "skill": "sql-query", "learning": "The Chinook database covers 2009–2013 only; scope date filters to this range.", "context": "User noted this mid-conversation."}
262
+ ```
263
+
264
+ Session end is triggered automatically on tab close, via `POST /v1/sessions/{id}/end`, or after 30 seconds of inactivity. The Studio checkbox lets users opt sessions in/out individually.
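As a sketch of what a consumer of these files might do (a hypothetical helper, not part of agentji's API):

```python
import json
import tempfile
from pathlib import Path

def load_improvements(skill_dir):
    """Read a skill's improvements.jsonl into a list of signal dicts, oldest first."""
    path = Path(skill_dir) / "improvements.jsonl"
    if not path.exists():
        return []
    return [json.loads(line) for line in path.read_text().splitlines() if line.strip()]

# Demo with two records shaped like the examples above:
skill_dir = Path(tempfile.mkdtemp())
(skill_dir / "improvements.jsonl").write_text(
    '{"type": "correction", "skill": "sql-query", "learning": "Use line totals for revenue."}\n'
    '{"type": "hint", "skill": "sql-query", "learning": "Data covers 2009-2013 only."}\n'
)
signals = load_improvements(skill_dir)
print([s["type"] for s in signals])  # -> ['correction', 'hint']
```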
265
+
266
+ ---
267
+
268
+ ## Built-in tools
269
+
270
+ | Builtin | What it does |
271
+ |---|---|
272
+ | `bash` | Execute shell commands |
273
+ | `read_file` | Read a file from disk |
274
+ | `write_file` | Write a file to disk |
275
+
276
+ These replicate the native tools Claude Code provides, enabling prompt skills that rely on file I/O to run on any model.
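The three builtins are small enough to sketch. The following is illustrative only, under assumed semantics (combined stdout/stderr for `bash`, text-mode file I/O), and is not agentji's actual source:

```python
import subprocess
import tempfile
from pathlib import Path

def bash(command, timeout=60):
    """Run a shell command; return combined stdout and stderr (assumed behaviour)."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout + result.stderr

def read_file(path):
    """Return a file's contents as text."""
    return Path(path).read_text()

def write_file(path, content):
    """Write text to disk and report what was written."""
    Path(path).write_text(content)
    return f"wrote {len(content)} characters to {path}"

# Round-trip demo in a temporary directory:
demo = Path(tempfile.mkdtemp()) / "note.md"
write_file(demo, "# hello")
print(read_file(demo))  # -> # hello
```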
277
+
278
+ ---
279
+
280
+ ## Provider support
281
+
282
+ | Provider | Model string | Notes |
283
+ |---|---|---|
284
+ | Qwen (DashScope) | `qwen/qwen-max` | |
285
+ | MiniMax (DashScope) | `qwen/MiniMax/MiniMax-M2.7` | Via DashScope routing |
286
+ | GLM (DashScope) | `qwen/glm-5` | |
287
+ | Kimi (Moonshot) | `moonshot/kimi-k2.5` | `fallback_base_url` for China/global auto-detect |
288
+ | Anthropic | `anthropic/claude-haiku-4-5` | No `base_url` needed |
289
+ | OpenAI | `openai/gpt-4o` | |
290
+ | Ollama (local) | `ollama/qwen3:4b` | Free, runs offline, no API key |
291
+ | Any litellm provider | — | [full list →](https://litellm.ai) |
292
+
293
+ **Dual-endpoint auto-detection** — set `fallback_base_url` for providers with regional endpoints (e.g. Moonshot global vs China). agentji probes both on first use and caches the result.
294
+
295
+ ```yaml
296
+ providers:
297
+   moonshot:
298
+     api_key: ${MOONSHOT_API_KEY}
299
+     base_url: https://api.moonshot.ai/v1
300
+     fallback_base_url: https://api.moonshot.cn/v1 # auto-probed on first use
301
+ ```
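The probe-and-cache flow can be sketched as follows; `resolve_base_url` and the injected `probe` callable are hypothetical names for illustration, so the logic can be shown without real network access:

```python
# Sketch of the dual-endpoint probe-and-cache behaviour described above.
# agentji's actual probing code will differ; `probe` is injected here so
# the selection logic stays self-contained and offline.
_endpoint_cache = {}

def resolve_base_url(provider, base_url, fallback_base_url, probe):
    """Pick the first reachable endpoint and cache the choice per provider."""
    if provider in _endpoint_cache:
        return _endpoint_cache[provider]
    chosen = base_url if probe(base_url) else fallback_base_url
    _endpoint_cache[provider] = chosen
    return chosen

# Simulate a network where only the China endpoint answers:
url = resolve_base_url(
    "moonshot",
    "https://api.moonshot.ai/v1",
    "https://api.moonshot.cn/v1",
    probe=lambda u: u.endswith(".cn/v1"),
)
print(url)  # -> https://api.moonshot.cn/v1
```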
302
+
303
+ ---
304
+
305
+ ## Roadmap
306
+
307
+ **Shipped**
308
+ - [x] Skill translation (SKILL.md → OpenAI tool schema)
309
+ - [x] skill.yaml sidecar (tool config separate from Anthropic format)
310
+ - [x] Skill converter (auto-generate skill.yaml from scripts via LLM)
311
+ - [x] Prompt skills (Anthropic format, body injected into system prompt)
312
+ - [x] Multi-provider routing via litellm
313
+ - [x] Agentic loop via LangGraph
314
+ - [x] MCP server integration via FastMCP
315
+ - [x] Built-in tools (bash, read_file, write_file)
316
+ - [x] Multi-agent orchestration (call_agent with enum constraint)
317
+ - [x] Per-run RunContext (file-based context handoff between agents)
318
+ - [x] Conversation logging (JSONL, pipeline_id, session_id, daily rotation)
319
+ - [x] Provider endpoint auto-detection + caching
320
+ - [x] agentji serve (OpenAI-compatible HTTP endpoint)
321
+ - [x] agentji Studio (chat UI, pipeline tree, event log)
322
+ - [x] Studio flag (--studio; API-only by default)
323
+ - [x] Stateful / stateless session toggle (per-config and per-request)
324
+ - [x] Skill improvement extraction (post-session, per-skill improvements.jsonl)
325
+ - [x] Consecutive error intervention (stuck detection)
326
+ - [x] Iteration limit banner with Continue / Stop
327
+ - [x] Per-agent tool timeout (tool_timeout in agentji.yaml)
328
+ - [x] Run cancellation (POST /v1/cancel/{run_id})
329
+
330
+ **Coming**
331
+ - [ ] Parallel sub-agent dispatch
332
+ - [ ] Persistent memory (mem0 / Zep)
333
+ - [ ] Plugin system for community skill registries
334
+
335
+ ---
336
+
337
+ ## Why agentji
338
+
339
+ Built for developers working across the global AI ecosystem — for teams where Qwen, Kimi, and local models are first-class requirements, not afterthoughts. If you're locked to one provider because your skills won't port, agentji is the unlock.
340
+
341
+ **机** (jī) — machine, engine. The runtime.
342
+ **集** (jí) — assemble. Skills, models, tools, agents.
343
+ **极** (jí) — ultimate. Any skill on any model.
344
+
345
+ ---
346
+
347
+ ## Contributing
348
+
349
+ Issues and PRs welcome. Adding a skill or a provider integration is the best first PR.
350
+
351
+ ```bash
352
+ pytest # unit tests
353
+ pytest -m integration # requires API keys in .env
354
+ pytest -m local # requires Ollama running locally
355
+ ```
356
+
357
+ ---
358
+
359
+ ## License
360
+
361
+ MIT
@@ -0,0 +1,3 @@
1
+ """agentji — universal configuration and execution layer for AI agents."""
2
+
3
+ __version__ = "0.10.0"