mcp-agent-foundry 1.3.0 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +118 -51
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -5,7 +5,7 @@
  [![MIT License](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
  [![Node.js >=20](https://img.shields.io/badge/Node.js-%3E%3D20-brightgreen.svg)](https://nodejs.org)
  [![npm version](https://img.shields.io/npm/v/mcp-agent-foundry.svg)](https://www.npmjs.com/package/mcp-agent-foundry)
- [![Tests](https://img.shields.io/badge/tests-131%20passing-brightgreen.svg)](tests/)
+ [![Tests](https://img.shields.io/badge/tests-passing-brightgreen.svg)](tests/)

  Agent Foundry extends Claude Code with multi-provider AI orchestration and git worktree isolation. Each coding agent receives its own isolated worktree - a separate checkout on a dedicated branch - enabling true parallel development without file conflicts. Get second opinions from external AI models without leaving Claude Code.

@@ -13,6 +13,8 @@ Agent Foundry extends Claude Code with multi-provider AI orchestration and git w

  ## Features

+ - **Zero-Config Setup** - Auto-registers with Claude Code on npm install
+ - **13+ AI Providers** - From budget (DeepSeek, Groq) to premium (OpenAI, Anthropic)
  - **Multi-Provider AI Orchestration** - Route tasks to Anthropic, OpenAI, Google Gemini, DeepSeek, Z.AI (GLM-4), Kimi, or local Ollama models
  - **Git Worktree Isolation** - Each coding agent works in an isolated directory on its own branch
  - **Pipeline Execution** - Multi-step DAG workflows with dependency ordering and parallel execution
@@ -33,7 +35,12 @@ Agent Foundry extends Claude Code with multi-provider AI orchestration and git w
  npm install -g mcp-agent-foundry
  ```

- ### Setup
+ That's it! Agent Foundry automatically registers with Claude Code during installation.
+ Restart Claude Code to activate.
+
+ To verify: Run `/mcp` in Claude Code - you should see `agent-foundry · ✓ connected`
+
+ ### Initial Setup

  ```bash
  agent-foundry setup
@@ -45,34 +52,6 @@ This interactive wizard will:
  3. Set up agent roles
  4. Create your configuration file

- ### Connect to Claude Code
-
- Add to your Claude Code configuration (`~/.claude/claude_desktop_config.json`):
-
- ```json
- {
-   "mcpServers": {
-     "agent-foundry": {
-       "command": "agent-foundry",
-       "args": ["start"]
-     }
-   }
- }
- ```
-
- Or specify the full path:
-
- ```json
- {
-   "mcpServers": {
-     "agent-foundry": {
-       "command": "node",
-       "args": ["/path/to/agent-foundry/dist/index.js"]
-     }
-   }
- }
- ```
-
  ---

  ## How It Works
@@ -125,6 +104,58 @@ Or specify the full path:

  ---

+ ## Intelligent Failover System
+
+ Agent Foundry includes automatic provider failover with health tracking:
+
+ ```
+          ┌─────────────────┐
+          │ Request to Role │
+          └────────┬────────┘
+                   │
+                   ▼
+          ┌──────────────────┐
+          │ Primary Provider │
+          │ (e.g. DeepSeek)  │
+          └────────┬─────────┘
+                   │
+       ┌───────────┴───────────┐
+       │                       │
+   ✓ Success          ✗ Error (429, 500, etc.)
+       │                       │
+       ▼                       ▼
+  Return Result       ┌─────────────────┐
+                      │ Health Tracker  │
+                      │ marks unhealthy │
+                      └────────┬────────┘
+                               │
+                               ▼
+                      ┌─────────────────┐
+                      │ Circuit Breaker │
+                      │  checks state   │
+                      └────────┬────────┘
+                               │
+                               ▼
+                      ┌──────────────────────┐
+                      │    Fallback Chain    │
+                      │ (OpenAI → Anthropic) │
+                      └──────────┬───────────┘
+                                 │
+                                 ▼
+                           Next Provider
+ ```
+
+ ### Key Features
+
+ - **Automatic Retry** - Exponential backoff on transient errors
+ - **Health Tracking** - Monitors provider success/failure rates
+ - **Cooldown Periods** - Unhealthy providers are temporarily skipped
+ - **Circuit Breakers** - Prevents cascading failures
+ - **Cost-Aware Routing** - Optionally prefer cheaper providers
+ - **Configurable Error Codes** - Define which errors trigger failover
+
+ ---
+
  ## MCP Tools

  | Tool | Description |
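The retry-then-failover flow added in this hunk can be sketched in TypeScript. This is an illustrative sketch only, not the package's actual orchestrator: the `Provider` type, `callWithFailover`, and the retry parameters are hypothetical, and health tracking, cooldowns, and circuit breaking are omitted for brevity.

```typescript
// Hypothetical shape of a provider in the fallback chain.
type Provider = {
  name: string;
  call: (prompt: string) => Promise<string>;
};

// Try each provider in order; retry transient failures with exponential
// backoff before falling through to the next provider in the chain.
async function callWithFailover(
  providers: Provider[], // e.g. [deepseek, openai, anthropic]
  prompt: string,
  maxRetries = 2,
): Promise<string> {
  for (const provider of providers) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return await provider.call(prompt);
      } catch {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
      }
    }
    // Retries exhausted: a real implementation would mark this provider
    // unhealthy here before moving on to the next one.
  }
  throw new Error("All providers in the fallback chain failed");
}
```

In a real orchestrator the inner loop would also classify errors (e.g. only fail over on 429/500-style responses) rather than retrying everything.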
@@ -162,19 +193,24 @@ providers:
    openai:
      api_key: ${OPENAI_API_KEY}
      default_model: gpt-4o
-   deepseek:
-     api_key: ${DEEPSEEK_API_KEY}
-     base_url: https://api.deepseek.com
-     default_model: deepseek-reasoner
    anthropic:
      api_key: ${ANTHROPIC_API_KEY}
      default_model: claude-sonnet-4-20250514
+   deepseek:
+     api_key: ${DEEPSEEK_API_KEY}
+     default_model: deepseek-reasoner
    google:
      api_key: ${GOOGLE_API_KEY}
      default_model: gemini-2.5-pro
+   groq:
+     api_key: ${GROQ_API_KEY}
+     default_model: llama-3.3-70b-versatile
+   openrouter:
+     api_key: ${OPENROUTER_API_KEY}
+     default_model: anthropic/claude-3.5-sonnet
    ollama:
      base_url: http://localhost:11434
-     default_model: llama3.2
+     default_model: llama3.3:70b

  roles:
    coder:
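The `${OPENAI_API_KEY}`-style placeholders in the config above imply environment-variable interpolation when the file is loaded. A minimal sketch of how such substitution could work (assumed behavior for illustration, not the package's actual config loader):

```typescript
// Hypothetical helper: expand ${VAR} references in a config string using
// process.env, leaving unknown variables untouched so they stay visible.
function expandEnvVars(raw: string): string {
  return raw.replace(/\$\{(\w+)\}/g, (match, name) => process.env[name] ?? match);
}

// Usage: expandEnvVars("api_key: ${OPENAI_API_KEY}")
```

Leaving unset variables as-is (rather than substituting an empty string) makes missing keys easy to spot in error messages.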
@@ -247,15 +283,28 @@ worktrees:

  ### Environment Variables

- Set your API keys as environment variables:
-
  ```bash
- export OPENAI_API_KEY="sk-..."
+ # Premium Providers
  export ANTHROPIC_API_KEY="sk-ant-..."
- export DEEPSEEK_API_KEY="sk-..."
+ export OPENAI_API_KEY="sk-..."
  export GOOGLE_API_KEY="..."
- export ZAI_API_KEY="..." # Z.AI GLM-4
- export KIMI_API_KEY="..." # Moonshot Kimi
+
+ # Budget-Friendly Providers
+ export DEEPSEEK_API_KEY="sk-..."
+ export ZAI_API_KEY="..."
+ export KIMI_API_KEY="..."
+ export KIMI_CODE_API_KEY="..." # Kimi Code subscription
+
+ # Router/Aggregator
+ export OPENROUTER_API_KEY="sk-or-..."
+
+ # Fast Inference
+ export GROQ_API_KEY="gsk_..."
+ export TOGETHER_API_KEY="..."
+ export FIREWORKS_API_KEY="..."
+
+ # Web Search
+ export PERPLEXITY_API_KEY="pplx-..."
  ```

  ---
@@ -310,15 +359,23 @@ agent-foundry help # Show help

  ## Supported Providers

- | Provider | Models | Access Mode | Cost |
- |----------|--------|-------------|------|
- | **Anthropic** | Claude Sonnet 4, Claude Opus 4 | API | $$$ |
- | **OpenAI** | GPT-4o, GPT-4 Turbo, o1, o1-mini | API | $$$ |
- | **Google Gemini** | Gemini 2.5 Pro, Gemini 2.5 Flash | API | $$ |
- | **DeepSeek** | DeepSeek R1, DeepSeek Chat | API | $ (cheapest) |
- | **Z.AI** | GLM-4, GLM-4.7 | API | $ |
- | **Kimi** | Moonshot v1 | API | $ |
- | **Ollama** | Llama 3.2, Mistral, CodeLlama, etc. | Local | Free |
+ | Provider | Models | API Key Env Var | Cost |
+ |----------|--------|-----------------|------|
+ | **Anthropic** | Claude Opus 4, Claude Sonnet 4 | `ANTHROPIC_API_KEY` | $$$ |
+ | **OpenAI** | GPT-4o, GPT-4 Turbo, o1, o1-mini, o3-mini | `OPENAI_API_KEY` | $$$ |
+ | **Google Gemini** | Gemini 2.5 Pro, Gemini 2.5 Flash | `GOOGLE_API_KEY` | $$ |
+ | **DeepSeek** | DeepSeek R1, DeepSeek Chat, DeepSeek Coder | `DEEPSEEK_API_KEY` | $ |
+ | **OpenRouter** | Access 100+ models via single API | `OPENROUTER_API_KEY` | Varies |
+ | **Perplexity** | pplx-70b-online, pplx-7b-online (web search) | `PERPLEXITY_API_KEY` | $$ |
+ | **Groq** | Llama 3.3 70B, Mixtral (ultra-fast inference) | `GROQ_API_KEY` | $ |
+ | **Together AI** | Llama, Mistral, CodeLlama, Qwen | `TOGETHER_API_KEY` | $ |
+ | **Fireworks AI** | Llama, Mixtral (optimized inference) | `FIREWORKS_API_KEY` | $ |
+ | **Z.AI** | GLM-4, GLM-4.7 | `ZAI_API_KEY` | $ |
+ | **Kimi (Moonshot)** | moonshot-v1-128k | `KIMI_API_KEY` | $ |
+ | **Kimi Code** | Kimi coding assistant (subscription) | `KIMI_CODE_API_KEY` | Subscription |
+ | **Ollama** | Llama 3.3, Mistral, CodeLlama, Qwen, etc. | N/A (local) | Free |
+
+ **Cost Legend**: $ = Budget, $$ = Moderate, $$$ = Premium

  ---

@@ -412,9 +469,19 @@ agent-foundry/
  │ │ ├── anthropic.ts # Anthropic Claude
  │ │ ├── openai.ts # OpenAI GPT-4o
  │ │ ├── gemini.ts # Google Gemini
+ │ │ ├── deepseek.ts # DeepSeek R1
  │ │ ├── ollama.ts # Local Ollama
  │ │ ├── zai.ts # Z.AI GLM
- │ │ └── kimi.ts # Moonshot Kimi
+ │ │ ├── kimi.ts # Moonshot Kimi
+ │ │ ├── openrouter.ts # OpenRouter (100+ models)
+ │ │ ├── perplexity.ts # Perplexity (web search)
+ │ │ ├── groq.ts # Groq (fast inference)
+ │ │ ├── together.ts # Together AI
+ │ │ └── fireworks.ts # Fireworks AI
+ │ ├── failover/
+ │ │ ├── orchestrator.ts # Retry and failover logic
+ │ │ ├── health-tracker.ts # Provider health monitoring
+ │ │ └── pricing.ts # Cost-aware routing
  │ ├── persistence/ # State persistence
  │ ├── config/ # Configuration management
  │ └── router/ # Routing engine
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mcp-agent-foundry",
-   "version": "1.3.0",
+   "version": "1.3.1",
    "description": "Multi-Agent AI Orchestration with Git Worktree Isolation for Claude Code",
    "type": "module",
    "main": "dist/index.js",