mcp-agents 0.5.2 → 0.5.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +68 -23
  2. package/package.json +1 -1
  3. package/server.js +59 -20
package/README.md CHANGED

@@ -15,16 +15,29 @@ MCP server that wraps AI CLI tools — [Claude Code](https://docs.anthropic.com/
 
  Only the CLI you select with `--provider` needs to be present.
 
+ ## Install
+
+ ```bash
+ npm install -g mcp-agents
+ ```
+
+ Global install is **strongly recommended** over `npx -y mcp-agents@latest`. The `npx`
+ approach performs a network round-trip on every cold start, which can exceed MCP client
+ connection timeouts and cause "stream disconnected" errors.
+
+ **Tip:** If your project's `.mcp.json` references `mcp-agents`, add `npm install -g mcp-agents`
+ to your setup script (e.g. `bin/setup`) so new developers get it automatically.
+
  ## Quick test
 
  ```bash
  # Default provider (codex)
- npx mcp-agents
+ mcp-agents
 
  # Specific provider
- npx mcp-agents --provider claude
- npx mcp-agents --provider gemini
- npx mcp-agents --provider gemini --sandbox false
+ mcp-agents --provider claude
+ mcp-agents --provider gemini
+ mcp-agents --provider gemini --sandbox false
  ```
 
  The server speaks [JSON-RPC over stdio](https://modelcontextprotocol.io/docs/concepts/transports#stdio). It prints `[mcp-agents] ready (provider: <name>)` to stderr when it's listening.
@@ -44,7 +57,9 @@ Each `--provider` flag maps to a single exposed tool:
  | Parameter | Type | Required | Description |
  |-----------|------|----------|-------------|
  | `prompt` | `string` | yes | The prompt to send to Claude Code |
- | `timeout_ms` | `integer` | no | Timeout in ms (default: 120 000) |
+ | `timeout_ms` | `integer` | no | Timeout in ms (default: 300 000 / 5 minutes) |
+
+ Any additional `tools/call` arguments are ignored (for example `model` or `model_reasoning_effort`).
 
  ### `gemini` parameters
 
@@ -52,68 +67,98 @@ Each `--provider` flag maps to a single exposed tool:
  |-----------|------|----------|-------------|
  | `prompt` | `string` | yes | The prompt to send to Gemini CLI |
  | `sandbox` | `boolean` | no | Run in sandbox mode (`-s` flag, default: false) |
- | `timeout_ms` | `integer` | no | Timeout in ms (default: 120 000) |
+ | `timeout_ms` | `integer` | no | Timeout in ms (default: 300 000 / 5 minutes) |
+
+ Any additional `tools/call` arguments are ignored (for example `model` or `model_reasoning_effort`).
 
  ### `codex` (pass-through)
 
  The codex provider passes through to Codex's native MCP server (`codex mcp-server`)
- with configurable flags:
+ using `-c key=value` config overrides:
 
- | CLI Flag | Default | Codex flag |
- |----------|---------|------------|
- | `--model` | `gpt-5.3-codex` | `-m <model>` |
- | `--model_reasoning_effort` | `high` | `-c model_reasoning_effort=<value>` |
+ | CLI Flag | Default | Codex config key |
+ |----------|---------|-----------------|
+ | `--model` | `gpt-5.3-codex` | `model` |
+ | `--model_reasoning_effort` | `high` | `model_reasoning_effort` |
 
- Hardcoded defaults: `-s read-only -a never` (safe for MCP server mode).
+ Hardcoded defaults: `sandbox_mode=read-only`, `approval_policy=never` (safe for MCP server mode).
 
  ## Integration with Claude Code
 
- Add entries to your project's `.mcp.json`:
+ Add entries to your project's `.mcp.json` (requires `npm i -g mcp-agents`):
 
  ```json
  {
    "mcpServers": {
      "codex": {
-       "command": "npx",
-       "args": ["-y", "mcp-agents@latest", "--provider", "codex"]
+       "command": "mcp-agents",
+       "args": ["--provider", "codex"]
      },
      "gemini": {
-       "command": "npx",
-       "args": ["-y", "mcp-agents@latest", "--provider", "gemini", "--sandbox", "false"]
+       "command": "mcp-agents",
+       "args": ["--provider", "gemini", "--sandbox", "false"]
      }
    }
  }
  ```
 
- Override codex defaults:
+ Override codex defaults at server startup (not via `tools/call` arguments):
+
+ ```json
+ {
+   "mcpServers": {
+     "codex": {
+       "command": "mcp-agents",
+       "args": ["--provider", "codex", "--model", "gpt-5.3-codex", "--model_reasoning_effort", "medium"]
+     }
+   }
+ }
+ ```
+
+ <details>
+ <summary>Alternative: using npx (slower, not recommended)</summary>
 
  ```json
  {
    "mcpServers": {
      "codex": {
        "command": "npx",
-       "args": ["-y", "mcp-agents@latest", "--provider", "codex", "--model", "o3-pro", "--model_reasoning_effort", "medium"]
+       "args": ["-y", "mcp-agents@latest", "--provider", "codex"]
      }
    }
  }
  ```
 
+ > **Warning:** `npx -y mcp-agents@latest` performs a network round-trip on every cold
+ > start (~70s), which can exceed MCP client connection timeouts.
+
+ </details>
+
  ## Integration with OpenAI Codex
 
  Add two entries to `~/.codex/config.toml` — one per provider you want available:
 
  ```toml
  [mcp_servers.claude-code]
- command = "npx"
- args = ["-y", "mcp-agents", "--provider", "claude"]
+ command = "mcp-agents"
+ args = ["--provider", "claude"]
 
  [mcp_servers.gemini]
- command = "npx"
- args = ["-y", "mcp-agents", "--provider", "gemini", "--sandbox", "false"]
+ command = "mcp-agents"
+ args = ["--provider", "gemini", "--sandbox", "false"]
  ```
 
  Then in a Codex session you can call the `claude_code` or `gemini` tools, which shell out to the respective CLIs.
 
+ ## Development
+
+ ```bash
+ npm install
+ npm link # symlinks mcp-agents to your local server.js
+ ```
+
+ After `npm link`, any edits to `server.js` take effect immediately — no reinstall needed.
+
  ## How it works
 
  1. An MCP client connects over stdio
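The install guidance in this diff rests on how MCP clients start servers: the client spawns the configured `command` with `args` and speaks JSON-RPC over the child's stdio, so a binary already on `PATH` launches immediately, while `npx -y …` first resolves the package against the registry. A minimal sketch of that spawn step, using `echo` as a stand-in for the `mcp-agents` binary so the snippet runs without installing anything:

```javascript
// Sketch: how an MCP client launches the server named in .mcp.json.
// "echo" is an assumed stand-in for the real mcp-agents binary.
import { spawnSync } from "node:child_process";

const entry = { command: "echo", args: ["--provider", "codex"] };
const result = spawnSync(entry.command, entry.args, { encoding: "utf8" });
console.log(result.stdout.trim()); // "--provider codex"
```

A real client would use async `spawn` and keep the child's stdin/stdout open for the JSON-RPC session; `spawnSync` is used here only to keep the sketch short.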
package/package.json CHANGED

@@ -1,6 +1,6 @@
  {
    "name": "mcp-agents",
-   "version": "0.5.2",
+   "version": "0.5.4",
    "description": "MCP server that wraps AI CLI tools (Claude Code, Gemini CLI, Codex CLI) for use by any MCP client",
    "type": "module",
    "bin": {
package/server.js CHANGED

@@ -29,7 +29,8 @@ const CLI_BACKENDS = {
    claude: {
      command: "claude",
      toolName: "claude_code",
-     description: "Run Claude Code CLI with a prompt (via stdin).",
+     description:
+       "Run Claude Code CLI with a prompt (via stdin). Supports prompt + optional timeout_ms only; other arguments are ignored.",
      stdinPrompt: true,
      buildArgs: () => ["--no-session-persistence", "-p"],
      extraProperties: {},
@@ -37,7 +38,8 @@ const CLI_BACKENDS = {
    gemini: {
      command: "gemini",
      toolName: "gemini",
-     description: "Run Gemini CLI (gemini -p) with a prompt.",
+     description:
+       "Run Gemini CLI (gemini -p) with a prompt. Supports prompt + optional timeout_ms/sandbox only; other arguments are ignored.",
      stdinPrompt: false,
      buildArgs: (prompt, opts) => {
        const args = [];
@@ -92,8 +94,8 @@ Usage: mcp-agents [options]
 
  Options:
    --provider <name>             CLI backend to use (${providers}) [default: codex]
-   --model <model>               Model to use (codex) [default: gpt-5.3-codex]
-   --model_reasoning_effort <e>  Reasoning effort (codex) [default: high]
+   --model <model>               Codex model [default: gpt-5.3-codex]
+   --model_reasoning_effort <e>  Codex reasoning effort [default: high]
    --sandbox <bool>              Gemini sandbox mode (true/false) [default: false]
    --timeout <seconds>           Default timeout per call [default: 300]
    --help, -h                    Show this help message
@@ -274,28 +276,47 @@ function runCli(command, args, opts = {}) {
   */
  function runCodexPassthrough({ model, modelReasoningEffort }) {
    const args = [
-     "-m",
-     model || "gpt-5.3-codex",
-     "-s",
-     "read-only",
-     "-a",
-     "never",
-     "-c",
-     `model_reasoning_effort=${modelReasoningEffort || "high"}`,
      "mcp-server",
+     "-c", `model=${model || "gpt-5.3-codex"}`,
+     "-c", "sandbox_mode=read-only",
+     "-c", "approval_policy=never",
+     "-c", `model_reasoning_effort=${modelReasoningEffort || "high"}`,
    ];
 
    logErr(`[mcp-agents] passthrough: codex ${args.join(" ")}`);
 
-   const child = spawn("codex", args, { stdio: "inherit" });
+   const child = spawn("codex", args, {
+     stdio: ["inherit", "inherit", "pipe"],
+   });
+
+   child.stderr.on("data", (chunk) => {
+     logErr(`[codex] ${chunk.toString().trimEnd()}`);
+   });
+
+   const SIGNAL_CODES = { SIGHUP: 1, SIGINT: 2, SIGTERM: 15 };
+   for (const sig of ["SIGTERM", "SIGINT", "SIGHUP"]) {
+     process.once(sig, () => {
+       child.kill(sig);
+       setTimeout(() => {
+         child.kill("SIGKILL");
+         process.exit(128 + SIGNAL_CODES[sig]);
+       }, 5000).unref();
+     });
+   }
 
    child.on("error", (err) => {
      logErr(`[mcp-agents] failed to start codex: ${err.message}`);
      process.exitCode = 1;
    });
 
-   child.on("exit", (code) => {
-     process.exitCode = code ?? 1;
+   child.on("exit", (code, signal) => {
+     if (signal) {
+       logErr(`[mcp-agents] codex killed by ${signal}`);
+       process.exitCode = 128 + (SIGNAL_CODES[signal] ?? 0);
+     } else {
+       if (code !== 0) logErr(`[mcp-agents] codex exited with code ${code}`);
+       process.exitCode = code ?? 1;
+     }
    });
  }
 
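The exit handling added in this hunk follows the Unix convention of exiting with 128 plus the signal number when a process dies from a signal, so SIGTERM maps to 143 and SIGINT to 130. A standalone sketch of that mapping, reusing the same `SIGNAL_CODES` table as the diff:

```javascript
// 128 + N exit-code convention for signal-terminated children.
// The table mirrors the SIGNAL_CODES object introduced in the diff.
const SIGNAL_CODES = { SIGHUP: 1, SIGINT: 2, SIGTERM: 15 };

function exitCodeFor(signal) {
  // Unknown signals fall back to 128, still flagging an abnormal exit.
  return 128 + (SIGNAL_CODES[signal] ?? 0);
}

console.log(exitCodeFor("SIGTERM")); // 143
console.log(exitCodeFor("SIGINT")); // 130
```

The `setTimeout(...).unref()` escalation to SIGKILL after 5 seconds keeps the parent from hanging if codex ignores the forwarded signal, and `unref()` lets the parent exit sooner if the child dies first.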
@@ -333,7 +354,7 @@ async function main() {
    const properties = {
      prompt: {
        type: "string",
-       description: `Prompt for ${backend.command}`,
+       description: `Prompt for ${backend.command}. Unsupported extra arguments are ignored.`,
      },
      timeout_ms: {
        type: "integer",
@@ -360,7 +381,7 @@ async function main() {
      description: backend.description,
      inputSchema: {
        type: "object",
-       additionalProperties: false,
+       additionalProperties: true,
        properties,
        required: ["prompt"],
      },
@@ -385,8 +406,26 @@ async function main() {
      };
    }
 
-   const prompt = toStringArg(params.arguments?.prompt);
-   const timeoutMsRaw = params.arguments?.timeout_ms;
+   const rawArgs =
+     params.arguments && typeof params.arguments === "object"
+       ? params.arguments
+       : {};
+   const allowedArgKeys = new Set([
+     "prompt",
+     "timeout_ms",
+     ...Object.keys(backend.extraProperties),
+   ]);
+   const ignoredArgKeys = Object.keys(rawArgs).filter(
+     (key) => !allowedArgKeys.has(key),
+   );
+   if (ignoredArgKeys.length > 0) {
+     logErr(
+       `[mcp-agents] tools/call: ignoring unsupported args: ${ignoredArgKeys.join(", ")}`,
+     );
+   }
+
+   const prompt = toStringArg(rawArgs.prompt);
+   const timeoutMsRaw = rawArgs.timeout_ms;
    const timeoutMs = Number.isInteger(timeoutMsRaw)
      ? timeoutMsRaw
      : effectiveTimeout;
@@ -405,7 +444,7 @@ async function main() {
 
    const extraOpts = {};
    for (const key of Object.keys(backend.extraProperties)) {
-     extraOpts[key] = params.arguments?.[key] ?? backend.extraProperties[key].default;
+     extraOpts[key] = rawArgs[key] ?? backend.extraProperties[key].default;
    }
 
    const cliArgs = backend.stdinPrompt
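The allowed/ignored split introduced in the `tools/call` handler can be exercised in isolation. The sketch below reproduces the same `Set`-based filter with a hypothetical gemini-style backend entry (the `backend` and `rawArgs` objects are illustrative, not taken from the package):

```javascript
// Reproduces the unsupported-argument filter from the diff.
// `backend` is a hypothetical stand-in for a gemini-style entry.
const backend = {
  extraProperties: { sandbox: { type: "boolean", default: false } },
};
const rawArgs = { prompt: "hi", sandbox: true, model: "o3", timeout_ms: 1000 };

const allowedArgKeys = new Set([
  "prompt",
  "timeout_ms",
  ...Object.keys(backend.extraProperties),
]);
const ignoredArgKeys = Object.keys(rawArgs).filter(
  (key) => !allowedArgKeys.has(key),
);

console.log(ignoredArgKeys.join(",")); // "model"
```

Paired with the schema change to `additionalProperties: true`, this makes the server tolerant of clients that send extra keys like `model`: they get a stderr warning instead of a hard validation error.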