ai-cli-mcp 2.20.0 → 2.21.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +15 -0
- package/README.ja.md +7 -7
- package/README.md +7 -7
- package/dist/app/cli.js +1 -1
- package/dist/cli.js +1 -1
- package/dist/model-catalog.js +5 -11
- package/package.json +2 -2
- package/server.json +3 -3
package/CHANGELOG.md
CHANGED
@@ -1,3 +1,18 @@
+# [2.21.0](https://github.com/mkXultra/ai-cli-mcp/compare/v2.20.1...v2.21.0) (2026-04-23)
+
+
+### Features
+
+* reorganize the Codex model catalog and set gpt-5.4 as the default ([20091b7](https://github.com/mkXultra/ai-cli-mcp/commit/20091b7aa119d3b27605e36a592af176eca740e4))
+* add gpt-5.5 to the Codex models and update the codex-ultra alias ([f3b704f](https://github.com/mkXultra/ai-cli-mcp/commit/f3b704f8051f61a00e66ff7831a07c3d8b67afe6))
+
+## [2.20.1](https://github.com/mkXultra/ai-cli-mcp/compare/v2.20.0...v2.20.1) (2026-04-18)
+
+
+### Bug Fixes
+
+* shorten mcp registry description ([9ca4b56](https://github.com/mkXultra/ai-cli-mcp/commit/9ca4b5647df24651334ad91dfa3af6fc911156c8))
+
 # [2.20.0](https://github.com/mkXultra/ai-cli-mcp/compare/v2.19.0...v2.20.0) (2026-04-18)
 
 
package/README.ja.md
CHANGED
@@ -24,7 +24,7 @@ Editors such as Cursor can struggle with edits and operations that involve complex steps
 - Run OpenCode in non-interactive JSON mode (using `opencode run --format json --dir <workFolder> <prompt>`)
 - Support for multiple AI models:
 - Claude (sonnet, sonnet[1m], opus, opusplan, haiku)
-- Codex (
+- Codex (gpt-5.4, gpt-5.5, gpt-5.4-mini, gpt-5.3-codex, gpt-5.3-codex-spark, gpt-5.2)
 - Gemini (gemini-2.5-pro, gemini-2.5-flash, gemini-3.1-pro-preview, gemini-3-pro-preview, gemini-3-flash-preview)
 - Forge (`forge`)
 - OpenCode (`opencode` plus `oc-<provider/model>` wrappers, e.g. `oc-openai/gpt-5.4`)
@@ -37,7 +37,7 @@ Editors such as Cursor can struggle with edits and operations that involve complex steps
 
 > Launch agents for the following 3 tasks with acm mcp run:
 > 1. Refactor the code in `src/backend` with `sonnet`
-> 2. `gpt-5.
+> 2. Create unit tests for `src/frontend` with `gpt-5.3-codex`
 > 3. Update the documentation in `docs/` with `gemini-2.5-pro`
 >
 > While they run, keep updating the TODO list. When that is done, use the `wait` tool to wait for all of them to finish, then summarize and report the results.
@@ -50,7 +50,7 @@ Editors such as Cursor can struggle with edits and operations that involve complex steps
 > 2. Use the `wait` tool to wait for this step to finish, and get the `session_id` from the result.
 > 3. Using that `session_id`, run the following two tasks in parallel with `acm mcp run`:
 > - Draft a refactoring proposal for `src/utils` with `sonnet`
-> - `gpt-5.
+> - Add an architecture explanation to `README.md` with `gpt-5.3-codex`
 > 4. Finally, `wait` again and summarize both results.
 
 [](https://github.com/mkXultra/ai-cli-mcp/releases/download/v2.11.0/demo-resume-jp.mp4)
@@ -192,7 +192,7 @@ On macOS, you may be asked to grant folder access the first time you run these tools
 ```bash
 ai-cli doctor
 ai-cli models
-ai-cli run --cwd "$PWD" --model
+ai-cli run --cwd "$PWD" --model gpt-5.4 --prompt "run with the default Codex model"
 ai-cli run --cwd "$PWD" --model codex-ultra --prompt "fix failing tests"
 ai-cli run --cwd "$PWD" --model opencode --session-id ses_existing --prompt "continue this OpenCode session"
 ai-cli run --cwd "$PWD" --model oc-openai/gpt-5.4 --prompt "run with an explicit OpenCode model"
@@ -215,7 +215,7 @@ OpenCode model selection accepts the following two forms.
 
 `ai-cli models` exposes OpenCode machine-readably as `opencode: ["opencode"]` plus `dynamicModelBackends.opencode`. Check `opencode models` for the backend-native models that are actually available.
 
-Codex
+Codex model selection uses `gpt-5.4` as the published default model.
 
 `doctor` checks only CLI binary availability and path resolution. The JSON output includes a `checks` block, which marks login state and terms-of-service acceptance as unverified.
 
@@ -259,9 +259,9 @@ Executes a prompt using Claude CLI, Codex CLI, Gemini CLI, Forge CLI, or OpenCode
 - `prompt_file` (string, optional): Path to a file containing the prompt. Either `prompt` or `prompt_file` is required. Can be an absolute path or a path relative to `workFolder`.
 - `workFolder` (string, required): Working directory in which to run the CLI. Must be an absolute path.
 - **Models:**
-- **Ultra aliases:** `claude-ultra` (automatically set to max effort), `codex-ultra` (
+- **Ultra aliases:** `claude-ultra` (automatically set to max effort), `codex-ultra` (`gpt-5.5`, automatically set to xhigh reasoning), `gemini-ultra`
 - Claude: `sonnet`, `sonnet[1m]`, `opus`, `opusplan`, `haiku`
-- Codex: `
+- Codex: `gpt-5.4`, `gpt-5.5`, `gpt-5.4-mini`, `gpt-5.3-codex`, `gpt-5.3-codex-spark`, `gpt-5.2`
 - Gemini: `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-3.1-pro-preview`, `gemini-3-pro-preview`, `gemini-3-flash-preview`
 - Forge: `forge`
 - OpenCode: `opencode` (the configured default model) plus explicit wrappers such as `oc-openai/gpt-5.4`
package/README.md
CHANGED
@@ -24,7 +24,7 @@ This MCP server provides tools that can be used by LLMs to interact with AI CLI
 - Execute Gemini CLI with automatic approval mode (using `-y`)
 - Execute Forge CLI in non-interactive mode (using `forge -C <workFolder> -p <prompt>`)
 - Execute OpenCode in non-interactive JSON mode (using `opencode run --format json --dir <workFolder> <prompt>`)
-- Support multiple AI models: Claude (sonnet, sonnet[1m], opus, opusplan, haiku), Codex (
+- Support multiple AI models: Claude (sonnet, sonnet[1m], opus, opusplan, haiku), Codex (gpt-5.4, gpt-5.5, gpt-5.4-mini, gpt-5.3-codex, gpt-5.3-codex-spark, gpt-5.2), Gemini (gemini-2.5-pro, gemini-2.5-flash, gemini-3.1-pro-preview, gemini-3-pro-preview, gemini-3-flash-preview), Forge (`forge`), and OpenCode (`opencode` plus explicit `oc-<provider/model>` wrappers such as `oc-openai/gpt-5.4`)
 - Manage background processes with PID tracking
 - Parse and return structured outputs from both tools
 
@@ -34,7 +34,7 @@ You can instruct your main agent to run multiple tasks in parallel like this:
 
 > Launch agents for the following 3 tasks using acm mcp run:
 > 1. Refactor `src/backend` code using `sonnet`
-> 2. Create unit tests for `src/frontend` using `gpt-5.
+> 2. Create unit tests for `src/frontend` using `gpt-5.3-codex`
 > 3. Update docs in `docs/` using `gemini-2.5-pro`
 >
 > While they run, please update the TODO list. Once done, use the `wait` tool to wait for all completions and report the results together.
@@ -47,7 +47,7 @@ You can reuse heavy context (like large codebases) using session IDs to save costs
 > 2. Use the `wait` tool to wait for completion and retrieve the `session_id` from the result.
 > 3. Using that `session_id`, run the following two tasks in parallel with `acm mcp run`:
 > - Create refactoring proposals for `src/utils` using `sonnet`
-> - Add architecture documentation to `README.md` using `gpt-5.
+> - Add architecture documentation to `README.md` using `gpt-5.3-codex`
 > 4. Finally, `wait` again to combine both results.
 
 [](https://github.com/mkXultra/ai-cli-mcp/releases/download/v2.11.0/demo-resume.mp4)
@@ -189,7 +189,7 @@ Example flow:
 ```bash
 ai-cli doctor
 ai-cli models
-ai-cli run --cwd "$PWD" --model
+ai-cli run --cwd "$PWD" --model gpt-5.4 --prompt "use the default Codex model"
 ai-cli run --cwd "$PWD" --model codex-ultra --prompt "fix failing tests"
 ai-cli run --cwd "$PWD" --model opencode --session-id ses_existing --prompt "continue this OpenCode session"
 ai-cli run --cwd "$PWD" --model oc-openai/gpt-5.4 --prompt "run with an explicit OpenCode backend model"
@@ -212,7 +212,7 @@ OpenCode model selection accepts either:
 
 `ai-cli models` exposes OpenCode machine-readably via `opencode: ["opencode"]` plus `dynamicModelBackends.opencode`, which points users to `opencode models` for backend-native discovery.
 
-Codex model selection
+Codex model selection uses `gpt-5.4` as the default advertised model.
 
 `doctor` checks only binary availability and path resolution. Its JSON output includes a `checks` block that marks login state and terms acceptance as unchecked.
 
@@ -256,9 +256,9 @@ Executes a prompt using Claude CLI, Codex CLI, Gemini CLI, Forge CLI, or OpenCode
 - `prompt_file` (string, optional): Path to a file containing the prompt. Either `prompt` or `prompt_file` is required. Can be absolute path or relative to `workFolder`.
 - `workFolder` (string, required): The working directory for the CLI execution. Must be an absolute path.
 **Models:**
-- **Ultra Aliases:** `claude-ultra` (defaults to max effort), `codex-ultra` (defaults to xhigh reasoning), `gemini-ultra`
+- **Ultra Aliases:** `claude-ultra` (defaults to max effort), `codex-ultra` (`gpt-5.5`, defaults to xhigh reasoning), `gemini-ultra`
 - Claude: `sonnet`, `sonnet[1m]`, `opus`, `opusplan`, `haiku`
-- Codex: `
+- Codex: `gpt-5.4`, `gpt-5.5`, `gpt-5.4-mini`, `gpt-5.3-codex`, `gpt-5.3-codex-spark`, `gpt-5.2`
 - Gemini: `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-3.1-pro-preview`, `gemini-3-pro-preview`, `gemini-3-flash-preview`
 - Forge: `forge`
 - OpenCode: `opencode` for the configured default backend model, plus explicit wrappers like `oc-openai/gpt-5.4`
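The README above maps each model name to a backing CLI: Claude, Codex, and Gemini names route to their respective CLIs, `forge` and `opencode` are literal, and `oc-<provider/model>` wrappers select an explicit OpenCode backend model. As a rough illustration of that routing (this is a sketch, not the package's actual code — `routeModel` and its return shape are hypothetical; only the model lists come from the README):

```javascript
// Model lists copied from the README above.
const CLAUDE = ['sonnet', 'sonnet[1m]', 'opus', 'opusplan', 'haiku'];
const CODEX = ['gpt-5.4', 'gpt-5.5', 'gpt-5.4-mini', 'gpt-5.3-codex', 'gpt-5.3-codex-spark', 'gpt-5.2'];
const GEMINI = ['gemini-2.5-pro', 'gemini-2.5-flash', 'gemini-3.1-pro-preview', 'gemini-3-pro-preview', 'gemini-3-flash-preview'];

// Hypothetical routing helper; the real dispatch logic is not shown in this diff.
function routeModel(model) {
  if (model.startsWith('oc-')) {
    // Explicit OpenCode wrapper such as `oc-openai/gpt-5.4`.
    return { agent: 'opencode', backendModel: model.slice(3) };
  }
  if (model === 'opencode') return { agent: 'opencode' };
  if (model === 'forge') return { agent: 'forge' };
  if (CLAUDE.includes(model)) return { agent: 'claude', model };
  if (CODEX.includes(model)) return { agent: 'codex', model };
  if (GEMINI.includes(model)) return { agent: 'gemini', model };
  throw new Error(`unknown model: ${model}`);
}

console.log(routeModel('oc-openai/gpt-5.4')); // { agent: 'opencode', backendModel: 'openai/gpt-5.4' }
console.log(routeModel('gpt-5.4').agent);     // 'codex'
```

Note that `gpt-5.4`, the new default, routes to Codex like any other catalog entry; only the ultra aliases carry extra defaults such as reasoning effort.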
package/dist/app/cli.js
CHANGED
@@ -26,7 +26,7 @@ Options:
   --cwd <path>                 Working directory
   --prompt <text>              Prompt text
   --prompt-file <path>         Path to a prompt file
-  --model <model>              Model name or alias (e.g. sonnet, claude-ultra, gpt-5.
+  --model <model>              Model name or alias (e.g. sonnet, claude-ultra, gpt-5.3-codex, codex-ultra, gemini-2.5-pro, gemini-ultra, forge, opencode, oc-openai/gpt-5.4)
   --session-id <id>            Resume a previous session, including OpenCode in-place resumes
   --reasoning-effort <level>   Reasoning level for Claude/Codex only; unsupported for Gemini, Forge, and OpenCode
   --help, -h                   Show this help message
package/dist/cli.js
CHANGED
@@ -35,7 +35,7 @@ function parseArgs(argv) {
 const USAGE = `Usage: npm run -s cli.run -- --model <model> --workFolder <path> --prompt "..." [options]
 
 Options:
-  --model         Model name or alias (e.g. sonnet, opus, gpt-5.
+  --model         Model name or alias (e.g. sonnet, opus, gpt-5.3-codex, gemini-2.5-pro, forge, opencode, oc-openai/gpt-5.4)
   --workFolder    Working directory (absolute path)
   --prompt        Prompt string (mutually exclusive with --prompt_file)
   --prompt_file   Path to a file containing the prompt
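The USAGE string above documents `--model`, `--workFolder`, `--prompt`, and `--prompt_file` as value-taking flags, with the last two mutually exclusive. The real `parseArgs` in `dist/cli.js` is not shown in this diff, so the following is only an illustrative sketch of how such flags might be collected from `argv`:

```javascript
// Illustrative flag parser for the options listed in USAGE above;
// NOT the package's actual parseArgs implementation.
function parseArgs(argv) {
  const known = ['--model', '--workFolder', '--prompt', '--prompt_file'];
  const opts = {};
  for (let i = 0; i < argv.length; i++) {
    if (known.includes(argv[i])) {
      opts[argv[i].slice(2)] = argv[i + 1]; // strip the leading "--"
      i++; // skip the flag's value
    }
  }
  // --prompt and --prompt_file are documented as mutually exclusive.
  if (opts.prompt && opts.prompt_file) {
    throw new Error('--prompt and --prompt_file are mutually exclusive');
  }
  return opts;
}

console.log(parseArgs(['--model', 'gpt-5.4', '--workFolder', '/tmp', '--prompt', 'hi']));
// → { model: 'gpt-5.4', workFolder: '/tmp', prompt: 'hi' }
```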
package/dist/model-catalog.js
CHANGED
@@ -1,17 +1,11 @@
 export const CLAUDE_MODELS = ['sonnet', 'sonnet[1m]', 'opus', 'opusplan', 'haiku'];
 export const CODEX_MODELS = [
-  'codex',
   'gpt-5.4',
+  'gpt-5.5',
+  'gpt-5.4-mini',
   'gpt-5.3-codex',
-  'gpt-5.
-  'gpt-5.1-codex-mini',
-  'gpt-5.1-codex-max',
+  'gpt-5.3-codex-spark',
   'gpt-5.2',
-  'gpt-5.1',
-  'gpt-5.1-codex',
-  'gpt-5-codex',
-  'gpt-5-codex-mini',
-  'gpt-5',
 ];
 export const GEMINI_MODELS = [
   'gemini-2.5-pro',
@@ -24,12 +18,12 @@ export const FORGE_MODELS = ['forge'];
 export const OPENCODE_MODELS = ['opencode'];
 export const MODEL_ALIASES = {
   'claude-ultra': 'opus',
-  'codex-ultra': 'gpt-5.
+  'codex-ultra': 'gpt-5.5',
   'gemini-ultra': 'gemini-3.1-pro-preview',
 };
 export const MODEL_ALIAS_DETAILS = [
   { name: 'claude-ultra', resolvesTo: 'opus', agent: 'claude', defaultReasoningEffort: 'max' },
-  { name: 'codex-ultra', resolvesTo: 'gpt-5.
+  { name: 'codex-ultra', resolvesTo: 'gpt-5.5', agent: 'codex', defaultReasoningEffort: 'xhigh' },
   { name: 'gemini-ultra', resolvesTo: 'gemini-3.1-pro-preview', agent: 'gemini' },
 ];
 export function getSupportedModelsDescription() {
package/package.json
CHANGED
@@ -1,8 +1,8 @@
 {
   "name": "ai-cli-mcp",
-  "version": "2.
+  "version": "2.21.0",
   "mcpName": "io.github.mkXultra/ai-cli-mcp",
-  "description": "
+  "description": "Run Claude, Codex, Gemini, Forge, and OpenCode CLIs through MCP with background jobs",
   "author": "mkXultra",
   "license": "MIT",
   "main": "dist/server.js",
package/server.json
CHANGED
@@ -1,17 +1,17 @@
 {
   "$schema": "https://static.modelcontextprotocol.io/schemas/2025-12-11/server.schema.json",
   "name": "io.github.mkXultra/ai-cli-mcp",
-  "description": "
+  "description": "Run Claude, Codex, Gemini, Forge, and OpenCode CLIs through MCP with background jobs",
   "repository": {
     "url": "https://github.com/mkXultra/ai-cli-mcp",
     "source": "github"
   },
-  "version": "2.
+  "version": "2.21.0",
   "packages": [
     {
       "registryType": "npm",
       "identifier": "ai-cli-mcp",
-      "version": "2.
+      "version": "2.21.0",
       "transport": {
         "type": "stdio"
       }