@elvatis_com/openclaw-cli-bridge-elvatis 0.2.2 → 0.2.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +135 -28
- package/SKILL.md +37 -9
- package/index.ts +237 -42
- package/openclaw.plugin.json +1 -1
- package/package.json +1 -1
- package/src/cli-runner.ts +87 -90
package/README.md
CHANGED
@@ -1,83 +1,190 @@
 # openclaw-cli-bridge-elvatis
 
-> OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers.
+> OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching.
+
+**Current version:** `0.2.2`
+
+---
 
 ## What it does
 
-
+### Phase 1 — Auth bridge (`openai-codex`)
+Registers the `openai-codex` provider by reading OAuth tokens already stored by the Codex CLI (`~/.codex/auth.json`). No re-login needed.
+
+### Phase 2 — Request bridge (local proxy)
+Starts a local OpenAI-compatible HTTP proxy on `127.0.0.1:31337` and configures OpenClaw's `vllm` provider to route calls through `gemini` and `claude` CLI subprocesses.
 
-
+Prompt delivery: always via **stdin** (not CLI args) — avoids `E2BIG` for long sessions. Each message batch is truncated to the last 20 messages + system message (configurable in `src/cli-runner.ts`).
 
 | Model reference | CLI invoked |
 |---|---|
-| `vllm/cli-gemini/gemini-2.5-pro` | `gemini -m gemini-2.5-pro
-| `vllm/cli-gemini/gemini-2.5-flash` | `gemini -m gemini-2.5-flash
-| `vllm/cli-
-| `vllm/cli-claude/claude-sonnet-4-6` | `claude -p -
+| `vllm/cli-gemini/gemini-2.5-pro` | `gemini -m gemini-2.5-pro @<tmpfile>` |
+| `vllm/cli-gemini/gemini-2.5-flash` | `gemini -m gemini-2.5-flash @<tmpfile>` |
+| `vllm/cli-gemini/gemini-3-pro` | `gemini -m gemini-3-pro @<tmpfile>` |
+| `vllm/cli-claude/claude-sonnet-4-6` | `claude -p --output-format text --model claude-sonnet-4-6` (stdin) |
+| `vllm/cli-claude/claude-opus-4-6` | `claude -p --output-format text --model claude-opus-4-6` (stdin) |
+| `vllm/cli-claude/claude-haiku-4-5` | `claude -p --output-format text --model claude-haiku-4-5` (stdin) |
+
+### Phase 3 — Slash commands
+Six plugin-registered commands for instant model switching (no agent invocation needed):
+
+| Command | Switches to |
+|---|---|
+| `/cli-sonnet` | `vllm/cli-claude/claude-sonnet-4-6` |
+| `/cli-opus` | `vllm/cli-claude/claude-opus-4-6` |
+| `/cli-haiku` | `vllm/cli-claude/claude-haiku-4-5` |
+| `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` |
+| `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` |
+| `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro` |
+
+All commands require `requireAuth: true` — only authorized/owner senders can execute them. Each command calls `openclaw models set <model>` via `api.runtime.system.runCommandWithTimeout` and replies with a confirmation.
+
+---
 
 ## Requirements
 
-- [OpenClaw](https://openclaw.ai) gateway
+- [OpenClaw](https://openclaw.ai) gateway (tested with `2026.3.x`)
 - One or more of:
   - [`@openai/codex`](https://github.com/openai/codex) — `npm i -g @openai/codex` + `codex login`
   - [`@google/gemini-cli`](https://github.com/google-gemini/gemini-cli) — `npm i -g @google/gemini-cli` + `gemini auth`
   - [`@anthropic-ai/claude-code`](https://github.com/anthropic-ai/claude-code) — `npm i -g @anthropic-ai/claude-code` + `claude auth`
 
+---
+
 ## Installation
 
 ```bash
-#
+# From ClawHub
 clawhub install openclaw-cli-bridge-elvatis
 
-# Or
+# Or from workspace (development / local path)
 # Add to ~/.openclaw/openclaw.json:
-# plugins.load.paths: ["
-# plugins.allow: ["openclaw-cli-bridge-elvatis"]
+# plugins.load.paths: ["~/.openclaw/workspace/openclaw-cli-bridge-elvatis"]
 # plugins.entries.openclaw-cli-bridge-elvatis: { "enabled": true }
 ```
 
-
+---
+
+## Setup
+
+### 1. Enable + restart
 
-
+```bash
+# In ~/.openclaw/openclaw.json → plugins.entries:
+"openclaw-cli-bridge-elvatis": { "enabled": true }
+
+openclaw gateway restart
+```
+
+### 2. Register Codex auth (Phase 1, optional)
 
 ```bash
 openclaw models auth login --provider openai-codex
 # Select: "Codex CLI (existing login)"
 ```
 
-
+### 3. Verify proxy (Phase 2)
+
+On startup the plugin auto-patches `openclaw.json` with the `vllm` provider config (port `31337`) and logs:
+
+```
+[cli-bridge] proxy ready — vllm/cli-gemini/* and vllm/cli-claude/* available
+[cli-bridge] registered 6 slash commands: /cli-sonnet, /cli-opus, /cli-haiku, /cli-gemini, /cli-gemini-flash, /cli-gemini3
+```
+
+### 4. Switch models (Phase 3)
+
+Use any `/cli-*` command from any connected channel:
+
+```
+/cli-sonnet
+→ ✅ Switched to Claude Sonnet 4.6 (CLI)
+`vllm/cli-claude/claude-sonnet-4-6`
+```
+
+---
 
 ## Configuration
 
-
+In `~/.openclaw/openclaw.json` → `plugins.entries.openclaw-cli-bridge-elvatis.config`:
 
 ```json5
 {
-  "enableCodex": true,
-  "enableProxy": true,
-  "proxyPort": 31337,
-  "proxyApiKey": "cli-bridge", // key
-  "proxyTimeoutMs": 120000
+  "enableCodex": true,         // register openai-codex from Codex CLI auth (default: true)
+  "enableProxy": true,         // start local CLI proxy server (default: true)
+  "proxyPort": 31337,          // proxy port (default: 31337)
+  "proxyApiKey": "cli-bridge", // key between OpenClaw vllm provider and proxy (default: "cli-bridge")
+  "proxyTimeoutMs": 120000     // CLI subprocess timeout in ms (default: 120s)
 }
 ```
 
+---
+
 ## Architecture
 
 ```
 OpenClaw agent
   │
-  ├─ openai-codex/*
+  ├─ openai-codex/* ──► OpenAI API (auth via ~/.codex/auth.json OAuth tokens)
   │
   └─ vllm/cli-gemini/* ─┐
-     vllm/cli-claude/* ─┤─► openclaw-cli-bridge
-     │                  ├─ cli-gemini/* → gemini -m <model>
-     │                  └─ cli-claude/* → claude -p
-
+     vllm/cli-claude/* ─┤─► localhost:31337 (openclaw-cli-bridge proxy)
+     │                  ├─ cli-gemini/* → gemini -m <model> @<tmpfile>
+     │                  └─ cli-claude/* → claude -p --model <model> ← prompt via stdin
+     └───────────────────────────────────────────────────
+
+Slash commands (bypass agent):
+  /cli-sonnet|opus|haiku|gemini|gemini-flash|gemini3
+    └─► openclaw models set <model> (atomic, ~1s)
 ```
 
-
+---
+
+## Known Issues & Fixes
+
+### `spawn E2BIG` (fixed in v0.2.1)
+
+**Symptom:** `CLI error for cli-claude/…: spawn E2BIG` after ~500+ messages in a session.
+
+**Root cause:** The OpenClaw gateway modifies `process.env` at runtime (OPENCLAW_* vars, session context, etc.). Spreading the full `process.env` into `spawn()` pushes `argv + envp` over Linux's `ARG_MAX` (~2MB).
+
+**Fix:** `buildMinimalEnv()` in `src/cli-runner.ts` — only passes `HOME`, `PATH`, `USER`, and auth keys to the subprocess. Immune to gateway runtime env size.
+
+---
+
+## Development
+
+```bash
+npm run typecheck   # tsc --noEmit
+npm test            # vitest run
+```
+
+Test coverage: `test/cli-runner.test.ts` — unit tests for `formatPrompt` (truncation, system message handling, MAX_MSG_CHARS).
+
+---
+
+## Changelog
+
+### v0.2.2
+- **feat:** Phase 3 — `/cli-*` slash commands for instant model switching
+- All 6 commands registered via `api.registerCommand` with `requireAuth: true`
+- Calls `openclaw models set <model>` via `api.runtime.system.runCommandWithTimeout`
+
+### v0.2.1
+- **fix:** `spawn E2BIG` — use `buildMinimalEnv()` instead of spreading full `process.env`
+- **feat:** Added `test/cli-runner.test.ts` (5 unit tests)
+- Added Gemini 3 Pro model (`vllm/cli-gemini/gemini-3-pro`)
+
+### v0.2.0
+- **feat:** Phase 2 — local OpenAI-compatible proxy server
+- Prompt via stdin/tmpfile (never as CLI arg) to prevent arg-size issues
+- `MAX_MESSAGES=20` + `MAX_MSG_CHARS=4000` truncation in `formatPrompt`
+- Auto-patch of `openclaw.json` vllm provider config on first start
+
+### v0.1.x
+- Phase 1: Codex CLI OAuth auth bridge
 
-
+---
 
 ## License
 
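The `spawn E2BIG` fix described in the README diff above amounts to handing `spawn()` a hand-picked environment instead of spreading the full `process.env`. A minimal sketch of that idea — the README names `HOME`, `PATH`, `USER`, and "auth keys"; the specific `ANTHROPIC_API_KEY`/`GEMINI_API_KEY` names below are illustrative assumptions, not the package's actual list:

```typescript
// Allow-list of variables a CLI subprocess actually needs. Everything else
// (OPENCLAW_* vars, session context) is dropped, so argv + envp stays far
// below ARG_MAX no matter how large the gateway's runtime env grows.
const KEEP = ["HOME", "PATH", "USER", "ANTHROPIC_API_KEY", "GEMINI_API_KEY"];

function buildMinimalEnv(
  source: Record<string, string | undefined>
): Record<string, string> {
  const env: Record<string, string> = {};
  for (const key of KEEP) {
    const value = source[key];
    if (value !== undefined) env[key] = value;
  }
  return env;
}

// Typical usage: spawn("claude", ["-p"], { env: buildMinimalEnv(process.env) })
const minimal = buildMinimalEnv({
  HOME: "/home/u",
  PATH: "/usr/bin",
  OPENCLAW_SESSION: "x".repeat(1_000_000), // huge runtime var — filtered out
});
```

The key property is that the subprocess env size is bounded by the allow-list, independent of whatever the gateway injects into its own environment at runtime.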
package/SKILL.md
CHANGED
@@ -1,24 +1,52 @@
 ---
 name: openclaw-cli-bridge-elvatis
-description: Bridge local Codex, Gemini, and Claude Code CLIs into OpenClaw
+description: Bridge local Codex, Gemini, and Claude Code CLIs into OpenClaw as vllm model providers. Includes /cli-* slash commands for instant model switching (/cli-sonnet, /cli-opus, /cli-haiku, /cli-gemini, /cli-gemini-flash, /cli-gemini3). E2BIG-safe spawn via minimal env.
 homepage: https://github.com/elvatis/openclaw-cli-bridge-elvatis
 metadata:
   {
     "openclaw":
       {
         "emoji": "🌉",
-        "requires": { "bins": ["openclaw", "
+        "requires": { "bins": ["openclaw", "claude", "gemini"] },
+        "commands": ["/cli-sonnet", "/cli-opus", "/cli-haiku", "/cli-gemini", "/cli-gemini-flash", "/cli-gemini3"]
       }
   }
 ---
 
-# OpenClaw CLI Bridge
+# OpenClaw CLI Bridge
 
-
+Bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as OpenClaw model providers. Three phases:
 
-
-
-- `vllm/cli-gemini/*`
-- `vllm/cli-claude/*`
+## Phase 1 — Codex Auth Bridge
+Registers `openai-codex` provider from existing `~/.codex/auth.json` tokens. No re-login.
 
-
+## Phase 2 — Request Proxy
+Local OpenAI-compatible HTTP proxy (`127.0.0.1:31337`) routes vllm model calls to CLI subprocesses:
+- `vllm/cli-gemini/gemini-2.5-pro` / `gemini-2.5-flash` / `gemini-3-pro`
+- `vllm/cli-claude/claude-sonnet-4-6` / `claude-opus-4-6` / `claude-haiku-4-5`
+
+Prompts go via stdin/tmpfile — never as CLI args (prevents `E2BIG` for long sessions).
+
+## Phase 3 — Slash Commands
+Six instant model-switch commands (authorized senders only):
+
+| Command | Model |
+|---|---|
+| `/cli-sonnet` | `vllm/cli-claude/claude-sonnet-4-6` |
+| `/cli-opus` | `vllm/cli-claude/claude-opus-4-6` |
+| `/cli-haiku` | `vllm/cli-claude/claude-haiku-4-5` |
+| `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` |
+| `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` |
+| `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro` |
+
+Each command runs `openclaw models set <model>` atomically and replies with a confirmation.
+
+## Setup
+
+1. Enable plugin + restart gateway
+2. (Optional) Register Codex auth: `openclaw models auth login --provider openai-codex`
+3. Use `/cli-*` commands to switch models from any channel
+
+See `README.md` for full configuration reference and architecture diagram.
+
+**Version:** 0.2.2
package/index.ts
CHANGED
@@ -15,23 +15,27 @@
  * /cli-gemini → vllm/cli-gemini/gemini-2.5-pro
  * /cli-gemini-flash → vllm/cli-gemini/gemini-2.5-flash
  * /cli-gemini3 → vllm/cli-gemini/gemini-3-pro
+ * /cli-back → restore model that was active before last /cli-* switch
+ * /cli-test [model] → one-shot proxy health check (does NOT switch global model)
  *
  * Provider / model naming:
- *   vllm/cli-gemini/gemini-2.5-pro → `gemini -m gemini-2.5-pro
- *   vllm/cli-claude/claude-opus-4-6 → `claude -p -m claude-opus-4-6 --output-format text
+ *   vllm/cli-gemini/gemini-2.5-pro → `gemini -m gemini-2.5-pro @<tmpfile>`
+ *   vllm/cli-claude/claude-opus-4-6 → `claude -p -m claude-opus-4-6 --output-format text` (stdin)
  */
 
+import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
+import { homedir } from "node:os";
+import { join } from "node:path";
+import http from "node:http";
 import type {
   OpenClawPluginApi,
   ProviderAuthContext,
   ProviderAuthResult,
 } from "openclaw/plugin-sdk";
 
-//
-//
-type
-type PluginCommandContext = Parameters<RegisterCommandParam["handler"]>[0];
-type PluginCommandResult = Awaited<ReturnType<RegisterCommandParam["handler"]>>;
+// OpenClawPluginCommandDefinition is defined in the SDK types but not re-exported
+// by the package — derive it from the registerCommand signature.
+type OpenClawPluginCommandDefinition = Parameters<OpenClawPluginApi["registerCommand"]>[0];
 import { buildOauthProviderAuthResult } from "openclaw/plugin-sdk";
 import {
   DEFAULT_CODEX_AUTH_PATH,
@@ -41,14 +45,19 @@ import {
 import { startProxyServer } from "./src/proxy-server.js";
 import { patchOpencllawConfig } from "./src/config-patcher.js";
 
+// ──────────────────────────────────────────────────────────────────────────────
+// Types derived from SDK (not re-exported by the package)
+// ──────────────────────────────────────────────────────────────────────────────
+type RegisterCommandParam = Parameters<OpenClawPluginApi["registerCommand"]>[0];
+type PluginCommandContext = Parameters<RegisterCommandParam["handler"]>[0];
+type PluginCommandResult = Awaited<ReturnType<RegisterCommandParam["handler"]>>;
+
 // ──────────────────────────────────────────────────────────────────────────────
 // Plugin config type
 // ──────────────────────────────────────────────────────────────────────────────
 interface CliPluginConfig {
-  // Phase 1: auth bridge
   codexAuthPath?: string;
   enableCodex?: boolean;
-  // Phase 2: request proxy
   enableProxy?: boolean;
   proxyPort?: number;
   proxyApiKey?: string;
@@ -59,10 +68,53 @@ const DEFAULT_PROXY_PORT = 31337;
 const DEFAULT_PROXY_API_KEY = "cli-bridge";
 
 // ──────────────────────────────────────────────────────────────────────────────
-//
+// State file — persists the model that was active before the last /cli-* switch
+// Located at ~/.openclaw/cli-bridge-state.json (survives gateway restarts)
+// ──────────────────────────────────────────────────────────────────────────────
+const STATE_FILE = join(homedir(), ".openclaw", "cli-bridge-state.json");
+
+interface CliBridgeState {
+  previousModel: string;
+}
+
+function readState(): CliBridgeState | null {
+  try {
+    return JSON.parse(readFileSync(STATE_FILE, "utf8")) as CliBridgeState;
+  } catch {
+    return null;
+  }
+}
+
+function writeState(state: CliBridgeState): void {
+  try {
+    mkdirSync(join(homedir(), ".openclaw"), { recursive: true });
+    writeFileSync(STATE_FILE, JSON.stringify(state, null, 2) + "\n", "utf8");
+  } catch {
+    // non-fatal — /cli-back will just report no previous model
+  }
+}
+
+// ──────────────────────────────────────────────────────────────────────────────
+// Read the current primary model from openclaw.json
 // ──────────────────────────────────────────────────────────────────────────────
+function readCurrentModel(): string | null {
+  try {
+    const cfg = JSON.parse(
+      readFileSync(join(homedir(), ".openclaw", "openclaw.json"), "utf8")
+    );
+    const m = cfg?.agents?.defaults?.model;
+    if (typeof m === "string") return m;
+    if (typeof m === "object" && m !== null && typeof m.primary === "string")
+      return m.primary;
+    return null;
+  } catch {
+    return null;
+  }
+}
 
-
+// ──────────────────────────────────────────────────────────────────────────────
+// Phase 3: model command table
+// ──────────────────────────────────────────────────────────────────────────────
 const CLI_MODEL_COMMANDS = [
   {
     name: "cli-sonnet",
@@ -102,16 +154,24 @@ const CLI_MODEL_COMMANDS = [
   },
 ] as const;
 
+/** Default model used by /cli-test when no arg is given */
+const CLI_TEST_DEFAULT_MODEL = "cli-claude/claude-sonnet-4-6";
+
 // ──────────────────────────────────────────────────────────────────────────────
-// Helper:
+// Helper: switch global model, saving previous for /cli-back
 // ──────────────────────────────────────────────────────────────────────────────
-
 async function switchModel(
   api: OpenClawPluginApi,
   model: string,
   label: string,
-  _ctx: PluginCommandContext
 ): Promise<PluginCommandResult> {
+  // Save current model BEFORE switching so /cli-back can restore it
+  const current = readCurrentModel();
+  if (current && current !== model) {
+    writeState({ previousModel: current });
+    api.logger.info(`[cli-bridge] saved previous model: ${current}`);
+  }
+
   try {
     const result = await api.runtime.system.runCommandWithTimeout(
       ["openclaw", "models", "set", model],
@@ -126,7 +186,7 @@ async function switchModel(
 
     api.logger.info(`[cli-bridge] switched model → ${model}`);
     return {
-      text: `✅ Switched to
+      text: `✅ Switched to **${label}**\n\`${model}\`\n\nUse \`/cli-back\` to restore previous model.`,
     };
   } catch (err) {
     const msg = (err as Error).message;
@@ -135,17 +195,76 @@ async function switchModel(
   }
 }
 
+// ──────────────────────────────────────────────────────────────────────────────
+// Helper: fire a one-shot test request directly at the proxy (no global switch)
+// ──────────────────────────────────────────────────────────────────────────────
+function proxyTestRequest(
+  port: number,
+  apiKey: string,
+  model: string,
+  timeoutMs: number
+): Promise<string> {
+  return new Promise((resolve, reject) => {
+    const body = JSON.stringify({
+      model,
+      messages: [{ role: "user", content: "Reply with exactly: CLI bridge OK" }],
+      stream: false,
+    });
+
+    const req = http.request(
+      {
+        hostname: "127.0.0.1",
+        port,
+        path: "/v1/chat/completions",
+        method: "POST",
+        headers: {
+          "Content-Type": "application/json",
+          "Authorization": `Bearer ${apiKey}`,
+          "Content-Length": Buffer.byteLength(body),
+        },
+      },
+      (res) => {
+        let data = "";
+        res.on("data", (chunk: Buffer) => { data += chunk.toString(); });
+        res.on("end", () => {
+          try {
+            const parsed = JSON.parse(data) as {
+              choices?: Array<{ message?: { content?: string } }>;
+              error?: { message?: string };
+            };
+            if (parsed.error) {
+              resolve(`Proxy error: ${parsed.error.message}`);
+            } else {
+              resolve(parsed.choices?.[0]?.message?.content?.trim() ?? "(empty response)");
+            }
+          } catch {
+            resolve(`(non-JSON response: ${data.slice(0, 200)})`);
+          }
+        });
+      }
+    );
+
+    req.setTimeout(timeoutMs, () => {
+      req.destroy();
+      reject(new Error(`Proxy test timed out after ${timeoutMs}ms`));
+    });
+    req.on("error", reject);
+    req.write(body);
+    req.end();
+  });
+}
+
 // ──────────────────────────────────────────────────────────────────────────────
 // Plugin definition
 // ──────────────────────────────────────────────────────────────────────────────
 const plugin = {
   id: "openclaw-cli-bridge-elvatis",
   name: "OpenClaw CLI Bridge",
-  version: "0.2.
+  version: "0.2.3",
   description:
-    "Phase 1: openai-codex auth bridge
-    "Phase 2: HTTP proxy
-    "Phase 3: /cli-*
+    "Phase 1: openai-codex auth bridge. " +
+    "Phase 2: HTTP proxy for gemini/claude CLIs. " +
+    "Phase 3: /cli-* model switching, /cli-back restore, /cli-test health check.",
 
   register(api: OpenClawPluginApi) {
     const cfg = (api.pluginConfig ?? {}) as CliPluginConfig;
@@ -176,7 +295,6 @@ const plugin = {
       try {
         const creds = await readCodexCredentials(codexAuthPath);
         spin.stop("Codex CLI credentials loaded");
-
         return buildOauthProviderAuthResult({
           providerId: "openai-codex",
           defaultModel: CODEX_DEFAULT_MODEL,
@@ -212,7 +330,7 @@ const plugin = {
       },
     });
 
-    api.logger.info("[cli-bridge] openai-codex provider registered
+    api.logger.info("[cli-bridge] openai-codex provider registered");
     }
 
     // ── Phase 2: CLI request proxy ─────────────────────────────────────────────
@@ -226,47 +344,124 @@ const plugin = {
       })
       .then(() => {
         api.logger.info(
-          `[cli-bridge] proxy ready — vllm/cli-gemini/* and vllm/cli-claude/* available`
+          `[cli-bridge] proxy ready on :${port} — vllm/cli-gemini/* and vllm/cli-claude/* available`
         );
-
-        // Auto-patch openclaw.json with vllm provider config (once)
         const result = patchOpencllawConfig(port);
         if (result.patched) {
           api.logger.info(
-            `[cli-bridge] openclaw.json patched with vllm provider.
-            `Restart gateway to activate cli-gemini/* and cli-claude/* models.`
+            `[cli-bridge] openclaw.json patched with vllm provider. Restart gateway to activate.`
          );
-        } else {
-          api.logger.info(`[cli-bridge] config check: ${result.reason}`);
         }
       })
      .catch((err: Error) => {
-        api.logger.warn(
-          `[cli-bridge] proxy server failed to start on port ${port}: ${err.message}`
-        );
+        api.logger.warn(`[cli-bridge] proxy failed to start on port ${port}: ${err.message}`);
       });
    }
 
-    // ── Phase
+    // ── Phase 3a: /cli-* model switch commands ─────────────────────────────────
    for (const entry of CLI_MODEL_COMMANDS) {
-      // Capture entry in closure (const iteration variable is stable in TS/ESM)
      const { name, model, description, label } = entry;
-
      api.registerCommand({
        name,
        description,
-        requireAuth: true,
+        requireAuth: true,
        handler: async (ctx: PluginCommandContext): Promise<PluginCommandResult> => {
-          api.logger.info(`[cli-bridge] /${name}
-          return switchModel(api, model, label
+          api.logger.info(`[cli-bridge] /${name} by ${ctx.senderId ?? "?"}`);
+          return switchModel(api, model, label);
        },
-      });
+      } satisfies OpenClawPluginCommandDefinition);
    }
 
-
-
-
-
+    // ── Phase 3b: /cli-back — restore previous model ──────────────────────────
+    api.registerCommand({
+      name: "cli-back",
+      description: "Restore the model that was active before the last /cli-* switch",
+      requireAuth: true,
+      handler: async (ctx: PluginCommandContext): Promise<PluginCommandResult> => {
+        api.logger.info(`[cli-bridge] /cli-back by ${ctx.senderId ?? "?"}`);
+
+        const state = readState();
+        if (!state?.previousModel) {
+          return { text: "ℹ️ No previous model saved. Use `/cli-sonnet` etc. to switch first." };
+        }
+
+        const prev = state.previousModel;
+
+        // Clear the saved state so a second /cli-back doesn't bounce back
+        writeState({ previousModel: "" });
+
+        try {
+          const result = await api.runtime.system.runCommandWithTimeout(
+            ["openclaw", "models", "set", prev],
+            { timeoutMs: 8_000 }
+          );
+
+          if (result.code !== 0) {
+            const err = (result.stderr || result.stdout || "unknown error").trim();
+            return { text: `❌ Failed to restore \`${prev}\`: ${err}` };
+          }
+
+          api.logger.info(`[cli-bridge] /cli-back restored → ${prev}`);
+          return { text: `✅ Restored previous model\n\`${prev}\`` };
+        } catch (err) {
+          return { text: `❌ Error: ${(err as Error).message}` };
+        }
+      },
+    } satisfies OpenClawPluginCommandDefinition);
+
+    // ── Phase 3c: /cli-test — one-shot proxy ping, no global model switch ──────
+    api.registerCommand({
+      name: "cli-test",
+      description: "Test the CLI bridge proxy without switching your active model. Usage: /cli-test [model]",
+      acceptsArgs: true,
+      requireAuth: true,
+      handler: async (ctx: PluginCommandContext): Promise<PluginCommandResult> => {
+        const targetModel = ctx.args?.trim() || CLI_TEST_DEFAULT_MODEL;
+        // Accept short names like "cli-sonnet" or full "vllm/cli-claude/claude-sonnet-4-6"
+        const model = targetModel.startsWith("vllm/")
+          ? targetModel
+          : `vllm/${targetModel}`;
+
+        api.logger.info(`[cli-bridge] /cli-test → ${model} by ${ctx.senderId ?? "?"}`);
+
+        if (!enableProxy) {
+          return { text: "❌ Proxy is disabled (enableProxy: false in config)." };
+        }
+
+        const current = readCurrentModel();
+        const testTimeoutMs = Math.min(timeoutMs, 30_000);
+
+        try {
+          const start = Date.now();
+          const response = await proxyTestRequest(port, apiKey, model, testTimeoutMs);
+          const elapsed = Date.now() - start;
+
+          return {
+            text:
+              `🧪 **CLI Bridge Test**\n` +
+              `Model: \`${model}\`\n` +
+              `Response: _${response}_\n` +
+              `Latency: ${elapsed}ms\n\n` +
+              `Active model unchanged: \`${current ?? "unknown"}\``,
+          };
+        } catch (err) {
+          return {
+            text:
+              `❌ **CLI Bridge Test Failed**\n` +
+              `Model: \`${model}\`\n` +
+              `Error: ${(err as Error).message}\n\n` +
+              `Active model unchanged: \`${current ?? "unknown"}\``,
+          };
        }
+      },
+    } satisfies OpenClawPluginCommandDefinition);
+
+    const allCommands = [
+      ...CLI_MODEL_COMMANDS.map((c) => `/${c.name}`),
+      "/cli-back",
+      "/cli-test",
+    ];
+    api.logger.info(`[cli-bridge] registered ${allCommands.length} commands: ${allCommands.join(", ")}`);
   },
 };
 
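The `/cli-test` and `/cli-back` additions in the `index.ts` diff above hinge on two small pieces: prefixing short model names with the `vllm/` provider, and round-tripping a one-field JSON state file. A self-contained sketch of both (the state-file path is redirected to a temp dir here so the example has no side effects on `~/.openclaw`; the model IDs are illustrative):

```typescript
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Short names like "cli-claude/claude-sonnet-4-6" get the vllm provider
// prefix; fully qualified names pass through unchanged.
function normalizeModel(arg: string): string {
  return arg.startsWith("vllm/") ? arg : `vllm/${arg}`;
}

// State file shape used by /cli-back (per the diff: { previousModel: string }).
interface CliBridgeState {
  previousModel: string;
}

// Temp dir instead of ~/.openclaw so the sketch is side-effect free.
const stateDir = join(tmpdir(), "cli-bridge-demo");
const stateFile = join(stateDir, "cli-bridge-state.json");

function writeState(state: CliBridgeState): void {
  mkdirSync(stateDir, { recursive: true });
  writeFileSync(stateFile, JSON.stringify(state, null, 2) + "\n", "utf8");
}

function readState(): CliBridgeState | null {
  try {
    return JSON.parse(readFileSync(stateFile, "utf8")) as CliBridgeState;
  } catch {
    return null; // missing or corrupt file → no previous model
  }
}

writeState({ previousModel: "example/previous-model" });
const restored = readState();
```

The swallow-errors-and-return-null read mirrors the plugin's behavior: a missing state file is not an error, it just means `/cli-back` has nothing to restore.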
package/openclaw.plugin.json
CHANGED
|
@@ -1,7 +1,7 @@
|
|
|
1
1
|
{
|
|
2
2
|
"id": "openclaw-cli-bridge-elvatis",
|
|
3
3
|
"name": "OpenClaw CLI Bridge",
|
|
4
|
-
"version": "0.2.
|
|
4
|
+
"version": "0.2.4",
|
|
5
5
|
"description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
|
|
6
6
|
"providers": [
|
|
7
7
|
"openai-codex"
|
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-  "version": "0.2.
+  "version": "0.2.4",
   "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
   "type": "module",
   "scripts": {
package/src/cli-runner.ts
CHANGED

@@ -4,15 +4,16 @@
  * Spawns CLI subprocesses (gemini, claude) and captures their output.
  * Input: OpenAI-format messages → formatted prompt string → CLI stdin.
  *
- *
- *
+ * Both Gemini and Claude receive the prompt via stdin to avoid:
+ * - E2BIG (arg list too long) for large conversation histories
+ * - Gemini agentic mode (triggered by @file syntax + workspace cwd)
+ *
+ * Gemini is always spawned with cwd = tmpdir() so it doesn't scan the
+ * workspace and enter agentic mode.
  */
 
 import { spawn } from "node:child_process";
-import {
-import { tmpdir } from "node:os";
-import { join } from "node:path";
-import { randomBytes } from "node:crypto";
+import { tmpdir, homedir } from "node:os";
 
 /** Max messages to include in the prompt sent to the CLI. */
 const MAX_MESSAGES = 20;
@@ -31,7 +32,7 @@ export interface ChatMessage {
 /**
  * Convert OpenAI messages to a single flat prompt string.
  * Truncates to MAX_MESSAGES (keeping the most recent) and MAX_MSG_CHARS per
- * message to avoid
+ * message to avoid oversized payloads.
  */
 export function formatPrompt(messages: ChatMessage[]): string {
   if (messages.length === 0) return "";
@@ -42,7 +43,7 @@ export function formatPrompt(messages: ChatMessage[]): string {
   const recent = nonSystem.slice(-MAX_MESSAGES);
   const truncated = system ? [system, ...recent] : recent;
 
-  //
+  // Single short user message — send bare (no wrapping needed)
   if (truncated.length === 1 && truncated[0].role === "user") {
     return truncateContent(truncated[0].content);
   }
@@ -51,13 +52,10 @@ export function formatPrompt(messages: ChatMessage[]): string {
     .map((m) => {
       const content = truncateContent(m.content);
       switch (m.role) {
-        case "system":
-          return `[System]\n${content}`;
-        case "assistant":
-          return `[Assistant]\n${content}`;
+        case "system": return `[System]\n${content}`;
+        case "assistant": return `[Assistant]\n${content}`;
         case "user":
-        default:
-          return `[User]\n${content}`;
+        default: return `[User]\n${content}`;
       }
     })
     .join("\n\n");
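The `formatPrompt` reflow above keeps the same role-tagging scheme. As a standalone sketch (simplified: the `MAX_MSG_CHARS` truncation and message-window logic are omitted, and `tagMessages` is a hypothetical name, not the package's export):

```typescript
// Simplified sketch of the role-tagging scheme from formatPrompt.
// Per-message truncation (truncateContent / MAX_MSG_CHARS) is omitted here.
type Role = "system" | "user" | "assistant";
interface Msg { role: Role; content: string; }

function tagMessages(messages: Msg[]): string {
  return messages
    .map((m) => {
      switch (m.role) {
        case "system": return `[System]\n${m.content}`;
        case "assistant": return `[Assistant]\n${m.content}`;
        default: return `[User]\n${m.content}`;
      }
    })
    .join("\n\n");
}
```

Each message becomes a `[Role]`-tagged block, joined by blank lines, which is the flat-prompt shape the CLIs receive on stdin.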
@@ -69,40 +67,26 @@ function truncateContent(s: string): string {
 }
 
 // ──────────────────────────────────────────────────────────────────────────────
-//
+// Minimal environment for spawned subprocesses
 // ──────────────────────────────────────────────────────────────────────────────
 
-export interface CliRunResult {
-  stdout: string;
-  stderr: string;
-  exitCode: number;
-}
-
 /**
  * Build a minimal, safe environment for spawning CLI subprocesses.
  *
- * WHY: The OpenClaw gateway
- *
- *
- * ARG_MAX (~2 MB on Linux), causing "spawn E2BIG". Using only the vars that
+ * WHY: The OpenClaw gateway modifies process.env at runtime (OPENCLAW_* vars,
+ * session context, etc.). Spreading the full process.env into spawn() can push
+ * argv+envp over ARG_MAX (~2 MB on Linux) → "spawn E2BIG". Only passing what
  * the CLI tools actually need keeps us well under the limit regardless of
- *
+ * gateway runtime state.
  */
 function buildMinimalEnv(): Record<string, string> {
-  const pick = (key: string)
-
-  const env: Record<string, string> = {
-    NO_COLOR: "1",
-    TERM: "dumb",
-  };
+  const pick = (key: string) => process.env[key];
+  const env: Record<string, string> = { NO_COLOR: "1", TERM: "dumb" };
 
-  // Essential path/identity vars — always include when present.
   for (const key of ["HOME", "PATH", "USER", "LOGNAME", "SHELL", "TMPDIR", "TMP", "TEMP"]) {
     const v = pick(key);
     if (v) env[key] = v;
   }
-
-  // Allow google-auth / claude auth paths to be inherited.
   for (const key of [
     "GOOGLE_APPLICATION_CREDENTIALS",
    "ANTHROPIC_API_KEY",
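The allowlist idea behind `buildMinimalEnv` can be shown in isolation. A minimal sketch, assuming only that `process.env` is the source; `minimalEnv` and its abbreviated variable list are illustrative, not the plugin's full set:

```typescript
// Sketch of the env-allowlist approach: start from fixed safe values and
// copy over only named variables, never spreading the full process.env.
function minimalEnv(allow: string[]): Record<string, string> {
  const env: Record<string, string> = { NO_COLOR: "1", TERM: "dumb" };
  for (const key of allow) {
    const v = process.env[key];
    if (v) env[key] = v;
  }
  return env;
}

const env = minimalEnv(["HOME", "PATH"]);
```

Because the result can only ever contain the fixed values plus the allowlisted keys, the total envp size stays bounded no matter what the parent process accumulates.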
@@ -120,37 +104,56 @@ function buildMinimalEnv(): Record<string, string> {
   return env;
 }
 
+// ──────────────────────────────────────────────────────────────────────────────
+// Core subprocess runner
+// ──────────────────────────────────────────────────────────────────────────────
+
+export interface CliRunResult {
+  stdout: string;
+  stderr: string;
+  exitCode: number;
+}
+
+export interface RunCliOptions {
+  /**
+   * Working directory for the subprocess.
+   * Defaults to homedir() — a neutral dir that won't trigger agentic context scanning.
+   */
+  cwd?: string;
+  timeoutMs?: number;
+}
+
 /**
- * Spawn a CLI and deliver the prompt via stdin
- *
- *
+ * Spawn a CLI and deliver the prompt via stdin.
+ *
+ * cwd defaults to homedir() so CLIs that scan the working directory for
+ * project context (like Gemini) don't accidentally enter agentic mode.
  */
 export function runCli(
   cmd: string,
   args: string[],
   prompt: string,
-  timeoutMs = 120_000
+  timeoutMs = 120_000,
+  opts: RunCliOptions = {}
 ): Promise<CliRunResult> {
+  const cwd = opts.cwd ?? homedir();
+
   return new Promise((resolve, reject) => {
     const proc = spawn(cmd, args, {
       timeout: timeoutMs,
       env: buildMinimalEnv(),
+      cwd,
     });
 
     let stdout = "";
     let stderr = "";
 
-    // Write prompt to stdin then close — prevents the CLI from waiting for more input.
     proc.stdin.write(prompt, "utf8", () => {
       proc.stdin.end();
     });
 
-    proc.stdout.on("data", (d: Buffer) => {
-      stdout += d.toString();
-    });
-    proc.stderr.on("data", (d: Buffer) => {
-      stderr += d.toString();
-    });
+    proc.stdout.on("data", (d: Buffer) => { stdout += d.toString(); });
+    proc.stderr.on("data", (d: Buffer) => { stderr += d.toString(); });
 
     proc.on("close", (code) => {
       resolve({ stdout: stdout.trim(), stderr: stderr.trim(), exitCode: code ?? 0 });
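The stdin-delivery contract that `runCli` implements can be exercised without any AI CLI installed. A sketch under the assumption that a POSIX `cat` is on `PATH` (it echoes stdin back, standing in for `gemini`/`claude`); `run` is a stripped-down hypothetical without the timeout and minimal-env handling:

```typescript
import { spawn } from "node:child_process";

// Stripped-down version of the stdin-delivery pattern: write the prompt to
// the child's stdin, close it so the CLI doesn't wait for more input, and
// collect stdout until the process exits.
function run(cmd: string, args: string[], prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const proc = spawn(cmd, args, { env: { PATH: process.env.PATH ?? "" } });
    let stdout = "";
    proc.stdin.write(prompt, "utf8", () => proc.stdin.end());
    proc.stdout.on("data", (d: Buffer) => { stdout += d.toString(); });
    proc.on("error", reject);
    proc.on("close", () => resolve(stdout));
  });
}

// `cat` simply echoes stdin, so stdout should equal the prompt.
const out = await run("cat", [], "hello from stdin");
```

Because the prompt travels over stdin rather than argv, its size never counts against ARG_MAX, which is the whole point of the design.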
@@ -167,8 +170,19 @@ export function runCli(
 // ──────────────────────────────────────────────────────────────────────────────
 
 /**
- * Run
- *
+ * Run Gemini CLI in headless mode with prompt delivered via stdin.
+ *
+ * WHY stdin (not @file):
+ * The @file syntax (`gemini -p @/tmp/xxx.txt`) triggers Gemini's agentic
+ * mode — it scans the current working directory for project context and
+ * interprets the prompt as a task instruction, not a Q&A. This causes hangs,
+ * wrong answers, and "directory does not exist" errors when run from a
+ * project workspace.
+ *
+ * Gemini CLI: -p "" triggers headless mode; stdin content is the actual prompt
+ * (per Gemini docs: "prompt is appended to input on stdin (if any)").
+ *
+ * cwd = tmpdir() — neutral empty-ish dir, prevents workspace context scanning.
  */
 export async function runGemini(
   prompt: string,
@@ -176,24 +190,22 @@ export async function runGemini(
   timeoutMs: number
 ): Promise<string> {
   const model = stripPrefix(modelId);
-  //
-  const
-
-
-
-
-
-
-
-
-
-  }
-
-  return result.stdout || result.stderr;
-  } finally {
-    try { unlinkSync(tmpFile); } catch { /* ignore */ }
+  // -p "" = headless mode trigger; actual prompt arrives via stdin
+  const args = ["-m", model, "-p", ""];
+  const result = await runCli("gemini", args, prompt, timeoutMs, { cwd: tmpdir() });
+
+  // Filter out [WARN] lines from stderr (Gemini emits noisy permission warnings)
+  const cleanStderr = result.stderr
+    .split("\n")
+    .filter((l) => !l.startsWith("[WARN]") && !l.startsWith("Loaded cached"))
+    .join("\n")
+    .trim();
+
+  if (result.exitCode !== 0 && result.stdout.length === 0) {
+    throw new Error(`gemini exited ${result.exitCode}: ${cleanStderr || "(no output)"}`);
   }
+
+  return result.stdout || cleanStderr;
 }
 
 // ──────────────────────────────────────────────────────────────────────────────
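The stderr cleanup step added in this hunk is a pure string transform, so it can be lifted out and checked on its own. A sketch; `cleanStderr` as a function name is illustrative (in the diff it is a local variable inside `runGemini`):

```typescript
// The stderr cleanup from runGemini as a standalone helper: drop Gemini's
// noisy "[WARN]" and "Loaded cached" lines, keep everything else.
function cleanStderr(stderr: string): string {
  return stderr
    .split("\n")
    .filter((l) => !l.startsWith("[WARN]") && !l.startsWith("Loaded cached"))
    .join("\n")
    .trim();
}
```

Filtering before the exit-code check means a nonzero exit with only warning noise still surfaces a useful "(no output)" error instead of the warnings themselves.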
@@ -201,7 +213,7 @@ export async function runGemini(
 // ──────────────────────────────────────────────────────────────────────────────
 
 /**
- * Run
+ * Run Claude Code CLI in headless mode with prompt delivered via stdin.
  * Strips the model prefix ("cli-claude/claude-opus-4-6" → "claude-opus-4-6").
  */
 export async function runClaude(
@@ -210,24 +222,17 @@ export async function runClaude(
   timeoutMs: number
 ): Promise<string> {
   const model = stripPrefix(modelId);
-  // No prompt argument — deliver via stdin to avoid E2BIG
   const args = [
     "-p",
-    "--output-format",
-    "
-    "--
-    "
-    "--tools",
-    "",
-    "--model",
-    model,
+    "--output-format", "text",
+    "--permission-mode", "plan",
+    "--tools", "",
+    "--model", model,
   ];
   const result = await runCli("claude", args, prompt, timeoutMs);
 
   if (result.exitCode !== 0 && result.stdout.length === 0) {
-    throw new Error(
-      `claude exited ${result.exitCode}: ${result.stderr || "(no output)"}`
-    );
+    throw new Error(`claude exited ${result.exitCode}: ${result.stderr || "(no output)"}`);
   }
 
   return result.stdout;
@@ -238,8 +243,7 @@ export async function runClaude(
 // ──────────────────────────────────────────────────────────────────────────────
 
 /**
- * Route a chat completion
- * Model naming convention:
+ * Route a chat completion to the correct CLI based on model prefix.
  * cli-gemini/<id> → gemini CLI
  * cli-claude/<id> → claude CLI
  */
@@ -250,17 +254,11 @@ export async function routeToCliRunner(
 ): Promise<string> {
   const prompt = formatPrompt(messages);
 
-  if (model.startsWith("cli-gemini/"))
-    return runGemini(prompt, model, timeoutMs);
-  }
-
-  if (model.startsWith("cli-claude/")) {
-    return runClaude(prompt, model, timeoutMs);
-  }
+  if (model.startsWith("cli-gemini/")) return runGemini(prompt, model, timeoutMs);
+  if (model.startsWith("cli-claude/")) return runClaude(prompt, model, timeoutMs);
 
   throw new Error(
-    `Unknown CLI bridge model: "${model}".
-    `Use "cli-gemini/<model>" or "cli-claude/<model>".`
+    `Unknown CLI bridge model: "${model}". Use "cli-gemini/<model>" or "cli-claude/<model>".`
   );
 }
 
@@ -268,7 +266,6 @@ export async function routeToCliRunner(
 // Helpers
 // ──────────────────────────────────────────────────────────────────────────────
 
-/** Strip the "cli-gemini/" or "cli-claude/" prefix from a model ID. */
 function stripPrefix(modelId: string): string {
   const slash = modelId.indexOf("/");
   return slash === -1 ? modelId : modelId.slice(slash + 1);
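The prefix convention the router depends on can be demonstrated directly; the helper below mirrors `stripPrefix` from the hunk above:

```typescript
// "cli-gemini/<id>" and "cli-claude/<id>" select a CLI; stripPrefix recovers
// the bare model ID to pass to the CLI's -m / --model flag. IDs without a
// slash pass through unchanged.
function stripPrefix(modelId: string): string {
  const slash = modelId.indexOf("/");
  return slash === -1 ? modelId : modelId.slice(slash + 1);
}

const bare = stripPrefix("cli-gemini/gemini-2.5-pro");
```

Only the first slash is significant, so model IDs that themselves contain slashes keep everything after the routing prefix.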