@elvatis_com/openclaw-cli-bridge-elvatis 0.2.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,83 +1,190 @@
  # openclaw-cli-bridge-elvatis
 
- > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers.
+ > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching.
+
+ **Current version:** `0.2.3`
+
+ ---
 
  ## What it does
 
- **Phase 1 — Auth bridge:** Registers the `openai-codex` provider using OAuth tokens already stored by the Codex CLI (`~/.codex/auth.json`). No re-login needed.
+ ### Phase 1 — Auth bridge (`openai-codex`)
+ Registers the `openai-codex` provider by reading OAuth tokens already stored by the Codex CLI (`~/.codex/auth.json`). No re-login needed.
+
+ ### Phase 2 — Request bridge (local proxy)
+ Starts a local OpenAI-compatible HTTP proxy on `127.0.0.1:31337` and configures OpenClaw's `vllm` provider to route calls through `gemini` and `claude` CLI subprocesses.
 
- **Phase 2 Request bridge:** Starts a local OpenAI-compatible HTTP proxy server (default port `31337`) and configures OpenClaw's `vllm` provider to route model calls through `gemini` and `claude` CLI subprocesses.
+ Prompts are delivered via **stdin** (Claude) or a temp file (Gemini), never as CLI args, which avoids `E2BIG` in long sessions. Each message batch is truncated to the last 20 messages plus the system message (configurable in `src/cli-runner.ts`; sketched just after the table below).
 
  | Model reference | CLI invoked |
  |---|---|
- | `vllm/cli-gemini/gemini-2.5-pro` | `gemini -m gemini-2.5-pro -p "<prompt>"` |
- | `vllm/cli-gemini/gemini-2.5-flash` | `gemini -m gemini-2.5-flash -p "<prompt>"` |
- | `vllm/cli-claude/claude-opus-4-6` | `claude -p -m claude-opus-4-6 --output-format text "<prompt>"` |
- | `vllm/cli-claude/claude-sonnet-4-6` | `claude -p -m claude-sonnet-4-6 --output-format text "<prompt>"` |
+ | `vllm/cli-gemini/gemini-2.5-pro` | `gemini -m gemini-2.5-pro @<tmpfile>` |
+ | `vllm/cli-gemini/gemini-2.5-flash` | `gemini -m gemini-2.5-flash @<tmpfile>` |
+ | `vllm/cli-gemini/gemini-3-pro` | `gemini -m gemini-3-pro @<tmpfile>` |
+ | `vllm/cli-claude/claude-sonnet-4-6` | `claude -p --output-format text --model claude-sonnet-4-6` (stdin) |
+ | `vllm/cli-claude/claude-opus-4-6` | `claude -p --output-format text --model claude-opus-4-6` (stdin) |
+ | `vllm/cli-claude/claude-haiku-4-5` | `claude -p --output-format text --model claude-haiku-4-5` (stdin) |
+
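The truncation rule takes roughly this shape. This is a minimal sketch, not the package source: the message type and output format are assumptions, while the limits (`MAX_MESSAGES = 20`, `MAX_MSG_CHARS = 4000`) and the keep-the-system-message rule come from the README and changelog below.

```ts
// Sketch: keep the system message plus the last MAX_MESSAGES messages,
// capping each message body at MAX_MSG_CHARS characters.
const MAX_MESSAGES = 20;
const MAX_MSG_CHARS = 4000;

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function formatPrompt(messages: ChatMessage[]): string {
  const system = messages.find((m) => m.role === "system");
  const recent = messages.filter((m) => m.role !== "system").slice(-MAX_MESSAGES);
  const batch = system ? [system, ...recent] : recent;
  return batch
    .map((m) => `${m.role}: ${m.content.slice(0, MAX_MSG_CHARS)}`)
    .join("\n\n");
}
```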
+ ### Phase 3 — Slash commands
+ Six plugin-registered commands for instant model switching (no agent invocation needed), plus `/cli-back` to restore the previous model and `/cli-test` for a proxy health check:
+
+ | Command | Switches to |
+ |---|---|
+ | `/cli-sonnet` | `vllm/cli-claude/claude-sonnet-4-6` |
+ | `/cli-opus` | `vllm/cli-claude/claude-opus-4-6` |
+ | `/cli-haiku` | `vllm/cli-claude/claude-haiku-4-5` |
+ | `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` |
+ | `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` |
+ | `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro` |
+
+ All commands are registered with `requireAuth: true`, so only authorized/owner senders can execute them. Each command calls `openclaw models set <model>` via `api.runtime.system.runCommandWithTimeout` and replies with a confirmation.
+
+ ---
 
  ## Requirements
 
- - [OpenClaw](https://openclaw.ai) gateway running
+ - [OpenClaw](https://openclaw.ai) gateway (tested with `2026.3.x`)
  - One or more of:
    - [`@openai/codex`](https://github.com/openai/codex) — `npm i -g @openai/codex` + `codex login`
    - [`@google/gemini-cli`](https://github.com/google-gemini/gemini-cli) — `npm i -g @google/gemini-cli` + `gemini auth`
    - [`@anthropic-ai/claude-code`](https://github.com/anthropic-ai/claude-code) — `npm i -g @anthropic-ai/claude-code` + `claude auth`
 
+ ---
+
  ## Installation
 
  ```bash
- # Install from ClawHub (once published)
+ # From ClawHub
  clawhub install openclaw-cli-bridge-elvatis
 
- # Or load directly from this repo (development)
+ # Or from workspace (development / local path)
  # Add to ~/.openclaw/openclaw.json:
- # plugins.load.paths: ["<path-to-this-repo>"]
- # plugins.allow: ["openclaw-cli-bridge-elvatis"]
+ # plugins.load.paths: ["~/.openclaw/workspace/openclaw-cli-bridge-elvatis"]
  # plugins.entries.openclaw-cli-bridge-elvatis: { "enabled": true }
  ```
 
- ## Auth setup (Phase 1 — Codex)
+ ---
+
+ ## Setup
+
+ ### 1. Enable + restart
 
- After enabling the plugin, register the Codex auth profile:
+ ```bash
+ # In ~/.openclaw/openclaw.json → plugins.entries:
+ "openclaw-cli-bridge-elvatis": { "enabled": true }
+
+ openclaw gateway restart
+ ```
+
+ ### 2. Register Codex auth (Phase 1, optional)
 
  ```bash
  openclaw models auth login --provider openai-codex
  # Select: "Codex CLI (existing login)"
  ```
 
- The proxy server (Phase 2) starts automatically and patches `openclaw.json` with the `vllm` provider config. Restart the gateway to activate the new models.
+ ### 3. Verify proxy (Phase 2)
+
+ On startup the plugin auto-patches `openclaw.json` with the `vllm` provider config (port `31337`) and logs:
+
+ ```
+ [cli-bridge] proxy ready on :31337 — vllm/cli-gemini/* and vllm/cli-claude/* available
+ [cli-bridge] registered 8 commands: /cli-sonnet, /cli-opus, /cli-haiku, /cli-gemini, /cli-gemini-flash, /cli-gemini3, /cli-back, /cli-test
+ ```
+
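The proxy can also be smoke-tested by hand. A sketch of the request `/cli-test` sends (mirroring `proxyTestRequest` in the `index.ts` diff further down; assumes the default port and API key, and Node 18+ for global `fetch`):

```ts
// One-shot OpenAI-style chat completion against the bridge proxy.
const res = await fetch("http://127.0.0.1:31337/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer cli-bridge", // default proxyApiKey
  },
  body: JSON.stringify({
    model: "vllm/cli-claude/claude-sonnet-4-6", // same form /cli-test sends
    messages: [{ role: "user", content: "Reply with exactly: CLI bridge OK" }],
    stream: false,
  }),
});
const data = (await res.json()) as {
  choices?: Array<{ message?: { content?: string } }>;
};
console.log(data.choices?.[0]?.message?.content); // expect: CLI bridge OK
```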
+ ### 4. Switch models (Phase 3)
+
+ Use any `/cli-*` command from any connected channel:
+
+ ```
+ /cli-sonnet
+ → ✅ Switched to Claude Sonnet 4.6 (CLI)
+ `vllm/cli-claude/claude-sonnet-4-6`
+ ```
+
+ ---
 
  ## Configuration
 
- Add to your `plugins.entries.openclaw-cli-bridge-elvatis.config` in `~/.openclaw/openclaw.json`:
+ In `~/.openclaw/openclaw.json`, under `plugins.entries.openclaw-cli-bridge-elvatis.config`:
 
  ```json5
  {
-   "enableCodex": true,         // register openai-codex from Codex CLI auth
-   "enableProxy": true,         // start the local CLI proxy server
-   "proxyPort": 31337,          // port for the proxy (default: 31337)
-   "proxyApiKey": "cli-bridge", // key used between OpenClaw and the proxy
-   "proxyTimeoutMs": 120000     // CLI timeout in ms (default: 2 min)
+   "enableCodex": true,         // register openai-codex from Codex CLI auth (default: true)
+   "enableProxy": true,         // start local CLI proxy server (default: true)
+   "proxyPort": 31337,          // proxy port (default: 31337)
+   "proxyApiKey": "cli-bridge", // key between OpenClaw vllm provider and proxy (default: "cli-bridge")
+   "proxyTimeoutMs": 120000     // CLI subprocess timeout in ms (default: 120s)
  }
  ```
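Per the `index.ts` diff further down, missing fields fall back to built-in defaults (`DEFAULT_PROXY_PORT = 31337`, `DEFAULT_PROXY_API_KEY = "cli-bridge"`). A sketch of that resolution, illustrative only since the exact fallback expressions are not part of this diff:

```ts
// Illustrative config resolution using the defaults documented above.
interface CliPluginConfig {
  enableCodex?: boolean;
  enableProxy?: boolean;
  proxyPort?: number;
  proxyApiKey?: string;
  proxyTimeoutMs?: number;
}

const DEFAULT_PROXY_PORT = 31337;
const DEFAULT_PROXY_API_KEY = "cli-bridge";

function resolveConfig(raw: unknown): Required<CliPluginConfig> {
  const cfg = (raw ?? {}) as CliPluginConfig;
  return {
    enableCodex: cfg.enableCodex ?? true,
    enableProxy: cfg.enableProxy ?? true,
    proxyPort: cfg.proxyPort ?? DEFAULT_PROXY_PORT,
    proxyApiKey: cfg.proxyApiKey ?? DEFAULT_PROXY_API_KEY,
    proxyTimeoutMs: cfg.proxyTimeoutMs ?? 120_000, // 2 min
  };
}
```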
 
+ ---
+
  ## Architecture
 
  ```
  OpenClaw agent
 
- ├─ openai-codex/* ──► OpenAI API (auth via Codex CLI OAuth tokens)
+ ├─ openai-codex/* ──► OpenAI API (auth via ~/.codex/auth.json OAuth tokens)
 
  └─ vllm/cli-gemini/* ─┐
-    vllm/cli-claude/* ─┤─► openclaw-cli-bridge-elvatis proxy (127.0.0.1:31337)
-    │                   ├─ cli-gemini/* → gemini -m <model> -p "<prompt>"
-    │                   └─ cli-claude/* → claude -p -m <model> "<prompt>"
-    └──────────────────────────────────────────────────────
+    vllm/cli-claude/* ─┤─► localhost:31337 (openclaw-cli-bridge proxy)
+    │                   ├─ cli-gemini/* → gemini -m <model> @<tmpfile>
+    │                   └─ cli-claude/* → claude -p --model <model> (prompt via stdin)
+    └───────────────────────────────────────────────────
+
+ Slash commands (bypass agent):
+   /cli-sonnet|opus|haiku|gemini|gemini-flash|gemini3
+     └─► openclaw models set <model>  (atomic, ~1s)
  ```
 
- ## AAHP handoff
+ ---
+
+ ## Known Issues & Fixes
+
+ ### `spawn E2BIG` (fixed in v0.2.1)
+
+ **Symptom:** `CLI error for cli-claude/…: spawn E2BIG` after ~500+ messages in a session.
+
+ **Root cause:** The OpenClaw gateway modifies `process.env` at runtime (OPENCLAW_* vars, session context, etc.). Spreading the full `process.env` into `spawn()` pushes `argv + envp` over Linux's `ARG_MAX` (~2 MB).
+
+ **Fix:** `buildMinimalEnv()` in `src/cli-runner.ts` passes only `HOME`, `PATH`, `USER`, and auth keys to the subprocess, making the spawn immune to gateway runtime env growth.
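A minimal sketch of that allow-list approach (the real `buildMinimalEnv()` lives in `src/cli-runner.ts`; the signature and the choice of auth keys below are assumptions):

```ts
import { spawn } from "node:child_process";

// Allow-list env construction: only named variables reach the subprocess,
// so gateway-injected OPENCLAW_* vars can't push argv + envp past ARG_MAX.
const ENV_ALLOWLIST = ["HOME", "PATH", "USER"];

function buildMinimalEnv(authKeys: string[] = []): NodeJS.ProcessEnv {
  const env: NodeJS.ProcessEnv = {};
  for (const key of [...ENV_ALLOWLIST, ...authKeys]) {
    const value = process.env[key];
    if (value !== undefined) env[key] = value;
  }
  return env;
}

// Hypothetical usage: spawn claude with the minimal env instead of
// spreading the full (and ever-growing) process.env.
const child = spawn(
  "claude",
  ["-p", "--output-format", "text", "--model", "claude-sonnet-4-6"],
  { env: buildMinimalEnv(["ANTHROPIC_API_KEY"]), stdio: ["pipe", "pipe", "pipe"] }
);
```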
+
+ ---
+
+ ## Development
+
+ ```bash
+ npm run typecheck   # tsc --noEmit
+ npm test            # vitest run
+ ```
+
+ Test coverage: `test/cli-runner.test.ts` — unit tests for `formatPrompt` (truncation, system message handling, `MAX_MSG_CHARS`).
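The tests themselves are not part of this diff; a hypothetical example in the same spirit (the `formatPrompt` signature and import path are assumptions):

```ts
import { describe, expect, it } from "vitest";
// Hypothetical import: the real module lives in src/cli-runner.ts.
import { formatPrompt } from "../src/cli-runner.js";

describe("formatPrompt", () => {
  it("keeps the system message and only the last 20 messages", () => {
    const messages: Array<{ role: "system" | "user"; content: string }> = [
      { role: "system", content: "you are terse" },
      ...Array.from({ length: 50 }, (_, i) => ({
        role: "user" as const,
        content: `msg-${i}`,
      })),
    ];
    const prompt = formatPrompt(messages);
    expect(prompt).toContain("you are terse"); // system message survives
    expect(prompt).toContain("msg-49");        // newest message kept
    expect(prompt).not.toContain("msg-10");    // older than the last 20 → dropped
  });
});
```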
+
+ ---
+
+ ## Changelog
+
+ ### v0.2.3
+ - **feat:** `/cli-back` — restore the model that was active before the last `/cli-*` switch
+ - **feat:** `/cli-test [model]` — one-shot proxy health check without switching the global model
+
+ ### v0.2.2
+ - **feat:** Phase 3 — `/cli-*` slash commands for instant model switching
+ - All 6 commands registered via `api.registerCommand` with `requireAuth: true`
+ - Calls `openclaw models set <model>` via `api.runtime.system.runCommandWithTimeout`
+
+ ### v0.2.1
+ - **fix:** `spawn E2BIG` — use `buildMinimalEnv()` instead of spreading the full `process.env`
+ - **feat:** Added `test/cli-runner.test.ts` (5 unit tests)
+ - Added Gemini 3 Pro model (`vllm/cli-gemini/gemini-3-pro`)
+
+ ### v0.2.0
+ - **feat:** Phase 2 — local OpenAI-compatible proxy server
+ - Prompt via stdin/tmpfile (never as a CLI arg) to prevent arg-size issues
+ - `MAX_MESSAGES=20` + `MAX_MSG_CHARS=4000` truncation in `formatPrompt`
+ - Auto-patch of `openclaw.json` vllm provider config on first start
+
+ ### v0.1.x
+ - Phase 1: Codex CLI OAuth auth bridge
 
- Project tracking lives in `.ai/handoff/` (AAHP v3 protocol).
+ ---
 
  ## License
 
package/SKILL.md CHANGED
@@ -1,24 +1,52 @@
  ---
  name: openclaw-cli-bridge-elvatis
- description: Bridge local Codex, Gemini, and Claude Code CLIs into OpenClaw (Codex OAuth auth bridge + Gemini/Claude OpenAI-compatible local proxy via vllm).
+ description: Bridge local Codex, Gemini, and Claude Code CLIs into OpenClaw as vllm model providers. Includes /cli-* slash commands for instant model switching (/cli-sonnet, /cli-opus, /cli-haiku, /cli-gemini, /cli-gemini-flash, /cli-gemini3). E2BIG-safe spawn via minimal env.
  homepage: https://github.com/elvatis/openclaw-cli-bridge-elvatis
  metadata:
    {
      "openclaw":
        {
          "emoji": "🌉",
-         "requires": { "bins": ["openclaw", "codex", "gemini", "claude"] }
+         "requires": { "bins": ["openclaw", "claude", "gemini"] },
+         "commands": ["/cli-sonnet", "/cli-opus", "/cli-haiku", "/cli-gemini", "/cli-gemini-flash", "/cli-gemini3"]
        }
    }
  ---
 
- # OpenClaw CLI Bridge Elvatis
+ # OpenClaw CLI Bridge
 
- This project provides two layers:
+ Bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as OpenClaw model providers. Three phases:
 
- 1. **Codex auth bridge** for `openai-codex/*` by reading existing Codex CLI OAuth tokens from `~/.codex/auth.json`
- 2. **Local OpenAI-compatible proxy** (default `127.0.0.1:31337`) for Gemini/Claude CLI execution via OpenClaw `vllm` provider models:
-    - `vllm/cli-gemini/*`
-    - `vllm/cli-claude/*`
+ ## Phase 1 — Codex Auth Bridge
+ Registers the `openai-codex` provider from existing `~/.codex/auth.json` tokens. No re-login.
 
- See `README.md` for setup and architecture.
+ ## Phase 2 — Request Proxy
+ Local OpenAI-compatible HTTP proxy (`127.0.0.1:31337`) routes vllm model calls to CLI subprocesses:
+ - `vllm/cli-gemini/gemini-2.5-pro` / `gemini-2.5-flash` / `gemini-3-pro`
+ - `vllm/cli-claude/claude-sonnet-4-6` / `claude-opus-4-6` / `claude-haiku-4-5`
+
+ Prompts go via stdin/tmpfile — never as CLI args (prevents `E2BIG` in long sessions).
+
+ ## Phase 3 — Slash Commands
+ Six instant model-switch commands (authorized senders only):
+
+ | Command | Model |
+ |---|---|
+ | `/cli-sonnet` | `vllm/cli-claude/claude-sonnet-4-6` |
+ | `/cli-opus` | `vllm/cli-claude/claude-opus-4-6` |
+ | `/cli-haiku` | `vllm/cli-claude/claude-haiku-4-5` |
+ | `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` |
+ | `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` |
+ | `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro` |
+
+ Each command runs `openclaw models set <model>` atomically and replies with a confirmation. `/cli-back` restores the previous model; `/cli-test [model]` pings the proxy without switching.
+
+ ## Setup
+
+ 1. Enable the plugin + restart the gateway
+ 2. (Optional) Register Codex auth: `openclaw models auth login --provider openai-codex`
+ 3. Use `/cli-*` commands to switch models from any channel
+
+ See `README.md` for the full configuration reference and architecture diagram.
+
+ **Version:** 0.2.3
package/index.ts CHANGED
@@ -8,16 +8,34 @@
  * and configures OpenClaw's vllm provider to route through it. Model calls
  * are handled by the Gemini CLI and Claude Code CLI subprocesses.
  *
+ * Phase 3 (slash commands): registers /cli-* commands for instant model switching.
+ *   /cli-sonnet       → vllm/cli-claude/claude-sonnet-4-6
+ *   /cli-opus         → vllm/cli-claude/claude-opus-4-6
+ *   /cli-haiku        → vllm/cli-claude/claude-haiku-4-5
+ *   /cli-gemini       → vllm/cli-gemini/gemini-2.5-pro
+ *   /cli-gemini-flash → vllm/cli-gemini/gemini-2.5-flash
+ *   /cli-gemini3      → vllm/cli-gemini/gemini-3-pro
+ *   /cli-back         → restore model that was active before last /cli-* switch
+ *   /cli-test [model] → one-shot proxy health check (does NOT switch global model)
+ *
  * Provider / model naming:
- *   vllm/cli-gemini/gemini-2.5-pro  → `gemini -m gemini-2.5-pro -p "<prompt>"`
- *   vllm/cli-claude/claude-opus-4-6 → `claude -p -m claude-opus-4-6 --output-format text "<prompt>"`
+ *   vllm/cli-gemini/gemini-2.5-pro  → `gemini -m gemini-2.5-pro @<tmpfile>`
+ *   vllm/cli-claude/claude-opus-4-6 → `claude -p -m claude-opus-4-6 --output-format text` (stdin)
  */
 
+ import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
+ import { homedir } from "node:os";
+ import { join } from "node:path";
+ import http from "node:http";
  import type {
    OpenClawPluginApi,
    ProviderAuthContext,
    ProviderAuthResult,
  } from "openclaw/plugin-sdk";
+
+ // OpenClawPluginCommandDefinition is defined in the SDK types but not re-exported
+ // by the package — derive it from the registerCommand signature.
+ type OpenClawPluginCommandDefinition = Parameters<OpenClawPluginApi["registerCommand"]>[0];
  import { buildOauthProviderAuthResult } from "openclaw/plugin-sdk";
  import {
    DEFAULT_CODEX_AUTH_PATH,
@@ -27,14 +45,19 @@ import {
  import { startProxyServer } from "./src/proxy-server.js";
  import { patchOpencllawConfig } from "./src/config-patcher.js";
 
+ // ──────────────────────────────────────────────────────────────────────────────
+ // Types derived from SDK (not re-exported by the package)
+ // ──────────────────────────────────────────────────────────────────────────────
+ type RegisterCommandParam = Parameters<OpenClawPluginApi["registerCommand"]>[0];
+ type PluginCommandContext = Parameters<RegisterCommandParam["handler"]>[0];
+ type PluginCommandResult = Awaited<ReturnType<RegisterCommandParam["handler"]>>;
+
  // ──────────────────────────────────────────────────────────────────────────────
  // Plugin config type
  // ──────────────────────────────────────────────────────────────────────────────
  interface CliPluginConfig {
-   // Phase 1: auth bridge
    codexAuthPath?: string;
    enableCodex?: boolean;
-   // Phase 2: request proxy
    enableProxy?: boolean;
    proxyPort?: number;
    proxyApiKey?: string;
@@ -44,16 +67,204 @@ interface CliPluginConfig {
  const DEFAULT_PROXY_PORT = 31337;
  const DEFAULT_PROXY_API_KEY = "cli-bridge";
 
+ // ──────────────────────────────────────────────────────────────────────────────
+ // State file — persists the model that was active before the last /cli-* switch
+ // Located at ~/.openclaw/cli-bridge-state.json (survives gateway restarts)
+ // ──────────────────────────────────────────────────────────────────────────────
+ const STATE_FILE = join(homedir(), ".openclaw", "cli-bridge-state.json");
+
+ interface CliBridgeState {
+   previousModel: string;
+ }
+
+ function readState(): CliBridgeState | null {
+   try {
+     return JSON.parse(readFileSync(STATE_FILE, "utf8")) as CliBridgeState;
+   } catch {
+     return null;
+   }
+ }
+
+ function writeState(state: CliBridgeState): void {
+   try {
+     mkdirSync(join(homedir(), ".openclaw"), { recursive: true });
+     writeFileSync(STATE_FILE, JSON.stringify(state, null, 2) + "\n", "utf8");
+   } catch {
+     // non-fatal — /cli-back will just report no previous model
+   }
+ }
+
+ // ──────────────────────────────────────────────────────────────────────────────
+ // Read the current primary model from openclaw.json
+ // ──────────────────────────────────────────────────────────────────────────────
+ function readCurrentModel(): string | null {
+   try {
+     const cfg = JSON.parse(
+       readFileSync(join(homedir(), ".openclaw", "openclaw.json"), "utf8")
+     );
+     const m = cfg?.agents?.defaults?.model;
+     if (typeof m === "string") return m;
+     if (typeof m === "object" && m !== null && typeof m.primary === "string")
+       return m.primary;
+     return null;
+   } catch {
+     return null;
+   }
+ }
+
+ // ──────────────────────────────────────────────────────────────────────────────
+ // Phase 3: model command table
+ // ──────────────────────────────────────────────────────────────────────────────
+ const CLI_MODEL_COMMANDS = [
+   {
+     name: "cli-sonnet",
+     model: "vllm/cli-claude/claude-sonnet-4-6",
+     description: "Switch to Claude Sonnet 4.6 (CLI bridge)",
+     label: "Claude Sonnet 4.6 (CLI)",
+   },
+   {
+     name: "cli-opus",
+     model: "vllm/cli-claude/claude-opus-4-6",
+     description: "Switch to Claude Opus 4.6 (CLI bridge)",
+     label: "Claude Opus 4.6 (CLI)",
+   },
+   {
+     name: "cli-haiku",
+     model: "vllm/cli-claude/claude-haiku-4-5",
+     description: "Switch to Claude Haiku 4.5 (CLI bridge)",
+     label: "Claude Haiku 4.5 (CLI)",
+   },
+   {
+     name: "cli-gemini",
+     model: "vllm/cli-gemini/gemini-2.5-pro",
+     description: "Switch to Gemini 2.5 Pro (CLI bridge)",
+     label: "Gemini 2.5 Pro (CLI)",
+   },
+   {
+     name: "cli-gemini-flash",
+     model: "vllm/cli-gemini/gemini-2.5-flash",
+     description: "Switch to Gemini 2.5 Flash (CLI bridge)",
+     label: "Gemini 2.5 Flash (CLI)",
+   },
+   {
+     name: "cli-gemini3",
+     model: "vllm/cli-gemini/gemini-3-pro",
+     description: "Switch to Gemini 3 Pro (CLI bridge)",
+     label: "Gemini 3 Pro (CLI)",
+   },
+ ] as const;
+
+ /** Default model used by /cli-test when no arg is given */
+ const CLI_TEST_DEFAULT_MODEL = "cli-claude/claude-sonnet-4-6";
+
+ // ──────────────────────────────────────────────────────────────────────────────
+ // Helper: switch global model, saving previous for /cli-back
+ // ──────────────────────────────────────────────────────────────────────────────
+ async function switchModel(
+   api: OpenClawPluginApi,
+   model: string,
+   label: string,
+ ): Promise<PluginCommandResult> {
+   // Save current model BEFORE switching so /cli-back can restore it
+   const current = readCurrentModel();
+   if (current && current !== model) {
+     writeState({ previousModel: current });
+     api.logger.info(`[cli-bridge] saved previous model: ${current}`);
+   }
+
+   try {
+     const result = await api.runtime.system.runCommandWithTimeout(
+       ["openclaw", "models", "set", model],
+       { timeoutMs: 8_000 }
+     );
+
+     if (result.code !== 0) {
+       const err = (result.stderr || result.stdout || "unknown error").trim();
+       api.logger.warn(`[cli-bridge] models set failed (code ${result.code}): ${err}`);
+       return { text: `❌ Failed to switch to ${label}: ${err}` };
+     }
+
+     api.logger.info(`[cli-bridge] switched model → ${model}`);
+     return {
+       text: `✅ Switched to **${label}**\n\`${model}\`\n\nUse \`/cli-back\` to restore previous model.`,
+     };
+   } catch (err) {
+     const msg = (err as Error).message;
+     api.logger.warn(`[cli-bridge] models set error: ${msg}`);
+     return { text: `❌ Error switching model: ${msg}` };
+   }
+ }
+
+ // ──────────────────────────────────────────────────────────────────────────────
+ // Helper: fire a one-shot test request directly at the proxy (no global switch)
+ // ──────────────────────────────────────────────────────────────────────────────
+ function proxyTestRequest(
+   port: number,
+   apiKey: string,
+   model: string,
+   timeoutMs: number
+ ): Promise<string> {
+   return new Promise((resolve, reject) => {
+     const body = JSON.stringify({
+       model,
+       messages: [{ role: "user", content: "Reply with exactly: CLI bridge OK" }],
+       stream: false,
+     });
+
+     const req = http.request(
+       {
+         hostname: "127.0.0.1",
+         port,
+         path: "/v1/chat/completions",
+         method: "POST",
+         headers: {
+           "Content-Type": "application/json",
+           "Authorization": `Bearer ${apiKey}`,
+           "Content-Length": Buffer.byteLength(body),
+         },
+       },
+       (res) => {
+         let data = "";
+         res.on("data", (chunk: Buffer) => { data += chunk.toString(); });
+         res.on("end", () => {
+           try {
+             const parsed = JSON.parse(data) as {
+               choices?: Array<{ message?: { content?: string } }>;
+               error?: { message?: string };
+             };
+             if (parsed.error) {
+               resolve(`Proxy error: ${parsed.error.message}`);
+             } else {
+               resolve(parsed.choices?.[0]?.message?.content?.trim() ?? "(empty response)");
+             }
+           } catch {
+             resolve(`(non-JSON response: ${data.slice(0, 200)})`);
+           }
+         });
+       }
+     );
+
+     req.setTimeout(timeoutMs, () => {
+       req.destroy();
+       reject(new Error(`Proxy test timed out after ${timeoutMs}ms`));
+     });
+     req.on("error", reject);
+     req.write(body);
+     req.end();
+   });
+ }
+
  // ──────────────────────────────────────────────────────────────────────────────
  // Plugin definition
  // ──────────────────────────────────────────────────────────────────────────────
  const plugin = {
    id: "openclaw-cli-bridge-elvatis",
    name: "OpenClaw CLI Bridge",
-   version: "0.2.0",
+   version: "0.2.3",
    description:
-     "Phase 1: openai-codex auth bridge (reads ~/.codex/auth.json). " +
-     "Phase 2: HTTP proxy server routing model calls through gemini/claude CLIs.",
+     "Phase 1: openai-codex auth bridge. " +
+     "Phase 2: HTTP proxy for gemini/claude CLIs. " +
+     "Phase 3: /cli-* model switching, /cli-back restore, /cli-test health check.",
 
    register(api: OpenClawPluginApi) {
      const cfg = (api.pluginConfig ?? {}) as CliPluginConfig;
@@ -84,7 +295,6 @@ const plugin = {
        try {
          const creds = await readCodexCredentials(codexAuthPath);
          spin.stop("Codex CLI credentials loaded");
-
          return buildOauthProviderAuthResult({
            providerId: "openai-codex",
            defaultModel: CODEX_DEFAULT_MODEL,
@@ -120,7 +330,7 @@ const plugin = {
          },
        });
 
-       api.logger.info("[cli-bridge] openai-codex provider registered (Codex CLI auth bridge)");
+       api.logger.info("[cli-bridge] openai-codex provider registered");
      }
 
      // ── Phase 2: CLI request proxy ─────────────────────────────────────────────
@@ -134,26 +344,124 @@
        })
        .then(() => {
          api.logger.info(
-           `[cli-bridge] proxy ready — vllm/cli-gemini/* and vllm/cli-claude/* available`
+           `[cli-bridge] proxy ready on :${port} — vllm/cli-gemini/* and vllm/cli-claude/* available`
          );
-
-         // Auto-patch openclaw.json with vllm provider config (once)
          const result = patchOpencllawConfig(port);
          if (result.patched) {
            api.logger.info(
-             `[cli-bridge] openclaw.json patched with vllm provider. ` +
-               `Restart gateway to activate cli-gemini/* and cli-claude/* models.`
+             `[cli-bridge] openclaw.json patched with vllm provider. Restart gateway to activate.`
            );
-         } else {
-           api.logger.info(`[cli-bridge] config check: ${result.reason}`);
          }
        })
        .catch((err: Error) => {
-         api.logger.warn(
-           `[cli-bridge] proxy server failed to start on port ${port}: ${err.message}`
-         );
+         api.logger.warn(`[cli-bridge] proxy failed to start on port ${port}: ${err.message}`);
        });
      }
+
+     // ── Phase 3a: /cli-* model switch commands ─────────────────────────────────
+     for (const entry of CLI_MODEL_COMMANDS) {
+       const { name, model, description, label } = entry;
+       api.registerCommand({
+         name,
+         description,
+         requireAuth: true,
+         handler: async (ctx: PluginCommandContext): Promise<PluginCommandResult> => {
+           api.logger.info(`[cli-bridge] /${name} by ${ctx.senderId ?? "?"}`);
+           return switchModel(api, model, label);
+         },
+       } satisfies OpenClawPluginCommandDefinition);
+     }
+
+     // ── Phase 3b: /cli-back — restore previous model ──────────────────────────
+     api.registerCommand({
+       name: "cli-back",
+       description: "Restore the model that was active before the last /cli-* switch",
+       requireAuth: true,
+       handler: async (ctx: PluginCommandContext): Promise<PluginCommandResult> => {
+         api.logger.info(`[cli-bridge] /cli-back by ${ctx.senderId ?? "?"}`);
+
+         const state = readState();
+         if (!state?.previousModel) {
+           return { text: "ℹ️ No previous model saved. Use `/cli-sonnet` etc. to switch first." };
+         }
+
+         const prev = state.previousModel;
+
+         // Clear the saved state so a second /cli-back doesn't bounce back
+         writeState({ previousModel: "" });
+
+         try {
+           const result = await api.runtime.system.runCommandWithTimeout(
+             ["openclaw", "models", "set", prev],
+             { timeoutMs: 8_000 }
+           );
+
+           if (result.code !== 0) {
+             const err = (result.stderr || result.stdout || "unknown error").trim();
+             return { text: `❌ Failed to restore \`${prev}\`: ${err}` };
+           }
+
+           api.logger.info(`[cli-bridge] /cli-back restored → ${prev}`);
+           return { text: `✅ Restored previous model\n\`${prev}\`` };
+         } catch (err) {
+           return { text: `❌ Error: ${(err as Error).message}` };
+         }
+       },
+     } satisfies OpenClawPluginCommandDefinition);
+
+     // ── Phase 3c: /cli-test — one-shot proxy ping, no global model switch ──────
+     api.registerCommand({
+       name: "cli-test",
+       description: "Test the CLI bridge proxy without switching your active model. Usage: /cli-test [model]",
+       acceptsArgs: true,
+       requireAuth: true,
+       handler: async (ctx: PluginCommandContext): Promise<PluginCommandResult> => {
+         const targetModel = ctx.args?.trim() || CLI_TEST_DEFAULT_MODEL;
+         // Accept short names like "cli-sonnet" or full "vllm/cli-claude/claude-sonnet-4-6"
+         const model = targetModel.startsWith("vllm/")
+           ? targetModel
+           : `vllm/${targetModel}`;
+
+         api.logger.info(`[cli-bridge] /cli-test → ${model} by ${ctx.senderId ?? "?"}`);
+
+         if (!enableProxy) {
+           return { text: "❌ Proxy is disabled (enableProxy: false in config)." };
+         }
+
+         const current = readCurrentModel();
+         const testTimeoutMs = Math.min(timeoutMs, 30_000);
+
+         try {
+           const start = Date.now();
+           const response = await proxyTestRequest(port, apiKey, model, testTimeoutMs);
+           const elapsed = Date.now() - start;
+
+           return {
+             text:
+               `🧪 **CLI Bridge Test**\n` +
+               `Model: \`${model}\`\n` +
+               `Response: _${response}_\n` +
+               `Latency: ${elapsed}ms\n\n` +
+               `Active model unchanged: \`${current ?? "unknown"}\``,
+           };
+         } catch (err) {
+           return {
+             text:
+               `❌ **CLI Bridge Test Failed**\n` +
+               `Model: \`${model}\`\n` +
+               `Error: ${(err as Error).message}\n\n` +
+               `Active model unchanged: \`${current ?? "unknown"}\``,
+           };
+         }
+       },
+     } satisfies OpenClawPluginCommandDefinition);
+
+     const allCommands = [
+       ...CLI_MODEL_COMMANDS.map((c) => `/${c.name}`),
+       "/cli-back",
+       "/cli-test",
+     ];
+     api.logger.info(`[cli-bridge] registered ${allCommands.length} commands: ${allCommands.join(", ")}`);
    },
  };
 
@@ -1,7 +1,7 @@
  {
    "id": "openclaw-cli-bridge-elvatis",
    "name": "OpenClaw CLI Bridge",
-   "version": "0.2.1",
+   "version": "0.2.3",
    "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
    "providers": [
      "openai-codex"
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-   "version": "0.2.1",
+   "version": "0.2.3",
    "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
    "type": "module",
    "scripts": {