@elvatis_com/openclaw-cli-bridge-elvatis 0.2.18 → 0.2.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -2,13 +2,13 @@
 
  _Last updated: 2026-03-08 by Akido (claude-sonnet-4-6)_
 
- ## Current Version: 0.2.18 — STABLE
+ ## Current Version: 0.2.19 — STABLE
 
  ## What is done
 
  - ✅ Repo: `https://github.com/elvatis/openclaw-cli-bridge-elvatis`
- - ✅ npm: `@elvatis_com/openclaw-cli-bridge-elvatis@0.2.18`
- - ✅ ClawHub: `openclaw-cli-bridge-elvatis@0.2.18`
+ - ✅ npm: `@elvatis_com/openclaw-cli-bridge-elvatis@0.2.19`
+ - ✅ ClawHub: `openclaw-cli-bridge-elvatis@0.2.19`
  - ✅ Phase 1: `openai-codex` provider via `~/.codex/auth.json` (no re-login)
  - ✅ Phase 2: Local OpenAI-compatible proxy on `127.0.0.1:31337` (Gemini + Claude CLI)
  - ✅ Phase 3: 10 slash commands (`/cli-sonnet`, `/cli-opus`, `/cli-haiku`, `/cli-gemini`, `/cli-gemini-flash`, `/cli-gemini3`, `/cli-codex`, `/cli-codex-mini`, `/cli-back`, `/cli-test`)
package/README.md CHANGED
@@ -1,8 +1,8 @@
  # openclaw-cli-bridge-elvatis
 
- > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching, restore, and health testing.
+ > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching, restore, health testing, and model listing.
 
- **Current version:** `0.2.18`
+ **Current version:** `0.2.20`
 
  ---
 
@@ -20,13 +20,15 @@ Starts a local OpenAI-compatible HTTP proxy on `127.0.0.1:31337` and configures
  |---|---|---|
  | `vllm/cli-gemini/gemini-2.5-pro` | `gemini -m gemini-2.5-pro -p ""` (stdin, cwd=/tmp) | ~8–10s |
  | `vllm/cli-gemini/gemini-2.5-flash` | `gemini -m gemini-2.5-flash -p ""` (stdin, cwd=/tmp) | ~4–6s |
- | `vllm/cli-gemini/gemini-3-pro` | `gemini -m gemini-3-pro -p ""` (stdin, cwd=/tmp) | ~8–10s |
+ | `vllm/cli-gemini/gemini-3-pro-preview` | `gemini -m gemini-3-pro-preview -p ""` (stdin, cwd=/tmp) | ~8–10s |
+ | `vllm/cli-gemini/gemini-3-flash-preview` | `gemini -m gemini-3-flash-preview -p ""` (stdin, cwd=/tmp) | ~4–6s |
  | `vllm/cli-claude/claude-sonnet-4-6` | `claude -p --output-format text --model claude-sonnet-4-6` (stdin) | ~2–4s |
  | `vllm/cli-claude/claude-opus-4-6` | `claude -p --output-format text --model claude-opus-4-6` (stdin) | ~3–5s |
  | `vllm/cli-claude/claude-haiku-4-5` | `claude -p --output-format text --model claude-haiku-4-5` (stdin) | ~1–3s |
 
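Since the proxy is OpenAI-compatible, any OpenAI-style client can exercise the models in the Phase 2 table above. A minimal sketch of the request body such a client would send — the `/v1/chat/completions` path follows the OpenAI convention and is an assumption here (only `GET /v1/models` is confirmed elsewhere in this README), as is the exact payload shape:

```typescript
// Hypothetical request builder for the local bridge proxy. The payload shape
// follows the OpenAI chat-completions convention; the plugin's actual wire
// format is not shown in this diff, so treat this as an illustration only.
interface BridgeMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface BridgeChatRequest {
  model: string; // e.g. "vllm/cli-claude/claude-haiku-4-5" from the table above
  messages: BridgeMessage[];
}

function buildBridgeRequest(model: string, prompt: string): BridgeChatRequest {
  return { model, messages: [{ role: "user", content: prompt }] };
}

// Confirmed address: 127.0.0.1:31337 — endpoint path assumed from the OpenAI API.
const body = JSON.stringify(
  buildBridgeRequest("vllm/cli-claude/claude-haiku-4-5", "ping"),
);
```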
  ### Phase 3 — Slash commands
- Ten plugin-registered commands (all `requireAuth: true`):
+
+ All commands use gateway-level `commands.allowFrom` for authorization (`requireAuth: false` at plugin level).
 
  **Claude Code CLI** (routed via local proxy on `:31337`):
 
@@ -42,14 +44,18 @@ Ten plugin-registered commands (all `requireAuth: true`):
  |---|---|
  | `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` |
  | `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` |
- | `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro` |
+ | `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro-preview` |
+ | `/cli-gemini3-flash` | `vllm/cli-gemini/gemini-3-flash-preview` |
 
- **Codex CLI** (via `openai-codex` provider — Codex CLI OAuth auth, calls OpenAI API directly, **not** through the local proxy):
+ **Codex CLI** (via `openai-codex` provider — OAuth auth, calls OpenAI API directly, **not** through the local proxy):
 
- | Command | Model |
- |---|---|
- | `/cli-codex` | `openai-codex/gpt-5.3-codex` |
- | `/cli-codex-mini` | `openai-codex/gpt-5.1-codex-mini` |
+ | Command | Model | Notes |
+ |---|---|---|
+ | `/cli-codex` | `openai-codex/gpt-5.3-codex` | ✅ Tested |
+ | `/cli-codex-spark` | `openai-codex/gpt-5.3-codex-spark` | |
+ | `/cli-codex52` | `openai-codex/gpt-5.2-codex` | |
+ | `/cli-codex54` | `openai-codex/gpt-5.4` | May require upgraded OAuth scope |
+ | `/cli-codex-mini` | `openai-codex/gpt-5.1-codex-mini` | ✅ Tested |
 
  **Utility:**
 
@@ -57,6 +63,7 @@ Ten plugin-registered commands (all `requireAuth: true`):
  |---|---|
  | `/cli-back` | Restore the model active **before** the last `/cli-*` switch |
  | `/cli-test [model]` | One-shot proxy health check — **does NOT switch your active model** |
+ | `/cli-list` | Show all registered CLI bridge models with commands |
 
  **`/cli-back` details:**
  - Before every `/cli-*` switch the current model is saved to `~/.openclaw/cli-bridge-state.json`
@@ -68,6 +75,10 @@ Ten plugin-registered commands (all `requireAuth: true`):
  - Default when no arg given: `cli-claude/claude-sonnet-4-6`
  - Reports response content, latency, and confirms your active model is unchanged
 
+ **`/cli-list` details:**
+ - Lists all registered models grouped by provider (Claude CLI, Gemini CLI, Codex)
+ - No arguments required
+
  ---
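The `/cli-back` mechanism documented above is a one-slot undo backed by a JSON state file. A sketch of the save/restore cycle — the file path comes from this README, while the field name `previousModel` and the helper names are illustrative, not the plugin's actual API:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// One-slot undo for model switches, mirroring how /cli-back is described:
// save before every switch, restore (and clear) on /cli-back.
const STATE_FILE = path.join(os.homedir(), ".openclaw", "cli-bridge-state.json");

function saveState(previousModel: string, file: string = STATE_FILE): void {
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, JSON.stringify({ previousModel }), "utf8");
}

function restoreState(file: string = STATE_FILE): string | null {
  if (!fs.existsSync(file)) return null;
  const { previousModel } = JSON.parse(fs.readFileSync(file, "utf8"));
  fs.unlinkSync(file); // state is cleared after restore
  return previousModel ?? null;
}
```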
 
  ## Requirements
@@ -111,8 +122,10 @@ openclaw gateway restart
 
  ```
  [cli-bridge] proxy ready on :31337
- [cli-bridge] registered 8 commands: /cli-sonnet, /cli-opus, /cli-haiku,
- /cli-gemini, /cli-gemini-flash, /cli-gemini3, /cli-back, /cli-test
+ [cli-bridge] registered 15 commands: /cli-sonnet, /cli-opus, /cli-haiku,
+ /cli-gemini, /cli-gemini-flash, /cli-gemini3, /cli-gemini3-flash,
+ /cli-codex, /cli-codex-spark, /cli-codex52, /cli-codex54, /cli-codex-mini,
+ /cli-back, /cli-test, /cli-list
  ```
 
  ### 3. Register Codex auth (optional — Phase 1 only)
@@ -122,7 +135,39 @@ openclaw models auth login --provider openai-codex
  # Select: "Codex CLI (existing login)"
  ```
 
- ### 4. Test without switching your model
+ ### 4. List available models
+
+ ```
+ /cli-list
+ → 🤖 CLI Bridge Models
+
+ Claude Code CLI
+   /cli-sonnet          claude-sonnet-4-6
+   /cli-opus            claude-opus-4-6
+   /cli-haiku           claude-haiku-4-5
+
+ Gemini CLI
+   /cli-gemini          gemini-2.5-pro
+   /cli-gemini-flash    gemini-2.5-flash
+   /cli-gemini3         gemini-3-pro-preview
+   /cli-gemini3-flash   gemini-3-flash-preview
+
+ Codex (OAuth)
+   /cli-codex           gpt-5.3-codex
+   /cli-codex-spark     gpt-5.3-codex-spark
+   /cli-codex52         gpt-5.2-codex
+   /cli-codex54         gpt-5.4
+   /cli-codex-mini      gpt-5.1-codex-mini
+
+ Utility
+   /cli-back            Restore previous model
+   /cli-test [model]    Health check (no model switch)
+   /cli-list            This overview
+
+ Proxy: 127.0.0.1:31337
+ ```
+
+ ### 5. Test without switching your model
 
  ```
  /cli-test
@@ -140,7 +185,7 @@ openclaw models auth login --provider openai-codex
  Active model unchanged: anthropic/claude-sonnet-4-6
  ```
 
- ### 5. Switch and restore
+ ### 6. Switch and restore
 
  ```
  /cli-sonnet
@@ -173,15 +218,22 @@ In `~/.openclaw/openclaw.json` → `plugins.entries.openclaw-cli-bridge-elvatis.
 
  ---
 
+ ## Model Allowlist
+
+ `routeToCliRunner` enforces `DEFAULT_ALLOWED_CLI_MODELS` — only models registered in the plugin are accepted by the proxy. Unregistered models receive a clear error listing allowed options.
+
+ To disable the check (e.g. for custom vllm routing): pass `allowedModels: null` in `RouteOptions`.
+
+ ---
+
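The allowlist check described above reduces to set membership with an explicit opt-out. A sketch, assuming shapes for `DEFAULT_ALLOWED_CLI_MODELS` and `RouteOptions` — the README names both but does not show their definitions:

```typescript
// Illustrative allowlist gate. The two exported names exist in the package
// per the README; their concrete shapes here are assumptions.
const DEFAULT_ALLOWED_CLI_MODELS = new Set<string>([
  "cli-claude/claude-sonnet-4-6",
  "cli-gemini/gemini-3-pro-preview",
  // ...remaining registered models
]);

interface RouteOptions {
  /** Pass null to disable the allowlist check entirely. */
  allowedModels?: Set<string> | null;
}

function checkAllowed(model: string, opts: RouteOptions = {}): void {
  const allowed =
    opts.allowedModels === undefined ? DEFAULT_ALLOWED_CLI_MODELS : opts.allowedModels;
  if (allowed === null) return; // opt-out, e.g. for custom vllm routing
  if (!allowed.has(model)) {
    // Mirrors the documented behavior: a clear error listing allowed options.
    throw new Error(`Unknown CLI bridge model "${model}". Allowed: ${[...allowed].join(", ")}`);
  }
}
```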
 
  ## Architecture
 
  ```
  OpenClaw agent
 
  ├─ openai-codex/* ──────────────────────────► OpenAI API (direct)
- │    auth: ~/.codex/auth.json OAuth tokens
-
- │    /cli-codex, /cli-codex-mini ─────────────────┘ (switch to this provider)
+ │    auth: ~/.codex/auth.json OAuth tokens
+      /cli-codex, /cli-codex-spark, /cli-codex52, /cli-codex54, /cli-codex-mini
 
  └─ vllm/cli-gemini/* ─┐
     vllm/cli-claude/* ─┤─► localhost:31337 (openclaw-cli-bridge proxy)
@@ -190,21 +242,16 @@ OpenClaw agent
  │      │ (neutral cwd prevents agentic mode)
  │      └─ cli-claude/* → claude -p --model <model>
  │           stdin=prompt
- └───────────────────────────────────────────────────
 
- Slash commands (bypass agent, requireAuth=true):
- /cli-sonnet|opus|haiku|gemini|gemini-flash|gemini3|codex|codex-mini
+ Slash commands (requireAuth=false, gateway commands.allowFrom is the auth layer):
+ /cli-sonnet|opus|haiku|gemini|gemini-flash|gemini3|gemini3-flash
+ /cli-codex|codex-spark|codex52|codex54|codex-mini
    └─► saves current model → ~/.openclaw/cli-bridge-state.json
-   └─► openclaw models set <model> (~1s, atomic)
+   └─► openclaw models set <model>
 
- /cli-back
-   └─► reads ~/.openclaw/cli-bridge-state.json
-   └─► openclaw models set <previous>
-
- /cli-test [model]
-   └─► HTTP POST → localhost:31337 (no global model change)
-   └─► reports response + latency
-   └─► NOTE: only tests the proxy — Codex models bypass the proxy
+ /cli-back → reads state file, restores previous model, clears state
+ /cli-test → HTTP POST → localhost:31337, no global model change
+ /cli-list → formatted table of all registered models
  ```
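One detail of the `vllm/*` path in the diagram, documented in the v0.2.14 changelog entry: OpenClaw sends the full provider path (`vllm/cli-claude/...`) while the proxy router keys on the bare `cli-claude/...` form. A minimal sketch of that normalization (function name illustrative):

```typescript
// Strip the provider prefix the agent sends before routing to a CLI runner,
// so both "vllm/cli-claude/x" and "cli-claude/x" resolve identically.
function stripVllmPrefix(model: string): string {
  return model.startsWith("vllm/") ? model.slice("vllm/".length) : model;
}
```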
 
  ---
@@ -218,8 +265,8 @@ Slash commands (bypass agent, requireAuth=true):
 
  ### Gemini agentic mode / hangs (fixed in v0.2.4)
  **Symptom:** Gemini hangs, returns wrong answers, or says "directory does not exist".
- **Cause:** `@file` syntax (`gemini -p @/tmp/xxx.txt`) triggers agentic mode — Gemini scans the working directory for project context and treats prompts as task instructions. Running from the workspace root makes this worse.
- **Fix:** Stdin delivery (`gemini -p ""` with prompt via stdin) + `cwd=/tmp`. Same pattern as Claude.
+ **Cause:** `@file` syntax (`gemini -p @/tmp/xxx.txt`) triggers agentic mode — Gemini scans the working directory for project context and treats prompts as task instructions.
+ **Fix:** Stdin delivery (`gemini -p ""` with prompt via stdin) + `cwd=/tmp`.
 
  ---
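The fix above can be sketched as a spawn wrapper: empty `-p` flag, prompt on stdin, neutral working directory. Helper names and option shape are illustrative, not the plugin's actual `RunCliOptions` API:

```typescript
import { spawn } from "node:child_process";

// Illustrative invocation builder matching the documented fix: empty -p,
// prompt delivered via stdin, cwd pinned to /tmp so Gemini never scans a
// project directory and enters agentic mode.
function buildGeminiInvocation(model: string): { cmd: string; args: string[]; cwd: string } {
  return { cmd: "gemini", args: ["-m", model, "-p", ""], cwd: "/tmp" };
}

function runGemini(model: string, prompt: string): Promise<string> {
  const { cmd, args, cwd } = buildGeminiInvocation(model);
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { cwd });
    let out = "";
    child.stdout.on("data", (d) => (out += d));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(out.trim()) : reject(new Error(`gemini exited ${code}`)),
    );
    child.stdin.end(prompt); // prompt via stdin, never @file
  });
}
```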
 
@@ -227,87 +274,71 @@ Slash commands (bypass agent, requireAuth=true):
 
  ```bash
  npm run typecheck # tsc --noEmit
- npm test # vitest run (5 unit tests for formatPrompt)
+ npm test # vitest run (45 tests)
  ```
 
  ---
 
  ## Changelog
 
+ ### v0.2.20
+ - **fix:** `formatPrompt` now defensively coerces `content` to string via `contentToString()` — prevents `[object Object]` reaching the CLI when WhatsApp group messages contain structured content objects instead of plain strings
+ - **feat:** `ChatMessage.content` now accepts `string | ContentPart[] | unknown` (OpenAI multimodal content arrays supported)
+ - **feat:** New `contentToString()` helper: handles string, OpenAI ContentPart arrays, arbitrary objects (JSON.stringify), null/undefined
+
+ ### v0.2.19
+ - **feat:** `/cli-list` command — formatted overview of all registered models grouped by provider
+ - **docs:** Rewrite README to reflect current state (correct model names, command count, requireAuth, test count, /cli-list docs)
+
  ### v0.2.18
  - **feat:** Add `/cli-gemini3-flash` → `gemini-3-flash-preview`
  - **feat:** Add `/cli-codex-spark` → `gpt-5.3-codex-spark`, `/cli-codex52` → `gpt-5.2-codex`, `/cli-codex54` → `gpt-5.4`
  - **fix:** Update `DEFAULT_ALLOWED_CLI_MODELS` with `gemini-3-flash-preview`
 
  ### v0.2.17
- - **fix:** `/cli-gemini3` model corrected to `gemini-3-pro-preview` (was `gemini-3-pro`, which returns 404 from Gemini API)
+ - **fix:** `/cli-gemini3` model corrected to `gemini-3-pro-preview` (was `gemini-3-pro`, returns 404 from Gemini API)
 
  ### v0.2.16
  - **feat(T-101):** Expand test suite to 45 tests — new cases for `formatPrompt` (mixed roles, boundary values, system messages) and `routeToCliRunner` (gemini paths, edge cases)
  - **feat(T-103):** Add `DEFAULT_ALLOWED_CLI_MODELS` allowlist; `routeToCliRunner` now rejects unregistered models by default; pass `allowedModels: null` to opt out
 
  ### v0.2.15
- - **docs:** Rewrite changelog (entries for v0.2.12–v0.2.14 were corrupted by repeated sed version bumps); all providers verified working (Claude, Gemini, Codex)
- - **docs:** Update STATUS.md with end-to-end test results
+ - **docs:** Rewrite changelog (entries for v0.2.12–v0.2.14 were corrupted); all providers verified working end-to-end
 
  ### v0.2.14
- - **fix:** Strip `vllm/` prefix in `routeToCliRunner` — OpenClaw sends full provider path (`vllm/cli-claude/...`) but proxy router expected bare `cli-claude/...`; caused "Unknown CLI bridge model" on all requests
- - **test:** Add 4 routing tests covering both prefixed and non-prefixed model paths (9 tests total)
+ - **fix:** Strip `vllm/` prefix in `routeToCliRunner` — OpenClaw sends full provider path (`vllm/cli-claude/...`) but proxy router expected bare `cli-claude/...`
+ - **test:** Add 4 routing tests (9 total)
 
  ### v0.2.13
- - **fix:** Set `requireAuth: false` on all `/cli-*` commands — webchat senders were always blocked because plugin-level auth uses a different resolution path than `commands.allowFrom` config; gateway-level allowlist is the correct security layer
- - **fix:** Hardcoded `version: "0.2.5"` in plugin object (`index.ts`) — now tracks `package.json`
-
- ### v0.2.12
- - **docs:** Fix changelog continuity — v0.2.10 entry was lost, v0.2.11 description was wrong; all entries now accurate
-
- ### v0.2.11
- - **docs:** Fix README `Current version:** `0.2.18`
-
- ### v0.2.10
- - **docs:** Fix version labels — SKILL.md was showing 0.2.2, README changelog ended at v0.2.5; add entries for v0.2.6–v0.2.9
+ - **fix:** Set `requireAuth: false` on all `/cli-*` commands — plugin-level auth uses different resolution path than `commands.allowFrom`; gateway allowlist is the correct security layer
+ - **fix:** Hardcoded `version: "0.2.5"` in plugin object now tracks `package.json`
 
  ### v0.2.9
- - **fix:** Critical — replace `fuser -k 31337/tcp` with safe health probe (`GET /v1/models`)
- - Prevents gateway SIGKILL on in-process hot-reloads (systemd `status=9/KILL` was caused by `fuser` finding gateway itself holding the port)
- - If proxy responds → reuse it; if EADDRINUSE but no response → wait 1s, retry once
-
- ### v0.2.8
- - **fix:** EADDRINUSE on every gateway restart — `closeAllConnections()` + `registerService` stop() hook (partially; superseded by v0.2.9 health-probe approach)
+ - **fix:** Critical — replace `fuser -k 31337/tcp` with safe health probe to prevent gateway SIGKILL on hot-reloads
 
- ### v0.2.7
- - **fix:** Port leak on gateway hot-reload — added `registerService` stop() callback to close proxy server on plugin teardown
+ ### v0.2.7–v0.2.8
+ - **fix:** Port leak on hot-reload — `registerService` stop() hook + `closeAllConnections()`
 
  ### v0.2.6
- - **fix:** `openclaw.extensions` added to `package.json` (required for `openclaw plugins install`)
- - Config patcher: auto-adds vllm provider to `openclaw.json` on first startup
+ - **fix:** `openclaw.extensions` added to `package.json`; config patcher auto-adds vllm provider
 
  ### v0.2.5
- - **feat:** `/cli-codex` `openai-codex/gpt-5.3-codex`
- - **feat:** `/cli-codex-mini` → `openai-codex/gpt-5.1-codex-mini`
- - Codex commands use the `openai-codex` provider (Codex CLI OAuth auth, direct OpenAI API — not the local proxy)
+ - **feat:** `/cli-codex` + `/cli-codex-mini` (Codex OAuth provider, direct API)
 
  ### v0.2.4
- - **fix:** Gemini agentic mode — replaced `@file` with stdin delivery (`-p ""`) + `cwd=/tmp`
- - **fix:** Filter `[WARN]` and `Loaded cached credentials` noise from Gemini stderr
- - Added `RunCliOptions` interface with optional `cwd` field
+ - **fix:** Gemini agentic mode — stdin delivery + `cwd=/tmp`
 
  ### v0.2.3
- - **feat:** `/cli-back` restore previous model (state persisted in `~/.openclaw/cli-bridge-state.json`)
- - **feat:** `/cli-test [model]` — one-shot proxy health check without changing active model
+ - **feat:** `/cli-back` + `/cli-test`
 
  ### v0.2.2
- - **feat:** Phase 3 — `/cli-*` slash commands for instant model switching
- - All 6 model commands via `api.registerCommand` with `requireAuth: true`
+ - **feat:** Phase 3 — `/cli-*` slash commands
 
  ### v0.2.1
- - **fix:** `spawn E2BIG` `buildMinimalEnv()` instead of spreading full `process.env`
- - **feat:** Unit tests (`test/cli-runner.test.ts`)
+ - **fix:** `spawn E2BIG` + unit tests
 
  ### v0.2.0
- - **feat:** Phase 2 — local OpenAI-compatible proxy server
- - Stdin prompt delivery, `MAX_MESSAGES=20` + `MAX_MSG_CHARS=4000` truncation
- - Auto-patch of `openclaw.json` vllm provider config
+ - **feat:** Phase 2 — local OpenAI-compatible proxy, stdin delivery, prompt truncation
 
  ### v0.1.x
  - Phase 1: Codex CLI OAuth auth bridge
package/SKILL.md CHANGED
@@ -53,4 +53,4 @@ Each command runs `openclaw models set <model>` atomically and replies with a co
 
  See `README.md` for full configuration reference and architecture diagram.
 
- **Version:** 0.2.18
+ **Version:** 0.2.20
package/index.ts CHANGED
@@ -20,6 +20,7 @@
  * /cli-codex-mini → openai-codex/gpt-5.1-codex-mini (Codex CLI OAuth, direct API)
  * /cli-back → restore model that was active before last /cli-* switch
  * /cli-test [model] → one-shot proxy health check (does NOT switch global model)
+ * /cli-list → list all registered CLI bridge models with commands
  *
  * Provider / model naming:
  * vllm/cli-gemini/gemini-2.5-pro → `gemini -m gemini-2.5-pro @<tmpfile>`
@@ -575,10 +576,51 @@ const plugin = {
    },
  } satisfies OpenClawPluginCommandDefinition);
 
+ // ── Phase 3d: /cli-list — formatted model overview ────────────────────────
+ api.registerCommand({
+   name: "cli-list",
+   description: "List all registered CLI bridge models and their commands.",
+   requireAuth: false,
+   handler: async (): Promise<PluginCommandResult> => {
+     const groups: Record<string, { cmd: string; model: string }[]> = {
+       "Claude Code CLI": [],
+       "Gemini CLI": [],
+       "Codex (OAuth)": [],
+     };
+
+     for (const c of CLI_MODEL_COMMANDS) {
+       const entry = { cmd: `/${c.name}`, model: c.model };
+       if (c.model.startsWith("vllm/cli-claude/")) groups["Claude Code CLI"].push(entry);
+       else if (c.model.startsWith("vllm/cli-gemini/")) groups["Gemini CLI"].push(entry);
+       else groups["Codex (OAuth)"].push(entry);
+     }
+
+     const lines: string[] = ["🤖 *CLI Bridge Models*", ""];
+     for (const [group, entries] of Object.entries(groups)) {
+       if (entries.length === 0) continue;
+       lines.push(`*${group}*`);
+       for (const { cmd, model } of entries) {
+         const modelId = model.replace(/^vllm\/cli-(claude|gemini)\//, "").replace(/^openai-codex\//, "");
+         lines.push(`  ${cmd.padEnd(20)} ${modelId}`);
+       }
+       lines.push("");
+     }
+     lines.push("*Utility*");
+     lines.push("  /cli-back            Restore previous model");
+     lines.push("  /cli-test [model]    Health check (no model switch)");
+     lines.push("  /cli-list            This overview");
+     lines.push("");
+     lines.push(`Proxy: \`127.0.0.1:${port}\``);
+
+     return { text: lines.join("\n") };
+   },
+ } satisfies OpenClawPluginCommandDefinition);
+
  const allCommands = [
    ...CLI_MODEL_COMMANDS.map((c) => `/${c.name}`),
    "/cli-back",
    "/cli-test",
+   "/cli-list",
  ];
  api.logger.info(`[cli-bridge] registered ${allCommands.length} commands: ${allCommands.join(", ")}`);
  },
@@ -1,7 +1,7 @@
  {
    "id": "openclaw-cli-bridge-elvatis",
    "name": "OpenClaw CLI Bridge",
-   "version": "0.2.18",
+   "version": "0.2.20",
    "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
    "providers": [
      "openai-codex"
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-   "version": "0.2.18",
+   "version": "0.2.20",
    "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
    "type": "module",
    "openclaw": {
package/src/cli-runner.ts CHANGED
@@ -24,9 +24,15 @@ const MAX_MSG_CHARS = 4000;
  // Message formatting
  // ──────────────────────────────────────────────────────────────────────────────
 
+ export interface ContentPart {
+   type: string;
+   text?: string;
+ }
+
  export interface ChatMessage {
    role: "system" | "user" | "assistant";
-   content: string;
+   /** Plain string or OpenAI-style content array (multimodal / structured). */
+   content: string | ContentPart[] | unknown;
  }
 
  /**
@@ -61,7 +67,30 @@ export function formatPrompt(messages: ChatMessage[]): string {
      .join("\n\n");
  }
 
- function truncateContent(s: string): string {
+ /**
+  * Coerce any message content value to a plain string.
+  *
+  * Handles:
+  * - string → as-is
+  * - ContentPart[] → join text parts (OpenAI multimodal format)
+  * - other object → JSON.stringify (prevents "[object Object]" from reaching the CLI)
+  * - null/undefined → ""
+  */
+ function contentToString(content: unknown): string {
+   if (typeof content === "string") return content;
+   if (content === null || content === undefined) return "";
+   if (Array.isArray(content)) {
+     return (content as ContentPart[])
+       .filter((c) => c?.type === "text" && typeof c.text === "string")
+       .map((c) => c.text!)
+       .join("\n");
+   }
+   if (typeof content === "object") return JSON.stringify(content);
+   return String(content);
+ }
+
+ function truncateContent(raw: unknown): string {
+   const s = contentToString(raw);
    if (s.length <= MAX_MSG_CHARS) return s;
    return s.slice(0, MAX_MSG_CHARS) + `\n...[truncated ${s.length - MAX_MSG_CHARS} chars]`;
  }
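For experimenting outside the plugin, the coercion helper added above can be mirrored standalone (logic copied from the hunk; only the interface is simplified):

```typescript
interface ContentPart {
  type: string;
  text?: string;
}

// Standalone mirror of the contentToString() helper added in the hunk above:
// strings pass through, OpenAI-style text parts are joined, other objects are
// JSON-stringified so "[object Object]" never reaches a CLI.
function contentToString(content: unknown): string {
  if (typeof content === "string") return content;
  if (content === null || content === undefined) return "";
  if (Array.isArray(content)) {
    return (content as ContentPart[])
      .filter((c) => c?.type === "text" && typeof c.text === "string")
      .map((c) => c.text as string)
      .join("\n");
  }
  if (typeof content === "object") return JSON.stringify(content);
  return String(content);
}
```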
@@ -99,6 +99,60 @@ describe("formatPrompt", () => {
    expect(result).toContain("[System]");
    expect(result).toContain("[User]");
  });
+
+ // contentToString coercion tests (fix: [object Object] in WhatsApp group messages)
+ it("coerces ContentPart array to plain text", () => {
+   const result = formatPrompt([
+     { role: "user", content: [{ type: "text", text: "Hello from WA group" }] },
+   ]);
+   expect(result).toBe("Hello from WA group");
+   expect(result).not.toContain("[object Object]");
+ });
+
+ it("joins multiple text ContentParts with newline", () => {
+   const result = formatPrompt([
+     {
+       role: "user",
+       content: [
+         { type: "text", text: "Part one" },
+         { type: "text", text: "Part two" },
+       ],
+     },
+   ]);
+   expect(result).toContain("Part one");
+   expect(result).toContain("Part two");
+ });
+
+ it("ignores non-text ContentParts (e.g. image)", () => {
+   const result = formatPrompt([
+     {
+       role: "user",
+       content: [
+         { type: "image_url", url: "https://example.com/img.png" },
+         { type: "text", text: "describe this" },
+       ],
+     },
+   ]);
+   expect(result).toBe("describe this");
+ });
+
+ it("coerces plain object content to JSON string (not [object Object])", () => {
+   const result = formatPrompt([
+     { role: "user", content: { text: "structured", extra: 42 } as any },
+   ]);
+   expect(result).not.toBe("[object Object]");
+   expect(result).toContain("structured");
+ });
+
+ it("handles null content gracefully", () => {
+   const result = formatPrompt([{ role: "user", content: null as any }]);
+   expect(result).toBe("");
+ });
+
+ it("handles undefined content gracefully", () => {
+   const result = formatPrompt([{ role: "user", content: undefined as any }]);
+   expect(result).toBe("");
+ });
  });
 
  // ──────────────────────────────────────────────────────────────────────────────