@elvatis_com/openclaw-cli-bridge-elvatis 1.6.2 → 1.6.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.ai/handoff/STATUS.md +30 -31
- package/README.md +14 -4
- package/SKILL.md +1 -1
- package/index.ts +38 -10
- package/openclaw.plugin.json +1 -1
- package/package.json +1 -1
- package/src/chatgpt-browser.ts +9 -1
- package/src/cli-runner.ts +17 -2
- package/src/proxy-server.ts +31 -7
- package/test/chatgpt-proxy.test.ts +5 -3
package/.ai/handoff/STATUS.md
CHANGED

@@ -1,46 +1,45 @@
 # STATUS — openclaw-cli-bridge-elvatis
 
-## Current Version: 1.
+## Current Version: 1.6.3
 
-## All 4 Providers Available —
-| Provider | Status | Models |
-|
-| Grok | ✅ | web-grok/grok-3, grok-3-fast, grok-3-mini, grok-3-mini-fast | /grok-login |
-|
-|
-| ChatGPT |
+## All 4 Providers Available — persistent Chromium profiles
+| Provider | Status | Models | Login Cmd | Cookie Expiry |
+|---|---|---|---|---|
+| Grok | ✅ | web-grok/grok-3, grok-3-fast, grok-3-mini, grok-3-mini-fast | /grok-login | ~178d |
+| Gemini | ✅ | web-gemini/gemini-2-5-pro, gemini-2-5-flash, gemini-3-pro, gemini-3-flash | /gemini-login | ~398d |
+| Claude.ai | ⚠️ expired | web-claude/claude-sonnet, claude-opus, claude-haiku | /claude-login | EXPIRED |
+| ChatGPT | ⚠️ expiring | web-chatgpt/gpt-4o, gpt-4o-mini, gpt-4.1, o3, o4-mini, gpt-5, gpt-5-mini | /chatgpt-login | ~6d |
 
 ## Stats
-- 22 total models
+- 22 total models (6 CLI + 16 web-session)
 - 96/96 tests green (8 test files)
--
--
-- Singleton guard on ensureAllProviderContexts (no concurrent spawns)
-- Persistent Chromium fallback for all 4 providers (no CDP dependency)
+- All 4 providers use launchPersistentContext — sessions survive gateway restarts
+- /bridge-status shows cookie-based status (independent of in-memory context)
 
 ## Architecture: Browser Lifecycle
-- **
-- **On
-- **
-- **On request (no
-- **On /xxx-logout:** closes context + deletes
-
-
--
--
+- **Profile dirs:** `~/.openclaw/{grok,gemini,claude,chatgpt}-profile/`
+- **On plugin start:** startup restore attempts headless reconnect from saved profiles (5s delay)
+- **On /xxx-login:** headed browser, user logs in, cookies baked to profile automatically
+- **On request (no in-memory ctx):** proxy lazy-launches persistent context on first request
+- **On /xxx-logout:** closes context + deletes profile + clears expiry file
+- **bridge-status:** uses cookie expiry files as source of truth (not in-memory state)
+  - ✅ active — browser connected and verified
+  - 🟡 logged in, browser not loaded — cookies valid, lazy-loads on first request
+  - 🔴 session expired — needs /xxx-login
+  - ⚪ never logged in
 
 ## Release History
+- v1.6.1 (2026-03-13): Fix /bridge-status — use cookie expiry as source of truth, not in-memory context
+- v1.6.0 (2026-03-13): Persistent Chromium profiles for all 4 providers (Claude web + ChatGPT)
+- v1.5.1 (2026-03-12): Fix hardcoded plugin version
+- v1.5.0 (2026-03-12): Remove /claude-login and /chatgpt-login (pre-v1.6.0 interim)
 - v1.4.0 (2026-03-12): Persistent browser fallback for Claude/Gemini/ChatGPT (no CDP required)
-- v1.3.5 (2026-03-12): Startup restore guard
-- v1.3.4 (2026-03-12): Safe sequential session restore from saved profiles
-- v1.3.3 (2026-03-12): Remove startup auto-connect - browsers on-demand only (OOM fix)
-- v1.3.2 (2026-03-12): Singleton guard on ensureAllProviderContexts (resource leak fix)
-- v1.3.1 (2026-03-11): Cookie baking into persistent profiles on login
+- v1.3.5 (2026-03-12): Startup restore guard (SIGUSR1 OOM fix)
 - v1.3.0 (2026-03-11): Browser auto-reconnect after gateway restart
-- v1.
-- v1.1.0 (2026-03-11): Auto-connect on startup + /bridge-status
-- v1.0.0 (2026-03-11): All 4 providers headless (Grok/Claude/Gemini/ChatGPT) - 96/96 tests
+- v1.0.0 (2026-03-11): All 4 providers headless — 96/96 tests
 
 ## Next Steps
+- /claude-login needs to be run (session expired)
+- /chatgpt-login needs to be run in ~6 days
+- Gemini model switching via UI (2.5 Pro vs Flash vs 3)
 - Context-window management for long conversations
-- Gemini model switching (2.5 Pro vs Flash vs 3) via UI
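The bridge-status legend above reduces to a small classification over (live health check, cookie expiry). A minimal sketch, assuming an expiry timestamp per provider — all names below are illustrative, not the plugin's actual API:

```typescript
// Hypothetical sketch of the /bridge-status legend: cookie expiry files are the
// source of truth; a verified in-memory browser context only upgrades the
// result to "active".
type BridgeStatus = "✅ active" | "🟡 lazy" | "🔴 expired" | "⚪ never";

function classifyProvider(
  liveOk: boolean,          // in-memory context present AND health check passed
  expiryMs: number | null,  // cookie expiry timestamp; null = no cookie file
  now: number = Date.now(),
): BridgeStatus {
  if (liveOk) return "✅ active";            // browser connected and verified
  if (expiryMs === null) return "⚪ never";  // never logged in
  if (expiryMs <= now) return "🔴 expired"; // needs /xxx-login
  return "🟡 lazy";                          // cookies valid, lazy-loads on first request
}
```

The key design point, per the v1.6.1 entry below: the in-memory context never *downgrades* the status, so a restarted gateway with valid cookies reports 🟡 rather than "logged out".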
package/README.md
CHANGED

@@ -2,7 +2,7 @@
 
 > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching, restore, health testing, and model listing.
 
-**Current version:** `1.6.
+**Current version:** `1.6.4`
 
 ---
 
@@ -135,9 +135,11 @@ Routes requests through real browser sessions on the provider's web UI. Requires
 |---|---|
 | `web-chatgpt/gpt-4o` | GPT-4o |
 | `web-chatgpt/gpt-4o-mini` | GPT-4o Mini |
-| `web-chatgpt/gpt-
-| `web-chatgpt/
+| `web-chatgpt/gpt-4.1` | GPT-4.1 |
+| `web-chatgpt/o3` | o3 |
+| `web-chatgpt/o4-mini` | o4-mini |
 | `web-chatgpt/gpt-5` | GPT-5 |
+| `web-chatgpt/gpt-5-mini` | GPT-5 Mini |
 
 | Command | What it does |
 |---|---|
@@ -360,6 +362,14 @@ npm test # vitest run (83 tests)
 
 ## Changelog
 
+### v1.6.4
+- **chore:** version bump (1.6.3 was already published on npm with partial changes)
+
+### v1.6.3
+- **fix:** CLI-proxy models (`cli-gemini/*`, `cli-claude/*`) now return HTTP 400 with `tools_not_supported` when a request includes tool/function call schemas — prevents agents from silently failing or hallucinating when assigned a CLI-proxy model
+- **feat:** `/v1/models` response includes `capabilities.tools: false` for CLI-proxy models so OpenClaw can detect tool support upfront
+- **fix:** EADDRINUSE on hot-reload: re-probe after 1.5s wait before retrying bind; probe timeout 800ms → 2000ms
+
 ### v1.6.2
 - **docs:** Add missing changelog entries (v1.5.1, v1.6.0, v1.6.1), fix /cli-codex54 command name in SKILL.md, add startup re-login alert description to SKILL.md.
 
@@ -458,7 +468,7 @@ No CLI binaries required — just authenticated browser sessions.
 
 ## v1.0.0
 - **feat:** `chatgpt-browser.ts` — chatgpt.com DOM-automation (`#prompt-textarea` + `[data-message-author-role]`)
-- **feat:** `web-chatgpt/*` models: gpt-4o, gpt-4o-mini, gpt-o3,
+- **feat:** `web-chatgpt/*` models: gpt-4o, gpt-4o-mini, gpt-4.1, o3, o4-mini, gpt-5, gpt-5-mini (updated in v1.6.3)
 - **feat:** `/chatgpt-login`, `/chatgpt-status`, `/chatgpt-logout` + cookie-expiry tracking
 - **feat:** All 4 providers headless: Grok ✅ Claude ✅ Gemini ✅ ChatGPT ✅
 - **test:** 96/83 tests green (8 test files)
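The v1.6.3 `capabilities.tools: false` flag lets a client filter `/v1/models` before assigning models to tool-using agents. A minimal sketch — only `id` and `capabilities.tools` come from the changelog above; the rest of the entry shape is an assumption:

```typescript
// Sketch: narrow a /v1/models listing down to tool-capable model ids.
// Entries without an explicit `tools: false` are treated as tool-capable
// (assumption: native API models omit or set the flag to true).
interface ModelEntry {
  id: string;
  capabilities?: { tools?: boolean };
}

function toolCapableIds(models: ModelEntry[]): string[] {
  return models.filter((m) => m.capabilities?.tools !== false).map((m) => m.id);
}
```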
package/SKILL.md
CHANGED
package/index.ts
CHANGED

@@ -854,7 +854,7 @@ function proxyTestRequest(
 const plugin = {
   id: "openclaw-cli-bridge-elvatis",
   name: "OpenClaw CLI Bridge",
-  version: "1.6.
+  version: "1.6.4",
   description:
     "Phase 1: openai-codex auth bridge. " +
     "Phase 2: HTTP proxy for gemini/claude CLIs. " +
@@ -1067,7 +1067,7 @@ const plugin = {
       headers: { Authorization: `Bearer ${apiKey}` } },
     (res) => { res.resume(); resolve(res.statusCode === 200); }
   );
-  req.setTimeout(
+  req.setTimeout(2000, () => { req.destroy(); resolve(false); });
   req.on("error", () => resolve(false));
   req.end();
 });
@@ -1144,8 +1144,17 @@ const plugin = {
 } catch (err: unknown) {
   const msg = (err as Error).message ?? String(err);
   if (msg.includes("EADDRINUSE")) {
-    // Port is busy but probe didn't respond —
-
+    // Port is busy but probe didn't respond — maybe the old server is still shutting down.
+    // Re-probe first: if it now responds, reuse it without rebinding.
+    api.logger.warn(`[cli-bridge] port ${port} busy — re-probing before retry…`);
+    await new Promise(r => setTimeout(r, 1500));
+    const aliveNow = await probeExisting();
+    if (aliveNow) {
+      api.logger.info(`[cli-bridge] proxy now responding on :${port} — reusing`);
+      return;
+    }
+    // Still not responding — wait for OS to release the port, then rebind
+    api.logger.warn(`[cli-bridge] port ${port} still busy, waiting 1s for OS release…`);
     await new Promise((r) => setTimeout(r, 1000));
     // One final attempt
     try {
@@ -1975,22 +1984,41 @@ const plugin = {
       return page.locator("#prompt-textarea").isVisible().catch(() => false);
     } catch { chatgptContext = null; return false; }
   },
-  models: "web-chatgpt/gpt-4o, gpt-4o-mini, gpt-o3,
+  models: "web-chatgpt/gpt-4o, gpt-4o-mini, gpt-4.1, o3, o4-mini, gpt-5, gpt-5-mini",
   loginCmd: "/chatgpt-login",
   expiry: () => { const e = loadChatGPTExpiry(); return e ? formatChatGPTExpiry(e) : null; },
 },
 ];
 
 for (const c of checks) {
-  const active = c.ctx !== null;
-  const ok = active ? await c.check() : false;
   const expiry = c.expiry();
-
-
+  const inMemory = c.ctx !== null;
+  const liveOk = inMemory ? await c.check() : false;
+
+  // Cookie-based status: treat as "logged in" if expiry file exists and not expired
+  // ⚠️ EXPIRED = truly expired, 🚨 = expiring soon (still valid), ✅ = fine
+  // This reflects actual login state independent of in-memory context
+  const cookieExpired = expiry !== null && expiry.startsWith("⚠️ EXPIRED");
+  const cookieValid = expiry !== null && !cookieExpired;
+
+  if (liveOk) {
+    // In-memory context active and verified
+    lines.push(`✅ *${c.name}* — active (browser connected)`);
     if (expiry) lines.push(` 🕐 ${expiry}`);
     lines.push(` Models: ${c.models}`);
+  } else if (cookieValid) {
+    // Not in memory yet, but cookies are valid — will auto-connect on next request
+    lines.push(`🟡 *${c.name}* — logged in, browser not loaded`);
+    lines.push(` 🕐 ${expiry}`);
+    lines.push(` Models: ${c.models}`);
+    lines.push(` ℹ️ Browser launches on first request`);
+  } else if (cookieExpired) {
+    // Cookies expired — needs re-login
+    lines.push(`🔴 *${c.name}* — session expired (run \`${c.loginCmd}\`)`);
+    if (expiry) lines.push(` 🕐 ${expiry}`);
   } else {
-
+    // No cookie file at all — never logged in
+    lines.push(`⚪ *${c.name}* — never logged in (run \`${c.loginCmd}\`)`);
   }
   lines.push("");
 }
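The EADDRINUSE handling above is a re-probe-before-rebind pattern. A simplified, dependency-injected sketch — `bind`, `probe`, and `sleep` are stand-ins for the plugin's internals, not its actual functions:

```typescript
// Simplified sketch of the re-probe-before-rebind flow from the hunk above:
// on EADDRINUSE, wait, probe the existing listener, and only rebind if the
// probe still fails. Dependencies are injected so the flow is testable.
async function bindWithReprobe(
  bind: () => Promise<void>,
  probe: () => Promise<boolean>,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<"bound" | "reused"> {
  try {
    await bind();
    return "bound";
  } catch (err) {
    if (!String(err).includes("EADDRINUSE")) throw err;
    await sleep(1500);                   // old server may still be shutting down
    if (await probe()) return "reused";  // it answered — reuse instead of rebinding
    await sleep(1000);                   // wait for the OS to release the port
    await bind();                        // one final attempt
    return "bound";
  }
}
```

The point of the re-probe is hot-reload safety: if the previous plugin instance's proxy is still alive, a second bind would fail forever, so responding-to-probe wins over owning the socket.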
package/openclaw.plugin.json
CHANGED

@@ -1,7 +1,7 @@
 {
   "id": "openclaw-cli-bridge-elvatis",
   "name": "OpenClaw CLI Bridge",
-  "version": "1.6.
+  "version": "1.6.4",
   "license": "MIT",
   "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
   "providers": [
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-  "version": "1.6.
+  "version": "1.6.4",
   "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
   "type": "module",
   "openclaw": {
package/src/chatgpt-browser.ts
CHANGED

@@ -38,18 +38,26 @@ const CHATGPT_HOME = "https://chatgpt.com";
 const MODEL_URLS: Record<string, string> = {
   "gpt-4o": "https://chatgpt.com/?model=gpt-4o",
   "gpt-4o-mini": "https://chatgpt.com/?model=gpt-4o-mini",
+  "gpt-4.1": "https://chatgpt.com/?model=gpt-4.1",
   "o3": "https://chatgpt.com/?model=o3",
   "o4-mini": "https://chatgpt.com/?model=o4-mini",
   "gpt-5": "https://chatgpt.com/?model=gpt-5",
+  "gpt-5-mini": "https://chatgpt.com/?model=gpt-5-mini",
 };
 
 const MODEL_MAP: Record<string, string> = {
+  // canonical names
   "gpt-4o": "gpt-4o",
   "gpt-4o-mini": "gpt-4o-mini",
+  "gpt-4.1": "gpt-4.1",
+  "o3": "o3",
+  "o4-mini": "o4-mini",
+  "gpt-5": "gpt-5",
+  "gpt-5-mini": "gpt-5-mini",
+  // aliases (web-chatgpt/* prefix stripped by proxy)
   "gpt-o3": "o3",
   "gpt-o4-mini": "o4-mini",
   "gpt-4-1": "gpt-4.1",
-  "gpt-5": "gpt-5",
 };
 
 function resolveModel(m?: string): string {
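The `MODEL_MAP` change keeps legacy `gpt-o3`-style aliases working alongside the new canonical ids. The body of `resolveModel` is not part of this diff, but the map implies a lookup-with-passthrough along these lines — a hypothetical sketch, including the assumed default model:

```typescript
// Illustrative sketch only — the real resolveModel body is not shown in the
// diff; this reproduces the behavior MODEL_MAP implies. The "gpt-5" default
// and the prefix-stripping regex are assumptions.
const MODEL_MAP: Record<string, string> = {
  "gpt-4o": "gpt-4o",
  "gpt-5": "gpt-5",
  // legacy aliases
  "gpt-o3": "o3",
  "gpt-o4-mini": "o4-mini",
  "gpt-4-1": "gpt-4.1",
};

function resolveModelSketch(m?: string): string {
  const key = (m ?? "gpt-5").replace(/^web-chatgpt\//, "");
  return MODEL_MAP[key] ?? key; // unknown ids pass through unchanged
}
```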
package/src/cli-runner.ts
CHANGED

@@ -316,8 +316,20 @@ export const DEFAULT_ALLOWED_CLI_MODELS: ReadonlySet<string> = new Set([
   "cli-gemini/gemini-2.5-flash",
   "cli-gemini/gemini-3-pro-preview",
   "cli-gemini/gemini-3-flash-preview",
+  // Aliases (map to preview variants internally)
+  "cli-gemini/gemini-3-pro",   // alias → gemini-3-pro-preview
+  "cli-gemini/gemini-3-flash", // alias → gemini-3-flash-preview
 ]);
 
+/** Normalize model aliases to their canonical CLI model names. */
+function normalizeModelAlias(normalized: string): string {
+  const ALIASES: Record<string, string> = {
+    "cli-gemini/gemini-3-pro": "cli-gemini/gemini-3-pro-preview",
+    "cli-gemini/gemini-3-flash": "cli-gemini/gemini-3-flash-preview",
+  };
+  return ALIASES[normalized] ?? normalized;
+}
+
 // ──────────────────────────────────────────────────────────────────────────────
 // Router
 // ──────────────────────────────────────────────────────────────────────────────
@@ -364,8 +376,11 @@ export async function routeToCliRunner(
   );
 }
 
-
-
+// Resolve aliases (e.g. gemini-3-pro → gemini-3-pro-preview) after allowlist check
+const resolved = normalizeModelAlias(normalized);
+
+if (resolved.startsWith("cli-gemini/")) return runGemini(prompt, resolved, timeoutMs);
+if (resolved.startsWith("cli-claude/")) return runClaude(prompt, resolved, timeoutMs);
 
 throw new Error(
   `Unknown CLI bridge model: "${model}". Use "vllm/cli-gemini/<model>" or "vllm/cli-claude/<model>".`
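Note that alias normalization runs after the allowlist check, which is why the alias ids themselves are added to `DEFAULT_ALLOWED_CLI_MODELS`. The function's behavior in isolation (copied from the hunk above):

```typescript
// normalizeModelAlias as added in this diff — aliases map to the preview
// variants, everything else passes through unchanged.
function normalizeModelAlias(normalized: string): string {
  const ALIASES: Record<string, string> = {
    "cli-gemini/gemini-3-pro": "cli-gemini/gemini-3-pro-preview",
    "cli-gemini/gemini-3-flash": "cli-gemini/gemini-3-flash-preview",
  };
  return ALIASES[normalized] ?? normalized;
}
```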
package/src/proxy-server.ts
CHANGED

@@ -72,6 +72,7 @@ export const CLI_MODELS = [
   { id: "cli-gemini/gemini-2.5-pro",      name: "Gemini 2.5 Pro (CLI)",   contextWindow: 1_000_000, maxTokens: 8_192 },
   { id: "cli-gemini/gemini-2.5-flash",    name: "Gemini 2.5 Flash (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
   { id: "cli-gemini/gemini-3-pro-preview",name: "Gemini 3 Pro (CLI)",     contextWindow: 1_000_000, maxTokens: 8_192 },
+  { id: "cli-gemini/gemini-3-pro",        name: "Gemini 3 Pro (CLI, alias)", contextWindow: 1_000_000, maxTokens: 8_192 },
   // Grok web-session models (requires /grok-login)
   { id: "web-grok/grok-3",      name: "Grok 3 (web session)",      contextWindow: 131_072, maxTokens: 131_072 },
   { id: "web-grok/grok-3-fast", name: "Grok 3 Fast (web session)", contextWindow: 131_072, maxTokens: 131_072 },
@@ -87,11 +88,13 @@ export const CLI_MODELS = [
   { id: "web-claude/claude-opus",  name: "Claude Opus (web session)",  contextWindow: 200_000, maxTokens: 8192 },
   { id: "web-claude/claude-haiku", name: "Claude Haiku (web session)", contextWindow: 200_000, maxTokens: 8192 },
   // ChatGPT web-session models (requires /chatgpt-login)
-  { id: "web-chatgpt/gpt-4o",      name: "GPT-4o (web session)",      contextWindow: 128_000, maxTokens:
-  { id: "web-chatgpt/gpt-4o-mini", name: "GPT-4o Mini (web session)", contextWindow: 128_000, maxTokens:
-  { id: "web-chatgpt/gpt-
-  { id: "web-chatgpt/
-  { id: "web-chatgpt/
+  { id: "web-chatgpt/gpt-4o",      name: "GPT-4o (web session)",      contextWindow: 128_000,   maxTokens: 16_384 },
+  { id: "web-chatgpt/gpt-4o-mini", name: "GPT-4o Mini (web session)", contextWindow: 128_000,   maxTokens: 16_384 },
+  { id: "web-chatgpt/gpt-4.1",     name: "GPT-4.1 (web session)",     contextWindow: 1_047_576, maxTokens: 32_768 },
+  { id: "web-chatgpt/o3",          name: "o3 (web session)",          contextWindow: 200_000,   maxTokens: 100_000 },
+  { id: "web-chatgpt/o4-mini",     name: "o4-mini (web session)",     contextWindow: 200_000,   maxTokens: 100_000 },
+  { id: "web-chatgpt/gpt-5",       name: "GPT-5 (web session)",       contextWindow: 1_047_576, maxTokens: 32_768 },
+  { id: "web-chatgpt/gpt-5-mini",  name: "GPT-5 Mini (web session)",  contextWindow: 1_047_576, maxTokens: 32_768 },
 ];
 
 // ──────────────────────────────────────────────────────────────────────────────
@@ -165,6 +168,10 @@ async function handleRequest(
   object: "model",
   created: now,
   owned_by: "openclaw-cli-bridge",
+  // CLI-proxy models stream plain text — no tool/function call support
+  capabilities: {
+    tools: !(m.id.startsWith("cli-gemini/") || m.id.startsWith("cli-claude/")),
+  },
 })),
 })
 );
@@ -199,7 +206,8 @@ async function handleRequest(
   return;
 }
 
-const { model, messages, stream = false } = parsed;
+const { model, messages, stream = false } = parsed as { model: string; messages: ChatMessage[]; stream?: boolean; tools?: unknown };
+const hasTools = Array.isArray((parsed as { tools?: unknown }).tools) && (parsed as { tools?: unknown[] }).tools!.length > 0;
 
 if (!model || !messages?.length) {
   res.writeHead(400, { "Content-Type": "application/json" });
@@ -207,7 +215,23 @@ async function handleRequest(
   return;
 }
 
-
+// CLI-proxy models (cli-gemini/*, cli-claude/*) are plain text completions —
+// they cannot process tool/function call schemas. Return a clear 400 so
+// OpenClaw can surface a meaningful error instead of getting a garbled response.
+const isCliModel = model.startsWith("cli-gemini/") || model.startsWith("cli-claude/");
+if (hasTools && isCliModel) {
+  res.writeHead(400, { "Content-Type": "application/json" });
+  res.end(JSON.stringify({
+    error: {
+      message: `Model ${model} does not support tool/function calls. Use a native API model (e.g. github-copilot/gpt-5-mini) for agents that need tools.`,
+      type: "invalid_request_error",
+      code: "tools_not_supported",
+    }
+  }));
+  return;
+}
+
+opts.log(`[cli-bridge] ${model} · ${messages.length} msg(s) · stream=${stream}${hasTools ? " · tools=unsupported→rejected" : ""}`);
 
 const id = `chatcmpl-cli-${randomBytes(6).toString("hex")}`;
 const created = Math.floor(Date.now() / 1000);
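The v1.6.3 tool-rejection guard above reduces to a small predicate over the request, sketched here in isolation (the request shape is the OpenAI-compatible one the proxy already parses):

```typescript
// Predicate behind the tools_not_supported 400 above, extracted for clarity:
// reject only when the request actually carries tools AND targets a CLI-proxy
// model; web-session and native models are unaffected.
function shouldRejectTools(model: string, tools?: unknown): boolean {
  const hasTools = Array.isArray(tools) && tools.length > 0;
  const isCliModel = model.startsWith("cli-gemini/") || model.startsWith("cli-claude/");
  return hasTools && isCliModel;
}
```

An empty `tools: []` array deliberately does not trigger the 400, matching the `tools.length > 0` check in the diff.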
package/test/chatgpt-proxy.test.ts
CHANGED

@@ -81,14 +81,16 @@ describe("ChatGPT web-session routing — model list", () => {
   const ids = (res.body as { data: { id: string }[] }).data.map(m => m.id);
   expect(ids).toContain("web-chatgpt/gpt-4o");
   expect(ids).toContain("web-chatgpt/gpt-4o-mini");
-  expect(ids).toContain("web-chatgpt/gpt-
-  expect(ids).toContain("web-chatgpt/
+  expect(ids).toContain("web-chatgpt/gpt-4.1");
+  expect(ids).toContain("web-chatgpt/o3");
+  expect(ids).toContain("web-chatgpt/o4-mini");
   expect(ids).toContain("web-chatgpt/gpt-5");
+  expect(ids).toContain("web-chatgpt/gpt-5-mini");
 });
 
 it("web-chatgpt/* models listed in CLI_MODELS constant", () => {
   const chatgpt = CLI_MODELS.filter(m => m.id.startsWith("web-chatgpt/"));
-  expect(chatgpt).toHaveLength(
+  expect(chatgpt).toHaveLength(7);
 });
 });