@elvatis_com/openclaw-cli-bridge-elvatis 0.2.30 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.ai/handoff/STATUS.md +33 -52
- package/README.md +28 -1
- package/SKILL.md +1 -1
- package/index.ts +263 -79
- package/openclaw.plugin.json +1 -1
- package/package.json +1 -1
- package/src/chatgpt-browser.ts +253 -0
- package/src/proxy-server.ts +60 -36
- package/test/chatgpt-proxy.test.ts +107 -0
package/.ai/handoff/STATUS.md
CHANGED

```diff
@@ -1,54 +1,35 @@
 # STATUS — openclaw-cli-bridge-elvatis

-## Current Version: 0.
-##
-   ⋮ (removed lines 4–35 truncated in the registry diff view)
-4. Add claude-browser integration tests (DI-override, same as grok-proxy.test.ts)
-5. Repeat for Gemini (gemini.google.com) and ChatGPT (chatgpt.com)
-6. Bump to v1.0.0 when all 4 providers green + all tests pass
-
-## Key Files
-- src/claude-browser.ts — Claude DOM automation (ready, not wired)
-- src/grok-client.ts — reference implementation
-- src/grok-session.ts — reference for login/session management
-- src/proxy-server.ts — add web-claude/* routing here
-- index.ts — add /claude-login here
-- test/claude-browser.test.ts — unit tests (partial, needs proxy integration test)
-
-## Constraints
-- OpenClaw browser (CDP 18800) required for Cloudflare bypass
-- persistent profile approach fails (fingerprint mismatch)
-- Each provider: own profile dir ~/.openclaw/<provider>-profile/
-- All providers share same proxy port 31337
-- Publish only after full test pass (77+ tests green)
-- All 3 platforms on every release: GitHub + npm + ClawHub
+## Current Version: 1.0.0 (npm + ClawHub + GitHub) ✅ RELEASED
+
+## All 4 Providers LIVE — Tested 2026-03-11 22:24
+| Provider | Status | Models | Command |
+|---|---|---|---|
+| Grok | ✅ | web-grok/grok-3, grok-3-fast, grok-3-mini, grok-3-mini-fast | /grok-login |
+| Claude | ✅ | web-claude/claude-sonnet, claude-opus, claude-haiku | /claude-login |
+| Gemini | ✅ | web-gemini/gemini-2-5-pro, gemini-2-5-flash, gemini-3-pro, gemini-3-flash | /gemini-login |
+| ChatGPT | ✅ | web-chatgpt/gpt-4o, gpt-4o-mini, gpt-o3, gpt-o4-mini, gpt-5 | /chatgpt-login |
+
+Live test: "What is the capital of France?"
+- Grok: "Paris" ✅
+- Claude: "Paris" ✅
+- Gemini: "Paris" ✅
+- ChatGPT: "Paris" ✅
+
+## Stats
+- 22 total models, 16 web-session models
+- 96/96 tests green (8 test files)
+- 0 zombie Chromium processes (singleton CDP, cleanupBrowsers on stop)
+- Cookie expiry tracking for all 4 providers
+
+## Known Issue: Browser persistence after Gateway restart
+After SIGUSR1/full restart, OpenClaw browser is gone (CDP ECONNREFUSED).
+Workaround: manually open browser + 4 provider pages → lazy connect takes over.
+Fix needed: auto-start browser on plugin init, or keep-alive ping.
+
+## Next Steps (v1.1.x)
+- Auto-reconnect OpenClaw browser on plugin start
+- /status command showing all 4 providers at once
+- Context-window management for long conversations (new page per conversation)
+- Handle model-switching within chatgpt.com (dropdown selector)
+- Handle Gemini model switching (2.5 Pro vs Flash vs 3)
```
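The "Fix needed" line in STATUS.md proposes a keep-alive ping for the CDP connection. A minimal sketch, assuming the OpenClaw browser exposes the standard Chromium `/json/version` endpoint on port 18800 (the port named under the removed Constraints section); `shouldReconnect` and `startKeepAlive` are illustrative names, not part of the package:

```typescript
// Illustrative keep-alive sketch (not part of the package): ping the CDP
// endpoint periodically; after MAX_FAILURES consecutive misses, signal the
// caller to reconnect or relaunch the browser.
const CDP_VERSION_URL = "http://127.0.0.1:18800/json/version";
const MAX_FAILURES = 3;

function shouldReconnect(consecutiveFailures: number): boolean {
  // A single slow or dropped ping should not restart the browser;
  // act only once the threshold is crossed.
  return consecutiveFailures >= MAX_FAILURES;
}

function startKeepAlive(
  onReconnectNeeded: () => void,
  intervalMs = 30_000,
): ReturnType<typeof setInterval> {
  let failures = 0;
  return setInterval(async () => {
    try {
      const res = await fetch(CDP_VERSION_URL, { signal: AbortSignal.timeout(5_000) });
      failures = res.ok ? 0 : failures + 1;
    } catch {
      failures += 1;
    }
    if (shouldReconnect(failures)) {
      failures = 0;
      onReconnectNeeded(); // e.g. re-run the plugin's browser connect routine
    }
  }, intervalMs);
}
```

The threshold keeps a transient network hiccup from tearing down all four provider sessions at once.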
package/README.md
CHANGED

```diff
@@ -2,7 +2,7 @@
 > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching, restore, health testing, and model listing.

-**Current version:** `
+**Current version:** `1.1.0`

 ---

@@ -287,6 +287,33 @@ npm test # vitest run (45 tests)
 ## Changelog

+### v1.1.0
+- **feat:** Auto-connect all providers on startup (no manual login after restart if browser is open)
+- **feat:** `/bridge-status` — all 4 providers at a glance with expiry info
+- **fix:** Removed obsolete CLI models: gpt-5.2-codex, gpt-5.3-codex-spark, gpt-5.1-codex-mini, gemini-3-flash-preview
+- **fix:** Removed duplicate cli-gemini3-flash (was same as gemini-3-flash-preview)
+- **chore:** Cleaned up CLI_MODEL_COMMANDS (8 models, down from 13)
+
+### v1.0.0 — Full Headless Browser Bridge 🚀
+
+All four major LLM providers are now available via browser automation.
+No CLI binaries required — just authenticated browser sessions.
+
+- **feat:** `chatgpt-browser.ts` — chatgpt.com DOM-automation (`#prompt-textarea` + `[data-message-author-role]`)
+- **feat:** `web-chatgpt/*` models: gpt-4o, gpt-4o-mini, gpt-o3, gpt-o4-mini, gpt-5
+- **feat:** `/chatgpt-login`, `/chatgpt-status`, `/chatgpt-logout` + cookie-expiry tracking
+- **feat:** All 4 providers headless: Grok ✅ Claude ✅ Gemini ✅ ChatGPT ✅
+- **test:** 96/96 tests green (8 test files)
+- **fix:** Singleton CDP connection, cleanupBrowsers() on plugin stop
+
 ### v0.2.30
 - **feat:** `gemini-browser.ts` — gemini.google.com DOM-automation (Quill editor + message-content polling)
 - **feat:** `web-gemini/*` models in proxy (gemini-2-5-pro, gemini-2-5-flash, gemini-3-pro, gemini-3-flash)
```
package/SKILL.md
CHANGED
package/index.ts
CHANGED

```diff
@@ -6,7 +6,7 @@
  *
  * Phase 2 (request bridge): starts a local OpenAI-compatible HTTP proxy server
  * and configures OpenClaw's vllm provider to route through it. Model calls
- * are
+ * are routed to CLI tools or browser-session providers.
  *
  * Phase 3 (slash commands): registers /cli-* commands for instant model switching.
  * /cli-sonnet → vllm/cli-claude/claude-sonnet-4-6 (Claude Code CLI proxy)

@@ -15,9 +15,8 @@
  * /cli-gemini → vllm/cli-gemini/gemini-2.5-pro (Gemini CLI proxy)
  * /cli-gemini-flash → vllm/cli-gemini/gemini-2.5-flash (Gemini CLI proxy)
  * /cli-gemini3 → vllm/cli-gemini/gemini-3-pro-preview (Gemini CLI proxy)
- * /cli-gemini3-flash→ vllm/cli-gemini/gemini-3-flash-preview (Gemini CLI proxy)
  * /cli-codex → openai-codex/gpt-5.3-codex (Codex CLI OAuth, direct API)
- * /cli-
+ * /cli-codex54 → openai-codex/gpt-5.4 (Codex CLI OAuth, direct API)
  * /cli-back → restore model that was active before last /cli-* switch
  * /cli-test [model] → one-shot proxy health check (does NOT switch global model)
  * /cli-list → list all registered CLI bridge models with commands
```
```diff
@@ -121,6 +120,32 @@ async function scanGeminiCookieExpiry(ctx: BrowserContext): Promise<GeminiExpiry
 }
 // ─────────────────────────────────────────────────────────────────────────────

+// ── ChatGPT web-session state ─────────────────────────────────────────────────
+let chatgptContext: BrowserContext | null = null;
+const CHATGPT_EXPIRY_FILE = join(homedir(), ".openclaw", "chatgpt-cookie-expiry.json");
+interface ChatGPTExpiryInfo { expiresAt: number; loginAt: number; cookieName: string; }
+function saveChatGPTExpiry(i: ChatGPTExpiryInfo) { try { writeFileSync(CHATGPT_EXPIRY_FILE, JSON.stringify(i, null, 2)); } catch { /* ignore */ } }
+function loadChatGPTExpiry(): ChatGPTExpiryInfo | null { try { return JSON.parse(readFileSync(CHATGPT_EXPIRY_FILE, "utf-8")); } catch { return null; } }
+function formatChatGPTExpiry(i: ChatGPTExpiryInfo): string {
+  const d = Math.floor((i.expiresAt - Date.now()) / 86_400_000);
+  const dt = new Date(i.expiresAt).toISOString().substring(0, 10);
+  if (d < 0) return `⚠️ EXPIRED (${dt}) — run /chatgpt-login`;
+  if (d <= 7) return `🚨 expires in ${d}d (${dt}) — run /chatgpt-login NOW`;
+  if (d <= 14) return `⚠️ expires in ${d}d (${dt}) — run /chatgpt-login soon`;
+  return `✅ valid for ${d} more days (expires ${dt})`;
+}
+async function scanChatGPTCookieExpiry(ctx: BrowserContext): Promise<ChatGPTExpiryInfo | null> {
+  try {
+    const cookies = await ctx.cookies(["https://chatgpt.com", "https://openai.com"]);
+    const auth = cookies.filter(c => ["__Secure-next-auth.session-token", "cf_clearance", "__cf_bm"].includes(c.name) && c.expires && c.expires > 0);
+    if (!auth.length) return null;
+    auth.sort((a, b) => (a.expires ?? 0) - (b.expires ?? 0));
+    const earliest = auth[0];
+    return { expiresAt: (earliest.expires ?? 0) * 1000, loginAt: Date.now(), cookieName: earliest.name };
+  } catch { return null; }
+}
+// ─────────────────────────────────────────────────────────────────────────────
+
 // ── Claude web-session state ──────────────────────────────────────────────────
 let claudeContext: BrowserContext | null = null;
 const CLAUDE_EXPIRY_FILE = join(homedir(), ".openclaw", "claude-cookie-expiry.json");
```
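The expiry formatter added above buckets the remaining cookie lifetime into four bands (expired / ≤ 7 days / ≤ 14 days / ok) using floor division by one day in milliseconds. The arithmetic can be isolated and checked; `expiryBand` below is an illustrative helper, not a function from the diff:

```typescript
// Day-banding as used by formatChatGPTExpiry: floor of the millisecond
// difference divided by one day (86_400_000 ms), then threshold checks.
function expiryBand(
  expiresAt: number,
  now: number,
): "expired" | "urgent" | "soon" | "ok" {
  const d = Math.floor((expiresAt - now) / 86_400_000);
  if (d < 0) return "expired"; // already past expiry
  if (d <= 7) return "urgent"; // "run /chatgpt-login NOW"
  if (d <= 14) return "soon";  // "run /chatgpt-login soon"
  return "ok";
}
```

Note that a cookie expiring later today lands in the `urgent` band (d = 0), not `expired` — only a strictly negative day count reads as expired.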
```diff
@@ -293,6 +318,7 @@ async function cleanupBrowsers(log: (msg: string) => void): Promise<void> {
   }
   claudeContext = null;
   geminiContext = null;
+  chatgptContext = null;
   log("[cli-bridge] browser resources cleaned up");
 }

```
```diff
@@ -378,81 +404,17 @@ function readCurrentModel(): string | null {
 // Phase 3: model command table
 // ──────────────────────────────────────────────────────────────────────────────
 const CLI_MODEL_COMMANDS = [
-  // ── Claude (via local proxy
-  {
-
-
-
-
-  },
-  {
-
-
-
-    label: "Claude Opus 4.6 (CLI)",
-  },
-  {
-    name: "cli-haiku",
-    model: "vllm/cli-claude/claude-haiku-4-5",
-    description: "Switch to Claude Haiku 4.5 (Claude Code CLI via local proxy)",
-    label: "Claude Haiku 4.5 (CLI)",
-  },
-  // ── Gemini (via local proxy → Gemini CLI) ───────────────────────────────────
-  {
-    name: "cli-gemini",
-    model: "vllm/cli-gemini/gemini-2.5-pro",
-    description: "Switch to Gemini 2.5 Pro (Gemini CLI via local proxy)",
-    label: "Gemini 2.5 Pro (CLI)",
-  },
-  {
-    name: "cli-gemini-flash",
-    model: "vllm/cli-gemini/gemini-2.5-flash",
-    description: "Switch to Gemini 2.5 Flash (Gemini CLI via local proxy)",
-    label: "Gemini 2.5 Flash (CLI)",
-  },
-  {
-    name: "cli-gemini3",
-    model: "vllm/cli-gemini/gemini-3-pro-preview",
-    description: "Switch to Gemini 3 Pro Preview (Gemini CLI via local proxy)",
-    label: "Gemini 3 Pro Preview (CLI)",
-  },
-  {
-    name: "cli-gemini3-flash",
-    model: "vllm/cli-gemini/gemini-3-flash-preview",
-    description: "Switch to Gemini 3 Flash Preview (Gemini CLI via local proxy)",
-    label: "Gemini 3 Flash Preview (CLI)",
-  },
-  // ── Codex (via openai-codex provider — Codex CLI OAuth auth, direct API) ────
-  {
-    name: "cli-codex",
-    model: "openai-codex/gpt-5.3-codex",
-    description: "Switch to GPT-5.3 Codex (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.3 Codex",
-  },
-  {
-    name: "cli-codex-spark",
-    model: "openai-codex/gpt-5.3-codex-spark",
-    description: "Switch to GPT-5.3 Codex Spark (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.3 Codex Spark",
-  },
-  {
-    name: "cli-codex52",
-    model: "openai-codex/gpt-5.2-codex",
-    description: "Switch to GPT-5.2 Codex (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.2 Codex",
-  },
-  {
-    name: "cli-codex54",
-    model: "openai-codex/gpt-5.4",
-    description: "Switch to GPT-5.4 (openai-codex provider, Codex CLI auth — may require upgraded scope)",
-    label: "GPT-5.4 (Codex)",
-  },
-  {
-    name: "cli-codex-mini",
-    model: "openai-codex/gpt-5.1-codex-mini",
-    description: "Switch to GPT-5.1 Codex Mini (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.1 Codex Mini",
-  },
+  // ── Claude Code CLI (via local proxy) ────────────────────────────────────────
+  { name: "cli-sonnet", model: "vllm/cli-claude/claude-sonnet-4-6", description: "Claude Sonnet 4.6 (Claude Code CLI)", label: "Claude Sonnet 4.6 (CLI)" },
+  { name: "cli-opus", model: "vllm/cli-claude/claude-opus-4-6", description: "Claude Opus 4.6 (Claude Code CLI)", label: "Claude Opus 4.6 (CLI)" },
+  { name: "cli-haiku", model: "vllm/cli-claude/claude-haiku-4-5", description: "Claude Haiku 4.5 (Claude Code CLI)", label: "Claude Haiku 4.5 (CLI)" },
+  // ── Gemini CLI (via local proxy) ─────────────────────────────────────────────
+  { name: "cli-gemini", model: "vllm/cli-gemini/gemini-2.5-pro", description: "Gemini 2.5 Pro (Gemini CLI)", label: "Gemini 2.5 Pro (CLI)" },
+  { name: "cli-gemini-flash", model: "vllm/cli-gemini/gemini-2.5-flash", description: "Gemini 2.5 Flash (Gemini CLI)", label: "Gemini 2.5 Flash (CLI)" },
+  { name: "cli-gemini3", model: "vllm/cli-gemini/gemini-3-pro-preview", description: "Gemini 3 Pro (Gemini CLI)", label: "Gemini 3 Pro (CLI)" },
+  // ── Codex CLI (openai-codex provider, OAuth auth) ────────────────────────────
+  { name: "cli-codex", model: "openai-codex/gpt-5.3-codex", description: "GPT-5.3 Codex (Codex CLI auth)", label: "GPT-5.3 Codex" },
+  { name: "cli-codex54", model: "openai-codex/gpt-5.4", description: "GPT-5.4 (Codex CLI auth)", label: "GPT-5.4" },
 ] as const;

 /** Default model used by /cli-test when no arg is given */
```
```diff
@@ -651,7 +613,7 @@ function proxyTestRequest(
 const plugin = {
   id: "openclaw-cli-bridge-elvatis",
   name: "OpenClaw CLI Bridge",
-  version: "
+  version: "1.1.0",
   description:
     "Phase 1: openai-codex auth bridge. " +
    "Phase 2: HTTP proxy for gemini/claude CLIs. " +
```
```diff
@@ -670,6 +632,53 @@ const plugin = {
     // ── Grok session restore (non-blocking) ───────────────────────────────────
     void tryRestoreGrokSession(grokSessionPath, (msg) => api.logger.info(msg));

+    // ── Auto-connect all browser providers on startup (non-blocking) ──────────
+    void (async () => {
+      // Wait for proxy to start first
+      await new Promise(r => setTimeout(r, 3000));
+      const log = (msg: string) => api.logger.info(msg);
+      const ctx = await connectToOpenClawBrowser(log);
+      if (!ctx) { log("[cli-bridge] startup auto-connect: OpenClaw browser not available"); return; }
+      const pages = ctx.pages().map(p => p.url());
+      log(`[cli-bridge] startup auto-connect: ${pages.length} pages open`);
+
+      // Claude
+      if (pages.some(u => u.includes("claude.ai")) && !claudeContext) {
+        try {
+          const { getOrCreateClaudePage } = await import("./src/claude-browser.js");
+          const { page } = await getOrCreateClaudePage(ctx);
+          if (await page.locator(".ProseMirror").isVisible().catch(() => false)) {
+            claudeContext = ctx;
+            log("[cli-bridge:claude] auto-connected ✅");
+          }
+        } catch { /* not available */ }
+      }
+
+      // Gemini
+      if (pages.some(u => u.includes("gemini.google.com")) && !geminiContext) {
+        try {
+          const { getOrCreateGeminiPage } = await import("./src/gemini-browser.js");
+          const { page } = await getOrCreateGeminiPage(ctx);
+          if (await page.locator(".ql-editor").isVisible().catch(() => false)) {
+            geminiContext = ctx;
+            log("[cli-bridge:gemini] auto-connected ✅");
+          }
+        } catch { /* not available */ }
+      }
+
+      // ChatGPT
+      if (pages.some(u => u.includes("chatgpt.com")) && !chatgptContext) {
+        try {
+          const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+          const { page } = await getOrCreateChatGPTPage(ctx);
+          if (await page.locator("#prompt-textarea").isVisible().catch(() => false)) {
+            chatgptContext = ctx;
+            log("[cli-bridge:chatgpt] auto-connected ✅");
+          }
+        } catch { /* not available */ }
+      }
+    })();
+
     // ── Phase 1: openai-codex auth bridge ─────────────────────────────────────
     if (enableCodex) {
       api.registerProvider({
```
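The three per-provider auto-connect branches above share one shape: a URL substring to find an open tab, a page-factory module, and a readiness selector. A table-driven version could look like the sketch below (URL fragments and selectors are taken from the diff; the `ProviderProbe` type and `matchProbes` helper are hypothetical, not package code):

```typescript
// Hypothetical generalization of the startup auto-connect branches:
// each provider is identified by a URL fragment and a readiness selector.
interface ProviderProbe {
  name: string;     // log prefix, e.g. "claude"
  urlPart: string;  // substring matched against open tab URLs
  selector: string; // editor element proving the session is usable
}

const PROBES: ProviderProbe[] = [
  { name: "claude", urlPart: "claude.ai", selector: ".ProseMirror" },
  { name: "gemini", urlPart: "gemini.google.com", selector: ".ql-editor" },
  { name: "chatgpt", urlPart: "chatgpt.com", selector: "#prompt-textarea" },
];

function matchProbes(openUrls: string[]): ProviderProbe[] {
  // Only providers whose tab is already open are worth probing;
  // the rest stay on the lazy /-login path.
  return PROBES.filter((p) => openUrls.some((u) => u.includes(p.urlPart)));
}
```

The per-provider `await import(...)` calls would then run in a single loop over `matchProbes(ctx.pages().map(p => p.url()))`, keeping the three branches from drifting apart.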
```diff
@@ -796,6 +805,17 @@ const plugin = {
         }
         return null;
       },
+      getChatGPTContext: () => chatgptContext,
+      connectChatGPTContext: async () => {
+        const ctx = await connectToOpenClawBrowser((msg) => api.logger.info(msg));
+        if (ctx) {
+          const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+          const { page } = await getOrCreateChatGPTPage(ctx);
+          const editor = await page.locator("#prompt-textarea").isVisible().catch(() => false);
+          if (editor) { chatgptContext = ctx; return ctx; }
+        }
+        return null;
+      },
     });
     proxyServer = server;
     api.logger.info(

@@ -850,6 +870,17 @@ const plugin = {
         }
         return null;
       },
+      getChatGPTContext: () => chatgptContext,
+      connectChatGPTContext: async () => {
+        const ctx = await connectToOpenClawBrowser((msg) => api.logger.info(msg));
+        if (ctx) {
+          const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+          const { page } = await getOrCreateChatGPTPage(ctx);
+          const editor = await page.locator("#prompt-textarea").isVisible().catch(() => false);
+          if (editor) { chatgptContext = ctx; return ctx; }
+        }
+        return null;
+      },
     });
     proxyServer = server;
     api.logger.info(`[cli-bridge] proxy ready on :${port} (retry)`);
```
```diff
@@ -1346,6 +1377,155 @@ const plugin = {
         return { text: "✅ Disconnected from gemini.google.com. Run `/gemini-login` to reconnect." };
       },
     } satisfies OpenClawPluginCommandDefinition);
+
+    // ── ChatGPT web-session commands ──────────────────────────────────────────
+    api.registerCommand({
+      name: "chatgpt-login",
+      description: "Authenticate chatgpt.com: imports session from OpenClaw browser",
+      handler: async (): Promise<PluginCommandResult> => {
+        if (chatgptContext) {
+          const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+          try {
+            const { page } = await getOrCreateChatGPTPage(chatgptContext);
+            if (await page.locator("#prompt-textarea").isVisible().catch(() => false))
+              return { text: "✅ Already connected to chatgpt.com. Use `/chatgpt-logout` first to reset." };
+          } catch { /* fall through */ }
+          chatgptContext = null;
+        }
+        api.logger.info("[cli-bridge:chatgpt] /chatgpt-login: connecting to OpenClaw browser…");
+        const ctx = await connectToOpenClawBrowser((msg) => api.logger.info(msg));
+        if (!ctx) return { text: "❌ Could not connect to OpenClaw browser.\nMake sure chatgpt.com is open in your browser." };
+
+        const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+        let page;
+        try { ({ page } = await getOrCreateChatGPTPage(ctx)); }
+        catch (err) { return { text: `❌ Failed to open chatgpt.com: ${(err as Error).message}` }; }
+
+        if (!await page.locator("#prompt-textarea").isVisible().catch(() => false))
+          return { text: "❌ ChatGPT editor not visible — are you logged in?\nOpen chatgpt.com in your browser and try again." };
+
+        chatgptContext = ctx;
+        const expiry = await scanChatGPTCookieExpiry(ctx);
+        if (expiry) { saveChatGPTExpiry(expiry); api.logger.info(`[cli-bridge:chatgpt] cookie expiry: ${new Date(expiry.expiresAt).toISOString()}`); }
+        const expiryLine = expiry ? `\n\n🕐 Cookie expiry: ${formatChatGPTExpiry(expiry)}` : "";
+        return { text: `✅ ChatGPT session ready!\n\nModels available:\n• \`vllm/web-chatgpt/gpt-4o\`\n• \`vllm/web-chatgpt/gpt-4o-mini\`\n• \`vllm/web-chatgpt/gpt-o3\`\n• \`vllm/web-chatgpt/gpt-o4-mini\`\n• \`vllm/web-chatgpt/gpt-5\`${expiryLine}` };
+      },
+    } satisfies OpenClawPluginCommandDefinition);
+
+    api.registerCommand({
+      name: "chatgpt-status",
+      description: "Check chatgpt.com session status",
+      handler: async (): Promise<PluginCommandResult> => {
+        if (!chatgptContext) return { text: "❌ No active chatgpt.com session\nRun `/chatgpt-login` to authenticate." };
+        const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+        try {
+          const { page } = await getOrCreateChatGPTPage(chatgptContext);
+          if (await page.locator("#prompt-textarea").isVisible().catch(() => false)) {
+            const expiry = loadChatGPTExpiry();
+            const expiryLine = expiry ? `\n🕐 ${formatChatGPTExpiry(expiry)}` : "";
+            return { text: `✅ chatgpt.com session active\nProxy: \`127.0.0.1:${port}\`\nModels: web-chatgpt/gpt-4o, gpt-4o-mini, gpt-o3, gpt-o4-mini, gpt-5${expiryLine}` };
+          }
+        } catch { /* fall through */ }
+        chatgptContext = null;
+        return { text: "❌ Session lost — run `/chatgpt-login` to re-authenticate." };
+      },
+    } satisfies OpenClawPluginCommandDefinition);
+
+    api.registerCommand({
+      name: "chatgpt-logout",
+      description: "Disconnect from chatgpt.com session",
+      handler: async (): Promise<PluginCommandResult> => {
+        chatgptContext = null;
+        return { text: "✅ Disconnected from chatgpt.com. Run `/chatgpt-login` to reconnect." };
+      },
+    } satisfies OpenClawPluginCommandDefinition);
+
+    // ── /bridge-status — all providers at a glance ───────────────────────────
+    api.registerCommand({
+      name: "bridge-status",
+      description: "Show status of all headless browser providers (Grok, Claude, Gemini, ChatGPT)",
+      handler: async (): Promise<PluginCommandResult> => {
+        const lines: string[] = [`🌉 *CLI Bridge v${plugin.version} — Provider Status*\n`];
+
+        const checks = [
+          {
+            name: "Grok",
+            ctx: grokContext,
+            check: async () => {
+              if (!grokContext) return false;
+              const r = await verifySession(grokContext, () => {});
+              if (!r.valid) { grokContext = null; }
+              return r.valid;
+            },
+            models: "web-grok/grok-3, grok-3-fast, grok-3-mini",
+            loginCmd: "/grok-login",
+            expiry: () => { const e = loadGrokExpiry(); return e ? formatExpiryInfo(e) : null; },
+          },
+          {
+            name: "Claude",
+            ctx: claudeContext,
+            check: async () => {
+              if (!claudeContext) return false;
+              try {
+                const { getOrCreateClaudePage } = await import("./src/claude-browser.js");
+                const { page } = await getOrCreateClaudePage(claudeContext);
+                return page.locator(".ProseMirror").isVisible().catch(() => false);
+              } catch { claudeContext = null; return false; }
+            },
+            models: "web-claude/claude-sonnet, claude-opus, claude-haiku",
+            loginCmd: "/claude-login",
+            expiry: () => { const e = loadClaudeExpiry(); return e ? formatClaudeExpiry(e) : null; },
+          },
+          {
+            name: "Gemini",
+            ctx: geminiContext,
+            check: async () => {
+              if (!geminiContext) return false;
+              try {
+                const { getOrCreateGeminiPage } = await import("./src/gemini-browser.js");
+                const { page } = await getOrCreateGeminiPage(geminiContext);
+                return page.locator(".ql-editor").isVisible().catch(() => false);
+              } catch { geminiContext = null; return false; }
+            },
+            models: "web-gemini/gemini-2-5-pro, gemini-2-5-flash, gemini-3-pro, gemini-3-flash",
+            loginCmd: "/gemini-login",
+            expiry: () => { const e = loadGeminiExpiry(); return e ? formatGeminiExpiry(e) : null; },
+          },
+          {
+            name: "ChatGPT",
+            ctx: chatgptContext,
+            check: async () => {
+              if (!chatgptContext) return false;
+              try {
+                const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+                const { page } = await getOrCreateChatGPTPage(chatgptContext);
+                return page.locator("#prompt-textarea").isVisible().catch(() => false);
+              } catch { chatgptContext = null; return false; }
+            },
+            models: "web-chatgpt/gpt-4o, gpt-o3, gpt-o4-mini, gpt-5",
+            loginCmd: "/chatgpt-login",
+            expiry: () => { const e = loadChatGPTExpiry(); return e ? formatChatGPTExpiry(e) : null; },
+          },
+        ];
+
+        for (const c of checks) {
+          const active = c.ctx !== null;
+          const ok = active ? await c.check() : false;
+          const expiry = c.expiry();
+          if (ok) {
+            lines.push(`✅ *${c.name}* — active`);
+            if (expiry) lines.push(`   🕐 ${expiry}`);
+            lines.push(`   Models: ${c.models}`);
+          } else {
+            lines.push(`❌ *${c.name}* — not connected (run \`${c.loginCmd}\`)`);
+          }
+          lines.push("");
+        }
+
+        lines.push(`🔌 Proxy: \`127.0.0.1:${port}\` | 22 models total`);
+        return { text: lines.join("\n") };
+      },
+    } satisfies OpenClawPluginCommandDefinition);
     // ─────────────────────────────────────────────────────────────────────────

     const allCommands = [

@@ -1362,6 +1542,10 @@ const plugin = {
       "/gemini-login",
       "/gemini-status",
       "/gemini-logout",
+      "/chatgpt-login",
+      "/chatgpt-status",
+      "/chatgpt-logout",
+      "/bridge-status",
     ];
     api.logger.info(`[cli-bridge] registered ${allCommands.length} commands: ${allCommands.join(", ")}`);
   },
```
package/openclaw.plugin.json
CHANGED

```diff
@@ -1,7 +1,7 @@
 {
   "id": "openclaw-cli-bridge-elvatis",
   "name": "OpenClaw CLI Bridge",
-  "version": "
+  "version": "1.1.0",
   "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
   "providers": [
     "openai-codex"
```
package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-  "version": "
+  "version": "1.1.0",
   "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
   "type": "module",
   "openclaw": {
```
package/src/chatgpt-browser.ts
ADDED
@@ -0,0 +1,253 @@
+/**
+ * chatgpt-browser.ts
+ *
+ * ChatGPT web automation via Playwright DOM-polling.
+ * Strategy identical to claude-browser.ts / grok-client.ts.
+ *
+ * DOM structure (confirmed 2026-03-11):
+ *   Editor:   #prompt-textarea (ProseMirror — use execCommand)
+ *   Send btn: button[data-testid="send-button"]
+ *   Response: [data-message-author-role="assistant"] (last element)
+ *   Streaming indicator: button[data-testid="stop-button"]
+ */
+
+import type { BrowserContext, Page } from "playwright";
+
+export interface ChatMessage {
+  role: "system" | "user" | "assistant";
+  content: string;
+}
+
+export interface ChatGPTBrowserOptions {
+  messages: ChatMessage[];
+  model?: string;
+  timeoutMs?: number;
+}
+
+export interface ChatGPTBrowserResult {
+  content: string;
+  model: string;
+  finishReason: string;
+}
+
+const DEFAULT_TIMEOUT_MS = 120_000;
+const STABLE_CHECKS = 3;
+const STABLE_INTERVAL_MS = 500;
+const CHATGPT_HOME = "https://chatgpt.com";
+
+const MODEL_MAP: Record<string, string> = {
+  "gpt-4o": "gpt-4o",
+  "gpt-4o-mini": "gpt-4o-mini",
+  "gpt-o3": "o3",
+  "gpt-o4-mini": "o4-mini",
+  "gpt-4-1": "gpt-4.1",
+  "gpt-5": "gpt-5",
+};
+
+function resolveModel(m?: string): string {
+  const clean = (m ?? "gpt-4o").replace("web-chatgpt/", "");
+  return MODEL_MAP[clean] ?? clean;
+}
+
+function flattenMessages(messages: ChatMessage[]): string {
+  if (messages.length === 1) return messages[0].content;
+  return messages
+    .map((m) => {
+      if (m.role === "system") return `[System]: ${m.content}`;
+      if (m.role === "assistant") return `[Assistant]: ${m.content}`;
+      return m.content;
+    })
+    .join("\n\n");
+}
+
+/**
+ * Get or create a chatgpt.com page in the given context.
+ */
+export async function getOrCreateChatGPTPage(
+  context: BrowserContext
+): Promise<{ page: Page; owned: boolean }> {
+  const existing = context.pages().filter((p) => p.url().startsWith("https://chatgpt.com"));
+  if (existing.length > 0) return { page: existing[0], owned: false };
+  const page = await context.newPage();
+  await page.goto(CHATGPT_HOME, { waitUntil: "domcontentloaded", timeout: 15_000 });
+  await new Promise((r) => setTimeout(r, 2_000));
+  return { page, owned: true };
+}
+
+/**
+ * Count assistant messages on the page.
+ */
+async function countAssistantMessages(page: Page): Promise<number> {
+  return page.evaluate(() =>
+    document.querySelectorAll('[data-message-author-role="assistant"]').length
+  );
+}
+
+/**
+ * Get the text of the last assistant message.
+ */
+async function getLastAssistantText(page: Page): Promise<string> {
+  return page.evaluate(() => {
+    const els = [...document.querySelectorAll('[data-message-author-role="assistant"]')];
+    return els[els.length - 1]?.textContent?.trim() ?? "";
+  });
+}
+
+/**
+ * Check if ChatGPT is still generating (stop button visible).
+ */
+async function isStreaming(page: Page): Promise<boolean> {
+  return page.evaluate(() =>
+    !!document.querySelector('button[data-testid="stop-button"]')
+  );
+}
+
+/**
+ * Send a message and wait for a stable response.
+ */
+async function sendAndWait(
+  page: Page,
+  message: string,
+  timeoutMs: number,
+  log: (msg: string) => void
+): Promise<string> {
+  const countBefore = await countAssistantMessages(page);
+
+  // Type into ProseMirror via execCommand
+  await page.evaluate((msg: string) => {
+    const ed = document.querySelector("#prompt-textarea") as HTMLElement | null;
+    if (!ed) throw new Error("ChatGPT editor (#prompt-textarea) not found");
+    ed.focus();
+    document.execCommand("insertText", false, msg);
+  }, message);
+
+  await new Promise((r) => setTimeout(r, 300));
+
+  // Click send button (preferred) or press Enter
+  const sendBtn = page.locator('button[data-testid="send-button"]').first();
+  const hasSendBtn = await sendBtn.isVisible().catch(() => false);
+  if (hasSendBtn) {
+    await sendBtn.click();
+  } else {
+    await page.keyboard.press("Enter");
+  }
+
+  log(`chatgpt-browser: message sent (${message.length} chars), waiting…`);
+
+  const deadline = Date.now() + timeoutMs;
+  let lastText = "";
+  let stableCount = 0;
+
+  while (Date.now() < deadline) {
+    await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
+
+    const currentCount = await countAssistantMessages(page);
+    if (currentCount <= countBefore) continue;
+
+    // Still generating?
+    const streaming = await isStreaming(page);
+    if (streaming) { stableCount = 0; continue; }
+
+    const text = await getLastAssistantText(page);
+    if (!text) continue;
+
+    if (text === lastText) {
+      stableCount++;
+      if (stableCount >= STABLE_CHECKS) {
+        log(`chatgpt-browser: response stable (${text.length} chars)`);
+        return text;
+      }
+    } else {
+      stableCount = 0;
+      lastText = text;
+    }
+  }
+
+  throw new Error(`chatgpt.com response timeout after ${timeoutMs}ms`);
+}
+
+// ─────────────────────────────────────────────────────────────────────────────
+
+export async function chatgptComplete(
+  context: BrowserContext,
+  opts: ChatGPTBrowserOptions,
+  log: (msg: string) => void
+): Promise<ChatGPTBrowserResult> {
+  const { page, owned } = await getOrCreateChatGPTPage(context);
+  const model = resolveModel(opts.model);
+  const prompt = flattenMessages(opts.messages);
+  const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
+
+  log(`chatgpt-browser: complete model=${model}`);
+
+  try {
+    const content = await sendAndWait(page, prompt, timeoutMs, log);
+    return { content, model, finishReason: "stop" };
+  } finally {
+    if (owned) await page.close().catch(() => {});
+  }
+}
+
+export async function chatgptCompleteStream(
+  context: BrowserContext,
+  opts: ChatGPTBrowserOptions,
+  onToken: (token: string) => void,
+  log: (msg: string) => void
+): Promise<ChatGPTBrowserResult> {
+  const { page, owned } = await getOrCreateChatGPTPage(context);
+  const model = resolveModel(opts.model);
+  const prompt = flattenMessages(opts.messages);
+  const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
+
+  log(`chatgpt-browser: stream model=${model}`);
+
+  const countBefore = await countAssistantMessages(page);
+
+  await page.evaluate((msg: string) => {
+    const ed = document.querySelector("#prompt-textarea") as HTMLElement | null;
+    if (!ed) throw new Error("ChatGPT editor not found");
+    ed.focus();
+    document.execCommand("insertText", false, msg);
+  }, prompt);
+  await new Promise((r) => setTimeout(r, 300));
+  const sendBtn = page.locator('button[data-testid="send-button"]').first();
+  if (await sendBtn.isVisible().catch(() => false)) await sendBtn.click();
+  else await page.keyboard.press("Enter");
+
+  const deadline = Date.now() + timeoutMs;
+  let emittedLength = 0;
+  let lastText = "";
+  let stableCount = 0;
+
+  while (Date.now() < deadline) {
+    await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
+
+    const currentCount = await countAssistantMessages(page);
+    if (currentCount <= countBefore) continue;
+
+    const text = await getLastAssistantText(page);
+
+    if (text.length > emittedLength) {
+      onToken(text.slice(emittedLength));
+      emittedLength = text.length;
+    }
+
+    const streaming = await isStreaming(page);
+    if (streaming) { stableCount = 0; continue; }
+
+    if (text && text === lastText) {
+      stableCount++;
+      if (stableCount >= STABLE_CHECKS) {
+        log(`chatgpt-browser: stream done (${text.length} chars)`);
+        if (owned) await page.close().catch(() => {});
+        return { content: text, model, finishReason: "stop" };
+      }
+    } else {
+      stableCount = 0;
+      lastText = text;
+    }
+  }
+
+  if (owned) await page.close().catch(() => {});
+  throw new Error(`chatgpt.com stream timeout after ${timeoutMs}ms`);
+}
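Both `chatgptComplete` and `chatgptCompleteStream` above rest on the same idea: poll the DOM, and treat the reply as final once the stop button is gone and the text has been identical for `STABLE_CHECKS` consecutive polls. That loop can be factored out and exercised without a browser; a minimal sketch (the name `waitForStableText` and its options type are illustrative, not part of this package):

```typescript
// Illustrative factoring of the stable-text polling loop used by sendAndWait().
// readText / isStreaming abstract over the page.evaluate() calls, so the loop
// itself can be unit-tested with plain async stubs.
interface StablePollOpts {
  timeoutMs: number;    // overall deadline for the whole wait
  intervalMs: number;   // delay between polls (STABLE_INTERVAL_MS in the package)
  stableChecks: number; // consecutive identical reads required (STABLE_CHECKS)
}

async function waitForStableText(
  readText: () => Promise<string>,
  isStreaming: () => Promise<boolean>,
  opts: StablePollOpts,
): Promise<string> {
  const deadline = Date.now() + opts.timeoutMs;
  let lastText = "";
  let stableCount = 0;

  while (Date.now() < deadline) {
    await new Promise((r) => setTimeout(r, opts.intervalMs));

    // While the provider reports streaming, identical reads are coincidental.
    if (await isStreaming()) { stableCount = 0; continue; }

    const text = await readText();
    if (!text) continue;

    if (text === lastText) {
      stableCount++;
      if (stableCount >= opts.stableChecks) return text;
    } else {
      stableCount = 0;
      lastText = text;
    }
  }
  throw new Error(`response timeout after ${opts.timeoutMs}ms`);
}
```

Under this factoring the browser functions would pass page-backed readers, while tests pass scripted stubs, the same dependency-injection idea the package applies at the proxy layer.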
package/src/proxy-server.ts
CHANGED
@@ -15,6 +15,7 @@ import { scheduleTokenRefresh, setAuthLogger, stopTokenRefresh } from "./claude-
 import { grokComplete, grokCompleteStream, type ChatMessage as GrokChatMessage } from "./grok-client.js";
 import { claudeComplete, claudeCompleteStream, type ChatMessage as ClaudeBrowserChatMessage } from "./claude-browser.js";
 import { geminiComplete, geminiCompleteStream, type ChatMessage as GeminiBrowserChatMessage } from "./gemini-browser.js";
+import { chatgptComplete, chatgptCompleteStream, type ChatMessage as ChatGPTBrowserChatMessage } from "./chatgpt-browser.js";
 import type { BrowserContext } from "playwright";
 
 export type GrokCompleteOptions = Parameters<typeof grokComplete>[1];
@@ -51,46 +52,26 @@ export interface ProxyServerOptions {
   _geminiComplete?: typeof geminiComplete;
   /** Override for testing — replaces geminiCompleteStream */
   _geminiCompleteStream?: typeof geminiCompleteStream;
+  /** Returns the current authenticated ChatGPT BrowserContext */
+  getChatGPTContext?: () => BrowserContext | null;
+  /** Async lazy connect for ChatGPT */
+  connectChatGPTContext?: () => Promise<BrowserContext | null>;
+  /** Override for testing */
+  _chatgptComplete?: typeof chatgptComplete;
+  /** Override for testing */
+  _chatgptCompleteStream?: typeof chatgptCompleteStream;
 }
 
 /** Available CLI bridge models for GET /v1/models */
 export const CLI_MODELS = [
-
-
-
-
-
-  },
-  {
-
-    name: "Gemini 2.5 Flash (CLI)",
-    contextWindow: 1_000_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-gemini/gemini-3-pro",
-    name: "Gemini 3 Pro (CLI)",
-    contextWindow: 1_000_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-claude/claude-opus-4-6",
-    name: "Claude Opus 4.6 (CLI)",
-    contextWindow: 200_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-claude/claude-sonnet-4-6",
-    name: "Claude Sonnet 4.6 (CLI)",
-    contextWindow: 200_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-claude/claude-haiku-4-5",
-    name: "Claude Haiku 4.5 (CLI)",
-    contextWindow: 200_000,
-    maxTokens: 8192,
-  },
+  // ── Claude Code CLI ───────────────────────────────────────────────────────
+  { id: "cli-claude/claude-sonnet-4-6", name: "Claude Sonnet 4.6 (CLI)", contextWindow: 200_000, maxTokens: 8_192 },
+  { id: "cli-claude/claude-opus-4-6", name: "Claude Opus 4.6 (CLI)", contextWindow: 200_000, maxTokens: 8_192 },
+  { id: "cli-claude/claude-haiku-4-5", name: "Claude Haiku 4.5 (CLI)", contextWindow: 200_000, maxTokens: 8_192 },
+  // ── Gemini CLI ────────────────────────────────────────────────────────────
+  { id: "cli-gemini/gemini-2.5-pro", name: "Gemini 2.5 Pro (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
+  { id: "cli-gemini/gemini-2.5-flash", name: "Gemini 2.5 Flash (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
+  { id: "cli-gemini/gemini-3-pro-preview", name: "Gemini 3 Pro (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
   // Grok web-session models (requires /grok-login)
   { id: "web-grok/grok-3", name: "Grok 3 (web session)", contextWindow: 131_072, maxTokens: 131_072 },
   { id: "web-grok/grok-3-fast", name: "Grok 3 Fast (web session)", contextWindow: 131_072, maxTokens: 131_072 },
@@ -105,6 +86,12 @@ export const CLI_MODELS = [
   { id: "web-gemini/gemini-2-5-flash", name: "Gemini 2.5 Flash (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
   { id: "web-gemini/gemini-3-pro", name: "Gemini 3 Pro (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
   { id: "web-gemini/gemini-3-flash", name: "Gemini 3 Flash (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
+  // ChatGPT web-session models (requires /chatgpt-login)
+  { id: "web-chatgpt/gpt-4o", name: "GPT-4o (web session)", contextWindow: 128_000, maxTokens: 16_384 },
+  { id: "web-chatgpt/gpt-4o-mini", name: "GPT-4o Mini (web session)", contextWindow: 128_000, maxTokens: 16_384 },
+  { id: "web-chatgpt/gpt-o3", name: "o3 (web session)", contextWindow: 200_000, maxTokens: 100_000 },
+  { id: "web-chatgpt/gpt-o4-mini", name: "o4-mini (web session)", contextWindow: 200_000, maxTokens: 100_000 },
+  { id: "web-chatgpt/gpt-5", name: "GPT-5 (web session)", contextWindow: 1_000_000, maxTokens: 32_768 },
 ];
 
 // ──────────────────────────────────────────────────────────────────────────────
@@ -374,6 +361,43 @@ async function handleRequest(
   }
   // ─────────────────────────────────────────────────────────────────────────
 
+  // ── ChatGPT web-session routing ───────────────────────────────────────────
+  if (model.startsWith("web-chatgpt/")) {
+    let chatgptCtx = opts.getChatGPTContext?.() ?? null;
+    if (!chatgptCtx && opts.connectChatGPTContext) chatgptCtx = await opts.connectChatGPTContext();
+    if (!chatgptCtx) {
+      res.writeHead(503, { "Content-Type": "application/json" });
+      res.end(JSON.stringify({ error: { message: "No active chatgpt.com session. Use /chatgpt-login to authenticate.", code: "no_chatgpt_session" } }));
+      return;
+    }
+    const timeoutMs = opts.timeoutMs ?? 120_000;
+    const msgs = messages as ChatGPTBrowserChatMessage[];
+    const doComplete = opts._chatgptComplete ?? chatgptComplete;
+    const doStream = opts._chatgptCompleteStream ?? chatgptCompleteStream;
+    try {
+      if (stream) {
+        res.writeHead(200, { "Content-Type": "text/event-stream", "Cache-Control": "no-cache", Connection: "keep-alive", ...corsHeaders() });
+        sendSseChunk(res, { id, created, model, delta: { role: "assistant" }, finish_reason: null });
+        const result = await doStream(chatgptCtx, { messages: msgs, model, timeoutMs },
+          (token) => sendSseChunk(res, { id, created, model, delta: { content: token }, finish_reason: null }), opts.log);
+        sendSseChunk(res, { id, created, model, delta: {}, finish_reason: result.finishReason });
+        res.write("data: [DONE]\n\n"); res.end();
+      } else {
+        const result = await doComplete(chatgptCtx, { messages: msgs, model, timeoutMs }, opts.log);
+        res.writeHead(200, { "Content-Type": "application/json", ...corsHeaders() });
+        res.end(JSON.stringify({ id, object: "chat.completion", created, model,
+          choices: [{ index: 0, message: { role: "assistant", content: result.content }, finish_reason: result.finishReason }],
+          usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 } }));
+      }
+    } catch (err) {
+      const msg = (err as Error).message;
+      opts.warn(`[cli-bridge] ChatGPT browser error for ${model}: ${msg}`);
+      if (!res.headersSent) { res.writeHead(500, { "Content-Type": "application/json" }); res.end(JSON.stringify({ error: { message: msg, type: "chatgpt_browser_error" } })); }
+    }
+    return;
+  }
+  // ─────────────────────────────────────────────────────────────────────────
+
   // ── CLI runner routing (Gemini / Claude Code) ─────────────────────────────
   let content: string;
   try {
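The streaming branch just added answers with OpenAI-style server-sent events. As a sketch of the wire format the route produces, assuming the conventional `chat.completion.chunk` envelope (the helper below is an illustrative stand-in for the package's `sendSseChunk`, whose exact payload shape is not shown in this diff):

```typescript
// Illustrative SSE framing: each chunk is one "data: <json>\n\n" frame and the
// stream terminates with "data: [DONE]\n\n", matching the route above.
interface SseDelta { role?: string; content?: string }

function formatSseChunk(id: string, created: number, model: string,
                        delta: SseDelta, finishReason: string | null): string {
  const payload = {
    id, object: "chat.completion.chunk", created, model,
    choices: [{ index: 0, delta, finish_reason: finishReason }],
  };
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// A complete mock stream for a two-token reply:
const frames = [
  formatSseChunk("c1", 0, "web-chatgpt/gpt-4o", { role: "assistant" }, null),
  formatSseChunk("c1", 0, "web-chatgpt/gpt-4o", { content: "hi " }, null),
  formatSseChunk("c1", 0, "web-chatgpt/gpt-4o", { content: "there" }, null),
  formatSseChunk("c1", 0, "web-chatgpt/gpt-4o", {}, "stop"),
  "data: [DONE]\n\n",
].join("");
```

A client reassembles the reply by concatenating `choices[0].delta.content` across frames until it sees `data: [DONE]`.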
package/test/chatgpt-proxy.test.ts
ADDED
@@ -0,0 +1,107 @@
+/**
+ * test/chatgpt-proxy.test.ts
+ *
+ * Tests for ChatGPT web-session routing in the cli-bridge proxy.
+ * Uses _chatgptComplete/_chatgptCompleteStream DI overrides (no real browser).
+ */
+
+import { describe, it, expect, beforeAll, afterAll, vi } from "vitest";
+import http from "node:http";
+import type { AddressInfo } from "node:net";
+import { startProxyServer, CLI_MODELS } from "../src/proxy-server.js";
+import type { BrowserContext } from "playwright";
+
+type Opts = { messages: { role: string; content: string }[]; model?: string; timeoutMs?: number };
+type Result = { content: string; model: string; finishReason: string };
+
+const stubComplete = vi.fn(async (_ctx: BrowserContext, opts: Opts, _log: (m: string) => void): Promise<Result> => ({
+  content: `chatgpt mock: ${opts.messages[opts.messages.length - 1]?.content ?? ""}`,
+  model: opts.model ?? "web-chatgpt/gpt-4o", finishReason: "stop",
+}));
+const stubStream = vi.fn(async (_ctx: BrowserContext, opts: Opts, onToken: (t: string) => void, _log: (m: string) => void): Promise<Result> => {
+  ["chatgpt ", "stream ", "mock"].forEach(t => onToken(t));
+  return { content: "chatgpt stream mock", model: opts.model ?? "web-chatgpt/gpt-4o", finishReason: "stop" };
+});
+
+async function post(url: string, body: unknown): Promise<{ status: number; body: unknown }> {
+  return new Promise((res, rej) => {
+    const d = JSON.stringify(body); const u = new URL(url);
+    const r = http.request({ hostname: u.hostname, port: Number(u.port), path: u.pathname, method: "POST",
+      headers: { "Content-Type": "application/json", "Content-Length": Buffer.byteLength(d) } },
+      resp => { let raw = ""; resp.on("data", c => raw += c); resp.on("end", () => { try { res({ status: resp.statusCode ?? 0, body: JSON.parse(raw) }); } catch { res({ status: resp.statusCode ?? 0, body: raw }); } }); });
+    r.on("error", rej); r.write(d); r.end();
+  });
+}
+async function get(url: string): Promise<{ status: number; body: unknown }> {
+  return new Promise((res, rej) => {
+    const u = new URL(url);
+    const r = http.request({ hostname: u.hostname, port: Number(u.port), path: u.pathname, method: "GET" },
+      resp => { let raw = ""; resp.on("data", c => raw += c); resp.on("end", () => { try { res({ status: resp.statusCode ?? 0, body: JSON.parse(raw) }); } catch { res({ status: resp.statusCode ?? 0, body: raw }); } }); });
+    r.on("error", rej); r.end();
+  });
+}
+
+const fakeCtx = {} as BrowserContext;
+let server: http.Server;
+let baseUrl: string;
+
+beforeAll(async () => {
+  server = await startProxyServer({
+    port: 0, log: () => {}, warn: () => {},
+    getChatGPTContext: () => fakeCtx,
+    // @ts-expect-error stub
+    _chatgptComplete: stubComplete,
+    // @ts-expect-error stub
+    _chatgptCompleteStream: stubStream,
+  });
+  baseUrl = `http://127.0.0.1:${(server.address() as AddressInfo).port}`;
+});
+afterAll(() => server.close());
+
+describe("ChatGPT routing — model list", () => {
+  it("includes web-chatgpt/* models in /v1/models", async () => {
+    const res = await get(`${baseUrl}/v1/models`);
+    expect(res.status).toBe(200);
+    const ids = (res.body as { data: { id: string }[] }).data.map(m => m.id);
+    expect(ids).toContain("web-chatgpt/gpt-4o");
+    expect(ids).toContain("web-chatgpt/gpt-5");
+  });
+  it("CLI_MODELS includes web-chatgpt/*", () => {
+    expect(CLI_MODELS.some(m => m.id.startsWith("web-chatgpt/"))).toBe(true);
+  });
+});
+
+describe("ChatGPT routing — non-streaming", () => {
+  it("returns assistant message for web-chatgpt/gpt-4o", async () => {
+    const res = await post(`${baseUrl}/v1/chat/completions`, {
+      model: "web-chatgpt/gpt-4o", messages: [{ role: "user", content: "hello chatgpt" }], stream: false,
+    });
+    expect(res.status).toBe(200);
+    expect((res.body as any).choices[0].message.content).toContain("chatgpt mock");
+  });
+  it("passes correct model to stub", async () => {
+    stubComplete.mockClear();
+    await post(`${baseUrl}/v1/chat/completions`, { model: "web-chatgpt/gpt-o3", messages: [{ role: "user", content: "x" }] });
+    expect(stubComplete).toHaveBeenCalledOnce();
+    expect(stubComplete.mock.calls[0][1].model).toBe("web-chatgpt/gpt-o3");
+  });
+  it("returns 503 when no context", async () => {
+    const s = await startProxyServer({ port: 0, log: () => {}, warn: () => {}, getChatGPTContext: () => null });
+    const u = `http://127.0.0.1:${(s.address() as AddressInfo).port}`;
+    const r = await post(`${u}/v1/chat/completions`, { model: "web-chatgpt/gpt-4o", messages: [{ role: "user", content: "hi" }] });
+    expect(r.status).toBe(503);
+    expect((r.body as any).error.code).toBe("no_chatgpt_session");
+    s.close();
+  });
+});
+
+describe("ChatGPT routing — streaming", () => {
+  it("returns SSE stream with [DONE]", () => new Promise<void>((resolve, reject) => {
+    const body = JSON.stringify({ model: "web-chatgpt/gpt-4o", messages: [{ role: "user", content: "s" }], stream: true });
+    const u = new URL(`${baseUrl}/v1/chat/completions`);
+    const r = http.request({ hostname: u.hostname, port: Number(u.port), path: u.pathname, method: "POST",
+      headers: { "Content-Type": "application/json", "Content-Length": Buffer.byteLength(body) } },
+      resp => { let raw = ""; resp.on("data", c => raw += c); resp.on("end", () => { expect(raw).toContain("[DONE]"); resolve(); }); });
+    r.on("error", reject); r.write(body); r.end();
+  }));
+});
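The 503 contract pinned down by the last non-streaming test also tells a caller how to react: a 503 whose `error.code` is `"no_chatgpt_session"` means "run /chatgpt-login first", while any other failure should surface unchanged. A hypothetical client-side guard capturing that (all names here are assumptions, not package API):

```typescript
// Hypothetical classifier for bridge responses, mirroring the contract the
// tests above verify: 200 → ok, 503 + no_chatgpt_session → needs login,
// everything else → genuine error.
interface BridgeErrorBody { error?: { message?: string; code?: string } }

function classifyBridgeResponse(
  status: number,
  body: BridgeErrorBody,
): "ok" | "needs-login" | "error" {
  if (status === 200) return "ok";
  if (status === 503 && body.error?.code === "no_chatgpt_session") return "needs-login";
  return "error";
}
```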