@elvatis_com/openclaw-cli-bridge-elvatis 1.0.0 → 1.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.ai/handoff/STATUS.md +33 -52
- package/README.md +21 -2
- package/SKILL.md +1 -1
- package/index.ts +149 -79
- package/openclaw.plugin.json +1 -1
- package/package.json +1 -1
- package/src/chatgpt-browser.ts +65 -48
- package/src/claude-browser.ts +44 -35
- package/src/gemini-browser.ts +44 -36
- package/src/grok-client.ts +60 -51
- package/src/proxy-server.ts +8 -36
package/.ai/handoff/STATUS.md
CHANGED

@@ -1,54 +1,35 @@
 # STATUS — openclaw-cli-bridge-elvatis
 
-## Current Version: 0.
-
-##
-
-  [old lines 6–35 not captured in the diff view]
-4. Add claude-browser integration tests (DI-override, same as grok-proxy.test.ts)
-5. Repeat for Gemini (gemini.google.com) and ChatGPT (chatgpt.com)
-6. Bump to v1.0.0 when all 4 providers green + all tests pass
-
-## Key Files
-- src/claude-browser.ts — Claude DOM automation (ready, not wired)
-- src/grok-client.ts — reference implementation
-- src/grok-session.ts — reference for login/session management
-- src/proxy-server.ts — add web-claude/* routing here
-- index.ts — add /claude-login here
-- test/claude-browser.test.ts — unit tests (partial, needs proxy integration test)
-
-## Constraints
-- OpenClaw browser (CDP 18800) required for Cloudflare bypass
-- persistent profile approach fails (fingerprint mismatch)
-- Each provider: own profile dir ~/.openclaw/<provider>-profile/
-- All providers share same proxy port 31337
-- Publish only after full test pass (77+ tests green)
-- All 3 platforms on every release: GitHub + npm + ClawHub
+## Current Version: 1.0.0 (npm + ClawHub + GitHub) ✅ RELEASED
+
+## All 4 Providers LIVE — Tested 2026-03-11 22:24
+| Provider | Status | Models | Command |
+|---|---|---|---|
+| Grok | ✅ | web-grok/grok-3, grok-3-fast, grok-3-mini, grok-3-mini-fast | /grok-login |
+| Claude | ✅ | web-claude/claude-sonnet, claude-opus, claude-haiku | /claude-login |
+| Gemini | ✅ | web-gemini/gemini-2-5-pro, gemini-2-5-flash, gemini-3-pro, gemini-3-flash | /gemini-login |
+| ChatGPT | ✅ | web-chatgpt/gpt-4o, gpt-4o-mini, gpt-o3, gpt-o4-mini, gpt-5 | /chatgpt-login |
+
+Live test: "What is the capital of France?"
+- Grok: "Paris" ✅
+- Claude: "Paris" ✅
+- Gemini: "Paris" ✅
+- ChatGPT: "Paris" ✅
+
+## Stats
+- 22 total models, 16 web-session models
+- 96/96 tests green (8 test files)
+- 0 zombie Chromium processes (singleton CDP, cleanupBrowsers on stop)
+- Cookie expiry tracking for all 4 providers
+
+## Known Issue: Browser persistence after Gateway restart
+After SIGUSR1/full restart, OpenClaw browser is gone (CDP ECONNREFUSED).
+Workaround: manually open browser + 4 provider pages → lazy connect takes over.
+Fix needed: auto-start browser on plugin init, or keep-alive ping.
+
+## Next Steps (v1.1.x)
+- Auto-reconnect OpenClaw browser on plugin start
+- /status command showing all 4 providers at once
+- Context-window management for long conversations (new page per conversation)
+- Handle model-switching within chatgpt.com (dropdown selector)
+- Handle Gemini model switching (2.5 Pro vs Flash vs 3)
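The "Known Issue" entry above floats a keep-alive ping as one possible fix. A minimal sketch of that idea, with the liveness probe and the reconnect action injected as callbacks so the scheduling logic stays testable — `keepAlive`, `probe`, and `reconnect` are illustrative names, not functions from the package:

```typescript
// Hypothetical keep-alive loop: poll a liveness probe (e.g. a CDP HTTP request
// to port 18800) and invoke a reconnect callback whenever the browser is gone.
async function keepAlive(
  probe: () => Promise<boolean>,   // true if the OpenClaw browser is reachable
  reconnect: () => Promise<void>,  // e.g. relaunch/reattach the browser
  attempts: number,
  delayMs = 0
): Promise<number> {
  let reconnects = 0;
  for (let i = 0; i < attempts; i++) {
    if (!(await probe())) {
      await reconnect();
      reconnects++;
    }
    if (delayMs > 0) await new Promise((r) => setTimeout(r, delayMs));
  }
  return reconnects;
}
```

In the plugin itself this would run unbounded with a delay of, say, 30 s; the bounded `attempts` parameter here just makes the sketch finite.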
package/README.md
CHANGED

@@ -2,7 +2,7 @@
 
 > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching, restore, health testing, and model listing.
 
-**Current version:** `1.
+**Current version:** `1.2.0`
 
 ---
 
@@ -287,12 +287,31 @@ npm test # vitest run (45 tests)
 
 ## Changelog
 
+### v1.2.0
+- **fix:** Fresh page per request — no more message accumulation across calls
+- **feat:** ChatGPT model switching via URL param (?model=gpt-4o, o3, etc.)
+- **chore:** Gemini model switching: TODO (requires UI interaction)
+
+### v1.1.0
+- **feat:** Auto-connect all providers on startup (no manual login after restart if browser is open)
+- **feat:** `/bridge-status` — all 4 providers at a glance with expiry info
+- **fix:** Removed obsolete CLI models: gpt-5.2-codex, gpt-5.3-codex-spark, gpt-5.1-codex-mini, gemini-3-flash-preview
+- **fix:** Removed duplicate cli-gemini3-flash (was same as gemini-3-flash-preview)
+- **chore:** Cleaned up CLI_MODEL_COMMANDS (8 models, down from 13)
+
 ## v1.0.0 — Full Headless Browser Bridge 🚀
 
 All four major LLM providers are now available via browser automation.
 No CLI binaries required — just authenticated browser sessions.
 
-
+#### v1.1.0
+- **feat:** Auto-connect all providers on startup (no manual login after restart if browser is open)
+- **feat:** `/bridge-status` — all 4 providers at a glance with expiry info
+- **fix:** Removed obsolete CLI models: gpt-5.2-codex, gpt-5.3-codex-spark, gpt-5.1-codex-mini, gemini-3-flash-preview
+- **fix:** Removed duplicate cli-gemini3-flash (was same as gemini-3-flash-preview)
+- **chore:** Cleaned up CLI_MODEL_COMMANDS (8 models, down from 13)
+
+## v1.0.0
 - **feat:** `chatgpt-browser.ts` — chatgpt.com DOM-automation (`#prompt-textarea` + `[data-message-author-role]`)
 - **feat:** `web-chatgpt/*` models: gpt-4o, gpt-4o-mini, gpt-o3, gpt-o4-mini, gpt-5
 - **feat:** `/chatgpt-login`, `/chatgpt-status`, `/chatgpt-logout` + cookie-expiry tracking
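The v1.2.0 ChatGPT model switching in the changelog above works by navigating to a model-specific URL instead of driving the dropdown. A sketch of the lookup — the `MODEL_URLS` table mirrors the one the diff adds to `src/chatgpt-browser.ts`, while `resolveNavUrl` is a hypothetical helper name (the real code inlines the `??` fallback):

```typescript
const CHATGPT_HOME = "https://chatgpt.com";

// Known models navigate straight to their ?model= URL.
const MODEL_URLS: Record<string, string> = {
  "gpt-4o": "https://chatgpt.com/?model=gpt-4o",
  "gpt-4o-mini": "https://chatgpt.com/?model=gpt-4o-mini",
  "o3": "https://chatgpt.com/?model=o3",
  "o4-mini": "https://chatgpt.com/?model=o4-mini",
  "gpt-5": "https://chatgpt.com/?model=gpt-5",
};

// Anything unrecognized falls back to the plain home page.
function resolveNavUrl(model: string): string {
  return MODEL_URLS[model] ?? CHATGPT_HOME;
}
```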
package/SKILL.md
CHANGED
package/index.ts
CHANGED

@@ -6,7 +6,7 @@
 *
 * Phase 2 (request bridge): starts a local OpenAI-compatible HTTP proxy server
 * and configures OpenClaw's vllm provider to route through it. Model calls
-* are
+* are routed to CLI tools or browser-session providers.
 *
 * Phase 3 (slash commands): registers /cli-* commands for instant model switching.
 * /cli-sonnet → vllm/cli-claude/claude-sonnet-4-6 (Claude Code CLI proxy)

@@ -15,9 +15,8 @@
 * /cli-gemini → vllm/cli-gemini/gemini-2.5-pro (Gemini CLI proxy)
 * /cli-gemini-flash → vllm/cli-gemini/gemini-2.5-flash (Gemini CLI proxy)
 * /cli-gemini3 → vllm/cli-gemini/gemini-3-pro-preview (Gemini CLI proxy)
-* /cli-gemini3-flash→ vllm/cli-gemini/gemini-3-flash-preview (Gemini CLI proxy)
 * /cli-codex → openai-codex/gpt-5.3-codex (Codex CLI OAuth, direct API)
-* /cli-
+* /cli-codex54 → openai-codex/gpt-5.4 (Codex CLI OAuth, direct API)
 * /cli-back → restore model that was active before last /cli-* switch
 * /cli-test [model] → one-shot proxy health check (does NOT switch global model)
 * /cli-list → list all registered CLI bridge models with commands

@@ -405,81 +404,17 @@ function readCurrentModel(): string | null {
 // Phase 3: model command table
 // ──────────────────────────────────────────────────────────────────────────────
 const CLI_MODEL_COMMANDS = [
-  // ── Claude (via local proxy
-  {
-    [entry fields not captured in the diff view]
-  },
-  {
-    [entry fields not captured in the diff view]
-    label: "Claude Opus 4.6 (CLI)",
-  },
-  {
-    name: "cli-haiku",
-    model: "vllm/cli-claude/claude-haiku-4-5",
-    description: "Switch to Claude Haiku 4.5 (Claude Code CLI via local proxy)",
-    label: "Claude Haiku 4.5 (CLI)",
-  },
-  // ── Gemini (via local proxy → Gemini CLI) ───────────────────────────────────
-  {
-    name: "cli-gemini",
-    model: "vllm/cli-gemini/gemini-2.5-pro",
-    description: "Switch to Gemini 2.5 Pro (Gemini CLI via local proxy)",
-    label: "Gemini 2.5 Pro (CLI)",
-  },
-  {
-    name: "cli-gemini-flash",
-    model: "vllm/cli-gemini/gemini-2.5-flash",
-    description: "Switch to Gemini 2.5 Flash (Gemini CLI via local proxy)",
-    label: "Gemini 2.5 Flash (CLI)",
-  },
-  {
-    name: "cli-gemini3",
-    model: "vllm/cli-gemini/gemini-3-pro-preview",
-    description: "Switch to Gemini 3 Pro Preview (Gemini CLI via local proxy)",
-    label: "Gemini 3 Pro Preview (CLI)",
-  },
-  {
-    name: "cli-gemini3-flash",
-    model: "vllm/cli-gemini/gemini-3-flash-preview",
-    description: "Switch to Gemini 3 Flash Preview (Gemini CLI via local proxy)",
-    label: "Gemini 3 Flash Preview (CLI)",
-  },
-  // ── Codex (via openai-codex provider — Codex CLI OAuth auth, direct API) ────
-  {
-    name: "cli-codex",
-    model: "openai-codex/gpt-5.3-codex",
-    description: "Switch to GPT-5.3 Codex (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.3 Codex",
-  },
-  {
-    name: "cli-codex-spark",
-    model: "openai-codex/gpt-5.3-codex-spark",
-    description: "Switch to GPT-5.3 Codex Spark (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.3 Codex Spark",
-  },
-  {
-    name: "cli-codex52",
-    model: "openai-codex/gpt-5.2-codex",
-    description: "Switch to GPT-5.2 Codex (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.2 Codex",
-  },
-  {
-    name: "cli-codex54",
-    model: "openai-codex/gpt-5.4",
-    description: "Switch to GPT-5.4 (openai-codex provider, Codex CLI auth — may require upgraded scope)",
-    label: "GPT-5.4 (Codex)",
-  },
-  {
-    name: "cli-codex-mini",
-    model: "openai-codex/gpt-5.1-codex-mini",
-    description: "Switch to GPT-5.1 Codex Mini (openai-codex provider, Codex CLI auth)",
-    label: "GPT-5.1 Codex Mini",
-  },
+  // ── Claude Code CLI (via local proxy) ────────────────────────────────────────
+  { name: "cli-sonnet", model: "vllm/cli-claude/claude-sonnet-4-6", description: "Claude Sonnet 4.6 (Claude Code CLI)", label: "Claude Sonnet 4.6 (CLI)" },
+  { name: "cli-opus", model: "vllm/cli-claude/claude-opus-4-6", description: "Claude Opus 4.6 (Claude Code CLI)", label: "Claude Opus 4.6 (CLI)" },
+  { name: "cli-haiku", model: "vllm/cli-claude/claude-haiku-4-5", description: "Claude Haiku 4.5 (Claude Code CLI)", label: "Claude Haiku 4.5 (CLI)" },
+  // ── Gemini CLI (via local proxy) ─────────────────────────────────────────────
+  { name: "cli-gemini", model: "vllm/cli-gemini/gemini-2.5-pro", description: "Gemini 2.5 Pro (Gemini CLI)", label: "Gemini 2.5 Pro (CLI)" },
+  { name: "cli-gemini-flash", model: "vllm/cli-gemini/gemini-2.5-flash", description: "Gemini 2.5 Flash (Gemini CLI)", label: "Gemini 2.5 Flash (CLI)" },
+  { name: "cli-gemini3", model: "vllm/cli-gemini/gemini-3-pro-preview", description: "Gemini 3 Pro (Gemini CLI)", label: "Gemini 3 Pro (CLI)" },
+  // ── Codex CLI (openai-codex provider, OAuth auth) ────────────────────────────
+  { name: "cli-codex", model: "openai-codex/gpt-5.3-codex", description: "GPT-5.3 Codex (Codex CLI auth)", label: "GPT-5.3 Codex" },
+  { name: "cli-codex54", model: "openai-codex/gpt-5.4", description: "GPT-5.4 (Codex CLI auth)", label: "GPT-5.4" },
 ] as const;
 
 /** Default model used by /cli-test when no arg is given */

@@ -678,7 +613,7 @@ function proxyTestRequest(
 const plugin = {
   id: "openclaw-cli-bridge-elvatis",
   name: "OpenClaw CLI Bridge",
-  version: "1.
+  version: "1.2.0",
   description:
     "Phase 1: openai-codex auth bridge. " +
     "Phase 2: HTTP proxy for gemini/claude CLIs. " +

@@ -697,6 +632,53 @@ const plugin = {
     // ── Grok session restore (non-blocking) ───────────────────────────────────
     void tryRestoreGrokSession(grokSessionPath, (msg) => api.logger.info(msg));
 
+    // ── Auto-connect all browser providers on startup (non-blocking) ──────────
+    void (async () => {
+      // Wait for proxy to start first
+      await new Promise(r => setTimeout(r, 3000));
+      const log = (msg: string) => api.logger.info(msg);
+      const ctx = await connectToOpenClawBrowser(log);
+      if (!ctx) { log("[cli-bridge] startup auto-connect: OpenClaw browser not available"); return; }
+      const pages = ctx.pages().map(p => p.url());
+      log(`[cli-bridge] startup auto-connect: ${pages.length} pages open`);
+
+      // Claude
+      if (pages.some(u => u.includes("claude.ai")) && !claudeContext) {
+        try {
+          const { getOrCreateClaudePage } = await import("./src/claude-browser.js");
+          const { page } = await getOrCreateClaudePage(ctx);
+          if (await page.locator(".ProseMirror").isVisible().catch(() => false)) {
+            claudeContext = ctx;
+            log("[cli-bridge:claude] auto-connected ✅");
+          }
+        } catch { /* not available */ }
+      }
+
+      // Gemini
+      if (pages.some(u => u.includes("gemini.google.com")) && !geminiContext) {
+        try {
+          const { getOrCreateGeminiPage } = await import("./src/gemini-browser.js");
+          const { page } = await getOrCreateGeminiPage(ctx);
+          if (await page.locator(".ql-editor").isVisible().catch(() => false)) {
+            geminiContext = ctx;
+            log("[cli-bridge:gemini] auto-connected ✅");
+          }
+        } catch { /* not available */ }
+      }
+
+      // ChatGPT
+      if (pages.some(u => u.includes("chatgpt.com")) && !chatgptContext) {
+        try {
+          const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+          const { page } = await getOrCreateChatGPTPage(ctx);
+          if (await page.locator("#prompt-textarea").isVisible().catch(() => false)) {
+            chatgptContext = ctx;
+            log("[cli-bridge:chatgpt] auto-connected ✅");
+          }
+        } catch { /* not available */ }
+      }
+    })();
+
     // ── Phase 1: openai-codex auth bridge ─────────────────────────────────────
     if (enableCodex) {
       api.registerProvider({

@@ -1457,6 +1439,93 @@ const plugin = {
       return { text: "✅ Disconnected from chatgpt.com. Run `/chatgpt-login` to reconnect." };
     },
   } satisfies OpenClawPluginCommandDefinition);
+
+    // ── /bridge-status — all providers at a glance ───────────────────────────
+    api.registerCommand({
+      name: "bridge-status",
+      description: "Show status of all headless browser providers (Grok, Claude, Gemini, ChatGPT)",
+      handler: async (): Promise<PluginCommandResult> => {
+        const lines: string[] = [`🌉 *CLI Bridge v${plugin.version} — Provider Status*\n`];
+
+        const checks = [
+          {
+            name: "Grok",
+            ctx: grokContext,
+            check: async () => {
+              if (!grokContext) return false;
+              const r = await verifySession(grokContext, () => {});
+              if (!r.valid) { grokContext = null; }
+              return r.valid;
+            },
+            models: "web-grok/grok-3, grok-3-fast, grok-3-mini",
+            loginCmd: "/grok-login",
+            expiry: () => { const e = loadGrokExpiry(); return e ? formatExpiryInfo(e) : null; },
+          },
+          {
+            name: "Claude",
+            ctx: claudeContext,
+            check: async () => {
+              if (!claudeContext) return false;
+              try {
+                const { getOrCreateClaudePage } = await import("./src/claude-browser.js");
+                const { page } = await getOrCreateClaudePage(claudeContext);
+                return page.locator(".ProseMirror").isVisible().catch(() => false);
+              } catch { claudeContext = null; return false; }
+            },
+            models: "web-claude/claude-sonnet, claude-opus, claude-haiku",
+            loginCmd: "/claude-login",
+            expiry: () => { const e = loadClaudeExpiry(); return e ? formatClaudeExpiry(e) : null; },
+          },
+          {
+            name: "Gemini",
+            ctx: geminiContext,
+            check: async () => {
+              if (!geminiContext) return false;
+              try {
+                const { getOrCreateGeminiPage } = await import("./src/gemini-browser.js");
+                const { page } = await getOrCreateGeminiPage(geminiContext);
+                return page.locator(".ql-editor").isVisible().catch(() => false);
+              } catch { geminiContext = null; return false; }
+            },
+            models: "web-gemini/gemini-2-5-pro, gemini-2-5-flash, gemini-3-pro, gemini-3-flash",
+            loginCmd: "/gemini-login",
+            expiry: () => { const e = loadGeminiExpiry(); return e ? formatGeminiExpiry(e) : null; },
+          },
+          {
+            name: "ChatGPT",
+            ctx: chatgptContext,
+            check: async () => {
+              if (!chatgptContext) return false;
+              try {
+                const { getOrCreateChatGPTPage } = await import("./src/chatgpt-browser.js");
+                const { page } = await getOrCreateChatGPTPage(chatgptContext);
+                return page.locator("#prompt-textarea").isVisible().catch(() => false);
+              } catch { chatgptContext = null; return false; }
+            },
+            models: "web-chatgpt/gpt-4o, gpt-o3, gpt-o4-mini, gpt-5",
+            loginCmd: "/chatgpt-login",
+            expiry: () => { const e = loadChatGPTExpiry(); return e ? formatChatGPTExpiry(e) : null; },
+          },
+        ];
+
+        for (const c of checks) {
+          const active = c.ctx !== null;
+          const ok = active ? await c.check() : false;
+          const expiry = c.expiry();
+          if (ok) {
+            lines.push(`✅ *${c.name}* — active`);
+            if (expiry) lines.push(`   🕐 ${expiry}`);
+            lines.push(`   Models: ${c.models}`);
+          } else {
+            lines.push(`❌ *${c.name}* — not connected (run \`${c.loginCmd}\`)`);
+          }
+          lines.push("");
+        }
+
+        lines.push(`🔌 Proxy: \`127.0.0.1:${port}\` | 22 models total`);
+        return { text: lines.join("\n") };
+      },
+    } satisfies OpenClawPluginCommandDefinition);
   // ─────────────────────────────────────────────────────────────────────────
 
   const allCommands = [

@@ -1476,6 +1545,7 @@ const plugin = {
     "/chatgpt-login",
     "/chatgpt-status",
     "/chatgpt-logout",
+    "/bridge-status",
   ];
   api.logger.info(`[cli-bridge] registered ${allCommands.length} commands: ${allCommands.join(", ")}`);
 },
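The `/bridge-status` handler added above folds each provider's liveness check into ✅/❌ lines. The rendering part can be isolated as a pure function, which is how it stays testable without a browser — `ProviderStatus` and `renderStatus` are illustrative names; the real command builds the lines inline:

```typescript
// Pure rendering sketch for the /bridge-status output added in v1.1.0.
interface ProviderStatus {
  name: string;
  ok: boolean;           // result of the provider's liveness check
  models: string;
  loginCmd: string;
  expiry: string | null; // formatted cookie-expiry info, if tracked
}

function renderStatus(version: string, port: number, checks: ProviderStatus[]): string {
  const lines: string[] = [`🌉 *CLI Bridge v${version} — Provider Status*\n`];
  for (const c of checks) {
    if (c.ok) {
      lines.push(`✅ *${c.name}* — active`);
      if (c.expiry) lines.push(`   🕐 ${c.expiry}`);
      lines.push(`   Models: ${c.models}`);
    } else {
      lines.push(`❌ *${c.name}* — not connected (run \`${c.loginCmd}\`)`);
    }
    lines.push("");
  }
  lines.push(`🔌 Proxy: \`127.0.0.1:${port}\` | 22 models total`);
  return lines.join("\n");
}
```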
package/openclaw.plugin.json
CHANGED

@@ -1,7 +1,7 @@
 {
   "id": "openclaw-cli-bridge-elvatis",
   "name": "OpenClaw CLI Bridge",
-  "version": "1.
+  "version": "1.2.0",
   "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
   "providers": [
     "openai-codex"
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-  "version": "1.
+  "version": "1.2.0",
   "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
   "type": "module",
   "openclaw": {
package/src/chatgpt-browser.ts
CHANGED

@@ -35,6 +35,14 @@ const STABLE_CHECKS = 3;
 const STABLE_INTERVAL_MS = 500;
 const CHATGPT_HOME = "https://chatgpt.com";
 
+const MODEL_URLS: Record<string, string> = {
+  "gpt-4o": "https://chatgpt.com/?model=gpt-4o",
+  "gpt-4o-mini": "https://chatgpt.com/?model=gpt-4o-mini",
+  "o3": "https://chatgpt.com/?model=o3",
+  "o4-mini": "https://chatgpt.com/?model=o4-mini",
+  "gpt-5": "https://chatgpt.com/?model=gpt-5",
+};
+
 const MODEL_MAP: Record<string, string> = {
   "gpt-4o": "gpt-4o",
   "gpt-4o-mini": "gpt-4o-mini",

@@ -173,18 +181,21 @@ export async function chatgptComplete(
   opts: ChatGPTBrowserOptions,
   log: (msg: string) => void
 ): Promise<ChatGPTBrowserResult> {
-  const { page, owned } = await getOrCreateChatGPTPage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
+  const navUrl = MODEL_URLS[model] ?? CHATGPT_HOME;
 
   log(`chatgpt-browser: complete model=${model}`);
 
+  const page = await context.newPage();
   try {
+    await page.goto(navUrl, { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
     const content = await sendAndWait(page, prompt, timeoutMs, log);
     return { content, model, finishReason: "stop" };
   } finally {
-    [not captured in the diff view]
+    await page.close().catch(() => {});
   }
 }

@@ -194,60 +205,66 @@ export async function chatgptCompleteStream(
   onToken: (token: string) => void,
   log: (msg: string) => void
 ): Promise<ChatGPTBrowserResult> {
-  const { page, owned } = await getOrCreateChatGPTPage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
+  const navUrl = MODEL_URLS[model] ?? CHATGPT_HOME;
 
   log(`chatgpt-browser: stream model=${model}`);
 
-  const
-  [old send/poll sequence largely not captured in the diff view]
-      stableCount
-      } else {
-        stableCount = 0;
-        lastText = text;
-      }
-    }
+  const page = await context.newPage();
+  try {
+    await page.goto(navUrl, { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
+
+    const countBefore = await countAssistantMessages(page);
+
+    await page.evaluate((msg: string) => {
+      const ed = document.querySelector("#prompt-textarea") as HTMLElement | null;
+      if (!ed) throw new Error("ChatGPT editor not found");
+      ed.focus();
+      document.execCommand("insertText", false, msg);
+    }, prompt);
+    await new Promise((r) => setTimeout(r, 300));
+    const sendBtn = page.locator('button[data-testid="send-button"]').first();
+    if (await sendBtn.isVisible().catch(() => false)) await sendBtn.click();
+    else await page.keyboard.press("Enter");
+
+    const deadline = Date.now() + timeoutMs;
+    let emittedLength = 0;
+    let lastText = "";
+    let stableCount = 0;
+
+    while (Date.now() < deadline) {
+      await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
+
+      const currentCount = await countAssistantMessages(page);
+      if (currentCount <= countBefore) continue;
+
+      const text = await getLastAssistantText(page);
+
+      if (text.length > emittedLength) {
+        onToken(text.slice(emittedLength));
+        emittedLength = text.length;
+      }
+
+      const streaming = await isStreaming(page);
+      if (streaming) { stableCount = 0; continue; }
+
+      if (text && text === lastText) {
+        stableCount++;
+        if (stableCount >= STABLE_CHECKS) {
+          log(`chatgpt-browser: stream done (${text.length} chars)`);
+          return { content: text, model, finishReason: "stop" };
+        }
+      } else {
+        stableCount = 0;
+        lastText = text;
+      }
+    }
+
+    throw new Error(`chatgpt.com stream timeout after ${timeoutMs}ms`);
+  } finally {
+    await page.close().catch(() => {});
+  }
 }
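The streaming loop above relies on a simple heuristic: emit only the new suffix of the assistant text each poll, and treat the answer as complete once the text has been non-empty and unchanged for `STABLE_CHECKS` consecutive polls while the page reports it is no longer streaming. That state machine can be sketched as a pure class — `StreamTracker` is an illustrative wrapper, not a name from the package, which keeps these variables inline in the loop:

```typescript
const STABLE_CHECKS = 3;

// Pure sketch of the stream-completion heuristic shared by the browser providers.
class StreamTracker {
  private emittedLength = 0;
  private lastText = "";
  private stableCount = 0;
  readonly tokens: string[] = [];

  // Feed the current assistant text and streaming flag once per poll.
  // Returns true when the response is considered complete.
  poll(text: string, streaming: boolean): boolean {
    if (text.length > this.emittedLength) {
      // Emit only the new suffix, never re-emit earlier characters.
      this.tokens.push(text.slice(this.emittedLength));
      this.emittedLength = text.length;
    }
    if (streaming) { this.stableCount = 0; return false; }
    if (text && text === this.lastText) {
      this.stableCount++;
      return this.stableCount >= STABLE_CHECKS;
    }
    this.stableCount = 0;
    this.lastText = text;
    return false;
  }
}
```

The stability counter is what makes DOM polling robust against partial renders: a single identical read is not enough, the text must hold still across several intervals.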
package/src/claude-browser.ts
CHANGED

@@ -160,18 +160,20 @@ export async function claudeComplete(
   opts: ClaudeBrowserOptions,
   log: (msg: string) => void
 ): Promise<ClaudeBrowserResult> {
-  const { page, owned } = await getOrCreateClaudePage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
 
   log(`claude-browser: complete model=${model}`);
 
+  const page = await context.newPage();
   try {
+    await page.goto(CLAUDE_HOME, { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
     const content = await sendAndWait(page, prompt, timeoutMs, log);
     return { content, model, finishReason: "stop" };
   } finally {
-    [not captured in the diff view]
+    await page.close().catch(() => {});
   }
 }

@@ -181,53 +183,60 @@ export async function claudeCompleteStream(
   onToken: (token: string) => void,
   log: (msg: string) => void
 ): Promise<ClaudeBrowserResult> {
-  const { page, owned } = await getOrCreateClaudePage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
 
   log(`claude-browser: stream model=${model}`);
 
-  const
-  [old send/poll sequence largely not captured in the diff view]
-    const ed = document.querySelector(".ProseMirror") as HTMLElement | null;
-    if (!ed) throw new Error("Claude editor not found");
-    ed.focus();
-    document.execCommand("insertText", false, msg);
-  }, prompt);
-  await new Promise((r) => setTimeout(r, 300));
-  await page.keyboard.press("Enter");
-      onToken(text.slice(emittedLength));
-      emittedLength = text.length;
-    }
-      } else {
-        stableCount = 0;
-        lastText = text;
-      }
-    }
+  const page = await context.newPage();
+  try {
+    await page.goto(CLAUDE_HOME, { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
+
+    const countBefore = await countAssistantMessages(page);
+
+    await page.evaluate((msg: string) => {
+      const ed = document.querySelector(".ProseMirror") as HTMLElement | null;
+      if (!ed) throw new Error("Claude editor not found");
+      ed.focus();
+      document.execCommand("insertText", false, msg);
+    }, prompt);
+    await new Promise((r) => setTimeout(r, 300));
+    await page.keyboard.press("Enter");
+
+    const deadline = Date.now() + timeoutMs;
+    let emittedLength = 0;
+    let lastText = "";
+    let stableCount = 0;
+
+    while (Date.now() < deadline) {
+      await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
+
+      const currentCount = await countAssistantMessages(page);
+      if (currentCount <= countBefore) continue;
+
+      const text = await getLastAssistantText(page);
+
+      if (text.length > emittedLength) {
+        onToken(text.slice(emittedLength));
+        emittedLength = text.length;
+      }
+
+      if (text && text === lastText) {
+        stableCount++;
+        if (stableCount >= STABLE_CHECKS) {
+          log(`claude-browser: stream done (${text.length} chars)`);
+          return { content: text, model, finishReason: "stop" };
+        }
+      } else {
+        stableCount = 0;
+        lastText = text;
+      }
+    }
+
+    throw new Error(`claude.ai stream timeout after ${timeoutMs}ms`);
+  } finally {
+    await page.close().catch(() => {});
+  }
 }
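All three rewritten providers follow the same v1.2.0 "fresh page per request" shape: open a new page, navigate, do the work, and close the page in a `finally` block so conversation state never accumulates across calls. A generic sketch of that lifecycle — the minimal `PageLike`/`ContextLike` interfaces are stand-ins for Playwright's `Page` and `BrowserContext`, and `withFreshPage` is a hypothetical helper (the real files inline the pattern):

```typescript
interface PageLike {
  goto(url: string): Promise<void>;
  close(): Promise<void>;
}
interface ContextLike {
  newPage(): Promise<PageLike>;
}

// Open a throwaway page, run the work, and always close the page afterwards.
async function withFreshPage<T>(
  ctx: ContextLike,
  url: string,
  work: (page: PageLike) => Promise<T>
): Promise<T> {
  const page = await ctx.newPage();
  try {
    await page.goto(url);
    return await work(page);
  } finally {
    // Close even on error — this is what prevents message accumulation.
    await page.close().catch(() => {});
  }
}
```

The `.catch(() => {})` on close matches the diff's defensive style: a failed close (e.g. browser already gone) must not mask the real result or error.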
package/src/gemini-browser.ts
CHANGED
@@ -167,18 +167,21 @@ export async function geminiComplete(
   opts: GeminiBrowserOptions,
   log: (msg: string) => void
 ): Promise<GeminiBrowserResult> {
-
+  // TODO: Gemini model switching requires UI interaction (model picker dropdown), not yet implemented
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
 
   log(`gemini-browser: complete model=${model}`);
 
+  const page = await context.newPage();
   try {
+    await page.goto(GEMINI_HOME, { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
     const content = await sendAndWait(page, prompt, timeoutMs, log);
     return { content, model, finishReason: "stop" };
   } finally {
-
+    await page.close().catch(() => {});
   }
 }
 
@@ -188,55 +191,60 @@ export async function geminiCompleteStream(
   onToken: (token: string) => void,
   log: (msg: string) => void
 ): Promise<GeminiBrowserResult> {
-  const { page, owned } = await getOrCreateGeminiPage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
 
   log(`gemini-browser: stream model=${model}`);
 
-  const
+  const page = await context.newPage();
+  try {
+    await page.goto(GEMINI_HOME, { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
 
-
-  await editor.click();
-  await editor.type(prompt, { delay: 10 });
-  await new Promise((r) => setTimeout(r, 300));
-  await page.keyboard.press("Enter");
+    const countBefore = await countResponses(page);
 
-
-
-
-
+    const editor = page.locator(".ql-editor");
+    await editor.click();
+    await editor.type(prompt, { delay: 10 });
+    await new Promise((r) => setTimeout(r, 300));
+    await page.keyboard.press("Enter");
 
-
-
+    const deadline = Date.now() + timeoutMs;
+    let emittedLength = 0;
+    let lastText = "";
+    let stableCount = 0;
 
-
-
+    while (Date.now() < deadline) {
+      await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
 
-
+      const currentCount = await countResponses(page);
+      if (currentCount <= countBefore) continue;
 
-
-      onToken(text.slice(emittedLength));
-      emittedLength = text.length;
-    }
+      const text = await getLastResponseText(page);
 
-
-
+      if (text.length > emittedLength) {
+        onToken(text.slice(emittedLength));
+        emittedLength = text.length;
+      }
 
-
-      stableCount
-
-
-
-
+      const streaming = await isStreaming(page);
+      if (streaming) { stableCount = 0; continue; }
+
+      if (text && text === lastText) {
+        stableCount++;
+        if (stableCount >= STABLE_CHECKS) {
+          log(`gemini-browser: stream done (${text.length} chars)`);
+          return { content: text, model, finishReason: "stop" };
+        }
+      } else {
+        stableCount = 0;
+        lastText = text;
       }
-      } else {
-        stableCount = 0;
-        lastText = text;
     }
-    }
 
-
-
+    throw new Error(`gemini.google.com stream timeout after ${timeoutMs}ms`);
+  } finally {
+    await page.close().catch(() => {});
+  }
 }
package/src/grok-client.ts
CHANGED
@@ -160,18 +160,20 @@ export async function grokComplete(
   opts: GrokCompleteOptions,
   log: (msg: string) => void
 ): Promise<GrokCompleteResult> {
-  const { page, owned } = await getOrCreateGrokPage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
 
   log(`grok-client: complete model=${model}`);
 
+  const page = await context.newPage();
   try {
+    await page.goto("https://grok.com", { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
     const content = await sendAndWait(page, prompt, timeoutMs, log);
     return { content, model, finishReason: "stop" };
   } finally {
-
+    await page.close().catch(() => {});
   }
 }
 
@@ -184,66 +186,73 @@ export async function grokCompleteStream(
   onToken: (token: string) => void,
   log: (msg: string) => void
 ): Promise<GrokCompleteResult> {
-  const { page, owned } = await getOrCreateGrokPage(context);
   const model = resolveModel(opts.model);
   const prompt = flattenMessages(opts.messages);
   const timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
 
   log(`grok-client: stream model=${model}`);
 
-  const
-
-
-
-  // Send message
-  await page.evaluate((msg: string) => {
-    const ed =
-      document.querySelector(".ProseMirror") ||
-      document.querySelector('[contenteditable="true"]');
-    if (!ed) throw new Error("Grok editor not found");
-    (ed as HTMLElement).focus();
-    document.execCommand("insertText", false, msg);
-  }, prompt);
-  await new Promise((r) => setTimeout(r, 300));
-  await page.keyboard.press("Enter");
-
-  log(`grok-client: message sent, streaming…`);
-
-  // Stream: poll DOM, emit new chars as tokens
-  const deadline = Date.now() + timeoutMs;
-  let emittedLength = 0;
-  let lastText = "";
-  let stableCount = 0;
-
-  while (Date.now() < deadline) {
-    await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
+  const page = await context.newPage();
+  try {
+    await page.goto("https://grok.com", { waitUntil: "domcontentloaded", timeout: 15_000 });
+    await new Promise((r) => setTimeout(r, 2_000));
 
-    const
-      (
-      const bubbles = [...document.querySelectorAll(".message-bubble")];
-      if (bubbles.length <= before) return "";
-      return bubbles[bubbles.length - 1].textContent?.trim() ?? "";
-      },
-      countBefore
+    const countBefore = await page.evaluate(
+      () => document.querySelectorAll(".message-bubble").length
     );
 
-
-
-
-
-
+    // Send message
+    await page.evaluate((msg: string) => {
+      const ed =
+        document.querySelector(".ProseMirror") ||
+        document.querySelector('[contenteditable="true"]');
+      if (!ed) throw new Error("Grok editor not found");
+      (ed as HTMLElement).focus();
+      document.execCommand("insertText", false, msg);
+    }, prompt);
+    await new Promise((r) => setTimeout(r, 300));
+    await page.keyboard.press("Enter");
+
+    log(`grok-client: message sent, streaming…`);
+
+    // Stream: poll DOM, emit new chars as tokens
+    const deadline = Date.now() + timeoutMs;
+    let emittedLength = 0;
+    let lastText = "";
+    let stableCount = 0;
+
+    while (Date.now() < deadline) {
+      await new Promise((r) => setTimeout(r, STABLE_INTERVAL_MS));
+
+      const text = await page.evaluate(
+        (before: number) => {
+          const bubbles = [...document.querySelectorAll(".message-bubble")];
+          if (bubbles.length <= before) return "";
+          return bubbles[bubbles.length - 1].textContent?.trim() ?? "";
+        },
+        countBefore
+      );
+
+      if (text && text.length > emittedLength) {
+        const newChars = text.slice(emittedLength);
+        onToken(newChars);
+        emittedLength = text.length;
+      }
 
-
-
-
-
-
+      if (text && text === lastText) {
+        stableCount++;
+        if (stableCount >= STABLE_CHECKS) {
+          log(`grok-client: stream done (${text.length} chars)`);
+          return { content: text, model, finishReason: "stop" };
+        }
+      } else {
+        stableCount = 0;
+        lastText = text;
       }
-      } else {
-        stableCount = 0;
-        lastText = text;
     }
-    }
 
-
+    throw new Error(`grok.com stream timeout after ${timeoutMs}ms`);
+  } finally {
+    await page.close().catch(() => {});
+  }
 }
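`grokCompleteStream` snapshots the `.message-bubble` count before sending, then during polling ignores every read until a new bubble has appeared, always taking the last bubble's text. That in-page selection logic can be lifted out of the `page.evaluate` callback as a pure function; `latestResponse` is an illustrative name, and the bubble array is reduced to plain strings:

```typescript
// Pure form of the in-page selection logic: given the bubble texts and the
// count captured before the prompt was sent, return the newest response text,
// or "" until a new bubble has appeared.
function latestResponse(bubbles: string[], countBefore: number): string {
  if (bubbles.length <= countBefore) return "";
  return bubbles[bubbles.length - 1]?.trim() ?? "";
}
```

The `countBefore` guard is what stops the poller from re-emitting the user's own just-sent message (or a previous answer) as assistant output while the new response has not yet rendered.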
package/src/proxy-server.ts
CHANGED
@@ -64,42 +64,14 @@ export interface ProxyServerOptions {
 
 /** Available CLI bridge models for GET /v1/models */
 export const CLI_MODELS = [
-
-
-
-
-
-  },
-  {
-
-    name: "Gemini 2.5 Flash (CLI)",
-    contextWindow: 1_000_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-gemini/gemini-3-pro",
-    name: "Gemini 3 Pro (CLI)",
-    contextWindow: 1_000_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-claude/claude-opus-4-6",
-    name: "Claude Opus 4.6 (CLI)",
-    contextWindow: 200_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-claude/claude-sonnet-4-6",
-    name: "Claude Sonnet 4.6 (CLI)",
-    contextWindow: 200_000,
-    maxTokens: 8192,
-  },
-  {
-    id: "cli-claude/claude-haiku-4-5",
-    name: "Claude Haiku 4.5 (CLI)",
-    contextWindow: 200_000,
-    maxTokens: 8192,
-  },
+  // ── Claude Code CLI ───────────────────────────────────────────────────────
+  { id: "cli-claude/claude-sonnet-4-6", name: "Claude Sonnet 4.6 (CLI)", contextWindow: 200_000, maxTokens: 8_192 },
+  { id: "cli-claude/claude-opus-4-6", name: "Claude Opus 4.6 (CLI)", contextWindow: 200_000, maxTokens: 8_192 },
+  { id: "cli-claude/claude-haiku-4-5", name: "Claude Haiku 4.5 (CLI)", contextWindow: 200_000, maxTokens: 8_192 },
+  // ── Gemini CLI ────────────────────────────────────────────────────────────
+  { id: "cli-gemini/gemini-2.5-pro", name: "Gemini 2.5 Pro (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
+  { id: "cli-gemini/gemini-2.5-flash", name: "Gemini 2.5 Flash (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
+  { id: "cli-gemini/gemini-3-pro-preview", name: "Gemini 3 Pro (CLI)", contextWindow: 1_000_000, maxTokens: 8_192 },
   // Grok web-session models (requires /grok-login)
   { id: "web-grok/grok-3", name: "Grok 3 (web session)", contextWindow: 131_072, maxTokens: 131_072 },
   { id: "web-grok/grok-3-fast", name: "Grok 3 Fast (web session)", contextWindow: 131_072, maxTokens: 131_072 },