@elvatis_com/openclaw-cli-bridge-elvatis 1.8.4 → 1.8.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +42 -55
- package/SKILL.md +1 -1
- package/openclaw.plugin.json +2 -1
- package/package.json +1 -1
- package/src/proxy-server.ts +7 -15
package/README.md
CHANGED
@@ -2,7 +2,7 @@
 
 > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code) as model providers — with slash commands for instant model switching, restore, health testing, and model listing.
 
-**Current version:** `1.8.4`
+**Current version:** `1.8.6`
 
 ---
 
@@ -32,36 +32,36 @@ All commands use gateway-level `commands.allowFrom` for authorization (`requireA
 
 **Claude Code CLI** (routed via local proxy on `:31337`):
 
-| Command | Model |
-|---|---|
-| `/cli-sonnet` | `vllm/cli-claude/claude-sonnet-4-6` |
-| `/cli-opus` | `vllm/cli-claude/claude-opus-4-6` |
-| `/cli-haiku` | `vllm/cli-claude/claude-haiku-4-5` |
+| Command | Model | Notes |
+|---|---|---|
+| `/cli-sonnet` | `vllm/cli-claude/claude-sonnet-4-6` | ✅ Tested |
+| `/cli-opus` | `vllm/cli-claude/claude-opus-4-6` | ✅ Tested |
+| `/cli-haiku` | `vllm/cli-claude/claude-haiku-4-5` | ✅ Tested |
 
 **Gemini CLI** (routed via local proxy on `:31337`, stdin + `cwd=/tmp`):
 
-| Command | Model |
-|---|---|
-| `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` |
-| `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` |
-| `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro-preview` |
-| `/cli-gemini3-flash` | `vllm/cli-gemini/gemini-3-flash-preview` |
+| Command | Model | Notes |
+|---|---|---|
+| `/cli-gemini` | `vllm/cli-gemini/gemini-2.5-pro` | ✅ Tested |
+| `/cli-gemini-flash` | `vllm/cli-gemini/gemini-2.5-flash` | ✅ Tested |
+| `/cli-gemini3` | `vllm/cli-gemini/gemini-3-pro-preview` | ✅ Tested |
+| `/cli-gemini3-flash` | `vllm/cli-gemini/gemini-3-flash-preview` | ✅ Tested |
 
 **Codex CLI** (via `openai-codex` provider — OAuth auth, calls OpenAI API directly, **not** through the local proxy):
 
 | Command | Model | Notes |
 |---|---|---|
 | `/cli-codex` | `openai-codex/gpt-5.3-codex` | ✅ Tested |
-| `/cli-codex-spark` | `openai-codex/gpt-5.3-codex-spark` | |
-| `/cli-codex52` | `openai-codex/gpt-5.2-codex` | |
+| `/cli-codex-spark` | `openai-codex/gpt-5.3-codex-spark` | ✅ Tested |
+| `/cli-codex52` | `openai-codex/gpt-5.2-codex` | ✅ Tested |
 | `/cli-codex54` | `openai-codex/gpt-5.4` | May require upgraded OAuth scope |
 | `/cli-codex-mini` | `openai-codex/gpt-5.1-codex-mini` | ✅ Tested |
 
 **BitNet local inference** (via local proxy → llama-server on 127.0.0.1:8082, no API key):
 
-| Command | Model |
-|---|---|
-| `/cli-bitnet` | `vllm/local-bitnet/bitnet-2b` |
+| Command | Model | Notes |
+|---|---|---|
+| `/cli-bitnet` | `vllm/local-bitnet/bitnet-2b` | ✅ Tested |
 
 **Utility:**
 
@@ -91,14 +91,16 @@ All commands use gateway-level `commands.allowFrom` for authorization (`requireA
 
 Routes requests through real browser sessions on the provider's web UI. Requires a valid login session (free or paid tier). Uses persistent Chromium profiles — sessions survive gateway restarts.
 
+> **Note:** Only Grok and Gemini are active browser providers. Claude and ChatGPT browser routes were removed in v1.6.x — use `cli-claude/*` (Claude CLI) and `openai-codex/*` / `copilot-proxy` instead.
+
 **Grok** (grok.com — SuperGrok subscription):
 
 | Model | Notes |
 |---|---|
-| `web-grok/grok-3` | Full model |
-| `web-grok/grok-3-fast` | Faster variant |
-| `web-grok/grok-3-mini` | Lightweight |
-| `web-grok/grok-3-mini-fast` | Fastest |
+| `web-grok/grok-3` | Full model | ✅ Tested |
+| `web-grok/grok-3-fast` | Faster variant | ✅ Tested |
+| `web-grok/grok-3-mini` | Lightweight | ✅ Tested |
+| `web-grok/grok-3-mini-fast` | Fastest | ✅ Tested |
 
 | Command | What it does |
 |---|---|
@@ -110,10 +112,10 @@ Routes requests through real browser sessions on the provider's web UI. Requires
 
 | Model | Notes |
 |---|---|
-| `web-gemini/gemini-2-5-pro` | Gemini 2.5 Pro |
-| `web-gemini/gemini-2-5-flash` | Gemini 2.5 Flash |
-| `web-gemini/gemini-3-pro` | Gemini 3 Pro |
-| `web-gemini/gemini-3-flash` | Gemini 3 Flash |
+| `web-gemini/gemini-2-5-pro` | Gemini 2.5 Pro | ✅ Tested |
+| `web-gemini/gemini-2-5-flash` | Gemini 2.5 Flash | ✅ Tested |
+| `web-gemini/gemini-3-pro` | Gemini 3 Pro | ✅ Tested |
+| `web-gemini/gemini-3-flash` | Gemini 3 Flash | ✅ Tested |
 
 | Command | What it does |
 |---|---|
@@ -121,37 +123,9 @@ Routes requests through real browser sessions on the provider's web UI. Requires
 | `/gemini-status` | Show session validity + cookie expiry |
 | `/gemini-logout` | Clear session |
 
-**Claude.ai** (
+**Claude.ai** ~~(removed in v1.6.x)~~ — use `/cli-sonnet`, `/cli-opus`, `/cli-haiku` instead.
 
-| Model | Notes |
-|---|---|
-| `web-claude/claude-sonnet` | Claude Sonnet (web) |
-| `web-claude/claude-opus` | Claude Opus (web) |
-| `web-claude/claude-haiku` | Claude Haiku (web) |
-
-| Command | What it does |
-|---|---|
-| `/claude-login` | Authenticate, save cookies to `~/.openclaw/claude-profile/` |
-| `/claude-status` | Show session validity + cookie expiry |
-| `/claude-logout` | Clear session |
-
-**ChatGPT** (chatgpt.com — Plus/Pro subscription):
-
-| Model | Notes |
-|---|---|
-| `web-chatgpt/gpt-4o` | GPT-4o |
-| `web-chatgpt/gpt-4o-mini` | GPT-4o Mini |
-| `web-chatgpt/gpt-4.1` | GPT-4.1 |
-| `web-chatgpt/o3` | o3 |
-| `web-chatgpt/o4-mini` | o4-mini |
-| `web-chatgpt/gpt-5` | GPT-5 |
-| `web-chatgpt/gpt-5-mini` | GPT-5 Mini |
-
-| Command | What it does |
-|---|---|
-| `/chatgpt-login` | Authenticate, save cookies to `~/.openclaw/chatgpt-profile/` |
-| `/chatgpt-status` | Show session validity + cookie expiry |
-| `/chatgpt-logout` | Clear session |
+**ChatGPT** ~~(removed in v1.6.x)~~ — use `/cli-codex`, `openai-codex/*`, or `copilot-proxy` instead.
 
 **Session lifecycle:**
 - First use: run `/xxx-login` once — authenticates and saves cookies to persistent Chromium profile
@@ -368,6 +342,19 @@ npm test # vitest run (83 tests)
 
 ## Changelog
 
+### v1.8.6
+- **fix:** Remove stale web-claude/* and web-chatgpt/* entries from model list (status page showed removed providers)
+
+
+### v1.8.5
+- **fix:** Replace full system prompt with 30-token mini stub for BitNet (prevents context overflow)
+- **fix:** Truncate to last 6 non-system messages before forwarding to BitNet (4096 token limit)
+- **fix:** Flatten multi-part content arrays to plain strings (llama-server crash fix)
+
+### v1.8.3 → v1.8.4
+- Intermediate BitNet crash fixes (superseded by v1.8.5)
+
+
 ### v1.8.2
 - **fix:** `local-bitnet/*` exempt from tool-call rejection — llama-server ignores tool schemas silently. OpenClaw always sends tools with every request, so this was blocking all BitNet usage.
 
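The v1.8.5 changelog entry "Flatten multi-part content arrays to plain strings" refers to a helper (`flattenContent` in `package/src/proxy-server.ts`) whose body is not part of this diff. As a rough sketch only — the content-part shape and the newline join are assumptions, not the package's verified code — such a flattener could look like:

```typescript
// HYPOTHETICAL sketch of the v1.8.5 flatten fix. OpenAI-style chat messages
// may carry content as an array of typed parts; llama-server expects a plain
// string, so non-string content is collapsed to the text it contains.
type ContentPart = { type: string; text?: string };
type MessageContent = string | ContentPart[] | null | undefined;

function flattenContent(content: MessageContent): string {
  if (typeof content === "string") return content; // already a plain string
  if (!Array.isArray(content)) return "";          // null/undefined → empty
  // Keep only textual parts; image/tool parts carry no text for llama-server.
  return content
    .map((part) => (typeof part.text === "string" ? part.text : ""))
    .filter((s) => s.length > 0)
    .join("\n");
}

console.log(flattenContent([{ type: "text", text: "hello" }, { type: "image_url" }, { type: "text", text: "world" }]));
// prints "hello" and "world" on two lines
```

Whatever the real implementation does, the design constraint from the changelog is the same: llama-server crashes on array-valued `content`, so the proxy must hand it a string.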
package/SKILL.md
CHANGED
package/openclaw.plugin.json
CHANGED
@@ -1,7 +1,8 @@
 {
   "id": "openclaw-cli-bridge-elvatis",
+  "slug": "openclaw-cli-bridge-elvatis",
   "name": "OpenClaw CLI Bridge",
-  "version": "1.8.4",
+  "version": "1.8.6",
   "license": "MIT",
   "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
   "providers": [
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-  "version": "1.8.4",
+  "version": "1.8.6",
   "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
   "type": "module",
   "openclaw": {
package/src/proxy-server.ts
CHANGED
@@ -94,18 +94,8 @@ export const CLI_MODELS = [
   { id: "web-gemini/gemini-2-5-flash", name: "Gemini 2.5 Flash (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
   { id: "web-gemini/gemini-3-pro", name: "Gemini 3 Pro (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
   { id: "web-gemini/gemini-3-flash", name: "Gemini 3 Flash (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
-  // Claude web-session models (requires /claude-login)
-  { id: "web-claude/claude-sonnet", name: "Claude Sonnet (web session)", contextWindow: 200_000, maxTokens: 8192 },
-  { id: "web-claude/claude-opus", name: "Claude Opus (web session)", contextWindow: 200_000, maxTokens: 8192 },
-  { id: "web-claude/claude-haiku", name: "Claude Haiku (web session)", contextWindow: 200_000, maxTokens: 8192 },
-  // ChatGPT web-session models (requires /chatgpt-login)
-  { id: "web-chatgpt/gpt-4o", name: "GPT-4o (web session)", contextWindow: 128_000, maxTokens: 16_384 },
-  { id: "web-chatgpt/gpt-4o-mini", name: "GPT-4o Mini (web session)", contextWindow: 128_000, maxTokens: 16_384 },
-  { id: "web-chatgpt/gpt-4.1", name: "GPT-4.1 (web session)", contextWindow: 1_047_576, maxTokens: 32_768 },
-  { id: "web-chatgpt/o3", name: "o3 (web session)", contextWindow: 200_000, maxTokens: 100_000 },
-  { id: "web-chatgpt/o4-mini", name: "o4-mini (web session)", contextWindow: 200_000, maxTokens: 100_000 },
-  { id: "web-chatgpt/gpt-5", name: "GPT-5 (web session)", contextWindow: 1_047_576, maxTokens: 32_768 },
-  { id: "web-chatgpt/gpt-5-mini", name: "GPT-5 Mini (web session)", contextWindow: 1_047_576, maxTokens: 32_768 },
+  // Claude → use cli-claude/* instead (web-claude removed in v1.6.x)
+  // ChatGPT → use openai-codex/* or copilot-proxy instead (web-chatgpt removed in v1.6.x)
   // ── Local BitNet inference ──────────────────────────────────────────────────
   { id: "local-bitnet/bitnet-2b", name: "BitNet b1.58 2B (local CPU inference)", contextWindow: 4_096, maxTokens: 2_048 },
 ];
 
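For context on how entries like those in `CLI_MODELS` are typically consumed: an OpenAI-compatible proxy usually surfaces its catalog via `GET /v1/models`. The mapping below is an illustrative assumption — the diff does not show the package's actual handler, and `toOpenAiModelList` and the `owned_by` value are invented names — with the two sample entries copied from the hunk above:

```typescript
// Illustrative only: shaping CLI_MODELS-style entries into an
// OpenAI-compatible /v1/models response body. The real handler in
// proxy-server.ts is not shown in this diff; this mapping is an assumption.
type CliModel = { id: string; name: string; contextWindow: number; maxTokens: number };

const MODELS: CliModel[] = [
  { id: "web-gemini/gemini-3-flash", name: "Gemini 3 Flash (web session)", contextWindow: 1_000_000, maxTokens: 8192 },
  { id: "local-bitnet/bitnet-2b", name: "BitNet b1.58 2B (local CPU inference)", contextWindow: 4_096, maxTokens: 2_048 },
];

function toOpenAiModelList(models: CliModel[]) {
  return {
    object: "list",
    // Each entry becomes a minimal OpenAI "model" object keyed by its id.
    data: models.map((m) => ({ id: m.id, object: "model", owned_by: "openclaw-cli-bridge" })),
  };
}

console.log(toOpenAiModelList(MODELS).data.map((m) => m.id).join(", "));
// → web-gemini/gemini-3-flash, local-bitnet/bitnet-2b
```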
@@ -578,15 +568,17 @@ async function handleRequest(
   // BitNet has a 4096 token context window. Long sessions blow it up and
   // cause a hard C++ crash (no graceful error). Truncate to system prompt +
   // last 10 messages (~2k tokens max) to stay safely within the limit.
-  const BITNET_MAX_MESSAGES = 10;
+  const BITNET_MAX_MESSAGES = 6;
+  // Replace the full system prompt (MEMORY.md etc, ~2k+ tokens) with a
+  // minimal one so BitNet's 4096-token context isn't blown by the system msg alone.
+  const BITNET_SYSTEM = "You are Akido, a concise AI assistant. Answer briefly and directly. Current user: Emre. Timezone: Europe/Berlin.";
   const allFlat = parsed.messages.map((m) => ({
     role: m.role,
     content: flattenContent(m.content),
   }));
-  const systemMsgs = allFlat.filter((m) => m.role === "system");
   const nonSystemMsgs = allFlat.filter((m) => m.role !== "system");
   const truncated = nonSystemMsgs.slice(-BITNET_MAX_MESSAGES);
-  const bitnetMessages = [...systemMsgs, ...truncated];
+  const bitnetMessages = [{ role: "system", content: BITNET_SYSTEM }, ...truncated];
   const requestBody = JSON.stringify({ ...parsed, messages: bitnetMessages, tools: undefined });
 
   try {
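The BitNet request shaping in the hunk above — swap in a stub system prompt, keep only the last `BITNET_MAX_MESSAGES` non-system messages — can be sketched as a standalone function. Message shape is simplified to plain string content, the stub text is shortened, and the surrounding request plumbing is omitted; the constant names mirror the diff, the rest is a sketch:

```typescript
// Standalone sketch of the v1.8.5/v1.8.6 BitNet message shaping: drop the
// real (large) system prompt, keep the last 6 non-system messages, and
// prepend a short stub system message to fit a 4096-token context window.
type Msg = { role: string; content: string };

const BITNET_MAX_MESSAGES = 6;
const BITNET_SYSTEM = "You are a concise AI assistant. Answer briefly and directly.";

function shapeForBitnet(messages: Msg[]): Msg[] {
  const nonSystem = messages.filter((m) => m.role !== "system");   // original system prompt is discarded
  const truncated = nonSystem.slice(-BITNET_MAX_MESSAGES);         // keep only the conversation tail
  return [{ role: "system", content: BITNET_SYSTEM }, ...truncated];
}

// 1 large system prompt + 10 user/assistant turns → stub system + last 6 turns
const history: Msg[] = [
  { role: "system", content: "very long MEMORY.md prompt…" },
  ...Array.from({ length: 10 }, (_, i) => ({
    role: i % 2 === 0 ? "user" : "assistant",
    content: `turn ${i}`,
  })),
];
const shaped = shapeForBitnet(history);
console.log(shaped.length);      // 7
console.log(shaped[1].content);  // "turn 4"
```

The trade-off this encodes: BitNet loses long-range conversation memory and the gateway's full persona prompt, in exchange for never overflowing llama-server's hard 4096-token limit (which, per the changelog, crashed rather than erroring gracefully).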