@elvatis_com/openclaw-cli-bridge-elvatis 2.7.0 → 2.7.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +6 -1
- package/SKILL.md +1 -1
- package/openclaw.plugin.json +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -2,7 +2,7 @@
 
 > OpenClaw plugin that bridges locally installed AI CLIs (Codex, Gemini, Claude Code, OpenCode, Pi) as model providers — with slash commands for instant model switching, restore, health testing, and model listing.
 
-**Current version:** `2.7.0`
+**Current version:** `2.7.1`
 
 ---
 
@@ -406,6 +406,11 @@ npm run ci # lint + typecheck + test
 
 ## Changelog
 
+### v2.7.1
+- **fix:** Fallback model `openai-codex/gpt-5.1` → `openai-codex/gpt-5.2-codex` — the bare `gpt-5.1` model ID doesn't exist in the CLI bridge allowlist, causing fallback failures with "model not allowed" errors
+- **fix:** Broken aliases `gpt51`, `gpt52`, `gemini25`, `gemini25-flash` now point to working CLI bridge models instead of non-existent providers (`google-gemini-cli`, bare `openai-codex` IDs)
+- **docs:** Perplexity `sonar-pro` tool incompatibility documented — API rejects tool parameters with `400 Tool parameters must be a JSON object`, tracked upstream as [openclaw/openclaw#64175](https://github.com/openclaw/openclaw/issues/64175). Fallback chain handles this correctly.
+
 ### v2.7.0
 - **feat:** Persistent per-model metrics — request counts, error rates, latency, and token usage now survive gateway restarts. Stored in `~/.openclaw/cli-bridge/metrics.json`, debounced writes (5s).
 - **feat:** Token usage estimation for all models — CLI runners (claude, gemini, codex), web-session models (gemini, claude, chatgpt) now report estimated `prompt_tokens` and `completion_tokens` in the OpenAI-compatible `usage` response field (~4 chars/token heuristic). Grok models continue to use real token counts from the API.
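The v2.7.1 fix entries above describe a fallback chain that fails with "model not allowed" when a model ID is absent from the CLI bridge allowlist. A minimal sketch of that try-in-order behaviour, assuming a simple set-based allowlist — the `resolveModel` helper and the non-codex model ID are hypothetical, not the plugin's actual implementation:

```javascript
// Hypothetical fallback-chain resolution against a model allowlist,
// illustrating why a bare `gpt-5.1` ID would fail before v2.7.1
// remapped it to `gpt-5.2-codex`.
const ALLOWLIST = new Set([
  "openai-codex/gpt-5.2-codex",
  "anthropic-claude-cli/claude-sonnet", // illustrative entry, not from the diff
]);

function resolveModel(chain, allowlist = ALLOWLIST) {
  for (const id of chain) {
    if (allowlist.has(id)) return id; // first allowlisted model in the chain wins
  }
  throw new Error("model not allowed: no model in the fallback chain is allowlisted");
}
```

Under this sketch, a chain starting with the bare `openai-codex/gpt-5.1` simply skips to the next allowlisted entry rather than erroring out.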
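The ~4 chars/token heuristic from the v2.7.0 token-estimation entry can be sketched as follows. The `estimateUsage` helper, its minimum of one token, and its exact rounding are assumptions for illustration; only the 4-characters-per-token ratio and the OpenAI-compatible `usage` field names come from the changelog:

```javascript
// Rough token estimation using the ~4 chars/token heuristic
// described in the v2.7.0 changelog entry. Function name and
// shape are illustrative, not the plugin's actual API.
function estimateUsage(promptText, completionText) {
  const CHARS_PER_TOKEN = 4;
  // Assumed: round up, and count at least one token per non-trivial string.
  const estimate = (text) => Math.max(1, Math.ceil(text.length / CHARS_PER_TOKEN));
  const promptTokens = estimate(promptText);
  const completionTokens = estimate(completionText);
  return {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens, // OpenAI-compatible `usage` shape
  };
}
```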
package/SKILL.md
CHANGED
package/openclaw.plugin.json
CHANGED
@@ -2,7 +2,7 @@
   "id": "openclaw-cli-bridge-elvatis",
   "slug": "openclaw-cli-bridge-elvatis",
   "name": "OpenClaw CLI Bridge",
-  "version": "2.7.0",
+  "version": "2.7.1",
   "license": "MIT",
   "description": "Phase 1: openai-codex auth bridge. Phase 2: local HTTP proxy routing model calls through gemini/claude CLIs (vllm provider).",
   "providers": [
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@elvatis_com/openclaw-cli-bridge-elvatis",
-  "version": "2.7.0",
+  "version": "2.7.1",
   "description": "Bridges gemini, claude, and codex CLI tools as OpenClaw model providers. Reads existing CLI auth without re-login.",
   "type": "module",
   "openclaw": {