clawmux 0.3.15 → 0.3.16
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +15 -14
- package/clawmux.example.json +4 -4
- package/dist/cli.cjs +48091 -37020
- package/dist/index.cjs +48074 -37003
- package/package.json +2 -2
package/README.md
CHANGED
@@ -7,9 +7,8 @@ Smart model routing + context compression proxy for OpenClaw.
 
 - 🧠 **Smart Routing**: Signal-based escalation → LIGHT tries first, auto-escalates to MEDIUM/HEAVY when needed
 - 📦 **Context Compression**: Preemptive background summarization at configurable threshold (default 75%)
-- 🔌 **All Providers**: Supports all OpenClaw providers via
+- 🔌 **All Providers**: Supports all OpenClaw providers via 7 API format adapters (Anthropic, OpenAI Chat Completions, OpenAI Responses, OpenAI Codex, Google, Ollama, Bedrock)
 - ⚡ **Zero Config Auth**: Uses OpenClaw's existing provider credentials — no separate API keys
-- 📊 **Cost Tracking**: Real-time savings stats at /stats endpoint
 - 🔄 **Hot Reload**: Config changes apply without restart
 
 ## Installation
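The "Smart Routing" bullet above describes a LIGHT-first ladder that escalates on demand. A minimal TypeScript sketch of that idea, as an illustration only: `attempt`, the `ok` flag, and `routeWithEscalation` are hypothetical names, not ClawMux's internals, and the real proxy derives escalation from response signals rather than a simple boolean.

```typescript
// Illustrative sketch of tiered escalation: try LIGHT first, walk up
// the ladder when the attempt signals a stronger model is needed.
type Tier = "LIGHT" | "MEDIUM" | "HEAVY";
const LADDER: Tier[] = ["LIGHT", "MEDIUM", "HEAVY"];

function routeWithEscalation(
  attempt: (tier: Tier) => { ok: boolean; text: string },
): { tier: Tier; text: string } {
  let last = { tier: LADDER[0], text: "" };
  for (const tier of LADDER) {
    const result = attempt(tier);
    last = { tier, text: result.text };
    // Stop at the first tier whose answer is good enough;
    // HEAVY is terminal, so its answer is returned regardless.
    if (result.ok) return last;
  }
  return last;
}
```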
@@ -35,14 +34,14 @@ Adjust as needed:
 {
   "compression": {
     "threshold": 0.75, // trigger compression at 75% of context window
-    "model": "
+    "model": "zai/glm-5-turbo", // model used for summarization (provider/model format)
     "targetRatio": 0.6 // compress to 60% of original token count
   },
   "routing": {
     "models": {
-      "LIGHT": "
-      "MEDIUM": "anthropic/claude-sonnet-4
-      "HEAVY": "
+      "LIGHT": "zai/glm-5-turbo", // fast & cheap first attempt (openai-completions)
+      "MEDIUM": "anthropic/claude-sonnet-4.5", // balanced middle tier (anthropic-messages)
+      "HEAVY": "openai/gpt-5.4" // most capable terminal tier (openai-completions)
       // Model IDs use 'provider/model' format. Do NOT use "clawmux" as provider — causes infinite loops
     }
   },
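The `threshold` and `targetRatio` knobs in the hunk above are plain ratios, which the arithmetic below makes concrete. This is an illustrative sketch; `shouldCompress` and `targetTokens` are hypothetical helper names, not ClawMux's API.

```typescript
// Compression fires once the conversation uses more than `threshold`
// of the context window.
function shouldCompress(usedTokens: number, contextWindow: number, threshold: number): boolean {
  return usedTokens > contextWindow * threshold;
}

// The summarization pass then aims for `targetRatio` of the original size.
function targetTokens(usedTokens: number, targetRatio: number): number {
  return Math.floor(usedTokens * targetRatio);
}

// With a 200,000-token window and the defaults above, compression
// triggers past 150,000 tokens (0.75 * 200,000), and a 160,000-token
// conversation would be summarized down toward 96,000 (0.6 * 160,000).
```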
@@ -57,21 +56,23 @@ Config is watched for changes. Edit `~/.openclaw/clawmux.json` while the proxy i
 
 ### Cross-Provider Routing
 
-
+The default example above already mixes three providers (ZAI, Anthropic, OpenAI). You can swap in any combination, as long as every provider you reference is configured in your `openclaw.json`. A Google + Anthropic + OpenAI mix:
 
 ```jsonc
 {
   "routing": {
     "models": {
-      "LIGHT": "
-      "MEDIUM": "anthropic/claude-sonnet-4
-      "HEAVY": "openai/gpt-5.4"
+      "LIGHT": "google/gemini-2.5-flash", // Google (google-generative-ai)
+      "MEDIUM": "anthropic/claude-sonnet-4.5", // Anthropic (anthropic-messages)
+      "HEAVY": "openai/gpt-5.4" // OpenAI (openai-completions)
     }
   }
 }
 ```
 
-
+If you've authenticated a provider through OpenClaw that's not in the pi-ai catalog (for example a ChatGPT subscription registered as `openai-codex` with `api: openai-codex-responses`), you can reference its model IDs here too — ClawMux routes through whatever OpenClaw already knows about.
+
+ClawMux handles format translation transparently — a request arriving in Anthropic format gets translated to OpenAI format when routed to GPT, and the response is translated back to Anthropic format before returning to OpenClaw.
 
 Supported translation pairs: Anthropic ↔ OpenAI ↔ Google ↔ Ollama ↔ Bedrock (all combinations).
 
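The new paragraphs above describe transparent format translation. One direction of that translation can be sketched as below; the shapes are heavily simplified stand-ins (no tools, images, or streaming), and `anthropicToOpenAI` is an illustrative name, not one of ClawMux's actual adapters.

```typescript
// Simplified message shapes for the two wire formats.
type AnthropicMsg = { role: "user" | "assistant"; content: string | { type: "text"; text: string }[] };
type OpenAIMsg = { role: "system" | "user" | "assistant"; content: string };

function anthropicToOpenAI(system: string | undefined, messages: AnthropicMsg[]): OpenAIMsg[] {
  const out: OpenAIMsg[] = [];
  // Anthropic carries the system prompt as a top-level field;
  // OpenAI Chat Completions expects it as the first chat message.
  if (system) out.push({ role: "system", content: system });
  for (const m of messages) {
    // Anthropic content may be a plain string or a list of text blocks.
    const text = typeof m.content === "string"
      ? m.content
      : m.content.map((b) => b.text).join("\n");
    out.push({ role: m.role, content: text });
  }
  return out;
}
```

The reverse direction (and the Google/Ollama/Bedrock pairs) follows the same pattern of mapping roles and flattening or re-nesting content blocks.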
@@ -103,7 +104,7 @@ OpenClaw → ClawMux Proxy (localhost:3456) → Upstream Provider(s)
 
 **Kill switch**: Set `routing.escalation.enabled` to `false` in your config to disable escalation and always use the MEDIUM model. This is useful for debugging or when you want predictable routing.
 
-**Context compression** runs in the background after each response
+**Context compression** happens in two layers. Deterministic compaction runs inline on the request path: once the incoming payload crosses the hard ceiling (90% of the smallest routing-tier context window), ClawMux truncates oversized `tool_result` blocks head+tail with a marker before falling back to whole-message truncation, so tool chains stay intact. LLM-based summarisation runs in the background after each response: when the conversation crosses the configured threshold, ClawMux summarises older messages for the next request, and if the conversation itself exceeds the summarisation model's own context window the work is split into chunks, each chunk is summarised in parallel, and the summaries are recursively reduced until they fit.
 
 ### Context Window Resolution
 
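The head+tail truncation the new paragraph describes (keep the start and end of an oversized `tool_result`, splice in a marker) can be sketched as follows. The function name, the character-based size budget, and the marker text are all illustrative assumptions, not ClawMux's actual implementation.

```typescript
// Keep the head and tail of an oversized string and mark the cut,
// so surrounding tool-call/tool-result structure stays intact.
function truncateHeadTail(text: string, maxChars: number): string {
  if (text.length <= maxChars) return text;
  const marker = "\n[... truncated by clawmux ...]\n";
  const keep = maxChars - marker.length; // budget left for real content
  const head = Math.ceil(keep / 2);
  const tail = keep - head;
  return text.slice(0, head) + marker + text.slice(text.length - tail);
}
```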
@@ -111,7 +112,7 @@ ClawMux resolves each model's context window using this priority chain:
 
 1. **~/.openclaw/clawmux.json** `routing.contextWindows` — explicit per-model override
 2. **openclaw.json** `models.providers[provider].models[].contextWindow` — user config
-3. **OpenClaw built-in catalog** — pi-ai model database (
+3. **OpenClaw built-in catalog** — pi-ai model database (890+ models, updated regularly)
 4. **Default: 200,000 tokens**
 
 Compression threshold uses the **minimum** context window across all routing models, since compression happens before routing decides which model to use.
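The four-step priority chain and the "minimum across all routing models" rule above amount to a small fallback lookup. A sketch under stated assumptions: the three `Record` maps stand in for clawmux.json overrides, openclaw.json provider config, and the pi-ai catalog, and all names and window sizes here are hypothetical.

```typescript
const DEFAULT_CONTEXT_WINDOW = 200_000; // step 4 fallback

function resolveContextWindow(
  modelId: string,
  overrides: Record<string, number>,  // 1. clawmux.json routing.contextWindows
  userConfig: Record<string, number>, // 2. openclaw.json models.providers[...].models[]
  catalog: Record<string, number>,    // 3. built-in pi-ai catalog
): number {
  return overrides[modelId] ?? userConfig[modelId] ?? catalog[modelId] ?? DEFAULT_CONTEXT_WINDOW;
}

// The compression threshold uses the minimum window across all routing tiers,
// since compression runs before the routing decision picks a model.
function minContextWindow(
  models: string[],
  overrides: Record<string, number>,
  userConfig: Record<string, number>,
  catalog: Record<string, number>,
): number {
  return Math.min(...models.map((m) => resolveContextWindow(m, overrides, userConfig, catalog)));
}
```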
@@ -121,7 +122,7 @@ Compression threshold uses the **minimum** context window across all routing mod
 | Method | Path | Description |
 |---|---|---|
 | `GET` | `/health` | Health check |
-| `GET` | `/
+| `GET` | `/v1/models` | OpenAI-compatible model list (used by OpenClaw for validation) |
 | `POST` | `/v1/messages` | Anthropic Messages |
 | `POST` | `/v1/chat/completions` | OpenAI Chat Completions |
 | `POST` | `/v1/responses` | OpenAI Responses |
package/clawmux.example.json
CHANGED
@@ -3,16 +3,16 @@
   "compression": {
     "threshold": 0.75,
     "_comment_threshold": "Token ratio (0.1–0.95) that triggers context compression.",
-    "model": "
+    "model": "zai/glm-5-turbo",
     "_comment_model": "Model ID in 'provider/model' format used for the compression summarisation call.",
     "targetRatio": 0.6,
     "_comment_targetRatio": "Desired compression ratio (0.2–0.9). 0.6 = compress to 60% of original."
   },
   "routing": {
     "models": {
-      "LIGHT": "
-      "MEDIUM": "anthropic/claude-sonnet-4
-      "HEAVY": "
+      "LIGHT": "zai/glm-5-turbo",
+      "MEDIUM": "anthropic/claude-sonnet-4.5",
+      "HEAVY": "openai/gpt-5.4",
       "_comment_models": "Model IDs in 'provider/model' format for each routing tier. Do NOT use provider names starting with 'clawmux-' — this causes infinite loops."
     },
     "escalation": {