clawmux 0.3.14 → 0.3.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -7,9 +7,8 @@ Smart model routing + context compression proxy for OpenClaw.
 
  - 🧠 **Smart Routing**: Signal-based escalation → LIGHT tries first, auto-escalates to MEDIUM/HEAVY when needed
  - 📦 **Context Compression**: Preemptive background summarization at configurable threshold (default 75%)
- - 🔌 **All Providers**: Supports all OpenClaw providers via 6 API format adapters
+ - 🔌 **All Providers**: Supports all OpenClaw providers via 7 API format adapters (Anthropic, OpenAI Chat Completions, OpenAI Responses, OpenAI Codex, Google, Ollama, Bedrock)
  - ⚡ **Zero Config Auth**: Uses OpenClaw's existing provider credentials — no separate API keys
- - 📊 **Cost Tracking**: Real-time savings stats at /stats endpoint
  - 🔄 **Hot Reload**: Config changes apply without restart
 
  ## Installation
@@ -35,14 +34,14 @@ Adjust as needed:
  {
    "compression": {
      "threshold": 0.75, // trigger compression at 75% of context window
-     "model": "anthropic/claude-3-5-haiku-20241022", // model used for summarization (provider/model format)
+     "model": "zai/glm-5-turbo", // model used for summarization (provider/model format)
      "targetRatio": 0.6 // compress to 60% of original token count
    },
    "routing": {
      "models": {
-       "LIGHT": "anthropic/claude-3-5-haiku-20241022",
-       "MEDIUM": "anthropic/claude-sonnet-4-20250514",
-       "HEAVY": "anthropic/claude-opus-4-20250514"
+       "LIGHT": "zai/glm-5-turbo", // fast & cheap first attempt (openai-completions)
+       "MEDIUM": "anthropic/claude-sonnet-4.5", // balanced middle tier (anthropic-messages)
+       "HEAVY": "openai/gpt-5.4" // most capable terminal tier (openai-completions)
        // Model IDs use 'provider/model' format. Do NOT use "clawmux" as provider — causes infinite loops
      }
    },
@@ -57,21 +56,23 @@ Config is watched for changes. Edit `~/.openclaw/clawmux.json` while the proxy i
 
  ### Cross-Provider Routing
 
- Mix models from different providers by tier. ClawMux automatically translates request and response formats between providers:
+ The default example above already mixes three providers (ZAI, Anthropic, OpenAI). You can swap in any combination, as long as every provider you reference is configured in your `openclaw.json`. A Google + Anthropic + OpenAI mix:
 
  ```jsonc
  {
    "routing": {
      "models": {
-       "LIGHT": "zai/glm-5", // ZAI (openai-completions)
-       "MEDIUM": "anthropic/claude-sonnet-4-20250514", // Anthropic (anthropic-messages)
-       "HEAVY": "openai/gpt-5.4" // OpenAI (openai-completions)
+       "LIGHT": "google/gemini-2.5-flash", // Google (google-generative-ai)
+       "MEDIUM": "anthropic/claude-sonnet-4.5", // Anthropic (anthropic-messages)
+       "HEAVY": "openai/gpt-5.4" // OpenAI (openai-completions)
      }
    }
  }
  ```
 
- All three providers must be configured in your `openclaw.json`. ClawMux handles format translation transparently a request arriving in Anthropic format gets translated to OpenAI format when routed to GPT, and the response is translated back to Anthropic format before returning to OpenClaw.
+ If you've authenticated a provider through OpenClaw that's not in the pi-ai catalog (for example a ChatGPT subscription registered as `openai-codex` with `api: openai-codex-responses`), you can reference its model IDs here too; ClawMux routes through whatever OpenClaw already knows about.
+
+ ClawMux handles format translation transparently — a request arriving in Anthropic format gets translated to OpenAI format when routed to GPT, and the response is translated back to Anthropic format before returning to OpenClaw.
 
  Supported translation pairs: Anthropic ↔ OpenAI ↔ Google ↔ Ollama ↔ Bedrock (all combinations).
 
@@ -83,6 +84,8 @@ ClawMux registers as a single provider `clawmux` in OpenClaw with model `auto`.
  openclaw provider clawmux
  ```
 
+ `clawmux init` manages the `clawmux` provider entry in `openclaw.json` (and the per-agent `models.json` caches) for you. It sets `api` to match your MEDIUM tier's API format and computes the correct `baseUrl` from that — `http://localhost:<port>/v1` for OpenAI-style APIs (where the upstream SDK appends `/chat/completions` or `/responses`) and `http://localhost:<port>` for everything else. Do not hand-edit these fields; rerun `clawmux init` after changing the MEDIUM model in `~/.openclaw/clawmux.json`.
+
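The `baseUrl` rule in the paragraph above can be sketched in a few lines. The API-format names, the set membership, and the default port below are illustrative assumptions; only the `/v1`-versus-bare-origin split comes from the text.

```python
# Sketch of the baseUrl rule described above. The API-format names are
# assumptions for illustration, not ClawMux internals.
OPENAI_STYLE_APIS = {
    "openai-completions",      # SDK appends /chat/completions
    "openai-responses",        # SDK appends /responses
    "openai-codex-responses",
}

def base_url(api_format: str, port: int = 3456) -> str:
    # OpenAI-style SDKs append the endpoint path themselves, so the proxy
    # advertises .../v1; every other format gets the bare origin.
    if api_format in OPENAI_STYLE_APIS:
        return f"http://localhost:{port}/v1"
    return f"http://localhost:{port}"

print(base_url("openai-completions"))  # http://localhost:3456/v1
print(base_url("anthropic-messages"))  # http://localhost:3456
```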
  ## How It Works
 
  ```
@@ -101,7 +104,7 @@ OpenClaw → ClawMux Proxy (localhost:3456) → Upstream Provider(s)
 
  **Kill switch**: Set `routing.escalation.enabled` to `false` in your config to disable escalation and always use the MEDIUM model. This is useful for debugging or when you want predictable routing.
 
- **Context compression** runs in the background after each response. When the conversation approaches the configured threshold, ClawMux summarizes older messages before the next request goes out. This keeps costs down on long conversations without interrupting the flow.
+ **Context compression** happens in two layers. Deterministic compaction runs inline on the request path: once the incoming payload crosses the hard ceiling (90% of the smallest routing-tier context window), ClawMux truncates oversized `tool_result` blocks head+tail with a marker before falling back to whole-message truncation, so tool chains stay intact. LLM-based summarization runs in the background after each response: when the conversation crosses the configured threshold, ClawMux summarizes older messages for the next request. If the conversation itself exceeds the summarization model's own context window, the work is split into chunks, each chunk is summarized in parallel, and the summaries are recursively reduced until they fit.
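The head+tail compaction step described above can be sketched as follows. The marker string and the even head/tail split are assumptions, not ClawMux's actual output format.

```python
# Minimal sketch of head+tail truncation for an oversized tool_result.
# Marker wording and the 50/50 split are illustrative assumptions.
MARKER = "\n[...truncated...]\n"

def truncate_head_tail(text: str, max_chars: int) -> str:
    if len(text) <= max_chars:
        return text  # fits: leave the tool_result untouched
    budget = max_chars - len(MARKER)
    head = budget // 2          # keep the start (tool invocation context)
    tail = budget - head        # and the end (final results) intact
    return text[:head] + MARKER + text[-tail:]

out = truncate_head_tail("x" * 10_000, 200)
print(len(out))  # 200
```

Keeping both ends matters for tool chains: the head usually names what was run, and the tail carries the result the model actually acts on.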
 
  ### Context Window Resolution
 
@@ -109,7 +112,7 @@ ClawMux resolves each model's context window using this priority chain:
 
  1. **~/.openclaw/clawmux.json** `routing.contextWindows` — explicit per-model override
  2. **openclaw.json** `models.providers[provider].models[].contextWindow` — user config
- 3. **OpenClaw built-in catalog** — pi-ai model database (830+ models)
+ 3. **OpenClaw built-in catalog** — pi-ai model database (890+ models, updated regularly)
  4. **Default: 200,000 tokens**
 
  Compression threshold uses the **minimum** context window across all routing models, since compression happens before routing decides which model to use.
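The resolution chain and the minimum-window rule above can be sketched together. The catalog contents and window sizes below are made-up examples, not the real pi-ai database.

```python
# Sketch of the four-step resolution chain and the minimum-window rule.
# Catalog entries and window sizes are illustrative assumptions.
DEFAULT_WINDOW = 200_000  # step 4 fallback

def resolve_window(model_id, overrides, user_config, catalog):
    # Steps 1-3, highest priority first; first source that knows the model wins.
    for source in (overrides, user_config, catalog):
        if model_id in source:
            return source[model_id]
    return DEFAULT_WINDOW

catalog = {"zai/glm-5-turbo": 128_000, "anthropic/claude-sonnet-4.5": 200_000}
tiers = ["zai/glm-5-turbo", "anthropic/claude-sonnet-4.5", "openai/gpt-5.4"]

# gpt-5.4 is absent from every source here, so it falls through to the default.
floor = min(resolve_window(m, {}, {}, catalog) for m in tiers)
trigger = int(floor * 0.75)  # compression.threshold from the config above
print(floor, trigger)  # 128000 96000
```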
@@ -119,7 +122,7 @@ Compression threshold uses the **minimum** context window across all routing mod
  | Method | Path | Description |
  |---|---|---|
  | `GET` | `/health` | Health check |
- | `GET` | `/stats` | Cost savings statistics |
+ | `GET` | `/v1/models` | OpenAI-compatible model list (used by OpenClaw for validation) |
  | `POST` | `/v1/messages` | Anthropic Messages |
  | `POST` | `/v1/chat/completions` | OpenAI Chat Completions |
  | `POST` | `/v1/responses` | OpenAI Responses |
@@ -3,16 +3,16 @@
  "compression": {
    "threshold": 0.75,
    "_comment_threshold": "Token ratio (0.1–0.95) that triggers context compression.",
-   "model": "anthropic/claude-3-5-haiku-20241022",
+   "model": "zai/glm-5-turbo",
    "_comment_model": "Model ID in 'provider/model' format used for the compression summarisation call.",
    "targetRatio": 0.6,
    "_comment_targetRatio": "Desired compression ratio (0.2–0.9). 0.6 = compress to 60% of original."
  },
  "routing": {
    "models": {
-     "LIGHT": "anthropic/claude-3-5-haiku-20241022",
-     "MEDIUM": "anthropic/claude-sonnet-4-20250514",
-     "HEAVY": "anthropic/claude-opus-4-20250514",
+     "LIGHT": "zai/glm-5-turbo",
+     "MEDIUM": "anthropic/claude-sonnet-4.5",
+     "HEAVY": "openai/gpt-5.4",
      "_comment_models": "Model IDs in 'provider/model' format for each routing tier. Do NOT use provider names starting with 'clawmux-' — this causes infinite loops."
    },
    "escalation": {