openclaw-freerouter 1.3.0 → 2.0.1

package/LICENSE CHANGED
@@ -1,6 +1,6 @@
  MIT License
 
- Copyright (c) 2025 OpenFreeRouter
+ Copyright (c) 2026 OpenFreeRouter
 
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
package/README.md CHANGED
@@ -1,48 +1,175 @@
- # @openfreerouter/openclaw-freerouter
+ # FreeRouter — Smart LLM Router for OpenClaw
 
- **FreeRouter** Smart model router for OpenClaw. Auto-classifies requests using a 14-dimension weighted scorer and routes to the cheapest capable model using your own API keys.
+ **Stop overpaying for AI.** Every message to your AI assistant costs money — a simple "hello" shouldn't cost the same as "prove the Riemann hypothesis." FreeRouter automatically routes each request to the right model based on complexity.
+
+ ## The Problem
+
+ - 🔥 **Wasted money** — Running Claude Opus ($15/$75 per 1M tokens) for every message, even "what's 2+2?"
+ - 🤷 **No control** — Can't switch models mid-conversation without editing config files and restarting
+ - 📊 **Blind routing** — Your AI shows "freerouter/auto" instead of telling you which model actually answered
+ - 🔧 **Complex setup** — Existing routers need separate servers, Docker, complex infra
+
+ ## The Solution
+
+ FreeRouter is an OpenClaw plugin that:
+ - **Classifies every request in <1ms** using a 14-dimension weighted scorer (no LLM needed for classification)
+ - **Routes to the cheapest model that can handle it** — Kimi for "hello", Opus for architecture design
+ - **Reports the real model name** — You see `anthropic/claude-opus-4-6`, not `freerouter/auto`
+ - **Lets you override anytime** — Just say "use opus" in plain English
 
  ## Install
 
  ```bash
- openclaw plugins install @openfreerouter/openclaw-freerouter
+ openclaw plugins install openclaw-freerouter
  ```
 
- Or install from local path:
+ Then run the setup wizard:
 
  ```bash
- openclaw plugins install ./openclaw-freerouter
+ openclaw freerouter setup
  ```
 
- ## Configuration
+ Or configure manually — see [Configuration](#configure) below.
+
+ ## Switch Models Anytime
+
+ The killer feature: **switch models using natural language, slash commands, or session locks.**
+
+ ### Just Say It (Natural Language)
+
+ No slash commands needed. Just talk:
+
+ | What you say | What happens |
+ |---|---|
+ | `use opus` | Switches to Claude Opus for this message |
+ | `switch to sonnet` | Switches to Claude Sonnet |
+ | `try kimi` | Switches to Kimi |
+ | `let's use opus` | Switches to Opus |
+ | `please use sonnet` | Switches to Sonnet |
+ | `can you use haiku` | Switches to Haiku |
+ | `use opus: explain quantum computing` | Uses Opus for this specific prompt |
+ | `use opus, what is 2+2?` | Same — model + prompt in one message |
+ | `go back to auto` | Return to automatic routing |
+
+ ### Lock a Model for the Whole Session
+
+ When you know the task is important and you want Opus (or any model) for everything:
+
+ | What you say | What happens |
+ |---|---|
+ | `use opus for this session` | 🔒 Locks ALL messages to Opus |
+ | `switch to sonnet from now on` | 🔒 Locks to Sonnet |
+ | `stick with opus` | 🔒 Locks to Opus |
+ | `keep using sonnet` | 🔒 Locks to Sonnet |
+ | `/lock opus` | 🔒 Same thing, slash command |
+ | `/unlock` | 🔓 Back to auto-routing |
+ | `/lock status` | Shows current lock state |
+
+ Session locks expire after 4 hours of inactivity.
+
+ ### When FreeRouter Isn't Sure
+
+ If your request is ambiguous, FreeRouter asks before switching:
+
+ > **You:** "opus please"
+ > **FreeRouter:** 🤔 Did you want to switch to **anthropic/claude-opus-4-6**? Reply **yes** or **no**.
+ > **You:** "yes"
+ > **FreeRouter:** ✅ Confirmed!
+
+ This prevents accidental switches when you're just talking *about* a model.
+
+ ### Slash Commands (Power Users)
+
+ | Command | Effect |
+ |---|---|
+ | `/opus What is 2+2?` | Per-prompt: Opus for this message only |
+ | `/sonnet Write a poem` | Per-prompt: Sonnet |
+ | `/kimi Quick answer` | Per-prompt: Kimi |
+ | `/haiku Translate this` | Per-prompt: Haiku |
+ | `/simple What is 2+2?` | Per-prompt: use SIMPLE tier model |
+ | `/max Prove this theorem` | Per-prompt: use REASONING tier model |
+ | `[opus] Deep analysis` | Bracket syntax (same as /opus) |
+
+ ### Supported Model Aliases
+
+ | Alias | Model |
+ |-------|-------|
+ | `opus`, `opus-4`, `opus-4.6` | anthropic/claude-opus-4-6 |
+ | `sonnet`, `sonnet-4`, `sonnet-4.5` | anthropic/claude-sonnet-4-5 |
+ | `sonnet-4.6` | anthropic/claude-sonnet-4-6 |
+ | `haiku`, `haiku-4`, `haiku-4.5` | anthropic/claude-haiku-4-5 |
+ | `kimi`, `kimi-k2`, `k2.5` | kimi-coding/kimi-for-coding |
+
+ ## How Routing Works
+
+ 1. You send a message → OpenClaw forwards to FreeRouter
+ 2. FreeRouter's 14-dimension classifier scores the request in **<1ms** (0.035ms average)
+ 3. Based on the score, it picks the best tier:
+
+ | Tier | Default Model | Use Case | Cost |
+ |------|--------------|----------|------|
+ | SIMPLE | Kimi K2.5 | Quick lookups, translations, "hello" | $0.50/1M |
+ | MEDIUM | Claude Sonnet 4.5 | Code, creative writing, moderate tasks | $3/$15/1M |
+ | COMPLEX | Claude Opus 4.6 | Architecture, deep analysis | $15/$75/1M |
+ | REASONING | Claude Opus 4.6 | Proofs, formal logic, step-by-step | $15/$75/1M |
+
+ 4. Forwards to the real provider API (Anthropic, Kimi, OpenAI, etc.)
+ 5. Returns the response with the **actual model name** — not "freerouter/auto"
 
- Add to your `openclaw.json`:
+ ### Scoring Dimensions
 
- ```json
+ The classifier evaluates 14 weighted dimensions without calling any LLM:
+ - Token count, code presence, reasoning markers, technical terms
+ - Creative markers, simple indicators, multi-step patterns
+ - Question complexity, imperative verbs, constraints
+ - Output format, references, negation, domain specificity
+ - Agentic task indicators (multi-tool, multi-step workflows)
+
+ Supports multilingual classification: English, Chinese, Japanese, Russian, German, Vietnamese, Arabic, Korean, and more.
+
+ ## Configure
+
+ After install, add to your `openclaw.json`:
+
+ ```json5
  {
+ // 1. Set FreeRouter as your default model
+ "agents": {
+ "defaults": {
+ "model": {
+ "primary": "freerouter/freerouter/auto",
+ "fallbacks": ["anthropic/claude-opus-4-6"]
+ }
+ }
+ },
+
+ // 2. Add FreeRouter as a provider
+ "providers": {
+ "freerouter": {
+ "baseUrl": "http://127.0.0.1:18801/v1",
+ "api": "openai-completions"
+ }
+ },
+
+ // 3. Plugin config
  "plugins": {
  "entries": {
  "freerouter": {
+ "enabled": true,
  "config": {
- "port": 18800,
+ "port": 18801,
  "host": "127.0.0.1",
- "providers": {
- "anthropic": {
- "baseUrl": "https://api.anthropic.com",
- "api": "anthropic"
- },
- "kimi-coding": {
- "baseUrl": "https://api.kimi.com/coding/v1",
- "api": "openai",
- "headers": { "User-Agent": "KimiCLI/0.77" }
- }
- },
  "tiers": {
  "SIMPLE": { "primary": "kimi-coding/kimi-for-coding", "fallback": ["anthropic/claude-haiku-4-5"] },
  "MEDIUM": { "primary": "anthropic/claude-sonnet-4-5", "fallback": ["anthropic/claude-opus-4-6"] },
- "COMPLEX": { "primary": "anthropic/claude-opus-4-6", "fallback": ["anthropic/claude-haiku-4-5"] },
- "REASONING": { "primary": "anthropic/claude-opus-4-6", "fallback": ["anthropic/claude-haiku-4-5"] }
- }
+ "COMPLEX": { "primary": "anthropic/claude-opus-4-6", "fallback": [] },
+ "REASONING": { "primary": "anthropic/claude-opus-4-6", "fallback": [] }
+ },
+ "thinking": {
+ "adaptive": ["claude-opus-4-6"],
+ "enabled": { "models": ["claude-sonnet-4-5"], "budget": 4096 }
+ },
+ "defaultTier": "MEDIUM"
  }
  }
  }
@@ -50,54 +177,77 @@ Add to your `openclaw.json`:
  }
  ```
 
- All config fields are optional — sensible defaults are built in.
+ Then restart: `openclaw gateway restart`
 
- ## How It Works
+ ## CLI Commands
 
- 1. **Classify**: Each request is scored across 14 dimensions (code presence, reasoning markers, technical terms, creativity, etc.)
- 2. **Route**: Score maps to a tier (SIMPLE → MEDIUM → COMPLEX → REASONING)
- 3. **Forward**: Request is forwarded to the tier's primary model, with automatic fallback
- 4. **Translate**: Anthropic Messages API OpenAI format translation happens transparently
+ | Command | Description |
+ |---|---|
+ | `openclaw freerouter status` | Show config, tier mapping, and live stats |
+ | `openclaw freerouter setup` | Interactive setup wizard |
+ | `openclaw freerouter test` | Quick 5-query classification smoke test |
+ | `openclaw freerouter doctor` | Diagnose issues (port conflicts, missing config, etc.) |
+ | `openclaw freerouter port <n>` | Change the proxy port |
+ | `openclaw freerouter reset` | Show default config for recovery |
 
- ## Tiers
+ ### Chat Commands
 
- | Tier | Default Model | Use Case |
- |------|--------------|----------|
- | SIMPLE | Kimi K2.5 | Greetings, facts, translations |
- | MEDIUM | Claude Sonnet 4.5 | Code, conversation, tool use |
- | COMPLEX | Claude Opus 4.6 | Architecture, debugging, analysis |
- | REASONING | Claude Opus 4.6 | Proofs, formal reasoning, deep analysis |
+ | Command | Description |
+ |---|---|
+ | `/freerouter` | Show routing stats in chat |
+ | `/freerouter-doctor` | Quick health check |
 
- ## Mode Overrides
+ ## Port Conflicts
 
- Force a specific tier in your prompt:
+ If port 18801 is in use, change it:
 
- - `/max prove that P ≠ NP` → REASONING
- - `simple mode: what's 2+2` SIMPLE
- - `[complex] review this architecture` → COMPLEX
+ ```json5
+ { "plugins": { "entries": { "freerouter": { "config": { "port": 18802 } } } } }
+ ```
 
- ## Endpoints
+ Set `"port": 0` to disable the HTTP proxy entirely.
 
- | Endpoint | Method | Description |
- |----------|--------|-------------|
- | `/v1/chat/completions` | POST | OpenAI-compatible chat |
- | `/v1/models` | GET | List available models |
- | `/health` | GET | Health check |
- | `/stats` | GET | Request statistics |
- | `/config` | GET | Show sanitized config |
- | `/reload` | POST | Reload auth keys |
- | `/reload-config` | POST | Reload config + auth |
+ ## Troubleshooting
 
- ## Use as Default Model
+ ```bash
+ # Check if everything is working
+ openclaw freerouter doctor
 
- Set your default model to `freerouter/auto` in openclaw.json:
+ # Quick classification test
+ openclaw freerouter test
 
- ```json
- {
- "model": "freerouter/auto"
- }
+ # Reset to defaults
+ openclaw freerouter reset
+ ```
+
+ ## Response Headers
+
+ Every proxied response includes metadata headers:
+ - `X-FreeRouter-Model` — Actual model used (e.g., `anthropic/claude-opus-4-6`)
+ - `X-FreeRouter-Tier` — Classification tier (SIMPLE/MEDIUM/COMPLEX/REASONING)
+ - `X-FreeRouter-Thinking` — Thinking mode (off/adaptive/enabled)
+ - `X-FreeRouter-Reasoning` — Why this model was chosen
+
+ ## API Endpoints
+
+ The HTTP proxy exposes:
+ - `POST /v1/chat/completions` — OpenAI-compatible chat endpoint
+ - `GET /v1/models` — List available models
+ - `GET /health` — Health check
+ - `GET /stats` — Routing statistics
+ - `GET /sessions/locks` — Active session locks
+ - `DELETE /sessions/locks` — Clear all session locks
+
+ ## Tests
+
+ ```bash
+ npm test # Runs all 212 tests
  ```
 
+ - `test-freerouter.mjs` — Classification, multilingual, edge cases, performance
+ - `test-resilience.mjs` — Fallback chains, bad configs, recovery, port conflicts
+ - `test-overrides.mjs` — Model overrides, session locks, natural language switching
+
  ## License
 
  MIT
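The README's weighted-scorer routing can be sketched in a few lines of TypeScript. This is a hypothetical illustration of the idea only: the dimension names, weights, and tier thresholds below are made-up stand-ins, not FreeRouter's actual 14-dimension values.

```typescript
// Sketch of tier classification via weighted dimension scoring.
// All weights/thresholds here are illustrative, NOT FreeRouter's real ones.

type Tier = "SIMPLE" | "MEDIUM" | "COMPLEX" | "REASONING";

interface Dimension {
  name: string;
  weight: number;
  signal: (text: string) => number; // 0..1 strength of this signal
}

const dimensions: Dimension[] = [
  // Longer prompts tend to need stronger models.
  { name: "tokenCount", weight: 1.0,
    signal: (t) => Math.min(t.split(/\s+/).length / 200, 1) },
  // Code fences or code-like keywords push toward coding-capable tiers.
  { name: "codePresence", weight: 2.0,
    signal: (t) => /```|\bdef\b|\bclass\b/.test(t) ? 1 : 0 },
  // Formal-reasoning markers push toward REASONING.
  { name: "reasoningMarkers", weight: 3.0,
    signal: (t) => /\b(prove|theorem|formally|step[- ]by[- ]step)\b/i.test(t) ? 1 : 0 },
  // Greetings / trivial asks pull the score down toward SIMPLE.
  { name: "simpleIndicators", weight: -2.0,
    signal: (t) => /\b(hello|hi|thanks|translate)\b/i.test(t) ? 1 : 0 },
];

function classify(text: string): Tier {
  // Weighted sum over all dimensions, then thresholded into a tier.
  const total = dimensions.reduce((sum, d) => sum + d.weight * d.signal(text), 0);
  if (total >= 2.5) return "REASONING";
  if (total >= 1.5) return "COMPLEX";
  if (total >= 0.5) return "MEDIUM";
  return "SIMPLE";
}
```

Because classification is only regex matching and arithmetic with no LLM call, it runs in well under a millisecond, which is consistent with the sub-millisecond claim in the README.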
package/openclaw.plugin.json CHANGED
@@ -1,25 +1,109 @@
  {
  "id": "freerouter",
  "name": "FreeRouter",
- "description": "Smart model router — auto-classify requests and route to the cheapest capable model using your own API keys",
- "version": "1.3.0",
+ "version": "2.0.0",
+ "description": "Smart LLM router — classify requests and route to the best model using your own API keys. 14-dimension weighted scoring, <1ms classification, configurable tiers.",
  "providers": ["freerouter"],
  "configSchema": {
  "type": "object",
  "additionalProperties": false,
  "properties": {
- "port": { "type": "number", "default": 18800 },
- "host": { "type": "string", "default": "127.0.0.1" },
- "providers": { "type": "object" },
- "tiers": { "type": "object" },
- "agenticTiers": { "type": "object" },
- "tierBoundaries": { "type": "object" },
- "thinking": { "type": "object" },
- "auth": { "type": "object" }
+ "port": {
+ "type": "number",
+ "description": "HTTP proxy port (0 = disabled, in-process only)",
+ "default": 18801
+ },
+ "host": {
+ "type": "string",
+ "description": "HTTP proxy bind address",
+ "default": "127.0.0.1"
+ },
+ "tiers": {
+ "type": "object",
+ "description": "Model assignments per classification tier",
+ "additionalProperties": false,
+ "properties": {
+ "SIMPLE": { "$ref": "#/$defs/tierConfig" },
+ "MEDIUM": { "$ref": "#/$defs/tierConfig" },
+ "COMPLEX": { "$ref": "#/$defs/tierConfig" },
+ "REASONING": { "$ref": "#/$defs/tierConfig" }
+ }
+ },
+ "thinking": {
+ "type": "object",
+ "description": "Thinking/reasoning config per tier",
+ "additionalProperties": false,
+ "properties": {
+ "adaptive": {
+ "type": "array",
+ "items": { "type": "string" },
+ "description": "Model patterns that support adaptive thinking (e.g. ['claude-opus-4-6'])"
+ },
+ "enabled": {
+ "type": "object",
+ "properties": {
+ "models": {
+ "type": "array",
+ "items": { "type": "string" },
+ "description": "Model patterns that get thinking enabled"
+ },
+ "budget": {
+ "type": "number",
+ "description": "Thinking token budget",
+ "default": 4096
+ }
+ }
+ }
+ }
+ },
+ "scoring": {
+ "type": "object",
+ "description": "Override scoring weights, boundaries, or keyword lists",
+ "additionalProperties": true
+ },
+ "defaultTier": {
+ "type": "string",
+ "enum": ["SIMPLE", "MEDIUM", "COMPLEX", "REASONING"],
+ "description": "Default tier when classification is ambiguous",
+ "default": "MEDIUM"
+ },
+ "logLevel": {
+ "type": "string",
+ "enum": ["debug", "info", "warn", "error"],
+ "default": "info"
+ }
+ },
+ "$defs": {
+ "tierConfig": {
+ "type": "object",
+ "properties": {
+ "primary": {
+ "type": "string",
+ "description": "Primary model (e.g. 'anthropic/claude-opus-4-6')"
+ },
+ "fallback": {
+ "type": "array",
+ "items": { "type": "string" },
+ "description": "Fallback models if primary fails"
+ }
+ },
+ "required": ["primary"]
+ }
  }
  },
  "uiHints": {
- "port": { "label": "Proxy Port", "placeholder": "18800" },
- "host": { "label": "Bind Address", "placeholder": "127.0.0.1" }
+ "port": {
+ "label": "Proxy Port",
+ "placeholder": "18801 (0 to disable HTTP proxy)"
+ },
+ "tiers": {
+ "label": "Tier → Model Mapping"
+ },
+ "defaultTier": {
+ "label": "Default Tier (ambiguous requests)"
+ },
+ "logLevel": {
+ "label": "Log Level"
+ }
  }
  }
package/package.json CHANGED
@@ -1,37 +1,51 @@
  {
  "name": "openclaw-freerouter",
- "version": "1.3.0",
- "description": "FreeRouter plugin for OpenClaw — smart model routing with your own API keys",
+ "version": "2.0.1",
+ "description": "Smart LLM router plugin for OpenClaw — classify requests and route to the best model using your own API keys. 14-dimension scoring, <1ms classification, per-prompt/session model switching.",
  "type": "module",
- "main": "index.ts",
- "exports": {
- ".": "./index.ts"
+ "openclaw": {
+ "extensions": [
+ "./src/index.ts"
+ ]
+ },
+ "main": "src/index.ts",
+ "scripts": {
+ "test": "npx tsx test/test-freerouter.mjs && npx tsx test/test-resilience.mjs && npx tsx test/test-overrides.mjs",
+ "test:classification": "npx tsx test/test-freerouter.mjs",
+ "test:resilience": "npx tsx test/test-resilience.mjs",
+ "test:overrides": "npx tsx test/test-overrides.mjs",
+ "typecheck": "tsc --noEmit"
  },
- "files": [
- "index.ts",
- "src/",
- "openclaw.plugin.json",
- "README.md",
- "LICENSE",
- "CHANGELOG.md"
- ],
  "keywords": [
  "openclaw",
  "openclaw-plugin",
- "ai-router",
+ "llm-router",
  "model-router",
+ "smart-routing",
  "freerouter",
- "llm",
- "proxy"
+ "anthropic",
+ "claude",
+ "kimi",
+ "openai"
  ],
- "author": "OpenFreeRouter",
- "license": "MIT",
  "repository": {
  "type": "git",
  "url": "https://github.com/openfreerouter/openclaw-freerouter"
  },
+ "homepage": "https://github.com/openfreerouter/openclaw-freerouter#readme",
+ "bugs": {
+ "url": "https://github.com/openfreerouter/openclaw-freerouter/issues"
+ },
+ "author": "dusama",
+ "license": "MIT",
+ "files": [
+ "src/",
+ "openclaw.plugin.json",
+ "README.md",
+ "LICENSE"
+ ],
  "devDependencies": {
- "@types/node": "^22.19.11",
- "typescript": "^5.9.3"
+ "@types/node": "^22.0.0",
+ "typescript": "^5.7.0"
  }
  }