openclaw-autoproxy 1.0.3 → 1.0.6

package/README.md CHANGED
@@ -1,210 +1,117 @@
- # openclaw-autoproxy (OpenClaw Auto Gateway)
+ # Documentation: [English](README.md) · [简体中文](README.zh-CN.md)
 
- Local proxy gateway that forwards OpenAI-compatible APIs and automatically switches model IDs when upstream returns retryable status codes (for example 412).
+ # Make Large Model APIs Always Available in OpenClaw
 
- ## Features
+ OpenClaw Auto Proxy Gateway — a local proxy that exposes OpenAI-compatible `/v1/*` and Anthropic-compatible `/anthropic/*` endpoints, forwarding requests to configured upstreams and supporting automatic model fallback based on `routes.yml`.
 
- - OpenAI-compatible proxy endpoint: `/v1/*`
- - Automatic model fallback on retryable statuses for `model: auto` only (default: 412, 429, 500, 502, 503, 504)
- - Model-based route selection: different models can use different upstream URLs and auth headers
- - Per-model and global fallback chains
- - TypeScript runtime powered by `tsx`
- - Node.js HTTP gateway server (openclaw-style)
- - Cross-platform startup on macOS and Windows (Node.js 18+)
- - Health endpoint: `/health`
+ ## Quick start
 
- ## Quick Start
-
- 1. Install Node.js 18 or newer.
- 2. Install dependencies:
+ 1. Install globally (recommended):
 
 ```bash
- npm install
+ npm i -g openclaw-autoproxy@latest
 ```
 
- 3. Create local env file:
-
- macOS/Linux:
+ 2. Edit the route configuration in the project root:
 
 ```bash
- cp .env.example .env
+ vim routes.yml
 ```
 
- Windows PowerShell:
-
- ```powershell
- Copy-Item .env.example .env
- ```
-
- 4. Edit `.env` (runtime options) and `routes.yml` (all upstream route mappings and auth).
- 5. Start the gateway:
+ 3. Start the gateway (installed mode):
 
 ```bash
- npm run dev
+ openclaw-autoproxy start
 ```
 
- Production mode:
+ Or run without installing (via `npx`):
 
 ```bash
- npm start
+ npx openclaw-autoproxy@latest start
 ```
 
- ## Global CLI Usage
+ After starting, the local OpenAI-compatible endpoint is available at `http://127.0.0.1:8787/v1/*` by default, and the Anthropic-compatible endpoint at `http://127.0.0.1:8787/anthropic/*` (the port is configurable).
 
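+ A quick way to confirm the gateway is up, assuming the `/health` endpoint documented in earlier versions of this README is still exposed:
+
+ ```bash
+ # Expect HTTP 200 with the gateway's health payload once it is listening
+ curl -i http://127.0.0.1:8787/health
+ ```
+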
- You can install this project globally and run it via `openclaw-autoproxy`:
+ ## Example `routes.yml`
 
- ```bash
- npm i -g .
- openclaw-autoproxy gateway start
- ```
-
- Watch mode:
+ ```yaml
+ # Optional global defaults
+ defaults:
+   authHeader: cf-aig-authorization
+   authPrefix: "Bearer "
+   apiKey: xxxxxxxxxxxxxxxxxx
 
- ```bash
- openclaw-autoproxy gateway dev
- ```
+ retryStatusCodes: [412, 429, 500, 502, 503, 504]
 
- Show CLI help:
+ routes:
+   - name: openai
+     url: https://api.openai.com
+     model: gpt-4.1
+     # Route-level token (overrides defaults)
+     apiKeyEnv: UPSTREAM_API_KEY
 
- ```bash
- openclaw-autoproxy gateway help
+   - name: azure
+     url: https://azure-openai-endpoint
+     model: gpt-3.5-turbo
+     apiKeyEnv: UPSTREAM_API_KEY
 ```
 
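+ With `apiKeyEnv` set as above, export the key before starting; a minimal sketch (the variable name `UPSTREAM_API_KEY` matches the example config, and the token value is a placeholder):
+
+ ```bash
+ # The gateway resolves apiKeyEnv against the process environment
+ export UPSTREAM_API_KEY=sk-your-upstream-token
+ openclaw-autoproxy start
+ ```
+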
- Backward-compatible aliases are still supported:
+ ## Common commands
 
- ```bash
- openclaw-autoproxy start
- openclaw-autoproxy dev
- openclaw-autoproxy help
- ```
-
- ## OpenAI-Compatible Calls For 3 Models
+ - Start: `openclaw-autoproxy start`
+ - Dev (watch): `openclaw-autoproxy dev`
+ - Help: `openclaw-autoproxy help`
 
- After starting gateway locally, always call the local OpenAI-style endpoint:
+ Quick run (installed):
 
 ```bash
- curl -X POST http://127.0.0.1:8787/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "GLM-4.7-Flash",
-     "messages": [{"role":"user","content":"hello"}]
-   }'
+ npm i -g openclaw-autoproxy@latest
+ vim routes.yml
+ openclaw-autoproxy start
 ```
 
- ```bash
- curl -X POST http://127.0.0.1:8787/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "doubao-seed-2-0-pro-260215",
-     "messages": [{"role":"user","content":"hello"}]
-   }'
- ```
+ Quick run (npx):
 
 ```bash
- curl -X POST http://127.0.0.1:8787/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "ernie-4.5-turbo-128k",
-     "messages": [{"role":"user","content":"hello"}]
-   }'
+ npx openclaw-autoproxy@latest start
 ```
 
- ## API
+ ## Usage example
 
- - `ALL /v1/*`: Forward to upstream; automatic model fallback is used only when request model is `auto`.
- - `GET /health`: Health check and active retry status list.
-
- ## Project Structure
-
- ```text
- src/
-   gateway/
-     config.ts
-     proxy.ts
-     server-http.ts
-     server.impl.ts
-     server.ts
- ```
-
- ### Example Chat Request
-
- ```bash
- curl -X POST http://127.0.0.1:8787/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -H "Authorization: Bearer <your-upstream-token>" \
-   -d '{
-     "model": "gpt-4.1",
-     "messages": [{"role": "user", "content": "hello"}],
-     "temperature": 0.2
-   }'
- ```
-
- Then call local gateway:
+ Call the gateway locally:
 
 ```bash
 curl -X POST http://127.0.0.1:8787/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "GLM-4.7-Flash",
-     "messages": [{"role":"user","content":"hello"}]
+   --header 'Content-Type: application/json' \
+   --data '{
+     "model": "auto",
+     "messages": [
+       {
+         "role": "user",
+         "content": "what model are you"
+       }
+     ]
   }'
 ```
 
- ### Helpful Response Headers
-
- - `x-gateway-model-used`: The actual model used by this attempt.
- - `x-gateway-attempt-count`: Number of attempts before returning response.
- - `x-gateway-switched`: `1` when model fallback happened in this response.
-
- ### Switch Notice In Response Data
-
- - JSON response: when fallback happened, gateway appends `gateway_notice` at top-level JSON.
- - SSE response: when fallback happened, gateway prepends one event:
-
- ```text
- event: gateway_notice
- data: {"fromModel":"...","toModel":"...","triggerStatus":412,...}
- ```
-
- ## Fallback Strategy
-
- The gateway behavior is split by request model:
-
- 1. `model != auto`: pinned mode, only the requested model is used (no automatic switch).
- 2. `model == auto`: automatic mode, candidates are all enabled route models from `routes.yml`, and each request uses a round-robin start model.
+ Notes:
+ - Using `"model": "auto"` causes the gateway to automatically rotate and fall back between the candidate models configured in `routes.yml` when an upstream returns retryable errors.
+ - To pin a specific model, replace `"auto"` with the desired model name (for example, `"gpt-4.1"`); see the sketch after this list.
 
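+ A pinned-model request, adapted from the `auto` example above (assuming `gpt-4.1` is one of the models configured in `routes.yml`):
+
+ ```bash
+ # Pinned mode: only the requested model is used, with no automatic switch
+ curl -X POST http://127.0.0.1:8787/v1/chat/completions \
+   --header 'Content-Type: application/json' \
+   --data '{
+     "model": "gpt-4.1",
+     "messages": [{"role": "user", "content": "hello"}]
+   }'
+ ```
+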
- When upstream returns a status in `retryStatusCodes` (from `routes.yml`), automatic mode retries using the next candidate model in the same rotated list. If this key is absent, it falls back to `RETRY_STATUS_CODES` env.
+ ## Anthropic Compatibility
 
- ## Model Route Configuration
+ - The local `/anthropic/v1/messages` endpoint translates Anthropic Messages API requests into OpenAI-compatible `chat/completions` requests when the selected upstream route is OpenAI-style rather than native Anthropic (see the sketch below).
+ - This translation covers both non-streaming and streaming text and tool-call responses for OpenAI-style upstream routes.
+ - When an upstream returns `4xx` or `5xx`, the gateway logs a compact `[gateway] upstream_error ...` line with the selected route, model, upstream URL, and a response body snippet.
 
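+ A minimal sketch of an Anthropic-style call through the gateway; the request body follows the public Anthropic Messages API shape, and upstream auth is assumed to come from the route credentials in `routes.yml`:
+
+ ```bash
+ # Translated to an OpenAI-style chat/completions request
+ # when the selected route is an OpenAI-style upstream.
+ curl -X POST http://127.0.0.1:8787/anthropic/v1/messages \
+   --header 'Content-Type: application/json' \
+   --data '{
+     "model": "gpt-4.1",
+     "max_tokens": 256,
+     "messages": [{"role": "user", "content": "hello"}]
+   }'
+ ```
+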
- `routes.yml` is loaded automatically from the project root.
 
- Recommended YAML shape:
-
- - `defaults`: optional global auth defaults used by all routes
- - `retryStatusCodes`: optional array of retryable HTTP status codes (for example `[412, 429, 500, 502, 503, 504]`)
- - `routes`: required array of route objects
-
- Top-level array is also supported when you do not need global defaults.
-
- Each route object supports:
-
- - `name`: optional logical route name
- - `url`: upstream URL
- - `model`: model list (or a single string)
- - `authHeader`: optional auth header name
- - `authPrefix`: optional auth value prefix (default `Bearer `)
- - `apiKey`: inline token value (preferred in this setup)
- - `apiKeyEnv`: optional env-based token fallback
- - `headers`: optional fixed headers map
- - `isBaseUrl`: optional boolean to force base URL behavior
- - `enabled`: optional boolean (default `true`), set `false` to disable the route without deleting it
+ ## Notes
 
- `routes.yml` is required and loaded from the project root.
+ - `routes.yml` is loaded from the project root.
+ - Prefer `UPSTREAM_API_KEY` as an environment variable for upstream authentication. Route-level `apiKey` is supported but not recommended for production.
+ - If a route authenticates with the standard `Authorization` header, the client `Authorization` header is forwarded unless route credentials override it. If a route authenticates with a different header such as `cf-aig-authorization`, the gateway strips conflicting client auth headers such as `Authorization` and `x-api-key` to avoid leaking dummy or incompatible provider tokens upstream.
+ - Streaming responses are forwarded as streams when an attempt succeeds.
+ - When automatic model fallback occurs, the gateway may append a `gateway_notice` field to JSON responses or emit a `gateway_notice` SSE event, as sketched below.
 
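+ The SSE form of the notice, as documented in earlier versions of this README:
+
+ ```text
+ event: gateway_notice
+ data: {"fromModel":"...","toModel":"...","triggerStatus":412,...}
+ ```
+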
- ## Notes
+ See the implementation and more configuration options under `src/gateway`.
 
- - If client request already includes `Authorization`, gateway forwards it.
- - If client request does not include `Authorization`, gateway uses `UPSTREAM_API_KEY`.
- - Streaming responses are forwarded as stream when an attempt succeeds.
- - Requests with invalid JSON body return `400`.
package/README.zh-CN.md ADDED
@@ -0,0 +1,127 @@
+ # Make Large Model APIs Always Available in OpenClaw
+
+ OpenClaw Auto Proxy Gateway: a local proxy that exposes both OpenAI-compatible `/v1/*` and Anthropic-compatible `/anthropic/*` endpoints, forwards requests to the configured upstreams, and supports automatic model fallback and route selection based on `routes.yml`.
+
+ ## Quick start
+
+ 1. Install globally (recommended):
+
+ ```bash
+ npm i -g openclaw-autoproxy@latest
+ ```
+
+ 2. Edit the route configuration (in the project root):
+
+ ```bash
+ vim routes.yml
+ ```
+
+ 3. Start the gateway (installed mode):
+
+ ```bash
+ openclaw-autoproxy start
+ ```
+
+ Or use `npx` (no install required):
+
+ ```bash
+ npx openclaw-autoproxy@latest start
+ ```
+
+ After starting, the local OpenAI-compatible endpoint is available at `http://127.0.0.1:8787/v1/*` by default, and the Anthropic-compatible endpoint at `http://127.0.0.1:8787/anthropic/*` (the port is configurable).
+
+ ## Example `routes.yml`
+
+ ```yaml
+ # Optional global defaults
+ defaults:
+   authHeader: cf-aig-authorization
+   authPrefix: "Bearer "
+   apiKey: xxxxxxxxxxxxxxxxxx
+
+ retryStatusCodes: [412, 429, 500, 502, 503, 504]
+
+ routes:
+   - name: openai
+     url: https://api.openai.com
+     model: gpt-4.1
+     # Route-level token (takes precedence over defaults)
+     apiKeyEnv: UPSTREAM_API_KEY
+
+   - name: azure
+     url: https://your-azure-endpoint
+     model: gpt-3.5-turbo
+     apiKeyEnv: UPSTREAM_API_KEY
+ ```
+
+ ## Common commands
+
+ - Start: `openclaw-autoproxy start`
+ - Dev (hot reload): `openclaw-autoproxy dev`
+ - Help: `openclaw-autoproxy help`
+
+ Quick example (install and start right away):
+
+ ```bash
+ npm i -g openclaw-autoproxy@latest
+ vim routes.yml
+ openclaw-autoproxy start
+ ```
+
+ Run directly with npx (no install required):
+
+ ```bash
+ npx openclaw-autoproxy@latest start
+ ```
+
+ ## Usage example
+
+ Call a model through the local gateway (example):
+
+ ```bash
+ curl -X POST http://127.0.0.1:8787/v1/chat/completions \
+   --header 'Content-Type: application/json' \
+   --data '{
+     "model": "auto",
+     "messages": [
+       {
+         "role": "user",
+         "content": "what model are you"
+       }
+     ]
+   }'
+ ```
+
+ Notes:
+ - With `"model": "auto"`, the gateway automatically switches between the enabled candidate models in `routes.yml` and falls back on retryable upstream errors.
+ - To pin a specific model, replace `"model": "auto"` with the target model name (for example, `"gpt-4.1"`).
+
+ ## Anthropic Compatibility
+
+ - When an OpenAI-style upstream is selected, the local `/anthropic/v1/messages` endpoint converts Anthropic Messages API requests into OpenAI `chat/completions` requests.
+ - The conversion supports both non-streaming and streaming text and tool-call responses, so the Anthropic Messages streaming interface works even when the selected upstream is an OpenAI-style route.
+ - When an upstream returns `4xx` or `5xx`, the gateway logs a compact `[gateway] upstream_error ...` line with the route, model, upstream URL, and a response body snippet.
+
+ ## Integrating with Claude Code
+
+ Claude Code uses Anthropic-style APIs. This gateway exposes `/anthropic/*` locally and automatically maps requests to `/v1/*` when forwarding to the upstream.
+
+ Point Claude Code at the local gateway:
+
+ ```bash
+ export ANTHROPIC_BASE_URL=http://127.0.0.1:8787/anthropic
+ export ANTHROPIC_API_KEY=dummy-key
+ ```
+
+ Notes:
+ - If upstream authentication is handled by the gateway's route credentials, `ANTHROPIC_API_KEY` can be a placeholder value (see the sketch below).
+ - For compatibility with legacy configurations, when a route URL is pinned to `/v1/chat/completions`, the gateway also rewrites Claude-related paths (`/v1/messages*`, `/v1/models`, `/v1/complete`) to the corresponding upstream paths.
+
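+ With those variables exported, launching Claude Code in the same shell routes its Anthropic traffic through the local gateway; a minimal sketch, assuming the `claude` CLI is installed:
+
+ ```bash
+ export ANTHROPIC_BASE_URL=http://127.0.0.1:8787/anthropic
+ export ANTHROPIC_API_KEY=dummy-key
+ # All Anthropic API calls now go through the gateway's /anthropic/* endpoints
+ claude
+ ```
+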
+ ## Notes
+
+ - `routes.yml`: upstream routing and auth configuration in the project root.
+ - `UPSTREAM_API_KEY`: prefer supplying the upstream API key via this environment variable; route-level `apiKey` works for temporary or test setups, but plaintext keys are not recommended in production.
+ - If a route authenticates with the standard `Authorization` header, the client's `Authorization` header is forwarded unless route credentials override it. If a route uses a non-standard auth header such as `cf-aig-authorization`, the gateway removes conflicting client auth headers such as `Authorization` and `x-api-key`, so that a local dummy key or an incompatible provider token is not passed through to the upstream.
+ - When automatic fallback occurs, the gateway may append a `gateway_notice` field to JSON responses or emit a `gateway_notice` SSE event.
+
+ For more advanced configuration and implementation details, see the `src/gateway` directory.