@classicicn/codex-transfer 0.2.0 → 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.en.md ADDED
@@ -0,0 +1,376 @@
+ # codex-transfer
+
+ > Responses API ↔ Chat Completions translation bridge — use DeepSeek, Kimi, Qwen, and other OpenAI-compatible providers with Codex CLI.
+
+ ## Overview
+
+ Codex CLI communicates using OpenAI's **Responses API**, while most third-party LLM providers (DeepSeek, Moonshot, Qwen, etc.) only implement the earlier **Chat Completions API**. These two APIs differ significantly in request format, response structure, streaming event sequences, and tool call representation.
+
+ `codex-transfer` runs a local HTTP proxy that transparently translates Responses API requests from Codex CLI into Chat Completions API requests, and reverse-translates upstream responses back into Responses API format — so Codex CLI never notices the difference.
+
+ ```
+ Codex CLI (Responses API) → codex-transfer (:4444) → Third-party Provider (Chat Completions API)
+ ```
+
+ - **Zero runtime dependencies**: esbuild bundles everything into a single `dist/codex-transfer.mjs` file — run via `npx` instantly
+ - **No external state**: sessions are held in process memory, no database required
+ - **~1500 lines of TypeScript**: lightweight and auditable
+
+ ---
+
+ ## Quick Start
+
+ ```bash
+ # One-shot run (no install needed)
+ npx @classicicn/codex-transfer -k
+
+ # Specify upstream provider
+ npx @classicicn/codex-transfer -k -u https://api.deepseek.com/v1 --api-key sk-xxx
+
+ # Global install
+ npm install -g @classicicn/codex-transfer
+ codex-transfer -k
+ ```
+
+ ---
+
+ ## CLI Options
+
+ ```
+ codex-transfer [options]
+
+ Options:
+   -p, --port PORT           Listen port (default: 4444)
+   -u, --upstream URL        Upstream Chat Completions base URL
+   --api-key KEY             API key for upstream
+   -m, --model MODEL         Force override model name (highest priority)
+   -c, --config PATH         Path to config file (JSON)
+   -k, --insecure            Skip TLS certificate verification
+   --no-reasoning-effort     Don't send reasoning_effort to upstream
+   -d, --daemon              Run in background, logs to logs/ directory
+   -h, --help                Show this help
+ ```
+
+ ### Daemon Mode
+
+ ```bash
+ codex-transfer -d -k -u https://api.deepseek.com/v1 --api-key sk-xxx
+
+ # Output:
+ # codex-transfer started in background (PID: 12345)
+ # Log file: ~/.codex-transfer/logs/codex-transfer-20260507-143022.log
+ # PID file: ~/.codex-transfer/logs/codex-transfer.pid
+ # Stop: kill $(cat ~/.codex-transfer/logs/codex-transfer.pid)
+ ```
+
+ In daemon mode, all `console` output is redirected to timestamped log files. Logs auto-rotate when a single file exceeds **10MB**, keeping up to **5 historical files**.
+
+ ---
+
+ ## Configuration
+
+ ### Priority
+
+ ```
+ CLI args > environment variables > config file > defaults
+ ```
+
+ ### Config File
+
+ Create a JSON config file at one of these locations (searched in order):
+
+ 1. Explicit `--config` path or `CODEX_TRANSFER_CONFIG` env var
+ 2. `./codex-transfer.json` (current directory)
+ 3. `~/.codex-transfer/config.json` (user home)
+
+ ```json
+ {
+   "port": 4446,
+   "upstream": "https://api.deepseek.com/v1",
+   "apiKey": "sk-your-key-here",
+   "insecure": false,
+   "reasoningEffort": true,
+   "modelMap": {
+     "*": "deepseek-v4-pro",
+     "codex-auto-review": "deepseek-v4-pro"
+   }
+ }
+ ```
+
+ ### Environment Variables
+
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `CODEX_TRANSFER_PORT` | `4444` | Listen port |
+ | `CODEX_TRANSFER_UPSTREAM` | `https://openrouter.ai/api/v1` | Upstream Chat Completions base URL |
+ | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
+ | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
+ | `CODEX_TRANSFER_INSECURE` | `false` | Set to `"1"` or `"true"` to skip TLS verification |
+ | `CODEX_TRANSFER_REASONING_EFFORT` | `true` | Set to `"0"` or `"false"` to disable sending reasoning_effort |
+
+ ### Model Name Mapping
+
+ Codex CLI may send non-standard model names (e.g. `codex-auto-review`) that upstream providers don't recognize. Use `modelMap` to translate them:
+
+ ```json
+ {
+   "modelMap": {
+     "*": "deepseek-v4-pro",
+     "codex-auto-review": "deepseek-v4-pro"
+   }
+ }
+ ```
+
+ **Lookup order**: exact key match → wildcard `"*"` → original name passthrough.
+
+ The `--model` / `-m` flag takes precedence over `modelMap`, overriding all model names.
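The lookup order can be sketched as a small helper. `resolveModel()` is the name shown in the request-flow diagram in this README, but the body below is an illustrative reconstruction, not the package's actual implementation:

```typescript
// Illustrative sketch of the lookup order: exact key → "*" wildcard → passthrough,
// with the --model / -m override taking highest priority.
function resolveModel(
  requested: string,
  modelMap: Record<string, string>,
  forced?: string, // value of --model / -m, if given
): string {
  if (forced) return forced;                             // highest priority
  if (requested in modelMap) return modelMap[requested]; // exact key match
  if ("*" in modelMap) return modelMap["*"];             // wildcard fallback
  return requested;                                      // passthrough
}
```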
+
+ ---
+
+ ## API Endpoints
+
+ | Method | Path | Purpose |
+ |--------|------|---------|
+ | `GET` | `/health` | Health check — tests upstream `/models` connectivity, returns diagnostics |
+ | `GET` | `/v1/models` | Model catalog proxy — transparently forwards upstream model list |
+ | `POST` | `/v1/responses` | **Core endpoint** — receives Responses API requests, translates and forwards upstream |
+
+ ### `/v1/responses` Request Flow
+
+ ```
+ Codex request arrives
+ → JSON parse & validate
+ → resolveModel() model name mapping
+ → Load message history (via previous_response_id)
+ → toChatRequest() protocol translation
+ → Branch:
+    ├─ stream=true  → translateStream() SSE generator → text/event-stream
+    └─ stream=false → fetch upstream → fromChatResponse() → JSON
+ ```
+
+ ---
+
+ ## Features in Detail
+
+ ### Protocol Translation
+
+ Full bidirectional translation between Responses API and Chat Completions API:
+
+ - **Request translation**: `input` array (`function_call` / `function_call_output` / regular messages) → Chat Completions `messages[]` array
+ - **Response translation**: Chat Completions `choices[0].message` → Responses API `output[]` structure
+ - **System prompt**: `instructions` (Codex CLI field) → Chat Completions `system` role
+ - **Role mapping**: `developer` → `system`
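To make the bullet mappings concrete, here is a hypothetical request pair: a Responses API payload and the Chat Completions request it would translate into. Shapes are simplified and the payload values are invented for illustration:

```typescript
// Hypothetical Responses API request as Codex CLI might send it.
const responsesRequest = {
  model: "deepseek-v4-pro",
  instructions: "You are a coding assistant.", // Codex CLI system-prompt field
  input: [
    { role: "developer", content: "Prefer small diffs." },
    { role: "user", content: "Fix the failing test." },
  ],
};

// The Chat Completions request it would become after translation.
const chatRequest = {
  model: "deepseek-v4-pro",
  messages: [
    { role: "system", content: "You are a coding assistant." }, // instructions → system
    { role: "system", content: "Prefer small diffs." },         // developer → system
    { role: "user", content: "Fix the failing test." },
  ],
};
```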
+
+ ### Streaming Translation (SSE)
+
+ The upstream Chat Completions SSE delta stream is translated chunk-by-chunk into the standard Responses API event sequence:
+
+ ```
+ response.created
+ → response.output_item.added (message)
+ → response.output_text.delta × N
+ → response.output_item.done
+ → [if tool calls present]
+     response.output_item.added (function_call)
+     → response.function_call_arguments.delta
+     → response.output_item.done
+ → response.completed
+ ```
+
+ **Design notes**:
+ - Text deltas are forwarded in real time; tool call deltas are batched after stream completion (Chat Completions scatters tool calls across multiple chunks by index)
+ - Top-level error fallback: even if the upstream disconnects unexpectedly, a `response.failed` event is emitted, preventing Codex CLI from hanging
+
+ ### Token Usage Details
+
+ Codex CLI relies on the usage fields in Responses API responses to calculate context window utilization. `codex-transfer` automatically extracts token usage from upstream responses and maps it to the Responses API format, handling the differences between the OpenAI and DeepSeek upstream formats:
+
+ | Responses API Output | OpenAI Upstream Field | DeepSeek Upstream Field |
+ |---|---|---|
+ | `input_tokens` | `prompt_tokens` | `prompt_tokens` |
+ | `output_tokens` | `completion_tokens` | `completion_tokens` |
+ | `total_tokens` | `total_tokens` | `total_tokens` |
+ | `input_tokens_details.cached_tokens` | `prompt_tokens_details.cached_tokens` | `prompt_cache_hit_tokens` |
+ | `output_tokens_details.reasoning_tokens` | `completion_tokens_details.reasoning_tokens` | `completion_tokens_details.reasoning_tokens` |
+
+ **Auto-detection**: The upstream format is detected from which fields are present in the response — no configuration needed. `cached_tokens` prefers the OpenAI nested field, falling back to the DeepSeek top-level field; `reasoning_tokens` uses the same path in both formats. Detail objects are omitted when the corresponding fields are absent.
+
+ Both the non-streaming and streaming paths share the same mapping logic.
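Read as code, the table amounts to a small mapping function. The sketch below is illustrative: the helper name and exact shapes are assumptions, and only the field names come from the table above:

```typescript
// Illustrative usage mapper following the table above (names are assumptions).
interface ChatUsage {
  prompt_tokens?: number;
  completion_tokens?: number;
  total_tokens?: number;
  prompt_tokens_details?: { cached_tokens?: number }; // OpenAI nested field
  prompt_cache_hit_tokens?: number;                   // DeepSeek top-level field
  completion_tokens_details?: { reasoning_tokens?: number };
}

function mapUsage(u: ChatUsage) {
  // Prefer the OpenAI nested field, fall back to the DeepSeek top-level one.
  const cached = u.prompt_tokens_details?.cached_tokens ?? u.prompt_cache_hit_tokens;
  const reasoning = u.completion_tokens_details?.reasoning_tokens;
  return {
    input_tokens: u.prompt_tokens ?? 0,
    output_tokens: u.completion_tokens ?? 0,
    total_tokens: u.total_tokens ?? 0,
    // Detail objects are omitted when the source fields are absent.
    ...(cached !== undefined ? { input_tokens_details: { cached_tokens: cached } } : {}),
    ...(reasoning !== undefined ? { output_tokens_details: { reasoning_tokens: reasoning } } : {}),
  };
}
```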
+
+ ### Reasoning Effort Mapping
+
+ Codex CLI controls model reasoning intensity via `reasoning.effort` (none/low/medium/high/xhigh), but providers implement this differently. `codex-transfer` automatically maps the Responses API reasoning effort to provider-specific parameters:
+
+ | Codex Level | Responses API Value | DeepSeek | MiMo / Kimi / GLM |
+ |---|---|---|---|
+ | Minimal | `none` | `thinking: {type: "disabled"}` | `thinking: {type: "disabled"}` |
+ | Low | `low` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
+ | Medium | `medium` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
+ | High | `high` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
+ | Ultra | `xhigh` | `thinking: {type: "enabled"}, reasoning_effort: "max"` | `thinking: {type: "enabled"}` |
+
+ **Compatibility strategy**: The `thinking` toggle is always sent (supported by all providers). `reasoning_effort` is sent by default (natively supported by DeepSeek); if the upstream doesn't support this field, it can be disabled via the `--no-reasoning-effort` CLI flag or `"reasoningEffort": false` in the config file.
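`mapReasoningEffort` is visible by name in the bundled source at the end of this diff; the body below is a reconstruction of the DeepSeek column of the table, offered as a sketch rather than the actual code:

```typescript
// Sketch of the table's DeepSeek column. The function name appears in the
// bundled source, but this body is an assumption about its shape.
type Effort = "none" | "low" | "medium" | "high" | "xhigh";

function mapReasoningEffort(effort?: Effort, sendReasoningEffort = true): Record<string, unknown> {
  if (!effort) return {};
  if (effort === "none") return { thinking: { type: "disabled" } };
  const fields: Record<string, unknown> = { thinking: { type: "enabled" } };
  if (sendReasoningEffort) {
    // Per the table: low/medium/high all map to "high"; xhigh maps to "max".
    fields.reasoning_effort = effort === "xhigh" ? "max" : "high";
  }
  return fields;
}
```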
+
+ ### Session Management
+
+ Codex CLI uses `previous_response_id` for multi-turn conversations. `SessionStore` maintains the full message history for each session in memory, making every Chat Completions call **self-contained** (no dependency on upstream context caching).
+
+ ```
+ ┌─────────────────────────────────┐
+ │ SessionStore (in-memory)        │
+ │                                 │
+ │  history: Map<response_id,      │
+ │               ChatMessage[]>    │
+ │                                 │
+ │  reasoning: Map<call_id,        │
+ │               reasoning_text>   │
+ │                                 │
+ │  turnReasoning: Map<            │
+ │      SHA256(content),           │
+ │      reasoning_text>            │
+ └─────────────────────────────────┘
+ ```
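A minimal sketch of the three stores in the diagram (illustrative only; the real `SessionStore` lives in `src/session.ts` and will differ in detail):

```typescript
// Minimal sketch of the stores shown in the diagram above (illustrative).
import { createHash } from "node:crypto";

class SessionStore {
  history = new Map<string, object[]>();     // response_id → ChatMessage[]
  reasoning = new Map<string, string>();     // call_id → reasoning_text
  turnReasoning = new Map<string, string>(); // SHA256(content) → reasoning_text

  // Content fingerprint used as the turnReasoning key.
  fingerprint(content: string): string {
    return createHash("sha256").update(content).digest("hex");
  }
}
```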
+
+ ### Reasoning Model Support (DeepSeek-R1 / Kimi-K2.6)
+
+ Reasoning models produce `reasoning_content` (chain of thought) that must be **round-tripped verbatim** across turns — otherwise the model may reject the request or behave incorrectly.
+
+ `codex-transfer` uses a **dual-index cache** to recover reasoning content:
+
+ | Index method | Use case | Implementation |
+ |-------------|----------|----------------|
+ | **call_id exact match** | Codex uses `previous_response_id` + tool call replay | `Map<call_id, reasoning>` |
+ | **Content SHA256 fingerprint** | Codex replays the full `input[]` without `previous_response_id` | `Map<SHA256(content), reasoning>` |
+
+ The two mechanisms complement each other, covering both conversation replay modes of Codex CLI.
+
+ ### Tool Call Handling
+
+ - **Tool filtering**: Automatically filters out OpenAI-proprietary built-in tools (`web_search`, `file_search`, `computer`, etc.), keeping only `type: "function"` custom tools to prevent upstream rejection
+ - **Format conversion**: Responses API flat format `{type, name, description, parameters}` ↔ Chat Completions nested format `{type, function: {name, description, parameters}}`
+ - **Parallel tool calls**: Consecutive `function_call` input items are merged into a single assistant message with multiple `tool_calls` entries
+ - **Message reordering**: Codex may interleave other messages between `function_call` and `function_call_output` items, but providers like DeepSeek strictly require `assistant(tool_calls)` to be immediately followed by its matching `tool` messages. `reorderForToolCalls()` handles this automatically, synthesizing empty output for orphaned tool calls
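The format conversion in the second bullet can be sketched as follows. `convertTool` is named in the bundled source shown at the end of this diff, but this body is a reconstruction:

```typescript
// Flat Responses API tool shape → nested Chat Completions tool shape.
// convertTool appears by name in the bundled source; this body is a sketch.
interface FlatTool {
  type: "function";
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}

function convertTool(t: FlatTool) {
  return {
    type: "function" as const,
    function: { name: t.name, description: t.description, parameters: t.parameters },
  };
}
```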
+
+ ### Health Check
+
+ ```
+ GET /health → 200 OK
+ {
+   "upstream": "https://api.deepseek.com/v1",
+   "apiKeySet": true,
+   "apiKeyPrefix": "sk-abc…",
+   "upstreamStatus": 200,
+   "upstreamOk": true
+ }
+ ```
+
+ ---
+
+ ## Supported Providers
+
+ Any provider implementing the OpenAI Chat Completions API format is supported.
+
+ | Provider | Base URL |
+ |----------|----------|
+ | DeepSeek | `https://api.deepseek.com/v1` |
+ | Xiaomi MiMo | `https://api.xiaomimimo.com/v1` |
+ | Kimi (Moonshot) | `https://api.moonshot.cn/v1` |
+ | Qwen | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
+ | OpenRouter | `https://openrouter.ai/api/v1` |
+
+ > Any OpenAI API-compatible provider should work in principle. If you find a working provider not listed here, PRs are welcome.
+
+ ---
+
+ ## Codex CLI Configuration
+
+ Add to `~/.codex/config.toml`:
+
+ ```toml
+ model = "deepseek-v4-pro"
+ model_provider = "deepseek-transfer"
+
+ [model_providers.deepseek-transfer]
+ name = "DeepSeek"
+ base_url = "http://127.0.0.1:4446/v1"
+ wire_api = "responses"
+ ```
+
+ > **Note**: The `base_url` port must match the `codex-transfer` listen port, and `wire_api` must be `"responses"`.
+
+ ---
+
+ ## Project Structure
+
+ ```
+ src/
+ ├── cli.ts        CLI entry — argument parsing, daemon process management, log rotation
+ ├── server.ts     HTTP server — Hono route registration, request dispatch, proxy instance creation
+ ├── config.ts     Configuration — multi-source merging, priority control, config file discovery
+ ├── session.ts    Session state — message history storage, dual-index reasoning cache
+ ├── translate.ts  Protocol translation — Responses ↔ Chat Completions bidirectional conversion
+ ├── stream.ts     SSE translation — streaming chunk parsing, event sequence generation, error fallback
+ └── types.ts      Type definitions — complete TypeScript types for both APIs
+ build.mjs         Build script — esbuild single-file bundling
+ ```
+
+ ### Dependency Graph
+
+ ```
+ cli.ts → server.ts → translate.ts + stream.ts → session.ts + types.ts
+                    → config.ts
+ ```
+
+ ### Data Flow
+
+ ```
+                 ┌─────────────┐
+                 │   Config    │ ◄── CLI / ENV / File
+                 └──────┬──────┘
+                        │
+ Codex ──POST──► Server ──┼──► toChatRequest() ──► fetch ──► Upstream
+   ▲              │              SessionStore                  │
+   │              │              (history +                    │
+   └──SSE/JSON────┘               reasoning)                   │
+                  ◄── translateStream() ───────────────────────┘
+                  ◄── fromChatResponse()
+ ```
+
+ ---
+
+ ## Build
+
+ ```bash
+ git clone https://github.com/Icicno/codex-transfer.git
+ cd codex-transfer
+ npm install
+ npm run build            # esbuild bundle + tsc type check
+ node dist/codex-transfer.mjs -k
+
+ # Or link as a global command
+ npm link
+ codex-transfer -k
+ ```
+
+ ---
+
+ ## Programmatic Usage
+
+ ```typescript
+ import { createTransfer } from "./src/server.js";
+
+ const { app, port } = createTransfer({
+   configPath: "./codex-transfer.json",
+   port: 4446,
+   upstream: "https://api.deepseek.com/v1",
+   apiKey: "sk-...",
+   disableTlsVerify: true,
+ });
+ ```
+
+ ---
+
+ ## License
+
+ MIT
package/README.md CHANGED
@@ -1,51 +1,87 @@
  # codex-transfer
 
- Responses API ↔ Chat Completions translation bridge for Codex CLI (TypeScript implementation)
+ > Responses API ↔ Chat Completions translation bridge — use DeepSeek, Kimi, Qwen, and any other OpenAI-compatible provider with Codex CLI.
 
- ## Overview
+ ## Overview
 
- A lightweight proxy that translates the OpenAI **Responses API** (used by Codex CLI) into the **Chat Completions API**, letting Codex work with any OpenAI-compatible provider — DeepSeek, Kimi, Qwen, Mistral, Groq, xAI, OpenRouter, and more.
+ Codex CLI communicates using OpenAI's **Responses API**, while most third-party LLM providers (DeepSeek, Moonshot, Qwen, etc.) only implement the earlier **Chat Completions API**. These two APIs differ significantly in request format, response structure, streaming event sequences, and tool call representation.
+
+ `codex-transfer` runs a local HTTP proxy that transparently translates Responses API requests from Codex CLI into Chat Completions API requests, and reverse-translates upstream responses back into Responses API format — so Codex CLI never notices the protocol difference.
 
  ```
- Codex CLI (Responses API) → codex-transfer → DeepSeek (Chat Completions API)
+ Codex CLI (Responses API) → codex-transfer (:4444) → Third-party Provider (Chat Completions API)
  ```
 
- ## Quick Start
+ - **Zero runtime dependencies**: esbuild bundles everything into a single `dist/codex-transfer.mjs` file — run via `npx` instantly
+ - **No external state**: sessions are held in process memory, no external database required
+ - **~1500 lines of TypeScript**: lightweight and auditable
+
+ ---
+
+ ## Quick Start
 
  ```bash
- # Install from npm
- npm install -g @classicicn/codex-transfer
+ # One-shot run (no install needed)
+ npx @classicicn/codex-transfer -k
+
+ # Specify upstream provider
+ npx @classicicn/codex-transfer -k -u https://api.deepseek.com/v1 --api-key sk-xxx
 
- # Run
+ # Global install
+ npm install -g @classicicn/codex-transfer
  codex-transfer -k
  ```
 
- ## CLI Options
+ ---
+
+ ## CLI Options
 
  ```
  codex-transfer [options]
 
- Options:
-   -p, --port PORT        Listen port (default: 4444)
-   -u, --upstream URL     Upstream Chat Completions base URL
-   --api-key KEY          API key for upstream
-   -m, --model MODEL      Force override model name (highest priority)
-   -c, --config PATH      Path to config file (JSON)
-   -k, --insecure         Skip TLS certificate verification
-   -d, --daemon           Run in background, logs to logs/ directory
-   -h, --help             Show this help
+ Options:
+   -p, --port PORT           Listen port (default: 4444)
+   -u, --upstream URL        Upstream Chat Completions base URL
+   --api-key KEY             API key for upstream
+   -m, --model MODEL         Force override model name (highest priority)
+   -c, --config PATH         Path to config file (JSON)
+   -k, --insecure            Skip TLS certificate verification (corporate proxy / self-signed certs)
+   --no-reasoning-effort     Don't send reasoning_effort to upstream
+   -d, --daemon              Run in background, logs to logs/ directory
+   -h, --help                Show this help
+ ```
+
+ ### Daemon Mode
+
+ ```bash
+ codex-transfer -d -k -u https://api.deepseek.com/v1 --api-key sk-xxx
+
+ # Output:
+ # codex-transfer started in background (PID: 12345)
+ # Log file: ~/.codex-transfer/logs/codex-transfer-20260507-143022.log
+ # PID file: ~/.codex-transfer/logs/codex-transfer.pid
+ # Stop: kill $(cat ~/.codex-transfer/logs/codex-transfer.pid)
  ```
 
- ## Configuration
+ In daemon mode, all `console` output is redirected to timestamped log files. Logs auto-rotate when a single file exceeds **10MB**, keeping up to **5 historical files**.
+
+ ---
+
+ ## Configuration
+
+ ### Priority
 
- Priority: CLI args > environment variables > config file > defaults
+ ```
+ CLI args > environment variables > config file > defaults
+ ```
 
- ### Config File
+ ### Config File
 
- Create a JSON config file at one of these locations:
- - `./codex-transfer.json` (current directory)
- - `~/.codex-transfer/config.json` (user home)
- - Custom path via `--config` or `CODEX_TRANSFER_CONFIG`
+ Create a JSON config file at one of these locations (searched in order):
+
+ 1. Explicit `--config` path or `CODEX_TRANSFER_CONFIG` env var
+ 2. `./codex-transfer.json` (current directory)
+ 3. `~/.codex-transfer/config.json` (user home)
 
  ```json
  {
@@ -53,15 +89,28 @@ Create a JSON config file at one of these locations:
    "upstream": "https://api.deepseek.com/v1",
    "apiKey": "sk-your-key-here",
    "insecure": false,
+   "reasoningEffort": true,
    "modelMap": {
-     "*": "deepseek-v4-pro"
+     "*": "deepseek-v4-pro",
+     "codex-auto-review": "deepseek-v4-pro"
    }
  }
  ```
 
- ### Model Name Mapping
+ ### Environment Variables
+
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `CODEX_TRANSFER_PORT` | `4444` | Listen port |
+ | `CODEX_TRANSFER_UPSTREAM` | `https://openrouter.ai/api/v1` | Upstream Chat Completions base URL |
+ | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
+ | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
+ | `CODEX_TRANSFER_INSECURE` | `false` | Set to `"1"` or `"true"` to skip TLS verification |
+ | `CODEX_TRANSFER_REASONING_EFFORT` | `true` | Set to `"0"` or `"false"` to disable sending reasoning_effort |
 
- Codex CLI may send non-standard model names (e.g. `codex-auto-review`) that the upstream provider doesn't recognize. Use `modelMap` to translate them:
+ ### Model Name Mapping
+
+ Codex CLI may send non-standard model names (e.g. `codex-auto-review`) that upstream providers don't recognize. Use `modelMap` to translate them:
 
  ```json
  {
@@ -72,84 +121,171 @@ Codex CLI may send non-standard model names (e.g. `codex-auto-review`) that the
  }
  ```
 
- Lookup order: exact key match → wildcard `"*"` → original name (passthrough).
+ **Lookup order**: exact key match → wildcard `"*"` → original name passthrough.
 
- Or use `--model` CLI flag to force-override all model names:
+ The `--model` / `-m` flag takes precedence over `modelMap`, overriding all model names.
 
- ```bash
- codex-transfer --model deepseek-v4-pro -k
+ ---
+
+ ## API Endpoints
+
+ | Method | Path | Purpose |
+ |--------|------|---------|
+ | `GET` | `/health` | Health check — tests upstream `/models` connectivity, returns diagnostics |
+ | `GET` | `/v1/models` | Model catalog proxy — transparently forwards upstream model list |
+ | `POST` | `/v1/responses` | **Core endpoint** — receives Responses API requests, translates and forwards upstream |
+
+ ### `/v1/responses` Request Flow
+
+ ```
+ Codex request arrives
+ → JSON parse & validate
+ → resolveModel() model name mapping
+ → Load message history (via previous_response_id)
+ → toChatRequest() protocol translation
+ → Branch:
+    ├─ stream=true  → translateStream() SSE generator → text/event-stream
+    └─ stream=false → fetch upstream → fromChatResponse() → JSON
  ```
 
 
- ### Environment Variables
+ ---
 
- | Variable | Default | Description |
- |----------|---------|-------------|
- | `CODEX_TRANSFER_PORT` | `4444` | Listen port |
- | `CODEX_TRANSFER_UPSTREAM` | `https://openrouter.ai/api/v1` | Upstream Chat Completions base URL |
- | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
- | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
- | `CODEX_TRANSFER_INSECURE` | `false` | Skip TLS certificate verification |
+ ## Features in Detail
 
- ## Usage
+ ### Protocol Translation
 
- ### Method 1: Direct execution
+ Full bidirectional translation between Responses API and Chat Completions API:
 
- ```bash
- node dist/codex-transfer.mjs -k -p 4446 -u https://api.deepseek.com/v1
- ```
+ - **Request translation**: `input` array (`function_call` / `function_call_output` / regular messages) → Chat Completions `messages[]` array
+ - **Response translation**: Chat Completions `choices[0].message` → Responses API `output[]` structure
+ - **System prompt**: `instructions` (Codex CLI field) → Chat Completions `system` role
+ - **Role mapping**: `developer` → `system`
 
- ### Method 2: npm link (global command)
+ ### Streaming Translation (SSE)
+
+ The upstream Chat Completions SSE delta stream is translated chunk-by-chunk into the standard Responses API event sequence:
 
- ```bash
- npm link
- # Then run directly
- codex-transfer -k
+ ```
+ response.created
+ → response.output_item.added (message)
+ → response.output_text.delta × N
+ → response.output_item.done
+ → [if tool calls present]
+     response.output_item.added (function_call)
+     → response.function_call_arguments.delta
+     → response.output_item.done
+ → response.completed
  ```
 
- ### Method 3: npx
+ **Design notes**:
+ - Text deltas are forwarded in real time; tool call deltas are batched after stream completion (Chat Completions scatters tool calls across multiple chunks by index)
+ - Top-level error fallback: even if the upstream disconnects unexpectedly, a `response.failed` event is emitted, preventing Codex CLI from hanging
 
- ```bash
- npx @classicicn/codex-transfer -k
- ```
+ ### Token Usage Details
 
- ### Method 4: Background (daemon) mode
+ Codex CLI relies on the usage fields in Responses API responses to calculate context window utilization. `codex-transfer` automatically extracts token usage from upstream responses and maps it to the Responses API format, handling the differences between the OpenAI and DeepSeek upstream formats:
 
- ```bash
- # Start as background process (logs → config_dir/logs/)
- node dist/codex-transfer.mjs -d -k
+ | Responses API Output | OpenAI Upstream Field | DeepSeek Upstream Field |
+ |---|---|---|
+ | `input_tokens` | `prompt_tokens` | `prompt_tokens` |
+ | `output_tokens` | `completion_tokens` | `completion_tokens` |
+ | `total_tokens` | `total_tokens` | `total_tokens` |
+ | `input_tokens_details.cached_tokens` | `prompt_tokens_details.cached_tokens` | `prompt_cache_hit_tokens` |
+ | `output_tokens_details.reasoning_tokens` | `completion_tokens_details.reasoning_tokens` | `completion_tokens_details.reasoning_tokens` |
 
- # Output:
- # codex-transfer started in background (PID: 12345)
- # Log file: ~/.codex-transfer/logs/codex-transfer.log
- # PID file: ~/.codex-transfer/logs/codex-transfer.pid
- # Stop: kill $(cat ~/.codex-transfer/logs/codex-transfer.pid)
+ **Auto-detection**: The upstream format is detected from which fields are present in the response — no configuration needed. `cached_tokens` prefers the OpenAI nested field, falling back to the DeepSeek top-level field; `reasoning_tokens` uses the same path in both formats. Detail objects are omitted when the corresponding fields are absent.
+
+ Both the non-streaming and streaming paths share the same mapping logic.
+
+ ### Reasoning Effort Mapping
+
+ Codex CLI controls model reasoning intensity via `reasoning.effort` (none/low/medium/high/xhigh), but providers implement this differently. `codex-transfer` automatically maps the Responses API reasoning effort to provider-specific parameters:
+
+ | Codex Level | Responses API Value | DeepSeek | MiMo / Kimi / GLM |
+ |---|---|---|---|
+ | Minimal | `none` | `thinking: {type: "disabled"}` | `thinking: {type: "disabled"}` |
+ | Low | `low` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
+ | Medium | `medium` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
+ | High | `high` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
+ | Ultra | `xhigh` | `thinking: {type: "enabled"}, reasoning_effort: "max"` | `thinking: {type: "enabled"}` |
+
+ **Compatibility strategy**: The `thinking` toggle is always sent (supported by all providers). `reasoning_effort` is sent by default (natively supported by DeepSeek); if the upstream doesn't support this field, it can be disabled via the `--no-reasoning-effort` CLI flag or `"reasoningEffort": false` in the config file.
+
+ ### Session Management
 
- # View logs
- tail -f ~/.codex-transfer/logs/codex-transfer.log
+ Codex CLI uses `previous_response_id` for multi-turn conversations. `SessionStore` maintains the full message history for each session in memory, making every Chat Completions call **self-contained** (no dependency on upstream context caching).
 
- # Stop
- kill $(cat ~/.codex-transfer/logs/codex-transfer.pid)
+ ```
+ ┌─────────────────────────────────┐
+ │ SessionStore (in-memory)        │
+ │                                 │
+ │  history: Map<response_id,      │
+ │               ChatMessage[]>    │
+ │                                 │
+ │  reasoning: Map<call_id,        │
+ │               reasoning_text>   │
+ │                                 │
+ │  turnReasoning: Map<            │
+ │      SHA256(content),           │
+ │      reasoning_text>            │
+ └─────────────────────────────────┘
  ```
 
- ### Method 5: As a library (from source only)
+ ### Reasoning Model Support (DeepSeek-R1 / Kimi-K2.6)
 
- > **Note:** The npm package contains only the CLI bundle. To use as a library, clone the repo and import from source.
+ Reasoning models produce `reasoning_content` (chain of thought) that must be **round-tripped verbatim** across turns — otherwise the model may reject the request or behave incorrectly.
 
- ```typescript
- import { createTransfer } from "./src/server.js";
+ `codex-transfer` uses a **dual-index cache** to recover reasoning content:
+
+ | Index method | Use case | Implementation |
+ |-------------|----------|----------------|
+ | **call_id exact match** | Codex uses `previous_response_id` + tool call replay | `Map<call_id, reasoning>` |
+ | **Content SHA256 fingerprint** | Codex replays the full `input[]` without `previous_response_id` | `Map<SHA256(content), reasoning>` |
+
+ The two mechanisms complement each other, covering both conversation replay modes of Codex CLI.
+
+ ### Tool Call Handling
+
+ - **Tool filtering**: Automatically filters out OpenAI-proprietary built-in tools (`web_search`, `file_search`, `computer`, etc.), keeping only `type: "function"` custom tools to prevent upstream rejection
+ - **Format conversion**: Responses API flat format `{type, name, description, parameters}` ↔ Chat Completions nested format `{type, function: {name, description, parameters}}`
+ - **Parallel tool calls**: Consecutive `function_call` input items are merged into a single assistant message with multiple `tool_calls` entries
+ - **Message reordering**: Codex may interleave other messages between `function_call` and `function_call_output` items, but providers like DeepSeek strictly require `assistant(tool_calls)` to be immediately followed by its matching `tool` messages. `reorderForToolCalls()` handles this automatically, synthesizing empty output for orphaned tool calls
+
+ ### Health Check
 
- const { app, port } = createTransfer({
-   configPath: "./codex-transfer.json",
-   port: 4446,
-   upstream: "https://api.deepseek.com/v1",
-   apiKey: "sk-...",
-   disableTlsVerify: true,
- });
+ ```
+ GET /health → 200 OK
+ {
+   "upstream": "https://api.deepseek.com/v1",
+   "apiKeySet": true,
+   "apiKeyPrefix": "sk-abc…",
+   "upstreamStatus": 200,
+   "upstreamOk": true
+ }
  ```
 
- ## Codex Configuration
+ ---
 
- Add to `~/.codex/config.toml`:
+ ## Supported Providers
+
+ Any provider implementing the OpenAI Chat Completions API format is supported.
+
+ | Provider | Base URL |
+ |----------|----------|
+ | DeepSeek | `https://api.deepseek.com/v1` |
+ | Xiaomi MiMo | `https://api.xiaomimimo.com/v1` |
+ | Kimi (Moonshot) | `https://api.moonshot.cn/v1` |
+ | Qwen (Tongyi Qianwen) | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
+ | OpenRouter | `https://openrouter.ai/api/v1` |
+
+ > Any OpenAI API-compatible provider should work in principle. If you find a working provider not listed here, PRs are welcome.
+
+ ---
+
+ ## Codex CLI Configuration
+
+ Add to `~/.codex/config.toml`:
 
  ```toml
  model = "deepseek-v4-pro"
@@ -161,44 +297,79 @@ base_url = "http://127.0.0.1:4446/v1"
  wire_api = "responses"
  ```
 
- ## Supported Providers
+ > **Note**: The `base_url` port must match the `codex-transfer` listen port, and `wire_api` must be `"responses"`.
 
- | Provider | Base URL |
- |----------|----------|
- | DeepSeek | `https://api.deepseek.com/v1` |
- | Kimi (Moonshot) | `https://api.moonshot.cn/v1` |
- | Qwen | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
- | Mistral | `https://api.mistral.ai/v1` |
- | Groq | `https://api.groq.com/openai/v1` |
- | xAI | `https://api.x.ai/v1` |
- | OpenRouter | `https://openrouter.ai/api/v1` |
+ ---
+
+ ## Project Structure
+
+ ```
+ src/
+ ├── cli.ts        CLI entry — argument parsing, daemon process management, log rotation
+ ├── server.ts     HTTP server — Hono route registration, request dispatch, proxy instance creation
+ ├── config.ts     Configuration — multi-source merging, priority control, config file discovery
+ ├── session.ts    Session state — message history storage, dual-index reasoning cache
+ ├── translate.ts  Protocol translation — Responses ↔ Chat Completions bidirectional conversion
+ ├── stream.ts     SSE translation — streaming chunk parsing, event sequence generation, error fallback
+ └── types.ts      Type definitions — complete TypeScript types for both APIs
+ build.mjs         Build script — esbuild single-file bundling
+ ```
+
+ ### Dependency Graph
+
+ ```
+ cli.ts → server.ts → translate.ts + stream.ts → session.ts + types.ts
+                    → config.ts
+ ```
+
+ ### Data Flow
+
+ ```
+                 ┌─────────────┐
+                 │   Config    │ ◄── CLI / ENV / File
+                 └──────┬──────┘
+                        │
+ Codex ──POST──► Server ──┼──► toChatRequest() ──► fetch ──► Upstream
+   ▲              │              SessionStore                  │
+   │              │              (history +                    │
+   └──SSE/JSON────┘               reasoning)                   │
+                  ◄── translateStream() ───────────────────────┘
+                  ◄── fromChatResponse()
+ ```
+
+ ---
+
+ ## Build
+
+ ```bash
+ git clone https://github.com/Icicno/codex-transfer.git
+ cd codex-transfer
+ npm install
+ npm run build            # esbuild bundle + tsc type check
+ node dist/codex-transfer.mjs -k
+
+ # Or link as a global command
+ npm link
+ codex-transfer -k
+ ```
+
+ ---
+
+ ## Programmatic Usage
+
+ ```typescript
+ import { createTransfer } from "./src/server.js";
+
+ const { app, port } = createTransfer({
+   configPath: "./codex-transfer.json",
+   port: 4446,
+   upstream: "https://api.deepseek.com/v1",
+   apiKey: "sk-...",
+   disableTlsVerify: true,
+ });
+ ```
 
- ## Features
-
- - **Single-file bundle** — `dist/codex-transfer.mjs` has zero runtime dependencies
- - **Streaming** — full SSE streaming with correct event sequencing
- - **Tool calls** — accumulates streaming deltas and emits structured function_call items
- - **Parallel tool calls** — consecutive function_call input items merged into one assistant message
- - **Tool call message ordering** — automatically reorders messages to ensure `assistant(tool_calls)` is immediately followed by matching `tool` messages (required by DeepSeek and other strict providers)
- - **Model name mapping** — maps non-standard Codex model names (e.g. `codex-auto-review`) to upstream provider models via `modelMap` config or `--model` flag
- - **Reasoning models** — preserves `reasoning_content` across turns (DeepSeek, kimi-k2.6)
- - **Model catalog** — proxies `/v1/models` from the upstream provider
- - **Health check** — `GET /health` diagnostic endpoint
- - **TLS skip** — supports corporate proxy / self-signed certificate scenarios
- - **Daemon mode** — `--daemon` runs in background with logs to `logs/` directory next to config file
-
- ## Project Structure
-
- | File | Description |
- |------|-------------|
- | `src/types.ts` | Responses/Chat Completions API type definitions |
- | `src/config.ts` | Configuration loading (file + env vars) |
- | `src/session.ts` | Session store and reasoning content cache |
- | `src/translate.ts` | Request/response translation logic |
- | `src/stream.ts` | SSE stream translation |
- | `src/server.ts` | HTTP server (Hono) |
- | `src/cli.ts` | CLI entry point |
- | `build.mjs` | esbuild bundler script |
+ ---
 
  ## License
 
@@ -2975,13 +2975,26 @@ function toChatRequest(req, history, sessions) {
2975
2975
  }
2976
2976
  const filteredTools = (req.tools ?? []).filter((t) => t.type === "function").map(convertTool);
2977
2977
  const reordered = reorderForToolCalls(messages);
2978
+ const reasoningFields = mapReasoningEffort(req.reasoning?.effort);
2978
2979
  return {
2979
2980
  model: req.model,
2980
2981
  messages: reordered,
2981
2982
  ...filteredTools.length > 0 ? { tools: filteredTools } : {},
2982
2983
  ...req.temperature != null ? { temperature: req.temperature } : {},
2983
2984
  ...req.max_output_tokens != null ? { max_tokens: req.max_output_tokens } : {},
2984
- stream: req.stream ?? false
2985
+ stream: req.stream ?? false,
2986
+ ...reasoningFields
2987
+ };
2988
+ }
2989
+ function mapReasoningEffort(effort) {
2990
+ if (!effort) return {};
2991
+ if (effort === "none") {
2992
+ return { thinking: { type: "disabled" } };
2993
+ }
2994
+ const reasoningEffort = effort === "xhigh" ? "max" : "high";
2995
+ return {
2996
+ thinking: { type: "enabled" },
2997
+ reasoning_effort: reasoningEffort
2985
2998
  };
2986
2999
  }
2987
3000
  function convertTool(tool) {
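The `mapReasoningEffort` helper added in the hunk above is small enough to sketch standalone. This TypeScript mirror of the diff's logic (the `ReasoningFields` type alias is added here for illustration) shows that any effort other than `"none"` enables thinking, and only `"xhigh"` upgrades the hint to `"max"`:

```typescript
// Mirrors the mapReasoningEffort logic in the 0.3.0 diff above:
// Codex's reasoning.effort becomes a `thinking` toggle plus a
// `reasoning_effort` hint for Chat Completions upstreams.
type ReasoningFields = {
  thinking?: { type: "enabled" | "disabled" };
  reasoning_effort?: "high" | "max";
};

function mapReasoningEffort(effort?: string): ReasoningFields {
  if (!effort) return {}; // nothing requested: send nothing
  if (effort === "none") return { thinking: { type: "disabled" } };
  // any other effort enables thinking; only "xhigh" upgrades the hint to "max"
  const reasoningEffort = effort === "xhigh" ? "max" : "high";
  return { thinking: { type: "enabled" }, reasoning_effort: reasoningEffort };
}

console.log(JSON.stringify(mapReasoningEffort("xhigh")));
// → {"thinking":{"type":"enabled"},"reasoning_effort":"max"}
```

Note that `"low"` and `"medium"` both collapse to `"high"` under this mapping, as written in the published bundle.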
@@ -3013,11 +3026,7 @@ function fromChatResponse(id, model, chat) {
3013
3026
  content: [{ type: "output_text", text }]
3014
3027
  }
3015
3028
  ];
3016
- const respUsage = {
3017
- input_tokens: usage.prompt_tokens,
3018
- output_tokens: usage.completion_tokens,
3019
- total_tokens: usage.total_tokens
3020
- };
3029
+ const respUsage = mapUsage(usage);
3021
3030
  const response = {
3022
3031
  id,
3023
3032
  object: "response",
@@ -3027,6 +3036,22 @@ function fromChatResponse(id, model, chat) {
3027
3036
  };
3028
3037
  return { response, assistantMessage: choice.message };
3029
3038
  }
3039
+ function mapUsage(usage) {
3040
+ const cachedTokens = usage.prompt_tokens_details?.cached_tokens ?? usage.prompt_cache_hit_tokens;
3041
+ const reasoningTokens = usage.completion_tokens_details?.reasoning_tokens;
3042
+ const result = {
3043
+ input_tokens: usage.prompt_tokens,
3044
+ output_tokens: usage.completion_tokens,
3045
+ total_tokens: usage.total_tokens
3046
+ };
3047
+ if (cachedTokens != null) {
3048
+ result.input_tokens_details = { cached_tokens: cachedTokens };
3049
+ }
3050
+ if (reasoningTokens != null) {
3051
+ result.output_tokens_details = { reasoning_tokens: reasoningTokens };
3052
+ }
3053
+ return result;
3054
+ }
3030
3055
  function valueToText(v) {
3031
3056
  if (v == null) return "";
3032
3057
  if (typeof v === "string") return v;
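The new `mapUsage` helper above can likewise be exercised in isolation. A self-contained TypeScript sketch mirroring the diff's logic (the interface names and sample numbers are illustrative, not from the package):

```typescript
// Mirrors the mapUsage logic in the 0.3.0 diff above: Chat Completions usage
// fields are renamed to Responses-style fields, and optional cached/reasoning
// token details are attached only when the upstream reports them.
interface ChatUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
  prompt_cache_hit_tokens?: number; // DeepSeek-style cache counter
  completion_tokens_details?: { reasoning_tokens?: number };
}

interface ResponsesUsage {
  input_tokens: number;
  output_tokens: number;
  total_tokens: number;
  input_tokens_details?: { cached_tokens: number };
  output_tokens_details?: { reasoning_tokens: number };
}

function mapUsage(usage: ChatUsage): ResponsesUsage {
  const cachedTokens =
    usage.prompt_tokens_details?.cached_tokens ?? usage.prompt_cache_hit_tokens;
  const reasoningTokens = usage.completion_tokens_details?.reasoning_tokens;
  const result: ResponsesUsage = {
    input_tokens: usage.prompt_tokens,
    output_tokens: usage.completion_tokens,
    total_tokens: usage.total_tokens,
  };
  if (cachedTokens != null) result.input_tokens_details = { cached_tokens: cachedTokens };
  if (reasoningTokens != null) result.output_tokens_details = { reasoning_tokens: reasoningTokens };
  return result;
}

// Hypothetical DeepSeek-style usage payload (numbers are made up)
const mapped = mapUsage({
  prompt_tokens: 120,
  completion_tokens: 30,
  total_tokens: 150,
  prompt_cache_hit_tokens: 100,
  completion_tokens_details: { reasoning_tokens: 12 },
});
console.log(JSON.stringify(mapped));
```

Because the detail objects are only attached when present, providers that report neither field still get a minimal three-field usage object.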
@@ -3147,6 +3172,7 @@ async function* translateStream(args2, signal) {
3147
3172
  const toolCalls = /* @__PURE__ */ new Map();
3148
3173
  let emittedMessageItem = false;
3149
3174
  let done = false;
3175
+ let streamUsage;
3150
3176
  const reader = upstream.body.getReader();
3151
3177
  const decoder = new TextDecoder();
3152
3178
  let buffer = "";
@@ -3187,6 +3213,9 @@ async function* translateStream(args2, signal) {
3187
3213
  if (err) {
3188
3214
  console.error(`[transfer] upstream error in stream:`, err);
3189
3215
  }
3216
+ if (chunk.usage) {
3217
+ streamUsage = chunk.usage;
3218
+ }
3190
3219
  for (const choice of chunk.choices ?? []) {
3191
3220
  const rc = choice.delta?.reasoning_content;
3192
3221
  if (rc) {
@@ -3350,7 +3379,7 @@ async function* translateStream(args2, signal) {
3350
3379
  status: "completed",
3351
3380
  model,
3352
3381
  output: outputItems,
3353
- usage: { input_tokens: 0, output_tokens: 0, total_tokens: 0 }
3382
+ usage: streamUsage ? mapUsage(streamUsage) : { input_tokens: 0, output_tokens: 0, total_tokens: 0 }
3354
3383
  }
3355
3384
  });
3356
3385
  console.log(`[transfer] stream completed: ${accumulatedText.length} chars, ${toolCalls.size} tool calls`);
@@ -3387,7 +3416,8 @@ var DEFAULT_CONFIG = {
3387
3416
  upstream: "https://openrouter.ai/api/v1",
3388
3417
  apiKey: "",
3389
3418
  insecure: false,
3390
- modelMap: {}
3419
+ modelMap: {},
3420
+ reasoningEffort: true
3391
3421
  };
3392
3422
  function loadConfig(configPath) {
3393
3423
  const fileConfig = loadConfigFile(configPath);
@@ -3398,7 +3428,10 @@ function loadConfig(configPath) {
3398
3428
  insecure: parseBool(
3399
3429
  process.env.CODEX_TRANSFER_INSECURE ?? fileConfig.insecure
3400
3430
  ),
3401
- modelMap: fileConfig.modelMap ?? DEFAULT_CONFIG.modelMap
3431
+ modelMap: fileConfig.modelMap ?? DEFAULT_CONFIG.modelMap,
3432
+ reasoningEffort: parseBool(
3433
+ process.env.CODEX_TRANSFER_REASONING_EFFORT ?? fileConfig.reasoningEffort ?? true
3434
+ )
3402
3435
  };
3403
3436
  }
3404
3437
  function loadConfigFile(explicitPath) {
@@ -3422,7 +3455,8 @@ function loadConfigFile(explicitPath) {
3422
3455
  upstream: typeof parsed.upstream === "string" ? parsed.upstream : void 0,
3423
3456
  apiKey: typeof parsed.apiKey === "string" ? parsed.apiKey : void 0,
3424
3457
  insecure: typeof parsed.insecure === "boolean" ? parsed.insecure : void 0,
3425
- modelMap: typeof parsed.modelMap === "object" && parsed.modelMap !== null ? parsed.modelMap : void 0
3458
+ modelMap: typeof parsed.modelMap === "object" && parsed.modelMap !== null ? parsed.modelMap : void 0,
3459
+ reasoningEffort: typeof parsed.reasoningEffort === "boolean" ? parsed.reasoningEffort : void 0
3426
3460
  };
3427
3461
  } catch {
3428
3462
  }
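The resolution order for the new `reasoningEffort` option follows the pattern in the hunks above: environment variable first, then the config file, then the default `true`. A sketch of that precedence; the real `parseBool` is not shown in this diff, so the version here is an assumption that treats `"0"`, `"false"`, and empty strings as false:

```typescript
// Assumed parseBool semantics (not shown in the diff): booleans pass through,
// strings are false only for "0"/"false"/"", anything else is false.
function parseBool(v: unknown): boolean {
  if (typeof v === "boolean") return v;
  if (typeof v === "string") return !(v === "0" || v.toLowerCase() === "false" || v === "");
  return false;
}

// Env var wins over the config file; the feature defaults to enabled.
function resolveReasoningEffort(env: string | undefined, fileValue?: boolean): boolean {
  return parseBool(env ?? fileValue ?? true);
}

console.log(resolveReasoningEffort(undefined));        // default: true
console.log(resolveReasoningEffort("0", true));        // env "0" disables: false
console.log(resolveReasoningEffort(undefined, false)); // file false: false
```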
@@ -3527,6 +3561,9 @@ function createTransfer(options = {}) {
3527
3561
  const history = req.previous_response_id ? sessions.getHistory(req.previous_response_id) : [];
3528
3562
  const chatReq = toChatRequest(req, history, sessions);
3529
3563
  chatReq.model = model;
3564
+ if (!fileConfig.reasoningEffort) {
3565
+ delete chatReq.reasoning_effort;
3566
+ }
3530
3567
  const url = `${upstream}/chat/completions`;
3531
3568
  if (req.stream) {
3532
3569
  const responseId = sessions.newId();
@@ -3637,6 +3674,8 @@ for (let i = 0; i < args.length; i++) {
3637
3674
  overrides.configPath = args[++i];
3638
3675
  } else if ((a === "--model" || a === "-m") && args[i + 1]) {
3639
3676
  overrides.model = args[++i];
3677
+ } else if (a === "--no-reasoning-effort") {
3678
+ overrides.reasoningEffort = "false";
3640
3679
  } else if (a === "--help" || a === "-h") {
3641
3680
  console.log(`
3642
3681
  codex-transfer \u2014 Responses API \u2194 Chat Completions bridge
@@ -3651,6 +3690,7 @@ Options:
3651
3690
  -m, --model MODEL Override model name (highest priority model mapping)
3652
3691
  -c, --config PATH Path to config file (JSON)
3653
3692
  -k, --insecure Skip TLS certificate verification
3693
+ --no-reasoning-effort Don't send reasoning_effort to upstream
3654
3694
  -d, --daemon Run in background, logs to logs/ directory
3655
3695
  -h, --help Show this help
3656
3696
 
@@ -3660,6 +3700,7 @@ Environment variables:
3660
3700
  CODEX_TRANSFER_API_KEY Same as --api-key
3661
3701
  CODEX_TRANSFER_CONFIG Same as --config
3662
3702
  CODEX_TRANSFER_INSECURE Set to "1" to skip TLS verification
3703
+ CODEX_TRANSFER_REASONING_EFFORT Set to "0" to disable reasoning_effort
3663
3704
 
3664
3705
  Config file options:
3665
3706
  modelMap Model name mapping, e.g. {"*": "deepseek-v4-pro"}
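The `modelMap` option documented in this help text resolves names in a fixed order: exact key match, then the `"*"` wildcard, then pass-through of the original name. A minimal sketch of that lookup (the function name is illustrative):

```typescript
// modelMap resolution as documented in the README: exact key match,
// then the "*" wildcard, then pass the original name through unchanged.
function resolveModel(requested: string, modelMap: Record<string, string>): string {
  return modelMap[requested] ?? modelMap["*"] ?? requested;
}

const map = { "codex-auto-review": "deepseek-v4-pro", "*": "deepseek-v4-pro" };
console.log(resolveModel("codex-auto-review", map)); // exact match → deepseek-v4-pro
console.log(resolveModel("gpt-5", map));             // wildcard → deepseek-v4-pro
console.log(resolveModel("gpt-5", {}));              // passthrough → gpt-5
```

The `--model` flag described above sits on top of this lookup as a blanket override.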
@@ -3735,6 +3776,9 @@ if (logFile) {
3735
3776
  }
3736
3777
  var rotateIfNeeded2;
3737
3778
  var logWrite2;
3779
+ if (overrides.reasoningEffort) {
3780
+ process.env.CODEX_TRANSFER_REASONING_EFFORT = overrides.reasoningEffort;
3781
+ }
3738
3782
  var { app, port } = createTransfer({
3739
3783
  configPath: overrides.configPath,
3740
3784
  port: overrides.port ? Number(overrides.port) : void 0,
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@classicicn/codex-transfer",
3
- "version": "0.2.0",
3
+ "version": "0.3.0",
4
4
  "description": "Responses API ↔ Chat Completions translation bridge for Codex — use DeepSeek, Kimi, Qwen, and other providers with Codex",
5
5
  "type": "module",
6
6
  "main": "dist/codex-transfer.mjs",
@@ -10,7 +10,7 @@
10
10
  "files": [
11
11
  "dist/codex-transfer.mjs",
12
12
  "README.md",
13
- "README.zh-CN.md",
13
+ "README.en.md",
14
14
  "LICENSE"
15
15
  ],
16
16
  "scripts": {
package/README.zh-CN.md DELETED
@@ -1,205 +0,0 @@
1
- # codex-transfer
2
-
3
- Responses API ↔ Chat Completions translation bridge (TypeScript implementation)
4
-
5
- ## Overview
6
-
7
- A lightweight proxy that translates the OpenAI **Responses API** (used by Codex CLI) into the **Chat Completions API**, so Codex can work with any OpenAI-compatible provider: DeepSeek, Kimi, Qwen, Mistral, Groq, xAI, OpenRouter, and more.
8
-
9
- ```
10
- Codex CLI (Responses API) → codex-transfer → DeepSeek (Chat Completions API)
11
- ```
12
-
13
- ## Quick Start
14
-
15
- ```bash
16
- # Install from npm
17
- npm install -g @classicicn/codex-transfer
18
-
19
- # Start
20
- codex-transfer -k
21
- ```
22
-
23
- ## CLI Options
24
-
25
- ```
26
- codex-transfer [options]
27
-
28
- Options:
29
- -p, --port PORT Listen port (default 4444)
30
- -u, --upstream URL Upstream Chat Completions base URL
31
- --api-key KEY Upstream API key
32
- -m, --model MODEL Force-override the model name (highest priority)
33
- -c, --config PATH Path to config file (JSON)
34
- -k, --insecure Skip TLS certificate verification
35
- -d, --daemon Run in background, logs to logs/ directory
36
- -h, --help Show help
37
- ```
38
-
39
- ## Configuration
40
-
41
- Configuration priority: CLI flags > environment variables > config file > defaults
42
-
43
- ### Config File
44
-
45
- Create a JSON config file in one of these locations:
46
- - `./codex-transfer.json` (current directory)
47
- - `~/.codex-transfer/config.json` (home directory)
48
- - A path specified via `--config` or `CODEX_TRANSFER_CONFIG`
49
-
50
- ```json
51
- {
52
- "port": 4446,
53
- "upstream": "https://api.deepseek.com/v1",
54
- "apiKey": "sk-your-key-here",
55
- "insecure": false,
56
- "modelMap": {
57
- "*": "deepseek-v4-pro"
58
- }
59
- }
60
- ```
61
-
62
- ### Model Name Mapping
63
-
64
- Codex CLI may send model names the upstream does not recognize (such as `codex-auto-review`). Use `modelMap` to translate them:
65
-
66
- ```json
67
- {
68
- "modelMap": {
69
- "*": "deepseek-v4-pro",
70
- "codex-auto-review": "deepseek-v4-pro"
71
- }
72
- }
73
- ```
74
-
75
- Lookup order: exact key match → wildcard `"*"` → original model name (passed through unchanged).
76
-
77
- You can also force-override all model names with the `--model` CLI flag:
78
-
79
- ```bash
80
- codex-transfer --model deepseek-v4-pro -k
81
- ```
82
-
83
- ### Environment Variables
84
-
85
- | Variable | Default | Description |
86
- |--------|--------|------|
87
- | `CODEX_TRANSFER_PORT` | `4444` | Listen port |
88
- | `CODEX_TRANSFER_UPSTREAM` | `https://openrouter.ai/api/v1` | Upstream Chat Completions base URL |
89
- | `CODEX_TRANSFER_API_KEY` | _(empty)_ | Upstream API key |
90
- | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Config file path |
91
- | `CODEX_TRANSFER_INSECURE` | `false` | Skip TLS verification |
92
-
93
- ## Usage
94
-
95
- ### Option 1: Run the bundled output directly
96
-
97
- ```bash
98
- node dist/codex-transfer.mjs -k -p 4446 -u https://api.deepseek.com/v1
99
- ```
100
-
101
- ### Option 2: npm link (global command)
102
-
103
- ```bash
104
- npm link
105
- # Then run it directly
106
- codex-transfer -k
107
- ```
108
-
109
- ### Option 3: npx
110
-
111
- ```bash
112
- npx @classicicn/codex-transfer -k
113
- ```
114
-
115
- ### Option 4: Run in the background
116
-
117
- ```bash
118
- # Start a background process (logs go to the logs/ directory next to the config file)
119
- node dist/codex-transfer.mjs -d -k
120
-
121
- # Example output:
122
- # codex-transfer started in background (PID: 12345)
123
- # Log file: ~/.codex-transfer/logs/codex-transfer.log
124
- # PID file: ~/.codex-transfer/logs/codex-transfer.pid
125
- # Stop: kill $(cat ~/.codex-transfer/logs/codex-transfer.pid)
126
-
127
- # View logs
128
- tail -f ~/.codex-transfer/logs/codex-transfer.log
129
-
130
- # Stop
131
- kill $(cat ~/.codex-transfer/logs/codex-transfer.pid)
132
- ```
133
-
134
- ### Option 5: Use as a library (source only)
135
-
136
- > **Note:** The npm package ships only the bundled CLI. To use it as a library, clone the repository and import from source.
137
-
138
- ```typescript
139
- import { createTransfer } from "./src/server.js";
140
-
141
- const { app, port } = createTransfer({
142
- configPath: "./codex-transfer.json",
143
- port: 4446,
144
- upstream: "https://api.deepseek.com/v1",
145
- apiKey: "sk-...",
146
- disableTlsVerify: true,
147
- });
148
- ```
149
-
150
- ## Codex Configuration
151
-
152
- Add the following to `~/.codex/config.toml`:
153
-
154
- ```toml
155
- model = "deepseek-v4-pro"
156
- model_provider = "deepseek-transfer"
157
-
158
- [model_providers.deepseek-transfer]
159
- name = "DeepSeek"
160
- base_url = "http://127.0.0.1:4446/v1"
161
- wire_api = "responses"
162
- ```
163
-
164
- ## Supported Providers
165
-
166
- | Provider | Base URL |
167
- |--------|----------|
168
- | DeepSeek | `https://api.deepseek.com/v1` |
169
- | Kimi (Moonshot) | `https://api.moonshot.cn/v1` |
170
- | Qwen | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
171
- | Mistral | `https://api.mistral.ai/v1` |
172
- | Groq | `https://api.groq.com/openai/v1` |
173
- | xAI | `https://api.x.ai/v1` |
174
- | OpenRouter | `https://openrouter.ai/api/v1` |
175
-
176
- ## Features
177
-
178
- - **Single-file bundle** — `dist/codex-transfer.mjs` has no external dependencies; copy and run
179
- - **Streaming** — full SSE streaming with correct event ordering
180
- - **Tool calls** — accumulates streaming deltas and emits structured function_call items
181
- - **Parallel tool calls** — consecutive function_call input items merged into a single assistant message
182
- - **Tool call message ordering** — automatically reorders messages so `assistant(tool_calls)` is immediately followed by the matching `tool` messages (required by DeepSeek and other strict providers)
183
- - **Model name mapping** — maps non-standard Codex model names (e.g. `codex-auto-review`) to upstream provider models via `modelMap` config or the `--model` flag
184
- - **Reasoning models** — preserves `reasoning_content` across turns (DeepSeek, kimi-k2.6)
185
- - **Model catalog** — proxies the upstream `/v1/models` endpoint
186
- - **Health check** — `GET /health` diagnoses upstream connectivity
187
- - **TLS skip** — supports corporate proxy / self-signed certificate scenarios
188
- - **Daemon mode** — `--daemon` runs in the background with logs in the `logs/` directory next to the config file
189
-
190
- ## Project Structure
191
-
192
- | File | Description |
193
- |------|------|
194
- | `src/types.ts` | Responses/Chat Completions API type definitions |
195
- | `src/config.ts` | Configuration loading (config file + environment variables) |
196
- | `src/session.ts` | Session store and reasoning content cache |
197
- | `src/translate.ts` | Request/response translation logic |
198
- | `src/stream.ts` | SSE stream translation |
199
- | `src/server.ts` | HTTP server (Hono) |
200
- | `src/cli.ts` | CLI entry point |
201
- | `build.mjs` | esbuild bundler script |
202
-
203
- ## License
204
-
205
- MIT