@classicicn/codex-transfer 0.3.0 → 0.3.2

package/README.en.md CHANGED
@@ -1,5 +1,7 @@
  # codex-transfer
 
+ **English** | [中文](./README.md)
+
  > Responses API ↔ Chat Completions translation bridge — use DeepSeek, Kimi, Qwen, and other OpenAI-compatible providers with Codex CLI.
 
  ## Overview
@@ -46,7 +48,7 @@ Options:
  -m, --model MODEL Force override model name (highest priority)
  -c, --config PATH Path to config file (JSON)
  -k, --insecure Skip TLS certificate verification
- --no-reasoning-effort Don't send reasoning_effort to upstream
+ --reasoning-effort Send reasoning effort to upstream (default: off)
  -d, --daemon Run in background, logs to logs/ directory
  -h, --help Show this help
  ```
@@ -89,7 +91,7 @@ Create a JSON config file at one of these locations (searched in order):
  "upstream": "https://api.deepseek.com/v1",
  "apiKey": "sk-your-key-here",
  "insecure": false,
- "reasoningEffort": true,
+ "reasoningEffort": false,
  "modelMap": {
  "*": "deepseek-v4-pro",
  "codex-auto-review": "deepseek-v4-pro"
@@ -106,7 +108,7 @@ Create a JSON config file at one of these locations (searched in order):
  | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
  | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
  | `CODEX_TRANSFER_INSECURE` | `false` | Set to `"1"` or `"true"` to skip TLS verification |
- | `CODEX_TRANSFER_REASONING_EFFORT` | `true` | Set to `"0"` or `"false"` to disable sending reasoning_effort |
+ | `CODEX_TRANSFER_REASONING_EFFORT` | `false` | Set to `"1"` or `"true"` to enable sending reasoning_effort |
 
  ### Model Name Mapping
 
@@ -209,7 +211,7 @@ Codex CLI controls model reasoning intensity via `reasoning.effort` (none/low/me
  | High | `high` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
  | Ultra | `xhigh` | `thinking: {type: "enabled"}, reasoning_effort: "max"` | `thinking: {type: "enabled"}` |
 
- **Compatibility strategy**: The `thinking` toggle is always sent (supported by all providers). `reasoning_effort` is sent by default (natively supported by DeepSeek); if the upstream doesn't support this field, it can be disabled via the `--no-reasoning-effort` CLI flag or `"reasoningEffort": false` in the config file.
+ **Compatibility strategy**: The `thinking` toggle is always sent (supported by all providers). `reasoning_effort` is not sent by default (to avoid 400 errors from incompatible providers); to enable it, use the `--reasoning-effort` CLI flag or `"reasoningEffort": true` in the config file.
 
  ### Session Management
 
@@ -371,6 +373,25 @@ const { app, port } = createTransfer({
 
  ---
 
+ ## Changelog
+
+ ### v0.3.0 (2026-05-08)
+
+ - **Token usage**: Extract usage from upstream streaming responses — Codex now correctly displays context utilization (fixes 0% display issue)
+ - **Usage details**: Auto-map `cached_tokens` and `reasoning_tokens`, compatible with both OpenAI and DeepSeek upstream formats
+ - **Reasoning effort**: Map `reasoning.effort` to DeepSeek `thinking`/`reasoning_effort` and MiMo/Kimi/GLM `thinking` toggle
+ - **Config-driven control**: `reasoning_effort` is off by default; enable via `--reasoning-effort` or `reasoningEffort` config option
+
+ ### v0.2.0 (2026-05-07)
+
+ - First npm release
+ - Responses API ↔ Chat Completions bidirectional protocol translation
+ - Streaming SSE event generation, session management, reasoning model support
+ - Model name mapping, daemon mode, log rotation
+ - TLS certificate bypass, config file support
+
+ ---
+
  ## License
 
  MIT
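The compatibility strategy above (always send the `thinking` toggle, attach `reasoning_effort` only when opted in) can be sketched as follows. `mapReasoning` and `EFFORT_MAP` are hypothetical names, not the package's actual exports; only the `high` and `xhigh` payload shapes are taken from the table visible in this diff, and the opt-in default mirrors the new v0.3.x behavior.

```javascript
// Hypothetical sketch of the effort mapping described in the README diff.
// Only the levels shown in the diff's table are included; other levels
// would follow the same pattern.
const EFFORT_MAP = {
  high:  { thinking: { type: "enabled" }, reasoning_effort: "high" },
  xhigh: { thinking: { type: "enabled" }, reasoning_effort: "max" },
};

function mapReasoning(effort, { reasoningEffort = false } = {}) {
  const mapped = EFFORT_MAP[effort];
  if (!mapped) return {};
  // The thinking toggle is always forwarded; reasoning_effort only when
  // the opt-in flag is set (default off, matching v0.3.x).
  const { reasoning_effort, ...rest } = mapped;
  return reasoningEffort && reasoning_effort !== undefined
    ? { ...rest, reasoning_effort }
    : rest;
}
```

With the flag off, an upstream that rejects unknown fields never sees `reasoning_effort`, which is the point of the default change.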
package/README.md CHANGED
@@ -1,5 +1,7 @@
  # codex-transfer
 
+ [English](./README.en.md) | **中文**
+
  > Responses API ↔ Chat Completions protocol translation bridge — seamlessly connect Codex CLI to DeepSeek, Kimi, Qwen, and any other OpenAI-compatible provider.
 
  ## Overview
@@ -46,7 +48,7 @@ codex-transfer [options]
  -m, --model MODEL Force override model name (highest priority)
  -c, --config PATH Path to config file (JSON)
  -k, --insecure Skip TLS certificate verification (corporate proxy / self-signed certificate scenarios)
- --no-reasoning-effort Don't pass reasoning_effort to upstream
+ --reasoning-effort Pass reasoning_effort to upstream (default: off)
  -d, --daemon Run in background, write logs to logs/ directory
  -h, --help Show help
  ```
@@ -89,7 +91,7 @@ CLI args > environment variables > config file > defaults
  "upstream": "https://api.deepseek.com/v1",
  "apiKey": "sk-your-key-here",
  "insecure": false,
- "reasoningEffort": true,
+ "reasoningEffort": false,
  "modelMap": {
  "*": "deepseek-v4-pro",
  "codex-auto-review": "deepseek-v4-pro"
@@ -106,7 +108,7 @@ CLI args > environment variables > config file > defaults
  | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
  | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
  | `CODEX_TRANSFER_INSECURE` | `false` | Set to `"1"` or `"true"` to skip TLS verification |
- | `CODEX_TRANSFER_REASONING_EFFORT` | `true` | Set to `"0"` or `"false"` to disable sending reasoning_effort |
+ | `CODEX_TRANSFER_REASONING_EFFORT` | `false` | Set to `"1"` or `"true"` to enable sending reasoning_effort |
 
  ### Model Name Mapping
 
@@ -209,7 +211,7 @@ Codex CLI controls model reasoning intensity via `reasoning.effort` (minimal/low/medium/
  | High | `high` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
  | Ultra | `xhigh` | `thinking: {type: "enabled"}, reasoning_effort: "max"` | `thinking: {type: "enabled"}` |
 
- **Compatibility strategy**: The `thinking` parameter is always sent (supported by all providers). `reasoning_effort` is sent by default (natively supported by DeepSeek); if the upstream doesn't support this field, it can be disabled via the `--no-reasoning-effort` CLI flag or `"reasoningEffort": false` in the config file.
+ **Compatibility strategy**: The `thinking` parameter is always sent (supported by all providers). `reasoning_effort` is not sent by default (to avoid 400 errors from incompatible upstreams); to enable it, use the `--reasoning-effort` CLI flag or `"reasoningEffort": true` in the config file.
 
  ### Session Management
 
@@ -371,6 +373,25 @@ const { app, port } = createTransfer({
 
  ---
 
+ ## Changelog
+
+ ### v0.3.0 (2026-05-08)
+
+ - **Token usage**: Extract usage from upstream streaming responses so Codex correctly displays context utilization (fixes the 0% display issue)
+ - **Usage details**: Auto-map `cached_tokens` and `reasoning_tokens`, compatible with both OpenAI and DeepSeek upstream formats
+ - **Reasoning effort**: Map `reasoning.effort` to the DeepSeek `thinking`/`reasoning_effort` fields and the MiMo/Kimi/GLM `thinking` toggle
+ - **Config-driven control**: `reasoning_effort` is off by default; enable it on demand via `--reasoning-effort` or the `reasoningEffort` config option
+
+ ### v0.2.0 (2026-05-07)
+
+ - First npm release
+ - Responses API ↔ Chat Completions bidirectional protocol translation
+ - Streaming SSE event sequence generation, session management, reasoning model support
+ - Model name mapping, daemon mode, log rotation
+ - TLS certificate bypass, config file support
+
+ ---
+
  ## License
 
  MIT
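The v0.3.0 usage-mapping change in the changelog can be illustrated with a sketch. `normalizeUsage` is a hypothetical helper, not the package's real code; it assumes OpenAI-style upstreams nest `cached_tokens` and `reasoning_tokens` under `prompt_tokens_details`/`completion_tokens_details`, while DeepSeek-style upstreams may also report cache hits as a top-level `prompt_cache_hit_tokens` field.

```javascript
// Hypothetical sketch: normalize either upstream usage shape into the
// fields a Responses-API client reads. Field locations are assumptions
// based on the two formats named in the changelog.
function normalizeUsage(u = {}) {
  return {
    input_tokens: u.prompt_tokens ?? 0,
    output_tokens: u.completion_tokens ?? 0,
    // OpenAI nests cached tokens in a details object; DeepSeek-style
    // payloads may expose cache hits at the top level instead.
    cached_tokens:
      u.prompt_tokens_details?.cached_tokens ?? u.prompt_cache_hit_tokens ?? 0,
    reasoning_tokens: u.completion_tokens_details?.reasoning_tokens ?? 0,
  };
}
```

Falling back to `0` rather than omitting the fields is what lets the client display a concrete context-utilization figure instead of 0%/unknown.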
package/dist/codex-transfer.mjs CHANGED
@@ -3417,7 +3417,7 @@ var DEFAULT_CONFIG = {
  apiKey: "",
  insecure: false,
  modelMap: {},
- reasoningEffort: true
+ reasoningEffort: false
  };
  function loadConfig(configPath) {
  const fileConfig = loadConfigFile(configPath);
@@ -3430,7 +3430,7 @@ function loadConfig(configPath) {
  ),
  modelMap: fileConfig.modelMap ?? DEFAULT_CONFIG.modelMap,
  reasoningEffort: parseBool(
- process.env.CODEX_TRANSFER_REASONING_EFFORT ?? fileConfig.reasoningEffort ?? true
+ process.env.CODEX_TRANSFER_REASONING_EFFORT ?? fileConfig.reasoningEffort ?? false
  )
  };
  }
@@ -3674,8 +3674,8 @@ for (let i = 0; i < args.length; i++) {
  overrides.configPath = args[++i];
  } else if ((a === "--model" || a === "-m") && args[i + 1]) {
  overrides.model = args[++i];
- } else if (a === "--no-reasoning-effort") {
- overrides.reasoningEffort = "false";
+ } else if (a === "--reasoning-effort") {
+ overrides.reasoningEffort = "true";
  } else if (a === "--help" || a === "-h") {
  console.log(`
  codex-transfer \u2014 Responses API \u2194 Chat Completions bridge
@@ -3690,7 +3690,7 @@ Options:
  -m, --model MODEL Override model name (highest priority model mapping)
  -c, --config PATH Path to config file (JSON)
  -k, --insecure Skip TLS certificate verification
- --no-reasoning-effort Don't send reasoning_effort to upstream
+ --reasoning-effort Send reasoning effort to upstream (default: off)
  -d, --daemon Run in background, logs to logs/ directory
  -h, --help Show this help
 
@@ -3700,7 +3700,7 @@ Environment variables:
  CODEX_TRANSFER_API_KEY Same as --api-key
  CODEX_TRANSFER_CONFIG Same as --config
  CODEX_TRANSFER_INSECURE Set to "1" to skip TLS verification
- CODEX_TRANSFER_REASONING_EFFORT Set to "0" to disable reasoning_effort
+ CODEX_TRANSFER_REASONING_EFFORT Set to "1" to enable reasoning_effort
 
  Config file options:
  modelMap Model name mapping, e.g. {"*": "deepseek-v4-pro"}
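The `loadConfig` hunk above shows the resolution order for `reasoningEffort`: environment variable, then config file, then the new default of `false`. A minimal sketch of that precedence follows; `parseBool` here is a stand-in for the bundle's helper, which this diff does not show, so its exact semantics are an assumption (treating `"1"`, `"true"`, and `true` as truthy, per the documented env values).

```javascript
// Stand-in for the bundle's (unshown) parseBool helper; semantics assumed
// from the documented "1"/"true" environment values.
function parseBool(v) {
  if (typeof v === "boolean") return v;
  return v === "1" || String(v).toLowerCase() === "true";
}

// Mirror of the precedence in the loadConfig hunk:
// env var ?? config-file value ?? default (false as of v0.3.x).
function resolveReasoningEffort(envValue, fileValue) {
  return parseBool(envValue ?? fileValue ?? false);
}
```

Note that because the env var is a string, an explicit `CODEX_TRANSFER_REASONING_EFFORT=false` overrides a config file's `"reasoningEffort": true`, matching the diff's "CLI args > environment variables > config file > defaults" ordering.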
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@classicicn/codex-transfer",
- "version": "0.3.0",
+ "version": "0.3.2",
  "description": "Responses API ↔ Chat Completions translation bridge for Codex — use DeepSeek, Kimi, Qwen, and other providers with Codex",
  "type": "module",
  "main": "dist/codex-transfer.mjs",