@classicicn/codex-transfer 0.3.1 → 0.3.3

package/README.en.md CHANGED
@@ -48,7 +48,7 @@ Options:
  -m, --model MODEL Force override model name (highest priority)
  -c, --config PATH Path to config file (JSON)
  -k, --insecure Skip TLS certificate verification
- --no-reasoning-effort Don't send reasoning_effort to upstream
+ --reasoning-effort Send reasoning effort to upstream (default: off)
  -d, --daemon Run in background, logs to logs/ directory
  -h, --help Show this help
  ```
@@ -91,7 +91,7 @@ Create a JSON config file at one of these locations (searched in order):
  "upstream": "https://api.deepseek.com/v1",
  "apiKey": "sk-your-key-here",
  "insecure": false,
- "reasoningEffort": true,
+ "reasoningEffort": false,
  "modelMap": {
  "*": "deepseek-v4-pro",
  "codex-auto-review": "deepseek-v4-pro"
@@ -108,7 +108,7 @@ Create a JSON config file at one of these locations (searched in order):
  | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
  | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
  | `CODEX_TRANSFER_INSECURE` | `false` | Set to `"1"` or `"true"` to skip TLS verification |
- | `CODEX_TRANSFER_REASONING_EFFORT` | `true` | Set to `"0"` or `"false"` to disable sending reasoning_effort |
+ | `CODEX_TRANSFER_REASONING_EFFORT` | `false` | Set to `"1"` or `"true"` to enable sending reasoning_effort |

  ### Model Name Mapping

@@ -211,7 +211,7 @@ Codex CLI controls model reasoning intensity via `reasoning.effort` (none/low/me
  | High | `high` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
  | Ultra | `xhigh` | `thinking: {type: "enabled"}, reasoning_effort: "max"` | `thinking: {type: "enabled"}` |

- **Compatibility strategy**: The `thinking` toggle is always sent (supported by all providers). `reasoning_effort` is sent by default (natively supported by DeepSeek); if the upstream doesn't support this field, it can be disabled via the `--no-reasoning-effort` CLI flag or `"reasoningEffort": false` in the config file.
+ **Compatibility strategy**: The `thinking` toggle is always sent (supported by all providers). `reasoning_effort` is not sent by default (to avoid 400 errors from incompatible providers); to enable it, use the `--reasoning-effort` CLI flag or `"reasoningEffort": true` in the config file.

  ### Session Management

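The compatibility strategy described above can be sketched as a small helper. This is an illustrative reconstruction, not the package's actual internals; the function name `buildThinkingParams` is invented for the example. It shows only what the diff documents: `thinking` is always included, and `reasoning_effort` is added only on explicit opt-in, with `xhigh` mapped to `"max"`.

```javascript
// Illustrative helper (not part of the package) for the strategy above:
// the `thinking` toggle is always included; `reasoning_effort` is added
// only when enabled via --reasoning-effort / "reasoningEffort": true.
function buildThinkingParams(effort, sendReasoningEffort) {
  const params = { thinking: { type: "enabled" } };
  if (sendReasoningEffort) {
    // Per the mapping table: `xhigh` becomes "max", other levels pass through.
    params.reasoning_effort = effort === "xhigh" ? "max" : effort;
  }
  return params;
}

// New default (reasoningEffort: false): only the universally supported toggle.
console.log(JSON.stringify(buildThinkingParams("high", false)));
// {"thinking":{"type":"enabled"}}

// Opt-in: the effort field is forwarded as well.
console.log(JSON.stringify(buildThinkingParams("xhigh", true)));
// {"thinking":{"type":"enabled"},"reasoning_effort":"max"}
```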
@@ -375,12 +375,17 @@ const { app, port } = createTransfer({

  ## Changelog

+ ### v0.3.3 (2026-05-09)
+
+ - **Non-streaming tool call support**: `fromChatResponse()` now correctly handles `tool_calls`, generating `function_call` output items (previously the non-streaming path ignored tool calls entirely)
+ - **Incremental streaming of arguments**: `response.output_item.added` and `response.function_call_arguments.delta` events are now emitted in real time during streaming, instead of being sent in a batch after the stream completes
+
  ### v0.3.0 (2026-05-08)

  - **Token usage**: Extract usage from upstream streaming responses — Codex now correctly displays context utilization (fixes 0% display issue)
  - **Usage details**: Auto-map `cached_tokens` and `reasoning_tokens`, compatible with both OpenAI and DeepSeek upstream formats
  - **Reasoning effort**: Map `reasoning.effort` to DeepSeek `thinking`/`reasoning_effort` and MiMo/Kimi/GLM `thinking` toggle
- - **Config-driven control**: New `--no-reasoning-effort` / `reasoningEffort` option to strip reasoning effort parameters on demand
+ - **Config-driven control**: `reasoning_effort` is off by default; enable via `--reasoning-effort` or the `reasoningEffort` config option

  ### v0.2.0 (2026-05-07)

package/README.md CHANGED
@@ -48,7 +48,7 @@ codex-transfer [options]
  -m, --model MODEL Force override model name (highest priority)
  -c, --config PATH Path to config file (JSON format)
  -k, --insecure Skip TLS certificate verification (corporate proxy / self-signed certificate scenarios)
- --no-reasoning-effort Don't pass reasoning_effort to upstream
+ --reasoning-effort Pass reasoning_effort to upstream (default: off)
  -d, --daemon Run in background, logs written to logs/ directory
  -h, --help Show help
  ```
@@ -91,7 +91,7 @@ CLI args > environment variables > config file > defaults
  "upstream": "https://api.deepseek.com/v1",
  "apiKey": "sk-your-key-here",
  "insecure": false,
- "reasoningEffort": true,
+ "reasoningEffort": false,
  "modelMap": {
  "*": "deepseek-v4-pro",
  "codex-auto-review": "deepseek-v4-pro"
@@ -108,7 +108,7 @@ CLI args > environment variables > config file > defaults
  | `CODEX_TRANSFER_API_KEY` | _(empty)_ | API key forwarded to upstream |
  | `CODEX_TRANSFER_CONFIG` | _(auto)_ | Path to config file |
  | `CODEX_TRANSFER_INSECURE` | `false` | Set to `"1"` or `"true"` to skip TLS verification |
- | `CODEX_TRANSFER_REASONING_EFFORT` | `true` | Set to `"0"` or `"false"` to disable passing reasoning_effort |
+ | `CODEX_TRANSFER_REASONING_EFFORT` | `false` | Set to `"1"` or `"true"` to enable passing reasoning_effort |

  ### Model Name Mapping

@@ -211,7 +211,7 @@ Codex CLI controls model reasoning intensity via `reasoning.effort` (very low/low/medium/
  | High | `high` | `thinking: {type: "enabled"}, reasoning_effort: "high"` | `thinking: {type: "enabled"}` |
  | Ultra | `xhigh` | `thinking: {type: "enabled"}, reasoning_effort: "max"` | `thinking: {type: "enabled"}` |

- **Compatibility strategy**: The `thinking` parameter is always sent (supported by all vendors). `reasoning_effort` is sent by default (natively supported by DeepSeek); if the upstream doesn't support this field, it can be disabled via the `--no-reasoning-effort` CLI flag or `"reasoningEffort": false` in the config file.
+ **Compatibility strategy**: The `thinking` parameter is always sent (supported by all vendors). `reasoning_effort` is not sent by default (to avoid 400 errors from incompatible upstreams); to enable it, use the `--reasoning-effort` CLI flag or `"reasoningEffort": true` in the config file.

  ### Session Management

@@ -375,12 +375,17 @@ const { app, port } = createTransfer({

  ## Changelog

+ ### v0.3.3 (2026-05-09)
+
+ - **Non-streaming tool call support**: `fromChatResponse()` now correctly handles `tool_calls`, generating `function_call` output items (previously the non-streaming path ignored tool calls entirely)
+ - **Incremental streaming of arguments**: `response.output_item.added` and `response.function_call_arguments.delta` events for function calls are emitted in real time during streaming, rather than all at once after the stream ends
+
  ### v0.3.0 (2026-05-08)

  - **Token usage**: Extract usage from upstream streaming responses so Codex correctly displays context utilization (fixes the 0% display issue)
  - **Usage details**: Auto-map `cached_tokens` and `reasoning_tokens`, compatible with both OpenAI and DeepSeek upstream formats
  - **Reasoning effort**: Map `reasoning.effort` to DeepSeek `thinking`/`reasoning_effort` and the MiMo/Kimi/GLM `thinking` toggle
- - **Config-driven control**: New `--no-reasoning-effort` / `reasoningEffort` options to strip reasoning-effort parameters on demand
+ - **Config-driven control**: `reasoning_effort` is off by default; enable it on demand via `--reasoning-effort` or the `reasoningEffort` config option

  ### v0.2.0 (2026-05-07)

@@ -2903,6 +2903,7 @@ var SessionStore = class {
  };

  // src/translate.ts
+ import { randomUUID as randomUUID2 } from "node:crypto";
  function toChatRequest(req, history, sessions) {
  const messages = [...history];
  const systemText = req.instructions ?? req.system;
@@ -3019,13 +3020,33 @@ function fromChatResponse(id, model, chat) {
  completion_tokens: 0,
  total_tokens: 0
  };
- const output = [
- {
+ const output = [];
+ if (text) {
+ output.push({
  type: "message",
  role: "assistant",
  content: [{ type: "output_text", text }]
- }
- ];
+ });
+ }
+ for (const tc of choice.message.tool_calls ?? []) {
+ const tcRecord = tc;
+ const func = tcRecord.function;
+ output.push({
+ type: "function_call",
+ id: `fc_${randomUUID2().replace(/-/g, "")}`,
+ call_id: tcRecord.id ?? "",
+ name: func?.name ?? "",
+ arguments: func?.arguments ?? "{}",
+ status: "completed"
+ });
+ }
+ if (output.length === 0) {
+ output.push({
+ type: "message",
+ role: "assistant",
+ content: [{ type: "output_text", text: "" }]
+ });
+ }
  const respUsage = mapUsage(usage);
  const response = {
  id,
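The non-streaming mapping in the hunk above can be exercised in isolation. The sketch below mirrors the diff's tool-call loop as a standalone function (`mapToolCalls` is an invented name, not exported by the package): each upstream Chat Completions `tool_calls` entry becomes a Responses API `function_call` output item with a fresh `fc_` item id.

```javascript
// Standalone sketch of the new tool-call mapping; mirrors the diff above
// but is not imported from the package.
import { randomUUID } from "node:crypto";

function mapToolCalls(toolCalls) {
  return (toolCalls ?? []).map((tc) => ({
    type: "function_call",
    id: `fc_${randomUUID().replace(/-/g, "")}`, // fresh item id per call
    call_id: tc.id ?? "",
    name: tc.function?.name ?? "",
    arguments: tc.function?.arguments ?? "{}",
    status: "completed"
  }));
}

const [item] = mapToolCalls([
  { id: "call_1", function: { name: "get_weather", arguments: '{"city":"Oslo"}' } }
]);
console.log(item.name, item.call_id, item.status);
// get_weather call_1 completed
```

Note the fallbacks match the diff: a missing `arguments` string becomes `"{}"`, and an absent `tool_calls` array yields no items (the diff then falls back to an empty assistant message).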
@@ -3107,7 +3128,7 @@ function reorderForToolCalls(messages) {
  }

  // src/stream.ts
- import { randomUUID as randomUUID2 } from "node:crypto";
+ import { randomUUID as randomUUID3 } from "node:crypto";
  async function* translateStream(args2, signal) {
  const {
  url,
@@ -3120,7 +3141,7 @@ async function* translateStream(args2, signal) {
  model
  } = args2;
  try {
- const msgItemId = `msg_${randomUUID2().replace(/-/g, "")}`;
+ const msgItemId = `msg_${randomUUID3().replace(/-/g, "")}`;
  const createdAt = Math.floor(Date.now() / 1e3);
  yield formatSSE("response.created", {
  type: "response.created",
@@ -3251,13 +3272,45 @@ async function* translateStream(args2, signal) {
  for (const dc of deltaCalls) {
  let entry = toolCalls.get(dc.index);
  if (!entry) {
- entry = { id: "", name: "", arguments: "" };
+ entry = {
+ id: "",
+ name: "",
+ arguments: "",
+ fcItemId: `fc_${randomUUID3().replace(/-/g, "")}`,
+ emittedAdded: false
+ };
  toolCalls.set(dc.index, entry);
  }
  if (dc.id) entry.id = dc.id;
  if (dc.function?.name) entry.name += dc.function.name;
- if (dc.function?.arguments)
+ if (!entry.emittedAdded && entry.id && entry.name) {
+ const fcOutputIndex = (emittedMessageItem ? 1 : 0) + dc.index;
+ yield formatSSE("response.output_item.added", {
+ type: "response.output_item.added",
+ output_index: fcOutputIndex,
+ item: {
+ type: "function_call",
+ id: entry.fcItemId,
+ call_id: entry.id,
+ name: entry.name,
+ arguments: "",
+ status: "in_progress"
+ }
+ });
+ entry.emittedAdded = true;
+ }
+ if (dc.function?.arguments) {
  entry.arguments += dc.function.arguments;
+ if (entry.emittedAdded) {
+ const fcOutputIndex = (emittedMessageItem ? 1 : 0) + dc.index;
+ yield formatSSE("response.function_call_arguments.delta", {
+ type: "response.function_call_arguments.delta",
+ item_id: entry.fcItemId,
+ output_index: fcOutputIndex,
+ delta: dc.function.arguments
+ });
+ }
+ }
  }
  }
  if (choice.finish_reason) {
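The delta-handling change above yields a different event order than v0.3.1: `output_item.added` now fires as soon as the call's id and name are known, and each `arguments` chunk is forwarded immediately instead of after the stream ends. The generator below is an illustrative reduction of that logic (it omits SSE framing and `output_index` bookkeeping) that emits just the event names.

```javascript
// Sketch of the new incremental event order for one streamed tool call.
// Event names match the hunk above; the generator itself is illustrative.
function* emitToolCallEvents(deltas) {
  const entry = { id: "", name: "", arguments: "", emittedAdded: false };
  for (const dc of deltas) {
    if (dc.id) entry.id = dc.id;
    if (dc.function?.name) entry.name += dc.function.name;
    // `output_item.added` fires as soon as both id and name are known...
    if (!entry.emittedAdded && entry.id && entry.name) {
      yield "response.output_item.added";
      entry.emittedAdded = true;
    }
    // ...and each arguments chunk is forwarded immediately, not batched.
    if (dc.function?.arguments) {
      entry.arguments += dc.function.arguments;
      if (entry.emittedAdded) yield "response.function_call_arguments.delta";
    }
  }
}

const events = [...emitToolCallEvents([
  { id: "call_1", function: { name: "get_weather" } },
  { function: { arguments: '{"city":' } },
  { function: { arguments: '"Oslo"}' } }
])];
console.log(events.join("\n"));
// response.output_item.added
// response.function_call_arguments.delta
// response.function_call_arguments.delta
```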
@@ -3294,43 +3347,39 @@ async function* translateStream(args2, signal) {
  const fcItems = [];
  let relIdx = 0;
  for (const [, tc] of toolCalls) {
- const fcItemId = `fc_${randomUUID2().replace(/-/g, "")}`;
  const outputIndex = baseIndex + relIdx;
- yield formatSSE("response.output_item.added", {
- type: "response.output_item.added",
- output_index: outputIndex,
- item: {
- type: "function_call",
- id: fcItemId,
- call_id: tc.id,
- name: tc.name,
- arguments: "",
- status: "in_progress"
- }
- });
- if (tc.arguments) {
- yield formatSSE("response.function_call_arguments.delta", {
- type: "response.function_call_arguments.delta",
- item_id: fcItemId,
+ if (!tc.emittedAdded && tc.id && tc.name) {
+ yield formatSSE("response.output_item.added", {
+ type: "response.output_item.added",
  output_index: outputIndex,
- delta: tc.arguments
+ item: {
+ type: "function_call",
+ id: tc.fcItemId,
+ call_id: tc.id,
+ name: tc.name,
+ arguments: "",
+ status: "in_progress"
+ }
+ });
+ tc.emittedAdded = true;
+ }
+ if (tc.emittedAdded) {
+ yield formatSSE("response.output_item.done", {
+ type: "response.output_item.done",
+ output_index: outputIndex,
+ item: {
+ type: "function_call",
+ id: tc.fcItemId,
+ call_id: tc.id,
+ name: tc.name,
+ arguments: tc.arguments,
+ status: "completed"
+ }
  });
  }
- yield formatSSE("response.output_item.done", {
- type: "response.output_item.done",
- output_index: outputIndex,
- item: {
- type: "function_call",
- id: fcItemId,
- call_id: tc.id,
- name: tc.name,
- arguments: tc.arguments,
- status: "completed"
- }
- });
  fcItems.push({
  type: "function_call",
- id: fcItemId,
+ id: tc.fcItemId,
  call_id: tc.id,
  name: tc.name,
  arguments: tc.arguments,
@@ -3417,7 +3466,7 @@ var DEFAULT_CONFIG = {
  apiKey: "",
  insecure: false,
  modelMap: {},
- reasoningEffort: true
+ reasoningEffort: false
  };
  function loadConfig(configPath) {
  const fileConfig = loadConfigFile(configPath);
@@ -3430,7 +3479,7 @@ function loadConfig(configPath) {
  ),
  modelMap: fileConfig.modelMap ?? DEFAULT_CONFIG.modelMap,
  reasoningEffort: parseBool(
- process.env.CODEX_TRANSFER_REASONING_EFFORT ?? fileConfig.reasoningEffort ?? true
+ process.env.CODEX_TRANSFER_REASONING_EFFORT ?? fileConfig.reasoningEffort ?? false
  )
  };
  }
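The `??` chain in `loadConfig` above resolves `reasoningEffort` in the documented precedence order: environment variable, then config file, then the new `false` default. The sketch below reproduces just that chain; `parseBool` is reconstructed for illustration and may differ from the package's actual implementation.

```javascript
// Illustrative reconstruction of the resolution order in loadConfig:
// env var, then config-file value, then the new `false` default.
function parseBool(v) {
  return v === true || v === "true" || v === "1";
}

function resolveReasoningEffort(envValue, fileValue) {
  return parseBool(envValue ?? fileValue ?? false);
}

console.log(resolveReasoningEffort(undefined, undefined)); // false (new default)
console.log(resolveReasoningEffort(undefined, true));      // true (from config file)
console.log(resolveReasoningEffort("0", true));            // false (env overrides file)
```

Because `??` only skips `null`/`undefined`, an env var set to `"0"` still reaches `parseBool` and wins over a config-file `true`.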
@@ -3674,8 +3723,8 @@ for (let i = 0; i < args.length; i++) {
  overrides.configPath = args[++i];
  } else if ((a === "--model" || a === "-m") && args[i + 1]) {
  overrides.model = args[++i];
- } else if (a === "--no-reasoning-effort") {
- overrides.reasoningEffort = "false";
+ } else if (a === "--reasoning-effort") {
+ overrides.reasoningEffort = "true";
  } else if (a === "--help" || a === "-h") {
  console.log(`
  codex-transfer \u2014 Responses API \u2194 Chat Completions bridge
@@ -3690,7 +3739,7 @@ Options:
  -m, --model MODEL Override model name (highest priority model mapping)
  -c, --config PATH Path to config file (JSON)
  -k, --insecure Skip TLS certificate verification
- --no-reasoning-effort Don't send reasoning_effort to upstream
+ --reasoning-effort Send reasoning effort to upstream (default: off)
  -d, --daemon Run in background, logs to logs/ directory
  -h, --help Show this help

@@ -3700,7 +3749,7 @@ Environment variables:
  CODEX_TRANSFER_API_KEY Same as --api-key
  CODEX_TRANSFER_CONFIG Same as --config
  CODEX_TRANSFER_INSECURE Set to "1" to skip TLS verification
- CODEX_TRANSFER_REASONING_EFFORT Set to "0" to disable reasoning_effort
+ CODEX_TRANSFER_REASONING_EFFORT Set to "1" to enable reasoning_effort

  Config file options:
  modelMap Model name mapping, e.g. {"*": "deepseek-v4-pro"}
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@classicicn/codex-transfer",
- "version": "0.3.1",
+ "version": "0.3.3",
  "description": "Responses API ↔ Chat Completions translation bridge for Codex — use DeepSeek, Kimi, Qwen, and other providers with Codex",
  "type": "module",
  "main": "dist/codex-transfer.mjs",