@tencent-ai/codebuddy-code 2.93.6 → 2.93.7-next.2babce8.20260428

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (35)
  1. package/CHANGELOG.md +24 -0
  2. package/dist/codebuddy-headless.js +188 -179
  3. package/dist/codebuddy.js +140 -131
  4. package/dist/web-ui/docs/cn/cli/env-vars.md +28 -0
  5. package/dist/web-ui/docs/cn/cli/iam.md +10 -0
  6. package/dist/web-ui/docs/cn/cli/models.md +105 -6
  7. package/dist/web-ui/docs/cn/cli/release-notes/README.md +4 -0
  8. package/dist/web-ui/docs/cn/cli/release-notes/v2.93.4.md +11 -0
  9. package/dist/web-ui/docs/cn/cli/release-notes/v2.93.5.md +5 -0
  10. package/dist/web-ui/docs/cn/cli/release-notes/v2.93.6.md +11 -0
  11. package/dist/web-ui/docs/cn/cli/settings.md +2 -0
  12. package/dist/web-ui/docs/en/cli/env-vars.md +28 -0
  13. package/dist/web-ui/docs/en/cli/iam.md +10 -0
  14. package/dist/web-ui/docs/en/cli/models.md +105 -6
  15. package/dist/web-ui/docs/en/cli/release-notes/README.md +4 -0
  16. package/dist/web-ui/docs/en/cli/release-notes/v2.93.4.md +11 -0
  17. package/dist/web-ui/docs/en/cli/release-notes/v2.93.5.md +5 -0
  18. package/dist/web-ui/docs/en/cli/release-notes/v2.93.6.md +11 -0
  19. package/dist/web-ui/docs/en/cli/release-notes/v2.93.7.md +22 -0
  20. package/dist/web-ui/docs/en/cli/settings.md +2 -0
  21. package/dist/web-ui/docs/search-index-en.json +1 -1
  22. package/dist/web-ui/docs/search-index-zh.json +1 -1
  23. package/dist/web-ui/docs/sidebar-en.json +1 -1
  24. package/dist/web-ui/docs/sidebar-zh.json +1 -1
  25. package/package.json +3 -2
  26. package/product.cloudhosted.json +7 -32
  27. package/product.internal.json +7 -32
  28. package/product.ioa.json +27 -32
  29. package/product.json +89 -54
  30. package/product.selfhosted.json +7 -32
  31. package/vendor/sandbox/sandbox-cli +0 -0
  32. package/vendor/sandbox/sandbox-cli.exe +0 -0
  33. package/vendor/sandbox/sandbox_ffi.dll +0 -0
  34. package/vendor/sandbox/tsbx.dll +0 -0
  35. package/vendor/sandbox/tsbx_sdk.dll +0 -0
@@ -91,6 +91,7 @@ CodeBuddy Code supports environment variables to control its behavior. These variables can
  | `CODEBUDDY_CODE_FILE_READ_MAX_OUTPUT_TOKENS` | Override the default token limit for file reads (default: 20000) |
  | `CODEBUDDY_STREAM_TIMEOUT_MS` | Maximum silent time between two data chunks in streaming responses (milliseconds) (default: 120000) |
  | `CODEBUDDY_FIRST_TOKEN_TIMEOUT_MS` | Maximum time to wait for the first model output (milliseconds) (default: 120000) |
+ | `CODEBUDDY_SESSION_MAX_ITEMS` | Maximum number of history messages replayed during `session/load` (default: 1000). Reverse reading of the JSONL stops once the threshold is reached and a user message is encountered. Increase it (for example to 2000 or more) when very long sessions must be supported, such as sandbox scenarios; zero, negative, or non-numeric values fall back to the default |

  ## Filesystem and Configuration

@@ -248,6 +249,33 @@ export CODEBUDDY_BASE_URL="https://api.example.com/v1"
  codebuddy --model your-model-name
  ```

+ #### DeepSeek Integration Example
+
+ To connect to any third-party model service compatible with the Anthropic protocol (such as DeepSeek), you only need to configure the Base URL, API Key, and model variables; no additional changes to `models.json` are required:
+
+ ```bash
+ # Endpoint and key
+ export CODEBUDDY_BASE_URL="https://api.deepseek.com"
+ export CODEBUDDY_API_KEY="<your-deepseek-api-key>"
+
+ # Default model for the main Agent
+ export CODEBUDDY_MODEL="deepseek-v4-pro"
+
+ # Large model used for complex reasoning
+ export CODEBUDDY_BIG_SLOW_MODEL="deepseek-v4-pro"
+
+ # Small model for background/lightweight tasks
+ export CODEBUDDY_SMALL_FAST_MODEL="deepseek-v4-flash"
+
+ # Model used by sub-agents (inherits from the main Agent when unset)
+ export CODEBUDDY_CODE_SUBAGENT_MODEL="deepseek-v4-flash"
+
+ # Optionally specify the main model explicitly via --model at startup
+ codebuddy --model deepseek-v4-pro
+ ```
+
+ > **Tip**: The variables above can also be written into the `env` field of `settings.json` so they apply automatically to every session, which makes team-wide configuration easier.
+
  ### China Edition Configuration

  ```bash
@@ -214,6 +214,16 @@ CodeBuddy Code supports several permission modes, which can be set in the [settings file](settings.md#
  `bypassPermissions` mode should only be used in secure, isolated environments, such as Docker containers or VMs. Using this mode in production environments or on systems with sensitive data may pose security risks.
  </Warning>

+ <Note>
+ **`trustAll` / `trustedDirectories` are not substitutes for permission modes.** These two fields only affect the **directory trust authorization prompt at startup** (the one-time popup asking "do you trust this directory and allow CodeBuddy to run in it"); they have nothing to do with whether an approval prompt appears when a tool executes.
+
+ - To skip tool approval, set `permissions.defaultMode` or use `--permission-mode bypassPermissions` / `-y` / `--dangerously-skip-permissions`
+ - To skip the directory trust popup, use `trustAll: true` or add the directory to `trustedDirectories`
+ - The two switches are independent; the directory trust popup still appears as usual when `bypassPermissions` is enabled, and vice versa
+
+ So the claim that "with `bypassPermissions` plus `trustAll` enabled, nothing else needs configuring" is inaccurate: the former controls tool approval, the latter controls directory trust, and they gate different confirmation entry points.
+ </Note>
+
  #### Working Directory

  By default, CodeBuddy can access files in the directory where it was launched. You can extend this access:
@@ -72,6 +72,7 @@
  | `supportsToolCall` | boolean | - | Whether tool calls are supported |
  | `supportsImages` | boolean | - | Whether image input is supported |
  | `supportsReasoning` | boolean | - | Whether reasoning mode is supported |
+ | `relatedModels` | object | - | Related-model configuration specifying which model id to use in each scenario (`lite`/`reasoning`/`vision`/`longContext`/`subagent`). See [Configuring Related Models](#configuring-related-models) |

  **Important Notes:**
  - Currently, only APIs in the OpenAI interface format are supported
@@ -283,26 +284,124 @@ http://localhost:11434

  ### DeepSeek Platform Configuration Example

- Using DeepSeek models:
+ Using DeepSeek models (once `url` is configured, the entry takes effect with "full replacement" semantics even if its id matches a cloud default, and it is not merged over by the cloud default entry):

  ```json
  {
    "models": [
      {
-       "id": "deepseek-chat",
-       "name": "DeepSeek Chat",
+       "id": "deepseek-v4-pro",
+       "name": "DeepSeek V4 Pro",
        "vendor": "DeepSeek",
        "url": "https://api.deepseek.com/v1/chat/completions",
-       "apiKey": "sk-your-deepseek-api-key",
-       "maxInputTokens": 32000,
-       "maxOutputTokens": 4096,
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
+       "supportsToolCall": true,
+       "supportsImages": false
+     },
+     {
+       "id": "deepseek-v4-flash",
+       "name": "DeepSeek V4 Flash",
+       "vendor": "DeepSeek",
+       "url": "https://api.deepseek.com/v1/chat/completions",
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
        "supportsToolCall": true,
        "supportsImages": false
      }
+   ],
+   "availableModels": [
+     "deepseek-v4-pro",
+     "deepseek-v4-flash"
+   ]
+ }
+ ```
+
+ Set the API key environment variable and start:
+
+ ```bash
+ export DEEPSEEK_API_KEY="<your-deepseek-api-key>"
+ codebuddy --model deepseek-v4-pro
+ ```
+
+ > **Tip**: If you prefer not to maintain a `models.json`, you can also connect to DeepSeek entirely through environment variables; see the [DeepSeek Integration Example in env-vars.md](./env-vars.md#deepseek-integration-example).
+
+ ### Configuring Related Models
+
+ Within a single session, CodeBuddy Code switches models by scenario, so a large model is not used for simple tasks and a general-purpose model is not used for requests that need reasoning, vision, or long context. These scenarios are declared via the `relatedModels` field on a model entry.
+
+ **Supported scenarios (variant type):**
+
+ | Scenario | Purpose | Current Status |
+ |------|------|---------|
+ | `lite` | Lightweight, fast model for background extraction, summarization, and other low-value requests; also the model behind the Agent tool's `model: "lite"` parameter | **Active** |
+ | `reasoning` | Reasoning-enhanced model for complex requests that need deep thinking; the model behind the Agent tool's `model: "reasoning"` parameter | **Active** |
+ | `subagent` | Default model for sub-agents / team members | **Reserved, not yet active**: the sub-agent model is currently determined by the `CODEBUDDY_CODE_SUBAGENT_MODEL` environment variable or the agent configuration's `models[0]`; this field is not read |
+ | `vision` | Vision model for requests that need to process images | **Reserved, not yet active**: the type is defined, but no call site consumes this variant yet |
+ | `longContext` | Long-context model for requests with very long context | **Reserved, not yet active**: the type is defined, but no call site consumes this variant yet |
+
+ > **Currently usable**: only the `lite` and `reasoning` variants are consumed by the agent-manager and mapped into the model-switching logic. `subagent` / `vision` / `longContext` exist only in the type definitions, reserved for future iterations; writing them into `relatedModels` today will not error, but it will have no effect either.
+
+ **Key rule (must-read for custom models):**
+
+ > Custom models added via `models.json` **do not inherit** the product's built-in `defaultRelatedModels`.
+ > If an entry does not declare `relatedModels` explicitly, every scenario falls back to the main model itself, meaning sub-agents, lite, and reasoning all run on the same large model, which is wasteful in both cost and speed.
+
+ **Configuration example (DeepSeek main model + flash as lite / reasoning):**
+
+ ```json
+ {
+   "models": [
+     {
+       "id": "deepseek-v4-pro",
+       "name": "DeepSeek V4 Pro",
+       "vendor": "DeepSeek",
+       "url": "https://api.deepseek.com/v1/chat/completions",
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
+       "supportsToolCall": true,
+       "relatedModels": {
+         "lite": "deepseek-v4-flash",
+         "reasoning": "deepseek-v4-pro"
+       }
+     },
+     {
+       "id": "deepseek-v4-flash",
+       "name": "DeepSeek V4 Flash",
+       "vendor": "DeepSeek",
+       "url": "https://api.deepseek.com/v1/chat/completions",
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
+       "supportsToolCall": true
+     }
+   ],
+   "availableModels": [
+     "deepseek-v4-pro",
+     "deepseek-v4-flash"
    ]
  }
  ```

+ **Resolution priority (highest to lowest; currently only `lite` / `reasoning` go through this chain):**
+
+ 1. Explicitly set environment variables (`CODEBUDDY_SMALL_FAST_MODEL` maps to `lite`, `CODEBUDDY_BIG_SLOW_MODEL` maps to `reasoning`)
+ 2. `relatedModels[variant]` on the current main model entry
+ 3. The product's built-in `defaultRelatedModels[variant]` (**built-in models only; custom models skip this step**)
+ 4. Fall back to the main model itself
+
+ > **Sub-agent model selection does not go through `relatedModels.subagent`**; it follows an independent chain: the Agent tool's `model` parameter > the `CODEBUDDY_CODE_SUBAGENT_MODEL` environment variable > the agent configuration's `models[0]` > the main model.
+
+ **Choosing between this and the environment-variable approach:**
+
+ - To have a main model switch automatically to a smaller same-vendor model in the `lite` / `reasoning` scenarios: declaring `relatedModels` on the main model entry is the most direct way, and the binding travels with the model.
+ - To have sub-agents use a small model: currently you must use the `CODEBUDDY_CODE_SUBAGENT_MODEL` environment variable, or put the small model in `models[0]` of the agent configuration; this is **not** done via `relatedModels.subagent`.
+ - To use different large/small model combinations per project: use environment variables (`CODEBUDDY_MODEL` / `CODEBUDDY_BIG_SLOW_MODEL` / `CODEBUDDY_SMALL_FAST_MODEL` / `CODEBUDDY_CODE_SUBAGENT_MODEL`) together with project-level `.env` switching.
+ - When both approaches are in effect at the same time, environment variables take priority.
+
  ### Complete Example

  ```json
@@ -17,6 +17,10 @@ Release Notes record the user-visible changes in each version, including:

  <!-- New versions are automatically added here -->

+ - [v2.93.7](./v2.93.7.md) - 2026-04-26
+ - [v2.93.6](./v2.93.6.md) - 2026-04-25
+ - [v2.93.5](./v2.93.5.md) - 2026-04-23
+ - [v2.93.4](./v2.93.4.md) - 2026-04-23
  - [v2.93.3](./v2.93.3.md) - 2026-04-22
  - [v2.93.2](./v2.93.2.md) - 2026-04-22
  - [v2.93.1](./v2.93.1.md) - 2026-04-21
@@ -0,0 +1,11 @@
+ # 🚀 CodeBuddy Code v2.93.4 Release
+
+ ## 🔧 Improvements
+
+ ### Hy3 Preview Model Released
+
+ Added support for the Hy3 preview model (China edition). The model supports a 192K context window, reasoning, and tool calling, providing a more capable AI assistant experience.
+
+ ### iOA Model Configuration Optimization
+
+ Optimized the Hy3 model configuration in the iOA edition: updated from the `dev-0402` version to the preview version and adjusted the inference parameters (`temperature`: 0.7→0.9; removed `top_k` and `repetition_penalty`) to improve response quality.
@@ -0,0 +1,5 @@
+ # 🚀 CodeBuddy Code v2.93.5 Release
+
+ ## 🔧 Improvements
+
+ - **New Model**: Released the GLM-5-1-MaaS model (China edition), expanding the available model options
@@ -0,0 +1,11 @@
+ # 🚀 CodeBuddy Code v2.93.6 Release
+
+ ## 🔧 Improvements
+
+ - **Model Selection for Skills / Commands / Sub-agents**: skill frontmatter, custom slash commands, and the `Agent` tool's `model` parameter now accept a model id, name, or variant (`lite` / `reasoning` / `default`). When nothing matches or the model is disabled, execution automatically falls back to the main agent's currently selected model and logs the event, making it easier to diagnose "a model was configured but didn't take effect" issues
+
+ ## 🐛 Bug Fixes
+
+ - **Background Agent Fails to Start on Name Collision**: Fixed an issue where a leftover teammate in the session (such as a `critic` not cleaned up from a previous round) prevented a background Agent with the same name from starting; a short hex suffix is now appended automatically and the start is retried
+ - **PWA Offline Loading Error**: Fixed a `non-precached-url` error caused by Service Worker initialization failing to find `/` in the precache, making PWA offline startup more stable
+ - **Canvas Terminal Fails to Reconnect After Refresh**: Fixed an issue where, after a refresh, a previously opened terminal took the "create new session" branch because two stores were in inconsistent states; the original session is now correctly reused
@@ -78,6 +78,8 @@ CodeBuddy Code uses a layered configuration system that lets you configure at different levels
  | `promptSuggestionEnabled` | Enable prompt suggestions, automatically predicting the next action after the agent completes a conversation (default: `true`) | `false` |
  | `reasoningEffort` | Reasoning effort level, controlling the depth of model reasoning. Options: `low`, `medium`, `high`, `xhigh`. When empty, the product configuration default is used. Can be toggled via the `/config` panel; selecting `auto` is equivalent to clearing this setting | `"high"` |
  | `memory` | [Experimental] Memory feature configuration, see [Memory Configuration](#memory-configuration-experimental) | `{"enabled": true}` |
+ | `trustedDirectories` | List of working directories that are already trusted. Matching directories no longer trigger the "trust this directory" authorization prompt at startup. Usually written automatically by the first-run popup, but can also be edited manually | `["~/workspace/myproj"]` |
+ | `trustAll` | Trust all working directories, so the "trust this directory" authorization prompt no longer appears at startup. **Only exempts directory trust authorization; it does not skip tool execution permissions.** Whether tool approval is prompted is still decided by `permissions.defaultMode` / `bypassPermissions` mode, independently of this field | `true` |

  ### Permission Settings

@@ -91,6 +91,7 @@ CodeBuddy Code supports environment variables to control its behavior. These var
  | `CODEBUDDY_CODE_FILE_READ_MAX_OUTPUT_TOKENS` | Override the default token limit for file reads (default: 20000) |
  | `CODEBUDDY_STREAM_TIMEOUT_MS` | Maximum silent time between two data chunks in streaming responses (milliseconds) (default: 120000) |
  | `CODEBUDDY_FIRST_TOKEN_TIMEOUT_MS` | Maximum time to wait for the first model output (milliseconds) (default: 120000) |
+ | `CODEBUDDY_SESSION_MAX_ITEMS` | Maximum number of history messages to replay during `session/load` (default: 1000). Stops reading the JSONL in reverse when the threshold is reached and a user message is encountered. Increase (e.g., 2000 or more) when very long sessions need to be supported (such as sandbox scenarios); zero/negative/non-numeric values fall back to the default |

  ## Filesystem and Configuration

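The truncation behaviour described for `CODEBUDDY_SESSION_MAX_ITEMS` can be sketched in a few lines. This is an illustrative reconstruction, not CodeBuddy's actual implementation: the function names, the JSONL record shape (a `role` field), and the choice to include the boundary user message are all assumptions.

```python
import json

def session_max_items(raw, default=1000):
    """Parse CODEBUDDY_SESSION_MAX_ITEMS: zero, negative, or
    non-numeric values fall back to the default, as documented."""
    try:
        n = int(raw)
    except (TypeError, ValueError):
        return default
    return n if n > 0 else default

def load_recent_items(jsonl_lines, max_items=1000):
    """Replay recent history by reading the JSONL in reverse; once the
    threshold is reached, stop at the next user message so the replayed
    history starts on a user turn rather than mid-exchange."""
    kept = []
    for line in reversed(jsonl_lines):
        record = json.loads(line)
        kept.append(record)
        if len(kept) >= max_items and record.get("role") == "user":
            break
    return list(reversed(kept))  # back to chronological order
```

The key point the sketch illustrates is that the limit is a soft threshold: reading continues past `max_items` until a user-message boundary is found.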
@@ -248,6 +249,33 @@ export CODEBUDDY_BASE_URL="https://api.example.com/v1"
  codebuddy --model your-model-name
  ```

+ #### DeepSeek Integration Example
+
+ To connect to any third-party model service compatible with the Anthropic protocol (such as DeepSeek), you only need to configure the Base URL, API Key, and model variables; no additional `models.json` modifications are required:
+
+ ```bash
+ # Endpoint and key
+ export CODEBUDDY_BASE_URL="https://api.deepseek.com"
+ export CODEBUDDY_API_KEY="<your-deepseek-api-key>"
+
+ # Default model for the main Agent
+ export CODEBUDDY_MODEL="deepseek-v4-pro"
+
+ # Large model used for complex reasoning
+ export CODEBUDDY_BIG_SLOW_MODEL="deepseek-v4-pro"
+
+ # Small model for background/lightweight tasks
+ export CODEBUDDY_SMALL_FAST_MODEL="deepseek-v4-flash"
+
+ # Model used by sub-agents (inherits from the main Agent when unset)
+ export CODEBUDDY_CODE_SUBAGENT_MODEL="deepseek-v4-flash"
+
+ # Optionally specify the main model explicitly via --model at startup
+ codebuddy --model deepseek-v4-pro
+ ```
+
+ > **Tip**: The variables above can also be written into the `env` field of `settings.json` so they automatically apply to each session, making team-wide configuration easier.
+
  ### China Edition Configuration

  ```bash
@@ -214,6 +214,16 @@ CodeBuddy Code supports several permission modes, which can be set as `defaultMo
  `bypassPermissions` mode should only be used in secure, isolated environments, such as Docker containers or VMs. Using this mode on production environments or systems with sensitive data may pose security risks.
  </Warning>

+ <Note>
+ **`trustAll` / `trustedDirectories` are not replacements for permission modes.** These two fields only affect the **directory trust authorization prompt at startup** (the one-time popup asking "trust this directory and allow CodeBuddy to run in it"); they have nothing to do with whether approval is prompted when a tool executes.
+
+ - To skip tool approval, set `permissions.defaultMode` or use `--permission-mode bypassPermissions` / `-y` / `--dangerously-skip-permissions`
+ - To skip the directory trust popup, use `trustAll: true` or add the directory to `trustedDirectories`
+ - The two switches are independent; the directory trust popup still appears normally when `bypassPermissions` is enabled, and vice versa
+
+ Therefore, the claim that "enabling `bypassPermissions` + `trustAll` means you don't need to configure anything else" is inaccurate: the former controls tool approval, the latter controls directory trust, and they address different confirmation entry points.
+ </Note>
+
  #### Working Directory

  By default, CodeBuddy can access files in the directory where it's launched. You can extend this access:
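The independence of the two gates can be illustrated with a small decision helper. The field names mirror the settings described above, but the function itself is a hypothetical, deliberately simplified sketch (real permission handling has more modes and per-tool rules), not CodeBuddy's actual code:

```python
def startup_prompts(settings, cwd):
    """Return which confirmation prompts would appear, illustrating
    that directory trust and tool approval are decided independently."""
    prompts = []

    # Directory trust gate: governed only by trustAll / trustedDirectories.
    trusted = settings.get("trustAll") or cwd in settings.get("trustedDirectories", [])
    if not trusted:
        prompts.append("directory-trust")

    # Tool approval gate: governed only by permissions.defaultMode
    # (simplified here to the bypassPermissions case).
    mode = settings.get("permissions", {}).get("defaultMode", "default")
    if mode != "bypassPermissions":
        prompts.append("tool-approval")

    return prompts
```

Note how neither branch reads the other's fields: enabling one switch never suppresses the other prompt.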
@@ -72,6 +72,7 @@ Define custom model list. You can add new models or override built-in model conf
  | `supportsToolCall` | boolean | - | Whether tool calls are supported |
  | `supportsImages` | boolean | - | Whether image input is supported |
  | `supportsReasoning` | boolean | - | Whether reasoning mode is supported |
+ | `relatedModels` | object | - | Related model configuration, specifies which model id to use in different scenarios (`lite`/`reasoning`/`vision`/`longContext`/`subagent`). See [Configuring Related Models](#configuring-related-models) for details |

  **Important Notes:**
  - Currently only supports OpenAI interface format API
@@ -283,26 +284,124 @@ Using OpenRouter to access various models:

  ### DeepSeek Platform Configuration Example

- Using DeepSeek models:
+ Using DeepSeek models (once `url` is configured, the entry takes effect with "full replacement" semantics even if its id matches a cloud default, and it is not merged over by the cloud default entry):

  ```json
  {
    "models": [
      {
-       "id": "deepseek-chat",
-       "name": "DeepSeek Chat",
+       "id": "deepseek-v4-pro",
+       "name": "DeepSeek V4 Pro",
        "vendor": "DeepSeek",
        "url": "https://api.deepseek.com/v1/chat/completions",
-       "apiKey": "sk-your-deepseek-api-key",
-       "maxInputTokens": 32000,
-       "maxOutputTokens": 4096,
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
+       "supportsToolCall": true,
+       "supportsImages": false
+     },
+     {
+       "id": "deepseek-v4-flash",
+       "name": "DeepSeek V4 Flash",
+       "vendor": "DeepSeek",
+       "url": "https://api.deepseek.com/v1/chat/completions",
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
        "supportsToolCall": true,
        "supportsImages": false
      }
+   ],
+   "availableModels": [
+     "deepseek-v4-pro",
+     "deepseek-v4-flash"
+   ]
+ }
+ ```
+
+ Set the API key environment variable and start:
+
+ ```bash
+ export DEEPSEEK_API_KEY="<your-deepseek-api-key>"
+ codebuddy --model deepseek-v4-pro
+ ```
+
+ > **Tip**: If you prefer not to maintain a `models.json`, you can also connect to DeepSeek entirely through environment variables. See the [DeepSeek Integration Example in env-vars.md](./env-vars.md#deepseek-integration-example).
+
+ ### Configuring Related Models
+
+ During a single session, CodeBuddy Code switches models based on the scenario, to avoid using a large model for simple tasks or a general-purpose model for requests that need reasoning / vision / long context. These scenarios are declared via the `relatedModels` field on a model entry.
+
+ **Supported scenarios (variant type):**
+
+ | Scenario | Purpose | Current Status |
+ |------|------|---------|
+ | `lite` | Lightweight fast model, used for background extraction, summarization, and other low-value requests; also the model corresponding to the Agent tool's `model: "lite"` parameter | **Active** |
+ | `reasoning` | Reasoning-enhanced model, used for complex reasoning requiring deep thinking; the model corresponding to the Agent tool's `model: "reasoning"` parameter | **Active** |
+ | `subagent` | Default model used by sub-agents / team members | **Reserved, not yet active**: the sub-agent model is currently determined by the `CODEBUDDY_CODE_SUBAGENT_MODEL` environment variable or the agent configuration's `models[0]`; this field is not read |
+ | `vision` | Vision understanding model, used for requests that need to process images | **Reserved, not yet active**: the type is defined, but no call site consumes this variant yet |
+ | `longContext` | Long-context model, used for requests with very long context | **Reserved, not yet active**: the type is defined, but no call site consumes this variant yet |
+
+ > **Currently usable**: only the `lite` and `reasoning` variants are consumed by the agent-manager and mapped into the model-switching logic. `subagent` / `vision` / `longContext` exist only in the type definitions, reserved for future iterations; writing them into `relatedModels` today will not error, but it will have no effect either.
+
+ **Key rule (must-read for custom models):**
+
+ > Custom models added via `models.json` **do not inherit** the product's built-in `defaultRelatedModels`.
+ > If `relatedModels` is not explicitly declared on the entry itself, all scenarios fall back to the main model itself, meaning sub-agents, lite, and reasoning all run on the same large model, which is inefficient in both cost and speed.
+
+ **Configuration example (DeepSeek main model + flash as lite / reasoning):**
+
+ ```json
+ {
+   "models": [
+     {
+       "id": "deepseek-v4-pro",
+       "name": "DeepSeek V4 Pro",
+       "vendor": "DeepSeek",
+       "url": "https://api.deepseek.com/v1/chat/completions",
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
+       "supportsToolCall": true,
+       "relatedModels": {
+         "lite": "deepseek-v4-flash",
+         "reasoning": "deepseek-v4-pro"
+       }
+     },
+     {
+       "id": "deepseek-v4-flash",
+       "name": "DeepSeek V4 Flash",
+       "vendor": "DeepSeek",
+       "url": "https://api.deepseek.com/v1/chat/completions",
+       "apiKey": "${DEEPSEEK_API_KEY}",
+       "maxInputTokens": 128000,
+       "maxOutputTokens": 8192,
+       "supportsToolCall": true
+     }
+   ],
+   "availableModels": [
+     "deepseek-v4-pro",
+     "deepseek-v4-flash"
    ]
  }
  ```

+ **Resolution priority (from highest to lowest; currently only `lite` / `reasoning` go through this resolution chain):**
+
+ 1. Explicitly set environment variables (`CODEBUDDY_SMALL_FAST_MODEL` corresponds to `lite`, `CODEBUDDY_BIG_SLOW_MODEL` corresponds to `reasoning`)
+ 2. `relatedModels[variant]` on the current main model entry
+ 3. The product's built-in `defaultRelatedModels[variant]` (**only takes effect for built-in models; custom models skip this step**)
+ 4. Fall back to the main model itself
+
+ > **The resolution rule for sub-agent models does not go through `relatedModels.subagent`**; it follows an independent chain: the Agent tool's `model` parameter > the `CODEBUDDY_CODE_SUBAGENT_MODEL` environment variable > the agent configuration's `models[0]` > the main model.
+
+ **Trade-offs with the environment variable approach:**
+
+ - If you want a main model to automatically switch to another small model from the same vendor in the `lite` / `reasoning` scenarios: declaring `relatedModels` on the main model entry is the most intuitive way, and the binding follows the model.
+ - If you want sub-agents to use a small model: currently you must go through the environment variable `CODEBUDDY_CODE_SUBAGENT_MODEL`, or write the small model into `models[0]` in the agent configuration; this is **not** done via `relatedModels.subagent`.
+ - If you want different large/small model combinations in different projects: use environment variables (`CODEBUDDY_MODEL` / `CODEBUDDY_BIG_SLOW_MODEL` / `CODEBUDDY_SMALL_FAST_MODEL` / `CODEBUDDY_CODE_SUBAGENT_MODEL`) combined with project-level `.env` switching.
+ - When both approaches are active simultaneously, environment variables take priority.
+
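The resolution chain above can be sketched as a small function. Here `env`, `entry`, and `default_related` stand in for the process environment, the current main model entry, and the product's built-in `defaultRelatedModels`; the whole thing is an illustrative reading of the documented priority, not the agent-manager's real code:

```python
# Only these two variants are active today; others are reserved.
VARIANT_ENV = {
    "lite": "CODEBUDDY_SMALL_FAST_MODEL",
    "reasoning": "CODEBUDDY_BIG_SLOW_MODEL",
}

def resolve_variant(variant, main_model, entry, env,
                    default_related=None, is_builtin=False):
    """Resolve the model id for a variant following the documented chain."""
    # 1. An explicitly set environment variable wins.
    env_value = env.get(VARIANT_ENV.get(variant, ""))
    if env_value:
        return env_value
    # 2. relatedModels on the current main model entry.
    related = (entry.get("relatedModels") or {}).get(variant)
    if related:
        return related
    # 3. Built-in defaultRelatedModels; custom models skip this step.
    if is_builtin and default_related and default_related.get(variant):
        return default_related[variant]
    # 4. Fall back to the main model itself.
    return main_model
```

Under this reading, a custom entry without `relatedModels` and without the corresponding environment variable always lands on step 4, which is exactly the "everything runs on the same large model" pitfall called out above.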
  ### Complete Example

  ```json
@@ -17,6 +17,10 @@ Difference from CHANGELOG.md:

  <!-- New versions are automatically added here -->

+ - [v2.93.7](./v2.93.7.md) - 2026-04-26
+ - [v2.93.6](./v2.93.6.md) - 2026-04-25
+ - [v2.93.5](./v2.93.5.md) - 2026-04-23
+ - [v2.93.4](./v2.93.4.md) - 2026-04-23
  - [v2.93.3](./v2.93.3.md) - 2026-04-22
  - [v2.93.2](./v2.93.2.md) - 2026-04-22
  - [v2.93.1](./v2.93.1.md) - 2026-04-21
@@ -0,0 +1,11 @@
+ # 🚀 CodeBuddy Code v2.93.4 Release
+
+ ## 🔧 Improvements
+
+ ### Hy3 Preview Model Released
+
+ Added support for the Hy3 preview model (Domestic edition). This model supports a 192K context window, reasoning capabilities, and tool calling, providing you with a more powerful AI assistant experience.
+
+ ### iOA Model Configuration Optimization
+
+ Optimized the Hy3 model configuration for the iOA edition, upgrading from `dev-0402` to the preview version, and adjusted inference parameters (`temperature`: 0.7→0.9, removed `top_k` and `repetition_penalty`) to improve model response quality.
@@ -0,0 +1,5 @@
+ # 🚀 CodeBuddy Code v2.93.5 Release
+
+ ## 🔧 Improvements
+
+ - **New Model**: Released the GLM-5-1-MaaS model (Domestic edition), expanding the available model options
@@ -0,0 +1,11 @@
+ # 🚀 CodeBuddy Code v2.93.6 Release
+
+ ## 🔧 Improvements
+
+ - **Model Specification for Skills / Commands / Sub-agents**: The skill frontmatter, custom slash commands, and the `model` parameter of the `Agent` tool now support specifying a model by id, name, or variant (`lite` / `reasoning` / `default`). When no match is found or the model is disabled, it automatically falls back to the currently selected model of the main agent and logs the event, making it easier to diagnose cases where "a model was configured but didn't take effect"
+
+ ## 🐛 Bug Fixes
+
+ - **Background Agent Name Collision on Startup**: Fixed an issue where leftover teammates in the session (such as a `critic` from a previous round that wasn't cleaned up) prevented a background Agent with the same name from starting. It now automatically appends a short hex suffix and retries
+ - **PWA Offline Loading Error**: Fixed a `non-precached-url` error caused by the Service Worker initialization failing to find `/` in the precache, making PWA offline startup more stable
+ - **Canvas Terminal Cannot Reconnect After Refresh**: Fixed an issue where, after a refresh, a previously opened terminal would fall into the "create new session" branch due to inconsistent state between two stores. It now correctly reuses the original session
@@ -0,0 +1,22 @@
+ # 🚀 CodeBuddy Code v2.93.7 Release
+
+ ## 🔧 Improvements
+
+ - **Slimmer Default Main Agent Toolset**: The default agent toolchain now focuses on core tools such as Read / Write / Edit / Bash / PowerShell. Redundant tools that are no longer mounted by default have been removed; user-defined tool lists in settings are not affected
+ - **Built-in `/commit` and `/commit-push-pr` Commands**: Used for internal delegation by the main agent to standardize Git commit and PR flows; not visible to users in the `/` menu
+ - **Refined Tool Descriptions for Bash / Agent / Skill**: Trimmed the tool descriptions in the model-side system prompt and added guidance on how to write sub-agent prompts and how to choose between foreground and background execution
+ - **Long Conversation Recovery**: Improved stability of automatic recovery after context overflow, reducing context handling anomalies during error retries; refined pre-message compaction behavior in long sessions so the current input is no longer unexpectedly interrupted when compaction fails
+ - **Aligned Cost Semantics**: `/cost` now follows the industry-standard semantics: `input / cacheRead / cacheWrite` are mutually exclusive and sum to the total input, avoiding double counting
+ - **Unified 1000-based Token Display**: Token numbers in `/cost`, `/context`, and related panels now use 1000-based units (k / M), matching the conventions in model provider documentation
+ - **Cross-Turn Memory Deduplication**: Memories already injected earlier in a session are no longer re-injected in later turns, leaving more "relevant memory" budget for new candidates
+ - **Exact ID Matching for Custom Agent Models**: The `model:` field in the frontmatter of `.codebuddy/agents/*.md` falls back to the main agent's current model if it is not a valid model id, preventing accidental literal name collisions with user-defined models
+
+ ## 🐛 Bug Fixes
+
+ - **Model Request 502 Stability**: Mitigated occasional `502 socket hang up` issues. HTTPS requests in non-proxy scenarios now enable TCP keep-alive to resist idle / NAT timeouts in intermediary gateways; connection-level failures that occur before response headers arrive (ECONNRESET / socket hang up, etc.) are transparently retried once. The retry is strictly limited to the phase before upstream inference starts, so it will not cause double token billing
+
+ ## 📝 Documentation
+
+ - **Third-Party Model Integration Examples**: `env-vars.md` / `models.md` now include complete configuration examples covering the endpoint, API key, and main / small / sub-agent models, with support for team-wide configuration via the `env` field in `settings.json`
+ - **`relatedModels` Field Documentation**: `models.md` adds documentation for the `relatedModels` field, explaining how to associate different model ids for scenarios like `lite` / `reasoning` / `vision` / `longContext` / `subagent`
+ - **Clarified Boundary of Permission Fields**: `iam.md` / `settings.md` now clarify that `trustAll` / `trustedDirectories` only affect the directory trust authorization prompt at startup; they do not bypass tool execution approval and are independent of `permissions.defaultMode`
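The `/cost` semantics described above (mutually exclusive `input` / `cacheRead` / `cacheWrite` buckets summing to the total input) and the 1000-based display units can be illustrated with a short sketch. The bucket names follow the release note; the helper functions themselves are hypothetical, not CodeBuddy's actual code:

```python
def total_input_tokens(usage):
    """input, cacheRead, and cacheWrite are mutually exclusive buckets,
    so the total input is their plain sum with no double counting."""
    return usage["input"] + usage["cacheRead"] + usage["cacheWrite"]

def format_tokens(n):
    """1000-based display units (k / M), matching provider conventions."""
    if n >= 1_000_000:
        return f"{n / 1_000_000:.1f}M"
    if n >= 1_000:
        return f"{n / 1_000:.1f}k"
    return str(n)
```

Under the old, non-exclusive semantics, cached tokens could be counted both in `input` and in a cache bucket; with exclusive buckets, a plain sum is always correct.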
@@ -78,6 +78,8 @@ The `settings.json` file is the official mechanism for configuring CodeBuddy Cod
  | `promptSuggestionEnabled` | Enable prompt suggestions, automatically predicting next actions after the agent completes a conversation (default: `true`) | `false` |
  | `reasoningEffort` | Reasoning effort level, controlling the depth of model reasoning. Options: `low`, `medium`, `high`, `xhigh`. When empty, uses the product configuration default. Can be toggled via the `/config` panel; selecting `auto` is equivalent to clearing this setting | `"high"` |
  | `memory` | [Experimental] Memory feature configuration, see [Memory Configuration](#memory-configuration-experimental) | `{"enabled": true}` |
+ | `trustedDirectories` | List of working directories that have already been trusted. Matching directories will not trigger the "trust this directory" authorization prompt at startup. Usually populated automatically by the first-run popup, but can also be edited manually | `["~/workspace/myproj"]` |
+ | `trustAll` | Trust all working directories, so the "trust this directory" authorization prompt no longer appears at startup. **Only exempts directory trust authorization; it does not skip tool execution approval.** Whether tool approval is prompted is still governed by `permissions.defaultMode` / `bypassPermissions` mode and is independent of this field | `true` |

  ### Permission Settings