jinzd-ai-cli 0.1.28 → 0.1.30

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CLAUDE.md CHANGED
@@ -343,6 +343,83 @@ const stdinLines = Array.isArray(rawStdin) ? rawStdin.map(String)
  - [x] **SSRF protection for web_fetch DNS resolution** (v0.1.25): added a `resolveAndCheck()` helper that pre-resolves hostnames with `dns.promises.lookup()` and checks whether the resulting IP is a private address. Both the initial URL and every redirect target are validated.
  - [ ] **`persistentCwd` global state**: the bash tool's current working directory is a module-level global, so concurrent sessions could interfere with each other. No impact on the current single-session REPL, but it must be refactored into per-session state for the multi-session GUI.
 
+ ## Development log for this round (2026-03-01, v0.1.26 → v0.1.30)
+
+ ### Security fixes, continued: low-severity items L2–L7
+
+ **L2** (`src/tools/builtin/web-fetch.ts`): `htmlToText()` now enforces a 200 KB cap on HTML input (`HTML_REGEX_LIMIT = 200_000`); oversized input is truncated before the regex replacements run, preventing catastrophic regex slowdowns on maliciously large HTML.
+
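The truncate-before-regex guard described in L2 can be sketched as follows. Only `HTML_REGEX_LIMIT` comes from the log; `htmlToTextSketch` and its replacement chain are illustrative stand-ins, not the actual implementation:

```typescript
// Sketch of the size guard: truncate oversized HTML before any regex work.
// HTML_REGEX_LIMIT matches the log; the replacement chain below is a
// simplified stand-in for the real htmlToText() pipeline.
const HTML_REGEX_LIMIT = 200_000;

function htmlToTextSketch(html: string): string {
  const bounded =
    html.length > HTML_REGEX_LIMIT ? html.slice(0, HTML_REGEX_LIMIT) : html;
  return bounded
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop script bodies
    .replace(/<[^>]+>/g, " ")                   // strip remaining tags
    .replace(/\s+/g, " ")                       // collapse whitespace
    .trim();
}
```

The key point is that the slice happens first, so regex cost is bounded regardless of input size.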
+ **L3** (`src/session/session-manager.ts`): added a `safeDate(value: unknown): Date` helper that returns `new Date(0)` instead of `Invalid Date` for unparseable date strings, so the date comparisons in `listSessions()` and `searchMessages()` no longer fail silently.
+
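A minimal sketch of the `safeDate()` helper described above, assuming it is implemented with a NaN check on `getTime()` (the real implementation may differ):

```typescript
// Coerce unknown input to a Date, falling back to the epoch (new Date(0))
// when parsing fails, so comparisons never involve NaN.
function safeDate(value: unknown): Date {
  const d = new Date(value as string | number | Date);
  return Number.isNaN(d.getTime()) ? new Date(0) : d;
}
```

The epoch fallback keeps invalid entries sorted deterministically at the start instead of poisoning every comparison.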
+ **L4** (`src/tools/builtin/ask-user.ts` + `src/tools/builtin/google-search.ts`): added architecture notes to the module-level global context objects (`askUserContext` / `googleSearchContext`) flagging that they must be refactored into per-session state when the GUI gains multi-session support.
+
+ **L5** (`src/config/schema.ts`): documented the three-tier precedence of the `timeouts` field:
+ 1. `modelParams[modelId].timeout` - highest
+ 2. `timeouts[providerId]` - provider-level default
+ 3. hard-coded defaults of the built-in providers - lowest fallback
+
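The three-tier precedence above maps naturally onto nullish coalescing; `resolveTimeout()` and the `TimeoutConfig` shape are hypothetical names for illustration, only the precedence order comes from the log:

```typescript
// Hypothetical sketch of the documented timeout precedence.
interface TimeoutConfig {
  modelParams?: Record<string, { timeout?: number }>;
  timeouts?: Record<string, number>;
}

function resolveTimeout(
  cfg: TimeoutConfig,
  providerId: string,
  modelId: string,
  builtinDefault: number
): number {
  return (
    cfg.modelParams?.[modelId]?.timeout ?? // 1. per-model override (highest)
    cfg.timeouts?.[providerId] ??          // 2. provider-level default
    builtinDefault                         // 3. built-in fallback (lowest)
  );
}
```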
+ **L6/L7**: confirmed already safe (`call.arguments['path']` is already guarded with `String(... ?? '')`; the main-loop `line` handler already has a try/catch).
+
+ ---
+
+ ### New feature 1: multimodal image input (P0)
+
+ **Background**: users want to send images to each AI provider with the `@image.png describe this image` syntax.
+
+ **Type layer** (already in place, no changes needed): `ImageContentPart { type: 'image_url', image_url: { url: 'data:mime;base64,...' } }` in `src/core/types.ts` is fully compatible with the OpenAI format.
+
+ **Claude provider** (`src/providers/claude.ts`):
+ - New private method `contentToClaudeParts()`: converts the internal `image_url` format into the `{ type: 'image', source: { type: 'base64', media_type, data } }` shape the Anthropic SDK expects
+ - Applied to message construction in `chat()`, `chatStream()`, and `chatWithTools()`
+
+ **Gemini provider** (`src/providers/gemini.ts`):
+ - Removed the incorrect `getContentText()` call (it dropped images)
+ - New private method `contentToGeminiParts()`: converts to the Gemini SDK format `{ inlineData: { mimeType, data } }`
+ - Fixed message construction in `toGeminiHistory()`, `chat()`, `chatStream()`, and `chatWithTools()`
+ - In `chatWithTools()`, the variable `lastMessage: string` was renamed to `lastMsgParts: Part[]`, and history embedding now uses `{ role: 'user', parts: lastMsgParts }`
+
+ **OpenAI-compatible providers**: already ready; the formats match, no changes needed.
+
+ **REPL layer** (`src/repl/repl.ts`):
+ - New constant `MAX_IMAGE_BYTES = 10 * 1024 * 1024` (10 MB)
+ - `parseAtReferences()` checks the file size with `statSync` before reading an image; oversized files are pushed to `refs` with type `'toolarge'` instead of being inlined
+ - `handleChat()` gained a `toolarge` branch that prints a yellow warning: `⚠ Image too large (> 10 MB): <path>`
+ - Fixed `getVisionModelHint()`: Claude/Gemini return `null` (native vision support, no warning needed); DeepSeek shows an explicit not-supported hint; zhipu recommends `glm-4.6v`; kimi `moonshot-v1-*` recommends the matching vision variant
+
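The size-guarded inlining described in the REPL-layer bullets can be sketched as follows; `bufferToImagePart()` is a hypothetical helper for illustration (the real code checks the size with `statSync` before reading the file):

```typescript
// Hypothetical sketch: size-check an image buffer against MAX_IMAGE_BYTES,
// then wrap it into the internal OpenAI-style ImageContentPart as a data URL.
const MAX_IMAGE_BYTES = 10 * 1024 * 1024; // 10 MB

type ImageContentPart = { type: "image_url"; image_url: { url: string } };

function bufferToImagePart(
  buf: Uint8Array,
  mime: string
): ImageContentPart | "toolarge" {
  if (buf.byteLength > MAX_IMAGE_BYTES) return "toolarge"; // caller warns, skips inlining
  const data = Buffer.from(buf).toString("base64");
  return { type: "image_url", image_url: { url: `data:${mime};base64,${data}` } };
}
```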
+ ---
+
+ ### New feature 2: Escape/Ctrl+C interrupts streaming AI output (P0)
+
+ **Background**: streaming generation could not be interrupted; the user had to wait for the AI to return the complete response.
+
+ **Architecture**: both streaming entry points support interruption - `handleChatSimple()` and the tee-streaming branch of `handleChatWithTools()` (the `save_last_response` tool). The main path (`chatWithTools()` → `renderResponse()`) is non-streaming and does not support interruption yet.
+
+ **`src/core/types.ts`**: `ChatRequest` gained `signal?: AbortSignal`, passed through to the provider API calls.
+
+ **Provider layer**:
+ - `claude.ts`: `messages.stream()` receives `{ signal: request.signal }` as its second argument (supported by the Anthropic SDK)
+ - `openai-compatible.ts`: the streaming `create()` receives `{ signal: request.signal }` as its second argument (supported by the OpenAI SDK)
+ - `gemini.ts`: the generator loop checks `if (request.signal?.aborted) break` at the top of each iteration (a compatibility safety net)
+
+ **`src/repl/renderer.ts`**: the `options` of `renderStream()` gained `signal?: AbortSignal`:
+ - The `for await` loop checks `signal.aborted` at the top of each iteration → marks `interrupted = true` and `break`s
+ - SDK-thrown `AbortError`s are caught (detected by `name`) → likewise marked `interrupted`
+ - On interruption, `flushBuf()` writes any remaining buffered output and a dim `[interrupted]` notice is printed
+ - The return value is unchanged (`{ content, usage, tokensShown }`); `content` holds the partial output generated so far
+
+ **`src/repl/repl.ts`**:
+ - New class properties `streamAbortController: AbortController | null` + `_escHandler: ((d: Buffer) => void) | null`
+ - New `setupStreamInterrupt()` method: creates an `AbortController`, calls `process.stdin.resume()` (working around the `rl.pause()` suspension), and registers a raw-byte listener that detects a bare ESC (`0x1b` as a single byte, distinguishing it from ESC sequences) and Ctrl+C (`0x03`)
+ - New `teardownStreamInterrupt()` method: removes the listener, calls `process.stdin.pause()`, and clears the references
+ - The SIGINT handler gained a branch at the very top: when `streamAbortController` is non-null, `abort()` and return instead of exiting the program
+ - Both the streaming branch of `handleChatSimple()` and the tee-streaming branch are wrapped in `setupStreamInterrupt()`/`teardownStreamInterrupt()` and pass `signal` through
+
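The renderer-side pattern described above (signal check at the top of each iteration, plus catching SDK `AbortError`s) can be sketched generically; `renderAbortable()` is a hypothetical stand-in for `renderStream()`:

```typescript
// Generic sketch of the interruption pattern: check the AbortSignal before
// consuming each chunk, and treat an SDK-thrown AbortError as a clean
// interruption rather than a failure. Partial content is preserved.
async function renderAbortable(
  stream: AsyncIterable<string>,
  signal: AbortSignal
): Promise<{ content: string; interrupted: boolean }> {
  let content = "";
  let interrupted = false;
  try {
    for await (const delta of stream) {
      if (signal.aborted) { interrupted = true; break; }
      content += delta;
    }
  } catch (err) {
    if ((err as Error)?.name === "AbortError") interrupted = true; // SDK abort
    else throw err;
  }
  return { content, interrupted };
}
```

Checking the signal inside the loop covers SDKs that ignore the signal; the catch covers SDKs that throw on abort.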
+ **Versioning and wrap-up**
+ - `src/core/constants.ts`: VERSION `0.1.26` → `0.1.30` (folding in the intermediate bumps)
+ - `package.json`: version `0.1.29` → `0.1.30`
+
+ ---
+
  ## Development log for this round (2026-03-01, v0.1.25 → v0.1.26)
 
  ### New features: automatic context management + test reports + scaffolding
@@ -871,8 +948,8 @@ const stdinLines = Array.isArray(rawStdin) ? rawStdin.map(String)
  ### ❌ Missing-feature roadmap (new)
 
  #### P0 - core competitiveness gaps
- - [ ] **Interrupt generation**: `Escape` key immediately stops streaming AI output (currently uninterruptible; must wait for completion)
- - [ ] **Multimodal input (images)**: support pasting/referencing image paths as input (currently text-only)
+ - [x] **Interrupt generation** (v0.1.30): `Escape` or `Ctrl+C` immediately stops streaming AI output without exiting the program; `[interrupted]` is shown and the prompt returns; partially generated content is kept in the session
+ - [x] **Multimodal input (images)** (v0.1.30): `@image.png describe this image` syntax sends images; automatic conversion for Claude/Gemini/OpenAI-compatible formats; 10 MB size limit; `getVisionModelHint()` correctly recognizes native vision support in Claude/Gemini
  - [ ] **`/add-dir` command**: dynamically add directories to the context at runtime
  - [ ] **Parallel tool calls**: execute multiple tools returned by the AI at once (currently grouped serial execution)
 
@@ -8,7 +8,7 @@ import { platform } from "os";
  import chalk from "chalk";
 
  // src/core/constants.ts
- var VERSION = "0.1.26";
+ var VERSION = "0.1.30";
  var APP_NAME = "ai-cli";
  var CONFIG_DIR_NAME = ".aicli";
  var CONFIG_FILE_NAME = "config.json";
package/dist/index.js CHANGED
@@ -27,7 +27,7 @@ import {
  SUBAGENT_MAX_ROUNDS_LIMIT,
  VERSION,
  runTestsTool
- } from "./chunk-3EJWGNHV.js";
+ } from "./chunk-IW6VVPO4.js";
 
  // src/index.ts
  import { program } from "commander";
@@ -341,11 +341,41 @@ var ClaudeProvider = class extends BaseProvider {
  baseURL: options?.baseUrl
  });
  }
+ /**
+ * Convert the internal MessageContentPart[] format into the ContentBlockParam[]
+ * shape expected by the Anthropic SDK.
+ *
+ * Internal (OpenAI-compatible) format: { type: 'image_url', image_url: { url: 'data:mime;base64,...' } }
+ * Anthropic format: { type: 'image', source: { type: 'base64', media_type: ..., data: ... } }
+ *
+ * Text parts and plain strings pass through unchanged.
+ */
+ contentToClaudeParts(content) {
+ if (typeof content === "string") return content;
+ const blocks = [];
+ for (const part of content) {
+ if (part.type === "text") {
+ blocks.push({ type: "text", text: part.text });
+ } else if (part.type === "image_url") {
+ const match = part.image_url.url.match(/^data:([^;]+);base64,(.+)$/);
+ if (match) {
+ blocks.push({
+ type: "image",
+ source: {
+ type: "base64",
+ media_type: match[1],
+ data: match[2]
+ }
+ });
+ }
+ }
+ }
+ return blocks.length > 0 ? blocks : "";
+ }
  async chat(request) {
  try {
  const messages = request.messages.filter((m) => m.role !== "system").map((m) => ({
  role: m.role,
- content: m.content
+ content: this.contentToClaudeParts(m.content)
  }));
  const response = await this.client.messages.create({
  model: request.model,
@@ -372,14 +402,14 @@ var ClaudeProvider = class extends BaseProvider {
  try {
  const messages = request.messages.filter((m) => m.role !== "system").map((m) => ({
  role: m.role,
- content: m.content
+ content: this.contentToClaudeParts(m.content)
  }));
  const stream = this.client.messages.stream({
  model: request.model,
  messages,
  system: request.systemPrompt,
  max_tokens: request.maxTokens ?? 8192
- });
+ }, { signal: request.signal });
  for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
  yield { delta: event.delta.text, done: false };
@@ -412,7 +442,7 @@ var ClaudeProvider = class extends BaseProvider {
  required: Object.entries(t.parameters).filter(([, s]) => s.required).map(([k]) => k)
  }
  }));
- const baseMessages = request.messages.filter((m) => m.role !== "system").map((m) => ({ role: m.role, content: m.content }));
+ const baseMessages = request.messages.filter((m) => m.role !== "system").map((m) => ({ role: m.role, content: this.contentToClaudeParts(m.content) }));
  const extraMessages = request._extraMessages ?? [];
  const allMessages = [...baseMessages, ...extraMessages];
  const response = await this.client.messages.create({
@@ -490,14 +520,6 @@ var ClaudeProvider = class extends BaseProvider {
 
  // src/providers/gemini.ts
  import { GoogleGenerativeAI } from "@google/generative-ai";
-
- // src/core/types.ts
- function getContentText(content) {
- if (typeof content === "string") return content;
- return content.filter((p) => p.type === "text").map((p) => p.text).join("");
- }
-
- // src/providers/gemini.ts
  var GeminiProvider = class extends BaseProvider {
  client;
  /** Custom base URL (from config.customBaseUrls.gemini), used to configure a proxy or mirror endpoint */
@@ -555,10 +577,35 @@ var GeminiProvider = class extends BaseProvider {
  get reqOpts() {
  return this.baseUrl ? { baseUrl: this.baseUrl } : void 0;
  }
+ /**
+ * Convert the internal MessageContentPart[] format into the Part[] shape
+ * expected by the Gemini SDK.
+ *
+ * Internal (OpenAI-compatible) format: { type: 'image_url', image_url: { url: 'data:mime;base64,...' } }
+ * Gemini format: { inlineData: { mimeType: 'image/png', data: 'base64...' } }
+ *
+ * Empty strings are kept as [{ text: '' }] to avoid the Gemini SDK's empty-parts error.
+ */
+ contentToGeminiParts(content) {
+ if (typeof content === "string") {
+ return [{ text: content }];
+ }
+ const parts = [];
+ for (const p of content) {
+ if (p.type === "text") {
+ if (p.text.trim()) parts.push({ text: p.text });
+ } else if (p.type === "image_url") {
+ const match = p.image_url.url.match(/^data:([^;]+);base64,(.+)$/);
+ if (match) {
+ parts.push({ inlineData: { mimeType: match[1], data: match[2] } });
+ }
+ }
+ }
+ return parts.length > 0 ? parts : [{ text: "" }];
+ }
  toGeminiHistory(messages) {
  return messages.filter((m) => m.role !== "system").map((m) => ({
  role: m.role === "assistant" ? "model" : "user",
- parts: [{ text: getContentText(m.content) }]
+ parts: this.contentToGeminiParts(m.content)
  }));
  }
  async chat(request) {
@@ -571,9 +618,11 @@ var GeminiProvider = class extends BaseProvider {
  systemInstruction: request.systemPrompt
  }, this.reqOpts);
  const history = this.toGeminiHistory(request.messages.slice(0, -1));
- const lastMessage = getContentText(request.messages[request.messages.length - 1].content);
+ const lastMsgParts = this.contentToGeminiParts(
+ request.messages[request.messages.length - 1].content
+ );
  const chat = genModel.startChat({ history });
- const result = await chat.sendMessage(lastMessage);
+ const result = await chat.sendMessage(lastMsgParts);
  const meta = result.response.usageMetadata;
  return {
  content: result.response.text(),
@@ -595,10 +644,13 @@ var GeminiProvider = class extends BaseProvider {
  systemInstruction: request.systemPrompt
  }, this.reqOpts);
  const history = this.toGeminiHistory(request.messages.slice(0, -1));
- const lastMessage = getContentText(request.messages[request.messages.length - 1].content);
+ const lastMsgParts = this.contentToGeminiParts(
+ request.messages[request.messages.length - 1].content
+ );
  const chat = genModel.startChat({ history });
- const result = await chat.sendMessageStream(lastMessage);
+ const result = await chat.sendMessageStream(lastMsgParts);
  for await (const chunk of result.stream) {
+ if (request.signal?.aborted) break;
  yield { delta: chunk.text(), done: false };
  }
  const finalResponse = await result.response;
@@ -642,7 +694,7 @@ var GeminiProvider = class extends BaseProvider {
  }))
  }];
  const baseHistory = this.toGeminiHistory(request.messages.slice(0, -1));
- const lastMessage = getContentText(
+ const lastMsgParts = this.contentToGeminiParts(
  request.messages[request.messages.length - 1].content
  );
  const extraHistory = request._extraMessages ?? [];
@@ -650,12 +702,12 @@ var GeminiProvider = class extends BaseProvider {
  let msgToSend;
  if (extraHistory.length === 0) {
  fullHistory = [...baseHistory];
- msgToSend = lastMessage;
+ msgToSend = lastMsgParts;
  } else {
  const lastFnResponse = extraHistory[extraHistory.length - 1];
  fullHistory = [
  ...baseHistory,
- { role: "user", parts: [{ text: lastMessage }] },
+ { role: "user", parts: lastMsgParts },
  ...extraHistory.slice(0, -1)
  ];
  msgToSend = lastFnResponse.parts;
@@ -806,7 +858,8 @@ var OpenAICompatibleProvider = class extends BaseProvider {
  stream_options: { include_usage: true },
  ...request.thinking ? { thinking: { type: "enabled" } } : {}
  }, {
- timeout: request.timeout ?? this.defaultTimeout
+ timeout: request.timeout ?? this.defaultTimeout,
+ signal: request.signal
  });
  for await (const chunk of stream) {
  const choice = chunk.choices[0];
@@ -1327,6 +1380,12 @@ import { readFileSync as readFileSync2, writeFileSync as writeFileSync2, existsS
  import { join as join2 } from "path";
  import { v4 as uuidv4 } from "uuid";
 
+ // src/core/types.ts
+ function getContentText(content) {
+ if (typeof content === "string") return content;
+ return content.filter((p) => p.type === "text").map((p) => p.text).join("");
+ }
+
  // src/session/session.ts
  var Session = class _Session {
  id;
@@ -1793,19 +1852,36 @@ var Renderer = class {
  if (out) process.stdout.write(out);
  buf = "";
  };
- for await (const chunk of stream) {
- if (chunk.usage) {
- usage = chunk.usage;
+ let interrupted = false;
+ try {
+ for await (const chunk of stream) {
+ if (options?.signal?.aborted) {
+ interrupted = true;
+ break;
+ }
+ if (chunk.usage) {
+ usage = chunk.usage;
+ }
+ if (chunk.done) {
+ flushBuf();
+ break;
+ }
+ if (!chunk.delta) continue;
+ fullContent += chunk.delta;
+ buf += chunk.delta;
+ if (fileStream) fileStream.write(chunk.delta);
+ flushBuf();
  }
- if (chunk.done) {
+ } catch (err) {
+ if (err?.name === "AbortError") {
+ interrupted = true;
  flushBuf();
- break;
+ } else {
+ throw err;
  }
- if (!chunk.delta) continue;
- fullContent += chunk.delta;
- buf += chunk.delta;
- if (fileStream) fileStream.write(chunk.delta);
- flushBuf();
+ }
+ if (interrupted) {
+ process.stdout.write(chalk.dim(" [interrupted]\n"));
  }
  let tokensShown = false;
  if (options?.showTokens) {
@@ -3199,7 +3275,7 @@ ${text}
  description: "Run project tests and show structured report",
  usage: "/test [command|filter]",
  async execute(args, _ctx) {
- const { executeTests } = await import("./run-tests-MKMB3TXE.js");
+ const { executeTests } = await import("./run-tests-VVR5SMST.js");
  const argStr = args.join(" ").trim();
  let testArgs = {};
  if (argStr) {
@@ -6931,6 +7007,7 @@ var IMAGE_MIME = {
  ".gif": "image/gif",
  ".webp": "image/webp"
  };
+ var MAX_IMAGE_BYTES = 10 * 1024 * 1024;
  function parseAtReferences(input2, cwd) {
  const atPattern = /@(?:"([^"]+)"|'([^']+)'|(\S+))/g;
  const refs = [];
@@ -6947,6 +7024,11 @@ function parseAtReferences(input2, cwd) {
  continue;
  }
  if (mime) {
+ const fileSize = statSync6(absPath).size;
+ if (fileSize > MAX_IMAGE_BYTES) {
+ refs.push({ path: rawPath, type: "toolarge" });
+ continue;
+ }
  const data = readFileSync13(absPath).toString("base64");
  imageParts.push({
  type: "image_url",
@@ -7039,6 +7121,10 @@ var Repl = class {
  /** Skill manager */
  skillManager = null;
  customCommandManager = null;
+ /** Abort controller for streaming generation (used for ESC/Ctrl+C interruption) */
+ streamAbortController = null;
+ /** Reference to the ESC key listener (kept so it can be deregistered via removeListener) */
+ _escHandler = null;
  /**
  * Flag: an interactive list selector is in progress.
  * Like toolExecutor.confirming: the main-loop line handler ignores line events while this is true,
@@ -7453,6 +7539,10 @@ ${response.content.trim()}
  }
  }
  this.rl.on("SIGINT", () => {
+ if (this.streamAbortController) {
+ this.streamAbortController.abort();
+ return;
+ }
  if (this.toolExecutor.confirming) {
  this.toolExecutor.cancelConfirm();
  return;
@@ -7516,6 +7606,9 @@ ${response.content.trim()}
  for (const ref of refs) {
  if (ref.type === "notfound") {
  process.stdout.write(chalk10.yellow(` \u26A0 File not found: ${ref.path}
+ `));
+ } else if (ref.type === "toolarge") {
+ process.stdout.write(chalk10.yellow(` \u26A0 Image too large (> 10 MB): ${ref.path}
  `));
  } else if (ref.type === "image") {
  process.stdout.write(chalk10.dim(` \u{1F4CE} Image: ${ref.path}
@@ -7571,6 +7664,8 @@ ${response.content.trim()}
  getVisionModelHint() {
  const model = this.currentModel;
  const provider = this.currentProvider;
+ if (provider === "claude") return null;
+ if (provider === "gemini") return null;
  if (model.includes("vision") || model.includes("vl") || model.endsWith(".6v") || // glm-4.6v
  model.endsWith("-v")) return null;
  if (provider === "kimi") {
@@ -7583,6 +7678,9 @@ ${response.content.trim()}
  if (provider === "zhipu") {
  return `model "${model}" does not support images. Use /model glm-4.6v`;
  }
+ if (provider === "deepseek") {
+ return `model "${model}" does not support vision. DeepSeek standard models are text-only.`;
+ }
  return `model "${model}" may not support image input`;
  }
  /**
@@ -7782,36 +7880,71 @@ ${response.content.trim()}
  shouldShowTokens() {
  return this.config.get("ui").showTokenCount;
  }
+ /**
+ * Called before streaming starts: creates the AbortController and listens for raw
+ * ESC (0x1b) and Ctrl+C (0x03) bytes.
+ * rl.pause() suspends stdin; it is temporarily resumed here to receive raw keyboard bytes.
+ */
+ setupStreamInterrupt() {
+ const ac = new AbortController();
+ this.streamAbortController = ac;
+ process.stdin.resume();
+ const handler = (data) => {
+ if (data[0] === 27 && data.length === 1 || data[0] === 3) {
+ ac.abort();
+ }
+ };
+ this._escHandler = handler;
+ process.stdin.on("data", handler);
+ return ac;
+ }
+ /**
+ * Called after streaming ends (in the finally block): removes the ESC listener and restores the paused stdin state.
+ */
+ teardownStreamInterrupt() {
+ if (this._escHandler) {
+ process.stdin.removeListener("data", this._escHandler);
+ this._escHandler = null;
+ }
+ process.stdin.pause();
+ this.streamAbortController = null;
+ }
  async handleChatSimple(provider, messages) {
  const session = this.sessions.current;
  const useStreaming = this.config.get("ui").streaming;
  const modelParams = this.getModelParams();
  if (useStreaming) {
- const stream = provider.chatStream({
- messages,
- model: this.currentModel,
- systemPrompt: this.buildCurrentSystemPrompt(),
- stream: true,
- temperature: modelParams.temperature,
- maxTokens: modelParams.maxTokens,
- timeout: modelParams.timeout,
- thinking: modelParams.thinking
- });
- const showTokens = this.shouldShowTokens();
- const { content, usage, tokensShown } = await this.renderer.renderStream(stream, {
- showTokens,
- sessionTotal: showTokens ? { ...this.sessionTokenUsage } : void 0
- });
- lastResponseStore.content = content;
- session.addMessage({ role: "assistant", content, timestamp: /* @__PURE__ */ new Date() });
- this.events.emit("message.after", { content });
- if (usage) {
- this.sessionTokenUsage.inputTokens += usage.inputTokens;
- this.sessionTokenUsage.outputTokens += usage.outputTokens;
- session.addTokenUsage(usage);
- if (showTokens && !tokensShown) {
- this.renderer.renderUsage(usage, this.sessionTokenUsage);
+ const ac = this.setupStreamInterrupt();
+ try {
+ const stream = provider.chatStream({
+ messages,
+ model: this.currentModel,
+ systemPrompt: this.buildCurrentSystemPrompt(),
+ stream: true,
+ temperature: modelParams.temperature,
+ maxTokens: modelParams.maxTokens,
+ timeout: modelParams.timeout,
+ thinking: modelParams.thinking,
+ signal: ac.signal
+ });
+ const showTokens = this.shouldShowTokens();
+ const { content, usage, tokensShown } = await this.renderer.renderStream(stream, {
+ showTokens,
+ sessionTotal: showTokens ? { ...this.sessionTokenUsage } : void 0,
+ signal: ac.signal
+ });
+ lastResponseStore.content = content;
+ session.addMessage({ role: "assistant", content, timestamp: /* @__PURE__ */ new Date() });
+ this.events.emit("message.after", { content });
+ if (usage) {
+ this.sessionTokenUsage.inputTokens += usage.inputTokens;
+ this.sessionTokenUsage.outputTokens += usage.outputTokens;
+ session.addTokenUsage(usage);
+ if (showTokens && !tokensShown) {
+ this.renderer.renderUsage(usage, this.sessionTokenUsage);
+ }
  }
+ } finally {
+ this.teardownStreamInterrupt();
  }
  } else {
  const spinner = this.renderer.showSpinner("Thinking...");
@@ -7913,46 +8046,52 @@ ${response.content.trim()}
  const saveToFile = String(saveLastResponseCall.arguments["path"] ?? "");
  if (!saveToFile) {
  } else {
- const genStream = provider.chatStream({
- messages: apiMessages,
- model: this.currentModel,
- systemPrompt,
- stream: true,
- temperature: modelParams.temperature,
- maxTokens: modelParams.maxTokens,
- timeout: modelParams.timeout,
- thinking: modelParams.thinking,
- ...extraMessages.length > 0 ? { _extraMessages: extraMessages } : {}
- });
- const teeShowTokens = this.shouldShowTokens();
- const { content: genContent, usage: genUsage, tokensShown: teeTokShown } = await this.renderer.renderStream(
- genStream,
- { saveToFile, showTokens: teeShowTokens, sessionTotal: teeShowTokens ? { ...this.sessionTokenUsage } : void 0 }
- );
- lastResponseStore.content = genContent;
- if (genUsage) {
- roundUsage.inputTokens += genUsage.inputTokens;
- roundUsage.outputTokens += genUsage.outputTokens;
- }
- session.addMessage({ role: "assistant", content: genContent, timestamp: /* @__PURE__ */ new Date() });
- this.events.emit("message.after", { content: genContent });
- const lines = genContent.split("\n").length;
- const bytes = Buffer.byteLength(genContent, "utf-8");
- const syntheticResults = result.toolCalls.map((tc) => ({
- callId: tc.id,
- content: tc.name === "save_last_response" ? `File saved: ${saveToFile} (${lines} lines, ${bytes} bytes)` : `[skipped: file already saved by tee streaming]`,
- isError: false
- }));
- const reasoningContent2 = "reasoningContent" in result ? result.reasoningContent : void 0;
- const newMsgs2 = provider.buildToolResultMessages(result.toolCalls, syntheticResults, reasoningContent2);
- extraMessages.push(...newMsgs2);
- if (roundUsage.inputTokens > 0 || roundUsage.outputTokens > 0) {
- this.sessionTokenUsage.inputTokens += roundUsage.inputTokens;
- this.sessionTokenUsage.outputTokens += roundUsage.outputTokens;
- session.addTokenUsage(roundUsage);
- if (teeShowTokens && !teeTokShown) {
- this.renderer.renderUsage(roundUsage, this.sessionTokenUsage);
+ const teeAc = this.setupStreamInterrupt();
+ try {
+ const genStream = provider.chatStream({
+ messages: apiMessages,
+ model: this.currentModel,
+ systemPrompt,
+ stream: true,
+ temperature: modelParams.temperature,
+ maxTokens: modelParams.maxTokens,
+ timeout: modelParams.timeout,
+ thinking: modelParams.thinking,
+ signal: teeAc.signal,
+ ...extraMessages.length > 0 ? { _extraMessages: extraMessages } : {}
+ });
+ const teeShowTokens = this.shouldShowTokens();
+ const { content: genContent, usage: genUsage, tokensShown: teeTokShown } = await this.renderer.renderStream(
+ genStream,
+ { saveToFile, showTokens: teeShowTokens, sessionTotal: teeShowTokens ? { ...this.sessionTokenUsage } : void 0, signal: teeAc.signal }
+ );
+ lastResponseStore.content = genContent;
+ if (genUsage) {
+ roundUsage.inputTokens += genUsage.inputTokens;
+ roundUsage.outputTokens += genUsage.outputTokens;
+ }
+ session.addMessage({ role: "assistant", content: genContent, timestamp: /* @__PURE__ */ new Date() });
+ this.events.emit("message.after", { content: genContent });
+ const lines = genContent.split("\n").length;
+ const bytes = Buffer.byteLength(genContent, "utf-8");
+ const syntheticResults = result.toolCalls.map((tc) => ({
+ callId: tc.id,
+ content: tc.name === "save_last_response" ? `File saved: ${saveToFile} (${lines} lines, ${bytes} bytes)` : `[skipped: file already saved by tee streaming]`,
+ isError: false
+ }));
+ const reasoningContent2 = "reasoningContent" in result ? result.reasoningContent : void 0;
+ const newMsgs2 = provider.buildToolResultMessages(result.toolCalls, syntheticResults, reasoningContent2);
+ extraMessages.push(...newMsgs2);
+ if (roundUsage.inputTokens > 0 || roundUsage.outputTokens > 0) {
+ this.sessionTokenUsage.inputTokens += roundUsage.inputTokens;
+ this.sessionTokenUsage.outputTokens += roundUsage.outputTokens;
+ session.addTokenUsage(roundUsage);
+ if (teeShowTokens && !teeTokShown) {
+ this.renderer.renderUsage(roundUsage, this.sessionTokenUsage);
+ }
  }
+ } finally {
+ this.teardownStreamInterrupt();
  }
  return;
  }
@@ -2,7 +2,7 @@
  import {
  executeTests,
  runTestsTool
- } from "./chunk-3EJWGNHV.js";
+ } from "./chunk-IW6VVPO4.js";
  export {
  executeTests,
  runTestsTool
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "jinzd-ai-cli",
- "version": "0.1.28",
+ "version": "0.1.30",
  "description": "Cross-platform REPL-style AI CLI with multi-provider support",
  "type": "module",
  "main": "./dist/index.js",