jinzd-ai-cli 0.1.29 → 0.1.30

package/CLAUDE.md CHANGED
@@ -343,6 +343,83 @@ const stdinLines = Array.isArray(rawStdin) ? rawStdin.map(String)
343
343
  - [x] **SSRF protection during web_fetch DNS resolution** (v0.1.25): added a `resolveAndCheck()` function that pre-resolves the hostname with `dns.promises.lookup()` and checks whether the resulting IP is a private address. Both the initial URL and redirect targets are validated.
344
344
  - [ ] **`persistentCwd` global state**: the bash tool's current working directory is a module-level global, so concurrent sessions could interfere with each other. No impact for the current single-session REPL, but it must be refactored to per-session state before any multi-session GUI extension.
345
345
 
346
+ ## Development log for this round (2026-03-01, v0.1.26 → v0.1.30)
347
+
348
+ ### Security fixes, continued: low-severity findings L2–L7
349
+
350
+ **L2** (`src/tools/builtin/web-fetch.ts`): `htmlToText()` now enforces a 200 KB HTML size cap (`HTML_REGEX_LIMIT = 200_000`); oversized input is truncated before the regex replacements run, so malicious large HTML can no longer cause catastrophic regex slowdown.
351
+
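The guard described above can be sketched as follows. `HTML_REGEX_LIMIT` matches the value given in the fix; `capHtml` is a hypothetical name for the truncation step inside `htmlToText()`, whose body this diff does not show.

```typescript
// 200 KB cap from the L2 fix; input is truncated before any regex runs,
// so a maliciously large HTML payload cannot blow up regex processing time.
const HTML_REGEX_LIMIT = 200_000;

// Hypothetical helper: the real truncation lives inside htmlToText().
function capHtml(html: string): string {
  return html.length > HTML_REGEX_LIMIT ? html.slice(0, HTML_REGEX_LIMIT) : html;
}
```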
352
+ **L3** (`src/session/session-manager.ts`): added a `safeDate(value: unknown): Date` helper that returns `new Date(0)` instead of `Invalid Date` for unparseable date strings, so date comparisons in `listSessions()` and `searchMessages()` no longer fail silently.
353
+
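A minimal reconstruction of the helper, based only on the behavior described above; the actual implementation may differ:

```typescript
// Returns new Date(0) for anything that parses to Invalid Date, so callers
// comparing timestamps (listSessions/searchMessages) always get a real Date.
function safeDate(value: unknown): Date {
  const d = new Date(value as string | number | Date);
  return Number.isNaN(d.getTime()) ? new Date(0) : d;
}
```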
354
+ **L4** (`src/tools/builtin/ask-user.ts` + `src/tools/builtin/google-search.ts`): added architecture comments to the module-level global context objects (`askUserContext` / `googleSearchContext`) noting that they must be refactored to per-session state before any multi-session GUI extension.
355
+
356
+ **L5** (`src/config/schema.ts`): documented the three-tier precedence for the `timeouts` field:
357
+ 1. `modelParams[modelId].timeout` (highest)
358
+ 2. `timeouts[providerId]` (provider-level default)
359
+ 3. built-in provider hard-coded defaults (lowest fallback)
360
+
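The precedence can be illustrated with a small resolver. The shapes below are assumptions drawn from the config keys named above, not the actual schema types:

```typescript
// Assumed config shape, for illustration only.
interface Config {
  modelParams?: Record<string, { timeout?: number }>;
  timeouts?: Record<string, number>;
}

function resolveTimeout(cfg: Config, providerId: string, modelId: string, builtin: number): number {
  return (
    cfg.modelParams?.[modelId]?.timeout // 1. per-model override (highest)
    ?? cfg.timeouts?.[providerId]       // 2. provider-level default
    ?? builtin                          // 3. built-in fallback (lowest)
  );
}
```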
361
+ **L6/L7**: confirmed already safe (`call.arguments['path']` is already guarded with `String(... ?? '')`; the main loop's `line` handler already has a try/catch).
362
+
363
+ ---
364
+
365
+ ### New feature 1: multimodal image input (P0)
366
+
367
+ **Background**: users want to send images to each AI provider via the `@image.png describe this image` syntax.
368
+
369
+ **Type layer** (already in place, no changes needed): `ImageContentPart { type: 'image_url', image_url: { url: 'data:mime;base64,...' } }` in `src/core/types.ts` is fully compatible with the OpenAI format.
370
+
371
+ **Claude Provider** (`src/providers/claude.ts`):
372
+ - Added a private `contentToClaudeParts()` method that converts the internal `image_url` format into the `{ type: 'image', source: { type: 'base64', media_type, data } }` shape the Anthropic SDK expects
373
+ - Applied it to message construction in `chat()`, `chatStream()`, and `chatWithTools()`
374
+
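The conversion amounts to splitting the data URL into Anthropic's `source` fields. A sketch, since the actual `contentToClaudeParts()` body is not shown in this diff:

```typescript
type ImagePart = { type: "image_url"; image_url: { url: string } };

// Turn "data:<media_type>;base64,<data>" into the Anthropic image block shape.
function toClaudeImageBlock(part: ImagePart) {
  const m = /^data:([^;]+);base64,(.+)$/.exec(part.image_url.url);
  if (!m) throw new Error("expected a base64 data URL");
  return { type: "image", source: { type: "base64", media_type: m[1], data: m[2] } };
}
```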
375
+ **Gemini Provider** (`src/providers/gemini.ts`):
376
+ - Removed the incorrect `getContentText()` call (it silently dropped images)
377
+ - Added a private `contentToGeminiParts()` method that converts to the Gemini SDK format `{ inlineData: { mimeType, data } }`
378
+ - Fixed message construction in `toGeminiHistory()`, `chat()`, `chatStream()`, and `chatWithTools()`
379
+ - In `chatWithTools()`, renamed `lastMessage: string` to `lastMsgParts: Part[]`; the history embed is now `{ role: 'user', parts: lastMsgParts }`
380
+
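The Gemini conversion is analogous, targeting `inlineData` instead. Again a sketch, not the real method body:

```typescript
type ImgPart = { type: "image_url"; image_url: { url: string } };

// Gemini wants { inlineData: { mimeType, data } } carrying the same base64 payload.
function toGeminiImagePart(part: ImgPart) {
  const m = /^data:([^;]+);base64,(.+)$/.exec(part.image_url.url);
  if (!m) throw new Error("expected a base64 data URL");
  return { inlineData: { mimeType: m[1], data: m[2] } };
}
```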
381
+ **OpenAI-compatible providers**: already ready; the format matches, so no changes needed.
382
+
383
+ **REPL layer** (`src/repl/repl.ts`):
384
+ - Added the constant `MAX_IMAGE_BYTES = 10 * 1024 * 1024` (10 MB)
385
+ - `parseAtReferences()` checks the file size with `statSync` before reading an image; oversized files are added to `refs` with type `'toolarge'` instead of being inlined
386
+ - `handleChat()` gained a `toolarge` branch that prints a yellow warning: `⚠ Image too large (> 10 MB): <path>`
387
+ - Fixed `getVisionModelHint()`: Claude/Gemini return `null` (native support, no warning needed); DeepSeek shows an explicit not-supported hint; zhipu recommends `glm-4.6v`; kimi `moonshot-v1-*` models get the matching vision variant recommended
388
+
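The size guard reduces to a single comparison; in the real `parseAtReferences()` the byte count comes from `statSync(path).size` before the file is read. A sketch with an assumed helper name:

```typescript
const MAX_IMAGE_BYTES = 10 * 1024 * 1024; // 10 MB, per the description above

// Classify an image by size; 'toolarge' entries are warned about, not inlined.
// (The real code obtains sizeBytes via statSync(path).size.)
function classifyImageSize(sizeBytes: number): "ok" | "toolarge" {
  return sizeBytes > MAX_IMAGE_BYTES ? "toolarge" : "ok";
}
```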
389
+ ---
390
+
391
+ ### New feature 2: Escape/Ctrl+C interruption of streaming AI output (P0)
392
+
393
+ **Background**: streaming generation could not be interrupted; the user had to wait for the full AI response.
394
+
395
+ **Architecture**: both streaming entry points, `handleChatSimple()` and the tee-streaming branch of `handleChatWithTools()` (the `save_last_response` tool), support interruption. The main path (`chatWithTools()` → `renderResponse()`) is non-streaming and not yet supported.
396
+
397
+ **`src/core/types.ts`**: `ChatRequest` gains `signal?: AbortSignal`, passed through to the provider API calls.
398
+
399
+ **Provider layer**:
400
+ - `claude.ts`: `messages.stream()` receives `{ signal: request.signal }` as its second argument (supported by the Anthropic SDK)
401
+ - `openai-compatible.ts`: the streaming `create()` receives `{ signal: request.signal }` as its second argument (supported by the OpenAI SDK)
402
+ - `gemini.ts`: the generator loop checks `if (request.signal?.aborted) break` at the top of each iteration (a compatibility safety net)
403
+
404
+ **`src/repl/renderer.ts`**: `renderStream()` options gain `signal?: AbortSignal`:
405
+ - The top of the `for await` loop checks `signal.aborted`, sets `interrupted = true`, and breaks
406
+ - SDK-thrown `AbortError` is caught (detected by `name`) and likewise sets `interrupted`
407
+ - On interrupt, `flushBuf()` writes any remaining buffered output and a dim `[interrupted]` notice is printed
408
+ - The return shape is unchanged (`{ content, usage, tokensShown }`); `content` holds the partial output generated so far
409
+
410
+ **`src/repl/repl.ts`**:
411
+ - Added class properties `streamAbortController: AbortController | null` and `_escHandler: ((d: Buffer) => void) | null`
412
+ - Added a `setupStreamInterrupt()` method: creates an `AbortController`, calls `process.stdin.resume()` (to bypass the `rl.pause()` suspension), and registers a raw-byte listener that detects a bare ESC (`0x1b` as a single byte, distinguishing it from ESC sequences) and Ctrl+C (`0x03`)
413
+ - Added a `teardownStreamInterrupt()` method: removes the listener, calls `process.stdin.pause()`, and clears the references
414
+ - The SIGINT handler gains a top-priority branch: when `streamAbortController` is non-null, call `abort()` and return instead of exiting the program
415
+ - Both the streaming branch of `handleChatSimple()` and the tee-streaming branch are wrapped in `setupStreamInterrupt()`/`teardownStreamInterrupt()` and receive the `signal`
416
+
417
+ **Version and wrap-up**
418
+ - `src/core/constants.ts`: VERSION `0.1.26` → `0.1.30` (folding in the earlier bumps)
419
+ - `package.json`: version `0.1.29` → `0.1.30`
420
+
421
+ ---
422
+
346
423
  ## Development log for this round (2026-03-01, v0.1.25 → v0.1.26)
347
424
 
348
425
  ### New features: automatic context management + test report + scaffolding
@@ -871,8 +948,8 @@ const stdinLines = Array.isArray(rawStdin) ? rawStdin.map(String)
871
948
  ### ❌ Missing-feature roadmap (new)
872
949
 
873
950
  #### P0: core competitive gaps
874
- - [ ] **Interrupt generation**: pressing `Escape` immediately stops streaming AI output (currently uninterruptible; must wait for completion)
875
- - [ ] **Multimodal input (images)**: support pasting or referencing an image path as input (currently text-only)
951
+ - [x] **Interrupt generation** (v0.1.30): `Escape` or `Ctrl+C` immediately stops streaming AI output without exiting the program, prints `[interrupted]`, and restores the prompt; content generated so far is kept in the session
952
+ - [x] **Multimodal input (images)** (v0.1.30): the `@image.png describe this image` syntax sends images; automatic conversion for Claude/Gemini/OpenAI-compatible formats; 10 MB size limit; `getVisionModelHint()` correctly recognizes native Claude/Gemini vision support
876
953
  - [ ] **`/add-dir` command**: dynamically add directories to the context at runtime
877
954
  - [ ] **Parallel tool calls**: execute multiple tool calls returned by the AI concurrently (currently grouped serial execution)
878
955
 
@@ -8,7 +8,7 @@ import { platform } from "os";
8
8
  import chalk from "chalk";
9
9
 
10
10
  // src/core/constants.ts
11
- var VERSION = "0.1.26";
11
+ var VERSION = "0.1.30";
12
12
  var APP_NAME = "ai-cli";
13
13
  var CONFIG_DIR_NAME = ".aicli";
14
14
  var CONFIG_FILE_NAME = "config.json";
package/dist/index.js CHANGED
@@ -27,7 +27,7 @@ import {
27
27
  SUBAGENT_MAX_ROUNDS_LIMIT,
28
28
  VERSION,
29
29
  runTestsTool
30
- } from "./chunk-3EJWGNHV.js";
30
+ } from "./chunk-IW6VVPO4.js";
31
31
 
32
32
  // src/index.ts
33
33
  import { program } from "commander";
@@ -409,7 +409,7 @@ var ClaudeProvider = class extends BaseProvider {
409
409
  messages,
410
410
  system: request.systemPrompt,
411
411
  max_tokens: request.maxTokens ?? 8192
412
- });
412
+ }, { signal: request.signal });
413
413
  for await (const event of stream) {
414
414
  if (event.type === "content_block_delta" && event.delta.type === "text_delta") {
415
415
  yield { delta: event.delta.text, done: false };
@@ -650,6 +650,7 @@ var GeminiProvider = class extends BaseProvider {
650
650
  const chat = genModel.startChat({ history });
651
651
  const result = await chat.sendMessageStream(lastMsgParts);
652
652
  for await (const chunk of result.stream) {
653
+ if (request.signal?.aborted) break;
653
654
  yield { delta: chunk.text(), done: false };
654
655
  }
655
656
  const finalResponse = await result.response;
@@ -857,7 +858,8 @@ var OpenAICompatibleProvider = class extends BaseProvider {
857
858
  stream_options: { include_usage: true },
858
859
  ...request.thinking ? { thinking: { type: "enabled" } } : {}
859
860
  }, {
860
- timeout: request.timeout ?? this.defaultTimeout
861
+ timeout: request.timeout ?? this.defaultTimeout,
862
+ signal: request.signal
861
863
  });
862
864
  for await (const chunk of stream) {
863
865
  const choice = chunk.choices[0];
@@ -1850,19 +1852,36 @@ var Renderer = class {
1850
1852
  if (out) process.stdout.write(out);
1851
1853
  buf = "";
1852
1854
  };
1853
- for await (const chunk of stream) {
1854
- if (chunk.usage) {
1855
- usage = chunk.usage;
1855
+ let interrupted = false;
1856
+ try {
1857
+ for await (const chunk of stream) {
1858
+ if (options?.signal?.aborted) {
1859
+ interrupted = true;
1860
+ break;
1861
+ }
1862
+ if (chunk.usage) {
1863
+ usage = chunk.usage;
1864
+ }
1865
+ if (chunk.done) {
1866
+ flushBuf();
1867
+ break;
1868
+ }
1869
+ if (!chunk.delta) continue;
1870
+ fullContent += chunk.delta;
1871
+ buf += chunk.delta;
1872
+ if (fileStream) fileStream.write(chunk.delta);
1873
+ flushBuf();
1856
1874
  }
1857
- if (chunk.done) {
1875
+ } catch (err) {
1876
+ if (err?.name === "AbortError") {
1877
+ interrupted = true;
1858
1878
  flushBuf();
1859
- break;
1879
+ } else {
1880
+ throw err;
1860
1881
  }
1861
- if (!chunk.delta) continue;
1862
- fullContent += chunk.delta;
1863
- buf += chunk.delta;
1864
- if (fileStream) fileStream.write(chunk.delta);
1865
- flushBuf();
1882
+ }
1883
+ if (interrupted) {
1884
+ process.stdout.write(chalk.dim(" [interrupted]\n"));
1866
1885
  }
1867
1886
  let tokensShown = false;
1868
1887
  if (options?.showTokens) {
@@ -3256,7 +3275,7 @@ ${text}
3256
3275
  description: "Run project tests and show structured report",
3257
3276
  usage: "/test [command|filter]",
3258
3277
  async execute(args, _ctx) {
3259
- const { executeTests } = await import("./run-tests-MKMB3TXE.js");
3278
+ const { executeTests } = await import("./run-tests-VVR5SMST.js");
3260
3279
  const argStr = args.join(" ").trim();
3261
3280
  let testArgs = {};
3262
3281
  if (argStr) {
@@ -7102,6 +7121,10 @@ var Repl = class {
7102
7121
  /** Skill manager */
7103
7122
  skillManager = null;
7104
7123
  customCommandManager = null;
7124
+ /** Abort controller for streaming generation (used by the ESC/Ctrl+C interrupt) */
7125
+ streamAbortController = null;
7126
+ /** Reference to the ESC key listener (kept so removeListener can unregister it) */
7127
+ _escHandler = null;
7105
7128
  /**
7106
7129
  * 交互式列表选择器进行中标志。
7107
7130
  * 与 toolExecutor.confirming 类似:主循环 line handler 在此为 true 时忽略 line 事件,
@@ -7516,6 +7539,10 @@ ${response.content.trim()}
7516
7539
  }
7517
7540
  }
7518
7541
  this.rl.on("SIGINT", () => {
7542
+ if (this.streamAbortController) {
7543
+ this.streamAbortController.abort();
7544
+ return;
7545
+ }
7519
7546
  if (this.toolExecutor.confirming) {
7520
7547
  this.toolExecutor.cancelConfirm();
7521
7548
  return;
@@ -7853,36 +7880,71 @@ ${response.content.trim()}
7853
7880
  shouldShowTokens() {
7854
7881
  return this.config.get("ui").showTokenCount;
7855
7882
  }
7883
+ /**
7884
+ * Called before streaming starts: creates an AbortController and listens for raw ESC (0x1b) and Ctrl+C (0x03) bytes.
7885
+ * rl.pause() suspends stdin; temporarily resume it here to receive raw keyboard bytes.
7886
+ */
7887
+ setupStreamInterrupt() {
7888
+ const ac = new AbortController();
7889
+ this.streamAbortController = ac;
7890
+ process.stdin.resume();
7891
+ const handler = (data) => {
7892
+ if (data[0] === 27 && data.length === 1 || data[0] === 3) {
7893
+ ac.abort();
7894
+ }
7895
+ };
7896
+ this._escHandler = handler;
7897
+ process.stdin.on("data", handler);
7898
+ return ac;
7899
+ }
7900
+ /**
7901
+ * 流式生成结束后调用(finally 块中):清理 ESC 监听器,还原 stdin 暂停状态。
7902
+ */
7903
+ teardownStreamInterrupt() {
7904
+ if (this._escHandler) {
7905
+ process.stdin.removeListener("data", this._escHandler);
7906
+ this._escHandler = null;
7907
+ }
7908
+ process.stdin.pause();
7909
+ this.streamAbortController = null;
7910
+ }
7856
7911
  async handleChatSimple(provider, messages) {
7857
7912
  const session = this.sessions.current;
7858
7913
  const useStreaming = this.config.get("ui").streaming;
7859
7914
  const modelParams = this.getModelParams();
7860
7915
  if (useStreaming) {
7861
- const stream = provider.chatStream({
7862
- messages,
7863
- model: this.currentModel,
7864
- systemPrompt: this.buildCurrentSystemPrompt(),
7865
- stream: true,
7866
- temperature: modelParams.temperature,
7867
- maxTokens: modelParams.maxTokens,
7868
- timeout: modelParams.timeout,
7869
- thinking: modelParams.thinking
7870
- });
7871
- const showTokens = this.shouldShowTokens();
7872
- const { content, usage, tokensShown } = await this.renderer.renderStream(stream, {
7873
- showTokens,
7874
- sessionTotal: showTokens ? { ...this.sessionTokenUsage } : void 0
7875
- });
7876
- lastResponseStore.content = content;
7877
- session.addMessage({ role: "assistant", content, timestamp: /* @__PURE__ */ new Date() });
7878
- this.events.emit("message.after", { content });
7879
- if (usage) {
7880
- this.sessionTokenUsage.inputTokens += usage.inputTokens;
7881
- this.sessionTokenUsage.outputTokens += usage.outputTokens;
7882
- session.addTokenUsage(usage);
7883
- if (showTokens && !tokensShown) {
7884
- this.renderer.renderUsage(usage, this.sessionTokenUsage);
7916
+ const ac = this.setupStreamInterrupt();
7917
+ try {
7918
+ const stream = provider.chatStream({
7919
+ messages,
7920
+ model: this.currentModel,
7921
+ systemPrompt: this.buildCurrentSystemPrompt(),
7922
+ stream: true,
7923
+ temperature: modelParams.temperature,
7924
+ maxTokens: modelParams.maxTokens,
7925
+ timeout: modelParams.timeout,
7926
+ thinking: modelParams.thinking,
7927
+ signal: ac.signal
7928
+ });
7929
+ const showTokens = this.shouldShowTokens();
7930
+ const { content, usage, tokensShown } = await this.renderer.renderStream(stream, {
7931
+ showTokens,
7932
+ sessionTotal: showTokens ? { ...this.sessionTokenUsage } : void 0,
7933
+ signal: ac.signal
7934
+ });
7935
+ lastResponseStore.content = content;
7936
+ session.addMessage({ role: "assistant", content, timestamp: /* @__PURE__ */ new Date() });
7937
+ this.events.emit("message.after", { content });
7938
+ if (usage) {
7939
+ this.sessionTokenUsage.inputTokens += usage.inputTokens;
7940
+ this.sessionTokenUsage.outputTokens += usage.outputTokens;
7941
+ session.addTokenUsage(usage);
7942
+ if (showTokens && !tokensShown) {
7943
+ this.renderer.renderUsage(usage, this.sessionTokenUsage);
7944
+ }
7885
7945
  }
7946
+ } finally {
7947
+ this.teardownStreamInterrupt();
7886
7948
  }
7887
7949
  } else {
7888
7950
  const spinner = this.renderer.showSpinner("Thinking...");
@@ -7984,46 +8046,52 @@ ${response.content.trim()}
7984
8046
  const saveToFile = String(saveLastResponseCall.arguments["path"] ?? "");
7985
8047
  if (!saveToFile) {
7986
8048
  } else {
7987
- const genStream = provider.chatStream({
7988
- messages: apiMessages,
7989
- model: this.currentModel,
7990
- systemPrompt,
7991
- stream: true,
7992
- temperature: modelParams.temperature,
7993
- maxTokens: modelParams.maxTokens,
7994
- timeout: modelParams.timeout,
7995
- thinking: modelParams.thinking,
7996
- ...extraMessages.length > 0 ? { _extraMessages: extraMessages } : {}
7997
- });
7998
- const teeShowTokens = this.shouldShowTokens();
7999
- const { content: genContent, usage: genUsage, tokensShown: teeTokShown } = await this.renderer.renderStream(
8000
- genStream,
8001
- { saveToFile, showTokens: teeShowTokens, sessionTotal: teeShowTokens ? { ...this.sessionTokenUsage } : void 0 }
8002
- );
8003
- lastResponseStore.content = genContent;
8004
- if (genUsage) {
8005
- roundUsage.inputTokens += genUsage.inputTokens;
8006
- roundUsage.outputTokens += genUsage.outputTokens;
8007
- }
8008
- session.addMessage({ role: "assistant", content: genContent, timestamp: /* @__PURE__ */ new Date() });
8009
- this.events.emit("message.after", { content: genContent });
8010
- const lines = genContent.split("\n").length;
8011
- const bytes = Buffer.byteLength(genContent, "utf-8");
8012
- const syntheticResults = result.toolCalls.map((tc) => ({
8013
- callId: tc.id,
8014
- content: tc.name === "save_last_response" ? `File saved: ${saveToFile} (${lines} lines, ${bytes} bytes)` : `[skipped: file already saved by tee streaming]`,
8015
- isError: false
8016
- }));
8017
- const reasoningContent2 = "reasoningContent" in result ? result.reasoningContent : void 0;
8018
- const newMsgs2 = provider.buildToolResultMessages(result.toolCalls, syntheticResults, reasoningContent2);
8019
- extraMessages.push(...newMsgs2);
8020
- if (roundUsage.inputTokens > 0 || roundUsage.outputTokens > 0) {
8021
- this.sessionTokenUsage.inputTokens += roundUsage.inputTokens;
8022
- this.sessionTokenUsage.outputTokens += roundUsage.outputTokens;
8023
- session.addTokenUsage(roundUsage);
8024
- if (teeShowTokens && !teeTokShown) {
8025
- this.renderer.renderUsage(roundUsage, this.sessionTokenUsage);
8049
+ const teeAc = this.setupStreamInterrupt();
8050
+ try {
8051
+ const genStream = provider.chatStream({
8052
+ messages: apiMessages,
8053
+ model: this.currentModel,
8054
+ systemPrompt,
8055
+ stream: true,
8056
+ temperature: modelParams.temperature,
8057
+ maxTokens: modelParams.maxTokens,
8058
+ timeout: modelParams.timeout,
8059
+ thinking: modelParams.thinking,
8060
+ signal: teeAc.signal,
8061
+ ...extraMessages.length > 0 ? { _extraMessages: extraMessages } : {}
8062
+ });
8063
+ const teeShowTokens = this.shouldShowTokens();
8064
+ const { content: genContent, usage: genUsage, tokensShown: teeTokShown } = await this.renderer.renderStream(
8065
+ genStream,
8066
+ { saveToFile, showTokens: teeShowTokens, sessionTotal: teeShowTokens ? { ...this.sessionTokenUsage } : void 0, signal: teeAc.signal }
8067
+ );
8068
+ lastResponseStore.content = genContent;
8069
+ if (genUsage) {
8070
+ roundUsage.inputTokens += genUsage.inputTokens;
8071
+ roundUsage.outputTokens += genUsage.outputTokens;
8072
+ }
8073
+ session.addMessage({ role: "assistant", content: genContent, timestamp: /* @__PURE__ */ new Date() });
8074
+ this.events.emit("message.after", { content: genContent });
8075
+ const lines = genContent.split("\n").length;
8076
+ const bytes = Buffer.byteLength(genContent, "utf-8");
8077
+ const syntheticResults = result.toolCalls.map((tc) => ({
8078
+ callId: tc.id,
8079
+ content: tc.name === "save_last_response" ? `File saved: ${saveToFile} (${lines} lines, ${bytes} bytes)` : `[skipped: file already saved by tee streaming]`,
8080
+ isError: false
8081
+ }));
8082
+ const reasoningContent2 = "reasoningContent" in result ? result.reasoningContent : void 0;
8083
+ const newMsgs2 = provider.buildToolResultMessages(result.toolCalls, syntheticResults, reasoningContent2);
8084
+ extraMessages.push(...newMsgs2);
8085
+ if (roundUsage.inputTokens > 0 || roundUsage.outputTokens > 0) {
8086
+ this.sessionTokenUsage.inputTokens += roundUsage.inputTokens;
8087
+ this.sessionTokenUsage.outputTokens += roundUsage.outputTokens;
8088
+ session.addTokenUsage(roundUsage);
8089
+ if (teeShowTokens && !teeTokShown) {
8090
+ this.renderer.renderUsage(roundUsage, this.sessionTokenUsage);
8091
+ }
8026
8092
  }
8093
+ } finally {
8094
+ this.teardownStreamInterrupt();
8027
8095
  }
8028
8096
  return;
8029
8097
  }
@@ -2,7 +2,7 @@
2
2
  import {
3
3
  executeTests,
4
4
  runTestsTool
5
- } from "./chunk-3EJWGNHV.js";
5
+ } from "./chunk-IW6VVPO4.js";
6
6
  export {
7
7
  executeTests,
8
8
  runTestsTool
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "jinzd-ai-cli",
3
- "version": "0.1.29",
3
+ "version": "0.1.30",
4
4
  "description": "Cross-platform REPL-style AI CLI with multi-provider support",
5
5
  "type": "module",
6
6
  "main": "./dist/index.js",