jinzd-ai-cli 0.4.38 → 0.4.40

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -7,11 +7,11 @@
  [![npm version](https://img.shields.io/npm/v/jinzd-ai-cli)](https://www.npmjs.com/package/jinzd-ai-cli)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
  [![Node.js](https://img.shields.io/badge/node-%3E%3D20-brightgreen)](https://nodejs.org)
- [![Tests](https://img.shields.io/badge/tests-282%20passing-brightgreen)]()
+ [![Tests](https://img.shields.io/badge/tests-343%20passing-brightgreen)]()
  [![GitHub Release](https://img.shields.io/github/v/release/jinzhengdong/ai-cli)](https://github.com/jinzhengdong/ai-cli/releases)
  [![CI](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml/badge.svg)](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml)
 
- **ai-cli** is a powerful AI assistant that connects to 7 providers and executes tasks autonomously through agentic tool calling. Use it as a terminal REPL, a browser-based Web UI, or a standalone Electron desktop app.
+ **ai-cli** is a powerful AI assistant that connects to 8 providers (including local Ollama models) and executes tasks autonomously through agentic tool calling. Use it as a terminal REPL, a browser-based Web UI, or a standalone Electron desktop app.
 
  <p align="center">
  <img src="https://img.shields.io/badge/CLI-Terminal-blue" alt="CLI" />
@@ -21,7 +21,7 @@
 
  ## Highlights
 
- - **7 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models)
+ - **8 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models), **Ollama** (local models, no API key needed)
  - **3 Interfaces** — Terminal CLI, browser Web UI (`aicli web`), Electron desktop app
  - **Agentic Tool Calling** — AI autonomously runs shell commands, reads/writes files, searches code, fetches web, runs tests (up to 25 rounds)
  - **Streaming Tool Use** — Real-time streaming of AI reasoning and tool calls as they happen
@@ -116,9 +116,28 @@ aicli user delete x # Delete user
  | **OpenRouter** | 300+ models (Claude, GPT, Gemini, Llama, Qwen, Mistral...) | [openrouter.ai](https://openrouter.ai) |
  | **Zhipu** | GLM-4, GLM-5 | [open.bigmodel.cn](https://open.bigmodel.cn) |
  | **Kimi** | Moonshot, Kimi-K2 | [platform.moonshot.cn](https://platform.moonshot.cn) |
+ | **Ollama** | Any locally installed model (Llama, Qwen, Gemma, Mistral...) | No API key — [ollama.com](https://ollama.com) |
 
  Any OpenAI-compatible API can also be used via `customBaseUrls` in config.
 
+ ### Ollama (Local Models)
+
+ Run AI models entirely on your own hardware — no API key, no usage fees, no data leaving your machine.
+
+ ```bash
+ # Install Ollama from https://ollama.com, then pull a model:
+ ollama pull qwen3:4b # recommended: good tool-calling support
+ ollama pull gemma3:4b
+ ollama pull llama3.1:8b
+
+ # Start aicli and switch to Ollama:
+ aicli
+ [deepseek] > /provider ollama # auto-discovers installed models
+ [ollama] > /model # select from your local models
+ ```
+
+ > **Note**: Use models 4B+ for best results with tool calling. Small models (<4B) may struggle with the tool definitions injected by MCP servers.
+
  ## Built-in Tools (Agentic)
 
  AI autonomously invokes these 19 tools during conversations:
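The auto-discovery that `/provider ollama` performs in the section above can be sketched against Ollama's OpenAI-compatible API. This is an illustrative sketch, not ai-cli's actual implementation; the default base URL `http://localhost:11434` and the `GET /v1/models` response shape `{ data: [{ id }] }` are assumptions about a stock Ollama install.

```javascript
// Sketch: how a client might auto-discover locally installed Ollama models.
// Assumed (not taken from this diff): Ollama listens on localhost:11434 and
// serves an OpenAI-compatible GET /v1/models returning { data: [{ id }, ...] }.

// Pure helper: pull model ids out of an OpenAI-style model list.
function extractModelIds(payload) {
  return (payload.data ?? []).map((m) => m.id);
}

// Network half, shown for context (needs a running Ollama daemon):
async function discoverOllamaModels(baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/v1/models`);
  if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
  return extractModelIds(await res.json());
}

// Offline demo with a sample payload in the assumed shape:
const sample = { object: "list", data: [{ id: "qwen3:4b" }, { id: "llama3.1:8b" }] };
console.log(extractModelIds(sample)); // [ 'qwen3:4b', 'llama3.1:8b' ]
```

No API key or auth header is involved, which matches the "no API key needed" claim above.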
@@ -354,11 +373,11 @@ The Web UI (`aicli web`) provides a full-featured browser interface:
  ## Testing
 
  ```bash
- npm test # Run all 282 tests
+ npm test # Run all 343 tests
  npm run test:watch # Watch mode
  ```
 
- 18 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub renderer, hub discussion, hub presets, dev-state.
+ 21 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub renderer, hub discussion, hub presets, dev-state.
 
  ## Documentation
 
package/README.zh-CN.md CHANGED
@@ -2,18 +2,18 @@
 
  # ai-cli
 
- > Cross-platform AI coding assistant — terminal CLI, Web UI, and desktop app in one, with 7 providers and agentic tool calling
+ > Cross-platform AI coding assistant — terminal CLI, Web UI, and desktop app in one, with 8 providers (including local Ollama) and agentic tool calling
 
  [![npm version](https://img.shields.io/npm/v/jinzd-ai-cli)](https://www.npmjs.com/package/jinzd-ai-cli)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)
  [![Node.js](https://img.shields.io/badge/node-%3E%3D20-brightgreen)](https://nodejs.org)
- [![Tests](https://img.shields.io/badge/tests-282%20passing-brightgreen)]()
+ [![Tests](https://img.shields.io/badge/tests-343%20passing-brightgreen)]()
  [![GitHub Release](https://img.shields.io/github/v/release/jinzhengdong/ai-cli)](https://github.com/jinzhengdong/ai-cli/releases)
  [![CI](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml/badge.svg)](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml)
 
  ## Highlights
 
- - **7 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models)
+ - **8 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models), **Ollama** (local models, no API key needed)
  - **3 Interfaces** — terminal CLI, browser Web UI (`aicli web`), Electron desktop app
  - **Agentic Tool Calling** — AI autonomously runs bash commands, reads/writes files, searches code, fetches web pages, runs tests (up to 25 calls per round)
  - **Streaming Tool Use** — real-time streaming of AI reasoning and tool calls
@@ -108,9 +108,28 @@ aicli user delete x # Delete user
  | **OpenRouter** | 300+ models (Claude/GPT/Gemini/Llama/Qwen/Mistral...) | [openrouter.ai](https://openrouter.ai) |
  | **Zhipu** | GLM-4, GLM-5 | [open.bigmodel.cn](https://open.bigmodel.cn) |
  | **Kimi** | Moonshot, Kimi-K2 | [platform.moonshot.cn](https://platform.moonshot.cn) |
+ | **Ollama** | Any local model (Llama, Qwen, Gemma, Mistral...) | No API key — [ollama.com](https://ollama.com) |
 
  Any OpenAI-compatible API can also be connected via `customBaseUrls` in config.
 
+ ### Ollama (Local Models)
+
+ Run AI entirely on your own hardware — no API key, no usage fees, and no data leaves your machine.
+
+ ```bash
+ # Install Ollama from https://ollama.com, then pull a model:
+ ollama pull qwen3:4b # recommended: good tool-calling support
+ ollama pull gemma3:4b
+ ollama pull llama3.1:8b
+
+ # Start aicli and switch to Ollama:
+ aicli
+ [deepseek] > /provider ollama # auto-discovers installed models
+ [ollama] > /model # select from your local models
+ ```
+
+ > **Note**: Models of 4B or larger are recommended for reliable tool calling. Smaller models (<4B) may fail when MCP servers inject many tool definitions.
+
  ## Built-in Tools (Agentic)
 
  AI can autonomously invoke 19 tools during conversations:
@@ -367,11 +386,11 @@ The Web UI (`aicli web`) provides a full-featured browser interface:
  ## Testing
 
  ```bash
- npm test # Run all 282 tests
+ npm test # Run all 343 tests
  npm run test:watch # Watch mode
  ```
 
- 18 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub rendering, hub discussion, hub presets, dev state.
+ 21 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub rendering, hub discussion, hub presets, dev state.
 
  ## Documentation
 
@@ -6,7 +6,7 @@ import { platform } from "os";
  import chalk from "chalk";
 
  // src/core/constants.ts
- var VERSION = "0.4.38";
+ var VERSION = "0.4.40";
  var APP_NAME = "ai-cli";
  var CONFIG_DIR_NAME = ".aicli";
  var CONFIG_FILE_NAME = "config.json";
@@ -9,7 +9,7 @@ import {
  SUBAGENT_DEFAULT_MAX_ROUNDS,
  SUBAGENT_MAX_ROUNDS_LIMIT,
  runTestsTool
- } from "./chunk-XPQEGIFO.js";
+ } from "./chunk-RVJMWZEJ.js";
 
  // src/tools/builtin/bash.ts
  import { execSync } from "child_process";
@@ -8,7 +8,7 @@ import { platform } from "os";
  import chalk from "chalk";
 
  // src/core/constants.ts
- var VERSION = "0.4.38";
+ var VERSION = "0.4.40";
  var APP_NAME = "ai-cli";
  var CONFIG_DIR_NAME = ".aicli";
  var CONFIG_FILE_NAME = "config.json";
@@ -7,7 +7,7 @@ import {
  ProviderNotFoundError,
  RateLimitError,
  schemaToJsonSchema
- } from "./chunk-OZER6O2H.js";
+ } from "./chunk-KPHYJOJT.js";
  import {
  APP_NAME,
  CONFIG_DIR_NAME,
@@ -20,7 +20,7 @@ import {
  MCP_TOOL_PREFIX,
  PLUGINS_DIR_NAME,
  VERSION
- } from "./chunk-XPQEGIFO.js";
+ } from "./chunk-RVJMWZEJ.js";
 
  // src/config/config-manager.ts
  import { readFileSync, writeFileSync, existsSync, mkdirSync } from "fs";
@@ -555,6 +555,8 @@ var ClaudeProvider = class extends BaseProvider {
  const extraMessages = request._extraMessages ?? [];
  const allMessages = [...baseMessages, ...extraMessages];
  const { thinking, temperature } = this.buildThinkingParams(request);
+ let doneEmitted = false;
+ const rawContentBlocks = [];
  try {
  const stream = this.client.messages.stream({
  model: request.model,
@@ -567,8 +569,6 @@ var ClaudeProvider = class extends BaseProvider {
  }, { signal: request.signal });
  let currentBlockType = null;
  let currentToolIndex = 0;
- let doneEmitted2 = false;
- const rawContentBlocks2 = [];
  let currentBlockData = {};
  for await (const event of stream) {
  if (event.type === "content_block_start") {
@@ -627,20 +627,20 @@ var ClaudeProvider = class extends BaseProvider {
  }
  delete currentBlockData._inputJson;
  }
- rawContentBlocks2.push(currentBlockData);
+ rawContentBlocks.push(currentBlockData);
  currentBlockType = null;
  currentBlockData = {};
  } else if (event.type === "message_delta") {
  const usage = event.usage;
  if (usage) {
- doneEmitted2 = true;
+ doneEmitted = true;
  yield {
  type: "done",
  usage: {
  inputTokens: usage.input_tokens ?? 0,
  outputTokens: usage.output_tokens ?? 0
  },
- rawContent: rawContentBlocks2
+ rawContent: rawContentBlocks
  };
  }
  }
@@ -387,7 +387,7 @@ ${content}`);
  }
  }
  async function runTaskMode(config, providers, configManager, topic) {
- const { TaskOrchestrator } = await import("./task-orchestrator-XNINKBSB.js");
+ const { TaskOrchestrator } = await import("./task-orchestrator-7B3UBHMI.js");
  const orchestrator = new TaskOrchestrator(config, providers, configManager);
  let interrupted = false;
  const onSigint = () => {
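The ClaudeProvider hunks above move `doneEmitted` and `rawContentBlocks` from inside the `try` block to before it. A reduced sketch of why that scoping matters follows; the fake event stream and the post-loop fallback are illustrative assumptions, not the package's actual code.

```javascript
// Reduced sketch of the scoping change in the ClaudeProvider hunks above.
// When the flags are declared inside `try`, code after the catch cannot
// check whether a "done" event was ever emitted. Hoisting them above `try`
// lets them survive a stream that aborts mid-way. The event objects below
// stand in for the Anthropic SDK stream (an assumption for illustration).
async function* collectEvents(stream) {
  let doneEmitted = false;      // hoisted: still visible after try/catch
  const rawContentBlocks = [];
  try {
    for await (const event of stream) {
      if (event.type === "block") rawContentBlocks.push(event.data);
      if (event.type === "usage") {
        doneEmitted = true;
        yield { type: "done", usage: event.data, rawContent: rawContentBlocks };
      }
    }
  } catch (err) {
    // stream aborted mid-way; fall through to the fallback below
  }
  if (!doneEmitted) {
    // fallback: still emit a terminal event with whatever was collected
    yield { type: "done", usage: null, rawContent: rawContentBlocks };
  }
}
```

With this shape, a stream that never reports usage (or that throws partway through) still produces exactly one terminal "done" event carrying the blocks collected so far.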
package/dist/index.js CHANGED
@@ -24,7 +24,7 @@ import {
  saveDevState,
  sessionHasMeaningfulContent,
  setupProxy
- } from "./chunk-3UUFCRRL.js";
+ } from "./chunk-Y4B6HE6J.js";
  import {
  ToolExecutor,
  ToolRegistry,
@@ -37,7 +37,7 @@ import {
  spawnAgentContext,
  theme,
  undoStack
- } from "./chunk-OZER6O2H.js";
+ } from "./chunk-KPHYJOJT.js";
  import {
  fileCheckpoints
  } from "./chunk-4BKXL7SM.js";
@@ -61,7 +61,7 @@ import {
  SKILLS_DIR_NAME,
  VERSION,
  buildUserIdentityPrompt
- } from "./chunk-XPQEGIFO.js";
+ } from "./chunk-RVJMWZEJ.js";
 
  // src/index.ts
  import { program } from "commander";
@@ -194,9 +194,9 @@ var Renderer = class {
  console.log(label("Desc") + chalk.white(DESCRIPTION));
  console.log(label("Author") + theme.warning(AUTHOR) + theme.dim(" <" + AUTHOR_EMAIL + ">"));
  console.log(HR);
- console.log(theme.dim(" Supported Providers (7):"));
+ console.log(theme.dim(" Supported Providers (8):"));
  console.log(theme.dim(" OpenAI \xB7 DeepSeek \xB7 Kimi (Moonshot) \xB7 Claude (Anthropic)"));
- console.log(theme.dim(" Gemini (Google) \xB7 Zhipu (GLM) \xB7 Custom OpenAI-compatible"));
+ console.log(theme.dim(" Gemini (Google) \xB7 Zhipu (GLM) \xB7 OpenRouter \xB7 Ollama (Local, no API key)"));
  console.log(HR);
  const mcpToolCount = mcpInfo?.tools ?? 0;
  const toolTotal = 19 + pluginCount + mcpToolCount;
@@ -290,6 +290,7 @@ var Renderer = class {
  console.log(feat("Human participation: aicli join --human \u2014 real person joins multi-agent discussion"));
  console.log(feat("Context injection: aicli hub -c doc.md \u2014 inject external documents for all agents"));
  console.log(feat("Task Mode: aicli hub --task \u2014 agents plan, write code, and execute with tools (plan\u2192approve\u2192execute\u2192review)"));
+ console.log(feat("Ollama local models: built-in provider, no API key, auto-discovers installed models via /v1/models"));
  console.log();
  }
  printPrompt(provider, _model) {
@@ -2098,7 +2099,7 @@ ${hint}` : "")
  usage: "/test [command|filter]",
  async execute(args, ctx) {
  try {
- const { executeTests } = await import("./run-tests-3QVMT5WL.js");
+ const { executeTests } = await import("./run-tests-AXMJPIO7.js");
  const argStr = args.join(" ").trim();
  let testArgs = {};
  if (argStr) {
@@ -5469,7 +5470,7 @@ program.command("web").description("Start Web UI server with browser-based chat
  console.error("Error: Invalid port number. Must be between 1 and 65535.");
  process.exit(1);
  }
- const { startWebServer } = await import("./server-G6ALWI6B.js");
+ const { startWebServer } = await import("./server-QJSBW5NP.js");
  await startWebServer({ port, host: options.host });
  });
  program.command("user [action] [username]").description("Manage Web UI users (list | create <name> | delete <name> | reset-password <name> | migrate <name>)").action(async (action, username) => {
@@ -5702,7 +5703,7 @@ program.command("hub [topic]").description("Start multi-agent hub (discuss / bra
  }),
  config.get("customProviders")
  );
- const { startHub } = await import("./hub-VQPJKWKE.js");
+ const { startHub } = await import("./hub-YFHYOGDB.js");
  await startHub(
  {
  topic: topic ?? "",
@@ -1,7 +1,7 @@
  import {
  executeTests,
  runTestsTool
- } from "./chunk-TNRYHTGD.js";
+ } from "./chunk-CYBC7PWO.js";
  export {
  executeTests,
  runTestsTool
@@ -2,7 +2,7 @@
  import {
  executeTests,
  runTestsTool
- } from "./chunk-XPQEGIFO.js";
+ } from "./chunk-RVJMWZEJ.js";
  export {
  executeTests,
  runTestsTool
@@ -15,7 +15,7 @@ import {
  hadPreviousWriteToolCalls,
  loadDevState,
  setupProxy
- } from "./chunk-3UUFCRRL.js";
+ } from "./chunk-Y4B6HE6J.js";
  import {
  AuthManager
  } from "./chunk-BYNY5JPB.js";
@@ -33,7 +33,7 @@ import {
  spawnAgentContext,
  truncateOutput,
  undoStack
- } from "./chunk-OZER6O2H.js";
+ } from "./chunk-KPHYJOJT.js";
  import "./chunk-4BKXL7SM.js";
  import {
  AGENTIC_BEHAVIOR_GUIDELINE,
@@ -52,7 +52,7 @@ import {
  SKILLS_DIR_NAME,
  VERSION,
  buildUserIdentityPrompt
- } from "./chunk-XPQEGIFO.js";
+ } from "./chunk-RVJMWZEJ.js";
 
  // src/web/server.ts
  import express from "express";
@@ -1606,7 +1606,7 @@ ${undoResults.map((r) => ` \u2022 ${r}`).join("\n")}` });
  case "test": {
  this.send({ type: "info", message: "\u{1F9EA} Running tests..." });
  try {
- const { executeTests } = await import("./run-tests-3QVMT5WL.js");
+ const { executeTests } = await import("./run-tests-AXMJPIO7.js");
  const argStr = args.join(" ").trim();
  let testArgs = {};
  if (argStr) {
@@ -4,11 +4,11 @@ import {
  getDangerLevel,
  googleSearchContext,
  truncateOutput
- } from "./chunk-OZER6O2H.js";
+ } from "./chunk-KPHYJOJT.js";
  import "./chunk-4BKXL7SM.js";
  import {
  SUBAGENT_ALLOWED_TOOLS
- } from "./chunk-XPQEGIFO.js";
+ } from "./chunk-RVJMWZEJ.js";
 
  // src/hub/task-orchestrator.ts
  import { createInterface } from "readline";
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "jinzd-ai-cli",
- "version": "0.4.38",
+ "version": "0.4.40",
  "description": "Cross-platform REPL-style AI CLI with multi-provider support",
  "type": "module",
  "main": "./dist/index.js",
  "main": "./dist/index.js",