jinzd-ai-cli 0.4.38 → 0.4.39
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +24 -5
- package/README.zh-CN.md +24 -5
- package/dist/{chunk-XPQEGIFO.js → chunk-E7OZDXCK.js} +1 -1
- package/dist/{chunk-OZER6O2H.js → chunk-NCY3EPA6.js} +1 -1
- package/dist/{chunk-TNRYHTGD.js → chunk-NP3647PM.js} +1 -1
- package/dist/{chunk-3UUFCRRL.js → chunk-XRWQAZAC.js} +2 -2
- package/dist/{hub-VQPJKWKE.js → hub-PIWTW5RI.js} +1 -1
- package/dist/index.js +9 -8
- package/dist/{run-tests-4D4EQB33.js → run-tests-6MNCHVPH.js} +1 -1
- package/dist/{run-tests-3QVMT5WL.js → run-tests-E6KEK7CK.js} +1 -1
- package/dist/{server-G6ALWI6B.js → server-4NK3UFNS.js} +4 -4
- package/dist/{task-orchestrator-XNINKBSB.js → task-orchestrator-XFQD6X5A.js} +2 -2
- package/package.json +1 -1
package/README.md
CHANGED
@@ -7,11 +7,11 @@
 [](https://www.npmjs.com/package/jinzd-ai-cli)
 [](LICENSE)
 [](https://nodejs.org)
-[]()
 [](https://github.com/jinzhengdong/ai-cli/releases)
 [](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml)
 
-**ai-cli** is a powerful AI assistant that connects to
+**ai-cli** is a powerful AI assistant that connects to 8 providers (including local Ollama models) and executes tasks autonomously through agentic tool calling. Use it as a terminal REPL, a browser-based Web UI, or a standalone Electron desktop app.
 
 <p align="center">
   <img src="https://img.shields.io/badge/CLI-Terminal-blue" alt="CLI" />
@@ -21,7 +21,7 @@
 
 ## Highlights
 
-- **
+- **8 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models), **Ollama** (local models, no API key needed)
 - **3 Interfaces** — Terminal CLI, browser Web UI (`aicli web`), Electron desktop app
 - **Agentic Tool Calling** — AI autonomously runs shell commands, reads/writes files, searches code, fetches web, runs tests (up to 25 rounds)
 - **Streaming Tool Use** — Real-time streaming of AI reasoning and tool calls as they happen
@@ -116,9 +116,28 @@ aicli user delete x # Delete user
 | **OpenRouter** | 300+ models (Claude, GPT, Gemini, Llama, Qwen, Mistral...) | [openrouter.ai](https://openrouter.ai) |
 | **Zhipu** | GLM-4, GLM-5 | [open.bigmodel.cn](https://open.bigmodel.cn) |
 | **Kimi** | Moonshot, Kimi-K2 | [platform.moonshot.cn](https://platform.moonshot.cn) |
+| **Ollama** | Any locally installed model (Llama, Qwen, Gemma, Mistral...) | No API key — [ollama.com](https://ollama.com) |
 
 Any OpenAI-compatible API can also be used via `customBaseUrls` in config.
 
+### Ollama (Local Models)
+
+Run AI models entirely on your own hardware — no API key, no usage fees, no data leaving your machine.
+
+```bash
+# Install Ollama from https://ollama.com, then pull a model:
+ollama pull qwen3:4b     # recommended: good tool-calling support
+ollama pull gemma3:4b
+ollama pull llama3.1:8b
+
+# Start aicli and switch to Ollama:
+aicli
+[deepseek] > /provider ollama   # auto-discovers installed models
+[ollama] > /model               # select from your local models
+```
+
+> **Note**: Use models 4B+ for best results with tool calling. Small models (<4B) may struggle with the tool definitions injected by MCP servers.
+
 ## Built-in Tools (Agentic)
 
 AI autonomously invokes these 19 tools during conversations:
@@ -354,11 +373,11 @@ The Web UI (`aicli web`) provides a full-featured browser interface:
 ## Testing
 
 ```bash
-npm test             # Run all
+npm test             # Run all 343 tests
 npm run test:watch   # Watch mode
 ```
 
-
+21 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub renderer, hub discussion, hub presets, dev-state.
 
 ## Documentation
 
package/README.zh-CN.md
CHANGED
@@ -2,18 +2,18 @@
 
 # ai-cli
 
-> 跨平台 AI 编程助手 — CLI 终端、Web 界面、桌面应用三合一,支持
+> 跨平台 AI 编程助手 — CLI 终端、Web 界面、桌面应用三合一,支持 8 大 Provider(含本地 Ollama)与 Agentic 工具调用
 
 [](https://www.npmjs.com/package/jinzd-ai-cli)
 [](LICENSE)
 [](https://nodejs.org)
-[]()
 [](https://github.com/jinzhengdong/ai-cli/releases)
 [](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml)
 
 ## 特性亮点
 
-- **
+- **8 大内置 Provider** — Claude、Gemini、DeepSeek、OpenAI、智谱 GLM、Kimi、OpenRouter(300+ 模型)、**Ollama**(本地模型,无需 API Key)
 - **三种使用方式** — 终端 CLI、浏览器 Web UI(`aicli web`)、Electron 桌面应用
 - **Agentic 工具调用** — AI 自主执行 bash 命令、读写文件、搜索代码、抓取网页、运行测试(每轮最多 25 次)
 - **流式工具调用** — 实时流式展示 AI 推理过程和工具调用
@@ -108,9 +108,28 @@ aicli user delete x # 删除用户
 | **OpenRouter** | 300+ 模型(Claude/GPT/Gemini/Llama/Qwen/Mistral...) | [openrouter.ai](https://openrouter.ai) |
 | **智谱清言** | GLM-4, GLM-5 | [open.bigmodel.cn](https://open.bigmodel.cn) |
 | **Kimi** | Moonshot, Kimi-K2 | [platform.moonshot.cn](https://platform.moonshot.cn) |
+| **Ollama** | 任意本地模型(Llama、Qwen、Gemma、Mistral...) | 无需 API Key — [ollama.com](https://ollama.com) |
 
 也可通过 `customBaseUrls` 配置接入任意 OpenAI 兼容 API。
 
+### Ollama(本地模型)
+
+在自己的硬件上完全本地运行 AI,无需 API Key,无使用费,数据不离开本机。
+
+```bash
+# 从 https://ollama.com 安装 Ollama,然后拉取模型:
+ollama pull qwen3:4b     # 推荐:工具调用支持好
+ollama pull gemma3:4b
+ollama pull llama3.1:8b
+
+# 启动 aicli,切换到 Ollama:
+aicli
+[deepseek] > /provider ollama   # 自动发现已安装的模型
+[ollama] > /model               # 从本地模型中选择
+```
+
+> **注意**:建议使用 4B 及以上的模型以获得较好的工具调用支持。小模型(<4B)在 MCP 服务器注入大量工具定义时可能无法正常工作。
+
 ## 内置工具(Agentic 能力)
 
 AI 在对话中可自主调用 19 个工具:
@@ -367,11 +386,11 @@ Web UI(`aicli web`)提供功能完备的浏览器界面:
 ## 测试
 
 ```bash
-npm test             # 运行全部
+npm test             # 运行全部 343 个测试
 npm run test:watch   # 监听模式
 ```
 
-
+21 个测试套件覆盖:认证、会话、工具类型与危险级别、权限、输出截断、diff 渲染、edit-file 相似度、错误层级、配置管理、环境变量、Provider 注册、web-fetch、grep-files、Hub 渲染、Hub 讨论、Hub 预设、开发状态。
 
 ## 文档
 
@@ -7,7 +7,7 @@ import {
   ProviderNotFoundError,
   RateLimitError,
   schemaToJsonSchema
-} from "./chunk-
+} from "./chunk-NCY3EPA6.js";
 import {
   APP_NAME,
   CONFIG_DIR_NAME,
@@ -20,7 +20,7 @@ import {
   MCP_TOOL_PREFIX,
   PLUGINS_DIR_NAME,
   VERSION
-} from "./chunk-
+} from "./chunk-E7OZDXCK.js";
 
 // src/config/config-manager.ts
 import { readFileSync, writeFileSync, existsSync, mkdirSync } from "fs";
@@ -387,7 +387,7 @@ ${content}`);
   }
 }
 async function runTaskMode(config, providers, configManager, topic) {
-  const { TaskOrchestrator } = await import("./task-orchestrator-
+  const { TaskOrchestrator } = await import("./task-orchestrator-XFQD6X5A.js");
   const orchestrator = new TaskOrchestrator(config, providers, configManager);
   let interrupted = false;
   const onSigint = () => {
package/dist/index.js
CHANGED
@@ -24,7 +24,7 @@ import {
   saveDevState,
   sessionHasMeaningfulContent,
   setupProxy
-} from "./chunk-
+} from "./chunk-XRWQAZAC.js";
 import {
   ToolExecutor,
   ToolRegistry,
@@ -37,7 +37,7 @@ import {
   spawnAgentContext,
   theme,
   undoStack
-} from "./chunk-
+} from "./chunk-NCY3EPA6.js";
 import {
   fileCheckpoints
 } from "./chunk-4BKXL7SM.js";
@@ -61,7 +61,7 @@ import {
   SKILLS_DIR_NAME,
   VERSION,
   buildUserIdentityPrompt
-} from "./chunk-
+} from "./chunk-E7OZDXCK.js";
 
 // src/index.ts
 import { program } from "commander";
@@ -194,9 +194,9 @@ var Renderer = class {
     console.log(label("Desc") + chalk.white(DESCRIPTION));
     console.log(label("Author") + theme.warning(AUTHOR) + theme.dim(" <" + AUTHOR_EMAIL + ">"));
     console.log(HR);
-    console.log(theme.dim(" Supported Providers (
+    console.log(theme.dim(" Supported Providers (8):"));
     console.log(theme.dim(" OpenAI \xB7 DeepSeek \xB7 Kimi (Moonshot) \xB7 Claude (Anthropic)"));
-    console.log(theme.dim(" Gemini (Google) \xB7 Zhipu (GLM) \xB7
+    console.log(theme.dim(" Gemini (Google) \xB7 Zhipu (GLM) \xB7 OpenRouter \xB7 Ollama (Local, no API key)"));
     console.log(HR);
     const mcpToolCount = mcpInfo?.tools ?? 0;
     const toolTotal = 19 + pluginCount + mcpToolCount;
@@ -290,6 +290,7 @@ var Renderer = class {
     console.log(feat("Human participation: aicli join --human \u2014 real person joins multi-agent discussion"));
     console.log(feat("Context injection: aicli hub -c doc.md \u2014 inject external documents for all agents"));
     console.log(feat("Task Mode: aicli hub --task \u2014 agents plan, write code, and execute with tools (plan\u2192approve\u2192execute\u2192review)"));
+    console.log(feat("Ollama local models: built-in provider, no API key, auto-discovers installed models via /v1/models"));
     console.log();
   }
   printPrompt(provider, _model) {
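The added banner line above mentions that the Ollama provider auto-discovers installed models via `/v1/models`. A minimal sketch of what such discovery amounts to (not this package's actual implementation — the payload shape is assumed from the OpenAI "list" convention that Ollama's compatibility layer serves, by default at `http://localhost:11434/v1/models`; the model names are illustrative):

```javascript
// Canned OpenAI-style /v1/models response (shape assumed, names illustrative).
const sample = {
  object: "list",
  data: [
    { id: "qwen3:4b", object: "model", owned_by: "library" },
    { id: "llama3.1:8b", object: "model", owned_by: "library" }
  ]
};

// Against a live server this payload would come from:
//   const res = await fetch("http://localhost:11434/v1/models");
//   const payload = await res.json();
function discoverModels(payload) {
  // Each entry in the "data" array carries the model name in "id".
  return (payload.data ?? []).map((m) => m.id);
}

console.log(discoverModels(sample)); // [ 'qwen3:4b', 'llama3.1:8b' ]
```

Because Ollama reuses the OpenAI list format, the same parsing works unchanged against any OpenAI-compatible endpoint, which is presumably why no Ollama-specific client is needed for discovery.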
@@ -2098,7 +2099,7 @@ ${hint}` : "")
       usage: "/test [command|filter]",
       async execute(args, ctx) {
         try {
-          const { executeTests } = await import("./run-tests-
+          const { executeTests } = await import("./run-tests-E6KEK7CK.js");
           const argStr = args.join(" ").trim();
           let testArgs = {};
           if (argStr) {
@@ -5469,7 +5470,7 @@ program.command("web").description("Start Web UI server with browser-based chat
     console.error("Error: Invalid port number. Must be between 1 and 65535.");
     process.exit(1);
   }
-  const { startWebServer } = await import("./server-
+  const { startWebServer } = await import("./server-4NK3UFNS.js");
   await startWebServer({ port, host: options.host });
 });
 program.command("user [action] [username]").description("Manage Web UI users (list | create <name> | delete <name> | reset-password <name> | migrate <name>)").action(async (action, username) => {
@@ -5702,7 +5703,7 @@ program.command("hub [topic]").description("Start multi-agent hub (discuss / bra
     }),
     config.get("customProviders")
   );
-  const { startHub } = await import("./hub-
+  const { startHub } = await import("./hub-PIWTW5RI.js");
   await startHub(
     {
       topic: topic ?? "",
package/dist/server-4NK3UFNS.js
CHANGED
@@ -15,7 +15,7 @@ import {
   hadPreviousWriteToolCalls,
   loadDevState,
   setupProxy
-} from "./chunk-
+} from "./chunk-XRWQAZAC.js";
 import {
   AuthManager
 } from "./chunk-BYNY5JPB.js";
@@ -33,7 +33,7 @@ import {
   spawnAgentContext,
   truncateOutput,
   undoStack
-} from "./chunk-
+} from "./chunk-NCY3EPA6.js";
 import "./chunk-4BKXL7SM.js";
 import {
   AGENTIC_BEHAVIOR_GUIDELINE,
@@ -52,7 +52,7 @@ import {
   SKILLS_DIR_NAME,
   VERSION,
   buildUserIdentityPrompt
-} from "./chunk-
+} from "./chunk-E7OZDXCK.js";
 
 // src/web/server.ts
 import express from "express";
@@ -1606,7 +1606,7 @@ ${undoResults.map((r) => ` \u2022 ${r}`).join("\n")}` });
       case "test": {
         this.send({ type: "info", message: "\u{1F9EA} Running tests..." });
         try {
-          const { executeTests } = await import("./run-tests-
+          const { executeTests } = await import("./run-tests-E6KEK7CK.js");
           const argStr = args.join(" ").trim();
           let testArgs = {};
           if (argStr) {
package/dist/task-orchestrator-XFQD6X5A.js
CHANGED
@@ -4,11 +4,11 @@ import {
   getDangerLevel,
   googleSearchContext,
   truncateOutput
-} from "./chunk-
+} from "./chunk-NCY3EPA6.js";
 import "./chunk-4BKXL7SM.js";
 import {
   SUBAGENT_ALLOWED_TOOLS
-} from "./chunk-
+} from "./chunk-E7OZDXCK.js";
 
 // src/hub/task-orchestrator.ts
 import { createInterface } from "readline";