jinzd-ai-cli 0.4.93 → 0.4.95
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +429 -429
- package/README.zh-CN.md +448 -448
- package/dist/{batch-NAPWSE3V.js → batch-EGNBEZ7L.js} +2 -2
- package/dist/{chunk-5D7LG2TY.js → chunk-27KAJTRV.js} +2 -2
- package/dist/{chunk-3SRJBTIY.js → chunk-3OTYFGZX.js} +1 -1
- package/dist/{chunk-OL6TUUBE.js → chunk-64NBIR3O.js} +1 -1
- package/dist/{chunk-ZTBXPEG2.js → chunk-DBNMDRR3.js} +1 -1
- package/dist/{chunk-WR4M4TXV.js → chunk-KANAVCLY.js} +1 -1
- package/dist/{chunk-3CU4NKPD.js → chunk-VWJ6UJCF.js} +35 -9
- package/dist/electron-server.js +38 -10
- package/dist/{hub-LJAXL2LO.js → hub-6NPN7P76.js} +1 -1
- package/dist/index.js +21 -13
- package/dist/{run-tests-RO24F4Z2.js → run-tests-52NQAWN3.js} +2 -2
- package/dist/{run-tests-LKZFHPQK.js → run-tests-A3HXYWYK.js} +1 -1
- package/dist/{server-YXETATAI.js → server-DBJJXHMH.js} +3 -3
- package/dist/{server-YHECGYRX.js → server-ON3BYV2G.js} +9 -7
- package/dist/{task-orchestrator-3LZHLZCG.js → task-orchestrator-3TEU2UXU.js} +3 -3
- package/dist/web/client/style.css +129 -129
- package/package.json +164 -164
- package/dist/wasm/tree-sitter-javascript.wasm +0 -0
- package/dist/wasm/tree-sitter-python.wasm +0 -0
- package/dist/wasm/tree-sitter-tsx.wasm +0 -0
- package/dist/wasm/tree-sitter-typescript.wasm +0 -0
- package/dist/wasm/web-tree-sitter.wasm +0 -0
package/README.md
CHANGED
@@ -1,429 +1,429 @@
**English** | [中文](README.zh-CN.md)

# ai-cli

> A cross-platform AI coding assistant — CLI, Web UI, and Desktop App — with multi-provider support and agentic tool calling

[](https://www.npmjs.com/package/jinzd-ai-cli)
[](LICENSE)
[](https://nodejs.org)
[]()
[](https://github.com/jinzhengdong/ai-cli/releases)
[](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml)

**ai-cli** is a powerful AI assistant that connects to 8 providers (including local Ollama models) and executes tasks autonomously through agentic tool calling. Use it as a terminal REPL, a browser-based Web UI, or a standalone Electron desktop app.

<p align="center">
  <img src="https://img.shields.io/badge/CLI-Terminal-blue" alt="CLI" />
  <img src="https://img.shields.io/badge/Web_UI-Browser-green" alt="Web UI" />
  <img src="https://img.shields.io/badge/Desktop-Electron-purple" alt="Desktop" />
</p>

## Highlights

- **8 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models), **Ollama** (local models, no API key needed)
- **3 Interfaces** — Terminal CLI, browser Web UI (`aicli web`), Electron desktop app
- **Agentic Tool Calling** — AI autonomously runs shell commands, reads/writes files, searches code, fetches web, runs tests (default 200 rounds, configurable up to 10000 via `config.maxToolRounds` or `--max-tool-rounds`)
- **Prompt Caching** *(v0.4.70+)* — System prompt split into stable/volatile halves so Claude caches the stable part with `cache_control: ephemeral`; cached tokens bill at ~10% of the input price
- **Unified-Diff Patch Edits** *(v0.4.72+)* — `edit_file` accepts standard `@@ -a,b +c,d @@` hunks for the most compact way to apply many scattered small changes to a large file (±200-line drift tolerance + whitespace fallback)
- **Anthropic Batches API** *(v0.4.73+)* — `aicli batch submit/list/status/results/cancel` for 50%-off, 24-hour async processing — ideal for offline analysis and bulk evals
- **Web UI Session Replay** *(v0.4.71+)* — 🎬 button on every saved session opens a timeline replay: every message, tool call, reasoning, and cache-aware token usage at a glance
- **Conversation Branching** *(v0.4.74+)* — `/branch list/new/switch/delete/rename` inside the REPL, plus a 🌿 "fork here" button on every replay step — explore alternate directions without losing the original thread
- **Symbol Index** *(v0.4.76+)* — persistent tree-sitter index for TS/JS/TSX/Python powers three new AI tools: `find_symbol`, `get_outline`, `find_references`. Orders of magnitude faster than grep for definition lookups; background refresh on REPL startup, `/index status|rebuild|clear` to manage
- **Semantic Code Search** *(v0.4.77+)* — `search_code` tool finds code by meaning, not name. Local sentence embeddings (multilingual MiniLM, 117 MB one-time download) score symbols by cosine similarity against natural-language queries in English or Chinese ("where are users authenticated", "哪里做了速率限制"). No API key, runs on CPU. Manage with `/index semantic-rebuild|semantic-clear`
- **MCP Server Mode** *(v0.4.84+)* — `aicli mcp-serve` turns ai-cli into an MCP server (JSON-RPC 2.0 over stdio), exposing its 26 built-in tools (incl. `find_symbol` / `search_code` / `run_tests`) to Claude Desktop / Cursor / any MCP client. Opt-in destructive-tool allow, `--tools` whitelist, `--cwd` override
- **Session Sensitive-Data Redaction** *(v0.4.88+)* — unified redactor scrubs `password=` / `api_key` / bearer tokens / OpenAI-style keys from every message **before it hits disk**. Query text is redacted too, so secrets never reach embeddings or logs. `/security status` + `/security scan` to audit
- **Human-like Long-Term Memory** *(v0.4.89+, B4)* — semantic index over every past chat session + `recall_memory` AI tool + `/memory rebuild|refresh|status|recall` commands. AI is prompted to auto-recall when it sees "last time" / "之前" / ambiguous references. Reuses the same MiniLM embedder as semantic code search
- **Web UI Memory Panel** *(v0.4.90+, B4)* — new 🧠 Memory sidebar tab with semantic search across past chats; each hit has **➕ Inject** (quotes the snippet into the chat input as a markdown blockquote so you can review/edit before sending — no silent context injection) and **↗ Load** (jumps to source session). Bulk "Inject top 3" for recall bundles
- **Streaming Tool Use** — Real-time streaming of AI reasoning and tool calls as they happen
- **Sub-Agents** — Delegate complex subtasks to isolated child agents with independent tool loops
- **Extended Thinking** — Claude deep reasoning mode with `/think` toggle
- **Plan Mode** — Read-only planning phase (`/plan`) where AI analyzes before executing, with loop detection
- **Auto-Pause** — Automatically pauses every 10 rounds for user review and redirection
- **MCP Protocol** — Connect external MCP servers for dynamic tool discovery
- **Multi-User Auth** — Web UI supports multiple users with password authentication
- **PWA Support** — Install Web UI as a desktop/mobile app, accessible over LAN
- **Hierarchical Context** — 3-layer context files (global / project / subdirectory) auto-injected
- **Headless Mode** — `ai-cli -p "prompt"` for CI/CD pipelines and scripting
- **43 REPL Commands** — Session management, checkpointing, code review, security review/scan, rewind, scaffolding, cross-session history search, chat-memory recall, and more
- **GitHub Actions CI/CD** — Automated testing on Node 20/22 + npm publish on release tags
- **Cross-Platform** — Windows, macOS, Linux

## Installation

### npm (recommended)

```bash
npm install -g jinzd-ai-cli
```

Requires Node.js >= 20. After installation, use `aicli` to start.

### Electron Desktop App (Windows)

Download the installer from [GitHub Releases](https://github.com/jinzhengdong/ai-cli/releases) — no Node.js required:

| Platform | Download |
|----------|----------|
| Windows x64 | [`ai-cli-setup.exe`](https://github.com/jinzhengdong/ai-cli/releases/latest) |

### Standalone CLI Executables

Pre-built CLI binaries (no Node.js required, ~56 MB):

| Platform | File |
|----------|------|
| Windows x64 | `ai-cli-win.exe` |
| macOS arm64 | `ai-cli-mac` |
| macOS x64 | `ai-cli-mac-x64` |
| Linux x64 | `ai-cli-linux` |

## Quick Start

### Terminal CLI

```bash
aicli
```

On first run, an interactive setup wizard guides you through setting up your profile and entering your API key. Your identity is persisted and injected into every AI conversation.

```
[deepseek] > Hello! Tell me about this project
[deepseek] > @src/main.ts Review this file for bugs
[deepseek] > @screenshot.png What's in this image?
[deepseek] > /help
```

Use `@filepath` to reference files or images directly in your prompt.

### Web UI

```bash
aicli web                 # Start on localhost:3456
aicli web --port 8080     # Custom port
aicli web --host 0.0.0.0  # LAN access (shows QR-friendly URL)
```

Features: multi-tab sessions, file tree panel, drag & drop images, prompt templates, 8 DaisyUI themes, PWA installable, keyboard shortcuts, diff syntax highlighting.

### User Management

```bash
aicli user create admin      # Create user (enables auth)
aicli user list              # List all users
aicli user reset-password x  # Reset password
aicli user delete x          # Delete user
```

## Supported Providers

| Provider | Models | Get API Key |
|----------|--------|-------------|
| **Claude** | Opus 4, Sonnet 4, Haiku 4 | [console.anthropic.com](https://console.anthropic.com) |
| **Gemini** | 2.5 Pro, 2.5 Flash | [aistudio.google.com](https://aistudio.google.com) |
| **DeepSeek** | DeepSeek-Chat (V3), Reasoner (R1) | [platform.deepseek.com](https://platform.deepseek.com) |
| **OpenAI** | GPT-5.4, GPT-4o, o3, o4-mini | [platform.openai.com](https://platform.openai.com) |
| **OpenRouter** | 300+ models (Claude, GPT, Gemini, Llama, Qwen, Mistral...) | [openrouter.ai](https://openrouter.ai) |
| **Zhipu** | GLM-4, GLM-5 | [open.bigmodel.cn](https://open.bigmodel.cn) |
| **Kimi** | Moonshot, Kimi-K2 | [platform.moonshot.cn](https://platform.moonshot.cn) |
| **Ollama** | Any locally installed model (Llama, Qwen, Gemma, Mistral...) | No API key — [ollama.com](https://ollama.com) |

Any OpenAI-compatible API can also be used via `customBaseUrls` in config.

### Ollama (Local Models)

Run AI models entirely on your own hardware — no API key, no usage fees, no data leaving your machine.

```bash
# Install Ollama from https://ollama.com, then pull a model:
ollama pull qwen3:4b    # recommended: good tool-calling support
ollama pull gemma3:4b
ollama pull llama3.1:8b

# Start aicli and switch to Ollama:
aicli
[deepseek] > /provider ollama  # auto-discovers installed models
[ollama] > /model              # select from your local models
```

> **Note**: Use models 4B+ for best results with tool calling. Small models (<4B) may struggle with the tool definitions injected by MCP servers.

## Built-in Tools (Agentic)

AI autonomously invokes the tools below during conversations:

| Tool | Safety | Description |
|------|--------|-------------|
| `bash` | varies | Execute shell commands (PowerShell on Windows, $SHELL on Unix) |
| `read_file` | safe | Read file contents (10 MB limit, image support) |
| `write_file` | write | Create/overwrite files (diff preview + confirmation) |
| `edit_file` | write | Precise string replacement with fuzzy matching hints + `replaceAll` mode |
| `list_dir` | safe | List directory contents |
| `grep_files` | safe | Regex search across files |
| `glob_files` | safe | Match files by glob pattern |
| `web_fetch` | safe | Fetch web pages as Markdown (SSRF-protected) |
| `google_search` | safe | Google Custom Search API |
| `run_interactive` | safe | Run interactive programs with stdin input |
| `run_tests` | safe | Auto-detect and run project tests (JUnit XML parsing) |
| `spawn_agent` | safe | Delegate subtasks to isolated child agents |
| `ask_user` | safe | Pause and ask the user a question |
| `save_memory` | safe | Persist important info across sessions |
| `write_todos` | safe | Task breakdown with live progress rendering |
| `save_last_response` | write | Save AI response to file |
| `task_create` | write | Start a command running in the background |
| `task_list` | safe | List background tasks and their status/output |
| `task_stop` | write | Stop a running background task |
| `git_status` | safe | Show working tree status (branch, staged, modified, untracked) |
| `git_diff` | safe | Show file diffs (staged/unstaged, stat summary) |
| `git_log` | safe | Show commit history (oneline/full, filter by file/author) |
| `git_commit` | write | Create a git commit (stage files, message) |
| `notebook_edit` | write | Edit Jupyter notebook cells (add/edit/delete/move) |
| `find_symbol` | safe | Locate symbol definitions via persistent tree-sitter index (TS/JS/TSX/Python) |
| `get_outline` | safe | Enumerate all top-level declarations in one source file |
| `find_references` | safe | Search indexed files for references to a symbol name |
| `search_code` | safe | Semantic (meaning-based) code search via local sentence embeddings — bilingual, "grep by meaning" |

**Safety levels**: `safe` = auto-execute, `write` = diff preview + confirmation, `destructive` = prominent warning + confirmation.

## Key REPL Commands

| Command | Description |
|---------|-------------|
| `/provider` | Switch AI provider |
| `/model` | Switch model |
| `/plan` | Enter read-only planning mode |
| `/think` | Toggle Claude extended thinking |
| `/test` | Auto-detect and run project tests |
| `/review` | AI code review of current git diff |
| `/security-review` | Security vulnerability scan on git diff |
| `/rewind` | Rewind conversation + restore files to checkpoint state |
| `/scaffold <desc>` | AI generates project skeleton |
| `/init` | AI generates project context file (AICLI.md) |
| `/compact` | Compress conversation history |
| `/session` | Session management (new / list / load) |
| `/checkpoint` | Save/restore conversation checkpoints |
| `/fork` | Fork the current session into a new session file |
| `/branch` | Create/switch/delete branches *within* the current session (B2) |
| `/index` | Manage symbol + semantic index (status/rebuild/clear/semantic-rebuild/semantic-clear) — powers `find_symbol` / `get_outline` / `find_references` / `search_code` (C1+C2) |
| `/search <keyword>` | Full-text search across all sessions |
| `/skill` | Manage agent skill packs |
| `/mcp` | View MCP server status and tools |
| `/cost` | Show token usage statistics |
| `/undo` | Undo last file operation |
| `/doctor` | Health check (API keys, MCP, context) |
| `/export` | Export session as Markdown or JSON |
| `/profile` | View/edit your identity (AI knows who you are across all providers) |
| `/config` | Open configuration wizard |
| `/help` | Show all available commands |

**Multi-line input**: Use `\` at end of line for continuation, or paste multi-line content directly (auto-detected and merged).

Type `/help` in the REPL to see the full command list.

## CLI Parameters

```bash
aicli [options]

Options:
  --provider <name>       Set AI provider
  -m, --model <name>      Set model
  -p, --prompt <text>     Headless mode: single prompt, then exit
  --system <prompt>       Override system prompt (headless)
  --json                  Output JSON response (headless)
  --output-format <fmt>   text | streaming-json (NDJSON)
  --resume <id>           Resume a previous session
  --allowed-tools <list>  Comma-separated tool whitelist
  --blocked-tools <list>  Comma-separated tool blacklist
  --no-stream             Disable streaming output

Subcommands:
  aicli web [options]     Start Web UI server
  aicli config            Run configuration wizard
  aicli providers         List all providers and status
  aicli sessions          List recent sessions
  aicli user <action>     Manage Web UI users
  aicli batch <action>    Anthropic Batches API (submit | list | status | results | cancel)
```

### Batch Mode (Anthropic Message Batches)

For offline analysis, bulk evals, or any workload where latency is flexible, use the Batches API for **50% off** tokens with a 24-hour processing window.

```bash
# 1. Prepare a JSONL file (one request per line):
# {"customId":"req-1","messages":[{"role":"user","content":"..."}],"maxTokens":1024}
aicli batch submit prompts.jsonl            # validate + submit + track locally
aicli batch submit --dry-run prompts.jsonl  # parse only, no network

aicli batch list                    # live status of recent batches
aicli batch status <id>             # detailed status + request counts
aicli batch results <id> out.jsonl  # download results (stdout if no path)
aicli batch cancel <id>             # cancel an in-progress batch
```

Local tracking file: `~/.aicli/batches.json` (last 200 submissions). Requires `AICLI_API_KEY_CLAUDE` or a Claude API key configured via `aicli config`.
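The JSONL input can be produced with a plain heredoc. A minimal sketch, assuming the field names shown in the comment above (`customId`, `messages`, `maxTokens`); adjust them if your actual schema differs, and the prompt strings here are placeholders:

```bash
# Write a two-request batch file in the JSONL shape from the example above.
cat > prompts.jsonl <<'EOF'
{"customId":"req-1","messages":[{"role":"user","content":"Summarize README.md"}],"maxTokens":1024}
{"customId":"req-2","messages":[{"role":"user","content":"List TODOs in src/"}],"maxTokens":1024}
EOF

# One line per request:
echo "prompts.jsonl contains $(wc -l < prompts.jsonl) requests"

# Validate the file before spending tokens:
#   aicli batch submit --dry-run prompts.jsonl
```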
### Headless Mode

```bash
# Single prompt
aicli -p "Explain recursion in one sentence"

# Pipe stdin
cat src/main.ts | aicli -p "Review this code"

# JSON output for scripting
aicli -p "hello" --json

# Streaming JSON (NDJSON)
aicli -p "write a poem" --output-format streaming-json
```
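NDJSON output is one JSON object per line, so a script can react to events as they arrive. A sketch of that consumption pattern; `emit_events` below is a stand-in for the real `aicli` invocation, and the event fields (`type`, `content`) are illustrative assumptions, not a documented schema:

```bash
# Stand-in for: aicli -p "write a poem" --output-format streaming-json
emit_events() {
  printf '%s\n' \
    '{"type":"text","content":"Roses are red"}' \
    '{"type":"done"}'
}

# Read the stream line by line and dispatch on the (assumed) event type.
emit_events | while IFS= read -r event; do
  case "$event" in
    *'"type":"done"'*) echo "stream finished" ;;
    *)                 echo "chunk: $event" ;;
  esac
done
```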
## Configuration

Configuration is stored at `~/.aicli/config.json`. Run `aicli config` for the interactive wizard, or edit directly:

```json
{
  "defaultProvider": "deepseek",
  "apiKeys": {
    "deepseek": "sk-...",
    "claude": "sk-ant-...",
    "openrouter": "sk-or-..."
  },
  "proxy": "http://127.0.0.1:10809",
  "mcpServers": { },
  "ui": {
    "theme": "dark",
    "wordWrap": 0,
    "notificationThreshold": 10000
  }
}
```

### Permission Rules

Control when tools require confirmation. Rules are checked in order — first match wins:

```json
{
  "permissionRules": [
    { "tool": "read_file", "action": "auto-approve" },
    { "tool": "list_dir", "action": "auto-approve" },
    { "tool": "grep_files", "action": "auto-approve" },
    { "tool": "glob_files", "action": "auto-approve" },
    { "tool": "write_todos", "action": "auto-approve" },
    { "tool": "bash", "action": "auto-approve", "when": { "dangerLevel": "safe" } },
    { "tool": "write_file", "action": "auto-approve", "when": { "pathPattern": "src/" } },
    { "tool": "bash", "action": "deny", "when": { "pathPattern": "rm -rf" } },
    { "tool": "*", "action": "confirm" }
  ]
}
```

| Field | Description |
|-------|-------------|
| `tool` | Tool name, or `*` for all tools |
| `action` | `auto-approve` (skip confirmation), `deny` (block), `confirm` (ask user) |
| `when.dangerLevel` | Only match when danger level is `safe`, `write`, or `destructive` |
| `when.pathPattern` | Substring match against tool's `path` or `command` argument |

**Recommended minimal config** — auto-approve all read-only tools to reduce y/N prompts:

```json
{
  "permissionRules": [
    { "tool": "read_file", "action": "auto-approve" },
    { "tool": "list_dir", "action": "auto-approve" },
    { "tool": "grep_files", "action": "auto-approve" },
    { "tool": "glob_files", "action": "auto-approve" },
    { "tool": "web_fetch", "action": "auto-approve" },
    { "tool": "write_todos", "action": "auto-approve" },
    { "tool": "ask_user", "action": "auto-approve" },
    { "tool": "run_tests", "action": "auto-approve" }
  ]
}
```

### Environment Variables

Environment variables take precedence over config file values:

| Variable | Description |
|----------|-------------|
| `AICLI_API_KEY_CLAUDE` | Claude API Key |
| `AICLI_API_KEY_GEMINI` | Gemini API Key |
| `AICLI_API_KEY_DEEPSEEK` | DeepSeek API Key |
| `AICLI_API_KEY_OPENAI` | OpenAI API Key |
| `AICLI_API_KEY_OPENROUTER` | OpenRouter API Key |
| `AICLI_API_KEY_ZHIPU` | Zhipu API Key |
| `AICLI_API_KEY_KIMI` | Kimi API Key |
| `AICLI_PROVIDER` | Default provider ID |
| `AICLI_NO_STREAM` | Set to `1` to disable streaming |
| `HTTPS_PROXY` / `HTTP_PROXY` | Proxy URL |
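Because environment variables override the config file, a CI job can be configured without touching `~/.aicli/config.json`. A minimal sketch using the variables from the table above; the key value is a placeholder, not a real key:

```bash
# Configure the run entirely through the environment.
export AICLI_PROVIDER=deepseek
export AICLI_API_KEY_DEEPSEEK="sk-placeholder"
export AICLI_NO_STREAM=1  # plain output is easier to capture in CI logs
export HTTPS_PROXY="http://127.0.0.1:10809"

# Then run headless, e.g.:
#   aicli -p "Review this diff" < changes.diff
echo "provider=$AICLI_PROVIDER no-stream=$AICLI_NO_STREAM"
```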
### Hierarchical Context Files

ai-cli automatically discovers and injects context files into the system prompt:

| Layer | Path | Purpose |
|-------|------|---------|
| Global | `~/.aicli/AICLI.md` | Personal preferences across all projects |
| Project | `<git-root>/AICLI.md` | Project rules (commit to git for team sharing) |
| Subdirectory | `<cwd>/AICLI.md` | Directory-specific instructions |

Also supports `CLAUDE.md` as an alternative filename at each layer.

### MCP Integration

Connect external [MCP](https://modelcontextprotocol.io/) servers for dynamic tool discovery. Configuration is compatible with Claude Desktop format:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"],
      "timeout": 30000
    }
  }
}
```

Project-level `.mcp.json` files are also supported and automatically merged with global config.

## Web UI Features

The Web UI (`aicli web`) provides a full-featured browser interface:

- **Multi-Tab Sessions** — parallel conversations in separate browser tabs
- **File Tree Panel** — browse project files, click to insert `@path` references
- **Image Upload** — drag & drop or Ctrl+V paste images into chat
- **Prompt Templates** — CRUD with tags, search, import/export
- **8 Themes** — DaisyUI themes with code highlight auto-sync
- **Diff Syntax Highlighting** — colored diff in tool confirm dialogs
- **Keyboard Shortcuts** — `Esc` stop, `Ctrl+L` clear, `↑↓` history
- **Export** — `/export md` or `/export json` browser download
- **PWA** — installable as desktop/mobile app
- **LAN Access** — `--host 0.0.0.0` for phone/tablet access
- **Multi-User Auth** — password authentication with per-user data isolation
- **Auto-Reconnect** — heartbeat + exponential backoff reconnection

## Testing

```bash
npm test            # Run all 396 tests
npm run test:watch  # Watch mode
```

26 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub renderer, hub discussion, hub presets, dev-state, token estimator, tool registry budget, parallel tool execution, cost tracker, session tool history.

## Documentation

- [Chinese README](README.zh-CN.md) — documentation in Chinese

## License

[MIT](LICENSE)
1
|
+
**English** | [中文](README.zh-CN.md)
|
|
2
|
+
|
|
3
|
+
# ai-cli
|
|
4
|
+
|
|
5
|
+
> A cross-platform AI coding assistant — CLI, Web UI, and Desktop App — with multi-provider support and agentic tool calling
|
|
6
|
+
|
|
7
|
+
[](https://www.npmjs.com/package/jinzd-ai-cli)
|
|
8
|
+
[](LICENSE)
|
|
9
|
+
[](https://nodejs.org)
|
|
10
|
+
[]()
|
|
11
|
+
[](https://github.com/jinzhengdong/ai-cli/releases)
|
|
12
|
+
[](https://github.com/jinzhengdong/ai-cli/actions/workflows/ci.yml)
|
|
13
|
+
|
|
14
|
+
**ai-cli** is a powerful AI assistant that connects to 8 providers (including local Ollama models) and executes tasks autonomously through agentic tool calling. Use it as a terminal REPL, a browser-based Web UI, or a standalone Electron desktop app.
|
|
15
|
+
|
|
16
|
+
<p align="center">
|
|
17
|
+
<img src="https://img.shields.io/badge/CLI-Terminal-blue" alt="CLI" />
|
|
18
|
+
<img src="https://img.shields.io/badge/Web_UI-Browser-green" alt="Web UI" />
|
|
19
|
+
<img src="https://img.shields.io/badge/Desktop-Electron-purple" alt="Desktop" />
|
|
20
|
+
</p>
|
|
21
|
+
|
|
22
|
+
## Highlights
|
|
23
|
+
|
|
24
|
+
- **8 Built-in Providers** — Claude, Gemini, DeepSeek, OpenAI, Zhipu GLM, Kimi, OpenRouter (300+ models), **Ollama** (local models, no API key needed)
|
|
25
|
+
- **3 Interfaces** — Terminal CLI, browser Web UI (`aicli web`), Electron desktop app
|
|
26
|
+
- **Agentic Tool Calling** — AI autonomously runs shell commands, reads/writes files, searches code, fetches web, runs tests (default 200 rounds, configurable up to 10000 via `config.maxToolRounds` or `--max-tool-rounds`)
|
|
27
|
+
- **Prompt Caching** *(v0.4.70+)* — System prompt split into stable/volatile halves so Claude caches the stable part with `cache_control: ephemeral`; cached tokens bill at ~10% of the input price
|
|
28
|
+
- **Unified-Diff Patch Edits** *(v0.4.72+)* — `edit_file` accepts standard `@@ -a,b +c,d @@` hunks for the most compact way to apply many scattered small changes to a large file (±200-line drift tolerance + whitespace fallback)
|
|
29
|
+
- **Anthropic Batches API** *(v0.4.73+)* — `aicli batch submit/list/status/results/cancel` for 50%-off, 24-hour async processing — ideal for offline analysis and bulk evals
|
|
30
|
+
- **Web UI Session Replay** *(v0.4.71+)* — 🎬 button on every saved session opens a timeline replay: every message, tool call, reasoning, and cache-aware token usage at a glance
|
|
31
|
+
- **Conversation Branching** *(v0.4.74+)* — `/branch list/new/switch/delete/rename` inside the REPL, plus a 🌿 "fork here" button on every replay step — explore alternate directions without losing the original thread
|
|
32
|
+
- **Symbol Index** *(v0.4.76+)* — persistent tree-sitter index for TS/JS/TSX/Python powers three new AI tools: `find_symbol`, `get_outline`, `find_references`. Orders of magnitude faster than grep for definition lookups; background refresh on REPL startup, `/index status|rebuild|clear` to manage
|
|
33
|
+
- **Semantic Code Search** *(v0.4.77+)* — `search_code` tool finds code by meaning, not name. Local sentence embeddings (multilingual MiniLM, 117 MB one-time download) score symbols by cosine similarity against natural-language queries in English or Chinese ("where are users authenticated", "哪里做了速率限制"). No API key, runs on CPU. Manage with `/index semantic-rebuild|semantic-clear`
|
|
34
|
+
- **MCP Server Mode** *(v0.4.84+)* — `aicli mcp-serve` turns ai-cli into an MCP server (JSON-RPC 2.0 over stdio), exposing its built-in tools (incl. `find_symbol` / `search_code` / `run_tests`) to Claude Desktop, Cursor, or any MCP client. Destructive tools are opt-in; supports a `--tools` whitelist and `--cwd` override
- **Session Sensitive-Data Redaction** *(v0.4.88+)* — a unified redactor scrubs `password=` / `api_key` / bearer tokens / OpenAI-style keys from every message **before it hits disk**. Query text is redacted too, so secrets never reach embeddings or logs. `/security status` + `/security scan` to audit
- **Human-like Long-Term Memory** *(v0.4.89+, B4)* — semantic index over every past chat session + `recall_memory` AI tool + `/memory rebuild|refresh|status|recall` commands. The AI is prompted to auto-recall when it sees "last time" / "之前" / ambiguous references. Reuses the same MiniLM embedder as semantic code search
- **Web UI Memory Panel** *(v0.4.90+, B4)* — new 🧠 Memory sidebar tab with semantic search across past chats; each hit has **➕ Inject** (quotes the snippet into the chat input as a markdown blockquote so you can review/edit before sending — no silent context injection) and **↗ Load** (jumps to the source session). Bulk "Inject top 3" for recall bundles
- **Streaming Tool Use** — Real-time streaming of AI reasoning and tool calls as they happen
- **Sub-Agents** — Delegate complex subtasks to isolated child agents with independent tool loops
- **Extended Thinking** — Claude deep reasoning mode with `/think` toggle
- **Plan Mode** — Read-only planning phase (`/plan`) where AI analyzes before executing, with loop detection
- **Auto-Pause** — Automatically pauses every 10 rounds for user review and redirection
- **MCP Protocol** — Connect external MCP servers for dynamic tool discovery
- **Multi-User Auth** — Web UI supports multiple users with password authentication
- **PWA Support** — Install Web UI as a desktop/mobile app, accessible over LAN
- **Hierarchical Context** — 3-layer context files (global / project / subdirectory) auto-injected
- **Headless Mode** — `ai-cli -p "prompt"` for CI/CD pipelines and scripting
- **43 REPL Commands** — Session management, checkpointing, code review, security review/scan, rewind, scaffolding, cross-session history search, chat-memory recall, and more
- **GitHub Actions CI/CD** — Automated testing on Node 20/22 + npm publish on release tags
- **Cross-Platform** — Windows, macOS, Linux

## Installation

### npm (recommended)

```bash
npm install -g jinzd-ai-cli
```

Requires Node.js >= 20. After installation, use `aicli` to start.

### Electron Desktop App (Windows)

Download the installer from [GitHub Releases](https://github.com/jinzhengdong/ai-cli/releases) — no Node.js required:

| Platform | Download |
|----------|----------|
| Windows x64 | [`ai-cli-setup.exe`](https://github.com/jinzhengdong/ai-cli/releases/latest) |

### Standalone CLI Executables

Pre-built CLI binaries (no Node.js required, ~56 MB):

| Platform | File |
|----------|------|
| Windows x64 | `ai-cli-win.exe` |
| macOS arm64 | `ai-cli-mac` |
| macOS x64 | `ai-cli-mac-x64` |
| Linux x64 | `ai-cli-linux` |

## Quick Start

### Terminal CLI

```bash
aicli
```

On first run, an interactive setup wizard guides you through setting up your profile and entering your API key. Your identity is persisted and injected into every AI conversation.

```
[deepseek] > Hello! Tell me about this project
[deepseek] > @src/main.ts Review this file for bugs
[deepseek] > @screenshot.png What's in this image?
[deepseek] > /help
```

Use `@filepath` to reference files or images directly in your prompt.

### Web UI

```bash
aicli web                 # Start on localhost:3456
aicli web --port 8080     # Custom port
aicli web --host 0.0.0.0  # LAN access (shows QR-friendly URL)
```

Features: multi-tab sessions, file tree panel, drag & drop images, prompt templates, 8 DaisyUI themes, PWA installable, keyboard shortcuts, diff syntax highlighting.

### User Management

```bash
aicli user create admin      # Create user (enables auth)
aicli user list              # List all users
aicli user reset-password x  # Reset password
aicli user delete x          # Delete user
```

## Supported Providers

| Provider | Models | Get API Key |
|----------|--------|-------------|
| **Claude** | Opus 4, Sonnet 4, Haiku 4 | [console.anthropic.com](https://console.anthropic.com) |
| **Gemini** | 2.5 Pro, 2.5 Flash | [aistudio.google.com](https://aistudio.google.com) |
| **DeepSeek** | DeepSeek-Chat (V3), Reasoner (R1) | [platform.deepseek.com](https://platform.deepseek.com) |
| **OpenAI** | GPT-5.4, GPT-4o, o3, o4-mini | [platform.openai.com](https://platform.openai.com) |
| **OpenRouter** | 300+ models (Claude, GPT, Gemini, Llama, Qwen, Mistral...) | [openrouter.ai](https://openrouter.ai) |
| **Zhipu** | GLM-4, GLM-5 | [open.bigmodel.cn](https://open.bigmodel.cn) |
| **Kimi** | Moonshot, Kimi-K2 | [platform.moonshot.cn](https://platform.moonshot.cn) |
| **Ollama** | Any locally installed model (Llama, Qwen, Gemma, Mistral...) | No API key — [ollama.com](https://ollama.com) |

Any OpenAI-compatible API can also be used via `customBaseUrls` in config.

### Ollama (Local Models)

Run AI models entirely on your own hardware — no API key, no usage fees, no data leaving your machine.

```bash
# Install Ollama from https://ollama.com, then pull a model:
ollama pull qwen3:4b     # recommended: good tool-calling support
ollama pull gemma3:4b
ollama pull llama3.1:8b

# Start aicli and switch to Ollama:
aicli
[deepseek] > /provider ollama  # auto-discovers installed models
[ollama] > /model              # select from your local models
```

> **Note**: Use models 4B+ for best results with tool calling. Small models (<4B) may struggle with the tool definitions injected by MCP servers.

## Built-in Tools (Agentic)

AI autonomously invokes these 28 tools during conversations:

| Tool | Safety | Description |
|------|--------|-------------|
| `bash` | varies | Execute shell commands (PowerShell on Windows, $SHELL on Unix) |
| `read_file` | safe | Read file contents (10 MB limit, image support) |
| `write_file` | write | Create/overwrite files (diff preview + confirmation) |
| `edit_file` | write | Precise string replacement with fuzzy matching hints + `replaceAll` mode |
| `list_dir` | safe | List directory contents |
| `grep_files` | safe | Regex search across files |
| `glob_files` | safe | Match files by glob pattern |
| `web_fetch` | safe | Fetch web pages as Markdown (SSRF-protected) |
| `google_search` | safe | Google Custom Search API |
| `run_interactive` | safe | Run interactive programs with stdin input |
| `run_tests` | safe | Auto-detect and run project tests (JUnit XML parsing) |
| `spawn_agent` | safe | Delegate subtasks to isolated child agents |
| `ask_user` | safe | Pause and ask the user a question |
| `save_memory` | safe | Persist important info across sessions |
| `write_todos` | safe | Task breakdown with live progress rendering |
| `save_last_response` | write | Save AI response to file |
| `task_create` | write | Start a command running in the background |
| `task_list` | safe | List background tasks and their status/output |
| `task_stop` | write | Stop a running background task |
| `git_status` | safe | Show working tree status (branch, staged, modified, untracked) |
| `git_diff` | safe | Show file diffs (staged/unstaged, stat summary) |
| `git_log` | safe | Show commit history (oneline/full, filter by file/author) |
| `git_commit` | write | Create a git commit (stage files, message) |
| `notebook_edit` | write | Edit Jupyter notebook cells (add/edit/delete/move) |
| `find_symbol` | safe | Locate symbol definitions via persistent tree-sitter index (TS/JS/TSX/Python) |
| `get_outline` | safe | Enumerate all top-level declarations in one source file |
| `find_references` | safe | Search indexed files for references to a symbol name |
| `search_code` | safe | Semantic (meaning-based) code search via local sentence embeddings — bilingual, "grep by meaning" |

**Safety levels**: `safe` = auto-execute, `write` = diff preview + confirmation, `destructive` = prominent warning + confirmation.

## Key REPL Commands

| Command | Description |
|---------|-------------|
| `/provider` | Switch AI provider |
| `/model` | Switch model |
| `/plan` | Enter read-only planning mode |
| `/think` | Toggle Claude extended thinking |
| `/test` | Auto-detect and run project tests |
| `/review` | AI code review of current git diff |
| `/security-review` | Security vulnerability scan on git diff |
| `/rewind` | Rewind conversation + restore files to checkpoint state |
| `/scaffold <desc>` | AI generates project skeleton |
| `/init` | AI generates project context file (AICLI.md) |
| `/compact` | Compress conversation history |
| `/session` | Session management (new / list / load) |
| `/checkpoint` | Save/restore conversation checkpoints |
| `/fork` | Fork the current session into a new session file |
| `/branch` | Create/switch/delete branches *within* the current session (B2) |
| `/index` | Manage symbol + semantic index (status/rebuild/clear/semantic-rebuild/semantic-clear) — powers `find_symbol` / `get_outline` / `find_references` / `search_code` (C1+C2) |
| `/search <keyword>` | Full-text search across all sessions |
| `/skill` | Manage agent skill packs |
| `/mcp` | View MCP server status and tools |
| `/cost` | Show token usage statistics |
| `/undo` | Undo last file operation |
| `/doctor` | Health check (API keys, MCP, context) |
| `/export` | Export session as Markdown or JSON |
| `/profile` | View/edit your identity (AI knows who you are across all providers) |
| `/config` | Open configuration wizard |
| `/help` | Show all available commands |

**Multi-line input**: Use `\` at end of line for continuation, or paste multi-line content directly (auto-detected and merged).

Type `/help` in the REPL to see all 43 commands.

## CLI Parameters

```bash
aicli [options]

Options:
  --provider <name>       Set AI provider
  -m, --model <name>      Set model
  -p, --prompt <text>     Headless mode: single prompt, then exit
  --system <prompt>       Override system prompt (headless)
  --json                  Output JSON response (headless)
  --output-format <fmt>   text | streaming-json (NDJSON)
  --resume <id>           Resume a previous session
  --allowed-tools <list>  Comma-separated tool whitelist
  --blocked-tools <list>  Comma-separated tool blacklist
  --no-stream             Disable streaming output

Subcommands:
  aicli web [options]     Start Web UI server
  aicli config            Run configuration wizard
  aicli providers         List all providers and status
  aicli sessions          List recent sessions
  aicli user <action>     Manage Web UI users
  aicli batch <action>    Anthropic Batches API (submit | list | status | results | cancel)
```

### Batch Mode (Anthropic Message Batches)

For offline analysis, bulk evals, or any workload where latency is flexible, use the Batches API for **50% off** tokens with a 24-hour processing window.

```bash
# 1. Prepare a JSONL file (one request per line):
# {"customId":"req-1","messages":[{"role":"user","content":"..."}],"maxTokens":1024}
aicli batch submit prompts.jsonl            # validate + submit + track locally
aicli batch submit --dry-run prompts.jsonl  # parse only, no network

aicli batch list                    # live status of recent batches
aicli batch status <id>             # detailed status + request counts
aicli batch results <id> out.jsonl  # download results (stdout if no path)
aicli batch cancel <id>             # cancel an in-progress batch
```

Local tracking file: `~/.aicli/batches.json` (last 200 submissions). Requires `AICLI_API_KEY_CLAUDE` or a Claude API key configured via `aicli config`.

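The JSONL input can be generated with a small shell loop. A minimal sketch — the file name, prompt texts, and request count are illustrative; the per-line shape follows the `customId`/`messages`/`maxTokens` example above:

```bash
# Build a batch input file, one JSON request per line (schema as in the example above).
: > prompts.jsonl
i=1
for q in "Summarize README.md" "List TODOs in src/"; do
  printf '{"customId":"req-%d","messages":[{"role":"user","content":"%s"}],"maxTokens":1024}\n' \
    "$i" "$q" >> prompts.jsonl
  i=$((i + 1))
done
wc -l < prompts.jsonl   # one line per request
```

A `--dry-run` submit (shown above) will validate the file before anything goes over the network.
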
### Headless Mode

```bash
# Single prompt
aicli -p "Explain recursion in one sentence"

# Pipe stdin
cat src/main.ts | aicli -p "Review this code"

# JSON output for scripting
aicli -p "hello" --json

# Streaming JSON (NDJSON)
aicli -p "write a poem" --output-format streaming-json
```

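Since `streaming-json` output is NDJSON (one JSON object per line), a consumer can act on each event as it arrives. A minimal line-by-line reader, shown against simulated input because the exact event schema is not documented here:

```bash
# Read NDJSON one line (= one event) at a time; the two objects below are
# stand-ins for whatever events `--output-format streaming-json` actually emits.
printf '%s\n' '{"delta":"Hello"}' '{"delta":" world"}' |
while IFS= read -r line; do
  echo "event: $line"
done
```

The same `while IFS= read -r` loop works when piping directly from `aicli`.
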
## Configuration

Configuration is stored at `~/.aicli/config.json`. Run `aicli config` for the interactive wizard, or edit directly:

```json
{
  "defaultProvider": "deepseek",
  "apiKeys": {
    "deepseek": "sk-...",
    "claude": "sk-ant-...",
    "openrouter": "sk-or-..."
  },
  "proxy": "http://127.0.0.1:10809",
  "mcpServers": { },
  "ui": {
    "theme": "dark",
    "wordWrap": 0,
    "notificationThreshold": 10000
  }
}
```

### Permission Rules

Control when tools require confirmation. Rules are checked in order — first match wins:

```json
{
  "permissionRules": [
    { "tool": "read_file", "action": "auto-approve" },
    { "tool": "list_dir", "action": "auto-approve" },
    { "tool": "grep_files", "action": "auto-approve" },
    { "tool": "glob_files", "action": "auto-approve" },
    { "tool": "write_todos", "action": "auto-approve" },
    { "tool": "bash", "action": "auto-approve", "when": { "dangerLevel": "safe" } },
    { "tool": "write_file", "action": "auto-approve", "when": { "pathPattern": "src/" } },
    { "tool": "bash", "action": "deny", "when": { "pathPattern": "rm -rf" } },
    { "tool": "*", "action": "confirm" }
  ]
}
```

| Field | Description |
|-------|-------------|
| `tool` | Tool name, or `*` for all tools |
| `action` | `auto-approve` (skip confirmation), `deny` (block), `confirm` (ask user) |
| `when.dangerLevel` | Only match when danger level is `safe`, `write`, or `destructive` |
| `when.pathPattern` | Substring match against tool's `path` or `command` argument |

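First-match-wins evaluation behaves like a shell `case` statement: the first matching branch decides, and later branches are ignored. A sketch — the `decide` helper is hypothetical, not part of aicli:

```bash
# Hypothetical model of the rule matcher: first matching branch decides the action.
decide() {
  case "$1" in
    read_file|list_dir) echo "auto-approve" ;;  # earlier, specific rules win
    *)                  echo "confirm"      ;;  # the "*" catch-all matches last
  esac
}
decide read_file    # prints: auto-approve
decide write_file   # prints: confirm
```

Because evaluation stops at the first match, put specific rules before the `*` catch-all.
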
**Recommended minimal config** — auto-approve all read-only tools to reduce y/N prompts:

```json
{
  "permissionRules": [
    { "tool": "read_file", "action": "auto-approve" },
    { "tool": "list_dir", "action": "auto-approve" },
    { "tool": "grep_files", "action": "auto-approve" },
    { "tool": "glob_files", "action": "auto-approve" },
    { "tool": "web_fetch", "action": "auto-approve" },
    { "tool": "write_todos", "action": "auto-approve" },
    { "tool": "ask_user", "action": "auto-approve" },
    { "tool": "run_tests", "action": "auto-approve" }
  ]
}
```

### Environment Variables

Environment variables take precedence over config file values:

| Variable | Description |
|----------|-------------|
| `AICLI_API_KEY_CLAUDE` | Claude API Key |
| `AICLI_API_KEY_GEMINI` | Gemini API Key |
| `AICLI_API_KEY_DEEPSEEK` | DeepSeek API Key |
| `AICLI_API_KEY_OPENAI` | OpenAI API Key |
| `AICLI_API_KEY_OPENROUTER` | OpenRouter API Key |
| `AICLI_API_KEY_ZHIPU` | Zhipu API Key |
| `AICLI_API_KEY_KIMI` | Kimi API Key |
| `AICLI_PROVIDER` | Default provider ID |
| `AICLI_NO_STREAM` | Set to `1` to disable streaming |
| `HTTPS_PROXY` / `HTTP_PROXY` | Proxy URL |

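The precedence rule can be sketched with standard shell parameter expansion; `config_provider` stands in for the value that would be read from `~/.aicli/config.json`:

```bash
config_provider="deepseek"                      # stand-in for the config-file value
provider="${AICLI_PROVIDER:-$config_provider}"  # env var, when set, takes priority
echo "$provider"
```

Setting a variable for a single invocation (e.g. `AICLI_PROVIDER=claude aicli`) therefore overrides the config without editing it.
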
### Hierarchical Context Files

ai-cli automatically discovers and injects context files into the system prompt:

| Layer | Path | Purpose |
|-------|------|---------|
| Global | `~/.aicli/AICLI.md` | Personal preferences across all projects |
| Project | `<git-root>/AICLI.md` | Project rules (commit to git for team sharing) |
| Subdirectory | `<cwd>/AICLI.md` | Directory-specific instructions |

Also supports `CLAUDE.md` as an alternative filename at each layer.

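A context file is plain Markdown. For example, creating a project-level file at the repo root (the rules shown are purely illustrative):

```bash
# Create a project-level context file; its contents are injected into the system prompt.
cat > AICLI.md <<'EOF'
# Project rules
- Use TypeScript strict mode
- Run `npm test` before committing
EOF
grep -c '^-' AICLI.md   # two rule lines
```

Commit the file so the whole team shares the same project rules.
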
### MCP Integration

Connect external [MCP](https://modelcontextprotocol.io/) servers for dynamic tool discovery. Configuration is compatible with Claude Desktop format:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path"],
      "timeout": 30000
    }
  }
}
```

Project-level `.mcp.json` files are also supported and automatically merged with global config.

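A project-level file uses the same shape as the global `mcpServers` block above; for example (the server name and args are placeholders):

```bash
# Drop a .mcp.json at the project root; same schema as the global config block.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
EOF
grep -q '"mcpServers"' .mcp.json && echo "ok"
```
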
## Web UI Features

The Web UI (`aicli web`) provides a full-featured browser interface:

- **Multi-Tab Sessions** — parallel conversations in separate browser tabs
- **File Tree Panel** — browse project files, click to insert `@path` references
- **Image Upload** — drag & drop or Ctrl+V paste images into chat
- **Prompt Templates** — CRUD with tags, search, import/export
- **8 Themes** — DaisyUI themes with code highlight auto-sync
- **Diff Syntax Highlighting** — colored diff in tool confirm dialogs
- **Keyboard Shortcuts** — `Esc` stop, `Ctrl+L` clear, `↑↓` history
- **Export** — `/export md` or `/export json` browser download
- **PWA** — installable as desktop/mobile app
- **LAN Access** — `--host 0.0.0.0` for phone/tablet access
- **Multi-User Auth** — password authentication with per-user data isolation
- **Auto-Reconnect** — heartbeat + exponential backoff reconnection

## Testing

```bash
npm test            # Run all 396 tests
npm run test:watch  # Watch mode
```

26 test suites covering: authentication, sessions, tool types & danger levels, permissions, output truncation, diff rendering, edit-file similarity, error hierarchy, config management, env loading, provider registry, web-fetch, grep-files, hub renderer, hub discussion, hub presets, dev-state, token estimator, tool registry budget, parallel tool execution, cost tracker, session tool history.

## Documentation

- [Chinese README](README.zh-CN.md) — 中文说明文档

## License

[MIT](LICENSE)