snow-ai 0.5.10 → 0.5.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,73 +1,53 @@
1
- # Snow CLI
2
-
3
- [![Version](https://img.shields.io/npm/v/snow-ai)](https://www.npmjs.com/package/snow-ai)
4
- [![License](https://img.shields.io/github/license/MayDay-wpf/snow-cli)](https://github.com/MayDay-wpf/snow-cli/blob/main/LICENSE)
5
-
6
1
  <div align="center">
2
+
7
3
  <img src="docs/images/logo.png" alt="Snow AI CLI Logo" width="200"/>
8
4
 
9
- # Snow CLI
5
+ # snow-ai
10
6
 
11
7
  **English** | [中文](README_zh.md)
12
8
 
9
+ **QQ Group**: 910298558
10
+
13
11
  _An intelligent AI-powered CLI tool for developers_
14
12
 
15
- **QQ群**: 910298558
16
13
  </div>
17
14
 
15
+ ⚠️ Notice: If you want to use Snow with domestic (China-based) Claude Code or Codex relays, please [click here](#access-domestic-claude-code--codex-relay-snow-configuration) for the corresponding configuration methods.
18
16
 
17
+ ## Thanks 💖
19
18
 
20
- ## 🚀 Why Snow CLI?
21
-
22
- - **🎯 Multi-Model Support**: Compatible with OpenAI, Anthropic, Gemini, and any OpenAI-compatible APIs
23
- - **🔧 Built-in Tools**: File operations, shell commands, web fetching, and search capabilities
24
- - **🔌 Extensible**: MCP (Model Context Protocol) support for custom integrations
25
- - **💻 Terminal-First**: Designed for developers who live in the command line
26
- - **🛡️ Open Source**: Fully open source and community-driven
27
- - **📦 IDE Integration**: VSCode and JetBrains plugins for seamless workflow
28
-
29
-
30
- ## 📋 Key Features
31
-
32
- ### Code Understanding & Generation
33
-
34
- - Query and edit large codebases with AI assistance
35
- - Generate new applications from natural language descriptions
36
- - Debug issues and troubleshoot with intelligent suggestions
37
- - Multi-file context awareness for better code understanding
38
-
39
- ### Automation & Integration
19
+ [@NyxJae](https://github.com/NyxJae), [@Flutter233PM](https://github.com/Flutter233PM), [@yy1588133](https://github.com/yy1588133), [@zfb132](https://github.com/zfb132), [@JillVernus](https://github.com/JillVernus), [@zhu-jl18](https://github.com/zhu-jl18), [@kingsword09](https://github.com/kingsword09), [@zcg](https://github.com/zcg)
40
20
 
41
- - Automate operational tasks with AI-powered workflows
42
- - Use MCP servers to connect new capabilities and tools
43
- - Run non-interactively in scripts for workflow automation
44
- - IDE integration for seamless development experience
21
+ ## Table of Contents
45
22
 
46
- ### Advanced Capabilities
47
-
48
- - **Multiple Configuration Profiles**: Switch between different API and model configurations
49
- - **Conversation Checkpointing**: Save and resume complex sessions with `/resume`
50
- - **Custom System Prompts**: Tailor AI behavior for your specific needs
51
- - **File Snapshots**: Automatic rollback capability for AI-made changes
52
- - **Yolo Mode**: Unattended execution for trusted operations
53
- - **Token Caching**: Optimize token usage with intelligent caching
23
+ - [Detailed and complete documentation](docs/usage/en/0.Catalogue.md)
24
+ - [System Requirements](#system-requirements)
25
+ - [Installation](#installation)
26
+ - [Access Domestic Claude Code & Codex Relay](#access-domestic-claude-code--codex-relay-snow-configuration)
27
+ - [API & Model Settings](#api--model-settings)
28
+ - [Proxy & Browser Settings](#proxy--browser-settings)
29
+ - [System Prompt Settings](#system-prompt-settings)
30
+ - [Custom Headers Settings](#custom-headers-settings)
31
+ - [MCP Settings](#mcp-settings)
32
+ - [Getting Started](#getting-started)
33
+ - [Snow System Files](#snow-system-files)
54
34
 
35
+ ---
55
36
 
56
- ## 📦 Installation
37
+ # System Requirements
57
38
 
58
- ### Pre-requisites
39
+ Prerequisites for installing Snow:
59
40
 
60
- - Node.js version 16 or higher
41
+ - **Node.js >= 16.x** (requires ES2020 feature support)
61
42
  - npm >= 8.3.0
62
- - macOS, Linux, or Windows
63
43
 
64
- ### Check Your Node.js Version
44
+ Check your Node.js version:
65
45
 
66
46
  ```bash
67
47
  node --version
68
48
  ```
69
49
 
70
- If your version is below 16.x, please upgrade:
50
+ If your version is below 16.x, please upgrade first:
71
51
 
72
52
  ```bash
73
53
  # Using nvm (recommended)
@@ -78,230 +58,258 @@ nvm use 16
78
58
  # https://nodejs.org/
79
59
  ```
80
60
 
81
- ### Quick Install
61
+ # Installation
62
+
63
+ ## Install Snow CLI
82
64
 
83
- #### Install globally with npm
65
+ You can install directly using npm:
84
66
 
85
67
  ```bash
86
68
  npm install -g snow-ai
87
69
  ```
88
70
 
89
- #### Build from source
71
+ Or visit the [Official Repository](https://github.com/MayDay-wpf/snow-cli) to build from source; quick clone commands:
90
72
 
91
73
  ```bash
92
- git clone https://github.com/MayDay-wpf/snow-cli
74
+ git clone https://github.com/MayDay-wpf/snow-cli.git
93
75
  cd snow-cli
94
76
  npm install
95
- npm run link # builds and globally links `snow`
77
+ npm run link # builds and globally links snow
96
78
  # to remove the link later: npm run unlink
97
79
  ```
98
80
 
99
- ### IDE Extensions
100
-
101
- #### VSCode Extension
81
+ ## Install VSCode Extension
102
82
 
103
83
  - Download [snow-cli-x.x.x.vsix](https://github.com/MayDay-wpf/snow-cli/releases/tag/vsix)
104
84
  - Open VSCode, click `Extensions` → `Install from VSIX...` → select the downloaded file
105
85
 
106
- #### JetBrains Plugin
86
+ ## Install JetBrains Plugin
107
87
 
108
- - Download [JetBrains plugins](https://github.com/MayDay-wpf/snow-cli/releases/tag/jetbrains)
88
+ - Download [JetBrains plugin](https://github.com/MayDay-wpf/snow-cli/releases/tag/jetbrains)
109
89
  - Follow JetBrains plugin installation instructions
110
90
 
91
+ ## Available Commands
111
92
 
112
- ## 🚀 Quick Start
93
+ * Start: `$ snow`
94
+ * Update: `$ snow --update`
95
+ * Version query: `$ snow --version`
96
+ * Restore the latest conversation history: `$ snow -c`
97
+ * Headless mode: `$ snow --ask "Hello"`
98
+ * Start with Yolo mode enabled by default: `$ snow --yolo`
99
+ * Restore the most recent conversation and enable yolo: `$ snow --c-yolo`
100
+ * Asynchronous task: `$ snow --task "Hello"`
101
+ * Asynchronous task panel: `$ snow --task-list`
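
Headless mode (`snow --ask`, from the list above) composes well in scripts. A minimal sketch; the `ask_snow` wrapper is hypothetical, and the guard keeps the script usable on machines where snow-ai is not installed:

```bash
# Hypothetical wrapper around headless mode; falls back to a
# message when snow-ai is not installed.
ask_snow() {
  if command -v snow >/dev/null 2>&1; then
    snow --ask "$1"
  else
    echo "snow not found, skipping prompt: $1"
  fi
}

ask_snow "Summarize the latest changes in this repository"
```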
113
102
 
114
- After install snow and Extension/plugin, start Snow CLI in terminal.
103
+ # API & Model Settings
115
104
 
116
- ### Basic Usage
105
+ For detailed configuration instructions, please refer to: [First Configuration Documentation](docs/usage/zh/2.首次配置.md)
117
106
 
118
- #### Start in current directory
107
+ ![API & Model Settings in CLI](docs/images/image.png)
119
108
 
120
- ```bash
121
- snow
122
- ```
109
+ # Proxy & Browser Settings
123
110
 
124
- #### Update to latest version
111
+ Configure the system proxy port and search engine; usually no changes are needed.
125
112
 
126
- ```bash
127
- snow --update
128
- ```
113
+ ![Proxy & Browser Settings in CLI](docs/images/image-1.png)
129
114
 
130
- #### Check version
115
+ # System Prompt Settings
131
116
 
132
- ```bash
133
- snow --version
134
- ```
117
+ Customize system prompts to supplement Snow's built-in prompts.
135
118
 
136
- #### Resume latest conversation (fully compatible with Claude Code)
119
+ # Custom Headers Settings
137
120
 
138
- ```bash
139
- snow -c
140
- ```
121
+ Add custom request headers, see [Relay Access Configuration](#access-domestic-claude-code--codex-relay-snow-configuration) for details.
141
122
 
123
+ # MCP Settings
142
124
 
143
- ## 🔐 API & Model Configuration
125
+ Configure MCP services; the JSON format is compatible with Cursor's.
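
As an illustration, a Cursor-style `mcp-config.json` might look like the sketch below; the server name, command, and path are placeholders, not defaults shipped with Snow:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```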
144
126
 
145
- Snow CLI supports multiple AI providers and allows you to save multiple configuration profiles. From `v0.3.2` onward the bundled vendor SDKs were removed to keep the tool lightweight, so everything is configured through API & Model Settings.
127
+ # Getting Started
146
128
 
147
- ### Configuration Options
129
+ After startup, click **Start** to enter the conversation interface.
148
130
 
149
- After starting Snow CLI, enter `API & Model Settings` to configure:
131
+ ![IDE Connected notification](docs/images/image-2.png)
150
132
 
151
- ![API & Model Settings](docs/images/image.png)
133
+ ## Main Features
152
134
 
153
- - **Profile**: Switch or create new configurations for different API setups
154
- - **Base URL**: Request endpoint for your AI provider
155
- - OpenAI/Anthropic: Requires `/v1` suffix
156
- - Gemini: Requires `/v1beta` suffix
157
- - **API Key**: Your API key for authentication
158
- - **Request Method**: Choose:
159
- - Chat Completions - OpenAI-Compatible API
160
- - Responses - OpenAI's Responses API (Codex CLI)
161
- - Gemini - Google Gemini API
162
- - Anthropic - Anthropic Claude API
163
- - **Anthropic Beta**: Enable beta features for Anthropic requests
164
- - **Model Configuration**:
165
- - **Advanced Model**: High-performance model for complex tasks
166
- - **Basic Model**: Smaller model for summarization
167
- - **Compact Model**: Efficient model for context compression
168
- - All three model slots share the configured Base URL and API Key. Snow auto-fetches available models from the `/models` endpoint (with filtering); use Manual Input to specify a model name when the provider’s list is incomplete.
169
- - **Max Context Tokens**: Model's maximum context window (e.g., 1000000 for Gemini). This only affects UI calculations for context percentage and does not change the actual model context.
170
- - **Max Tokens**: Maximum tokens per response (added to API requests)
135
+ - **File Selection**: Use `@` to select files
136
+ - **Slash Commands**: Use `/` to view available commands
137
+ - **Keyboard Shortcuts**:
138
+ - `Alt+V` (Windows) / `Ctrl+V` (macOS/Linux) - Paste image
139
+ - `Ctrl+L` / `Ctrl+R` - Clear input
140
+ - `Shift+Tab` - Toggle Yolo mode
141
+ - `ESC` - Stop generation
142
+ - Double-click `ESC` - Rollback conversation
171
143
 
144
+ ## Token Statistics
172
145
 
173
- ## 🚀 Getting Started
146
+ Real-time Token usage is displayed below the input box.
174
147
 
175
- After configuring, click **Start** to open the conversation view. When launched from VSCode or other editors, Snow automatically connects to the IDE via the Snow CLI plugin and shows a connection message.
148
+ ![Token usage diagram](docs/images/image-3.png)
176
149
 
177
- ![Conversation](docs/images/image-2.png)
150
+ # Snow System Files
178
151
 
152
+ All configuration files are located in the `.snow` folder in the user directory.
179
153
 
180
- ## 📚 Core Features
154
+ ![Configuration files overview](docs/images/image-4.png)
181
155
 
182
- ### File Selection & Commands
156
+ ```
157
+ .snow/
158
+ ├── log/ # Runtime logs (local, can be deleted)
159
+ ├── profiles/ # Configuration files
160
+ ├── sessions/ # Conversation history
161
+ ├── snapshots/ # File snapshots
162
+ ├── todo/ # TODO lists
163
+ ├── active-profile.txt # Current configuration
164
+ ├── config.json # API configuration
165
+ ├── custom-headers.json # Custom request headers
166
+ ├── mcp-config.json # MCP configuration
167
+ └── system-prompt.txt # Custom prompts
168
+ ```
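
A quick way to inspect this state from a shell, assuming the standard location shown in the tree above:

```bash
# Inspect Snow's local state; safe to run even before first launch.
SNOW_DIR="$HOME/.snow"
if [ -d "$SNOW_DIR" ]; then
  ls "$SNOW_DIR"
else
  echo "No $SNOW_DIR yet; it is created the first time snow runs."
fi
```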
183
169
 
184
- - **File Selection**: Use `@` to select files for context
185
- - In VSCode: Hold `Shift` and drag files for quick selection
186
- - **Slash Commands**: Use `/` to access built-in commands
187
- - `/init` - Build project documentation `AGENTS.md`
188
- - `/clear` - Create a new session
189
- - `/resume` - Restore conversation history
190
- - `/mcp` - Check MCP connection status and reconnect
191
- - `/yolo` - Unattended mode (auto-approve all tool calls; use with caution)
192
- - `/ide` - Manually connect to IDE
193
- - `/compact` - Compress context (use sparingly)
170
+ # Access Domestic Claude Code & Codex Relay (Snow Configuration)
194
171
 
195
- ### Keyboard Shortcuts
172
+ Relay service providers often block third-party clients, so you need to configure a custom system prompt and custom request headers in Snow so that its requests look like they come from the official client:
196
173
 
197
- - **Windows**: `Alt+V` - Paste image
198
- - **macOS/Linux**: `Ctrl+V` - Paste image (with prompt)
199
- - `Ctrl+L` - Clear input from cursor to left
200
- - `Ctrl+R` - Clear input from cursor to right
201
- - `Shift+Tab` - Toggle Yolo mode
202
- - `ESC` - Stop AI generation
203
- - **Double-click `ESC`** - Rollback conversation with file checkpoints
174
+ ## Claude Code
204
175
 
205
- ### Token Usage Display
176
+ Set the custom system prompt (**Note: it must match exactly, with no extra or missing characters**); open the settings screen shown below, then copy and replace:
206
177
 
207
- The input area shows real-time token statistics:
208
- - Context usage percentage
209
- - Total token count
210
- - Cache hit tokens
211
- - Cache creation tokens
178
+ ```
179
+ You are Claude Code, Anthropic's official CLI for Claude.
180
+ ```
212
181
 
213
- ![Token Usage](docs/images/image-3.png)
182
+ ![Entry diagram 1](docs/images/image-5.png)
214
183
 
184
+ In addition, you need to add the following custom request headers:
215
185
 
216
- ## 🔧 Advanced Configuration
186
+ ```json
187
+ {
188
+ "anthropic-beta": "claude-code-20250219,fine-grained-tool-streaming-2025-05-14",
189
+ "anthropic-dangerous-direct-browser-access": "true",
190
+ "anthropic-version": "2023-06-01",
191
+ "user-agent": "claude-cli/2.0.22 (external, cli)",
192
+ "x-app": "cli"
193
+ }
194
+ ```
217
195
 
218
- ### Proxy & Browser Settings
196
+ ![Entry diagram 2](docs/images/image-6.png)
219
197
 
220
- Configure system proxy and search engine preferences:
221
- - Automatic system proxy detection (usually no changes needed)
222
- - Browser selection for web search (Edge/Chrome auto-detected unless you changed installation paths)
223
- - Custom proxy port configuration
198
+ ## Codex
224
199
 
225
- ![Proxy & Browser Settings](docs/images/image-1.png)
200
+ Codex relays generally do not require header configuration. As with Claude Code, replace the system prompt with the following (**Note: it must match exactly, with no extra or missing characters**):
226
201
 
227
- ### Custom System Prompts
202
+ ```markdown
203
+ You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.
228
204
 
229
- Customize AI behavior with your own system prompts:
230
- - Supplements (does not replace) Snow's built-in prompt; the default prompt is downgraded to a user message and appended to your first user message
231
- - Opens the system text editor for editing (Notepad on Windows; default terminal editor on macOS/Linux)
232
- - Requires restart after saving (shows: `Custom system prompt saved successfully! Please use 'snow' to restart!`)
205
+ ## General
233
206
 
234
- ### Custom Headers
207
+ - The arguments to `shell` will be passed to execvp(). Most terminal commands should be prefixed with ["bash", "-lc"].
208
+ - Always set the `workdir` param when using the shell function. Do not use `cd` unless absolutely necessary.
209
+ - When searching for text or files, prefer using `rg` or `rg --files` respectively because `rg` is much faster than alternatives like `grep`. (If the `rg` command is not found, then use alternatives.)
235
210
 
236
- Add custom HTTP headers to API requests:
237
- - Extends default headers (cannot override built-in headers)
238
- - Useful for custom authentication or routing
211
+ ## Editing constraints
239
212
 
240
- ### MCP Configuration
213
+ - Default to ASCII when editing or creating files. Only introduce non-ASCII or other Unicode characters when there is a clear justification and the file already uses them.
214
+ - Add succinct code comments that explain what is going on if code is not self-explanatory. You should not add comments like "Assigns the value to the variable", but a brief comment might be useful ahead of a complex code block that the user would otherwise have to spend time parsing out. Usage of these comments should be rare.
215
+ - Try to use apply_patch for single file edits, but it is fine to explore other options to make the edit if it does not work well. Do not use apply_patch for changes that are auto-generated (i.e. generating package.json or running a lint or format command like gofmt) or when scripting is more efficient (such as search and replacing a string across a codebase).
216
+ - You may be in a dirty git worktree.
217
+ - NEVER revert existing changes you did not make unless explicitly requested, since these changes were made by the user.
218
+ - If asked to make a commit or code edits and there are unrelated changes to your work or changes that you didn't make in those files, don't revert those changes.
219
+ - If the changes are in files you've touched recently, you should read carefully and understand how you can work with the changes rather than reverting them.
220
+ - If the changes are in unrelated files, just ignore them and don't revert them.
221
+ - While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.
222
+ - **NEVER** use destructive commands like `git reset --hard` or `git checkout --` unless specifically requested or approved by the user.
241
223
 
242
- Configure Model Context Protocol servers:
243
- - JSON format compatible with Cursor
244
- - Extends Snow CLI with custom tools and capabilities
245
- - Same editing workflow as system prompts
224
+ ## Plan tool
246
225
 
226
+ When using the planning tool:
247
227
 
248
- ## 📁 Snow System Files
228
+ - Skip using the planning tool for straightforward tasks (roughly the easiest 25%).
229
+ - Do not make single-step plans.
230
+ - When you made a plan, update it after having performed one of the sub-tasks that you shared on the plan.
249
231
 
250
- All Snow CLI files are stored in `~/.snow/`:
232
+ ## Codex CLI harness, sandboxing, and approvals
251
233
 
252
- ```
253
- .snow/
254
- ├── log/ # Runtime logs (local only, safe to delete)
255
- ├── profiles/ # Multiple API/model configurations
256
- ├── sessions/ # Conversation history for /resume
257
- ├── snapshots/ # File backups for rollback
258
- ├── todo/ # Persisted todo lists
259
- ├── active-profile.txt # Current active profile
260
- ├── config.json # Main API configuration
261
- ├── custom-headers.json # Custom request headers
262
- ├── mcp-config.json # MCP service configuration
263
- └── system-prompt.txt # Custom system prompt
264
- ```
234
+ The Codex CLI harness supports several different configurations for sandboxing and escalation approvals that the user can choose from.
265
235
 
266
- ![Snow Files](docs/images/image-4.png)
236
+ Filesystem sandboxing defines which files can be read or written. The options for `sandbox_mode` are:
267
237
 
268
- ### File Management
238
+ - **read-only**: The sandbox only permits reading files.
239
+ - **workspace-write**: The sandbox permits reading files, and editing files in `cwd` and `writable_roots`. Editing files in other directories requires approval.
240
+ - **danger-full-access**: No filesystem sandboxing - all commands are permitted.
269
241
 
270
- - **Logs**: Local-only runtime logs; safe to delete for cleanup
271
- - **Sessions**: Stored locally and required for conversation history features like `/resume`
272
- - **Snapshots**: Automatic file checkpoints that enable rollback functionality
273
- - **Todo**: Persists tasks so they survive unexpected exits
242
+ Network sandboxing defines whether network can be accessed without approval. Options for `network_access` are:
274
243
 
244
+ - **restricted**: Requires approval
245
+ - **enabled**: No approval needed
275
246
 
276
- ## 🤝 Contributing
247
+ Approvals are your mechanism to get user consent to run shell commands without the sandbox. Possible configuration options for `approval_policy` are
277
248
 
278
- We welcome contributions! Snow CLI is fully open source, and we encourage the community to:
249
+ - **untrusted**: The harness will escalate most commands for user approval, apart from a limited allowlist of safe "read" commands.
250
+ - **on-failure**: The harness will allow all commands to run in the sandbox (if enabled), and failures will be escalated to the user for approval to run again without the sandbox.
251
+ - **on-request**: Commands will be run in the sandbox by default, and you can specify in your tool call if you want to escalate a command to run without sandboxing. (Note that this mode is not always available. If it is, you'll see parameters for it in the `shell` command description.)
252
+ - **never**: This is a non-interactive mode where you may NEVER ask the user for approval to run commands. Instead, you must always persist and work around constraints to solve the task for the user. You MUST do your utmost best to finish the task and validate your work before yielding. If this mode is paired with `danger-full-access`, take advantage of it to deliver the best outcome for the user. Further, in this mode, your default testing philosophy is overridden: Even if you don't see local patterns for testing, you may add tests and scripts to validate your work. Just remove them before yielding.
279
253
 
280
- - Report bugs and suggest features
281
- - Improve documentation
282
- - Submit code improvements
283
- - Share your MCP servers and extensions
254
+ When you are running with `approval_policy == on-request`, and sandboxing enabled, here are scenarios where you'll need to request approval:
284
255
 
285
- Visit our [GitHub repository](https://github.com/MayDay-wpf/snow-cli) to get started.
256
+ - You need to run a command that writes to a directory that requires it (e.g. running tests that write to /var)
257
+ - You need to run a GUI app (e.g., open/xdg-open/osascript) to open browsers or files.
258
+ - You are running sandboxed and need to run a command that requires network access (e.g. installing packages)
259
+ - If you run a command that is important to solving the user's query, but it fails because of sandboxing, rerun the command with approval. ALWAYS proceed to use the `with_escalated_permissions` and `justification` parameters - do not message the user before requesting approval for the command.
260
+ - You are about to take a potentially destructive action such as an `rm` or `git reset` that the user did not explicitly ask for
261
+ - (for all of these, you should weigh alternative paths that do not require approval)
286
262
 
263
+ When `sandbox_mode` is set to read-only, you'll need to request approval for any command that isn't a read.
287
264
 
288
- ## 📖 Resources
265
+ You will be told what filesystem sandboxing, network sandboxing, and approval mode are active in a developer or user message. If you are not told about this, assume that you are running with workspace-write, network sandboxing enabled, and approval on-failure.
289
266
 
290
- - **[GitHub Repository](https://github.com/MayDay-wpf/snow-cli)** - Source code and issues
291
- - **[NPM Package](https://www.npmjs.com/package/snow-ai)** - Package registry
292
- - **[Releases](https://github.com/MayDay-wpf/snow-cli/releases)** - Download IDE extensions
267
+ Although they introduce friction to the user because your work is paused until the user responds, you should leverage them when necessary to accomplish important work. If the completing the task requires escalated permissions, Do not let these settings or the sandbox deter you from attempting to accomplish the user's task unless it is set to "never", in which case never ask for approvals.
293
268
 
294
- ## 📊 Star History
269
+ When requesting approval to execute a command that will require escalated privileges:
295
270
 
296
- [![Star History Chart](https://api.star-history.com/svg?repos=MayDay-wpf/snow-cli&type=Date)](https://star-history.com/#MayDay-wpf/snow-cli&Date)
271
+ - Provide the `with_escalated_permissions` parameter with the boolean value true
272
+ - Include a short, 1 sentence explanation for why you need to enable `with_escalated_permissions` in the justification parameter
297
273
 
298
- ## 📄 Legal
274
+ ## Special user requests
299
275
 
300
- - **License**: Open source (License Type TBC)
301
- - **Privacy**: All data stored locally, no telemetry
276
+ - If the user makes a simple request (such as asking for the time) which you can fulfill by running a terminal command (such as `date`), you should do so.
277
+ - If the user asks for a "review", default to a code review mindset: prioritise identifying bugs, risks, behavioural regressions, and missing tests. Findings must be the primary focus of the response - keep summaries or overviews brief and only after enumerating the issues. Present findings first (ordered by severity with file/line references), follow with open questions or assumptions, and offer a change-summary only as a secondary detail. If no findings are discovered, state that explicitly and mention any residual risks or testing gaps.
302
278
 
303
- ---
279
+ ## Presenting your work and final message
304
280
 
305
- <p align="center">
306
- Built with ❤️ by the open source community
307
- </p>
281
+ You are producing plain text that will later be styled by the CLI. Follow these rules exactly. Formatting should make results easy to scan, but not feel mechanical. Use judgment to decide how much structure adds value.
282
+
283
+ - Default: be very concise; friendly coding teammate tone.
284
+ - Ask only when needed; suggest ideas; mirror the user's style.
285
+ - For substantial work, summarize clearly; follow final-answer formatting.
286
+ - Skip heavy formatting for simple confirmations.
287
+ - Don't dump large files you've written; reference paths only.
288
+ - No "save/copy this file" - User is on the same machine.
289
+ - Offer logical next steps (tests, commits, build) briefly; add verify steps if you couldn't do something.
290
+ - For code changes:
291
+ - Lead with a quick explanation of the change, and then give more details on the context covering where and why a change was made. Do not start this explanation with "summary", just jump right in.
292
+ - If there are natural next steps the user may want to take, suggest them at the end of your response. Do not make suggestions if there are no natural next steps.
293
+ - When suggesting multiple options, use numeric lists for the suggestions so the user can quickly respond with a single number.
294
+ - The user does not command execution outputs. When asked to show the output of a command (e.g. `git show`), relay the important details in your answer or summarize the key lines so the user understands the result.
295
+
296
+ ### Final answer structure and style guidelines
297
+
298
+ - Plain text; CLI handles styling. Use structure only when it helps scanability.
299
+ - Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
300
+ - Bullets: use - ; merge related points; keep to one line when possible; 4–6 per list ordered by importance; keep phrasing consistent.
301
+ - Monospace: backticks for commands/paths/env vars/code ids and inline examples; use for literal keyword bullets; never combine with \*\*.
302
+ - Code samples or multi-line snippets should be wrapped in fenced code blocks; include an info string as often as possible.
303
+ - Structure: group related bullets; order sections general → specific → supporting; for subsections, start with a bolded keyword bullet, then items; match complexity to the task.
304
+ - Tone: collaborative, concise, factual; present tense, active voice; self-contained; no "above/below"; parallel wording.
305
+ - Don'ts: no nested bullets/hierarchies; no ANSI codes; don't cram unrelated keywords; keep keyword lists short—wrap/reformat if long; avoid naming formatting styles in answers.
306
+ - Adaptation: code explanations → precise, structured with code refs; simple tasks → lead with outcome; big changes → logical walkthrough + rationale + next actions; casual one-offs → plain sentences, no headers/bullets.
307
+ - File References: When referencing files in your response, make sure to include the relevant start line and always follow the below rules:
308
+ - Use inline code to make file paths clickable.
309
+ - Each reference should have a stand alone path. Even if it's the same file.
310
+ - Accepted: absolute, workspace-relative, a/ or b/ diff prefixes, or bare filename/suffix.
311
+ - Line/column (1-based, optional): :line[:column] or #Lline[Ccolumn] (column defaults to 1).
312
+ - Do not use URIs like file://, vscode://, or https://.
313
+ - Do not provide range of lines
314
+ - Examples: src/app.ts, src/app.ts:42, b/server/index.js#L10, C:\repo\project\main.rs:12:5
315
+ ```
package/bundle/cli.mjs CHANGED
@@ -88873,12 +88873,21 @@ function Menu({ options: options3, onSelect, onSelectionChange, maxHeight, defau
88873
88873
  setSelectedIndex(newIndex);
88874
88874
  setScrollOffset(getInitialScrollOffset(newIndex, visibleItemCount));
88875
88875
  }, [defaultIndex, options3.length, visibleItemCount]);
88876
+ const onSelectionChangeRef = import_react59.default.useRef(onSelectionChange);
88877
+ import_react59.default.useEffect(() => {
88878
+ onSelectionChangeRef.current = onSelectionChange;
88879
+ }, [onSelectionChange]);
88876
88880
  import_react59.default.useEffect(() => {
88877
88881
  const currentOption = options3[selectedIndex];
88878
- if (onSelectionChange && (currentOption == null ? void 0 : currentOption.infoText)) {
88879
- onSelectionChange(currentOption.infoText, currentOption.value);
88882
+ if (onSelectionChangeRef.current && (currentOption == null ? void 0 : currentOption.infoText)) {
88883
+ const handle = setImmediate(() => {
88884
+ var _a21;
88885
+ (_a21 = onSelectionChangeRef.current) == null ? void 0 : _a21.call(onSelectionChangeRef, currentOption.infoText, currentOption.value);
88886
+ });
88887
+ return () => clearImmediate(handle);
88880
88888
  }
88881
- }, [selectedIndex, options3, onSelectionChange]);
88889
+ return void 0;
88890
+ }, [selectedIndex, options3]);
88882
88891
  import_react59.default.useEffect(() => {
88883
88892
  if (selectedIndex < scrollOffset) {
88884
88893
  setScrollOffset(selectedIndex);
@@ -438111,7 +438120,7 @@ Tip: Consider breaking down the operation into smaller chunks or filtering the d
438111
438120
  return { isValid: true, tokenCount: estimatedTokens };
438112
438121
  }
438113
438122
  }
438114
- async function wrapToolResultWithTokenLimit(result2, toolName, maxTokens = 5e4) {
438123
+ async function wrapToolResultWithTokenLimit(result2, toolName, maxTokens = 1e5) {
438115
438124
  const validation = await validateTokenLimit(result2, maxTokens);
438116
438125
  if (!validation.isValid) {
438117
438126
  throw new Error(`Tool "${toolName}" returned content that exceeds token limit.
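
The hunk above raises the default cap in `wrapToolResultWithTokenLimit` from `5e4` to `1e5` tokens. A simplified, self-contained sketch of the same guard pattern (the real bundle estimates tokens asynchronously; the 4-characters-per-token heuristic here is illustrative):

```javascript
// Rough token estimate: ~4 characters per token (illustrative heuristic only).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Mirrors the shape of wrapToolResultWithTokenLimit, with the new 1e5 default:
// tool results that would blow the context budget are rejected with an error.
function wrapWithTokenLimit(result, toolName, maxTokens = 1e5) {
  if (estimateTokens(result) > maxTokens) {
    throw new Error(`Tool "${toolName}" returned content that exceeds token limit.`);
  }
  return result;
}

console.log(wrapWithTokenLimit('small result', 'demo-tool')); // passed through unchanged
```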
@@ -443667,14 +443676,21 @@ function WelcomeScreen({ version: version2 = "1.0.0", onMenuSelect, defaultMenuI
     }
   ], [t]);
   const [remountKey, setRemountKey] = (0, import_react76.useState)(0);
+  const optionsIndexMap = (0, import_react76.useMemo)(() => {
+    const map3 = /* @__PURE__ */ new Map();
+    menuOptions.forEach((opt, idx2) => {
+      map3.set(opt.value, idx2);
+    });
+    return map3;
+  }, [menuOptions]);
   const handleSelectionChange = (0, import_react76.useCallback)((newInfoText, value) => {
-    setInfoText(newInfoText);
-    const index = menuOptions.findIndex((opt) => opt.value === value);
-    if (index !== -1) {
+    setInfoText((prev) => prev === newInfoText ? prev : newInfoText);
+    const index = optionsIndexMap.get(value);
+    if (index !== void 0) {
       setCurrentMenuIndex(index);
       onMenuSelectionPersist == null ? void 0 : onMenuSelectionPersist(index);
     }
-  }, [menuOptions, onMenuSelectionPersist]);
+  }, [optionsIndexMap, onMenuSelectionPersist]);
   const handleInlineMenuSelect = (0, import_react76.useCallback)((value) => {
     const index = menuOptions.findIndex((opt) => opt.value === value);
     if (index !== -1) {
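The hunk above replaces a per-selection `findIndex` scan of `menuOptions` with a `useMemo`-built value-to-index `Map`, and guards `setInfoText` against redundant state updates. Stripped of React, the lookup pattern reduces to a plain memoized `Map` (a minimal sketch; the `menuOptions` data here is illustrative, not the CLI's real menu):

```javascript
// Build a value -> index map once, then resolve selections in O(1)
// instead of an O(n) findIndex scan on every selection change.
const menuOptions = [
  { value: "chat", infoText: "Start a chat" },
  { value: "config", infoText: "Edit settings" },
  { value: "exit", infoText: "Quit" }
];

const optionsIndexMap = new Map(menuOptions.map((opt, idx) => [opt.value, idx]));

function resolveIndex(value) {
  const index = optionsIndexMap.get(value);
  // Mirror the diff's `index !== void 0` guard: unknown values are ignored.
  return index !== undefined ? index : -1;
}

console.log(resolveIndex("config")); // 1
console.log(resolveIndex("missing")); // -1
```

In the real component the map is rebuilt only when `menuOptions` changes, which is what the `[menuOptions]` dependency array on `useMemo` expresses.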
@@ -445708,6 +445724,7 @@ function useKeyboardInput(options3) {
   const isPasting = (0, import_react82.useRef)(false);
   const inputStartCursorPos = (0, import_react82.useRef)(0);
   const isProcessingInput = (0, import_react82.useRef)(false);
+  const componentMountTime = (0, import_react82.useRef)(Date.now());
   (0, import_react82.useEffect)(() => {
     return () => {
       if (inputTimer.current) {
@@ -445727,7 +445744,13 @@ function useKeyboardInput(options3) {
     var _a21;
     if (disabled)
       return;
-    const focusEventPattern = /(\s|^)\[(?:I|O)(?=(?:\s|$|["'~\\\/]|[A-Za-z]:))/;
+    const timeSinceMount = Date.now() - componentMountTime.current;
+    if (timeSinceMount < 500) {
+      if (input2.includes("[I") || input2.includes("[O") || input2 === "\x1B[I" || input2 === "\x1B[O" || /^[\s\x1b\[IO]+$/.test(input2)) {
+        return;
+      }
+    }
+    const focusEventPattern = /(\s|^)\[(?:I|O)(?=(?:\s|$|["'~\\/]|[A-Za-z]:))/;
     if (
       // Complete escape sequences
       input2 === "\x1B[I" || input2 === "\x1B[O" || // Standalone sequences (exact match only)
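Terminals with focus reporting enabled emit `ESC [ I` (focus gained) and `ESC [ O` (focus lost); during startup these can arrive before the input handler is ready and leak into the prompt as literal text. The patch drops any focus-report-shaped input for the first 500 ms after mount. The filter can be sketched as a standalone predicate (assumptions: the mount timestamp is passed in as milliseconds, mirroring the `componentMountTime` ref in the diff):

```javascript
// Drop terminal focus-report sequences (ESC [ I / ESC [ O) that arrive
// during a short grace period after the component mounts.
const GRACE_MS = 500; // window used in the patch

function shouldDropInput(input, msSinceMount) {
  if (msSinceMount >= GRACE_MS) return false;
  return (
    input === "\x1b[I" ||
    input === "\x1b[O" ||
    input.includes("[I") ||
    input.includes("[O") ||
    // whole input is only whitespace / ESC / bracket / I / O characters
    /^[\s\x1b\[IO]+$/.test(input)
  );
}

console.log(shouldDropInput("\x1b[I", 100)); // true: focus-in during grace period
console.log(shouldDropInput("\x1b[I", 900)); // false: past the grace period
console.log(shouldDropInput("hello", 100)); // false: normal typing is kept
```

After the grace period the code falls back to the narrower `focusEventPattern` regex, so only unambiguous focus sequences are ever filtered from real typing.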
@@ -529234,6 +529257,7 @@ async function handleConversationWithTools(options3) {
   let toolCallAccumulator = "";
   let reasoningAccumulator = "";
   let chunkCount = 0;
+  let currentTokenCount = 0;
   const currentSession2 = sessionManager.getCurrentSession();
   const cacheKey = currentSession2 == null ? void 0 : currentSession2.id;
   const onRetry = (error, attempt, nextDelay) => {
@@ -529311,24 +529335,27 @@ async function handleConversationWithTools(options3) {
       }
       reasoningAccumulator += chunk2.delta;
       try {
-        const tokens2 = encoder.encode(streamedContent + toolCallAccumulator + reasoningAccumulator);
-        setStreamTokenCount(tokens2.length);
+        const deltaTokens = encoder.encode(chunk2.delta);
+        currentTokenCount += deltaTokens.length;
+        setStreamTokenCount(currentTokenCount);
       } catch (e) {
       }
     } else if (chunk2.type === "content" && chunk2.content) {
       setIsReasoning == null ? void 0 : setIsReasoning(false);
       streamedContent += chunk2.content;
       try {
-        const tokens2 = encoder.encode(streamedContent + toolCallAccumulator + reasoningAccumulator);
-        setStreamTokenCount(tokens2.length);
+        const deltaTokens = encoder.encode(chunk2.content);
+        currentTokenCount += deltaTokens.length;
+        setStreamTokenCount(currentTokenCount);
       } catch (e) {
       }
     } else if (chunk2.type === "tool_call_delta" && chunk2.delta) {
       setIsReasoning == null ? void 0 : setIsReasoning(false);
       toolCallAccumulator += chunk2.delta;
       try {
-        const tokens2 = encoder.encode(streamedContent + toolCallAccumulator + reasoningAccumulator);
-        setStreamTokenCount(tokens2.length);
+        const deltaTokens = encoder.encode(chunk2.delta);
+        currentTokenCount += deltaTokens.length;
+        setStreamTokenCount(currentTokenCount);
       } catch (e) {
       }
     } else if (chunk2.type === "tool_calls" && chunk2.tool_calls) {
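The hunk above changes how the live token counter is maintained: the old code re-encoded the entire accumulated text (`streamedContent + toolCallAccumulator + reasoningAccumulator`) on every chunk, which costs O(total length) per chunk and O(n²) over a long stream; the new code encodes only each delta and keeps a running total. The pattern, with a stand-in whitespace "encoder" replacing the real tokenizer (an assumption purely for illustration):

```javascript
// Stand-in encoder: splits on whitespace. The real code uses a tokenizer,
// but the incremental counting pattern is identical.
const encoder = { encode: (text) => text.split(/\s+/).filter(Boolean) };

// Instead of re-encoding everything accumulated so far on each chunk,
// encode only the new delta and accumulate its length.
let currentTokenCount = 0;

function onChunk(delta) {
  currentTokenCount += encoder.encode(delta).length;
  return currentTokenCount;
}

onChunk("hello world");          // running total: 2
onChunk("more tokens arriving"); // running total: 5
console.log(currentTokenCount);  // 5
```

One tradeoff worth noting: with a real subword tokenizer, a token that spans a chunk boundary can make the delta-based count drift slightly from a full re-encode, which is an acceptable trade for a live progress counter.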
@@ -529369,7 +529396,6 @@ async function handleConversationWithTools(options3) {
         }
       }
     }
-    setStreamTokenCount(0);
     if (controller.signal.aborted) {
       freeEncoder();
       break;
@@ -530983,7 +531009,7 @@ ${errorMsg}`,
       setCurrentModel: streamingState.setCurrentModel
     });
   } catch (error) {
-    if (!controller.signal.aborted) {
+    if (!controller.signal.aborted && !userInterruptedRef.current) {
       const errorMessage = error instanceof Error ? error.message : "Unknown error occurred";
       const finalMessage = {
         role: "assistant",
@@ -531154,7 +531180,7 @@ ${errorMsg}`,
       setCurrentModel: streamingState.setCurrentModel
     });
   } catch (error) {
-    if (!controller.signal.aborted) {
+    if (!controller.signal.aborted && !userInterruptedRef.current) {
       const errorMessage = error instanceof Error ? error.message : "Unknown error occurred";
       const finalMessage = {
         role: "assistant",
@@ -535180,8 +535206,10 @@ function ChatScreen({ autoResume, enableYolo }) {
     }
     if (key.escape && streamingState.isStreaming && streamingState.abortController && hasFocus) {
       userInterruptedRef.current = true;
-      streamingState.abortController.abort();
       streamingState.setRetryStatus(null);
+      streamingState.setCodebaseSearchStatus(null);
+      streamingState.setIsStreaming(false);
+      streamingState.abortController.abort();
       setMessages((prev) => prev.filter((msg) => !msg.toolPending));
     }
   });
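The Escape-key hunk above reorders the interrupt path: the `userInterruptedRef` flag is set and the streaming UI state is reset *before* `abort()` is called, so everything that runs in reaction to the abort already sees a non-streaming, user-interrupted state; the matching `catch` blocks elsewhere in the diff then suppress the "Unknown error occurred" message for user-initiated aborts. A self-contained sketch of that ordering, using plain state objects in place of the React refs and setters (names here are illustrative, not the CLI's real API):

```javascript
// Flag the interrupt and reset state *before* abort(), so abort-triggered
// callbacks can distinguish a user cancel from a genuine failure.
function makeSession() {
  const controller = new AbortController();
  const state = { userInterrupted: false, isStreaming: true, errors: [] };

  controller.signal.addEventListener("abort", () => {
    // Mirrors the patched catch blocks: only surface an error when the
    // abort was NOT requested by the user.
    if (!state.userInterrupted) {
      state.errors.push("Unknown error occurred");
    }
  });

  return {
    state,
    interrupt() {
      state.userInterrupted = true; // 1. flag first
      state.isStreaming = false;    // 2. reset UI state
      controller.abort();           // 3. then cancel the request
    }
  };
}

const session = makeSession();
session.interrupt();
console.log(session.state.errors.length); // 0: user aborts raise no error message
```

Reversing steps 1 and 3 would let the abort listener run while `userInterrupted` is still false, reproducing the spurious error message this release fixes.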
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "snow-ai",
-  "version": "0.5.10",
+  "version": "0.5.11",
   "description": "Intelligent Command Line Assistant powered by AI",
   "license": "MIT",
   "bin": {