qianshou 3.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (5)
  1. package/LICENSE +21 -0
  2. package/README.md +278 -0
  3. package/README_en.md +277 -0
  4. package/package.json +126 -0
  5. package/qianshou.js +8731 -0
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 Qianshou Contributors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md ADDED
@@ -0,0 +1,278 @@
+ <h1 align="center">千手 (Qianshou)</h1>
+
+ <p align="center">
+ <strong>AI agent application for office professionals.</strong><br>
+ Understands project documentation and codebases. Plans complex tasks. Executes autonomously.<br>
+ Your AI teammate in the terminal.
+ </p>
+
+ ---
+
+ ## Table of Contents
+
+ - [What is Qianshou](#what-is-qianshou)
+ - [Core Features](#core-features)
+ - [Supported Models](#supported-models)
+ - [Installation](#installation)
+ - [Quick Start](#quick-start)
+ - [Usage](#usage)
+ - [Experimental Features](#experimental-features)
+
+ ---
+
+ ## What is Qianshou
+
+ Qianshou is an **agent-based coding tool** that runs in your terminal. Unlike traditional code completion tools, Qianshou understands your entire codebase, plans complex multi-step tasks, and executes them autonomously.
+
+ Think of it as an AI teammate that can:
+ - Read and understand large-scale projects
+ - Break down complex requirements into executable steps
+ - Write, modify, and refactor code across multiple files
+ - Run terminal commands and manage Git workflows
+ - Explain code logic and generate documentation
+
+ ---
+
+ ## Core Features
+
+ ### 🧠 Agent-Based Coding
+
+ Qianshou runs as an **autonomous agent**, not just a passive assistant. When you give it a high-level goal such as "add OAuth user authentication," it will:
+
+ 1. **Understand Intent** - Analyze your natural language request
+ 2. **Scan Context** - Read the relevant files and understand the project structure
+ 3. **Create a Plan** - Break the task into clear, actionable steps
+ 4. **Execute** - Implement each step with your approval
+ 5. **Iterate** - Report progress and ask for clarification when needed
+
+ ### 📚 Codebase-Level Understanding
+
+ With support for **200K+ token context windows**, Qianshou can analyze an entire project at once. It understands:
+ - Project structure and module dependencies
+ - Coding patterns and design conventions
+ - Cross-file relationships and architectural decisions
+
+ ### 🔧 Deep Code Analysis
+
+ - **Explain Complex Code** - Get detailed explanations of functions, algorithms, and logic flows
+ - **Trace Function Calls** - Follow execution paths through the codebase
+ - **Identify Issues** - Detect performance bottlenecks, security vulnerabilities, and potential bugs
+ - **Smart Refactoring** - Execute large-scale, cross-file code transformations
+
+ ### ⚡ Workflow Automation
+
+ Qianshou integrates seamlessly with your development workflow:
+
+ - **Run Terminal Commands** - Execute `npm install`, run tests, build Docker images
+ - **Git Integration** - Search commit history, resolve merge conflicts, generate commit messages
+ - **Test Generation** - Automatically create comprehensive unit and integration tests
+ - **Documentation Generation** - Generate API docs, code comments, and README files
+
+ ### 🔌 Extensible Architecture
+
+ - **MCP Protocol** - Connect external tools and APIs via the Model Context Protocol
+ - **Custom Skills** - Write your own commands and workflows
+ - **Plugin System** - Extend functionality with community plugins
+ - **IDE Agnostic** - Works with any editor (VS Code, JetBrains, Vim, etc.)
+
+ ---
+
+ ## Supported Models
+
+ Qianshou natively supports multiple AI providers. Set the appropriate environment variables to switch providers.
+
+ Common configuration:
+ ```bash
+ export QS_AUTH_TOKEN="..."
+ export QS_BIG_MODEL=""
+ export QS_MIDDLE_MODEL=""
+ export QS_SMALL_MODEL=""
+ export API_TIMEOUT_MS=600000
+ export QS_BASE_URL="https://coding.dashscope.aliyuncs.com/apps/anthropic"
+ export QS_BIG_MODEL="..."
+ # Run directly. --auto-run skips per-command confirmation; for high-stakes scenarios, drop --auto-run and review each action one by one.
+ cd xxx  # change into your project directory
+ qianshou --auto-run
+
+ # To run the desktop office mode, for working with documents, files, and the like:
+ export QS_OFFICE=1
+ qianshou --auto-run
+ ...
+ ```
+
+ You can create an AGENTS.md file to help the AI understand the entire project.
+
+ ### Zhipu AI
+ ```bash
+ export QS_AUTH_TOKEN="..."
+ export QS_BIG_MODEL="glm-4.7"
+ export QS_MIDDLE_MODEL="glm-4.7"
+ export QS_SMALL_MODEL="glm-4.7"
+ export QS_BASE_URL="https://open.bigmodel.cn/api/anthropic"
+ qianshou --auto-run
+ ...
+ ```
+
+ ### DeepSeek
+ ```bash
+ export QS_BASE_URL=https://api.deepseek.com/anthropic
+ export QS_AUTH_TOKEN=sk-...
+ export API_TIMEOUT_MS=600000
+ export QS_MODEL=deepseek-chat
+ export QS_SMALL_FAST_MODEL=deepseek-chat
+ export QS_DISABLE_NONESSENTIAL_TRAFFIC=1
+
+ qianshou --auto-run
+ ```
+
+ ### OpenAI Codex
+
+ Use OpenAI's Codex models for code generation.
+
+ ```bash
+ export QS_USE_OPENAI=1
+ qianshou
+ ```
+
+ ### AWS Bedrock
+
+ Route requests through your AWS account.
+
+ ```bash
+ export QS_USE_AWS=1
+ export AWS_REGION="us-east-1"
+ qianshou
+ ```
+
+ ### Google Cloud Vertex AI
+
+ Route requests through your GCP project.
+
+ ```bash
+ export QS_USE_VERTEX=1
+ qianshou
+ ```
+
+ ### Anthropic Foundry
+
+ Use Anthropic Foundry for dedicated deployments.
+
+ ```bash
+ export QS_USE_CLAUDE_FOUNDRY=1
+ export ANTHROPIC_FOUNDRY_API_KEY="..."
+ qianshou
+ ```
+
+ ---
+
+ ## Installation
+
+ ### Quick Install
+
+ ```bash
+ npm i -g @zytech/qianshou
+ ```
+
+ ### System Requirements
+
+ - **Runtime**: [Bun](https://bun.sh) >= 1.3.11
+ - **Operating System**: macOS, Linux, or Windows (via WSL)
+ - **Authentication**: API key or OAuth login for your chosen provider
+
+ ---
+
+ ## Quick Start
+
+ ```bash
+ # Authenticate with your preferred provider
+ ./qianshou /login
+
+ # Start the interactive REPL
+ ./qianshou
+
+ # One-shot execution mode
+ ./qianshou -p "What files are in this directory?"
+
+ # Specify a model
+ ./qianshou --model ...
+ ```
+
+ ---
+
+ ## Usage
+
+ ### Interactive REPL
+
+ ```bash
+ ./qianshou --auto-run
+ ```
+
+ ### Conversation Examples
+
+ **Implementing a new feature:**
+ ```
+ You: Add a REST API endpoint for user registration
+
+ Qianshou: I'll help you add a user registration endpoint. Let me first check your project structure...
+
+ [Reading files, analyzing the codebase]
+
+ I'll create the following:
+ 1. POST /api/users/register endpoint
+ 2. User validation schema
+ 3. Database model for users
+ 4. Unit tests for the endpoint
+
+ Shall I proceed?
+ ```
+
+ **Debugging an issue:**
+ ```
+ You: I'm getting a 500 error when uploading files. Here's the stack trace...
+
+ Qianshou: Looking at the error and your code, the issue appears to be in the file upload handler.
+ The problem is on line 42, where the buffer size is not handled properly. Let me fix this...
+ ```
+
+ **Refactoring code:**
+ ```
+ You: Refactor the authentication module to use JWT tokens
+
+ Qianshou: I'll refactor your authentication module to use JWT. This will involve:
+ 1. Installing the jsonwebtoken package
+ 2. Creating JWT utility functions
+ 3. Updating login/logout handlers
+ 4. Adding token verification middleware
+
+ [Proceeding with the refactor]
+ ```
+
+ ---
+
+ ## Experimental Features
+
+ The `build:dev:full` build enables 54 experimental feature flags. Highlights include:
+
+ ### Interaction & UI
+ - **ULTRAPLAN** - Remote multi-agent planning
+ - **ULTRATHINK** - Deep thinking mode for complex reasoning
+ - **VOICE_MODE** - Push-to-talk voice input
+ - **TOKEN_BUDGET** - Usage tracking and warnings
+ - **HISTORY_PICKER** - Interactive prompt history
+ - **QUICK_SEARCH** - Quick prompt search
+
+ ### Agents, Memory & Planning
+ - **BUILTIN_EXPLORE_PLAN_AGENTS** - Built-in agent presets
+ - **VERIFICATION_AGENT** - Task verification
+ - **AGENT_TRIGGERS** - Background automation
+ - **EXTRACT_MEMORIES** - Automatic memory extraction
+ - **TEAMMEM** - Team memory files
+
+ ### Tools & Infrastructure
+ - **BRIDGE_MODE** - IDE remote control bridge
+ - **BASH_CLASSIFIER** - Intelligent permission decisions
+ - **MCP_RICH_OUTPUT** - Enhanced MCP output
+
+ ---
+
+ **Built with ❤️ for developers who want more from their AI tools.**
package/README_en.md ADDED
@@ -0,0 +1,277 @@
+ <h1 align="center">Qianshou</h1>
+
+ <p align="center">
+ <strong>AI Agent Application for Office Professionals.</strong><br>
+ Understand project documentation and codebases. Plan complex tasks. Execute autonomously.<br>
+ Your terminal AI teammate.
+ </p>
+
+ ---
+
+ ## Table of Contents
+
+ - [What is Qianshou](#what-is-qianshou)
+ - [Core Features](#core-features)
+ - [Supported Models](#supported-models)
+ - [Installation](#installation)
+ - [Quick Start](#quick-start)
+ - [Usage](#usage)
+ - [Experimental Features](#experimental-features)
+
+ ---
+
+ ## What is Qianshou
+
+ Qianshou is an **agent-based coding tool** that runs in your terminal. Unlike traditional code completion tools, Qianshou can understand your entire codebase, plan complex multi-step tasks, and execute them autonomously.
+
+ Think of it as an AI teammate that can:
+ - Read and understand large-scale projects
+ - Break down complex requirements into executable steps
+ - Write, modify, and refactor code across multiple files
+ - Run terminal commands and manage Git workflows
+ - Explain code logic and generate documentation
+
+ ---
+
+ ## Core Features
+
+ ### 🧠 Agent-Based Coding
+
+ Qianshou runs as an **autonomous agent**, not just a passive assistant. When you give it a high-level goal like "add OAuth user authentication," it will:
+
+ 1. **Understand Intent** - Analyze your natural language request
+ 2. **Scan Context** - Read relevant files and understand project structure
+ 3. **Create Plan** - Break down the task into clear, actionable steps
+ 4. **Execute** - Implement each step with your approval
+ 5. **Iterate** - Report progress and seek clarification when needed
+
+ ### 📚 Codebase-Level Understanding
+
+ With support for **200K+ token context windows**, Qianshou can analyze entire projects at once. It understands:
+ - Project structure and module dependencies
+ - Coding patterns and design conventions
+ - Cross-file relationships and architectural decisions
+
+ ### 🔧 Deep Code Analysis
+
+ - **Explain Complex Code** - Get detailed explanations of functions, algorithms, and logic flows
+ - **Trace Function Calls** - Follow execution paths through the codebase
+ - **Identify Issues** - Detect performance bottlenecks, security vulnerabilities, and potential bugs
+ - **Smart Refactoring** - Execute large-scale, cross-file code transformations
+
+ ### ⚡ Workflow Automation
+
+ Qianshou seamlessly integrates with your development workflow:
+
+ - **Run Terminal Commands** - Execute `npm install`, run tests, build Docker images
+ - **Git Integration** - Search commit history, resolve merge conflicts, generate commit messages
+ - **Test Generation** - Automatically create comprehensive unit and integration tests
+ - **Documentation Generation** - Generate API docs, code comments, and README files
+
+ ### 🔌 Extensible Architecture
+
+ - **MCP Protocol** - Connect external tools and APIs via Model Context Protocol
+ - **Custom Skills** - Write your own commands and workflows
+ - **Plugin System** - Extend functionality with community plugins
+ - **IDE Agnostic** - Works with any editor (VS Code, JetBrains, Vim, etc.)
+
+ ---
+
+ ## Supported Models
+
+ Qianshou natively supports multiple AI providers. Set the appropriate environment variables to switch providers.
+
+ Common configuration:
+ ```bash
+ export QS_AUTH_TOKEN="..."
+ export QS_BIG_MODEL=""
+ export QS_MIDDLE_MODEL=""
+ export QS_SMALL_MODEL=""
+ export API_TIMEOUT_MS=600000
+ export QS_BASE_URL="https://coding.dashscope.aliyuncs.com/apps/anthropic"
+ export QS_BIG_MODEL="..."
+ # Run directly. --auto-run skips per-command confirmation; for high-stakes scenarios, drop --auto-run and review each action individually.
+ cd xxx  # change into your project directory
+ qianshou --auto-run
+
+ # To run the desktop office version for processing documents and files:
+ export QS_OFFICE=1
+ qianshou --auto-run
+ ...
+ ```
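The common configuration above can be bundled into a small helper so all the documented variables are set in one call. This is a minimal sketch built only from the variables shown in this README; the function name `qs_use_dashscope`, the sample token, and the model name `qwen-max` are hypothetical placeholders, not values the package documents.

```shell
# Hypothetical helper: exports the variables documented above for the
# DashScope-compatible endpoint. Token and model name are placeholders.
qs_use_dashscope() {
  export QS_AUTH_TOKEN="${1:?usage: qs_use_dashscope <token> [model]}"
  export QS_BIG_MODEL="${2:-}"
  export QS_MIDDLE_MODEL="${2:-}"
  export QS_SMALL_MODEL="${2:-}"
  export API_TIMEOUT_MS=600000
  export QS_BASE_URL="https://coding.dashscope.aliyuncs.com/apps/anthropic"
}

qs_use_dashscope "sk-test" "qwen-max"
echo "$QS_BASE_URL"
```

After calling the helper, `qianshou --auto-run` would pick up the exported variables as in the block above.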
+
+ You can create an AGENTS.md file to help the AI understand the entire project.
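The README does not specify what AGENTS.md must contain. Purely as an illustration, a minimal file could be generated like this; every heading and line in the generated file is hypothetical, not a format Qianshou mandates:

```shell
# Sketch: generate a minimal AGENTS.md at the project root.
# All section contents below are illustrative placeholders.
cat > AGENTS.md <<'EOF'
# Project Notes for AI Agents

## Overview
Short description of what this project does.

## Layout
- src/  application code
- test/ test suite

## Conventions
- Run tests before committing.
EOF
wc -l AGENTS.md
```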
+
+ ### Zhipu AI
+ ```bash
+ export QS_AUTH_TOKEN="..."
+ export QS_BIG_MODEL="glm-4.7"
+ export QS_MIDDLE_MODEL="glm-4.7"
+ export QS_SMALL_MODEL="glm-4.7"
+ export QS_BASE_URL="https://open.bigmodel.cn/api/anthropic"
+ qianshou --auto-run
+ ...
+ ```
+
+ ### DeepSeek
+ ```bash
+ export QS_BASE_URL=https://api.deepseek.com/anthropic
+ export QS_AUTH_TOKEN=sk-...
+ export API_TIMEOUT_MS=600000
+ export QS_MODEL=deepseek-chat
+ export QS_SMALL_FAST_MODEL=deepseek-chat
+ export QS_DISABLE_NONESSENTIAL_TRAFFIC=1
+
+ qianshou --auto-run
+ ```
+
+ ### OpenAI Codex
+
+ Use OpenAI's Codex models for code generation.
+
+ ```bash
+ export QS_USE_OPENAI=1
+ qianshou
+ ```
+
+ ### AWS Bedrock
+
+ Route requests through your AWS account.
+
+ ```bash
+ export QS_USE_AWS=1
+ export AWS_REGION="us-east-1"
+ qianshou
+ ```
+
+ ### Google Cloud Vertex AI
+
+ Route requests through your GCP project.
+
+ ```bash
+ export QS_USE_VERTEX=1
+ qianshou
+ ```
+
+ ### Anthropic Foundry
+
+ Use Anthropic Foundry for dedicated deployments.
+
+ ```bash
+ export QS_USE_CLAUDE_FOUNDRY=1
+ export ANTHROPIC_FOUNDRY_API_KEY="..."
+ qianshou
+ ```
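The provider sections above differ only in which flag variables they export, so they lend themselves to a small dispatcher. This is a hypothetical sketch built only from the flags documented above (`QS_USE_OPENAI`, `QS_USE_AWS`, `QS_USE_VERTEX`, `QS_USE_CLAUDE_FOUNDRY`); the function name `qs_provider` is invented for illustration:

```shell
# Hypothetical dispatcher over the provider flags documented above.
qs_provider() {
  case "$1" in
    openai)  export QS_USE_OPENAI=1 ;;
    bedrock) export QS_USE_AWS=1; export AWS_REGION="${2:-us-east-1}" ;;
    vertex)  export QS_USE_VERTEX=1 ;;
    foundry) export QS_USE_CLAUDE_FOUNDRY=1 ;;
    *) echo "unknown provider: $1" >&2; return 1 ;;
  esac
}

qs_provider bedrock
echo "$QS_USE_AWS/$AWS_REGION"   # prints 1/us-east-1
```

Token variables (e.g. `QS_AUTH_TOKEN`, `ANTHROPIC_FOUNDRY_API_KEY`) would still be exported separately as the sections above show.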
+
+ ---
+
+ ## Installation
+
+ ### Quick Install
+
+ ```bash
+ npm i -g @zytech/qianshou
+ ```
+
+ ### System Requirements
+
+ - **Runtime**: [Bun](https://bun.sh) >= 1.3.11
+ - **Operating System**: macOS, Linux, or Windows (via WSL)
+ - **Authentication**: API keys or OAuth login for your chosen provider
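The Bun requirement (>= 1.3.11) can be checked before installing. A minimal sketch, assuming a `sort` with version ordering (`-V`, available in GNU coreutils and recent BSD sort); the helper name `version_ok` is invented for illustration:

```shell
# Succeeds if $1 (installed version) >= $2 (required minimum),
# by checking that the minimum sorts first in version order.
version_ok() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ok "1.4.0" "1.3.11"; then echo ok; else echo "too old"; fi
```

In practice you would feed it the live version, e.g. `version_ok "$(bun --version)" "1.3.11"`, assuming `bun --version` prints a bare version number.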
+
+ ---
+
+ ## Quick Start
+
+ ```bash
+ # Authenticate with your preferred provider
+ ./qianshou /login
+
+ # Start the interactive REPL
+ ./qianshou
+
+ # One-shot execution mode
+ ./qianshou -p "What files are in this directory?"
+
+ # Specify model
+ ./qianshou --model ...
+ ```
+
+ ---
+
+ ## Usage
+
+ ### Interactive REPL
+
+ ```bash
+ ./qianshou --auto-run
+ ```
+
+ ### Conversation Examples
+
+ **Implementing a new feature:**
+ ```
+ You: Add a REST API endpoint for user registration
+
+ Qianshou: I'll help you add a user registration endpoint. Let me first check your project structure...
+
+ [Reading files, analyzing codebase]
+
+ I'll create the following:
+ 1. POST /api/users/register endpoint
+ 2. User validation schema
+ 3. Database model for users
+ 4. Unit tests for the endpoint
+
+ Shall I proceed?
+ ```
+
+ **Debugging an issue:**
+ ```
+ You: I'm getting a 500 error when uploading files. Here's the stack trace...
+
+ Qianshou: Looking at the error and your code, the issue appears to be in the file upload handler.
+ The problem is on line 42, where the buffer size is not handled properly. Let me fix this...
+ ```
+
+ **Refactoring code:**
+ ```
+ You: Refactor the authentication module to use JWT tokens
+
+ Qianshou: I'll refactor your authentication module to use JWT. This will involve:
+ 1. Installing the jsonwebtoken package
+ 2. Creating JWT utility functions
+ 3. Updating login/logout handlers
+ 4. Adding token verification middleware
+
+ [Proceeding with refactor]
+ ```
+
+ ---
+
+ ## Experimental Features
+
+ The `build:dev:full` build enables 54 experimental feature flags. Highlights include:
+
+ ### Interaction & UI
+ - **ULTRAPLAN** - Remote multi-agent planning
+ - **ULTRATHINK** - Deep thinking mode for complex reasoning
+ - **VOICE_MODE** - Push-to-talk voice input
+ - **TOKEN_BUDGET** - Usage tracking and warnings
+ - **HISTORY_PICKER** - Interactive prompt history
+ - **QUICK_SEARCH** - Quick prompt search
+
+ ### Agents, Memory & Planning
+ - **BUILTIN_EXPLORE_PLAN_AGENTS** - Built-in agent presets
+ - **VERIFICATION_AGENT** - Task verification
+ - **AGENT_TRIGGERS** - Background automation
+ - **EXTRACT_MEMORIES** - Automatic memory extraction
+ - **TEAMMEM** - Team memory files
+
+ ### Tools & Infrastructure
+ - **BRIDGE_MODE** - IDE remote control bridge
+ - **BASH_CLASSIFIER** - Intelligent permission decisions
+ - **MCP_RICH_OUTPUT** - Enhanced MCP output
+
+ ---
+
+ **Built with ❤️ for developers who want more from their AI tools.**
package/package.json ADDED
@@ -0,0 +1,126 @@
+ {
+   "name": "qianshou",
+   "version": "3.0.3",
+   "description": "Qianshou - AI-powered assistant",
+   "type": "module",
+   "packageManager": "bun@1.3.11",
+   "bin": {
+     "qianshou": "./qianshou.js"
+   },
+   "files": [
+     "qianshou.js",
+     "README.md",
+     "README_en.md",
+     "LICENSE"
+   ],
+   "engines": {
+     "bun": ">=1.3.11"
+   },
+   "scripts": {
+     "build": "bun run ./scripts/build.ts",
+     "build:dev": "bun run ./scripts/build.ts --dev",
+     "build:dev:full": "bun run ./scripts/build.ts --dev --feature-set=dev-full",
+     "compile": "bun run ./scripts/build.ts --compile",
+     "dev": "bun run ./src/entrypoints/cli.tsx"
+   },
+   "dependencies": {
+     "@alcalzone/ansi-tokenize": "^0.3.0",
+     "@anthropic-ai/bedrock-sdk": "^0.26.4",
+     "@anthropic-ai/claude-agent-sdk": "^0.2.87",
+     "@anthropic-ai/foundry-sdk": "^0.2.3",
+     "@anthropic-ai/mcpb": "^2.1.2",
+     "@anthropic-ai/sandbox-runtime": "^0.0.44",
+     "@anthropic-ai/sdk": "^0.80.0",
+     "@anthropic-ai/vertex-sdk": "^0.14.4",
+     "@aws-sdk/client-bedrock": "^3.1020.0",
+     "@aws-sdk/client-bedrock-runtime": "^3.1020.0",
+     "@aws-sdk/client-sts": "^3.1020.0",
+     "@aws-sdk/credential-provider-node": "^3.972.28",
+     "@aws-sdk/credential-providers": "^3.1020.0",
+     "@azure/identity": "^4.13.1",
+     "@commander-js/extra-typings": "^14.0.0",
+     "@growthbook/growthbook": "^1.6.5",
+     "@modelcontextprotocol/sdk": "^1.29.0",
+     "@opentelemetry/api": "^1.9.1",
+     "@opentelemetry/api-logs": "^0.214.0",
+     "@opentelemetry/core": "^2.6.1",
+     "@opentelemetry/exporter-logs-otlp-grpc": "^0.214.0",
+     "@opentelemetry/exporter-logs-otlp-http": "^0.214.0",
+     "@opentelemetry/exporter-logs-otlp-proto": "^0.214.0",
+     "@opentelemetry/exporter-metrics-otlp-grpc": "^0.214.0",
+     "@opentelemetry/exporter-metrics-otlp-http": "^0.214.0",
+     "@opentelemetry/exporter-metrics-otlp-proto": "^0.214.0",
+     "@opentelemetry/exporter-prometheus": "^0.214.0",
+     "@opentelemetry/exporter-trace-otlp-grpc": "^0.214.0",
+     "@opentelemetry/exporter-trace-otlp-http": "^0.214.0",
+     "@opentelemetry/exporter-trace-otlp-proto": "^0.214.0",
+     "@opentelemetry/resources": "^2.6.1",
+     "@opentelemetry/sdk-logs": "^0.214.0",
+     "@opentelemetry/sdk-metrics": "^2.6.1",
+     "@opentelemetry/sdk-trace-base": "^2.6.1",
+     "@opentelemetry/semantic-conventions": "^1.40.0",
+     "@smithy/core": "^3.23.13",
+     "@smithy/node-http-handler": "^4.5.1",
+     "ajv": "^8.18.0",
+     "asciichart": "^1.5.25",
+     "auto-bind": "^5.0.1",
+     "axios": "^1.14.0",
+     "bidi-js": "^1.0.3",
+     "cacache": "^20.0.4",
+     "chalk": "^5.6.2",
+     "chokidar": "^5.0.0",
+     "cli-boxes": "^4.0.1",
+     "cli-highlight": "^2.1.11",
+     "code-excerpt": "^4.0.0",
+     "diff": "^8.0.4",
+     "emoji-regex": "^10.6.0",
+     "env-paths": "^4.0.0",
+     "execa": "^9.6.1",
+     "fflate": "^0.8.2",
+     "figures": "^6.1.0",
+     "fuse.js": "^7.1.0",
+     "get-east-asian-width": "^1.5.0",
+     "google-auth-library": "^10.6.2",
+     "highlight.js": "^11.11.1",
+     "https-proxy-agent": "^8.0.0",
+     "ignore": "^7.0.5",
+     "indent-string": "^5.0.0",
+     "ink": "^6.8.0",
+     "jsonc-parser": "^3.3.1",
+     "lodash-es": "^4.17.24",
+     "lru-cache": "^11.2.7",
+     "marked": "^17.0.5",
+     "p-map": "^7.0.4",
+     "picomatch": "^4.0.4",
+     "plist": "^3.1.0",
+     "proper-lockfile": "^4.1.2",
+     "qrcode": "^1.5.4",
+     "react": "^19.2.4",
+     "react-reconciler": "^0.33.0",
+     "semver": "^7.7.4",
+     "sharp": "^0.34.5",
+     "shell-quote": "^1.8.3",
+     "signal-exit": "^4.1.0",
+     "stack-utils": "^2.0.6",
+     "strip-ansi": "^7.2.0",
+     "supports-hyperlinks": "^4.4.0",
+     "tree-kill": "^1.2.2",
+     "turndown": "^7.2.2",
+     "type-fest": "^5.5.0",
+     "undici": "^7.24.6",
+     "usehooks-ts": "^3.1.1",
+     "vscode-jsonrpc": "^8.2.1",
+     "vscode-languageserver-protocol": "^3.17.5",
+     "vscode-languageserver-types": "^3.17.5",
+     "wrap-ansi": "^10.0.0",
+     "ws": "^8.20.0",
+     "xss": "^1.0.15",
+     "xxhash-wasm": "^1.1.0",
+     "yaml": "^2.8.3",
+     "zod": "^4.3.6"
+   },
+   "devDependencies": {
+     "@types/bun": "^1.3.11",
+     "typescript": "^6.0.2"
+   }
+ }