@xinshu/openagi 0.0.2-dev.3 → 0.0.2-dev.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,12 +1,10 @@
1
- # OpenAGI - AI Coding
1
+ # Kode - AI Coding
2
+ <img width="991" height="479" alt="image" src="https://github.com/user-attachments/assets/c1751e92-94dc-4e4a-9558-8cd2d058c1a1" /> <br>
3
+ [![npm version](https://badge.fury.io/js/@shareai-lab%2Fkode.svg)](https://www.npmjs.com/package/@shareai-lab/kode)
4
+ [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
5
+ [![AGENTS.md](https://img.shields.io/badge/AGENTS.md-Compatible-brightgreen)](https://agents.md)
2
6
 
3
- <div align="center">
4
- <img src="./logo.svg" alt="OpenAGI Logo" width="400" />
5
- </div>
6
-
7
- ## 🎯 Project Introduction
8
-
9
- OpenAGI is an open-source AI coding assistant. Built on a modern TypeScript + React + Ink architecture, it provides powerful AI-assisted programming with rich features including file operations, system interaction, and intelligent search.
7
+ [中文文档](README.zh-CN.md) | [Contributing](CONTRIBUTING.md) | [Documentation](docs/README.md)
10
8
 
11
9
  <img width="90%" alt="image" src="https://github.com/user-attachments/assets/fdce7017-8095-429d-b74e-07f43a6919e1" />
12
10
 
@@ -18,352 +16,622 @@ OpenAGI is an open-source AI coding assistant. Built on a modern TypeScript + React + Ink
18
16
  <img width="90%" alt="image" src="https://github.com/user-attachments/assets/b30696ce-5ab1-40a0-b741-c7ef3945dba0" />
19
17
 
20
18
 
21
- ## 📢 Update Log
19
+ ## 📢 Update Log
20
+
21
+ **2025-12-22**: Native-first distribution (Windows OOTB). Kode prefers a cached native binary and falls back to the Node.js runtime when needed. See `docs/binary-distribution.md`.
22
+
22
23
 
23
- **2025-08-29**: We've added Windows support! All Windows users can now run OpenAGI on their machines using Git Bash, a Unix subsystem, or WSL (Windows Subsystem for Linux).
24
+ ## 🤝 AGENTS.md Standard Support
24
25
 
26
+ Kode supports the [AGENTS.md standard](https://agents.md): a simple, open format for guiding coding agents, used by 60k+ open-source projects.
25
27
 
26
- ## 🤝 AGENTS.md Standard Support
28
+ ### Full Compatibility with Multiple Standards
27
29
 
28
- **OpenAGI proudly supports the OpenAI-initiated [AGENTS.md standard](https://agents.md)** - a simple, open format for guiding coding agents, already used by 20k+ open-source projects.
30
+ - **AGENTS.md** - Native support for the OpenAI-initiated standard format
31
+ - ✅ **Legacy `.claude` compatibility** - Reads `.claude` directories and `CLAUDE.md` when present (see `docs/compatibility.md`)
32
+ - ✅ **Subagent System** - Advanced agent delegation and task orchestration
33
+ - ✅ **Cross-platform** - Works with 20+ AI models and providers
29
34
 
30
- ### Full Compatibility with Multiple Standards
35
+ Use `# Your documentation request` to generate and maintain your AGENTS.md file automatically, while preserving compatibility with existing `.claude` workflows.
31
36
 
32
- - **AGENTS.md** - Native support for the OpenAI-initiated standard format
33
- - ✅ **CLAUDE.md** - Fully backward compatible with Claude Code's `.claude` configuration
34
- - ✅ **Subagent System** - Advanced agent delegation and task orchestration
35
- - ✅ **Cross-platform** - Works with 20+ AI models and providers
37
+ ### Instruction Discovery (Codex-compatible)
36
38
 
37
- Use `# your documentation request` to automatically generate and maintain your AGENTS.md file, while preserving compatibility with existing `.claude` workflows.
39
+ - Kode reads project instructions by walking from the Git repo root → current working directory.
40
+ - In each directory, it prefers `AGENTS.override.md` over `AGENTS.md` (at most one file per directory).
41
+ - Discovered files are concatenated root → leaf (combined size capped at 32 KiB by default; override with `KODE_PROJECT_DOC_MAX_BYTES`).
42
+ - If `CLAUDE.md` exists in the current directory, Kode also reads it as a legacy instruction file.
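The root → leaf ordering above can be illustrated with throwaway files (the paths and contents here are examples, not Kode defaults):

```shell
# Illustration only: a repo-root AGENTS.md plus a nested one, read root first
mkdir -p /tmp/agents-demo/sub
echo "root rules" > /tmp/agents-demo/AGENTS.md
echo "leaf rules" > /tmp/agents-demo/sub/AGENTS.md
# Discovery concatenates in this order: root directory, then deeper directories
cat /tmp/agents-demo/AGENTS.md /tmp/agents-demo/sub/AGENTS.md
```

Raising the size cap would use the documented variable, e.g. `KODE_PROJECT_DOC_MAX_BYTES=65536` (value illustrative).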
38
43
 
39
- ## Overview
44
+ ## Overview
40
45
 
41
- OpenAGI is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
46
+ Kode is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
42
47
 
43
- > **⚠️ Security Notice**: OpenAGI runs in YOLO mode by default (equivalent to the Claude Code `--dangerously-skip-permissions` flag), bypassing all permission checks for maximum productivity. YOLO mode is recommended only in trusted, secure environments on non-critical projects. If you are working with important files or using models of questionable capability, we strongly recommend `openagi --safe` to enable permission checks and manual approval for all operations.
44
- >
45
- > **📊 Model Performance**: For optimal performance, we recommend newer, more capable models designed for autonomous task completion. Avoid older Q&A-optimized models like GPT-4o or Gemini 2.5 Pro, which are tuned for answering questions rather than sustained independent task execution. Choose models trained specifically for agentic workflows and extended reasoning.
48
+ > **⚠️ Security Notice**: Kode runs in YOLO mode by default (equivalent to the `--dangerously-skip-permissions` flag), bypassing all permission checks for maximum productivity. YOLO mode is recommended only for trusted, secure environments when working on non-critical projects. If you're working with important files or using models of questionable capability, we strongly recommend using `kode --safe` to enable permission checks and manual approval for all operations.
49
+ >
50
+ > **📊 Model Performance**: For optimal performance, we recommend using newer, more capable models designed for autonomous task completion. Avoid older Q&A-focused models like GPT-4o or Gemini 2.5 Pro, which are optimized for answering questions rather than sustained independent task execution. Choose models specifically trained for agentic workflows and extended reasoning capabilities.
51
+
52
+ ## Network & Privacy
53
+
54
+ - Kode does not send product telemetry/analytics by default.
55
+ - Network requests happen only when you explicitly use networked features:
56
+ - Model provider requests (Anthropic/OpenAI-compatible endpoints you configure)
57
+ - Web tools (`WebFetch`, `WebSearch`)
58
+ - Plugin marketplace downloads (GitHub/URL sources) and OAuth flows (when used)
59
+ - Optional update checks (opt-in via `autoUpdaterStatus: enabled`)
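If you do opt in to update checks, the flag lives in the global config. A minimal sketch (the `autoUpdaterStatus` key is taken from the line above; treating it as a top-level JSON key is an assumption):

```json
{
  "autoUpdaterStatus": "enabled"
}
```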
46
60
 
47
61
  <img width="600" height="577" alt="image" src="https://github.com/user-attachments/assets/8b46a39d-1ab6-4669-9391-14ccc6c5234c" />
48
62
 
49
- ## Features
50
-
51
- ### Core Capabilities
52
- 🤖 **AI-Powered Assistant** - Uses advanced AI models to understand and respond to your requests
53
- 🔄 **Multi-Model Collaboration** - Flexibly switch and combine multiple AI models to leverage their unique strengths
54
- 🦜 **Expert Model Consultation** - Use `@ask-model-name` to consult specific AI models for specialized analysis
55
- 👤 **Intelligent Agent System** - Use `@run-agent-name` to delegate tasks to specialized subagents
56
- 📝 **Code Editing** - Directly edit files with intelligent suggestions and improvements
57
- 🔍 **Codebase Understanding** - Analyzes your project structure and code relationships
58
- 🚀 **Command Execution** - Run shell commands and see results in real time
59
- 🛠️ **Workflow Automation** - Handle complex development tasks with simple prompts
60
-
61
- ### Authoring Comfort
62
- `Ctrl+G` opens your message in your preferred editor (respects `$EDITOR`/`$VISUAL`; falls back to code/nano/vim/notepad) and returns the text to the prompt when you close it.
63
- `Shift+Enter` inserts a newline inside the prompt without sending; plain Enter submits. `Ctrl+M` cycles the active model.
64
-
65
- ### 🎯 Advanced Intelligent Completion System
66
- Our state-of-the-art completion system provides unparalleled coding assistance:
67
-
68
- #### Smart Fuzzy Matching
69
- - **Hyphen-Aware Matching** - Type `dao` to match `run-agent-dao-qi-harmony-designer`
70
- - **Abbreviation Support** - `dq` matches `dao-qi`, `nde` matches `node`
71
- - **Numeric Suffix Handling** - `py3` intelligently matches `python3`
72
- - **Multi-Algorithm Fusion** - Combines 7+ matching algorithms for best results
73
-
74
- #### Intelligent Context Detection
75
- - **No @ Required** - Type `gp5` directly to match `@ask-gpt-5`
76
- - **Auto-Prefix Addition** - Tab/Enter automatically adds `@` for agents and models
77
- - **Mixed Completion** - Seamlessly switch between commands, files, agents, and models
78
- - **Smart Prioritization** - Results ranked by relevance and usage frequency
79
-
80
- #### Unix Command Optimization
81
- - **500+ Common Commands** - Curated database of frequently used Unix/Linux commands
82
- - **System Intersection** - Only shows commands that actually exist on your system
83
- - **Priority Scoring** - Common commands appear first (git, npm, docker, etc.)
84
- - **Real-time Loading** - Dynamic command discovery from the system PATH
85
-
86
- ### User Experience
87
- - 🎨 **Interactive UI** - Beautiful terminal interface with syntax highlighting
88
- - 🔌 **Tool System** - Extensible architecture with specialized tools for different tasks
89
- - 💾 **Context Management** - Smart context handling to maintain conversation continuity
90
- - 📋 **AGENTS.md Integration** - Use `# documentation requests` to auto-generate and maintain project documentation
91
-
92
- ## Installation
63
+ ## Features
64
+
65
+ ### Core Capabilities
66
+ - 🤖 **AI-Powered Assistance** - Uses advanced AI models to understand and respond to your requests
67
+ - 🔄 **Multi-Model Collaboration** - Flexibly switch and combine multiple AI models to leverage their unique strengths
68
+ - 🦜 **Expert Model Consultation** - Use `@ask-model-name` to consult specific AI models for specialized analysis
69
+ - 👤 **Intelligent Agent System** - Use `@run-agent-name` to delegate tasks to specialized subagents
70
+ - 📝 **Code Editing** - Directly edit files with intelligent suggestions and improvements
71
+ - 🔍 **Codebase Understanding** - Analyzes your project structure and code relationships
72
+ - 🚀 **Command Execution** - Run shell commands and see results in real-time
73
+ - 🛠️ **Workflow Automation** - Handle complex development tasks with simple prompts
74
+
75
+ ### Authoring Comfort
76
+ - `Option+G` (Alt+G) opens your message in your preferred editor (respects `$EDITOR`/`$VISUAL`; falls back to code/nano/vim/notepad) and returns the text to the prompt when you close it.
77
+ - `Option+Enter` inserts a newline inside the prompt without sending; plain Enter submits. `Option+M` cycles the active model.
78
+
79
+ ### 🎯 Advanced Intelligent Completion System
80
+ Our state-of-the-art completion system provides unparalleled coding assistance:
81
+
82
+ #### Smart Fuzzy Matching
83
+ - **Hyphen-Aware Matching** - Type `dao` to match `run-agent-dao-qi-harmony-designer`
84
+ - **Abbreviation Support** - `dq` matches `dao-qi`, `nde` matches `node`
85
+ - **Numeric Suffix Handling** - `py3` intelligently matches `python3`
86
+ - **Multi-Algorithm Fusion** - Combines 7+ matching algorithms for best results
87
+
88
+ #### Intelligent Context Detection
89
+ - **No @ Required** - Type `gp5` directly to match `@ask-gpt-5`
90
+ - **Auto-Prefix Addition** - Tab/Enter automatically adds `@` for agents and models
91
+ - **Mixed Completion** - Seamlessly switch between commands, files, agents, and models
92
+ - **Smart Prioritization** - Results ranked by relevance and usage frequency
93
+
94
+ #### Unix Command Optimization
95
+ - **500+ Common Commands** - Curated database of frequently used Unix/Linux commands
96
+ - **System Intersection** - Only shows commands that actually exist on your system
97
+ - **Priority Scoring** - Common commands appear first (git, npm, docker, etc.)
98
+ - **Real-time Loading** - Dynamic command discovery from system PATH
99
+
100
+ ### User Experience
101
+ - 🎨 **Interactive UI** - Beautiful terminal interface with syntax highlighting
102
+ - 🔌 **Tool System** - Extensible architecture with specialized tools for different tasks
103
+ - 💾 **Context Management** - Smart context handling to maintain conversation continuity
104
+ - 📋 **AGENTS.md Integration** - Use `# documentation requests` to auto-generate and maintain project documentation
105
+
106
+ ## Installation
93
107
 
94
108
  ```bash
95
- npm install -g @xinshu/openagi
109
+ npm install -g @shareai-lab/kode
96
110
  ```
97
111
 
98
- After installation, you can use any of these commands:
99
- - `openagi` - Primary command
100
- - `agi` - OpenAGI With Agent (alternative command)
112
+ > **🇨🇳 For users in China**: If you encounter network issues, use a mirror registry:
113
+ > ```bash
114
+ > npm install -g @shareai-lab/kode --registry=https://registry.npmmirror.com
115
+ > ```
116
+
117
+ Dev channel (latest features):
118
+
119
+ ```bash
120
+ npm install -g @shareai-lab/kode@dev
121
+ ```
122
+
123
+ After installation, you can use any of these commands:
124
+ - `kode` - Primary command
125
+ - `kwa` - Kode With Agent (alternative)
126
+ - `kd` - Ultra-short alias
127
+
128
+ ### Native binaries (Windows OOTB)
129
+
130
+ - No WSL/Git Bash required.
131
+ - On `postinstall`, Kode will best-effort download a native binary from GitHub Releases into `${KODE_BIN_DIR:-~/.kode/bin}/<version>/<platform>-<arch>/kode(.exe)`.
132
+ - The wrapper (`cli.js`) prefers the native binary and falls back to the Node.js runtime (`node dist/index.js`) when needed.
101
133
 
102
- ### Windows Notes
134
+ Overrides:
135
+ - Mirror downloads: `KODE_BINARY_BASE_URL`
136
+ - Disable download: `KODE_SKIP_BINARY_DOWNLOAD=1`
137
+ - Cache directory: `KODE_BIN_DIR`
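A sketch of setting these overrides before installing (the variable names are from the list above; the mirror URL is a placeholder):

```shell
# Placeholder mirror URL - substitute your own
export KODE_BINARY_BASE_URL="https://mirror.example.com/kode-releases"
# Skip the binary download entirely (the wrapper falls back to the Node.js runtime)
export KODE_SKIP_BINARY_DOWNLOAD=1
# Cache binaries somewhere other than ~/.kode/bin
export KODE_BIN_DIR="$HOME/.cache/kode-bin"
```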
103
138
 
104
- - Install Git for Windows to provide a Bash (Unix-like) environment: https://git-scm.com/download/win
105
- - OpenAGI automatically prefers Git Bash/MSYS or WSL Bash when available.
106
- - If neither is available, it falls back to your default shell, but many features work best under Bash.
107
- - Use VS Code's integrated terminal rather than the legacy Command Prompt (cmd):
108
- - Better font rendering and icon support.
109
- - Fewer path and encoding quirks compared to cmd.
110
- - Choose "Git Bash" as the VS Code terminal shell whenever possible.
111
- - Optional: if you install globally via npm, avoid spaces in the global prefix path to prevent shim issues.
112
- - Example: `npm config set prefix "C:\\npm"` and reinstall global packages.
139
+ See `docs/binary-distribution.md`.
113
140
 
114
- ## Usage
141
+ ### Configuration / API keys
115
142
 
116
- ### Interactive Mode
117
- Start an interactive session:
143
+ - Global config (models, pointers, theme, etc): `~/.kode.json` (or `<KODE_CONFIG_DIR>/config.json` when `KODE_CONFIG_DIR`/`CLAUDE_CONFIG_DIR` is set).
144
+ - Project/local settings (output style, etc): `./.kode/settings.json` and `./.kode/settings.local.json` (legacy `.claude` is supported for some features).
145
+ - Configure models via `/model` (UI) or `kode models import/export` (YAML). Details: `docs/develop/configuration.md`.
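For example, pointing Kode at an alternate config directory uses the `KODE_CONFIG_DIR` variable mentioned above (the directory path here is illustrative):

```shell
# With KODE_CONFIG_DIR set, the global config is read from $KODE_CONFIG_DIR/config.json
export KODE_CONFIG_DIR="$HOME/.config/kode"
echo "$KODE_CONFIG_DIR/config.json"
```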
146
+
147
+ ## Usage
148
+
149
+ ### Interactive Mode
150
+ Start an interactive session:
118
151
  ```bash
119
- openagi
120
- # or
121
- agi
152
+ kode
153
+ # or
154
+ kwa
155
+ # or
156
+ kd
157
+ ```
122
158
 
159
+ ### Non-Interactive Mode
160
+ Get a quick response:
161
+ ```bash
162
+ kode -p "explain this function" path/to/file.js
163
+ # or
164
+ kwa -p "explain this function" path/to/file.js
123
165
  ```
124
166
 
125
- ### Non-Interactive Mode
126
- Get a quick response:
167
+ ### ACP (Agent Client Protocol)
168
+
169
+ Run Kode as an ACP agent server (stdio JSON-RPC), for clients like Toad/Zed:
170
+
127
171
  ```bash
128
- openagi -p "explain this function" path/to/file.js
129
- # or
130
- agi -p "explain this function" path/to/file.js
172
+ kode-acp
173
+ # or
174
+ kode --acp
131
175
  ```
132
176
 
133
- ### Using the @ Mention System
177
+ Toad example:
134
178
 
135
- OpenAGI supports a powerful @ mention system for intelligent completions:
179
+ ```bash
180
+ toad acp "kode-acp"
181
+ ```
182
+
183
+ More: `docs/acp.md`.
184
+
185
+ ### Using the @ Mention System
136
186
 
137
- #### 🦜 Expert Model Consultation
187
+ Kode supports a powerful @ mention system for intelligent completions:
188
+
189
+ #### 🦜 Expert Model Consultation
138
190
  ```bash
139
- # Consult specific AI models for expert opinions
140
- @ask-claude-sonnet-4 How should I optimize this React component for performance?
141
- @ask-gpt-5 What are the security implications of this authentication method?
142
- @ask-o1-preview Analyze the complexity of this algorithm
191
+ # Consult specific AI models for expert opinions
192
+ @ask-claude-sonnet-4 How should I optimize this React component for performance?
193
+ @ask-gpt-5 What are the security implications of this authentication method?
194
+ @ask-o1-preview Analyze the complexity of this algorithm
143
195
  ```
144
196
 
145
- #### 👤 Specialized Agent Delegation
197
+ #### 👤 Specialized Agent Delegation
146
198
  ```bash
147
- # Delegate tasks to specialized subagents
148
- @run-agent-simplicity-auditor Review this code for over-engineering
149
- @run-agent-architect Design a microservices architecture for this system
150
- @run-agent-test-writer Create comprehensive tests for these modules
199
+ # Delegate tasks to specialized subagents
200
+ @run-agent-simplicity-auditor Review this code for over-engineering
201
+ @run-agent-architect Design a microservices architecture for this system
202
+ @run-agent-test-writer Create comprehensive tests for these modules
151
203
  ```
152
204
 
153
- #### 📁 Smart File References
205
+ #### 📁 Smart File References
154
206
  ```bash
155
- # Reference files and directories with auto-completion
156
- @src/components/Button.tsx
157
- @docs/api-reference.md
207
+ # Reference files and directories with auto-completion
208
+ @packages/core/src/query/index.ts
209
+ @docs/README.md
158
210
  @.env.example
159
211
  ```
160
212
 
161
- The @ mention system provides intelligent completions as you type, showing available models, agents, and files.
213
+ The @ mention system provides intelligent completions as you type, showing available models, agents, and files.
214
+
215
+ ### MCP Servers (Extensions)
162
216
 
163
- ### AGENTS.md Documentation Mode
217
+ Kode can connect to MCP servers to extend tools and context.
164
218
 
165
- Use the `#` prefix to generate and maintain your AGENTS.md documentation:
219
+ - Config files: `.mcp.json` (recommended) or `.mcprc` in your project root. See `docs/mcp.md`.
220
+ - CLI:
166
221
 
167
222
  ```bash
168
- # Generate setup instructions
169
- # How do I set up the development environment?
223
+ kode mcp add
224
+ kode mcp list
225
+ kode mcp get <name>
226
+ kode mcp remove <name>
227
+ ```
170
228
 
171
- # Create testing documentation
172
- # What are the testing procedures for this project?
229
+ Example `.mcprc`:
173
230
 
174
- # Document the deployment process
175
- # Explain the deployment pipeline and requirements
231
+ ```json
232
+ {
233
+ "my-sse-server": { "type": "sse", "url": "http://127.0.0.1:3333/sse" }
234
+ }
176
235
  ```
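For a local stdio server, an entry takes a command instead of a URL. A sketch following the same top-level shape as the example above (the `command`/`args` field names are an assumption from common MCP configurations — see `docs/mcp.md` for the authoritative schema):

```json
{
  "my-stdio-server": { "type": "stdio", "command": "node", "args": ["./server.js"] }
}
```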
177
236
 
178
- This mode automatically formats responses as structured documentation and appends them to your AGENTS.md file.
237
+ ### Permissions & Approvals
238
+
239
+ - Default mode skips most prompts for speed.
240
+ - Safe mode: `kode --safe` requires approval for Bash commands and file writes/edits.
241
+ - Plan mode: the assistant may ask to enter plan mode to draft a plan file; while in plan mode, only read-only/planning tools (and the plan file) are allowed until you approve exiting plan mode.
242
+
243
+ ### Paste & Images
244
+
245
+ - Multi-line/large paste is inserted as a placeholder and expanded on submit.
246
+ - Pasting multiple existing file paths inserts `@path` mentions automatically (quoted when needed).
247
+ - Image paste (macOS): press `Ctrl+V` to attach clipboard images; you can paste multiple images before sending.
248
+
249
+ ### System Sandbox (Linux)
250
+
251
+ - In safe mode (or with `KODE_SYSTEM_SANDBOX=1`), agent-triggered Bash tool calls try to run inside a `bwrap` sandbox when available.
252
+ - Network is disabled by default; set `KODE_SYSTEM_SANDBOX_NETWORK=inherit` to allow network.
253
+ - Set `KODE_SYSTEM_SANDBOX=required` to fail closed if sandbox cannot be started.
254
+ - See `docs/system-sandbox.md` for details and platform notes.
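The sandbox knobs above are plain environment variables; for example, opting in outside safe mode while keeping network access:

```shell
# Opt in to the bwrap sandbox for agent-triggered Bash calls
export KODE_SYSTEM_SANDBOX=1
# Allow network inside the sandbox (disabled by default)
export KODE_SYSTEM_SANDBOX_NETWORK=inherit
```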
179
255
 
180
- ### Docker Usage
256
+ ### Troubleshooting
181
257
 
182
- #### Alternative: Build from local source
258
+ - Models: use `/model`, or `kode models import kode-models.yaml`, and ensure required API key env vars exist.
259
+ - Windows: if the native binary download is blocked/offline, set `KODE_BINARY_BASE_URL` (mirror) or `KODE_SKIP_BINARY_DOWNLOAD=1` (skip download); the wrapper will fall back to the Node.js runtime (`dist/index.js`).
260
+ - MCP: use `kode mcp list` to check server status; tune `MCP_CONNECTION_TIMEOUT_MS`, `MCP_SERVER_CONNECTION_BATCH_SIZE`, and `MCP_TOOL_TIMEOUT` if servers are slow.
261
+ - Sandbox: install `bwrap` (bubblewrap) on Linux, or set `KODE_SYSTEM_SANDBOX=0` to disable.
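For slow MCP servers, the tuning variables named above can be exported before launching Kode (the values here are illustrative; semantics and units are covered in `docs/mcp.md`):

```shell
# Loosen connection and tool timeouts, and connect servers in smaller batches
export MCP_CONNECTION_TIMEOUT_MS=30000
export MCP_TOOL_TIMEOUT=120000
export MCP_SERVER_CONNECTION_BATCH_SIZE=2
```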
262
+
263
+ ### AGENTS.md Documentation Mode
264
+
265
+ Use the `#` prefix to generate and maintain your AGENTS.md documentation:
183
266
 
184
267
  ```bash
185
- # Clone the repository
186
- git clone https://github.com/xingshuzhice/openagi.git
187
- cd openagi
268
+ # Generate setup instructions
269
+ # How do I set up the development environment?
270
+
271
+ # Create testing documentation
272
+ # What are the testing procedures for this project?
273
+
274
+ # Document deployment process
275
+ # Explain the deployment pipeline and requirements
276
+ ```
277
+
278
+ This mode automatically formats responses as structured documentation and appends them to your AGENTS.md file.
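The exact headings Kode emits may vary, but an appended entry is ordinary Markdown. A hypothetical result of the setup question above (contents invented for illustration):

```md
## Development Environment Setup

- Install dependencies with `npm install`
- Copy `.env.example` to `.env` and fill in the required keys
```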
188
279
 
189
- # Build the image locally
190
- docker build --no-cache -t openagi .
280
+ ### Docker Usage
191
281
 
192
- # Run in your project directory
282
+ #### Alternative: Build from local source
283
+
284
+ ```bash
285
+ # Clone the repository
286
+ git clone https://github.com/shareAI-lab/Kode.git
287
+ cd Kode
288
+
289
+ # Build the image locally
290
+ docker build --no-cache -t kode .
291
+
292
+ # Run in your project directory
193
293
  cd your-project
194
294
  docker run -it --rm \
195
295
  -v $(pwd):/workspace \
196
- -v ~/.openagi:/root/.openagi \
197
- -v ~/.openagi.json:/root/.openagi.json \
296
+ -v ~/.kode:/root/.kode \
297
+ -v ~/.kode.json:/root/.kode.json \
198
298
  -w /workspace \
199
- openagi
299
+ kode
200
300
  ```
201
301
 
202
- #### Docker Configuration Details
302
+ #### Docker Configuration Details
303
+
304
+ The Docker setup includes:
305
+
306
+ - **Volume Mounts**:
307
+ - `$(pwd):/workspace` - Mounts your current project directory
308
+ - `~/.kode:/root/.kode` - Preserves your kode configuration directory between runs
309
+ - `~/.kode.json:/root/.kode.json` - Preserves your kode global configuration file between runs
203
310
 
204
- The Docker setup includes:
311
+ - **Working Directory**: Set to `/workspace` inside the container
205
312
 
206
- - **Volume Mounts**:
207
- - `$(pwd):/workspace` - Mounts your current project directory
208
- - `~/.openagi:/root/.openagi` - Preserves your openagi configuration directory between runs
209
- - `~/.openagi.json:/root/.openagi.json` - Preserves your openagi global configuration file between runs
313
+ - **Interactive Mode**: Uses `-it` flags for interactive terminal access
210
314
 
211
- - **Working Directory**: Set to `/workspace` inside the container
315
+ - **Cleanup**: `--rm` flag removes the container after exit
212
316
 
213
- - **Interactive Mode**: Uses `-it` flags for interactive terminal access
317
+ **Note**: Kode uses both `~/.kode` directory for additional data (like memory files) and `~/.kode.json` file for global configuration.
214
318
 
215
- - **Cleanup**: The `--rm` flag removes the container after exit
319
+ The first time you run the Docker command, it will build the image. Subsequent runs will use the cached image for faster startup.
216
320
 
217
- **Note**: OpenAGI uses both the `~/.openagi` directory for additional data (like memory files) and the `~/.openagi.json` file for global configuration.
321
+ You can use the onboarding flow to set up a model, or `/model`.
322
+ If you don't see the models you want in the list, you can set them manually in `/config`.
323
+ Any OpenAI-compatible endpoint should work.
218
324
 
219
- The first time you run the Docker command, it will build the image. Subsequent runs use the cached image for faster startup.
325
+ ### Commands
220
326
 
221
- You can use the onboarding flow to set up a model, or use `/model`.
222
- If you don't see the models you want in the list, you can set them manually in `/config`.
223
- As long as you have an OpenAI-compatible endpoint, it should work.
327
+ - `/help` - Show available commands
328
+ - `/model` - Change AI model settings
329
+ - `/config` - Open configuration panel
330
+ - `/agents` - Manage subagents
331
+ - `/output-style` - Set the output style
332
+ - `/statusline` - Configure a custom status line command
333
+ - `/cost` - Show token usage and costs
334
+ - `/clear` - Clear conversation history
335
+ - `/init` - Initialize project context
336
+ - `/plugin` - Manage plugins/marketplaces (skills, commands)
224
337
 
225
- ### Commands
338
+ ## Agents / Subagents
226
339
 
227
- - `/help` - Show available commands
228
- - `/model` - Change AI model settings
229
- - `/config` - Open the configuration panel
230
- - `/cost` - Show token usage and costs
231
- - `/clear` - Clear conversation history
232
- - `/init` - Initialize project context
340
+ Kode supports subagents (agent templates) for delegation and task orchestration.
233
341
 
234
- ## Multi-Model Intelligent Collaboration
342
+ - Agents are loaded from `.kode/agents` and `.claude/agents` (user + project), plus plugins/policy and `--agents`.
343
+ - Manage in the UI: `/agents` (creates new agents under `./.claude/agents` / `~/.claude/agents` by default).
344
+ - Run via mentions: `@run-agent-<agentType> ...`
345
+ - Run via tooling: `Task(subagent_type: "<agentType>", ...)`
346
+ - CLI flags: `--agents <json>` (inject agents for this run), `--setting-sources user,project,local` (control which sources are loaded)
235
347
 
236
- Unlike the official Claude, which supports only a single model, OpenAGI implements **true multi-model collaboration**, letting you fully leverage the unique strengths of different AI models.
348
+ Minimal agent file example (`./.kode/agents/reviewer.md`):
237
349
 
238
- ### 🏗️ Core Technical Architecture
350
+ ```md
351
+ ---
352
+ name: reviewer
353
+ description: "Review diffs for correctness, security, and simplicity"
354
+ tools: ["Read", "Grep"]
355
+ model: inherit
356
+ ---
239
357
 
240
- #### 1. **ModelManager Multi-Model Manager**
241
- We designed a unified `ModelManager` system that supports:
242
- - **Model Profiles**: Each model has an independent profile containing API endpoint, authentication, context window size, cost parameters, etc.
243
- - **Model Pointers**: Users can configure default models for different purposes via the `/model` command:
244
- - `main`: Default model for the main agent
245
- - `task`: Default model for subagents
246
- - `reasoning`: Reserved for future ThinkTool use
247
- - `quick`: Fast model for simple NLP tasks (safety checks, title generation, etc.)
248
- - **Dynamic Model Switching**: Supports runtime model switching without restarting the session, preserving context continuity
358
+ Be strict. Point out bugs and risky changes. Prefer small, targeted fixes.
359
+ ```
360
+
361
+ Model field notes:
362
+ - Compatibility aliases: `inherit`, `opus`, `sonnet`, `haiku` (mapped to model pointers)
363
+ - Kode selectors (via `/model`): pointers (`main|task|compact|quick`), profile name, modelName, or `provider:modelName` (e.g. `openai:o3`)
364
+
365
+ Validate agent templates:
366
+
367
+ ```bash
368
+ kode agents validate
369
+ ```
370
+
371
+ See `docs/agents-system.md`.
372
+
373
+ ## Skills & Plugins
374
+
375
+ Kode supports the [Agent Skills](https://agentskills.io) open format for extending agent capabilities:
376
+ - **Agent Skills** format (`SKILL.md`) - see [specification](https://agentskills.io/specification)
377
+ - **Marketplace compatibility** (`.kode-plugin/marketplace.json`, legacy `.claude-plugin/marketplace.json`)
378
+ - **Install from any repository** using [`add-skill` CLI](https://github.com/vercel-labs/add-skill)
379
+
380
+ ### Quick install with add-skill
381
+
382
+ Install skills from any git repository:
383
+
384
+ ```bash
385
+ # Install from GitHub
386
+ npx add-skill vercel-labs/agent-skills -a kode
387
+
388
+ # Install to global directory
389
+ npx add-skill vercel-labs/agent-skills -a kode -g
390
+
391
+ # Install specific skills
392
+ npx add-skill vercel-labs/agent-skills -a kode -s pdf -s xlsx
393
+ ```
394
+
395
+ ### Install skills from a marketplace
396
+
397
+ ```bash
398
+ # Add a marketplace (local path, GitHub owner/repo, or URL)
399
+ kode plugin marketplace add ./path/to/marketplace-repo
400
+ kode plugin marketplace add owner/repo
401
+ kode plugin marketplace list
402
+
403
+ # Install a plugin pack (installs skills/commands)
404
+ kode plugin install document-skills@anthropic-agent-skills --scope user
405
+
406
+ # Project-scoped install (writes to ./.kode/...)
407
+ kode plugin install document-skills@anthropic-agent-skills --scope project
408
+
409
+ # Disable/enable an installed plugin
410
+ kode plugin disable document-skills@anthropic-agent-skills --scope user
411
+ kode plugin enable document-skills@anthropic-agent-skills --scope user
412
+ ```
413
+
414
+ Interactive equivalents:
415
+
416
+ ```text
417
+ /plugin marketplace add owner/repo
418
+ /plugin install document-skills@anthropic-agent-skills --scope user
419
+ ```
420
+
421
+ ### Use skills
422
+
423
+ - In interactive mode, run a skill as a slash command: `/pdf`, `/xlsx`, etc.
424
+ - Kode can also invoke skills automatically via the `Skill` tool when relevant.
425
+
426
+ ### Create a skill (Agent Skills)
427
+
428
+ Create `./.kode/skills/<skill-name>/SKILL.md` (project) or `~/.kode/skills/<skill-name>/SKILL.md` (user):
429
+
430
+ ```md
431
+ ---
432
+ name: my-skill
433
+ description: Describe what this skill does and when to use it.
434
+ allowed-tools: Read Bash(git:*) Bash(jq:*)
435
+ ---
436
+
437
+ # Skill instructions
438
+ ```
439
+
440
+ Naming rules:
441
+ - `name` must match the folder name
442
+ - Lowercase letters/numbers/hyphens only, 1–64 chars
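A quick way to sanity-check a name against those rules (the exact regex is an assumption approximating "lowercase letters/numbers/hyphens only, 1–64 chars"):

```shell
# Prints "valid" when the skill name satisfies the assumed pattern
name="my-skill"
echo "$name" | grep -Eq '^[a-z0-9][a-z0-9-]{0,63}$' && echo "valid"
```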
443
+
444
+ Compatibility:
445
+ - Kode also discovers `.claude/skills` and `.claude/commands` for legacy compatibility.
446
+
447
+ ### Distribute skills
448
+
449
+ - Marketplace repo: publish a repo containing `.kode-plugin/marketplace.json` listing plugin packs and their `skills` directories (legacy `.claude-plugin/marketplace.json` is also supported).
450
+ - Plugin repo: for full plugins (beyond skills), include `.kode-plugin/plugin.json` at the plugin root and keep all paths relative (`./...`).
451
+
452
+ See `docs/skills.md` for a compact reference and examples.
453
+
454
+ ### Output styles
455
+
456
+ Use output styles to switch system-prompt behavior.
457
+
458
+ - Select: `/output-style` (menu) or `/output-style <style>`
459
+ - Built-ins: `default`, `Explanatory`, `Learning`
460
+ - Stored per-project in `./.kode/settings.local.json` as `outputStyle` (legacy `.claude/settings.local.json` is supported)
461
+ - Custom styles: Markdown files under `output-styles/` in `.claude`/`.kode` user + project locations
462
+ - Plugins can provide styles under `output-styles/` (or manifest `outputStyles`); plugin styles are namespaced as `<plugin>:<style>`
463
+
464
+ See `docs/output-styles.md`.
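Per the storage rule above, the per-project file would contain the `outputStyle` key roughly as follows (only that key is documented here; the file may hold other settings):

```json
{
  "outputStyle": "Explanatory"
}
```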
465
+
466
+ ## Multi-Model Intelligent Collaboration
467
+
468
+ Unlike single-model CLIs, Kode implements **true multi-model collaboration**, allowing you to fully leverage the unique strengths of different AI models.
469
+
470
+ ### 🏗️ Core Technical Architecture
471
+
472
+ #### 1. **ModelManager Multi-Model Manager**
473
+ We designed a unified `ModelManager` system that supports:
474
+ - **Model Profiles**: Each model has an independent configuration file containing API endpoints, authentication, context window size, cost parameters, etc.
475
+ - **Model Pointers**: Users can configure default models for different purposes in the `/model` command:
476
+ - `main`: Default model for main Agent
477
+ - `task`: Default model for SubAgent
478
+ - `compact`: Model used for automatic context compression when nearing the context window
479
+ - `quick`: Fast model for simple operations and utilities
480
+ - **Dynamic Model Switching**: Support runtime model switching without restarting sessions, maintaining context continuity
481
+
482
+ #### 📦 Shareable Model Config (YAML)
483
+
484
+ You can export/import model profiles + pointers as a team-shareable YAML file. By default, exports do **not** include plaintext API keys (use env vars instead).
485
+
486
+ ```bash
487
+ # Export to a file (or omit --output to print to stdout)
488
+ kode models export --output kode-models.yaml
489
+
490
+ # Import (merge by default)
491
+ kode models import kode-models.yaml
492
+
493
+ # Replace existing profiles instead of merging
494
+ kode models import --replace kode-models.yaml
495
+
496
+ # List configured profiles + pointers
497
+ kode models list
498
+ ```
499
+
500
+ Example `kode-models.yaml`:
501
+
502
+ ```yaml
503
+ version: 1
504
+ profiles:
505
+ - name: OpenAI Main
506
+ provider: openai
507
+ modelName: gpt-4o
508
+ maxTokens: 8192
509
+ contextLength: 128000
510
+ apiKey:
511
+ fromEnv: OPENAI_API_KEY
512
+ pointers:
513
+ main: gpt-4o
514
+ task: gpt-4o
515
+ compact: gpt-4o
516
+ quick: gpt-4o
517
+ ```
249
518
 
250
- #### 2. **TaskTool Intelligent Task Distribution**
251
- Our specially designed `TaskTool` (Architect tool) implements:
252
- - **Subagent Mechanism**: Can launch multiple subagents to process tasks in parallel
253
- - **Model Parameter Passing**: Users can specify in a request which model subagents should use
254
- - **Default Model Configuration**: Subagents default to the model configured by the `task` pointer
519
+ #### 2. **TaskTool Intelligent Task Distribution**
520
+ Our specially designed `TaskTool` (Architect tool) implements:
521
+ - **Subagent Mechanism**: Can launch multiple sub-agents to process tasks in parallel
522
+ - **Model Parameter Passing**: Users can specify which model SubAgents should use in their requests
523
+ - **Default Model Configuration**: SubAgents use the model configured by the `task` pointer by default
255
524
 
256
- #### 3. **AskExpertModel Expert Consultation Tool**
257
- We specially designed the `AskExpertModel` tool:
258
- - **Expert Model Invocation**: Temporarily call a specific expert model during a conversation to solve difficult problems
259
- - **Isolated Execution**: Expert model responses are processed independently, without affecting the main conversation flow
260
- - **Knowledge Integration**: Integrates expert model insights into the current task
525
+ #### 3. **AskExpertModel Expert Consultation Tool**
526
+ We specially designed the `AskExpertModel` tool:
527
+ - **Expert Model Invocation**: Allows temporarily calling specific expert models to solve difficult problems during conversations
528
+ - **Model Isolation Execution**: Expert model responses are processed independently without affecting the main conversation flow
529
+ - **Knowledge Integration**: Integrates expert model insights into the current task
261
530
 
262
- #### 🎯 灵活的模型切换
263
- - **Tab 键快速切换**:在输入框中按 Tab 键快速切换当前对话的模型
264
- - **`/model` 命令**:使用 `/model` 命令配置和管理多个模型配置文件,为不同目的设置默认模型
265
- - **用户控制**:用户可以随时指定特定模型进行任务处理
531
+ #### 🎯 Flexible Model Switching
532
+ - **Option+M Quick Switch**: Press Option+M in the input box to cycle the main conversation model
533
+ - **`/model` Command**: Use `/model` command to configure and manage multiple model profiles, set default models for different purposes
534
+ - **User Control**: Users can specify specific models for task processing at any time
266
535
 
267
- #### 🔄 Intelligent Work Allocation Strategy
536
+ #### 🔄 Intelligent Work Allocation Strategy
268
537
 
269
- **Architecture Design Phase**
270
- - Use the **o3 model** or **GPT-5 model** to explore system architecture and formulate clear, well-defined technical solutions
271
- - These models excel at abstract thinking and system design
538
+ **Architecture Design Phase**
539
+ - Use **o3 model** or **GPT-5 model** to explore system architecture and formulate sharp and clear technical solutions
540
+ - These models excel in abstract thinking and system design
272
541
 
273
- **Solution Refinement Phase**
274
- - Use the **gemini model** to explore production-environment design details in depth
275
- - Leverage its deep experience in practical engineering and balanced reasoning
542
+ **Solution Refinement Phase**
543
+ - Use **gemini model** to deeply explore production environment design details
544
+ - Leverage its deep accumulation in practical engineering and balanced reasoning capabilities
276
545
 
277
- **Code Implementation Phase**
278
- - Use the **Qwen Coder**, **Kimi k2**, **GLM-4.5**, or **Claude Sonnet 4** models for concrete code writing
279
- - These models perform strongly at code generation, file editing, and engineering implementation
280
- - Supports parallel processing of multiple coding tasks through subagents
546
+ **Code Implementation Phase**
547
+ - Use **Qwen Coder model**, **Kimi k2 model**, **GLM-4.5 model**, or **Claude Sonnet 4 model** for specific code writing
548
+ - These models have strong performance in code generation, file editing, and engineering implementation
549
+ - Support parallel processing of multiple coding tasks through subagents
281
550
 
282
- **问题解决**
283
- - 遇到复杂问题时,咨询 **o3 模型**、**Claude Opus 4.1 模型** **Grok 4 模型** 等专家模型
284
- - 获得深入的技术见解和创新解决方案
551
+ **Problem Solving**
552
+ - When encountering complex problems, consult expert models like **o3 model**, **Claude Opus 4.1 model**, or **Grok 4 model**
553
+ - Obtain deep technical insights and innovative solutions
285
554
 
286
- #### 💡 实际应用场景
555
+ #### 💡 Practical Application Scenarios
287
556
 
288
557
  ```bash
- # 示例 1:架构设计
- "使用 o3 模型帮我设计一个高并发消息队列系统架构"
+ # Example 1: Architecture Design
+ "Use o3 model to help me design a high-concurrency message queue system architecture"

- # 示例 2:多模型协作
- "首先使用 GPT-5 模型分析这个性能问题的根本原因,然后使用 Claude Sonnet 4 模型编写优化代码"
+ # Example 2: Multi-Model Collaboration
+ "First use GPT-5 model to analyze the root cause of this performance issue, then use Claude Sonnet 4 model to write the optimized code"

- # 示例 3:并行任务处理
- "使用 Qwen Coder 模型作为子代理同时重构这三个模块"
+ # Example 3: Parallel Task Processing
+ "Use Qwen Coder model as a subagent to refactor these three modules simultaneously"

- # 示例 4:专家咨询
- "这个内存泄漏问题很棘手,单独询问 Claude Opus 4.1 模型寻求解决方案"
+ # Example 4: Expert Consultation
+ "This memory leak issue is tricky; ask Claude Opus 4.1 model separately for a solution"

- # 示例 5:代码审查
- "让 Kimi k2 模型审查这个 PR 的代码质量"
+ # Example 5: Code Review
+ "Have Kimi k2 model review the code quality of this PR"

- # 示例 6:复杂推理
- "使用 Grok 4 模型帮我推导这个算法的时间复杂度"
+ # Example 6: Complex Reasoning
+ "Use Grok 4 model to help me derive the time complexity of this algorithm"

- # 示例 7:方案设计
- "让 GLM-4.5 模型设计一个微服务分解方案"
+ # Example 7: Solution Design
+ "Have GLM-4.5 model design a microservice decomposition plan"
  ```
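Example 3's parallel dispatch can be sketched as a simple fan-out over subagents. `runSubagent` is a hypothetical stand-in for the actual subagent mechanism; only the fan-out pattern is the point here.

```typescript
type SubagentResult = { task: string; model: string; output: string };

// Stand-in for launching one subagent on one task with a chosen model.
async function runSubagent(model: string, task: string): Promise<SubagentResult> {
  return { task, model, output: `${model} finished: ${task}` };
}

// Fan out independent tasks and wait for all of them to complete,
// as in "refactor these three modules simultaneously".
async function runInParallel(model: string, tasks: string[]): Promise<SubagentResult[]> {
  return Promise.all(tasks.map(task => runSubagent(model, task)));
}
```

Because the tasks are independent, `Promise.all` lets every subagent make progress concurrently instead of processing the modules one after another.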

- ### 🛠️ 关键实现机制
+ ### 🛠️ Key Implementation Mechanisms

- #### **配置系统**
+ #### **Configuration System**
  ```typescript
- // 多模型配置支持示例
+ // Example of multi-model configuration support
 {
- "modelProfiles": {
- "o3": { "provider": "openai", "model": "o3", "apiKey": "..." },
- "claude4": { "provider": "anthropic", "model": "claude-sonnet-4", "apiKey": "..." },
- "qwen": { "provider": "alibaba", "model": "qwen-coder", "apiKey": "..." }
- },
+ "modelProfiles": [
+ { "name": "o3", "provider": "openai", "modelName": "o3", "apiKey": "...", "maxTokens": 1024, "contextLength": 128000, "isActive": true, "createdAt": 1710000000000 },
+ { "name": "qwen", "provider": "alibaba", "modelName": "qwen-coder", "apiKey": "...", "maxTokens": 1024, "contextLength": 128000, "isActive": true, "createdAt": 1710000000001 }
+ ],
  "modelPointers": {
- "main": "claude4", // 主对话模型
- "task": "qwen", // 任务执行模型
- "reasoning": "o3", // 推理模型
- "quick": "glm-4.5" // 快速响应模型
+ "main": "o3", // Main conversation model
+ "task": "qwen", // Sub-agent model (the "qwen" profile above)
+ "compact": "o3", // Context compression model
+ "quick": "o3" // Quick operations model
 }
 }
  ```
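For illustration, the pointer-to-profile lookup implied by this config could look like the sketch below. The interfaces mirror the example fields above, but `resolveModel` itself is an assumed helper, not the actual configuration code.

```typescript
interface ModelProfile {
  name: string;
  provider: string;
  modelName: string;
  isActive: boolean;
}

interface Config {
  modelProfiles: ModelProfile[];
  modelPointers: Record<string, string>; // pointer -> profile name
}

// A pointer like "main" or "task" names a profile; the profile then
// carries the provider and concrete model to call.
function resolveModel(config: Config, pointer: string): ModelProfile {
  const name = config.modelPointers[pointer];
  const profile = config.modelProfiles.find(p => p.name === name && p.isActive);
  if (!profile) throw new Error(`No active profile for pointer "${pointer}"`);
  return profile;
}
```

Indirection through pointers is what allows `/model` to retarget "main" or "task" without touching the profiles themselves.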

- #### **成本跟踪系统**
- - **使用统计**:使用 `/cost` 命令查看每个模型的令牌使用情况和成本
- - **多模型成本比较**:实时跟踪不同模型的使用成本
- - **历史记录**:保存每个会话的成本数据
+ #### **Cost Tracking System**
+ - **Usage Statistics**: Use the `/cost` command to view token usage and costs for each model
+ - **Multi-Model Cost Comparison**: Track the usage costs of different models in real time
+ - **History Records**: Save cost data for each session
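Per-model cost accounting of this kind can be sketched minimally as below. The class name, API, and prices are all illustrative assumptions, not the actual `/cost` internals.

```typescript
type Usage = { inputTokens: number; outputTokens: number };

class CostTracker {
  private totals = new Map<string, { usage: Usage; costUSD: number }>();

  // pricePerMTok: [input, output] USD per million tokens (made-up numbers).
  record(model: string, usage: Usage, pricePerMTok: [number, number]): void {
    const prev = this.totals.get(model) ?? {
      usage: { inputTokens: 0, outputTokens: 0 },
      costUSD: 0,
    };
    prev.usage.inputTokens += usage.inputTokens;
    prev.usage.outputTokens += usage.outputTokens;
    prev.costUSD +=
      (usage.inputTokens / 1e6) * pricePerMTok[0] +
      (usage.outputTokens / 1e6) * pricePerMTok[1];
    this.totals.set(model, prev);
  }

  // The shape a /cost-style report could summarize per model.
  report(): Array<{ model: string; costUSD: number }> {
    return [...this.totals].map(([model, t]) => ({ model, costUSD: t.costUSD }));
  }
}
```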
 
- #### **上下文管理器**
- - **上下文继承**:切换模型时保持对话连续性
- - **上下文窗口适应**:根据不同模型的上下文窗口大小自动调整
- - **会话状态保持**:确保多模型协作期间的信息一致性
+ #### **Context Manager**
+ - **Context Inheritance**: Maintain conversation continuity when switching models
+ - **Context Window Adaptation**: Automatically adjust based on different models' context window sizes
+ - **Session State Preservation**: Ensure information consistency during multi-model collaboration
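The window-adaptation bullet can be illustrated with a rough sketch: when switching to a model with a smaller window, drop the oldest messages until the estimated token count fits. The 4-characters-per-token estimate is a common rough heuristic, and `fitToContext` is an assumption, not the actual context manager.

```typescript
type Msg = { role: string; content: string };

// Crude token estimate: ~4 characters per token.
const estimateTokens = (m: Msg): number => Math.ceil(m.content.length / 4);

function fitToContext(history: Msg[], contextLength: number): Msg[] {
  const kept: Msg[] = [];
  let budget = contextLength;
  // Walk from newest to oldest, keeping messages while they still fit.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i]);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(history[i]);
  }
  return kept;
}
```

Keeping the newest turns preserves conversation continuity across a model switch even when the target window is smaller.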
 
- ### 🚀 多模型协作的优势
+ ### 🚀 Advantages of Multi-Model Collaboration

- 1. **最大化效率**:每个任务由最合适的模型处理
- 2. **成本优化**:简单任务使用轻量级模型,复杂任务使用强大的模型
- 3. **并行处理**:多个模型可以同时处理不同的子任务
- 4. **灵活切换**:根据任务需求切换模型,无需重启会话
- 5. **利用优势**:结合不同模型的优势以获得最佳整体结果
+ 1. **Maximized Efficiency**: Each task is handled by the most suitable model
+ 2. **Cost Optimization**: Use lightweight models for simple tasks, powerful models for complex tasks
+ 3. **Parallel Processing**: Multiple models can work on different subtasks simultaneously
+ 4. **Flexible Switching**: Switch models based on task requirements without restarting sessions
+ 5. **Leveraging Strengths**: Combine advantages of different models for optimal overall results

- ### 📊 与官方实现的比较
+ ### 📊 Comparison with a Single-Model CLI

- | 功能 | OpenAGI | 官方 Claude |
+ | Feature | Kode | Single-Model CLI |
  |---------|------|-----------------|
- | 支持的模型数量 | 无限,可配置任何模型 | 仅支持单一 Claude 模型 |
- | 模型切换 | ✅ Tab 键快速切换 | ❌ 需要重启会话 |
- | 并行处理 | ✅ 多个子代理并行工作 | ❌ 单线程处理 |
- | 成本跟踪 | ✅ 多个模型分别统计 | ❌ 单一模型成本 |
- | 任务模型配置 | ✅ 不同目的不同默认模型 | ❌ 所有任务使用相同模型 |
- | 专家咨询 | ✅ AskExpertModel 工具 | ❌ 不支持 |
+ | Number of Supported Models | Unlimited, configurable for any model | Only supports one model |
+ | Model Switching | ✅ Option+M quick switch | ❌ Requires session restart |
+ | Parallel Processing | ✅ Multiple subagents work in parallel | ❌ Single-threaded processing |
+ | Cost Tracking | ✅ Separate statistics for multiple models | ❌ Single model cost |
+ | Task Model Configuration | ✅ Different default models for different purposes | ❌ Same model for all tasks |
+ | Expert Consultation | ✅ AskExpertModel tool | ❌ Not supported |

- 这种多模型协作能力使 OpenAGI 成为真正的 **AI 开发工作台**,而不仅仅是一个单一的 AI 助手。
+ This multi-model collaboration capability makes Kode a true **AI Development Workbench**, not just a single AI assistant.
 
- ## 开发
+ ## Development

- OpenAGI 使用现代工具构建,开发需要 [Bun](https://bun.sh)
+ Kode is built with modern tools and requires [Bun](https://bun.sh) for development.

- ### 安装 Bun
+ ### Install Bun

  ```bash
  # macOS/Linux
@@ -373,52 +641,52 @@ curl -fsSL https://bun.sh/install | bash
  powershell -c "irm bun.sh/install.ps1 | iex"
  ```

- ### 设置开发环境
+ ### Setup Development Environment

  ```bash
- # 克隆仓库
- git clone https://github.com/xingshuzhice/openagi.git
- cd openagi
+ # Clone the repository
+ git clone https://github.com/shareAI-lab/kode.git
+ cd kode
 
- # 安装依赖
+ # Install dependencies
  bun install

- # 以开发模式运行
+ # Run in development mode
  bun run dev
  ```

- ### 构建
+ ### Build

  ```bash
  bun run build
  ```

- ### 测试
+ ### Testing

  ```bash
- # 运行测试
+ # Run tests
  bun test

- # 测试 CLI
+ # Test the CLI
  ./cli.js --help
  ```

- ## 贡献
+ ## Contributing

- 我们欢迎贡献!详情请参阅我们的[贡献指南](CONTRIBUTING.md)
+ We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

- ## 许可证
+ ## License

- Apache 2.0 许可证 - 详情请参阅 [LICENSE](LICENSE)
+ Apache 2.0 License - see [LICENSE](LICENSE) for details.

- ## 致谢
+ ## Acknowledgments

- - 部分代码来自 @dnakov 的 anonkode
- - 部分 UI 学习自 gemini-cli
- - 部分系统设计学习自 claude code
+ - Some code from @dnakov's anonkode
+ - Parts of the UI are inspired by gemini-cli
+ - Parts of the system design are inspired by upstream agent CLIs

- ## 支持
+ ## Support

- - 📚 [文档](docs/)
- - 🐛 [报告问题](https://github.com/xingshuzhice/openagi/issues)
- - 💬 [讨论](https://github.com/xingshuzhice/openagi/discussions)
+ - 📚 [Documentation](docs/)
+ - 🐛 [Report Issues](https://github.com/shareAI-lab/kode/issues)
+ - 💬 [Discussions](https://github.com/shareAI-lab/kode/discussions)