@xinshu/openagi 0.0.1 → 0.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,60 +1,423 @@
1
- # OpenAGI (@anthropic-ai/claude-code style)
1
+ # OpenAGI - AI Coding Assistant
2
+ <img width="991" height="479" alt="image" src="https://github.com/user-attachments/assets/c1751e92-94dc-4e4a-9558-8cd2d058c1a1" /> <br>
3
+ [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
4
+ [![AGENTS.md](https://img.shields.io/badge/AGENTS.md-Compatible-brightgreen)](https://agents.md)
2
5
 
3
- This is a self-contained distribution of OpenAGI, packaged in the style of @anthropic-ai/claude-code.
6
+ [中文文档](README.zh-CN.md) | [Contributing](CONTRIBUTING.md) | [Documentation](docs/)
4
7
 
5
- ## Structure
8
+ <img width="90%" alt="image" src="https://github.com/user-attachments/assets/fdce7017-8095-429d-b74e-07f43a6919e1" />
6
9
 
7
- This distribution follows the same structure as @anthropic-ai/claude-code:
10
+ <img width="90%" alt="2c0ad8540f2872d197c7b17ae23d74f5" src="https://github.com/user-attachments/assets/f220cc27-084d-468e-a3f4-d5bc44d84fac" />
8
11
 
9
- ```
10
- openagi-anthropic-style/
11
- ├── cli.js # Main CLI executable (self-contained bundle)
12
- ├── cli.mjs # ES module version
13
- ├── package.json # Package configuration
14
- ├── sdk.d.ts # TypeScript definitions for SDK
15
- ├── sdk-tools.d.ts # TypeScript definitions for tools
16
- ├── sdk.mjs # SDK ES module
17
- ├── LICENSE.md # License information
18
- ├── README.md # This file
19
- ├── node_modules/ # Optional dependencies structure
20
- │ └── @img/ # Sharp platform-specific binaries
21
- └── vendor/ # Binary tools and utilities
22
- └── ripgrep # Ripgrep binary
12
+ <img width="90%" alt="f266d316d90ddd0db5a3d640c1126930" src="https://github.com/user-attachments/assets/90ec7399-1349-4607-b689-96613b3dc3e2" />
13
+
14
+
15
+ <img width="90%" alt="image" src="https://github.com/user-attachments/assets/b30696ce-5ab1-40a0-b741-c7ef3945dba0" />
16
+
17
+
18
+ ## 📢 Update Log
19
+
20
+ **2025-08-29**: We've added Windows support! Windows users can now run Kode using Git Bash, MSYS, or WSL (Windows Subsystem for Linux).
21
+
22
+
23
+ ## 🤝 AGENTS.md Standard Support
24
+
25
+ **Kode proudly supports the [AGENTS.md standard protocol](https://agents.md) initiated by OpenAI** - a simple, open format for guiding programming agents that's used by 20k+ open source projects.
26
+
27
+ ### Full Compatibility with Multiple Standards
28
+
29
+ - ✅ **AGENTS.md** - Native support for the OpenAI-initiated standard format
30
+ - ✅ **CLAUDE.md** - Full backward compatibility with Claude Code `.claude` configurations
31
+ - ✅ **Subagent System** - Advanced agent delegation and task orchestration
32
+ - ✅ **Cross-platform** - Works with 20+ AI models and providers
33
+
34
+ Use `# Your documentation request` to generate and maintain your AGENTS.md file automatically, while preserving compatibility with existing `.claude` workflows.
35
+
36
+ ## Overview
37
+
38
+ Kode is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
39
+
40
+ > **⚠️ Security Notice**: Kode runs in YOLO mode by default (equivalent to Claude Code's `--dangerously-skip-permissions` flag), bypassing all permission checks for maximum productivity. YOLO mode is recommended only for trusted, secure environments when working on non-critical projects. If you're working with important files or using models of questionable capability, we strongly recommend using `kode --safe` to enable permission checks and manual approval for all operations.
41
+ >
42
+ > **📊 Model Performance**: For optimal performance, we recommend using newer, more capable models designed for autonomous task completion. Avoid older Q&A-focused models like GPT-4o or Gemini 2.5 Pro, which are optimized for answering questions rather than sustained independent task execution. Choose models specifically trained for agentic workflows and extended reasoning capabilities.
43
+
44
+ <img width="600" height="577" alt="image" src="https://github.com/user-attachments/assets/8b46a39d-1ab6-4669-9391-14ccc6c5234c" />
45
+
46
+ ## Features
47
+
48
+ ### Core Capabilities
49
+ - 🤖 **AI-Powered Assistance** - Uses advanced AI models to understand and respond to your requests
50
+ - 🔄 **Multi-Model Collaboration** - Flexibly switch and combine multiple AI models to leverage their unique strengths
51
+ - 🦜 **Expert Model Consultation** - Use `@ask-model-name` to consult specific AI models for specialized analysis
52
+ - 👤 **Intelligent Agent System** - Use `@run-agent-name` to delegate tasks to specialized subagents
53
+ - 📝 **Code Editing** - Directly edit files with intelligent suggestions and improvements
54
+ - 🔍 **Codebase Understanding** - Analyzes your project structure and code relationships
55
+ - 🚀 **Command Execution** - Run shell commands and see results in real-time
56
+ - 🛠️ **Workflow Automation** - Handle complex development tasks with simple prompts
57
+
58
+ ### Authoring Comfort
59
+ - `Ctrl+G` opens your message in your preferred editor (respects `$EDITOR`/`$VISUAL`; falls back to code/nano/vim/notepad) and returns the text to the prompt when you close it.
60
+ - `Shift+Enter` inserts a newline inside the prompt without sending; plain Enter submits. `Ctrl+M` switches the active model.
61
+
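The `$EDITOR`/`$VISUAL` lookup described above can be sketched roughly as follows; the function name and last-resort default are illustrative assumptions, not Kode's actual code.

```typescript
// Sketch of $VISUAL/$EDITOR resolution with the fallback chain
// mentioned above (code/nano/vim/notepad); details may differ in Kode.
function resolveEditor(env: Record<string, string | undefined>): string {
  const candidates = [env.VISUAL, env.EDITOR, "code", "nano", "vim", "notepad"];
  for (const candidate of candidates) {
    if (candidate && candidate.trim() !== "") return candidate;
  }
  return "vi"; // last-resort default (an assumption for this sketch)
}
```

`$VISUAL` conventionally wins over `$EDITOR` when both are set, which is the ordering used here.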
62
+ ### 🎯 Advanced Intelligent Completion System
63
+ Our state-of-the-art completion system provides unparalleled coding assistance:
64
+
65
+ #### Smart Fuzzy Matching
66
+ - **Hyphen-Aware Matching** - Type `dao` to match `run-agent-dao-qi-harmony-designer`
67
+ - **Abbreviation Support** - `dq` matches `dao-qi`, `nde` matches `node`
68
+ - **Numeric Suffix Handling** - `py3` intelligently matches `python3`
69
+ - **Multi-Algorithm Fusion** - Combines 7+ matching algorithms for best results
70
+
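The hyphen-aware and abbreviation matching described above can be pictured with a minimal sketch; these function names and the boolean return are illustrative assumptions, not Kode's real (scored, multi-algorithm) implementation.

```typescript
// Illustrative only: a subsequence matcher plus an initials matcher
// over hyphen-separated words, as in "dao" -> "run-agent-dao-qi-..."
// and "dq" -> "dao-qi".
function fuzzyMatch(query: string, candidate: string): boolean {
  const q = query.toLowerCase();
  const c = candidate.toLowerCase();
  let qi = 0;
  // Scan the candidate once, consuming query characters in order.
  for (let ci = 0; ci < c.length && qi < q.length; ci++) {
    if (c[ci] === q[qi]) qi++;
  }
  return qi === q.length;
}

function abbrevMatch(query: string, candidate: string): boolean {
  // Match against the initials of hyphen/underscore-separated words.
  const initials = candidate
    .toLowerCase()
    .split(/[-_]/)
    .map(word => word.charAt(0))
    .join("");
  return initials.includes(query.toLowerCase());
}
```

A production matcher would additionally score candidates (word-boundary bonuses, usage frequency) and fuse several algorithms rather than return a plain boolean.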
71
+ #### Intelligent Context Detection
72
+ - **No @ Required** - Type `gp5` directly to match `@ask-gpt-5`
73
+ - **Auto-Prefix Addition** - Tab/Enter automatically adds `@` for agents and models
74
+ - **Mixed Completion** - Seamlessly switch between commands, files, agents, and models
75
+ - **Smart Prioritization** - Results ranked by relevance and usage frequency
76
+
77
+ #### Unix Command Optimization
78
+ - **500+ Common Commands** - Curated database of frequently used Unix/Linux commands
79
+ - **System Intersection** - Only shows commands that actually exist on your system
80
+ - **Priority Scoring** - Common commands appear first (git, npm, docker, etc.)
81
+ - **Real-time Loading** - Dynamic command discovery from system PATH
82
+
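The "system intersection" idea above (only suggest curated commands that actually exist on this machine) can be sketched as a PATH scan; this is an illustrative sketch, not Kode's implementation.

```typescript
// Sketch: intersect a curated command list with executables found in
// the directories on PATH. Unreadable PATH entries are skipped.
import { readdirSync } from "node:fs";
import { delimiter } from "node:path";

function commandsOnPath(curated: string[]): string[] {
  const present = new Set<string>();
  for (const dir of (process.env.PATH ?? "").split(delimiter)) {
    try {
      for (const entry of readdirSync(dir)) present.add(entry);
    } catch {
      // Ignore nonexistent or unreadable PATH entries.
    }
  }
  return curated.filter(cmd => present.has(cmd));
}
```

A real completion engine would cache this scan and refresh it lazily rather than re-reading PATH on every keystroke.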
83
+ ### User Experience
84
+ - 🎨 **Interactive UI** - Beautiful terminal interface with syntax highlighting
85
+ - 🔌 **Tool System** - Extensible architecture with specialized tools for different tasks
86
+ - 💾 **Context Management** - Smart context handling to maintain conversation continuity
87
+ - 📋 **AGENTS.md Integration** - Use `# documentation requests` to auto-generate and maintain project documentation
88
+
89
+ ## Installation
90
+
91
+ ```bash
92
+ npm install -g @xinshu/openagi
23
93
  ```
24
94
 
95
+ After installation, you can use any of these commands:
96
+ - `kode` - Primary command
97
+ - `kwa` - Kode With Agent (alternative)
98
+ - `kd` - Ultra-short alias
99
+
100
+ ### Windows Notes
101
+
102
+ - Install Git for Windows to provide a Bash (Unix‑like) environment: https://git-scm.com/download/win
103
+ - Kode automatically prefers Git Bash/MSYS or WSL Bash when available.
104
+ - If neither is available, it will fall back to your default shell, but many features work best with Bash.
105
+ - Use VS Code’s integrated terminal rather than legacy Command Prompt (cmd):
106
+   - Better font rendering and icon support.
107
+   - Fewer path and encoding quirks compared to cmd.
108
+   - Select “Git Bash” as the VS Code terminal shell when possible.
109
+ - Optional: If you install globally via npm, avoid spaces in the global prefix path to prevent shim issues.
110
+   - Example: `npm config set prefix "C:\\npm"` and reinstall global packages.
111
+
25
112
  ## Usage
26
113
 
114
+ ### Interactive Mode
115
+ Start an interactive session:
27
116
  ```bash
28
- # Make executable (if needed)
29
- chmod +x cli.js
117
+ kode
118
+ # or
119
+ kwa
120
+ # or
121
+ kd
122
+ ```
30
123
 
31
- # Run OpenAGI directly
32
- ./cli.js --help
124
+ ### Non-Interactive Mode
125
+ Get a quick response:
126
+ ```bash
127
+ kode -p "explain this function" path/to/file.js
128
+ # or
129
+ kwa -p "explain this function" path/to/file.js
130
+ ```
131
+
132
+ ### Using the @ Mention System
33
133
 
34
- # Or via node
35
- node cli.js --help
134
+ Kode supports a powerful @ mention system for intelligent completions:
135
+
136
+ #### 🦜 Expert Model Consultation
137
+ ```bash
138
+ # Consult specific AI models for expert opinions
139
+ @ask-claude-sonnet-4 How should I optimize this React component for performance?
140
+ @ask-gpt-5 What are the security implications of this authentication method?
141
+ @ask-o1-preview Analyze the complexity of this algorithm
36
142
  ```
37
143
 
38
- ## Installation
144
+ #### 👤 Specialized Agent Delegation
145
+ ```bash
146
+ # Delegate tasks to specialized subagents
147
+ @run-agent-simplicity-auditor Review this code for over-engineering
148
+ @run-agent-architect Design a microservices architecture for this system
149
+ @run-agent-test-writer Create comprehensive tests for these modules
150
+ ```
39
151
 
40
- 1. Copy this entire directory to your desired location
41
- 2. Add the directory to your PATH, or
42
- 3. Create a symlink: `ln -s /path/to/dist/cli.js /usr/local/bin/openagi`
152
+ #### 📁 Smart File References
153
+ ```bash
154
+ # Reference files and directories with auto-completion
155
+ @src/components/Button.tsx
156
+ @docs/api-reference.md
157
+ @.env.example
158
+ ```
43
159
 
44
- ## Features
160
+ The @ mention system provides intelligent completions as you type, showing available models, agents, and files.
161
+
162
+ ### AGENTS.md Documentation Mode
163
+
164
+ Use the `#` prefix to generate and maintain your AGENTS.md documentation:
165
+
166
+ ```bash
167
+ # Generate setup instructions
168
+ # How do I set up the development environment?
169
+
170
+ # Create testing documentation
171
+ # What are the testing procedures for this project?
172
+
173
+ # Document deployment process
174
+ # Explain the deployment pipeline and requirements
175
+ ```
176
+
177
+ This mode automatically formats responses as structured documentation and appends them to your AGENTS.md file.
178
+
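The append step described above can be sketched as a small helper; the function name and the `##` heading format it emits are assumptions for illustration, not Kode's actual formatting.

```typescript
// Hypothetical sketch: formatted answers are appended to AGENTS.md as
// new sections, creating the file with a title if it does not exist.
import { appendFileSync, existsSync, writeFileSync, readFileSync } from "node:fs";

function appendAgentsSection(title: string, body: string, file = "AGENTS.md"): string {
  if (!existsSync(file)) {
    writeFileSync(file, "# AGENTS.md\n");
  }
  appendFileSync(file, `\n## ${title}\n\n${body}\n`);
  // Return the updated file content for convenience.
  return readFileSync(file, "utf8");
}
```

Appending rather than rewriting keeps previously generated sections (and any hand-written ones) intact.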
179
+ ### Docker Usage
180
+
181
+ #### Alternative: Build from local source
182
+
183
+ ```bash
184
+ # Clone the repository
185
+ git clone https://github.com/xingshuzhice/openagi.git
186
+ cd openagi
187
+
188
+ # Build the image locally
189
+ docker build --no-cache -t kode .
190
+
191
+ # Run in your project directory
192
+ cd your-project
193
+ docker run -it --rm \
194
+   -v $(pwd):/workspace \
195
+   -v ~/.kode:/root/.kode \
196
+   -v ~/.kode.json:/root/.kode.json \
197
+   -w /workspace \
198
+   kode
199
+ ```
200
+
201
+ #### Docker Configuration Details
202
+
203
+ The Docker setup includes:
204
+
205
+ - **Volume Mounts**:
206
+   - `$(pwd):/workspace` - Mounts your current project directory
207
+   - `~/.kode:/root/.kode` - Preserves your kode configuration directory between runs
208
+   - `~/.kode.json:/root/.kode.json` - Preserves your kode global configuration file between runs
209
+
210
+ - **Working Directory**: Set to `/workspace` inside the container
211
+
212
+ - **Interactive Mode**: Uses `-it` flags for interactive terminal access
213
+
214
+ - **Cleanup**: `--rm` flag removes the container after exit
215
+
216
+ **Note**: Kode uses both the `~/.kode` directory for additional data (like memory files) and the `~/.kode.json` file for global configuration.
217
+
218
+ The first time you run the Docker command, it will build the image. Subsequent runs will use the cached image for faster startup.
219
+
220
+ You can set up your model during onboarding, or later with the `/model` command.
221
+ If a model you want isn't on the list, you can add it manually in `/config`.
222
+ Any OpenAI-compatible endpoint should work.
223
+
224
+ ### Commands
225
+
226
+ - `/help` - Show available commands
227
+ - `/model` - Change AI model settings
228
+ - `/config` - Open configuration panel
229
+ - `/cost` - Show token usage and costs
230
+ - `/clear` - Clear conversation history
231
+ - `/init` - Initialize project context
232
+
233
+ ## Multi-Model Intelligent Collaboration
234
+
235
+ Unlike the official Claude Code, which supports only a single model, Kode implements **true multi-model collaboration**, allowing you to fully leverage the unique strengths of different AI models.
236
+
237
+ ### 🏗️ Core Technical Architecture
238
+
239
+ #### 1. **ModelManager Multi-Model Manager**
240
+ We designed a unified `ModelManager` system that supports:
241
+ - **Model Profiles**: Each model has an independent configuration profile containing its API endpoint, authentication, context window size, cost parameters, etc.
242
+ - **Model Pointers**: Users can configure default models for different purposes in the `/model` command:
243
+   - `main`: Default model for main Agent
244
+   - `task`: Default model for SubAgent
245
+   - `reasoning`: Reserved for future ThinkTool usage
246
+   - `quick`: Fast model for simple NLP tasks (security identification, title generation, etc.)
247
+ - **Dynamic Model Switching**: Support runtime model switching without restarting sessions, maintaining context continuity
248
+
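The pointer mechanism above amounts to one level of indirection from purpose to profile. A minimal sketch, with types and the function name as illustrative assumptions (mirroring the configuration example under "Key Implementation Mechanisms"):

```typescript
// Sketch of pointer -> profile resolution: a pointer names a purpose,
// and the config maps it to a concrete model profile.
type ModelProfile = { provider: string; model: string };
type ModelConfig = {
  modelProfiles: Record<string, ModelProfile>;
  modelPointers: Record<string, string>;
};

function resolveModel(
  config: ModelConfig,
  pointer: "main" | "task" | "reasoning" | "quick",
): ModelProfile {
  const profileName = config.modelPointers[pointer];
  const profile = config.modelProfiles[profileName];
  if (!profile) {
    throw new Error(`Pointer "${pointer}" refers to unknown profile "${profileName}"`);
  }
  return profile;
}
```

Because callers only name a purpose (`main`, `task`, ...), retargeting a pointer switches the underlying model without touching any call site.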
249
+ #### 2. **TaskTool Intelligent Task Distribution**
250
+ Our specially designed `TaskTool` (Architect tool) implements:
251
+ - **Subagent Mechanism**: Can launch multiple sub-agents to process tasks in parallel
252
+ - **Model Parameter Passing**: Users can specify which model SubAgents should use in their requests
253
+ - **Default Model Configuration**: SubAgents use the model configured by the `task` pointer by default
254
+
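The parallel-subagent idea above reduces to fanning tasks out concurrently and collecting results; this sketch stands in for the real mechanism, with `runAgent` as a placeholder for an actual subagent invocation.

```typescript
// Hypothetical sketch: launch all subagent tasks concurrently and
// await their results in order.
async function runSubagents<T>(
  tasks: string[],
  runAgent: (task: string) => Promise<T>,
): Promise<T[]> {
  return Promise.all(tasks.map(task => runAgent(task)));
}
```

In practice each subagent would carry its own model (per the `task` pointer or an explicit user choice) and its own isolated context.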
255
+ #### 3. **AskExpertModel Expert Consultation Tool**
256
+ We specially designed the `AskExpertModel` tool:
257
+ - **Expert Model Invocation**: Allows temporarily calling specific expert models to solve difficult problems during conversations
258
+ - **Model Isolation Execution**: Expert model responses are processed independently without affecting the main conversation flow
259
+ - **Knowledge Integration**: Integrates expert model insights into the current task
260
+
261
+ #### 🎯 Flexible Model Switching
262
+ - **Tab Key Quick Switch**: Press Tab in the input box to quickly switch the model for the current conversation
263
+ - **`/model` Command**: Use `/model` command to configure and manage multiple model profiles, set default models for different purposes
264
+ - **User Control**: Users can specify specific models for task processing at any time
265
+
266
+ #### 🔄 Intelligent Work Allocation Strategy
267
+
268
+ **Architecture Design Phase**
269
+ - Use **o3 model** or **GPT-5 model** to explore system architecture and formulate sharp, clear technical solutions
270
+ - These models excel at abstract thinking and system design
271
+
272
+ **Solution Refinement Phase**
273
+ - Use **gemini model** to deeply explore production environment design details
274
+ - Leverage its strong practical engineering experience and balanced reasoning capabilities
275
+
276
+ **Code Implementation Phase**
277
+ - Use **Qwen Coder model**, **Kimi k2 model**, **GLM-4.5 model**, or **Claude Sonnet 4 model** for specific code writing
278
+ - These models have strong performance in code generation, file editing, and engineering implementation
279
+ - Support parallel processing of multiple coding tasks through subagents
280
+
281
+ **Problem Solving**
282
+ - When encountering complex problems, consult expert models like **o3 model**, **Claude Opus 4.1 model**, or **Grok 4 model**
283
+ - Obtain deep technical insights and innovative solutions
284
+
285
+ #### 💡 Practical Application Scenarios
286
+
287
+ ```bash
288
+ # Example 1: Architecture Design
289
+ "Use o3 model to help me design a high-concurrency message queue system architecture"
290
+
291
+ # Example 2: Multi-Model Collaboration
292
+ "First use GPT-5 model to analyze the root cause of this performance issue, then use Claude Sonnet 4 model to write optimization code"
293
+
294
+ # Example 3: Parallel Task Processing
295
+ "Use Qwen Coder model as subagent to refactor these three modules simultaneously"
296
+
297
+ # Example 4: Expert Consultation
298
+ "This memory leak issue is tricky, ask Claude Opus 4.1 model separately for solutions"
299
+
300
+ # Example 5: Code Review
301
+ "Have Kimi k2 model review the code quality of this PR"
302
+
303
+ # Example 6: Complex Reasoning
304
+ "Use Grok 4 model to help me derive the time complexity of this algorithm"
305
+
306
+ # Example 7: Solution Design
307
+ "Have GLM-4.5 model design a microservice decomposition plan"
308
+ ```
309
+
310
+ ### 🛠️ Key Implementation Mechanisms
311
+
312
+ #### **Configuration System**
313
+ ```typescript
314
+ // Example of multi-model configuration support
315
+ {
316
+   "modelProfiles": {
317
+     "o3": { "provider": "openai", "model": "o3", "apiKey": "..." },
318
+     "claude4": { "provider": "anthropic", "model": "claude-sonnet-4", "apiKey": "..." },
319
+     "qwen": { "provider": "alibaba", "model": "qwen-coder", "apiKey": "..." },
+     "glm-4.5": { "provider": "zhipu", "model": "glm-4.5", "apiKey": "..." }
320
+   },
321
+   "modelPointers": {
322
+     "main": "claude4",      // Main conversation model
323
+     "task": "qwen",         // Task execution model
324
+     "reasoning": "o3",      // Reasoning model
325
+     "quick": "glm-4.5"      // Quick response model
326
+   }
327
+ }
328
+ ```
329
+
330
+ #### **Cost Tracking System**
331
+ - **Usage Statistics**: Use `/cost` command to view token usage and costs for each model
332
+ - **Multi-Model Cost Comparison**: Track usage costs of different models in real-time
333
+ - **History Records**: Save cost data for each session
334
+
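The bookkeeping behind a `/cost`-style report can be sketched as per-model accumulation; the shape and per-million-token rate parameter are assumptions for illustration, not Kode's actual accounting.

```typescript
// Illustrative per-model usage ledger: accumulate tokens and dollar
// cost per model, keyed by model name.
type Usage = { tokens: number; cost: number };
const usageByModel = new Map<string, Usage>();

function recordUsage(model: string, tokens: number, costPerMillionTokens: number): Usage {
  const entry = usageByModel.get(model) ?? { tokens: 0, cost: 0 };
  entry.tokens += tokens;
  entry.cost += (tokens / 1_000_000) * costPerMillionTokens;
  usageByModel.set(model, entry);
  return entry;
}
```

Keeping the ledger keyed by model is what makes side-by-side cost comparison across models possible.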
335
+ #### **Context Manager**
336
+ - **Context Inheritance**: Maintain conversation continuity when switching models
337
+ - **Context Window Adaptation**: Automatically adjust based on different models' context window sizes
338
+ - **Session State Preservation**: Ensure information consistency during multi-model collaboration
339
+
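Context-window adaptation as described above can be sketched by trimming history to fit the target model's window; the 4-characters-per-token estimate is a rough illustrative heuristic, not Kode's tokenizer.

```typescript
// Sketch: keep the most recent messages that fit within the target
// model's context window, using a crude chars/4 token estimate.
function trimToWindow(messages: string[], maxTokens: number): string[] {
  const estimateTokens = (text: string) => Math.ceil(text.length / 4);
  const kept: string[] = [];
  let used = 0;
  // Walk backwards so the most recent messages are preferred.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i]);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

A real implementation would tokenize with the target model's tokenizer and typically pin the system prompt rather than trimming purely from the tail.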
340
+ ### 🚀 Advantages of Multi-Model Collaboration
341
+
342
+ 1. **Maximized Efficiency**: Each task is handled by the most suitable model
343
+ 2. **Cost Optimization**: Use lightweight models for simple tasks, powerful models for complex tasks
344
+ 3. **Parallel Processing**: Multiple models can work on different subtasks simultaneously
345
+ 4. **Flexible Switching**: Switch models based on task requirements without restarting sessions
346
+ 5. **Leveraging Strengths**: Combine advantages of different models for optimal overall results
347
+
348
+ ### 📊 Comparison with Official Implementation
349
+
350
+ | Feature | Kode | Official Claude |
351
+ |---------|------|-----------------|
352
+ | Number of Supported Models | Unlimited, configurable for any model | Only supports single Claude model |
353
+ | Model Switching | ✅ Tab key quick switch | ❌ Requires session restart |
354
+ | Parallel Processing | ✅ Multiple SubAgents work in parallel | ❌ Single-threaded processing |
355
+ | Cost Tracking | ✅ Separate statistics for multiple models | ❌ Single model cost |
356
+ | Task Model Configuration | ✅ Different default models for different purposes | ❌ Same model for all tasks |
357
+ | Expert Consultation | ✅ AskExpertModel tool | ❌ Not supported |
358
+
359
+ This multi-model collaboration capability makes Kode a true **AI Development Workbench**, not just a single AI assistant.
360
+
361
+ ## Development
362
+
363
+ Kode is built with modern tools and requires [Bun](https://bun.sh) for development.
364
+
365
+ ### Install Bun
366
+
367
+ ```bash
368
+ # macOS/Linux
369
+ curl -fsSL https://bun.sh/install | bash
370
+
371
+ # Windows
372
+ powershell -c "irm bun.sh/install.ps1 | iex"
373
+ ```
374
+
375
+ ### Setup Development Environment
376
+
377
+ ```bash
378
+ # Clone the repository
379
+ git clone https://github.com/xingshuzhice/openagi.git
380
+ cd openagi
381
+
382
+ # Install dependencies
383
+ bun install
384
+
385
+ # Run in development mode
386
+ bun run dev
387
+ ```
388
+
389
+ ### Build
390
+
391
+ ```bash
392
+ bun run build
393
+ ```
394
+
395
+ ### Testing
396
+
397
+ ```bash
398
+ # Run tests
399
+ bun test
400
+
401
+ # Test the CLI
402
+ ./cli.js --help
403
+ ```
404
+
405
+ ## Contributing
406
+
407
+ We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
408
+
409
+ ## License
410
+
411
+ Apache 2.0 License - see [LICENSE](LICENSE) for details.
45
412
 
46
- - **@anthropic-ai/claude-code architecture**: Same structure and conventions
47
- - **Self-contained**: All npm dependencies bundled into cli.js
48
- - **Optional dependencies**: Platform-specific binaries in node_modules/@img/
49
- - **SDK included**: TypeScript definitions and ES module SDK
50
- - **Vendor tools**: Binary utilities in vendor/ directory
413
+ ## Thanks
51
414
 
52
- ## Dependencies
415
+ - Some code is adapted from @dnakov's anonkode
416
+ - Some UI design is inspired by gemini-cli
417
+ - Some system design is inspired by Claude Code
53
418
 
54
- This distribution is completely self-contained:
55
- - **ALL** application dependencies are bundled into `cli.js`
56
- - Only Node.js built-in modules are external
57
- - Platform-specific optional dependencies for image processing are in node_modules/
58
- - No additional `npm install` required!
419
+ ## Support
59
420
 
60
- Everything needed to run OpenAGI is included with just Node.js 18+.
421
+ - 📚 [Documentation](docs/)
422
+ - 🐛 [Report Issues](https://github.com/xingshuzhice/openagi/issues)
423
+ - 💬 [Discussions](https://github.com/xingshuzhice/openagi/discussions)