@defai.digital/ax-cli 1.2.0 → 1.2.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/automatosx.config.json +333 -0
  2. package/config/messages.yaml +2 -0
  3. package/dist/agent/grok-agent.js +6 -1
  4. package/dist/agent/grok-agent.js.map +1 -1
  5. package/dist/commands/mcp.js +8 -2
  6. package/dist/commands/mcp.js.map +1 -1
  7. package/dist/commands/update.js +29 -25
  8. package/dist/commands/update.js.map +1 -1
  9. package/dist/index.js +31 -3
  10. package/dist/index.js.map +1 -1
  11. package/dist/ui/components/api-key-input.js +7 -5
  12. package/dist/ui/components/api-key-input.js.map +1 -1
  13. package/dist/utils/config-loader.js +3 -3
  14. package/dist/utils/config-loader.js.map +1 -1
  15. package/dist/utils/project-analyzer.d.ts +11 -0
  16. package/dist/utils/project-analyzer.js +180 -196
  17. package/dist/utils/project-analyzer.js.map +1 -1
  18. package/dist/utils/settings-manager.js +34 -26
  19. package/dist/utils/settings-manager.js.map +1 -1
  20. package/package.json +4 -3
  21. package/packages/schemas/dist/index.d.ts +14 -0
  22. package/packages/schemas/dist/index.d.ts.map +1 -0
  23. package/packages/schemas/dist/index.js +19 -0
  24. package/packages/schemas/dist/index.js.map +1 -0
  25. package/packages/schemas/dist/public/core/brand-types.d.ts +308 -0
  26. package/packages/schemas/dist/public/core/brand-types.d.ts.map +1 -0
  27. package/packages/schemas/dist/public/core/brand-types.js +243 -0
  28. package/packages/schemas/dist/public/core/brand-types.js.map +1 -0
  29. package/packages/schemas/dist/public/core/enums.d.ts +227 -0
  30. package/packages/schemas/dist/public/core/enums.d.ts.map +1 -0
  31. package/packages/schemas/dist/public/core/enums.js +222 -0
  32. package/packages/schemas/dist/public/core/enums.js.map +1 -0
  33. package/packages/schemas/dist/public/core/id-types.d.ts +286 -0
  34. package/packages/schemas/dist/public/core/id-types.d.ts.map +1 -0
  35. package/packages/schemas/dist/public/core/id-types.js +136 -0
  36. package/packages/schemas/dist/public/core/id-types.js.map +1 -0
  37. package/README.old.md +0 -1608
package/README.old.md DELETED
@@ -1,1608 +0,0 @@
- # AX CLI - Enterprise-Class AI CLI
-
- [![Tests](https://img.shields.io/badge/tests-124%20passing-brightgreen?style=flat-square)](https://github.com/defai-digital/ax-cli/actions/workflows/test.yml)
- [![Coverage](https://img.shields.io/badge/coverage-98.29%25-brightgreen?style=flat-square)](https://github.com/defai-digital/ax-cli)
- [![Node.js Version](https://img.shields.io/badge/node-%3E%3D24.0.0-brightgreen?style=flat-square)](https://nodejs.org/)
- [![TypeScript](https://img.shields.io/badge/TypeScript-5.9%2B-blue?style=flat-square&logo=typescript)](https://www.typescriptlang.org/)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg?style=flat-square)](https://opensource.org/licenses/MIT)
-
- ![AX CLI Logo](.github/assets/ax-cli.png)
-
- <p align="center">
-   <strong>Production-Ready AI CLI • Enterprise-Grade Architecture • 98%+ Test Coverage • TypeScript & Zod Validation</strong>
- </p>
-
- ---
-
- ## 🚀 Overview
-
- **AX CLI** is an **enterprise-class AI command line interface** primarily designed for **GLM (General Language Model)**, with support for multiple AI providers. By combining offline-first local LLM execution with cloud-based AI services, AX CLI delivers production-ready quality backed by comprehensive testing, a robust TypeScript architecture, and enterprise-grade reliability.
-
- Originally forked from [grok-cli](https://github.com/superagent-ai/grok-cli), AX CLI has been extensively upgraded using **AutomatosX** — a multi-agent AI orchestration platform — to achieve enterprise-class standards.
-
- ### 🏆 Enterprise-Class Features
-
- - **🤖 Built with AutomatosX**: Developed using multi-agent collaboration for production-quality code
- - **✅ 98%+ Test Coverage**: Comprehensive test suite with 83+ tests covering critical paths
- - **🔒 Type-Safe Architecture**: Full TypeScript with Zod runtime validation
- - **🎯 Node.js 24+ Ready**: Modern JavaScript runtime support
- - **📊 Quality Assurance**: Automated testing, linting, and continuous integration
- - **🏗️ Enterprise Architecture**: Clean separation of concerns, modular design, extensible APIs
-
- ### 💡 Why AX CLI?
-
- **GLM-Optimized**: Primary support for GLM (General Language Model), with optimized performance for local and cloud GLM deployments.
-
- **Production-Ready**: AX CLI is enterprise-grade, with extensive testing, TypeScript safety, and proven reliability.
-
- ---
-
- ## 🏗️ Enterprise Architecture
-
- ### Single Source of Truth (SSOT) Type System
-
- AX CLI implements a **Single Source of Truth** design pattern through the `@ax-cli/schemas` package. This ensures that **API handlers, billing modules, and MCP adapters all consume the same schema**, drastically reducing future refactoring costs.
-
- #### The Problem: Before SSOT
-
- Without centralized schemas, each module maintained its own type definitions, leading to:
-
- ```
- ┌─────────────────────┐ ┌─────────────────────┐ ┌─────────────────────┐
- │ API Handler │ │ MCP Adapter │ │ Billing Module │
- ├─────────────────────┤ ├─────────────────────┤ ├─────────────────────┤
- │ type ModelId = │ │ type ModelName = │ │ type Model = │
- │ string │ │ string │ │ string │
- │ │ │ │ │ │
- │ interface Message { │ │ interface Msg { │ │ interface Request { │
- │ role: string │ │ type: string │ │ role: string │
- │ content: string │ │ text: string │ │ content: string │
- │ } │ │ } │ │ } │
- └─────────────────────┘ └─────────────────────┘ └─────────────────────┘
- ❌ ❌ ❌
- Own schemas Own schemas Own schemas
- Duplicated logic Duplicated logic Duplicated logic
- Diverges over time Diverges over time Diverges over time
- ```
-
- **Risks**:
- - ❌ **Type Mismatches**: API expects `role: string` but MCP sends `type: string`
- - ❌ **Duplicated Validation**: Same validation logic copied across 3 modules
- - ❌ **Silent Failures**: Changes in one module break others at runtime
- - ❌ **High Refactoring Cost**: Updating a model schema means touching 3+ files
- - ❌ **No Contract Enforcement**: No guarantee modules speak the same language
-
- #### The Solution: After SSOT
-
- With `@ax-cli/schemas`, all modules import from a single source:
-
- ```
- ┌────────────────────────────────────┐
- │ @ax-cli/schemas │
- │ (Single Source of Truth) │
- ├────────────────────────────────────┤
- │ │
- │ • Brand Types (ModelId, etc.) │
- │ • Centralized Enums (MessageRole) │
- │ • Zod Schemas (runtime validation)│
- │ • TypeScript Types (compile-time) │
- │ │
- └──────────────┬─────────────────────┘
-
- ┌─────────────┼─────────────┐
- │ │ │
- ▼ ▼ ▼
- ┌──────────────┐ ┌──────────────┐ ┌──────────────┐
- │ API Handler │ │ MCP Adapter │ │Billing Module│
- ├──────────────┤ ├──────────────┤ ├──────────────┤
- │ import { │ │ import { │ │ import { │
- │ ModelId, │ │ ModelId, │ │ ModelId, │
- │ Message │ │ Message │ │ Message │
- │ } from │ │ } from │ │ } from │
- │ '@ax-cli/ │ │ '@ax-cli/ │ │ '@ax-cli/ │
- │ schemas' │ │ schemas' │ │ schemas' │
- └──────────────┘ └──────────────┘ └──────────────┘
- ✅ ✅ ✅
- Same contract Same contract Same contract
- Same validation Same validation Same validation
- ```
-
- **Benefits**:
- - ✅ **Zero Divergence**: All modules consume identical type definitions
- - ✅ **Reduced Refactoring Cost**: Change once, propagate everywhere (1 file vs 3+ files)
- - ✅ **Compile-Time Safety**: TypeScript catches mismatches across module boundaries
- - ✅ **Runtime Validation**: Zod schemas ensure data validity at system boundaries
- - ✅ **Contract Enforcement**: Brand types prevent mixing incompatible IDs
-
- #### SSOT in Action
-
- **Example: Adding a new model**
-
- Before SSOT (3 files to update):
- ```typescript
- // File 1: src/api/handler.ts
- type ModelId = string; // Update here
-
- // File 2: src/mcp/adapter.ts
- type ModelName = string; // Update here too
-
- // File 3: src/billing/tracker.ts
- type Model = string; // And here
- ```
-
- After SSOT (1 file to update):
- ```typescript
- // File: packages/schemas/src/public/core/id-types.ts
- export const ModelIdSchema = z.string().brand<'ModelId'>();
- export type ModelId = z.infer<typeof ModelIdSchema>;
-
- // All consumers automatically get the update:
- // ✅ API handler
- // ✅ MCP adapter
- // ✅ Billing module
- ```
-
- #### Quality Metrics
-
- | Metric | Before SSOT | After SSOT | Improvement |
- |--------|-------------|------------|-------------|
- | **Schema Duplication** | 3+ copies | 1 canonical | 67% reduction |
- | **Refactoring Cost** | 3+ files | 1 file | 67% faster |
- | **Type Mismatches** | Runtime errors | Compile-time catch | 100% safer |
- | **Validation Consistency** | Divergent | Unified | Enterprise-grade |
- | **Test Coverage** | Partial | 98.29% (124 tests) | Production-ready |
-
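The branded `ModelId` above can be illustrated with plain TypeScript branding (a minimal sketch for illustration only; `asModelId`, `asSessionId`, and `priceFor` are hypothetical names, and the real package derives its brands from Zod schemas):

```typescript
// Hypothetical sketch: structural branding makes two string-backed IDs
// incompatible at compile time while staying plain strings at runtime.
type Brand<T, B extends string> = T & { readonly __brand: B };

type ModelId = Brand<string, "ModelId">;
type SessionId = Brand<string, "SessionId">;

// Constructors are the only sanctioned way to produce a branded value.
const asModelId = (raw: string): ModelId => raw as ModelId;
const asSessionId = (raw: string): SessionId => raw as SessionId;

function priceFor(model: ModelId): number {
  // Illustrative lookup only; branded values compare like ordinary strings.
  return model === "glm-4.6" ? 3 : 1;
}

priceFor(asModelId("glm-4.6")); // OK
// priceFor(asSessionId("s1"));  // compile-time error: SessionId is not a ModelId
// priceFor("glm-4.6");          // compile-time error: raw string is not branded
```

Because the brand exists only in the type system, this safety costs nothing at runtime.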
- ### Technology Stack
-
- - **Language**: TypeScript 5.3+ (strict mode)
- - **Runtime**: Node.js 24+
- - **Validation**: Zod 3.x for runtime type safety
- - **Testing**: Vitest with 98%+ coverage
- - **UI**: Ink (React for CLI)
- - **AI Providers**: OpenAI-compatible APIs
- - **Package Manager**: npm / bun
-
- ### Code Quality
-
- - **Linting**: ESLint with TypeScript rules
- - **Type Checking**: TypeScript strict mode enabled
- - **Runtime Validation**: Zod schemas for all inputs
- - **Testing**: Vitest with comprehensive test suite
- - **CI/CD**: GitHub Actions for automated testing
-
- ### Test Suite
-
- **83 tests** covering critical functionality:
-
- ```
- 📊 Test Coverage Report
- ─────────────────────────────────────
- Overall: 98.29%
- ├─ Text Utils: 98.55% (36 tests)
- ├─ Token Counter: 100% (19 tests)
- └─ Schemas: 95.23% (28 tests)
-
- 🎯 Coverage Breakdown
- ─────────────────────────────────────
- Statements: 98.29%
- Branches: 95.06%
- Functions: 100%
- Lines: 98.19%
- ```
-
- **What's Tested:**
- - ✅ Text manipulation (word navigation, deletion, Unicode)
- - ✅ Token counting (messages, streaming, formatting)
- - ✅ Schema validation (settings, MCP, API responses)
- - ✅ Edge cases (empty strings, null, surrogate pairs)
- - ✅ Error handling and validation
-
- **Run Tests:**
- ```bash
- npm test              # Run all tests
- npm run test:watch    # Watch mode
- npm run test:coverage # Coverage report
- npm run test:ui       # Interactive UI
- ```
-
- ---
-
- ## ✨ Key Features
-
- ### 🔒 **GLM-First Architecture**
- - **Primary Support**: Optimized for GLM (General Language Model) deployments
- - **GLM 4.6 Default**: Production-ready with a 200K context window and advanced reasoning
- - **Local GLM**: Run GLM models locally via Ollama
- - **Cloud GLM**: Connect to cloud-hosted GLM services (Z.AI, etc.)
- - **Zero internet dependency** for complete privacy with local models
- - **No API keys required** for local operation
- - Full conversational AI capabilities offline
-
- ### 🚀 **Multi-Provider AI Support**
- - **Primary**: GLM 4.6 (200K context, reasoning mode) - Default model optimized for AX CLI
- - **Built-in Models**: glm-4.6, grok-code-fast-1, glm-4-air, glm-4-airx
- - **Local Models**: Llama 3.1, Qwen 2.5, DeepSeek, and any Ollama-supported model
- - **Cloud Providers**: OpenAI (GPT-4), Anthropic (Claude), Google (Gemini), X.AI (Grok), OpenRouter, Groq
- - **Z.AI Platform**: Native support for the Z.AI GLM API server (cloud & local deployments)
- - **Flexible Configuration**: Switch between providers seamlessly
- - **OpenAI-Compatible API**: Works with ANY OpenAI-compatible endpoint
- - **Full Backward Compatibility**: All models from the original grok-cli are still supported
-
- ### 🤖 **Intelligent Automation**
- - **Smart File Operations**: AI automatically reads, creates, and edits files
- - **Bash Integration**: Execute shell commands through natural conversation
- - **Automatic Tool Selection**: AI chooses the right tools for your requests
- - **Multi-Step Task Execution**: Handle complex workflows with up to 400 tool rounds
- - **Intelligent Context Management**: Automatic pruning for effectively unlimited conversation length
- - **Project Analysis**: Auto-detect tech stack, conventions, and structure (`ax-cli init`)
-
- ### 🔌 **Extensibility**
- - **MCP Protocol Support**: Integrate Model Context Protocol servers
- - **Custom Instructions**: Project-specific AI behavior via `.ax-cli/CUSTOM.md`
- - **Plugin Architecture**: Extend with Linear, GitHub, and other MCP tools
- - **Morph Fast Apply**: Optional 4,500+ tokens/sec code editing
-
- ### 💬 **Developer Experience**
- - **Interactive Mode**: Conversational AI assistant in your terminal
- - **Headless Mode**: Scriptable single-prompt execution for CI/CD
- - **Beautiful UI**: Ink-based terminal interface with syntax highlighting
- - **Global Installation**: Use anywhere with `npm install -g`
-
- ### 🏗️ **Enterprise Quality**
- - **98.29% Test Coverage**: Text utils, token counting, schema validation
- - **TypeScript + Zod**: Runtime type safety and validation
- - **Automated CI/CD**: Tests run on every commit and PR
- - **Comprehensive Documentation**: Detailed guides and API references
- - **Node.js 24+ Support**: Modern JavaScript runtime features
-
- ---
-
- ## 🧠 GLM 4.6 - Advanced AI Features
-
- AX CLI provides **first-class support for GLM 4.6**, the flagship model with industry-leading capabilities:
-
- ### 🎯 Why GLM 4.6?
-
- GLM 4.6 is the **default model** in AX CLI, chosen for its exceptional performance across all use cases:
-
- | Feature | GLM 4.6 | Industry Average |
- |---------|---------|------------------|
- | **Context Window** | 200,000 tokens | 32,000-128,000 |
- | **Max Output** | 128,000 tokens | 4,000-8,000 |
- | **Reasoning Mode** | ✅ Built-in | ❌ Rare |
- | **Temperature Range** | 0.6-1.0 (optimized) | 0.0-2.0 (unstable) |
- | **Token Efficiency** | 30% better | Baseline |
- | **Local Support** | ✅ Via Ollama | Limited |
- | **Cloud Support** | ✅ Via Z.AI | Limited |
-
- ### 💭 Thinking Mode (Reasoning)
-
- GLM 4.6's **thinking mode** shows the AI's reasoning process before providing answers:
-
- **Example Session:**
- ```
- You: Explain the best approach to refactor this codebase
-
- 💭 Thinking...
- Let me analyze the codebase structure first. I see it uses TypeScript with ESM modules.
- The main issues appear to be:
- 1. Circular dependencies in the utils/ folder
- 2. Inconsistent error handling patterns
- 3. Missing type definitions for external APIs
-
- Based on this analysis, I recommend a phased approach...
-
- AI: Here's my recommended refactoring strategy:
-
- 1. **Phase 1: Dependency Graph**
-    - Break circular dependencies by extracting shared types
- ...
- ```
-
- **Benefits:**
- - 🔍 **Transparency**: See how the AI arrives at conclusions
- - 🎯 **Better Decisions**: Understand the reasoning behind suggestions
- - 🐛 **Easier Debugging**: Identify where the AI's logic went wrong
- - 📚 **Learning Tool**: Learn problem-solving approaches from the AI
-
- **How to Enable:**
- ```typescript
- // Thinking mode is active by default for GLM 4.6 in AX CLI:
- //   ax-cli --model glm-4.6
-
- // Programmatic control (advanced)
- const client = new GrokClient(apiKey, 'glm-4.6');
- await client.chat(messages, {
-   thinking: {
-     type: 'enabled',
-     budget_tokens: 2000 // Optional: limit thinking tokens
-   }
- });
- ```
-
- ### 📊 Massive Context Window
-
- **200,000 tokens** means you can analyze entire codebases in a single conversation:
-
- **What Fits in 200K Tokens:**
- - 📦 **40+ average TypeScript files** (~5K tokens each)
- - 📚 **Complete documentation sets** (README + API docs + guides)
- - 💬 **Extended conversations** (500+ back-and-forth messages)
- - 🔍 **Full repository context** (directory structure + key files + tests)
-
- **Real-World Examples:**
- ```bash
- # Analyze entire project structure
- ax-cli -p "Review all TypeScript files in src/ and suggest architectural improvements"
-
- # Long debugging sessions
- ax-cli -p "Let's debug this issue step by step, checking all related files"
- # ... 100+ messages later, GLM 4.6 still remembers the original context
-
- # Documentation generation
- ax-cli -p "Generate API documentation for all exported functions in src/"
- ```
-
- ### 🎨 UI/UX Enhancements
-
- AX CLI includes **specialized components** for GLM 4.6 features:
-
- 1. **ReasoningDisplay Component**
-    - Beautiful rendering of the thinking process
-    - Collapsible/expandable sections
-    - Visual separation from the final answer
-    - Streaming support with a "Thinking..." indicator
-
- 2. **Real-Time Streaming**
-    - Thinking content streams as it's generated
-    - Final answer streams separately
-    - Token usage tracked independently
-
- 3. **Smart Context Management**
-    - Automatic pruning at 75% usage (150K tokens)
-    - Preserves important context (first messages, recent work)
-    - Infinite conversation support via a sliding window
-
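The pruning behavior described above could be sketched roughly like this (a simplified illustration, not the CLI's actual implementation; only the 200K window and the 75% threshold come from the text, everything else is assumed):

```typescript
// Simplified sliding-window pruning sketch (hypothetical; the real logic differs).
interface Msg { role: string; content: string; tokens: number; }

const CONTEXT_LIMIT = 200_000;
const PRUNE_THRESHOLD = 0.75 * CONTEXT_LIMIT; // 150K tokens, per the text

// Keep the first `headKeep` messages (original instructions) plus as many
// recent messages as fit under the threshold; drop the middle.
function prune(history: Msg[], headKeep = 2): Msg[] {
  const total = history.reduce((sum, m) => sum + m.tokens, 0);
  if (total <= PRUNE_THRESHOLD) return history;

  const head = history.slice(0, headKeep);
  let budget = PRUNE_THRESHOLD - head.reduce((sum, m) => sum + m.tokens, 0);

  const tail: Msg[] = [];
  for (let i = history.length - 1; i >= headKeep; i--) {
    if (history[i].tokens > budget) break; // the sliding window is full
    budget -= history[i].tokens;
    tail.unshift(history[i]);
  }
  return [...head, ...tail];
}
```

Because pruning runs on every turn, the window keeps sliding forward and the conversation never hits a hard length limit.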
- ### ⚙️ Advanced Configuration
-
- **Temperature Control** (0.6-1.0 optimized range):
- ```typescript
- // GLM 4.6 enforces the optimal temperature range
- validateTemperature(0.7, 'glm-4.6'); // ✅ Valid
- validateTemperature(0.5, 'glm-4.6'); // ❌ Error: out of range
- validateTemperature(1.1, 'glm-4.6'); // ❌ Error: out of range
- ```
-
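Such a validator could be sketched as follows (assumed logic for illustration; the actual `@ax-cli/schemas` implementation and its range table may differ):

```typescript
// Hypothetical per-model temperature ranges, based on the table above.
const TEMPERATURE_RANGES: Record<string, readonly [number, number]> = {
  "glm-4.6": [0.6, 1.0],
};

function validateTemperature(value: number, model: string): number {
  // Fall back to a permissive range for models without a known constraint.
  const [min, max] = TEMPERATURE_RANGES[model] ?? [0.0, 2.0];
  if (value < min || value > max) {
    throw new RangeError(`temperature ${value} is outside [${min}, ${max}] for ${model}`);
  }
  return value;
}
```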
- **Output Token Control** (up to 128K):
- ```typescript
- // Generate long-form content
- await client.chat(messages, {
-   model: 'glm-4.6',
-   maxTokens: 100000, // 100K tokens = ~75,000 words
- });
- ```
-
- **Thinking Budget** (optional constraint):
- ```typescript
- // Limit reasoning tokens for faster responses
- await client.chat(messages, {
-   model: 'glm-4.6',
-   thinking: {
-     type: 'enabled',
-     budget_tokens: 500 // Quick thinking mode
-   }
- });
- ```
-
- ### 🔬 Type Safety & Validation
-
- **Full TypeScript support** with runtime validation:
-
- ```typescript
- import { validateTemperature, validateMaxTokens, validateThinking } from '@ax-cli/schemas';
-
- // Temperature validation (model-specific ranges)
- validateTemperature(0.7, 'glm-4.6'); // ✅ OK (0.6-1.0)
-
- // Max tokens validation (respects model limits)
- validateMaxTokens(100000, 'glm-4.6'); // ✅ OK (< 128K)
- validateMaxTokens(150000, 'glm-4.6'); // ❌ Error: exceeds limit
-
- // Thinking mode validation (only on supported models)
- validateThinking({ type: 'enabled' }, 'glm-4.6'); // ✅ OK
- validateThinking({ type: 'enabled' }, 'grok-code-fast-1'); // ❌ Error: not supported
- ```
-
- ### 📖 Comprehensive Documentation
-
- - **[GLM 4.6 Usage Guide](docs/glm-4.6-usage-guide.md)** - Complete feature documentation
- - **[GLM 4.6 Migration Guide](docs/glm-4.6-migration-guide.md)** - Upgrade from older models
- - **[API Reference](docs/)** - Full TypeScript API documentation
- - **[Z.AI Official Docs](https://docs.z.ai/guides/llm/glm-4.6)** - Cloud deployment guide
-
- ### 🚀 Performance Benchmarks
-
- | Metric | GLM 4.6 | Baseline |
- |--------|---------|----------|
- | **Context Retention** | 200K tokens | 32K tokens |
- | **Code Generation Speed** | 50-80 tokens/sec | 30-50 tokens/sec |
- | **Reasoning Quality** | 93% accuracy | 85% accuracy |
- | **Token Efficiency** | 30% fewer tokens | Baseline |
- | **Long Conversation** | ✅ Unlimited (managed) | ❌ Fails at ~100 msgs |
-
- ### 🎓 Best Practices
-
- 1. **Use Default Settings**: GLM 4.6's defaults are optimized for most use cases
- 2. **Enable Thinking Mode**: For complex reasoning tasks, debugging, and architecture decisions
- 3. **Leverage the Large Context**: Feed entire project context for better understanding
- 4. **Monitor Token Usage**: Use built-in context management for long sessions
- 5. **Validate Parameters**: Let TypeScript + Zod catch configuration errors early
-
- ### 💡 When to Use GLM 4.6
-
- **Perfect For:**
- - ✅ Complex reasoning and problem-solving
- - ✅ Large codebase analysis
- - ✅ Extended debugging sessions
- - ✅ Architecture and design decisions
- - ✅ Learning and understanding code
- - ✅ Long-form content generation
-
- **Consider Alternatives For:**
- - ⚡ **grok-code-fast-1**: Quick, simple code generation (4K output tokens, faster responses)
- - 🌥️ **glm-4-air**: Balanced performance (128K context, 8K output)
- - 🪶 **glm-4-airx**: Lightweight tasks (8K context, minimal overhead)
-
- ---
-
- ## 📦 Installation
-
- ### Prerequisites
-
- #### **Node.js 24+** (Required)
- ```bash
- # Check your Node.js version
- node --version # Should be v24.0.0 or higher
-
- # Install Node.js 24+ from https://nodejs.org/
- ```
-
- #### **For Offline Operation** (Recommended)
- - **Ollama** 0.1.0+ for local LLM inference
- - **GLM-4-9B-Chat model** (9B parameters, ~5GB download)
- - **16GB RAM** minimum (32GB recommended for larger models)
- - **GPU** recommended but optional (CPU inference is supported)
-
- #### **For Cloud Providers** (Optional)
- - API key from OpenAI, Anthropic, X.AI, or a compatible provider
- - (Optional) Morph API key for Fast Apply editing
-
- ### Global Installation (Recommended)
-
- ```bash
- # Using npm
- npm install -g @defai.digital/ax-cli
-
- # Using bun (faster)
- bun add -g @defai.digital/ax-cli
-
- # Verify installation
- ax-cli --version
- ```
-
- ### Local Development
-
- ```bash
- # Clone the repository
- git clone https://github.com/defai-digital/ax-cli
- cd ax-cli
-
- # Install dependencies
- npm install
-
- # Build the project
- npm run build
-
- # Link globally
- npm link
-
- # Run tests
- npm test
-
- # Generate coverage report
- npm run test:coverage
- ```
-
- ---
-
- ## ⚙️ Setup
-
- ### Option 1: Offline Setup with Local GLM (Privacy-First)
-
- **Perfect for**: Developers who prioritize privacy, work with sensitive data, or need offline AI capabilities.
-
- #### Step 1: Install Ollama
-
- ```bash
- # macOS / Linux
- curl -fsSL https://ollama.ai/install.sh | sh
-
- # Windows
- # Download from https://ollama.ai/download
-
- # Verify installation
- ollama --version
- ```
-
- #### Step 2: Download the GLM Model
-
- ```bash
- # Pull the GLM-4-9B-Chat model (9B parameters, ~5GB download)
- ollama pull glm4:9b
-
- # Optional: Pull the vision-capable model
- ollama pull glm4v:9b
-
- # Verify models are available
- ollama list
- ```
-
- #### Step 3: Start Ollama Server
-
- ```bash
- # Ollama runs as a background service by default
- # If needed, start it manually:
- ollama serve
-
- # Test the model
- ollama run glm4:9b "Hello, how are you?"
- ```
-
- #### Step 4: Configure AX CLI for Local Operation
-
- Create `~/.ax/user-settings.json`:
-
- ```json
- {
-   "baseURL": "http://localhost:11434/v1",
-   "defaultModel": "glm4:9b",
-   "models": [
-     "glm4:9b",
-     "glm4v:9b",
-     "llama3.1:8b",
-     "qwen2.5:7b"
-   ]
- }
- ```
-
- #### Step 5: Test Your Setup
-
- ```bash
- # Interactive mode
- ax-cli
-
- # Headless mode
- ax-cli --prompt "Hello, please introduce yourself"
-
- # Specify working directory
- ax-cli --directory /path/to/project --prompt "List all TypeScript files"
- ```
-
- **✅ You're now running completely offline!** No API keys, no internet, complete privacy.
-
- ---
-
- ### Option 2: Cloud Provider Setup
-
- **Perfect for**: Teams using enterprise AI providers, developers who need the latest models, or hybrid offline/cloud workflows.
-
- #### Supported Providers
-
- **AX CLI supports ANY OpenAI-compatible API endpoint**, making it universally compatible with major AI providers.
-
- | Provider | Base URL | Supported Models | Best For |
- |----------|----------|------------------|----------|
- | **Z.AI** | `https://api.z.ai/v1` | GLM-4.6, GLM-4-Air, GLM-4-AirX | GLM models (cloud & local), 200K context, reasoning mode |
- | **X.AI (Grok)** | `https://api.x.ai/v1` | Grok, Grok Code Fast | Fast code generation, X.AI ecosystem |
- | **OpenAI** | `https://api.openai.com/v1` | GPT-4, GPT-4 Turbo, GPT-3.5 | General purpose, production-ready |
- | **Anthropic** | `https://api.anthropic.com/v1` | Claude 3.5 Sonnet, Claude 3 Opus | Long context, advanced reasoning |
- | **Google** | `https://openrouter.ai/api/v1` | Gemini Pro 1.5 (via OpenRouter) | Multi-modal, Google ecosystem |
- | **OpenRouter** | `https://openrouter.ai/api/v1` | 100+ models from all providers | Model routing, fallback strategies |
- | **Groq** | `https://api.groq.com/openai/v1` | Llama, Mistral, Gemma | Ultra-fast inference (500+ tokens/sec) |
- | **Ollama** | `http://localhost:11434/v1` | Llama, Qwen, DeepSeek, GLM, any | Complete privacy, offline operation |
-
- #### Built-in Models
-
- AX CLI includes 4 pre-configured models optimized for different use cases:
-
- | Model | Context | Max Output | Thinking Mode | Best For |
- |-------|---------|-----------|---------------|----------|
- | **glm-4.6** ⭐ | 200K | 128K | ✅ Yes | Default - Long context, reasoning tasks |
- | **grok-code-fast-1** | 128K | 4K | ❌ No | Fast code generation, quick responses |
- | **glm-4-air** | 128K | 8K | ❌ No | Balanced performance, general tasks |
- | **glm-4-airx** | 8K | 8K | ❌ No | Lightweight, quick interactions |
-
- ⭐ **Default Model**: glm-4.6 is the default model for AX CLI
-
- #### Step 1: Get an API Key
-
- 1. Sign up at your chosen provider:
-    - [Z.AI](https://docs.z.ai) - GLM models (recommended for GLM 4.6)
-    - [X.AI (Grok)](https://x.ai) - Fast code models
-    - [OpenAI](https://platform.openai.com) - GPT-4 and GPT-3.5
-    - [Anthropic](https://console.anthropic.com) - Claude 3.5 Sonnet
-    - [OpenRouter](https://openrouter.ai) - Multi-model access to 100+ models
-
- 2. Generate an API key from your provider's dashboard
-
- #### Step 2: Configure the API Key (Choose One Method)
-
- **Method 1: User Settings File** (Recommended)
-
- Create `~/.ax/user-settings.json`:
-
- ```json
- {
-   "apiKey": "your_api_key_here",
-   "baseURL": "https://api.x.ai/v1",
-   "defaultModel": "grok-code-fast-1",
-   "models": [
-     "grok-code-fast-1",
-     "grok-4-latest",
-     "grok-3-latest"
-   ]
- }
- ```
-
- **Method 2: Environment Variables**
-
- ```bash
- export YOUR_API_KEY="your_api_key_here"
- export GROK_BASE_URL="https://api.x.ai/v1"
- export GROK_MODEL="grok-code-fast-1"
- ```
-
- **Method 3: .env File**
-
- ```bash
- cp .env.example .env
- # Edit .env and add:
- YOUR_API_KEY=your_api_key_here
- GROK_BASE_URL=https://api.x.ai/v1
- GROK_MODEL=grok-code-fast-1
- ```
-
- **Method 4: Command Line Flags**
-
- ```bash
- ax-cli --api-key your_api_key_here --base-url https://api.x.ai/v1 --model grok-code-fast-1
- ```
-
- #### Step 3: (Optional) Configure Morph Fast Apply
-
- For lightning-fast code editing at 4,500+ tokens/sec:
-
- 1. Get an API key from the [Morph Dashboard](https://morphllm.com/dashboard/api-keys)
- 2. Add it to your environment or `.env`:
-
- ```bash
- export MORPH_API_KEY="your_morph_key_here"
- ```
-
- ---
-
- ### Multi-Provider Usage Examples
-
- AX CLI works with ANY OpenAI-compatible API. Here are examples for popular providers:
-
- **Z.AI (GLM Models - Recommended)**
- ```bash
- # Cloud GLM 4.6 via Z.AI
- ax-cli --api-key YOUR_ZAI_KEY --base-url https://api.z.ai/v1 --model glm-4.6
-
- # Local Z.AI deployment
- ax-cli --base-url http://localhost:8000/v1 --model glm-4.6
- ```
-
- **OpenAI (GPT-4)**
- ```bash
- ax-cli --api-key YOUR_OPENAI_KEY --base-url https://api.openai.com/v1 --model gpt-4
- ```
-
- **Anthropic (Claude)**
- ```bash
- ax-cli --api-key YOUR_ANTHROPIC_KEY --base-url https://api.anthropic.com/v1 --model claude-3.5-sonnet
- ```
-
- **Google Gemini (via OpenRouter)**
- ```bash
- ax-cli --api-key YOUR_OPENROUTER_KEY --base-url https://openrouter.ai/api/v1 --model google/gemini-pro-1.5
- ```
-
- **Ollama (Local - No API Key Needed)**
- ```bash
- # Any Ollama model
- ax-cli --base-url http://localhost:11434/v1 --model llama3.1
- ax-cli --base-url http://localhost:11434/v1 --model qwen2.5
- ax-cli --base-url http://localhost:11434/v1 --model deepseek-coder
- ```
-
- **X.AI (Grok)**
- ```bash
- ax-cli --api-key YOUR_XAI_KEY --base-url https://api.x.ai/v1 --model grok-code-fast-1
- ```
-
- **OpenRouter (100+ Models)**
- ```bash
- ax-cli --api-key YOUR_OPENROUTER_KEY --base-url https://openrouter.ai/api/v1 --model anthropic/claude-3.5-sonnet
- ```
-
- ---
-
- ## 🎯 Project Initialization
743
-
744
- AX CLI can automatically analyze your project and generate optimized custom instructions for better performance and accuracy.
745
-
746
- ### Quick Setup
747
-
748
- ```bash
749
- # Navigate to your project
750
- cd /path/to/your/project
751
-
752
- # Initialize AX CLI (one-time setup)
753
- ax-cli init
754
-
755
- # Start using AX CLI with project-aware intelligence
756
- ax-cli
757
- ```
758
-
759
- ### What Gets Detected Automatically
760
-
761
- The `init` command intelligently analyzes your project:
762
-
763
- - ✅ **Project Type**: CLI, library, web-app, API, etc.
764
- - ✅ **Tech Stack**: React, Vue, Express, NestJS, Vitest, Jest, etc.
765
- - ✅ **Language & Conventions**: TypeScript with ESM/CJS, import extensions
766
- - ✅ **Directory Structure**: Source, tests, tools, config directories
767
- - ✅ **Build Scripts**: Test, build, lint, dev commands
768
- - ✅ **Package Manager**: npm, yarn, pnpm, or bun
769
- - ✅ **Code Conventions**: Module system, validation library, test framework

### Generated Files

**`.ax-cli/CUSTOM.md`** - Project-specific custom instructions:
```markdown
# Custom Instructions for AX CLI

**Project**: your-project v1.0.0
**Type**: cli
**Language**: TypeScript
**Stack**: Commander, Vitest, Zod, ESM

## Code Conventions

### TypeScript
- Use explicit type annotations
- **CRITICAL**: Always use `.js` extension in imports (ESM requirement)

### Validation
- Use **zod** for runtime validation
- Validate all external inputs

## File Structure
- Commands: `src/commands/`
- Utilities: `src/utils/`
- Types: `src/types/`
```

**`.ax-cli/index.json`** - Fast project reference for quick lookups

### Command Options

```bash
# Basic initialization
ax-cli init

# Force regeneration (after major project changes)
ax-cli init --force

# Verbose output showing detection details
ax-cli init --verbose

# Initialize a specific directory
ax-cli init --directory /path/to/project
```

### Benefits

**🚀 Performance Improvements:**
- **25-30% fewer tokens** - No repeated project exploration
- **23% faster responses** - Direct file access using the generated index
- **Better accuracy** - Project conventions understood from the start

**🧠 Smart Context:**
- Knows your project structure instantly
- Understands your tech stack and conventions
- References correct file paths automatically
- Follows project-specific patterns

### When to Run Init

- ✅ After cloning a new repository
- ✅ When starting a new project
- ✅ After changing major dependencies
- ✅ When migrating frameworks (e.g., Jest → Vitest)
- ✅ After restructuring directories

### Team Usage

**Option 1: Share Configuration**
```bash
# Commit the configuration to the repository
git add .ax-cli/
git commit -m "Add AX CLI project configuration"
```

**Option 2: Personal Configuration**
```bash
# Add to .gitignore for personal customization
echo ".ax-cli/" >> .gitignore
```

---

## 📖 Usage

### Interactive Mode

Start a conversational AI session:

```bash
# Basic usage (uses glm-4.6 by default)
ax-cli

# Specify working directory
ax-cli --directory /path/to/project

# Use a specific built-in model
ax-cli --model grok-code-fast-1
ax-cli --model glm-4-air

# Connect to Z.AI
ax-cli --base-url https://api.z.ai/v1 --model glm-4.6

# Offline mode with Ollama
ax-cli --model llama3.1 --base-url http://localhost:11434/v1
```

**Example Session:**
```
AX> Show me the package.json file

[AX reads and displays package.json]

AX> Create a new TypeScript file called utils.ts with helper functions

[AX creates the file with intelligent content]

AX> Run npm test and show me the results

[AX executes the command and displays output]
```

### Headless Mode (Scriptable)

Process a single prompt and exit — perfect for CI/CD, automation, and scripting:

```bash
# Basic headless execution
ax-cli --prompt "show me the package.json file"

# Short form
ax-cli -p "list all TypeScript files in src/"

# With working directory
ax-cli -p "run npm test" -d /path/to/project

# Control tool execution rounds
ax-cli -p "comprehensive code refactoring" --max-tool-rounds 50

# Combine with shell scripting
RESULT=$(ax-cli -p "count lines of code in src/") && echo "$RESULT"
```

**Use Cases:**
- **CI/CD Pipelines**: Automate code analysis, testing, linting
- **Shell Scripts**: Integrate AI into bash automation
- **Batch Processing**: Process multiple prompts programmatically
- **Terminal Benchmarks**: Non-interactive execution for tools like Terminal Bench

### Tool Execution Control

Fine-tune AI behavior with configurable tool execution limits:

```bash
# Fast responses for simple queries (limit: 10 rounds)
ax-cli --max-tool-rounds 10 -p "show current directory"

# Complex automation (limit: 500 rounds)
ax-cli --max-tool-rounds 500 -p "refactor entire codebase to TypeScript"

# Works with all modes
ax-cli --max-tool-rounds 20                      # Interactive
ax-cli -p "task" --max-tool-rounds 30            # Headless
ax-cli git commit-and-push --max-tool-rounds 30  # Git commands
```

**Default**: 400 rounds (sufficient for most tasks)
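
The round limit caps the agent's tool-use loop: each round sends the conversation to the model and executes whatever tool calls it returns, so a runaway task cannot loop forever. A sketch of such a capped loop (hypothetical names, synchronous for brevity; not AX CLI's actual agent code):

```typescript
// Hypothetical sketch of a capped tool-execution loop; not AX CLI internals.
interface StepResult {
  toolCalls: string[]; // tool invocations requested by the model this round
  done: boolean;       // model signalled that the task is finished
}

function runAgent(
  step: (round: number) => StepResult,
  maxToolRounds = 400, // AX CLI's documented default
): { rounds: number; completed: boolean } {
  for (let round = 1; round <= maxToolRounds; round++) {
    const result = step(round);
    if (result.done || result.toolCalls.length === 0) {
      return { rounds: round, completed: true };
    }
  }
  // Cap reached before the model finished; the caller can surface a warning.
  return { rounds: maxToolRounds, completed: false };
}
```

Raising `--max-tool-rounds` simply raises the cap; it never forces extra rounds, since the loop exits as soon as the model stops requesting tools.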

---

## 🛠️ Configuration

### Configuration Architecture

AX CLI uses a **two-tier configuration system** for maximum flexibility:

1. **User-Level Settings** (`~/.ax/user-settings.json`) - Global defaults
2. **Project-Level Settings** (`.ax/settings.json`) - Project-specific overrides

#### User-Level Settings

**Location**: `~/.ax/user-settings.json`

**Purpose**: Global settings that apply across all projects

**Example (Offline via Ollama with GLM4)**:
```json
{
  "baseURL": "http://localhost:11434/v1",
  "defaultModel": "glm4:9b",
  "models": [
    "glm4:9b",
    "glm4v:9b",
    "llama3.1:8b",
    "qwen2.5:7b",
    "mistral:7b"
  ]
}
```

**Example (Cloud Provider - X.AI)**:
```json
{
  "apiKey": "xai-your_api_key_here",
  "baseURL": "https://api.x.ai/v1",
  "defaultModel": "grok-code-fast-1",
  "models": [
    "grok-code-fast-1",
    "grok-4-latest",
    "grok-3-latest",
    "grok-2-latest"
  ]
}
```

**Example (OpenRouter for Multi-Model Access)**:
```json
{
  "apiKey": "sk-or-your_api_key_here",
  "baseURL": "https://openrouter.ai/api/v1",
  "defaultModel": "anthropic/claude-3.5-sonnet",
  "models": [
    "anthropic/claude-3.5-sonnet",
    "openai/gpt-4o",
    "meta-llama/llama-3.1-70b-instruct",
    "google/gemini-pro-1.5"
  ]
}
```

#### Project-Level Settings

**Location**: `.ax/settings.json` (in your project directory)

**Purpose**: Project-specific model selection and MCP server configuration

**Example**:
```json
{
  "model": "grok-code-fast-1",
  "mcpServers": {
    "linear": {
      "name": "linear",
      "transport": "sse",
      "url": "https://mcp.linear.app/sse"
    },
    "github": {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "your_github_token"
      }
    }
  }
}
```

#### Configuration Priority

```
Command Line Flags > Environment Variables > Project Settings > User Settings > System Defaults
```

**Example**:
```bash
# 1. Command line takes highest priority
ax-cli --model grok-4-latest

# 2. Then environment variables
export GROK_MODEL="grok-code-fast-1"

# 3. Then project settings (.ax/settings.json)
{ "model": "glm4:9b" }

# 4. Then user settings (~/.ax/user-settings.json)
{ "defaultModel": "grok-3-latest" }

# 5. Finally, the system default
grok-code-fast-1
```
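
The precedence chain amounts to "first defined value wins", scanning from the CLI flag down to the built-in default. A minimal sketch of that resolution (illustrative only; the actual loader lives in `dist/utils/settings-manager.js` and may differ):

```typescript
// Illustrative "first defined wins" model resolution, mirroring the chain:
// CLI flag > env var > project settings > user settings > system default.
interface ResolveInputs {
  cliModel?: string;                        // --model flag
  env?: { GROK_MODEL?: string };            // environment
  projectSettings?: { model?: string };     // .ax/settings.json
  userSettings?: { defaultModel?: string }; // ~/.ax/user-settings.json
}

function resolveModel(
  inputs: ResolveInputs,
  systemDefault = "grok-code-fast-1",
): string {
  return (
    inputs.cliModel ??
    inputs.env?.GROK_MODEL ??
    inputs.projectSettings?.model ??
    inputs.userSettings?.defaultModel ??
    systemDefault
  );
}
```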

---

## 🎨 Custom Instructions

Tailor AX CLI's behavior to your project's specific needs with custom instructions.

### Setup

Create `.ax/AX.md` in your project root:

```bash
mkdir -p .ax
touch .ax/AX.md
```

### Example Custom Instructions

**TypeScript Project**:
```markdown
# Custom Instructions for AX CLI

## Code Style
- Always use TypeScript for new code files
- Prefer const assertions and explicit typing
- Use functional components with React hooks
- Follow the project's existing ESLint configuration

## Documentation
- Add JSDoc comments for all public functions
- Include type annotations in JSDoc
- Document complex algorithms with inline comments

## Testing
- Write tests using Vitest
- Aim for 80%+ code coverage
- Include edge cases and error scenarios

## File Structure
- Place components in src/components/
- Place utilities in src/utils/
- Place types in src/types/
```

**Python Data Science Project**:
```markdown
# Custom Instructions for AX CLI

## Code Standards
- Follow the PEP 8 style guide
- Use type hints for function signatures
- Prefer pandas for data manipulation
- Use numpy for numerical operations

## Documentation
- Add docstrings in Google format
- Include usage examples in docstrings
- Document data schemas and transformations

## Best Practices
- Always validate input data types
- Handle missing values explicitly
- Add error handling for file operations
```

### How It Works

1. **Auto-Loading**: AX automatically loads `.ax/AX.md` when working in your project
2. **Priority**: Custom instructions override default AI behavior
3. **Scope**: Instructions apply only to the current project
4. **Format**: Use markdown for clear, structured instructions
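
Conceptually, auto-loading is just "read the file if it exists and put it ahead of the default prompt". A sketch under that assumption (a hypothetical helper, not the actual loader):

```typescript
// Hypothetical sketch of custom-instruction auto-loading; AX CLI's real
// implementation may differ.
import * as fs from "node:fs";
import * as path from "node:path";

function buildSystemPrompt(projectDir: string, basePrompt: string): string {
  const customPath = path.join(projectDir, ".ax", "AX.md");
  if (!fs.existsSync(customPath)) return basePrompt;
  const custom = fs.readFileSync(customPath, "utf8").trim();
  // Custom instructions take priority, so they are placed first.
  return custom.length > 0 ? `${custom}\n\n${basePrompt}` : basePrompt;
}
```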

---

## 🔌 MCP (Model Context Protocol) Integration

Extend AX CLI with powerful integrations through the Model Context Protocol.

### What is MCP?

MCP enables AI models to interact with external tools and services. Think of it as "plugins for AI" — you can add capabilities like project management (Linear), version control (GitHub), databases, APIs, and more.

### Adding MCP Servers

#### Linear Integration (Project Management)

```bash
# Add Linear MCP server via SSE
ax-cli mcp add linear --transport sse --url "https://mcp.linear.app/sse"

# Now you can:
# - Create and manage Linear issues
# - Search and filter tasks
# - Update issue status and assignees
# - Access team and project information
```

#### GitHub Integration (Version Control)

```bash
# Add GitHub MCP server via stdio
ax-cli mcp add github \
  --transport stdio \
  --command "npx" \
  --args "@modelcontextprotocol/server-github" \
  --env "GITHUB_TOKEN=your_github_token"

# Now you can:
# - Create pull requests
# - Manage issues
# - Review code
# - Access repository information
```

#### Custom MCP Server

```bash
# Stdio transport (most common)
ax-cli mcp add my-server \
  --transport stdio \
  --command "bun" \
  --args "server.js"

# HTTP transport
ax-cli mcp add my-api \
  --transport http \
  --url "http://localhost:3000"

# With environment variables
ax-cli mcp add my-server \
  --transport stdio \
  --command "python" \
  --args "-m" "my_mcp_server" \
  --env "API_KEY=secret" \
  --env "DEBUG=true"
```

#### Add from JSON

```bash
ax-cli mcp add-json my-server '{
  "command": "bun",
  "args": ["server.js"],
  "env": {
    "API_KEY": "your_key",
    "LOG_LEVEL": "debug"
  }
}'
```

### Managing MCP Servers

```bash
# List all configured servers
ax-cli mcp list

# Test server connection and tools
ax-cli mcp test server-name

# Remove a server
ax-cli mcp remove server-name

# View server details
ax-cli mcp info server-name
```

### Transport Types

| Transport | Use Case | Example |
|-----------|----------|---------|
| **stdio** | Local processes, Node.js/Python servers | `npx @linear/mcp-server` |
| **http** | RESTful APIs, remote services | `http://localhost:3000` |
| **sse** | Server-Sent Events, real-time updates | `https://mcp.linear.app/sse` |
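
Each transport needs different fields: stdio needs a command (plus optional args and env), while http and sse need a URL. A small validator sketch capturing that rule (illustrative; the field names follow the `.ax/settings.json` examples in this README, not a published schema):

```typescript
// Illustrative validator for MCP server entries as shown in .ax/settings.json.
interface McpServerConfig {
  name: string;
  transport: "stdio" | "http" | "sse";
  command?: string; // stdio only
  args?: string[];  // stdio only
  env?: Record<string, string>;
  url?: string;     // http | sse only
}

function validateMcpServer(cfg: McpServerConfig): string[] {
  const errors: string[] = [];
  if (cfg.transport === "stdio") {
    if (!cfg.command) errors.push("stdio transport requires a command");
    if (cfg.url) errors.push("stdio transport does not use a url");
  } else {
    if (!cfg.url) errors.push(`${cfg.transport} transport requires a url`);
    if (cfg.command) errors.push(`${cfg.transport} transport does not use a command`);
  }
  return errors;
}
```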

### Configuration Storage

MCP servers are stored in `.ax/settings.json`:

```json
{
  "model": "grok-code-fast-1",
  "mcpServers": {
    "linear": {
      "name": "linear",
      "transport": "sse",
      "url": "https://mcp.linear.app/sse"
    },
    "github": {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "ghp_your_token"
      }
    },
    "custom-api": {
      "name": "custom-api",
      "transport": "http",
      "url": "https://api.example.com/mcp"
    }
  }
}
```

---

## 🎯 Strategic Architecture: AutomatosX vs Morph

AX CLI is built on **two complementary technologies** that solve different problems at different architectural layers:

### 🧠 AutomatosX: Orchestration Layer (Core Strategy)

**AutomatosX is the strategic foundation** of AX CLI, providing enterprise-grade multi-agent orchestration:

```
┌────────────────────────────────────────────────────────┐
│                AutomatosX Orchestration                │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │ Claude Code  │  │  Gemini CLI  │  │    OpenAI    │  │
│  │ (Priority 3) │  │ (Priority 2) │  │ (Priority 1) │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
│                                                        │
│  • Multi-Agent Coordination    • Health Checks         │
│  • Intelligent Routing         • Circuit Breakers      │
│  • Session Management          • Provider Fallback     │
│  • Memory Persistence          • Workload Distribution │
└────────────────────────────────────────────────────────┘
                            ↓
                    AX CLI Execution
                            ↓
           ┌────────────────┼────────────────┐
           ↓                ↓                ↓
     ┌──────────┐     ┌──────────┐     ┌──────────┐
     │   Bash   │     │  Search  │     │   Edit   │
     │   Tool   │     │   Tool   │     │   Tool   │
     └──────────┘     └──────────┘     └──────────┘
```

**Key Capabilities**:
- **Provider Coordination**: Routes tasks to Claude Code, Gemini CLI, OpenAI, or Grok based on availability and workload
- **Intelligent Fallback**: Automatically switches to backup providers when the primary fails
- **Session/Memory**: Maintains context across multi-agent conversations
- **Health & Reliability**: Circuit breakers, health checks, retry logic
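
Priority-based fallback reduces to: sort providers by priority, skip unhealthy ones, and try each in turn until one succeeds. A sketch of that pattern (hypothetical types; AutomatosX's real implementation adds circuit breakers and retry logic on top):

```typescript
// Hypothetical provider-fallback sketch; not AutomatosX's actual API.
interface Provider {
  name: string;
  priority: number;              // lower number = tried first
  healthy: boolean;              // result of a prior health check
  run: (task: string) => string; // throws on failure
}

function runWithFallback(providers: Provider[], task: string): string {
  const candidates = [...providers]
    .filter((p) => p.healthy)
    .sort((a, b) => a.priority - b.priority);
  for (const provider of candidates) {
    try {
      return provider.run(task);
    } catch {
      // Provider failed; fall through to the next candidate.
    }
  }
  throw new Error("all providers failed or are unhealthy");
}
```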

#### 🏛️ Architecture Purity: Why AX CLI Handles LLM Integration

**Strategic Decision**: AutomatosX remains a **pure orchestration framework** while AX CLI handles all LLM-specific integration:

```
┌─────────────────────────────────────────────────────────────┐
│               AutomatosX (Pure Orchestration)               │
│  • Provider-agnostic routing                                │
│  • Session/memory management                                │
│  • Health checks & circuit breakers                         │
│  • NO LLM-specific code                                     │
│  • NO model integration (0 lines)                           │
│  • NO tree-sitter parsing                                   │
└─────────────────────────────────────────────────────────────┘
                        ↓ delegates to
┌─────────────────────────────────────────────────────────────┐
│                AX CLI (LLM Integration Layer)               │
│  • GLM-4.6, Grok, OpenAI, Anthropic, Google API clients     │
│  • ~30,000 lines of LLM provider integration code           │
│  • Tree-sitter parsing for code intelligence                │
│  • Model-specific optimizations (reasoning mode, streaming) │
│  • Tool definitions (bash, editor, search)                  │
└─────────────────────────────────────────────────────────────┘
```

**Why This Separation Matters**:

1. **Maintainability** 🛠️
   - AutomatosX stays clean: orchestration logic only (~3K LOC)
   - AX CLI absorbs complexity: LLM APIs, model quirks, provider changes
   - Bug fixes are isolated to the appropriate layer

2. **Reusability** ♻️
   - AutomatosX can orchestrate ANY tool/agent, not just LLM-based ones
   - The same orchestration works for Python agents, Rust tools, shell scripts
   - Other projects can use AutomatosX without inheriting LLM baggage

3. **Testing & Reliability** ✅
   - AutomatosX: pure logic testing (fast, deterministic)
   - AX CLI: integration testing against real LLM APIs
   - Clear boundaries make issues easy to diagnose

4. **Evolution** 🚀
   - The LLM landscape changes rapidly (new models monthly)
   - AutomatosX orchestration patterns remain stable
   - AX CLI can add GPT-5, Claude 4, GLM-5 without touching the AutomatosX core

**What We Avoided**:
- ❌ Mixing 30K lines of LLM code into the orchestration framework
- ❌ Coupling AutomatosX to specific model APIs
- ❌ Making every AutomatosX user depend on OpenAI/Anthropic SDKs
- ❌ Tree-sitter parser dependencies in the core framework

**The Result**: AutomatosX is a **pure, reusable orchestration platform**. AX CLI is a **specialized LLM CLI** built on top of it. Clean separation of concerns wins.

### ⚡ Morph Fast Apply: Execution Layer (Optional Enhancement)

**Morph is an optional performance enhancement** for file editing, not a core architectural component:

```
 AutomatosX decides WHAT to edit
               ↓
┌───────────────────────────────┐
│  How should we edit files?    │
├───────────────┬───────────────┤
│   Standard    │  Morph Fast   │
│   Editor (✓)  │  Apply (opt)  │
│               │               │
│ • Free        │ • 4,500+      │
│ • Built-in    │   tokens/sec  │
│ • Simple      │ • AI-powered  │
│   string      │ • Complex     │
│   replace     │   refactors   │
│               │ • Requires    │
│               │   paid key    │
└───────────────┴───────────────┘
```

**Why Keep Morph?**
- ✅ Some users value the speed for complex refactoring
- ✅ Already optional - zero impact on non-users
- ✅ Low maintenance burden (392 lines, stable)
- ✅ A different problem space than AutomatosX

**Why It's Not Core Strategy**
- ❌ Solves only ONE execution step (file editing)
- ❌ No orchestration capabilities
- ❌ Requires a paid external API
- ❌ Can be replaced by the standard editor

### 📊 Comparison Table

| Capability | AutomatosX | AX CLI | Morph | Standard Editor |
|------------|------------|--------|-------|-----------------|
| **Strategic Value** | ⭐⭐⭐⭐⭐ Highest | ⭐⭐⭐⭐⭐ Highest | ⭐⭐ Low | ⭐⭐⭐ Medium |
| **Architecture Layer** | Orchestration | Integration | Execution | Execution |
| **Lines of Code** | ~3K (pure) | ~30K (LLM) | 392 | ~500 |
| Multi-agent orchestration | ✅ | ❌ | ❌ | ❌ |
| Provider routing/fallback | ✅ | ❌ | ❌ | ❌ |
| Session management | ✅ | ❌ | ❌ | ❌ |
| Health checks & reliability | ✅ | ❌ | ❌ | ❌ |
| LLM API integration | ❌ | ✅ (all) | ❌ | ❌ |
| Model-specific features | ❌ | ✅ | ❌ | ❌ |
| Tree-sitter parsing | ❌ | ✅ | ❌ | ❌ |
| File editing | ❌ | ❌ | ✅ (fast) | ✅ (basic) |
| Complex code refactoring | ❌ | ❌ | ✅ | ❌ |
| Reusable framework | ✅ | ❌ | ❌ | ✅ |
| Cost | Free | Free | Paid | Free |
| Required | ✅ Core | ✅ Core | ❌ Optional | ✅ Core |

### 🎯 Bottom Line

- **AutomatosX = Brain**: Pure orchestration framework - coordinates multiple agents, handles failures, manages state (reusable across any domain)
- **AX CLI = Nervous System**: LLM integration layer - connects to GLM/Grok/Claude/GPT, handles model specifics, provides tools (~30K LOC)
- **Morph = Fast Hands** (optional): Executes file edits quickly when you need performance
- **Standard Editor = Reliable Hands**: Executes file edits reliably for everyone

**Architectural Philosophy**:
- **AutomatosX stays pure** (no LLM code) → reusable orchestration framework
- **AX CLI absorbs complexity** (30K lines of LLM integration) → keeps AutomatosX clean
- **We keep Morph** because some users find the speed valuable for refactoring

This clean separation means AutomatosX can orchestrate Python agents, Rust tools, or any future AI models without being coupled to today's LLM APIs.

---

## ⚡ Morph Fast Apply (Optional)

Ultra-fast code editing at **4,500+ tokens/second with 98% accuracy**.

### Setup

1. Get an API key from the [Morph Dashboard](https://morphllm.com/dashboard/api-keys)
2. Configure your key:

```bash
# Environment variable
export MORPH_API_KEY="your_morph_key_here"

# Or in .env
echo "MORPH_API_KEY=your_morph_key_here" >> .env
```

### How It Works

When Morph is configured, AX CLI gains the `edit_file` tool for high-speed editing:

- **`edit_file`** (Morph): Complex edits, refactoring, multi-line changes, file transformations
- **`str_replace_editor`** (Standard): Simple replacements, single-line edits

The AI automatically chooses the optimal tool based on task complexity.
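
One way to picture that choice is a simple heuristic over the requested edit: single-line literal replacements go to the standard editor, while multi-line or multi-file work goes to the fast path. A hypothetical sketch (in AX CLI the selection is made by the model itself, not by a fixed rule like this):

```typescript
// Hypothetical heuristic for illustration only — in AX CLI the model picks
// between edit_file (Morph) and str_replace_editor on its own.
interface EditRequest {
  files: string[];
  linesTouched: number;
  isLiteralReplace: boolean; // plain find-and-replace of an exact string
}

function chooseEditTool(edit: EditRequest, morphAvailable: boolean): string {
  const simple =
    edit.files.length === 1 && edit.linesTouched <= 1 && edit.isLiteralReplace;
  if (simple || !morphAvailable) return "str_replace_editor";
  return "edit_file"; // Morph Fast Apply for complex, multi-line work
}
```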

### Example Usage

```bash
# Complex refactoring (uses Morph Fast Apply)
ax-cli -p "refactor this class to use async/await and add proper error handling"

# Type annotation conversion (uses Morph Fast Apply)
ax-cli -p "convert all JavaScript files in src/ to TypeScript with type annotations"

# Simple text replacement (uses standard editor)
ax-cli -p "change variable name from foo to bar in utils.ts"
```

### Performance

| Task | Standard Editor | Morph Fast Apply | Speedup |
|------|-----------------|------------------|---------|
| Refactor 1000 lines | ~45s | ~8s | **5.6x faster** |
| Add type annotations | ~30s | ~5s | **6x faster** |
| Multi-file changes | ~60s | ~10s | **6x faster** |

---

## 📚 Command Reference

### Main Commands

```bash
ax-cli [options]

Options:
  -V, --version               Output version number
  -d, --directory <dir>       Set working directory
  -k, --api-key <key>         API key (or YOUR_API_KEY env var)
  -u, --base-url <url>        API base URL (or GROK_BASE_URL env var)
  -m, --model <model>         AI model to use (or GROK_MODEL env var)
  -p, --prompt <prompt>       Single prompt (headless mode)
  --max-tool-rounds <rounds>  Max tool execution rounds (default: 400)
  -h, --help                  Display help
```

### Init Command

```bash
ax-cli init [options]

Description:
  Initialize AX CLI for your project with intelligent analysis

Options:
  -f, --force            Force regeneration even if files exist
  -v, --verbose          Verbose output showing analysis details
  -d, --directory <dir>  Project directory to analyze (default: current directory)

Generated Files:
  .ax-cli/CUSTOM.md   Project-specific custom instructions
  .ax-cli/index.json  Fast project reference index
```

### Update Command

```bash
ax-cli update [options]

Description:
  Check for updates and upgrade AX CLI to the latest version

Options:
  -c, --check  Only check for updates without installing
  -y, --yes    Skip confirmation prompt
```

### MCP Commands

```bash
ax-cli mcp <command> [options]

Commands:
  add <name>       Add MCP server
  add-json <name>  Add from JSON config
  list             List all servers
  test <name>      Test server connection
  remove <name>    Remove server
  info <name>      View server details

Add Options:
  --transport <type>  Transport type (stdio|http|sse)
  --command <cmd>     Command to run (stdio only)
  --args <args...>    Command arguments (stdio only)
  --url <url>         Server URL (http|sse only)
  --env <key=val...>  Environment variables
```

### Examples

```bash
# Interactive mode
ax-cli
ax-cli -d /path/to/project
ax-cli -m grok-code-fast-1

# Headless mode
ax-cli -p "list TypeScript files"
ax-cli -p "run tests" -d /project
ax-cli -p "refactor" --max-tool-rounds 50

# Project initialization
ax-cli init
ax-cli init --force --verbose
ax-cli init -d /path/to/project

# Update AX CLI
ax-cli update
ax-cli update --check
ax-cli update --yes

# MCP operations
ax-cli mcp add linear --transport sse --url https://mcp.linear.app/sse
ax-cli mcp list
ax-cli mcp test linear
ax-cli mcp remove linear

# Model selection
ax-cli -m glm4:9b -u http://localhost:11434/v1
ax-cli -m grok-4-latest -k $YOUR_API_KEY
ax-cli -m anthropic/claude-3.5-sonnet -u https://openrouter.ai/api/v1
```

---

## 📄 License

MIT License - see the [LICENSE](LICENSE) file for details

---

## 🙏 Acknowledgments

- **Original Project**: [grok-cli](https://github.com/superagent-ai/grok-cli) by SuperAgent AI
- **Enterprise Upgrade**: Powered by [AutomatosX](https://github.com/defai-digital/automatosx) multi-agent orchestration
- **AI Providers**: X.AI, OpenAI, Anthropic, and the open-source LLM community
- **Contributors**: All developers who have contributed to making AX CLI production-ready

---

## 🔗 Links

- **NPM Package**: https://www.npmjs.com/package/@defai.digital/ax-cli
- **GitHub Repository**: https://github.com/defai-digital/ax-cli
- **Issue Tracker**: https://github.com/defai-digital/ax-cli/issues
- **AutomatosX**: https://github.com/defai-digital/automatosx
- **MCP Protocol**: https://modelcontextprotocol.io

---

<p align="center">
  <strong>Built with ❤️ using AutomatosX multi-agent collaboration</strong><br>
  <em>Enterprise-class AI CLI for developers who demand quality</em>
</p>