@vybestack/llxprt-code-core 0.5.0-nightly.251104.b1b63628 → 0.5.0-nightly.251106.c2b44a77
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
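The same comparison can be reproduced locally with npm's built-in `diff` command (npm 7+), which fetches both published tarballs and prints a git-style unified diff:

```bash
# Compare the two published versions of the package locally.
npm diff --diff=@vybestack/llxprt-code-core@0.5.0-nightly.251104.b1b63628 \
         --diff=@vybestack/llxprt-code-core@0.5.0-nightly.251106.c2b44a77
```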
- package/README.md +163 -383
- package/dist/prompt-config/defaults/default-prompts.json +3 -0
- package/dist/src/auth/precedence.js +1 -1
- package/dist/src/auth/precedence.js.map +1 -1
- package/dist/src/config/config.js +6 -0
- package/dist/src/config/config.js.map +1 -1
- package/dist/src/prompt-config/defaults/tool-defaults.js +3 -0
- package/dist/src/prompt-config/defaults/tool-defaults.js.map +1 -1
- package/dist/src/prompt-config/defaults/tools/delete_line_range.md +3 -0
- package/dist/src/prompt-config/defaults/tools/insert_at_line.md +3 -0
- package/dist/src/prompt-config/defaults/tools/read_line_range.md +2 -0
- package/dist/src/providers/BaseProvider.d.ts +5 -0
- package/dist/src/providers/BaseProvider.js +38 -2
- package/dist/src/providers/BaseProvider.js.map +1 -1
- package/dist/src/providers/anthropic/AnthropicProvider.d.ts +4 -0
- package/dist/src/providers/anthropic/AnthropicProvider.js +34 -2
- package/dist/src/providers/anthropic/AnthropicProvider.js.map +1 -1
- package/dist/src/providers/gemini/GeminiProvider.d.ts +0 -4
- package/dist/src/providers/gemini/GeminiProvider.js +35 -66
- package/dist/src/providers/gemini/GeminiProvider.js.map +1 -1
- package/dist/src/providers/openai/OpenAIProvider.d.ts +0 -5
- package/dist/src/providers/openai/OpenAIProvider.js +91 -32
- package/dist/src/providers/openai/OpenAIProvider.js.map +1 -1
- package/dist/src/tools/delete_line_range.d.ts +34 -0
- package/dist/src/tools/delete_line_range.js +147 -0
- package/dist/src/tools/delete_line_range.js.map +1 -0
- package/dist/src/tools/doubleEscapeUtils.js +8 -1
- package/dist/src/tools/doubleEscapeUtils.js.map +1 -1
- package/dist/src/tools/insert_at_line.d.ts +34 -0
- package/dist/src/tools/insert_at_line.js +164 -0
- package/dist/src/tools/insert_at_line.js.map +1 -0
- package/dist/src/tools/read_line_range.d.ts +34 -0
- package/dist/src/tools/read_line_range.js +119 -0
- package/dist/src/tools/read_line_range.js.map +1 -0
- package/package.json +1 -1
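The headline change in the file list is a set of three new line-oriented file tools (`read_line_range`, `insert_at_line`, `delete_line_range`), each shipped with a prompt file, a `.d.ts` declaration, and a compiled implementation. Their actual signatures are not shown in this summary, so the sketch below only illustrates the semantics that 1-indexed line-range operations of this kind typically have; every name and parameter here is hypothetical, not the package's API:

```typescript
// Hypothetical sketch only - NOT the package's actual API. It illustrates the
// kind of 1-indexed, inclusive line-range operations that tools named
// read_line_range / insert_at_line / delete_line_range usually perform,
// implemented here with plain node:fs.
import { readFileSync, writeFileSync } from 'node:fs';

// Return lines startLine..endLine of a file (1-indexed, inclusive).
function readLineRange(path: string, startLine: number, endLine: number): string {
  const lines = readFileSync(path, 'utf8').split('\n');
  return lines.slice(startLine - 1, endLine).join('\n');
}

// Insert text so that its first line becomes line `line` of the file.
function insertAtLine(path: string, line: number, text: string): void {
  const lines = readFileSync(path, 'utf8').split('\n');
  lines.splice(line - 1, 0, ...text.split('\n'));
  writeFileSync(path, lines.join('\n'));
}

// Remove lines startLine..endLine from a file (1-indexed, inclusive).
function deleteLineRange(path: string, startLine: number, endLine: number): void {
  const lines = readFileSync(path, 'utf8').split('\n');
  lines.splice(startLine - 1, endLine - startLine + 1);
  writeFileSync(path, lines.join('\n'));
}
```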
package/README.md
CHANGED

````diff
@@ -1,464 +1,244 @@
 # LLxprt Code
 
 [](https://github.com/vybestack/llxprt-code/actions/workflows/ci.yml)
-
-[](https://github.com/Piebald-AI/awesome-gemini-cli)
-
-[](https://discord.gg/Wc6dZqWWYv)
+[](https://github.com/Piebald-AI/awesome-gemini-cli) [](https://discord.gg/Wc6dZqWWYv)
 
 
 
-
+**AI-powered coding assistant that works with any LLM provider.** Command-line interface for querying and editing codebases, generating applications, and automating development workflows.
 
-##
-
-- **Multi-Provider Support**: Direct access to OpenAI (gpt-5), Anthropic (Claude Opus/Sonnet), Google Gemini, plus OpenRouter, Fireworks, Synthetic, Cerebras, Chutes, Z.ai and local models
-- **Authenticate** to use free: Gemini and Qwen models as well as using your Claude Pro/Max account. Use `/auth` to enable/disable/logout of Google/Anthropic/Qwen.
-- **Installable Provider Aliases**: Save `/provider` setups as reusable configs and load OpenAI-compatible endpoints instantly
-- **Multi-model/Provider Subagents**: Use `/subagent` to define specialized subagents with isolated contexts
-- **Configuration Profiles**: define and save specific model/provider settings using `/profile` for instance temperature or custom headers
-- **Local Model Support**: Run models locally with LM Studio, llama.cpp, or any OpenAI-compatible server
-- **Flexible Configuration**: Switch providers, models, and API keys on the fly
-- **Advanced Settings & Profiles**: Fine-tune model parameters, manage ephemeral settings, and save configurations for reuse. [Learn more →](./docs/settings-and-profiles.md)
-
-With LLxprt Code you can:
-
-- Query and edit large codebases with any LLM provider
-- Generate new apps from PDFs or sketches, using multimodal capabilities
-- Use local models for privacy-sensitive work
-- Switch between providers seamlessly within a session
-- Leverage all the powerful tools and MCP servers from Gemini CLI
-- Use tools and MCP servers to connect new capabilities, including [media generation with Imagen, Veo or Lyria](https://github.com/GoogleCloudPlatform/vertex-ai-creative-studio/tree/main/experiments/mcp-genmedia)
-- Ground your queries with the [Google Search](https://ai.google.dev/gemini-api/docs/grounding) tool when using Gemini
-- Enjoy a beautifully themed interface across all commands
-
-## Quickstart
-
-You have two options to install LLxprt Code.
-
-### With Node
-
-1. **Prerequisites:** Ensure you have [Node.js version 20](https://nodejs.org/en/download) or higher installed.
-2. **Install LLxprt Code:**
-
-```bash
-npm install -g @vybestack/llxprt-code
-```
-
-Or run directly with npx:
-
-```bash
-npx https://github.com/vybestack/llxprt-code
-```
-
-### Common Configuration Steps
-
-3. **Run and configure:**
-
-```bash
-llxprt
-```
-
-- Pick a beautiful theme
-- Choose your provider with `/provider` (defaults to Gemini)
-- Set up authentication as needed
-
-## Provider Configuration
-
-### Using OpenAI
-
-Direct access to GPT-5, and other OpenAI models:
-
-1. Get your API key from [OpenAI](https://platform.openai.com/api-keys)
-2. Configure LLxprt Code:
-```
-/provider openai
-/key sk-your-openai-key-here
-/model o3-mini
-```
-
-### Using Anthropic
-
-Access Claude Sonnet 4, Claude Opus 4.1, and other Anthropic models:
-
-#### Option 1: Log in with Anthropic to use your Claude Pro or Max account
-
-Use OAuth authentication to access Claude with your existing Claude Pro or Max subscription:
-
-1. Select the Anthropic provider:
-```
-/provider anthropic
-```
-2. Authenticate with your Claude account:
-```
-/auth
-```
-3. Your browser will open to the Claude authentication page
-4. Log in and authorize LLxprt Code
-5. Copy the authorization code shown and paste it back in the terminal
-6. You're now using your Claude Pro/Max account!
-
-#### Option 2: Use an API Key
+## Free & Subscription Options
 
-
-2. Configure:
-```
-/provider anthropic
-/key sk-ant-your-key-here
-/model claude-sonnet-4-20250115
-```
+Get started immediately with powerful LLM options:
 
-
-
-
-
-
-
-Use OAuth authentication to access Qwen with your free account:
+```bash
+# Free Gemini models
+/auth gemini enable
+/provider gemini
+/model gemini-2.5-flash
 
-
+# Free Qwen models
 /auth qwen enable
-
-
-This enables OAuth for Qwen. When you send your first message, your browser will automatically open to the Qwen authentication page. Log in and authorize LLxprt Code - the authentication will complete automatically and your request will be processed. You're now using Qwen3-Coder-Pro for free!
-
-#### Option 2: Use an API Key
-
-For advanced users who need API access:
-
-1. Get your API key from [Qwen](https://platform.qwen.ai/)
-2. Configure:
-```
-/provider qwen
-/key your-qwen-api-key
-/model qwen3-coder-pro
-```
-
-### Using Cerebras Code Max/Pro
-
-Access Cerebras Code Max/Pro plan with the powerful qwen-3-coder-480b model:
-
-1. Get your API key from [Cerebras](https://cloud.cerebras.ai/)
-2. Configure LLxprt Code:
-```
-/provider openai
-/baseurl https://api.cerebras.ai/v1
-/key your-cerebras-api-key
-/model qwen-3-coder-480b
-```
-
-For optimal performance with this model, consider setting a high context limit:
+/provider qwen
+/model qwen-3-coder
 
+# Your Claude Pro / Max subscription
+/auth anthropic enable
+/provider anthropic
+/model claude-sonnet-4-5
 ```
-/set context-limit 100000
-```
-
-### Using Local Models
 
-
+## Why Choose LLxprt Code?
 
-**
+- **Free Tier Support**: Start coding immediately with Gemini, Qwen, or your existing Claude account
+- **Provider Flexibility**: Switch between any Anthropic, Gemini, or OpenAI-compatible provider
+- **Top Open Models**: Works seamlessly with GLM 4.6, MiniMax-2, and Qwen 3 Coder
+- **Local Models**: Run models locally with LM Studio, llama.cpp for complete privacy
+- **Privacy First**: No telemetry by default, local processing available
+- **Subagent Flexibility**: Create agents with different models, providers, or settings
+- **Real-time**: Interactive REPL with beautiful themes
+- **Zed Integration**: Native Zed editor integration for seamless workflow
 
-
-
-
-
-/baseurl http://127.0.0.1:1234/v1/
-/model gemma-3b-it
-```
-
-**Example with llama.cpp:**
-
-1. Start llama.cpp server: `./server -m model.gguf -c 2048`
-2. In LLxprt Code:
-```
-/provider openai
-/baseurl http://localhost:8080/v1/
-/model local-model
-```
-
-**List available models:**
+```bash
+# Install and get started
+npm install -g @vybestack/llxprt-code
+llxprt
 
-
-/model
+# Try without installing
+npx @vybestack/llxprt-code --provider synthetic --model hf:zai-org/GLM-4.6 --keyfile ~/.synthetic_key "simplify the README.md"
 ```
 
-
+## What is LLxprt Code?
 
-
+LLxprt Code is a command-line AI assistant designed for developers who want powerful LLM capabilities without leaving their terminal. Unlike GitHub Copilot or ChatGPT, LLxprt Code works with **any provider** and can run **locally** for complete privacy.
 
-
+**Key differences:**
 
-
-
-
-
-
-/keyfile ~/.openrouter_key
-/model qwen/qwen3-coder
-/profile save qwen3-coder
-```
-
-### Using Fireworks
-
-For fast inference with popular open models:
-
-1. Get your API key from [Fireworks](https://app.fireworks.ai/api-keys)
-2. Configure:
-```
-/provider openai
-/baseurl https://api.fireworks.ai/inference/v1/
-/key fw_your-key-here
-/model accounts/fireworks/models/llama-v3p3-70b-instruct
-```
-
-### Using xAI (Grok)
-
-Access Grok models through xAI's API:
+- **Open source & community driven**: Not locked into proprietary ecosystems
+- **Provider agnostic**: Not locked into one AI service
+- **Local-first**: Run entirely offline if needed
+- **Developer-centric**: Built specifically for coding workflows
+- **Terminal native**: Designed for CLI workflows, not web interfaces
 
-
-2. Configure using command line:
+## Quick Start
 
+1. **Prerequisites:** Node.js 20+ installed
+2. **Install:**
 ```bash
-
-
-
-Or configure interactively:
-
-```
-/provider openai
-/baseurl https://api.x.ai/v1/
-/model grok-3
-/keyfile ~/.mh_key
-```
-
-3. List available Grok models:
-```
-/model
-```
-
-### Using Google Gemini
-
-You can still use Google's services:
-
-1. **With Google Account:** Use `/auth` to sign in
-2. **With API Key:**
-```bash
-export GEMINI_API_KEY="YOUR_API_KEY"
+npm install -g @vybestack/llxprt-code
+# Or try without installing:
+npx @vybestack/llxprt-code
 ```
-
-
-
+3. **Run:** `llxprt`
+4. **Choose provider:** Use `/provider` to select your preferred LLM service
+5. **Start coding:** Ask questions, generate code, or analyze projects
 
-
-- **Load key from file:** `/keyfile ~/.keys/openai.txt`
-- **Environment variables:** Still supported for all providers
+**First session example:**
 
-
-
-Start a new project:
-
-```sh
-cd new-project/
-llxprt
-> Create a Discord bot that answers questions using a FAQ.md file I will provide
-```
-
-Work with existing code:
-
-```sh
-git clone https://github.com/acoliver/llxprt-code
-cd llxprt-code
+```bash
+cd your-project/
 llxprt
->
+> Explain the architecture of this codebase and suggest improvements
+> Create a test file for the user authentication module
+> Help me debug this error: [paste error message]
 ```
 
-
+## Key Features
 
-
-
-
-
-
-
-
+- **Free & Subscription Options** - Gemini, Qwen (free), Claude Pro/Max (subscription)
+- **Extensive Provider Support** - Any Anthropic, Gemini, or OpenAI-compatible provider [**Provider Guide →**](./docs/providers/quick-reference.md)
+- **Top Open Models** - GLM 4.6, MiniMax-2, Qwen 3 Coder
+- **Local Model Support** - LM Studio, llama.cpp, Ollama for complete privacy
+- **Profile System** - Save provider configurations and model settings
+- **Advanced Subagents** - Isolated AI assistants with different models/providers
+- **MCP Integration** - Connect to external tools and services
+- **Beautiful Terminal UI** - Multiple themes with syntax highlighting
 
-##
+## Interactive vs Non-Interactive Workflows
 
-
+**Interactive Mode (REPL):**
+Perfect for exploration, rapid prototyping, and iterative development:
 
 ```bash
-#
-
-/set modelparam max_tokens 4096
-
-# Configure context handling
-/set context-limit 100000
-/set compression-threshold 0.7
-
-# Save your configuration
-/profile save my-assistant
+# Start interactive session
+llxprt
 
-
-
+> Explore this codebase and suggest improvements
+> Create a REST API endpoint with tests
+> Debug this authentication issue
+> Optimize this database query
 ```
 
-
-
-## Key Features
-
-### Code Understanding & Generation
-
-- Query and edit large codebases
-- Generate new apps from PDFs, images, or sketches using multimodal capabilities
-- Debug issues and troubleshoot with natural language
-
-### Automation & Integration
-
-- Automate operational tasks like querying pull requests or handling complex rebases
-- Use MCP servers to connect new capabilities
-- Run non-interactively in scripts for workflow automation
+**Non-Interactive Mode:**
+Ideal for automation, CI/CD, and scripted workflows:
 
-
-
--
-
-
-
-### GitHub Integration (Experimental)
-
-> **Note:** GitHub Actions integration is currently experimental and disabled by default in LLxprt Code. This feature is under development and may not be fully functional.
-
-For potential future GitHub workflow integration:
-
-- **Pull Request Reviews**: Automated code review with contextual feedback and suggestions
-- **Issue Triage**: Automated labeling and prioritization of GitHub issues based on content analysis
-- **On-demand Assistance**: Mention capabilities in issues and pull requests for help with debugging, explanations, or task delegation
-- **Custom Workflows**: Build automated, scheduled and on-demand workflows tailored to your team's needs
+```bash
+# Single command with immediate response
+llxprt --profile-load zai-glm46 "Refactor this function for better readability"
+llxprt "Generate unit tests for payment module" > tests/payment.test.js
+```
 
-##
+## Top Open Weight Models
 
-LLxprt Code
+LLxprt Code works seamlessly with the best open-weight models:
 
-
-- Override provider-specific behaviors
-- Add environment-aware instructions
-- Customize tool usage guidelines
+### GLM 4.6
 
-
+- **Context Window**: 200,000 tokens
+- **Architecture**: Mixture-of-Experts with 355B total parameters (32B active)
+- **Strengths**: Coding, multi-step planning, tool integration
+- **15% fewer tokens** for equivalent tasks vs previous generation
 
-###
+### MiniMax-2
 
--
--
--
--
-- **Migrating from Gemini CLI?** Check out our [tips for Gemini CLI users](./docs/gemini-cli-tips.md).
-- Take a look at some [popular tasks](#popular-tasks) for more inspiration.
-- Check out our **[Official Roadmap](./ROADMAP.md)**
+- **Context Window**: ~204,800 tokens
+- **Architecture**: MoE with 230B total parameters (10B active)
+- **Strengths**: Coding workflows, multi-step agents, tool calling
+- **Cost**: Only 8% of Claude Sonnet, ~2x faster
 
-###
+### Qwen 3 Coder
 
--
--
--
--
-- `/keyfile` - Load API key from file
-- `/auth` - Authenticate with Google (for Gemini), Anthropic (for Claude), or Qwen
+- **Context Window**: 256,000 tokens (extendable to 1M)
+- **Architecture**: MoE with 480B total parameters (35B active)
+- **Strengths**: Agentic coding, browser automation, tool usage
+- **Performance**: State-of-the-art on SWE-bench Verified (69.6%)
 
-
+## Local Models
 
-
-- [**IDE Integration**](./docs/ide-integration.md) - VS Code companion
-- [**Sandboxing & Security**](./docs/sandbox.md) - Safe execution environments
-- [**Enterprise Deployment**](./docs/deployment.md) - Docker, system-wide config
-- [**Telemetry & Monitoring**](./docs/telemetry.md) - Usage tracking
-- [**Tools API Development**](./docs/core/tools-api.md) - Create custom tools
+Run models completely offline for maximum privacy:
 
-
+```bash
+# With LM Studio
+/provider openai
+/baseurl http://localhost:1234/v1/
+/model your-local-model
 
-
+# With Ollama
+/provider ollama
+/model codellama:13b
+```
 
-
+Supported local providers:
 
-
+- **LM Studio**: Easy Windows/Mac/Linux setup
+- **llama.cpp**: Maximum performance and control
+- **Ollama**: Simple model management
+- **Any OpenAI-compatible API**: Full flexibility
 
-
+## Advanced Subagents
 
-
-> Describe the main pieces of this system's architecture.
-```
+Create specialized AI assistants with isolated contexts and different configurations:
 
-```
-
+```bash
+# Subagents run with custom profiles and tool access
+# Access via the commands interface
+/subagent list
+/subagent create <name>
 ```
 
-
-> Provide a step-by-step dev onboarding doc for developers new to the codebase.
-```
+Each subagent can be configured with:
 
-
-
-
+- **Different providers** (Gemini vs Anthropic vs Qwen vs Local)
+- **Different models** (Flash vs Sonnet vs GLM 4.6 vs Custom)
+- **Different tool access** (Restrict or allow specific tools)
+- **Different settings** (Temperature, timeouts, max turns)
+- **Isolated runtime context** (No memory or state crossover)
 
-
-> Identify potential areas for improvement or refactoring in this codebase, highlighting parts that appear fragile, complex, or hard to maintain.
-```
+Subagents are designed for:

-
-
-
+- **Specialized tasks** (Code review, debugging, documentation)
+- **Different expertise areas** (Frontend vs Backend vs DevOps)
+- **Tool-limited environments** (Read-only analysis vs Full development)
+- **Experimental configurations** (Testing new models or settings)
 
-
-> Generate a README section for the [module name] module explaining what it does and how to use it.
-```
+**[Full Subagent Documentation →](./docs/subagents.md)**
 
-
-> What kind of error handling and logging strategies does the project use?
-```
+## Zed Integration
 
-
-> Which tools, libraries, and dependencies are used in this project?
-```
+Native Zed editor support for seamless development workflow:
 
-
+```bash
+# Install Zed extension
+zed:install llxprt-code
 
-
-
+# Use within Zed
+# (See docs for Zed integration setup)
 ```
 
-
-> Help me migrate this codebase to the latest version of Java. Start with a plan.
-```
+Features:
 
-
+- **In-editor chat**: Direct AI interaction without leaving Zed
+- **Code selection**: Ask about specific code selections
+- **Inline suggestions**: Get AI help while typing
+- **Project awareness**: Full context of your open workspace
 
-
+**[Zed Integration Guide →](./docs/zed-integration.md)**
 
-
-> Make me a slide deck showing the git history from the last 7 days, grouped by feature and team member.
-```
+**[Complete Provider Guide →](./docs/cli/providers.md)**
 
-
-> Make a full-screen web app for a wall display to show our most interacted-with GitHub issues.
-```
+## Advanced Features
 
-
+- **Settings & Profiles**: Fine-tune model parameters and save configurations
+- **Subagents**: Create specialized assistants for different tasks
+- **MCP Servers**: Connect external tools and data sources
+- **Checkpointing**: Save and resume complex conversations
+- **IDE Integration**: Connect to VS Code and other editors
 
-
-> Convert all the images in this directory to png, and rename them to use dates from the exif data.
-```
+**[Full Documentation →](./docs/index.md)**
 
-
-> Organize my PDF invoices by month of expenditure.
-```
+## Migration & Resources
 
-
+- **From Gemini CLI**: [Migration Guide](./docs/gemini-cli-tips.md)
+- **Local Models Setup**: [Local Models Guide](./docs/local-models.md)
+- **Command Reference**: [CLI Commands](./docs/cli/commands.md)
+- **Troubleshooting**: [Common Issues](./docs/troubleshooting.md)
 
-
+## Privacy & Terms
 
-
+LLxprt Code does not collect telemetry by default. Your data stays with you unless you choose to send it to external AI providers.
 
-
+When using external services, their respective terms of service apply:
 
-
+- [OpenAI Terms](https://openai.com/policies/terms-of-use)
+- [Anthropic Terms](https://www.anthropic.com/legal/terms)
+- [Google Terms](https://policies.google.com/terms)
````