@vybestack/llxprt-code-core 0.5.0-nightly.251127.ec44835c5 → 0.6.0-nightly.251128.1049d5f2b
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +244 -0
- package/package.json +1 -1
package/README.md
CHANGED
@@ -0,0 +1,244 @@
# LLxprt Code

[CI](https://github.com/vybestack/llxprt-code/actions/workflows/ci.yml) · [Awesome Gemini CLI](https://github.com/Piebald-AI/awesome-gemini-cli) · [Discord](https://discord.gg/Wc6dZqWWYv)

**AI-powered coding assistant that works with any LLM provider.** A command-line interface for querying and editing codebases, generating applications, and automating development workflows.

## Free & Subscription Options

Get started immediately with powerful LLM options:

```bash
# Free Gemini models
/auth gemini enable
/provider gemini
/model gemini-2.5-flash

# Free Qwen models
/auth qwen enable
/provider qwen
/model qwen-3-coder

# Your Claude Pro / Max subscription
/auth anthropic enable
/provider anthropic
/model claude-sonnet-4-5
```

## Why Choose LLxprt Code?

- **Free Tier Support**: Start coding immediately with Gemini, Qwen, or your existing Claude account
- **Provider Flexibility**: Switch between any Anthropic, Gemini, or OpenAI-compatible provider
- **Top Open Models**: Works seamlessly with GLM 4.6, MiniMax-2, and Qwen 3 Coder
- **Local Models**: Run models locally with LM Studio or llama.cpp for complete privacy
- **Privacy First**: No telemetry by default, local processing available
- **Subagent Flexibility**: Create agents with different models, providers, or settings
- **Real-time**: Interactive REPL with beautiful themes
- **Zed Integration**: Native Zed editor integration for a seamless workflow

```bash
# Install and get started
npm install -g @vybestack/llxprt-code
llxprt

# Try without installing
npx @vybestack/llxprt-code --provider synthetic --model hf:zai-org/GLM-4.6 --keyfile ~/.synthetic_key "simplify the README.md"
```

## What is LLxprt Code?

LLxprt Code is a command-line AI assistant designed for developers who want powerful LLM capabilities without leaving their terminal. Unlike GitHub Copilot or ChatGPT, LLxprt Code works with **any provider** and can run **locally** for complete privacy.

**Key differences:**

- **Open source & community driven**: Not locked into proprietary ecosystems
- **Provider agnostic**: Not locked into one AI service
- **Local-first**: Run entirely offline if needed
- **Developer-centric**: Built specifically for coding workflows
- **Terminal native**: Designed for CLI workflows, not web interfaces

## Quick Start

1. **Prerequisites:** Node.js 20+ installed
2. **Install:**
   ```bash
   npm install -g @vybestack/llxprt-code
   # Or try without installing:
   npx @vybestack/llxprt-code
   ```
3. **Run:** `llxprt`
4. **Choose provider:** Use `/provider` to select your preferred LLM service
5. **Start coding:** Ask questions, generate code, or analyze projects

**First session example:**

```bash
cd your-project/
llxprt
> Explain the architecture of this codebase and suggest improvements
> Create a test file for the user authentication module
> Help me debug this error: [paste error message]
```

## Key Features

- **Free & Subscription Options** - Gemini, Qwen (free), Claude Pro/Max (subscription)
- **Extensive Provider Support** - Any Anthropic, Gemini, or OpenAI-compatible provider [**Provider Guide →**](./docs/providers/quick-reference.md)
- **Top Open Models** - GLM 4.6, MiniMax-2, Qwen 3 Coder
- **Local Model Support** - LM Studio, llama.cpp, Ollama for complete privacy
- **Profile System** - Save provider configurations and model settings (see the sketch after this list)
- **Advanced Subagents** - Isolated AI assistants with different models/providers
- **MCP Integration** - Connect to external tools and services
- **Beautiful Terminal UI** - Multiple themes with syntax highlighting

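The profile workflow pairs an in-REPL save with the `--profile-load` flag used later in this README. A rough sketch; the `/profile save` command name is an assumption here, so check the CLI Commands reference for the exact syntax:

```bash
# Inside the REPL: pick a provider and model, then save the combination.
# NOTE: "/profile save" is assumed syntax; see the CLI Commands reference.
/provider anthropic
/model claude-sonnet-4-5
/profile save sonnet-work

# Later, reuse the saved profile from the shell:
llxprt --profile-load sonnet-work "Refactor this function for better readability"
```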

## Interactive vs Non-Interactive Workflows

**Interactive Mode (REPL):**
Perfect for exploration, rapid prototyping, and iterative development:

```bash
# Start interactive session
llxprt

> Explore this codebase and suggest improvements
> Create a REST API endpoint with tests
> Debug this authentication issue
> Optimize this database query
```

**Non-Interactive Mode:**
Ideal for automation, CI/CD, and scripted workflows:

```bash
# Single command with immediate response
llxprt --profile-load zai-glm46 "Refactor this function for better readability"
llxprt "Generate unit tests for payment module" > tests/payment.test.js
```

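In CI this composes with ordinary shell plumbing. A sketch, assuming piped stdin is treated as extra context (as in the Gemini CLI that LLxprt Code derives from; verify with your version) and assuming a profile named `ci-reviewer` was saved beforehand:

```bash
# Hypothetical CI step: review the branch diff non-interactively.
# Assumes stdin piping is supported and a "ci-reviewer" profile exists.
git diff origin/main...HEAD | \
  llxprt --profile-load ci-reviewer "Review this diff for bugs and risky changes" > review.md
```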

## Top Open Weight Models

LLxprt Code works seamlessly with the best open-weight models:

### GLM 4.6

- **Context Window**: 200,000 tokens
- **Architecture**: Mixture-of-Experts with 355B total parameters (32B active)
- **Strengths**: Coding, multi-step planning, tool integration
- **Efficiency**: ~15% fewer tokens for equivalent tasks vs. the previous generation

### MiniMax-2

- **Context Window**: ~204,800 tokens
- **Architecture**: MoE with 230B total parameters (10B active)
- **Strengths**: Coding workflows, multi-step agents, tool calling
- **Cost**: Roughly 8% of Claude Sonnet pricing, ~2x faster

### Qwen 3 Coder

- **Context Window**: 256,000 tokens (extendable to 1M)
- **Architecture**: MoE with 480B total parameters (35B active)
- **Strengths**: Agentic coding, browser automation, tool usage
- **Performance**: State-of-the-art on SWE-bench Verified (69.6%)

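Any of these models is selected like any other provider/model pair. The GLM 4.6 invocation from the install section, minus the one-shot prompt, should also start an interactive session (the key file path is simply wherever your API key lives):

```bash
# Interactive session on GLM 4.6 via the synthetic provider
llxprt --provider synthetic --model hf:zai-org/GLM-4.6 --keyfile ~/.synthetic_key
```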

## Local Models

Run models completely offline for maximum privacy:

```bash
# With LM Studio
/provider openai
/baseurl http://localhost:1234/v1/
/model your-local-model

# With Ollama
/provider ollama
/model codellama:13b
```

Supported local providers:

- **LM Studio**: Easy Windows/Mac/Linux setup
- **llama.cpp**: Maximum performance and control
- **Ollama**: Simple model management
- **Any OpenAI-compatible API**: Full flexibility

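llama.cpp plugs in the same way as LM Studio, through its OpenAI-compatible server. A minimal sketch, assuming `llama-server` is running locally on its default port 8080 with a model already loaded; adjust the port and model name to your setup:

```bash
# Point the openai provider at a local llama.cpp server
# (8080 is llama-server's default port; change it if yours differs)
/provider openai
/baseurl http://localhost:8080/v1/
/model your-local-model
```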

## Advanced Subagents

Create specialized AI assistants with isolated contexts and different configurations:

```bash
# Subagents run with custom profiles and tool access
# Access via the commands interface
/subagent list
/subagent create <name>
```

Each subagent can be configured with:

- **Different providers** (Gemini vs Anthropic vs Qwen vs Local)
- **Different models** (Flash vs Sonnet vs GLM 4.6 vs Custom)
- **Different tool access** (Restrict or allow specific tools)
- **Different settings** (Temperature, timeouts, max turns)
- **Isolated runtime context** (No memory or state crossover)

Subagents are designed for:

- **Specialized tasks** (Code review, debugging, documentation)
- **Different expertise areas** (Frontend vs Backend vs DevOps)
- **Tool-limited environments** (Read-only analysis vs Full development)
- **Experimental configurations** (Testing new models or settings)

**[Full Subagent Documentation →](./docs/subagents.md)**

## Zed Integration

Native Zed editor support for a seamless development workflow:

```bash
# Install Zed extension
zed:install llxprt-code

# Use within Zed
# (See docs for Zed integration setup)
```

Features:

- **In-editor chat**: Direct AI interaction without leaving Zed
- **Code selection**: Ask about specific code selections
- **Inline suggestions**: Get AI help while typing
- **Project awareness**: Full context of your open workspace

**[Zed Integration Guide →](./docs/zed-integration.md)**

**[Complete Provider Guide →](./docs/cli/providers.md)**

## Advanced Features

- **Settings & Profiles**: Fine-tune model parameters and save configurations
- **Subagents**: Create specialized assistants for different tasks
- **MCP Servers**: Connect external tools and data sources
- **Checkpointing**: Save and resume complex conversations
- **IDE Integration**: Connect to VS Code and other editors

**[Full Documentation →](./docs/index.md)**

## Migration & Resources

- **From Gemini CLI**: [Migration Guide](./docs/gemini-cli-tips.md)
- **Local Models Setup**: [Local Models Guide](./docs/local-models.md)
- **Command Reference**: [CLI Commands](./docs/cli/commands.md)
- **Troubleshooting**: [Common Issues](./docs/troubleshooting.md)

## Privacy & Terms

LLxprt Code does not collect telemetry by default. Your data stays with you unless you choose to send it to external AI providers.

When using external services, their respective terms of service apply:

- [OpenAI Terms](https://openai.com/policies/terms-of-use)
- [Anthropic Terms](https://www.anthropic.com/legal/terms)
- [Google Terms](https://policies.google.com/terms)