@vybestack/llxprt-code-core 0.9.0 → 0.9.1
This diff compares the contents of publicly available package versions as published to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their respective public registries.
- package/README.md +31 -13
- package/package.json +1 -1
package/README.md CHANGED
@@ -1,7 +1,10 @@
-
+<h1>
+<img src="docs/assets/llxprt.svg" alt="LLxprt logo" width="42" />
+<a href="https://vybestack.dev/llxprt-code.html">LLxprt Code</a>
+</h1>
 
 [](https://github.com/vybestack/llxprt-code/actions/workflows/ci.yml)
-[](https://discord.gg/Wc6dZqWWYv)
 
 
 
@@ -45,7 +48,7 @@ Get started immediately with powerful LLM options:
 - **Load Balancer Profiles**: Balance requests across providers or accounts with automatic failover
 - **Free Tier Support**: Start coding immediately with Gemini or Qwen free tiers
 - **Provider Flexibility**: Switch between any Anthropic, Gemini, OpenAI, Kimi, or OpenAI-compatible provider
-- **Top Open Models**: Works seamlessly with GLM
+- **Top Open Models**: Works seamlessly with GLM 5, Kimi K2.5, MiniMax M2.5, and Qwen 3 Coder Next
 - **Local Models**: Run models locally with LM Studio, llama.cpp for complete privacy
 - **Privacy First**: No telemetry by default, local processing available
 - **Subagent Flexibility**: Create agents with different models, providers, or settings
@@ -53,8 +56,15 @@ Get started immediately with powerful LLM options:
 - **Zed Integration**: Native Zed editor integration for seamless workflow
 
 ```bash
-#
+# macOS (Homebrew)
+brew tap vybestack/homebrew-tap
+brew update
+brew install llxprt-code
+
+# npm
 npm install -g @vybestack/llxprt-code
+
+# Start coding
 llxprt
 
 # Try without installing
@@ -75,13 +85,21 @@ LLxprt Code is a command-line AI assistant designed for developers who want powe
 
 ## Quick Start
 
-1. **Prerequisites:** Node.js 20+ installed
+1. **Prerequisites:** Node.js 20+ installed (not required for Homebrew)
 2. **Install:**
+
 ```bash
+# macOS (Homebrew)
+brew tap vybestack/homebrew-tap
+brew update
+brew install llxprt-code
+
+# npm
 npm install -g @vybestack/llxprt-code
 # Or try without installing:
 npx @vybestack/llxprt-code
 ```
+
 3. **Run:** `llxprt`
 4. **Choose provider:** Use `/provider` to select your preferred LLM service
 5. **Start coding:** Ask questions, generate code, or analyze projects
@@ -103,7 +121,7 @@ llxprt
 - **Multi-Account Failover** - Configure multiple OAuth buckets that failover automatically on rate limits
 - **Load Balancer Profiles** - Balance across providers/accounts with roundrobin or failover policies
 - **Extensive Provider Support** - Anthropic, Gemini, OpenAI, Kimi, and any OpenAI-compatible provider [**Provider Guide →**](./docs/providers/quick-reference.md)
-- **Top Open Models** - GLM
+- **Top Open Models** - GLM 5, Kimi K2.5, MiniMax M2.5, Qwen 3 Coder Next
 - **Local Model Support** - LM Studio, llama.cpp, Ollama for complete privacy
 - **Profile System** - Save provider configurations and model settings
 - **Advanced Subagents** - Isolated AI assistants with different models/providers
@@ -138,7 +156,7 @@ llxprt "Generate unit tests for payment module" > tests/payment.test.js
 
 LLxprt Code works seamlessly with the best open-weight models:
 
-### Kimi K2
+### Kimi K2.5
 
 - **Context Window**: 262,144 tokens
 - **Architecture**: Trillion-parameter MoE (32B active)
@@ -147,27 +165,27 @@ LLxprt Code works seamlessly with the best open-weight models:
 
 ```bash
 /provider kimi
-/model kimi-
+/model kimi-for-coding
 # Or via Synthetic/Chutes:
 /provider synthetic
-/model hf:moonshotai/Kimi-K2
+/model hf:moonshotai/Kimi-K2.5
 ```
 
-### GLM
+### GLM 5
 
 - **Context Window**: 200,000 tokens
 - **Max Output**: 131,072 tokens
 - **Architecture**: Mixture-of-Experts with 355B total parameters (32B active)
 - **Strengths**: Coding, multi-step planning, tool integration
 
-### MiniMax M2.
+### MiniMax M2.5
 
 - **Context Window**: 196,608 tokens
 - **Architecture**: MoE with 230B total parameters (10B active)
 - **Strengths**: Coding workflows, multi-step agents, tool calling
 - **Cost**: Only 8% of Claude Sonnet, ~2x faster
 
-###
+### Qwen 3 Coder Next
 
 - **Context Window**: 262,144 tokens
 - **Max Output**: 65,536 tokens
@@ -211,7 +229,7 @@ Create specialized AI assistants with isolated contexts and different configurat
 Each subagent can be configured with:
 
 - **Different providers** (Gemini vs Anthropic vs Qwen vs Local)
-- **Different models** (Flash vs Sonnet vs GLM
+- **Different models** (Flash vs Sonnet vs GLM 5 vs Custom)
 - **Different tool access** (Restrict or allow specific tools)
 - **Different settings** (Temperature, timeouts, max turns)
 - **Isolated runtime context** (No memory or state crossover)