@iamharshil/aix-cli 2.0.4 → 3.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +77 -55
- package/dist/bin/aix.js +8 -7
- package/dist/bin/aix.js.map +4 -4
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/run.d.ts +2 -1
- package/dist/commands/run.d.ts.map +1 -1
- package/dist/commands/status.d.ts.map +1 -1
- package/dist/services/config.d.ts +4 -1
- package/dist/services/config.d.ts.map +1 -1
- package/dist/services/ollama.d.ts +11 -0
- package/dist/services/ollama.d.ts.map +1 -0
- package/dist/src/index.js +8 -7
- package/dist/src/index.js.map +4 -4
- package/dist/types/index.d.ts +19 -0
- package/dist/types/index.d.ts.map +1 -1
- package/package.json +3 -2
package/README.md
CHANGED

@@ -18,14 +18,17 @@
 
 ## What is AIX?
 
-AIX CLI is a bridge between [LM Studio](https://lmstudio.ai)
+AIX CLI is a bridge between local model servers and AI coding assistants. It connects [LM Studio](https://lmstudio.ai) or [Ollama](https://ollama.com) to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [OpenCode](https://opencode.ai) — letting you use **locally-running language models** as the backend for your favorite AI dev tools.
+
+No API keys. No cloud calls. No data leaving your machine.
 
 ```
 ┌──────────────────────────────────────────────────┐
 │  $ aix-cli run                                   │
 │                                                  │
-│
-│ ✔
+│  ? Select model backend: Ollama                  │
+│  ✔ Connected to Ollama                           │
+│  ✔ Model selected: qwen2.5-coder:14b             │
 │  ✔ Launching Claude Code...                      │
 │                                                  │
 │  Your code stays local. Always.                  │

@@ -36,8 +39,9 @@ AIX CLI is a bridge between [LM Studio](https://lmstudio.ai) and AI coding assis
 
 - 🔒 **Privacy-first** — All inference runs locally on your hardware. Your code never leaves your machine.
 - 🔑 **No API keys** — No subscriptions, no usage limits, no cloud dependencies.
-- 🚀 **GPU-accelerated** — Take advantage of your local GPU for fast inference
-- 🔀 **Multi-
+- 🚀 **GPU-accelerated** — Take advantage of your local GPU for fast inference.
+- 🔀 **Multi-backend** — Use LM Studio or Ollama as your model server.
+- 🛠️ **Multi-provider** — Switch between Claude Code and OpenCode with a single flag.
 - ⚡ **Zero config** — Just run `aix-cli run` and start coding.
 
 ---

@@ -49,7 +53,7 @@ AIX CLI is a bridge between [LM Studio](https://lmstudio.ai) and AI coding assis
 | Requirement | Description |
 |---|---|
 | [Node.js](https://nodejs.org/) ≥ 18 | JavaScript runtime |
-| [LM Studio](https://lmstudio.ai)
+| [LM Studio](https://lmstudio.ai) **or** [Ollama](https://ollama.com) | Local model server (at least one required) |
 | [Claude Code](https://docs.anthropic.com/en/docs/claude-code) **or** [OpenCode](https://opencode.ai) | AI coding assistant (at least one required) |
 
 ### Install

@@ -90,7 +94,7 @@ npm link
 aix-cli doctor
 ```
 
-This checks that LM Studio, Claude Code / OpenCode, and your environment are properly configured.
+This checks that LM Studio / Ollama, Claude Code / OpenCode, and your environment are properly configured.
 
 ---
 

@@ -98,35 +102,36 @@ This checks that LM Studio, Claude Code / OpenCode, and your environment are pro
 
 ### `aix-cli run` — Start a coding session
 
-The primary command. Launches Claude Code or OpenCode backed by a local
+The primary command. Launches Claude Code or OpenCode backed by a local model.
 
 ```bash
-# Interactive —
+# Interactive — prompts for backend, model, and provider
 aix-cli run
 
-# Specify
-aix-cli run -m qwen2.5-coder
+# Specify backend and model
+aix-cli run -b ollama -m qwen2.5-coder:14b
+aix-cli run -b lmstudio -m llama-3-8b
 
 # Use OpenCode instead of Claude Code
 aix-cli run --provider opencode
 
 # Pass a prompt directly
-aix-cli run -m qwen2.5-coder
+aix-cli run -b ollama -m qwen2.5-coder:14b -- "Refactor auth middleware"
 ```
 
-### `aix-cli init` —
+### `aix-cli init` — Set up backend and model
 
-
+Configure your preferred backend and load/select a model.
 
 ```bash
-aix-cli init
-aix-cli init -m
-aix-cli init -m llama-3-8b -p
+aix-cli init                                       # Interactive setup
+aix-cli init -b ollama -m qwen2.5-coder:14b        # Ollama with specific model
+aix-cli init -b lmstudio -m llama-3-8b -p claude   # Full config in one command
 ```
 
 ### `aix-cli status` — Check what's running
 
-Shows
+Shows status for both LM Studio and Ollama, including available and running models.
 
 ```bash
 aix-cli status

@@ -145,16 +150,17 @@ aix-cli doctor
 | Command | Aliases | Description |
 |---|---|---|
 | `run` | `r` | Run Claude Code / OpenCode with a local model |
-| `init` | `i`, `load` |
-| `status` | `s`, `stats` | Show LM Studio
+| `init` | `i`, `load` | Set up backend, select model, configure provider |
+| `status` | `s`, `stats` | Show LM Studio & Ollama status |
 | `doctor` | `d`, `check` | Run system diagnostics |
 
 ### Global Options
 
 | Flag | Description |
 |---|---|
+| `-b, --backend <name>` | Model backend: `lmstudio` or `ollama` |
 | `-m, --model <name>` | Model name or ID to use |
-| `-p, --provider <name>` |
+| `-p, --provider <name>` | Coding tool: `claude` (default) or `opencode` |
 | `-v, --verbose` | Show verbose output |
 | `-h, --help` | Show help |
 | `-V, --version` | Show version |

@@ -177,9 +183,12 @@ AIX stores its configuration in the OS-appropriate config directory:
 {
   "lmStudioUrl": "http://localhost",
   "lmStudioPort": 1234,
+  "ollamaUrl": "http://localhost",
+  "ollamaPort": 11434,
   "defaultTimeout": 30000,
-  "
-  "defaultProvider": "claude"
+  "defaultBackend": "ollama",
+  "defaultProvider": "claude",
+  "model": "qwen2.5-coder:14b"
 }
 ```
 
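The three new keys (`ollamaUrl`/`ollamaPort`, `defaultBackend`, `model`) correspond to the `ConfigService` in the bundled `dist/bin/aix.js` further down, which persists them with the `conf` package under the project name `aix`. A minimal TypeScript sketch of that layer, assuming the `conf` API; the key names come from the diff, the rest is illustrative:

```ts
import Conf from "conf";

// Shape implied by the JSON above; the optional keys are written
// lazily by the CLI (init/run) rather than seeded as defaults.
type AixConfig = {
  lmStudioUrl: string;
  lmStudioPort: number;
  ollamaUrl: string;
  ollamaPort: number;
  defaultTimeout: number;
  defaultBackend?: "lmstudio" | "ollama";
  defaultProvider?: "claude" | "opencode";
  model?: string;
};

const store = new Conf<AixConfig>({
  projectName: "aix", // same project name the bundle passes to conf
  defaults: {
    lmStudioUrl: "http://localhost",
    lmStudioPort: 1234,
    ollamaUrl: "http://localhost",
    ollamaPort: 11434,
    defaultTimeout: 30000,
  },
});

// The bundle's getOllamaUrl() joins url and port exactly like this:
const ollamaUrl = `${store.get("ollamaUrl")}:${store.get("ollamaPort")}`;
// → "http://localhost:11434"
```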
@@ -194,29 +203,31 @@ AIX stores its configuration in the OS-appropriate config directory:
 ## How It Works
 
 ```
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+┌───────────────────┐      ┌───────────────────┐
+│     LM Studio     │      │      Ollama       │
+│    (port 1234)    │      │   (port 11434)    │
+└─────────┬─────────┘      └─────────┬─────────┘
+          │                          │
+      REST API                   REST API
+          │                          │
+          └────────────┬─────────────┘
+                       │
+             ┌─────────┴─────────┐
+             │      AIX CLI      │
+             │  backend routing  │
+             │  model selection  │
+             │    config mgmt    │
+             └────┬─────────┬────┘
+                  │         │
+         ┌────────┘         └────────┐
+         ▼                           ▼
+  ┌──────────────┐            ┌──────────────┐
+  │ Claude Code  │            │   OpenCode   │
+  │  --model X   │            │  --model X   │
+  └──────────────┘            └──────────────┘
 ```
 
-1. **LM Studio** runs a local inference server
+1. **LM Studio** or **Ollama** runs a local inference server with an OpenAI-compatible API.
 2. **AIX CLI** discovers available models, manages configuration, and orchestrates the connection.
 3. **Claude Code** or **OpenCode** receives the model endpoint and runs as it normally would — except fully local.
 
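The two "REST API" arrows in the diagram are plain health probes in the bundle: LM Studio is detected via `GET /api/status` on port 1234, Ollama via `GET /api/tags` on port 11434, each with a 3-second timeout. A small sketch of that check (the `isUp` helper is an illustrative name, not a package export):

```ts
// Probe a local model server the way the bundled services do.
// Requires Node ≥ 18 for the global fetch and AbortSignal.timeout.
async function isUp(url: string): Promise<boolean> {
  try {
    const res = await fetch(url, {
      method: "GET",
      signal: AbortSignal.timeout(3000), // 3 s, matching the bundle
    });
    return res.ok;
  } catch {
    return false; // refused, timed out, or unreachable
  }
}

const lmStudioRunning = await isUp("http://localhost:1234/api/status");
const ollamaRunning = await isUp("http://localhost:11434/api/tags");
```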
@@ -234,25 +245,35 @@ AIX stores its configuration in the OS-appropriate config directory:
 
 </details>
 
+<details>
+<summary><strong>Ollama not running</strong></summary>
+
+1. Install Ollama from [ollama.com](https://ollama.com)
+2. Start the server: `ollama serve`
+3. Pull a model: `ollama pull qwen2.5-coder:14b`
+4. Confirm with `aix-cli status`
+
+</details>
+
 <details>
 <summary><strong>No models found</strong></summary>
 
-
-
-
-
+**LM Studio:** Open LM Studio → **Search** tab → download a model.
+
+**Ollama:** Run `ollama pull <model>` to download a model (e.g., `ollama pull llama3.2`).
+
+Then run `aix-cli init` to select and configure.
 
 </details>
 
 <details>
-<summary><strong>Connection refused
+<summary><strong>Connection refused</strong></summary>
 
-Check
+Check that the correct port is being used:
+- LM Studio defaults to port `1234`
+- Ollama defaults to port `11434`
 
-
-# The config file path is shown by `aix-cli doctor`
-# Edit the config file and set "lmStudioPort" to the correct port
-```
+You can configure custom ports in your AIX config file (path shown by `aix-cli doctor`).
 
 </details>
 

@@ -279,7 +300,7 @@ Then re-run `aix-cli doctor` to confirm.
 
 AIX is designed around a simple principle: **your code never leaves your machine.**
 
-- ✅ All AI inference runs locally via LM Studio
+- ✅ All AI inference runs locally via LM Studio or Ollama
 - ✅ No telemetry, analytics, or tracking of any kind
 - ✅ No outbound network calls (except to `localhost`)
 - ✅ No API keys or accounts required

@@ -307,6 +328,7 @@ npm run lint # Lint
 ## Related Projects
 
 - [LM Studio](https://lmstudio.ai) — Run local AI models with a visual interface
+- [Ollama](https://ollama.com) — Run large language models locally
 - [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — Anthropic's AI coding assistant
 - [OpenCode](https://opencode.ai) — Open-source AI coding assistant
 
package/dist/bin/aix.js
CHANGED

@@ -1,10 +1,11 @@
 #!/usr/bin/env node
-var
-Running: claude ${
-`));try{await
+var ge=Object.defineProperty;var pe=(t=>typeof require<"u"?require:typeof Proxy<"u"?new Proxy(t,{get:(e,o)=>(typeof require<"u"?require:e)[o]}):t)(function(t){if(typeof require<"u")return require.apply(this,arguments);throw Error('Dynamic require of "'+t+'" is not supported')});var C=(t,e)=>()=>(t&&(e=t(t=0)),e);var O=(t,e)=>{for(var o in e)ge(t,o,{get:e[o],enumerable:!0})};var Y={};O(Y,{ConfigService:()=>I,configService:()=>c});import fe from"conf";var I,c,M=C(()=>{"use strict";I=class{store;constructor(){this.store=new fe({projectName:"aix",defaults:{lmStudioUrl:"http://localhost",lmStudioPort:1234,ollamaUrl:"http://localhost",ollamaPort:11434,defaultTimeout:3e4,autoStartServer:!1},clearInvalidConfig:!0})}get(e){return this.store.get(e)}set(e,o){this.store.set(e,o)}setModel(e){this.store.set("model",e)}getLastUsedModel(){return this.store.get("model")}setDefaultProvider(e){this.store.set("defaultProvider",e)}getDefaultProvider(){return this.store.get("defaultProvider")}setDefaultBackend(e){this.store.set("defaultBackend",e)}getDefaultBackend(){return this.store.get("defaultBackend")}getLMStudioUrl(){let e=this.store.get("lmStudioUrl"),o=this.store.get("lmStudioPort");return`${e}:${o}`}getOllamaUrl(){let e=this.store.get("ollamaUrl"),o=this.store.get("ollamaPort");return`${e}:${o}`}reset(){this.store.clear()}},c=new I});var te={};O(te,{LMStudioService:()=>z,lmStudioService:()=>h});import{execa as U}from"execa";import Z from"ora";import ee from"chalk";var oe,z,h,L=C(()=>{"use strict";M();oe=[1234,1235,1236,1237],z=class{baseUrl;constructor(){this.baseUrl=c.getLMStudioUrl()}getApiUrl(e){return`${this.baseUrl}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){let e=["/api/v1/models","/api/models","/v1/models","/api/ls-model/list"];for(let o of e)try{let r=await fetch(this.getApiUrl(o),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!r.ok)continue;let n=await r.json(),i=[];return Array.isArray(n)?i=n:n.models&&Array.isArray(n.models)?i=n.models:n.data&&Array.isArray(n.data)&&(i=n.data),i.map(l=>{let s=l;return{id:String(s.key||s.id||s.model||""),name:String(s.display_name||s.name||s.id||s.model||""),size:Number(s.size_bytes||s.size||s.file_size||0),quantization:String(s.quantization?typeof s.quantization=="object"?s.quantization.name:s.quantization:"")}}).filter(l=>l.id&&l.name)}catch{continue}return[]}async getStatus(){if(!await this.checkStatus())return{running:!1,port:c.get("lmStudioPort"),models:[]};try{let o=await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!o.ok)return{running:!1,port:c.get("lmStudioPort"),models:[]};let r=await o.json();return{running:!0,port:c.get("lmStudioPort"),models:r.models??[],activeModel:r.active_model}}catch{return{running:!1,port:c.get("lmStudioPort"),models:[]}}}async loadModel(e,o){let r=o??Z({text:`Loading model: ${ee.cyan(e)}`,color:"cyan"}).start();try{let n=await fetch(this.getApiUrl("/api/model/load"),{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({model:e}),signal:AbortSignal.timeout(3e5)});if(!n.ok)throw new Error(`Failed to load model: ${n.statusText}`);return r.succeed(`Model ${ee.green(e)} loaded successfully`),c.setModel(e),{loadSpinner:r}}catch(n){throw r.fail(`Failed to load model: ${n instanceof Error?n.message:"Unknown error"}`),n}}async startServer(e){let o=e??Z({text:"Starting LM Studio server...",color:"cyan"}).start();try{let r=process.platform==="darwin",n=process.platform==="linux",i=process.platform==="win32",l;if(r){let s=["/Applications/LM Studio.app",`${process.env.HOME}/Applications/LM Studio.app`];for(let d of s)try{let{existsSync:u}=await import("fs");if(u(d)){l=`open "${d}" --args --server`;break}}catch{}if(l?.startsWith("open")){await U("open",[s.find(d=>{try{let{existsSync:u}=pe("fs");return u(d)}catch{return!1}})||"/Applications/LM Studio.app","--args","--server"],{detached:!0,stdio:"ignore"}),o.succeed("LM Studio server started"),await this.waitForServer(6e4);return}}else n?l=await this.findLinuxBinary():i&&(l=await this.findWindowsExecutable());if(!l)throw o.fail("LM Studio not found. Please install it from https://lmstudio.ai"),new Error("LM Studio not installed");await U(l,["--server"],{detached:!0,stdio:"ignore",env:{...process.env,LM_STUDIO_SERVER_PORT:String(c.get("lmStudioPort"))}}),o.succeed("LM Studio server started"),await this.waitForServer(6e4)}catch(r){throw o.fail(`Failed to start LM Studio: ${r instanceof Error?r.message:"Unknown error"}`),r}}async findLinuxBinary(){let e=["/usr/bin/lm-studio","/usr/local/bin/lm-studio",`${process.env.HOME}/.local/bin/lm-studio`];for(let o of e)try{return await U("test",["-x",o]),o}catch{continue}}async findWindowsExecutable(){let e=process.env.LOCALAPPDATA,o=process.env.PROGRAMFILES,r=[e?`${e}\\Programs\\LM Studio\\lm-studio.exe`:"",o?`${o}\\LM Studio\\lm-studio.exe`:""].filter(Boolean);for(let n of r)try{return await U("cmd",["/c","if exist",`"${n}"`,"echo","yes"]),n}catch{continue}}async waitForServer(e=6e4){let o=Date.now();for(;Date.now()-o<e;){if(await this.checkStatus())return!0;await this.sleep(2e3)}return!1}sleep(e){return new Promise(o=>setTimeout(o,e))}async findAvailablePort(){for(let e of oe)try{if((await fetch(`http://localhost:${e}/api/status`,{method:"GET",signal:AbortSignal.timeout(1e3)})).ok)return e}catch{return c.set("lmStudioPort",e),e}return oe[0]??1234}},h=new z});var ne={};O(ne,{OllamaService:()=>j,ollamaService:()=>S});var j,S,A=C(()=>{"use strict";M();j=class{getBaseUrl(){return c.getOllamaUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){try{let e=await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(1e4)});return e.ok?((await e.json()).models??[]).map(n=>{let i=n.details??{};return{id:String(n.name??n.model??""),name:String(n.name??n.model??""),size:Number(n.size??0),quantization:String(i.quantization_level??""),family:String(i.family??""),parameterSize:String(i.parameter_size??"")}}).filter(n=>n.id&&n.name):[]}catch{return[]}}async getRunningModels(){try{let e=await fetch(this.getApiUrl("/api/ps"),{method:"GET",signal:AbortSignal.timeout(5e3)});return e.ok?((await e.json()).models??[]).map(n=>String(n.name??n.model??"")).filter(Boolean):[]}catch{return[]}}async getStatus(){if(!await this.checkStatus())return{running:!1,port:c.get("ollamaPort"),models:[],runningModels:[]};let[o,r]=await Promise.all([this.getAvailableModels(),this.getRunningModels()]);return{running:!0,port:c.get("ollamaPort"),models:o,runningModels:r}}},S=new j});var se={};O(se,{ClaudeService:()=>q,claudeService:()=>R});import{execa as K}from"execa";import Me from"chalk";var q,R,H=C(()=>{"use strict";q=class{async isClaudeCodeInstalled(){try{return await K("claude",["--version"],{stdio:"ignore"}),!0}catch{return!1}}async run(e){let{model:o,args:r=[],verbose:n=!1}=e,i=this.extractProvider(o),l=this.extractModelName(o);if(!i||!l)throw new Error(`Invalid model format: ${o}. Expected format: provider/model-name`);let s=`${i}/${l}`,d=["--model",s,...r];n&&console.log(Me.dim(`
+Running: claude ${d.join(" ")}
+`));try{await K("claude",d,{stdio:"inherit",env:{...process.env,ANTHROPIC_MODEL:s}})}catch(u){if(u instanceof Error&&"exitCode"in u){let v=u.exitCode;process.exit(v??1)}throw u}}extractProvider(e){return e.split("/")[0]}extractModelName(e){let o=e.split("/");if(!(o.length<2))return o.slice(1).join("/")}async getVersion(){try{return(await K("claude",["--version"])).stdout}catch{return}}},R=new q});var ae={};O(ae,{OpenCodeService:()=>G,openCodeService:()=>E});import{execa as V}from"execa";import $e from"chalk";var G,E,Q=C(()=>{"use strict";G=class{async isOpenCodeInstalled(){try{return await V("opencode",["--version"],{stdio:"ignore"}),!0}catch{return!1}}async run(e){let{model:o,args:r=[],verbose:n=!1}=e,i=this.extractProvider(o),l=this.extractModelName(o);if(!i||!l)throw new Error(`Invalid model format: ${o}. Expected format: provider/model-name`);let s=["--model",o,...r];n&&console.log($e.dim(`
 Running: opencode ${s.join(" ")}
-`));try{await
-Model ready: ${
-
-
+`));try{await V("opencode",s,{stdio:"inherit",env:{...process.env,OPENCODE_MODEL_NAME:l,OPENCODE_MODEL_PROVIDER:i}})}catch(d){if(d instanceof Error&&"exitCode"in d){let u=d.exitCode;process.exit(u??1)}throw d}}extractProvider(e){return e.split("/")[0]}extractModelName(e){let o=e.split("/");if(!(o.length<2))return o.slice(1).join("/")}async getVersion(){try{return(await V("opencode",["--version"])).stdout}catch{return}}},E=new G});import{Command as Ce}from"commander";import g from"chalk";L();A();M();import F from"ora";import p from"chalk";import _ from"inquirer";import ie from"inquirer";async function $(t,e){let o=t.map(i=>({name:`${i.name} (${i.id})`,value:i,short:i.name})),r=e?o.findIndex(i=>i.value.id===e):0;return(await ie.prompt([{type:"list",name:"model",message:"Select a model to load:",choices:o,default:Math.max(0,r),pageSize:Math.min(t.length,15)}])).model}async function T(t,e=!0){return(await ie.prompt([{type:"confirm",name:"confirm",message:t,default:e}])).confirm}import re from"chalk";function b(t){if(t===0)return"0 B";let e=1024,o=["B","KB","MB","GB","TB"],r=Math.floor(Math.log(t)/Math.log(e));return`${parseFloat((t/Math.pow(e,r)).toFixed(2))} ${o[r]}`}function y(t){console.log(re.green("\u2713")+" "+t)}function ve(t){console.error(re.red("\u2717")+" "+t)}function f(t,e=1){ve(t),process.exit(e)}async function he(){let t=c.getDefaultBackend(),{backendSelection:e}=await _.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",default:t??"lmstudio",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]),{saveDefault:o}=await _.prompt([{type:"confirm",name:"saveDefault",message:"Save as default backend?",default:!1}]);return o&&(c.setDefaultBackend(e),y(`Default backend set to ${p.cyan(e)}`)),e}async function Se(){let t=c.getDefaultProvider(),{providerSelection:e}=await _.prompt([{type:"list",name:"providerSelection",message:"Select coding tool:",default:t??"claude",choices:[{name:"Claude Code",value:"claude"},{name:"OpenCode",value:"opencode"}]}]),{saveDefault:o}=await _.prompt([{type:"confirm",name:"saveDefault",message:"Save as default coding tool?",default:!1}]);return o&&(c.setDefaultProvider(e),y(`Default coding tool set to ${p.cyan(e)}`)),e}async function we(t,e){let o=F({text:"Checking LM Studio status...",color:"cyan"}).start(),r=await h.checkStatus();r||(o.info("LM Studio server not running"),o.stop(),await T("Would you like to start the LM Studio server?")||f("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await h.startServer(),r=!0),o.succeed("Connected to LM Studio");let n=F({text:"Fetching available models...",color:"cyan"}).start(),i=await h.getAvailableModels();i.length===0&&(n.fail("No models found. Download some models in LM Studio first."),f("No models available")),n.succeed(`Found ${p.bold(i.length)} model${i.length===1?"":"s"}`),console.log(),console.log(p.bold("Available Models:")),console.log(p.dim("\u2500".repeat(process.stdout.columns||80))),i.forEach((m,x)=>{let w=b(m.size),N=m.loaded?p.green(" [LOADED]"):"";console.log(` ${p.dim(String(x+1).padStart(2))}. ${m.name} ${p.dim(`(${w})`)}${N}`)}),console.log();let l=c.getLastUsedModel(),s=t.model,d=s?i.find(m=>m.id===s||m.name.includes(s)):await $(i,l);d||f("No model selected"),await h.loadModel(d.id,o);let u=d.id.replace("/","--"),v=e==="opencode"?"OpenCode":"Claude Code";y(p.bold(`
+Model ready: ${d.name}`)),console.log(),console.log(`Run ${v} with this model:`),console.log(` ${p.cyan((e==="opencode"?"opencode":"claude")+" --model lmstudio/"+u)}`),console.log(),console.log(`Or use ${p.cyan("aix-cli run")} to start an interactive session`)}async function ye(t,e){let o=F({text:"Checking Ollama status...",color:"cyan"}).start();await S.checkStatus()||(o.fail("Ollama is not running"),f("Ollama must be running. Start it with: ollama serve")),o.succeed("Connected to Ollama");let n=F({text:"Fetching available models...",color:"cyan"}).start(),i=await S.getAvailableModels();i.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),f("No models available")),n.succeed(`Found ${p.bold(i.length)} model${i.length===1?"":"s"}`);let l=await S.getRunningModels();console.log(),console.log(p.bold("Available Models:")),console.log(p.dim("\u2500".repeat(process.stdout.columns||80))),i.forEach((m,x)=>{let w=b(m.size),ue=l.includes(m.id)?p.green(" [RUNNING]"):"",me=m.parameterSize?p.dim(` ${m.parameterSize}`):"";console.log(` ${p.dim(String(x+1).padStart(2))}. ${m.name}${me} ${p.dim(`(${w})`)}${ue}`)}),console.log();let s=c.getLastUsedModel(),d=t.model,u=d?i.find(m=>m.id===d||m.name.includes(d)):await $(i,s);u||f("No model selected"),c.setModel(u.id);let v=e==="opencode"?"OpenCode":"Claude Code";y(p.bold(`
+Model selected: ${u.name}`)),console.log(),console.log(`Run ${v} with this model:`),console.log(` ${p.cyan((e==="opencode"?"opencode":"claude")+" --model ollama/"+u.id)}`),console.log(),console.log(`Or use ${p.cyan("aix-cli run")} to start an interactive session`)}async function W(t={}){let e=t.backend??await he(),o=t.provider??await Se();e==="ollama"?await ye(t,o):await we(t,o)}L();A();H();Q();M();import P from"ora";import le from"chalk";import de from"inquirer";async function be(){let t=c.getDefaultBackend();if(t)return t;let{backendSelection:e}=await de.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]);return e}async function Pe(){let t=await R.isClaudeCodeInstalled(),e=await E.isOpenCodeInstalled(),o=[];if(t&&o.push({name:"Claude Code",value:"claude"}),e&&o.push({name:"OpenCode",value:"opencode"}),o.length===0&&f("Neither Claude Code nor OpenCode is installed."),o.length===1)return o[0].value;let{providerSelection:r}=await de.prompt([{type:"list",name:"providerSelection",message:"Select coding tool:",choices:o}]);return r}function B(t){return t==="opencode"?"OpenCode":"Claude Code"}async function ke(t,e){let o=P({text:"Checking LM Studio status...",color:"cyan"}).start(),r=await h.checkStatus();r||(o.info("LM Studio server not running"),o.stop(),await T("Would you like to start the LM Studio server?")||f("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await h.startServer(),r=!0),o.succeed("Connected to LM Studio");let n=P({text:"Fetching available models...",color:"cyan"}).start(),i=await h.getAvailableModels();i.length===0&&(n.fail("No models found. Download some models in LM Studio first."),f("No models available")),n.stop();let l;if(t.model){let v=i.find(m=>m.id===t.model||m.name.toLowerCase().includes(t.model.toLowerCase()));v||f(`Model "${t.model}" not found. Available models: ${i.map(m=>m.name).join(", ")}`),l=v.id}else{let v=c.getLastUsedModel();l=(await $(i,v)).id}let s=P({text:`Loading model: ${le.cyan(l)}`,color:"cyan"}).start();await h.loadModel(l,s);let u=`lmstudio/${l.replace("/","--")}`;await ce(e,u,t)}async function xe(t,e){let o=P({text:"Checking Ollama status...",color:"cyan"}).start();await S.checkStatus()||(o.fail("Ollama is not running"),f("Ollama must be running. Start it with: ollama serve")),o.succeed("Connected to Ollama");let n=P({text:"Fetching available models...",color:"cyan"}).start(),i=await S.getAvailableModels();i.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),f("No models available")),n.stop();let l;if(t.model){let d=i.find(u=>u.id===t.model||u.name.toLowerCase().includes(t.model.toLowerCase()));d||f(`Model "${t.model}" not found. Available models: ${i.map(u=>u.name).join(", ")}`),l=d.id}else{let d=c.getLastUsedModel();l=(await $(i,d)).id}c.setModel(l);let s=`ollama/${l}`;await ce(e,s,t)}async function ce(t,e,o){let r=B(t);y(le.green(`
+Starting ${r} with model: ${e}
+`));try{t==="opencode"?await E.run({model:e,args:o.args??[],verbose:o.verbose}):await R.run({model:e,args:o.args??[],verbose:o.verbose})}catch(n){f(`Failed to run ${r}: ${n instanceof Error?n.message:"Unknown error"}`)}}async function J(t={}){let e;if(t.provider)e=t.provider;else{let i=c.getDefaultProvider();i?e=i:e=await Pe()}let o=P({text:`Checking ${B(e)} installation...`,color:"cyan"}).start();(e==="opencode"?await E.isOpenCodeInstalled():await R.isClaudeCodeInstalled())||(o.fail(`${B(e)} is not installed.`),f(`Please install ${B(e)} first.`)),o.succeed(`${B(e)} is installed`),(t.backend??await be())==="ollama"?await xe(t,e):await ke(t,e)}L();A();import a from"chalk";async function X(){let[t,e]=await Promise.all([h.getStatus(),S.getStatus()]);console.log(),console.log(a.bold("LM Studio")),console.log(a.dim("\u2500".repeat(50))),console.log(` ${t.running?a.green("\u25CF"):a.red("\u25CB")} Server: ${t.running?a.green("Running"):a.red("Stopped")}`),console.log(` ${a.dim("\u25B8")} Port: ${a.cyan(String(t.port))}`),console.log(` ${a.dim("\u25B8")} URL: ${a.cyan(`http://localhost:${t.port}`)}`),t.activeModel&&console.log(` ${a.dim("\u25B8")} Active Model: ${a.green(t.activeModel)}`),t.running&&t.models.length>0?(console.log(),console.log(a.bold(" Models")),t.models.forEach((o,r)=>{let n=b(o.size),i=o.id===t.activeModel?` ${a.green("[LOADED]")}`:"";console.log(` ${a.dim(String(r+1)+".")} ${o.name}${i}`),console.log(` ${a.dim("ID:")} ${o.id}`),console.log(` ${a.dim("Size:")} ${n}`),o.quantization&&console.log(` ${a.dim("Quantization:")} ${o.quantization}`)})):t.running&&console.log(` ${a.dim("No models available")}`),console.log(),console.log(a.bold("Ollama")),console.log(a.dim("\u2500".repeat(50))),console.log(` ${e.running?a.green("\u25CF"):a.red("\u25CB")} Server: ${e.running?a.green("Running"):a.red("Stopped")}`),console.log(` ${a.dim("\u25B8")} Port: ${a.cyan(String(e.port))}`),console.log(` ${a.dim("\u25B8")} URL: ${a.cyan(`http://localhost:${e.port}`)}`),e.running&&e.runningModels.length>0&&console.log(` ${a.dim("\u25B8")} Running: ${a.green(e.runningModels.join(", "))}`),e.running&&e.models.length>0?(console.log(),console.log(a.bold(" Models")),e.models.forEach((o,r)=>{let n=b(o.size),l=e.runningModels.includes(o.id)?` ${a.green("[RUNNING]")}`:"",s=o.parameterSize?` ${a.dim(o.parameterSize)}`:"";console.log(` ${a.dim(String(r+1)+".")} ${o.name}${s}${l}`),console.log(` ${a.dim("Size:")} ${n}`),o.family&&console.log(` ${a.dim("Family:")} ${o.family}`),o.quantization&&console.log(` ${a.dim("Quantization:")} ${o.quantization}`)})):e.running&&console.log(` ${a.dim("No models available")}`),console.log()}var k=new Ce;k.name("aix-cli").description("Run Claude Code or OpenCode with local AI models from LM Studio or Ollama").version("3.0.0").showHelpAfterError();function D(t=0){console.log(),console.log(g.dim(t===0?"\u{1F44B} Goodbye!":"\u274C Cancelled.")),process.exit(t)}process.on("SIGINT",()=>D(0));process.on("SIGTERM",()=>D(0));process.on("uncaughtException",t=>{t.message?.includes("ExitPromptError")||t.message?.includes("User force closed")||t.message?.includes("prompt")?D(0):(console.error(g.red("Error:"),t.message),process.exit(1))});process.on("unhandledRejection",t=>{let e=String(t);(e.includes("ExitPromptError")||e.includes("User force closed")||e.includes("prompt"))&&D(0)});k.command("init",{isDefault:!1}).aliases(["i","load"]).description("Select a backend, load a model, and configure your provider").option("-m, --model <name>","Model name or ID to load","").option("-p, --provider <provider>","Coding tool to use (claude or opencode)","").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)","").action(W);k.command("run",{isDefault:!1}).aliases(["r"]).description("Run Claude Code or OpenCode with a model from LM Studio or Ollama").option("-m, --model <name>","Model name or ID to use","").option("-p, --provider <provider>","Coding tool to use (claude or opencode)","").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)","").option("-v, --verbose","Show verbose output").argument("[args...]","Additional arguments for the provider").action(async(t,e)=>{await J({...e,args:t})});k.command("status",{isDefault:!1}).aliases(["s","stats"]).description("Show LM Studio and Ollama status and available models").action(X);k.command("doctor",{isDefault:!1}).aliases(["d","check"]).description("Check system requirements and configuration").action(async()=>{let{lmStudioService:t}=await Promise.resolve().then(()=>(L(),te)),{ollamaService:e}=await Promise.resolve().then(()=>(A(),ne)),{claudeService:o}=await Promise.resolve().then(()=>(H(),se)),{openCodeService:r}=await Promise.resolve().then(()=>(Q(),ae)),{configService:n}=await Promise.resolve().then(()=>(M(),Y));console.log(g.bold.cyan("\u{1F527} AIX CLI System Check")),console.log(g.dim("\u2500".repeat(40)));let[i,l,s,d]=await Promise.all([t.checkStatus(),e.checkStatus(),o.isClaudeCodeInstalled(),r.isOpenCodeInstalled()]),u=n.getDefaultProvider(),v=n.getDefaultBackend(),m=n.get("lmStudioPort"),x=n.get("ollamaPort");console.log(),console.log(g.bold("Backends")),console.log(` ${i?"\u2705":"\u26A0\uFE0F"} LM Studio: ${i?g.green("Running"):g.yellow("Not running")} ${g.dim(`(port ${m})`)}`),console.log(` ${l?"\u2705":"\u26A0\uFE0F"} Ollama: ${l?g.green("Running"):g.yellow("Not running")} ${g.dim(`(port ${x})`)}`),console.log(),console.log(g.bold("Coding Tools")),console.log(` ${s?"\u2705":"\u274C"} Claude Code: ${s?g.green("Installed"):g.red("Not installed")}`),console.log(` ${d?"\u2705":"\u274C"} OpenCode: ${d?g.green("Installed"):g.red("Not installed")}`),console.log(),console.log(g.bold("Defaults")),console.log(` \u{1F4CC} Backend: ${g.cyan(v??"not set")}`),console.log(` \u{1F4CC} Coding tool: ${g.cyan(u??"not set")}`);let w=[];s||w.push(` \u2192 ${g.cyan("npm install -g @anthropic-ai/claude-code")}`),d||w.push(` \u2192 ${g.cyan("npm install -g opencode")}`),!i&&!l&&w.push(` \u2192 Start LM Studio or run ${g.cyan("ollama serve")}`),w.length>0&&(console.log(),console.log(g.bold("\u{1F4CB} Next Steps:")),w.forEach(N=>console.log(N))),console.log()});k.parse();
 //# sourceMappingURL=aix.js.map
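One detail worth pulling out of the bundle above: before launching either tool, both provider services split the combined model spec (`lmstudio/llama-3-8b`, `ollama/qwen2.5-coder:14b`) into a provider and a model name, and reject anything that does not match `provider/model-name`. A readable sketch of the minified `extractProvider`/`extractModelName` pair (`parseModelSpec` is an illustrative name, not a package export):

```ts
// Split "ollama/qwen2.5-coder:14b" → { provider: "ollama", model: "qwen2.5-coder:14b" }.
// Everything after the first "/" belongs to the model, since model IDs
// can themselves contain slashes.
function parseModelSpec(spec: string): { provider: string; model: string } {
  const [provider, ...rest] = spec.split("/");
  const model = rest.join("/");
  if (!provider || !model) {
    // Same error text the bundle throws
    throw new Error(`Invalid model format: ${spec}. Expected format: provider/model-name`);
  }
  return { provider, model };
}
```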