@iamharshil/aix-cli 3.3.5 → 3.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +57 -58
- package/dist/bin/aix.js +10 -9
- package/dist/bin/aix.js.map +4 -4
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/run.d.ts +1 -0
- package/dist/commands/run.d.ts.map +1 -1
- package/dist/index.js +10 -9
- package/dist/index.js.map +4 -4
- package/dist/services/claude.d.ts.map +1 -1
- package/dist/services/lmstudio.d.ts +1 -1
- package/dist/services/lmstudio.d.ts.map +1 -1
- package/dist/types/index.d.ts +1 -1
- package/dist/types/index.d.ts.map +1 -1
- package/package.json +2 -3
- package/dist/services/opencode.d.ts +0 -14
- package/dist/services/opencode.d.ts.map +0 -1
package/README.md
CHANGED
@@ -2,7 +2,7 @@
 
 # AIX CLI
 
-**Run Claude Code
+**Run Claude Code with local AI models. No API keys. No cloud. Complete privacy.**
 
 [](https://www.npmjs.com/package/@iamharshil/aix-cli)
 [](https://www.npmjs.com/package/@iamharshil/aix-cli)
@@ -18,7 +18,7 @@
 
 ## What is AIX?
 
-AIX CLI is a bridge between local model servers and AI coding assistants. It connects [LM Studio](https://lmstudio.ai) or [Ollama](https://ollama.com) to [Claude Code](https://docs.anthropic.com/en/docs/claude-code)
+AIX CLI is a bridge between local model servers and AI coding assistants. It connects [LM Studio](https://lmstudio.ai) or [Ollama](https://ollama.com) to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — letting you use **locally-running language models** as the backend for your favorite AI dev tools.
 
 No API keys. No cloud calls. No data leaving your machine.
 
@@ -40,16 +40,18 @@ No API keys. No cloud calls. No data leaving your machine.
 - 🔒 **Privacy-first** — All inference runs locally on your hardware. Your code never leaves your machine.
 - 🔑 **No API keys** — No subscriptions, no usage limits, no cloud dependencies.
 - 🚀 **GPU-accelerated** — Take advantage of your local GPU for fast inference.
-- 🔀 **
-
-- ⚡ **Zero config** — Just run `aix-cli run` and start coding. OpenCode is launched using an OpenAI-compatible endpoint with `openai-compatible/<model>` so you don't need to manually register models in `opencode.json`.
+- 🔀 **Single provider** — Claude Code is the only supported AI coding assistant.
+- ⚡ **Zero config** — Just run `aix-cli run` and start coding.
 
 ### Compatibility notes
 
-
+- **LM Studio**: Works natively with Claude Code via Anthropic-compatible API at `/v1`.
+- **Ollama**: Use the `--native` flag to leverage Ollama's built-in `ollama launch claude` integration, which handles all API configuration automatically.
 
-
-
+```bash
+# Recommended for Ollama
+aix-cli run --ollama --native -m qwen2.5-coder:14b
+```
 
 ---
 
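Per the bundled `dist/bin/aix.js` later in this diff, the `--native` path simply delegates to Ollama's own launcher rather than wiring up an Anthropic-compatible endpoint itself. A sketch of the manual equivalent, assuming your Ollama build ships the `launch` subcommand:

```shell
# Manual equivalent of `aix-cli run --ollama --native -m <model>`,
# reconstructed from the bundled source in this diff.
model="qwen2.5-coder:14b"
cmd="ollama launch claude --model $model"
echo "native mode delegates to: $cmd"
```

Running `$cmd` by hand (with Ollama installed) should behave the same as the recommended `aix-cli run --ollama --native` invocation above.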
@@ -57,11 +59,11 @@ Both providers can be launched with either backend, but compatibility ultimately
 
 ### Prerequisites
 
-| Requirement
-|
-| [Node.js](https://nodejs.org/) ≥ 18
+| Requirement                                                          | Description                                |
+| -------------------------------------------------------------------- | ------------------------------------------ |
+| [Node.js](https://nodejs.org/) ≥ 18                                  | JavaScript runtime                         |
 | [LM Studio](https://lmstudio.ai) **or** [Ollama](https://ollama.com) | Local model server (at least one required) |
-| [Claude Code](https://docs.anthropic.com/en/docs/claude-code)
+| [Claude Code](https://docs.anthropic.com/en/docs/claude-code)        | AI coding assistant                        |
 
 ### Install
 
@@ -101,7 +103,7 @@ npm link
 aix-cli doctor
 ```
 
-This checks that LM Studio / Ollama, Claude Code
+This checks that LM Studio / Ollama, Claude Code, and your environment are properly configured.
 
 ---
 
@@ -109,19 +111,21 @@ This checks that LM Studio / Ollama, Claude Code / OpenCode, and your environmen
 
 ### `aix-cli run` — Start a coding session
 
-The primary command. Launches Claude Code
+The primary command. Launches Claude Code backed by a local model.
 
 ```bash
-# Interactive — prompts for backend
+# Interactive — prompts for backend and model
 aix-cli run
 
 # Specify backend and model
 aix-cli run -b ollama -m qwen2.5-coder:14b
 aix-cli run -b lmstudio -m llama-3-8b
 
-#
-aix-cli run -
-
+# Use Ollama's native Claude Code integration (recommended for Ollama)
+aix-cli run -b ollama --native -m qwen2.5-coder:14b
+
+# Global shortcuts
+aix-cli run --ollama --native -m gemma4
 
 # Pass a prompt directly
 aix-cli run -b ollama -m qwen2.5-coder:14b -- "Refactor auth middleware"
@@ -155,25 +159,25 @@ aix-cli doctor
 
 ### Command Reference
 
-| Command
-|
-| `run`
-| `init`
-| `status` | `s`, `stats`
-| `doctor` | `d`, `check`
-| `update` | `upgrade`, `u`
-| `config` | `c`, `settings
+| Command  | Aliases         | Description                                      |
+| -------- | --------------- | ------------------------------------------------ |
+| `run`    | `r`             | Run Claude Code with a local model               |
+| `init`   | `i`, `load`     | Set up backend, select model, configure provider |
+| `status` | `s`, `stats`    | Show LM Studio & Ollama status                   |
+| `doctor` | `d`, `check`    | Run system diagnostics                           |
+| `update` | `upgrade`, `u`  | Update AIX CLI to the latest version             |
+| `config` | `c`, `settings` | View, set, or reset CLI configurations           |
 
 ### Global Options
 
-| Flag
-|
-| `-b, --backend <name>` | Model backend: `lmstudio` or `ollama`
-| `-m, --model <name>`
-| `-
-| `-v, --verbose`
-| `-h, --help`
-| `-V, --version`
+| Flag                   | Description                                 |
+| ---------------------- | ------------------------------------------- |
+| `-b, --backend <name>` | Model backend: `lmstudio` or `ollama`       |
+| `-m, --model <name>`   | Model name or ID to use                     |
+| `-n, --native`         | Use Ollama's native Claude Code integration |
+| `-v, --verbose`        | Show verbose output                         |
+| `-h, --help`           | Show help                                   |
+| `-V, --version`        | Show version                                |
 
 ---
 
@@ -181,11 +185,11 @@ aix-cli doctor
 
 AIX stores its configuration in the OS-appropriate config directory:
 
-| Platform | Path
-|
-| macOS
-| Linux
-| Windows
+| Platform | Path                                     |
+| -------- | ---------------------------------------- |
+| macOS    | `~/Library/Application Support/aix-cli/` |
+| Linux    | `~/.config/aix-cli/`                     |
+| Windows  | `%APPDATA%\aix-cli\`                     |
 
 ### Config File
 
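The bundled source shows how `config set` stores values: `true`/`false` become booleans, numeric strings become numbers, and anything else stays a string. A rough shell sketch of that coercion rule (integer-only here for simplicity; the real code uses JavaScript's `Number()`):

```shell
# Classify a `config set` value roughly the way the bundled source does.
coerce() {
  case "$1" in
    true|false)  echo "bool:$1" ;;
    ''|*[!0-9]*) echo "string:$1" ;;
    *)           echo "number:$1" ;;
  esac
}

coerce 1240    # number:1240
coerce ollama  # string:ollama
```

So `aix-cli config set lmStudioPort 1240` stores a number, while `aix-cli config set defaultBackend ollama` stores a string.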
@@ -204,9 +208,9 @@ AIX stores its configuration in the OS-appropriate config directory:
 
 ### Environment Variables
 
-| Variable
-|
-| `LM_STUDIO_PORT` | Override the LM Studio server port | `1234`
+| Variable         | Description                        | Default |
+| ---------------- | ---------------------------------- | ------- |
+| `LM_STUDIO_PORT` | Override the LM Studio server port | `1234`  |
 
 ---
 
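A small sketch of how the `LM_STUDIO_PORT` override resolves; the helper name is ours, only the variable and the `1234` default come from the table above:

```shell
# Environment override wins; otherwise fall back to the documented default.
lmstudio_port() {
  echo "${LM_STUDIO_PORT:-1234}"
}

echo "LM Studio expected at http://localhost:$(lmstudio_port)"
```

For example, `LM_STUDIO_PORT=1240 aix-cli run -b lmstudio -m llama-3-8b` targets a server on port 1240.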
@@ -227,19 +231,18 @@ AIX stores its configuration in the OS-appropriate config directory:
 │ backend routing │
 │ model selection │
 │ config mgmt │
-
-
-
-
-
-
-
-└──────────────┘ └──────────────┘
+└────────┬──────────┘
+         │
+         ▼
+  ┌──────────────┐
+  │ Claude Code  │
+  │  --model X   │
+  └──────────────┘
 ```
 
 1. **LM Studio** or **Ollama** runs a local inference server with an OpenAI-compatible API.
 2. **AIX CLI** discovers available models, manages configuration, and orchestrates the connection.
-3. **Claude Code**
+3. **Claude Code** receives the model endpoint and runs as it normally would — except fully local.
 
 ---
 
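Step 3 can be made concrete from the bundled `dist/bin/aix.js` in this diff: AIX spawns `claude` with Anthropic environment variables pointed at the local server. A hedged manual equivalent, with illustrative values (AIX derives the real ones from its config):

```shell
# Reconstructed from the bundled source: point Claude Code's Anthropic
# env vars at the local server, then run claude with the chosen model.
export ANTHROPIC_BASE_URL="http://localhost:1234"  # LM Studio default port
export ANTHROPIC_MODEL="llama-3-8b"                # illustrative model name
export ANTHROPIC_AUTH_TOKEN="lmstudio"             # placeholder, not a real key
export ANTHROPIC_API_KEY=""                        # deliberately blank

echo "would run: claude --model $ANTHROPIC_MODEL"
```

The blank `ANTHROPIC_API_KEY` is intentional: with a local endpoint there is no real key to present, and the placeholder token only satisfies the client's auth plumbing.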
@@ -280,6 +283,7 @@ Then run `aix-cli init` to select and configure.
 <summary><strong>Connection refused</strong></summary>
 
 Check that the correct port is being used:
+
 - LM Studio defaults to port `1234`
 - Ollama defaults to port `11434`
 
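The defaults can be probed by hand. The endpoint paths below are the ones the bundled source polls (`/api/status` for LM Studio, `/api/tags` for Ollama); the helper name and the `OLLAMA_PORT` variable are ours for illustration:

```shell
# Build each backend's health-check URL from the documented default ports.
backend_status_url() {
  case "$1" in
    lmstudio) echo "http://localhost:${LM_STUDIO_PORT:-1234}/api/status" ;;
    ollama)   echo "http://localhost:${OLLAMA_PORT:-11434}/api/tags" ;;
  esac
}

# Manual probe, e.g.: curl -fsS --max-time 3 "$(backend_status_url ollama)"
echo "LM Studio: $(backend_status_url lmstudio)"
echo "Ollama:    $(backend_status_url ollama)"
```

A non-2xx response or a timeout from either URL points at the wrong port or a stopped server.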
@@ -288,16 +292,12 @@ You can configure custom ports in your AIX config file (path shown by `aix-cli d
 </details>
 
 <details>
-<summary><strong>Claude Code
+<summary><strong>Claude Code not detected</strong></summary>
 
-Install
+Install Claude Code globally:
 
 ```bash
-# Claude Code
 npm install -g @anthropic-ai/claude-code
-
-# OpenCode
-npm install -g opencode-ai
 ```
 
 Then re-run `aix-cli doctor` to confirm.
@@ -340,7 +340,6 @@ npm run lint # Lint
 - [LM Studio](https://lmstudio.ai) — Run local AI models with a visual interface
 - [Ollama](https://ollama.com) — Run large language models locally
 - [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — Anthropic's AI coding assistant
-- [OpenCode](https://opencode.ai) — Open-source AI coding assistant
 
 ## License
 
package/dist/bin/aix.js
CHANGED
@@ -1,11 +1,12 @@
 #!/usr/bin/env node
-var
-Running: claude ${
-`));let m=
-
-`))
-
-
-Starting
-`))
+var ge=Object.defineProperty;var fe=(o=>typeof require<"u"?require:typeof Proxy<"u"?new Proxy(o,{get:(e,t)=>(typeof require<"u"?require:e)[t]}):o)(function(o){if(typeof require<"u")return require.apply(this,arguments);throw Error('Dynamic require of "'+o+'" is not supported')});var B=(o,e)=>()=>(o&&(e=o(o=0)),e);var N=(o,e)=>{for(var t in e)ge(o,t,{get:e[t],enumerable:!0})};var ne={};N(ne,{ConfigService:()=>D,configService:()=>l});import pe from"conf";var D,l,M=B(()=>{"use strict";D=class{store;constructor(){this.store=new pe({projectName:"aix",defaults:{lmStudioUrl:"http://localhost",lmStudioPort:1234,lmStudioContextLength:65536,ollamaUrl:"http://localhost",ollamaPort:11434,defaultTimeout:3e4,autoStartServer:!1},clearInvalidConfig:!0})}get(e){return this.store.get(e)}set(e,t){this.store.set(e,t)}delete(e){this.store.delete(e)}setModel(e){this.store.set("model",e)}getLastUsedModel(){return this.store.get("model")}setDefaultProvider(e){this.store.set("defaultProvider",e)}getDefaultProvider(){return this.store.get("defaultProvider")}setDefaultBackend(e){this.store.set("defaultBackend",e)}getDefaultBackend(){return this.store.get("defaultBackend")}getLMStudioUrl(){let e=this.store.get("lmStudioUrl"),t=this.store.get("lmStudioPort");return`${e}:${t}`}getOllamaUrl(){let e=this.store.get("ollamaUrl"),t=this.store.get("ollamaPort");return`${e}:${t}`}reset(){this.store.clear()}},l=new D});var re={};N(re,{LMStudioService:()=>z,lmStudioService:()=>S});import{execa as I}from"execa";import ie from"ora";import W from"chalk";var he,z,S,L=B(()=>{"use strict";M();he=[1234,1235,1236,1237],z=class{constructor(){}getBaseUrl(){return l.getLMStudioUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){let e=["/api/v1/models","/api/models","/v1/models","/api/ls-model/list"];for(let t of e)try{let r=await
fetch(this.getApiUrl(t),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!r.ok)continue;let n=await r.json(),i=[];return Array.isArray(n)?i=n:n.models&&Array.isArray(n.models)?i=n.models:n.data&&Array.isArray(n.data)&&(i=n.data),i.map(a=>{let s=a;return{id:String(s.key||s.id||s.model||""),name:String(s.display_name||s.name||s.id||s.model||""),size:Number(s.size_bytes||s.size||s.file_size||0),quantization:String(s.quantization?typeof s.quantization=="object"?s.quantization.name:s.quantization:"")}}).filter(a=>a.id&&a.name)}catch{continue}return[]}async getStatus(){if(!await this.checkStatus())return{running:!1,port:l.get("lmStudioPort"),models:[]};try{let t=await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!t.ok)return{running:!1,port:l.get("lmStudioPort"),models:[]};let r=await t.json();return{running:!0,port:l.get("lmStudioPort"),models:r.models??[],activeModel:r.active_model}}catch{return{running:!1,port:l.get("lmStudioPort"),models:[]}}}async loadModel(e,t){let r=t??ie({text:`Loading model: ${W.cyan(e)}`,color:"cyan"}).start();try{let n=l.get("lmStudioContextLength"),i=["/api/v1/models/load","/api/model/load"],a=async(m,c)=>{let u={model:e};c&&(u.context_length=n);let g=await fetch(this.getApiUrl(m),{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify(u),signal:AbortSignal.timeout(3e5)});if(g.ok)return;let v="";try{v=await g.text()}catch{v=""}let k=v?` ${v}`:"";throw new Error(`Failed to load model: ${g.status} ${g.statusText}${k}`)},s;for(let m of i)try{return await a(m,!0),r.succeed(`Model ${W.green(e)} loaded successfully`),l.setModel(e),{loadSpinner:r}}catch(c){s=c;let u=c instanceof Error?c.message:String(c);if(/context|token|max.*(context|token)|too.*large/i.test(u))try{return r.warn(`Model load failed with context_length=${n}. 
Retrying with LM Studio defaults...`),await a(m,!1),r.succeed(`Model ${W.green(e)} loaded successfully`),l.setModel(e),{loadSpinner:r}}catch(v){s=v}}throw s instanceof Error?s:new Error(String(s))}catch(n){throw r.fail(`Failed to load model: ${n instanceof Error?n.message:"Unknown error"}`),n}}async startServer(e){let t=e??ie({text:"Starting LM Studio server...",color:"cyan"}).start();try{let r=process.platform==="darwin",n=process.platform==="linux",i=process.platform==="win32",a;if(r){let s=["/Applications/LM Studio.app",`${process.env.HOME}/Applications/LM Studio.app`];for(let m of s)try{let{existsSync:c}=await import("fs");if(c(m)){a=`open "${m}" --args --server`;break}}catch{}if(a?.startsWith("open")){await I("open",[s.find(m=>{try{let{existsSync:c}=fe("fs");return c(m)}catch{return!1}})||"/Applications/LM Studio.app","--args","--server"],{detached:!0,stdio:"ignore"}),t.succeed("LM Studio server started"),await this.waitForServer(6e4);return}}else n?a=await this.findLinuxBinary():i&&(a=await this.findWindowsExecutable());if(!a)throw t.fail("LM Studio not found. 
Please install it from https://lmstudio.ai"),new Error("LM Studio not installed");await I(a,["--server"],{detached:!0,stdio:"ignore",env:{...process.env,LM_STUDIO_SERVER_PORT:String(l.get("lmStudioPort"))}}),t.succeed("LM Studio server started"),await this.waitForServer(6e4)}catch(r){throw t.fail(`Failed to start LM Studio: ${r instanceof Error?r.message:"Unknown error"}`),r}}async findLinuxBinary(){let e=["/usr/bin/lm-studio","/usr/local/bin/lm-studio",`${process.env.HOME}/.local/bin/lm-studio`];for(let t of e)try{return await I("test",["-x",t]),t}catch{continue}}async findWindowsExecutable(){let e=process.env.LOCALAPPDATA,t=process.env.PROGRAMFILES,r=[e?`${e}\\Programs\\LM Studio\\lm-studio.exe`:"",t?`${t}\\LM Studio\\lm-studio.exe`:""].filter(Boolean);for(let n of r)try{return await I("cmd",["/c","if exist",`"${n}"`,"echo","yes"]),n}catch{continue}}async waitForServer(e=6e4){let t=Date.now();for(;Date.now()-t<e;){if(await this.checkStatus())return!0;await this.sleep(2e3)}return!1}sleep(e){return new Promise(t=>setTimeout(t,e))}async findAvailablePort(){for(let e of he)try{if((await fetch(`http://localhost:${e}/api/status`,{method:"GET",signal:AbortSignal.timeout(1e3)})).ok)return l.set("lmStudioPort",e),e}catch{continue}return l.get("lmStudioPort")}},S=new z});var ae={};N(ae,{OllamaService:()=>T,ollamaService:()=>b});var T,b,A=B(()=>{"use strict";M();T=class{getBaseUrl(){return l.getOllamaUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){try{let e=await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(1e4)});return e.ok?((await e.json()).models??[]).map(n=>{let 
i=n.details??{};return{id:String(n.name??n.model??""),name:String(n.name??n.model??""),size:Number(n.size??0),quantization:String(i.quantization_level??""),family:String(i.family??""),parameterSize:String(i.parameter_size??"")}}).filter(n=>n.id&&n.name):[]}catch{return[]}}async getRunningModels(){try{let e=await fetch(this.getApiUrl("/api/ps"),{method:"GET",signal:AbortSignal.timeout(5e3)});return e.ok?((await e.json()).models??[]).map(n=>String(n.name??n.model??"")).filter(Boolean):[]}catch{return[]}}async getStatus(){if(!await this.checkStatus())return{running:!1,port:l.get("ollamaPort"),models:[],runningModels:[]};let[t,r]=await Promise.all([this.getAvailableModels(),this.getRunningModels()]);return{running:!0,port:l.get("ollamaPort"),models:t,runningModels:r}}},b=new T});var le={};N(le,{ClaudeService:()=>K,claudeService:()=>O});import{execa as J}from"execa";import be from"chalk";var K,O,Q=B(()=>{"use strict";M();K=class{async isClaudeCodeInstalled(){try{return await J("claude",["--version"],{stdio:"ignore"}),!0}catch{return!1}}async run(e){let{model:t,args:r=[],verbose:n=!1}=e,i=this.extractProvider(t),a=this.extractModelName(t);if(!i||!a)throw new Error(`Invalid model format: ${t}. Expected format: provider/model-name`);let s=["--model",a,...r];n&&console.log(be.dim(`
+Running: claude ${s.join(" ")}
+`));let m=i==="ollama"?l.getOllamaUrl():l.getLMStudioUrl(),c=i==="ollama"?"ollama":"lmstudio",u=`${m}/v1/messages`;try{let g=await fetch(u,{method:"POST",headers:{"Content-Type":"application/json","x-api-key":c},body:JSON.stringify({model:a,max_tokens:1,messages:[{role:"user",content:"test"}]}),signal:AbortSignal.timeout(3e3)});if(!g.ok&&g.status>=500)throw new Error(`HTTP ${g.status} ${g.statusText}`)}catch(g){let v=g instanceof Error?g.message:String(g),k=i==="ollama"?"Claude Code requires an Anthropic-compatible API. Ollama does not support this. Use OpenCode for Ollama, or switch to LM Studio.":"Ensure LM Studio server is running and the Anthropic Compatibility server is enabled. Check the Developer tab in LM Studio.";throw new Error(`Claude Code could not reach an Anthropic-compatible endpoint at ${u} (${v}). ${k}`)}try{await J("claude",s,{stdio:"inherit",env:{...process.env,ANTHROPIC_MODEL:a,ANTHROPIC_BASE_URL:m,ANTHROPIC_AUTH_TOKEN:c,ANTHROPIC_API_KEY:""}})}catch(g){if(g instanceof Error&&"exitCode"in g){let v=g.exitCode;process.exit(v??1)}throw g}}extractProvider(e){return e.split("/")[0]}extractModelName(e){let t=e.split("/");if(!(t.length<2))return t.slice(1).join("/")}async getVersion(){try{return(await J("claude",["--version"])).stdout}catch{return}}},O=new K});import{Command as Ee}from"commander";import p from"chalk";L();A();M();import F from"ora";import h from"chalk";import q from"inquirer";import se from"inquirer";async function P(o,e){let t=o.map(i=>({name:`${i.name} (${i.id})`,value:i,short:i.name})),r=e?t.findIndex(i=>i.value.id===e):0;return(await se.prompt([{type:"list",name:"model",message:"Select a model to load:",choices:t,default:Math.max(0,r),pageSize:Math.min(o.length,15)}])).model}async function j(o,e=!0){return(await se.prompt([{type:"confirm",name:"confirm",message:o,default:e}])).confirm}import H from"chalk";function x(o){if(o===0)return"0 B";let
e=1024,t=["B","KB","MB","GB","TB"],r=Math.floor(Math.log(o)/Math.log(e));return`${parseFloat((o/Math.pow(e,r)).toFixed(2))} ${t[r]}`}function w(o){console.log(H.green("\u2713")+" "+o)}function V(o){console.error(H.red("\u2717")+" "+o)}function _(o){console.log(H.blue("\u2139")+" "+o)}function f(o,e=1){V(o),process.exit(e)}async function ve(){let o=l.getDefaultBackend(),{backendSelection:e}=await q.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",default:o??"lmstudio",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]),{saveDefault:t}=await q.prompt([{type:"confirm",name:"saveDefault",message:"Save as default backend?",default:!1}]);return t&&(l.setDefaultBackend(e),w(`Default backend set to ${h.cyan(e)}`)),e}async function Se(o){let e=l.getDefaultProvider(),t=[{name:"Claude Code",value:"claude"}],r=e??"claude",{providerSelection:n}=await q.prompt([{type:"list",name:"providerSelection",message:"Select coding tool:",default:r,choices:t}]),{saveDefault:i}=await q.prompt([{type:"confirm",name:"saveDefault",message:"Save as default coding tool?",default:!1}]);return i&&(l.setDefaultProvider(n),w(`Default coding tool set to ${h.cyan(n)}`)),n}async function ye(o,e){let t=F({text:"Checking LM Studio status...",color:"cyan"}).start(),r=await S.checkStatus();r||(t.info("LM Studio server not running"),t.stop(),await j("Would you like to start the LM Studio server?")||f("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await S.startServer(),r=!0),t.succeed("Connected to LM Studio");let n=F({text:"Fetching available models...",color:"cyan"}).start(),i=await S.getAvailableModels();i.length===0&&(n.fail("No models found. 
Download some models in LM Studio first."),f("No models available")),n.succeed(`Found ${h.bold(i.length)} model${i.length===1?"":"s"}`),console.log(),console.log(h.bold("Available Models:")),console.log(h.dim("\u2500".repeat(process.stdout.columns||80))),i.forEach((u,g)=>{let v=x(u.size),k=u.loaded?h.green(" [LOADED]"):"";console.log(` ${h.dim(String(g+1).padStart(2))}. ${u.name} ${h.dim(`(${v})`)}${k}`)}),console.log();let a=l.getLastUsedModel(),s=o.model,m=s?i.find(u=>u.id===s||u.name.includes(s)):await P(i,a);m||f("No model selected"),await S.loadModel(m.id,t);let c=m.id;w(h.bold(`
+Model ready: ${m.name}`)),console.log(),console.log("Start your interactive coding session:"),console.log(` ${h.cyan(`aix-cli run --provider ${e} --backend lmstudio --model ${c}`)}`),console.log()}async function we(o,e){let t=F({text:"Checking Ollama status...",color:"cyan"}).start();await b.checkStatus()||(t.fail("Ollama is not running"),f("Ollama must be running. Start it with: ollama serve")),t.succeed("Connected to Ollama");let n=F({text:"Fetching available models...",color:"cyan"}).start(),i=await b.getAvailableModels();i.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),f("No models available")),n.succeed(`Found ${h.bold(i.length)} model${i.length===1?"":"s"}`);let a=await b.getRunningModels();console.log(),console.log(h.bold("Available Models:")),console.log(h.dim("\u2500".repeat(process.stdout.columns||80))),i.forEach((u,g)=>{let v=x(u.size),ue=a.includes(u.id)?h.green(" [RUNNING]"):"",me=u.parameterSize?h.dim(` ${u.parameterSize}`):"";console.log(` ${h.dim(String(g+1).padStart(2))}. ${u.name}${me} ${h.dim(`(${v})`)}${ue}`)}),console.log();let s=l.getLastUsedModel(),m=o.model,c=m?i.find(u=>u.id===m||u.name.includes(m)):await P(i,s);c||f("No model selected"),l.setModel(c.id),w(h.bold(`
+Model selected: ${c.name}`)),console.log(),console.log("Start your interactive coding session:"),console.log(` ${h.cyan(`aix-cli run --provider ${e} --backend ollama --model ${c.id}`)}`),console.log()}async function X(o={}){let e=o.backend??await ve(),t=o.provider??await Se(e);e==="ollama"?await we(o,t):await ye(o,t)}L();A();Q();M();import C from"ora";import G from"chalk";import $e from"inquirer";import{execa as Me}from"execa";async function ke(o){let e=l.getDefaultBackend();if(e)return e;let{backendSelection:t}=await $e.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]);return t}async function Pe(){return await O.isClaudeCodeInstalled()||f("Claude Code is not installed."),"claude"}function R(o){return"Claude Code"}async function xe(o,e){let t=C({text:"Checking LM Studio status...",color:"cyan"}).start();await S.findAvailablePort();let r=await S.checkStatus();r||(t.info("LM Studio server not running"),t.stop(),await j("Would you like to start the LM Studio server?")||f("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await S.startServer(),r=!0),t.succeed("Connected to LM Studio");let n=C({text:"Fetching available models...",color:"cyan"}).start(),i=await S.getAvailableModels();i.length===0&&(n.fail("No models found. Download some models in LM Studio first."),f("No models available")),n.stop();let a;if(o.model){let c=i.find(u=>u.id===o.model||u.name.toLowerCase().includes(o.model.toLowerCase()));c||f(`Model "${o.model}" not found.
Available models: ${i.map(u=>u.name).join(", ")}`),a=c.id}else{let c=l.getLastUsedModel();a=(await P(i,c)).id}let s=C({text:`Loading model: ${G.cyan(a)}`,color:"cyan"}).start();await S.loadModel(a,s);let m=`lmstudio/${a}`;await de(e,m,o)}async function Ce(o,e){let t=C({text:"Checking Ollama status...",color:"cyan"}).start();await b.checkStatus()||(t.fail("Ollama is not running"),f("Ollama must be running. Start it with: ollama serve")),t.succeed("Connected to Ollama");let n=C({text:"Fetching available models...",color:"cyan"}).start(),i=await b.getAvailableModels();i.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),f("No models available")),n.stop();let a;if(o.model){let m=i.find(c=>c.id===o.model||c.name.toLowerCase().includes(o.model.toLowerCase()));m||f(`Model "${o.model}" not found. Available models: ${i.map(c=>c.name).join(", ")}`),a=m.id}else{let m=l.getLastUsedModel();a=(await P(i,m)).id}l.setModel(a);let s=`ollama/${a}`;await de(e,s,o)}async function de(o,e,t){let r=R(o);w(G.green(`
+Starting ${r} with model: ${e}
+`));try{await O.run({model:e,args:t.args??[],verbose:t.verbose})}catch(n){f(`Failed to run ${r}: ${n instanceof Error?n.message:"Unknown error"}`)}}async function Y(o={}){let e;if(o.provider)e=o.provider;else{let i=l.getDefaultProvider();i?e=i:e=await Pe()}let t=C({text:`Checking ${R(e)} installation...`,color:"cyan"}).start();await O.isClaudeCodeInstalled()||(t.fail(`${R(e)} is not installed.`),f(`Please install ${R(e)} first.`)),t.succeed(`${R(e)} is installed`);let n=o.backend??await ke(e);if(o.native&&n==="ollama"){let i=o.model??l.getLastUsedModel();i||f("No model specified. Use -m, --model to specify a model."),await Le(i,o.args??[],o.verbose??!1);return}n==="ollama"&&e==="claude"&&f("Claude Code requires an Anthropic-compatible API. LM Studio supports this natively, but Ollama does not. Use --native to use Ollama's built-in Claude Code integration, or switch to LM Studio."),n==="ollama"?await Ce(o,e):await xe(o,e)}async function Le(o,e,t){w(G.green(`
+Starting Claude Code with Ollama native integration...
+`)),t&&console.log(G.dim(`Running: ollama launch claude --model ${o}
+`));try{await Me("ollama",["launch","claude","--model",o,...e],{stdio:"inherit"})}catch(r){f(`Failed to launch Claude Code via Ollama: ${r instanceof Error?r.message:"Unknown error"}`)}}L();A();import d from"chalk";async function Z(){let[o,e]=await Promise.all([S.getStatus(),b.getStatus()]);console.log(),console.log(d.bold("LM Studio")),console.log(d.dim("\u2500".repeat(50))),console.log(` ${o.running?d.green("\u25CF"):d.red("\u25CB")} Server: ${o.running?d.green("Running"):d.red("Stopped")}`),console.log(` ${d.dim("\u25B8")} Port: ${d.cyan(String(o.port))}`),console.log(` ${d.dim("\u25B8")} URL: ${d.cyan(`http://localhost:${o.port}`)}`),o.activeModel&&console.log(` ${d.dim("\u25B8")} Active Model: ${d.green(o.activeModel)}`),o.running&&o.models.length>0?(console.log(),console.log(d.bold(" Models")),o.models.forEach((t,r)=>{let n=x(t.size),i=t.id===o.activeModel?` ${d.green("[LOADED]")}`:"";console.log(` ${d.dim(String(r+1)+".")} ${t.name}${i}`),console.log(` ${d.dim("ID:")} ${t.id}`),console.log(` ${d.dim("Size:")} ${n}`),t.quantization&&console.log(` ${d.dim("Quantization:")} ${t.quantization}`)})):o.running&&console.log(` ${d.dim("No models available")}`),console.log(),console.log(d.bold("Ollama")),console.log(d.dim("\u2500".repeat(50))),console.log(` ${e.running?d.green("\u25CF"):d.red("\u25CB")} Server: ${e.running?d.green("Running"):d.red("Stopped")}`),console.log(` ${d.dim("\u25B8")} Port: ${d.cyan(String(e.port))}`),console.log(` ${d.dim("\u25B8")} URL: ${d.cyan(`http://localhost:${e.port}`)}`),e.running&&e.runningModels.length>0&&console.log(` ${d.dim("\u25B8")} Running: ${d.green(e.runningModels.join(", "))}`),e.running&&e.models.length>0?(console.log(),console.log(d.bold(" Models")),e.models.forEach((t,r)=>{let n=x(t.size),a=e.runningModels.includes(t.id)?` ${d.green("[RUNNING]")}`:"",s=t.parameterSize?` ${d.dim(t.parameterSize)}`:"";console.log(` ${d.dim(String(r+1)+".")} ${t.name}${s}${a}`),console.log(` ${d.dim("Size:")} ${n}`),t.family&&console.log(`
${d.dim("Family:")} ${t.family}`),t.quantization&&console.log(` ${d.dim("Quantization:")} ${t.quantization}`)})):e.running&&console.log(` ${d.dim("No models available")}`),console.log()}import Ae from"ora";import U from"chalk";import{execa as ce}from"execa";import{readFileSync as Oe}from"fs";import{fileURLToPath as Re}from"url";function ee(){try{let o=Re(new URL("../../package.json",import.meta.url)),e=JSON.parse(Oe(o,"utf8"));return String(e.version)}catch{return"unknown"}}async function oe(){let o=Ae({text:"Checking for updates...",color:"cyan"}).start();try{let e=ee();if(e==="unknown"){o.fail("Could not determine current version.");return}let{stdout:t}=await ce("npm",["view","@iamharshil/aix-cli","version"]),r=t.trim();if(e===r){o.succeed(`You're already on the latest version: ${U.green(`v${e}`)}`);return}o.text=`Updating: ${U.yellow(`v${e}`)} \u2192 ${U.green(`v${r}`)}...`,await ce("npm",["install","-g","@iamharshil/aix-cli@latest"]),o.succeed(`Successfully updated to ${U.green(`v${r}`)}! 
\u{1F680}`),_(`Restart your terminal or run ${U.cyan("aix-cli --help")} to see what's new.`)}catch(e){o.fail("Failed to update."),V(e instanceof Error?e.message:String(e))}}M();import y from"chalk";import Ue from"inquirer";async function te(o,e,t){if(o==="reset"){let{confirm:n}=await Ue.prompt([{type:"confirm",name:"confirm",message:"Are you sure you want to completely reset all configuration to defaults?",default:!1}]);n?(l.reset(),w("Configuration has been reset to defaults.")):_("Reset cancelled.");return}if(o==="set"&&e&&t){let n=t;t==="true"?n=!0:t==="false"?n=!1:Number.isNaN(Number(t))||(n=Number(t)),l.set(e,n),w(`Set ${y.cyan(e)} to ${y.green(t)}`);return}if(o==="unset"&&e){l.delete(e),w(`Unset configuration key ${y.cyan(e)}`);return}console.log(),console.log(y.bold.cyan("\u2699\uFE0F AIX CLI Configuration")),console.log(y.dim("\u2500".repeat(40))),["defaultBackend","defaultProvider","model","lmStudioUrl","lmStudioPort","lmStudioContextLength","ollamaUrl","ollamaPort","defaultTimeout","autoStartServer"].forEach(n=>{let i=l.get(n);console.log(i!==void 0?` ${y.bold(n)}: ${y.green(i)}`:` ${y.bold(n)}: ${y.dim("not set")}`)}),console.log(),console.log(y.dim("Commands:")),console.log(y.dim(" aix-cli config set <key> <value>")),console.log(y.dim(" aix-cli config unset <key>")),console.log(y.dim(" aix-cli config reset")),console.log()}var $=new Ee;$.name("aix-cli").description("Run Claude Code with local AI models from LM Studio or Ollama").version(ee()).option("--ollama","Shortcut to use Ollama backend").option("--lmstudio","Shortcut to use LM Studio backend").showHelpAfterError();function E(o=0){console.log(),console.log(p.dim(o===0?"\u{1F44B} Goodbye!":"\u274C Cancelled.")),process.exit(o)}process.on("SIGINT",()=>E(0));process.on("SIGTERM",()=>E(0));process.on("uncaughtException",o=>{o.message?.includes("ExitPromptError")||o.message?.includes("User force closed")||o.message?.includes("prompt")?E(0):(console.error(p.red("Error:"),o.message),process.exit(1))});process.on("unhandledRejection",o=>{let e=String(o);(e.includes("ExitPromptError")||e.includes("User force closed")||e.includes("prompt"))&&E(0)});$.command("init",{isDefault:!1}).aliases(["i","load"]).description("Select a backend, load a model, and configure your provider").option("-m, --model <name>","Model name or ID to load").option("-p, --provider <provider>","Coding tool to use (claude)").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)").action(o=>{let e=$.opts();return e.ollama&&(o.backend="ollama"),e.lmstudio&&(o.backend="lmstudio"),X(o)});$.command("run",{isDefault:!1}).aliases(["r"]).description("Run Claude Code with a model from LM Studio or Ollama").option("-m, --model <name>","Model name or ID to use").option("-p, --provider <provider>","Coding tool to use (claude)").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)").option("-n, --native","Use Ollama's native Claude Code integration (ollama launch)").option("-v, --verbose","Show verbose output").argument("[args...]","Additional arguments for the provider").action(async(o,e)=>{let t=$.opts();t.ollama&&(e.backend="ollama"),t.lmstudio&&(e.backend="lmstudio"),await Y({...e,args:o})});$.command("status",{isDefault:!1}).aliases(["s","stats"]).description("Show LM Studio and Ollama status and available models").action(Z);$.command("doctor",{isDefault:!1}).aliases(["d","check"]).description("Check system requirements and configuration").action(async()=>{let{lmStudioService:o}=await Promise.resolve().then(()=>(L(),re)),{ollamaService:e}=await Promise.resolve().then(()=>(A(),ae)),{claudeService:t}=await Promise.resolve().then(()=>(Q(),le)),{configService:r}=await Promise.resolve().then(()=>(M(),ne));console.log(p.bold.cyan("\u{1F527} AIX CLI System Check")),console.log(p.dim("\u2500".repeat(40)));let[n,i,a]=await 
Promise.all([o.checkStatus(),e.checkStatus(),t.isClaudeCodeInstalled()]),s=r.getDefaultProvider(),m=r.getDefaultBackend(),c=r.get("lmStudioPort"),u=r.get("ollamaPort");console.log(),console.log(p.bold("Backends")),console.log(` ${n?"\u2705":"\u26A0\uFE0F"} LM Studio: ${n?p.green("Running"):p.yellow("Not running")} ${p.dim(`(port ${c})`)}`),console.log(` ${i?"\u2705":"\u26A0\uFE0F"} Ollama: ${i?p.green("Running"):p.yellow("Not running")} ${p.dim(`(port ${u})`)}`),console.log(),console.log(p.bold("Coding Tools")),console.log(` ${a?"\u2705":"\u274C"} Claude Code: ${a?p.green("Installed"):p.red("Not installed")}`),console.log(),console.log(p.bold("Defaults")),console.log(` \u{1F4CC} Backend: ${p.cyan(m??"not set")}`),console.log(` \u{1F4CC} Coding tool: ${p.cyan(s??"not set")}`);let g=[];if(a||g.push(` \u2192 ${p.cyan("npm install -g @anthropic-ai/claude-code")}`),!n&&!i&&g.push(` \u2192 Start LM Studio or run ${p.cyan("ollama serve")}`),g.length>0){console.log(),console.log(p.bold("\u{1F4CB} Next Steps:"));for(let v of g)console.log(v)}console.log()});$.command("update",{isDefault:!1}).aliases(["upgrade","u"]).description("Update AIX CLI to the latest version").action(oe);$.command("config [action] [key] [value]",{isDefault:!1}).aliases(["c","settings"]).description("View, set, or reset AIX CLI configuration constraints").action(te);$.parse();
//# sourceMappingURL=aix.js.map