@iamharshil/aix-cli 3.3.7 → 3.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,7 @@
 
  # AIX CLI
 
- **Run Claude Code & OpenCode with local AI models. No API keys. No cloud. Complete privacy.**
+ **Run Claude Code with local AI models. No API keys. No cloud. Complete privacy.**
 
  [![npm version](https://img.shields.io/npm/v/@iamharshil/aix-cli.svg?style=flat-square&color=cb3837)](https://www.npmjs.com/package/@iamharshil/aix-cli)
  [![Downloads](https://img.shields.io/npm/dm/@iamharshil/aix-cli?style=flat-square&color=blue)](https://www.npmjs.com/package/@iamharshil/aix-cli)
@@ -18,7 +18,7 @@
 
  ## What is AIX?
 
- AIX CLI is a bridge between local model servers and AI coding assistants. It connects [LM Studio](https://lmstudio.ai) or [Ollama](https://ollama.com) to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [OpenCode](https://opencode.ai) — letting you use **locally-running language models** as the backend for your favorite AI dev tools.
+ AIX CLI is a bridge between local model servers and AI coding assistants. It connects [LM Studio](https://lmstudio.ai) or [Ollama](https://ollama.com) to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — letting you use **locally-running language models** as the backend for your favorite AI dev tools.
 
  No API keys. No cloud calls. No data leaving your machine.
 
@@ -40,16 +40,18 @@ No API keys. No cloud calls. No data leaving your machine.
  - 🔒 **Privacy-first** — All inference runs locally on your hardware. Your code never leaves your machine.
  - 🔑 **No API keys** — No subscriptions, no usage limits, no cloud dependencies.
  - 🚀 **GPU-accelerated** — Take advantage of your local GPU for fast inference.
- - 🔀 **Multi-backend** — Use LM Studio or Ollama as your model server.
- - 🛠️ **Multi-provider** — Switch between Claude Code and OpenCode with a single flag.
- - ⚡ **Zero config** — Just run `aix-cli run` and start coding. OpenCode is launched using an OpenAI-compatible endpoint with `openai-compatible/<model>` so you don’t need to manually register models in `opencode.json`.
+ - 🔀 **Single provider** — Claude Code is the only supported AI coding assistant.
+ - **Zero config** — Just run `aix-cli run` and start coding.
 
  ### Compatibility notes
 
- Both providers can be launched with either backend, but compatibility ultimately depends on what API shape the provider expects:
+ - **LM Studio**: Works natively with Claude Code via Anthropic-compatible API at `/v1`.
+ - **Ollama**: Use the `--native` flag to leverage Ollama's built-in `ollama launch claude` integration, which handles all API configuration automatically.
 
- - **OpenCode** works with **LM Studio** and **Ollama** by pointing at their OpenAI-compatible `/v1` endpoints.
- - **Claude Code** is configured via `ANTHROPIC_BASE_URL` and works best with backends that expose an Anthropic-compatible API at `/v1` (LM Studio supports this). If your Ollama setup does not provide an Anthropic-compatible `/v1`, Claude Code may fail even though AIX can launch it.
+ ```bash
+ # Recommended for Ollama
+ aix-cli run --ollama --native -m qwen2.5-coder:14b
+ ```
 
  ---
 
@@ -57,11 +59,11 @@ Both providers can be launched with either backend, but compatibility ultimately
 
  ### Prerequisites
 
- | Requirement | Description |
- |---|---|
- | [Node.js](https://nodejs.org/) ≥ 18 | JavaScript runtime |
+ | Requirement | Description |
+ | -------------------------------------------------------------------- | ------------------------------------------ |
+ | [Node.js](https://nodejs.org/) ≥ 18 | JavaScript runtime |
  | [LM Studio](https://lmstudio.ai) **or** [Ollama](https://ollama.com) | Local model server (at least one required) |
- | [Claude Code](https://docs.anthropic.com/en/docs/claude-code) **or** [OpenCode](https://opencode.ai) | AI coding assistant (at least one required) |
+ | [Claude Code](https://docs.anthropic.com/en/docs/claude-code) | AI coding assistant |
 
  ### Install
 
@@ -101,7 +103,7 @@ npm link
  aix-cli doctor
  ```
 
- This checks that LM Studio / Ollama, Claude Code / OpenCode, and your environment are properly configured.
+ This checks that LM Studio / Ollama, Claude Code, and your environment are properly configured.
 
  ---
 
@@ -109,19 +111,21 @@ This checks that LM Studio / Ollama, Claude Code / OpenCode, and your environmen
 
  ### `aix-cli run` — Start a coding session
 
- The primary command. Launches Claude Code or OpenCode backed by a local model.
+ The primary command. Launches Claude Code backed by a local model.
 
  ```bash
- # Interactive — prompts for backend, model, and provider
+ # Interactive — prompts for backend and model
  aix-cli run
 
  # Specify backend and model
  aix-cli run -b ollama -m qwen2.5-coder:14b
  aix-cli run -b lmstudio -m llama-3-8b
 
- # Choose provider explicitly
- aix-cli run -p claude
- aix-cli run -p opencode
+ # Use Ollama's native Claude Code integration (recommended for Ollama)
+ aix-cli run -b ollama --native -m qwen2.5-coder:14b
+
+ # Global shortcuts
+ aix-cli run --ollama --native -m gemma4
 
  # Pass a prompt directly
  aix-cli run -b ollama -m qwen2.5-coder:14b -- "Refactor auth middleware"
@@ -155,25 +159,25 @@ aix-cli doctor
 
  ### Command Reference
 
- | Command | Aliases | Description |
- |---|---|---|
- | `run` | `r` | Run Claude Code / OpenCode with a local model |
- | `init` | `i`, `load` | Set up backend, select model, configure provider |
- | `status` | `s`, `stats` | Show LM Studio & Ollama status |
- | `doctor` | `d`, `check` | Run system diagnostics |
- | `update` | `upgrade`, `u` | Update AIX CLI to the latest version |
- | `config` | `c`, `settings`| View, set, or reset CLI configurations |
+ | Command | Aliases | Description |
+ | -------- | --------------- | ------------------------------------------------ |
+ | `run` | `r` | Run Claude Code with a local model |
+ | `init` | `i`, `load` | Set up backend, select model, configure provider |
+ | `status` | `s`, `stats` | Show LM Studio & Ollama status |
+ | `doctor` | `d`, `check` | Run system diagnostics |
+ | `update` | `upgrade`, `u` | Update AIX CLI to the latest version |
+ | `config` | `c`, `settings` | View, set, or reset CLI configurations |
 
  ### Global Options
 
- | Flag | Description |
- |---|---|
- | `-b, --backend <name>` | Model backend: `lmstudio` or `ollama` |
- | `-m, --model <name>` | Model name or ID to use |
- | `-p, --provider <name>` | Coding tool: `claude` (default) or `opencode` |
- | `-v, --verbose` | Show verbose output |
- | `-h, --help` | Show help |
- | `-V, --version` | Show version |
+ | Flag | Description |
+ | ---------------------- | ------------------------------------------- |
+ | `-b, --backend <name>` | Model backend: `lmstudio` or `ollama` |
+ | `-m, --model <name>` | Model name or ID to use |
+ | `-n, --native` | Use Ollama's native Claude Code integration |
+ | `-v, --verbose` | Show verbose output |
+ | `-h, --help` | Show help |
+ | `-V, --version` | Show version |
 
  ---
 
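The flag surface after this change can be sketched as a tiny parser. This is only an illustrative sketch of the table above, not the CLI's actual implementation (the bundled `dist/bin/aix.js` uses commander), and the `--ollama` "global shortcut" semantics are an assumption based on the README examples in this diff.

```javascript
// Illustrative sketch of the 3.4.x flag surface; NOT the real parser (the CLI uses commander).
function parseRunFlags(argv) {
  const opts = { backend: undefined, model: undefined, native: false, verbose: false };
  for (let i = 0; i < argv.length; i++) {
    const flag = argv[i];
    if (flag === "-b" || flag === "--backend") opts.backend = argv[++i];
    else if (flag === "-m" || flag === "--model") opts.model = argv[++i];
    else if (flag === "-n" || flag === "--native") opts.native = true;
    else if (flag === "--ollama") opts.backend = "ollama"; // "global shortcut" from the README examples (assumed semantics)
    else if (flag === "-v" || flag === "--verbose") opts.verbose = true;
  }
  return opts;
}

console.log(parseRunFlags(["--ollama", "--native", "-m", "qwen2.5-coder:14b"]));
```

Under this sketch, `aix-cli run --ollama --native -m qwen2.5-coder:14b` resolves to backend `ollama` with `native: true`, matching the "Recommended for Ollama" example earlier in the diff.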
@@ -181,11 +185,11 @@ aix-cli doctor
 
  AIX stores its configuration in the OS-appropriate config directory:
 
- | Platform | Path |
- |---|---|
- | macOS | `~/Library/Application Support/aix-cli/` |
- | Linux | `~/.config/aix-cli/` |
- | Windows | `%APPDATA%\aix-cli\` |
+ | Platform | Path |
+ | -------- | ---------------------------------------- |
+ | macOS | `~/Library/Application Support/aix-cli/` |
+ | Linux | `~/.config/aix-cli/` |
+ | Windows | `%APPDATA%\aix-cli\` |
 
  ### Config File
 
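For reference, the bundled `ConfigService` in `dist/bin/aix.js` (project name `aix`) ships the following defaults, which indicate what the config file holds; this sketch just restates values read from the minified bundle:

```javascript
// Defaults as they appear in the bundled ConfigService (dist/bin/aix.js, projectName "aix").
const defaults = {
  lmStudioUrl: "http://localhost",
  lmStudioPort: 1234,
  lmStudioContextLength: 65536,
  ollamaUrl: "http://localhost",
  ollamaPort: 11434,
  defaultTimeout: 30000,
  autoStartServer: false,
};

// The service builds each backend's base URL by joining url and port.
const lmStudioBase = `${defaults.lmStudioUrl}:${defaults.lmStudioPort}`;
const ollamaBase = `${defaults.ollamaUrl}:${defaults.ollamaPort}`;
console.log(lmStudioBase, ollamaBase);
```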
@@ -204,9 +208,9 @@ AIX stores its configuration in the OS-appropriate config directory:
 
  ### Environment Variables
 
- | Variable | Description | Default |
- |---|---|---|
- | `LM_STUDIO_PORT` | Override the LM Studio server port | `1234` |
+ | Variable | Description | Default |
+ | ---------------- | ---------------------------------- | ------- |
+ | `LM_STUDIO_PORT` | Override the LM Studio server port | `1234` |
 
  ---
 
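The override described in the table can be sketched as follows. The fallback default (`1234`) matches the bundled config; the exact environment lookup and validation are assumptions for illustration.

```javascript
// Sketch of the LM_STUDIO_PORT override (assumed lookup; default 1234 from the bundled config).
function lmStudioPort(env) {
  const raw = env.LM_STUDIO_PORT;
  const port = raw === undefined ? 1234 : Number(raw);
  // Fall back to the default on non-numeric or nonsensical values.
  return Number.isInteger(port) && port > 0 ? port : 1234;
}

console.log(lmStudioPort(process.env));
```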
@@ -227,19 +231,18 @@ AIX stores its configuration in the OS-appropriate config directory:
  │ backend routing │
  │ model selection │
  │ config mgmt │
- └───┬──────────┬────┘
-
- ┌────────┘ └────────┐
- ▼ ▼
- ┌──────────────┐ ┌──────────────┐
- │ Claude Code │ │ OpenCode │
- │ --model X │ │ --model X │
- └──────────────┘ └──────────────┘
+ └────────┬──────────┘
+          │
+          ▼
+ ┌──────────────┐
+ │ Claude Code  │
+ │  --model X   │
+ └──────────────┘
  ```
 
  1. **LM Studio** or **Ollama** runs a local inference server with an OpenAI-compatible API.
  2. **AIX CLI** discovers available models, manages configuration, and orchestrates the connection.
- 3. **Claude Code** or **OpenCode** receives the model endpoint and runs as it normally would — except fully local.
+ 3. **Claude Code** receives the model endpoint and runs as it normally would — except fully local.
 
  ---
 
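Step 3 is visible in the bundled `ClaudeService`: Claude Code is spawned with Anthropic-style environment variables pointed at the local server. A sketch, with variable names (`ANTHROPIC_*`) and base URLs taken from the minified bundle and its defaults:

```javascript
// How the bundled ClaudeService wires Claude Code to a local backend (sketch).
function claudeEnv(backend /* "lmstudio" | "ollama" */, model) {
  const baseUrl = backend === "ollama" ? "http://localhost:11434" : "http://localhost:1234";
  return {
    ANTHROPIC_MODEL: model,
    ANTHROPIC_BASE_URL: baseUrl, // local endpoint instead of Anthropic's cloud API
    ANTHROPIC_AUTH_TOKEN: backend, // placeholder token; local servers ignore it
    ANTHROPIC_API_KEY: "", // blanked so no real cloud key is ever sent
  };
}

console.log(claudeEnv("lmstudio", "llama-3-8b"));
```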
@@ -280,6 +283,7 @@ Then run `aix-cli init` to select and configure.
  <summary><strong>Connection refused</strong></summary>
 
  Check that the correct port is being used:
+
  - LM Studio defaults to port `1234`
  - Ollama defaults to port `11434`
 
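The bundled services check reachability against specific health endpoints; a sketch of the URLs they probe, with paths read from the minified `dist/bin/aix.js` and the default ports above:

```javascript
// Health-check URLs probed by the bundled services (paths from dist/bin/aix.js).
function healthUrl(backend) {
  if (backend === "lmstudio") return "http://localhost:1234/api/status"; // LMStudioService.checkStatus
  if (backend === "ollama") return "http://localhost:11434/api/tags"; // OllamaService.checkStatus
  throw new Error(`unknown backend: ${backend}`);
}

console.log(healthUrl("lmstudio"));
console.log(healthUrl("ollama"));
```

Hitting these with `curl` is a quick way to confirm the server is up before blaming the CLI.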
@@ -288,16 +292,12 @@ You can configure custom ports in your AIX config file (path shown by `aix-cli d
  </details>
 
  <details>
- <summary><strong>Claude Code / OpenCode not detected</strong></summary>
+ <summary><strong>Claude Code not detected</strong></summary>
 
- Install the missing provider globally:
+ Install Claude Code globally:
 
  ```bash
- # Claude Code
  npm install -g @anthropic-ai/claude-code
-
- # OpenCode
- npm install -g opencode-ai
  ```
 
  Then re-run `aix-cli doctor` to confirm.
@@ -340,7 +340,6 @@ npm run lint # Lint
  - [LM Studio](https://lmstudio.ai) — Run local AI models with a visual interface
  - [Ollama](https://ollama.com) — Run large language models locally
  - [Claude Code](https://docs.anthropic.com/en/docs/claude-code) — Anthropic's AI coding assistant
- - [OpenCode](https://opencode.ai) — Open-source AI coding assistant
 
  ## License
 
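Internally, models are addressed as `backend/model-name` strings (e.g. `ollama/qwen2.5-coder:14b`). The bundled services in `dist/bin/aix.js` split these with helpers equivalent to this sketch:

```javascript
// "provider/model-name" helpers equivalent to extractProvider/extractModelName in the bundle.
function extractProvider(id) {
  return id.split("/")[0];
}

function extractModelName(id) {
  const parts = id.split("/");
  if (parts.length < 2) return undefined; // no provider prefix present
  return parts.slice(1).join("/"); // model names may themselves contain "/"
}

console.log(extractProvider("ollama/qwen2.5-coder:14b"), extractModelName("ollama/qwen2.5-coder:14b"));
```

Rejoining with `"/"` rather than taking `parts[1]` is what lets model names that contain slashes survive the round trip.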
package/dist/bin/aix.js CHANGED
@@ -1,11 +1,12 @@
  #!/usr/bin/env node
- var ye=Object.defineProperty;var we=(t=>typeof require<"u"?require:typeof Proxy<"u"?new Proxy(t,{get:(e,o)=>(typeof require<"u"?require:e)[o]}):t)(function(t){if(typeof require<"u")return require.apply(this,arguments);throw Error('Dynamic require of "'+t+'" is not supported')});var L=(t,e)=>()=>(t&&(e=t(t=0)),e);var A=(t,e)=>{for(var o in e)ye(t,o,{get:e[o],enumerable:!0})};var se={};A(se,{ConfigService:()=>T,configService:()=>l});import be from"conf";var T,l,P=L(()=>{"use strict";T=class{store;constructor(){this.store=new be({projectName:"aix",defaults:{lmStudioUrl:"http://localhost",lmStudioPort:1234,lmStudioContextLength:65536,ollamaUrl:"http://localhost",ollamaPort:11434,defaultTimeout:3e4,autoStartServer:!1},clearInvalidConfig:!0})}get(e){return this.store.get(e)}set(e,o){this.store.set(e,o)}delete(e){this.store.delete(e)}setModel(e){this.store.set("model",e)}getLastUsedModel(){return this.store.get("model")}setDefaultProvider(e){this.store.set("defaultProvider",e)}getDefaultProvider(){return this.store.get("defaultProvider")}setDefaultBackend(e){this.store.set("defaultBackend",e)}getDefaultBackend(){return this.store.get("defaultBackend")}getLMStudioUrl(){let e=this.store.get("lmStudioUrl"),o=this.store.get("lmStudioPort");return`${e}:${o}`}getOllamaUrl(){let e=this.store.get("ollamaUrl"),o=this.store.get("ollamaPort");return`${e}:${o}`}reset(){this.store.clear()}},l=new T});var de={};A(de,{LMStudioService:()=>z,lmStudioService:()=>y});import{execa as _}from"execa";import le from"ora";import V from"chalk";var $e,z,y,E=L(()=>{"use strict";P();$e=[1234,1235,1236,1237],z=class{constructor(){}getBaseUrl(){return l.getLMStudioUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){let e=["/api/v1/models","/api/models","/v1/models","/api/ls-model/list"];for(let o of e)try{let i=await 
fetch(this.getApiUrl(o),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!i.ok)continue;let n=await i.json(),r=[];return Array.isArray(n)?r=n:n.models&&Array.isArray(n.models)?r=n.models:n.data&&Array.isArray(n.data)&&(r=n.data),r.map(a=>{let s=a;return{id:String(s.key||s.id||s.model||""),name:String(s.display_name||s.name||s.id||s.model||""),size:Number(s.size_bytes||s.size||s.file_size||0),quantization:String(s.quantization?typeof s.quantization=="object"?s.quantization.name:s.quantization:"")}}).filter(a=>a.id&&a.name)}catch{continue}return[]}async getStatus(){if(!await this.checkStatus())return{running:!1,port:l.get("lmStudioPort"),models:[]};try{let o=await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!o.ok)return{running:!1,port:l.get("lmStudioPort"),models:[]};let i=await o.json();return{running:!0,port:l.get("lmStudioPort"),models:i.models??[],activeModel:i.active_model}}catch{return{running:!1,port:l.get("lmStudioPort"),models:[]}}}async loadModel(e,o){let i=o??le({text:`Loading model: ${V.cyan(e)}`,color:"cyan"}).start();try{let n=l.get("lmStudioContextLength"),r=["/api/v1/models/load","/api/model/load"],a=async(u,d)=>{let m={model:e};d&&(m.context_length=n);let g=await fetch(this.getApiUrl(u),{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify(m),signal:AbortSignal.timeout(3e5)});if(g.ok)return;let p="";try{p=await g.text()}catch{p=""}let h=p?` ${p}`:"";throw new Error(`Failed to load model: ${g.status} ${g.statusText}${h}`)},s;for(let u of r)try{return await a(u,!0),i.succeed(`Model ${V.green(e)} loaded successfully`),l.setModel(e),{loadSpinner:i}}catch(d){s=d;let m=d instanceof Error?d.message:String(d);if(/context|token|max.*(context|token)|too.*large/i.test(m))try{return i.warn(`Model load failed with context_length=${n}. 
Retrying with LM Studio defaults...`),await a(u,!1),i.succeed(`Model ${V.green(e)} loaded successfully`),l.setModel(e),{loadSpinner:i}}catch(p){s=p}}throw s instanceof Error?s:new Error(String(s))}catch(n){throw i.fail(`Failed to load model: ${n instanceof Error?n.message:"Unknown error"}`),n}}async startServer(e){let o=e??le({text:"Starting LM Studio server...",color:"cyan"}).start();try{let i=process.platform==="darwin",n=process.platform==="linux",r=process.platform==="win32",a;if(i){let s=["/Applications/LM Studio.app",`${process.env.HOME}/Applications/LM Studio.app`];for(let u of s)try{let{existsSync:d}=await import("fs");if(d(u)){a=`open "${u}" --args --server`;break}}catch{}if(a?.startsWith("open")){await _("open",[s.find(u=>{try{let{existsSync:d}=we("fs");return d(u)}catch{return!1}})||"/Applications/LM Studio.app","--args","--server"],{detached:!0,stdio:"ignore"}),o.succeed("LM Studio server started"),await this.waitForServer(6e4);return}}else n?a=await this.findLinuxBinary():r&&(a=await this.findWindowsExecutable());if(!a)throw o.fail("LM Studio not found. 
Please install it from https://lmstudio.ai"),new Error("LM Studio not installed");await _(a,["--server"],{detached:!0,stdio:"ignore",env:{...process.env,LM_STUDIO_SERVER_PORT:String(l.get("lmStudioPort"))}}),o.succeed("LM Studio server started"),await this.waitForServer(6e4)}catch(i){throw o.fail(`Failed to start LM Studio: ${i instanceof Error?i.message:"Unknown error"}`),i}}async findLinuxBinary(){let e=["/usr/bin/lm-studio","/usr/local/bin/lm-studio",`${process.env.HOME}/.local/bin/lm-studio`];for(let o of e)try{return await _("test",["-x",o]),o}catch{continue}}async findWindowsExecutable(){let e=process.env.LOCALAPPDATA,o=process.env.PROGRAMFILES,i=[e?`${e}\\Programs\\LM Studio\\lm-studio.exe`:"",o?`${o}\\LM Studio\\lm-studio.exe`:""].filter(Boolean);for(let n of i)try{return await _("cmd",["/c","if exist",`"${n}"`,"echo","yes"]),n}catch{continue}}async waitForServer(e=6e4){let o=Date.now();for(;Date.now()-o<e;){if(await this.checkStatus())return!0;await this.sleep(2e3)}return!1}sleep(e){return new Promise(o=>setTimeout(o,e))}async findAvailablePort(){for(let e of $e)try{if((await fetch(`http://localhost:${e}/api/status`,{method:"GET",signal:AbortSignal.timeout(1e3)})).ok)return l.set("lmStudioPort",e),e}catch{continue}return l.get("lmStudioPort")}},y=new z});var ce={};A(ce,{OllamaService:()=>j,ollamaService:()=>$});var j,$,R=L(()=>{"use strict";P();j=class{getBaseUrl(){return l.getOllamaUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){try{let e=await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(1e4)});return e.ok?((await e.json()).models??[]).map(n=>{let 
r=n.details??{};return{id:String(n.name??n.model??""),name:String(n.name??n.model??""),size:Number(n.size??0),quantization:String(r.quantization_level??""),family:String(r.family??""),parameterSize:String(r.parameter_size??"")}}).filter(n=>n.id&&n.name):[]}catch{return[]}}async getRunningModels(){try{let e=await fetch(this.getApiUrl("/api/ps"),{method:"GET",signal:AbortSignal.timeout(5e3)});return e.ok?((await e.json()).models??[]).map(n=>String(n.name??n.model??"")).filter(Boolean):[]}catch{return[]}}async getStatus(){if(!await this.checkStatus())return{running:!1,port:l.get("ollamaPort"),models:[],runningModels:[]};let[o,i]=await Promise.all([this.getAvailableModels(),this.getRunningModels()]);return{running:!0,port:l.get("ollamaPort"),models:o,runningModels:i}}},$=new j});var me={};A(me,{ClaudeService:()=>H,claudeService:()=>N});import{execa as Q}from"execa";import xe from"chalk";var H,N,Z=L(()=>{"use strict";P();H=class{async isClaudeCodeInstalled(){try{return await Q("claude",["--version"],{stdio:"ignore"}),!0}catch{return!1}}async run(e){let{model:o,args:i=[],verbose:n=!1}=e,r=this.extractProvider(o),a=this.extractModelName(o);if(!r||!a)throw new Error(`Invalid model format: ${o}. Expected format: provider/model-name`);let s=["--model",a,...i];n&&console.log(xe.dim(`
- Running: claude ${s.join(" ")}
- `));let u=r==="ollama"?l.getOllamaUrl():l.getLMStudioUrl(),d=r==="ollama"?"ollama":"lmstudio",m=`${u}/v1/messages`;try{let g=await fetch(m,{method:"POST",headers:{"Content-Type":"application/json","x-api-key":d},body:JSON.stringify({model:a,max_tokens:1,messages:[{role:"user",content:"test"}]}),signal:AbortSignal.timeout(3e3)});if(!g.ok&&g.status>=500)throw new Error(`HTTP ${g.status} ${g.statusText}`)}catch(g){let p=g instanceof Error?g.message:String(g),h=r==="ollama"?"Claude Code requires an Anthropic-compatible API. Ollama does not support this. Use OpenCode for Ollama, or switch to LM Studio.":"Ensure LM Studio server is running and the Anthropic Compatibility server is enabled. Check the Developer tab in LM Studio.";throw new Error(`Claude Code could not reach an Anthropic-compatible endpoint at ${m} (${p}). ${h}`)}try{await Q("claude",s,{stdio:"inherit",env:{...process.env,ANTHROPIC_MODEL:a,ANTHROPIC_BASE_URL:u,ANTHROPIC_AUTH_TOKEN:d,ANTHROPIC_API_KEY:""}})}catch(g){if(g instanceof Error&&"exitCode"in g){let p=g.exitCode;process.exit(p??1)}throw g}}extractProvider(e){return e.split("/")[0]}extractModelName(e){let o=e.split("/");if(!(o.length<2))return o.slice(1).join("/")}async getVersion(){try{return(await Q("claude",["--version"])).stdout}catch{return}}},N=new H});var ge={};A(ge,{OpenCodeService:()=>W,openCodeService:()=>I});import{execa as ee}from"execa";import Oe from"chalk";var W,I,oe=L(()=>{"use strict";P();W=class{async isOpenCodeInstalled(){try{return await ee("opencode",["--version"],{stdio:"ignore"}),!0}catch{return!1}}async run(e){let{model:o,args:i=[],verbose:n=!1}=e,r=this.extractProvider(o),a=this.extractModelName(o);if(!r||!a)throw new Error(`Invalid model format: ${o}. Expected format: provider/model-name`);let s=["--model",a,...i];n&&console.log(Oe.dim(`
- Running: opencode ${s.join(" ")}
- `));let u=r==="ollama"?l.getOllamaUrl():l.getLMStudioUrl(),d=`${u}/v1`,m=r==="ollama"?"ollama":"lmstudio",g={$schema:"https://opencode.ai/config.json",provider:{[m]:{npm:"@ai-sdk/openai-compatible",name:r==="ollama"?"Ollama":"LM Studio",options:{baseURL:d},models:{[a]:{name:a}}}}};try{let p=`${d}/models`,h=await fetch(p,{method:"GET",signal:AbortSignal.timeout(3e3)});if(!h.ok)throw new Error(`HTTP ${h.status} ${h.statusText}`)}catch(p){let h=p instanceof Error?p.message:String(p),O=r==="ollama"?"Ensure Ollama is running (ollama serve) and reachable at the configured host/port.":"Ensure LM Studio server is running and the OpenAI Compatibility server is enabled.";throw new Error(`OpenCode could not reach an OpenAI-compatible endpoint at ${d} (${h}). ${O}`)}try{await ee("opencode",s,{stdio:"inherit",env:{...process.env,OPENCODE_MODEL_NAME:a,OPENCODE_MODEL_PROVIDER:m,OPENCODE_CONFIG_CONTENT:JSON.stringify(g),OPENAI_BASE_URL:d,OPENAI_API_KEY:"local-no-key-required",OPENAI_COMPATIBLE_BASE_URL:d,OPENAI_COMPATIBLE_API_KEY:"local-no-key-required",OLLAMA_HOST:r==="ollama"?u:process.env.OLLAMA_HOST??""}})}catch(p){if(p instanceof Error&&"exitCode"in p){let h=p.exitCode;process.exit(h??1)}throw p}}extractProvider(e){return e.split("/")[0]}extractModelName(e){let o=e.split("/");if(!(o.length<2))return o.slice(1).join("/")}async getVersion(){try{return(await ee("opencode",["--version"])).stdout}catch{return}}},I=new W});import{Command as De}from"commander";import f from"chalk";E();R();P();import K from"ora";import v from"chalk";import G from"inquirer";import ue from"inquirer";async function k(t,e){let o=t.map(r=>({name:`${r.name} (${r.id})`,value:r,short:r.name})),i=e?o.findIndex(r=>r.value.id===e):0;return(await ue.prompt([{type:"list",name:"model",message:"Select a model to load:",choices:o,default:Math.max(0,i),pageSize:Math.min(t.length,15)}])).model}async function q(t,e=!0){return(await ue.prompt([{type:"confirm",name:"confirm",message:t,default:e}])).confirm}import J 
from"chalk";function C(t){if(t===0)return"0 B";let e=1024,o=["B","KB","MB","GB","TB"],i=Math.floor(Math.log(t)/Math.log(e));return`${parseFloat((t/Math.pow(e,i)).toFixed(2))} ${o[i]}`}function b(t){console.log(J.green("\u2713")+" "+t)}function X(t){console.error(J.red("\u2717")+" "+t)}function F(t){console.log(J.blue("\u2139")+" "+t)}function S(t,e=1){X(t),process.exit(e)}async function Me(){let t=l.getDefaultBackend(),{backendSelection:e}=await G.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",default:t??"lmstudio",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]),{saveDefault:o}=await G.prompt([{type:"confirm",name:"saveDefault",message:"Save as default backend?",default:!1}]);return o&&(l.setDefaultBackend(e),b(`Default backend set to ${v.cyan(e)}`)),e}async function Pe(t){let e=l.getDefaultProvider(),o=[{name:"Claude Code",value:"claude"},{name:"OpenCode",value:"opencode"}],i=e??"claude",{providerSelection:n}=await G.prompt([{type:"list",name:"providerSelection",message:"Select coding tool:",default:i,choices:o}]),{saveDefault:r}=await G.prompt([{type:"confirm",name:"saveDefault",message:"Save as default coding tool?",default:!1}]);return r&&(l.setDefaultProvider(n),b(`Default coding tool set to ${v.cyan(n)}`)),n}async function ke(t,e){let o=K({text:"Checking LM Studio status...",color:"cyan"}).start(),i=await y.checkStatus();i||(o.info("LM Studio server not running"),o.stop(),await q("Would you like to start the LM Studio server?")||S("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await y.startServer(),i=!0),o.succeed("Connected to LM Studio");let n=K({text:"Fetching available models...",color:"cyan"}).start(),r=await y.getAvailableModels();r.length===0&&(n.fail("No models found. 
Download some models in LM Studio first."),S("No models available")),n.succeed(`Found ${v.bold(r.length)} model${r.length===1?"":"s"}`),console.log(),console.log(v.bold("Available Models:")),console.log(v.dim("\u2500".repeat(process.stdout.columns||80))),r.forEach((m,g)=>{let p=C(m.size),h=m.loaded?v.green(" [LOADED]"):"";console.log(` ${v.dim(String(g+1).padStart(2))}. ${m.name} ${v.dim(`(${p})`)}${h}`)}),console.log();let a=l.getLastUsedModel(),s=t.model,u=s?r.find(m=>m.id===s||m.name.includes(s)):await k(r,a);u||S("No model selected"),await y.loadModel(u.id,o);let d=u.id;b(v.bold(`
- Model ready: ${u.name}`)),console.log(),console.log("Start your interactive coding session:"),console.log(` ${v.cyan(`aix-cli run --provider ${e} --backend lmstudio --model ${d}`)}`),console.log()}async function Ce(t,e){let o=K({text:"Checking Ollama status...",color:"cyan"}).start();await $.checkStatus()||(o.fail("Ollama is not running"),S("Ollama must be running. Start it with: ollama serve")),o.succeed("Connected to Ollama");let n=K({text:"Fetching available models...",color:"cyan"}).start(),r=await $.getAvailableModels();r.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),S("No models available")),n.succeed(`Found ${v.bold(r.length)} model${r.length===1?"":"s"}`);let a=await $.getRunningModels();console.log(),console.log(v.bold("Available Models:")),console.log(v.dim("\u2500".repeat(process.stdout.columns||80))),r.forEach((m,g)=>{let p=C(m.size),O=a.includes(m.id)?v.green(" [RUNNING]"):"",Se=m.parameterSize?v.dim(` ${m.parameterSize}`):"";console.log(` ${v.dim(String(g+1).padStart(2))}. ${m.name}${Se} ${v.dim(`(${p})`)}${O}`)}),console.log();let s=l.getLastUsedModel(),u=t.model,d=u?r.find(m=>m.id===u||m.name.includes(u)):await k(r,s);d||S("No model selected"),l.setModel(d.id),b(v.bold(`
- Model selected: ${d.name}`)),console.log(),console.log("Start your interactive coding session:"),console.log(` ${v.cyan(`aix-cli run --provider ${e} --backend ollama --model ${d.id}`)}`),console.log()}async function Y(t={}){let e=t.backend??await Me(),o=t.provider??await Pe(e);e==="ollama"?await Ce(t,o):await ke(t,o)}E();R();Z();oe();P();import x from"ora";import pe from"chalk";import fe from"inquirer";async function Le(t){let e=l.getDefaultBackend();if(e)return e;let{backendSelection:o}=await fe.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]);return o}async function Ae(){let t=await N.isClaudeCodeInstalled(),e=await I.isOpenCodeInstalled(),o=[{name:"Claude Code",value:"claude",disabled:t?!1:"Not installed (install: npm i -g @anthropic-ai/claude-code)"},{name:"OpenCode",value:"opencode",disabled:e?!1:"Not installed (install: npm i -g opencode-ai)"}];!t&&!e&&S("Neither Claude Code nor OpenCode is installed.");let{providerSelection:i}=await fe.prompt([{type:"list",name:"providerSelection",message:"Select coding tool:",choices:o}]);return i}function U(t){return t==="opencode"?"OpenCode":"Claude Code"}async function Ee(t,e){let o=x({text:"Checking LM Studio status...",color:"cyan"}).start();await y.findAvailablePort();let i=await y.checkStatus();i||(o.info("LM Studio server not running"),o.stop(),await q("Would you like to start the LM Studio server?")||S("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await y.startServer(),i=!0),o.succeed("Connected to LM Studio");let n=x({text:"Fetching available models...",color:"cyan"}).start(),r=await y.getAvailableModels();r.length===0&&(n.fail("No models found. 
Download some models in LM Studio first."),S("No models available")),n.stop();let a;if(t.model){let d=r.find(m=>m.id===t.model||m.name.toLowerCase().includes(t.model.toLowerCase()));d||S(`Model "${t.model}" not found. Available models: ${r.map(m=>m.name).join(", ")}`),a=d.id}else{let d=l.getLastUsedModel();a=(await k(r,d)).id}let s=x({text:`Loading model: ${pe.cyan(a)}`,color:"cyan"}).start();await y.loadModel(a,s);let u=`lmstudio/${a}`;await he(e,u,t)}async function Re(t,e){let o=x({text:"Checking Ollama status...",color:"cyan"}).start();await $.checkStatus()||(o.fail("Ollama is not running"),S("Ollama must be running. Start it with: ollama serve")),o.succeed("Connected to Ollama");let n=x({text:"Fetching available models...",color:"cyan"}).start(),r=await $.getAvailableModels();r.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),S("No models available")),n.stop();let a;if(t.model){let u=r.find(d=>d.id===t.model||d.name.toLowerCase().includes(t.model.toLowerCase()));u||S(`Model "${t.model}" not found. Available models: ${r.map(d=>d.name).join(", ")}`),a=u.id}else{let u=l.getLastUsedModel();a=(await k(r,u)).id}l.setModel(a);let s=`ollama/${a}`;await he(e,s,t)}async function he(t,e,o){let i=U(t);b(pe.green(`
+ var ge=Object.defineProperty;var fe=(t=>typeof require<"u"?require:typeof Proxy<"u"?new Proxy(t,{get:(e,o)=>(typeof require<"u"?require:e)[o]}):t)(function(t){if(typeof require<"u")return require.apply(this,arguments);throw Error('Dynamic require of "'+t+'" is not supported')});var E=(t,e)=>()=>(t&&(e=t(t=0)),e);var R=(t,e)=>{for(var o in e)ge(t,o,{get:e[o],enumerable:!0})};var ne={};R(ne,{ConfigService:()=>B,configService:()=>a});import pe from"conf";var B,a,b=E(()=>{"use strict";B=class{store;constructor(){this.store=new pe({projectName:"aix",defaults:{lmStudioUrl:"http://localhost",lmStudioPort:1234,lmStudioContextLength:65536,ollamaUrl:"http://localhost",ollamaPort:11434,defaultTimeout:3e4,autoStartServer:!1},clearInvalidConfig:!0})}get(e){return this.store.get(e)}set(e,o){this.store.set(e,o)}delete(e){this.store.delete(e)}setModel(e){this.store.set("model",e)}getLastUsedModel(){return this.store.get("model")}setDefaultProvider(e){this.store.set("defaultProvider",e)}getDefaultProvider(){return this.store.get("defaultProvider")}setDefaultBackend(e){this.store.set("defaultBackend",e)}getDefaultBackend(){return this.store.get("defaultBackend")}getLMStudioUrl(){let e=this.store.get("lmStudioUrl"),o=this.store.get("lmStudioPort");return`${e}:${o}`}getOllamaUrl(){let e=this.store.get("ollamaUrl"),o=this.store.get("ollamaPort");return`${e}:${o}`}reset(){this.store.clear()}},a=new B});var ie={};R(ie,{LMStudioService:()=>D,lmStudioService:()=>v});import{execa as N}from"execa";import re from"ora";import W from"chalk";var he,D,v,x=E(()=>{"use strict";b();he=[1234,1235,1236,1237],D=class{constructor(){}getBaseUrl(){return a.getLMStudioUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){let e=["/api/v1/models","/api/models","/v1/models","/api/ls-model/list"];for(let o of e)try{let i=await fetch(this.getApiUrl(o),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!i.ok)continue;let n=await i.json(),r=[];return Array.isArray(n)?r=n:n.models&&Array.isArray(n.models)?r=n.models:n.data&&Array.isArray(n.data)&&(r=n.data),r.map(d=>{let l=d;return{id:String(l.key||l.id||l.model||""),name:String(l.display_name||l.name||l.id||l.model||""),size:Number(l.size_bytes||l.size||l.file_size||0),quantization:String(l.quantization?typeof l.quantization=="object"?l.quantization.name:l.quantization:"")}}).filter(d=>d.id&&d.name)}catch{continue}return[]}async getStatus(){if(!await this.checkStatus())return{running:!1,port:a.get("lmStudioPort"),models:[]};try{let o=await fetch(this.getApiUrl("/api/status"),{method:"GET",signal:AbortSignal.timeout(1e4)});if(!o.ok)return{running:!1,port:a.get("lmStudioPort"),models:[]};let i=await o.json();return{running:!0,port:a.get("lmStudioPort"),models:i.models??[],activeModel:i.active_model}}catch{return{running:!1,port:a.get("lmStudioPort"),models:[]}}}async loadModel(e,o){let i=o??re({text:`Loading model: ${W.cyan(e)}`,color:"cyan"}).start();try{let n=a.get("lmStudioContextLength"),r=["/api/v1/models/load","/api/model/load"],d=async(g,u)=>{let c={model:e};u&&(c.context_length=n);let m=await fetch(this.getApiUrl(g),{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify(c),signal:AbortSignal.timeout(3e5)});if(m.ok)return;let S="";try{S=await m.text()}catch{S=""}let M=S?` ${S}`:"";throw new Error(`Failed to load model: ${m.status} ${m.statusText}${M}`)},l;for(let g of r)try{return await d(g,!0),i.succeed(`Model ${W.green(e)} loaded successfully`),a.setModel(e),{loadSpinner:i}}catch(u){l=u;let c=u instanceof Error?u.message:String(u);if(/context|token|max.*(context|token)|too.*large/i.test(c))try{return i.warn(`Model load failed with context_length=${n}. Retrying with LM Studio defaults...`),await d(g,!1),i.succeed(`Model ${W.green(e)} loaded successfully`),a.setModel(e),{loadSpinner:i}}catch(S){l=S}}throw l instanceof Error?l:new Error(String(l))}catch(n){throw i.fail(`Failed to load model: ${n instanceof Error?n.message:"Unknown error"}`),n}}async startServer(e){let o=e??re({text:"Starting LM Studio server...",color:"cyan"}).start();try{let i=process.platform==="darwin",n=process.platform==="linux",r=process.platform==="win32",d;if(i){let l=["/Applications/LM Studio.app",`${process.env.HOME}/Applications/LM Studio.app`];for(let g of l)try{let{existsSync:u}=await import("fs");if(u(g)){d=`open "${g}" --args --server`;break}}catch{}if(d?.startsWith("open")){await N("open",[l.find(g=>{try{let{existsSync:u}=fe("fs");return u(g)}catch{return!1}})||"/Applications/LM Studio.app","--args","--server"],{detached:!0,stdio:"ignore"}),o.succeed("LM Studio server started"),await this.waitForServer(6e4);return}}else n?d=await this.findLinuxBinary():r&&(d=await this.findWindowsExecutable());if(!d)throw o.fail("LM Studio not found. Please install it from https://lmstudio.ai"),new Error("LM Studio not installed");await N(d,["--server"],{detached:!0,stdio:"ignore",env:{...process.env,LM_STUDIO_SERVER_PORT:String(a.get("lmStudioPort"))}}),o.succeed("LM Studio server started"),await this.waitForServer(6e4)}catch(i){throw o.fail(`Failed to start LM Studio: ${i instanceof Error?i.message:"Unknown error"}`),i}}async findLinuxBinary(){let e=["/usr/bin/lm-studio","/usr/local/bin/lm-studio",`${process.env.HOME}/.local/bin/lm-studio`];for(let o of e)try{return await N("test",["-x",o]),o}catch{continue}}async findWindowsExecutable(){let e=process.env.LOCALAPPDATA,o=process.env.PROGRAMFILES,i=[e?`${e}\\Programs\\LM Studio\\lm-studio.exe`:"",o?`${o}\\LM Studio\\lm-studio.exe`:""].filter(Boolean);for(let n of i)try{return await N("cmd",["/c","if exist",`"${n}"`,"echo","yes"]),n}catch{continue}}async waitForServer(e=6e4){let o=Date.now();for(;Date.now()-o<e;){if(await this.checkStatus())return!0;await this.sleep(2e3)}return!1}sleep(e){return new Promise(o=>setTimeout(o,e))}async findAvailablePort(){for(let e of he)try{if((await fetch(`http://localhost:${e}/api/status`,{method:"GET",signal:AbortSignal.timeout(1e3)})).ok)return a.set("lmStudioPort",e),e}catch{continue}return a.get("lmStudioPort")}},v=new D});var ae={};R(ae,{OllamaService:()=>I,ollamaService:()=>k});var I,k,z=E(()=>{"use strict";b();I=class{getBaseUrl(){return a.getOllamaUrl()}getApiUrl(e){return`${this.getBaseUrl()}${e}`}async checkStatus(){try{return(await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(3e3)})).ok}catch{return!1}}async getAvailableModels(){try{let e=await fetch(this.getApiUrl("/api/tags"),{method:"GET",signal:AbortSignal.timeout(1e4)});return e.ok?((await e.json()).models??[]).map(n=>{let r=n.details??{};return{id:String(n.name??n.model??""),name:String(n.name??n.model??""),size:Number(n.size??0),quantization:String(r.quantization_level??""),family:String(r.family??""),parameterSize:String(r.parameter_size??"")}}).filter(n=>n.id&&n.name):[]}catch{return[]}}async getRunningModels(){try{let e=await fetch(this.getApiUrl("/api/ps"),{method:"GET",signal:AbortSignal.timeout(5e3)});return e.ok?((await e.json()).models??[]).map(n=>String(n.name??n.model??"")).filter(Boolean):[]}catch{return[]}}async getStatus(){if(!await this.checkStatus())return{running:!1,port:a.get("ollamaPort"),models:[],runningModels:[]};let[o,i]=await Promise.all([this.getAvailableModels(),this.getRunningModels()]);return{running:!0,port:a.get("ollamaPort"),models:o,runningModels:i}}},k=new I});var le={};R(le,{ClaudeService:()=>q,claudeService:()=>L});import{execa as J}from"execa";import $e from"chalk";var q,L,Q=E(()=>{"use strict";b();q=class{async isClaudeCodeInstalled(){try{return await J("claude",["--version"],{stdio:"ignore"}),!0}catch{return!1}}async run(e){let{model:o,args:i=[],verbose:n=!1}=e,r=this.extractProvider(o),d=this.extractModelName(o);if(!r||!d)throw new Error(`Invalid model format: ${o}. Expected format: provider/model-name`);let l=["--model",d,...i];n&&console.log($e.dim(`
3
+ Running: claude ${l.join(" ")}
4
+ `));let g=r==="ollama"?a.getOllamaUrl():a.getLMStudioUrl(),u=r==="ollama"?"ollama":"lmstudio",c=`${g}/v1/messages`;try{let m=await fetch(c,{method:"POST",headers:{"Content-Type":"application/json","x-api-key":u},body:JSON.stringify({model:d,max_tokens:1,messages:[{role:"user",content:"test"}]}),signal:AbortSignal.timeout(3e3)});if(!m.ok&&m.status>=500)throw new Error(`HTTP ${m.status} ${m.statusText}`)}catch(m){let S=m instanceof Error?m.message:String(m),M=r==="ollama"?"Claude Code requires an Anthropic-compatible API. Ollama does not support this. Use --native flag to use Ollama's built-in launch, or switch to LM Studio.":"Ensure LM Studio server is running and the Anthropic Compatibility server is enabled. Check the Developer tab in LM Studio.";throw new Error(`Claude Code could not reach an Anthropic-compatible endpoint at ${c} (${S}). ${M}`)}try{await J("claude",l,{stdio:"inherit",env:{...process.env,ANTHROPIC_MODEL:d,ANTHROPIC_BASE_URL:g,ANTHROPIC_AUTH_TOKEN:u,ANTHROPIC_API_KEY:""}})}catch(m){if(m instanceof Error&&"exitCode"in m){let S=m.exitCode;process.exit(S??1)}throw m}}extractProvider(e){return e.split("/")[0]}extractModelName(e){let o=e.split("/");if(!(o.length<2))return o.slice(1).join("/")}async getVersion(){try{return(await J("claude",["--version"])).stdout}catch{return}}},L=new q});import{Command as Ee}from"commander";import f from"chalk";x();z();b();import _ from"ora";import p from"chalk";import F from"inquirer";import se from"inquirer";async function C(t,e){let o=t.map(r=>({name:`${r.name} (${r.id})`,value:r,short:r.name})),i=e?o.findIndex(r=>r.value.id===e):0;return(await se.prompt([{type:"list",name:"model",message:"Select a model to load:",choices:o,default:Math.max(0,i),pageSize:Math.min(t.length,15)}])).model}async function T(t,e=!0){return(await se.prompt([{type:"confirm",name:"confirm",message:t,default:e}])).confirm}import H from"chalk";function P(t){if(t===0)return"0 B";let e=1024,o=["B","KB","MB","GB","TB"],i=Math.floor(Math.log(t)/Math.log(e));return`${parseFloat((t/Math.pow(e,i)).toFixed(2))} ${o[i]}`}function w(t){console.log(H.green("\u2713")+" "+t)}function V(t){console.error(H.red("\u2717")+" "+t)}function j(t){console.log(H.blue("\u2139")+" "+t)}function h(t,e=1){V(t),process.exit(e)}async function Se(){let t=a.getDefaultBackend(),{backendSelection:e}=await F.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",default:t??"lmstudio",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]),{saveDefault:o}=await F.prompt([{type:"confirm",name:"saveDefault",message:"Save as default backend?",default:!1}]);return o&&(a.setDefaultBackend(e),w(`Default backend set to ${p.cyan(e)}`)),e}async function ve(t){let e=a.getDefaultProvider(),o=[{name:"Claude Code",value:"claude"}],i=e??"claude",{providerSelection:n}=await F.prompt([{type:"list",name:"providerSelection",message:"Select coding tool:",default:i,choices:o}]),{saveDefault:r}=await F.prompt([{type:"confirm",name:"saveDefault",message:"Save as default coding tool?",default:!1}]);return r&&(a.setDefaultProvider(n),w(`Default coding tool set to ${p.cyan(n)}`)),n}async function ye(t,e){let o=_({text:"Checking LM Studio status...",color:"cyan"}).start(),i=await v.checkStatus();i||(o.info("LM Studio server not running"),o.stop(),await T("Would you like to start the LM Studio server?")||h("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await v.startServer(),i=!0),o.succeed("Connected to LM Studio");let n=_({text:"Fetching available models...",color:"cyan"}).start(),r=await v.getAvailableModels();r.length===0&&(n.fail("No models found. Download some models in LM Studio first."),h("No models available")),n.succeed(`Found ${p.bold(r.length)} model${r.length===1?"":"s"}`),console.log(),console.log(p.bold("Available Models:")),console.log(p.dim("\u2500".repeat(process.stdout.columns||80))),r.forEach((c,m)=>{let S=P(c.size),M=c.loaded?p.green(" [LOADED]"):"";console.log(` ${p.dim(String(m+1).padStart(2))}. ${c.name} ${p.dim(`(${S})`)}${M}`)}),console.log();let d=a.getLastUsedModel(),l=t.model,g=l?r.find(c=>c.id===l||c.name.includes(l)):await C(r,d);g||h("No model selected"),await v.loadModel(g.id,o);let u=g.id;w(p.bold(`
5
+ Model ready: ${g.name}`)),console.log(),console.log("Start your interactive coding session:"),console.log(` ${p.cyan(`aix-cli run --provider ${e} --backend lmstudio --model ${u}`)}`),console.log()}async function we(t,e){let o=_({text:"Checking Ollama status...",color:"cyan"}).start();await k.checkStatus()||(o.fail("Ollama is not running"),h("Ollama must be running. Start it with: ollama serve")),o.succeed("Connected to Ollama");let n=_({text:"Fetching available models...",color:"cyan"}).start(),r=await k.getAvailableModels();r.length===0&&(n.fail("No models found. Pull a model first: ollama pull <model>"),h("No models available")),n.succeed(`Found ${p.bold(r.length)} model${r.length===1?"":"s"}`);let d=await k.getRunningModels();console.log(),console.log(p.bold("Available Models:")),console.log(p.dim("\u2500".repeat(process.stdout.columns||80))),r.forEach((c,m)=>{let S=P(c.size),ue=d.includes(c.id)?p.green(" [RUNNING]"):"",me=c.parameterSize?p.dim(` ${c.parameterSize}`):"";console.log(` ${p.dim(String(m+1).padStart(2))}. ${c.name}${me} ${p.dim(`(${S})`)}${ue}`)}),console.log();let l=a.getLastUsedModel(),g=t.model,u=g?r.find(c=>c.id===g||c.name.includes(g)):await C(r,l);u||h("No model selected"),a.setModel(u.id),w(p.bold(`
6
+ Model selected: ${u.name}`)),console.log(),console.log("Start your interactive coding session:"),console.log(` ${p.cyan(`aix-cli run --provider ${e} --backend ollama --model ${u.id}`)}`),console.log()}async function X(t={}){let e=t.backend??await Se(),o=t.provider??await ve(e);e==="ollama"?await we(t,o):await ye(t,o)}x();Q();b();import K from"ora";import G from"chalk";import be from"inquirer";import{execa as ke}from"execa";async function Me(t){let e=a.getDefaultBackend();if(e)return e;let{backendSelection:o}=await be.prompt([{type:"list",name:"backendSelection",message:"Select model backend:",choices:[{name:"\u{1F5A5}\uFE0F LM Studio",value:"lmstudio"},{name:"\u{1F999} Ollama",value:"ollama"}]}]);return o}async function Pe(){return await L.isClaudeCodeInstalled()||h("Claude Code is not installed."),"claude"}function A(t){return"Claude Code"}async function xe(t,e){let o=K({text:"Checking LM Studio status...",color:"cyan"}).start();await v.findAvailablePort();let i=await v.checkStatus();i||(o.info("LM Studio server not running"),o.stop(),await T("Would you like to start the LM Studio server?")||h("LM Studio server must be running. Start it manually or use the Server tab in LM Studio."),await v.startServer(),i=!0),o.succeed("Connected to LM Studio");let n=K({text:"Fetching available models...",color:"cyan"}).start(),r=await v.getAvailableModels();r.length===0&&(n.fail("No models found. Download some models in LM Studio first."),h("No models available")),n.stop();let d;if(t.model){let u=r.find(c=>c.id===t.model||c.name.toLowerCase().includes(t.model.toLowerCase()));u||h(`Model "${t.model}" not found. Available models: ${r.map(c=>c.name).join(", ")}`),d=u.id}else{let u=a.getLastUsedModel();d=(await C(r,u)).id}let l=K({text:`Loading model: ${G.cyan(d)}`,color:"cyan"}).start();await v.loadModel(d,l);let g=`lmstudio/${d}`;await Ce(e,g,t)}async function Ce(t,e,o){let i=A(t);w(G.green(`
9
7
  Starting ${i} with model: ${e}
10
- `));try{t==="opencode"?await I.run({model:e,args:o.args??[],verbose:o.verbose}):await N.run({model:e,args:o.args??[],verbose:o.verbose})}catch(n){S(`Failed to run ${i}: ${n instanceof Error?n.message:"Unknown error"}`)}}async function te(t={}){let e;if(t.provider)e=t.provider;else{let r=l.getDefaultProvider();r?e=r:e=await Ae()}let o=x({text:`Checking ${U(e)} installation...`,color:"cyan"}).start();(e==="opencode"?await I.isOpenCodeInstalled():await N.isClaudeCodeInstalled())||(o.fail(`${U(e)} is not installed.`),S(`Please install ${U(e)} first.`)),o.succeed(`${U(e)} is installed`);let n=t.backend??await Le(e);n==="ollama"&&e==="claude"&&S("Claude Code requires an Anthropic-compatible API. LM Studio supports this natively, but Ollama does not. Please select OpenCode for Ollama, or switch to LM Studio."),n==="ollama"?await Re(t,e):await Ee(t,e)}E();R();import c from"chalk";async function ne(){let[t,e]=await Promise.all([y.getStatus(),$.getStatus()]);console.log(),console.log(c.bold("LM Studio")),console.log(c.dim("\u2500".repeat(50))),console.log(` ${t.running?c.green("\u25CF"):c.red("\u25CB")} Server: ${t.running?c.green("Running"):c.red("Stopped")}`),console.log(` ${c.dim("\u25B8")} Port: ${c.cyan(String(t.port))}`),console.log(` ${c.dim("\u25B8")} URL: ${c.cyan(`http://localhost:${t.port}`)}`),t.activeModel&&console.log(` ${c.dim("\u25B8")} Active Model: ${c.green(t.activeModel)}`),t.running&&t.models.length>0?(console.log(),console.log(c.bold(" Models")),t.models.forEach((o,i)=>{let n=C(o.size),r=o.id===t.activeModel?` ${c.green("[LOADED]")}`:"";console.log(` ${c.dim(String(i+1)+".")} ${o.name}${r}`),console.log(` ${c.dim("ID:")} ${o.id}`),console.log(` ${c.dim("Size:")} ${n}`),o.quantization&&console.log(` ${c.dim("Quantization:")} ${o.quantization}`)})):t.running&&console.log(` ${c.dim("No models available")}`),console.log(),console.log(c.bold("Ollama")),console.log(c.dim("\u2500".repeat(50))),console.log(` ${e.running?c.green("\u25CF"):c.red("\u25CB")} Server: ${e.running?c.green("Running"):c.red("Stopped")}`),console.log(` ${c.dim("\u25B8")} Port: ${c.cyan(String(e.port))}`),console.log(` ${c.dim("\u25B8")} URL: ${c.cyan(`http://localhost:${e.port}`)}`),e.running&&e.runningModels.length>0&&console.log(` ${c.dim("\u25B8")} Running: ${c.green(e.runningModels.join(", "))}`),e.running&&e.models.length>0?(console.log(),console.log(c.bold(" Models")),e.models.forEach((o,i)=>{let n=C(o.size),a=e.runningModels.includes(o.id)?` ${c.green("[RUNNING]")}`:"",s=o.parameterSize?` ${c.dim(o.parameterSize)}`:"";console.log(` ${c.dim(String(i+1)+".")} ${o.name}${s}${a}`),console.log(` ${c.dim("Size:")} ${n}`),o.family&&console.log(` ${c.dim("Family:")} ${o.family}`),o.quantization&&console.log(` ${c.dim("Quantization:")} ${o.quantization}`)})):e.running&&console.log(` ${c.dim("No models available")}`),console.log()}import Ne from"ora";import B from"chalk";import{execa as ve}from"execa";import{readFileSync as Ie}from"fs";import{fileURLToPath as Ue}from"url";function re(){try{let t=Ue(new URL("../../package.json",import.meta.url)),e=JSON.parse(Ie(t,"utf8"));return String(e.version)}catch{return"unknown"}}async function ie(){let t=Ne({text:"Checking for updates...",color:"cyan"}).start();try{let e=re();if(e==="unknown"){t.fail("Could not determine current version.");return}let{stdout:o}=await ve("npm",["view","@iamharshil/aix-cli","version"]),i=o.trim();if(e===i){t.succeed(`You're already on the latest version: ${B.green(`v${e}`)}`);return}t.text=`Updating: ${B.yellow(`v${e}`)} \u2192 ${B.green(`v${i}`)}...`,await ve("npm",["install","-g","@iamharshil/aix-cli@latest"]),t.succeed(`Successfully updated to ${B.green(`v${i}`)}! \u{1F680}`),F(`Restart your terminal or run ${B.cyan("aix-cli --help")} to see what's new.`)}catch(e){t.fail("Failed to update."),X(e instanceof Error?e.message:String(e))}}P();import w from"chalk";import Be from"inquirer";async function ae(t,e,o){if(t==="reset"){let{confirm:n}=await Be.prompt([{type:"confirm",name:"confirm",message:"Are you sure you want to completely reset all configuration to defaults?",default:!1}]);n?(l.reset(),b("Configuration has been reset to defaults.")):F("Reset cancelled.");return}if(t==="set"&&e&&o){let n=o;o==="true"?n=!0:o==="false"?n=!1:Number.isNaN(Number(o))||(n=Number(o)),l.set(e,n),b(`Set ${w.cyan(e)} to ${w.green(o)}`);return}if(t==="unset"&&e){l.delete(e),b(`Unset configuration key ${w.cyan(e)}`);return}console.log(),console.log(w.bold.cyan("\u2699\uFE0F AIX CLI Configuration")),console.log(w.dim("\u2500".repeat(40))),["defaultBackend","defaultProvider","model","lmStudioUrl","lmStudioPort","lmStudioContextLength","ollamaUrl","ollamaPort","defaultTimeout","autoStartServer"].forEach(n=>{let r=l.get(n);console.log(r!==void 0?` ${w.bold(n)}: ${w.green(r)}`:` ${w.bold(n)}: ${w.dim("not set")}`)}),console.log(),console.log(w.dim("Commands:")),console.log(w.dim(" aix-cli config set <key> <value>")),console.log(w.dim(" aix-cli config unset <key>")),console.log(w.dim(" aix-cli config reset")),console.log()}var M=new De;M.name("aix-cli").description("Run Claude Code or OpenCode with local AI models from LM Studio or Ollama").version(re()).option("--ollama","Shortcut to use Ollama backend").option("--lmstudio","Shortcut to use LM Studio backend").showHelpAfterError();function D(t=0){console.log(),console.log(f.dim(t===0?"\u{1F44B} Goodbye!":"\u274C Cancelled.")),process.exit(t)}process.on("SIGINT",()=>D(0));process.on("SIGTERM",()=>D(0));process.on("uncaughtException",t=>{t.message?.includes("ExitPromptError")||t.message?.includes("User force closed")||t.message?.includes("prompt")?D(0):(console.error(f.red("Error:"),t.message),process.exit(1))});process.on("unhandledRejection",t=>{let e=String(t);(e.includes("ExitPromptError")||e.includes("User force closed")||e.includes("prompt"))&&D(0)});M.command("init",{isDefault:!1}).aliases(["i","load"]).description("Select a backend, load a model, and configure your provider").option("-m, --model <name>","Model name or ID to load").option("-p, --provider <provider>","Coding tool to use (claude or opencode)").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)").action(t=>{let e=M.opts();return e.ollama&&(t.backend="ollama"),e.lmstudio&&(t.backend="lmstudio"),Y(t)});M.command("run",{isDefault:!1}).aliases(["r"]).description("Run Claude Code or OpenCode with a model from LM Studio or Ollama").option("-m, --model <name>","Model name or ID to use").option("-p, --provider <provider>","Coding tool to use (claude or opencode)").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)").option("-v, --verbose","Show verbose output").argument("[args...]","Additional arguments for the provider").action(async(t,e)=>{let o=M.opts();o.ollama&&(e.backend="ollama"),o.lmstudio&&(e.backend="lmstudio"),await te({...e,args:t})});M.command("status",{isDefault:!1}).aliases(["s","stats"]).description("Show LM Studio and Ollama status and available models").action(ne);M.command("doctor",{isDefault:!1}).aliases(["d","check"]).description("Check system requirements and configuration").action(async()=>{let{lmStudioService:t}=await Promise.resolve().then(()=>(E(),de)),{ollamaService:e}=await Promise.resolve().then(()=>(R(),ce)),{claudeService:o}=await Promise.resolve().then(()=>(Z(),me)),{openCodeService:i}=await Promise.resolve().then(()=>(oe(),ge)),{configService:n}=await Promise.resolve().then(()=>(P(),se));console.log(f.bold.cyan("\u{1F527} AIX CLI System Check")),console.log(f.dim("\u2500".repeat(40)));let[r,a,s,u]=await Promise.all([t.checkStatus(),e.checkStatus(),o.isClaudeCodeInstalled(),i.isOpenCodeInstalled()]),d=n.getDefaultProvider(),m=n.getDefaultBackend(),g=n.get("lmStudioPort"),p=n.get("ollamaPort");console.log(),console.log(f.bold("Backends")),console.log(` ${r?"\u2705":"\u26A0\uFE0F"} LM Studio: ${r?f.green("Running"):f.yellow("Not running")} ${f.dim(`(port ${g})`)}`),console.log(` ${a?"\u2705":"\u26A0\uFE0F"} Ollama: ${a?f.green("Running"):f.yellow("Not running")} ${f.dim(`(port ${p})`)}`),console.log(),console.log(f.bold("Coding Tools")),console.log(` ${s?"\u2705":"\u274C"} Claude Code: ${s?f.green("Installed"):f.red("Not installed")}`),console.log(` ${u?"\u2705":"\u274C"} OpenCode: ${u?f.green("Installed"):f.red("Not installed")}`),console.log(),console.log(f.bold("Defaults")),console.log(` \u{1F4CC} Backend: ${f.cyan(m??"not set")}`),console.log(` \u{1F4CC} Coding tool: ${f.cyan(d??"not set")}`);let h=[];s||h.push(` \u2192 ${f.cyan("npm install -g @anthropic-ai/claude-code")}`),u||h.push(` \u2192 ${f.cyan("npm install -g opencode-ai")}`),!r&&!a&&h.push(` \u2192 Start LM Studio or run ${f.cyan("ollama serve")}`),h.length>0&&(console.log(),console.log(f.bold("\u{1F4CB} Next Steps:")),h.forEach(O=>console.log(O))),console.log()});M.command("update",{isDefault:!1}).aliases(["upgrade","u"]).description("Update AIX CLI to the latest version").action(ie);M.command("config [action] [key] [value]",{isDefault:!1}).aliases(["c","settings"]).description("View, set, or reset AIX CLI configuration constraints").action(ae);M.parse();
8
+ `));try{await L.run({model:e,args:o.args??[],verbose:o.verbose})}catch(n){h(`Failed to run ${i}: ${n instanceof Error?n.message:"Unknown error"}`)}}async function Y(t={}){let e;if(t.provider)e=t.provider;else{let r=a.getDefaultProvider();r?e=r:e=await Pe()}let o=K({text:`Checking ${A(e)} installation...`,color:"cyan"}).start();await L.isClaudeCodeInstalled()||(o.fail(`${A(e)} is not installed.`),h(`Please install ${A(e)} first.`)),o.succeed(`${A(e)} is installed`);let n=t.backend??await Me(e);if(t.native&&n==="ollama"){let r=t.model??a.getLastUsedModel();r||h("No model specified. Use -m, --model to specify a model."),await de(r,t.args??[],t.verbose??!1);return}if(n==="ollama"){let r=t.model??a.getLastUsedModel();r||h("No model specified. Use -m, --model to specify a model."),await de(r,t.args??[],t.verbose??!1);return}await xe(t,e)}async function de(t,e,o){w(G.green(`
9
+ Starting Claude Code with Ollama native integration...
10
+ `)),o&&console.log(G.dim(`Running: ollama launch claude --model ${t}
11
+ `));try{await ke("ollama",["launch","claude","--model",t,...e],{stdio:"inherit"})}catch(i){h(`Failed to launch Claude Code via Ollama: ${i instanceof Error?i.message:"Unknown error"}`)}}x();z();import s from"chalk";async function Z(){let[t,e]=await Promise.all([v.getStatus(),k.getStatus()]);console.log(),console.log(s.bold("LM Studio")),console.log(s.dim("\u2500".repeat(50))),console.log(` ${t.running?s.green("\u25CF"):s.red("\u25CB")} Server: ${t.running?s.green("Running"):s.red("Stopped")}`),console.log(` ${s.dim("\u25B8")} Port: ${s.cyan(String(t.port))}`),console.log(` ${s.dim("\u25B8")} URL: ${s.cyan(`http://localhost:${t.port}`)}`),t.activeModel&&console.log(` ${s.dim("\u25B8")} Active Model: ${s.green(t.activeModel)}`),t.running&&t.models.length>0?(console.log(),console.log(s.bold(" Models")),t.models.forEach((o,i)=>{let n=P(o.size),r=o.id===t.activeModel?` ${s.green("[LOADED]")}`:"";console.log(` ${s.dim(String(i+1)+".")} ${o.name}${r}`),console.log(` ${s.dim("ID:")} ${o.id}`),console.log(` ${s.dim("Size:")} ${n}`),o.quantization&&console.log(` ${s.dim("Quantization:")} ${o.quantization}`)})):t.running&&console.log(` ${s.dim("No models available")}`),console.log(),console.log(s.bold("Ollama")),console.log(s.dim("\u2500".repeat(50))),console.log(` ${e.running?s.green("\u25CF"):s.red("\u25CB")} Server: ${e.running?s.green("Running"):s.red("Stopped")}`),console.log(` ${s.dim("\u25B8")} Port: ${s.cyan(String(e.port))}`),console.log(` ${s.dim("\u25B8")} URL: ${s.cyan(`http://localhost:${e.port}`)}`),e.running&&e.runningModels.length>0&&console.log(` ${s.dim("\u25B8")} Running: ${s.green(e.runningModels.join(", "))}`),e.running&&e.models.length>0?(console.log(),console.log(s.bold(" Models")),e.models.forEach((o,i)=>{let n=P(o.size),d=e.runningModels.includes(o.id)?` ${s.green("[RUNNING]")}`:"",l=o.parameterSize?` ${s.dim(o.parameterSize)}`:"";console.log(` ${s.dim(String(i+1)+".")} ${o.name}${l}${d}`),console.log(` ${s.dim("Size:")} ${n}`),o.family&&console.log(` ${s.dim("Family:")} ${o.family}`),o.quantization&&console.log(` ${s.dim("Quantization:")} ${o.quantization}`)})):e.running&&console.log(` ${s.dim("No models available")}`),console.log()}import Le from"ora";import O from"chalk";import{execa as ce}from"execa";import{readFileSync as Ae}from"fs";import{fileURLToPath as Oe}from"url";function ee(){try{let t=Oe(new URL("../../package.json",import.meta.url)),e=JSON.parse(Ae(t,"utf8"));return String(e.version)}catch{return"unknown"}}async function oe(){let t=Le({text:"Checking for updates...",color:"cyan"}).start();try{let e=ee();if(e==="unknown"){t.fail("Could not determine current version.");return}let{stdout:o}=await ce("npm",["view","@iamharshil/aix-cli","version"]),i=o.trim();if(e===i){t.succeed(`You're already on the latest version: ${O.green(`v${e}`)}`);return}t.text=`Updating: ${O.yellow(`v${e}`)} \u2192 ${O.green(`v${i}`)}...`,await ce("npm",["install","-g","@iamharshil/aix-cli@latest"]),t.succeed(`Successfully updated to ${O.green(`v${i}`)}! \u{1F680}`),j(`Restart your terminal or run ${O.cyan("aix-cli --help")} to see what's new.`)}catch(e){t.fail("Failed to update."),V(e instanceof Error?e.message:String(e))}}b();import y from"chalk";import Ue from"inquirer";async function te(t,e,o){if(t==="reset"){let{confirm:n}=await Ue.prompt([{type:"confirm",name:"confirm",message:"Are you sure you want to completely reset all configuration to defaults?",default:!1}]);n?(a.reset(),w("Configuration has been reset to defaults.")):j("Reset cancelled.");return}if(t==="set"&&e&&o){let n=o;o==="true"?n=!0:o==="false"?n=!1:Number.isNaN(Number(o))||(n=Number(o)),a.set(e,n),w(`Set ${y.cyan(e)} to ${y.green(o)}`);return}if(t==="unset"&&e){a.delete(e),w(`Unset configuration key ${y.cyan(e)}`);return}console.log(),console.log(y.bold.cyan("\u2699\uFE0F AIX CLI Configuration")),console.log(y.dim("\u2500".repeat(40))),["defaultBackend","defaultProvider","model","lmStudioUrl","lmStudioPort","lmStudioContextLength","ollamaUrl","ollamaPort","defaultTimeout","autoStartServer"].forEach(n=>{let r=a.get(n);console.log(r!==void 0?` ${y.bold(n)}: ${y.green(r)}`:` ${y.bold(n)}: ${y.dim("not set")}`)}),console.log(),console.log(y.dim("Commands:")),console.log(y.dim(" aix-cli config set <key> <value>")),console.log(y.dim(" aix-cli config unset <key>")),console.log(y.dim(" aix-cli config reset")),console.log()}var $=new Ee;$.name("aix-cli").description("Run Claude Code with local AI models from LM Studio or Ollama").version(ee()).option("--ollama","Shortcut to use Ollama backend").option("--lmstudio","Shortcut to use LM Studio backend").showHelpAfterError();function U(t=0){console.log(),console.log(f.dim(t===0?"\u{1F44B} Goodbye!":"\u274C Cancelled.")),process.exit(t)}process.on("SIGINT",()=>U(0));process.on("SIGTERM",()=>U(0));process.on("uncaughtException",t=>{t.message?.includes("ExitPromptError")||t.message?.includes("User force closed")||t.message?.includes("prompt")?U(0):(console.error(f.red("Error:"),t.message),process.exit(1))});process.on("unhandledRejection",t=>{let e=String(t);(e.includes("ExitPromptError")||e.includes("User force closed")||e.includes("prompt"))&&U(0)});$.command("init",{isDefault:!1}).aliases(["i","load"]).description("Select a backend, load a model, and configure your provider").option("-m, --model <name>","Model name or ID to load").option("-p, --provider <provider>","Coding tool to use (claude)").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)").action(t=>{let e=$.opts();return e.ollama&&(t.backend="ollama"),e.lmstudio&&(t.backend="lmstudio"),X(t)});$.command("run",{isDefault:!1}).aliases(["r"]).description("Run Claude Code with a model from LM Studio or Ollama").option("-m, --model <name>","Model name or ID to use").option("-p, --provider <provider>","Coding tool to use (claude)").option("-b, --backend <backend>","Model backend to use (lmstudio or ollama)").option("-n, --native","Use Ollama's native Claude Code integration (ollama launch)").option("-v, --verbose","Show verbose output").argument("[args...]","Additional arguments for the provider").action(async(t,e)=>{let o=$.opts();o.ollama&&(e.backend="ollama"),o.lmstudio&&(e.backend="lmstudio"),await Y({...e,args:t})});$.command("status",{isDefault:!1}).aliases(["s","stats"]).description("Show LM Studio and Ollama status and available models").action(Z);$.command("doctor",{isDefault:!1}).aliases(["d","check"]).description("Check system requirements and configuration").action(async()=>{let{lmStudioService:t}=await Promise.resolve().then(()=>(x(),ie)),{ollamaService:e}=await Promise.resolve().then(()=>(z(),ae)),{claudeService:o}=await Promise.resolve().then(()=>(Q(),le)),{configService:i}=await Promise.resolve().then(()=>(b(),ne));console.log(f.bold.cyan("\u{1F527} AIX CLI System Check")),console.log(f.dim("\u2500".repeat(40)));let[n,r,d]=await Promise.all([t.checkStatus(),e.checkStatus(),o.isClaudeCodeInstalled()]),l=i.getDefaultProvider(),g=i.getDefaultBackend(),u=i.get("lmStudioPort"),c=i.get("ollamaPort");console.log(),console.log(f.bold("Backends")),console.log(` ${n?"\u2705":"\u26A0\uFE0F"} LM Studio: ${n?f.green("Running"):f.yellow("Not running")} ${f.dim(`(port ${u})`)}`),console.log(` ${r?"\u2705":"\u26A0\uFE0F"} Ollama: ${r?f.green("Running"):f.yellow("Not running")} ${f.dim(`(port ${c})`)}`),console.log(),console.log(f.bold("Coding Tools")),console.log(` ${d?"\u2705":"\u274C"} Claude Code: ${d?f.green("Installed"):f.red("Not installed")}`),console.log(),console.log(f.bold("Defaults")),console.log(` \u{1F4CC} Backend: ${f.cyan(g??"not set")}`),console.log(` \u{1F4CC} Coding tool: ${f.cyan(l??"not set")}`);let m=[];if(d||m.push(` \u2192 ${f.cyan("npm install -g @anthropic-ai/claude-code")}`),!n&&!r&&m.push(` \u2192 Start LM Studio or run ${f.cyan("ollama serve")}`),m.length>0){console.log(),console.log(f.bold("\u{1F4CB} Next Steps:"));for(let S of m)console.log(S)}console.log()});$.command("update",{isDefault:!1}).aliases(["upgrade","u"]).description("Update AIX CLI to the latest version").action(oe);$.command("config [action] [key] [value]",{isDefault:!1}).aliases(["c","settings"]).description("View, set, or reset AIX CLI configuration constraints").action(te);$.parse();
11
12
  //# sourceMappingURL=aix.js.map