free-coding-models 0.1.2 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -5,11 +5,21 @@
5
5
  <img src="https://img.shields.io/badge/models-44-76b900?logo=nvidia" alt="models count">
6
6
  </p>
7
7
 
8
- <h1 align="center">⚡ Free Coding Models</h1>
8
+ <h1 align="center">free-coding-models</h1>
9
+
10
+ <p align="center">
11
+
12
+ ```
13
+ 1. Create a free API key on NVIDIA → https://build.nvidia.com
14
+ 2. npm i -g free-coding-models
15
+ 3. free-coding-models
16
+ ```
17
+
18
+ </p>
9
19
 
10
20
  <p align="center">
11
21
  <strong>Find the fastest coding LLM models in seconds</strong><br>
12
- <sub>Ping free models from multiple providers — pick the best one for OpenCode, Cursor, or any AI coding assistant</sub>
22
+ <sub>Ping free NVIDIA NIM models in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
13
23
  </p>
14
24
 
15
25
  <p align="center">
@@ -22,6 +32,8 @@
22
32
  <a href="#-installation">Installation</a> •
23
33
  <a href="#-usage">Usage</a> •
24
34
  <a href="#-models">Models</a> •
35
+ <a href="#-opencode-integration">OpenCode</a> •
36
+ <a href="#-openclaw-integration">OpenClaw</a> •
25
37
  <a href="#-how-it-works">How it works</a>
26
38
  </p>
27
39
 
@@ -37,11 +49,14 @@
37
49
  - **📈 Rolling averages** — Avg calculated from ALL successful pings since start
38
50
  - **📊 Uptime tracking** — Percentage of successful pings shown in real-time
39
51
  - **🔄 Auto-retry** — Timeout models keep getting retried, nothing is ever "given up on"
40
- - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to launch OpenCode
41
- - **🔌 Auto-configuration** — Detects NVIDIA NIM setup, installs if missing, sets as default model
52
+ - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to act
53
+ - **🔀 Startup mode menu** — Choose between OpenCode and OpenClaw before the TUI launches
54
+ - **💻 OpenCode integration** — Auto-detects NIM setup, sets model as default, launches OpenCode
55
+ - **🦞 OpenClaw integration** — Sets selected model as default provider in `~/.openclaw/openclaw.json`
42
56
  - **🎨 Clean output** — Zero scrollback pollution, interface stays open until Ctrl+C
43
57
  - **📶 Status indicators** — UP ✅ · Timeout ⏳ · Overloaded 🔥 · Not Found 🚫
44
58
  - **🔧 Multi-source support** — Extensible architecture via `sources.js` (add new providers easily)
59
+ - **🏷 Tier filtering** — Filter models by tier letter (S, A, B, C) with `--tier`
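The rolling-average and uptime bullets above boil down to simple accumulations over each model's ping history. Here is a minimal sketch, assuming each model record keeps a `pings` array of `{ code, ms }` entries (the shape the CLI's `ping()` helper returns); the `getAvg`/`getUptime` names appear in the package's function list, but these bodies are illustrative, not the published implementation:

```javascript
// Hypothetical sketch — assumes r.pings holds { code, ms } entries,
// where code '200' marks a successful ping with a numeric latency.
const getAvg = (r) => {
  const ok = r.pings.filter(p => p.code === '200' && typeof p.ms === 'number')
  if (ok.length === 0) return null // no successful pings yet
  return Math.round(ok.reduce((sum, p) => sum + p.ms, 0) / ok.length)
}

const getUptime = (r) => {
  if (r.pings.length === 0) return 0
  const ok = r.pings.filter(p => p.code === '200').length
  return Math.round((ok / r.pings.length) * 100) // percentage 0–100
}
```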
45
60
 
46
61
  ---
47
62
 
@@ -50,11 +65,12 @@
50
65
  Before using `free-coding-models`, make sure you have:
51
66
 
52
67
  1. **Node.js 18+** — Required for native `fetch` API
53
- 2. **OpenCode installed** — [Install OpenCode](https://github.com/opencode-ai/opencode) (`npm install -g opencode`)
54
- 3. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
55
- 4. **API key** — Generate one from Profile API Keys → Generate API Key
68
+ 2. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
69
+ 3. **API key** — Generate one from Profile → API Keys → Generate API Key
70
+ 4. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
71
+ 5. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
56
72
 
57
- > 💡 **Tip:** Without OpenCode installed, you can still use the tool to benchmark models. OpenCode is only needed for the auto-launch feature.
73
+ > 💡 **Tip:** Without OpenCode/OpenClaw installed, you can still benchmark models and get latency data.
58
74
 
59
75
  ---
60
76
 
@@ -81,24 +97,56 @@ bunx free-coding-models YOUR_API_KEY
81
97
  ## 🚀 Usage
82
98
 
83
99
  ```bash
84
- # Just run it — will prompt for API key if not set
100
+ # Just run it — shows a startup menu to pick OpenCode or OpenClaw and prompts for API key if not set
85
101
  free-coding-models
86
102
 
103
+ # Explicitly target OpenCode (current default behavior — TUI + Enter launches OpenCode)
104
+ free-coding-models --opencode
105
+
106
+ # Explicitly target OpenClaw (TUI + Enter sets model as default in OpenClaw)
107
+ free-coding-models --openclaw
108
+
87
109
  # Show only top-tier models (A+, S, S+)
88
110
  free-coding-models --best
89
111
 
90
112
  # Analyze for 10 seconds and output the most reliable model
91
113
  free-coding-models --fiable
114
+
115
+ # Filter models by tier letter
116
+ free-coding-models --tier S # S+ and S only
117
+ free-coding-models --tier A # A+, A, A- only
118
+ free-coding-models --tier B # B+, B only
119
+ free-coding-models --tier C # C only
120
+
121
+ # Combine flags freely
122
+ free-coding-models --openclaw --tier S
123
+ free-coding-models --opencode --best
124
+ ```
125
+
126
+ ### Startup mode menu
127
+
128
+ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get an interactive startup menu:
129
+
130
+ ```
131
+ ⚡ Free Coding Models — Choose your tool
132
+
133
+ ❯ 💻 OpenCode
134
+ Press Enter on a model → launch OpenCode with it as default
135
+
136
+ 🦞 OpenClaw
137
+ Press Enter on a model → set it as default in OpenClaw config
138
+
139
+ ↑↓ Navigate • Enter Select • Ctrl+C Exit
92
140
  ```
93
141
 
142
+ Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.
143
+
94
144
  **How it works:**
95
145
  1. **Ping phase** — All 44 models are pinged in parallel
96
146
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
97
147
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
98
- 4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to launch OpenCode
99
- 5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode:
100
- - ✅ If configured → Sets model as default and launches OpenCode
101
- - ⚠️ If missing → Shows installation instructions and launches OpenCode
148
+ 4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
149
+ 5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
102
150
 
103
151
  Setup wizard:
104
152
 
@@ -161,13 +209,24 @@ free-coding-models
161
209
  - **A-/B+** — Solid performers, good for targeted programming tasks
162
210
  - **B/C** — Lightweight or older models, good for code completion on constrained infra
163
211
 
212
+ ### Filtering by tier
213
+
214
+ Use `--tier` to focus on a specific capability band:
215
+
216
+ ```bash
217
+ free-coding-models --tier S # Only S+ and S (frontier models)
218
+ free-coding-models --tier A # Only A+, A, A- (solid performers)
219
+ free-coding-models --tier B # Only B+, B (lightweight options)
220
+ free-coding-models --tier C # Only C (edge/minimal models)
221
+ ```
222
+
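Each `--tier` letter above expands to a band of tier labels. A minimal sketch of that expansion, assuming a simple lookup table — the `filterByTier` helper is named in the CLI's function list, but this body is an illustration:

```javascript
// Hypothetical sketch of tier filtering — the real filterByTier in
// free-coding-models.js may differ. One letter maps to its tier band.
const TIER_BANDS = {
  S: ['S+', 'S'],
  A: ['A+', 'A', 'A-'],
  B: ['B+', 'B'],
  C: ['C'],
}

function filterByTier(models, letter) {
  const band = TIER_BANDS[letter?.toUpperCase()]
  if (!band) return models // unknown letter: leave the list unfiltered
  return models.filter(m => band.includes(m.tier))
}
```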
164
223
  ---
165
224
 
166
- ## 🔌 Use with OpenCode
225
+ ## 🔌 OpenCode Integration
167
226
 
168
227
  **The easiest way** — let `free-coding-models` do everything:
169
228
 
170
- 1. **Run**: `free-coding-models`
229
+ 1. **Run**: `free-coding-models --opencode` (or choose OpenCode from the startup menu)
171
230
  2. **Wait** for models to be pinged (green ✅ status)
172
231
  3. **Navigate** with ↑↓ arrows to your preferred model
173
232
  4. **Press Enter** — tool automatically:
@@ -175,23 +234,7 @@ free-coding-models
175
234
  - Sets your selected model as default in `~/.config/opencode/opencode.json`
176
235
  - Launches OpenCode with the model ready to use
177
236
 
178
- That's it! No manual config needed.
179
-
180
- ### Manual Setup (Optional)
181
-
182
- If you prefer to configure OpenCode yourself:
183
-
184
- #### Prerequisites
185
-
186
- 1. **OpenCode installed**: `npm install -g opencode` (or equivalent)
187
- 2. **NVIDIA NIM account**: Get a free account at [build.nvidia.com](https://build.nvidia.com)
188
- 3. **API key generated**: Go to Profile → API Keys → Generate API Key
189
-
190
- #### 1. Find your model
191
-
192
- Run `free-coding-models` to see which models are available and fast. The "Latest" column shows real-time latency, "Avg" shows rolling average, and "Up%" shows uptime percentage (reliability over time).
193
-
194
- #### 2. Configure OpenCode
237
+ ### Manual OpenCode Setup (Optional)
195
238
 
196
239
  Create or edit `~/.config/opencode/opencode.json`:
197
240
 
@@ -211,53 +254,88 @@ Create or edit `~/.config/opencode/opencode.json`:
211
254
  }
212
255
  ```
213
256
 
214
- #### 3. Set environment variable
257
+ Then set the environment variable:
215
258
 
216
259
  ```bash
217
260
  export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
218
261
  # Add to ~/.bashrc or ~/.zshrc for persistence
219
262
  ```
220
263
 
221
- #### 4. Use it
222
-
223
264
  Run `/models` in OpenCode and select **NVIDIA NIM** provider and your chosen model.
224
265
 
225
266
  > ⚠️ **Note:** Free models have usage limits based on NVIDIA's tier — check [build.nvidia.com](https://build.nvidia.com) for quotas.
226
267
 
227
- ### Automatic Installation
268
+ ### Automatic Installation Fallback
269
+
270
+ If NVIDIA NIM is not yet configured in OpenCode, the tool:
271
+ - Shows installation instructions in your terminal
272
+ - Creates a `prompt` file in `$HOME/prompt` with the exact configuration
273
+ - Launches OpenCode, which will detect and display the prompt automatically
228
274
 
229
- The tool includes a **smart fallback mechanism**:
275
+ ---
230
276
 
231
- 1. **Primary**: Try to launch OpenCode with the selected model
232
- 2. **Fallback**: If NVIDIA NIM is not detected in `~/.config/opencode/opencode.json`, the tool:
233
- - Shows installation instructions in your terminal
234
- - Creates a `prompt` file in `$HOME/prompt` with the exact configuration to add
235
- - Launches OpenCode, which will detect and display the prompt automatically
277
+ ## 🦞 OpenClaw Integration
236
278
 
237
- This **"prompt" fallback** ensures that even if NVIDIA NIM isn't pre-configured, OpenCode will guide you through installation with the ready-to-use configuration already prepared.
279
+ OpenClaw is an autonomous AI agent daemon. `free-coding-models` can configure it to use NVIDIA NIM models as its default provider — no download or local setup needed; everything runs via the NIM remote API.
238
280
 
239
- #### Example prompt file created at `$HOME/prompt`:
281
+ ### Quick Start
240
282
 
241
- ```json
242
- Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:
283
+ ```bash
284
+ free-coding-models --openclaw
285
+ ```
286
+
287
+ Or run without flags and choose **OpenClaw** from the startup menu.
243
288
 
289
+ 1. **Wait** for models to be pinged
290
+ 2. **Navigate** with ↑↓ arrows to your preferred model
291
+ 3. **Press Enter** — tool automatically:
292
+ - Reads `~/.openclaw/openclaw.json`
293
+ - Adds the `nvidia` provider block (NIM base URL + your API key) if missing
294
+ - Sets `agents.defaults.model.primary` to `nvidia/<model-id>`
295
+ - Saves config and prints next steps
296
+
297
+ ### What gets written to OpenClaw config
298
+
299
+ ```json
244
300
  {
245
- "provider": {
246
- "nvidia": {
247
- "npm": "@ai-sdk/openai-compatible",
248
- "name": "NVIDIA NIM",
249
- "options": {
250
- "baseURL": "https://integrate.api.nvidia.com/v1",
251
- "apiKey": "{env:NVIDIA_API_KEY}"
301
+ "models": {
302
+ "providers": {
303
+ "nvidia": {
304
+ "baseUrl": "https://integrate.api.nvidia.com/v1",
305
+ "api": "openai-completions"
306
+ }
307
+ }
308
+ },
309
+ "env": {
310
+ "NVIDIA_API_KEY": "nvapi-xxxx-your-key"
311
+ },
312
+ "agents": {
313
+ "defaults": {
314
+ "model": {
315
+ "primary": "nvidia/deepseek-ai/deepseek-v3.2"
252
316
  }
253
317
  }
254
318
  }
255
319
  }
320
+ ```
321
+
322
+ > ⚠️ **Note:** `providers` must be nested under `models.providers` — not at the config root. A root-level `providers` key is ignored by OpenClaw.
323
+
324
+ ### After updating OpenClaw config
256
325
 
257
- Then set env var: export NVIDIA_API_KEY=your_key_here
326
+ OpenClaw's gateway **auto-reloads** config file changes (depending on `gateway.reload.mode`). To apply manually:
327
+
328
+ ```bash
329
+ # Apply via CLI
330
+ openclaw models set nvidia/deepseek-ai/deepseek-v3.2
331
+
332
+ # Or re-run the interactive setup wizard
333
+ openclaw configure
258
334
  ```
259
335
 
260
- OpenCode will automatically detect this file when launched and guide you through the installation.
336
+ > ⚠️ **Note:** `openclaw restart` does **not** exist as a CLI command. Kill and relaunch the process manually if you need a full restart.
337
+
338
+ > 💡 **Why use remote NIM models with OpenClaw?** NVIDIA NIM serves models via a fast API — no local GPU required, no VRAM limits, free credits for developers. You get frontier-class coding models (DeepSeek V3, Kimi K2, Qwen3 Coder) without downloading anything.
261
339
 
262
340
  ---
263
341
 
@@ -271,14 +349,12 @@ OpenCode will automatically detect this file when launched and guide you through
271
349
  │ 4. Re-ping ALL models every 2 seconds (forever) │
272
350
  │ 5. Update rolling averages from ALL successful pings │
273
351
  │ 6. User can navigate with ↑↓ and select with Enter │
274
- │ 7. On Enter: stop monitoring, exit alt screen
275
- │ 8. Detect NVIDIA NIM config in OpenCode
276
- │ 9. If configured: update default model, launch OpenCode │
277
- │ 10. If missing: show install prompt, launch OpenCode │
352
+ │ 7. On Enter (OpenCode): set model, launch OpenCode          │
353
+ │ 8. On Enter (OpenClaw): update ~/.openclaw/openclaw.json    │
278
354
  └─────────────────────────────────────────────────────────────┘
279
355
  ```
280
356
 
281
- **Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can launch OpenCode with your chosen model in one keystroke.
357
+ **Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can configure your tool of choice with your chosen model in one keystroke.
282
358
 
283
359
  ---
284
360
 
@@ -295,12 +371,22 @@ OpenCode will automatically detect this file when launched and guide you through
295
371
  - **Monitor mode**: Interface stays open forever, press Ctrl+C to exit
296
372
 
297
373
  **Flags:**
298
- - **--best** — Show only top-tier models (A+, S, S+)
299
- - **--fiable** Analyze for 10 seconds and output the most reliable model in format `provider/model_id`
374
+
375
+ | Flag | Description |
376
+ |------|-------------|
377
+ | *(none)* | Show startup menu to choose OpenCode or OpenClaw |
378
+ | `--opencode` | OpenCode mode — Enter launches OpenCode with selected model |
379
+ | `--openclaw` | OpenClaw mode — Enter sets selected model as default in OpenClaw |
380
+ | `--best` | Show only top-tier models (A+, S, S+) |
381
+ | `--fiable` | Analyze for 10 seconds, then output the most reliable model as `provider/model_id` |
382
+ | `--tier S` | Show only S+ and S tier models |
383
+ | `--tier A` | Show only A+, A, A- tier models |
384
+ | `--tier B` | Show only B+, B tier models |
385
+ | `--tier C` | Show only C tier models |
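A minimal sketch of how the flags in this table could be parsed; the flag names come from the table, while the parsing logic itself is an assumption about the CLI's internals:

```javascript
// Hypothetical flag parser — the real CLI's argument handling may differ.
function parseFlags(argv) {
  const flags = { mode: null, best: false, fiable: false, tier: null }
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i]
    if (arg === '--opencode') flags.mode = 'opencode'
    else if (arg === '--openclaw') flags.mode = 'openclaw'
    else if (arg === '--best') flags.best = true
    else if (arg === '--fiable') flags.fiable = true
    else if (arg === '--tier') flags.tier = argv[++i]?.toUpperCase() ?? null
  }
  // mode === null → show the startup menu
  return flags
}
```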
300
386
 
301
387
  **Keyboard shortcuts:**
302
388
  - **↑↓** — Navigate models
303
- - **Enter** — Select model and launch OpenCode
389
+ - **Enter** — Select model (launches OpenCode or sets OpenClaw default, depending on mode)
304
390
  - **R/T/O/M/P/A/S/V/U** — Sort by Rank/Tier/Origin/Model/Ping/Avg/Status/Verdict/Uptime
305
391
  - **W** — Decrease ping interval (faster pings)
306
392
  - **X** — Increase ping interval (slower pings)
@@ -317,6 +403,21 @@ npm install
317
403
  npm start -- YOUR_API_KEY
318
404
  ```
319
405
 
406
+ ### Releasing a new version
407
+
408
+ 1. Make your changes and commit them with a descriptive message
409
+ 2. Update `CHANGELOG.md` with the new version entry
410
+ 3. Bump `"version"` in `package.json` (e.g. `0.1.3` → `0.1.4`)
411
+ 4. Commit with **just the version number** as the message:
412
+
413
+ ```bash
414
+ git add .
415
+ git commit -m "0.1.4"
416
+ git push
417
+ ```
418
+
419
+ The GitHub Actions workflow automatically publishes to npm on every push to `main`.
420
+
320
421
  ---
321
422
 
322
423
  ## 📄 License
@@ -338,5 +439,8 @@ We welcome contributions! Feel free to open issues, submit pull requests, or get
338
439
  **Q:** How accurate are the latency numbers?
339
440
  **A:** They represent average round-trip times measured during testing; actual performance may vary based on network conditions.
340
441
 
442
+ **Q:** Do I need to download models locally for OpenClaw?
443
+ **A:** No — `free-coding-models` configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
444
+
341
445
  ## 📧 Support
342
446
  For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/QnR8xq9p
@@ -1,30 +1,33 @@
1
1
  #!/usr/bin/env node
2
2
  /**
3
3
  * @file free-coding-models.js
4
- * @description Live terminal availability checker for coding LLM models with OpenCode integration.
4
+ * @description Live terminal availability checker for coding LLM models with OpenCode & OpenClaw integration.
5
5
  *
6
6
  * @details
7
7
  * This CLI tool discovers and benchmarks language models optimized for coding.
8
8
  * It runs in an alternate screen buffer, pings all models in parallel, re-pings successful ones
9
9
  * multiple times for reliable latency measurements, and prints a clean final table.
10
- * During benchmarking, users can navigate with arrow keys and press Enter to launch OpenCode immediately.
10
+ * During benchmarking, users can navigate with arrow keys and press Enter to act on the selected model.
11
11
  *
12
12
  * 🎯 Key features:
13
13
  * - Parallel pings across all models with animated real-time updates
14
- * - Continuous monitoring with 10-second ping intervals (never stops)
14
+ * - Continuous monitoring with 2-second ping intervals (never stops)
15
15
  * - Rolling averages calculated from ALL successful pings since start
16
16
  * - Best-per-tier highlighting with medals (🥇🥈🥉)
17
17
  * - Interactive navigation with arrow keys directly in the table
18
- * - Instant OpenCode launch on Enter key press (any model, even timeout/down)
19
- * - Automatic OpenCode config detection and model setup
18
+ * - Instant OpenCode OR OpenClaw action on Enter key press
19
+ * - Startup mode menu (OpenCode vs OpenClaw) when no flag is given
20
+ * - Automatic config detection and model setup for both tools
20
21
  * - Persistent API key storage in ~/.free-coding-models
21
22
  * - Multi-source support via sources.js (easily add new providers)
22
23
  * - Uptime percentage tracking (successful pings / total pings)
23
24
  * - Sortable columns (R/T/O/M/P/A/S/V/U keys)
25
+ * - Tier filtering via --tier S/A/B/C flags
24
26
  *
25
27
  * → Functions:
26
28
  * - `loadApiKey` / `saveApiKey`: Manage persisted API key in ~/.free-coding-models
27
29
  * - `promptApiKey`: Interactive wizard for first-time API key setup
30
+ * - `promptModeSelection`: Startup menu to choose OpenCode vs OpenClaw
28
31
  * - `ping`: Perform HTTP request to NIM endpoint with timeout handling
29
32
  * - `renderTable`: Generate ASCII table with colored latency indicators and status emojis
30
33
  * - `getAvg`: Calculate average latency from all successful pings
@@ -33,6 +36,9 @@
33
36
  * - `sortResults`: Sort models by various columns
34
37
  * - `checkNvidiaNimConfig`: Check if NVIDIA NIM provider is configured in OpenCode
35
38
  * - `startOpenCode`: Launch OpenCode with selected model (configures if needed)
39
+ * - `loadOpenClawConfig` / `saveOpenClawConfig`: Manage ~/.openclaw/openclaw.json
40
+ * - `startOpenClaw`: Set selected model as default in OpenClaw config (remote, no launch)
41
+ * - `filterByTier`: Filter models by tier letter prefix (S, A, B, C)
36
42
  * - `main`: Orchestrates CLI flow, wizard, ping loops, animation, and output
37
43
  *
38
44
  * 📦 Dependencies:
@@ -45,13 +51,22 @@
45
51
  * - API key stored in ~/.free-coding-models
46
52
  * - Models loaded from sources.js (extensible for new providers)
47
53
  * - OpenCode config: ~/.config/opencode/opencode.json
48
- * - Ping timeout: 6s per attempt, max 2 retries (12s total)
49
- * - Ping interval: 10 seconds (continuous monitoring mode)
54
+ * - OpenClaw config: ~/.openclaw/openclaw.json
55
+ * - Ping timeout: 15s per attempt
56
+ * - Ping interval: 2 seconds (continuous monitoring mode)
50
57
  * - Animation: 12 FPS with braille spinners
51
- * - Reliability: Green → Yellow → Orange → Red → Black (degrades with instability)
58
+ *
59
+ * 🚀 CLI flags:
60
+ * - (no flag): Show startup menu → choose OpenCode or OpenClaw
61
+ * - --opencode: OpenCode mode (launch with selected model)
62
+ * - --openclaw: OpenClaw mode (set selected model as default in OpenClaw)
63
+ * - --best: Show only top-tier models (A+, S, S+)
64
+ * - --fiable: Analyze 10s and output the most reliable model
65
+ * - --tier S/A/B/C: Filter models by tier letter (S=S+/S, A=A+/A/A-, B=B+/B, C=C)
52
66
  *
53
67
  * @see {@link https://build.nvidia.com} NVIDIA API key generation
54
68
  * @see {@link https://github.com/opencode-ai/opencode} OpenCode repository
69
+ * @see {@link https://openclaw.ai} OpenClaw documentation
55
70
  */
56
71
 
57
72
  import chalk from 'chalk'
@@ -110,6 +125,78 @@ async function promptApiKey() {
110
125
  })
111
126
  }
112
127
 
128
+ // ─── Startup mode selection menu ──────────────────────────────────────────────
129
+ // 📖 Shown at startup when neither --opencode nor --openclaw flag is given.
130
+ // 📖 Simple arrow-key selector in normal terminal (not alt screen).
131
+ // 📖 Returns 'opencode' or 'openclaw'.
132
+ async function promptModeSelection() {
133
+ const options = [
134
+ {
135
+ label: 'OpenCode',
136
+ icon: '💻',
137
+ description: 'Press Enter on a model → launch OpenCode with it as default',
138
+ },
139
+ {
140
+ label: 'OpenClaw',
141
+ icon: '🦞',
142
+ description: 'Press Enter on a model → set it as default in OpenClaw config',
143
+ },
144
+ ]
145
+
146
+ return new Promise((resolve) => {
147
+ let selected = 0
148
+
149
+ // 📖 Render the menu to stdout (clear + redraw)
150
+ const render = () => {
151
+ process.stdout.write('\x1b[2J\x1b[H') // clear screen + cursor home
152
+ console.log()
153
+ console.log(chalk.bold(' ⚡ Free Coding Models') + chalk.dim(' — Choose your tool'))
154
+ console.log()
155
+ for (let i = 0; i < options.length; i++) {
156
+ const isSelected = i === selected
157
+ const bullet = isSelected ? chalk.bold.cyan(' ❯ ') : chalk.dim(' ')
158
+ const label = isSelected
159
+ ? chalk.bold.white(options[i].icon + ' ' + options[i].label)
160
+ : chalk.dim(options[i].icon + ' ' + options[i].label)
161
+ const desc = chalk.dim(' ' + options[i].description)
162
+ console.log(bullet + label)
163
+ console.log(chalk.dim(' ' + options[i].description))
164
+ console.log()
165
+ }
166
+ console.log(chalk.dim(' ↑↓ Navigate • Enter Select • Ctrl+C Exit'))
167
+ console.log()
168
+ }
169
+
170
+ render()
171
+
172
+ readline.emitKeypressEvents(process.stdin)
173
+ if (process.stdin.isTTY) process.stdin.setRawMode(true)
174
+
175
+ const onKey = (_str, key) => {
176
+ if (!key) return
177
+ if (key.ctrl && key.name === 'c') {
178
+ if (process.stdin.isTTY) process.stdin.setRawMode(false)
179
+ process.stdin.removeListener('keypress', onKey)
180
+ process.exit(0)
181
+ }
182
+ if (key.name === 'up' && selected > 0) {
183
+ selected--
184
+ render()
185
+ } else if (key.name === 'down' && selected < options.length - 1) {
186
+ selected++
187
+ render()
188
+ } else if (key.name === 'return') {
189
+ if (process.stdin.isTTY) process.stdin.setRawMode(false)
190
+ process.stdin.removeListener('keypress', onKey)
191
+ process.stdin.pause()
192
+ resolve(selected === 0 ? 'opencode' : 'openclaw')
193
+ }
194
+ }
195
+
196
+ process.stdin.on('keypress', onKey)
197
+ })
198
+ }
199
+
113
200
  // ─── Alternate screen control ─────────────────────────────────────────────────
114
201
  // 📖 \x1b[?1049h = enter alt screen \x1b[?1049l = leave alt screen
115
202
  // 📖 \x1b[?25l = hide cursor \x1b[?25h = show cursor
@@ -181,7 +268,7 @@ const VERDICT_ORDER = ['Perfect', 'Normal', 'Slow', 'Very Slow', 'Overloaded', '
181
268
  const getVerdict = (r) => {
182
269
  const avg = getAvg(r)
183
270
  const wasUpBefore = r.pings.length > 0 && r.pings.some(p => p.code === '200')
184
-
271
+
185
272
  // 📖 429 = rate limited = Overloaded
186
273
  if (r.httpCode === '429') return 'Overloaded'
187
274
  if ((r.status === 'timeout' || r.status === 'down') && wasUpBefore) return 'Unstable'
@@ -207,7 +294,7 @@ const getUptime = (r) => {
207
294
  const sortResults = (results, sortColumn, sortDirection) => {
208
295
  return [...results].sort((a, b) => {
209
296
  let cmp = 0
210
-
297
+
211
298
  switch (sortColumn) {
212
299
  case 'rank':
213
300
  cmp = a.idx - b.idx
@@ -245,12 +332,13 @@ const sortResults = (results, sortColumn, sortDirection) => {
245
332
  cmp = getUptime(a) - getUptime(b)
246
333
  break
247
334
  }
248
-
335
+
249
336
  return sortDirection === 'asc' ? cmp : -cmp
250
337
  })
251
338
  }
252
339
 
253
- function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now()) {
340
+ // 📖 renderTable: mode param controls footer hint text (opencode vs openclaw)
341
+ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode') {
254
342
  const up = results.filter(r => r.status === 'up').length
255
343
  const down = results.filter(r => r.status === 'down').length
256
344
  const timeout = results.filter(r => r.status === 'timeout').length
@@ -267,6 +355,11 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
267
355
  ? chalk.dim(`pinging — ${pendingPings} in flight…`)
268
356
  : chalk.dim(`next ping ${secondsUntilNext}s`)
269
357
 
358
+ // 📖 Mode badge shown in header so user knows what Enter will do
359
+ const modeBadge = mode === 'openclaw'
360
+ ? chalk.bold.rgb(255, 100, 50)(' [🦞 OpenClaw]')
361
+ : chalk.bold.rgb(0, 200, 255)(' [💻 OpenCode]')
362
+
270
363
  // 📖 Column widths (generous spacing with margins)
271
364
  const W_RANK = 6
272
365
  const W_TIER = 6
@@ -283,7 +376,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
283
376
 
284
377
  const lines = [
285
378
  '',
286
- ` ${chalk.bold('⚡ Free Coding Models')} ` +
379
+ ` ${chalk.bold('⚡ Free Coding Models')}${modeBadge} ` +
287
380
  chalk.greenBright(`✅ ${up}`) + chalk.dim(' up ') +
288
381
  chalk.yellow(`⏱ ${timeout}`) + chalk.dim(' timeout ') +
289
382
  chalk.red(`❌ ${down}`) + chalk.dim(' down ') +
@@ -295,7 +388,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
295
388
  // 📖 NOTE: padEnd on chalk strings counts ANSI codes, breaking alignment
296
389
  // 📖 Solution: build plain text first, then colorize
297
390
  const dir = sortDirection === 'asc' ? '↑' : '↓'
298
-
391
+
299
392
  const rankH = 'Rank'
300
393
  const tierH = 'Tier'
301
394
  const originH = 'Origin'
@@ -305,7 +398,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
305
398
  const statusH = sortColumn === 'status' ? dir + ' Status' : 'Status'
306
399
  const verdictH = sortColumn === 'verdict' ? dir + ' Verdict' : 'Verdict'
307
400
  const uptimeH = sortColumn === 'uptime' ? dir + ' Up%' : 'Up%'
308
-
401
+
309
402
  // 📖 Now colorize after padding is calculated on plain text
310
403
  const rankH_c = chalk.dim(rankH.padEnd(W_RANK))
311
404
  const tierH_c = chalk.dim(tierH.padEnd(W_TIER))
@@ -316,13 +409,13 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
316
409
  const statusH_c = sortColumn === 'status' ? chalk.bold.cyan(statusH.padEnd(W_STATUS)) : chalk.dim(statusH.padEnd(W_STATUS))
317
410
  const verdictH_c = sortColumn === 'verdict' ? chalk.bold.cyan(verdictH.padEnd(W_VERDICT)) : chalk.dim(verdictH.padEnd(W_VERDICT))
318
411
  const uptimeH_c = sortColumn === 'uptime' ? chalk.bold.cyan(uptimeH.padStart(W_UPTIME)) : chalk.dim(uptimeH.padStart(W_UPTIME))
319
-
412
+
320
413
  // 📖 Header with proper spacing
321
414
  lines.push(' ' + rankH_c + ' ' + tierH_c + ' ' + originH_c + ' ' + modelH_c + ' ' + pingH_c + ' ' + avgH_c + ' ' + statusH_c + ' ' + verdictH_c + ' ' + uptimeH_c)
322
-
415
+
323
416
  // 📖 Separator line
324
417
  lines.push(
325
- ' ' +
418
+ ' ' +
326
419
  chalk.dim('─'.repeat(W_RANK)) + ' ' +
327
420
  chalk.dim('─'.repeat(W_TIER)) + ' ' +
328
421
  '─'.repeat(W_SOURCE) + ' ' +
@@ -337,9 +430,9 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
337
430
  for (let i = 0; i < sorted.length; i++) {
338
431
  const r = sorted[i]
339
432
  const tierFn = TIER_COLOR[r.tier] ?? (t => chalk.white(t))
340
-
433
+
341
434
  const isCursor = cursor !== null && i === cursor
342
-
435
+
343
436
  // 📖 Left-aligned columns - pad plain text first, then colorize
344
437
  const num = chalk.dim(String(r.idx).padEnd(W_RANK))
345
438
  const tier = tierFn(r.tier.padEnd(W_TIER))
@@ -452,7 +545,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
452
545
 
453
546
  // 📖 Build row with double space between columns
454
547
  const row = ' ' + num + ' ' + tier + ' ' + source + ' ' + name + ' ' + pingCell + ' ' + avgCell + ' ' + status + ' ' + speedCell + ' ' + uptimeCell
455
-
548
+
456
549
  if (isCursor) {
457
550
  lines.push(chalk.bgRgb(139, 0, 139)(row))
458
551
  } else {
@@ -462,7 +555,12 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
462
555
 
463
556
  lines.push('')
464
557
  const intervalSec = Math.round(pingInterval / 1000)
465
- lines.push(chalk.dim(` ↑↓ Navigate • Enter Select • R/T/O/M/P/A/S/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • Ctrl+C Exit`))
558
+
559
+ // 📖 Footer hints adapt based on active mode
560
+ const actionHint = mode === 'openclaw'
561
+ ? chalk.rgb(255, 100, 50)('Enter→SetOpenClaw')
562
+ : chalk.rgb(0, 200, 255)('Enter→OpenCode')
563
+ lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/T/O/M/P/A/S/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • Ctrl+C Exit`))
466
564
  lines.push('')
467
565
  return lines.join('\n')
468
566
  }
@@ -482,9 +580,9 @@ async function ping(apiKey, modelId) {
482
580
  return { code: String(resp.status), ms: Math.round(performance.now() - t0) }
483
581
  } catch (err) {
484
582
  const isTimeout = err.name === 'AbortError'
485
- return {
486
- code: isTimeout ? '000' : 'ERR',
487
- ms: isTimeout ? 'TIMEOUT' : Math.round(performance.now() - t0)
583
+ return {
584
+ code: isTimeout ? '000' : 'ERR',
585
+ ms: isTimeout ? 'TIMEOUT' : Math.round(performance.now() - t0)
488
586
  }
489
587
  } finally {
490
588
  clearTimeout(timer)
@@ -520,7 +618,7 @@ function checkNvidiaNimConfig() {
   if (!config.provider) return false
   // 📖 Check for nvidia/nim provider by key name or display name (case-insensitive)
   const providerKeys = Object.keys(config.provider)
-  return providerKeys.some(key =>
+  return providerKeys.some(key =>
     key === 'nvidia' || key === 'nim' ||
     config.provider[key]?.name?.toLowerCase().includes('nvidia') ||
     config.provider[key]?.name?.toLowerCase().includes('nim')
@@ -533,38 +631,38 @@ function checkNvidiaNimConfig() {
 // 📖 Model format: { modelId, label, tier }
 async function startOpenCode(model) {
   const hasNim = checkNvidiaNimConfig()
-
+
   if (hasNim) {
     // 📖 NVIDIA NIM already configured - launch with model flag
     console.log(chalk.green(` 🚀 Setting ${chalk.bold(model.label)} as default…`))
     console.log(chalk.dim(` Model: nvidia/${model.modelId}`))
     console.log()
-
+
     const config = loadOpenCodeConfig()
     const backupPath = `${OPENCODE_CONFIG}.backup-${Date.now()}`
-
+
     // 📖 Backup current config
     if (existsSync(OPENCODE_CONFIG)) {
       copyFileSync(OPENCODE_CONFIG, backupPath)
       console.log(chalk.dim(` 💾 Backup: ${backupPath}`))
     }
-
+
     // 📖 Update default model to nvidia/model_id
     config.model = `nvidia/${model.modelId}`
    saveOpenCodeConfig(config)
-
+
     console.log(chalk.green(` ✓ Default model set to: nvidia/${model.modelId}`))
     console.log()
     console.log(chalk.dim(' Starting OpenCode…'))
     console.log()
-
+
     // 📖 Launch OpenCode and wait for it
     const { spawn } = await import('child_process')
     const child = spawn('opencode', [], {
       stdio: 'inherit',
       shell: false
     })
-
+
     // 📖 Wait for OpenCode to exit
     await new Promise((resolve, reject) => {
       child.on('exit', resolve)
@@ -576,7 +674,7 @@ async function startOpenCode(model) {
     console.log()
     console.log(chalk.dim(' Starting OpenCode with installation prompt…'))
     console.log()
-
+
     const installPrompt = `Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:
 
 {
@@ -595,18 +693,18 @@
 Then set env var: export NVIDIA_API_KEY=your_key_here
 
 After installation, you can use: opencode --model nvidia/${model.modelId}`
-
+
     console.log(chalk.cyan(installPrompt))
     console.log()
     console.log(chalk.dim(' Starting OpenCode…'))
     console.log()
-
+
     const { spawn } = await import('child_process')
     const child = spawn('opencode', [], {
       stdio: 'inherit',
       shell: false
     })
-
+
     // 📖 Wait for OpenCode to exit
     await new Promise((resolve, reject) => {
       child.on('exit', resolve)
@@ -615,6 +713,96 @@ After installation, you can use: opencode --model nvidia/${model.modelId}`
   }
 }
 
+// ─── OpenClaw integration ──────────────────────────────────────────────────────
+// 📖 OpenClaw config: ~/.openclaw/openclaw.json (JSON format, may be JSON5 in newer versions)
+// 📖 To set a model: set agents.defaults.model.primary = "nvidia/model-id"
+// 📖 Providers section uses baseUrl + apiKey + api: "openai-completions" format
+// 📖 See: https://docs.openclaw.ai/gateway/configuration
+const OPENCLAW_CONFIG = join(homedir(), '.openclaw', 'openclaw.json')
+
+function loadOpenClawConfig() {
+  if (!existsSync(OPENCLAW_CONFIG)) return {}
+  try {
+    // 📖 JSON.parse works for standard JSON; OpenClaw may use JSON5 but base config is valid JSON
+    return JSON.parse(readFileSync(OPENCLAW_CONFIG, 'utf8'))
+  } catch {
+    return {}
+  }
+}
+
+function saveOpenClawConfig(config) {
+  const dir = join(homedir(), '.openclaw')
+  if (!existsSync(dir)) {
+    mkdirSync(dir, { recursive: true })
+  }
+  writeFileSync(OPENCLAW_CONFIG, JSON.stringify(config, null, 2))
+}
+
+// 📖 startOpenClaw: sets the selected NVIDIA NIM model as default in OpenClaw config.
+// 📖 Also ensures the nvidia provider block is present with the NIM base URL.
+// 📖 Does NOT launch OpenClaw — OpenClaw runs as a daemon, so config changes are picked up on restart.
+async function startOpenClaw(model, apiKey) {
+  console.log(chalk.rgb(255, 100, 50)(` 🦞 Setting ${chalk.bold(model.label)} as OpenClaw default…`))
+  console.log(chalk.dim(` Model: nvidia/${model.modelId}`))
+  console.log()
+
+  const config = loadOpenClawConfig()
+
+  // 📖 Backup existing config before touching it
+  if (existsSync(OPENCLAW_CONFIG)) {
+    const backupPath = `${OPENCLAW_CONFIG}.backup-${Date.now()}`
+    copyFileSync(OPENCLAW_CONFIG, backupPath)
+    console.log(chalk.dim(` 💾 Backup: ${backupPath}`))
+  }
+
+  // 📖 Ensure models.providers section exists with nvidia NIM block.
+  // 📖 Per OpenClaw docs (docs.openclaw.ai/providers/nvidia), providers MUST be nested under
+  // 📖 "models.providers", NOT at the config root. Root-level "providers" is ignored by OpenClaw.
+  // 📖 API key is NOT stored in the provider block — it's read from env var NVIDIA_API_KEY.
+  // 📖 If needed, it can be stored under the root "env" key: { env: { NVIDIA_API_KEY: "nvapi-..." } }
+  if (!config.models) config.models = {}
+  if (!config.models.providers) config.models.providers = {}
+  if (!config.models.providers.nvidia) {
+    config.models.providers.nvidia = {
+      baseUrl: 'https://integrate.api.nvidia.com/v1',
+      api: 'openai-completions',
+    }
+    console.log(chalk.dim(' ➕ Added nvidia provider block to OpenClaw config (models.providers.nvidia)'))
+  }
+
+  // 📖 Store API key in the root "env" section so OpenClaw can read it as NVIDIA_API_KEY env var.
+  // 📖 Only writes if not already set to avoid overwriting an existing key.
+  const resolvedKey = apiKey || process.env.NVIDIA_API_KEY
+  if (resolvedKey) {
+    if (!config.env) config.env = {}
+    if (!config.env.NVIDIA_API_KEY) {
+      config.env.NVIDIA_API_KEY = resolvedKey
+      console.log(chalk.dim(' 🔑 Stored NVIDIA_API_KEY in config env section'))
+    }
+  }
+
+  // 📖 Set as the default primary model for all agents.
+  // 📖 Format: "provider/model-id" — e.g. "nvidia/deepseek-ai/deepseek-v3.2"
+  if (!config.agents) config.agents = {}
+  if (!config.agents.defaults) config.agents.defaults = {}
+  if (!config.agents.defaults.model) config.agents.defaults.model = {}
+  config.agents.defaults.model.primary = `nvidia/${model.modelId}`
+
+  saveOpenClawConfig(config)
+
+  console.log(chalk.rgb(255, 140, 0)(` ✓ Default model set to: nvidia/${model.modelId}`))
+  console.log()
+  console.log(chalk.dim(' 📄 Config updated: ' + OPENCLAW_CONFIG))
+  console.log()
+  // 📖 "openclaw restart" does NOT exist. The gateway auto-reloads on config file changes.
+  // 📖 To apply manually: use "openclaw models set" or "openclaw configure"
+  // 📖 See: https://docs.openclaw.ai/gateway/configuration
+  console.log(chalk.dim(' 💡 OpenClaw will reload config automatically (gateway.reload.mode).'))
+  console.log(chalk.dim(' To apply manually: openclaw models set nvidia/' + model.modelId))
+  console.log(chalk.dim(' Or run the setup wizard: openclaw configure'))
+  console.log()
+}
+
 // ─── Helper function to find best model after analysis ────────────────────────
 function findBestModel(results) {
   // 📖 Sort by avg ping (fastest first), then by uptime percentage (most reliable)
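Taken together, the writes in the new `startOpenClaw` amount to ensuring a nested shape inside the parsed `openclaw.json`. A minimal sketch of that nesting pattern, with the schema (`models.providers`, `agents.defaults.model.primary`) assumed from the diff's own comments and `'some-model-id'` as a placeholder:

```javascript
// Mirrors the nested writes startOpenClaw performs on the parsed
// openclaw.json object. The schema here is taken from the diff's
// comments, not independently verified against OpenClaw.
function applyNvidiaModel(config, modelId) {
  config.models ??= {}
  config.models.providers ??= {}
  config.models.providers.nvidia ??= {
    baseUrl: 'https://integrate.api.nvidia.com/v1',
    api: 'openai-completions',
  }
  config.agents ??= {}
  config.agents.defaults ??= {}
  config.agents.defaults.model ??= {}
  config.agents.defaults.model.primary = `nvidia/${modelId}`
  return config
}

// Starting from an empty config file:
const result = applyNvidiaModel({}, 'some-model-id')
```

The `??=` guards make the call safe on both a fresh (empty) config and an existing one, which is why the real code can back up and rewrite the file unconditionally.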
@@ -623,18 +811,18 @@ function findBestModel(results) {
     const avgB = getAvg(b)
     const uptimeA = getUptime(a)
     const uptimeB = getUptime(b)
-
+
     // 📖 Priority 1: Models that are up (status === 'up')
     if (a.status === 'up' && b.status !== 'up') return -1
     if (a.status !== 'up' && b.status === 'up') return 1
-
+
     // 📖 Priority 2: Fastest average ping
     if (avgA !== avgB) return avgA - avgB
-
+
     // 📖 Priority 3: Highest uptime percentage
     return uptimeB - uptimeA
   })
-
+
   return sorted.length > 0 ? sorted[0] : null
 }
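The three-level ranking in `findBestModel` — up-status first, then lowest average ping, then highest uptime — can be exercised in isolation. In this sketch `getAvg` and `getUptime` are simplified stand-ins for the package's real helpers, and the candidate list is invented:

```javascript
// Simplified re-statement of findBestModel's comparator:
// up models first, then fastest average ping, then highest uptime.
const getAvg = r => r.pings.reduce((sum, ms) => sum + ms, 0) / r.pings.length
const getUptime = r => r.uptime

function findBest(results) {
  const sorted = [...results].sort((a, b) => {
    // Priority 1: models that are currently up
    if (a.status === 'up' && b.status !== 'up') return -1
    if (a.status !== 'up' && b.status === 'up') return 1
    // Priority 2: fastest average ping
    const avgA = getAvg(a)
    const avgB = getAvg(b)
    if (avgA !== avgB) return avgA - avgB
    // Priority 3: highest uptime percentage
    return getUptime(b) - getUptime(a)
  })
  return sorted.length > 0 ? sorted[0] : null
}

const candidates = [
  { label: 'down-but-fast', status: 'down', pings: [50, 50], uptime: 40 },
  { label: 'up-slow', status: 'up', pings: [400, 400], uptime: 99 },
  { label: 'up-fast', status: 'up', pings: [120, 140], uptime: 95 },
]
// findBest(candidates).label → 'up-fast'
```

Note that `down-but-fast` loses despite the best latency: availability dominates speed, which matches what `--fiable` mode prints.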
 
@@ -642,17 +830,17 @@
 async function runFiableMode(apiKey) {
   console.log(chalk.cyan(' ⚡ Analyzing models for reliability (10 seconds)...'))
   console.log()
-
+
   let results = MODELS.map(([modelId, label, tier], i) => ({
     idx: i + 1, modelId, label, tier,
     status: 'pending',
     pings: [],
     httpCode: null,
   }))
-
+
   const startTime = Date.now()
   const analysisDuration = 10000 // 10 seconds
-
+
   // 📖 Run initial pings
   const pingPromises = results.map(r => ping(apiKey, r.modelId).then(({ code, ms }) => {
     r.pings.push({ ms, code })
@@ -665,23 +853,23 @@ async function runFiableMode(apiKey) {
       r.httpCode = code
     }
   }))
-
+
   await Promise.allSettled(pingPromises)
-
+
   // 📖 Continue pinging for the remaining time
   const remainingTime = Math.max(0, analysisDuration - (Date.now() - startTime))
   if (remainingTime > 0) {
     await new Promise(resolve => setTimeout(resolve, remainingTime))
   }
-
+
   // 📖 Find best model
   const best = findBestModel(results)
-
+
   if (!best) {
     console.log(chalk.red(' ✖ No reliable model found'))
     process.exit(1)
   }
-
+
   // 📖 Output in format: provider/name
   const provider = 'nvidia' // Always NVIDIA NIM for now
   console.log(chalk.green(` ✓ Most reliable model:`))
@@ -691,18 +879,41 @@ async function runFiableMode(apiKey)
   console.log(chalk.dim(` Avg ping: ${getAvg(best)}ms`))
   console.log(chalk.dim(` Uptime: ${getUptime(best)}%`))
   console.log(chalk.dim(` Status: ${best.status === 'up' ? '✅ UP' : '❌ DOWN'}`))
-
+
   process.exit(0)
 }
 
+// ─── Tier filter helper ────────────────────────────────────────────────────────
+// 📖 Maps a single tier letter (S, A, B, C) to the full set of matching tier strings.
+// 📖 --tier S → includes S+ and S
+// 📖 --tier A → includes A+, A, A-
+// 📖 --tier B → includes B+, B
+// 📖 --tier C → includes C only
+const TIER_LETTER_MAP = {
+  'S': ['S+', 'S'],
+  'A': ['A+', 'A', 'A-'],
+  'B': ['B+', 'B'],
+  'C': ['C'],
+}
+
+function filterByTier(results, tierLetter) {
+  const letter = tierLetter.toUpperCase()
+  const allowed = TIER_LETTER_MAP[letter]
+  if (!allowed) {
+    console.error(chalk.red(` ✖ Unknown tier "${tierLetter}". Valid tiers: S, A, B, C`))
+    process.exit(1)
+  }
+  return results.filter(r => allowed.includes(r.tier))
+}
+
 async function main() {
   // 📖 Parse CLI arguments properly
   const args = process.argv.slice(2)
-
+
   // 📖 Extract API key (first non-flag argument) and flags
   let apiKey = null
   const flags = []
-
+
   for (const arg of args) {
     if (arg.startsWith('--')) {
       flags.push(arg.toLowerCase())
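The new `--tier` expansion is easy to demonstrate on a toy list. In this sketch the model entries are invented, and an unknown letter throws instead of printing and calling `process.exit(1)` as the real helper does:

```javascript
// Demonstrates the TIER_LETTER_MAP expansion from the diff: one letter
// selects every sub-tier in its band. Model entries are made up, and
// this variant throws on an unknown tier rather than exiting.
const TIER_LETTER_MAP = {
  'S': ['S+', 'S'],
  'A': ['A+', 'A', 'A-'],
  'B': ['B+', 'B'],
  'C': ['C'],
}

function filterByTier(results, tierLetter) {
  const allowed = TIER_LETTER_MAP[tierLetter.toUpperCase()]
  if (!allowed) {
    throw new Error(`Unknown tier "${tierLetter}". Valid tiers: S, A, B, C`)
  }
  return results.filter(r => allowed.includes(r.tier))
}

const models = [
  { label: 'alpha', tier: 'S+' },
  { label: 'beta', tier: 'A-' },
  { label: 'gamma', tier: 'B+' },
  { label: 'delta', tier: 'C' },
]
// filterByTier(models, 'a') → [{ label: 'beta', tier: 'A-' }]
```

Because the letter is upper-cased before lookup, `--tier a` and `--tier A` behave identically, matching the `toUpperCase()` calls in both the helper and the flag parser.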
@@ -710,16 +921,26 @@ async function main() {
       apiKey = arg
     }
   }
-
+
   // 📖 Priority: CLI arg > env var > saved config > wizard
   if (!apiKey) {
     apiKey = process.env.NVIDIA_API_KEY || loadApiKey()
   }
-
+
   // 📖 Check for CLI flags
-  const bestMode = flags.includes('--best')
-  const fiableMode = flags.includes('--fiable') || flags.includes('--fiable') // Support both
-
+  const bestMode = flags.includes('--best')
+  const fiableMode = flags.includes('--fiable')
+  const openCodeMode = flags.includes('--opencode')
+  const openClawMode = flags.includes('--openclaw')
+
+  // 📖 Parse --tier X flag (e.g. --tier S, --tier A)
+  // 📖 Find "--tier" in flags array, then get the next raw arg as the tier value
+  let tierFilter = null
+  const tierIdx = args.findIndex(a => a.toLowerCase() === '--tier')
+  if (tierIdx !== -1 && args[tierIdx + 1] && !args[tierIdx + 1].startsWith('--')) {
+    tierFilter = args[tierIdx + 1].toUpperCase()
+  }
+
   if (!apiKey) {
     apiKey = await promptApiKey()
     if (!apiKey) {
@@ -730,12 +951,26 @@ async function main() {
       process.exit(1)
     }
   }
-
+
   // 📖 Handle fiable mode first (it exits after analysis)
   if (fiableMode) {
     await runFiableMode(apiKey)
   }
 
+  // 📖 Determine active mode:
+  // 📖   --opencode → opencode
+  // 📖   --openclaw → openclaw
+  // 📖   neither    → show interactive startup menu
+  let mode
+  if (openClawMode) {
+    mode = 'openclaw'
+  } else if (openCodeMode) {
+    mode = 'opencode'
+  } else {
+    // 📖 No mode flag given — ask user with the startup menu
+    mode = await promptModeSelection()
+  }
+
   // 📖 Filter models to only show top tiers if BEST mode is active
   let results = MODELS.map(([modelId, label, tier], i) => ({
     idx: i + 1, modelId, label, tier,
@@ -743,26 +978,32 @@ async function main() {
     pings: [], // 📖 All ping results (ms or 'TIMEOUT')
     httpCode: null,
   }))
-
+
   if (bestMode) {
     results = results.filter(r => r.tier === 'S+' || r.tier === 'S' || r.tier === 'A+')
   }
 
+  // 📖 Apply tier letter filter if --tier X was given
+  if (tierFilter) {
+    results = filterByTier(results, tierFilter)
+  }
+
   // 📖 Add interactive selection state - cursor index and user's choice
   // 📖 sortColumn: 'rank'|'tier'|'origin'|'model'|'ping'|'avg'|'status'|'verdict'|'uptime'
   // 📖 sortDirection: 'asc' (default) or 'desc'
-  // 📖 pingInterval: current interval in ms (default 5000, adjustable with W/X keys)
-  const state = {
-    results,
-    pendingPings: 0,
-    frame: 0,
-    cursor: 0,
+  // 📖 pingInterval: current interval in ms (default 2000, adjustable with W/X keys)
+  const state = {
+    results,
+    pendingPings: 0,
+    frame: 0,
+    cursor: 0,
     selectedModel: null,
     sortColumn: 'avg',
     sortDirection: 'asc',
-    pingInterval: PING_INTERVAL, // 📖 Track current interval for C/V keys
-    lastPingTime: Date.now(), // 📖 Track when last ping cycle started
-    fiableMode // 📖 Pass fiable mode to state
+    pingInterval: PING_INTERVAL, // 📖 Track current interval for W/X keys
+    lastPingTime: Date.now(), // 📖 Track when last ping cycle started
+    fiableMode, // 📖 Pass fiable mode to state
+    mode, // 📖 'opencode' or 'openclaw' — controls Enter action
   }
 
   // 📖 Enter alternate screen — animation runs here, zero scrollback pollution
@@ -782,18 +1023,18 @@ async function main() {
   // 📖 Use readline with keypress event for arrow key handling
   process.stdin.setEncoding('utf8')
   process.stdin.resume()
-
+
   let userSelected = null
-
+
   const onKeyPress = async (str, key) => {
     if (!key) return
-
-    // 📖 Sorting keys: R=rank, T=tier, O=origin, M=model, P=ping, A=avg, S=status, V=verdict, L=reliability
+
+    // 📖 Sorting keys: R=rank, T=tier, O=origin, M=model, P=ping, A=avg, S=status, V=verdict, U=uptime
     const sortKeys = {
       'r': 'rank', 't': 'tier', 'o': 'origin', 'm': 'model',
       'p': 'ping', 'a': 'avg', 's': 'status', 'v': 'verdict', 'u': 'uptime'
     }
-
+
     if (sortKeys[key.name]) {
       const col = sortKeys[key.name]
       // 📖 Toggle direction if same column, otherwise reset to asc
@@ -805,98 +1046,101 @@ async function main() {
       }
       return
     }
-
+
     // 📖 Interval adjustment keys: W=decrease (faster), X=increase (slower)
     // 📖 Minimum 1s, maximum 60s
     if (key.name === 'w') {
       state.pingInterval = Math.max(1000, state.pingInterval - 1000)
       return
     }
-
+
     if (key.name === 'x') {
       state.pingInterval = Math.min(60000, state.pingInterval + 1000)
       return
     }
-
+
     if (key.name === 'up') {
       if (state.cursor > 0) {
        state.cursor--
       }
       return
     }
-
+
     if (key.name === 'down') {
       if (state.cursor < results.length - 1) {
         state.cursor++
       }
       return
     }
-
+
     if (key.name === 'c' && key.ctrl) { // Ctrl+C
       exit(0)
       return
     }
-
+
     if (key.name === 'return') { // Enter
       // 📖 Use the same sorting as the table display
       const sorted = sortResults(results, state.sortColumn, state.sortDirection)
       const selected = sorted[state.cursor]
       // 📖 Allow selecting ANY model (even timeout/down) - user knows what they're doing
-      if (true) {
-        userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier }
-        // 📖 Stop everything and launch OpenCode immediately
-        clearInterval(ticker)
-        clearTimeout(state.pingIntervalObj)
-        readline.emitKeypressEvents(process.stdin)
-        process.stdin.setRawMode(true)
-        process.stdin.pause()
-        process.stdin.removeListener('keypress', onKeyPress)
-        process.stdout.write(ALT_LEAVE)
-
-        // 📖 Show selection with status
-        if (selected.status === 'timeout') {
-          console.log(chalk.yellow(` ⚠ Selected: ${selected.label} (currently timing out)`))
-        } else if (selected.status === 'down') {
-          console.log(chalk.red(` ⚠ Selected: ${selected.label} (currently down)`))
-        } else {
-          console.log(chalk.cyan(` ✓ Selected: ${selected.label}`))
-        }
-        console.log()
-
-        // 📖 Wait for OpenCode to finish before exiting
+      userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier }
+
+      // 📖 Stop everything and act on selection immediately
+      clearInterval(ticker)
+      clearTimeout(state.pingIntervalObj)
+      readline.emitKeypressEvents(process.stdin)
+      process.stdin.setRawMode(true)
+      process.stdin.pause()
+      process.stdin.removeListener('keypress', onKeyPress)
+      process.stdout.write(ALT_LEAVE)
+
+      // 📖 Show selection with status
+      if (selected.status === 'timeout') {
+        console.log(chalk.yellow(` ⚠ Selected: ${selected.label} (currently timing out)`))
+      } else if (selected.status === 'down') {
+        console.log(chalk.red(` ⚠ Selected: ${selected.label} (currently down)`))
+      } else {
+        console.log(chalk.cyan(` ✓ Selected: ${selected.label}`))
+      }
+      console.log()
+
+      // 📖 Dispatch to the correct integration based on active mode
+      if (state.mode === 'openclaw') {
+        await startOpenClaw(userSelected, apiKey)
+      } else {
         await startOpenCode(userSelected)
-        process.exit(0)
       }
+      process.exit(0)
     }
   }
-
+
   // 📖 Enable keypress events on stdin
   readline.emitKeypressEvents(process.stdin)
   if (process.stdin.isTTY) {
     process.stdin.setRawMode(true)
   }
-
+
   process.stdin.on('keypress', onKeyPress)
 
   // 📖 Animation loop: clear alt screen + redraw table at FPS with cursor
   const ticker = setInterval(() => {
     state.frame++
-    process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime))
+    process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode))
   }, Math.round(1000 / FPS))
 
-  process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime))
+  process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode))
+
+  // ── Continuous ping loop — ping all models every N seconds forever ──────────
 
-  // ── Continuous ping loop — ping all models every 10 seconds forever ──────────
-
   // 📖 Single ping function that updates result
   const pingModel = async (r) => {
     const { code, ms } = await ping(apiKey, r.modelId)
-
+
     // 📖 Store ping result as object with ms and code
     // 📖 ms = actual response time (even for errors like 429)
     // 📖 code = HTTP status code ('200', '429', '500', '000' for timeout)
     r.pings.push({ ms, code })
-
+
     // 📖 Update status based on latest ping
     if (code === '200') {
       r.status = 'up'
@@ -910,23 +1154,23 @@ async function main() {
 
   // 📖 Initial ping of all models
   const initialPing = Promise.all(results.map(r => pingModel(r)))
-
+
   // 📖 Continuous ping loop with dynamic interval (adjustable with W/X keys)
   const schedulePing = () => {
     state.pingIntervalObj = setTimeout(async () => {
       state.lastPingTime = Date.now()
-
+
       results.forEach(r => {
         pingModel(r).catch(() => {
           // Individual ping failures don't crash the loop
        })
       })
-
+
       // 📖 Schedule next ping with current interval
       schedulePing()
     }, state.pingInterval)
   }
-
+
   // 📖 Start the ping loop
   state.pingIntervalObj = null
   schedulePing()
@@ -943,4 +1187,4 @@ main().catch((err) => {
   process.stdout.write(ALT_LEAVE)
   console.error(err)
   process.exit(1)
- })
+ })
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "free-coding-models",
-  "version": "0.1.2",
+  "version": "0.1.4",
   "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
   "keywords": [
     "nvidia",