free-coding-models 0.1.2 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -9,7 +9,7 @@
9
9
 
10
10
  <p align="center">
11
11
  <strong>Find the fastest coding LLM models in seconds</strong><br>
12
- <sub>Ping free models from multiple providers — pick the best one for OpenCode, Cursor, or any AI coding assistant</sub>
12
+ <sub>Ping free NVIDIA NIM models in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
13
13
  </p>
14
14
 
15
15
  <p align="center">
@@ -22,6 +22,8 @@
22
22
  <a href="#-installation">Installation</a> •
23
23
  <a href="#-usage">Usage</a> •
24
24
  <a href="#-models">Models</a> •
25
+ <a href="#-opencode-integration">OpenCode</a> •
26
+ <a href="#-openclaw-integration">OpenClaw</a> •
25
27
  <a href="#-how-it-works">How it works</a>
26
28
  </p>
27
29
 
@@ -37,11 +39,14 @@
37
39
  - **📈 Rolling averages** — Avg calculated from ALL successful pings since start
38
40
  - **📊 Uptime tracking** — Percentage of successful pings shown in real-time
39
41
  - **🔄 Auto-retry** — Timeout models keep getting retried, nothing is ever "given up on"
40
- - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to launch OpenCode
41
- - **🔌 Auto-configuration** — Detects NVIDIA NIM setup, installs if missing, sets as default model
42
+ - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to act
43
+ - **🔀 Startup mode menu** — Choose between OpenCode and OpenClaw before the TUI launches
44
+ - **💻 OpenCode integration** — Auto-detects NIM setup, sets model as default, launches OpenCode
45
+ - **🦞 OpenClaw integration** — Sets selected model as default provider in `~/.openclaw/openclaw.json`
42
46
  - **🎨 Clean output** — Zero scrollback pollution, interface stays open until Ctrl+C
43
47
  - **📶 Status indicators** — UP ✅ · Timeout ⏳ · Overloaded 🔥 · Not Found 🚫
44
48
  - **🔧 Multi-source support** — Extensible architecture via `sources.js` (add new providers easily)
49
+ - **🏷 Tier filtering** — Filter models by tier letter (S, A, B, C) with `--tier`
45
50
 
46
51
  ---
47
52
 
@@ -50,11 +55,12 @@
50
55
  Before using `free-coding-models`, make sure you have:
51
56
 
52
57
  1. **Node.js 18+** — Required for native `fetch` API
53
- 2. **OpenCode installed** — [Install OpenCode](https://github.com/opencode-ai/opencode) (`npm install -g opencode`)
54
- 3. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
55
- 4. **API key** — Generate one from Profile API Keys → Generate API Key
58
+ 2. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
59
+ 3. **API key** — Generate one from Profile → API Keys → Generate API Key
60
+ 2. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
61
+ 5. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
56
62
 
57
- > 💡 **Tip:** Without OpenCode installed, you can still use the tool to benchmark models. OpenCode is only needed for the auto-launch feature.
63
+ > 💡 **Tip:** Without OpenCode/OpenClaw installed, you can still benchmark models and get latency data.
58
64
 
59
65
  ---
60
66
 
@@ -81,24 +87,56 @@ bunx free-coding-models YOUR_API_KEY
81
87
  ## 🚀 Usage
82
88
 
83
89
  ```bash
84
- # Just run it — will prompt for API key if not set
90
+ # Just run it — shows a startup menu to pick OpenCode or OpenClaw, prompts for API key if not set
85
91
  free-coding-models
86
92
 
93
+ # Explicitly target OpenCode (skip the startup menu — TUI + Enter launches OpenCode)
94
+ free-coding-models --opencode
95
+
96
+ # Explicitly target OpenClaw (TUI + Enter sets model as default in OpenClaw)
97
+ free-coding-models --openclaw
98
+
87
99
  # Show only top-tier models (A+, S, S+)
88
100
  free-coding-models --best
89
101
 
90
102
  # Analyze for 10 seconds and output the most reliable model
91
103
  free-coding-models --fiable
104
+
105
+ # Filter models by tier letter
106
+ free-coding-models --tier S # S+ and S only
107
+ free-coding-models --tier A # A+, A, A- only
108
+ free-coding-models --tier B # B+, B only
109
+ free-coding-models --tier C # C only
110
+
111
+ # Combine flags freely
112
+ free-coding-models --openclaw --tier S
113
+ free-coding-models --opencode --best
92
114
  ```
93
115
 
116
+ ### Startup mode menu
117
+
118
+ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get an interactive startup menu:
119
+
120
+ ```
121
+ ⚡ Free Coding Models — Choose your tool
122
+
123
+ ❯ 💻 OpenCode
124
+ Press Enter on a model → launch OpenCode with it as default
125
+
126
+ 🦞 OpenClaw
127
+ Press Enter on a model → set it as default in OpenClaw config
128
+
129
+ ↑↓ Navigate • Enter Select • Ctrl+C Exit
130
+ ```
131
+
132
+ Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.
133
+
94
134
  **How it works:**
95
135
  1. **Ping phase** — All 44 models are pinged in parallel
96
136
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
97
137
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
98
- 4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to launch OpenCode
99
- 5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode:
100
- - ✅ If configured → Sets model as default and launches OpenCode
101
- - ⚠️ If missing → Shows installation instructions and launches OpenCode
138
+ 4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
139
+ 5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
102
140
 
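
The "Avg" and "Up%" math from steps above can be sketched as follows — a minimal illustration assuming each model keeps a `pings` history of `{ code, ms }` records (the tool's actual `getAvg`/`getUptime` helpers may differ in detail):

```javascript
// Rolling average over ALL successful pings since start (code '200' only)
const getAvg = (r) => {
  const ok = r.pings.filter((p) => p.code === '200' && typeof p.ms === 'number')
  return ok.length ? Math.round(ok.reduce((s, p) => s + p.ms, 0) / ok.length) : null
}

// Uptime = successful pings / total pings, as a rounded percentage
const getUptime = (r) => {
  if (r.pings.length === 0) return 0
  const ok = r.pings.filter((p) => p.code === '200').length
  return Math.round((ok / r.pings.length) * 100)
}
```

Because the average never discards old samples, a model's "Avg" column converges toward its true long-term latency the longer the monitor runs.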
103
141
  Setup wizard:
104
142
 
@@ -161,13 +199,24 @@ free-coding-models
161
199
  - **A-/B+** — Solid performers, good for targeted programming tasks
162
200
  - **B/C** — Lightweight or older models, good for code completion on constrained infra
163
201
 
202
+ ### Filtering by tier
203
+
204
+ Use `--tier` to focus on a specific capability band:
205
+
206
+ ```bash
207
+ free-coding-models --tier S # Only S+ and S (frontier models)
208
+ free-coding-models --tier A # Only A+, A, A- (solid performers)
209
+ free-coding-models --tier B # Only B+, B (lightweight options)
210
+ free-coding-models --tier C # Only C (edge/minimal models)
211
+ ```
212
+
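
Internally, the tier letter expands to a group of tier labels. A minimal sketch of that mapping, with the groupings taken from the examples above (the real `filterByTier` may differ in detail):

```javascript
// --tier letter → tier labels it covers (grouping from the README examples)
const TIER_GROUPS = {
  S: ['S+', 'S'],
  A: ['A+', 'A', 'A-'],
  B: ['B+', 'B'],
  C: ['C'],
}

function filterByTier(models, letter) {
  const allowed = TIER_GROUPS[letter.toUpperCase()]
  if (!allowed) return models // unknown letter: leave the list unfiltered
  return models.filter((m) => allowed.includes(m.tier))
}
```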
164
213
  ---
165
214
 
166
- ## 🔌 Use with OpenCode
215
+ ## 🔌 OpenCode Integration
167
216
 
168
217
  **The easiest way** — let `free-coding-models` do everything:
169
218
 
170
- 1. **Run**: `free-coding-models`
219
+ 1. **Run**: `free-coding-models --opencode` (or choose OpenCode from the startup menu)
171
220
  2. **Wait** for models to be pinged (green ✅ status)
172
221
  3. **Navigate** with ↑↓ arrows to your preferred model
173
222
  4. **Press Enter** — tool automatically:
@@ -175,23 +224,7 @@ free-coding-models
175
224
  - Sets your selected model as default in `~/.config/opencode/opencode.json`
176
225
  - Launches OpenCode with the model ready to use
177
226
 
178
- That's it! No manual config needed.
179
-
180
- ### Manual Setup (Optional)
181
-
182
- If you prefer to configure OpenCode yourself:
183
-
184
- #### Prerequisites
185
-
186
- 1. **OpenCode installed**: `npm install -g opencode` (or equivalent)
187
- 2. **NVIDIA NIM account**: Get a free account at [build.nvidia.com](https://build.nvidia.com)
188
- 3. **API key generated**: Go to Profile → API Keys → Generate API Key
189
-
190
- #### 1. Find your model
191
-
192
- Run `free-coding-models` to see which models are available and fast. The "Latest" column shows real-time latency, "Avg" shows rolling average, and "Up%" shows uptime percentage (reliability over time).
193
-
194
- #### 2. Configure OpenCode
227
+ ### Manual OpenCode Setup (Optional)
195
228
 
196
229
  Create or edit `~/.config/opencode/opencode.json`:
197
230
 
@@ -211,53 +244,86 @@ Create or edit `~/.config/opencode/opencode.json`:
211
244
  }
212
245
  ```
213
246
 
214
- #### 3. Set environment variable
247
+ Then set the environment variable:
215
248
 
216
249
  ```bash
217
250
  export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
218
251
  # Add to ~/.bashrc or ~/.zshrc for persistence
219
252
  ```
220
253
 
221
- #### 4. Use it
222
-
223
254
  Run `/models` in OpenCode and select **NVIDIA NIM** provider and your chosen model.
224
255
 
225
256
  > ⚠️ **Note:** Free models have usage limits based on NVIDIA's tier — check [build.nvidia.com](https://build.nvidia.com) for quotas.
226
257
 
227
- ### Automatic Installation
258
+ ### Automatic Installation Fallback
228
259
 
229
- The tool includes a **smart fallback mechanism**:
260
+ If NVIDIA NIM is not yet configured in OpenCode, the tool:
261
+ - Shows installation instructions in your terminal
262
+ - Creates a `prompt` file in `$HOME/prompt` with the exact configuration
263
+ - Launches OpenCode, which will detect and display the prompt automatically
230
264
 
231
- 1. **Primary**: Try to launch OpenCode with the selected model
232
- 2. **Fallback**: If NVIDIA NIM is not detected in `~/.config/opencode/opencode.json`, the tool:
233
- - Shows installation instructions in your terminal
234
- - Creates a `prompt` file in `$HOME/prompt` with the exact configuration to add
235
- - Launches OpenCode, which will detect and display the prompt automatically
265
+ ---
236
266
 
237
- This **"prompt" fallback** ensures that even if NVIDIA NIM isn't pre-configured, OpenCode will guide you through installation with the ready-to-use configuration already prepared.
267
+ ## 🦞 OpenClaw Integration
238
268
 
239
- #### Example prompt file created at `$HOME/prompt`:
269
+ OpenClaw is an autonomous AI agent daemon. `free-coding-models` can configure it to use NVIDIA NIM models as its default provider — no download or local setup is needed; everything runs via the NIM remote API.
240
270
 
241
- ```json
242
- Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:
271
+ ### Quick Start
272
+
273
+ ```bash
274
+ free-coding-models --openclaw
275
+ ```
243
276
 
277
+ Or run without flags and choose **OpenClaw** from the startup menu.
278
+
279
+ 1. **Wait** for models to be pinged
280
+ 2. **Navigate** with ↑↓ arrows to your preferred model
281
+ 3. **Press Enter** — tool automatically:
282
+ - Reads `~/.openclaw/openclaw.json`
283
+ - Adds the `nvidia` provider block (NIM base URL + your API key) if missing
284
+ - Sets `agents.defaults.model.primary` to `nvidia/<model-id>`
285
+ - Saves config and prints next steps
286
+
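
The merge logic behind that Enter press can be sketched as a pure function over the config object — the field names come from this README, but the function name and shape here are hypothetical (the real tool reads and writes `~/.openclaw/openclaw.json` around this step):

```javascript
// Merge the nvidia provider block (if missing) and set the default model.
// Returns a new config object; the caller persists it to disk.
function applyOpenClawDefault(config, modelId, apiKey) {
  const next = structuredClone(config ?? {})
  next.providers ??= {}
  next.providers.nvidia ??= {
    baseUrl: 'https://integrate.api.nvidia.com/v1',
    apiKey,
    api: 'openai-completions',
    models: [],
  }
  next.agents ??= {}
  next.agents.defaults ??= {}
  next.agents.defaults.model ??= {}
  next.agents.defaults.model.primary = `nvidia/${modelId}`
  return next
}
```

Note the `??=` assignments: an existing `nvidia` provider block is left untouched, so a key or model list you configured by hand survives the update.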
287
+ ### What gets written to OpenClaw config
288
+
289
+ ```json
244
290
  {
245
- "provider": {
291
+ "providers": {
246
292
  "nvidia": {
247
- "npm": "@ai-sdk/openai-compatible",
248
- "name": "NVIDIA NIM",
249
- "options": {
250
- "baseURL": "https://integrate.api.nvidia.com/v1",
251
- "apiKey": "{env:NVIDIA_API_KEY}"
293
+ "baseUrl": "https://integrate.api.nvidia.com/v1",
294
+ "apiKey": "nvapi-xxxx-your-key",
295
+ "api": "openai-completions",
296
+ "models": [
297
+ {
298
+ "id": "deepseek-ai/deepseek-v3.2",
299
+ "name": "DeepSeek V3.2",
300
+ "contextWindow": 128000,
301
+ "maxTokens": 8192
302
+ }
303
+ ]
304
+ }
305
+ },
306
+ "agents": {
307
+ "defaults": {
308
+ "model": {
309
+ "primary": "nvidia/deepseek-ai/deepseek-v3.2"
252
310
  }
253
311
  }
254
312
  }
255
313
  }
314
+ ```
315
+
316
+ ### After updating OpenClaw config
317
+
318
+ Restart OpenClaw or run the CLI command to apply the new model:
256
319
 
257
- Then set env var: export NVIDIA_API_KEY=your_key_here
320
+ ```bash
321
+ openclaw restart
322
+ # or
323
+ openclaw models set nvidia/deepseek-ai/deepseek-v3.2
258
324
  ```
259
325
 
260
- OpenCode will automatically detect this file when launched and guide you through the installation.
326
+ > 💡 **Why use remote NIM models with OpenClaw?** NVIDIA NIM serves models via a fast API — no local GPU required, no VRAM limits, free credits for developers. You get frontier-class coding models (DeepSeek V3, Kimi K2, Qwen3 Coder) without downloading anything.
261
327
 
262
328
  ---
263
329
 
@@ -271,14 +337,12 @@ OpenCode will automatically detect this file when launched and guide you through
271
337
  │ 4. Re-ping ALL models every 2 seconds (forever) │
272
338
  │ 5. Update rolling averages from ALL successful pings │
273
339
  │ 6. User can navigate with ↑↓ and select with Enter │
274
- │ 7. On Enter: stop monitoring, exit alt screen
275
- │ 8. Detect NVIDIA NIM config in OpenCode
276
- │ 9. If configured: update default model, launch OpenCode │
277
- │ 10. If missing: show install prompt, launch OpenCode │
340
+ │ 7. On Enter (OpenCode): set model, launch OpenCode
341
+ │ 8. On Enter (OpenClaw): update ~/.openclaw/openclaw.json
278
342
  └─────────────────────────────────────────────────────────────┘
279
343
  ```
280
344
 
281
- **Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can launch OpenCode with your chosen model in one keystroke.
345
+ **Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can configure your tool of choice with your chosen model in one keystroke.
282
346
 
283
347
  ---
284
348
 
@@ -295,12 +359,22 @@ OpenCode will automatically detect this file when launched and guide you through
295
359
  - **Monitor mode**: Interface stays open forever, press Ctrl+C to exit
296
360
 
297
361
  **Flags:**
298
- - **--best** — Show only top-tier models (A+, S, S+)
299
- - **--fiable** Analyze for 10 seconds and output the most reliable model in format `provider/model_id`
362
+
363
+ | Flag | Description |
364
+ |------|-------------|
365
+ | *(none)* | Show startup menu to choose OpenCode or OpenClaw |
366
+ | `--opencode` | OpenCode mode — Enter launches OpenCode with selected model |
367
+ | `--openclaw` | OpenClaw mode — Enter sets selected model as default in OpenClaw |
368
+ | `--best` | Show only top-tier models (A+, S, S+) |
369
+ | `--fiable` | Analyze 10 seconds, output the most reliable model as `provider/model_id` |
370
+ | `--tier S` | Show only S+ and S tier models |
371
+ | `--tier A` | Show only A+, A, A- tier models |
372
+ | `--tier B` | Show only B+, B tier models |
373
+ | `--tier C` | Show only C tier models |
300
374
 
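
For `--fiable`, one plausible ranking after the 10-second window is highest uptime first, lowest average latency as the tie-break — a hedged sketch (the tool's actual scoring is not specified here):

```javascript
// Pick the most reliable model: max uptime, then min avg latency.
// Assumed record shape: { provider, modelId, uptime, avg }.
function mostReliable(results) {
  const scored = results
    .filter((r) => r.uptime > 0)
    .sort((a, b) => b.uptime - a.uptime || a.avg - b.avg)
  return scored.length ? `${scored[0].provider}/${scored[0].modelId}` : null
}
```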
301
375
  **Keyboard shortcuts:**
302
376
  - **↑↓** — Navigate models
303
- - **Enter** — Select model and launch OpenCode
377
+ - **Enter** — Select model (launches OpenCode or sets OpenClaw default, depending on mode)
304
378
  - **R/T/O/M/P/A/S/V/U** — Sort by Rank/Tier/Origin/Model/Ping/Avg/Status/Verdict/Uptime
305
379
  - **W** — Decrease ping interval (faster pings)
306
380
  - **X** — Increase ping interval (slower pings)
@@ -338,5 +412,8 @@ We welcome contributions! Feel free to open issues, submit pull requests, or get
338
412
  **Q:** How accurate are the latency numbers?
339
413
  **A:** They represent average round-trip times measured during testing; actual performance may vary based on network conditions.
340
414
 
415
+ **Q:** Do I need to download models locally for OpenClaw?
416
+ **A:** No — `free-coding-models` configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
417
+
341
418
  ## 📧 Support
342
419
  For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/QnR8xq9p
@@ -1,30 +1,33 @@
1
1
  #!/usr/bin/env node
2
2
  /**
3
3
  * @file free-coding-models.js
4
- * @description Live terminal availability checker for coding LLM models with OpenCode integration.
4
+ * @description Live terminal availability checker for coding LLM models with OpenCode & OpenClaw integration.
5
5
  *
6
6
  * @details
7
7
  * This CLI tool discovers and benchmarks language models optimized for coding.
8
8
  * It runs in an alternate screen buffer, pings all models in parallel, re-pings successful ones
9
9
  * multiple times for reliable latency measurements, and prints a clean final table.
10
- * During benchmarking, users can navigate with arrow keys and press Enter to launch OpenCode immediately.
10
+ * During benchmarking, users can navigate with arrow keys and press Enter to act on the selected model.
11
11
  *
12
12
  * 🎯 Key features:
13
13
  * - Parallel pings across all models with animated real-time updates
14
- * - Continuous monitoring with 10-second ping intervals (never stops)
14
+ * - Continuous monitoring with 2-second ping intervals (never stops)
15
15
  * - Rolling averages calculated from ALL successful pings since start
16
16
  * - Best-per-tier highlighting with medals (🥇🥈🥉)
17
17
  * - Interactive navigation with arrow keys directly in the table
18
- * - Instant OpenCode launch on Enter key press (any model, even timeout/down)
19
- * - Automatic OpenCode config detection and model setup
18
+ * - Instant OpenCode OR OpenClaw action on Enter key press
19
+ * - Startup mode menu (OpenCode vs OpenClaw) when no flag is given
20
+ * - Automatic config detection and model setup for both tools
20
21
  * - Persistent API key storage in ~/.free-coding-models
21
22
  * - Multi-source support via sources.js (easily add new providers)
22
23
  * - Uptime percentage tracking (successful pings / total pings)
23
24
  * - Sortable columns (R/T/O/M/P/A/S/V/U keys)
25
+ * - Tier filtering via --tier S/A/B/C flags
24
26
  *
25
27
  * → Functions:
26
28
  * - `loadApiKey` / `saveApiKey`: Manage persisted API key in ~/.free-coding-models
27
29
  * - `promptApiKey`: Interactive wizard for first-time API key setup
30
+ * - `promptModeSelection`: Startup menu to choose OpenCode vs OpenClaw
28
31
  * - `ping`: Perform HTTP request to NIM endpoint with timeout handling
29
32
  * - `renderTable`: Generate ASCII table with colored latency indicators and status emojis
30
33
  * - `getAvg`: Calculate average latency from all successful pings
@@ -33,6 +36,9 @@
33
36
  * - `sortResults`: Sort models by various columns
34
37
  * - `checkNvidiaNimConfig`: Check if NVIDIA NIM provider is configured in OpenCode
35
38
  * - `startOpenCode`: Launch OpenCode with selected model (configures if needed)
39
+ * - `loadOpenClawConfig` / `saveOpenClawConfig`: Manage ~/.openclaw/openclaw.json
40
+ * - `startOpenClaw`: Set selected model as default in OpenClaw config (remote, no launch)
41
+ * - `filterByTier`: Filter models by tier letter prefix (S, A, B, C)
36
42
  * - `main`: Orchestrates CLI flow, wizard, ping loops, animation, and output
37
43
  *
38
44
  * 📦 Dependencies:
@@ -45,13 +51,22 @@
45
51
  * - API key stored in ~/.free-coding-models
46
52
  * - Models loaded from sources.js (extensible for new providers)
47
53
  * - OpenCode config: ~/.config/opencode/opencode.json
48
- * - Ping timeout: 6s per attempt, max 2 retries (12s total)
49
- * - Ping interval: 10 seconds (continuous monitoring mode)
54
+ * - OpenClaw config: ~/.openclaw/openclaw.json
55
+ * - Ping timeout: 15s per attempt
56
+ * - Ping interval: 2 seconds (continuous monitoring mode)
50
57
  * - Animation: 12 FPS with braille spinners
51
- * - Reliability: Green → Yellow → Orange → Red → Black (degrades with instability)
58
+ *
59
+ * 🚀 CLI flags:
60
+ * - (no flag): Show startup menu → choose OpenCode or OpenClaw
61
+ * - --opencode: OpenCode mode (launch with selected model)
62
+ * - --openclaw: OpenClaw mode (set selected model as default in OpenClaw)
63
+ * - --best: Show only top-tier models (A+, S, S+)
64
+ * - --fiable: Analyze 10s and output the most reliable model
65
+ * - --tier S/A/B/C: Filter models by tier letter (S=S+/S, A=A+/A/A-, B=B+/B, C=C)
52
66
  *
53
67
  * @see {@link https://build.nvidia.com} NVIDIA API key generation
54
68
  * @see {@link https://github.com/opencode-ai/opencode} OpenCode repository
69
+ * @see {@link https://openclaw.ai} OpenClaw documentation
55
70
  */
56
71
 
57
72
  import chalk from 'chalk'
@@ -110,6 +125,78 @@ async function promptApiKey() {
110
125
  })
111
126
  }
112
127
 
128
+ // ─── Startup mode selection menu ──────────────────────────────────────────────
129
+ // 📖 Shown at startup when neither --opencode nor --openclaw flag is given.
130
+ // 📖 Simple arrow-key selector in normal terminal (not alt screen).
131
+ // 📖 Returns 'opencode' or 'openclaw'.
132
+ async function promptModeSelection() {
133
+ const options = [
134
+ {
135
+ label: 'OpenCode',
136
+ icon: '💻',
137
+ description: 'Press Enter on a model → launch OpenCode with it as default',
138
+ },
139
+ {
140
+ label: 'OpenClaw',
141
+ icon: '🦞',
142
+ description: 'Press Enter on a model → set it as default in OpenClaw config',
143
+ },
144
+ ]
145
+
146
+ return new Promise((resolve) => {
147
+ let selected = 0
148
+
149
+ // 📖 Render the menu to stdout (clear + redraw)
150
+ const render = () => {
151
+ process.stdout.write('\x1b[2J\x1b[H') // clear screen + cursor home
152
+ console.log()
153
+ console.log(chalk.bold(' ⚡ Free Coding Models') + chalk.dim(' — Choose your tool'))
154
+ console.log()
155
+ for (let i = 0; i < options.length; i++) {
156
+ const isSelected = i === selected
157
+ const bullet = isSelected ? chalk.bold.cyan(' ❯ ') : chalk.dim(' ')
158
+ const label = isSelected
159
+ ? chalk.bold.white(options[i].icon + ' ' + options[i].label)
160
+ : chalk.dim(options[i].icon + ' ' + options[i].label)
161
+ const desc = chalk.dim(' ' + options[i].description)
162
+ console.log(bullet + label)
163
+ console.log(desc)
164
+ console.log()
165
+ }
166
+ console.log(chalk.dim(' ↑↓ Navigate • Enter Select • Ctrl+C Exit'))
167
+ console.log()
168
+ }
169
+
170
+ render()
171
+
172
+ readline.emitKeypressEvents(process.stdin)
173
+ if (process.stdin.isTTY) process.stdin.setRawMode(true)
174
+
175
+ const onKey = (_str, key) => {
176
+ if (!key) return
177
+ if (key.ctrl && key.name === 'c') {
178
+ if (process.stdin.isTTY) process.stdin.setRawMode(false)
179
+ process.stdin.removeListener('keypress', onKey)
180
+ process.exit(0)
181
+ }
182
+ if (key.name === 'up' && selected > 0) {
183
+ selected--
184
+ render()
185
+ } else if (key.name === 'down' && selected < options.length - 1) {
186
+ selected++
187
+ render()
188
+ } else if (key.name === 'return') {
189
+ if (process.stdin.isTTY) process.stdin.setRawMode(false)
190
+ process.stdin.removeListener('keypress', onKey)
191
+ process.stdin.pause()
192
+ resolve(selected === 0 ? 'opencode' : 'openclaw')
193
+ }
194
+ }
195
+
196
+ process.stdin.on('keypress', onKey)
197
+ })
198
+ }
199
+
113
200
  // ─── Alternate screen control ─────────────────────────────────────────────────
114
201
  // 📖 \x1b[?1049h = enter alt screen \x1b[?1049l = leave alt screen
115
202
  // 📖 \x1b[?25l = hide cursor \x1b[?25h = show cursor
@@ -181,7 +268,7 @@ const VERDICT_ORDER = ['Perfect', 'Normal', 'Slow', 'Very Slow', 'Overloaded', '
181
268
  const getVerdict = (r) => {
182
269
  const avg = getAvg(r)
183
270
  const wasUpBefore = r.pings.length > 0 && r.pings.some(p => p.code === '200')
184
-
271
+
185
272
  // 📖 429 = rate limited = Overloaded
186
273
  if (r.httpCode === '429') return 'Overloaded'
187
274
  if ((r.status === 'timeout' || r.status === 'down') && wasUpBefore) return 'Unstable'
@@ -207,7 +294,7 @@ const getUptime = (r) => {
207
294
  const sortResults = (results, sortColumn, sortDirection) => {
208
295
  return [...results].sort((a, b) => {
209
296
  let cmp = 0
210
-
297
+
211
298
  switch (sortColumn) {
212
299
  case 'rank':
213
300
  cmp = a.idx - b.idx
@@ -245,12 +332,13 @@ const sortResults = (results, sortColumn, sortDirection) => {
245
332
  cmp = getUptime(a) - getUptime(b)
246
333
  break
247
334
  }
248
-
335
+
249
336
  return sortDirection === 'asc' ? cmp : -cmp
250
337
  })
251
338
  }
252
339
 
253
- function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now()) {
340
+ // 📖 renderTable: mode param controls footer hint text (opencode vs openclaw)
341
+ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode') {
254
342
  const up = results.filter(r => r.status === 'up').length
255
343
  const down = results.filter(r => r.status === 'down').length
256
344
  const timeout = results.filter(r => r.status === 'timeout').length
@@ -267,6 +355,11 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
267
355
  ? chalk.dim(`pinging — ${pendingPings} in flight…`)
268
356
  : chalk.dim(`next ping ${secondsUntilNext}s`)
269
357
 
358
+ // 📖 Mode badge shown in header so user knows what Enter will do
359
+ const modeBadge = mode === 'openclaw'
360
+ ? chalk.bold.rgb(255, 100, 50)(' [🦞 OpenClaw]')
361
+ : chalk.bold.rgb(0, 200, 255)(' [💻 OpenCode]')
362
+
270
363
  // 📖 Column widths (generous spacing with margins)
271
364
  const W_RANK = 6
272
365
  const W_TIER = 6
@@ -283,7 +376,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
283
376
 
284
377
  const lines = [
285
378
  '',
286
- ` ${chalk.bold('⚡ Free Coding Models')} ` +
379
+ ` ${chalk.bold('⚡ Free Coding Models')}${modeBadge} ` +
287
380
  chalk.greenBright(`✅ ${up}`) + chalk.dim(' up ') +
288
381
  chalk.yellow(`⏱ ${timeout}`) + chalk.dim(' timeout ') +
289
382
  chalk.red(`❌ ${down}`) + chalk.dim(' down ') +
@@ -295,7 +388,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
295
388
  // 📖 NOTE: padEnd on chalk strings counts ANSI codes, breaking alignment
296
389
  // 📖 Solution: build plain text first, then colorize
297
390
  const dir = sortDirection === 'asc' ? '↑' : '↓'
298
-
391
+
299
392
  const rankH = 'Rank'
300
393
  const tierH = 'Tier'
301
394
  const originH = 'Origin'
@@ -305,7 +398,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
305
398
  const statusH = sortColumn === 'status' ? dir + ' Status' : 'Status'
306
399
  const verdictH = sortColumn === 'verdict' ? dir + ' Verdict' : 'Verdict'
307
400
  const uptimeH = sortColumn === 'uptime' ? dir + ' Up%' : 'Up%'
308
-
401
+
309
402
  // 📖 Now colorize after padding is calculated on plain text
310
403
  const rankH_c = chalk.dim(rankH.padEnd(W_RANK))
311
404
  const tierH_c = chalk.dim(tierH.padEnd(W_TIER))
@@ -316,13 +409,13 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
316
409
  const statusH_c = sortColumn === 'status' ? chalk.bold.cyan(statusH.padEnd(W_STATUS)) : chalk.dim(statusH.padEnd(W_STATUS))
317
410
  const verdictH_c = sortColumn === 'verdict' ? chalk.bold.cyan(verdictH.padEnd(W_VERDICT)) : chalk.dim(verdictH.padEnd(W_VERDICT))
318
411
  const uptimeH_c = sortColumn === 'uptime' ? chalk.bold.cyan(uptimeH.padStart(W_UPTIME)) : chalk.dim(uptimeH.padStart(W_UPTIME))
319
-
412
+
320
413
  // 📖 Header with proper spacing
321
414
  lines.push(' ' + rankH_c + ' ' + tierH_c + ' ' + originH_c + ' ' + modelH_c + ' ' + pingH_c + ' ' + avgH_c + ' ' + statusH_c + ' ' + verdictH_c + ' ' + uptimeH_c)
322
-
415
+
323
416
  // 📖 Separator line
324
417
  lines.push(
325
- ' ' +
418
+ ' ' +
326
419
  chalk.dim('─'.repeat(W_RANK)) + ' ' +
327
420
  chalk.dim('─'.repeat(W_TIER)) + ' ' +
328
421
  '─'.repeat(W_SOURCE) + ' ' +
@@ -337,9 +430,9 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
337
430
  for (let i = 0; i < sorted.length; i++) {
338
431
  const r = sorted[i]
339
432
  const tierFn = TIER_COLOR[r.tier] ?? (t => chalk.white(t))
340
-
433
+
341
434
  const isCursor = cursor !== null && i === cursor
342
-
435
+
343
436
  // 📖 Left-aligned columns - pad plain text first, then colorize
344
437
  const num = chalk.dim(String(r.idx).padEnd(W_RANK))
345
438
  const tier = tierFn(r.tier.padEnd(W_TIER))
@@ -452,7 +545,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
452
545
 
453
546
  // 📖 Build row with double space between columns
454
547
  const row = ' ' + num + ' ' + tier + ' ' + source + ' ' + name + ' ' + pingCell + ' ' + avgCell + ' ' + status + ' ' + speedCell + ' ' + uptimeCell
455
-
548
+
456
549
  if (isCursor) {
457
550
  lines.push(chalk.bgRgb(139, 0, 139)(row))
458
551
  } else {
@@ -462,7 +555,12 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
462
555
 
463
556
  lines.push('')
464
557
  const intervalSec = Math.round(pingInterval / 1000)
465
- lines.push(chalk.dim(` ↑↓ Navigate • Enter Select • R/T/O/M/P/A/S/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • Ctrl+C Exit`))
558
+
559
+ // 📖 Footer hints adapt based on active mode
560
+ const actionHint = mode === 'openclaw'
561
+ ? chalk.rgb(255, 100, 50)('Enter→SetOpenClaw')
562
+ : chalk.rgb(0, 200, 255)('Enter→OpenCode')
563
+ lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/T/O/M/P/A/S/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • Ctrl+C Exit`))
466
564
  lines.push('')
467
565
  return lines.join('\n')
468
566
  }
@@ -482,9 +580,9 @@ async function ping(apiKey, modelId) {
482
580
  return { code: String(resp.status), ms: Math.round(performance.now() - t0) }
483
581
  } catch (err) {
484
582
  const isTimeout = err.name === 'AbortError'
485
- return {
486
- code: isTimeout ? '000' : 'ERR',
487
- ms: isTimeout ? 'TIMEOUT' : Math.round(performance.now() - t0)
583
+ return {
584
+ code: isTimeout ? '000' : 'ERR',
585
+ ms: isTimeout ? 'TIMEOUT' : Math.round(performance.now() - t0)
488
586
  }
489
587
  } finally {
490
588
  clearTimeout(timer)
@@ -520,7 +618,7 @@ function checkNvidiaNimConfig() {
520
618
  if (!config.provider) return false
521
619
  // 📖 Check for nvidia/nim provider by key name or display name (case-insensitive)
522
620
  const providerKeys = Object.keys(config.provider)
523
- return providerKeys.some(key =>
621
+ return providerKeys.some(key =>
524
622
  key === 'nvidia' || key === 'nim' ||
525
623
  config.provider[key]?.name?.toLowerCase().includes('nvidia') ||
526
624
  config.provider[key]?.name?.toLowerCase().includes('nim')
@@ -533,38 +631,38 @@ function checkNvidiaNimConfig() {
 // 📖 Model format: { modelId, label, tier }
 async function startOpenCode(model) {
   const hasNim = checkNvidiaNimConfig()
-
+
   if (hasNim) {
     // 📖 NVIDIA NIM already configured - launch with model flag
     console.log(chalk.green(` 🚀 Setting ${chalk.bold(model.label)} as default…`))
     console.log(chalk.dim(`    Model: nvidia/${model.modelId}`))
     console.log()
-
+
     const config = loadOpenCodeConfig()
     const backupPath = `${OPENCODE_CONFIG}.backup-${Date.now()}`
-
+
     // 📖 Backup current config
     if (existsSync(OPENCODE_CONFIG)) {
       copyFileSync(OPENCODE_CONFIG, backupPath)
       console.log(chalk.dim(`    💾 Backup: ${backupPath}`))
     }
-
+
     // 📖 Update default model to nvidia/model_id
     config.model = `nvidia/${model.modelId}`
     saveOpenCodeConfig(config)
-
+
     console.log(chalk.green(` ✓ Default model set to: nvidia/${model.modelId}`))
     console.log()
     console.log(chalk.dim(' Starting OpenCode…'))
     console.log()
-
+
     // 📖 Launch OpenCode and wait for it
     const { spawn } = await import('child_process')
     const child = spawn('opencode', [], {
       stdio: 'inherit',
       shell: false
     })
-
+
     // 📖 Wait for OpenCode to exit
     await new Promise((resolve, reject) => {
       child.on('exit', resolve)
@@ -576,7 +674,7 @@ async function startOpenCode(model) {
     console.log()
     console.log(chalk.dim(' Starting OpenCode with installation prompt…'))
     console.log()
-
+
     const installPrompt = `Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:
 
 {
@@ -595,18 +693,18 @@ async function startOpenCode(model) {
 Then set env var: export NVIDIA_API_KEY=your_key_here
 
 After installation, you can use: opencode --model nvidia/${model.modelId}`
-
+
     console.log(chalk.cyan(installPrompt))
     console.log()
     console.log(chalk.dim(' Starting OpenCode…'))
     console.log()
-
+
     const { spawn } = await import('child_process')
     const child = spawn('opencode', [], {
       stdio: 'inherit',
       shell: false
     })
-
+
     // 📖 Wait for OpenCode to exit
     await new Promise((resolve, reject) => {
       child.on('exit', resolve)
@@ -615,6 +713,93 @@ After installation, you can use: opencode --model nvidia/${model.modelId}`
   }
 }
 
+// ─── OpenClaw integration ──────────────────────────────────────────────────────
+// 📖 OpenClaw config: ~/.openclaw/openclaw.json (JSON format, may be JSON5 in newer versions)
+// 📖 To set a model: set agents.defaults.model.primary = "nvidia/model-id"
+// 📖 Providers section uses baseUrl + apiKey + api: "openai-completions" format
+// 📖 See: https://docs.openclaw.ai/gateway/configuration
+const OPENCLAW_CONFIG = join(homedir(), '.openclaw', 'openclaw.json')
+
+function loadOpenClawConfig() {
+  if (!existsSync(OPENCLAW_CONFIG)) return {}
+  try {
+    // 📖 JSON.parse works for standard JSON; OpenClaw may use JSON5 but base config is valid JSON
+    return JSON.parse(readFileSync(OPENCLAW_CONFIG, 'utf8'))
+  } catch {
+    return {}
+  }
+}
+
+function saveOpenClawConfig(config) {
+  const dir = join(homedir(), '.openclaw')
+  if (!existsSync(dir)) {
+    mkdirSync(dir, { recursive: true })
+  }
+  writeFileSync(OPENCLAW_CONFIG, JSON.stringify(config, null, 2))
+}
+
+// 📖 startOpenClaw: sets the selected NVIDIA NIM model as default in OpenClaw config.
+// 📖 Also ensures the nvidia provider block is present with the NIM base URL.
+// 📖 Does NOT launch OpenClaw — OpenClaw runs as a daemon, so config changes are picked up on restart.
+async function startOpenClaw(model, apiKey) {
+  console.log(chalk.rgb(255, 100, 50)(` 🦞 Setting ${chalk.bold(model.label)} as OpenClaw default…`))
+  console.log(chalk.dim(`    Model: nvidia/${model.modelId}`))
+  console.log()
+
+  const config = loadOpenClawConfig()
+
+  // 📖 Backup existing config before touching it
+  if (existsSync(OPENCLAW_CONFIG)) {
+    const backupPath = `${OPENCLAW_CONFIG}.backup-${Date.now()}`
+    copyFileSync(OPENCLAW_CONFIG, backupPath)
+    console.log(chalk.dim(`    💾 Backup: ${backupPath}`))
+  }
+
+  // 📖 Ensure providers section exists with nvidia NIM block
+  // 📖 Only injects if not already present - we don't overwrite existing provider config
+  if (!config.providers) config.providers = {}
+  if (!config.providers.nvidia) {
+    config.providers.nvidia = {
+      baseUrl: 'https://integrate.api.nvidia.com/v1',
+      // 📖 Key resolution: explicit arg, else NVIDIA_API_KEY env var, else a placeholder the user must replace
+      apiKey: apiKey || process.env.NVIDIA_API_KEY || 'YOUR_NVIDIA_API_KEY',
+      api: 'openai-completions',
+      models: [],
+    }
+    console.log(chalk.dim('    ➕ Added nvidia provider block to OpenClaw config'))
+  }
+
+  // 📖 Ensure the chosen model is in the nvidia models array
+  const modelsArr = config.providers.nvidia.models
+  const modelEntry = {
+    id: model.modelId,
+    name: model.label,
+    contextWindow: 128000,
+    maxTokens: 8192,
+  }
+  const alreadyListed = modelsArr.some(m => m.id === model.modelId)
+  if (!alreadyListed) {
+    modelsArr.push(modelEntry)
+    console.log(chalk.dim(`    ➕ Added ${model.label} to nvidia models list`))
+  }
+
+  // 📖 Set as the default primary model for all agents
+  if (!config.agents) config.agents = {}
+  if (!config.agents.defaults) config.agents.defaults = {}
+  if (!config.agents.defaults.model) config.agents.defaults.model = {}
+  config.agents.defaults.model.primary = `nvidia/${model.modelId}`
+
+  saveOpenClawConfig(config)
+
+  console.log(chalk.rgb(255, 140, 0)(` ✓ Default model set to: nvidia/${model.modelId}`))
+  console.log()
+  console.log(chalk.dim(' 📄 Config updated: ' + OPENCLAW_CONFIG))
+  console.log()
+  console.log(chalk.dim(' 💡 Restart OpenClaw for changes to take effect:'))
+  console.log(chalk.dim('    openclaw restart') + chalk.dim(' or ') + chalk.dim('openclaw models set nvidia/' + model.modelId))
+  console.log()
+}
+
 // ─── Helper function to find best model after analysis ────────────────────────
 function findBestModel(results) {
   // 📖 Sort by avg ping (fastest first), then by uptime percentage (most reliable)
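The `startOpenClaw` hunk above layers three idempotent mutations onto whatever config already exists: ensure the provider block, ensure the model entry, set the default. A standalone sketch of that merge logic (the name `applyNimDefault` and the trimmed model entry are illustrative, not part of the package):

```javascript
// Sketch of the config merge performed by startOpenClaw above.
// applyNimDefault is a hypothetical name; the real code mutates the loaded
// config in place and writes it back with saveOpenClawConfig.
function applyNimDefault(config, model, apiKey) {
  // Ensure the nvidia provider block without clobbering an existing one
  if (!config.providers) config.providers = {}
  if (!config.providers.nvidia) {
    config.providers.nvidia = {
      baseUrl: 'https://integrate.api.nvidia.com/v1',
      apiKey: apiKey || 'YOUR_NVIDIA_API_KEY',
      api: 'openai-completions',
      models: [],
    }
  }
  // Add the model entry only if it is not already listed
  const models = config.providers.nvidia.models
  if (!models.some(m => m.id === model.modelId)) {
    models.push({ id: model.modelId, name: model.label })
  }
  // Set the default primary model for all agents
  if (!config.agents) config.agents = {}
  if (!config.agents.defaults) config.agents.defaults = {}
  if (!config.agents.defaults.model) config.agents.defaults.model = {}
  config.agents.defaults.model.primary = `nvidia/${model.modelId}`
  return config
}
```

Running it twice with the same model leaves the config unchanged, which is what makes re-running the CLI safe.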
@@ -623,18 +808,18 @@ function findBestModel(results) {
     const avgB = getAvg(b)
     const uptimeA = getUptime(a)
     const uptimeB = getUptime(b)
-
+
     // 📖 Priority 1: Models that are up (status === 'up')
     if (a.status === 'up' && b.status !== 'up') return -1
     if (a.status !== 'up' && b.status === 'up') return 1
-
+
     // 📖 Priority 2: Fastest average ping
     if (avgA !== avgB) return avgA - avgB
-
+
     // 📖 Priority 3: Highest uptime percentage
     return uptimeB - uptimeA
   })
-
+
   return sorted.length > 0 ? sorted[0] : null
 }
 
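`findBestModel` above sorts on three keys: up status first, then lowest average ping, then highest uptime. A self-contained version for illustration; the real `getAvg` and `getUptime` live elsewhere in the file, so the stand-ins below only assume they average successful pings and report the share of '200' responses:

```javascript
// Stand-ins for the package's getAvg/getUptime helpers (assumed semantics).
const getAvg = r => {
  const ok = r.pings.filter(p => typeof p.ms === 'number')
  return ok.length ? ok.reduce((s, p) => s + p.ms, 0) / ok.length : Infinity
}
const getUptime = r => r.pings.length
  ? (100 * r.pings.filter(p => p.code === '200').length) / r.pings.length
  : 0

// Same three-key comparator as the diff: up status, then avg ping, then uptime.
function findBestModel(results) {
  const sorted = [...results].sort((a, b) => {
    if (a.status === 'up' && b.status !== 'up') return -1
    if (a.status !== 'up' && b.status === 'up') return 1
    const avgA = getAvg(a), avgB = getAvg(b)
    if (avgA !== avgB) return avgA - avgB
    return getUptime(b) - getUptime(a)
  })
  return sorted.length > 0 ? sorted[0] : null
}
```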
@@ -642,17 +827,17 @@ function findBestModel(results) {
 async function runFiableMode(apiKey) {
   console.log(chalk.cyan(' ⚡ Analyzing models for reliability (10 seconds)...'))
   console.log()
-
+
   let results = MODELS.map(([modelId, label, tier], i) => ({
     idx: i + 1, modelId, label, tier,
     status: 'pending',
     pings: [],
     httpCode: null,
   }))
-
+
   const startTime = Date.now()
   const analysisDuration = 10000 // 10 seconds
-
+
   // 📖 Run initial pings
   const pingPromises = results.map(r => ping(apiKey, r.modelId).then(({ code, ms }) => {
     r.pings.push({ ms, code })
@@ -665,23 +850,23 @@ async function runFiableMode(apiKey) {
       r.httpCode = code
     }
   }))
-
+
   await Promise.allSettled(pingPromises)
-
+
   // 📖 Continue pinging for the remaining time
   const remainingTime = Math.max(0, analysisDuration - (Date.now() - startTime))
   if (remainingTime > 0) {
     await new Promise(resolve => setTimeout(resolve, remainingTime))
   }
-
+
   // 📖 Find best model
   const best = findBestModel(results)
-
+
   if (!best) {
     console.log(chalk.red(' ✖ No reliable model found'))
     process.exit(1)
   }
-
+
   // 📖 Output in format: provider/name
   const provider = 'nvidia' // Always NVIDIA NIM for now
   console.log(chalk.green(` ✓ Most reliable model:`))
@@ -691,18 +876,41 @@ async function runFiableMode(apiKey) {
   console.log(chalk.dim(`    Avg ping: ${getAvg(best)}ms`))
   console.log(chalk.dim(`    Uptime: ${getUptime(best)}%`))
   console.log(chalk.dim(`    Status: ${best.status === 'up' ? '✅ UP' : '❌ DOWN'}`))
-
+
   process.exit(0)
 }
 
+// ─── Tier filter helper ────────────────────────────────────────────────────────
+// 📖 Maps a single tier letter (S, A, B, C) to the full set of matching tier strings.
+// 📖   --tier S → includes S+ and S
+// 📖   --tier A → includes A+, A, A-
+// 📖   --tier B → includes B+, B
+// 📖   --tier C → includes C only
+const TIER_LETTER_MAP = {
+  'S': ['S+', 'S'],
+  'A': ['A+', 'A', 'A-'],
+  'B': ['B+', 'B'],
+  'C': ['C'],
+}
+
+function filterByTier(results, tierLetter) {
+  const letter = tierLetter.toUpperCase()
+  const allowed = TIER_LETTER_MAP[letter]
+  if (!allowed) {
+    console.error(chalk.red(` ✖ Unknown tier "${tierLetter}". Valid tiers: S, A, B, C`))
+    process.exit(1)
+  }
+  return results.filter(r => allowed.includes(r.tier))
+}
+
 async function main() {
   // 📖 Parse CLI arguments properly
   const args = process.argv.slice(2)
-
+
   // 📖 Extract API key (first non-flag argument) and flags
   let apiKey = null
   const flags = []
-
+
   for (const arg of args) {
     if (arg.startsWith('--')) {
       flags.push(arg.toLowerCase())
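The tier filter added above is a pure lookup plus `Array.prototype.filter`, so it can be exercised in isolation. A sketch (the `process.exit` branch is replaced here by returning an empty list, an assumption made purely so the function is testable):

```javascript
// Same letter-to-sub-tier mapping as the diff above.
const TIER_LETTER_MAP = {
  'S': ['S+', 'S'],
  'A': ['A+', 'A', 'A-'],
  'B': ['B+', 'B'],
  'C': ['C'],
}

// Variant of filterByTier that returns [] instead of exiting the process.
function filterByTier(results, tierLetter) {
  const allowed = TIER_LETTER_MAP[tierLetter.toUpperCase()]
  if (!allowed) return []
  return results.filter(r => allowed.includes(r.tier))
}
```

Lowercase input works because the letter is uppercased before the lookup.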
@@ -710,16 +918,26 @@ async function main() {
       apiKey = arg
     }
   }
-
+
   // 📖 Priority: CLI arg > env var > saved config > wizard
   if (!apiKey) {
     apiKey = process.env.NVIDIA_API_KEY || loadApiKey()
   }
-
+
   // 📖 Check for CLI flags
-  const bestMode = flags.includes('--best')
-  const fiableMode = flags.includes('--fiable') || flags.includes('--fiable') // Support both
-
+  const bestMode = flags.includes('--best')
+  const fiableMode = flags.includes('--fiable')
+  const openCodeMode = flags.includes('--opencode')
+  const openClawMode = flags.includes('--openclaw')
+
+  // 📖 Parse --tier X flag (e.g. --tier S, --tier A)
+  // 📖 Find "--tier" in the raw args array, then take the next arg as the tier value
+  let tierFilter = null
+  const tierIdx = args.findIndex(a => a.toLowerCase() === '--tier')
+  if (tierIdx !== -1 && args[tierIdx + 1] && !args[tierIdx + 1].startsWith('--')) {
+    tierFilter = args[tierIdx + 1].toUpperCase()
+  }
+
   if (!apiKey) {
     apiKey = await promptApiKey()
     if (!apiKey) {
@@ -730,12 +948,26 @@ async function main() {
       process.exit(1)
     }
   }
-
+
   // 📖 Handle fiable mode first (it exits after analysis)
   if (fiableMode) {
     await runFiableMode(apiKey)
   }
 
+  // 📖 Determine active mode:
+  // --opencode → opencode
+  // --openclaw → openclaw
+  // neither → show interactive startup menu
+  let mode
+  if (openClawMode) {
+    mode = 'openclaw'
+  } else if (openCodeMode) {
+    mode = 'opencode'
+  } else {
+    // 📖 No mode flag given — ask user with the startup menu
+    mode = await promptModeSelection()
+  }
+
   // 📖 Filter models to only show top tiers if BEST mode is active
   let results = MODELS.map(([modelId, label, tier], i) => ({
     idx: i + 1, modelId, label, tier,
@@ -743,26 +975,32 @@ async function main() {
     pings: [], // 📖 All ping results (ms or 'TIMEOUT')
     httpCode: null,
   }))
-
+
   if (bestMode) {
     results = results.filter(r => r.tier === 'S+' || r.tier === 'S' || r.tier === 'A+')
   }
 
+  // 📖 Apply tier letter filter if --tier X was given
+  if (tierFilter) {
+    results = filterByTier(results, tierFilter)
+  }
+
   // 📖 Add interactive selection state - cursor index and user's choice
   // 📖 sortColumn: 'rank'|'tier'|'origin'|'model'|'ping'|'avg'|'status'|'verdict'|'uptime'
   // 📖 sortDirection: 'asc' (default) or 'desc'
-  // 📖 pingInterval: current interval in ms (default 5000, adjustable with W/X keys)
-  const state = {
-    results,
-    pendingPings: 0,
-    frame: 0,
-    cursor: 0,
+  // 📖 pingInterval: current interval in ms (default 2000, adjustable with W/X keys)
+  const state = {
+    results,
+    pendingPings: 0,
+    frame: 0,
+    cursor: 0,
     selectedModel: null,
     sortColumn: 'avg',
     sortDirection: 'asc',
-    pingInterval: PING_INTERVAL, // 📖 Track current interval for C/V keys
-    lastPingTime: Date.now(), // 📖 Track when last ping cycle started
-    fiableMode // 📖 Pass fiable mode to state
+    pingInterval: PING_INTERVAL, // 📖 Track current interval for W/X keys
+    lastPingTime: Date.now(), // 📖 Track when last ping cycle started
+    fiableMode, // 📖 Pass fiable mode to state
+    mode, // 📖 'opencode' or 'openclaw' — controls Enter action
   }
 
   // 📖 Enter alternate screen — animation runs here, zero scrollback pollution
@@ -782,18 +1020,18 @@ async function main() {
   // 📖 Use readline with keypress event for arrow key handling
   process.stdin.setEncoding('utf8')
   process.stdin.resume()
-
+
   let userSelected = null
-
+
   const onKeyPress = async (str, key) => {
     if (!key) return
-
-    // 📖 Sorting keys: R=rank, T=tier, O=origin, M=model, P=ping, A=avg, S=status, V=verdict, L=reliability
+
+    // 📖 Sorting keys: R=rank, T=tier, O=origin, M=model, P=ping, A=avg, S=status, V=verdict, U=uptime
     const sortKeys = {
       'r': 'rank', 't': 'tier', 'o': 'origin', 'm': 'model',
       'p': 'ping', 'a': 'avg', 's': 'status', 'v': 'verdict', 'u': 'uptime'
     }
-
+
     if (sortKeys[key.name]) {
       const col = sortKeys[key.name]
       // 📖 Toggle direction if same column, otherwise reset to asc
@@ -805,98 +1043,101 @@ async function main() {
       }
       return
     }
-
+
     // 📖 Interval adjustment keys: W=decrease (faster), X=increase (slower)
     // 📖 Minimum 1s, maximum 60s
     if (key.name === 'w') {
       state.pingInterval = Math.max(1000, state.pingInterval - 1000)
       return
     }
-
+
     if (key.name === 'x') {
       state.pingInterval = Math.min(60000, state.pingInterval + 1000)
       return
     }
-
+
     if (key.name === 'up') {
       if (state.cursor > 0) {
         state.cursor--
       }
       return
     }
-
+
     if (key.name === 'down') {
       if (state.cursor < results.length - 1) {
         state.cursor++
       }
       return
     }
-
+
     if (key.name === 'c' && key.ctrl) { // Ctrl+C
       exit(0)
       return
     }
-
+
     if (key.name === 'return') { // Enter
       // 📖 Use the same sorting as the table display
       const sorted = sortResults(results, state.sortColumn, state.sortDirection)
       const selected = sorted[state.cursor]
       // 📖 Allow selecting ANY model (even timeout/down) - user knows what they're doing
-      if (true) {
-        userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier }
-        // 📖 Stop everything and launch OpenCode immediately
-        clearInterval(ticker)
-        clearTimeout(state.pingIntervalObj)
-        readline.emitKeypressEvents(process.stdin)
-        process.stdin.setRawMode(true)
-        process.stdin.pause()
-        process.stdin.removeListener('keypress', onKeyPress)
-        process.stdout.write(ALT_LEAVE)
-
-        // 📖 Show selection with status
-        if (selected.status === 'timeout') {
-          console.log(chalk.yellow(` ⚠ Selected: ${selected.label} (currently timing out)`))
-        } else if (selected.status === 'down') {
-          console.log(chalk.red(` ⚠ Selected: ${selected.label} (currently down)`))
-        } else {
-          console.log(chalk.cyan(` ✓ Selected: ${selected.label}`))
-        }
-        console.log()
-
-        // 📖 Wait for OpenCode to finish before exiting
+      userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier }
+
+      // 📖 Stop everything and act on selection immediately
+      clearInterval(ticker)
+      clearTimeout(state.pingIntervalObj)
+      readline.emitKeypressEvents(process.stdin)
+      process.stdin.setRawMode(true)
+      process.stdin.pause()
+      process.stdin.removeListener('keypress', onKeyPress)
+      process.stdout.write(ALT_LEAVE)
+
+      // 📖 Show selection with status
+      if (selected.status === 'timeout') {
+        console.log(chalk.yellow(` ⚠ Selected: ${selected.label} (currently timing out)`))
+      } else if (selected.status === 'down') {
+        console.log(chalk.red(` ⚠ Selected: ${selected.label} (currently down)`))
+      } else {
+        console.log(chalk.cyan(` ✓ Selected: ${selected.label}`))
+      }
+      console.log()
+
+      // 📖 Dispatch to the correct integration based on active mode
+      if (state.mode === 'openclaw') {
+        await startOpenClaw(userSelected, apiKey)
+      } else {
         await startOpenCode(userSelected)
-        process.exit(0)
       }
+      process.exit(0)
     }
   }
-
+
   // 📖 Enable keypress events on stdin
   readline.emitKeypressEvents(process.stdin)
   if (process.stdin.isTTY) {
     process.stdin.setRawMode(true)
   }
-
+
   process.stdin.on('keypress', onKeyPress)
 
   // 📖 Animation loop: clear alt screen + redraw table at FPS with cursor
   const ticker = setInterval(() => {
     state.frame++
-    process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime))
+    process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode))
   }, Math.round(1000 / FPS))
 
-  process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime))
+  process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode))
+
+  // ── Continuous ping loop — ping all models every N seconds forever ──────────
 
-  // ── Continuous ping loop — ping all models every 10 seconds forever ──────────
-
   // 📖 Single ping function that updates result
   const pingModel = async (r) => {
     const { code, ms } = await ping(apiKey, r.modelId)
-
+
     // 📖 Store ping result as object with ms and code
     // 📖 ms = actual response time (even for errors like 429)
     // 📖 code = HTTP status code ('200', '429', '500', '000' for timeout)
     r.pings.push({ ms, code })
-
+
     // 📖 Update status based on latest ping
     if (code === '200') {
       r.status = 'up'
@@ -910,23 +1151,23 @@ async function main() {
 
   // 📖 Initial ping of all models
   const initialPing = Promise.all(results.map(r => pingModel(r)))
-
+
   // 📖 Continuous ping loop with dynamic interval (adjustable with W/X keys)
   const schedulePing = () => {
     state.pingIntervalObj = setTimeout(async () => {
       state.lastPingTime = Date.now()
-
+
       results.forEach(r => {
         pingModel(r).catch(() => {
           // Individual ping failures don't crash the loop
         })
       })
-
+
       // 📖 Schedule next ping with current interval
       schedulePing()
     }, state.pingInterval)
   }
-
+
   // 📖 Start the ping loop
   state.pingIntervalObj = null
   schedulePing()
@@ -943,4 +1184,4 @@ main().catch((err) => {
   process.stdout.write(ALT_LEAVE)
   console.error(err)
   process.exit(1)
- })
+ })
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "free-coding-models",
-  "version": "0.1.2",
+  "version": "0.1.3",
   "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
   "keywords": [
     "nvidia",