free-coding-models 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -9,7 +9,7 @@
 
  <p align="center">
  <strong>Find the fastest coding LLM models in seconds</strong><br>
- <sub>Ping free models from multiple providers — pick the best one for OpenCode, Cursor, or any AI coding assistant</sub>
+ <sub>Ping free NVIDIA NIM models in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
  </p>
 
  <p align="center">
@@ -22,6 +22,8 @@
  <a href="#-installation">Installation</a> •
  <a href="#-usage">Usage</a> •
  <a href="#-models">Models</a> •
+ <a href="#-opencode-integration">OpenCode</a> •
+ <a href="#-openclaw-integration">OpenClaw</a> •
  <a href="#-how-it-works">How it works</a>
  </p>
 
@@ -37,11 +39,14 @@
  - **📈 Rolling averages** — Avg calculated from ALL successful pings since start
  - **📊 Uptime tracking** — Percentage of successful pings shown in real-time
  - **🔄 Auto-retry** — Timeout models keep getting retried, nothing is ever "given up on"
- - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to launch OpenCode
- - **🔌 Auto-configuration** — Detects NVIDIA NIM setup, installs if missing, sets as default model
+ - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to act
+ - **🔀 Startup mode menu** — Choose between OpenCode and OpenClaw before the TUI launches
+ - **💻 OpenCode integration** — Auto-detects NIM setup, sets model as default, launches OpenCode
+ - **🦞 OpenClaw integration** — Sets selected model as default provider in `~/.openclaw/openclaw.json`
  - **🎨 Clean output** — Zero scrollback pollution, interface stays open until Ctrl+C
  - **📶 Status indicators** — UP ✅ · Timeout ⏳ · Overloaded 🔥 · Not Found 🚫
  - **🔧 Multi-source support** — Extensible architecture via `sources.js` (add new providers easily)
+ - **🏷 Tier filtering** — Filter models by tier letter (S, A, B, C) with `--tier`
 
  ---
 
@@ -50,11 +55,12 @@
  Before using `free-coding-models`, make sure you have:
 
  1. **Node.js 18+** — Required for native `fetch` API
- 2. **OpenCode installed** — [Install OpenCode](https://github.com/opencode-ai/opencode) (`npm install -g opencode`)
- 3. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
- 4. **API key** — Generate one from Profile API Keys → Generate API Key
+ 2. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
+ 3. **API key** — Generate one from Profile → API Keys → Generate API Key
+ 4. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
+ 5. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
 
- > 💡 **Tip:** Without OpenCode installed, you can still use the tool to benchmark models. OpenCode is only needed for the auto-launch feature.
+ > 💡 **Tip:** Without OpenCode/OpenClaw installed, you can still benchmark models and get latency data.
 
  ---
 
@@ -81,18 +87,56 @@ bunx free-coding-models YOUR_API_KEY
  ## 🚀 Usage
 
  ```bash
- # Just run it — will prompt for API key if not set
+ # Just run it — shows a startup menu to pick OpenCode or OpenClaw, prompts for API key if not set
  free-coding-models
+
+ # Explicitly target OpenCode (current default behavior — TUI + Enter launches OpenCode)
+ free-coding-models --opencode
+
+ # Explicitly target OpenClaw (TUI + Enter sets model as default in OpenClaw)
+ free-coding-models --openclaw
+
+ # Show only top-tier models (A+, S, S+)
+ free-coding-models --best
+
+ # Analyze for 10 seconds and output the most reliable model
+ free-coding-models --fiable
+
+ # Filter models by tier letter
+ free-coding-models --tier S   # S+ and S only
+ free-coding-models --tier A   # A+, A, A- only
+ free-coding-models --tier B   # B+, B only
+ free-coding-models --tier C   # C only
+
+ # Combine flags freely
+ free-coding-models --openclaw --tier S
+ free-coding-models --opencode --best
+ ```
+
+ ### Startup mode menu
+
+ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get an interactive startup menu:
+
+ ```
+ ⚡ Free Coding Models — Choose your tool
+
+ ❯ 💻 OpenCode
+     Press Enter on a model → launch OpenCode with it as default
+
+   🦞 OpenClaw
+     Press Enter on a model → set it as default in OpenClaw config
+
+ ↑↓ Navigate • Enter Select • Ctrl+C Exit
  ```
 
+ Use `↑↓` arrows to select, `Enter` to confirm. The TUI then launches with your chosen mode shown in the header badge.
+
  **How it works:**
  1. **Ping phase** — All 44 models are pinged in parallel
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
- 4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to launch OpenCode
- 5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode:
-    - ✅ If configured → Sets model as default and launches OpenCode
-    - ⚠️ If missing → Shows installation instructions and launches OpenCode
+ 4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
+ 5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
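The Avg and Up% bookkeeping described in steps 2–3 can be sketched as follows. These are hypothetical helpers mirroring the `getAvg`/`getUptime` functions named in the source file's header; the exact per-model data shape is an assumption:

```javascript
// Sketch (not the package's actual code) of the Avg / Up% columns.
// Assumed shape: each result keeps a `pings` array of { code, ms } entries,
// where code '200' marks a successful ping.
const successful = (r) => r.pings.filter(p => p.code === '200')

// Avg: mean latency over ALL successful pings since start
const getAvg = (r) => {
  const ok = successful(r)
  if (ok.length === 0) return null
  return Math.round(ok.reduce((sum, p) => sum + p.ms, 0) / ok.length)
}

// Up%: successful pings / total pings, as a percentage
const getUptime = (r) => {
  if (r.pings.length === 0) return 0
  return Math.round((successful(r).length / r.pings.length) * 100)
}
```

Because the average is taken over every successful ping since start (not a sliding window), a single slow ping matters less and less as the session runs.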
 
  Setup wizard:
 
@@ -155,13 +199,24 @@ free-coding-models
  - **A-/B+** — Solid performers, good for targeted programming tasks
  - **B/C** — Lightweight or older models, good for code completion on constrained infra
 
+ ### Filtering by tier
+
+ Use `--tier` to focus on a specific capability band:
+
+ ```bash
+ free-coding-models --tier S   # Only S+ and S (frontier models)
+ free-coding-models --tier A   # Only A+, A, A- (solid performers)
+ free-coding-models --tier B   # Only B+, B (lightweight options)
+ free-coding-models --tier C   # Only C (edge/minimal models)
+ ```
+
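The tier bands above suggest a small filter helper. A hypothetical sketch — the band mapping comes from the README; the `filterByTier` name matches the function listed in the source file's header, but this body is an assumption about the implementation:

```javascript
// Sketch of tier-letter filtering (not the package's actual code).
// Band mapping from the README: S→S+/S, A→A+/A/A-, B→B+/B, C→C.
const TIER_BANDS = {
  S: ['S+', 'S'],
  A: ['A+', 'A', 'A-'],
  B: ['B+', 'B'],
  C: ['C'],
}

function filterByTier(models, letter) {
  const band = TIER_BANDS[letter?.toUpperCase()]
  if (!band) return models // unknown letter → leave the list unfiltered
  return models.filter(m => band.includes(m.tier))
}
```

For example, `filterByTier(models, 'A')` would keep only entries whose `tier` is `A+`, `A`, or `A-`.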
  ---
 
- ## 🔌 Use with OpenCode
+ ## 🔌 OpenCode Integration
 
  **The easiest way** — let `free-coding-models` do everything:
 
- 1. **Run**: `free-coding-models`
+ 1. **Run**: `free-coding-models --opencode` (or choose OpenCode from the startup menu)
  2. **Wait** for models to be pinged (green ✅ status)
  3. **Navigate** with ↑↓ arrows to your preferred model
  4. **Press Enter** — tool automatically:
@@ -169,23 +224,7 @@ free-coding-models
     - Sets your selected model as default in `~/.config/opencode/opencode.json`
     - Launches OpenCode with the model ready to use
 
- That's it! No manual config needed.
-
- ### Manual Setup (Optional)
-
- If you prefer to configure OpenCode yourself:
-
- #### Prerequisites
-
- 1. **OpenCode installed**: `npm install -g opencode` (or equivalent)
- 2. **NVIDIA NIM account**: Get a free account at [build.nvidia.com](https://build.nvidia.com)
- 3. **API key generated**: Go to Profile → API Keys → Generate API Key
-
- #### 1. Find your model
-
- Run `free-coding-models` to see which models are available and fast. The "Latest" column shows real-time latency, "Avg" shows rolling average, and "Up%" shows uptime percentage (reliability over time).
-
- #### 2. Configure OpenCode
+ ### Manual OpenCode Setup (Optional)
 
  Create or edit `~/.config/opencode/opencode.json`:
 
@@ -205,53 +244,86 @@ Create or edit `~/.config/opencode/opencode.json`:
  }
  ```
 
- #### 3. Set environment variable
+ Then set the environment variable:
 
  ```bash
  export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
  # Add to ~/.bashrc or ~/.zshrc for persistence
  ```
 
- #### 4. Use it
-
  Run `/models` in OpenCode and select **NVIDIA NIM** provider and your chosen model.
 
  > ⚠️ **Note:** Free models have usage limits based on NVIDIA's tier — check [build.nvidia.com](https://build.nvidia.com) for quotas.
 
- ### Automatic Installation
+ ### Automatic Installation Fallback
 
- The tool includes a **smart fallback mechanism**:
+ If NVIDIA NIM is not yet configured in OpenCode, the tool:
+ - Shows installation instructions in your terminal
+ - Creates a `prompt` file in `$HOME/prompt` with the exact configuration
+ - Launches OpenCode, which will detect and display the prompt automatically
 
- 1. **Primary**: Try to launch OpenCode with the selected model
- 2. **Fallback**: If NVIDIA NIM is not detected in `~/.config/opencode/opencode.json`, the tool:
-    - Shows installation instructions in your terminal
-    - Creates a `prompt` file in `$HOME/prompt` with the exact configuration to add
-    - Launches OpenCode, which will detect and display the prompt automatically
+ ---
 
- This **"prompt" fallback** ensures that even if NVIDIA NIM isn't pre-configured, OpenCode will guide you through installation with the ready-to-use configuration already prepared.
+ ## 🦞 OpenClaw Integration
 
- #### Example prompt file created at `$HOME/prompt`:
+ OpenClaw is an autonomous AI agent daemon. `free-coding-models` can configure it to use NVIDIA NIM models as its default provider — no download or local setup needed, everything runs via the NIM remote API.
 
- ```json
- Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:
+ ### Quick Start
+
+ ```bash
+ free-coding-models --openclaw
+ ```
 
+ Or run without flags and choose **OpenClaw** from the startup menu.
+
+ 1. **Wait** for models to be pinged
+ 2. **Navigate** with ↑↓ arrows to your preferred model
+ 3. **Press Enter** — tool automatically:
+    - Reads `~/.openclaw/openclaw.json`
+    - Adds the `nvidia` provider block (NIM base URL + your API key) if missing
+    - Sets `agents.defaults.model.primary` to `nvidia/<model-id>`
+    - Saves config and prints next steps
+
+ ### What gets written to OpenClaw config
+
+ ```json
  {
-   "provider": {
+   "providers": {
      "nvidia": {
-       "npm": "@ai-sdk/openai-compatible",
-       "name": "NVIDIA NIM",
-       "options": {
-         "baseURL": "https://integrate.api.nvidia.com/v1",
-         "apiKey": "{env:NVIDIA_API_KEY}"
+       "baseUrl": "https://integrate.api.nvidia.com/v1",
+       "apiKey": "nvapi-xxxx-your-key",
+       "api": "openai-completions",
+       "models": [
+         {
+           "id": "deepseek-ai/deepseek-v3.2",
+           "name": "DeepSeek V3.2",
+           "contextWindow": 128000,
+           "maxTokens": 8192
+         }
+       ]
+     }
+   },
+   "agents": {
+     "defaults": {
+       "model": {
+         "primary": "nvidia/deepseek-ai/deepseek-v3.2"
        }
      }
    }
  }
+ ```
 
- Then set env var: export NVIDIA_API_KEY=your_key_here
+ ### After updating OpenClaw config
+
+ Restart OpenClaw or run the CLI command to apply the new model:
+
+ ```bash
+ openclaw restart
+ # or
+ openclaw models set nvidia/deepseek-ai/deepseek-v3.2
  ```
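The config update described in this section can be sketched as a pure function. This is hypothetical code, not the package's actual implementation — the field names mirror the JSON example above, and `withNvidiaDefault` is an invented name:

```javascript
// Sketch (not the package's actual code) of the OpenClaw config mutation:
// add the nvidia provider block if missing, then point the default agent
// model at the selected NIM model. Field names follow the JSON shown above.
function withNvidiaDefault(config, modelId, apiKey) {
  const next = structuredClone(config ?? {})

  next.providers ??= {}
  next.providers.nvidia ??= {
    baseUrl: 'https://integrate.api.nvidia.com/v1',
    apiKey,
    api: 'openai-completions',
    models: [],
  }

  next.agents ??= {}
  next.agents.defaults ??= {}
  next.agents.defaults.model ??= {}
  next.agents.defaults.model.primary = `nvidia/${modelId}`
  return next
}
```

The tool would then serialize the result with `JSON.stringify(next, null, 2)` into `~/.openclaw/openclaw.json`; because `??=` only assigns when the field is absent, an existing `providers.nvidia` block is left untouched.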
 
- OpenCode will automatically detect this file when launched and guide you through the installation.
+ > 💡 **Why use remote NIM models with OpenClaw?** NVIDIA NIM serves models via a fast API — no local GPU required, no VRAM limits, free credits for developers. You get frontier-class coding models (DeepSeek V3, Kimi K2, Qwen3 Coder) without downloading anything.
 
  ---
 
@@ -265,14 +337,12 @@ OpenCode will automatically detect this file when launched and guide you through
  │ 4. Re-ping ALL models every 2 seconds (forever) │
  │ 5. Update rolling averages from ALL successful pings │
  │ 6. User can navigate with ↑↓ and select with Enter │
- │ 7. On Enter: stop monitoring, exit alt screen │
- │ 8. Detect NVIDIA NIM config in OpenCode │
- │ 9. If configured: update default model, launch OpenCode │
- │ 10. If missing: show install prompt, launch OpenCode │
+ │ 7. On Enter (OpenCode): set model, launch OpenCode │
+ │ 8. On Enter (OpenClaw): update ~/.openclaw/openclaw.json │
  └─────────────────────────────────────────────────────────────┘
  ```
 
- **Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can launch OpenCode with your chosen model in one keystroke.
+ **Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can configure your tool of choice with your chosen model in one keystroke.
 
  ---
 
@@ -288,9 +358,23 @@ OpenCode will automatically detect this file when launched and guide you through
  - **Ping interval**: 2 seconds between complete re-pings of all models (adjustable with W/X keys)
  - **Monitor mode**: Interface stays open forever, press Ctrl+C to exit
 
+ **Flags:**
+
+ | Flag | Description |
+ |------|-------------|
+ | *(none)* | Show startup menu to choose OpenCode or OpenClaw |
+ | `--opencode` | OpenCode mode — Enter launches OpenCode with selected model |
+ | `--openclaw` | OpenClaw mode — Enter sets selected model as default in OpenClaw |
+ | `--best` | Show only top-tier models (A+, S, S+) |
+ | `--fiable` | Analyze 10 seconds, output the most reliable model as `provider/model_id` |
+ | `--tier S` | Show only S+ and S tier models |
+ | `--tier A` | Show only A+, A, A- tier models |
+ | `--tier B` | Show only B+, B tier models |
+ | `--tier C` | Show only C tier models |
+
  **Keyboard shortcuts:**
  - **↑↓** — Navigate models
- - **Enter** — Select model and launch OpenCode
+ - **Enter** — Select model (launches OpenCode or sets OpenClaw default, depending on mode)
  - **R/T/O/M/P/A/S/V/U** — Sort by Rank/Tier/Origin/Model/Ping/Avg/Status/Verdict/Uptime
  - **W** — Decrease ping interval (faster pings)
  - **X** — Increase ping interval (slower pings)
@@ -328,5 +412,8 @@ We welcome contributions! Feel free to open issues, submit pull requests, or get
  **Q:** How accurate are the latency numbers?
  **A:** They represent average round-trip times measured during testing; actual performance may vary based on network conditions.
 
+ **Q:** Do I need to download models locally for OpenClaw?
+ **A:** No — `free-coding-models` configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
+
  ## 📧 Support
  For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/QnR8xq9p
package/free-coding-models.js CHANGED
@@ -1,30 +1,33 @@
  #!/usr/bin/env node
  /**
   * @file free-coding-models.js
-  * @description Live terminal availability checker for coding LLM models with OpenCode integration.
+  * @description Live terminal availability checker for coding LLM models with OpenCode & OpenClaw integration.
   *
   * @details
   * This CLI tool discovers and benchmarks language models optimized for coding.
   * It runs in an alternate screen buffer, pings all models in parallel, re-pings successful ones
   * multiple times for reliable latency measurements, and prints a clean final table.
-  * During benchmarking, users can navigate with arrow keys and press Enter to launch OpenCode immediately.
+  * During benchmarking, users can navigate with arrow keys and press Enter to act on the selected model.
   *
   * 🎯 Key features:
   * - Parallel pings across all models with animated real-time updates
-  * - Continuous monitoring with 10-second ping intervals (never stops)
+  * - Continuous monitoring with 2-second ping intervals (never stops)
   * - Rolling averages calculated from ALL successful pings since start
   * - Best-per-tier highlighting with medals (🥇🥈🥉)
   * - Interactive navigation with arrow keys directly in the table
-  * - Instant OpenCode launch on Enter key press (any model, even timeout/down)
-  * - Automatic OpenCode config detection and model setup
+  * - Instant OpenCode OR OpenClaw action on Enter key press
+  * - Startup mode menu (OpenCode vs OpenClaw) when no flag is given
+  * - Automatic config detection and model setup for both tools
   * - Persistent API key storage in ~/.free-coding-models
   * - Multi-source support via sources.js (easily add new providers)
   * - Uptime percentage tracking (successful pings / total pings)
   * - Sortable columns (R/T/O/M/P/A/S/V/U keys)
+  * - Tier filtering via --tier S/A/B/C flags
   *
   * → Functions:
   * - `loadApiKey` / `saveApiKey`: Manage persisted API key in ~/.free-coding-models
   * - `promptApiKey`: Interactive wizard for first-time API key setup
+  * - `promptModeSelection`: Startup menu to choose OpenCode vs OpenClaw
   * - `ping`: Perform HTTP request to NIM endpoint with timeout handling
   * - `renderTable`: Generate ASCII table with colored latency indicators and status emojis
   * - `getAvg`: Calculate average latency from all successful pings
@@ -33,6 +36,9 @@
   * - `sortResults`: Sort models by various columns
   * - `checkNvidiaNimConfig`: Check if NVIDIA NIM provider is configured in OpenCode
   * - `startOpenCode`: Launch OpenCode with selected model (configures if needed)
+  * - `loadOpenClawConfig` / `saveOpenClawConfig`: Manage ~/.openclaw/openclaw.json
+  * - `startOpenClaw`: Set selected model as default in OpenClaw config (remote, no launch)
+  * - `filterByTier`: Filter models by tier letter prefix (S, A, B, C)
   * - `main`: Orchestrates CLI flow, wizard, ping loops, animation, and output
   *
   * 📦 Dependencies:
@@ -45,13 +51,22 @@
   * - API key stored in ~/.free-coding-models
   * - Models loaded from sources.js (extensible for new providers)
   * - OpenCode config: ~/.config/opencode/opencode.json
-  * - Ping timeout: 6s per attempt, max 2 retries (12s total)
-  * - Ping interval: 10 seconds (continuous monitoring mode)
+  * - OpenClaw config: ~/.openclaw/openclaw.json
+  * - Ping timeout: 15s per attempt
+  * - Ping interval: 2 seconds (continuous monitoring mode)
   * - Animation: 12 FPS with braille spinners
-  * - Reliability: Green → Yellow → Orange → Red → Black (degrades with instability)
+  *
+  * 🚀 CLI flags:
+  * - (no flag): Show startup menu → choose OpenCode or OpenClaw
+  * - --opencode: OpenCode mode (launch with selected model)
+  * - --openclaw: OpenClaw mode (set selected model as default in OpenClaw)
+  * - --best: Show only top-tier models (A+, S, S+)
+  * - --fiable: Analyze 10s and output the most reliable model
+  * - --tier S/A/B/C: Filter models by tier letter (S=S+/S, A=A+/A/A-, B=B+/B, C=C)
   *
   * @see {@link https://build.nvidia.com} NVIDIA API key generation
   * @see {@link https://github.com/opencode-ai/opencode} OpenCode repository
+  * @see {@link https://openclaw.ai} OpenClaw documentation
   */
 
  import chalk from 'chalk'
@@ -110,6 +125,78 @@ async function promptApiKey() {
    })
  }
 
+ // ─── Startup mode selection menu ──────────────────────────────────────────────
+ // 📖 Shown at startup when neither --opencode nor --openclaw flag is given.
+ // 📖 Simple arrow-key selector in normal terminal (not alt screen).
+ // 📖 Returns 'opencode' or 'openclaw'.
+ async function promptModeSelection() {
+   const options = [
+     {
+       label: 'OpenCode',
+       icon: '💻',
+       description: 'Press Enter on a model → launch OpenCode with it as default',
+     },
+     {
+       label: 'OpenClaw',
+       icon: '🦞',
+       description: 'Press Enter on a model → set it as default in OpenClaw config',
+     },
+   ]
+
+   return new Promise((resolve) => {
+     let selected = 0
+
+     // 📖 Render the menu to stdout (clear + redraw)
+     const render = () => {
+       process.stdout.write('\x1b[2J\x1b[H') // clear screen + cursor home
+       console.log()
+       console.log(chalk.bold('  ⚡ Free Coding Models') + chalk.dim(' — Choose your tool'))
+       console.log()
+       for (let i = 0; i < options.length; i++) {
+         const isSelected = i === selected
+         const bullet = isSelected ? chalk.bold.cyan('  ❯ ') : chalk.dim('    ')
+         const label = isSelected
+           ? chalk.bold.white(options[i].icon + ' ' + options[i].label)
+           : chalk.dim(options[i].icon + ' ' + options[i].label)
+         const desc = chalk.dim('      ' + options[i].description)
+         console.log(bullet + label)
+         console.log(desc)
+         console.log()
+       }
+       console.log(chalk.dim('  ↑↓ Navigate • Enter Select • Ctrl+C Exit'))
+       console.log()
+     }
+
+     render()
+
+     readline.emitKeypressEvents(process.stdin)
+     if (process.stdin.isTTY) process.stdin.setRawMode(true)
+
+     const onKey = (_str, key) => {
+       if (!key) return
+       if (key.ctrl && key.name === 'c') {
+         if (process.stdin.isTTY) process.stdin.setRawMode(false)
+         process.stdin.removeListener('keypress', onKey)
+         process.exit(0)
+       }
+       if (key.name === 'up' && selected > 0) {
+         selected--
+         render()
+       } else if (key.name === 'down' && selected < options.length - 1) {
+         selected++
+         render()
+       } else if (key.name === 'return') {
+         if (process.stdin.isTTY) process.stdin.setRawMode(false)
+         process.stdin.removeListener('keypress', onKey)
+         process.stdin.pause()
+         resolve(selected === 0 ? 'opencode' : 'openclaw')
+       }
+     }
+
+     process.stdin.on('keypress', onKey)
+   })
+ }
+
  // ─── Alternate screen control ─────────────────────────────────────────────────
  // 📖 \x1b[?1049h = enter alt screen   \x1b[?1049l = leave alt screen
  // 📖 \x1b[?25l = hide cursor   \x1b[?25h = show cursor
@@ -181,7 +268,7 @@ const VERDICT_ORDER = ['Perfect', 'Normal', 'Slow', 'Very Slow', 'Overloaded', '
  const getVerdict = (r) => {
    const avg = getAvg(r)
    const wasUpBefore = r.pings.length > 0 && r.pings.some(p => p.code === '200')
-
+
    // 📖 429 = rate limited = Overloaded
    if (r.httpCode === '429') return 'Overloaded'
    if ((r.status === 'timeout' || r.status === 'down') && wasUpBefore) return 'Unstable'
@@ -207,7 +294,7 @@ const getUptime = (r) => {
  const sortResults = (results, sortColumn, sortDirection) => {
    return [...results].sort((a, b) => {
      let cmp = 0
-
+
      switch (sortColumn) {
        case 'rank':
          cmp = a.idx - b.idx
@@ -245,12 +332,13 @@ const sortResults = (results, sortColumn, sortDirection) => {
          cmp = getUptime(a) - getUptime(b)
          break
      }
-
+
      return sortDirection === 'asc' ? cmp : -cmp
    })
  }
 
- function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now()) {
+ // 📖 renderTable: mode param controls footer hint text (opencode vs openclaw)
+ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode') {
    const up = results.filter(r => r.status === 'up').length
    const down = results.filter(r => r.status === 'down').length
    const timeout = results.filter(r => r.status === 'timeout').length
@@ -267,6 +355,11 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
      ? chalk.dim(`pinging — ${pendingPings} in flight…`)
      : chalk.dim(`next ping ${secondsUntilNext}s`)
 
+   // 📖 Mode badge shown in header so user knows what Enter will do
+   const modeBadge = mode === 'openclaw'
+     ? chalk.bold.rgb(255, 100, 50)(' [🦞 OpenClaw]')
+     : chalk.bold.rgb(0, 200, 255)(' [💻 OpenCode]')
+
    // 📖 Column widths (generous spacing with margins)
    const W_RANK = 6
    const W_TIER = 6
@@ -283,7 +376,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
 
    const lines = [
      '',
-     ` ${chalk.bold('⚡ Free Coding Models')} ` +
+     ` ${chalk.bold('⚡ Free Coding Models')}${modeBadge} ` +
      chalk.greenBright(`✅ ${up}`) + chalk.dim(' up ') +
      chalk.yellow(`⏱ ${timeout}`) + chalk.dim(' timeout ') +
      chalk.red(`❌ ${down}`) + chalk.dim(' down ') +
@@ -295,7 +388,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
    // 📖 NOTE: padEnd on chalk strings counts ANSI codes, breaking alignment
    // 📖 Solution: build plain text first, then colorize
    const dir = sortDirection === 'asc' ? '↑' : '↓'
-
+
    const rankH = 'Rank'
    const tierH = 'Tier'
    const originH = 'Origin'
@@ -305,7 +398,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
    const statusH = sortColumn === 'status' ? dir + ' Status' : 'Status'
    const verdictH = sortColumn === 'verdict' ? dir + ' Verdict' : 'Verdict'
    const uptimeH = sortColumn === 'uptime' ? dir + ' Up%' : 'Up%'
-
+
    // 📖 Now colorize after padding is calculated on plain text
    const rankH_c = chalk.dim(rankH.padEnd(W_RANK))
    const tierH_c = chalk.dim(tierH.padEnd(W_TIER))
@@ -316,13 +409,13 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
    const statusH_c = sortColumn === 'status' ? chalk.bold.cyan(statusH.padEnd(W_STATUS)) : chalk.dim(statusH.padEnd(W_STATUS))
    const verdictH_c = sortColumn === 'verdict' ? chalk.bold.cyan(verdictH.padEnd(W_VERDICT)) : chalk.dim(verdictH.padEnd(W_VERDICT))
    const uptimeH_c = sortColumn === 'uptime' ? chalk.bold.cyan(uptimeH.padStart(W_UPTIME)) : chalk.dim(uptimeH.padStart(W_UPTIME))
-
+
    // 📖 Header with proper spacing
    lines.push('  ' + rankH_c + '  ' + tierH_c + '  ' + originH_c + '  ' + modelH_c + '  ' + pingH_c + '  ' + avgH_c + '  ' + statusH_c + '  ' + verdictH_c + '  ' + uptimeH_c)
-
+
    // 📖 Separator line
    lines.push(
-     '  ' +
+     '  ' +
      chalk.dim('─'.repeat(W_RANK)) + '  ' +
      chalk.dim('─'.repeat(W_TIER)) + '  ' +
      '─'.repeat(W_SOURCE) + '  ' +
@@ -337,9 +430,9 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
    for (let i = 0; i < sorted.length; i++) {
      const r = sorted[i]
      const tierFn = TIER_COLOR[r.tier] ?? (t => chalk.white(t))
-
+
      const isCursor = cursor !== null && i === cursor
-
+
      // 📖 Left-aligned columns - pad plain text first, then colorize
      const num = chalk.dim(String(r.idx).padEnd(W_RANK))
      const tier = tierFn(r.tier.padEnd(W_TIER))
@@ -452,7 +545,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
 
    // 📖 Build row with double space between columns
    const row = '  ' + num + '  ' + tier + '  ' + source + '  ' + name + '  ' + pingCell + '  ' + avgCell + '  ' + status + '  ' + speedCell + '  ' + uptimeCell
-
+
    if (isCursor) {
      lines.push(chalk.bgRgb(139, 0, 139)(row))
    } else {
@@ -462,7 +555,12 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
 
    lines.push('')
    const intervalSec = Math.round(pingInterval / 1000)
-   lines.push(chalk.dim(`  ↑↓ Navigate • Enter Select • R/T/O/M/P/A/S/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • Ctrl+C Exit`))
+
+   // 📖 Footer hints adapt based on active mode
+   const actionHint = mode === 'openclaw'
+     ? chalk.rgb(255, 100, 50)('Enter→SetOpenClaw')
+     : chalk.rgb(0, 200, 255)('Enter→OpenCode')
+   lines.push(chalk.dim(`  ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/T/O/M/P/A/S/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • Ctrl+C Exit`))
    lines.push('')
    return lines.join('\n')
  }
@@ -482,9 +580,9 @@ async function ping(apiKey, modelId) {
482
580
  return { code: String(resp.status), ms: Math.round(performance.now() - t0) }
483
581
  } catch (err) {
484
582
  const isTimeout = err.name === 'AbortError'
485
- return {
486
- code: isTimeout ? '000' : 'ERR',
487
- ms: isTimeout ? 'TIMEOUT' : Math.round(performance.now() - t0)
583
+ return {
584
+ code: isTimeout ? '000' : 'ERR',
585
+ ms: isTimeout ? 'TIMEOUT' : Math.round(performance.now() - t0)
488
586
  }
489
587
  } finally {
490
588
  clearTimeout(timer)
@@ -520,7 +618,7 @@ function checkNvidiaNimConfig() {
520
618
  if (!config.provider) return false
521
619
  // 📖 Check for nvidia/nim provider by key name or display name (case-insensitive)
522
620
  const providerKeys = Object.keys(config.provider)
523
- return providerKeys.some(key =>
621
+ return providerKeys.some(key =>
524
622
  key === 'nvidia' || key === 'nim' ||
525
623
  config.provider[key]?.name?.toLowerCase().includes('nvidia') ||
526
624
  config.provider[key]?.name?.toLowerCase().includes('nim')
@@ -533,38 +631,38 @@ function checkNvidiaNimConfig() {
  // 📖 Model format: { modelId, label, tier }
  async function startOpenCode(model) {
  const hasNim = checkNvidiaNimConfig()
-
+
  if (hasNim) {
  // 📖 NVIDIA NIM already configured - launch with model flag
  console.log(chalk.green(` 🚀 Setting ${chalk.bold(model.label)} as default…`))
  console.log(chalk.dim(` Model: nvidia/${model.modelId}`))
  console.log()
-
+
  const config = loadOpenCodeConfig()
  const backupPath = `${OPENCODE_CONFIG}.backup-${Date.now()}`
-
+
  // 📖 Backup current config
  if (existsSync(OPENCODE_CONFIG)) {
  copyFileSync(OPENCODE_CONFIG, backupPath)
  console.log(chalk.dim(` 💾 Backup: ${backupPath}`))
  }
-
+
  // 📖 Update default model to nvidia/model_id
  config.model = `nvidia/${model.modelId}`
  saveOpenCodeConfig(config)
-
+
  console.log(chalk.green(` ✓ Default model set to: nvidia/${model.modelId}`))
  console.log()
  console.log(chalk.dim(' Starting OpenCode…'))
  console.log()
-
+
  // 📖 Launch OpenCode and wait for it
  const { spawn } = await import('child_process')
  const child = spawn('opencode', [], {
  stdio: 'inherit',
  shell: false
  })
-
+
  // 📖 Wait for OpenCode to exit
  await new Promise((resolve, reject) => {
  child.on('exit', resolve)
@@ -576,7 +674,7 @@ async function startOpenCode(model) {
  console.log()
  console.log(chalk.dim(' Starting OpenCode with installation prompt…'))
  console.log()
-
+
  const installPrompt = `Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:
 
  {
@@ -595,18 +693,18 @@ async function startOpenCode(model) {
  Then set env var: export NVIDIA_API_KEY=your_key_here
 
  After installation, you can use: opencode --model nvidia/${model.modelId}`
-
+
  console.log(chalk.cyan(installPrompt))
  console.log()
  console.log(chalk.dim(' Starting OpenCode…'))
  console.log()
-
+
  const { spawn } = await import('child_process')
  const child = spawn('opencode', [], {
  stdio: 'inherit',
  shell: false
  })
-
+
  // 📖 Wait for OpenCode to exit
  await new Promise((resolve, reject) => {
  child.on('exit', resolve)
@@ -615,14 +713,230 @@ After installation, you can use: opencode --model nvidia/${model.modelId}`
  }
  }
 
- // ─── Main ─────────────────────────────────────────────────────────────────────
+ // ─── OpenClaw integration ──────────────────────────────────────────────────────
+ // 📖 OpenClaw config: ~/.openclaw/openclaw.json (JSON format, may be JSON5 in newer versions)
+ // 📖 To set a model: set agents.defaults.model.primary = "nvidia/model-id"
+ // 📖 Providers section uses baseUrl + apiKey + api: "openai-completions" format
+ // 📖 See: https://docs.openclaw.ai/gateway/configuration
+ const OPENCLAW_CONFIG = join(homedir(), '.openclaw', 'openclaw.json')
+
+ function loadOpenClawConfig() {
+ if (!existsSync(OPENCLAW_CONFIG)) return {}
+ try {
+ // 📖 JSON.parse works for standard JSON; OpenClaw may use JSON5 but base config is valid JSON
+ return JSON.parse(readFileSync(OPENCLAW_CONFIG, 'utf8'))
+ } catch {
+ return {}
+ }
+ }
+
+ function saveOpenClawConfig(config) {
+ const dir = join(homedir(), '.openclaw')
+ if (!existsSync(dir)) {
+ mkdirSync(dir, { recursive: true })
+ }
+ writeFileSync(OPENCLAW_CONFIG, JSON.stringify(config, null, 2))
+ }
+
+ // 📖 startOpenClaw: sets the selected NVIDIA NIM model as default in OpenClaw config.
+ // 📖 Also ensures the nvidia provider block is present with the NIM base URL.
+ // 📖 Does NOT launch OpenClaw — OpenClaw runs as a daemon, so config changes are picked up on restart.
+ async function startOpenClaw(model, apiKey) {
+ console.log(chalk.rgb(255, 100, 50)(` 🦞 Setting ${chalk.bold(model.label)} as OpenClaw default…`))
+ console.log(chalk.dim(` Model: nvidia/${model.modelId}`))
+ console.log()
+
+ const config = loadOpenClawConfig()
+
+ // 📖 Backup existing config before touching it
+ if (existsSync(OPENCLAW_CONFIG)) {
+ const backupPath = `${OPENCLAW_CONFIG}.backup-${Date.now()}`
+ copyFileSync(OPENCLAW_CONFIG, backupPath)
+ console.log(chalk.dim(` 💾 Backup: ${backupPath}`))
+ }
+
+ // 📖 Ensure providers section exists with nvidia NIM block
+ // 📖 Only injects if not already present - we don't overwrite existing provider config
+ if (!config.providers) config.providers = {}
+ if (!config.providers.nvidia) {
+ config.providers.nvidia = {
+ baseUrl: 'https://integrate.api.nvidia.com/v1',
+ // 📖 Store key reference as env var name — avoid hardcoding key in config file
+ apiKey: apiKey || process.env.NVIDIA_API_KEY || 'YOUR_NVIDIA_API_KEY',
+ api: 'openai-completions',
+ models: [],
+ }
+ console.log(chalk.dim(' ➕ Added nvidia provider block to OpenClaw config'))
+ }
+
+ // 📖 Ensure the chosen model is in the nvidia models array
+ const modelsArr = config.providers.nvidia.models
+ const modelEntry = {
+ id: model.modelId,
+ name: model.label,
+ contextWindow: 128000,
+ maxTokens: 8192,
+ }
+ const alreadyListed = modelsArr.some(m => m.id === model.modelId)
+ if (!alreadyListed) {
+ modelsArr.push(modelEntry)
+ console.log(chalk.dim(` ➕ Added ${model.label} to nvidia models list`))
+ }
+
+ // 📖 Set as the default primary model for all agents
+ if (!config.agents) config.agents = {}
+ if (!config.agents.defaults) config.agents.defaults = {}
+ if (!config.agents.defaults.model) config.agents.defaults.model = {}
+ config.agents.defaults.model.primary = `nvidia/${model.modelId}`
+
+ saveOpenClawConfig(config)
+
+ console.log(chalk.rgb(255, 140, 0)(` ✓ Default model set to: nvidia/${model.modelId}`))
+ console.log()
+ console.log(chalk.dim(' 📄 Config updated: ' + OPENCLAW_CONFIG))
+ console.log()
+ console.log(chalk.dim(' 💡 Restart OpenClaw for changes to take effect:'))
+ console.log(chalk.dim(' openclaw restart') + chalk.dim(' or ') + chalk.dim('openclaw models set nvidia/' + model.modelId))
+ console.log()
+ }
+
+ // ─── Helper function to find best model after analysis ────────────────────────
+ function findBestModel(results) {
+ // 📖 Sort by avg ping (fastest first), then by uptime percentage (most reliable)
+ const sorted = [...results].sort((a, b) => {
+ const avgA = getAvg(a)
+ const avgB = getAvg(b)
+ const uptimeA = getUptime(a)
+ const uptimeB = getUptime(b)
+
+ // 📖 Priority 1: Models that are up (status === 'up')
+ if (a.status === 'up' && b.status !== 'up') return -1
+ if (a.status !== 'up' && b.status === 'up') return 1
+
+ // 📖 Priority 2: Fastest average ping
+ if (avgA !== avgB) return avgA - avgB
+
+ // 📖 Priority 3: Highest uptime percentage
+ return uptimeB - uptimeA
+ })
+
+ return sorted.length > 0 ? sorted[0] : null
+ }
+
+ // ─── Function to run in fiable mode (10-second analysis then output best model) ──
+ async function runFiableMode(apiKey) {
+ console.log(chalk.cyan(' ⚡ Analyzing models for reliability (10 seconds)...'))
+ console.log()
+
+ let results = MODELS.map(([modelId, label, tier], i) => ({
+ idx: i + 1, modelId, label, tier,
+ status: 'pending',
+ pings: [],
+ httpCode: null,
+ }))
+
+ const startTime = Date.now()
+ const analysisDuration = 10000 // 10 seconds
+
+ // 📖 Run initial pings
+ const pingPromises = results.map(r => ping(apiKey, r.modelId).then(({ code, ms }) => {
+ r.pings.push({ ms, code })
+ if (code === '200') {
+ r.status = 'up'
+ } else if (code === '000') {
+ r.status = 'timeout'
+ } else {
+ r.status = 'down'
+ r.httpCode = code
+ }
+ }))
+
+ await Promise.allSettled(pingPromises)
+
+ // 📖 Continue pinging for the remaining time
+ const remainingTime = Math.max(0, analysisDuration - (Date.now() - startTime))
+ if (remainingTime > 0) {
+ await new Promise(resolve => setTimeout(resolve, remainingTime))
+ }
+
+ // 📖 Find best model
+ const best = findBestModel(results)
+
+ if (!best) {
+ console.log(chalk.red(' ✖ No reliable model found'))
+ process.exit(1)
+ }
+
+ // 📖 Output in format: provider/name
+ const provider = 'nvidia' // Always NVIDIA NIM for now
+ console.log(chalk.green(` ✓ Most reliable model:`))
+ console.log(chalk.bold(` ${provider}/${best.modelId}`))
+ console.log()
+ console.log(chalk.dim(` 📊 Stats:`))
+ console.log(chalk.dim(` Avg ping: ${getAvg(best)}ms`))
+ console.log(chalk.dim(` Uptime: ${getUptime(best)}%`))
+ console.log(chalk.dim(` Status: ${best.status === 'up' ? '✅ UP' : '❌ DOWN'}`))
+
+ process.exit(0)
+ }
+
+ // ─── Tier filter helper ────────────────────────────────────────────────────────
+ // 📖 Maps a single tier letter (S, A, B, C) to the full set of matching tier strings.
+ // 📖 --tier S → includes S+ and S
+ // 📖 --tier A → includes A+, A, A-
+ // 📖 --tier B → includes B+, B
+ // 📖 --tier C → includes C only
+ const TIER_LETTER_MAP = {
+ 'S': ['S+', 'S'],
+ 'A': ['A+', 'A', 'A-'],
+ 'B': ['B+', 'B'],
+ 'C': ['C'],
+ }
+
+ function filterByTier(results, tierLetter) {
+ const letter = tierLetter.toUpperCase()
+ const allowed = TIER_LETTER_MAP[letter]
+ if (!allowed) {
+ console.error(chalk.red(` ✖ Unknown tier "${tierLetter}". Valid tiers: S, A, B, C`))
+ process.exit(1)
+ }
+ return results.filter(r => allowed.includes(r.tier))
+ }
 
  async function main() {
+ // 📖 Parse CLI arguments properly
+ const args = process.argv.slice(2)
+
+ // 📖 Extract API key (first non-flag argument) and flags
+ let apiKey = null
+ const flags = []
+
+ for (const arg of args) {
+ if (arg.startsWith('--')) {
+ flags.push(arg.toLowerCase())
+ } else if (!apiKey) {
+ apiKey = arg
+ }
+ }
+
  // 📖 Priority: CLI arg > env var > saved config > wizard
- let apiKey = process.argv[2] || process.env.NVIDIA_API_KEY || loadApiKey()
-
- // 📖 Check for BEST flag - only show top tiers (A+, S, S+)
- const bestMode = process.argv.includes('--BEST') || process.argv.includes('--best')
+ if (!apiKey) {
+ apiKey = process.env.NVIDIA_API_KEY || loadApiKey()
+ }
+
+ // 📖 Check for CLI flags
+ const bestMode = flags.includes('--best')
+ const fiableMode = flags.includes('--fiable')
+ const openCodeMode = flags.includes('--opencode')
+ const openClawMode = flags.includes('--openclaw')
+
+ // 📖 Parse --tier X flag (e.g. --tier S, --tier A)
+ // 📖 Find "--tier" in flags array, then get the next raw arg as the tier value
+ let tierFilter = null
+ const tierIdx = args.findIndex(a => a.toLowerCase() === '--tier')
+ if (tierIdx !== -1 && args[tierIdx + 1] && !args[tierIdx + 1].startsWith('--')) {
+ tierFilter = args[tierIdx + 1].toUpperCase()
+ }
 
  if (!apiKey) {
  apiKey = await promptApiKey()
@@ -635,6 +949,25 @@ async function main() {
  }
  }
 
+ // 📖 Handle fiable mode first (it exits after analysis)
+ if (fiableMode) {
+ await runFiableMode(apiKey)
+ }
+
+ // 📖 Determine active mode:
+ // --opencode → opencode
+ // --openclaw → openclaw
+ // neither → show interactive startup menu
+ let mode
+ if (openClawMode) {
+ mode = 'openclaw'
+ } else if (openCodeMode) {
+ mode = 'opencode'
+ } else {
+ // 📖 No mode flag given — ask user with the startup menu
+ mode = await promptModeSelection()
+ }
+
  // 📖 Filter models to only show top tiers if BEST mode is active
  let results = MODELS.map(([modelId, label, tier], i) => ({
  idx: i + 1, modelId, label, tier,
@@ -642,25 +975,32 @@ async function main() {
  pings: [], // 📖 All ping results (ms or 'TIMEOUT')
  httpCode: null,
  }))
-
+
  if (bestMode) {
  results = results.filter(r => r.tier === 'S+' || r.tier === 'S' || r.tier === 'A+')
  }
 
+ // 📖 Apply tier letter filter if --tier X was given
+ if (tierFilter) {
+ results = filterByTier(results, tierFilter)
+ }
+
  // 📖 Add interactive selection state - cursor index and user's choice
  // 📖 sortColumn: 'rank'|'tier'|'origin'|'model'|'ping'|'avg'|'status'|'verdict'|'uptime'
  // 📖 sortDirection: 'asc' (default) or 'desc'
- // 📖 pingInterval: current interval in ms (default 5000, adjustable with W/X keys)
- const state = {
- results,
- pendingPings: 0,
- frame: 0,
- cursor: 0,
+ // 📖 pingInterval: current interval in ms (default 2000, adjustable with W/X keys)
+ const state = {
+ results,
+ pendingPings: 0,
+ frame: 0,
+ cursor: 0,
  selectedModel: null,
  sortColumn: 'avg',
  sortDirection: 'asc',
- pingInterval: PING_INTERVAL, // 📖 Track current interval for C/V keys
- lastPingTime: Date.now() // 📖 Track when last ping cycle started
+ pingInterval: PING_INTERVAL, // 📖 Track current interval for W/X keys
+ lastPingTime: Date.now(), // 📖 Track when last ping cycle started
+ fiableMode, // 📖 Pass fiable mode to state
+ mode, // 📖 'opencode' or 'openclaw' — controls Enter action
  }
 
  // 📖 Enter alternate screen — animation runs here, zero scrollback pollution
@@ -680,18 +1020,18 @@ async function main() {
  // 📖 Use readline with keypress event for arrow key handling
  process.stdin.setEncoding('utf8')
  process.stdin.resume()
-
+
  let userSelected = null
-
+
  const onKeyPress = async (str, key) => {
  if (!key) return
-
- // 📖 Sorting keys: R=rank, T=tier, O=origin, M=model, P=ping, A=avg, S=status, V=verdict, L=reliability
+
+ // 📖 Sorting keys: R=rank, T=tier, O=origin, M=model, P=ping, A=avg, S=status, V=verdict, U=uptime
  const sortKeys = {
  'r': 'rank', 't': 'tier', 'o': 'origin', 'm': 'model',
  'p': 'ping', 'a': 'avg', 's': 'status', 'v': 'verdict', 'u': 'uptime'
  }
-
+
  if (sortKeys[key.name]) {
  const col = sortKeys[key.name]
  // 📖 Toggle direction if same column, otherwise reset to asc
@@ -703,98 +1043,101 @@ async function main() {
  }
  return
  }
-
+
  // 📖 Interval adjustment keys: W=decrease (faster), X=increase (slower)
  // 📖 Minimum 1s, maximum 60s
  if (key.name === 'w') {
  state.pingInterval = Math.max(1000, state.pingInterval - 1000)
  return
  }
-
+
  if (key.name === 'x') {
  state.pingInterval = Math.min(60000, state.pingInterval + 1000)
  return
  }
-
+
  if (key.name === 'up') {
  if (state.cursor > 0) {
  state.cursor--
  }
  return
  }
-
+
  if (key.name === 'down') {
  if (state.cursor < results.length - 1) {
  state.cursor++
  }
  return
  }
-
+
  if (key.name === 'c' && key.ctrl) { // Ctrl+C
  exit(0)
  return
  }
-
+
  if (key.name === 'return') { // Enter
  // 📖 Use the same sorting as the table display
  const sorted = sortResults(results, state.sortColumn, state.sortDirection)
  const selected = sorted[state.cursor]
  // 📖 Allow selecting ANY model (even timeout/down) - user knows what they're doing
- if (true) {
- userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier }
- // 📖 Stop everything and launch OpenCode immediately
- clearInterval(ticker)
- clearTimeout(state.pingIntervalObj)
- readline.emitKeypressEvents(process.stdin)
- process.stdin.setRawMode(true)
- process.stdin.pause()
- process.stdin.removeListener('keypress', onKeyPress)
- process.stdout.write(ALT_LEAVE)
-
- // 📖 Show selection with status
- if (selected.status === 'timeout') {
- console.log(chalk.yellow(` ⚠ Selected: ${selected.label} (currently timing out)`))
- } else if (selected.status === 'down') {
- console.log(chalk.red(` ⚠ Selected: ${selected.label} (currently down)`))
- } else {
- console.log(chalk.cyan(` ✓ Selected: ${selected.label}`))
- }
- console.log()
-
- // 📖 Wait for OpenCode to finish before exiting
+ userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier }
+
+ // 📖 Stop everything and act on selection immediately
+ clearInterval(ticker)
+ clearTimeout(state.pingIntervalObj)
+ readline.emitKeypressEvents(process.stdin)
+ process.stdin.setRawMode(true)
+ process.stdin.pause()
+ process.stdin.removeListener('keypress', onKeyPress)
+ process.stdout.write(ALT_LEAVE)
+
+ // 📖 Show selection with status
+ if (selected.status === 'timeout') {
+ console.log(chalk.yellow(` ⚠ Selected: ${selected.label} (currently timing out)`))
+ } else if (selected.status === 'down') {
+ console.log(chalk.red(` ⚠ Selected: ${selected.label} (currently down)`))
+ } else {
+ console.log(chalk.cyan(` ✓ Selected: ${selected.label}`))
+ }
+ console.log()
+
+ // 📖 Dispatch to the correct integration based on active mode
+ if (state.mode === 'openclaw') {
+ await startOpenClaw(userSelected, apiKey)
+ } else {
  await startOpenCode(userSelected)
- process.exit(0)
  }
+ process.exit(0)
  }
  }
-
+
  // 📖 Enable keypress events on stdin
  readline.emitKeypressEvents(process.stdin)
  if (process.stdin.isTTY) {
  process.stdin.setRawMode(true)
  }
-
+
  process.stdin.on('keypress', onKeyPress)
 
  // 📖 Animation loop: clear alt screen + redraw table at FPS with cursor
  const ticker = setInterval(() => {
  state.frame++
- process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime))
+ process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode))
  }, Math.round(1000 / FPS))
 
- process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime))
+ process.stdout.write(ALT_CLEAR + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode))
+
+ // ── Continuous ping loop — ping all models every N seconds forever ──────────
 
- // ── Continuous ping loop — ping all models every 10 seconds forever ──────────
-
  // 📖 Single ping function that updates result
  const pingModel = async (r) => {
  const { code, ms } = await ping(apiKey, r.modelId)
-
+
  // 📖 Store ping result as object with ms and code
  // 📖 ms = actual response time (even for errors like 429)
  // 📖 code = HTTP status code ('200', '429', '500', '000' for timeout)
  r.pings.push({ ms, code })
-
+
  // 📖 Update status based on latest ping
  if (code === '200') {
  r.status = 'up'
@@ -808,23 +1151,23 @@ async function main() {
 
  // 📖 Initial ping of all models
  const initialPing = Promise.all(results.map(r => pingModel(r)))
-
+
  // 📖 Continuous ping loop with dynamic interval (adjustable with W/X keys)
  const schedulePing = () => {
  state.pingIntervalObj = setTimeout(async () => {
  state.lastPingTime = Date.now()
-
+
  results.forEach(r => {
  pingModel(r).catch(() => {
  // Individual ping failures don't crash the loop
  })
  })
-
+
  // 📖 Schedule next ping with current interval
  schedulePing()
  }, state.pingInterval)
  }
-
+
  // 📖 Start the ping loop
  state.pingIntervalObj = null
  schedulePing()
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "free-coding-models",
- "version": "0.1.1",
+ "version": "0.1.3",
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",