free-coding-models 0.1.43 → 0.1.45

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,7 +2,8 @@
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
- <img src="https://img.shields.io/badge/nvidia%20nim%20models-44-76b900?logo=nvidia" alt="models count">
+ <img src="https://img.shields.io/badge/models-53-76b900?logo=nvidia" alt="models count">
+ <img src="https://img.shields.io/badge/providers-3-blue" alt="providers count">
  </p>

  <h1 align="center">free-coding-models</h1>
@@ -14,7 +15,7 @@
  <p align="center">

  ```
- 1. Create a free API key on NVIDIA https://build.nvidia.com
+ 1. Create a free API key (NVIDIA, Groq, or Cerebras)
  2. npm i -g free-coding-models
  3. free-coding-models
  ```
@@ -23,7 +24,7 @@

  <p align="center">
  <strong>Find the fastest coding LLM models in seconds</strong><br>
- <sub>Ping free NVIDIA NIM models in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
+ <sub>Ping free models from NVIDIA NIM, Groq, and Cerebras in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
  </p>

  <p align="center">
@@ -46,7 +47,9 @@
  ## ✨ Features

  - **🎯 Coding-focused** — Only LLM models optimized for code generation, not chat or vision
- - **🚀 Parallel pings** — All 44 models tested simultaneously via native `fetch`
+ - **🌐 Multi-provider** — 53 models from NVIDIA NIM, Groq, and Cerebras — all free to use
+ - **⚙️ Settings screen** — Press `P` to manage provider API keys, enable/disable providers, and test keys live
+ - **🚀 Parallel pings** — All models tested simultaneously via native `fetch`
  - **📊 Real-time animation** — Watch latency appear live in alternate screen buffer
  - **🏆 Smart ranking** — Top 3 fastest models highlighted with medals 🥇🥈🥉
  - **⏱ Continuous monitoring** — Pings all models every 2 seconds forever, never stops
@@ -59,8 +62,7 @@
  - **🦞 OpenClaw integration** — Sets selected model as default provider in `~/.openclaw/openclaw.json`
  - **🎨 Clean output** — Zero scrollback pollution, interface stays open until Ctrl+C
  - **📶 Status indicators** — UP ✅ · Timeout ⏳ · Overloaded 🔥 · Not Found 🚫
- - **🔧 Multi-source support** — Extensible architecture via `sources.js` (add new providers easily)
- - **🏷 Tier filtering** — Filter models by tier letter (S, A, B, C) with `--tier` flag or dynamically with E/D keys
+ - **🏷 Tier filtering** — Filter models by tier letter (S, A, B, C) with `--tier` flag or dynamically with `T` key

  ---

@@ -69,12 +71,14 @@
  Before using `free-coding-models`, make sure you have:

  1. **Node.js 18+** — Required for native `fetch` API
- 2. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
- 3. **API key** — Generate one from Profile → API Keys → Generate API Key
- 4. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
- 5. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
+ 2. **At least one free API key** — pick any or all of:
+    - **NVIDIA NIM** — [build.nvidia.com](https://build.nvidia.com): Profile → API Keys → Generate
+    - **Groq** — [console.groq.com/keys](https://console.groq.com/keys): Create API Key
+    - **Cerebras** — [cloud.cerebras.ai](https://cloud.cerebras.ai): API Keys → Create
+ 3. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
+ 4. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration

- > 💡 **Tip:** Without OpenCode/OpenClaw installed, you can still benchmark models and get latency data.
+ > 💡 **Tip:** You don't need all three providers. One key is enough to get started. Add more later via the Settings screen (`P` key).

  ---

@@ -152,54 +156,98 @@ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get
  Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.

  **How it works:**
- 1. **Ping phase** — All 44 models are pinged in parallel
+ 1. **Ping phase** — All enabled models are pinged in parallel (up to 53 across 3 providers)
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
  4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
  5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw

- Setup wizard:
+ Setup wizard (first run — walks through all 3 providers):

  ```
- 🔑 Setup your NVIDIA API key
- 📝 Get a free key at: https://build.nvidia.com
- 💾 Key will be saved to ~/.free-coding-models
+ 🔑 First-time setup API keys
+ Enter keys for any provider you want to use. Press Enter to skip one.
+
+ ● NVIDIA NIM
+ Free key at: https://build.nvidia.com
+ Profile → API Keys → Generate
+ Enter key (or Enter to skip): nvapi-xxxx
+
+ ● Groq
+ Free key at: https://console.groq.com/keys
+ API Keys → Create API Key
+ Enter key (or Enter to skip): gsk_xxxx
+
+ ● Cerebras
+ Free key at: https://cloud.cerebras.ai
+ API Keys → Create
+ Enter key (or Enter to skip):
+
+ ✅ 2 key(s) saved to ~/.free-coding-models.json
+ You can add or change keys anytime with the P key in the TUI.
+ ```
+
+ You don't need all three — skip any provider by pressing Enter. At least one key is required.

- Enter your API key: nvapi-xxxx-xxxx
+ ### Adding or changing keys later
+
+ Press **`P`** to open the Settings screen at any time:

- ✅ API key saved to ~/.free-coding-models
  ```
+ ⚙ Settings

- ### Other ways to provide the key
+ Providers

- ```bash
- # Pass directly
- free-coding-models nvapi-xxxx-your-key-here
+ ❯ [ ✅ ] NIM        nvapi-••••••••••••3f9a         [Test ✅]
+   [    ] Groq       (no key set)                   [Test —]
+   [ ✅ ] Cerebras   (no key set)                   [Test —]

- # Use environment variable
- NVIDIA_API_KEY=nvapi-xxx free-coding-models
+ ↑↓ Navigate • Enter Edit key • Space Toggle enabled • T Test key • Esc Close
+ ```

- # Or add to your shell profile
- export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
- free-coding-models
+ - **↑↓** navigate providers
+ - **Enter** — enter inline key edit mode (type your key, Enter to save, Esc to cancel)
+ - **Space** — toggle provider enabled/disabled
+ - **T** — fire a real test ping to verify the key works (shows ✅/❌)
+ - **Esc** — close settings and reload models list
+
+ Keys are saved to `~/.free-coding-models.json` (permissions `0600`).
+
+ ### Environment variable overrides
+
+ Env vars always take priority over the config file:
+
+ ```bash
+ NVIDIA_API_KEY=nvapi-xxx free-coding-models
+ GROQ_API_KEY=gsk_xxx free-coding-models
+ CEREBRAS_API_KEY=csk_xxx free-coding-models
  ```

- ### Get your free API key
+ ### Get your free API keys
+
+ **NVIDIA NIM** (44 models, S+ → C tier):
+ 1. Sign up at [build.nvidia.com](https://build.nvidia.com)
+ 2. Go to Profile → API Keys → Generate API Key
+ 3. Name it (e.g. "free-coding-models"), set expiry to "Never"
+ 4. Copy — shown only once!
+
+ **Groq** (6 models, fast inference):
+ 1. Sign up at [console.groq.com](https://console.groq.com)
+ 2. Go to API Keys → Create API Key

- 1. **Create NVIDIA Account** Sign up at [build.nvidia.com](https://build.nvidia.com) with your email
- 2. **Verify** Confirm email, set privacy options, create NGC account, verify phone
- 3. **Generate Key** — Go to Profile → API Keys → Generate API Key
- 4. **Name it** — e.g., "free-coding-models" or "OpenCode-NIM"
- 5. **Set expiration** — Choose "Never" for convenience
- 6. **Copy securely** — Key is shown only once!
+ **Cerebras** (3 models, ultra-fast silicon):
+ 1. Sign up at [cloud.cerebras.ai](https://cloud.cerebras.ai)
+ 2. Go to API Keys → Create

- > 💡 **Free credits** — NVIDIA offers free credits for NIM models via their API Catalog for developers.
+ > 💡 **Free credits** — All three providers offer free tiers for developers.

  ---

  ## 🤖 Coding Models

- **44 coding models** across 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
+ **53 coding models** across 3 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
+
+ ### NVIDIA NIM (44 models)

  | Tier | SWE-bench | Models |
  |------|-----------|--------|
@@ -212,6 +260,23 @@ free-coding-models
  | **B** | 20–30% | R1 Distill 8B (28.2%), R1 Distill 7B (22.6%) |
  | **C** | <20% | Gemma 2 9B (18.0%), Phi 4 Mini (14.0%), Phi 3.5 Mini (12.0%) |

+ ### Groq (6 models)
+
+ | Tier | SWE-bench | Models |
+ |------|-----------|--------|
+ | **S** | 60–70% | Kimi K2 Instruct (65.8%), Llama 4 Maverick (62.0%) |
+ | **A+** | 50–60% | QwQ 32B (50.0%) |
+ | **A** | 40–50% | Llama 4 Scout (44.0%), R1 Distill 70B (43.9%) |
+ | **A-** | 35–40% | Llama 3.3 70B (39.5%) |
+
+ ### Cerebras (3 models)
+
+ | Tier | SWE-bench | Models |
+ |------|-----------|--------|
+ | **A+** | 50–60% | Qwen3 32B (50.0%) |
+ | **A** | 40–50% | Llama 4 Scout (44.0%) |
+ | **A-** | 35–40% | Llama 3.3 70B (39.5%) |
+
  ### Tier scale

  - **S+/S** — Elite frontier coders (≥60% SWE-bench), best for complex real-world tasks and refactors
@@ -421,10 +486,30 @@ This script:

  ## 📋 API Reference

- | Parameter | Description |
- |-----------|-------------|
- | `NVIDIA_API_KEY` | Environment variable for API key |
- | `<api-key>` | First positional argument |
+ **Environment variables (override config file):**
+
+ | Variable | Provider |
+ |----------|----------|
+ | `NVIDIA_API_KEY` | NVIDIA NIM |
+ | `GROQ_API_KEY` | Groq |
+ | `CEREBRAS_API_KEY` | Cerebras |
+
+ **Config file:** `~/.free-coding-models.json` (created automatically, permissions `0600`)
+
+ ```json
+ {
+   "apiKeys": {
+     "nvidia": "nvapi-xxx",
+     "groq": "gsk_xxx",
+     "cerebras": "csk_xxx"
+   },
+   "providers": {
+     "nvidia": { "enabled": true },
+     "groq": { "enabled": true },
+     "cerebras": { "enabled": true }
+   }
+ }
+ ```

  **Configuration:**
  - **Ping timeout**: 15 seconds per attempt (slow models get more time)
@@ -446,16 +531,24 @@ This script:
  | `--tier B` | Show only B+, B tier models |
  | `--tier C` | Show only C tier models |

- **Keyboard shortcuts:**
+ **Keyboard shortcuts (main TUI):**
  - **↑↓** — Navigate models
  - **Enter** — Select model (launches OpenCode or sets OpenClaw default, depending on mode)
- - **R/T/O/M/P/A/S/V/U** — Sort by Rank/Tier/Origin/Model/Ping/Avg/Status/Verdict/Uptime
+ - **R/Y/O/M/L/A/S/N/H/V/U** — Sort by Rank/Tier/Origin/Model/LatestPing/Avg/SWE/Ctx/Health/Verdict/Uptime
+ - **T** — Cycle tier filter (All → S+ → S → A+ → A → A- → B+ → B → C → All)
+ - **Z** — Cycle mode (OpenCode CLI → OpenCode Desktop → OpenClaw)
+ - **P** — Open Settings (manage API keys, enable/disable providers)
  - **W** — Decrease ping interval (faster pings)
  - **X** — Increase ping interval (slower pings)
- - **E** — Elevate tier filter (show fewer, higher-tier models)
- - **D** — Descend tier filter (show more, lower-tier models)
  - **Ctrl+C** — Exit

+ **Keyboard shortcuts (Settings screen — `P` key):**
+ - **↑↓** — Navigate providers
+ - **Enter** — Edit API key inline (type key, Enter to save, Esc to cancel)
+ - **Space** — Toggle provider enabled/disabled
+ - **T** — Test current provider's API key (fires a live ping)
+ - **Esc** — Close settings and return to main TUI
+

  ---
  ## 🔧 Development
@@ -10,7 +10,7 @@
  * During benchmarking, users can navigate with arrow keys and press Enter to act on the selected model.
  *
  * 🎯 Key features:
- * - Parallel pings across all models with animated real-time updates
+ * - Parallel pings across all models with animated real-time updates (3 providers: NIM, Groq, Cerebras)
  * - Continuous monitoring with 2-second ping intervals (never stops)
  * - Rolling averages calculated from ALL successful pings since start
  * - Best-per-tier highlighting with medals (🥇🥈🥉)
@@ -18,15 +18,16 @@
  * - Instant OpenCode OR OpenClaw action on Enter key press
  * - Startup mode menu (OpenCode CLI vs OpenCode Desktop vs OpenClaw) when no flag is given
  * - Automatic config detection and model setup for both tools
- * - Persistent API key storage in ~/.free-coding-models
- * - Multi-source support via sources.js (easily add new providers)
+ * - JSON config stored in ~/.free-coding-models.json (auto-migrates from old plain-text)
+ * - Multi-provider support via sources.js (NIM, Groq, Cerebras — extensible)
+ * - Settings screen (P key) to manage API keys per provider, enable/disable, test keys
  * - Uptime percentage tracking (successful pings / total pings)
- * - Sortable columns (R/T/O/M/P/A/S/V/U keys)
- * - Tier filtering via --tier S/A/B/C flags
+ * - Sortable columns (R/Y/O/M/L/A/S/N/H/V/U keys)
+ * - Tier filtering via T key (cycles S+→S→A+→A→A-→B+→B→C→All)
  *
  * → Functions:
- * - `loadApiKey` / `saveApiKey`: Manage persisted API key in ~/.free-coding-models
- * - `promptApiKey`: Interactive wizard for first-time API key setup
+ * - `loadConfig` / `saveConfig` / `getApiKey`: Multi-provider JSON config via lib/config.js
+ * - `promptApiKey`: Interactive wizard for first-time API key setup (all 3 providers)
  * - `promptModeSelection`: Startup menu to choose OpenCode vs OpenClaw
  * - `ping`: Perform HTTP request to NIM endpoint with timeout handling
  * - `renderTable`: Generate ASCII table with colored latency indicators and status emojis
@@ -49,8 +50,10 @@
  * - sources.js: Model definitions from all providers
  *
  * ⚙️ Configuration:
- * - API key stored in ~/.free-coding-models
- * - Models loaded from sources.js (extensible for new providers)
+ * - API keys stored per-provider in ~/.free-coding-models.json (0600 perms)
+ * - Old ~/.free-coding-models plain-text auto-migrated as nvidia key on first run
+ * - Env vars override config: NVIDIA_API_KEY, GROQ_API_KEY, CEREBRAS_API_KEY
+ * - Models loaded from sources.js — 53 models across NIM, Groq, Cerebras
  * - OpenCode config: ~/.config/opencode/opencode.json
  * - OpenClaw config: ~/.openclaw/openclaw.json
  * - Ping timeout: 15s per attempt
@@ -76,9 +79,10 @@ import { createRequire } from 'module'
  import { readFileSync, writeFileSync, existsSync, copyFileSync, mkdirSync } from 'fs'
  import { homedir } from 'os'
  import { join, dirname } from 'path'
- import { MODELS } from '../sources.js'
+ import { MODELS, sources } from '../sources.js'
  import { patchOpenClawModelsJson } from '../patch-openclaw-models.js'
  import { getAvg, getVerdict, getUptime, sortResults, filterByTier, findBestModel, parseArgs, TIER_ORDER, VERDICT_ORDER, TIER_LETTER_MAP } from '../lib/utils.js'
+ import { loadConfig, saveConfig, getApiKey, isProviderEnabled } from '../lib/config.js'

  const require = createRequire(import.meta.url)
  const readline = require('readline')
@@ -154,50 +158,82 @@ function runUpdate(latestVersion) {
    process.exit(1)
  }

- // ─── Config path ──────────────────────────────────────────────────────────────
- const CONFIG_PATH = join(homedir(), '.free-coding-models')
-
- function loadApiKey() {
-   try {
-     if (existsSync(CONFIG_PATH)) {
-       return readFileSync(CONFIG_PATH, 'utf8').trim()
-     }
-   } catch {}
-   return null
- }
-
- function saveApiKey(key) {
-   try {
-     writeFileSync(CONFIG_PATH, key, { mode: 0o600 })
-   } catch {}
- }
+ // 📖 Config is now managed via lib/config.js (JSON format ~/.free-coding-models.json)
+ // 📖 loadConfig/saveConfig/getApiKey are imported above

  // ─── First-run wizard ─────────────────────────────────────────────────────────
- async function promptApiKey() {
+ // 📖 Shown when NO provider has a key configured yet.
+ // 📖 Steps through all 3 providers sequentially — each is optional (Enter to skip).
+ // 📖 At least one key must be entered to proceed. Keys saved to ~/.free-coding-models.json.
+ // 📖 Returns the nvidia key (or null) for backward-compat with the rest of main().
+ async function promptApiKey(config) {
    console.log()
-   console.log(chalk.dim(' 🔑 Setup your NVIDIA API key'))
-   console.log(chalk.dim(' 📝 Get a free key at: ') + chalk.cyanBright('https://build.nvidia.com'))
-   console.log(chalk.dim(' 💾 Key will be saved to ~/.free-coding-models'))
+   console.log(chalk.bold(' 🔑 First-time setup API keys'))
+   console.log(chalk.dim(' Enter keys for any provider you want to use. Press Enter to skip one.'))
    console.log()

-   const rl = readline.createInterface({
-     input: process.stdin,
-     output: process.stdout,
-   })
+   // 📖 Provider definitions: label, key field, url for getting the key
+   const providers = [
+     {
+       key: 'nvidia',
+       label: 'NVIDIA NIM',
+       color: chalk.rgb(118, 185, 0),
+       url: 'https://build.nvidia.com',
+       hint: 'Profile → API Keys → Generate',
+       prefix: 'nvapi-',
+     },
+     {
+       key: 'groq',
+       label: 'Groq',
+       color: chalk.rgb(249, 103, 20),
+       url: 'https://console.groq.com/keys',
+       hint: 'API Keys → Create API Key',
+       prefix: 'gsk_',
+     },
+     {
+       key: 'cerebras',
+       label: 'Cerebras',
+       color: chalk.rgb(0, 180, 255),
+       url: 'https://cloud.cerebras.ai',
+       hint: 'API Keys → Create',
+       prefix: 'csk_ / cauth_',
+     },
+   ]

-   return new Promise((resolve) => {
-     rl.question(chalk.bold(' Enter your API key: '), (answer) => {
-       rl.close()
-       const key = answer.trim()
-       if (key) {
-         saveApiKey(key)
-         console.log()
-         console.log(chalk.green(' ✅ API key saved to ~/.free-coding-models'))
-         console.log()
-       }
-       resolve(key || null)
-     })
+   const rl = readline.createInterface({ input: process.stdin, output: process.stdout })
+
+   // 📖 Ask a single question — returns trimmed string or '' for skip
+   const ask = (question) => new Promise((resolve) => {
+     rl.question(question, (answer) => resolve(answer.trim()))
    })
+
+   for (const p of providers) {
+     console.log(` ${p.color('●')} ${chalk.bold(p.label)}`)
+     console.log(chalk.dim(` Free key at: `) + chalk.cyanBright(p.url))
+     console.log(chalk.dim(` ${p.hint}`))
+     const answer = await ask(chalk.dim(` Enter key (or Enter to skip): `))
+     console.log()
+     if (answer) {
+       config.apiKeys[p.key] = answer
+     }
+   }
+
+   rl.close()
+
+   // 📖 Check at least one key was entered
+   const anyKey = Object.values(config.apiKeys).some(v => v)
+   if (!anyKey) {
+     return null
+   }
+
+   saveConfig(config)
+   const savedCount = Object.values(config.apiKeys).filter(v => v).length
+   console.log(chalk.green(` ✅ ${savedCount} key(s) saved to ~/.free-coding-models.json`))
+   console.log(chalk.dim(' You can add or change keys anytime with the ') + chalk.yellow('P') + chalk.dim(' key in the TUI.'))
+   console.log()
+
+   // 📖 Return nvidia key for backward-compat (main() checks it exists before continuing)
+   return config.apiKeys.nvidia || Object.values(config.apiKeys).find(v => v) || null
  }

  // ─── Update notification menu ──────────────────────────────────────────────
@@ -310,7 +346,6 @@ const ALT_HOME = '\x1b[H'
  // 📖 Models are now loaded from sources.js to support multiple providers
  // 📖 This allows easy addition of new model sources beyond NVIDIA NIM

- const NIM_URL = 'https://integrate.api.nvidia.com/v1/chat/completions'
  const PING_TIMEOUT = 15_000 // 📖 15s per attempt before abort - slow models get more time
  const PING_INTERVAL = 2_000 // 📖 Ping all models every 2 seconds in continuous mode
@@ -519,7 +554,9 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
  // 📖 Left-aligned columns - pad plain text first, then colorize
  const num = chalk.dim(String(r.idx).padEnd(W_RANK))
  const tier = tierFn(r.tier.padEnd(W_TIER))
- const source = chalk.green('NIM'.padEnd(W_SOURCE))
+ // 📖 Show provider name from sources map (NIM / Groq / Cerebras)
+ const providerName = sources[r.providerKey]?.name ?? r.providerKey ?? 'NIM'
+ const source = chalk.green(providerName.padEnd(W_SOURCE))
  const name = r.label.slice(0, W_MODEL).padEnd(W_MODEL)
  const sweScore = r.sweScore ?? '—'
  const sweCell = sweScore !== '—' && parseFloat(sweScore) >= 50
@@ -663,7 +700,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
  : mode === 'opencode-desktop'
  ? chalk.rgb(0, 200, 255)('Enter→OpenDesktop')
  : chalk.rgb(0, 200, 255)('Enter→OpenCode')
- lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • T Filter tier • Z Mode • Ctrl+C Exit`))
+ lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • T Filter tier • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • Ctrl+C Exit`))
  lines.push('')
  lines.push(chalk.dim(' Made with ') + '💖 & ☕' + chalk.dim(' by ') + '\x1b]8;;https://github.com/vava-nessa\x1b\\vava-nessa\x1b]8;;\x1b\\' + chalk.dim(' • ') + '🫂 ' + chalk.cyanBright('\x1b]8;;https://discord.gg/WKA3TwYVuZ\x1b\\Join our Discord!\x1b]8;;\x1b\\') + chalk.dim(' • ') + '⭐ ' + '\x1b]8;;https://github.com/vava-nessa/free-coding-models\x1b\\Read the docs on GitHub\x1b]8;;\x1b\\')
  lines.push('')
@@ -679,12 +716,14 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '

  // ─── HTTP ping ────────────────────────────────────────────────────────────────

- async function ping(apiKey, modelId) {
+ // 📖 ping: Send a single chat completion request to measure model availability and latency.
+ // 📖 url param is the provider's endpoint URL — differs per provider (NIM, Groq, Cerebras).
+ async function ping(apiKey, modelId, url) {
    const ctrl = new AbortController()
    const timer = setTimeout(() => ctrl.abort(), PING_TIMEOUT)
    const t0 = performance.now()
    try {
-     const resp = await fetch(NIM_URL, {
+     const resp = await fetch(url, {
        method: 'POST', signal: ctrl.signal,
        headers: { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
        body: JSON.stringify({ model: modelId, messages: [{ role: 'user', content: 'hi' }], max_tokens: 1 }),
@@ -1101,32 +1140,41 @@ async function startOpenClaw(model, apiKey) {
  // 📖 findBestModel is imported from lib/utils.js

  // ─── Function to run in fiable mode (10-second analysis then output best model) ──
- async function runFiableMode(apiKey) {
+ async function runFiableMode(config) {
    console.log(chalk.cyan(' ⚡ Analyzing models for reliability (10 seconds)...'))
    console.log()

-   let results = MODELS.map(([modelId, label, tier, sweScore, ctx], i) => ({
-     idx: i + 1, modelId, label, tier, sweScore, ctx,
-     status: 'pending',
-     pings: [],
-     httpCode: null,
-   }))
+   // 📖 Only include models from enabled providers that have API keys
+   let results = MODELS
+     .filter(([,,,,,providerKey]) => {
+       return isProviderEnabled(config, providerKey) && getApiKey(config, providerKey)
+     })
+     .map(([modelId, label, tier, sweScore, ctx, providerKey], i) => ({
+       idx: i + 1, modelId, label, tier, sweScore, ctx, providerKey,
+       status: 'pending',
+       pings: [],
+       httpCode: null,
+     }))

    const startTime = Date.now()
    const analysisDuration = 10000 // 10 seconds

-   // 📖 Run initial pings
-   const pingPromises = results.map(r => ping(apiKey, r.modelId).then(({ code, ms }) => {
-     r.pings.push({ ms, code })
-     if (code === '200') {
-       r.status = 'up'
-     } else if (code === '000') {
-       r.status = 'timeout'
-     } else {
-       r.status = 'down'
-       r.httpCode = code
-     }
-   }))
+   // 📖 Run initial pings using per-provider API key and URL
+   const pingPromises = results.map(r => {
+     const rApiKey = getApiKey(config, r.providerKey)
+     const url = sources[r.providerKey]?.url
+     return ping(rApiKey, r.modelId, url).then(({ code, ms }) => {
+       r.pings.push({ ms, code })
+       if (code === '200') {
+         r.status = 'up'
+       } else if (code === '000') {
+         r.status = 'timeout'
+       } else {
+         r.status = 'down'
+         r.httpCode = code
+       }
+     })
+   })

    await Promise.allSettled(pingPromises)

@@ -1144,10 +1192,10 @@ async function runFiableMode(apiKey) {
      process.exit(1)
    }

-   // 📖 Output in format: provider/name
-   const provider = 'nvidia' // Always NVIDIA NIM for now
+   // 📖 Output in format: providerName/modelId
+   const providerName = sources[best.providerKey]?.name ?? best.providerKey ?? 'nvidia'
    console.log(chalk.green(` ✓ Most reliable model:`))
-   console.log(chalk.bold(` ${provider}/${best.modelId}`))
+   console.log(chalk.bold(` ${providerName}/${best.modelId}`))
    console.log()
    console.log(chalk.dim(` 📊 Stats:`))
    console.log(chalk.dim(` Avg ping: ${getAvg(best)}ms`))
@@ -1169,20 +1217,26 @@ function filterByTierOrExit(results, tierLetter) {
  }

  async function main() {
-   // 📖 Simple CLI without flags - just API key handling
-   let apiKey = process.env.NVIDIA_API_KEY || loadApiKey()
+   // 📖 Load JSON config (auto-migrates old plain-text ~/.free-coding-models if needed)
+   const config = loadConfig()

-   if (!apiKey) {
-     apiKey = await promptApiKey()
-     if (!apiKey) {
+   // 📖 Check if any provider has a key — if not, run the first-time setup wizard
+   const hasAnyKey = Object.keys(sources).some(pk => !!getApiKey(config, pk))
+
+   if (!hasAnyKey) {
+     const result = await promptApiKey(config)
+     if (!result) {
        console.log()
        console.log(chalk.red(' ✖ No API key provided.'))
-       console.log(chalk.dim(' Run `free-coding-models` again or set NVIDIA_API_KEY env var.'))
+       console.log(chalk.dim(' Run `free-coding-models` again or set NVIDIA_API_KEY / GROQ_API_KEY / CEREBRAS_API_KEY.'))
        console.log()
        process.exit(1)
      }
    }

+   // 📖 Backward-compat: keep apiKey var for startOpenClaw() which still needs it
+   let apiKey = getApiKey(config, 'nvidia')
+
    // 📖 Check for updates in the background
    let latestVersion = null
    try {
@@ -1221,16 +1275,17 @@ async function main() {
    // If action is null (Continue without update) or changelogs, proceed to main app
  }

-   // 📖 Create results array with all models initially visible
-   let results = MODELS.map(([modelId, label, tier, sweScore, ctx], i) => ({
-     idx: i + 1, modelId, label, tier, sweScore, ctx,
-     status: 'pending',
-     pings: [], // 📖 All ping results (ms or 'TIMEOUT')
-     httpCode: null,
-     hidden: false, // 📖 Simple flag to hide/show models
-   }))
-
-   // 📖 No initial filters - all models visible by default
+   // 📖 Build results from MODELS, including only enabled providers
+   // 📖 Each result gets providerKey so ping() knows which URL + API key to use
+   let results = MODELS
+     .filter(([,,,,,providerKey]) => isProviderEnabled(config, providerKey))
+     .map(([modelId, label, tier, sweScore, ctx, providerKey], i) => ({
+       idx: i + 1, modelId, label, tier, sweScore, ctx, providerKey,
+       status: 'pending',
+       pings: [], // 📖 All ping results (ms or 'TIMEOUT')
+       httpCode: null,
+       hidden: false, // 📖 Simple flag to hide/show models
+     }))

    // 📖 Clamp scrollOffset so cursor is always within the visible viewport window.
    // 📖 Called after every cursor move, sort change, and terminal resize.
@@ -1275,6 +1330,13 @@ async function main() {
      mode, // 📖 'opencode' or 'openclaw' — controls Enter action
      scrollOffset: 0, // 📖 First visible model index in viewport
      terminalRows: process.stdout.rows || 24, // 📖 Current terminal height
+     // 📖 Settings screen state (P key opens it)
+     settingsOpen: false, // 📖 Whether settings overlay is active
+     settingsCursor: 0, // 📖 Which provider row is selected in settings
+     settingsEditMode: false, // 📖 Whether we're in inline key editing mode
+     settingsEditBuffer: '', // 📖 Typed characters for the API key being edited
+     settingsTestResults: {}, // 📖 { providerKey: 'pending'|'ok'|'fail'|null }
+     config, // 📖 Live reference to the config object (updated on save)
    }

    // 📖 Re-clamp viewport on terminal resize
@@ -1308,6 +1370,88 @@ async function main() {
  return state.results
  }

+ // ─── Settings screen renderer ─────────────────────────────────────────────
+ // 📖 renderSettings: Draw the settings overlay in the alt screen buffer.
+ // 📖 Shows all providers with their API key (masked) + enabled state.
+ // 📖 When in edit mode (settingsEditMode=true), shows an inline input field.
+ // 📖 Key "T" in settings = test API key for selected provider.
+ function renderSettings() {
+ const providerKeys = Object.keys(sources)
+ const EL = '\x1b[K'
+ const lines = []
+
+ lines.push('')
+ lines.push(` ${chalk.bold('⚙ Settings')} ${chalk.dim('— free-coding-models v' + LOCAL_VERSION)}`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Providers')}`)
+ lines.push('')
+
+ for (let i = 0; i < providerKeys.length; i++) {
+ const pk = providerKeys[i]
+ const src = sources[pk]
+ const isCursor = i === state.settingsCursor
+ const enabled = isProviderEnabled(state.config, pk)
+ const keyVal = state.config.apiKeys?.[pk] ?? ''
+
+ // 📖 Build API key display — mask most chars, show last 4
+ let keyDisplay
+ if (state.settingsEditMode && isCursor) {
+ // 📖 Inline editing: show typed buffer with cursor indicator
+ keyDisplay = chalk.cyanBright(`${state.settingsEditBuffer || ''}▏`)
+ } else if (keyVal) {
+ const visible = keyVal.slice(-4)
+ const masked = '•'.repeat(Math.min(16, Math.max(4, keyVal.length - 4)))
+ keyDisplay = chalk.dim(masked + visible)
+ } else {
+ keyDisplay = chalk.dim('(no key set)')
+ }
+
+ // 📖 Test result badge
+ const testResult = state.settingsTestResults[pk]
+ let testBadge = chalk.dim('[Test —]')
+ if (testResult === 'pending') testBadge = chalk.yellow('[Testing…]')
+ else if (testResult === 'ok') testBadge = chalk.greenBright('[Test ✅]')
+ else if (testResult === 'fail') testBadge = chalk.red('[Test ❌]')
+
+ const enabledBadge = enabled ? chalk.greenBright('✅') : chalk.dim('⬜')
+ const providerName = chalk.bold(src.name.padEnd(10))
+ const bullet = isCursor ? chalk.bold.cyan(' ❯ ') : chalk.dim(' ')
+
+ const row = `${bullet}[ ${enabledBadge} ] ${providerName} ${keyDisplay.padEnd(30)} ${testBadge}`
+ lines.push(isCursor ? chalk.bgRgb(30, 30, 60)(row) : row)
+ }
+
+ lines.push('')
+ if (state.settingsEditMode) {
+ lines.push(chalk.dim(' Type API key • Enter Save • Esc Cancel'))
+ } else {
+ lines.push(chalk.dim(' ↑↓ Navigate • Enter Edit key • Space Toggle enabled • T Test key • Esc Close'))
+ }
+ lines.push('')
+
+ const cleared = lines.map(l => l + EL)
+ const remaining = state.terminalRows > 0 ? Math.max(0, state.terminalRows - cleared.length) : 0
+ for (let i = 0; i < remaining; i++) cleared.push(EL)
+ return cleared.join('\n')
+ }
+
+ // ─── Settings key test helper ───────────────────────────────────────────────
+ // 📖 Fires a single ping to the selected provider to verify the API key works.
+ async function testProviderKey(providerKey) {
+ const src = sources[providerKey]
+ if (!src) return
+ const testKey = getApiKey(state.config, providerKey)
+ if (!testKey) { state.settingsTestResults[providerKey] = 'fail'; return }
+
+ // 📖 Use the first model in the provider's list for the test ping
+ const testModel = src.models[0]?.[0]
+ if (!testModel) { state.settingsTestResults[providerKey] = 'fail'; return }
+
+ state.settingsTestResults[providerKey] = 'pending'
+ const { code } = await ping(testKey, testModel, src.url)
+ state.settingsTestResults[providerKey] = code === '200' ? 'ok' : 'fail'
+ }
+
  // 📖 Setup keyboard input for interactive selection during pings
  // 📖 Use readline with keypress event for arrow key handling
  process.stdin.setEncoding('utf8')
@@ -1318,6 +1462,103 @@ async function main() {
  const onKeyPress = async (str, key) => {
  if (!key) return

+ // ─── Settings overlay keyboard handling ───────────────────────────────────
+ if (state.settingsOpen) {
+ const providerKeys = Object.keys(sources)
+
+ // 📖 Edit mode: capture typed characters for the API key
+ if (state.settingsEditMode) {
+ if (key.name === 'return') {
+ // 📖 Save the new key and exit edit mode
+ const pk = providerKeys[state.settingsCursor]
+ const newKey = state.settingsEditBuffer.trim()
+ if (newKey) {
+ state.config.apiKeys[pk] = newKey
+ saveConfig(state.config)
+ }
+ state.settingsEditMode = false
+ state.settingsEditBuffer = ''
+ } else if (key.name === 'escape') {
+ // 📖 Cancel without saving
+ state.settingsEditMode = false
+ state.settingsEditBuffer = ''
+ } else if (key.name === 'backspace') {
+ state.settingsEditBuffer = state.settingsEditBuffer.slice(0, -1)
+ } else if (str && !key.ctrl && !key.meta && str.length === 1) {
+ // 📖 Append printable character to buffer
+ state.settingsEditBuffer += str
+ }
+ return
+ }
+
+ // 📖 Normal settings navigation
+ if (key.name === 'escape') {
+ // 📖 Close settings — rebuild results to reflect provider changes
+ state.settingsOpen = false
+ // 📖 Rebuild results: add models from newly enabled providers, remove disabled
+ results = MODELS
+ .filter(([,,,,,pk]) => isProviderEnabled(state.config, pk))
+ .map(([modelId, label, tier, sweScore, ctx, providerKey], i) => {
+ // 📖 Try to reuse existing result to keep ping history
+ const existing = state.results.find(r => r.modelId === modelId && r.providerKey === providerKey)
+ if (existing) return existing
+ return { idx: i + 1, modelId, label, tier, sweScore, ctx, providerKey, status: 'pending', pings: [], httpCode: null, hidden: false }
+ })
+ // 📖 Re-index results
+ results.forEach((r, i) => { r.idx = i + 1 })
+ state.results = results
+ adjustScrollOffset(state)
+ return
+ }
+
+ if (key.name === 'up' && state.settingsCursor > 0) {
+ state.settingsCursor--
+ return
+ }
+
+ if (key.name === 'down' && state.settingsCursor < providerKeys.length - 1) {
+ state.settingsCursor++
+ return
+ }
+
+ if (key.name === 'return') {
+ // 📖 Enter edit mode for the selected provider's key
+ const pk = providerKeys[state.settingsCursor]
+ state.settingsEditBuffer = state.config.apiKeys?.[pk] ?? ''
+ state.settingsEditMode = true
+ return
+ }
+
+ if (key.name === 'space') {
+ // 📖 Toggle enabled/disabled for selected provider
+ const pk = providerKeys[state.settingsCursor]
+ if (!state.config.providers) state.config.providers = {}
+ if (!state.config.providers[pk]) state.config.providers[pk] = { enabled: true }
+ state.config.providers[pk].enabled = !isProviderEnabled(state.config, pk)
+ saveConfig(state.config)
+ return
+ }
+
+ if (key.name === 't') {
+ // 📖 Test the selected provider's key (fires a real ping)
+ const pk = providerKeys[state.settingsCursor]
+ testProviderKey(pk)
+ return
+ }
+
+ if (key.ctrl && key.name === 'c') { exit(0); return }
+ return // 📖 Swallow all other keys while settings is open
+ }
+
+ // 📖 P key: open settings screen
+ if (key.name === 'p') {
+ state.settingsOpen = true
+ state.settingsCursor = 0
+ state.settingsEditMode = false
+ state.settingsEditBuffer = ''
+ return
+ }
+
  // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, N=context, H=health, V=verdict, U=uptime
  // 📖 T is reserved for tier filter cycling — tier sort moved to Y
  const sortKeys = {
@@ -1435,10 +1676,13 @@ async function main() {

  process.stdin.on('keypress', onKeyPress)

- // 📖 Animation loop: clear alt screen + redraw table at FPS with cursor
+ // 📖 Animation loop: render settings overlay OR main table based on state
  const ticker = setInterval(() => {
  state.frame++
- process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows))
+ const content = state.settingsOpen
+ ? renderSettings()
+ : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows)
+ process.stdout.write(ALT_HOME + content)
  }, Math.round(1000 / FPS))

  process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows))
@@ -1446,8 +1690,11 @@ async function main() {
  // ── Continuous ping loop — ping all models every N seconds forever ──────────

  // 📖 Single ping function that updates result
+ // 📖 Uses per-provider API key and URL from sources.js
  const pingModel = async (r) => {
- const { code, ms } = await ping(apiKey, r.modelId)
+ const providerApiKey = getApiKey(state.config, r.providerKey) ?? apiKey
+ const providerUrl = sources[r.providerKey]?.url ?? sources.nvidia.url
+ const { code, ms } = await ping(providerApiKey, r.modelId, providerUrl)

  // 📖 Store ping result as object with ms and code
  // 📖 ms = actual response time (even for errors like 429)
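The key-masking rule that renderSettings() applies inline (bullets for most characters, last 4 visible) can be sketched as a standalone helper. The name `maskKey` is ours for illustration; the package computes this inside the render loop rather than as a named function:

```javascript
// Sketch of the masking scheme from renderSettings(): between 4 and 16
// bullet characters, followed by the last 4 characters of the key.
function maskKey(keyVal) {
  if (!keyVal) return '(no key set)'
  const visible = keyVal.slice(-4) // last 4 chars stay readable
  const masked = '•'.repeat(Math.min(16, Math.max(4, keyVal.length - 4))) // clamp bullet count to 4–16
  return masked + visible
}
```

Note the clamping behavior: short keys still get the minimum 4 bullets, while very long keys are capped at 16 bullets so the row width stays bounded.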
package/lib/config.js ADDED
@@ -0,0 +1,170 @@
+ /**
+ * @file lib/config.js
+ * @description JSON config management for free-coding-models multi-provider support.
+ *
+ * 📖 This module manages ~/.free-coding-models.json, the new config file that
+ * stores API keys and per-provider enabled/disabled state for all providers
+ * (NVIDIA NIM, Groq, Cerebras, etc.).
+ *
+ * 📖 Config file location: ~/.free-coding-models.json
+ * 📖 File permissions: 0o600 (user read/write only — contains API keys)
+ *
+ * 📖 Config JSON structure:
+ * {
+ * "apiKeys": {
+ * "nvidia": "nvapi-xxx",
+ * "groq": "gsk_xxx",
+ * "cerebras": "csk_xxx"
+ * },
+ * "providers": {
+ * "nvidia": { "enabled": true },
+ * "groq": { "enabled": true },
+ * "cerebras": { "enabled": true }
+ * }
+ * }
+ *
+ * 📖 Migration: On first run, if the old plain-text ~/.free-coding-models exists
+ * and the new JSON file does not, the old key is auto-migrated as the nvidia key.
+ * The old file is left in place (not deleted) for safety.
+ *
+ * @functions
+ * → loadConfig() — Read ~/.free-coding-models.json; auto-migrate old plain-text config if needed
+ * → saveConfig(config) — Write config to ~/.free-coding-models.json with 0o600 permissions
+ * → getApiKey(config, providerKey) — Get effective API key (env var override > config > null)
+ *
+ * @exports loadConfig, saveConfig, getApiKey, isProviderEnabled
+ * @exports CONFIG_PATH — path to the JSON config file
+ *
+ * @see bin/free-coding-models.js — main CLI that uses these functions
+ * @see sources.js — provider keys come from Object.keys(sources)
+ */
+
+ import { readFileSync, writeFileSync, existsSync } from 'fs'
+ import { homedir } from 'os'
+ import { join } from 'path'
+
+ // 📖 New JSON config path — stores all providers' API keys + enabled state
+ export const CONFIG_PATH = join(homedir(), '.free-coding-models.json')
+
+ // 📖 Old plain-text config path — used only for migration
+ const LEGACY_CONFIG_PATH = join(homedir(), '.free-coding-models')
+
+ // 📖 Environment variable names per provider
+ // 📖 These allow users to override config via env vars (useful for CI/headless setups)
+ const ENV_VARS = {
+ nvidia: 'NVIDIA_API_KEY',
+ groq: 'GROQ_API_KEY',
+ cerebras: 'CEREBRAS_API_KEY',
+ }
+
+ /**
+ * 📖 loadConfig: Read the JSON config from disk.
+ *
+ * 📖 Fallback chain:
+ * 1. Try to read ~/.free-coding-models.json (new format)
+ * 2. If missing, check if ~/.free-coding-models (old plain-text) exists → migrate
+ * 3. If neither, return an empty default config
+ *
+ * 📖 The migration reads the old file as a plain nvidia API key and writes
+ * a proper JSON config. The old file is NOT deleted (safety first).
+ *
+ * @returns {{ apiKeys: Record<string,string>, providers: Record<string,{enabled:boolean}> }}
+ */
+ export function loadConfig() {
+ // 📖 Try new JSON config first
+ if (existsSync(CONFIG_PATH)) {
+ try {
+ const raw = readFileSync(CONFIG_PATH, 'utf8').trim()
+ const parsed = JSON.parse(raw)
+ // 📖 Ensure the shape is always complete — fill missing sections with defaults
+ if (!parsed.apiKeys) parsed.apiKeys = {}
+ if (!parsed.providers) parsed.providers = {}
+ return parsed
+ } catch {
+ // 📖 Corrupted JSON — return empty config (user will re-enter keys)
+ return _emptyConfig()
+ }
+ }
+
+ // 📖 Migration path: old plain-text file exists, new JSON doesn't
+ if (existsSync(LEGACY_CONFIG_PATH)) {
+ try {
+ const oldKey = readFileSync(LEGACY_CONFIG_PATH, 'utf8').trim()
+ if (oldKey) {
+ const config = _emptyConfig()
+ config.apiKeys.nvidia = oldKey
+ // 📖 Auto-save migrated config so next launch is fast
+ saveConfig(config)
+ return config
+ }
+ } catch {
+ // 📖 Can't read old file — proceed with empty config
+ }
+ }
+
+ return _emptyConfig()
+ }
+
+ /**
+ * 📖 saveConfig: Write the config object to ~/.free-coding-models.json.
+ *
+ * 📖 Uses mode 0o600 so the file is only readable by the owning user (API keys!).
+ * 📖 Pretty-prints JSON for human readability.
+ *
+ * @param {{ apiKeys: Record<string,string>, providers: Record<string,{enabled:boolean}> }} config
+ */
+ export function saveConfig(config) {
+ try {
+ writeFileSync(CONFIG_PATH, JSON.stringify(config, null, 2), { mode: 0o600 })
+ } catch {
+ // 📖 Silently fail — the app is still usable, keys just won't persist
+ }
+ }
+
+ /**
+ * 📖 getApiKey: Get the effective API key for a provider.
+ *
+ * 📖 Priority order (first non-empty wins):
+ * 1. Environment variable (e.g. NVIDIA_API_KEY) — for CI/headless
+ * 2. Config file value — from ~/.free-coding-models.json
+ * 3. null — no key configured
+ *
+ * @param {{ apiKeys: Record<string,string> }} config
+ * @param {string} providerKey — e.g. 'nvidia', 'groq', 'cerebras'
+ * @returns {string|null}
+ */
+ export function getApiKey(config, providerKey) {
+ // 📖 Env var override — takes precedence over everything
+ const envVar = ENV_VARS[providerKey]
+ if (envVar && process.env[envVar]) return process.env[envVar]
+
+ // 📖 Config file value
+ const key = config?.apiKeys?.[providerKey]
+ if (key) return key
+
+ return null
+ }
+
+ /**
+ * 📖 isProviderEnabled: Check if a provider is enabled in config.
+ *
+ * 📖 Providers are enabled by default if not explicitly set to false.
+ * 📖 A provider without an API key should still appear in settings (just can't ping).
+ *
+ * @param {{ providers: Record<string,{enabled:boolean}> }} config
+ * @param {string} providerKey
+ * @returns {boolean}
+ */
+ export function isProviderEnabled(config, providerKey) {
+ const providerConfig = config?.providers?.[providerKey]
+ if (!providerConfig) return true // 📖 Default: enabled
+ return providerConfig.enabled !== false
+ }
+
+ // 📖 Internal helper: create a blank config with the right shape
+ function _emptyConfig() {
+ return {
+ apiKeys: {},
+ providers: {},
+ }
+ }
package/lib/utils.js CHANGED
@@ -155,8 +155,8 @@ export const sortResults = (results, sortColumn, sortDirection) => {
  cmp = TIER_ORDER.indexOf(a.tier) - TIER_ORDER.indexOf(b.tier)
  break
  case 'origin':
- // 📖 All models are NIM for now this is future-proofed for multi-source
- cmp = 'NIM'.localeCompare('NIM')
+ // 📖 Sort by providerKey (defaulting to 'nvidia') for multi-provider support
+ cmp = (a.providerKey ?? 'nvidia').localeCompare(b.providerKey ?? 'nvidia')
  break
  case 'model':
  cmp = a.label.localeCompare(b.label)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "free-coding-models",
- "version": "0.1.43",
+ "version": "0.1.45",
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
package/sources.js CHANGED
@@ -27,7 +27,12 @@
  * 📖 Secondary: https://swe-rebench.com (independent evals, scores are lower)
  * 📖 Leaderboard tracker: https://www.marc0.dev/en/leaderboard
  *
- * @exports Object containing all sources and their models
+ * @exports nvidiaNim, groq, cerebras — model arrays per provider
+ * @exports sources — map of { nvidia, groq, cerebras }, each with { name, url, models }
+ * @exports MODELS — flat array of [modelId, label, tier, sweScore, ctx, providerKey]
+ *
+ * 📖 MODELS now includes providerKey as the 6th element so ping() knows which
+ * API endpoint and API key to use for each model.
  */

  // 📖 NIM source - https://build.nvidia.com
@@ -86,18 +91,50 @@ export const nvidiaNim = [
  ['microsoft/phi-4-mini-instruct', 'Phi 4 Mini', 'C', '14.0%', '128k'],
  ]

+ // 📖 Groq source - https://console.groq.com
+ // 📖 Free API keys available at https://console.groq.com/keys
+ export const groq = [
+ ['llama-3.3-70b-versatile', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['meta-llama/llama-4-scout-17b-16e-preview', 'Llama 4 Scout', 'A', '44.0%', '10M'],
+ ['meta-llama/llama-4-maverick-17b-128e-preview', 'Llama 4 Maverick', 'S', '62.0%', '1M'],
+ ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ ['qwen-qwq-32b', 'QwQ 32B', 'A+', '50.0%', '131k'],
+ ['moonshotai/kimi-k2-instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ]
+
+ // 📖 Cerebras source - https://cloud.cerebras.ai
+ // 📖 Free API keys available at https://cloud.cerebras.ai
+ export const cerebras = [
+ ['llama3.3-70b', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '10M'],
+ ['qwen-3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ ]
+
  // 📖 All sources combined - used by the main script
+ // 📖 Each source has: name (display), url (API endpoint), models (array of model tuples)
  export const sources = {
  nvidia: {
  name: 'NIM',
+ url: 'https://integrate.api.nvidia.com/v1/chat/completions',
  models: nvidiaNim,
  },
+ groq: {
+ name: 'Groq',
+ url: 'https://api.groq.com/openai/v1/chat/completions',
+ models: groq,
+ },
+ cerebras: {
+ name: 'Cerebras',
+ url: 'https://api.cerebras.ai/v1/chat/completions',
+ models: cerebras,
+ },
  }

- // 📖 Flatten all models from all sources for backward compatibility
+ // 📖 Flatten all models from all sources; each entry includes providerKey as the 6th element
+ // 📖 providerKey lets the main CLI know which API key and URL to use per model
  export const MODELS = []
  for (const [sourceKey, sourceData] of Object.entries(sources)) {
  for (const [modelId, label, tier, sweScore, ctx] of sourceData.models) {
- MODELS.push([modelId, label, tier, sweScore, ctx, sourceKey])
+ MODELS.push([modelId, label, tier, sweScore, ctx, sourceKey])
  }