free-coding-models 0.1.52 → 0.1.56

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,14 +2,14 @@
2
2
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
3
3
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
4
4
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
5
- <img src="https://img.shields.io/badge/models-53-76b900?logo=nvidia" alt="models count">
6
- <img src="https://img.shields.io/badge/providers-3-blue" alt="providers count">
5
+ <img src="https://img.shields.io/badge/models-101-76b900?logo=nvidia" alt="models count">
6
+ <img src="https://img.shields.io/badge/providers-9-blue" alt="providers count">
7
7
  </p>
8
8
 
9
9
  <h1 align="center">free-coding-models</h1>
10
10
 
11
11
  <p align="center">
12
- <strong>Want to contribute or discuss the project?</strong> Join our <a href="https://discord.gg/5MbTnDC3Md">Discord community</a>!
12
+ 💬 <a href="https://discord.gg/5MbTnDC3Md">Let's talk about the project on Discord</a>
13
13
  </p>
14
14
 
15
15
  <p align="center">
@@ -24,7 +24,7 @@
24
24
 
25
25
  <p align="center">
26
26
  <strong>Find the fastest coding LLM models in seconds</strong><br>
27
- <sub>Ping free models from NVIDIA NIM, Groq, and Cerebras in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
27
+ <sub>Ping free models from NVIDIA NIM, Groq, Cerebras, and SambaNova in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
28
28
  </p>
29
29
 
30
30
  <p align="center">
@@ -47,7 +47,7 @@
47
47
  ## ✨ Features
48
48
 
49
49
  - **🎯 Coding-focused** — Only LLM models optimized for code generation, not chat or vision
50
- - **🌐 Multi-provider** — 53 models from NVIDIA NIM, Groq, and Cerebras — all free to use
50
+ - **🌐 Multi-provider** — 101 models from NVIDIA NIM, Groq, Cerebras, SambaNova, OpenRouter, Codestral, Hyperbolic, Scaleway, and Google AI — all free to use
51
51
  - **⚙️ Settings screen** — Press `P` to manage provider API keys, enable/disable providers, and test keys live
52
52
  - **🚀 Parallel pings** — All models tested simultaneously via native `fetch`
53
53
  - **📊 Real-time animation** — Watch latency appear live in alternate screen buffer
@@ -76,10 +76,16 @@ Before using `free-coding-models`, make sure you have:
76
76
  - **NVIDIA NIM** — [build.nvidia.com](https://build.nvidia.com) → Profile → API Keys → Generate
77
77
  - **Groq** — [console.groq.com/keys](https://console.groq.com/keys) → Create API Key
78
78
  - **Cerebras** — [cloud.cerebras.ai](https://cloud.cerebras.ai) → API Keys → Create
79
+ - **SambaNova** — [cloud.sambanova.ai/apis](https://cloud.sambanova.ai/apis) → API Keys → Create ($5 free trial, 3 months)
80
+ - **OpenRouter** — [openrouter.ai/settings/keys](https://openrouter.ai/settings/keys) → Create key (50 free req/day)
81
+ - **Mistral Codestral** — [codestral.mistral.ai](https://codestral.mistral.ai) → API Keys (30 req/min, 2000/day — phone required)
82
+ - **Hyperbolic** — [app.hyperbolic.ai/settings](https://app.hyperbolic.ai/settings) → API Keys ($1 free trial)
83
+ - **Scaleway** — [console.scaleway.com/iam/api-keys](https://console.scaleway.com/iam/api-keys) → IAM → API Keys (1M free tokens)
84
+ - **Google AI Studio** — [aistudio.google.com/apikey](https://aistudio.google.com/apikey) → Get API key (free Gemma models, 14.4K req/day)
79
85
  3. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
80
86
  4. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
81
87
 
82
- > 💡 **Tip:** You don't need all three providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
88
+ > 💡 **Tip:** You don't need all nine providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
83
89
 
84
90
  ---
85
91
 
@@ -157,13 +163,13 @@ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get
157
163
  Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.
158
164
 
159
165
  **How it works:**
160
- 1. **Ping phase** — All enabled models are pinged in parallel (up to 53 across 3 providers)
166
+ 1. **Ping phase** — All enabled models are pinged in parallel (up to 101 across 9 providers)
161
167
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
162
168
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
163
169
  4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
164
170
  5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
165
171
 
166
- Setup wizard (first run — walks through all 3 providers):
172
+ Setup wizard (first run — walks through all 9 providers):
167
173
 
168
174
  ```
169
175
  🔑 First-time setup — API keys
@@ -184,11 +190,16 @@ Setup wizard (first run — walks through all 3 providers):
184
190
  API Keys → Create
185
191
  Enter key (or Enter to skip):
186
192
 
193
+ ● SambaNova
194
+ Free key at: https://cloud.sambanova.ai/apis
195
+ API Keys → Create ($5 free trial, 3 months)
196
+ Enter key (or Enter to skip):
197
+
187
198
  ✅ 2 key(s) saved to ~/.free-coding-models.json
188
199
  You can add or change keys anytime with the P key in the TUI.
189
200
  ```
190
201
 
191
- You don't need all three — skip any provider by pressing Enter. At least one key is required.
202
+ You don't need all nine — skip any provider by pressing Enter. At least one key is required.
192
203
 
193
204
  ### Adding or changing keys later
194
205
 
@@ -246,7 +257,7 @@ CEREBRAS_API_KEY=csk_xxx free-coding-models
246
257
 
247
258
  ## 🤖 Coding Models
248
259
 
249
- **53 coding models** across 3 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
260
+ **101 coding models** across 9 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
250
261
 
251
262
  ### NVIDIA NIM (44 models)
252
263
 
@@ -601,4 +612,9 @@ We welcome contributions! Feel free to open issues, submit pull requests, or get
601
612
  **A:** No — `free-coding-models` configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
602
613
 
603
614
  ## 📧 Support
604
- For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/5MbTnDC3Md
615
+
616
+ For questions or issues, open a [GitHub issue](https://github.com/vava-nessa/free-coding-models/issues).
617
+
618
+ 💬 Let's talk about the project on Discord: https://discord.gg/5MbTnDC3Md
619
+
620
+ > ⚠️ **free-coding-models is a BETA TUI** — it might crash or have problems. Use at your own risk and feel free to report issues!
@@ -198,6 +198,54 @@ async function promptApiKey(config) {
198
198
  hint: 'API Keys → Create',
199
199
  prefix: 'csk_ / cauth_',
200
200
  },
201
+ {
202
+ key: 'sambanova',
203
+ label: 'SambaNova',
204
+ color: chalk.rgb(255, 165, 0),
205
+ url: 'https://cloud.sambanova.ai/apis',
206
+ hint: 'API Keys → Create ($5 free trial, 3 months)',
207
+ prefix: 'sn-',
208
+ },
209
+ {
210
+ key: 'openrouter',
211
+ label: 'OpenRouter',
212
+ color: chalk.rgb(120, 80, 255),
213
+ url: 'https://openrouter.ai/settings/keys',
214
+ hint: 'API Keys → Create key (50 free req/day, shared quota)',
215
+ prefix: 'sk-or-',
216
+ },
217
+ {
218
+ key: 'codestral',
219
+ label: 'Mistral Codestral',
220
+ color: chalk.rgb(255, 100, 100),
221
+ url: 'https://codestral.mistral.ai',
222
+ hint: 'API Keys → Create key (30 req/min, 2000/day — phone required)',
223
+ prefix: 'csk-',
224
+ },
225
+ {
226
+ key: 'hyperbolic',
227
+ label: 'Hyperbolic',
228
+ color: chalk.rgb(0, 200, 150),
229
+ url: 'https://app.hyperbolic.ai/settings',
230
+ hint: 'Settings → API Keys ($1 free trial)',
231
+ prefix: 'eyJ',
232
+ },
233
+ {
234
+ key: 'scaleway',
235
+ label: 'Scaleway',
236
+ color: chalk.rgb(130, 0, 250),
237
+ url: 'https://console.scaleway.com/iam/api-keys',
238
+ hint: 'IAM → API Keys (1M free tokens)',
239
+ prefix: 'scw-',
240
+ },
241
+ {
242
+ key: 'googleai',
243
+ label: 'Google AI Studio',
244
+ color: chalk.rgb(66, 133, 244),
245
+ url: 'https://aistudio.google.com/apikey',
246
+ hint: 'Get API key (free Gemma models, 14.4K req/day)',
247
+ prefix: 'AIza',
248
+ },
201
249
  ]
202
250
 
203
251
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout })
@@ -412,7 +460,7 @@ function calculateViewport(terminalRows, scrollOffset, totalModels) {
412
460
  }
413
461
 
414
462
  // 📖 renderTable: mode param controls footer hint text (opencode vs openclaw)
415
- function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode', tierFilterMode = 0, scrollOffset = 0, terminalRows = 0) {
463
+ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode', tierFilterMode = 0, scrollOffset = 0, terminalRows = 0, originFilterMode = 0) {
416
464
  // 📖 Filter out hidden models for display
417
465
  const visibleResults = results.filter(r => !r.hidden)
418
466
 
@@ -453,6 +501,17 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
453
501
  tierBadge = chalk.bold.rgb(255, 200, 0)(` [${TIER_CYCLE_NAMES[tierFilterMode]}]`)
454
502
  }
455
503
 
504
+ // 📖 Origin filter badge — shown when filtering by provider is active
505
+ let originBadge = ''
506
+ if (originFilterMode > 0) {
507
+ const originKeys = [null, ...Object.keys(sources)]
508
+ const activeOriginKey = originKeys[originFilterMode]
509
+ const activeOriginName = activeOriginKey ? sources[activeOriginKey]?.name ?? activeOriginKey : null
510
+ if (activeOriginName) {
511
+ originBadge = chalk.bold.rgb(100, 200, 255)(` [${activeOriginName}]`)
512
+ }
513
+ }
514
+
456
515
  // 📖 Column widths (generous spacing with margins)
457
516
  const W_RANK = 6
458
517
  const W_TIER = 6
@@ -471,7 +530,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
471
530
 
472
531
  const lines = [
473
532
  '',
474
- ` ${chalk.bold('⚡ Free Coding Models')} ${chalk.dim('v' + LOCAL_VERSION)}${modeBadge}${modeHint}${tierBadge} ` +
533
+ ` ${chalk.bold('⚡ Free Coding Models')} ${chalk.dim('v' + LOCAL_VERSION)}${modeBadge}${modeHint}${tierBadge}${originBadge} ` +
475
534
  chalk.greenBright(`✅ ${up}`) + chalk.dim(' up ') +
476
535
  chalk.yellow(`⏱ ${timeout}`) + chalk.dim(' timeout ') +
477
536
  chalk.red(`❌ ${down}`) + chalk.dim(' down ') +
@@ -509,7 +568,10 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
509
568
  // 📖 Now colorize after padding is calculated on plain text
510
569
  const rankH_c = colorFirst(rankH, W_RANK)
511
570
  const tierH_c = colorFirst('Tier', W_TIER)
512
- const originH_c = sortColumn === 'origin' ? chalk.bold.cyan(originH.padEnd(W_SOURCE)) : colorFirst(originH, W_SOURCE)
571
+ const originLabel = 'Origin(N)'
572
+ const originH_c = sortColumn === 'origin'
573
+ ? chalk.bold.cyan(originLabel.padEnd(W_SOURCE))
574
+ : (originFilterMode > 0 ? chalk.bold.rgb(100, 200, 255)(originLabel.padEnd(W_SOURCE)) : colorFirst(originLabel, W_SOURCE))
513
575
  const modelH_c = colorFirst(modelH, W_MODEL)
514
576
  const sweH_c = sortColumn === 'swe' ? chalk.bold.cyan(sweH.padEnd(W_SWE)) : colorFirst(sweH, W_SWE)
515
577
  const ctxH_c = sortColumn === 'ctx' ? chalk.bold.cyan(ctxH.padEnd(W_CTX)) : colorFirst(ctxH, W_CTX)
@@ -707,10 +769,11 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
707
769
  : mode === 'opencode-desktop'
708
770
  ? chalk.rgb(0, 200, 255)('Enter→OpenDesktop')
709
771
  : chalk.rgb(0, 200, 255)('Enter→OpenCode')
710
- lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • T Filter tier • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • Ctrl+C Exit`))
772
+ lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • T Tier • N Origin • W↓/X↑ (${intervalSec}s) • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • `) + chalk.yellow('K') + chalk.dim(` Help • Ctrl+C Exit`))
711
773
  lines.push('')
712
- lines.push(chalk.dim(' Made with ') + '💖 & ☕' + chalk.dim(' by ') + '\x1b]8;;https://github.com/vava-nessa\x1b\\vava-nessa\x1b]8;;\x1b\\' + chalk.dim(' • ') + '🫂 ' + chalk.cyanBright('\x1b]8;;https://discord.gg/5MbTnDC3Md\x1b\\Join our Discord!\x1b]8;;\x1b\\') + chalk.dim(' • ') + '⭐ ' + '\x1b]8;;https://github.com/vava-nessa/free-coding-models\x1b\\Read the docs on GitHub\x1b]8;;\x1b\\')
713
- lines.push(chalk.dim(' 💬 Discord: ') + chalk.cyanBright('https://discord.gg/5MbTnDC3Md'))
774
+ lines.push(chalk.dim(' Made with ') + '💖 & ☕' + chalk.dim(' by ') + '\x1b]8;;https://github.com/vava-nessa\x1b\\vava-nessa\x1b]8;;\x1b\\' + chalk.dim(' • ') + '⭐ ' + '\x1b]8;;https://github.com/vava-nessa/free-coding-models\x1b\\Star on GitHub\x1b]8;;\x1b\\')
775
+ // 📖 Discord invite + BETA warning — always visible at the bottom of the TUI
776
+ lines.push(' 💬 ' + chalk.cyanBright('\x1b]8;;https://discord.gg/5MbTnDC3Md\x1b\\Let\'s talk about the project on Discord: https://discord.gg/5MbTnDC3Md\x1b]8;;\x1b\\') + chalk.dim(' • ') + chalk.yellow('⚠ BETA TUI') + chalk.dim(' — might crash or have problems'))
714
777
  lines.push('')
715
778
  // 📖 Append \x1b[K (erase to EOL) to each line so leftover chars from previous
716
779
  // 📖 frames are cleared. Then pad with blank cleared lines to fill the terminal,
@@ -777,9 +840,15 @@ function getOpenCodeModelId(providerKey, modelId) {
777
840
 
778
841
  // 📖 Env var names per provider -- used for passing resolved keys to child processes
779
842
  const ENV_VAR_NAMES = {
780
- nvidia: 'NVIDIA_API_KEY',
781
- groq: 'GROQ_API_KEY',
782
- cerebras: 'CEREBRAS_API_KEY',
843
+ nvidia: 'NVIDIA_API_KEY',
844
+ groq: 'GROQ_API_KEY',
845
+ cerebras: 'CEREBRAS_API_KEY',
846
+ sambanova: 'SAMBANOVA_API_KEY',
847
+ openrouter: 'OPENROUTER_API_KEY',
848
+ codestral: 'CODESTRAL_API_KEY',
849
+ hyperbolic: 'HYPERBOLIC_API_KEY',
850
+ scaleway: 'SCALEWAY_API_KEY',
851
+ googleai: 'GOOGLE_API_KEY',
783
852
  }
784
853
 
785
854
  // 📖 OpenCode config location varies by platform
@@ -990,6 +1059,67 @@ After installation, you can use: opencode --model ${modelRef}`
990
1059
  },
991
1060
  models: {}
992
1061
  }
1062
+ } else if (providerKey === 'sambanova') {
1063
+ // 📖 SambaNova is OpenAI-compatible — uses @ai-sdk/openai-compatible with their base URL
1064
+ config.provider.sambanova = {
1065
+ npm: '@ai-sdk/openai-compatible',
1066
+ name: 'SambaNova',
1067
+ options: {
1068
+ baseURL: 'https://api.sambanova.ai/v1',
1069
+ apiKey: '{env:SAMBANOVA_API_KEY}'
1070
+ },
1071
+ models: {}
1072
+ }
1073
+ } else if (providerKey === 'openrouter') {
1074
+ config.provider.openrouter = {
1075
+ npm: '@ai-sdk/openai-compatible',
1076
+ name: 'OpenRouter',
1077
+ options: {
1078
+ baseURL: 'https://openrouter.ai/api/v1',
1079
+ apiKey: '{env:OPENROUTER_API_KEY}'
1080
+ },
1081
+ models: {}
1082
+ }
1083
+ } else if (providerKey === 'codestral') {
1084
+ config.provider.codestral = {
1085
+ npm: '@ai-sdk/openai-compatible',
1086
+ name: 'Mistral Codestral',
1087
+ options: {
1088
+ baseURL: 'https://codestral.mistral.ai/v1',
1089
+ apiKey: '{env:CODESTRAL_API_KEY}'
1090
+ },
1091
+ models: {}
1092
+ }
1093
+ } else if (providerKey === 'hyperbolic') {
1094
+ config.provider.hyperbolic = {
1095
+ npm: '@ai-sdk/openai-compatible',
1096
+ name: 'Hyperbolic',
1097
+ options: {
1098
+ baseURL: 'https://api.hyperbolic.xyz/v1',
1099
+ apiKey: '{env:HYPERBOLIC_API_KEY}'
1100
+ },
1101
+ models: {}
1102
+ }
1103
+ } else if (providerKey === 'scaleway') {
1104
+ config.provider.scaleway = {
1105
+ npm: '@ai-sdk/openai-compatible',
1106
+ name: 'Scaleway',
1107
+ options: {
1108
+ baseURL: 'https://api.scaleway.ai/v1',
1109
+ apiKey: '{env:SCALEWAY_API_KEY}'
1110
+ },
1111
+ models: {}
1112
+ }
1113
+ } else if (providerKey === 'googleai') {
1114
+ config.provider.googleai = {
1115
+ npm: '@ai-sdk/openai-compatible',
1116
+ name: 'Google AI Studio',
1117
+ options: {
1118
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
1119
+ apiKey: '{env:GOOGLE_API_KEY}'
1120
+ },
1121
+ models: {}
1122
+ }
993
1123
  }
994
1124
  }
995
1125
 
@@ -1159,6 +1289,67 @@ ${isWindows ? 'set NVIDIA_API_KEY=your_key_here' : 'export NVIDIA_API_KEY=your_k
1159
1289
  },
1160
1290
  models: {}
1161
1291
  }
1292
+ } else if (providerKey === 'sambanova') {
1293
+ // 📖 SambaNova is OpenAI-compatible — uses @ai-sdk/openai-compatible with their base URL
1294
+ config.provider.sambanova = {
1295
+ npm: '@ai-sdk/openai-compatible',
1296
+ name: 'SambaNova',
1297
+ options: {
1298
+ baseURL: 'https://api.sambanova.ai/v1',
1299
+ apiKey: '{env:SAMBANOVA_API_KEY}'
1300
+ },
1301
+ models: {}
1302
+ }
1303
+ } else if (providerKey === 'openrouter') {
1304
+ config.provider.openrouter = {
1305
+ npm: '@ai-sdk/openai-compatible',
1306
+ name: 'OpenRouter',
1307
+ options: {
1308
+ baseURL: 'https://openrouter.ai/api/v1',
1309
+ apiKey: '{env:OPENROUTER_API_KEY}'
1310
+ },
1311
+ models: {}
1312
+ }
1313
+ } else if (providerKey === 'codestral') {
1314
+ config.provider.codestral = {
1315
+ npm: '@ai-sdk/openai-compatible',
1316
+ name: 'Mistral Codestral',
1317
+ options: {
1318
+ baseURL: 'https://codestral.mistral.ai/v1',
1319
+ apiKey: '{env:CODESTRAL_API_KEY}'
1320
+ },
1321
+ models: {}
1322
+ }
1323
+ } else if (providerKey === 'hyperbolic') {
1324
+ config.provider.hyperbolic = {
1325
+ npm: '@ai-sdk/openai-compatible',
1326
+ name: 'Hyperbolic',
1327
+ options: {
1328
+ baseURL: 'https://api.hyperbolic.xyz/v1',
1329
+ apiKey: '{env:HYPERBOLIC_API_KEY}'
1330
+ },
1331
+ models: {}
1332
+ }
1333
+ } else if (providerKey === 'scaleway') {
1334
+ config.provider.scaleway = {
1335
+ npm: '@ai-sdk/openai-compatible',
1336
+ name: 'Scaleway',
1337
+ options: {
1338
+ baseURL: 'https://api.scaleway.ai/v1',
1339
+ apiKey: '{env:SCALEWAY_API_KEY}'
1340
+ },
1341
+ models: {}
1342
+ }
1343
+ } else if (providerKey === 'googleai') {
1344
+ config.provider.googleai = {
1345
+ npm: '@ai-sdk/openai-compatible',
1346
+ name: 'Google AI Studio',
1347
+ options: {
1348
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
1349
+ apiKey: '{env:GOOGLE_API_KEY}'
1350
+ },
1351
+ models: {}
1352
+ }
1162
1353
  }
1163
1354
  }
1164
1355
 
@@ -1513,6 +1704,7 @@ async function main() {
1513
1704
  settingsTestResults: {}, // 📖 { providerKey: 'pending'|'ok'|'fail'|null }
1514
1705
  config, // 📖 Live reference to the config object (updated on save)
1515
1706
  visibleSorted: [], // 📖 Cached visible+sorted models — shared between render loop and key handlers
1707
+ helpVisible: false, // 📖 Whether the help overlay (K key) is active
1516
1708
  }
1517
1709
 
1518
1710
  // 📖 Re-clamp viewport on terminal resize
@@ -1538,10 +1730,19 @@ async function main() {
1538
1730
  // 📖 0=All, 1=S+, 2=S, 3=A+, 4=A, 5=A-, 6=B+, 7=B, 8=C
1539
1731
  const TIER_CYCLE = [null, 'S+', 'S', 'A+', 'A', 'A-', 'B+', 'B', 'C']
1540
1732
  let tierFilterMode = 0
1733
+
1734
+ // 📖 originFilterMode: index into ORIGIN_CYCLE, 0=All, then each provider key in order
1735
+ const ORIGIN_CYCLE = [null, ...Object.keys(sources)]
1736
+ let originFilterMode = 0
1737
+
1541
1738
  function applyTierFilter() {
1542
1739
  const activeTier = TIER_CYCLE[tierFilterMode]
1740
+ const activeOrigin = ORIGIN_CYCLE[originFilterMode]
1543
1741
  state.results.forEach(r => {
1544
- r.hidden = activeTier !== null && r.tier !== activeTier
1742
+ // 📖 Apply both tier and origin filters — model is hidden if it fails either
1743
+ const tierHide = activeTier !== null && r.tier !== activeTier
1744
+ const originHide = activeOrigin !== null && r.providerKey !== activeOrigin
1745
+ r.hidden = tierHide || originHide
1545
1746
  })
1546
1747
  return state.results
1547
1748
  }
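The combined tier/origin hide rule is small enough to sanity-check in isolation. A minimal standalone sketch (the function name and plain-object models are mine; the predicate mirrors the diff, where `null` means the "All" position of `TIER_CYCLE`/`ORIGIN_CYCLE`):

```javascript
// applyFilters: mark each model hidden unless it passes BOTH active filters.
// A null activeTier/activeOrigin disables that filter ("All").
function applyFilters(results, activeTier, activeOrigin) {
  for (const r of results) {
    const tierHide = activeTier !== null && r.tier !== activeTier
    const originHide = activeOrigin !== null && r.providerKey !== activeOrigin
    r.hidden = tierHide || originHide
  }
  return results
}

const models = [
  { tier: 'S', providerKey: 'groq' },
  { tier: 'S', providerKey: 'cerebras' },
  { tier: 'A', providerKey: 'groq' },
]
applyFilters(models, 'S', 'groq')
// Only the S-tier Groq model stays visible
console.log(models.map(m => m.hidden)) // [ false, true, true ]
```

Because the two filters are independent booleans OR-ed together, cycling either one with `T` or `N` composes naturally with the other.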
@@ -1611,6 +1812,42 @@ async function main() {
1611
1812
  return cleared.join('\n')
1612
1813
  }
1613
1814
 
1815
+ // ─── Help overlay renderer ────────────────────────────────────────────────
1816
+ // 📖 renderHelp: Draw the help overlay listing all key bindings.
1817
+ // 📖 Toggled with K key. Gives users a quick reference without leaving the TUI.
1818
+ function renderHelp() {
1819
+ const EL = '\x1b[K'
1820
+ const lines = []
1821
+ lines.push('')
1822
+ lines.push(` ${chalk.bold('❓ Keyboard Shortcuts')} ${chalk.dim('— press K or Esc to close')}`)
1823
+ lines.push('')
1824
+ lines.push(` ${chalk.bold('Navigation')}`)
1825
+ lines.push(` ${chalk.yellow('↑↓')} Navigate rows`)
1826
+ lines.push(` ${chalk.yellow('Enter')} Select model and launch`)
1827
+ lines.push('')
1828
+ lines.push(` ${chalk.bold('Sorting')}`)
1829
+ lines.push(` ${chalk.yellow('R')} Rank ${chalk.yellow('Y')} Tier ${chalk.yellow('O')} Origin ${chalk.yellow('M')} Model`)
1830
+ lines.push(` ${chalk.yellow('L')} Latest ping ${chalk.yellow('A')} Avg ping ${chalk.yellow('S')} SWE-bench score`)
1831
+ lines.push(` ${chalk.yellow('C')} Context window ${chalk.yellow('H')} Health ${chalk.yellow('V')} Verdict ${chalk.yellow('U')} Uptime`)
1832
+ lines.push('')
1833
+ lines.push(` ${chalk.bold('Filters')}`)
1834
+ lines.push(` ${chalk.yellow('T')} Cycle tier filter ${chalk.dim('(All → S+ → S → A+ → A → A- → B+ → B → C → All)')}`)
1835
+ lines.push(` ${chalk.yellow('N')} Cycle origin filter ${chalk.dim('(All → NIM → Groq → Cerebras → ... each provider → All)')}`)
1836
+ lines.push('')
1837
+ lines.push(` ${chalk.bold('Controls')}`)
1838
+ lines.push(` ${chalk.yellow('W')} Decrease ping interval (faster)`)
1839
+ lines.push(` ${chalk.yellow('X')} Increase ping interval (slower)`)
1840
+ lines.push(` ${chalk.yellow('Z')} Cycle launch mode ${chalk.dim('(OpenCode CLI → OpenCode Desktop → OpenClaw)')}`)
1841
+ lines.push(` ${chalk.yellow('P')} Open settings ${chalk.dim('(manage API keys per provider, enable/disable, test)')}`)
1842
+ lines.push(` ${chalk.yellow('K')} / ${chalk.yellow('Esc')} Show/hide this help`)
1843
+ lines.push(` ${chalk.yellow('Ctrl+C')} Exit`)
1844
+ lines.push('')
1845
+ const cleared = lines.map(l => l + EL)
1846
+ const remaining = state.terminalRows > 0 ? Math.max(0, state.terminalRows - cleared.length) : 0
1847
+ for (let i = 0; i < remaining; i++) cleared.push(EL)
1848
+ return cleared.join('\n')
1849
+ }
1850
+
1614
1851
  // ─── Settings key test helper ───────────────────────────────────────────────
1615
1852
  // 📖 Fires a single ping to the selected provider to verify the API key works.
1616
1853
  async function testProviderKey(providerKey) {
@@ -1646,6 +1883,12 @@ async function main() {
1646
1883
  const onKeyPress = async (str, key) => {
1647
1884
  if (!key) return
1648
1885
 
1886
+ // 📖 Help overlay: Esc or K closes it — handle before everything else so Esc isn't swallowed elsewhere
1887
+ if (state.helpVisible && (key.name === 'escape' || key.name === 'k')) {
1888
+ state.helpVisible = false
1889
+ return
1890
+ }
1891
+
1649
1892
  // ─── Settings overlay keyboard handling ───────────────────────────────────
1650
1893
  if (state.settingsOpen) {
1651
1894
  const providerKeys = Object.keys(sources)
@@ -1753,11 +1996,12 @@ async function main() {
1753
1996
  return
1754
1997
  }
1755
1998
 
1756
- // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, N=context, H=health, V=verdict, U=uptime
1999
+ // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, C=context, H=health, V=verdict, U=uptime
1757
2000
  // 📖 T is reserved for tier filter cycling — tier sort moved to Y
2001
+ // 📖 N is now reserved for origin filter cycling
1758
2002
  const sortKeys = {
1759
2003
  'r': 'rank', 'y': 'tier', 'o': 'origin', 'm': 'model',
1760
- 'l': 'ping', 'a': 'avg', 's': 'swe', 'n': 'ctx', 'h': 'condition', 'v': 'verdict', 'u': 'uptime'
2004
+ 'l': 'ping', 'a': 'avg', 's': 'swe', 'c': 'ctx', 'h': 'condition', 'v': 'verdict', 'u': 'uptime'
1761
2005
  }
1762
2006
 
1763
2007
  if (sortKeys[key.name] && !key.ctrl) {
@@ -1797,6 +2041,24 @@ async function main() {
1797
2041
  return
1798
2042
  }
1799
2043
 
2044
+ // 📖 Origin filter key: N = cycle through each provider (All → NIM → Groq → ... → All)
2045
+ if (key.name === 'n') {
2046
+ originFilterMode = (originFilterMode + 1) % ORIGIN_CYCLE.length
2047
+ applyTierFilter()
2048
+ // 📖 Recompute visible sorted list and reset cursor to avoid stale index into new filtered set
2049
+ const visible = state.results.filter(r => !r.hidden)
2050
+ state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
2051
+ state.cursor = 0
2052
+ state.scrollOffset = 0
2053
+ return
2054
+ }
2055
+
2056
+ // 📖 Help overlay key: K = toggle help overlay
2057
+ if (key.name === 'k') {
2058
+ state.helpVisible = !state.helpVisible
2059
+ return
2060
+ }
2061
+
1800
2062
  // 📖 Mode toggle key: Z = cycle through modes (CLI → Desktop → OpenClaw)
1801
2063
  if (key.name === 'z') {
1802
2064
  const modeOrder = ['opencode', 'opencode-desktop', 'openclaw']
@@ -1899,7 +2161,9 @@ async function main() {
1899
2161
  }
1900
2162
  const content = state.settingsOpen
1901
2163
  ? renderSettings()
1902
- : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows)
2164
+ : state.helpVisible
2165
+ ? renderHelp()
2166
+ : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows, originFilterMode)
1903
2167
  process.stdout.write(ALT_HOME + content)
1904
2168
  }, Math.round(1000 / FPS))
1905
2169
 
@@ -1907,7 +2171,7 @@ async function main() {
1907
2171
  const initialVisible = state.results.filter(r => !r.hidden)
1908
2172
  state.visibleSorted = sortResults(initialVisible, state.sortColumn, state.sortDirection)
1909
2173
 
1910
- process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows))
2174
+ process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows, originFilterMode))
1911
2175
 
1912
2176
  // ── Continuous ping loop — ping all models every N seconds forever ──────────
1913
2177
 
package/lib/config.js CHANGED
@@ -12,14 +12,26 @@
12
12
  * 📖 Config JSON structure:
13
13
  * {
14
14
  * "apiKeys": {
15
- * "nvidia": "nvapi-xxx",
16
- * "groq": "gsk_xxx",
17
- * "cerebras": "csk_xxx"
15
+ * "nvidia": "nvapi-xxx",
16
+ * "groq": "gsk_xxx",
17
+ * "cerebras": "csk_xxx",
18
+ * "sambanova": "sn-xxx",
19
+ * "openrouter": "sk-or-xxx",
20
+ * "codestral": "csk-xxx",
21
+ * "hyperbolic": "eyJ...",
22
+ * "scaleway": "scw-xxx",
23
+ * "googleai": "AIza..."
18
24
  * },
19
25
  * "providers": {
20
- * "nvidia": { "enabled": true },
21
- * "groq": { "enabled": true },
22
- * "cerebras": { "enabled": true }
26
+ * "nvidia": { "enabled": true },
27
+ * "groq": { "enabled": true },
28
+ * "cerebras": { "enabled": true },
29
+ * "sambanova": { "enabled": true },
30
+ * "openrouter": { "enabled": true },
31
+ * "codestral": { "enabled": true },
32
+ * "hyperbolic": { "enabled": true },
33
+ * "scaleway": { "enabled": true },
34
+ * "googleai": { "enabled": true }
23
35
  * }
24
36
  * }
25
37
  *
@@ -52,9 +64,15 @@ const LEGACY_CONFIG_PATH = join(homedir(), '.free-coding-models')
52
64
  // 📖 Environment variable names per provider
53
65
  // 📖 These allow users to override config via env vars (useful for CI/headless setups)
54
66
  const ENV_VARS = {
55
- nvidia: 'NVIDIA_API_KEY',
56
- groq: 'GROQ_API_KEY',
57
- cerebras: 'CEREBRAS_API_KEY',
67
+ nvidia: 'NVIDIA_API_KEY',
68
+ groq: 'GROQ_API_KEY',
69
+ cerebras: 'CEREBRAS_API_KEY',
70
+ sambanova: 'SAMBANOVA_API_KEY',
71
+ openrouter: 'OPENROUTER_API_KEY',
72
+ codestral: 'CODESTRAL_API_KEY',
73
+ hyperbolic: 'HYPERBOLIC_API_KEY',
74
+ scaleway: 'SCALEWAY_API_KEY',
75
+ googleai: 'GOOGLE_API_KEY',
58
76
  }
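Per the comment above, these env vars let users override the config file in CI/headless setups. A minimal resolution sketch (the helper name `resolveApiKey` and exact precedence order are my assumptions; only the variable names come from the `ENV_VARS` table):

```javascript
// Hypothetical helper: prefer the provider's env var, fall back to the
// key saved in ~/.free-coding-models.json. Names match the ENV_VARS table.
const ENV_VARS = {
  nvidia: 'NVIDIA_API_KEY',
  groq: 'GROQ_API_KEY',
  cerebras: 'CEREBRAS_API_KEY',
  sambanova: 'SAMBANOVA_API_KEY',
  openrouter: 'OPENROUTER_API_KEY',
  codestral: 'CODESTRAL_API_KEY',
  hyperbolic: 'HYPERBOLIC_API_KEY',
  scaleway: 'SCALEWAY_API_KEY',
  googleai: 'GOOGLE_API_KEY',
}

function resolveApiKey(providerKey, config, env = process.env) {
  const fromEnv = env[ENV_VARS[providerKey]]
  return fromEnv || config?.apiKeys?.[providerKey] || null
}

// Env var wins over the saved key:
resolveApiKey('groq', { apiKeys: { groq: 'gsk_saved' } }, { GROQ_API_KEY: 'gsk_env' })
// → 'gsk_env'
```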
59
77
 
60
78
  /**
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "free-coding-models",
3
- "version": "0.1.52",
3
+ "version": "0.1.56",
4
4
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
5
5
  "keywords": [
    "nvidia",
package/sources.js CHANGED
@@ -27,8 +27,8 @@
  * 📖 Secondary: https://swe-rebench.com (independent evals, scores are lower)
  * 📖 Leaderboard tracker: https://www.marc0.dev/en/leaderboard
  *
- * @exports nvidiaNim, groq, cerebras — model arrays per provider
- * @exports sources — map of { nvidia, groq, cerebras } each with { name, url, models }
+ * @exports nvidiaNim, groq, cerebras, sambanova, openrouter, codestral, hyperbolic, scaleway, googleai — model arrays per provider
+ * @exports sources — map of { nvidia, groq, cerebras, sambanova, openrouter, codestral, hyperbolic, scaleway, googleai } each with { name, url, models }
  * @exports MODELS — flat array of [modelId, label, tier, sweScore, ctx, providerKey]
  *
  * 📖 MODELS now includes providerKey as 6th element so ping() knows which
@@ -100,6 +100,10 @@ export const groq = [
  ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
  ['qwen-qwq-32b', 'QwQ 32B', 'A+', '50.0%', '131k'],
  ['moonshotai/kimi-k2-instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ['llama-3.1-8b-instant', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
+ ['qwen/qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '131k'],
  ]

  // 📖 Cerebras source - https://cloud.cerebras.ai
@@ -108,6 +112,89 @@ export const cerebras = [
  ['llama3.3-70b', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
  ['llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '10M'],
  ['qwen-3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['qwen-3-235b-a22b', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['llama3.1-8b', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ['glm-4.6', 'GLM 4.6', 'A-', '38.0%', '128k'],
+ ]
+
+ // 📖 SambaNova source - https://cloud.sambanova.ai
+ // 📖 Free trial: $5 credits for 3 months — API keys at https://cloud.sambanova.ai/apis
+ // 📖 OpenAI-compatible API, supports all major coding models including DeepSeek V3/R1, Qwen3, Llama 4
+ export const sambanova = [
+ // ── S+ tier ──
+ ['Qwen3-235B-A22B-Instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ // ── S tier ──
+ ['DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['DeepSeek-V3.1', 'DeepSeek V3.1', 'S', '62.0%', '128k'],
+ ['DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Llama-4-Maverick-17B-128E-Instruct', 'Llama 4 Maverick', 'S', '62.0%', '1M'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3.1-Terminus', 'DeepSeek V3.1 Term', 'S', '68.4%', '128k'],
+ // ── A+ tier ──
+ ['Qwen3-32B', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ // ── A tier ──
+ ['DeepSeek-R1-Distill-Llama-70B', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ // ── A- tier ──
+ ['Meta-Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ // ── B tier ──
+ ['Meta-Llama-3.1-8B-Instruct', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ]
+
+ // 📖 OpenRouter source - https://openrouter.ai
+ // 📖 Free :free models with shared quota — 50 free req/day
+ // 📖 API keys at https://openrouter.ai/settings/keys
+ export const openrouter = [
+ ['qwen/qwen3-coder:free', 'Qwen3 Coder', 'S+', '70.6%', '256k'],
+ ['stepfun/step-3.5-flash:free', 'Step 3.5 Flash', 'S+', '74.4%', '256k'],
+ ['deepseek/deepseek-r1-0528:free', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '128k'],
+ ['nvidia/nemotron-3-nano-30b-a3b:free', 'Nemotron Nano 30B', 'A', '43.0%', '128k'],
+ ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ]
+
+ // 📖 Mistral Codestral source - https://codestral.mistral.ai
+ // 📖 Free coding model — 30 req/min, 2000/day (phone number required for key)
+ // 📖 API keys at https://codestral.mistral.ai
+ export const codestral = [
+ ['codestral-latest', 'Codestral', 'B+', '34.0%', '256k'],
+ ]
+
+ // 📖 Hyperbolic source - https://app.hyperbolic.ai
+ // 📖 $1 free trial credits — API keys at https://app.hyperbolic.xyz/settings
+ export const hyperbolic = [
+ ['qwen/qwen3-coder-480b-a35b-instruct', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
+ ['deepseek-ai/DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['moonshotai/Kimi-K2-Instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['Qwen/Qwen3-235B-A22B', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['qwen/qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
+ ['meta-llama/Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['meta-llama/Meta-Llama-3.1-405B-Instruct', 'Llama 3.1 405B', 'A', '44.0%', '128k'],
+ ]
+
+ // 📖 Scaleway source - https://console.scaleway.com
+ // 📖 1M free tokens — API keys at https://console.scaleway.com/iam/api-keys
+ export const scaleway = [
+ ['devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
+ ['qwen3-235b-a22b-instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['qwen3-coder-30b-a3b-instruct', 'Qwen3 Coder 30B', 'A+', '55.0%', '32k'],
+ ['llama-3.3-70b-instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ ['mistral-small-3.2-24b-instruct-2506', 'Mistral Small 3.2', 'B+', '30.0%', '128k'],
+ ]
+
+ // 📖 Google AI Studio source - https://aistudio.google.com
+ // 📖 Free Gemma models — 14.4K req/day, API keys at https://aistudio.google.com/apikey
+ export const googleai = [
+ ['gemma-3-27b-it', 'Gemma 3 27B', 'B', '22.0%', '128k'],
+ ['gemma-3-12b-it', 'Gemma 3 12B', 'C', '15.0%', '128k'],
+ ['gemma-3-4b-it', 'Gemma 3 4B', 'C', '10.0%', '128k'],
  ]

  // 📖 All sources combined - used by the main script
@@ -128,6 +215,36 @@ export const sources = {
  url: 'https://api.cerebras.ai/v1/chat/completions',
  models: cerebras,
  },
+ sambanova: {
+ name: 'SambaNova',
+ url: 'https://api.sambanova.ai/v1/chat/completions',
+ models: sambanova,
+ },
+ openrouter: {
+ name: 'OpenRouter',
+ url: 'https://openrouter.ai/api/v1/chat/completions',
+ models: openrouter,
+ },
+ codestral: {
+ name: 'Codestral',
+ url: 'https://codestral.mistral.ai/v1/chat/completions',
+ models: codestral,
+ },
+ hyperbolic: {
+ name: 'Hyperbolic',
+ url: 'https://api.hyperbolic.xyz/v1/chat/completions',
+ models: hyperbolic,
+ },
+ scaleway: {
+ name: 'Scaleway',
+ url: 'https://api.scaleway.ai/v1/chat/completions',
+ models: scaleway,
+ },
+ googleai: {
+ name: 'Google AI',
+ url: 'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions',
+ models: googleai,
+ },
  }

  // 📖 Flatten all models from all sources — each entry includes providerKey as 6th element