free-coding-models 0.1.52 → 0.1.54

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,14 +2,14 @@
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
- <img src="https://img.shields.io/badge/models-53-76b900?logo=nvidia" alt="models count">
- <img src="https://img.shields.io/badge/providers-3-blue" alt="providers count">
+ <img src="https://img.shields.io/badge/models-101-76b900?logo=nvidia" alt="models count">
+ <img src="https://img.shields.io/badge/providers-9-blue" alt="providers count">
  </p>
 
  <h1 align="center">free-coding-models</h1>
 
  <p align="center">
- <strong>Want to contribute or discuss the project?</strong> Join our <a href="https://discord.gg/5MbTnDC3Md">Discord community</a>!
+ 💬 <a href="https://discord.gg/5MbTnDC3Md">Let's talk about the project on Discord</a>
  </p>
 
  <p align="center">
@@ -24,7 +24,7 @@
 
  <p align="center">
  <strong>Find the fastest coding LLM models in seconds</strong><br>
- <sub>Ping free models from NVIDIA NIM, Groq, and Cerebras in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
+ <sub>Ping free models from NVIDIA NIM, Groq, Cerebras, SambaNova, and more in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
  </p>
 
  <p align="center">
@@ -47,7 +47,7 @@
  ## ✨ Features
 
  - **🎯 Coding-focused** — Only LLM models optimized for code generation, not chat or vision
- - **🌐 Multi-provider** — 53 models from NVIDIA NIM, Groq, and Cerebras — all free to use
+ - **🌐 Multi-provider** — 101 models from NVIDIA NIM, Groq, Cerebras, SambaNova, OpenRouter, Codestral, Hyperbolic, Scaleway, and Google AI — all free to use
  - **⚙️ Settings screen** — Press `P` to manage provider API keys, enable/disable providers, and test keys live
  - **🚀 Parallel pings** — All models tested simultaneously via native `fetch`
  - **📊 Real-time animation** — Watch latency appear live in alternate screen buffer
@@ -76,10 +76,16 @@ Before using `free-coding-models`, make sure you have:
  - **NVIDIA NIM** — [build.nvidia.com](https://build.nvidia.com) → Profile → API Keys → Generate
  - **Groq** — [console.groq.com/keys](https://console.groq.com/keys) → Create API Key
  - **Cerebras** — [cloud.cerebras.ai](https://cloud.cerebras.ai) → API Keys → Create
+ - **SambaNova** — [cloud.sambanova.ai/apis](https://cloud.sambanova.ai/apis) → API Keys → Create ($5 free trial, 3 months)
+ - **OpenRouter** — [openrouter.ai/settings/keys](https://openrouter.ai/settings/keys) → Create key (50 free req/day)
+ - **Mistral Codestral** — [codestral.mistral.ai](https://codestral.mistral.ai) → API Keys (30 req/min, 2000/day — phone required)
+ - **Hyperbolic** — [app.hyperbolic.ai/settings](https://app.hyperbolic.ai/settings) → API Keys ($1 free trial)
+ - **Scaleway** — [console.scaleway.com/iam/api-keys](https://console.scaleway.com/iam/api-keys) → IAM → API Keys (1M free tokens)
+ - **Google AI Studio** — [aistudio.google.com/apikey](https://aistudio.google.com/apikey) → Get API key (free Gemma models, 14.4K req/day)
  3. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
  4. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
 
- > 💡 **Tip:** You don't need all three providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
+ > 💡 **Tip:** You don't need all nine providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
 
  ---
 
@@ -157,13 +163,13 @@ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get
  Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.
 
  **How it works:**
- 1. **Ping phase** — All enabled models are pinged in parallel (up to 53 across 3 providers)
+ 1. **Ping phase** — All enabled models are pinged in parallel (up to 101 across 9 providers)
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
  4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
  5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
 
- Setup wizard (first run — walks through all 3 providers):
+ Setup wizard (first run — walks through all 9 providers):
 
  ```
  🔑 First-time setup — API keys
@@ -184,11 +190,16 @@ Setup wizard (first run — walks through all 3 providers):
  API Keys → Create
  Enter key (or Enter to skip):
 
+ ● SambaNova
+ Free key at: https://cloud.sambanova.ai/apis
+ API Keys → Create ($5 free trial, 3 months)
+ Enter key (or Enter to skip):
+
  ✅ 2 key(s) saved to ~/.free-coding-models.json
  You can add or change keys anytime with the P key in the TUI.
  ```
 
- You don't need all three — skip any provider by pressing Enter. At least one key is required.
+ You don't need all nine — skip any provider by pressing Enter. At least one key is required.
 
  ### Adding or changing keys later
 
@@ -246,7 +257,7 @@ CEREBRAS_API_KEY=csk_xxx free-coding-models
 
  ## 🤖 Coding Models
 
- **53 coding models** across 3 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
+ **101 coding models** across 9 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
 
  ### NVIDIA NIM (44 models)
 
@@ -601,4 +612,8 @@ We welcome contributions! Feel free to open issues, submit pull requests, or get
  **A:** No — `free-coding-models` configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
 
  ## 📧 Support
- For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/5MbTnDC3Md
+
+ For questions or issues, open a [GitHub issue](https://github.com/vava-nessa/free-coding-models/issues).
+
+ 💬 Let's talk about the project on Discord: https://discord.gg/5MbTnDC3Md
+ 📚 Read the docs on GitHub: https://github.com/vava-nessa/free-coding-models#readme
+
+ ⚠️ free-coding-models is a BETA TUI — expect rough edges and occasional crashes.
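The README's "parallel pings via native `fetch`" behavior can be sketched standalone. This is an illustrative sketch only — the `/models` endpoint path, the 5 s timeout, and the `pingModel` name are assumptions, not the package's actual ping code; `fetchImpl` is injectable purely so the logic is testable offline:

```javascript
// Hypothetical latency probe against one OpenAI-compatible base URL.
// Returns { ok, ms } whether the request succeeds, fails, or times out.
async function pingModel(baseURL, apiKey, { fetchImpl = fetch, timeoutMs = 5000 } = {}) {
  const start = Date.now()
  try {
    const res = await fetchImpl(`${baseURL}/models`, {
      headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
      signal: AbortSignal.timeout(timeoutMs), // native AbortSignal, Node 17.3+
    })
    return { ok: res.ok, ms: Date.now() - start }
  } catch {
    // Network error or abort — still report elapsed time so the TUI can show it
    return { ok: false, ms: Date.now() - start }
  }
}

// All providers could then be probed in parallel:
// const results = await Promise.all(models.map(m => pingModel(m.baseURL, m.key)))
```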
@@ -198,6 +198,54 @@ async function promptApiKey(config) {
  hint: 'API Keys → Create',
  prefix: 'csk_ / cauth_',
  },
+ {
+ key: 'sambanova',
+ label: 'SambaNova',
+ color: chalk.rgb(255, 165, 0),
+ url: 'https://cloud.sambanova.ai/apis',
+ hint: 'API Keys → Create ($5 free trial, 3 months)',
+ prefix: 'sn-',
+ },
+ {
+ key: 'openrouter',
+ label: 'OpenRouter',
+ color: chalk.rgb(120, 80, 255),
+ url: 'https://openrouter.ai/settings/keys',
+ hint: 'API Keys → Create key (50 free req/day, shared quota)',
+ prefix: 'sk-or-',
+ },
+ {
+ key: 'codestral',
+ label: 'Mistral Codestral',
+ color: chalk.rgb(255, 100, 100),
+ url: 'https://codestral.mistral.ai',
+ hint: 'API Keys → Create key (30 req/min, 2000/day — phone required)',
+ prefix: 'csk-',
+ },
+ {
+ key: 'hyperbolic',
+ label: 'Hyperbolic',
+ color: chalk.rgb(0, 200, 150),
+ url: 'https://app.hyperbolic.ai/settings',
+ hint: 'Settings → API Keys ($1 free trial)',
+ prefix: 'eyJ',
+ },
+ {
+ key: 'scaleway',
+ label: 'Scaleway',
+ color: chalk.rgb(130, 0, 250),
+ url: 'https://console.scaleway.com/iam/api-keys',
+ hint: 'IAM → API Keys (1M free tokens)',
+ prefix: 'scw-',
+ },
+ {
+ key: 'googleai',
+ label: 'Google AI Studio',
+ color: chalk.rgb(66, 133, 244),
+ url: 'https://aistudio.google.com/apikey',
+ hint: 'Get API key (free Gemma models, 14.4K req/day)',
+ prefix: 'AIza',
+ },
  ]
 
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout })
@@ -412,7 +460,7 @@ function calculateViewport(terminalRows, scrollOffset, totalModels) {
  }
 
  // 📖 renderTable: mode param controls footer hint text (opencode vs openclaw)
- function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode', tierFilterMode = 0, scrollOffset = 0, terminalRows = 0) {
+ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode', tierFilterMode = 0, scrollOffset = 0, terminalRows = 0, originFilterMode = 0) {
  // 📖 Filter out hidden models for display
  const visibleResults = results.filter(r => !r.hidden)
 
@@ -453,6 +501,17 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
  tierBadge = chalk.bold.rgb(255, 200, 0)(` [${TIER_CYCLE_NAMES[tierFilterMode]}]`)
  }
 
+ // 📖 Origin filter badge — shown when filtering by provider is active
+ let originBadge = ''
+ if (originFilterMode > 0) {
+ const originKeys = [null, ...Object.keys(sources)]
+ const activeOriginKey = originKeys[originFilterMode]
+ const activeOriginName = activeOriginKey ? sources[activeOriginKey]?.name ?? activeOriginKey : null
+ if (activeOriginName) {
+ originBadge = chalk.bold.rgb(100, 200, 255)(` [${activeOriginName}]`)
+ }
+ }
+
  // 📖 Column widths (generous spacing with margins)
  const W_RANK = 6
  const W_TIER = 6
@@ -471,7 +530,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
 
  const lines = [
  '',
- ` ${chalk.bold('⚡ Free Coding Models')} ${chalk.dim('v' + LOCAL_VERSION)}${modeBadge}${modeHint}${tierBadge} ` +
+ ` ${chalk.bold('⚡ Free Coding Models')} ${chalk.dim('v' + LOCAL_VERSION)}${modeBadge}${modeHint}${tierBadge}${originBadge} ` +
  chalk.greenBright(`✅ ${up}`) + chalk.dim(' up ') +
  chalk.yellow(`⏱ ${timeout}`) + chalk.dim(' timeout ') +
  chalk.red(`❌ ${down}`) + chalk.dim(' down ') +
@@ -509,7 +568,10 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
  // 📖 Now colorize after padding is calculated on plain text
  const rankH_c = colorFirst(rankH, W_RANK)
  const tierH_c = colorFirst('Tier', W_TIER)
- const originH_c = sortColumn === 'origin' ? chalk.bold.cyan(originH.padEnd(W_SOURCE)) : colorFirst(originH, W_SOURCE)
+ const originLabel = 'Origin(N)'
+ const originH_c = sortColumn === 'origin'
+ ? chalk.bold.cyan(originLabel.padEnd(W_SOURCE))
+ : (originFilterMode > 0 ? chalk.bold.rgb(100, 200, 255)(originLabel.padEnd(W_SOURCE)) : colorFirst(originLabel, W_SOURCE))
  const modelH_c = colorFirst(modelH, W_MODEL)
  const sweH_c = sortColumn === 'swe' ? chalk.bold.cyan(sweH.padEnd(W_SWE)) : colorFirst(sweH, W_SWE)
  const ctxH_c = sortColumn === 'ctx' ? chalk.bold.cyan(ctxH.padEnd(W_CTX)) : colorFirst(ctxH, W_CTX)
@@ -707,7 +769,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
  : mode === 'opencode-desktop'
  ? chalk.rgb(0, 200, 255)('Enter→OpenDesktop')
  : chalk.rgb(0, 200, 255)('Enter→OpenCode')
- lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • T Filter tier • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • Ctrl+C Exit`))
+ lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • T Tier • N Origin • W↓/X↑ (${intervalSec}s) • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • `) + chalk.yellow('K') + chalk.dim(` Help • Ctrl+C Exit`))
  lines.push('')
  lines.push(chalk.dim(' Made with ') + '💖 & ☕' + chalk.dim(' by ') + '\x1b]8;;https://github.com/vava-nessa\x1b\\vava-nessa\x1b]8;;\x1b\\' + chalk.dim(' • ') + '🫂 ' + chalk.cyanBright('\x1b]8;;https://discord.gg/5MbTnDC3Md\x1b\\Join our Discord!\x1b]8;;\x1b\\' ) + chalk.dim(' • ') + '⭐ ' + '\x1b]8;;https://github.com/vava-nessa/free-coding-models\x1b\\Read the docs on GitHub\x1b]8;;\x1b\\')
  lines.push(chalk.dim(' 💬 Discord: ') + chalk.cyanBright('https://discord.gg/5MbTnDC3Md'))
@@ -777,9 +839,15 @@ function getOpenCodeModelId(providerKey, modelId) {
 
  // 📖 Env var names per provider -- used for passing resolved keys to child processes
  const ENV_VAR_NAMES = {
- nvidia: 'NVIDIA_API_KEY',
- groq: 'GROQ_API_KEY',
- cerebras: 'CEREBRAS_API_KEY',
+ nvidia: 'NVIDIA_API_KEY',
+ groq: 'GROQ_API_KEY',
+ cerebras: 'CEREBRAS_API_KEY',
+ sambanova: 'SAMBANOVA_API_KEY',
+ openrouter: 'OPENROUTER_API_KEY',
+ codestral: 'CODESTRAL_API_KEY',
+ hyperbolic: 'HYPERBOLIC_API_KEY',
+ scaleway: 'SCALEWAY_API_KEY',
+ googleai: 'GOOGLE_API_KEY',
  }
 
  // 📖 OpenCode config location varies by platform
@@ -990,6 +1058,67 @@ After installation, you can use: opencode --model ${modelRef}`
  },
  models: {}
  }
+ } else if (providerKey === 'sambanova') {
+ // 📖 SambaNova is OpenAI-compatible — uses @ai-sdk/openai-compatible with their base URL
+ config.provider.sambanova = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'SambaNova',
+ options: {
+ baseURL: 'https://api.sambanova.ai/v1',
+ apiKey: '{env:SAMBANOVA_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'openrouter') {
+ config.provider.openrouter = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'OpenRouter',
+ options: {
+ baseURL: 'https://openrouter.ai/api/v1',
+ apiKey: '{env:OPENROUTER_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'codestral') {
+ config.provider.codestral = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Mistral Codestral',
+ options: {
+ baseURL: 'https://codestral.mistral.ai/v1',
+ apiKey: '{env:CODESTRAL_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'hyperbolic') {
+ config.provider.hyperbolic = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Hyperbolic',
+ options: {
+ baseURL: 'https://api.hyperbolic.xyz/v1',
+ apiKey: '{env:HYPERBOLIC_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'scaleway') {
+ config.provider.scaleway = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Scaleway',
+ options: {
+ baseURL: 'https://api.scaleway.ai/v1',
+ apiKey: '{env:SCALEWAY_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'googleai') {
+ config.provider.googleai = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Google AI Studio',
+ options: {
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
+ apiKey: '{env:GOOGLE_API_KEY}'
+ },
+ models: {}
+ }
  }
  }
 
@@ -1159,6 +1288,67 @@ ${isWindows ? 'set NVIDIA_API_KEY=your_key_here' : 'export NVIDIA_API_KEY=your_k
  },
  models: {}
  }
+ } else if (providerKey === 'sambanova') {
+ // 📖 SambaNova is OpenAI-compatible — uses @ai-sdk/openai-compatible with their base URL
+ config.provider.sambanova = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'SambaNova',
+ options: {
+ baseURL: 'https://api.sambanova.ai/v1',
+ apiKey: '{env:SAMBANOVA_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'openrouter') {
+ config.provider.openrouter = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'OpenRouter',
+ options: {
+ baseURL: 'https://openrouter.ai/api/v1',
+ apiKey: '{env:OPENROUTER_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'codestral') {
+ config.provider.codestral = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Mistral Codestral',
+ options: {
+ baseURL: 'https://codestral.mistral.ai/v1',
+ apiKey: '{env:CODESTRAL_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'hyperbolic') {
+ config.provider.hyperbolic = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Hyperbolic',
+ options: {
+ baseURL: 'https://api.hyperbolic.xyz/v1',
+ apiKey: '{env:HYPERBOLIC_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'scaleway') {
+ config.provider.scaleway = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Scaleway',
+ options: {
+ baseURL: 'https://api.scaleway.ai/v1',
+ apiKey: '{env:SCALEWAY_API_KEY}'
+ },
+ models: {}
+ }
+ } else if (providerKey === 'googleai') {
+ config.provider.googleai = {
+ npm: '@ai-sdk/openai-compatible',
+ name: 'Google AI Studio',
+ options: {
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
+ apiKey: '{env:GOOGLE_API_KEY}'
+ },
+ models: {}
+ }
  }
  }
 
@@ -1513,6 +1703,7 @@ async function main() {
  settingsTestResults: {}, // 📖 { providerKey: 'pending'|'ok'|'fail'|null }
  config, // 📖 Live reference to the config object (updated on save)
  visibleSorted: [], // 📖 Cached visible+sorted models — shared between render loop and key handlers
+ helpVisible: false, // 📖 Whether the help overlay (K key) is active
  }
 
  // 📖 Re-clamp viewport on terminal resize
@@ -1538,10 +1729,19 @@ async function main() {
  // 📖 0=All, 1=S+, 2=S, 3=A+, 4=A, 5=A-, 6=B+, 7=B, 8=C
  const TIER_CYCLE = [null, 'S+', 'S', 'A+', 'A', 'A-', 'B+', 'B', 'C']
  let tierFilterMode = 0
+
+ // 📖 originFilterMode: index into ORIGIN_CYCLE, 0=All, then each provider key in order
+ const ORIGIN_CYCLE = [null, ...Object.keys(sources)]
+ let originFilterMode = 0
+
  function applyTierFilter() {
  const activeTier = TIER_CYCLE[tierFilterMode]
+ const activeOrigin = ORIGIN_CYCLE[originFilterMode]
  state.results.forEach(r => {
- r.hidden = activeTier !== null && r.tier !== activeTier
+ // 📖 Apply both tier and origin filters — model is hidden if it fails either
+ const tierHide = activeTier !== null && r.tier !== activeTier
+ const originHide = activeOrigin !== null && r.providerKey !== activeOrigin
+ r.hidden = tierHide || originHide
  })
  return state.results
  }
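The combined filter added above can be exercised as a standalone pure function. A minimal sketch — `isHidden` and the three-provider `ORIGIN_CYCLE` here are illustrative stand-ins, not the package's exact shape (the real code mutates `r.hidden` in place and builds the cycle from `sources`):

```javascript
// TIER_CYCLE mirrors the source; index 0 (null) means "filter off".
const TIER_CYCLE = [null, 'S+', 'S', 'A+', 'A', 'A-', 'B+', 'B', 'C']
const ORIGIN_CYCLE = [null, 'nvidia', 'groq', 'cerebras']

function isHidden(model, tierFilterMode, originFilterMode) {
  const activeTier = TIER_CYCLE[tierFilterMode]
  const activeOrigin = ORIGIN_CYCLE[originFilterMode]
  // A model is hidden if it fails either active filter
  const tierHide = activeTier !== null && model.tier !== activeTier
  const originHide = activeOrigin !== null && model.providerKey !== activeOrigin
  return tierHide || originHide
}

const kimi = { tier: 'S', providerKey: 'groq' }
console.log(isHidden(kimi, 0, 0)) // false — both filters off
console.log(isHidden(kimi, 2, 2)) // false — matches tier S and origin groq
console.log(isHidden(kimi, 1, 2)) // true  — tier filter is S+, model is S
```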
@@ -1611,6 +1811,42 @@ async function main() {
  return cleared.join('\n')
  }
 
+ // ─── Help overlay renderer ────────────────────────────────────────────────
+ // 📖 renderHelp: Draw the help overlay listing all key bindings.
+ // 📖 Toggled with K key. Gives users a quick reference without leaving the TUI.
+ function renderHelp() {
+ const EL = '\x1b[K'
+ const lines = []
+ lines.push('')
+ lines.push(` ${chalk.bold('❓ Keyboard Shortcuts')} ${chalk.dim('— press K or Esc to close')}`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Navigation')}`)
+ lines.push(` ${chalk.yellow('↑↓')} Navigate rows`)
+ lines.push(` ${chalk.yellow('Enter')} Select model and launch`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Sorting')}`)
+ lines.push(` ${chalk.yellow('R')} Rank ${chalk.yellow('Y')} Tier ${chalk.yellow('O')} Origin ${chalk.yellow('M')} Model`)
+ lines.push(` ${chalk.yellow('L')} Latest ping ${chalk.yellow('A')} Avg ping ${chalk.yellow('S')} SWE-bench score`)
+ lines.push(` ${chalk.yellow('C')} Context window ${chalk.yellow('H')} Health ${chalk.yellow('V')} Verdict ${chalk.yellow('U')} Uptime`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Filters')}`)
+ lines.push(` ${chalk.yellow('T')} Cycle tier filter ${chalk.dim('(All → S+ → S → A+ → A → A- → B+ → B → C → All)')}`)
+ lines.push(` ${chalk.yellow('N')} Cycle origin filter ${chalk.dim('(All → NIM → Groq → Cerebras → ... each provider → All)')}`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Controls')}`)
+ lines.push(` ${chalk.yellow('W')} Decrease ping interval (faster)`)
+ lines.push(` ${chalk.yellow('X')} Increase ping interval (slower)`)
+ lines.push(` ${chalk.yellow('Z')} Cycle launch mode ${chalk.dim('(OpenCode CLI → OpenCode Desktop → OpenClaw)')}`)
+ lines.push(` ${chalk.yellow('P')} Open settings ${chalk.dim('(manage API keys per provider, enable/disable, test)')}`)
+ lines.push(` ${chalk.yellow('K')} / ${chalk.yellow('Esc')} Show/hide this help`)
+ lines.push(` ${chalk.yellow('Ctrl+C')} Exit`)
+ lines.push('')
+ const cleared = lines.map(l => l + EL)
+ const remaining = state.terminalRows > 0 ? Math.max(0, state.terminalRows - cleared.length) : 0
+ for (let i = 0; i < remaining; i++) cleared.push(EL)
+ return cleared.join('\n')
+ }
+
  // ─── Settings key test helper ───────────────────────────────────────────────
  // 📖 Fires a single ping to the selected provider to verify the API key works.
  async function testProviderKey(providerKey) {
@@ -1646,6 +1882,12 @@ async function main() {
  const onKeyPress = async (str, key) => {
  if (!key) return
 
+ // 📖 Help overlay: Esc or K closes it — handle before everything else so Esc isn't swallowed elsewhere
+ if (state.helpVisible && (key.name === 'escape' || key.name === 'k')) {
+ state.helpVisible = false
+ return
+ }
+
  // ─── Settings overlay keyboard handling ───────────────────────────────────
  if (state.settingsOpen) {
  const providerKeys = Object.keys(sources)
@@ -1753,11 +1995,12 @@ async function main() {
  return
  }
 
- // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, N=context, H=health, V=verdict, U=uptime
+ // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, C=context, H=health, V=verdict, U=uptime
  // 📖 T is reserved for tier filter cycling — tier sort moved to Y
+ // 📖 N is now reserved for origin filter cycling
  const sortKeys = {
  'r': 'rank', 'y': 'tier', 'o': 'origin', 'm': 'model',
- 'l': 'ping', 'a': 'avg', 's': 'swe', 'n': 'ctx', 'h': 'condition', 'v': 'verdict', 'u': 'uptime'
+ 'l': 'ping', 'a': 'avg', 's': 'swe', 'c': 'ctx', 'h': 'condition', 'v': 'verdict', 'u': 'uptime'
  }
 
  if (sortKeys[key.name] && !key.ctrl) {
@@ -1797,6 +2040,24 @@ async function main() {
  return
  }
 
+ // 📖 Origin filter key: N = cycle through each provider (All → NIM → Groq → ... → All)
+ if (key.name === 'n') {
+ originFilterMode = (originFilterMode + 1) % ORIGIN_CYCLE.length
+ applyTierFilter()
+ // 📖 Recompute visible sorted list and reset cursor to avoid stale index into new filtered set
+ const visible = state.results.filter(r => !r.hidden)
+ state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
+ state.cursor = 0
+ state.scrollOffset = 0
+ return
+ }
+
+ // 📖 Help overlay key: K = toggle help overlay
+ if (key.name === 'k') {
+ state.helpVisible = !state.helpVisible
+ return
+ }
+
  // 📖 Mode toggle key: Z = cycle through modes (CLI → Desktop → OpenClaw)
  if (key.name === 'z') {
  const modeOrder = ['opencode', 'opencode-desktop', 'openclaw']
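The N-key handler's wrap-around is plain modulo arithmetic over the cycle array. An isolated sketch with an illustrative three-provider cycle (the real `ORIGIN_CYCLE` is built from `Object.keys(sources)`):

```javascript
// Each press advances one slot; the modulo wraps back to 0 (= All) after the last provider.
const ORIGIN_CYCLE = [null, 'nvidia', 'groq', 'cerebras']
let originFilterMode = 0

const seen = []
for (let press = 0; press < 5; press++) {
  originFilterMode = (originFilterMode + 1) % ORIGIN_CYCLE.length
  seen.push(ORIGIN_CYCLE[originFilterMode])
}
console.log(seen) // [ 'nvidia', 'groq', 'cerebras', null, 'nvidia' ]
```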
@@ -1899,7 +2160,9 @@ async function main() {
  }
  const content = state.settingsOpen
  ? renderSettings()
- : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows)
+ : state.helpVisible
+ ? renderHelp()
+ : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows, originFilterMode)
  process.stdout.write(ALT_HOME + content)
  }, Math.round(1000 / FPS))
 
@@ -1907,7 +2170,7 @@ async function main() {
  const initialVisible = state.results.filter(r => !r.hidden)
  state.visibleSorted = sortResults(initialVisible, state.sortColumn, state.sortDirection)
 
- process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows))
+ process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows, originFilterMode))
 
  // ── Continuous ping loop — ping all models every N seconds forever ──────────
 
package/lib/config.js CHANGED
@@ -12,14 +12,26 @@
  * 📖 Config JSON structure:
  * {
  * "apiKeys": {
- * "nvidia": "nvapi-xxx",
- * "groq": "gsk_xxx",
- * "cerebras": "csk_xxx"
+ * "nvidia": "nvapi-xxx",
+ * "groq": "gsk_xxx",
+ * "cerebras": "csk_xxx",
+ * "sambanova": "sn-xxx",
+ * "openrouter": "sk-or-xxx",
+ * "codestral": "csk-xxx",
+ * "hyperbolic": "eyJ...",
+ * "scaleway": "scw-xxx",
+ * "googleai": "AIza..."
  * },
  * "providers": {
- * "nvidia": { "enabled": true },
- * "groq": { "enabled": true },
- * "cerebras": { "enabled": true }
+ * "nvidia": { "enabled": true },
+ * "groq": { "enabled": true },
+ * "cerebras": { "enabled": true },
+ * "sambanova": { "enabled": true },
+ * "openrouter": { "enabled": true },
+ * "codestral": { "enabled": true },
+ * "hyperbolic": { "enabled": true },
+ * "scaleway": { "enabled": true },
+ * "googleai": { "enabled": true }
  * }
  * }
  *
@@ -52,9 +64,15 @@ const LEGACY_CONFIG_PATH = join(homedir(), '.free-coding-models')
  // 📖 Environment variable names per provider
  // 📖 These allow users to override config via env vars (useful for CI/headless setups)
  const ENV_VARS = {
- nvidia: 'NVIDIA_API_KEY',
- groq: 'GROQ_API_KEY',
- cerebras: 'CEREBRAS_API_KEY',
+ nvidia: 'NVIDIA_API_KEY',
+ groq: 'GROQ_API_KEY',
+ cerebras: 'CEREBRAS_API_KEY',
+ sambanova: 'SAMBANOVA_API_KEY',
+ openrouter: 'OPENROUTER_API_KEY',
+ codestral: 'CODESTRAL_API_KEY',
+ hyperbolic: 'HYPERBOLIC_API_KEY',
+ scaleway: 'SCALEWAY_API_KEY',
+ googleai: 'GOOGLE_API_KEY',
  }
 
  /**
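The comment above says env vars override the config file. A hedged sketch of that precedence — `resolveKey` is hypothetical (the real resolution lives elsewhere in lib/config.js), and only four providers are listed for brevity:

```javascript
// Mirrors the ENV_VARS map from the diff above (subset).
const ENV_VARS = {
  nvidia: 'NVIDIA_API_KEY',
  groq: 'GROQ_API_KEY',
  cerebras: 'CEREBRAS_API_KEY',
  sambanova: 'SAMBANOVA_API_KEY',
}

function resolveKey(providerKey, config, env = process.env) {
  // Env var wins over the config file, so CI/headless setups need no ~/.free-coding-models.json
  return env[ENV_VARS[providerKey]] ?? config.apiKeys?.[providerKey] ?? null
}

const config = { apiKeys: { groq: 'gsk_from_file' } }
console.log(resolveKey('groq', config, {}))                          // 'gsk_from_file'
console.log(resolveKey('groq', config, { GROQ_API_KEY: 'gsk_env' })) // 'gsk_env'
console.log(resolveKey('nvidia', config, {}))                        // null
```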
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "free-coding-models",
- "version": "0.1.52",
- "description": "Find the fastest coding LLM models in seconds ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
+ "version": "0.1.54",
+ "description": "Find the fastest coding LLM models in seconds \u2014 ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
  "nim",
package/sources.js CHANGED
@@ -27,8 +27,8 @@
27
27
  * 📖 Secondary: https://swe-rebench.com (independent evals, scores are lower)
28
28
  * 📖 Leaderboard tracker: https://www.marc0.dev/en/leaderboard
29
29
  *
30
- * @exports nvidiaNim, groq, cerebras — model arrays per provider
31
- * @exports sources — map of { nvidia, groq, cerebras } each with { name, url, models }
30
+ * @exports nvidiaNim, groq, cerebras, sambanova, openrouter, codestral, hyperbolic, scaleway, googleai — model arrays per provider
31
+ * @exports sources — map of { nvidia, groq, cerebras, sambanova, openrouter, codestral, hyperbolic, scaleway, googleai } each with { name, url, models }
32
32
  * @exports MODELS — flat array of [modelId, label, tier, sweScore, ctx, providerKey]
33
33
  *
34
34
  * 📖 MODELS now includes providerKey as 6th element so ping() knows which
@@ -100,6 +100,10 @@ export const groq = [
100
100
  ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
101
101
  ['qwen-qwq-32b', 'QwQ 32B', 'A+', '50.0%', '131k'],
102
102
  ['moonshotai/kimi-k2-instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
103
+ ['llama-3.1-8b-instant', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
104
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
105
+ ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
106
+ ['qwen/qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '131k'],
103
107
  ]
104
108
 
105
109
  // 📖 Cerebras source - https://cloud.cerebras.ai
@@ -108,6 +112,89 @@ export const cerebras = [
  ['llama3.3-70b', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
  ['llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '10M'],
  ['qwen-3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['qwen-3-235b-a22b', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['llama3.1-8b', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ['glm-4.6', 'GLM 4.6', 'A-', '38.0%', '128k'],
+ ]
+
+ // 📖 SambaNova source - https://cloud.sambanova.ai
+ // 📖 Free trial: $5 credits for 3 months — API keys at https://cloud.sambanova.ai/apis
+ // 📖 OpenAI-compatible API, supports all major coding models including DeepSeek V3/R1, Qwen3, Llama 4
+ export const sambanova = [
+ // ── S+ tier ──
+ ['Qwen3-235B-A22B-Instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ // ── S tier ──
+ ['DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['DeepSeek-V3.1', 'DeepSeek V3.1', 'S', '62.0%', '128k'],
+ ['DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Llama-4-Maverick-17B-128E-Instruct', 'Llama 4 Maverick', 'S', '62.0%', '1M'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3.1-Terminus', 'DeepSeek V3.1 Term', 'S', '68.4%', '128k'],
+ // ── A+ tier ──
+ ['Qwen3-32B', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ // ── A tier ──
+ ['DeepSeek-R1-Distill-Llama-70B', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ // ── A- tier ──
+ ['Meta-Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ // ── B tier ──
+ ['Meta-Llama-3.1-8B-Instruct', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ]
+
+ // 📖 OpenRouter source - https://openrouter.ai
+ // 📖 Free :free models with shared quota — 50 free req/day
+ // 📖 API keys at https://openrouter.ai/settings/keys
+ export const openrouter = [
+ ['qwen/qwen3-coder:free', 'Qwen3 Coder', 'S+', '70.6%', '256k'],
+ ['stepfun/step-3.5-flash:free', 'Step 3.5 Flash', 'S+', '74.4%', '256k'],
+ ['deepseek/deepseek-r1-0528:free', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '128k'],
+ ['nvidia/nemotron-3-nano-30b-a3b:free', 'Nemotron Nano 30B', 'A', '43.0%', '128k'],
+ ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ]
+
+ // 📖 Mistral Codestral source - https://codestral.mistral.ai
+ // 📖 Free coding model — 30 req/min, 2000/day (phone number required for key)
+ // 📖 API keys at https://codestral.mistral.ai
+ export const codestral = [
+ ['codestral-latest', 'Codestral', 'B+', '34.0%', '256k'],
+ ]
+
+ // 📖 Hyperbolic source - https://app.hyperbolic.ai
+ // 📖 $1 free trial credits — API keys at https://app.hyperbolic.xyz/settings
+ export const hyperbolic = [
+ ['qwen/qwen3-coder-480b-a35b-instruct', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
+ ['deepseek-ai/DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['moonshotai/Kimi-K2-Instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['Qwen/Qwen3-235B-A22B', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['qwen/qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
+ ['meta-llama/Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['meta-llama/Meta-Llama-3.1-405B-Instruct', 'Llama 3.1 405B', 'A', '44.0%', '128k'],
+ ]
+
+ // 📖 Scaleway source - https://console.scaleway.com
+ // 📖 1M free tokens — API keys at https://console.scaleway.com/iam/api-keys
+ export const scaleway = [
+ ['devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
+ ['qwen3-235b-a22b-instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['qwen3-coder-30b-a3b-instruct', 'Qwen3 Coder 30B', 'A+', '55.0%', '32k'],
+ ['llama-3.3-70b-instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ ['mistral-small-3.2-24b-instruct-2506', 'Mistral Small 3.2', 'B+', '30.0%', '128k'],
+ ]
+
+ // 📖 Google AI Studio source - https://aistudio.google.com
+ // 📖 Free Gemma models — 14.4K req/day, API keys at https://aistudio.google.com/apikey
+ export const googleai = [
+ ['gemma-3-27b-it', 'Gemma 3 27B', 'B', '22.0%', '128k'],
+ ['gemma-3-12b-it', 'Gemma 3 12B', 'C', '15.0%', '128k'],
+ ['gemma-3-4b-it', 'Gemma 3 4B', 'C', '10.0%', '128k'],
  ]
 
  // 📖 All sources combined - used by the main script
@@ -128,6 +215,36 @@ export const sources = {
  url: 'https://api.cerebras.ai/v1/chat/completions',
  models: cerebras,
  },
+ sambanova: {
+ name: 'SambaNova',
+ url: 'https://api.sambanova.ai/v1/chat/completions',
+ models: sambanova,
+ },
+ openrouter: {
+ name: 'OpenRouter',
+ url: 'https://openrouter.ai/api/v1/chat/completions',
+ models: openrouter,
+ },
+ codestral: {
+ name: 'Codestral',
+ url: 'https://codestral.mistral.ai/v1/chat/completions',
+ models: codestral,
+ },
+ hyperbolic: {
+ name: 'Hyperbolic',
+ url: 'https://api.hyperbolic.xyz/v1/chat/completions',
+ models: hyperbolic,
+ },
+ scaleway: {
+ name: 'Scaleway',
+ url: 'https://api.scaleway.ai/v1/chat/completions',
+ models: scaleway,
+ },
+ googleai: {
+ name: 'Google AI',
+ url: 'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions',
+ models: googleai,
+ },
  }
 
  // 📖 Flatten all models from all sources — each entry includes providerKey as 6th element