free-coding-models 0.1.51 → 0.1.54

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,14 +2,14 @@
2
2
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
3
3
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
4
4
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
5
- <img src="https://img.shields.io/badge/models-53-76b900?logo=nvidia" alt="models count">
6
- <img src="https://img.shields.io/badge/providers-3-blue" alt="providers count">
5
+ <img src="https://img.shields.io/badge/models-101-76b900?logo=nvidia" alt="models count">
6
+ <img src="https://img.shields.io/badge/providers-9-blue" alt="providers count">
7
7
  </p>
8
8
 
9
9
  <h1 align="center">free-coding-models</h1>
10
10
 
11
11
  <p align="center">
12
- <strong>Want to contribute or discuss the project?</strong> Join our <a href="https://discord.gg/5MbTnDC3Md">Discord community</a>!
12
+ 💬 <a href="https://discord.gg/5MbTnDC3Md">Let's talk about the project on Discord</a>
13
13
  </p>
14
14
 
15
15
  <p align="center">
@@ -24,7 +24,7 @@
24
24
 
25
25
  <p align="center">
26
26
  <strong>Find the fastest coding LLM models in seconds</strong><br>
27
- <sub>Ping free models from NVIDIA NIM, Groq, and Cerebras in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
27
+ <sub>Ping free models from NVIDIA NIM, Groq, Cerebras, SambaNova, and more in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
28
28
  </p>
29
29
 
30
30
  <p align="center">
@@ -47,7 +47,7 @@
47
47
  ## ✨ Features
48
48
 
49
49
  - **🎯 Coding-focused** — Only LLM models optimized for code generation, not chat or vision
50
- - **🌐 Multi-provider** — 53 models from NVIDIA NIM, Groq, and Cerebras — all free to use
50
+ - **🌐 Multi-provider** — 101 models from NVIDIA NIM, Groq, Cerebras, SambaNova, OpenRouter, Codestral, Hyperbolic, Scaleway, and Google AI — all free to use
51
51
  - **⚙️ Settings screen** — Press `P` to manage provider API keys, enable/disable providers, and test keys live
52
52
  - **🚀 Parallel pings** — All models tested simultaneously via native `fetch`
53
53
  - **📊 Real-time animation** — Watch latency appear live in alternate screen buffer
@@ -76,10 +76,16 @@ Before using `free-coding-models`, make sure you have:
76
76
  - **NVIDIA NIM** — [build.nvidia.com](https://build.nvidia.com) → Profile → API Keys → Generate
77
77
  - **Groq** — [console.groq.com/keys](https://console.groq.com/keys) → Create API Key
78
78
  - **Cerebras** — [cloud.cerebras.ai](https://cloud.cerebras.ai) → API Keys → Create
79
+ - **SambaNova** — [cloud.sambanova.ai/apis](https://cloud.sambanova.ai/apis) → API Keys → Create ($5 free trial, 3 months)
80
+ - **OpenRouter** — [openrouter.ai/settings/keys](https://openrouter.ai/settings/keys) → Create key (50 free req/day)
81
+ - **Mistral Codestral** — [codestral.mistral.ai](https://codestral.mistral.ai) → API Keys (30 req/min, 2000/day — phone required)
82
+ - **Hyperbolic** — [app.hyperbolic.ai/settings](https://app.hyperbolic.ai/settings) → API Keys ($1 free trial)
83
+ - **Scaleway** — [console.scaleway.com/iam/api-keys](https://console.scaleway.com/iam/api-keys) → IAM → API Keys (1M free tokens)
84
+ - **Google AI Studio** — [aistudio.google.com/apikey](https://aistudio.google.com/apikey) → Get API key (free Gemma models, 14.4K req/day)
79
85
  3. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
80
86
  4. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration
81
87
 
82
- > 💡 **Tip:** You don't need all three providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
88
+ > 💡 **Tip:** You don't need all nine providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
83
89
 
84
90
  ---
85
91
 
@@ -157,13 +163,13 @@ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get
157
163
  Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.
158
164
 
159
165
  **How it works:**
160
- 1. **Ping phase** — All enabled models are pinged in parallel (up to 53 across 3 providers)
166
+ 1. **Ping phase** — All enabled models are pinged in parallel (up to 101 across 9 providers)
161
167
  2. **Continuous monitoring** — Models are re-pinged every 2 seconds forever
162
168
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
163
169
  4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
164
170
  5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw
165
171
 
166
- Setup wizard (first run — walks through all 3 providers):
172
+ Setup wizard (first run — walks through all 9 providers):
167
173
 
168
174
  ```
169
175
  🔑 First-time setup — API keys
@@ -184,11 +190,16 @@ Setup wizard (first run — walks through all 3 providers):
184
190
  API Keys → Create
185
191
  Enter key (or Enter to skip):
186
192
 
193
+ ● SambaNova
194
+ Free key at: https://cloud.sambanova.ai/apis
195
+ API Keys → Create ($5 free trial, 3 months)
196
+ Enter key (or Enter to skip):
197
+
187
198
  ✅ 2 key(s) saved to ~/.free-coding-models.json
188
199
  You can add or change keys anytime with the P key in the TUI.
189
200
  ```
190
201
 
191
- You don't need all three — skip any provider by pressing Enter. At least one key is required.
202
+ You don't need all nine — skip any provider by pressing Enter. At least one key is required.
192
203
 
193
204
  ### Adding or changing keys later
194
205
 
@@ -246,7 +257,7 @@ CEREBRAS_API_KEY=csk_xxx free-coding-models
246
257
 
247
258
  ## 🤖 Coding Models
248
259
 
249
- **53 coding models** across 3 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
260
+ **101 coding models** across 9 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
250
261
 
251
262
  ### NVIDIA NIM (44 models)
252
263
 
@@ -601,4 +612,8 @@ We welcome contributions! Feel free to open issues, submit pull requests, or get
601
612
  **A:** No — `free-coding-models` configures OpenClaw to use NVIDIA NIM's remote API, so models run on NVIDIA's infrastructure. No GPU or local setup required.
602
613
 
603
614
  ## 📧 Support
604
- For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/5MbTnDC3Md
615
+
616
+ For questions or issues, open a [GitHub issue](https://github.com/vava-nessa/free-coding-models/issues).
617
+
618
+ 💬 Let's talk about the project on Discord: https://discord.gg/5MbTnDC3Md
619
+ 📚 Read the docs on GitHub: https://github.com/vava-nessa/free-coding-models#readme
+
+ ⚠️ free-coding-models is a BETA TUI — expect rough edges and occasional crashes.
@@ -198,6 +198,54 @@ async function promptApiKey(config) {
198
198
  hint: 'API Keys → Create',
199
199
  prefix: 'csk_ / cauth_',
200
200
  },
201
+ {
202
+ key: 'sambanova',
203
+ label: 'SambaNova',
204
+ color: chalk.rgb(255, 165, 0),
205
+ url: 'https://cloud.sambanova.ai/apis',
206
+ hint: 'API Keys → Create ($5 free trial, 3 months)',
207
+ prefix: 'sn-',
208
+ },
209
+ {
210
+ key: 'openrouter',
211
+ label: 'OpenRouter',
212
+ color: chalk.rgb(120, 80, 255),
213
+ url: 'https://openrouter.ai/settings/keys',
214
+ hint: 'API Keys → Create key (50 free req/day, shared quota)',
215
+ prefix: 'sk-or-',
216
+ },
217
+ {
218
+ key: 'codestral',
219
+ label: 'Mistral Codestral',
220
+ color: chalk.rgb(255, 100, 100),
221
+ url: 'https://codestral.mistral.ai',
222
+ hint: 'API Keys → Create key (30 req/min, 2000/day — phone required)',
223
+ prefix: 'csk-',
224
+ },
225
+ {
226
+ key: 'hyperbolic',
227
+ label: 'Hyperbolic',
228
+ color: chalk.rgb(0, 200, 150),
229
+ url: 'https://app.hyperbolic.ai/settings',
230
+ hint: 'Settings → API Keys ($1 free trial)',
231
+ prefix: 'eyJ',
232
+ },
233
+ {
234
+ key: 'scaleway',
235
+ label: 'Scaleway',
236
+ color: chalk.rgb(130, 0, 250),
237
+ url: 'https://console.scaleway.com/iam/api-keys',
238
+ hint: 'IAM → API Keys (1M free tokens)',
239
+ prefix: 'scw-',
240
+ },
241
+ {
242
+ key: 'googleai',
243
+ label: 'Google AI Studio',
244
+ color: chalk.rgb(66, 133, 244),
245
+ url: 'https://aistudio.google.com/apikey',
246
+ hint: 'Get API key (free Gemma models, 14.4K req/day)',
247
+ prefix: 'AIza',
248
+ },
201
249
  ]
202
250
 
203
251
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout })
@@ -412,7 +460,7 @@ function calculateViewport(terminalRows, scrollOffset, totalModels) {
412
460
  }
413
461
 
414
462
  // 📖 renderTable: mode param controls footer hint text (opencode vs openclaw)
415
- function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode', tierFilterMode = 0, scrollOffset = 0, terminalRows = 0) {
463
+ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = 'avg', sortDirection = 'asc', pingInterval = PING_INTERVAL, lastPingTime = Date.now(), mode = 'opencode', tierFilterMode = 0, scrollOffset = 0, terminalRows = 0, originFilterMode = 0) {
416
464
  // 📖 Filter out hidden models for display
417
465
  const visibleResults = results.filter(r => !r.hidden)
418
466
 
@@ -453,6 +501,17 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
453
501
  tierBadge = chalk.bold.rgb(255, 200, 0)(` [${TIER_CYCLE_NAMES[tierFilterMode]}]`)
454
502
  }
455
503
 
504
+ // 📖 Origin filter badge — shown when filtering by provider is active
505
+ let originBadge = ''
506
+ if (originFilterMode > 0) {
507
+ const originKeys = [null, ...Object.keys(sources)]
508
+ const activeOriginKey = originKeys[originFilterMode]
509
+ const activeOriginName = activeOriginKey ? sources[activeOriginKey]?.name ?? activeOriginKey : null
510
+ if (activeOriginName) {
511
+ originBadge = chalk.bold.rgb(100, 200, 255)(` [${activeOriginName}]`)
512
+ }
513
+ }
514
+
456
515
  // 📖 Column widths (generous spacing with margins)
457
516
  const W_RANK = 6
458
517
  const W_TIER = 6
@@ -471,7 +530,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
471
530
 
472
531
  const lines = [
473
532
  '',
474
- ` ${chalk.bold('⚡ Free Coding Models')} ${chalk.dim('v' + LOCAL_VERSION)}${modeBadge}${modeHint}${tierBadge} ` +
533
+ ` ${chalk.bold('⚡ Free Coding Models')} ${chalk.dim('v' + LOCAL_VERSION)}${modeBadge}${modeHint}${tierBadge}${originBadge} ` +
475
534
  chalk.greenBright(`✅ ${up}`) + chalk.dim(' up ') +
476
535
  chalk.yellow(`⏱ ${timeout}`) + chalk.dim(' timeout ') +
477
536
  chalk.red(`❌ ${down}`) + chalk.dim(' down ') +
@@ -509,7 +568,10 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
509
568
  // 📖 Now colorize after padding is calculated on plain text
510
569
  const rankH_c = colorFirst(rankH, W_RANK)
511
570
  const tierH_c = colorFirst('Tier', W_TIER)
512
- const originH_c = sortColumn === 'origin' ? chalk.bold.cyan(originH.padEnd(W_SOURCE)) : colorFirst(originH, W_SOURCE)
571
+ const originLabel = 'Origin(N)'
572
+ const originH_c = sortColumn === 'origin'
573
+ ? chalk.bold.cyan(originLabel.padEnd(W_SOURCE))
574
+ : (originFilterMode > 0 ? chalk.bold.rgb(100, 200, 255)(originLabel.padEnd(W_SOURCE)) : colorFirst(originLabel, W_SOURCE))
513
575
  const modelH_c = colorFirst(modelH, W_MODEL)
514
576
  const sweH_c = sortColumn === 'swe' ? chalk.bold.cyan(sweH.padEnd(W_SWE)) : colorFirst(sweH, W_SWE)
515
577
  const ctxH_c = sortColumn === 'ctx' ? chalk.bold.cyan(ctxH.padEnd(W_CTX)) : colorFirst(ctxH, W_CTX)
@@ -707,9 +769,10 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
707
769
  : mode === 'opencode-desktop'
708
770
  ? chalk.rgb(0, 200, 255)('Enter→OpenDesktop')
709
771
  : chalk.rgb(0, 200, 255)('Enter→OpenCode')
710
- lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • W↓/X↑ Interval (${intervalSec}s) • T Filter tier • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • Ctrl+C Exit`))
772
+ lines.push(chalk.dim(` ↑↓ Navigate • `) + actionHint + chalk.dim(` • R/Y/O/M/L/A/S/C/H/V/U Sort • T Tier • N Origin • W↓/X↑ (${intervalSec}s) • Z Mode • `) + chalk.yellow('P') + chalk.dim(` Settings • `) + chalk.yellow('K') + chalk.dim(` Help • Ctrl+C Exit`))
711
773
  lines.push('')
712
774
  lines.push(chalk.dim(' Made with ') + '💖 & ☕' + chalk.dim(' by ') + '\x1b]8;;https://github.com/vava-nessa\x1b\\vava-nessa\x1b]8;;\x1b\\' + chalk.dim(' • ') + '🫂 ' + chalk.cyanBright('\x1b]8;;https://discord.gg/5MbTnDC3Md\x1b\\Join our Discord!\x1b]8;;\x1b\\') + chalk.dim(' • ') + '⭐ ' + '\x1b]8;;https://github.com/vava-nessa/free-coding-models\x1b\\Read the docs on GitHub\x1b]8;;\x1b\\')
775
+ lines.push(chalk.dim(' 💬 Discord: ') + chalk.cyanBright('https://discord.gg/5MbTnDC3Md'))
713
776
  lines.push('')
714
777
  // 📖 Append \x1b[K (erase to EOL) to each line so leftover chars from previous
715
778
  // 📖 frames are cleared. Then pad with blank cleared lines to fill the terminal,
@@ -758,6 +821,35 @@ const isWindows = process.platform === 'win32'
758
821
  const isMac = process.platform === 'darwin'
759
822
  const isLinux = process.platform === 'linux'
760
823
 
824
+ // ─── OpenCode model ID mapping ─────────────────────────────────────────────────
825
+ // 📖 Source model IDs -> OpenCode built-in model IDs (only where they differ)
826
+ // 📖 Groq's API aliases short names to full names, but OpenCode does exact ID matching
827
+ // 📖 against its built-in model list. Unmapped models pass through as-is.
828
+ const OPENCODE_MODEL_MAP = {
829
+ groq: {
830
+ 'moonshotai/kimi-k2-instruct': 'moonshotai/kimi-k2-instruct-0905',
831
+ 'meta-llama/llama-4-scout-17b-16e-preview': 'meta-llama/llama-4-scout-17b-16e-instruct',
832
+ 'meta-llama/llama-4-maverick-17b-128e-preview': 'meta-llama/llama-4-maverick-17b-128e-instruct',
833
+ }
834
+ }
835
+
836
+ function getOpenCodeModelId(providerKey, modelId) {
837
+ return OPENCODE_MODEL_MAP[providerKey]?.[modelId] || modelId
838
+ }
839
+
840
+ // 📖 Env var names per provider -- used for passing resolved keys to child processes
841
+ const ENV_VAR_NAMES = {
842
+ nvidia: 'NVIDIA_API_KEY',
843
+ groq: 'GROQ_API_KEY',
844
+ cerebras: 'CEREBRAS_API_KEY',
845
+ sambanova: 'SAMBANOVA_API_KEY',
846
+ openrouter: 'OPENROUTER_API_KEY',
847
+ codestral: 'CODESTRAL_API_KEY',
848
+ hyperbolic: 'HYPERBOLIC_API_KEY',
849
+ scaleway: 'SCALEWAY_API_KEY',
850
+ googleai: 'GOOGLE_API_KEY',
851
+ }
852
+
761
853
  // 📖 OpenCode config location varies by platform
762
854
  // 📖 Windows: %APPDATA%\opencode\opencode.json (or sometimes ~/.config/opencode)
763
855
  // 📖 macOS/Linux: ~/.config/opencode/opencode.json
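The new `OPENCODE_MODEL_MAP` / `getOpenCodeModelId` pair can be exercised in isolation. A minimal sketch (the map entries are copied from the diff; the unmapped model ID used below is a made-up example):

```javascript
// Known source IDs are rewritten to OpenCode's built-in IDs; anything
// without a mapping passes through unchanged.
const OPENCODE_MODEL_MAP = {
  groq: {
    'moonshotai/kimi-k2-instruct': 'moonshotai/kimi-k2-instruct-0905',
    'meta-llama/llama-4-scout-17b-16e-preview': 'meta-llama/llama-4-scout-17b-16e-instruct',
    'meta-llama/llama-4-maverick-17b-128e-preview': 'meta-llama/llama-4-maverick-17b-128e-instruct',
  },
}

function getOpenCodeModelId(providerKey, modelId) {
  // Optional chaining handles both unknown providers and unknown models
  return OPENCODE_MODEL_MAP[providerKey]?.[modelId] || modelId
}

console.log(getOpenCodeModelId('groq', 'moonshotai/kimi-k2-instruct'))
// mapped to the built-in '-0905' ID
console.log(getOpenCodeModelId('nvidia', 'some/unmapped-model'))
// no mapping: passes through as-is
```

This keeps the mapping data-driven, so supporting a new alias is a one-line map entry rather than a branch in the launch code.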
@@ -809,16 +901,50 @@ function checkNvidiaNimConfig() {
809
901
  )
810
902
  }
811
903
 
904
+ // ─── Shared OpenCode spawn helper ──────────────────────────────────────────────
905
+ // 📖 Resolves the actual API key from config/env and passes it as an env var
906
+ // 📖 to the child process so OpenCode's {env:GROQ_API_KEY} references work
907
+ // 📖 even when the key is only in ~/.free-coding-models.json (not in shell env).
908
+ async function spawnOpenCode(args, providerKey, fcmConfig) {
909
+ const envVarName = ENV_VAR_NAMES[providerKey]
910
+ const resolvedKey = getApiKey(fcmConfig, providerKey)
911
+ const childEnv = { ...process.env }
912
+ if (envVarName && resolvedKey) childEnv[envVarName] = resolvedKey
913
+
914
+ const { spawn } = await import('child_process')
915
+ const child = spawn('opencode', args, {
916
+ stdio: 'inherit',
917
+ shell: true,
918
+ detached: false,
919
+ env: childEnv
920
+ })
921
+
922
+ return new Promise((resolve, reject) => {
923
+ child.on('exit', resolve)
924
+ child.on('error', (err) => {
925
+ if (err.code === 'ENOENT') {
926
+ console.error(chalk.red('\n X Could not find "opencode" -- is it installed and in your PATH?'))
927
+ console.error(chalk.dim(' Install: npm i -g opencode or see https://opencode.ai'))
928
+ resolve(1)
929
+ } else {
930
+ reject(err)
931
+ }
932
+ })
933
+ })
934
+ }
935
+
812
936
  // ─── Start OpenCode ────────────────────────────────────────────────────────────
813
937
  // 📖 Launches OpenCode with the selected model.
814
938
  // 📖 Handles all 3 providers: nvidia (needs custom provider config), groq & cerebras (built-in in OpenCode).
815
939
  // 📖 For nvidia: checks if NIM is configured, sets provider.models entry, spawns with nvidia/model-id.
816
- // 📖 For groq/cerebras: OpenCode has built-in support just sets model in config and spawns.
940
+ // 📖 For groq/cerebras: OpenCode has built-in support -- just sets model in config and spawns.
817
941
  // 📖 Model format: { modelId, label, tier, providerKey }
818
- async function startOpenCode(model) {
942
+ // 📖 fcmConfig: the free-coding-models config (for resolving API keys)
943
+ async function startOpenCode(model, fcmConfig) {
819
944
  const providerKey = model.providerKey ?? 'nvidia'
820
- // 📖 Full model reference string used in OpenCode config and --model flag
821
- const modelRef = `${providerKey}/${model.modelId}`
945
+ // 📖 Map model ID to OpenCode's built-in ID if it differs from our source ID
946
+ const ocModelId = getOpenCodeModelId(providerKey, model.modelId)
947
+ const modelRef = `${providerKey}/${ocModelId}`
822
948
 
823
949
  if (providerKey === 'nvidia') {
824
950
  // 📖 NVIDIA NIM needs a custom provider block in OpenCode config (not built-in)
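The key-resolution pattern inside `spawnOpenCode` can be isolated into a pure helper. This is a sketch, not the package's code: `buildChildEnv` is a hypothetical name, and only two of the nine `ENV_VAR_NAMES` entries are reproduced here.

```javascript
// Copy the parent environment, then overlay the provider's key only when
// one was actually resolved — so OpenCode's {env:GROQ_API_KEY} references
// work even if the key lives only in ~/.free-coding-models.json.
const ENV_VAR_NAMES = { groq: 'GROQ_API_KEY', cerebras: 'CEREBRAS_API_KEY' }

function buildChildEnv(parentEnv, providerKey, resolvedKey) {
  const childEnv = { ...parentEnv }
  const envVarName = ENV_VAR_NAMES[providerKey]
  if (envVarName && resolvedKey) childEnv[envVarName] = resolvedKey
  return childEnv
}

const env = buildChildEnv({ PATH: '/usr/bin' }, 'groq', 'gsk_demo')
console.log(env.GROQ_API_KEY) // resolved key is visible to the child
console.log(env.PATH)         // existing variables are preserved
```

Because the helper never mutates `parentEnv`, a failed spawn leaves the parent process environment untouched.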
@@ -840,11 +966,9 @@ async function startOpenCode(model) {
840
966
  config.model = modelRef
841
967
 
842
968
  // 📖 Register the model in the nvidia provider's models section
843
- // 📖 OpenCode requires models to be explicitly listed in provider.models
844
- // 📖 to recognize them — without this, it falls back to the previous default
845
969
  if (config.provider?.nvidia) {
846
970
  if (!config.provider.nvidia.models) config.provider.nvidia.models = {}
847
- config.provider.nvidia.models[model.modelId] = { name: model.label }
971
+ config.provider.nvidia.models[ocModelId] = { name: model.label }
848
972
  }
849
973
 
850
974
  saveOpenCodeConfig(config)
@@ -863,27 +987,9 @@ async function startOpenCode(model) {
863
987
  console.log(chalk.dim(' Starting OpenCode…'))
864
988
  console.log()
865
989
 
866
- const { spawn } = await import('child_process')
867
- const child = spawn('opencode', ['--model', modelRef], {
868
- stdio: 'inherit',
869
- shell: true,
870
- detached: false
871
- })
872
-
873
- await new Promise((resolve, reject) => {
874
- child.on('exit', resolve)
875
- child.on('error', (err) => {
876
- if (err.code === 'ENOENT') {
877
- console.error(chalk.red('\n ✗ Could not find "opencode" — is it installed and in your PATH?'))
878
- console.error(chalk.dim(' Install: npm i -g opencode or see https://opencode.ai'))
879
- resolve(1)
880
- } else {
881
- reject(err)
882
- }
883
- })
884
- })
990
+ await spawnOpenCode(['--model', modelRef], providerKey, fcmConfig)
885
991
  } else {
886
- // 📖 NVIDIA NIM not configured show install prompt
992
+ // 📖 NVIDIA NIM not configured -- show install prompt
887
993
  console.log(chalk.yellow(' ⚠ NVIDIA NIM not configured in OpenCode'))
888
994
  console.log()
889
995
  console.log(chalk.dim(' Starting OpenCode with installation prompt…'))
@@ -914,29 +1020,11 @@ After installation, you can use: opencode --model ${modelRef}`
914
1020
  console.log(chalk.dim(' Starting OpenCode…'))
915
1021
  console.log()
916
1022
 
917
- const { spawn } = await import('child_process')
918
- const child = spawn('opencode', [], {
919
- stdio: 'inherit',
920
- shell: true,
921
- detached: false
922
- })
923
-
924
- await new Promise((resolve, reject) => {
925
- child.on('exit', resolve)
926
- child.on('error', (err) => {
927
- if (err.code === 'ENOENT') {
928
- console.error(chalk.red('\n ✗ Could not find "opencode" — is it installed and in your PATH?'))
929
- console.error(chalk.dim(' Install: npm i -g opencode or see https://opencode.ai'))
930
- resolve(1)
931
- } else {
932
- reject(err)
933
- }
934
- })
935
- })
1023
+ await spawnOpenCode([], providerKey, fcmConfig)
936
1024
  }
937
1025
  } else {
938
- // 📖 Groq: built-in OpenCode provider needs provider block with apiKey in opencode.json.
939
- // 📖 Cerebras: NOT built-in needs @ai-sdk/openai-compatible + baseURL, like NVIDIA.
1026
+ // 📖 Groq: built-in OpenCode provider -- needs provider block with apiKey in opencode.json.
1027
+ // 📖 Cerebras: NOT built-in -- needs @ai-sdk/openai-compatible + baseURL, like NVIDIA.
940
1028
  // 📖 Both need the model registered in provider.<key>.models so OpenCode can find it.
941
1029
  console.log(chalk.green(` 🚀 Setting ${chalk.bold(model.label)} as default…`))
942
1030
  console.log(chalk.dim(` Model: ${modelRef}`))
@@ -970,13 +1058,77 @@ After installation, you can use: opencode --model ${modelRef}`
970
1058
  },
971
1059
  models: {}
972
1060
  }
1061
+ } else if (providerKey === 'sambanova') {
1062
+ // 📖 SambaNova is OpenAI-compatible — uses @ai-sdk/openai-compatible with their base URL
1063
+ config.provider.sambanova = {
1064
+ npm: '@ai-sdk/openai-compatible',
1065
+ name: 'SambaNova',
1066
+ options: {
1067
+ baseURL: 'https://api.sambanova.ai/v1',
1068
+ apiKey: '{env:SAMBANOVA_API_KEY}'
1069
+ },
1070
+ models: {}
1071
+ }
1072
+ } else if (providerKey === 'openrouter') {
1073
+ config.provider.openrouter = {
1074
+ npm: '@ai-sdk/openai-compatible',
1075
+ name: 'OpenRouter',
1076
+ options: {
1077
+ baseURL: 'https://openrouter.ai/api/v1',
1078
+ apiKey: '{env:OPENROUTER_API_KEY}'
1079
+ },
1080
+ models: {}
1081
+ }
1082
+ } else if (providerKey === 'codestral') {
1083
+ config.provider.codestral = {
1084
+ npm: '@ai-sdk/openai-compatible',
1085
+ name: 'Mistral Codestral',
1086
+ options: {
1087
+ baseURL: 'https://codestral.mistral.ai/v1',
1088
+ apiKey: '{env:CODESTRAL_API_KEY}'
1089
+ },
1090
+ models: {}
1091
+ }
1092
+ } else if (providerKey === 'hyperbolic') {
1093
+ config.provider.hyperbolic = {
1094
+ npm: '@ai-sdk/openai-compatible',
1095
+ name: 'Hyperbolic',
1096
+ options: {
1097
+ baseURL: 'https://api.hyperbolic.xyz/v1',
1098
+ apiKey: '{env:HYPERBOLIC_API_KEY}'
1099
+ },
1100
+ models: {}
1101
+ }
1102
+ } else if (providerKey === 'scaleway') {
1103
+ config.provider.scaleway = {
1104
+ npm: '@ai-sdk/openai-compatible',
1105
+ name: 'Scaleway',
1106
+ options: {
1107
+ baseURL: 'https://api.scaleway.ai/v1',
1108
+ apiKey: '{env:SCALEWAY_API_KEY}'
1109
+ },
1110
+ models: {}
1111
+ }
1112
+ } else if (providerKey === 'googleai') {
1113
+ config.provider.googleai = {
1114
+ npm: '@ai-sdk/openai-compatible',
1115
+ name: 'Google AI Studio',
1116
+ options: {
1117
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
1118
+ apiKey: '{env:GOOGLE_API_KEY}'
1119
+ },
1120
+ models: {}
1121
+ }
973
1122
  }
974
1123
  }
975
1124
 
976
1125
  // 📖 Register the model in the provider's models section
977
- // 📖 OpenCode requires models to be explicitly listed to recognize them
978
- if (!config.provider[providerKey].models) config.provider[providerKey].models = {}
979
- config.provider[providerKey].models[model.modelId] = { name: model.label }
1126
+ // 📖 Only register custom models -- skip if the model maps to a built-in OpenCode ID
1127
+ const isBuiltinMapped = OPENCODE_MODEL_MAP[providerKey]?.[model.modelId]
1128
+ if (!isBuiltinMapped) {
1129
+ if (!config.provider[providerKey].models) config.provider[providerKey].models = {}
1130
+ config.provider[providerKey].models[ocModelId] = { name: model.label }
1131
+ }
980
1132
 
981
1133
  config.model = modelRef
982
1134
  saveOpenCodeConfig(config)
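After selecting, say, a SambaNova model, the resulting `opencode.json` would contain a provider block roughly like the following. This is a sketch assembled from the values in the diff; the model ID and label are hypothetical examples, and the real file may carry other fields:

```json
{
  "model": "sambanova/Example-Model-Id",
  "provider": {
    "sambanova": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "SambaNova",
      "options": {
        "baseURL": "https://api.sambanova.ai/v1",
        "apiKey": "{env:SAMBANOVA_API_KEY}"
      },
      "models": {
        "Example-Model-Id": { "name": "Example Model" }
      }
    }
  }
}
```

The `{env:SAMBANOVA_API_KEY}` placeholder is why `spawnOpenCode` exports the resolved key into the child environment: OpenCode expands it at runtime rather than storing the key in the config file.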
@@ -995,25 +1147,7 @@ After installation, you can use: opencode --model ${modelRef}`
995
1147
  console.log(chalk.dim(' Starting OpenCode…'))
996
1148
  console.log()
997
1149
 
998
- const { spawn } = await import('child_process')
999
- const child = spawn('opencode', ['--model', modelRef], {
1000
- stdio: 'inherit',
1001
- shell: true,
1002
- detached: false
1003
- })
1004
-
1005
- await new Promise((resolve, reject) => {
1006
- child.on('exit', resolve)
1007
- child.on('error', (err) => {
1008
- if (err.code === 'ENOENT') {
1009
- console.error(chalk.red('\n ✗ Could not find "opencode" — is it installed and in your PATH?'))
1010
- console.error(chalk.dim(' Install: npm i -g opencode or see https://opencode.ai'))
1011
- resolve(1)
1012
- } else {
1013
- reject(err)
1014
- }
1015
- })
1016
- })
1150
+ await spawnOpenCode(['--model', modelRef], providerKey, fcmConfig)
1017
1151
  }
1018
1152
  }
1019
1153
 
@@ -1022,10 +1156,11 @@ After installation, you can use: opencode --model ${modelRef}`
1022
1156
  // 📖 OpenCode Desktop shares config at the same location as CLI.
1023
1157
  // 📖 Handles all 3 providers: nvidia (needs custom provider config), groq & cerebras (built-in).
1024
1158
  // 📖 No need to wait for exit — Desktop app stays open independently.
1025
- async function startOpenCodeDesktop(model) {
1159
+ async function startOpenCodeDesktop(model, fcmConfig) {
1026
1160
  const providerKey = model.providerKey ?? 'nvidia'
1027
- // 📖 Full model reference string used in OpenCode config and --model flag
1028
- const modelRef = `${providerKey}/${model.modelId}`
1161
+ // 📖 Map model ID to OpenCode's built-in ID if it differs from our source ID
1162
+ const ocModelId = getOpenCodeModelId(providerKey, model.modelId)
1163
+ const modelRef = `${providerKey}/${ocModelId}`
1029
1164
 
1030
1165
  // 📖 Helper to open the Desktop app based on platform
1031
1166
  const launchDesktop = async () => {
@@ -1074,7 +1209,7 @@ async function startOpenCodeDesktop(model) {
1074
1209
 
1075
1210
  if (config.provider?.nvidia) {
1076
1211
  if (!config.provider.nvidia.models) config.provider.nvidia.models = {}
1077
- config.provider.nvidia.models[model.modelId] = { name: model.label }
1212
+ config.provider.nvidia.models[ocModelId] = { name: model.label }
1078
1213
  }
1079
1214
 
1080
1215
  saveOpenCodeConfig(config)
@@ -1153,12 +1288,77 @@ ${isWindows ? 'set NVIDIA_API_KEY=your_key_here' : 'export NVIDIA_API_KEY=your_k
1153
1288
  },
1154
1289
  models: {}
1155
1290
  }
1291
+ } else if (providerKey === 'sambanova') {
1292
+ // 📖 SambaNova is OpenAI-compatible — uses @ai-sdk/openai-compatible with their base URL
1293
+ config.provider.sambanova = {
1294
+ npm: '@ai-sdk/openai-compatible',
1295
+ name: 'SambaNova',
1296
+ options: {
1297
+ baseURL: 'https://api.sambanova.ai/v1',
1298
+ apiKey: '{env:SAMBANOVA_API_KEY}'
1299
+ },
1300
+ models: {}
1301
+ }
1302
+ } else if (providerKey === 'openrouter') {
1303
+ config.provider.openrouter = {
1304
+ npm: '@ai-sdk/openai-compatible',
1305
+ name: 'OpenRouter',
1306
+ options: {
1307
+ baseURL: 'https://openrouter.ai/api/v1',
1308
+ apiKey: '{env:OPENROUTER_API_KEY}'
1309
+ },
1310
+ models: {}
1311
+ }
1312
+ } else if (providerKey === 'codestral') {
1313
+ config.provider.codestral = {
1314
+ npm: '@ai-sdk/openai-compatible',
1315
+ name: 'Mistral Codestral',
1316
+ options: {
1317
+ baseURL: 'https://codestral.mistral.ai/v1',
1318
+ apiKey: '{env:CODESTRAL_API_KEY}'
1319
+ },
1320
+ models: {}
1321
+ }
1322
+ } else if (providerKey === 'hyperbolic') {
1323
+ config.provider.hyperbolic = {
1324
+ npm: '@ai-sdk/openai-compatible',
1325
+ name: 'Hyperbolic',
1326
+ options: {
1327
+ baseURL: 'https://api.hyperbolic.xyz/v1',
1328
+ apiKey: '{env:HYPERBOLIC_API_KEY}'
1329
+ },
1330
+ models: {}
1331
+ }
1332
+ } else if (providerKey === 'scaleway') {
1333
+ config.provider.scaleway = {
1334
+ npm: '@ai-sdk/openai-compatible',
1335
+ name: 'Scaleway',
1336
+ options: {
1337
+ baseURL: 'https://api.scaleway.ai/v1',
1338
+ apiKey: '{env:SCALEWAY_API_KEY}'
1339
+ },
1340
+ models: {}
1341
+ }
1342
+ } else if (providerKey === 'googleai') {
1343
+ config.provider.googleai = {
1344
+ npm: '@ai-sdk/openai-compatible',
1345
+ name: 'Google AI Studio',
1346
+ options: {
1347
+ baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai',
1348
+ apiKey: '{env:GOOGLE_API_KEY}'
1349
+ },
1350
+ models: {}
1351
+ }
1156
1352
  }
1157
1353
  }
1158
1354
 
1159
1355
  // 📖 Register the model in the provider's models section
1160
- if (!config.provider[providerKey].models) config.provider[providerKey].models = {}
1161
- config.provider[providerKey].models[model.modelId] = { name: model.label }
1356
+ // 📖 Only register custom models -- skip if the model maps to a built-in OpenCode ID
1357
+ const isBuiltinMapped = OPENCODE_MODEL_MAP[providerKey]?.[model.modelId]
1358
+ if (!isBuiltinMapped) {
1359
+ if (!config.provider[providerKey].models) config.provider[providerKey].models = {}
1360
+ config.provider[providerKey].models[ocModelId] = { name: model.label }
1361
+ }
1162
1362
 
1163
1363
  config.model = modelRef
1164
1364
  saveOpenCodeConfig(config)
@@ -1243,9 +1443,14 @@ async function startOpenClaw(model, apiKey) {
1243
1443
  config.models.providers.nvidia = {
1244
1444
  baseUrl: 'https://integrate.api.nvidia.com/v1',
1245
1445
  api: 'openai-completions',
1446
+ models: [],
1246
1447
  }
1247
1448
  console.log(chalk.dim(' ➕ Added nvidia provider block to OpenClaw config (models.providers.nvidia)'))
1248
1449
  }
1450
+ // 📖 Ensure models array exists even if the provider block was created by an older version
1451
+ if (!Array.isArray(config.models.providers.nvidia.models)) {
1452
+ config.models.providers.nvidia.models = []
1453
+ }
1249
1454
 
1250
1455
  // 📖 Store API key in the root "env" section so OpenClaw can read it as NVIDIA_API_KEY env var.
1251
1456
  // 📖 Only writes if not already set to avoid overwriting an existing key.
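The defensive `Array.isArray` check added here generalizes to a small normalization helper. A sketch under the assumption that the provider block has the shape shown in the diff (`ensureModelsArray` is a hypothetical name):

```javascript
// Older versions of the tool created the nvidia provider block without a
// `models` array; normalize so later code can push into it safely.
function ensureModelsArray(provider) {
  if (!Array.isArray(provider.models)) provider.models = []
  return provider
}

const legacy = { baseUrl: 'https://integrate.api.nvidia.com/v1', api: 'openai-completions' }
ensureModelsArray(legacy)
console.log(Array.isArray(legacy.models)) // now safe to push model entries
```

Running the check unconditionally (not just when the block is first created) is what makes upgrades from older configs safe.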
@@ -1450,7 +1655,7 @@ async function main() {
1450
1655
  // 📖 Clamp scrollOffset so cursor is always within the visible viewport window.
1451
1656
  // 📖 Called after every cursor move, sort change, and terminal resize.
1452
1657
  const adjustScrollOffset = (st) => {
1453
- const total = st.results.length
1658
+ const total = st.visibleSorted ? st.visibleSorted.length : st.results.filter(r => !r.hidden).length
1454
1659
  let maxSlots = st.terminalRows - 10 // 5 header + 5 footer
1455
1660
  if (maxSlots < 1) maxSlots = 1
1456
1661
  if (total <= maxSlots) { st.scrollOffset = 0; return }
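The viewport math above can be sketched as a pure function. The diff truncates `adjustScrollOffset`, so the final clamp to `total - maxSlots` is an assumption about the unshown remainder of the function; the early-return and slot arithmetic match the lines shown:

```javascript
// With `total` visible rows and terminalRows - 10 usable slots
// (5 header + 5 footer), the offset is 0 when everything fits,
// otherwise clamped so the last page stays full.
function clampScrollOffset(terminalRows, scrollOffset, total) {
  let maxSlots = terminalRows - 10
  if (maxSlots < 1) maxSlots = 1
  if (total <= maxSlots) return 0
  return Math.max(0, Math.min(scrollOffset, total - maxSlots))
}

console.log(clampScrollOffset(40, 5, 20))  // 20 rows fit in 30 slots -> 0
console.log(clampScrollOffset(20, 99, 50)) // clamped to 50 - 10 = 40
```

Counting filtered-out models in `total` was the bug the diff fixes: the offset is now computed from visible rows only.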
@@ -1497,6 +1702,8 @@ async function main() {
1497
1702
  settingsEditBuffer: '', // 📖 Typed characters for the API key being edited
1498
1703
  settingsTestResults: {}, // 📖 { providerKey: 'pending'|'ok'|'fail'|null }
1499
1704
  config, // 📖 Live reference to the config object (updated on save)
1705
+ visibleSorted: [], // 📖 Cached visible+sorted models — shared between render loop and key handlers
1706
+ helpVisible: false, // 📖 Whether the help overlay (K key) is active
1500
1707
  }
1501
1708
 
1502
1709
  // 📖 Re-clamp viewport on terminal resize
@@ -1522,10 +1729,19 @@ async function main() {
  // 📖 0=All, 1=S+, 2=S, 3=A+, 4=A, 5=A-, 6=B+, 7=B, 8=C
  const TIER_CYCLE = [null, 'S+', 'S', 'A+', 'A', 'A-', 'B+', 'B', 'C']
  let tierFilterMode = 0
+
+ // 📖 originFilterMode: index into ORIGIN_CYCLE, 0=All, then each provider key in order
+ const ORIGIN_CYCLE = [null, ...Object.keys(sources)]
+ let originFilterMode = 0
+
  function applyTierFilter() {
  const activeTier = TIER_CYCLE[tierFilterMode]
+ const activeOrigin = ORIGIN_CYCLE[originFilterMode]
  state.results.forEach(r => {
- r.hidden = activeTier !== null && r.tier !== activeTier
+ // 📖 Apply both tier and origin filters — model is hidden if it fails either
+ const tierHide = activeTier !== null && r.tier !== activeTier
+ const originHide = activeOrigin !== null && r.providerKey !== activeOrigin
+ r.hidden = tierHide || originHide
  })
  return state.results
  }
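The combined predicate in this hunk can be isolated for illustration (a minimal sketch; `isHidden` is a hypothetical name, and `null` stands for the "All" position in either cycle):

```javascript
// A model is hidden if it fails EITHER active filter.
// null for a filter means "All" (that filter is inactive).
function isHidden(model, activeTier, activeOrigin) {
  const tierHide = activeTier !== null && model.tier !== activeTier
  const originHide = activeOrigin !== null && model.providerKey !== activeOrigin
  return tierHide || originHide
}
```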
@@ -1595,6 +1811,42 @@ async function main() {
  return cleared.join('\n')
  }
 
+ // ─── Help overlay renderer ────────────────────────────────────────────────
+ // 📖 renderHelp: Draw the help overlay listing all key bindings.
+ // 📖 Toggled with K key. Gives users a quick reference without leaving the TUI.
+ function renderHelp() {
+ const EL = '\x1b[K'
+ const lines = []
+ lines.push('')
+ lines.push(` ${chalk.bold('❓ Keyboard Shortcuts')} ${chalk.dim('— press K or Esc to close')}`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Navigation')}`)
+ lines.push(` ${chalk.yellow('↑↓')} Navigate rows`)
+ lines.push(` ${chalk.yellow('Enter')} Select model and launch`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Sorting')}`)
+ lines.push(` ${chalk.yellow('R')} Rank ${chalk.yellow('Y')} Tier ${chalk.yellow('O')} Origin ${chalk.yellow('M')} Model`)
+ lines.push(` ${chalk.yellow('L')} Latest ping ${chalk.yellow('A')} Avg ping ${chalk.yellow('S')} SWE-bench score`)
+ lines.push(` ${chalk.yellow('C')} Context window ${chalk.yellow('H')} Health ${chalk.yellow('V')} Verdict ${chalk.yellow('U')} Uptime`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Filters')}`)
+ lines.push(` ${chalk.yellow('T')} Cycle tier filter ${chalk.dim('(All → S+ → S → A+ → A → A- → B+ → B → C → All)')}`)
+ lines.push(` ${chalk.yellow('N')} Cycle origin filter ${chalk.dim('(All → NIM → Groq → Cerebras → ... each provider → All)')}`)
+ lines.push('')
+ lines.push(` ${chalk.bold('Controls')}`)
+ lines.push(` ${chalk.yellow('W')} Decrease ping interval (faster)`)
+ lines.push(` ${chalk.yellow('X')} Increase ping interval (slower)`)
+ lines.push(` ${chalk.yellow('Z')} Cycle launch mode ${chalk.dim('(OpenCode CLI → OpenCode Desktop → OpenClaw)')}`)
+ lines.push(` ${chalk.yellow('P')} Open settings ${chalk.dim('(manage API keys per provider, enable/disable, test)')}`)
+ lines.push(` ${chalk.yellow('K')} / ${chalk.yellow('Esc')} Show/hide this help`)
+ lines.push(` ${chalk.yellow('Ctrl+C')} Exit`)
+ lines.push('')
+ const cleared = lines.map(l => l + EL)
+ const remaining = state.terminalRows > 0 ? Math.max(0, state.terminalRows - cleared.length) : 0
+ for (let i = 0; i < remaining; i++) cleared.push(EL)
+ return cleared.join('\n')
+ }
+
  // ─── Settings key test helper ───────────────────────────────────────────────
  // 📖 Fires a single ping to the selected provider to verify the API key works.
  async function testProviderKey(providerKey) {
@@ -1630,6 +1882,12 @@ async function main() {
  const onKeyPress = async (str, key) => {
  if (!key) return
 
+ // 📖 Help overlay: Esc or K closes it — handle before everything else so Esc isn't swallowed elsewhere
+ if (state.helpVisible && (key.name === 'escape' || key.name === 'k')) {
+ state.helpVisible = false
+ return
+ }
+
  // ─── Settings overlay keyboard handling ───────────────────────────────────
  if (state.settingsOpen) {
  const providerKeys = Object.keys(sources)
@@ -1737,11 +1995,12 @@ async function main() {
  return
  }
 
- // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, N=context, H=health, V=verdict, U=uptime
+ // 📖 Sorting keys: R=rank, Y=tier, O=origin, M=model, L=latest ping, A=avg ping, S=SWE-bench, C=context, H=health, V=verdict, U=uptime
  // 📖 T is reserved for tier filter cycling — tier sort moved to Y
+ // 📖 N is now reserved for origin filter cycling
  const sortKeys = {
  'r': 'rank', 'y': 'tier', 'o': 'origin', 'm': 'model',
- 'l': 'ping', 'a': 'avg', 's': 'swe', 'n': 'ctx', 'h': 'condition', 'v': 'verdict', 'u': 'uptime'
+ 'l': 'ping', 'a': 'avg', 's': 'swe', 'c': 'ctx', 'h': 'condition', 'v': 'verdict', 'u': 'uptime'
  }
 
  if (sortKeys[key.name] && !key.ctrl) {
@@ -1753,7 +2012,11 @@ async function main() {
  state.sortColumn = col
  state.sortDirection = 'asc'
  }
- adjustScrollOffset(state)
+ // 📖 Recompute visible sorted list and reset cursor to top to avoid stale index
+ const visible = state.results.filter(r => !r.hidden)
+ state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
+ state.cursor = 0
+ state.scrollOffset = 0
  return
  }
 
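The recompute-and-reset pattern this hunk introduces (and which the filter-key handlers repeat) can be sketched as one helper (`recomputeView` is a hypothetical name for illustration; `sortResults` is assumed from the package and passed in here so the sketch stays self-contained):

```javascript
// After any sort or filter change, rebuild the visible+sorted cache and
// reset the cursor and scroll, so the cursor can never index past the
// newly filtered list.
function recomputeView(state, sortResults) {
  const visible = state.results.filter(r => !r.hidden)
  state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
  state.cursor = 0
  state.scrollOffset = 0
  return state
}
```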
@@ -1769,7 +2032,29 @@ async function main() {
  if (key.name === 't') {
  tierFilterMode = (tierFilterMode + 1) % TIER_CYCLE.length
  applyTierFilter()
- adjustScrollOffset(state)
+ // 📖 Recompute visible sorted list and reset cursor to avoid stale index into new filtered set
+ const visible = state.results.filter(r => !r.hidden)
+ state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
+ state.cursor = 0
+ state.scrollOffset = 0
+ return
+ }
+
+ // 📖 Origin filter key: N = cycle through each provider (All → NIM → Groq → ... → All)
+ if (key.name === 'n') {
+ originFilterMode = (originFilterMode + 1) % ORIGIN_CYCLE.length
+ applyTierFilter()
+ // 📖 Recompute visible sorted list and reset cursor to avoid stale index into new filtered set
+ const visible = state.results.filter(r => !r.hidden)
+ state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
+ state.cursor = 0
+ state.scrollOffset = 0
+ return
+ }
+
+ // 📖 Help overlay key: K = toggle help overlay
+ if (key.name === 'k') {
+ state.helpVisible = !state.helpVisible
  return
  }
 
@@ -1796,7 +2081,7 @@ async function main() {
  }
 
  if (key.name === 'down') {
- if (state.cursor < results.length - 1) {
+ if (state.cursor < state.visibleSorted.length - 1) {
  state.cursor++
  adjustScrollOffset(state)
  }
@@ -1809,9 +2094,9 @@ async function main() {
  }
 
  if (key.name === 'return') { // Enter
- // 📖 Use the same sorting as the table display
- const sorted = sortResults(results, state.sortColumn, state.sortDirection)
- const selected = sorted[state.cursor]
+ // 📖 Use the cached visible+sorted array guaranteed to match what's on screen
+ const selected = state.visibleSorted[state.cursor]
+ if (!selected) return // 📖 Guard: empty visible list (all filtered out)
  // 📖 Allow selecting ANY model (even timeout/down) - user knows what they're doing
  userSelected = { modelId: selected.modelId, label: selected.label, tier: selected.tier, providerKey: selected.providerKey }
 
@@ -1834,13 +2119,24 @@ async function main() {
  }
  console.log()
 
+ // 📖 Warn if no API key is configured for the selected model's provider
+ if (state.mode !== 'openclaw') {
+ const selectedApiKey = getApiKey(state.config, selected.providerKey)
+ if (!selectedApiKey) {
+ console.log(chalk.yellow(` Warning: No API key configured for ${selected.providerKey}.`))
+ console.log(chalk.yellow(` OpenCode may not be able to use ${selected.label}.`))
+ console.log(chalk.dim(` Set ${ENV_VAR_NAMES[selected.providerKey] || selected.providerKey.toUpperCase() + '_API_KEY'} or configure via settings (P key).`))
+ console.log()
+ }
+ }
+
  // 📖 Dispatch to the correct integration based on active mode
  if (state.mode === 'openclaw') {
  await startOpenClaw(userSelected, apiKey)
  } else if (state.mode === 'opencode-desktop') {
- await startOpenCodeDesktop(userSelected)
+ await startOpenCodeDesktop(userSelected, state.config)
  } else {
- await startOpenCode(userSelected)
+ await startOpenCode(userSelected, state.config)
  }
  process.exit(0)
  }
@@ -1857,13 +2153,24 @@ async function main() {
  // 📖 Animation loop: render settings overlay OR main table based on state
  const ticker = setInterval(() => {
  state.frame++
+ // 📖 Cache visible+sorted models each frame so Enter handler always matches the display
+ if (!state.settingsOpen) {
+ const visible = state.results.filter(r => !r.hidden)
+ state.visibleSorted = sortResults(visible, state.sortColumn, state.sortDirection)
+ }
  const content = state.settingsOpen
  ? renderSettings()
- : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows)
+ : state.helpVisible
+ ? renderHelp()
+ : renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows, originFilterMode)
  process.stdout.write(ALT_HOME + content)
  }, Math.round(1000 / FPS))
 
- process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows))
+ // 📖 Populate visibleSorted before the first frame so Enter works immediately
+ const initialVisible = state.results.filter(r => !r.hidden)
+ state.visibleSorted = sortResults(initialVisible, state.sortColumn, state.sortDirection)
+
+ process.stdout.write(ALT_HOME + renderTable(state.results, state.pendingPings, state.frame, state.cursor, state.sortColumn, state.sortDirection, state.pingInterval, state.lastPingTime, state.mode, tierFilterMode, state.scrollOffset, state.terminalRows, originFilterMode))
 
  // ── Continuous ping loop — ping all models every N seconds forever ──────────
 
package/lib/config.js CHANGED
@@ -12,14 +12,26 @@
  * 📖 Config JSON structure:
  * {
  * "apiKeys": {
- * "nvidia": "nvapi-xxx",
- * "groq": "gsk_xxx",
- * "cerebras": "csk_xxx"
+ * "nvidia": "nvapi-xxx",
+ * "groq": "gsk_xxx",
+ * "cerebras": "csk_xxx",
+ * "sambanova": "sn-xxx",
+ * "openrouter": "sk-or-xxx",
+ * "codestral": "csk-xxx",
+ * "hyperbolic": "eyJ...",
+ * "scaleway": "scw-xxx",
+ * "googleai": "AIza..."
  * },
  * "providers": {
- * "nvidia": { "enabled": true },
- * "groq": { "enabled": true },
- * "cerebras": { "enabled": true }
+ * "nvidia": { "enabled": true },
+ * "groq": { "enabled": true },
+ * "cerebras": { "enabled": true },
+ * "sambanova": { "enabled": true },
+ * "openrouter": { "enabled": true },
+ * "codestral": { "enabled": true },
+ * "hyperbolic": { "enabled": true },
+ * "scaleway": { "enabled": true },
+ * "googleai": { "enabled": true }
  * }
  * }
  *
@@ -52,9 +64,15 @@ const LEGACY_CONFIG_PATH = join(homedir(), '.free-coding-models')
  // 📖 Environment variable names per provider
  // 📖 These allow users to override config via env vars (useful for CI/headless setups)
  const ENV_VARS = {
- nvidia: 'NVIDIA_API_KEY',
- groq: 'GROQ_API_KEY',
- cerebras: 'CEREBRAS_API_KEY',
+ nvidia: 'NVIDIA_API_KEY',
+ groq: 'GROQ_API_KEY',
+ cerebras: 'CEREBRAS_API_KEY',
+ sambanova: 'SAMBANOVA_API_KEY',
+ openrouter: 'OPENROUTER_API_KEY',
+ codestral: 'CODESTRAL_API_KEY',
+ hyperbolic: 'HYPERBOLIC_API_KEY',
+ scaleway: 'SCALEWAY_API_KEY',
+ googleai: 'GOOGLE_API_KEY',
  }
 
  /**
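The env-var override described in this hunk's comments can be sketched as a resolution helper (a minimal sketch, not the package's actual `getApiKey`; `resolveApiKey` is a hypothetical name, and the `ENV_VARS` table is abbreviated here to two providers):

```javascript
// Abbreviated env-var table; the real one covers all nine providers.
const ENV_VARS = {
  nvidia: 'NVIDIA_API_KEY',
  groq: 'GROQ_API_KEY',
}

// The env var wins over the config file, so CI/headless setups can
// override ~/.config keys without editing JSON.
function resolveApiKey(config, providerKey, env = process.env) {
  const envName = ENV_VARS[providerKey]
  if (envName && env[envName]) return env[envName]
  return (config.apiKeys && config.apiKeys[providerKey]) || null
}
```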
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "free-coding-models",
- "version": "0.1.51",
- "description": "Find the fastest coding LLM models in seconds ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
+ "version": "0.1.54",
+ "description": "Find the fastest coding LLM models in seconds \u2014 ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
  "nim",
package/sources.js CHANGED
@@ -27,8 +27,8 @@
  * 📖 Secondary: https://swe-rebench.com (independent evals, scores are lower)
  * 📖 Leaderboard tracker: https://www.marc0.dev/en/leaderboard
  *
- * @exports nvidiaNim, groq, cerebras — model arrays per provider
- * @exports sources — map of { nvidia, groq, cerebras } each with { name, url, models }
+ * @exports nvidiaNim, groq, cerebras, sambanova, openrouter, codestral, hyperbolic, scaleway, googleai — model arrays per provider
+ * @exports sources — map of { nvidia, groq, cerebras, sambanova, openrouter, codestral, hyperbolic, scaleway, googleai } each with { name, url, models }
  * @exports MODELS — flat array of [modelId, label, tier, sweScore, ctx, providerKey]
  *
  * 📖 MODELS now includes providerKey as 6th element so ping() knows which
@@ -100,6 +100,10 @@ export const groq = [
  ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
  ['qwen-qwq-32b', 'QwQ 32B', 'A+', '50.0%', '131k'],
  ['moonshotai/kimi-k2-instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ['llama-3.1-8b-instant', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
+ ['qwen/qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '131k'],
  ]
 
  // 📖 Cerebras source - https://cloud.cerebras.ai
@@ -108,6 +112,89 @@ export const cerebras = [
  ['llama3.3-70b', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
  ['llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '10M'],
  ['qwen-3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['qwen-3-235b-a22b', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['llama3.1-8b', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ['glm-4.6', 'GLM 4.6', 'A-', '38.0%', '128k'],
+ ]
+
+ // 📖 SambaNova source - https://cloud.sambanova.ai
+ // 📖 Free trial: $5 credits for 3 months — API keys at https://cloud.sambanova.ai/apis
+ // 📖 OpenAI-compatible API, supports all major coding models including DeepSeek V3/R1, Qwen3, Llama 4
+ export const sambanova = [
+ // ── S+ tier ──
+ ['Qwen3-235B-A22B-Instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ // ── S tier ──
+ ['DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['DeepSeek-V3.1', 'DeepSeek V3.1', 'S', '62.0%', '128k'],
+ ['DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Llama-4-Maverick-17B-128E-Instruct', 'Llama 4 Maverick', 'S', '62.0%', '1M'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3.1-Terminus', 'DeepSeek V3.1 Term', 'S', '68.4%', '128k'],
+ // ── A+ tier ──
+ ['Qwen3-32B', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ // ── A tier ──
+ ['DeepSeek-R1-Distill-Llama-70B', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ // ── A- tier ──
+ ['Meta-Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ // ── B tier ──
+ ['Meta-Llama-3.1-8B-Instruct', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
+ ]
+
+ // 📖 OpenRouter source - https://openrouter.ai
+ // 📖 Free :free models with shared quota — 50 free req/day
+ // 📖 API keys at https://openrouter.ai/settings/keys
+ export const openrouter = [
+ ['qwen/qwen3-coder:free', 'Qwen3 Coder', 'S+', '70.6%', '256k'],
+ ['stepfun/step-3.5-flash:free', 'Step 3.5 Flash', 'S+', '74.4%', '256k'],
+ ['deepseek/deepseek-r1-0528:free', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '128k'],
+ ['nvidia/nemotron-3-nano-30b-a3b:free', 'Nemotron Nano 30B', 'A', '43.0%', '128k'],
+ ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ]
+
+ // 📖 Mistral Codestral source - https://codestral.mistral.ai
+ // 📖 Free coding model — 30 req/min, 2000/day (phone number required for key)
+ // 📖 API keys at https://codestral.mistral.ai
+ export const codestral = [
+ ['codestral-latest', 'Codestral', 'B+', '34.0%', '256k'],
+ ]
+
+ // 📖 Hyperbolic source - https://app.hyperbolic.ai
+ // 📖 $1 free trial credits — API keys at https://app.hyperbolic.xyz/settings
+ export const hyperbolic = [
+ ['qwen/qwen3-coder-480b-a35b-instruct', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
+ ['deepseek-ai/DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
+ ['moonshotai/Kimi-K2-Instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['Qwen/Qwen3-235B-A22B', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['qwen/qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
+ ['meta-llama/Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['meta-llama/Meta-Llama-3.1-405B-Instruct', 'Llama 3.1 405B', 'A', '44.0%', '128k'],
+ ]
+
+ // 📖 Scaleway source - https://console.scaleway.com
+ // 📖 1M free tokens — API keys at https://console.scaleway.com/iam/api-keys
+ export const scaleway = [
+ ['devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
+ ['qwen3-235b-a22b-instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['qwen3-coder-30b-a3b-instruct', 'Qwen3 Coder 30B', 'A+', '55.0%', '32k'],
+ ['llama-3.3-70b-instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
+ ['mistral-small-3.2-24b-instruct-2506', 'Mistral Small 3.2', 'B+', '30.0%', '128k'],
+ ]
+
+ // 📖 Google AI Studio source - https://aistudio.google.com
+ // 📖 Free Gemma models — 14.4K req/day, API keys at https://aistudio.google.com/apikey
+ export const googleai = [
+ ['gemma-3-27b-it', 'Gemma 3 27B', 'B', '22.0%', '128k'],
+ ['gemma-3-12b-it', 'Gemma 3 12B', 'C', '15.0%', '128k'],
+ ['gemma-3-4b-it', 'Gemma 3 4B', 'C', '10.0%', '128k'],
  ]
 
  // 📖 All sources combined - used by the main script
@@ -128,6 +215,36 @@ export const sources = {
  url: 'https://api.cerebras.ai/v1/chat/completions',
  models: cerebras,
  },
+ sambanova: {
+ name: 'SambaNova',
+ url: 'https://api.sambanova.ai/v1/chat/completions',
+ models: sambanova,
+ },
+ openrouter: {
+ name: 'OpenRouter',
+ url: 'https://openrouter.ai/api/v1/chat/completions',
+ models: openrouter,
+ },
+ codestral: {
+ name: 'Codestral',
+ url: 'https://codestral.mistral.ai/v1/chat/completions',
+ models: codestral,
+ },
+ hyperbolic: {
+ name: 'Hyperbolic',
+ url: 'https://api.hyperbolic.xyz/v1/chat/completions',
+ models: hyperbolic,
+ },
+ scaleway: {
+ name: 'Scaleway',
+ url: 'https://api.scaleway.ai/v1/chat/completions',
+ models: scaleway,
+ },
+ googleai: {
+ name: 'Google AI',
+ url: 'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions',
+ models: googleai,
+ },
  }
 
  // 📖 Flatten all models from all sources — each entry includes providerKey as 6th element
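The flattening described in that comment, each model tuple gaining its providerKey as a 6th element, can be sketched as (a minimal sketch; `flattenModels` is a hypothetical name, and the tiny `demo` map below stands in for the real `sources` export):

```javascript
// Flatten every provider's [modelId, label, tier, sweScore, ctx] tuples
// into one array, appending providerKey as the 6th element so ping()
// can look up the right endpoint for each model.
function flattenModels(sources) {
  const out = []
  for (const [providerKey, src] of Object.entries(sources)) {
    for (const m of src.models) out.push([...m, providerKey])
  }
  return out
}
```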