free-coding-models 0.2.5 → 0.2.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -103,15 +103,43 @@ Before using `free-coding-models`, make sure you have:
103
103
 
104
104
  1. **Node.js 18+** — Required for native `fetch` API
105
105
  2. **At least one free API key** — pick any or all of:
106
- - **NVIDIA NIM** — [build.nvidia.com](https://build.nvidia.com) → Profile → API Keys → Generate
107
- - **Groq** — [console.groq.com/keys](https://console.groq.com/keys) → Create API Key
108
- - **Cerebras** — [cloud.cerebras.ai](https://cloud.cerebras.ai) → API Keys → Create
106
+ - **NVIDIA NIM** — [build.nvidia.com](https://build.nvidia.com) → Profile → API Keys → Generate – free tier: 40 req/min (no credit card)
107
+ - **Groq** — [console.groq.com/keys](https://console.groq.com/keys) → Create API Key – free tier: 30‑50 RPM per model (varies)
108
+ - **Cerebras** — [cloud.cerebras.ai](https://cloud.cerebras.ai) → API Keys → Create – free tier: generous limits (developer tier offers 10× higher limits)
109
109
  - **SambaNova** — [sambanova.ai/developers](https://sambanova.ai/developers) → Developers portal → API key (dev tier generous)
110
- - **OpenRouter** — [openrouter.ai/keys](https://openrouter.ai/keys) → Create key (50 req/day, 20/min on `:free`)
110
+ - **OpenRouter** — [openrouter.ai/keys](https://openrouter.ai/keys) → Create key (free requests on `:free` models, see details below)
111
+
112
+ ### OpenRouter Free Tier Details
113
+
114
+ OpenRouter provides free requests on free models (`:free`):
115
+
116
+ ```
117
+ ──────────────────────────────────────────────────
118
+ OpenRouter — Free requests on free models (:free)
119
+ ──────────────────────────────────────────────────
120
+
121
+ No credits (or <$10) → 50 requests / day (20 req/min)
122
+ ≥ $10 in credits → 1000 requests / day (20 req/min)
123
+
124
+ ──────────────────────────────────────────────────
125
+ Key things to know:
126
+
127
+ • Free models (:free) never consume your credits.
128
+ Your $10 stays untouched if you only use :free models.
129
+
130
+ • Failed requests still count toward your daily quota.
131
+
132
+ • Quota resets every day at midnight UTC.
133
+
134
+ • Free-tier popular models may be additionally rate-limited
135
+ by the provider itself during peak hours.
136
+ ──────────────────────────────────────────────────
137
+ ```
138
+
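The quota table above reduces to a simple rule; a quick sketch (the helper name is ours, not part of the package):

```javascript
// Derive the OpenRouter :free daily quota from purchased credits,
// per the table above. Hypothetical helper for illustration only.
function openRouterFreeQuota(creditsUsd) {
  // ≥ $10 in lifetime credits unlocks 1000 free-model requests/day
  const requestsPerDay = creditsUsd >= 10 ? 1000 : 50
  // the per-minute cap is the same on both tiers
  return { requestsPerDay, requestsPerMinute: 20 }
}

console.log(openRouterFreeQuota(0).requestsPerDay)  // 50
console.log(openRouterFreeQuota(25).requestsPerDay) // 1000
```

Remember that even with the higher quota, failed requests still count against the daily total.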
111
139
  - **Hugging Face Inference** — [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) → Access Tokens (free monthly credits)
112
- - **Replicate** — [replicate.com/account/api-tokens](https://replicate.com/account/api-tokens) → Create token (dev quota)
113
- - **DeepInfra** — [deepinfra.com/login](https://deepinfra.com/login) → Login → API key (free dev tier)
114
- - **Fireworks AI** — [fireworks.ai](https://fireworks.ai) → Settings → Access Tokens ($1 free credits)
140
+ - **Replicate** — [replicate.com/account/api-tokens](https://replicate.com/account/api-tokens) → Create token – free tier: 6 req/min (no payment) – up to 3,000 RPM (API) / 600 RPM (predictions) with payment
141
+ - **DeepInfra** — [deepinfra.com/login](https://deepinfra.com/login) → Login → API key – free tier: 200 concurrent requests (default)
142
+ - **Fireworks AI** — [fireworks.ai](https://fireworks.ai) → Settings → Access Tokens – $1 free credits; 10 req/min without payment (full limits with payment)
115
143
  - **Mistral Codestral** — [codestral.mistral.ai](https://codestral.mistral.ai) → API Keys (30 req/min, 2000/day — phone required)
116
144
  - **Hyperbolic** — [app.hyperbolic.ai/settings](https://app.hyperbolic.ai/settings) → API Keys ($1 free trial)
117
145
  - **Scaleway** — [console.scaleway.com/iam/api-keys](https://console.scaleway.com/iam/api-keys) → IAM → API Keys (1M free tokens)
@@ -146,6 +174,18 @@ pnpx free-coding-models YOUR_API_KEY
146
174
  bunx free-coding-models YOUR_API_KEY
147
175
  ```
148
176
 
177
+ ### 🆕 What's New
178
+
179
+ **Version 0.2.6 brings powerful new features:**
180
+
181
+ - **`--json` flag** — Output model results as JSON for scripting, CI/CD pipelines, and monitoring dashboards. Perfect for automation: `free-coding-models --tier S --json | jq '.[0].modelId'`
182
+
183
+ - **Persistent ping cache** — Results are cached for 5 minutes between runs. Startup is nearly instant when the cache is fresh, and repeated runs consume fewer provider rate-limit requests. Cache file: `~/.free-coding-models.cache.json`
184
+
185
+ - **Config security check** — Automatically warns if your API key config file has insecure permissions and offers one-click auto-fix with `chmod 600`
186
+
187
+ - **Provider colors everywhere** — Provider names are now colored consistently in logs, settings, and the main table for better visual recognition
188
+
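The config security check above boils down to inspecting the file's mode bits. A minimal sketch of that kind of check, using a throwaway temp file (the package's actual `checkConfigSecurity()` and config path may differ; POSIX-only — `chmod` is effectively a no-op on Windows):

```javascript
import fs from 'node:fs'
import os from 'node:os'
import path from 'node:path'

// Hypothetical demo file — NOT the package's real config path
const file = path.join(os.tmpdir(), 'fcm-demo-config.json')
fs.writeFileSync(file, '{}', { mode: 0o644 })

// Any group/other permission bit set means the file is readable by others
const mode = fs.statSync(file).mode & 0o777
if ((mode & 0o077) !== 0) {
  // tighten to user read/write only, like the auto-fix described above
  fs.chmodSync(file, 0o600)
}
console.log((fs.statSync(file).mode & 0o777).toString(8)) // '600'
```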
149
189
  ---
150
190
 
151
191
  ## 🚀 Usage
@@ -173,6 +213,11 @@ free-coding-models --best
173
213
  # Analyze for 10 seconds and output the most reliable model
174
214
  free-coding-models --fiable
175
215
 
216
+ # Output results as JSON (for scripting/automation)
217
+ free-coding-models --json
218
+ free-coding-models --tier S --json | jq '.[0].modelId' # Get fastest S-tier model ID
219
+ free-coding-models --json | jq '.[] | select(.avgPing < 500)' # Filter by latency
220
+
176
221
  # Filter models by tier letter
177
222
  free-coding-models --tier S # S+ and S only
178
223
  free-coding-models --tier A # A+, A, A- only
@@ -182,6 +227,7 @@ free-coding-models --tier C # C only
182
227
  # Combine flags freely
183
228
  free-coding-models --openclaw --tier S
184
229
  free-coding-models --opencode --best
230
+ free-coding-models --tier S --json
185
231
  ```
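For consumers who prefer Node over `jq`, the same filtering works with `JSON.parse`. A sketch using sample records shaped after the `jq` examples above (the real `--json` schema may carry additional fields; the model IDs here are placeholder sample data):

```javascript
// Sample --json output — two fields taken from the jq examples above
const sample = '[{"modelId":"qwen/qwen3-coder:free","avgPing":245},{"modelId":"example/slow-model","avgPing":812}]'

const models = JSON.parse(sample)
// Keep only models answering in under 500 ms, like the jq select() example
const fastIds = models.filter(m => m.avgPing < 500).map(m => m.modelId)
console.log(fastIds) // [ 'qwen/qwen3-coder:free' ]
```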
186
232
 
187
233
  ### Choosing the target tool
@@ -241,7 +287,7 @@ Press **`P`** to open the Settings screen at any time:
241
287
  Providers
242
288
 
243
289
  ❯ [ ✅ ] NVIDIA NIM nvapi-••••••••••••3f9a [Test ✅] Free tier (provider quota by model)
244
- [ ✅ ] OpenRouter (no key set) [Test —] 50 req/day, 20/min (:free shared quota)
290
+ [ ✅ ] OpenRouter (no key set) [Test —] Free on :free (50/day <$10, 1000/day ≥$10)
245
291
  [ ✅ ] Hugging Face Inference (no key set) [Test —] Free monthly credits (~$0.10)
246
292
 
247
293
  Setup Instructions — NVIDIA NIM
@@ -834,6 +880,7 @@ This script:
834
880
  | `--goose` | Goose mode — Enter launches Goose with env-based provider config |
835
881
  | `--best` | Show only top-tier models (A+, S, S+) |
836
882
  | `--fiable` | Analyze 10 seconds, output the most reliable model as `provider/model_id` |
883
+ | `--json` | Output results as JSON (for scripting/automation, CI/CD, dashboards) |
837
884
  | `--tier S` | Show only S+ and S tier models |
838
885
  | `--tier A` | Show only A+, A, A- tier models |
839
886
  | `--tier B` | Show only B+, B tier models |
@@ -851,7 +898,8 @@ This script:
851
898
  - **D** — Cycle provider filter (All → NIM → Groq → ...)
852
899
  - **E** — Toggle configured-only mode (on by default, persisted across sessions and profiles)
853
900
  - **Z** — Cycle target tool (OpenCode CLI → OpenCode Desktop → OpenClaw → Crush → Goose)
854
- - **X** — Toggle request logs (recent proxied request/token usage logs)
901
+ - **X** — Toggle request logs (recent proxied request/token usage logs, up to 500 entries)
902
+ - **A (in logs)** — Toggle between showing the last 500 entries and ALL logs
855
903
  - **P** — Open Settings (manage API keys, toggles, updates, profiles)
856
904
  - **Y** — Open Install Endpoints (`provider → tool → all models` or `selected models only`, no proxy)
857
905
  - **Shift+P** — Cycle through saved profiles (switches live TUI settings)
@@ -77,6 +77,7 @@
77
77
  * - --crush / --goose: launch the currently selected model in the supported external CLI
78
78
  * - --best: Show only top-tier models (A+, S, S+)
79
79
  * - --fiable: Analyze 10s and output the most reliable model
80
+ * - --json: Output results as JSON (for scripting/automation)
80
81
  * - --no-telemetry: Disable anonymous usage analytics for this run
81
82
  * - --tier S/A/B/C: Filter models by tier letter (S=S+/S, A=A+/A/A-, B=B+/B, C=C)
82
83
  *
@@ -92,7 +93,7 @@ import { randomUUID } from 'crypto'
92
93
  import { homedir } from 'os'
93
94
  import { join, dirname } from 'path'
94
95
  import { MODELS, sources } from '../sources.js'
95
- import { getAvg, getVerdict, getUptime, getP95, getJitter, getStabilityScore, sortResults, filterByTier, findBestModel, parseArgs, TIER_ORDER, VERDICT_ORDER, TIER_LETTER_MAP, scoreModelForTask, getTopRecommendations, TASK_TYPES, PRIORITY_TYPES, CONTEXT_BUDGETS, formatCtxWindow, labelFromId, getProxyStatusInfo } from '../src/utils.js'
96
+ import { getAvg, getVerdict, getUptime, getP95, getJitter, getStabilityScore, sortResults, filterByTier, findBestModel, parseArgs, TIER_ORDER, VERDICT_ORDER, TIER_LETTER_MAP, scoreModelForTask, getTopRecommendations, TASK_TYPES, PRIORITY_TYPES, CONTEXT_BUDGETS, formatCtxWindow, labelFromId, getProxyStatusInfo, formatResultsAsJSON } from '../src/utils.js'
96
97
  import { loadConfig, saveConfig, getApiKey, getProxySettings, resolveApiKeys, addApiKey, removeApiKey, isProviderEnabled, saveAsProfile, loadProfile, listProfiles, deleteProfile, getActiveProfileName, setActiveProfile, _emptyProfileSettings } from '../src/config.js'
97
98
  import { buildMergedModels } from '../src/model-merger.js'
98
99
  import { ProxyServer } from '../src/proxy-server.js'
@@ -112,7 +113,7 @@ import { ensureFavoritesConfig, toFavoriteKey, syncFavoriteFlags, toggleFavorite
112
113
  import { checkForUpdateDetailed, checkForUpdate, runUpdate, promptUpdateNotification } from '../src/updater.js'
113
114
  import { promptApiKey } from '../src/setup.js'
114
115
  import { stripAnsi, maskApiKey, displayWidth, padEndDisplay, tintOverlayLines, keepOverlayTargetVisible, sliceOverlayLines, calculateViewport, sortResultsWithPinnedFavorites, renderProxyStatusLine, adjustScrollOffset } from '../src/render-helpers.js'
115
- import { renderTable } from '../src/render-table.js'
116
+ import { renderTable, PROVIDER_COLOR } from '../src/render-table.js'
116
117
  import { setOpenCodeModelData, startOpenCode, startOpenCodeDesktop, startProxyAndLaunch, autoStartProxyIfSynced, ensureProxyRunning, buildProxyTopologyFromConfig, isProxyEnabledForConfig } from '../src/opencode.js'
117
118
  import { startOpenClaw } from '../src/openclaw.js'
118
119
  import { createOverlayRenderers } from '../src/overlays.js'
@@ -120,6 +121,8 @@ import { createKeyHandler } from '../src/key-handler.js'
120
121
  import { getToolModeOrder } from '../src/tool-metadata.js'
121
122
  import { startExternalTool } from '../src/tool-launchers.js'
122
123
  import { getConfiguredInstallableProviders, installProviderEndpoints, refreshInstalledEndpoints, getInstallTargetModes, getProviderCatalogModels } from '../src/endpoint-installer.js'
124
+ import { loadCache, saveCache, clearCache, getCacheAge } from '../src/cache.js'
125
+ import { checkConfigSecurity } from '../src/security.js'
123
126
 
124
127
  // 📖 mergedModels: cross-provider grouped model list (one entry per label, N providers each)
125
128
  // 📖 mergedModelByLabel: fast lookup map from display label → merged model entry
@@ -180,6 +183,12 @@ async function main() {
180
183
  ensureTelemetryConfig(config)
181
184
  ensureFavoritesConfig(config)
182
185
 
186
+ // 📖 Check config file security — warn and offer auto-fix if permissions are too open
187
+ const securityCheck = checkConfigSecurity()
188
+ if (!securityCheck.wasSecure && !securityCheck.wasFixed) {
189
+ // 📖 User declined auto-fix or it failed — continue anyway, just warned
190
+ }
191
+
183
192
  if (cliArgs.cleanProxyMode) {
184
193
  const cleaned = cleanupOpenCodeProxyConfig()
185
194
  console.log()
@@ -453,6 +462,7 @@ async function main() {
453
462
  // 📖 Log page overlay state (X key opens it)
454
463
  logVisible: false, // 📖 Whether the log page overlay is active
455
464
  logScrollOffset: 0, // 📖 Vertical scroll offset for log overlay viewport
465
+ logShowAll: false, // 📖 Show all logs (true) or limited to 500 (false)
456
466
  // 📖 Proxy startup status — set by autoStartProxyIfSynced, consumed by Task 3 indicator
457
467
  // 📖 null = not configured/not attempted
458
468
  // 📖 { phase: 'starting' } — proxy start in progress
@@ -533,11 +543,78 @@ async function main() {
533
543
  void autoStartProxyIfSynced(config, state)
534
544
  }
535
545
 
546
+ // 📖 Load cache if available (for faster startup with cached ping results)
547
+ const cached = loadCache()
548
+ if (cached && cached.models) {
549
+ // 📖 Apply cached values to results
550
+ for (const r of state.results) {
551
+ const cachedModel = cached.models[r.modelId]
552
+ if (cachedModel) {
553
+ r.avg = cachedModel.avg
554
+ r.p95 = cachedModel.p95
555
+ r.jitter = cachedModel.jitter
556
+ r.stability = cachedModel.stability
557
+ r.uptime = cachedModel.uptime
558
+ r.verdict = cachedModel.verdict
559
+ r.status = cachedModel.status
560
+ r.httpCode = cachedModel.httpCode
561
+ r.pings = cachedModel.pings || []
562
+ }
563
+ }
564
+ }
565
+
566
+ // 📖 JSON output mode: skip TUI, output results as JSON after initial pings
567
+ if (cliArgs.jsonMode) {
568
+ console.error(chalk.cyan(' ⚡ Pinging models for JSON output...')) // 📖 Status goes to stderr so stdout stays valid JSON for piping
569
+ console.error()
570
+
571
+ // 📖 Run initial pings
572
+ const initialPing = Promise.all(state.results.map(r => pingModel(r)))
573
+ await initialPing
574
+
575
+ // 📖 Calculate final stats
576
+ state.results.forEach(r => {
577
+ r.avg = getAvg(r)
578
+ r.p95 = getP95(r)
579
+ r.jitter = getJitter(r)
580
+ r.stability = getStabilityScore(r)
581
+ r.uptime = getUptime(r)
582
+ r.verdict = getVerdict(r)
583
+ })
584
+
585
+ // 📖 Apply tier filter if specified
586
+ let outputResults = state.results
587
+ if (cliArgs.tierFilter) {
588
+ const filteredTier = TIER_LETTER_MAP[cliArgs.tierFilter]
589
+ if (filteredTier) {
590
+ outputResults = state.results.filter(r => filteredTier.includes(r.tier))
591
+ }
592
+ }
593
+
594
+ // 📖 Apply best mode filter if specified
595
+ if (cliArgs.bestMode) {
596
+ outputResults = outputResults.filter(r => ['S+', 'S', 'A+'].includes(r.tier))
597
+ }
598
+
599
+ // 📖 Sort by avg ping (ascending)
600
+ outputResults = sortResults(outputResults, 'avg', 'asc')
601
+
602
+ // 📖 Output JSON
603
+ console.log(formatResultsAsJSON(outputResults))
604
+
605
+ // 📖 Save cache before exiting
606
+ saveCache(state.results, state.pingMode)
607
+
608
+ process.exit(0)
609
+ }
610
+
536
611
  // 📖 Enter alternate screen — animation runs here, zero scrollback pollution
537
612
  process.stdout.write(ALT_ENTER)
538
613
 
539
614
  // 📖 Ensure we always leave alt screen cleanly (Ctrl+C, crash, normal exit)
540
615
  const exit = (code = 0) => {
616
+ // 📖 Save cache before exiting so next run starts faster
617
+ saveCache(state.results, state.pingMode)
541
618
  clearInterval(ticker)
542
619
  clearTimeout(state.pingIntervalObj)
543
620
  process.stdout.write(ALT_LEAVE)
@@ -590,6 +667,7 @@ async function main() {
590
667
  chalk,
591
668
  sources,
592
669
  PROVIDER_METADATA,
670
+ PROVIDER_COLOR,
593
671
  LOCAL_VERSION,
594
672
  getApiKey,
595
673
  getProxySettings,
@@ -849,6 +927,9 @@ async function main() {
849
927
 
850
928
  await initialPing
851
929
 
930
+ // 📖 Save cache after initial pings complete for faster next startup
931
+ saveCache(state.results, state.pingMode)
932
+
852
933
  // 📖 Keep interface running forever - user can select anytime or Ctrl+C to exit
853
934
  // 📖 The pings continue running in background with dynamic interval
854
935
  // 📖 User can press W to decrease interval (faster pings) or = to increase (slower)
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "free-coding-models",
3
- "version": "0.2.5",
3
+ "version": "0.2.8",
4
4
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
5
5
  "keywords": [
6
6
  "nvidia",
@@ -45,7 +45,8 @@
45
45
  "patch-openclaw.js",
46
46
  "patch-openclaw-models.js",
47
47
  "README.md",
48
- "LICENSE"
48
+ "LICENSE",
49
+ "CHANGELOG.md"
49
50
  ],
50
51
  "scripts": {
51
52
  "start": "node bin/free-coding-models.js",
@@ -56,5 +57,8 @@
56
57
  },
57
58
  "engines": {
58
59
  "node": ">=18.0.0"
60
+ },
61
+ "devDependencies": {
62
+ "@mariozechner/terminalcp": "^1.3.3"
59
63
  }
60
64
  }
package/sources.js CHANGED
@@ -146,7 +146,14 @@ export const sambanova = [
146
146
  ]
147
147
 
148
148
  // 📖 OpenRouter source - https://openrouter.ai
149
- // 📖 Free :free models with shared quota — 50 free req/day
149
+ // 📖 Free :free models with shared quota — 50 free req/day (20 req/min)
150
+ // 📖 No credits (or < $10) → 50 requests / day (20 req/min)
151
+ // 📖 ≥ $10 in credits → 1000 requests / day (20 req/min)
152
+ // 📖 Key things to know:
153
+ // 📖 • Free models (:free) never consume your credits. Your $10 stays untouched if you only use :free models.
154
+ // 📖 • Failed requests still count toward your daily quota.
155
+ // 📖 • Quota resets every day at midnight UTC.
156
+ // 📖 • Free-tier popular models may be additionally rate-limited by the provider itself during peak hours.
150
157
  // 📖 API keys at https://openrouter.ai/keys
151
158
  export const openrouter = [
152
159
  ['qwen/qwen3-coder:free', 'Qwen3 Coder 480B', 'S+', '70.6%', '262k'],
package/src/analysis.js CHANGED
@@ -39,6 +39,7 @@ import { MODELS, sources } from '../sources.js'
39
39
  import { findBestModel, filterByTier, formatCtxWindow, labelFromId, TIER_LETTER_MAP } from '../src/utils.js'
40
40
  import { isProviderEnabled, getApiKey } from '../src/config.js'
41
41
  import { ping } from '../src/ping.js'
42
+ import { PROVIDER_COLOR } from './render-table.js'
42
43
  import chalk from 'chalk'
43
44
 
44
45
  // 📖 runFiableMode: Analyze models for reliability over 10 seconds, output the best one.
@@ -99,7 +100,10 @@ export async function runFiableMode(config) {
99
100
  // 📖 Output in format: providerName/modelId
100
101
  const providerName = sources[best.providerKey]?.name ?? best.providerKey ?? 'nvidia'
101
102
  console.log(chalk.green(` ✓ Most reliable model:`))
102
- console.log(chalk.bold(` ${providerName}/${best.modelId}`))
103
+ // 📖 Color provider name the same way as in the main table
104
+ const providerRgb = PROVIDER_COLOR[best.providerKey] ?? [105, 190, 245]
105
+ const coloredProviderName = chalk.bold.rgb(...providerRgb)(providerName)
106
+ console.log(` ${coloredProviderName}/${best.modelId}`)
103
107
  console.log()
104
108
  console.log(chalk.dim(` 📊 Stats:`))
105
109
  const { getAvg, getUptime } = await import('./utils.js')
package/src/cache.js ADDED
@@ -0,0 +1,165 @@
1
+ /**
2
+ * @file cache.js
3
+ * @description Persistent cache for ping results to speed up startup.
4
+ *
5
+ * 📖 Cache file location: ~/.free-coding-models.cache.json
6
+ * 📖 File permissions: 0o600 (user read/write only — contains API timing data)
7
+ *
8
+ * 📖 Why caching matters:
9
+ * - Ping results don't change dramatically within 5 minutes
10
+ * - Repeated runs start instantly instead of waiting 10+ seconds
11
+ * - Fewer API rate limit hits for providers
12
+ *
13
+ * 📖 Cache structure:
14
+ * {
15
+ * "timestamp": 1712345678901, // Last cache write time (ms since epoch)
16
+ * "models": {
17
+ * "nvidia/deepseek-v3.2": {
18
+ * "avg": 245,
19
+ * "p95": 312,
20
+ * "jitter": 45,
21
+ * "stability": 87,
22
+ * "uptime": 95.5,
23
+ * "verdict": "Perfect",
24
+ * "status": "up",
25
+ * "httpCode": "200",
26
+ * "pings": [
27
+ * { "ms": 230, "code": "200" },
28
+ * { "ms": 260, "code": "200" }
29
+ * ]
30
+ * }
31
+ * },
32
+ * "providerTier": "normal" // Ping cadence: "fast" | "normal" | "slow"
33
+ * }
34
+ *
35
+ * 📖 Cache TTL (time-to-live):
36
+ * - 5 minutes (300,000ms) for normal operations
37
+ * - Stale cache is ignored and models are re-pinged
38
+ *
39
+ * @functions
40
+ * → getCachePath() — Returns the cache file path
41
+ * → loadCache() — Reads cache from disk, returns null if missing/stale
42
+ * → saveCache(results, providerTier) — Writes current results to cache
43
+ * → clearCache() — Deletes cache file (useful for testing)
44
+ * → isCacheFresh(cache) — Checks if cache is within TTL
45
+ *
46
+ * @exports getCachePath, loadCache, saveCache, clearCache, isCacheFresh
47
+ */
48
+
49
+ import fs from 'node:fs'
50
+ import path from 'node:path'
51
+ import os from 'node:os'
53
+
54
+ // 📖 Cache TTL: 5 minutes in milliseconds
55
+ // 📖 Ping results are considered fresh for this duration
56
+ const CACHE_TTL = 5 * 60 * 1000
57
+
58
+ // 📖 Get cache file path — platform-aware home directory resolution
59
+ export function getCachePath() {
60
+ const homeDir = os.homedir()
61
+ return path.join(homeDir, '.free-coding-models.cache.json')
62
+ }
63
+
64
+ // 📖 Load cache from disk if it exists and is valid JSON
65
+ // 📖 Returns null if file doesn't exist, is invalid JSON, or is stale
66
+ export function loadCache() {
67
+ const cachePath = getCachePath()
68
+
69
+ try {
70
+ if (!fs.existsSync(cachePath)) {
71
+ return null
72
+ }
73
+
74
+ const raw = fs.readFileSync(cachePath, 'utf-8')
75
+ const cache = JSON.parse(raw)
76
+
77
+ // 📖 Validate cache structure — must have timestamp and models object
78
+ if (!cache || typeof cache !== 'object' || !cache.timestamp || !cache.models) {
79
+ return null
80
+ }
81
+
82
+ // 📖 Check if cache is stale (older than TTL)
83
+ if (!isCacheFresh(cache)) {
84
+ return null
85
+ }
86
+
87
+ return cache
88
+ } catch (err) {
89
+ // 📖 Silently fail on parse errors — cache is optional
90
+ return null
91
+ }
92
+ }
93
+
94
+ // 📖 Save current ping results to cache
95
+ // 📖 results: Array of model result objects from the TUI
96
+ // 📖 providerTier: Current ping cadence ("fast" | "normal" | "slow")
97
+ export function saveCache(results, providerTier = 'normal') {
98
+ const cachePath = getCachePath()
99
+
100
+ try {
101
+ const models = {}
102
+
103
+ // 📖 Extract relevant data from each result object
104
+ for (const result of results) {
105
+ if (!result.modelId) continue
106
+
107
+ models[result.modelId] = {
108
+ avg: result.avg,
109
+ p95: result.p95,
110
+ jitter: result.jitter,
111
+ stability: result.stability,
112
+ uptime: result.uptime,
113
+ verdict: result.verdict,
114
+ status: result.status,
115
+ httpCode: result.httpCode,
116
+ // 📖 Only save last 20 pings to keep cache file small
117
+ pings: (result.pings || []).slice(-20)
118
+ }
119
+ }
120
+
121
+ const cache = {
122
+ timestamp: Date.now(),
123
+ models,
124
+ providerTier
125
+ }
126
+
127
+ // 📖 Write with secure permissions (user read/write only)
128
+ fs.writeFileSync(cachePath, JSON.stringify(cache, null, 2), { mode: 0o600 })
129
+ } catch (err) {
130
+ // 📖 Silently fail on write errors — caching is optional
131
+ }
132
+ }
133
+
134
+ // 📖 Check if cache is within TTL (fresh) or expired (stale)
135
+ export function isCacheFresh(cache) {
136
+ if (!cache || typeof cache.timestamp !== 'number') return false
137
+
138
+ const age = Date.now() - cache.timestamp
139
+ return age < CACHE_TTL
140
+ }
141
+
142
+ // 📖 Clear cache file — useful for testing or forcing fresh pings
143
+ export function clearCache() {
144
+ const cachePath = getCachePath()
145
+
146
+ try {
147
+ if (fs.existsSync(cachePath)) {
148
+ fs.unlinkSync(cachePath)
149
+ }
150
+ } catch (err) {
151
+ // 📖 Silently fail — cache is optional
152
+ }
153
+ }
154
+
155
+ // 📖 Get cache age in human-readable format (for debugging)
156
+ export function getCacheAge(cache) {
157
+ if (!cache || typeof cache.timestamp !== 'number') return null
158
+
159
+ const ageMs = Date.now() - cache.timestamp
160
+ const ageSec = Math.floor(ageMs / 1000)
161
+
162
+ if (ageSec < 60) return `${ageSec}s`
163
+ if (ageSec < 3600) return `${Math.floor(ageSec / 60)}m`
164
+ return `${Math.floor(ageSec / 3600)}h`
165
+ }
package/src/config.js CHANGED
@@ -263,10 +263,22 @@ export function saveConfig(config) {
263
263
  throw new Error('Written config is not a valid object')
264
264
  }
265
265
 
266
- // 📖 Verify critical data wasn't lost
266
+ // 📖 Verify critical data wasn't lost - check ALL keys are preserved
267
267
  if (config.apiKeys && Object.keys(config.apiKeys).length > 0) {
268
- if (!parsed.apiKeys || Object.keys(parsed.apiKeys).length === 0) {
269
- throw new Error('API keys were lost during write')
268
+ if (!parsed.apiKeys) {
269
+ throw new Error('apiKeys object missing after write')
270
+ }
271
+ const originalKeys = Object.keys(config.apiKeys).sort()
272
+ const writtenKeys = Object.keys(parsed.apiKeys).sort()
273
+ if (originalKeys.length > writtenKeys.length) {
274
+ const lostKeys = originalKeys.filter(k => !writtenKeys.includes(k))
275
+ throw new Error(`API keys lost during write: ${lostKeys.join(', ')}`)
276
+ }
277
+ // 📖 Also verify each key's value is not empty
278
+ for (const key of originalKeys) {
279
+ if (!parsed.apiKeys[key] || parsed.apiKeys[key].length === 0) {
280
+ throw new Error(`API key for ${key} is empty after write`)
281
+ }
270
282
  }
271
283
  }
272
284
 
@@ -742,7 +754,13 @@ export function loadProfile(config, name) {
742
754
  const nextSettings = profile.settings ? { ..._emptyProfileSettings(), ...profile.settings, proxy: normalizeProxySettings(profile.settings.proxy) } : _emptyProfileSettings()
743
755
 
744
756
  // 📖 Deep-copy the profile data into the live config (don't share references)
745
- config.apiKeys = JSON.parse(JSON.stringify(profile.apiKeys || {}))
757
+ // 📖 IMPORTANT: MERGE apiKeys instead of replacing to preserve keys not in profile
758
+ // 📖 Profile keys take priority over existing keys (allows profile-specific overrides)
759
+ const profileApiKeys = profile.apiKeys || {}
760
+ const mergedApiKeys = { ...config.apiKeys || {}, ...profileApiKeys }
761
+ config.apiKeys = JSON.parse(JSON.stringify(mergedApiKeys))
762
+
763
+ // 📖 For providers, favorites: replace with profile values (these are profile-specific settings)
746
764
  config.providers = JSON.parse(JSON.stringify(profile.providers || {}))
747
765
  config.favorites = [...(profile.favorites || [])]
748
766
  config.settings = nextSettings
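The merge rule in the comment above (profile keys win, keys absent from the profile survive) comes down to object-spread ordering. A standalone sketch with hypothetical keys:

```javascript
// Existing live config keys and an incoming profile (placeholder values)
const existingKeys = { nvidia: 'nv-old', groq: 'gq-old' }
const profileKeys = { groq: 'gq-profile' }

// Later spreads override earlier ones: profile entries win,
// while keys not present in the profile are preserved
const merged = { ...existingKeys, ...profileKeys }

console.log(merged.nvidia) // 'nv-old'      (preserved — not in profile)
console.log(merged.groq)   // 'gq-profile'  (profile override)
```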
package/src/constants.js CHANGED
@@ -74,10 +74,10 @@ export const TIER_CYCLE = [null, 'S+', 'S', 'A+', 'A', 'A-', 'B+', 'B', 'C']
74
74
 
75
75
  // 📖 Overlay background chalk functions — each overlay panel has a distinct tint
76
76
  // 📖 so users can tell Settings, Help, Recommend, and Log panels apart at a glance.
77
- export const SETTINGS_OVERLAY_BG = chalk.bgRgb(14, 20, 30)
78
- export const HELP_OVERLAY_BG = chalk.bgRgb(24, 16, 32)
79
- export const RECOMMEND_OVERLAY_BG = chalk.bgRgb(10, 25, 15) // 📖 Green tint for Smart Recommend
80
- export const LOG_OVERLAY_BG = chalk.bgRgb(10, 20, 26) // 📖 Dark blue-green tint for Log page
77
+ export const SETTINGS_OVERLAY_BG = chalk.bgRgb(0, 0, 0)
78
+ export const HELP_OVERLAY_BG = chalk.bgRgb(0, 0, 0)
79
+ export const RECOMMEND_OVERLAY_BG = chalk.bgRgb(0, 0, 0) // 📖 Smart Recommend overlay (now solid black, tint removed)
80
+ export const LOG_OVERLAY_BG = chalk.bgRgb(0, 0, 0) // 📖 Log page overlay (now solid black, tint removed)
81
81
 
82
82
  // 📖 OVERLAY_PANEL_WIDTH: fixed character width of all overlay panels so background
83
83
  // 📖 tint fills the panel consistently regardless of content length.
@@ -750,6 +750,12 @@ export function createKeyHandler(ctx) {
750
750
  state.logVisible = false
751
751
  return
752
752
  }
753
+ // 📖 A key: toggle between showing all logs and limited to 500
754
+ if (key.name === 'a') {
755
+ state.logShowAll = !state.logShowAll
756
+ state.logScrollOffset = 0
757
+ return
758
+ }
753
759
  if (key.name === 'up') { state.logScrollOffset = Math.max(0, state.logScrollOffset - 1); return }
754
760
  if (key.name === 'down') { state.logScrollOffset += 1; return }
755
761
  if (key.name === 'pageup') { state.logScrollOffset = Math.max(0, state.logScrollOffset - pageStep); return }
package/src/openclaw.js CHANGED
@@ -23,6 +23,8 @@ import { copyFileSync, existsSync, mkdirSync, readFileSync, writeFileSync } from
23
23
  import { homedir } from 'os'
24
24
  import { join } from 'path'
25
25
  import { patchOpenClawModelsJson } from '../patch-openclaw-models.js'
26
+ import { sources } from '../sources.js'
27
+ import { PROVIDER_COLOR } from './render-table.js'
26
28
 
27
29
  // 📖 OpenClaw config: ~/.openclaw/openclaw.json (JSON format, may be JSON5 in newer versions)
28
30
  const OPENCLAW_CONFIG = join(homedir(), '.openclaw', 'openclaw.json')
@@ -84,7 +86,10 @@ export async function startOpenClaw(model, apiKey) {
84
86
  api: 'openai-completions',
85
87
  models: [],
86
88
  }
87
- console.log(chalk.dim(' ➕ Added nvidia provider block to OpenClaw config (models.providers.nvidia)'))
89
+ // 📖 Color provider name the same way as in the main table
90
+ const providerRgb = PROVIDER_COLOR['nvidia'] ?? [105, 190, 245]
91
+ const coloredProviderName = chalk.bold.rgb(...providerRgb)('nvidia')
92
+ console.log(chalk.dim(` ➕ Added ${coloredProviderName} provider block to OpenClaw config (models.providers.nvidia)`))
88
93
  }
89
94
  // 📖 Ensure models array exists even if the provider block was created by an older version
90
95
  if (!Array.isArray(config.models.providers.nvidia.models)) {
package/src/opencode.js CHANGED
@@ -38,6 +38,7 @@ import { homedir } from 'os'
38
38
  import { join } from 'path'
39
39
  import { copyFileSync, existsSync } from 'fs'
40
40
  import { sources } from '../sources.js'
41
+ import { PROVIDER_COLOR } from './render-table.js'
41
42
  import { resolveCloudflareUrl } from './ping.js'
42
43
  import { ProxyServer } from './proxy-server.js'
43
44
  import { loadOpenCodeConfig, saveOpenCodeConfig, syncToOpenCode } from './opencode-sync.js'
@@ -268,7 +269,10 @@ export async function startOpenCode(model, fcmConfig) {
268
269
  },
269
270
  models: {}
270
271
  }
271
- console.log(chalk.green(' + Auto-configured NVIDIA NIM provider in OpenCode'))
272
+ // 📖 Color provider name the same way as in the main table
273
+ const providerRgb = PROVIDER_COLOR['nvidia'] ?? [105, 190, 245]
274
+ const coloredNimName = chalk.bold.rgb(...providerRgb)('NVIDIA NIM')
275
+ console.log(chalk.green(` + Auto-configured ${coloredNimName} provider in OpenCode`))
272
276
  }
273
277
 
274
278
  console.log(chalk.green(` Setting ${chalk.bold(model.label)} as default...`))
@@ -780,7 +784,10 @@ export async function startOpenCodeDesktop(model, fcmConfig) {
780
784
  },
781
785
  models: {}
782
786
  }
783
- console.log(chalk.green(' + Auto-configured NVIDIA NIM provider in OpenCode'))
787
+ // 📖 Color provider name the same way as in the main table
788
+ const providerRgb = PROVIDER_COLOR['nvidia'] ?? [105, 190, 245]
789
+ const coloredNimName = chalk.bold.rgb(...providerRgb)('NVIDIA NIM')
790
+ console.log(chalk.green(` + Auto-configured ${coloredNimName} provider in OpenCode`))
784
791
  }
785
792
 
786
793
  console.log(chalk.green(` Setting ${chalk.bold(model.label)} as default for OpenCode Desktop...`))