free-coding-models 0.3.3 → 0.3.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,22 @@
 
  ---
 
+ ## 0.3.5
+
+ ### Fixed
+ - **Claude Code beta-route compatibility**: FCM Proxy V2 now matches routes on the URL pathname, so Anthropic requests like `/v1/messages?beta=true` and `/v1/messages/count_tokens?beta=true` resolve correctly instead of failing with a fake “selected model may not exist” error.
+ - **Claude proxy parity with `free-claude-code`**: The Claude integration was revalidated against the real `claude` binary, and the proxy-side Claude alias mapping now reaches the upstream provider again in the exact `free-claude-code` style flow.
+
+ ## 0.3.4
+
+ ### Added
+ - **Proxy root landing JSON**: `GET /` on FCM Proxy V2 now returns a small unauthenticated status payload, so browser checks no longer fail with `{"error":"Unauthorized"}`.
+ - **`daemon stop` CLI command**: The public CLI now supports `free-coding-models daemon stop`, matching the existing daemon manager capability and the documented workflow.
+
+ ### Fixed
+ - **README/UI parity restored**: The docs now match the current product surface, including `160` models, the `Used` token-history column, and the current launcher/proxy behavior.
+ - **Malformed config sections are normalized on load**: Invalid `apiKeys`, `providers`, or `settings` values are now coerced back to safe empty objects instead of leaking broken runtime shapes into the app.
+
  ## 0.3.3
 
  ### Fixed
package/README.md CHANGED
@@ -2,7 +2,7 @@
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
- <img src="https://img.shields.io/badge/models-159-76b900?logo=nvidia" alt="models count">
+ <img src="https://img.shields.io/badge/models-160-76b900?logo=nvidia" alt="models count">
  <img src="https://img.shields.io/badge/providers-20-blue" alt="providers count">
  </p>
 
@@ -81,7 +81,7 @@ By Vanessa Depraute
  - **📈 Rolling averages** — Avg calculated from ALL successful pings since start
  - **📊 Uptime tracking** — Percentage of successful pings shown in real-time
  - **📐 Stability score** — Composite 0–100 score measuring consistency (p95, jitter, spikes, uptime)
- - **📊 Usage tracking** — Monitor remaining quota for each exact provider/model pair when the provider exposes it; otherwise the TUI shows a green dot instead of a misleading percentage.
+ - **📊 Token usage tracking** — The proxy logs prompt+completion token usage per exact provider/model pair, and the TUI surfaces that history in the `Used` column and the request log overlay.
  - **📜 Request Log Overlay** — Press `X` to inspect recent proxied requests and token usage for exact provider/model pairs.
  - **📋 Changelog Overlay** — Press `N` to browse all versions in an index, then `Enter` to view details for any version with full scroll support
  - **🛠 MODEL_NOT_FOUND Rotation** — If a specific provider returns a 404 for a model, the TUI intelligently rotates through other available providers for the same model.
@@ -89,7 +89,7 @@ By Vanessa Depraute
  - **🎮 Interactive selection** — Navigate with arrow keys directly in the table, press Enter to act
  - **💻 OpenCode integration** — Auto-detects NIM setup, sets model as default, launches OpenCode
  - **🦞 OpenClaw integration** — Sets selected model as default provider in `~/.openclaw/openclaw.json`
- - **🧰 Public tool launchers** — `Enter` auto-configures and launches 10+ tools: `OpenCode CLI`, `OpenCode Desktop`, `OpenClaw`, `Crush`, `Goose`, `Aider`, `Claude Code`, `Codex`, `Gemini`, `Qwen`, `OpenHands`, `Amp`, and `Pi`. All tools auto-select the chosen model on launch.
+ - **🧰 Public tool launchers** — `Enter` auto-configures and launches all 13 tool modes: `OpenCode CLI`, `OpenCode Desktop`, `OpenClaw`, `Crush`, `Goose`, `Aider`, `Claude Code`, `Codex`, `Gemini`, `Qwen`, `OpenHands`, `Amp`, and `Pi`. All tools auto-select the chosen model on launch.
  - **🔌 Install Endpoints flow** — Press `Y` to install one configured provider into the compatible persisted-config tools, with a choice between **Direct Provider** (pure API) or **FCM Proxy V2** (key rotation + usage tracking), then pick all models or a curated subset
  - **📝 Feature Request (J key)** — Send anonymous feedback directly to the project team
  - **🐛 Bug Report (I key)** — Send anonymous bug reports directly to the project team
@@ -182,13 +182,11 @@ bunx free-coding-models YOUR_API_KEY
 
  ### 🆕 What's New
 
- **Version 0.3.3 switches Claude Code to the exact `free-claude-code` pattern instead of injecting FCM slugs into Claude itself:**
+ **Version 0.3.5 fixes the main Claude Code proxy compatibility bug found in real-world use:**
 
- - **Claude Code is proxy-only now** — FCM no longer tries to make Claude Code “select” `gpt-oss-120b` or any other free model directly. Claude still speaks in Claude model ids, and the proxy picks the real backend.
- - **Proxy-side `MODEL` / `MODEL_OPUS` / `MODEL_SONNET` / `MODEL_HAIKU` routing now drives Claude** — When you launch Claude Code from a selected FCM row, FCM writes the selected proxy slug into the proxy's Anthropic routing config, exactly like `free-claude-code`, then lets the daemon hot-reload it.
- - **Claude no longer gets `--model <fcm-slug>` or `ANTHROPIC_MODEL=<fcm-slug>`** — the launcher now passes only `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN`, which is the same clean client-side contract used by `free-claude-code`.
- - **Claude is now forced onto a real Claude alias at launch** — FCM starts Claude Code with `--model sonnet`, so stale local values like `gpt-oss-120b` cannot fail client-side before the proxy even receives the request.
- - **Claude sync/install leftovers were removed from the proxy path** — Claude Code is no longer treated like a persisted-config target for proxy sync; its integration is runtime-only, with fake Claude ids resolved by the proxy.
+ - **Claude Code beta-route requests now work** — the proxy accepts Anthropic URLs like `/v1/messages?beta=true` and `/v1/messages/count_tokens?beta=true`, which is how recent Claude Code builds really call the API.
+ - **Claude proxy flow now behaves like `free-claude-code` on the routing layer** — fake Claude model ids still map proxy-side to the selected free backend model, but the route matcher no longer breaks before that mapping can run.
+ - **The fix was validated against the real `claude` binary**, not just unit tests. The exact failure `selected model (claude-sonnet-4-6) may not exist` no longer reproduces in local end-to-end runs.
 
  ---
 
@@ -246,7 +244,7 @@ Running `free-coding-models` with no launcher flag starts in **OpenCode CLI** mo
  - The active target is always visible in the header badge before you press `Enter`
 
  **How it works:**
- 1. **Ping phase** — All enabled models are pinged in parallel (up to 159 across 20 providers)
+ 1. **Ping phase** — All enabled models are pinged in parallel (up to 160 across 20 providers)
  2. **Continuous monitoring** — Models start at 2s re-pings for 60s, then fall back to 10s automatically, and slow to 30s after 5 minutes idle unless you force 4s mode with `W`
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
  4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
@@ -429,7 +427,7 @@ TOGETHER_API_KEY=together_xxx free-coding-models
 
  ## 🤖 Coding Models
 
- **159 coding models** across 20 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
+ **160 coding models** across 20 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
 
  ### Alibaba Cloud (DashScope) (8 models)
 
@@ -513,7 +511,6 @@ The main table displays one row per model with the following columns:
  | **Stability** | `B` | Composite 0–100 consistency score (see [Stability Score](#-stability-score)) |
  | **Up%** | `U` | Uptime — percentage of successful pings |
  | **Used** | — | Total prompt+completion tokens consumed in logs for this exact provider/model pair, shown in `k` or `M` |
- | **Usage** | `G` | Provider-scoped quota remaining when measurable; otherwise a green dot means usage % is not applicable/reliable for that provider |
 
  ### Verdict values
 
@@ -602,6 +599,8 @@ free-coding-models daemon uninstall # Remove OS service completely
  free-coding-models daemon logs # Show recent service logs
  ```
 
+ For a quick browser sanity-check, open [http://127.0.0.1:18045/](http://127.0.0.1:18045/) or [http://127.0.0.1:18045/v1/health](http://127.0.0.1:18045/v1/health) while the proxy is running.
+
  ### Service management
 
  The dedicated **FCM Proxy V2** overlay (accessible via `J` from main TUI, or Settings → Enter) provides full control:
@@ -271,6 +271,18 @@ async function main() {
  console.log()
  process.exit(result.success ? 0 : 1)
  }
+ if (daemonSubcmd === 'stop') {
+ const result = dm.stopDaemon()
+ console.log()
+ if (result.success) {
+ console.log(chalk.greenBright(' ✅ FCM Proxy V2 service stopped.'))
+ console.log(chalk.dim(' The service stays installed and can be restarted later.'))
+ } else {
+ console.log(chalk.red(` ❌ Stop failed: ${result.error}`))
+ }
+ console.log()
+ process.exit(result.success ? 0 : 1)
+ }
  if (daemonSubcmd === 'logs') {
  const logPath = dm.getDaemonLogPath()
  console.log(chalk.dim(` Log file: ${logPath}`))
@@ -283,7 +295,7 @@ async function main() {
  process.exit(0)
  }
  console.log(chalk.red(` Unknown command: ${daemonSubcmd}`))
- console.log(chalk.dim(' Usage: free-coding-models daemon [status|install|uninstall|restart|logs]'))
+ console.log(chalk.dim(' Usage: free-coding-models daemon [status|install|uninstall|restart|stop|logs]'))
  process.exit(1)
  }
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "free-coding-models",
- "version": "0.3.3",
+ "version": "0.3.5",
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
@@ -4,7 +4,7 @@
  * and OpenAI Chat Completions API.
  *
  * 📖 This is the key module that enables Claude Code to work natively through the
- * FCM proxy without needing the external "free-claude-code" Python proxy.
+ * FCM proxy without needing the external Claude proxy integration.
  * Claude Code sends requests in Anthropic format (POST /v1/messages) and this
  * module translates them to OpenAI format for the upstream providers, then
  * translates the responses back.
package/src/cli-help.js CHANGED
@@ -40,6 +40,7 @@ const COMMANDS = [
  { command: 'daemon install', description: 'Install and start the background service' },
  { command: 'daemon uninstall', description: 'Remove the background service' },
  { command: 'daemon restart', description: 'Restart the background service' },
+ { command: 'daemon stop', description: 'Gracefully stop the background service without uninstalling it' },
  { command: 'daemon logs', description: 'Print the latest daemon log lines' },
  ]
 
@@ -70,7 +71,7 @@ export function buildCliHelpLines({ chalk = null, indent = '', title = 'CLI Help
 
  lines.push(`${indent}${paint(chalk, chalk?.bold, title)}`)
  lines.push(`${indent}${paint(chalk, chalk?.dim, 'Usage: free-coding-models [apiKey] [options]')}`)
- lines.push(`${indent}${paint(chalk, chalk?.dim, ' free-coding-models daemon [status|install|uninstall|restart|logs]')}`)
+ lines.push(`${indent}${paint(chalk, chalk?.dim, ' free-coding-models daemon [status|install|uninstall|restart|stop|logs]')}`)
  lines.push('')
  lines.push(`${indent}${paint(chalk, chalk?.bold, 'Tool Flags')}`)
  for (const entry of launchFlags) {
package/src/config.js CHANGED
@@ -183,10 +183,10 @@ export function loadConfig() {
  try {
  const raw = readFileSync(CONFIG_PATH, 'utf8').trim()
  const parsed = JSON.parse(raw)
- // 📖 Ensure the shape is always complete — fill missing sections with defaults
- if (!parsed.apiKeys) parsed.apiKeys = {}
- if (!parsed.providers) parsed.providers = {}
- if (!parsed.settings || typeof parsed.settings !== 'object') parsed.settings = {}
+ // 📖 Ensure the shape is always complete — fill missing or corrupted sections with defaults.
+ if (!parsed.apiKeys || typeof parsed.apiKeys !== 'object' || Array.isArray(parsed.apiKeys)) parsed.apiKeys = {}
+ if (!parsed.providers || typeof parsed.providers !== 'object' || Array.isArray(parsed.providers)) parsed.providers = {}
+ if (!parsed.settings || typeof parsed.settings !== 'object' || Array.isArray(parsed.settings)) parsed.settings = {}
  if (typeof parsed.settings.hideUnconfiguredModels !== 'boolean') parsed.settings.hideUnconfiguredModels = true
  parsed.settings.proxy = normalizeProxySettings(parsed.settings.proxy)
  // 📖 Favorites: list of "providerKey/modelId" pinned rows.
@@ -443,9 +443,11 @@ export function validateConfigFile(options = {}) {
  throw new Error('Config is not a valid object')
  }
 
- // 📖 Check for critical corruption (apiKeys should be an object if it exists)
- if (parsed.apiKeys !== null && parsed.apiKeys !== undefined && typeof parsed.apiKeys !== 'object') {
- throw new Error('apiKeys field is corrupted')
+ // 📖 Check for critical corruption (apiKeys should be an object if it exists).
+ // 📖 Treat this as recoverable — loadConfig() will normalize the value safely.
+ if (parsed.apiKeys !== null && parsed.apiKeys !== undefined
+ && (typeof parsed.apiKeys !== 'object' || Array.isArray(parsed.apiKeys))) {
+ console.warn('⚠️ apiKeys field malformed; it will be normalized on load')
  }
 
  return { valid: true }
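The coercion rule shared by `loadConfig()` and the relaxed validator above can be sketched standalone (`normalizeSection` is a hypothetical name used here for illustration, not an export of the package):

```javascript
// Hypothetical helper mirroring the check applied to apiKeys/providers/settings:
// anything that is not a plain object (null, array, string, number) becomes {}.
function normalizeSection(value) {
  if (!value || typeof value !== 'object' || Array.isArray(value)) return {}
  return value
}

console.log(normalizeSection({ nvidia: 'nvapi-xxx' })) // kept as-is
console.log(normalizeSection(['broken']))              // coerced to {}
console.log(normalizeSection('oops'))                  // coerced to {}
console.log(normalizeSection(null))                    // coerced to {}
```

The `Array.isArray` guard matters because `typeof [] === 'object'`, so the old checks would have let an array leak through as a "valid" section.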
package/src/opencode.js CHANGED
@@ -577,7 +577,7 @@ export async function ensureProxyRunning(fcmConfig, { forceRestart = false } = {
 
  // 📖 Always prefer the background daemon when it is available. Launcher code
  // 📖 can update config and let the daemon hot-reload, which is closer to the
- // 📖 free-claude-code model than spinning up tool-specific local proxies.
+ // 📖 Claude proxy model than spinning up tool-specific local proxies.
  try {
  const daemonRunning = await isDaemonRunning()
  if (daemonRunning) {
@@ -108,6 +108,21 @@ function sendJson(res, statusCode, body) {
  res.end(json)
  }
 
+ /**
+ * 📖 Match routes on the URL pathname only so Claude Code's `?beta=true`
+ * 📖 Anthropic requests resolve exactly like FastAPI routes do in free-claude-code.
+ *
+ * @param {http.IncomingMessage} req
+ * @returns {string}
+ */
+ function getRequestPathname(req) {
+ try {
+ return new URL(req.url || '/', 'http://127.0.0.1').pathname || '/'
+ } catch {
+ return req.url || '/'
+ }
+ }
+
  function normalizeRequestedModel(modelId) {
  if (typeof modelId !== 'string') return null
  const trimmed = modelId.trim()
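The behavior the new helper relies on — the WHATWG `URL` parser dropping the query string — can be checked in isolation. A sketch taking the raw URL as a string rather than the request object:

```javascript
// Pathname-only matching: ?beta=true and any other query string no longer
// affect which route a request resolves to.
function pathnameOf(rawUrl) {
  try {
    return new URL(rawUrl || '/', 'http://127.0.0.1').pathname || '/'
  } catch {
    return rawUrl || '/'
  }
}

console.log(pathnameOf('/v1/messages?beta=true'))              // '/v1/messages'
console.log(pathnameOf('/v1/messages/count_tokens?beta=true')) // '/v1/messages/count_tokens'
console.log(pathnameOf(''))                                    // '/'
```

The base origin passed to `new URL` is arbitrary — only the pathname is kept — and the `catch` branch preserves the old exact-match behavior for any URL the parser rejects.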
@@ -303,8 +318,16 @@ export class ProxyServer {
  // ── Request routing ────────────────────────────────────────────────────────
 
  _handleRequest(req, res) {
+ const pathname = getRequestPathname(req)
+
+ // 📖 Root endpoint is unauthenticated so a browser hit on http://127.0.0.1:{port}/
+ // 📖 gives a useful status payload instead of a misleading Unauthorized error.
+ if (req.method === 'GET' && pathname === '/') {
+ return this._handleRoot(res)
+ }
+
  // 📖 Health endpoint is unauthenticated so external monitors can probe it
- if (req.method === 'GET' && req.url === '/v1/health') {
+ if (req.method === 'GET' && pathname === '/v1/health') {
  return this._handleHealth(res)
  }
 
@@ -313,11 +336,11 @@
  return sendJson(res, 401, { error: 'Unauthorized' })
  }
 
- if (req.method === 'GET' && req.url === '/v1/models') {
+ if (req.method === 'GET' && pathname === '/v1/models') {
  this._handleModels(res)
- } else if (req.method === 'GET' && req.url === '/v1/stats') {
+ } else if (req.method === 'GET' && pathname === '/v1/stats') {
  this._handleStats(res)
- } else if (req.method === 'POST' && req.url === '/v1/chat/completions') {
+ } else if (req.method === 'POST' && pathname === '/v1/chat/completions') {
  this._handleChatCompletions(req, res).catch(err => {
  console.error('[proxy] Internal error:', err)
  // 📖 Return 413 for body-too-large, generic 500 for everything else — never leak stack traces
@@ -325,7 +348,7 @@
  const msg = err.statusCode === 413 ? 'Request body too large' : 'Internal server error'
  sendJson(res, status, { error: msg })
  })
- } else if (req.method === 'POST' && req.url === '/v1/messages') {
+ } else if (req.method === 'POST' && pathname === '/v1/messages') {
  // 📖 Anthropic Messages API translation — enables Claude Code compatibility
  this._handleAnthropicMessages(req, res, authContext).catch(err => {
  console.error('[proxy] Internal error:', err)
@@ -333,26 +356,26 @@
  const msg = err.statusCode === 413 ? 'Request body too large' : 'Internal server error'
  sendJson(res, status, { error: msg })
  })
- } else if (req.method === 'POST' && req.url === '/v1/messages/count_tokens') {
+ } else if (req.method === 'POST' && pathname === '/v1/messages/count_tokens') {
  this._handleAnthropicCountTokens(req, res).catch(err => {
  console.error('[proxy] Internal error:', err)
  const status = err.statusCode === 413 ? 413 : 500
  const msg = err.statusCode === 413 ? 'Request body too large' : 'Internal server error'
  sendJson(res, status, { error: msg })
  })
- } else if (req.method === 'POST' && req.url === '/v1/responses') {
+ } else if (req.method === 'POST' && pathname === '/v1/responses') {
  this._handleResponses(req, res).catch(err => {
  console.error('[proxy] Internal error:', err)
  const status = err.statusCode === 413 ? 413 : 500
  const msg = err.statusCode === 413 ? 'Request body too large' : 'Internal server error'
  sendJson(res, status, { error: msg })
  })
- } else if (req.method === 'POST' && req.url === '/v1/completions') {
+ } else if (req.method === 'POST' && pathname === '/v1/completions') {
  // These legacy/alternative OpenAI endpoints are not supported by the proxy.
  // Return 501 (not 404) so callers get a clear signal instead of silently failing.
  sendJson(res, 501, {
  error: 'Not Implemented',
- message: `${req.url} is not supported by this proxy. Use POST /v1/chat/completions instead.`,
+ message: `${pathname} is not supported by this proxy. Use POST /v1/chat/completions instead.`,
  })
  } else {
  sendJson(res, 404, { error: 'Not found' })
@@ -780,6 +803,26 @@ export class ProxyServer {
 
  // ── GET /v1/health ──────────────────────────────────────────────────────────
 
+ /**
+ * 📖 Friendly unauthenticated landing endpoint for browsers and quick local checks.
+ */
+ _handleRoot(res) {
+ const status = this.getStatus()
+ const uniqueModels = new Set(this._accounts.map(acct => acct.proxyModelId || acct.modelId)).size
+ sendJson(res, 200, {
+ status: 'ok',
+ service: 'fcm-proxy-v2',
+ running: status.running,
+ accountCount: status.accountCount,
+ modelCount: uniqueModels,
+ endpoints: {
+ health: '/v1/health',
+ models: '/v1/models',
+ stats: '/v1/stats',
+ },
+ })
+ }
+
  /**
  * 📖 Health endpoint for daemon liveness checks. Unauthenticated so external
  * monitors (TUI, launchctl, systemd) can probe without needing the token.
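The `modelCount` in that payload deduplicates accounts with a `Set`, since several API keys can back the same model. A minimal sketch with illustrative account shapes (not the package's real data structures):

```javascript
// Dedupe on the proxy slug, falling back to the raw model id when no slug exists.
const accounts = [
  { proxyModelId: 'gpt-oss-120b' }, // key A
  { proxyModelId: 'gpt-oss-120b' }, // key B, same model
  { modelId: 'qwen3-coder' },       // no proxy slug, falls back to modelId
]
const uniqueModels = new Set(accounts.map(a => a.proxyModelId || a.modelId)).size
console.log(uniqueModels) // 2
```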
package/src/proxy-sync.js CHANGED
@@ -34,7 +34,7 @@ const PROXY_PROVIDER_ID = 'fcm-proxy'
 
  // 📖 Tools that support proxy sync (have base URL + API key config)
  // 📖 Gemini is excluded — it only stores a model name, no URL/key fields.
- // 📖 Claude Code is excluded too: its free-claude-code style integration is
+ // 📖 Claude Code is excluded too: its Claude-proxy-style integration is
  // 📖 runtime-only now, with fake Claude ids handled by the proxy itself.
  export const PROXY_SYNCABLE_TOOLS = [
  'opencode', 'opencode-desktop', 'openclaw', 'crush', 'goose', 'pi',
@@ -67,7 +67,7 @@ export function buildProxyTopologyFromConfig(fcmConfig, mergedModels, sourcesMap
  return {
  accounts,
  proxyModels,
- // 📖 Mirror free-claude-code: proxy-side Claude family routing is config-driven.
+ // 📖 Mirror the Claude proxy pattern: proxy-side Claude family routing is config-driven.
  anthropicRouting: getProxySettings(fcmConfig).anthropicRouting,
  }
  }
@@ -19,7 +19,7 @@
  * 📖 Crush: writes crush.json with provider config + models.large/small defaults
  * 📖 Pi: uses --provider/--model CLI flags for guaranteed auto-selection
  * 📖 Aider: writes ~/.aider.conf.yml + passes --model flag
- * 📖 Claude Code: mirrors free-claude-code by keeping fake Claude model ids on the client,
+ * 📖 Claude Code: mirrors the Claude proxy pattern by keeping fake Claude model ids on the client,
  * forcing a valid Claude alias at launch, and moving MODEL / MODEL_OPUS / MODEL_SONNET /
  * MODEL_HAIKU routing into the proxy
  * 📖 Codex CLI: uses a custom model_provider override so Codex stays in explicit API-provider mode
@@ -27,7 +27,7 @@
  *
  * @functions
  * → `resolveLauncherModelId` — choose the provider-specific id or proxy slug for a launch
- * → `waitForClaudeProxyRouting` — wait until the daemon/proxy has reloaded the free-claude-code style Claude-family mapping
+ * → `waitForClaudeProxyRouting` — wait until the daemon/proxy has reloaded the Claude-proxy-style Claude-family mapping
  * → `buildClaudeProxyArgs` — force a valid Claude alias so stale local non-Claude selections cannot break launch
  * → `buildCodexProxyArgs` — force Codex into a proxy-backed custom provider config
  * → `inspectGeminiCliSupport` — detect whether the installed Gemini CLI can use proxy mode safely
@@ -646,7 +646,7 @@ export async function startExternalTool(mode, model, config) {
  }
 
  if (mode === 'claude-code') {
- // 📖 Mirror free-claude-code exactly on the client side:
+ // 📖 Mirror the Claude proxy pattern exactly on the client side:
  // 📖 Claude gets only ANTHROPIC_BASE_URL + ANTHROPIC_AUTH_TOKEN, and the
  // 📖 proxy owns the fake Claude model ids -> real backend model mapping.
  const launchModelId = resolveLauncherModelId(model, true)