free-coding-models 0.3.24 → 0.3.26

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,6 +1,57 @@
  # Changelog
  ---
 
+ ## [0.3.26] - 2026-03-27
+
+ ### Added
+ - **Groq**: Added Compound + Compound Mini; fixed Llama 4 Scout context (10M → 131k)
+ - **OpenRouter**: Added MiniMax M2.5, Nemotron 3 Super, Hermes 3 405B, Gemma 3n E4B
+ - **HuggingFace**: Replaced invalid DeepSeek-V3-Coder + outdated starcoder2-15b with DeepSeek V3 0324 + Qwen2.5 Coder 32B
+ - **Replicate**: Replaced CodeLlama 70B (2023) with DeepSeek V3 0324 + Llama 3.3 70B
+ - **Cloudflare**: Added Kimi K2.5, GLM-4.7-Flash, Llama 4 Scout, Nemotron 3 Super, Qwen3 30B MoE
+ - **Scaleway**: Added Qwen3.5 400B VLM + Mistral Large 675B
+ - **DeepInfra**: Replaced Mixtral Code with Nemotron 3 Super + DeepSeek V3 0324 + Qwen3 235B
+ - **Fireworks**: Added Llama 4 Maverick + Qwen3 235B
+ - **Hyperbolic**: Added Qwen3 80B Thinking variant
+ - **Together AI**: Added Qwen3.5 400B VLM, MiniMax M2.5, GLM-5
+
+ ### Changed
+ - **Rovo Dev CLI**: Updated Sonnet 4 → Sonnet 4.6, added Opus 4.6
+ - **Groq**: Removed 4 deprecated models (R1 Distill 70B, QwQ 32B, Kimi K2, Maverick)
+ - **OpenRouter**: Updated context sizes for multiple models
+
+ ## [0.3.25] - 2026-03-19
+
+ ### Added
+ - **Installed Models Manager** — View, launch, and disable models configured in external tools (Goose, Crush, Aider, Qwen, Pi, OpenHands, Amp)
+   - Access via Command Palette (Ctrl+P) → "Installed models"
+   - Scans all supported tool configs automatically on opening
+   - Displays all models per tool (e.g., Crush shows both large and small models)
+   - Actions: Launch (Enter), Disable (D) with backup, Reinstall (R)
+   - Soft delete: Comments out model entries and saves backups to ~/.free-coding-models-backups.json
+ - **Full mouse support for the TUI** — Click, right-click, double-click, and scroll work throughout the interface
+ - **Click column headers** — Sort by any column (click Rank, Tier, SWE%, CTX, Model, Provider, etc.)
+ - **Click model rows** — Move cursor to any model (left-click)
+ - **Right-click model rows** — Toggle favorite (same as F key)
+ - **Double-click model rows** — Select model and launch (same as Enter)
+ - **Mouse wheel** — Scroll through the main table, overlays (Settings, Help, Changelog), and command palette results
+ - **Click CLI Tools header** — Cycle through tool modes (same as Z key)
+ - **Click Tier header** — Cycle through tier filters (same as T key)
+ - **Click footer hotkeys** — Trigger any visible hotkey from the footer
+ - **Command palette click** — Click inside to select items, double-click to confirm; click outside to close
+ - **Recommend questionnaire click** — Click on option rows to select, double-click to confirm
+ - **Mouse unit tests** — 46 new tests covering SGR sequence parsing, double-click detection, modifiers, and COLUMN_SORT_MAP validation
+
+ ### Changed
+ - **CLI Tools column redesigned** — Renamed from "Compatible with" to "CLI Tools", with left-aligned emoji display (compatible tools packed left instead of fixed slot positions)
+ - **Sort arrow overflow fixed** — SWE%, CTX, Stability, and Uptime columns now properly fit within their widths when sorted (arrow now `↑SWE%` instead of `↑ SWE%`)
+ - **Mouse sequence suppression** — SGR mouse sequences no longer leak into keypress handlers (prevents spurious sort/filter triggers when clicking)
+
+ ### Fixed
+ - **Command palette scroll leak** — Mouse wheel no longer injects raw SGR sequence bytes into the command palette text input
+ - **Double-action on model click** — Clicking a model row now only moves the cursor; it no longer simultaneously triggers column sorting
+ - **Mouse event listener order** — Fixed race condition where readline emitted keypress events before mouse data was processed
+
  ## [0.3.24] - 2026-03-19
 
  ### Added
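The 0.3.25 notes above mention SGR sequence parsing. For readers unfamiliar with the format: an SGR (xterm mode 1006) mouse report looks like `\x1b[<b;x;yM` for a press and `\x1b[<b;x;ym` for a release, where `b` packs button and modifier bits. A minimal parser sketch — an illustration of the encoding, not the package's actual `mouse.js`:

```javascript
// Minimal SGR (1006) mouse-report parser — illustrative sketch only.
// "\x1b[<0;35;20M" = left button pressed at column 35, row 20.
function parseSgrMouse(seq) {
  const m = /^\x1b\[<(\d+);(\d+);(\d+)([Mm])$/.exec(seq)
  if (!m) return null
  const code = Number(m[1])
  return {
    button: code & 3,          // 0=left, 1=middle, 2=right (for non-wheel events)
    shift: Boolean(code & 4),  // modifier bit
    wheel: Boolean(code & 64), // 64/65 = wheel up/down
    x: Number(m[2]),           // 1-based column
    y: Number(m[3]),           // 1-based row
    pressed: m[4] === 'M',     // 'M' = press, 'm' = release
  }
}
```

The trailing `M`/`m` bytes are exactly what leaks into readline as spurious keypresses when such a sequence is not intercepted — the motivation for the 0.3.25 suppression fix.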
package/README.md CHANGED
@@ -214,7 +214,7 @@ To use Zen models: sign up at [opencode.ai/auth](https://opencode.ai/auth) and e
 
  ### Tool Compatibility
 
- The TUI shows a **"Compatible with"** column displaying colored emojis for each supported tool. When a tool mode is active (via `Z`), models incompatible with that tool are highlighted with a dark red background so you can instantly see which models work with your current tool.
+ When a tool mode is active (via `Z`), models incompatible with that tool are highlighted with a dark red background so you can instantly see which models work with your current tool.
 
  | Model Type | Compatible Tools |
  |------------|-----------------|
@@ -229,6 +229,8 @@ The TUI shows a **"Compatible with"** column displaying colored emojis for each
 
  ## ⌨️ TUI Keys
 
+ ### Keyboard
+
  | Key | Action |
  |-----|--------|
  | `↑↓` | Navigate models |
@@ -251,6 +253,22 @@ The TUI shows a **"Compatible with"** column displaying colored emojis for each
  | `K` | Help overlay |
  | `Ctrl+C` | Exit |
 
+ ### Mouse
+
+ | Action | Result |
+ |--------|--------|
+ | **Click column header** | Sort by that column |
+ | **Click Tier header** | Cycle tier filter |
+ | **Click CLI Tools header** | Cycle tool mode |
+ | **Click model row** | Move cursor to model |
+ | **Double-click model row** | Select and launch model |
+ | **Right-click model row** | Toggle favorite |
+ | **Scroll wheel** | Navigate table / overlays / palette |
+ | **Click footer hotkey** | Trigger that action |
+ | **Click command palette item** | Select item (double-click to confirm) |
+ | **Click recommend option** | Select option (double-click to confirm) |
+ | **Click outside modal** | Close command palette |
+
  → **[Stability score & column reference](./docs/stability.md)**
 
  ---
@@ -268,7 +286,7 @@ The TUI shows a **"Compatible with"** column displaying colored emojis for each
  - **⚡️ Command Palette** — `Ctrl+P` opens a searchable action launcher for filters, sorting, overlays, and quick toggles
  - **Install Endpoints** — push a full provider catalog into any tool's config (from Settings `P` or ⚡️ Command Palette)
  - **Missing tool bootstrap** — detect absent CLIs, offer one-click install, then continue the selected launch automatically
- - **Tool compatibility matrix** — colored emojis show which tools each model supports; incompatible rows highlighted in dark red when a tool mode is active
+ - **Tool compatibility matrix** — incompatible rows highlighted in dark red when a tool mode is active
  - **OpenCode Zen models** — 8 free models exclusive to OpenCode CLI/Desktop, powered by the Zen AI gateway
  - **Width guardrail** — shows a warning instead of a broken table in narrow terminals
  - **Readable everywhere** — semantic theme palette keeps table rows, overlays, badges, and help screens legible in dark and light terminals
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "free-coding-models",
- "version": "0.3.24",
+ "version": "0.3.26",
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
package/sources.js CHANGED
@@ -99,15 +99,13 @@ export const nvidiaNim = [
  // 📖 Free API keys available at https://console.groq.com/keys
  export const groq = [
  ['llama-3.3-70b-versatile', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
- ['meta-llama/llama-4-scout-17b-16e-preview', 'Llama 4 Scout', 'A', '44.0%', '10M'],
- ['meta-llama/llama-4-maverick-17b-128e-preview', 'Llama 4 Maverick', 'S', '62.0%', '1M'],
- ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
- ['qwen-qwq-32b', 'QwQ 32B', 'A+', '50.0%', '131k'],
- ['moonshotai/kimi-k2-instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
+ ['meta-llama/llama-4-scout-17b-16e-preview', 'Llama 4 Scout', 'A', '44.0%', '131k'],
  ['llama-3.1-8b-instant', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
  ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
  ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
  ['qwen/qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '131k'],
+ ['groq/compound', 'Groq Compound', 'A', '45.0%', '131k'],
+ ['groq/compound-mini', 'Groq Compound Mini', 'B+', '32.0%', '131k'],
  ]
 
  // 📖 Cerebras source - https://cloud.cerebras.ai
@@ -158,36 +156,43 @@ export const sambanova = [
  // 📖 API keys at https://openrouter.ai/keys
  export const openrouter = [
  ['qwen/qwen3-coder:free', 'Qwen3 Coder 480B', 'S+', '70.6%', '262k'],
- ['z-ai/glm-4.5-air:free', 'GLM 4.5 Air', 'S+', '72.0%', '128k'],
- ['google/gemma-3-27b-it:free', 'Gemma 3 27B', 'B', '22.0%', '128k'],
+ ['minimax/minimax-m2.5:free', 'MiniMax M2.5', 'S+', '74.0%', '197k'],
+ ['z-ai/glm-4.5-air:free', 'GLM 4.5 Air', 'S+', '72.0%', '131k'],
  ['stepfun/step-3.5-flash:free', 'Step 3.5 Flash', 'S+', '74.4%', '256k'],
- ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
- ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '128k'],
- ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '128k'],
+ ['nvidia/nemotron-3-super-120b-a12b:free', 'Nemotron 3 Super', 'A+', '56.0%', '262k'],
+ ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '131k'],
+ ['nousresearch/hermes-3-llama-3.1-405b:free', 'Hermes 3 405B', 'A', '44.0%', '131k'],
+ ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '131k'],
+ ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '131k'],
  ['nvidia/nemotron-3-nano-30b-a3b:free', 'Nemotron Nano 30B', 'A', '43.0%', '128k'],
- ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
+ ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '131k'],
  ['mistralai/mistral-small-3.1-24b-instruct:free', 'Mistral Small 3.1', 'B+', '30.0%', '128k'],
- ['google/gemma-3-12b-it:free', 'Gemma 3 12B', 'C', '15.0%', '128k'],
+ ['google/gemma-3-27b-it:free', 'Gemma 3 27B', 'B', '22.0%', '131k'],
+ ['google/gemma-3-12b-it:free', 'Gemma 3 12B', 'C', '15.0%', '131k'],
+ ['google/gemma-3n-e4b-it:free', 'Gemma 3n E4B', 'C', '10.0%', '8k'],
  ]
 
  // 📖 Hugging Face Inference source - https://huggingface.co
  // 📖 OpenAI-compatible endpoint via router.huggingface.co/v1
  // 📖 Free monthly credits on developer accounts (~$0.10) — token at https://huggingface.co/settings/tokens
  export const huggingface = [
- ['deepseek-ai/DeepSeek-V3-Coder', 'DeepSeek V3 Coder', 'S', '62.0%', '128k'],
- ['bigcode/starcoder2-15b', 'StarCoder2 15B', 'B', '25.0%', '16k'],
+ ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
  ]
 
  // 📖 Replicate source - https://replicate.com
  // 📖 Uses predictions endpoint (not OpenAI chat-completions) with token auth
  export const replicate = [
- ['codellama/CodeLlama-70b-Instruct-hf', 'CodeLlama 70B', 'A-', '39.0%', '16k'],
+ ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['meta/llama-3.3-70b-instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
  ]
 
  // 📖 DeepInfra source - https://deepinfra.com
  // 📖 OpenAI-compatible endpoint: https://api.deepinfra.com/v1/openai/chat/completions
  export const deepinfra = [
- ['mistralai/Mixtral-8x22B-Instruct-v0.1', 'Mixtral Code', 'B+', '32.0%', '64k'],
+ ['nvidia/Nemotron-3-Super', 'Nemotron 3 Super', 'A+', '56.0%', '128k'],
+ ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
+ ['Qwen/Qwen3-235B-A22B', 'Qwen3 235B', 'S+', '70.0%', '128k'],
  ['meta-llama/Meta-Llama-3.1-70B-Instruct', 'Llama 3.1 70B', 'A-', '39.5%', '128k'],
  ]
 
@@ -197,6 +202,8 @@ export const deepinfra = [
  export const fireworks = [
  ['accounts/fireworks/models/deepseek-v3', 'DeepSeek V3', 'S', '62.0%', '128k'],
  ['accounts/fireworks/models/deepseek-r1', 'DeepSeek R1', 'S', '61.0%', '128k'],
+ ['accounts/fireworks/models/llama4-maverick-instruct-basic', 'Llama 4 Maverick', 'S', '62.0%', '1M'],
+ ['accounts/fireworks/models/qwen3-235b-a22b', 'Qwen3 235B', 'S+', '70.0%', '128k'],
  ]
 
  // 📖 Mistral Codestral source - https://codestral.mistral.ai
@@ -215,6 +222,7 @@ export const hyperbolic = [
  ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
  ['Qwen/Qwen3-235B-A22B', 'Qwen3 235B', 'S+', '70.0%', '128k'],
  ['qwen/qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
+ ['Qwen/Qwen3-Next-80B-A3B-Thinking', 'Qwen3 80B Thinking', 'S', '68.0%', '128k'],
  ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
  ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
  ['meta-llama/Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
@@ -224,9 +232,10 @@ export const hyperbolic = [
  // 📖 Scaleway source - https://console.scaleway.com
  // 📖 1M free tokens — API keys at https://console.scaleway.com/iam/api-keys
  export const scaleway = [
- ['devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
+ ['devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
+ ['qwen3.5-397b-a17b', 'Qwen3.5 400B VLM', 'S', '68.0%', '250k'],
+ ['mistral/mistral-large-3-675b-instruct-2512', 'Mistral Large 675B', 'A+', '58.0%', '250k'],
  ['qwen3-235b-a22b-instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
- ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
  ['qwen3-coder-30b-a3b-instruct', 'Qwen3 Coder 30B', 'A+', '55.0%', '32k'],
  ['llama-3.3-70b-instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
  ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
@@ -272,6 +281,9 @@ export const siliconflow = [
  // 📖 Credits/promotions vary by account and region; verify current quota in console.
  export const together = [
  ['moonshotai/Kimi-K2.5', 'Kimi K2.5', 'S+', '76.8%', '128k'],
+ ['Qwen/Qwen3.5-397B-A17B', 'Qwen3.5 400B VLM', 'S', '68.0%', '250k'],
+ ['MiniMaxAI/MiniMax-M2.5', 'MiniMax M2.5', 'S+', '80.2%', '200k'],
+ ['zai-org/GLM-5', 'GLM-5', 'S+', '77.8%', '128k'],
  ['Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
  ['deepseek-ai/DeepSeek-V3.1', 'DeepSeek V3.1', 'S', '62.0%', '128k'],
  ['deepseek-ai/DeepSeek-R1', 'DeepSeek R1', 'S', '61.0%', '128k'],
@@ -285,7 +297,12 @@ export const together = [
  // 📖 https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/v1/chat/completions
  // 📖 Free plan includes daily neuron quota and provider-level request limits.
  export const cloudflare = [
+ ['@cf/moonshotai/kimi-k2.5', 'Kimi K2.5', 'S+', '76.8%', '256k'],
+ ['@cf/zhipu/glm-4.7-flash', 'GLM-4.7-Flash', 'S', '59.2%', '131k'],
  ['@cf/openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
+ ['@cf/meta/llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '131k'],
+ ['@cf/nvidia/nemotron-3-120b-a12b', 'Nemotron 3 Super', 'A+', '56.0%', '128k'],
+ ['@cf/qwen/qwen3-30b-a3b-fp8', 'Qwen3 30B MoE', 'A', '45.0%', '128k'],
  ['@cf/qwen/qwen2.5-coder-32b-instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
  ['@cf/deepseek-ai/deepseek-r1-distill-qwen-32b', 'R1 Distill 32B', 'A', '43.9%', '128k'],
  ['@cf/openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
@@ -350,7 +367,8 @@ export const iflow = [
  // 📖 Free tier: 5M tokens/day (beta) - Claude Sonnet 4 (72.7% SWE-bench)
  // 📖 Requires Atlassian account + Rovo Dev activated on your site
  export const rovo = [
- ['anthropic/claude-sonnet-4', 'Claude Sonnet 4 🆕', 'S+', '72.7%', '200k'],
+ ['anthropic/claude-sonnet-4.6', 'Claude Sonnet 4.6', 'S+', '75.0%', '200k'],
+ ['anthropic/claude-opus-4.6', 'Claude Opus 4.6', 'S+', '80.0%', '200k'],
  ]
 
  // 📖 Gemini CLI source - https://github.com/google-gemini/gemini-cli
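Every source above exports rows of the same shape — `[id, displayName, tier, swePct, ctx]`. As a hedged sketch of consuming that format (the helper names here are illustrative, not the package's actual code), a consumer might rank rows by the SWE% column like this:

```javascript
// Sample rows copied from the groq source above.
const rows = [
  ['groq/compound', 'Groq Compound', 'A', '45.0%', '131k'],
  ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
  ['meta-llama/llama-4-scout-17b-16e-preview', 'Llama 4 Scout', 'A', '44.0%', '131k'],
]

// Column 3 is a percentage string; parseFloat ignores the trailing '%'.
const swe = (row) => parseFloat(row[3]) // '60.0%' -> 60

// Sort a copy descending by SWE-bench score, leaving the source array intact.
const bySweDesc = [...rows].sort((a, b) => swe(b) - swe(a))

console.log(bySweDesc.map((r) => r[1]))
// [ 'GPT OSS 120B', 'Groq Compound', 'Llama 4 Scout' ]
```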
package/src/app.js CHANGED
@@ -119,7 +119,8 @@ import { renderTable, PROVIDER_COLOR } from '../src/render-table.js'
  import { setOpenCodeModelData, startOpenCode, startOpenCodeDesktop } from '../src/opencode.js'
  import { startOpenClaw } from '../src/openclaw.js'
  import { createOverlayRenderers } from '../src/overlays.js'
- import { createKeyHandler } from '../src/key-handler.js'
+ import { createKeyHandler, createMouseEventHandler } from '../src/key-handler.js'
+ import { createMouseHandler, containsMouseSequence } from '../src/mouse.js'
  import { getToolModeOrder, getToolMeta } from '../src/tool-metadata.js'
  import { startExternalTool } from '../src/tool-launchers.js'
  import { getToolInstallPlan, installToolWithPlan, isToolInstalled } from '../src/tool-bootstrap.js'
@@ -470,6 +471,12 @@ export async function runApp(cliArgs, config) {
  changelogPhase: 'index', // 📖 'index' (all versions) | 'details' (specific version)
  changelogCursor: 0, // 📖 Selected row in index phase
  changelogSelectedVersion: null, // 📖 Which version to show details for
+ // 📖 Installed Models overlay state (Command Palette → Installed models)
+ installedModelsOpen: false, // 📖 Whether the installed models overlay is active
+ installedModelsCursor: 0, // 📖 Selected row (tool or model)
+ installedModelsScrollOffset: 0, // 📖 Vertical scroll offset for overlay viewport
+ installedModelsData: [], // 📖 Cached scan results
+ installedModelsErrorMsg: null, // 📖 Error or status message
  // 📖 Custom text filter (Ctrl+P palette → type text → Enter). Ephemeral — not saved to config.
  customTextFilter: null, // 📖 Active free-text filter string (null = off). Matches model name, ctx, provider key/name.
  }
@@ -495,6 +502,7 @@ export async function runApp(cliArgs, config) {
 
  let ticker = null
  let onKeyPress = null
+ let onMouseData = null // 📖 Mouse data listener — set after createMouseEventHandler
  let pingModel = null
 
  const scheduleNextPing = () => {
@@ -736,6 +744,7 @@ export async function runApp(cliArgs, config) {
  if (ticker) clearInterval(ticker)
  clearTimeout(state.pingIntervalObj)
  if (onKeyPress) process.stdin.removeListener('keypress', onKeyPress)
+ if (onMouseData) process.stdin.removeListener('data', onMouseData)
  if (process.stdin.isTTY && resetRawMode) process.stdin.setRawMode(false)
  process.stdin.pause()
  process.stdout.write(ALT_LEAVE)
@@ -837,6 +846,38 @@ export async function runApp(cliArgs, config) {
  readline,
  })
 
+ // 📖 Mouse event handler: translates parsed mouse events into TUI actions (sort, cursor, scroll).
+ const onMouseEvent = createMouseEventHandler({
+ state,
+ adjustScrollOffset,
+ applyTierFilter,
+ TIER_CYCLE,
+ ORIGIN_CYCLE,
+ noteUserActivity,
+ sortResultsWithPinnedFavorites,
+ saveConfig,
+ overlayLayout: overlays.overlayLayout, // 📖 Overlay cursor-to-line maps for click handling
+ // 📖 Favorite toggle — right-click on model rows
+ toggleFavoriteModel,
+ syncFavoriteFlags,
+ toFavoriteKey,
+ // 📖 Tool mode cycling — compat header click
+ cycleToolMode: () => {
+ // 📖 Inline cycle matching the Z-key handler in createKeyHandler
+ const modeOrder = getToolModeOrder()
+ const currentIndex = modeOrder.indexOf(state.mode)
+ const nextIndex = (currentIndex + 1) % modeOrder.length
+ state.mode = modeOrder[nextIndex]
+ if (!state.config.settings || typeof state.config.settings !== 'object') state.config.settings = {}
+ state.config.settings.preferredToolMode = state.mode
+ saveConfig(state.config)
+ },
+ })
+
+ // 📖 Wire the raw stdin data listener for mouse events.
+ // 📖 createMouseHandler returns a function that parses SGR sequences and calls onMouseEvent.
+ onMouseData = createMouseHandler({ onMouseEvent })
+
  // Apply CLI --tier filter if provided
  if (cliArgs.tierFilter) {
  const allowed = TIER_LETTER_MAP[cliArgs.tierFilter]
@@ -858,8 +899,35 @@ export async function runApp(cliArgs, config) {
  process.stdin.setRawMode(true)
  }
 
+ // 📖 Mouse sequence suppression: readline.emitKeypressEvents() registers its own
+ // 📖 internal `data` listener that parses bytes and fires `keypress` events.
+ // 📖 When a mouse SGR sequence like \x1b[<0;35;20m arrives, readline fragments it
+ // 📖 and emits individual keypress events for chars like 'm', '0', ';' etc.
+ // 📖 The 'm' at the end of a release event maps to the Model sort hotkey!
+ // 📖
+ // 📖 Fix: use prependListener to register a `data` handler BEFORE readline's,
+ // 📖 so we can set a suppression flag before any keypress events fire.
+ // 📖 The flag is cleared on the next tick via setImmediate after all synchronous
+ // 📖 keypress emissions from readline have completed.
+ let _suppressMouseKeypresses = false
+
+ process.stdin.prependListener('data', (data) => {
+ const str = typeof data === 'string' ? data : data.toString('utf8')
+ if (str.includes('\x1b[<')) {
+ _suppressMouseKeypresses = true
+ // 📖 Reset after current tick — all synchronous keypress events from this data
+ // 📖 chunk will have fired by then.
+ setImmediate(() => { _suppressMouseKeypresses = false })
+ }
+ })
+
  process.stdin.on('keypress', async (str, key) => {
  try {
+ // 📖 Skip keypress events that originate from mouse escape sequences.
+ // 📖 readline may partially parse SGR mouse sequences as garbage keypresses.
+ if (str && containsMouseSequence(str)) return
+ // 📖 Suppress fragmented mouse bytes that readline emits as individual keypresses.
+ if (_suppressMouseKeypresses) return
  await onKeyPress(str, key);
  } catch (err) {
  process.stdout.write(ALT_LEAVE);
@@ -869,6 +937,18 @@ export async function runApp(cliArgs, config) {
  process.exit(1);
  }
  })
+
+ // 📖 Mouse data listener: parses SGR mouse escape sequences from raw stdin
+ // 📖 and dispatches structured events (click, scroll, double-click) to the mouse handler.
+ process.stdin.on('data', (data) => {
+ try {
+ if (onMouseData) onMouseData(data)
+ } catch (err) {
+ // 📖 Mouse errors are non-fatal — log and continue so the TUI doesn't crash.
+ // 📖 This could happen on terminals that send unexpected mouse sequences.
+ }
+ })
+
  process.on('SIGCONT', noteUserActivity)
 
  // 📖 Animation loop: render settings overlay, recommend overlay, help overlay, feature request overlay, bug report overlay, changelog overlay, OR main table
@@ -877,7 +957,7 @@ export async function runApp(cliArgs, config) {
  refreshAutoPingMode()
  state.frame++
  // 📖 Cache visible+sorted models each frame so Enter handler always matches the display
- if (!state.settingsOpen && !state.installEndpointsOpen && !state.toolInstallPromptOpen && !state.incompatibleFallbackOpen && !state.recommendOpen && !state.feedbackOpen && !state.changelogOpen && !state.commandPaletteOpen) {
+ if (!state.settingsOpen && !state.installEndpointsOpen && !state.toolInstallPromptOpen && !state.incompatibleFallbackOpen && !state.recommendOpen && !state.feedbackOpen && !state.changelogOpen && !state.installedModelsOpen && !state.commandPaletteOpen) {
  const visible = state.results.filter(r => !r.hidden)
  state.visibleSorted = sortResultsWithPinnedFavorites(visible, state.sortColumn, state.sortDirection, {
  pinFavorites: state.favoritesPinnedAndSticky,
@@ -960,6 +1040,8 @@ export async function runApp(cliArgs, config) {
  ? overlays.renderInstallEndpoints()
  : state.toolInstallPromptOpen
  ? overlays.renderToolInstallPrompt()
+ : state.installedModelsOpen
+ ? overlays.renderInstalledModels()
  : state.incompatibleFallbackOpen
  ? overlays.renderIncompatibleFallback()
  : state.commandPaletteOpen
@@ -203,6 +203,7 @@ const BASE_COMMAND_TREE = [
  { id: 'open-feedback', label: 'Feedback', shortcut: 'I', icon: '📝', type: 'page', description: 'Report bugs or requests', keywords: ['feedback', 'bug', 'request'] },
  { id: 'open-recommend', label: 'Smart recommend', shortcut: 'Q', icon: '🎯', type: 'page', description: 'Find best model for task', keywords: ['recommend', 'best model'] },
  { id: 'open-install-endpoints', label: 'Install endpoints', icon: '🔌', type: 'page', description: 'Install provider catalogs', keywords: ['install', 'endpoints', 'providers'] },
+ { id: 'open-installed-models', label: 'Installed models', icon: '🗂️', type: 'page', description: 'View models configured in tools', keywords: ['installed', 'models', 'configured', 'tools', 'manager', 'goose', 'crush', 'aider'] },
  ]
 
  /**
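The suppression comments in the hunk above hinge on Node's listener ordering: `prependListener` registers a `data` handler ahead of readline's internal one, so the flag is already set when readline parses the same chunk into keypresses. A standalone sketch of that ordering (a plain `EventEmitter` stands in for stdin, and the flag is reset manually instead of via `setImmediate`):

```javascript
import { EventEmitter } from 'node:events'

const stdin = new EventEmitter()
const seen = []
let suppress = false

// Simulates readline's internal data listener (registered first), which
// turns bytes into keypress events.
stdin.on('data', (chunk) => { if (!suppress) seen.push(chunk) })

// prependListener jumps the queue: this handler runs BEFORE the parser above,
// so the flag is set before the parser sees a mouse sequence.
stdin.prependListener('data', (chunk) => {
  if (chunk.includes('\x1b[<')) suppress = true
})

stdin.emit('data', '\x1b[<0;35;20M') // mouse report — parser skips it
suppress = false                     // (app.js clears this via setImmediate)
stdin.emit('data', 'q')              // ordinary key — passes through

console.log(seen) // [ 'q' ]
```

Without `prependListener`, the flag-setting handler would run after the parser and the fragmented mouse bytes would already have leaked through.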
package/src/constants.js CHANGED
@@ -47,8 +47,11 @@ import chalk from 'chalk'
  // 📖 \x1b[H = cursor to top
  // 📖 \x1b[?7l disables auto-wrap so wide rows clip at the right edge instead of
  // 📖 wrapping to the next line (which would double the row height and overflow).
- export const ALT_ENTER = '\x1b[?1049h\x1b[?25l\x1b[?7l'
- export const ALT_LEAVE = '\x1b[?7h\x1b[?1049l\x1b[?25h'
+ // 📖 Mouse tracking sequences are appended/prepended so clicks and scroll work in the TUI.
+ import { MOUSE_ENABLE, MOUSE_DISABLE } from './mouse.js'
+
+ export const ALT_ENTER = '\x1b[?1049h\x1b[?25l\x1b[?7l' + MOUSE_ENABLE
+ export const ALT_LEAVE = MOUSE_DISABLE + '\x1b[?7h\x1b[?1049l\x1b[?25h'
  export const ALT_HOME = '\x1b[H'
 
  // 📖 Timing constants — control how fast the health-check loop runs.
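The diff references `MOUSE_ENABLE`/`MOUSE_DISABLE` from `mouse.js` without showing their values. As an assumption based on standard xterm private modes — not the package's actual definitions — such constants would typically look like:

```javascript
// Typical xterm mouse-tracking toggles (assumed, not from mouse.js):
//   ?1000h  report button press/release
//   ?1002h  also report motion while a button is held (drag)
//   ?1006h  SGR extended coordinates (the \x1b[<b;x;yM report format)
const MOUSE_ENABLE = '\x1b[?1000h\x1b[?1002h\x1b[?1006h'
// Disable in reverse order on exit so the terminal is restored cleanly.
const MOUSE_DISABLE = '\x1b[?1006l\x1b[?1002l\x1b[?1000l'
```

Disabling on leave matters: a TUI that exits without writing the `l` sequences leaves the user's shell swallowing mouse input as escape garbage.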