free-coding-models 0.3.32 → 0.3.34

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,6 +1,39 @@
1
1
  # Changelog
2
2
  ---
3
3
 
4
+ ## [0.3.34] - 2026-04-06
5
+
6
+ ### Added
7
+ - **Chutes AI** — new decentralized free provider (4 models: DeepSeek R1, Llama 3.1 70B, Qwen 2.5 72B, Qwen2.5 Coder 32B)
8
+ - **Google Gemini 3.1 Pro** — replaced dead Gemini 3 Pro (shut down March 9)
9
+ - **Google Gemma 4 family** — Gemma 4 31B, Gemma 4 26B MoE, Gemma 4 E4B added to Google AI Studio
10
+ - **Qwen3.6 Plus** and **Qwen3.5 Plus** added to Alibaba DashScope
11
+ - **NVIDIA NIM: Kimi K2 Instruct 0905** added
12
+ - **SambaNova: Qwen3-235B-A22B-Instruct-2507** added
13
+ - **OpenRouter: ~10 new free models** (arcee trinity, mimo-v2-flash, etc.) — 25 total
14
+ - **Together AI: expanded to 19 free models** (DeepSeek V3.2, MiniMax M2.1, etc.)
15
+ - **Cloudflare: 4 new models** (gemma-4-26b, mistral-small-3.1, qwq-32b, granite-4.0) — 15 total
16
+ - **Scaleway: gpt-oss-120b and holo2-30b-a3b** added
17
+ - **Hyperbolic: gpt-oss-20b and Qwen3-235B-Instruct-2507** added
18
+ - **Rovo: GPT-5.2, GPT-5.2-Codex, Claude Haiku 4.5** added
19
+ - **OpenCode Zen: mimo-v2-pro-free, mimo-v2-omni-free** added; fixed context windows (gpt-5-nano: 128k→400k, nemotron-3-super-free: 128k→1M)
20
+
21
+ ### Changed
22
+ - **Groq scout model renamed** — `llama-4-scout-17b-16e-preview` → `llama-4-scout-17b-16e-instruct`
23
+ - **Now 230 models across 24 providers** (was 174/23)
24
+ - **README provider table fully updated** with accurate per-provider counts
25
+
26
+ ### Fixed
27
+ - Removed 3 deprecated Cerebras models (llama3.3-70b, qwen-3-32b, llama-4-scout-17b-16e-instruct) — 4 remain
28
+ - Fixed missing comma in googleai array causing undefined model entry
29
+ - Replaced all `??%` placeholder SWE scores with reasonable estimates
30
+
31
+ ## [0.3.33] - 2026-04-01
32
+
33
+ ### Changed
34
+ - **X badge darker fuchsia background** (`rgb(140,0,80)`) for better readability
35
+ - **Updated text** → "Follow me on X: @vavanessadev to check my other projects! 💖"
36
+
4
37
  ## [0.3.32] - 2026-04-01
5
38
 
6
39
  ### Fixed
package/README.md CHANGED
@@ -2,8 +2,8 @@
2
2
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
3
3
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
4
4
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
5
- <img src="https://img.shields.io/badge/models-174-76b900?logo=nvidia" alt="models count">
6
- <img src="https://img.shields.io/badge/providers-23-blue" alt="providers count">
5
+ <img src="https://img.shields.io/badge/models-230-76b900?logo=nvidia" alt="models count">
6
+ <img src="https://img.shields.io/badge/providers-24-blue" alt="providers count">
7
7
  </p>
8
8
 
9
9
  <h1 align="center">free-coding-models</h1>
@@ -14,7 +14,7 @@
14
14
 
15
15
  <p align="center">
16
16
  <strong>Find the fastest free coding model in seconds</strong><br>
17
- <sub>Ping 174 models across 23 AI Free providers in real-time </sub><br><sub> Install Free API endpoints to your favorite AI coding tool: <br>📦 OpenCode, 🦞 OpenClaw, 💘 Crush, 🪿 Goose, 🛠 Aider, 🐉 Qwen Code, 🤲 OpenHands, ⚡ Amp, π Pi, 🦘 Rovo or ♊ Gemini in one keystroke</sub>
17
+ <sub>Ping 230 models across 24 free AI providers in real time</sub><br><sub> Install Free API endpoints to your favorite AI coding tool: <br>📦 OpenCode, 🦞 OpenClaw, 💘 Crush, 🪿 Goose, 🛠 Aider, 🐉 Qwen Code, 🤲 OpenHands, ⚡ Amp, π Pi, 🦘 Rovo or ♊ Gemini in one keystroke</sub>
18
18
  </p>
19
19
 
20
20
 
@@ -51,7 +51,7 @@ create a free account on one of the [providers](#-list-of-free-ai-providers)
51
51
 
52
52
  ## 💡 Why this tool?
53
53
 
54
- There are **174+ free coding models** scattered across 23 providers. Which one is fastest right now? Which one is actually stable versus just lucky on the last ping?
54
+ There are **230+ free coding models** scattered across 24 providers. Which one is fastest right now? Which one is actually stable versus just lucky on the last ping?
55
55
 
56
56
  This CLI pings them all in parallel, shows live latency, and calculates a **live Stability Score (0-100)**. Average latency alone is misleading if a model randomly spikes to 6 seconds; the stability score measures true reliability by combining **p95 latency** (30%), **jitter/variance** (30%), **spike rate** (20%), and **uptime** (20%).
57
57
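The weighted score described above can be sketched in a few lines. This is a minimal illustration, not the package's actual implementation; the function name and the concrete thresholds (2 s "good" p95, 1 s jitter ceiling, 6 s spike cutoff) are assumptions chosen only to make the weights concrete:

```javascript
// Sketch of a composite stability score (0-100) from ping samples (ms, null = failed ping).
// Weights mirror the README: p95 latency 30%, jitter 30%, spike rate 20%, uptime 20%.
function stabilityScore(samples) {
  const ok = samples.filter((ms) => ms !== null);
  if (ok.length === 0) return 0;

  const sorted = [...ok].sort((a, b) => a - b);
  const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  const mean = ok.reduce((s, x) => s + x, 0) / ok.length;
  const jitter = Math.sqrt(ok.reduce((s, x) => s + (x - mean) ** 2, 0) / ok.length);

  const p95Score = Math.max(0, 1 - p95 / 2000);       // 0 ms -> 1, >=2 s -> 0 (assumed threshold)
  const jitterScore = Math.max(0, 1 - jitter / 1000); // >=1 s std dev -> 0 (assumed threshold)
  const spikeRate = ok.filter((x) => x > 6000).length / ok.length; // the README's 6 s spike example
  const uptime = ok.length / samples.length;

  return Math.round(
    100 * (0.3 * p95Score + 0.3 * jitterScore + 0.2 * (1 - spikeRate) + 0.2 * uptime)
  );
}
```

With this sketch, a steady model like `stabilityScore([100, 110, 120, 105, 115])` scores near 100, while one failed ping plus one 7-second spike drags the score down sharply even if the average latency looks similar.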
 
@@ -65,33 +65,34 @@ It then writes the model you pick directly into your coding tool's config — so
65
65
 
66
66
  Create a free account on one provider below to get started:
67
67
 
68
- **174 coding models** across 23 providers, ranked by [SWE-bench Verified](https://www.swebench.com).
68
+ **230 coding models** across 24 providers, ranked by [SWE-bench Verified](https://www.swebench.com).
69
69
 
70
70
  | Provider | Models | Tier range | Free tier | Env var |
71
71
  |----------|--------|-----------|-----------|--------|
72
- | [NVIDIA NIM](https://build.nvidia.com) | 44 | S+ → C | 40 req/min (no credit card needed) | `NVIDIA_API_KEY` |
72
+ | [NVIDIA NIM](https://build.nvidia.com) | 46 | S+ → C | 40 req/min (no credit card needed) | `NVIDIA_API_KEY` |
73
+ | [OpenRouter](https://openrouter.ai/keys) | 25 | S+ → C | Free on :free: 50/day <$10, 1000/day ≥$10 (20 req/min) | `OPENROUTER_API_KEY` |
74
+ | [Cloudflare Workers AI](https://dash.cloudflare.com) | 15 | S → B | Free: 10k neurons/day, text-gen 300 RPM | `CLOUDFLARE_API_TOKEN` + `CLOUDFLARE_ACCOUNT_ID` |
75
+ | [SambaNova](https://sambanova.ai/developers) | 13 | S+ → B | Dev tier generous quota | `SAMBANOVA_API_KEY` |
76
+ | [Hyperbolic](https://app.hyperbolic.ai/settings) | 13 | S+ → A- | $1 free trial credits | `HYPERBOLIC_API_KEY` |
77
+ | [Together AI](https://api.together.ai/settings/api-keys) | 19 | S+ → A- | Credits/promos vary by account (check console) | `TOGETHER_API_KEY` |
78
+ | [Scaleway](https://console.scaleway.com/iam/api-keys) | 10 | S+ → B+ | 1M free tokens | `SCALEWAY_API_KEY` |
73
79
  | [iFlow](https://platform.iflow.cn) | 11 | S+ → A+ | Free for individuals (no req limits, 7-day key expiry) | `IFLOW_API_KEY` |
80
+ | [Alibaba DashScope](https://modelstudio.console.alibabacloud.com) | 11 | S+ → A | 1M free tokens per model (Singapore region, 90 days) | `DASHSCOPE_API_KEY` |
81
+ | [Groq](https://console.groq.com/keys) | 8 | S → B | 30‑50 RPM per model (varies by model) | `GROQ_API_KEY` |
82
+ | [Rovo Dev CLI](https://www.atlassian.com/rovo) | 5 | S+ → A+ | 5M tokens/day (beta) | CLI tool 🦘 |
74
83
  | [ZAI](https://z.ai) | 7 | S+ → S | Free tier (generous quota) | `ZAI_API_KEY` |
75
- | [Alibaba DashScope](https://modelstudio.console.alibabacloud.com) | 8 | S+ → A | 1M free tokens per model (Singapore region, 90 days) | `DASHSCOPE_API_KEY` |
76
- | [Groq](https://console.groq.com/keys) | 10 | SB | 30‑50 RPM per model (varies by model) | `GROQ_API_KEY` |
77
- | [Cerebras](https://cloud.cerebras.ai) | 7 | S+ → B | Generous free tier (developer tier 10× higher limits) | `CEREBRAS_API_KEY` |
78
- | [SambaNova](https://sambanova.ai/developers) | 12 | S+ → B | Dev tier generous quota | `SAMBANOVA_API_KEY` |
79
- | [OpenRouter](https://openrouter.ai/keys) | 11 | S+ → C | Free on :free: 50/day <$10, 1000/day ≥$10 (20 req/min) | `OPENROUTER_API_KEY` |
80
- | [Hugging Face](https://huggingface.co/settings/tokens) | 2 | S → B | Free monthly credits (~$0.10) | `HUGGINGFACE_API_KEY` |
81
- | [Together AI](https://api.together.ai/settings/api-keys) | 7 | S+ → A- | Credits/promos vary by account (check console) | `TOGETHER_API_KEY` |
82
- | [DeepInfra](https://deepinfra.com/login) | 2 | A- → B+ | 200 concurrent requests (default) | `DEEPINFRA_API_KEY` |
83
- | [Fireworks AI](https://fireworks.ai) | 2 | S | $1 credits – 10 req/min without payment | `FIREWORKS_API_KEY` |
84
- | [Mistral Codestral](https://codestral.mistral.ai) | 1 | B+ | 30 req/min, 2000/day | `CODESTRAL_API_KEY` |
85
- | [Hyperbolic](https://app.hyperbolic.ai/settings) | 10 | S+ → A- | $1 free trial credits | `HYPERBOLIC_API_KEY` |
86
- | [Scaleway](https://console.scaleway.com/iam/api-keys) | 7 | S+ → B+ | 1M free tokens | `SCALEWAY_API_KEY` |
87
- | [Google AI Studio](https://aistudio.google.com/apikey) | 3 | B → C | 14.4K req/day, 30/min | `GOOGLE_API_KEY` |
84
+ | [OpenCode Zen](https://opencode.ai/zen) | 7 | S+ → A+ | Free with OpenCode account | Zen models |
85
+ | [Google AI Studio](https://aistudio.google.com/apikey) | 6 | B+ → C | 14.4K req/day, 30/min | `GOOGLE_API_KEY` |
88
86
  | [SiliconFlow](https://cloud.siliconflow.cn/account/ak) | 6 | S+ → A | Free models: usually 100 RPM, varies by model | `SILICONFLOW_API_KEY` |
89
- | [Cloudflare Workers AI](https://dash.cloudflare.com) | 6 | S → B | Free: 10k neurons/day, text-gen 300 RPM | `CLOUDFLARE_API_TOKEN` + `CLOUDFLARE_ACCOUNT_ID` |
87
+ | [Cerebras](https://cloud.cerebras.ai) | 4 | S+ → B | Generous free tier (developer tier 10× higher limits) | `CEREBRAS_API_KEY` |
90
88
  | [Perplexity API](https://www.perplexity.ai/settings/api) | 4 | A+ → B | Tiered limits by spend (default ~50 RPM) | `PERPLEXITY_API_KEY` |
91
- | [Replicate](https://replicate.com/account/api-tokens) | 1 | A- | 6 req/min (no payment) – up to 3,000 RPM with payment | `REPLICATE_API_TOKEN` |
92
- | [Rovo Dev CLI](https://www.atlassian.com/rovo) | 1 | S+ | 5M tokens/day (beta) | CLI tool 🦘 |
89
+ | [Chutes AI](https://chutes.ai) | 4 | S → A | Free (community GPU-powered, no credit card) | `CHUTES_API_KEY` |
90
+ | [DeepInfra](https://deepinfra.com/login) | 4 | A- → B+ | 200 concurrent requests (default) | `DEEPINFRA_API_KEY` |
91
+ | [Fireworks AI](https://fireworks.ai) | 4 | S → B+ | $1 credits – 10 req/min without payment | `FIREWORKS_API_KEY` |
93
92
  | [Gemini CLI](https://github.com/google-gemini/gemini-cli) | 3 | S+ → A+ | 1,000 req/day | CLI tool ♊ |
94
- | [OpenCode Zen](https://opencode.ai/zen) | 8 | S+A+ | Free with OpenCode account | Zen models ✨ |
93
+ | [Hugging Face](https://huggingface.co/settings/tokens) | 2 | S → B | Free monthly credits (~$0.10) | `HUGGINGFACE_API_KEY` |
94
+ | [Replicate](https://replicate.com/account/api-tokens) | 2 | A- → B | 6 req/min (no payment) – up to 3,000 RPM with payment | `REPLICATE_API_TOKEN` |
95
+ | [Mistral Codestral](https://codestral.mistral.ai) | 1 | B+ | 30 req/min, 2000/day | `CODESTRAL_API_KEY` |
95
96
 
96
97
  > 💡 One key is enough. Add more at any time with **`P`** inside the TUI.
97
98
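Detecting which of the table's keys are already configured can be sketched as below. The env var names come from the table; the map (a subset only — Cloudflare, for instance, needs both a token and an account ID) and the helper itself are illustrative, not the package's actual code:

```javascript
// Map of documented env vars -> provider names (illustrative subset of the table above).
const PROVIDER_KEYS = {
  NVIDIA_API_KEY: 'NVIDIA NIM',
  OPENROUTER_API_KEY: 'OpenRouter',
  GROQ_API_KEY: 'Groq',
  TOGETHER_API_KEY: 'Together AI',
};

// Return the providers whose key is present (non-empty) in the environment.
function configuredProviders(env = process.env) {
  return Object.entries(PROVIDER_KEYS)
    .filter(([key]) => Boolean(env[key]))
    .map(([, name]) => name);
}
```

For example, `configuredProviders({ GROQ_API_KEY: 'gsk_x' })` returns `['Groq']` — one key really is enough to start pinging.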
 
@@ -283,7 +284,7 @@ When a tool mode is active (via `Z`), models incompatible with that tool are hig
283
284
 
284
285
  ## ✨ Features
285
286
 
286
- - **Parallel pings** — all 174 models tested simultaneously via native `fetch`
287
+ - **Parallel pings** — all 230 models tested simultaneously via native `fetch`
287
288
  - **Adaptive monitoring** — 2s burst for 60s → 10s normal → 30s idle
288
289
  - **Stability score** — composite 0–100 (p95 latency, jitter, spike rate, uptime)
289
290
  - **Smart ranking** — top 3 highlighted 🥇🥈🥉
@@ -319,6 +320,40 @@ We welcome contributions — issues, PRs, new provider integrations.
319
320
 
320
321
  ---
321
322
 
323
+ ## ⚖️ Model Licensing & Commercial Use
324
+
325
+ **Short answer:** All 230 models allow **commercial use of generated output (including code)**. You own what the models generate for you.
326
+
327
+ ### Output Ownership
328
+
329
+ For every model in this tool, **you own the generated output** — code, text, or otherwise — and can use it commercially. The licenses below govern the *model weights themselves*, not your generated content.
330
+
331
+ ### License Breakdown by Model Family
332
+
333
+ | License | Models | Commercial Output |
334
+ |---------|--------|:-----------------:|
335
+ | **Apache 2.0** | Qwen3/Qwen3.5/Qwen2.5 Coder, GPT-OSS 120B/20B, Devstral Small 2, Gemma 4, MiMo V2 Flash | ✅ Unrestricted |
336
+ | **MIT** | GLM 4.5/4.6/4.7/5, MiniMax M2.1, Devstral 2 | ✅ Unrestricted |
337
+ | **Modified MIT** | Kimi K2/K2.5 (>100M MAU → display "Kimi K2" branding) | ✅ With attribution at scale |
338
+ | **Llama Community License** | Llama 3.3 70B, Llama 4 Scout/Maverick | ✅ Attribution required. >700M MAU → separate Meta license |
339
+ | **DeepSeek License** | DeepSeek V3/V3.1/V3.2, R1 | ✅ Use restrictions on model (no military, no harm) — output is yours |
340
+ | **NVIDIA Nemotron License** | Nemotron Super/Ultra/Nano | ✅ Updated Mar 2026, now near-Apache 2.0 permissive |
341
+ | **MiniMax Model License** | MiniMax M2, M2.5 | ✅ Royalty-free, non-exclusive. Prohibited uses policy applies to model |
342
+ | **Proprietary (API)** | Claude (Rovo), Gemini (CLI), Perplexity Sonar, Mistral Large, Codestral | ✅ You own outputs per provider ToS |
343
+ | **OpenCode Zen** | Big Pickle, MiMo V2 Pro/Flash/Omni Free, GPT 5 Nano, MiniMax M2.5 Free, Nemotron 3 Super Free | ✅ Per OpenCode Zen ToS |
344
+
345
+ ### Key Points
346
+
347
+ 1. **Generated code is yours** — no model claims ownership of your output
348
+ 2. **Apache 2.0 / MIT models** (Qwen, GLM, GPT-OSS, MiMo, Devstral Small) are the most permissive — no strings attached
349
+ 3. **Llama** requires "Built with Llama" attribution; >700M MAU needs a Meta license
350
+ 4. **DeepSeek / MiniMax** have use-restriction policies (no military use) that govern the model, not your generated code
351
+ 5. **API-served models** (Claude, Gemini, Perplexity) grant full output ownership under their terms of service
352
+
353
+ > ⚠️ **Disclaimer:** This is a summary, not legal advice. License terms can change. Always verify the current license on the model's official page before making legal decisions.
354
+
355
+ ---
356
+
322
357
  ## 📧 Support
323
358
 
324
359
  [GitHub Issues](https://github.com/vava-nessa/free-coding-models/issues) · [Discord](https://discord.gg/ZTNFHvvCkU)
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "free-coding-models",
3
- "version": "0.3.32",
3
+ "version": "0.3.34",
4
4
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
5
5
  "keywords": [
6
6
  "nvidia",
package/sources.js CHANGED
@@ -54,6 +54,7 @@ export const nvidiaNim = [
54
54
  ['mistralai/devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
55
55
  // ── S tier — SWE-bench Verified 60–70% ──
56
56
  ['deepseek-ai/deepseek-v3.1-terminus', 'DeepSeek V3.1 Term', 'S', '68.4%', '128k'],
57
+ ['moonshotai/kimi-k2-instruct-0905', 'Kimi K2 Instruct 0905', 'S', '65.8%', '256k'],
57
58
  ['moonshotai/kimi-k2-instruct', 'Kimi K2 Instruct', 'S', '65.8%', '128k'],
58
59
  ['minimaxai/minimax-m2', 'MiniMax M2', 'S', '69.4%', '128k'],
59
60
  ['qwen/qwen3-next-80b-a3b-thinking', 'Qwen3 80B Thinking', 'S', '68.0%', '128k'],
@@ -99,7 +100,7 @@ export const nvidiaNim = [
99
100
  // 📖 Free API keys available at https://console.groq.com/keys
100
101
  export const groq = [
101
102
  ['llama-3.3-70b-versatile', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
102
- ['meta-llama/llama-4-scout-17b-16e-preview', 'Llama 4 Scout', 'A', '44.0%', '131k'],
103
+ ['meta-llama/llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '131k'],
103
104
  ['llama-3.1-8b-instant', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
104
105
  ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
105
106
  ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
@@ -111,9 +112,6 @@ export const groq = [
111
112
  // 📖 Cerebras source - https://cloud.cerebras.ai
112
113
  // 📖 Free API keys available at https://cloud.cerebras.ai
113
114
  export const cerebras = [
114
- ['llama3.3-70b', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
115
- ['llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '10M'],
116
- ['qwen-3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
117
115
  ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
118
116
  ['qwen-3-235b-a22b-instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
119
117
  ['llama3.1-8b', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
@@ -136,6 +134,7 @@ export const sambanova = [
136
134
  ['DeepSeek-V3.1-Terminus', 'DeepSeek V3.1 Term', 'S', '68.4%', '128k'],
137
135
  // ── A+ tier ──
138
136
  ['Qwen3-32B', 'Qwen3 32B', 'A+', '50.0%', '128k'],
137
+ ['Qwen3-235B-A22B-Instruct-2507', 'Qwen3 235B Instruct 2507', 'S+', '70.0%', '64k'],
139
138
  // ── A tier ──
140
139
  ['DeepSeek-R1-Distill-Llama-70B', 'R1 Distill 70B', 'A', '43.9%', '128k'],
141
140
  // ── A- tier ──
@@ -155,21 +154,39 @@ export const sambanova = [
155
154
  // 📖 • Free-tier popular models may be additionally rate-limited by the provider itself during peak hours.
156
155
  // 📖 API keys at https://openrouter.ai/keys
157
156
  export const openrouter = [
158
- ['qwen/qwen3-coder:free', 'Qwen3 Coder 480B', 'S+', '70.6%', '262k'],
159
- ['minimax/minimax-m2.5:free', 'MiniMax M2.5', 'S+', '74.0%', '197k'],
160
- ['z-ai/glm-4.5-air:free', 'GLM 4.5 Air', 'S+', '72.0%', '131k'],
161
- ['stepfun/step-3.5-flash:free', 'Step 3.5 Flash', 'S+', '74.4%', '256k'],
162
- ['nvidia/nemotron-3-super-120b-a12b:free', 'Nemotron 3 Super', 'A+', '56.0%', '262k'],
163
- ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '131k'],
164
- ['nousresearch/hermes-3-llama-3.1-405b:free', 'Hermes 3 405B', 'A', '44.0%', '131k'],
165
- ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '131k'],
166
- ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '131k'],
167
- ['nvidia/nemotron-3-nano-30b-a3b:free', 'Nemotron Nano 30B', 'A', '43.0%', '128k'],
168
- ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '131k'],
157
+ // ── S+ tier — confirmed free ──
158
+ ['qwen/qwen3.6-plus:free', 'Qwen3.6 Plus', 'S+', '78.8%', '1M'],
159
+ ['qwen/qwen3-coder:free', 'Qwen3 Coder 480B', 'S+', '70.6%', '262k'],
160
+ ['minimax/minimax-m2.5:free', 'MiniMax M2.5', 'S+', '74.0%', '197k'],
161
+ ['z-ai/glm-4.5-air:free', 'GLM 4.5 Air', 'S+', '72.0%', '131k'],
162
+ ['stepfun/step-3.5-flash:free', 'Step 3.5 Flash', 'S+', '74.4%', '256k'],
163
+ ['arcee-ai/trinity-large-preview:free', 'Arcee Trinity Large', 'S+', '60.0%', '131k'],
164
+ ['xiaomi/mimo-v2-flash:free', 'MiMo V2 Flash', 'S+', '73.4%', '262k'],
165
+ // ── S tier — confirmed free ──
166
+ ['deepseek/deepseek-r1-0528:free', 'DeepSeek R1 0528', 'S', '61.0%', '164k'],
167
+ // ── A+ tier — confirmed free ──
168
+ ['nvidia/nemotron-3-super-120b-a12b:free', 'Nemotron 3 Super', 'A+', '56.0%', '262k'],
169
+ ['qwen/qwen3-next-80b-a3b-instruct:free', 'Qwen3 80B Instruct', 'S', '65.0%', '131k'],
170
+ ['arcee-ai/trinity-mini:free', 'Arcee Trinity Mini', 'A', '40.0%', '131k'],
171
+ ['nvidia/nemotron-nano-12b-v2-vl:free', 'Nemotron Nano 12B VL', 'A', '20.0%', '128k'],
172
+ ['nvidia/nemotron-nano-9b-v2:free', 'Nemotron Nano 9B', 'B+', '18.0%', '128k'],
173
+ // ── A tier — confirmed free ──
174
+ ['nousresearch/hermes-3-llama-3.1-405b:free', 'Hermes 3 405B', 'A', '44.0%', '131k'],
175
+ ['openai/gpt-oss-120b:free', 'GPT OSS 120B', 'S', '60.0%', '131k'],
176
+ ['openai/gpt-oss-20b:free', 'GPT OSS 20B', 'A', '42.0%', '131k'],
177
+ ['nvidia/nemotron-3-nano-30b-a3b:free', 'Nemotron Nano 30B', 'A', '43.0%', '128k'],
178
+ ['cognitivecomputations/dolphin-mistral-24b-venice-edition:free', 'Dolphin Mistral 24B', 'B+', '30.0%', '33k'],
179
+ // ── A- tier — confirmed free ──
180
+ ['meta-llama/llama-3.3-70b-instruct:free', 'Llama 3.3 70B', 'A-', '39.5%', '131k'],
181
+ // ── B+ tier ──
169
182
  ['mistralai/mistral-small-3.1-24b-instruct:free', 'Mistral Small 3.1', 'B+', '30.0%', '128k'],
170
- ['google/gemma-3-27b-it:free', 'Gemma 3 27B', 'B', '22.0%', '131k'],
171
- ['google/gemma-3-12b-it:free', 'Gemma 3 12B', 'C', '15.0%', '131k'],
172
- ['google/gemma-3n-e4b-it:free', 'Gemma 3n E4B', 'C', '10.0%', '8k'],
183
+ // ── B tier ──
184
+ ['google/gemma-3-27b-it:free', 'Gemma 3 27B', 'B', '22.0%', '131k'],
185
+ // ── C tier ──
186
+ ['google/gemma-3-12b-it:free', 'Gemma 3 12B', 'C', '15.0%', '131k'],
187
+ ['qwen/qwen3-4b:free', 'Qwen3 4B', 'C', '15.0%', '41k'],
188
+ ['google/gemma-3n-e4b-it:free', 'Gemma 3n E4B', 'C', '10.0%', '8k'],
189
+ ['google/gemma-3-4b-it:free', 'Gemma 3 4B', 'C', '10.0%', '33k'],
173
190
  ]
174
191
 
175
192
  // 📖 Hugging Face Inference source - https://huggingface.co
@@ -220,10 +237,12 @@ export const hyperbolic = [
220
237
  ['deepseek-ai/DeepSeek-R1-0528', 'DeepSeek R1 0528', 'S', '61.0%', '128k'],
221
238
  ['moonshotai/Kimi-K2-Instruct', 'Kimi K2 Instruct', 'S', '65.8%', '131k'],
222
239
  ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
240
+ ['Qwen/Qwen3-235B-A22B-Instruct-2507', 'Qwen3 235B 2507', 'S+', '70.0%', '262k'],
223
241
  ['Qwen/Qwen3-235B-A22B', 'Qwen3 235B', 'S+', '70.0%', '128k'],
224
242
  ['qwen/qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
225
243
  ['Qwen/Qwen3-Next-80B-A3B-Thinking', 'Qwen3 80B Thinking', 'S', '68.0%', '128k'],
226
244
  ['deepseek-ai/DeepSeek-V3-0324', 'DeepSeek V3 0324', 'S', '62.0%', '128k'],
245
+ ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '131k'],
227
246
  ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
228
247
  ['meta-llama/Llama-3.3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
229
248
  ['meta-llama/Meta-Llama-3.1-405B-Instruct', 'Llama 3.1 405B', 'A', '44.0%', '128k'],
@@ -233,10 +252,12 @@ export const hyperbolic = [
233
252
  // 📖 1M free tokens — API keys at https://console.scaleway.com/iam/api-keys
234
253
  export const scaleway = [
235
254
  ['devstral-2-123b-instruct-2512', 'Devstral 2 123B', 'S+', '72.2%', '256k'],
236
- ['qwen3.5-397b-a17b', 'Qwen3.5 400B VLM', 'S', '68.0%', '250k'],
255
+ ['qwen3.5-397b-a17b', 'Qwen3.5 400B VLM', 'S', '68.0%', '250k'],
237
256
  ['mistral/mistral-large-3-675b-instruct-2512', 'Mistral Large 675B', 'A+', '58.0%', '250k'],
238
257
  ['qwen3-235b-a22b-instruct-2507', 'Qwen3 235B', 'S+', '70.0%', '128k'],
258
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '131k'],
239
259
  ['qwen3-coder-30b-a3b-instruct', 'Qwen3 Coder 30B', 'A+', '55.0%', '32k'],
260
+ ['holo2-30b-a3b', 'Holo2 30B', 'A+', '52.0%', '131k'],
240
261
  ['llama-3.3-70b-instruct', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
241
262
  ['deepseek-r1-distill-llama-70b', 'R1 Distill 70B', 'A', '43.9%', '128k'],
242
263
  ['mistral-small-3.2-24b-instruct-2506', 'Mistral Small 3.2', 'B+', '30.0%', '128k'],
@@ -245,8 +266,11 @@ export const scaleway = [
245
266
  // 📖 Google AI Studio source - https://aistudio.google.com
246
267
  // 📖 Free Gemma models — 14.4K req/day, API keys at https://aistudio.google.com/apikey
247
268
  export const googleai = [
269
+ ['gemma-4-31b-it', 'Gemma 4 31B', 'B+', '45.0%', '256k'],
270
+ ['gemma-4-26b-a4b-it', 'Gemma 4 26B MoE', 'B+', '42.0%', '256k'],
248
271
  ['gemma-3-27b-it', 'Gemma 3 27B', 'B', '22.0%', '128k'],
249
272
  ['gemma-3-12b-it', 'Gemma 3 12B', 'C', '15.0%', '128k'],
273
+ ['gemma-4-e4b-it', 'Gemma 4 E4B', 'C', '12.0%', '128k'],
250
274
  ['gemma-3-4b-it', 'Gemma 3 4B', 'C', '10.0%', '128k'],
251
275
  ]
252
276
 
@@ -280,15 +304,29 @@ export const siliconflow = [
280
304
  // 📖 OpenAI-compatible endpoint: https://api.together.xyz/v1/chat/completions
281
305
  // 📖 Credits/promotions vary by account and region; verify current quota in console.
282
306
  export const together = [
307
+ // ── S+ tier ──
283
308
  ['moonshotai/Kimi-K2.5', 'Kimi K2.5', 'S+', '76.8%', '128k'],
284
- ['Qwen/Qwen3.5-397B-A17B', 'Qwen3.5 400B VLM', 'S', '68.0%', '250k'],
285
- ['MiniMaxAI/MiniMax-M2.5', 'MiniMax M2.5', 'S+', '80.2%', '200k'],
309
+ ['MiniMaxAI/MiniMax-M2.5', 'MiniMax M2.5', 'S+', '80.2%', '228k'],
286
310
  ['zai-org/GLM-5', 'GLM-5', 'S+', '77.8%', '128k'],
287
311
  ['Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
288
- ['deepseek-ai/DeepSeek-V3.1', 'DeepSeek V3.1', 'S', '62.0%', '128k'],
289
- ['deepseek-ai/DeepSeek-R1', 'DeepSeek R1', 'S', '61.0%', '128k'],
290
- ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
291
- ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
312
+ ['deepseek-ai/DeepSeek-V3.2', 'DeepSeek V3.2', 'S+', '73.1%', '164k'],
313
+ ['MiniMaxAI/MiniMax-M2.1', 'MiniMax M2.1', 'S+', '74.0%', '197k'],
314
+ // ── S tier ──
315
+ ['Qwen/Qwen3.5-397B-A17B', 'Qwen3.5 400B VLM', 'S', '68.0%', '250k'],
316
+ ['deepseek-ai/DeepSeek-V3.1', 'DeepSeek V3.1', 'S', '62.0%', '164k'],
317
+ ['deepseek-ai/DeepSeek-V3.1-Terminus', 'DeepSeek V3.1 Term', 'S', '68.4%', '164k'],
318
+ ['deepseek-ai/DeepSeek-R1', 'DeepSeek R1', 'S', '61.0%', '164k'],
319
+ ['openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '131k'],
320
+ ['Qwen/Qwen3-235B-A22B-Instruct-2507', 'Qwen3 235B 2507', 'S+', '70.0%', '131k'],
321
+ ['MiniMaxAI/MiniMax-M2', 'MiniMax M2', 'S', '69.4%', '197k'],
322
+ // ── A+ tier ──
323
+ ['nvidia/Nemotron-3-Super-120B-A12B', 'Nemotron 3 Super', 'A+', '56.0%', '128k'],
324
+ ['nvidia/Nemotron-3-Nano-30B-A3B', 'Nemotron Nano 30B', 'A', '43.0%', '262k'],
325
+ ['Qwen/Qwen3-Coder-30B-A3B-Instruct', 'Qwen3 Coder 30B', 'A+', '55.0%', '160k'],
326
+ // ── A tier ──
327
+ ['meta-llama/Llama-4-Scout-17B-16E-Instruct', 'Llama 4 Scout', 'A', '44.0%', '328k'],
328
+ ['openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '131k'],
329
+ // ── A- tier ──
292
330
  ['meta-llama/Llama-3.3-70B-Instruct-Turbo', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
293
331
  ]
294
332
 
@@ -297,17 +335,27 @@ export const together = [
297
335
  // 📖 https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/v1/chat/completions
298
336
  // 📖 Free plan includes daily neuron quota and provider-level request limits.
299
337
  export const cloudflare = [
338
+ // ── S+ tier ──
300
339
  ['@cf/moonshotai/kimi-k2.5', 'Kimi K2.5', 'S+', '76.8%', '256k'],
301
- ['@cf/zhipu/glm-4.7-flash', 'GLM-4.7-Flash', 'S', '59.2%', '131k'],
302
- ['@cf/openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
303
- ['@cf/meta/llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '131k'],
304
- ['@cf/nvidia/nemotron-3-120b-a12b', 'Nemotron 3 Super', 'A+', '56.0%', '128k'],
305
- ['@cf/qwen/qwen3-30b-a3b-fp8', 'Qwen3 30B MoE', 'A', '45.0%', '128k'],
306
- ['@cf/qwen/qwen2.5-coder-32b-instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
307
- ['@cf/deepseek-ai/deepseek-r1-distill-qwen-32b', 'R1 Distill 32B', 'A', '43.9%', '128k'],
308
- ['@cf/openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
309
- ['@cf/meta/llama-3.3-70b-instruct-fp8-fast', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
310
- ['@cf/meta/llama-3.1-8b-instruct', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
340
+ // ── S tier ──
341
+ ['@cf/zhipu/glm-4.7-flash', 'GLM-4.7-Flash', 'S', '59.2%', '131k'],
342
+ ['@cf/openai/gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '128k'],
343
+ // ── A+ tier ──
344
+ ['@cf/qwen/qwq-32b', 'QwQ 32B', 'A+', '50.0%', '131k'],
345
+ // ── A tier ──
346
+ ['@cf/meta/llama-4-scout-17b-16e-instruct', 'Llama 4 Scout', 'A', '44.0%', '131k'],
347
+ ['@cf/nvidia/nemotron-3-120b-a12b', 'Nemotron 3 Super', 'A+', '56.0%', '128k'],
348
+ ['@cf/qwen/qwen3-30b-a3b-fp8', 'Qwen3 30B MoE', 'A', '45.0%', '128k'],
349
+ ['@cf/qwen/qwen2.5-coder-32b-instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
350
+ ['@cf/deepseek-ai/deepseek-r1-distill-qwen-32b', 'R1 Distill 32B', 'A', '43.9%', '128k'],
351
+ // ── A- tier ──
352
+ ['@cf/openai/gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '128k'],
353
+ ['@cf/meta/llama-3.3-70b-instruct-fp8-fast', 'Llama 3.3 70B', 'A-', '39.5%', '128k'],
354
+ ['@cf/google/gemma-4-26b-a4b-it', 'Gemma 4 26B MoE', 'A-', '38.0%', '256k'],
355
+ ['@cf/mistralai/mistral-small-3.1-24b-instruct', 'Mistral Small 3.1', 'B+', '30.0%', '128k'],
356
+ // ── B tier ──
357
+ ['@cf/ibm/granite-4.0-h-micro', 'Granite 4.0 Micro', 'B+', '30.0%', '128k'],
358
+ ['@cf/meta/llama-3.1-8b-instruct', 'Llama 3.1 8B', 'B', '28.8%', '128k'],
311
359
  ]
312
360
 
313
361
  // 📖 Perplexity source - https://docs.perplexity.ai
@@ -328,16 +376,20 @@ export const perplexity = [
328
376
  // 📖 Qwen3-Coder models: optimized coding models with excellent SWE-bench scores
329
377
  export const qwen = [
330
378
  // ── S+ tier — SWE-bench Verified ≥70% ──
331
- ['qwen3-coder-plus', 'Qwen3 Coder Plus', 'S+', '69.6%', '256k'],
332
- ['qwen3-coder-480b-a35b-instruct', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
379
+ ['qwen3.6-plus', 'Qwen3.6 Plus', 'S+', '78.8%', '1M'],
380
+ ['qwen3-coder-plus', 'Qwen3 Coder Plus', 'S+', '69.6%', '256k'],
381
+ ['qwen3-coder-480b-a35b-instruct', 'Qwen3 Coder 480B', 'S+', '70.6%', '256k'],
333
382
  // ── S tier — SWE-bench Verified 60–70% ──
334
- ['qwen3-coder-max', 'Qwen3 Coder Max', 'S', '67.0%', '256k'],
335
- ['qwen3-coder-next', 'Qwen3 Coder Next', 'S', '65.0%', '256k'],
336
- ['qwen3-235b-a22b-instruct', 'Qwen3 235B', 'S', '70.0%', '256k'],
337
- ['qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
383
+ ['qwen3.5-plus', 'Qwen3.5 Plus', 'S', '68.0%', '1M'],
384
+ ['qwen3-coder-max', 'Qwen3 Coder Max', 'S', '67.0%', '256k'],
385
+ ['qwen3-coder-next', 'Qwen3 Coder Next', 'S', '65.0%', '256k'],
386
+ ['qwen3-235b-a22b-instruct', 'Qwen3 235B', 'S', '70.0%', '256k'],
387
+ ['qwen3-next-80b-a3b-instruct', 'Qwen3 80B Instruct', 'S', '65.0%', '128k'],
338
388
  // ── A+ tier — SWE-bench Verified 50–60% ──
339
- ['qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
340
- ['qwen2.5-coder-32b-instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
389
+ ['qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
390
+ ['qwen2.5-coder-32b-instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
391
+ // ── B+ tier ──
392
+ ['qwen3.5-flash', 'Qwen3.5 Flash', 'B+', '55.0%', '1M'],
341
393
  ]
342
394
 
343
395
  // 📖 iFlow source - https://platform.iflow.cn
@@ -361,6 +413,15 @@ export const iflow = [
361
413
  ['qwen3-max', 'Qwen3 Max', 'A+', '55.0%', '256k'],
362
414
  ]
363
415
 
416
+ // 📖 Chutes AI - Decentralized serverless AI compute (Bittensor Subnet 64)
417
+ // 📖 Truly free (community GPU-powered), no credit card required
418
+ export const chutes = [
419
+ ['deepseek-ai/DeepSeek-R1', 'DeepSeek R1', 'S', '61.0%', '64k'],
420
+ ['meta-llama/Llama-3.1-70B-Instruct', 'Llama 3.1 70B', 'A-', '39.5%', '128k'],
421
+ ['Qwen/Qwen2.5-72B-Instruct', 'Qwen 2.5 72B', 'A', '42.0%', '32k'],
422
+ ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
423
+ ]
424
+
364
425
  // 📖 Rovo Dev CLI source - https://www.atlassian.com/rovo
365
426
  // 📖 CLI tool only - no API endpoint - requires 'acli rovodev run'
366
427
  // 📖 Install: https://support.atlassian.com/rovo/docs/install-and-run-rovo-dev-cli-on-your-device/
@@ -369,16 +430,20 @@ export const iflow = [
  export const rovo = [
  ['anthropic/claude-sonnet-4.6', 'Claude Sonnet 4.6', 'S+', '75.0%', '200k'],
  ['anthropic/claude-opus-4.6', 'Claude Opus 4.6', 'S+', '80.0%', '200k'],
+ ['openai/gpt-5.2', 'GPT-5.2', 'S+', '72.0%', '400k'],
+ ['openai/gpt-5.2-codex', 'GPT-5.2 Codex', 'S+', '74.0%', '400k'],
+ ['anthropic/claude-haiku-4.5', 'Claude Haiku 4.5', 'A+', '50.0%', '200k'],
  ]

  // 📖 Gemini CLI source - https://github.com/google-gemini/gemini-cli
  // 📖 CLI tool with OpenAI-compatible API support
  // 📖 Install: npm install -g @google/gemini-cli
  // 📖 Free tier: 1,000 req/day with personal Google account (no credit card)
- // 📖 Models: Gemini 3 Pro (76.2% SWE-bench), Gemini 2.5 Pro, Gemini 2.5 Flash
+ // 📖 Models: Gemini 3.1 Pro, Gemini 2.5 Pro, Gemini 2.5 Flash
+ // 📖 Note: Gemini 3 Pro was shut down March 9, 2026 — replaced by Gemini 3.1 Pro
  // 📖 Supports custom OpenAI-compatible providers via GEMINI_API_BASE_URL
  export const gemini = [
- ['google/gemini-3-pro', 'Gemini 3 Pro 🆕', 'S+', '76.2%', '1M'],
+ ['google/gemini-3.1-pro', 'Gemini 3.1 Pro', 'S+', '78.0%', '1M'],
  ['google/gemini-2.5-pro', 'Gemini 2.5 Pro', 'S+', '63.2%', '1M'],
  ['google/gemini-2.5-flash', 'Gemini 2.5 Flash', 'A+', '50.0%', '1M'],
  ]
@@ -389,11 +454,13 @@ export const gemini = [
  // 📖 Login: https://opencode.ai/auth — get your Zen API key
  // 📖 Config: set provider to opencode/<model-id> in OpenCode config
  export const opencodeZen = [
- ['big-pickle', 'Big Pickle 🆕', 'S+', '72.0%', '200k'],
- ['gpt-5-nano', 'GPT 5 Nano 🆕', 'S', '65.0%', '128k'],
- ['mimo-v2-flash-free', 'MiMo V2 Flash Free 🆕', 'S+', '73.4%', '256k'],
- ['minimax-m2.5-free', 'MiniMax M2.5 Free 🆕', 'S+', '80.2%', '200k'],
- ['nemotron-3-super-free', 'Nemotron 3 Super Free 🆕', 'A+', '52.0%', '128k'],
+ ['big-pickle', 'Big Pickle', 'S+', '72.0%', '200k'],
+ ['mimo-v2-pro-free', 'MiMo V2 Pro Free', 'S+', '75.0%', '1M'],
+ ['mimo-v2-flash-free', 'MiMo V2 Flash Free', 'S+', '73.4%', '262k'],
+ ['mimo-v2-omni-free', 'MiMo V2 Omni Free', 'S+', '73.0%', '262k'],
+ ['gpt-5-nano', 'GPT 5 Nano', 'S', '65.0%', '400k'],
+ ['minimax-m2.5-free', 'MiniMax M2.5 Free', 'S+', '80.2%', '200k'],
+ ['nemotron-3-super-free', 'Nemotron 3 Super Free', 'A+', '52.0%', '1M'],
  ]

  // 📖 All sources combined - used by the main script
@@ -518,19 +585,24 @@ export const sources = {
  binary: 'gemini',
  checkArgs: ['--version'],
  },
- // 📖 OpenCode Zen free models — hosted AI gateway, only runs on OpenCode CLI / Desktop
  'opencode-zen': {
  name: 'OpenCode Zen',
  url: 'https://opencode.ai/zen/v1/chat/completions',
  models: opencodeZen,
  zenOnly: true,
  },
+ chutes: {
+ name: 'Chutes AI',
+ url: 'https://chutes.ai/v1/chat/completions',
+ models: chutes,
+ },
  }

  // 📖 Flatten all models from all sources — each entry includes providerKey as 6th element
  // 📖 providerKey lets the main CLI know which API key and URL to use per model
- export const MODELS = []
+ export const MODELS = [];
  for (const [sourceKey, sourceData] of Object.entries(sources)) {
+ if (!sourceData || !sourceData.models) continue
  for (const [modelId, label, tier, sweScore, ctx] of sourceData.models) {
  MODELS.push([modelId, label, tier, sweScore, ctx, sourceKey])
  }
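The flatten loop above (with the `!sourceData.models` guard added in 0.3.34) can be exercised standalone. A self-contained sketch with a toy `sources` object — the provider entries here are illustrative data, not the package's full table:

```javascript
// Self-contained sketch of the flatten step, using a toy `sources` object.
const sources = {
  gemini: { name: 'Gemini CLI', models: [['google/gemini-2.5-pro', 'Gemini 2.5 Pro', 'S+', '63.2%', '1M']] },
  chutes: { name: 'Chutes AI', models: [['deepseek-ai/DeepSeek-R1', 'DeepSeek R1', 'S', '61.0%', '64k']] },
  broken: null, // entries without a models array are skipped by the guard
}

const MODELS = []
for (const [sourceKey, sourceData] of Object.entries(sources)) {
  if (!sourceData || !sourceData.models) continue // guard added in 0.3.34
  for (const [modelId, label, tier, sweScore, ctx] of sourceData.models) {
    MODELS.push([modelId, label, tier, sweScore, ctx, sourceKey]) // providerKey is the 6th element
  }
}
```

With the guard in place, a malformed provider entry (like the trailing-comma bug fixed in the `googleai` array) no longer crashes the flatten pass; it is simply skipped.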
@@ -59,6 +59,7 @@ export const ENV_VAR_NAMES = {
  perplexity: 'PERPLEXITY_API_KEY',
  zai: 'ZAI_API_KEY',
  gemini: 'GEMINI_API_KEY',
+ chutes: 'CHUTES_API_KEY',
  }

  // 📖 OPENCODE_MODEL_MAP: sparse table of model IDs that differ between sources.js and OpenCode's
@@ -66,7 +67,6 @@ export const ENV_VAR_NAMES = {
  export const OPENCODE_MODEL_MAP = {
  groq: {
  'moonshotai/kimi-k2-instruct': 'moonshotai/kimi-k2-instruct-0905',
- 'meta-llama/llama-4-scout-17b-16e-preview': 'meta-llama/llama-4-scout-17b-16e-instruct',
  'meta-llama/llama-4-maverick-17b-128e-preview': 'meta-llama/llama-4-maverick-17b-128e-instruct',
  }
  }
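Because `OPENCODE_MODEL_MAP` is a sparse table, any ID without an entry passes through unchanged — which is why the now-renamed Groq scout model could simply be dropped from the map. A sketch of that lookup; the `toOpenCodeId` helper is hypothetical, not part of this diff:

```javascript
// Sketch of applying a sparse per-provider remap table like OPENCODE_MODEL_MAP.
// `toOpenCodeId` is a hypothetical helper name, not from this package.
const OPENCODE_MODEL_MAP = {
  groq: {
    'moonshotai/kimi-k2-instruct': 'moonshotai/kimi-k2-instruct-0905',
    'meta-llama/llama-4-maverick-17b-128e-preview': 'meta-llama/llama-4-maverick-17b-128e-instruct',
  },
}

function toOpenCodeId(providerKey, modelId) {
  // Fall back to the original ID when no remapping entry exists.
  return OPENCODE_MODEL_MAP[providerKey]?.[modelId] ?? modelId
}
```

For example, `toOpenCodeId('groq', 'meta-llama/llama-4-scout-17b-16e-instruct')` now returns the ID as-is, since the scout entry was removed after the upstream rename.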
@@ -250,4 +250,11 @@ export const PROVIDER_METADATA = {
  rateLimits: 'Free tier models — requires OpenCode Zen API key',
  zenOnly: true,
  },
+ chutes: {
+ label: 'Chutes AI',
+ color: chalk.rgb(144, 238, 144),
+ signupUrl: 'https://chutes.ai',
+ signupHint: 'Sign up and generate an API key',
+ rateLimits: 'Free (community GPU-powered), no hard cap',
+ },
  }
@@ -882,11 +882,11 @@ export function renderTable(results, pendingPings, frame, cursor = null, sortCol
  ? chalk.rgb(255, 182, 193)(`Last release: ${lastReleaseDate}`)
  : ''

- const xSupportBg = chalk.bgRgb(255, 0, 128).rgb(255, 255, 255).bold('🐦 Support me on X: ') +
+ const xSupportBg = chalk.bgRgb(140, 0, 80).rgb(255, 255, 255).bold('🐦 Follow me on X: ') +
  '\x1b]8;;https://x.com/vavanessadev\x1b\\' +
- chalk.bgRgb(255, 0, 128).rgb(255, 255, 0).bold('@vavanessadev') +
+ chalk.bgRgb(140, 0, 80).rgb(255, 200, 50).bold('@vavanessadev') +
  '\x1b]8;;\x1b\\' +
- chalk.bgRgb(255, 0, 128).rgb(255, 255, 255).bold(' 💖')
+ chalk.bgRgb(140, 0, 80).rgb(255, 255, 255).bold(' to check my other projects! 💖')

  lines.push(
  ' ' + themeColors.hotkey('N') + themeColors.dim(' Changelog') +
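The `\x1b]8;;…\x1b\\` strings in the hunk above are OSC 8 terminal hyperlink escapes: an opening sequence carrying the URL, the visible text, and an empty closing sequence. A minimal sketch of that wrapping, with the chalk styling omitted so only the escape codes are visible:

```javascript
// Minimal sketch of the OSC 8 hyperlink wrapping used in renderTable
// (chalk styling omitted; only the escape sequences are shown).
function osc8Link(url, text) {
  // ESC ] 8 ; ; URL ESC \ <text> ESC ] 8 ; ; ESC \
  return `\x1b]8;;${url}\x1b\\` + text + '\x1b]8;;\x1b\\'
}
```

Terminals without OSC 8 support generally ignore the sequences and print only the inner text, which is why the styled label still reads correctly there.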