free-coding-models 0.1.68 → 0.1.70

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,12 +2,27 @@
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
- <img src="https://img.shields.io/badge/models-134-76b900?logo=nvidia" alt="models count">
- <img src="https://img.shields.io/badge/providers-17-blue" alt="providers count">
+ <img src="https://img.shields.io/badge/models-150-76b900?logo=nvidia" alt="models count">
+ <img src="https://img.shields.io/badge/providers-19-blue" alt="providers count">
  </p>

  <h1 align="center">free-coding-models</h1>

+ <p align="center">
+ <strong>Contributors</strong><br>
+ <a href="https://github.com/vava-nessa"><img src="https://avatars.githubusercontent.com/u/5466264?v=4&s=60" width="60" height="60" style="border-radius:50%" alt="vava-nessa"></a>
+ <a href="https://github.com/erwinh22"><img src="https://avatars.githubusercontent.com/u/6641858?v=4&s=60" width="60" height="60" style="border-radius:50%" alt="erwinh22"></a>
+ <a href="https://github.com/whit3rabbit"><img src="https://avatars.githubusercontent.com/u/12357518?v=4&s=60" width="60" height="60" style="border-radius:50%" alt="whit3rabbit"></a>
+ <a href="https://github.com/skylaweber"><img src="https://avatars.githubusercontent.com/u/172871734?v=4&s=60" width="60" height="60" style="border-radius:50%" alt="skylaweber"></a>
+ <br>
+ <sub>
+ <a href="https://github.com/vava-nessa">vava-nessa</a> &middot;
+ <a href="https://github.com/erwinh22">erwinh22</a> &middot;
+ <a href="https://github.com/whit3rabbit">whit3rabbit</a> &middot;
+ <a href="https://github.com/skylaweber">skylaweber</a>
+ </sub>
+ </p>
+
  <p align="center">
  💬 <a href="https://discord.gg/5MbTnDC3Md">Let's talk about the project on Discord</a>
  </p>
@@ -24,7 +39,7 @@

  <p align="center">
  <strong>Find the fastest coding LLM models in seconds</strong><br>
- <sub>Ping free coding models from 18 providers in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
+ <sub>Ping free coding models from 19 providers in real-time — pick the best one for OpenCode, OpenClaw, or any AI coding assistant</sub>
  </p>

  <p align="center">
@@ -99,7 +114,7 @@ Before using `free-coding-models`, make sure you have:
  3. **OpenCode** *(optional)* — [Install OpenCode](https://github.com/opencode-ai/opencode) to use the OpenCode integration
  4. **OpenClaw** *(optional)* — [Install OpenClaw](https://openclaw.ai) to use the OpenClaw integration

- > 💡 **Tip:** You don't need all seventeen providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.
+ > 💡 **Tip:** You don't need all nineteen providers. One key is enough to get started. Add more later via the Settings screen (`P` key). Models without a key still show real latency (`🔑 NO KEY`) so you can evaluate providers before signing up.

  ---

@@ -180,13 +195,13 @@ When you run `free-coding-models` without `--opencode` or `--openclaw`, you get
  Use `↑↓` arrows to select, `Enter` to confirm. Then the TUI launches with your chosen mode shown in the header badge.

  **How it works:**
- 1. **Ping phase** — All enabled models are pinged in parallel (up to 139 across 18 providers)
- 2. **Continuous monitoring** — Models are re-pinged every 60 seconds forever
+ 1. **Ping phase** — All enabled models are pinged in parallel (up to 150 across 19 providers)
+ 2. **Continuous monitoring** — Models are re-pinged every 3 seconds forever
  3. **Real-time updates** — Watch "Latest", "Avg", and "Up%" columns update live
  4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to act
  5. **Smart detection** — Automatically detects if NVIDIA NIM is configured in OpenCode or OpenClaw

- Setup wizard (first run — walks through all 18 providers):
+ Setup wizard (first run — walks through all 19 providers):

  ```
  🔑 First-time setup — API keys
@@ -363,7 +378,7 @@ When enabled, telemetry events include: event name, app version, selected mode,

  ## 🤖 Coding Models

- **139 coding models** across 18 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.
+ **150 coding models** across 19 providers and 8 tiers, ranked by [SWE-bench Verified](https://www.swebench.com) — the industry-standard benchmark measuring real GitHub issue resolution. Scores are self-reported by providers unless noted.

  ### ZAI Coding Plan (5 models)

@@ -707,7 +722,7 @@ This script:
  │ 1. Enter alternate screen buffer (like vim/htop/less) │
  │ 2. Ping ALL models in parallel │
  │ 3. Display real-time table with Latest/Avg/Stability/Up% │
- │ 4. Re-ping ALL models every 60 seconds (forever) │
+ │ 4. Re-ping ALL models every 3 seconds (forever) │
  │ 5. Update rolling averages + stability scores per model │
  │ 6. User can navigate with ↑↓ and select with Enter │
  │ 7. On Enter (OpenCode): set model, launch OpenCode │
@@ -792,7 +807,7 @@ This script:

  **Configuration:**
  - **Ping timeout**: 15 seconds per attempt (slow models get more time)
- - **Ping interval**: 60 seconds between complete re-pings of all models (adjustable with W/X keys)
+ - **Ping interval**: 3 seconds between complete re-pings of all models (adjustable with W/X keys)
  - **Monitor mode**: Interface stays open forever, press Ctrl+C to exit

  **Flags:**
@@ -720,7 +720,7 @@ const ALT_HOME = '\x1b[H'
  // 📖 This allows easy addition of new model sources beyond NVIDIA NIM

  const PING_TIMEOUT = 15_000 // 📖 15s per attempt before abort - slow models get more time
- const PING_INTERVAL = 60_000 // 📖 60s between pings — avoids provider rate-limit bans
+ const PING_INTERVAL = 3_000 // 📖 3s between pings — faster feedback for model selection

  const FPS = 12
  const COL_MODEL = 22
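The two constants above set the rhythm of the monitor: one abort budget per ping attempt, one pause between full re-ping rounds. A minimal sketch of how they might be used — helper names (`pingModel`, `rollingAvg`) are illustrative, not the package's actual API:

```javascript
const PING_TIMEOUT = 15_000  // abort a single ping attempt after 15s
const PING_INTERVAL = 3_000  // wait 3s between complete re-ping rounds

// One ping attempt against an OpenAI-compatible /chat/completions endpoint.
// AbortSignal.timeout() enforces PING_TIMEOUT; failures still report latency.
async function pingModel(url, apiKey, modelId) {
  const started = Date.now()
  try {
    const res = await fetch(url, {
      method: 'POST',
      signal: AbortSignal.timeout(PING_TIMEOUT),
      headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
      body: JSON.stringify({
        model: modelId,
        max_tokens: 1,
        messages: [{ role: 'user', content: 'hi' }],
      }),
    })
    return { ok: res.ok, latencyMs: Date.now() - started }
  } catch {
    return { ok: false, latencyMs: Date.now() - started }
  }
}

// Rolling average over the last n latency samples (feeds the "Avg" column).
function rollingAvg(samples, n = 10) {
  const window = samples.slice(-n)
  return window.reduce((a, b) => a + b, 0) / window.length
}
```

At 3 seconds between rounds the feedback is near-real-time, at the cost of far more requests per provider than the old 60-second interval — which is presumably why the interval remains adjustable with the W/X keys.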
@@ -1301,6 +1301,7 @@ function renderTable(results, pendingPings, frame, cursor = null, sortColumn = '
  chalk.dim(' • ') +
  '🤝 ' +
  chalk.rgb(255, 165, 0)('\x1b]8;;https://github.com/vava-nessa/free-coding-models/graphs/contributors\x1b\\Contributors\x1b]8;;\x1b\\') +
+ chalk.dim(' (vava-nessa • erwinh22 • whit3rabbit • skylaweber)') +
  chalk.dim(' • ') +
  '💬 ' +
  chalk.rgb(200, 150, 255)('\x1b]8;;https://discord.gg/5MbTnDC3Md\x1b\\Discord\x1b]8;;\x1b\\') +
@@ -1576,6 +1577,13 @@ const PROVIDER_METADATA = {
  signupHint: 'Sign up and generate an API key',
  rateLimits: 'Free tier (generous quota)',
  },
+ iflow: {
+ label: 'iFlow',
+ color: chalk.rgb(100, 200, 255),
+ signupUrl: 'https://platform.iflow.cn',
+ signupHint: 'Register → Personal Information → Generate API Key (7-day expiry)',
+ rateLimits: 'Free for individuals (no request limits)',
+ },
  }

  // 📖 OpenCode config location: ~/.config/opencode/opencode.json on ALL platforms.
package/lib/config.js CHANGED
@@ -53,15 +53,12 @@
  * },
  * "favorites": [
  * "nvidia/deepseek-ai/deepseek-v3.2"
- * },
+ * ],
  * "telemetry": {
  * "enabled": true,
  * "consentVersion": 1,
  * "anonymousId": "anon_550e8400-e29b-41d4-a716-446655440000"
- * "apiKeys": { ... },
- * "providers": { ... },
- * "favorites": [ "nvidia/deepseek-ai/deepseek-v3.2" ],
- * "telemetry": { "enabled": true, "consentVersion": 1, "anonymousId": "anon_..." },
+ * },
  * "activeProfile": "work",
  * "profiles": {
  * "work": { "apiKeys": {...}, "providers": {...}, "favorites": [...], "settings": {...} },
@@ -137,6 +134,7 @@ const ENV_VARS = {
  cloudflare: ['CLOUDFLARE_API_TOKEN', 'CLOUDFLARE_API_KEY'],
  perplexity: ['PERPLEXITY_API_KEY', 'PPLX_API_KEY'],
  zai: 'ZAI_API_KEY',
+ iflow: 'IFLOW_API_KEY',
  }

  /**
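The hunk above shows that `ENV_VARS` entries come in two shapes: a single variable name (`iflow`) or an ordered list of fallbacks (`cloudflare`). A sketch of how such a map could be resolved — `resolveKey` is a hypothetical helper, not necessarily the function `lib/config.js` uses:

```javascript
// Shapes mirrored from the diff: string = one variable, array = ordered fallbacks.
const ENV_VARS = {
  cloudflare: ['CLOUDFLARE_API_TOKEN', 'CLOUDFLARE_API_KEY'],
  zai: 'ZAI_API_KEY',
  iflow: 'IFLOW_API_KEY',
}

// Normalize both shapes to an array, then return the first variable that is set.
function resolveKey(provider, env = process.env) {
  const vars = ENV_VARS[provider]
  const names = Array.isArray(vars) ? vars : [vars]
  for (const name of names) {
    if (env[name]) return env[name]
  }
  return null
}
```

Normalizing to an array keeps the lookup uniform, so adding a provider like `iflow` is a one-line change to the map.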
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "free-coding-models",
- "version": "0.1.68",
+ "version": "0.1.70",
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
package/sources.js CHANGED
@@ -27,8 +27,8 @@
  * 📖 Secondary: https://swe-rebench.com (independent evals, scores are lower)
  * 📖 Leaderboard tracker: https://www.marc0.dev/en/leaderboard
  *
- * @exports nvidiaNim, groq, cerebras, sambanova, openrouter, huggingface, replicate, deepinfra, fireworks, codestral, hyperbolic, scaleway, googleai, siliconflow, together, cloudflare, perplexity — model arrays per provider
- * @exports sources — map of { nvidia, groq, cerebras, sambanova, openrouter, huggingface, replicate, deepinfra, fireworks, codestral, hyperbolic, scaleway, googleai, siliconflow, together, cloudflare, perplexity } each with { name, url, models }
+ * @exports nvidiaNim, groq, cerebras, sambanova, openrouter, huggingface, replicate, deepinfra, fireworks, codestral, hyperbolic, scaleway, googleai, siliconflow, together, cloudflare, perplexity, iflow — model arrays per provider
+ * @exports sources — map of { nvidia, groq, cerebras, sambanova, openrouter, huggingface, replicate, deepinfra, fireworks, codestral, hyperbolic, scaleway, googleai, siliconflow, together, cloudflare, perplexity, iflow } each with { name, url, models }
  * @exports MODELS — flat array of [modelId, label, tier, sweScore, ctx, providerKey]
  *
  * 📖 MODELS now includes providerKey as 6th element so ping() knows which
@@ -290,6 +290,27 @@ export const perplexity = [
  ['sonar', 'Sonar', 'B', '25.0%', '128k'],
  ]

+ // 📖 iFlow source - https://platform.iflow.cn
+ // 📖 OpenAI-compatible endpoint: https://apis.iflow.cn/v1/chat/completions
+ // 📖 Free for individual users with no request limits (API key expires every 7 days)
+ // 📖 Provides high-performance models including DeepSeek, Qwen3, Kimi K2, GLM, and TBStars2
+ export const iflow = [
+ // ── S+ tier — SWE-bench Verified ≥70% ──
+ ['TBStars2-200B-A13B', 'TBStars2 200B', 'S+', '77.8%', '128k'],
+ ['deepseek-v3.2', 'DeepSeek V3.2', 'S+', '73.1%', '128k'],
+ ['qwen3-coder-plus', 'Qwen3 Coder Plus', 'S+', '72.0%', '256k'],
+ ['qwen3-235b-a22b-instruct', 'Qwen3 235B', 'S+', '70.0%', '256k'],
+ ['deepseek-r1', 'DeepSeek R1', 'S+', '70.6%', '128k'],
+ // ── S tier — SWE-bench Verified 60–70% ──
+ ['kimi-k2', 'Kimi K2', 'S', '65.8%', '128k'],
+ ['kimi-k2-0905', 'Kimi K2 0905', 'S', '68.0%', '256k'],
+ ['glm-4.6', 'GLM 4.6', 'S', '62.0%', '200k'],
+ ['deepseek-v3', 'DeepSeek V3', 'S', '62.0%', '128k'],
+ // ── A+ tier — SWE-bench Verified 50–60% ──
+ ['qwen3-32b', 'Qwen3 32B', 'A+', '50.0%', '128k'],
+ ['qwen3-max', 'Qwen3 Max', 'A+', '55.0%', '256k'],
+ ]
+
  // 📖 All sources combined - used by the main script
  // 📖 Each source has: name (display), url (API endpoint), models (array of model tuples)
  export const sources = {
@@ -383,6 +404,11 @@ export const sources = {
  url: 'https://api.perplexity.ai/chat/completions',
  models: perplexity,
  },
+ iflow: {
+ name: 'iFlow',
+ url: 'https://apis.iflow.cn/v1/chat/completions',
+ models: iflow,
+ },
  }

  // 📖 Flatten all models from all sources — each entry includes providerKey as 6th element
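The `sources.js` doc comment states that `MODELS` is a flat array of `[modelId, label, tier, sweScore, ctx, providerKey]`, with `providerKey` appended during flattening so `ping()` can route each model to the right endpoint. A sketch of that flattening, using the new `iflow` entry and one of its model tuples from the diff (the flattening code itself is a plausible reconstruction, not the package's verbatim implementation):

```javascript
// A single source entry, copied from the diff.
const sources = {
  iflow: {
    name: 'iFlow',
    url: 'https://apis.iflow.cn/v1/chat/completions',
    models: [['deepseek-v3.2', 'DeepSeek V3.2', 'S+', '73.1%', '128k']],
  },
}

// Flatten every provider's 5-tuples into 6-tuples by appending the provider key,
// giving the documented shape [modelId, label, tier, sweScore, ctx, providerKey].
const MODELS = Object.entries(sources).flatMap(([providerKey, src]) =>
  src.models.map((model) => [...model, providerKey])
)
```

Carrying the provider key inside each tuple means the ping loop needs no reverse lookup from model ID back to provider.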