free-coding-models 0.3.34 → 0.3.35

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,6 +1,23 @@
  # Changelog
  ---
 
+ ## [0.3.35] - 2026-04-07
+
+ ### Added
+ - **OVHcloud AI Endpoints** — new European sovereign AI provider (8 models: Qwen3 Coder 30B MoE, GPT OSS 120B, GPT OSS 20B, Llama 3.3 70B, Qwen3 32B, R1 Distill 70B, Mistral Small 3.2, Llama 3.1 8B)
+ - Free sandbox mode: 2 req/min per IP per model (no API key needed), 400 RPM with API key
+ - **Now 238 models across 25 providers** (was 230/24)
+
+ ### Security
+ - **SECURITY.md** — full security policy with vulnerability reporting, architecture, and supply chain docs
+ - **CODEOWNERS** — all changes require @vava-nessa review
+ - **Dependabot** — weekly automated dependency + GitHub Actions updates (`.github/dependabot.yml`)
+ - **Security Audit CI** — `npm audit` + lockfile lint on every push/PR + weekly schedule (`.github/workflows/security-audit.yml`)
+ - **npm Provenance** — release workflow now publishes with `--provenance` (Sigstore-signed)
+ - **SBOM generation** — Software Bill of Materials attached to every GitHub Release
+ - **README trust badges** — dependency count, provenance, supply chain badges
+ - **README 🛡️ Security section** — what the tool does/doesn't do, supply chain table
+
  ## [0.3.34] - 2026-04-06
 
  ### Added
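The keyless sandbox limit noted above (2 req/min per IP per model) implies one request every 30 seconds. A minimal client-side scheduling sketch — the `makeScheduler` helper is illustrative, not part of free-coding-models:

```javascript
// Pure scheduling helper: given a per-minute cap, returns a function that
// computes how many ms to sleep before each successive request.
// Illustrative only; not the package's actual rate limiter.
function makeScheduler(requestsPerMinute) {
  const intervalMs = 60_000 / requestsPerMinute;
  let nextSlot = 0; // earliest time (ms) the next request may fire
  return function delayFor(nowMs) {
    const startAt = Math.max(nowMs, nextSlot);
    nextSlot = startAt + intervalMs;
    return startAt - nowMs; // ms to wait before firing
  };
}

const delayFor = makeScheduler(2); // sandbox cap: 2 req/min => 30 s spacing
console.log(delayFor(0));    // 0     — first request fires immediately
console.log(delayFor(1000)); // 29000 — second must wait out the 30 s slot
```

The caller would `await new Promise(r => setTimeout(r, delayFor(Date.now())))` before each keyless probe; with an API key the cap rises to 400 RPM and the spacing shrinks accordingly.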
package/README.md CHANGED
@@ -2,8 +2,12 @@
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
- <img src="https://img.shields.io/badge/models-230-76b900?logo=nvidia" alt="models count">
- <img src="https://img.shields.io/badge/providers-24-blue" alt="providers count">
+ <img src="https://img.shields.io/badge/models-238-76b900?logo=nvidia" alt="models count">
+ <img src="https://img.shields.io/badge/providers-25-blue" alt="providers count">
+ <br>
+ <img src="https://img.shields.io/badge/dependencies-1-76b900?logo=npm" alt="1 dependency">
+ <img src="https://img.shields.io/badge/provenance-sigstore-blueviolet?logo=sigstore" alt="npm provenance">
+ <img src="https://img.shields.io/badge/supply_chain-verified-brightgreen" alt="supply chain verified">
  </p>
 
  <h1 align="center">free-coding-models</h1>
@@ -14,7 +18,7 @@
 
  <p align="center">
  <strong>Find the fastest free coding model in seconds</strong><br>
- <sub>Ping 230 models across 24 AI Free providers in real-time </sub><br><sub> Install Free API endpoints to your favorite AI coding tool: <br>📦 OpenCode, 🦞 OpenClaw, 💘 Crush, 🪿 Goose, 🛠 Aider, 🐉 Qwen Code, 🤲 OpenHands, ⚡ Amp, π Pi, 🦘 Rovo or ♊ Gemini in one keystroke</sub>
+ <sub>Ping 238 models across 25 free AI providers in real-time</sub><br><sub>Install free API endpoints to your favorite AI coding tool: <br>📦 OpenCode, 🦞 OpenClaw, 💘 Crush, 🪿 Goose, 🛠 Aider, 🐉 Qwen Code, 🤲 OpenHands, ⚡ Amp, π Pi, 🦘 Rovo or ♊ Gemini in one keystroke</sub>
  </p>
 
 
@@ -51,7 +55,7 @@ create a free account on one of the [providers](#-list-of-free-ai-providers)
 
  ## 💡 Why this tool?
 
- There are **230+ free coding models** scattered across 24 providers. Which one is fastest right now? Which one is actually stable versus just lucky on the last ping?
+ There are **238+ free coding models** scattered across 25 providers. Which one is fastest right now? Which one is actually stable versus just lucky on the last ping?
 
  This CLI pings them all in parallel, shows live latency, and calculates a **live Stability Score (0-100)**. Average latency alone is misleading if a model randomly spikes to 6 seconds; the stability score measures true reliability by combining **p95 latency** (30%), **jitter/variance** (30%), **spike rate** (20%), and **uptime** (20%).
 
@@ -65,7 +69,7 @@ It then writes the model you pick directly into your coding tool's config — so
 
  Create a free account on one provider below to get started:
 
- **230 coding models** across 24 providers, ranked by [SWE-bench Verified](https://www.swebench.com).
+ **238 coding models** across 25 providers, ranked by [SWE-bench Verified](https://www.swebench.com).
 
  | Provider | Models | Tier range | Free tier | Env var |
  |----------|--------|-----------|-----------|--------|
@@ -86,6 +90,7 @@ Create a free account on one provider below to get started:
  | [SiliconFlow](https://cloud.siliconflow.cn/account/ak) | 6 | S+ → A | Free models: usually 100 RPM, varies by model | `SILICONFLOW_API_KEY` |
  | [Cerebras](https://cloud.cerebras.ai) | 4 | S+ → B | Generous free tier (developer tier 10× higher limits) | `CEREBRAS_API_KEY` |
  | [Perplexity API](https://www.perplexity.ai/settings/api) | 4 | A+ → B | Tiered limits by spend (default ~50 RPM) | `PERPLEXITY_API_KEY` |
+ | [OVHcloud AI Endpoints](https://endpoints.ai.cloud.ovh.net) | 8 | S → B | Free sandbox: 2 req/min/IP (no key). 400 RPM with key | `OVH_AI_ENDPOINTS_ACCESS_TOKEN` |
  | [Chutes AI](https://chutes.ai) | 4 | S → A | Free (community GPU-powered, no credit card) | `CHUTES_API_KEY` |
  | [DeepInfra](https://deepinfra.com/login) | 4 | A- → B+ | 200 concurrent requests (default) | `DEEPINFRA_API_KEY` |
  | [Fireworks AI](https://fireworks.ai) | 4 | S → B+ | $1 credits – 10 req/min without payment | `FIREWORKS_API_KEY` |
@@ -284,7 +289,7 @@ When a tool mode is active (via `Z`), models incompatible with that tool are hig
 
  ## ✨ Features
 
- - **Parallel pings** — all 230 models tested simultaneously via native `fetch`
+ - **Parallel pings** — all 238 models tested simultaneously via native `fetch`
  - **Adaptive monitoring** — 2s burst for 60s → 10s normal → 30s idle
  - **Stability score** — composite 0–100 (p95 latency, jitter, spike rate, uptime)
  - **Smart ranking** — top 3 highlighted 🥇🥈🥉
@@ -322,7 +327,7 @@ We welcome contributions — issues, PRs, new provider integrations.
 
  ## ⚖️ Model Licensing & Commercial Use
 
- **Short answer:** All 230 models allow **commercial use of generated output (including code)**. You own what the models generate for you.
+ **Short answer:** All 238 models allow **commercial use of generated output (including code)**. You own what the models generate for you.
 
  ### Output Ownership
 
@@ -354,6 +359,40 @@ For every model in this tool, **you own the generated output** — code, text, o
 
  ---
 
+ ## 🛡️ Security & Trust
+
+ ### Supply Chain
+
+ | Signal | Status |
+ |--------|--------|
+ | **npm Provenance** | ✅ Published with Sigstore-signed provenance |
+ | **SBOM** | ✅ Software Bill of Materials attached to every GitHub Release |
+ | **Dependencies** | ✅ 1 runtime dependency (`chalk`) |
+ | **Lockfile** | ✅ `pnpm-lock.yaml` committed and tracked |
+ | **Security Policy** | ✅ [`SECURITY.md`](SECURITY.md) |
+ | **Code Owners** | ✅ [`CODEOWNERS`](CODEOWNERS) — all changes require maintainer review |
+ | **Dependabot** | ✅ Weekly automated dependency + GitHub Actions updates |
+ | **Audit CI** | ✅ `npm audit` runs on every push/PR + weekly scheduled scan |
+ | **License** | ✅ MIT |
+
+ ### What This Tool Does
+
+ - Pings public API endpoints to measure latency and check availability
+ - Reads your API keys from `.env` files (only if you configure them)
+ - Opens configuration files for editing (with your permission)
+ - Reports anonymous usage data (no personal information — see footer)
+
+ ### What This Tool Does NOT Do
+
+ - ❌ Does **not** send your API keys, code, or personal data to any third party
+ - ❌ Does **not** install or execute arbitrary code beyond `chalk` (the only dependency)
+ - ❌ Does **not** modify any files outside its own config directory
+ - ❌ Does **not** require `sudo`, root, or elevated permissions
+
+ > To report a vulnerability, see [`SECURITY.md`](SECURITY.md).
+
+ ---
+
  ## 📧 Support
 
  [GitHub Issues](https://github.com/vava-nessa/free-coding-models/issues) · [Discord](https://discord.gg/ZTNFHvvCkU)
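The README hunk above publishes the Stability Score weights (p95 latency 30%, jitter 30%, spike rate 20%, uptime 20%) but not the normalization. A hedged sketch using those weights — the caps and the 3×-mean spike threshold below are our assumptions, not the package's exact implementation:

```javascript
// Illustrative 0-100 stability score from a window of ping samples.
// `null` entries mark failed pings. Weights come from the README;
// the 5 s p95 cap, 1 s jitter cap, and 3x-mean spike threshold are assumed.
function stabilityScore(latenciesMs) {
  const n = latenciesMs.length;
  const ok = latenciesMs.filter(ms => ms !== null);
  if (ok.length === 0) return 0; // nothing ever answered

  const sorted = [...ok].sort((a, b) => a - b);
  const p95 = sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
  const mean = ok.reduce((s, v) => s + v, 0) / ok.length;
  const jitter = Math.sqrt(ok.reduce((s, v) => s + (v - mean) ** 2, 0) / ok.length);

  const p95Score    = Math.max(0, 1 - p95 / 5000);                      // assumed 5 s cap
  const jitterScore = Math.max(0, 1 - jitter / 1000);                   // assumed 1 s cap
  const spikeRate   = ok.filter(v => v > 3 * mean).length / ok.length;  // assumed threshold
  const uptime      = ok.length / n;

  return Math.round(100 * (0.3 * p95Score + 0.3 * jitterScore +
                           0.2 * (1 - spikeRate) + 0.2 * uptime));
}
```

This illustrates why the composite beats raw averages: a single 6-second spike tanks the p95, jitter, and spike terms at once, even when the mean still looks reasonable.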
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "free-coding-models",
- "version": "0.3.34",
+ "version": "0.3.35",
  "description": "Find the fastest coding LLM models in seconds — ping free models from multiple providers, pick the best one for OpenCode, Cursor, or any AI coding assistant.",
  "keywords": [
  "nvidia",
package/sources.js CHANGED
@@ -422,6 +422,21 @@ export const chutes = [
  ['Qwen/Qwen2.5-Coder-32B-Instruct', 'Qwen2.5 Coder 32B', 'A', '46.0%', '32k'],
  ]
 
+ // 📖 OVHcloud AI Endpoints - https://endpoints.ai.cloud.ovh.net
+ // 📖 OpenAI-compatible API with European data sovereignty (GDPR)
+ // 📖 Free sandbox: 2 req/min per IP per model (no API key needed), 400 RPM with API key
+ // 📖 Env var: OVH_AI_ENDPOINTS_ACCESS_TOKEN
+ export const ovhcloud = [
+ ['Qwen3-Coder-30B-A3B-Instruct', 'Qwen3 Coder 30B MoE', 'A+', '55.0%', '256k'],
+ ['gpt-oss-120b', 'GPT OSS 120B', 'S', '60.0%', '131k'],
+ ['gpt-oss-20b', 'GPT OSS 20B', 'A', '42.0%', '131k'],
+ ['Meta-Llama-3_3-70B-Instruct', 'Llama 3.3 70B', 'A-', '39.5%', '131k'],
+ ['Qwen3-32B', 'Qwen3 32B', 'A+', '50.0%', '32k'],
+ ['DeepSeek-R1-Distill-Llama-70B', 'R1 Distill 70B', 'A-', '40.0%', '131k'],
+ ['Mistral-Small-3.2-24B-Instruct-2506', 'Mistral Small 3.2', 'B+', '34.0%', '131k'],
+ ['Llama-3.1-8B-Instruct', 'Llama 3.1 8B', 'B', '28.8%', '131k'],
+ ]
+
  // 📖 Rovo Dev CLI source - https://www.atlassian.com/rovo
  // 📖 CLI tool only - no API endpoint - requires 'acli rovodev run'
  // 📖 Install: https://support.atlassian.com/rovo/docs/install-and-run-rovo-dev-cli-on-your-device/
@@ -596,6 +611,11 @@ export const sources = {
  url: 'https://chutes.ai/v1/chat/completions',
  models: chutes,
  },
+ ovhcloud: {
+ name: 'OVHcloud AI 🆕',
+ url: 'https://oai.endpoints.kepler.ai.cloud.ovh.net/v1/chat/completions',
+ models: ovhcloud,
+ },
  }
 
  // 📖 Flatten all models from all sources — each entry includes providerKey as 6th element
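Each `sources` entry above pairs an OpenAI-compatible `url` with a model list, so the new `ovhcloud` source can be probed like any other. A hedged sketch of such a probe — `buildPingRequest` and `pingModel` are our illustrative helpers, and the exact request body free-coding-models sends may differ:

```javascript
// Build a minimal OpenAI-compatible chat request for a latency probe.
// The Authorization header is omitted when no key is given, matching the
// OVHcloud keyless sandbox. Helper names are illustrative, not the package's.
function buildPingRequest(sourceUrl, modelId, apiKey) {
  return {
    url: sourceUrl,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
      },
      // Minimal probe body (assumed): one tiny message, one token back.
      body: JSON.stringify({
        model: modelId,
        max_tokens: 1,
        messages: [{ role: 'user', content: 'ping' }],
      }),
    },
  };
}

// Time one round-trip with native fetch (Node 18+).
async function pingModel(sourceUrl, modelId, apiKey) {
  const { url, options } = buildPingRequest(sourceUrl, modelId, apiKey);
  const start = performance.now();
  const res = await fetch(url, options);
  return { ok: res.ok, latencyMs: performance.now() - start };
}

// Example against the endpoint added in this release:
// pingModel('https://oai.endpoints.kepler.ai.cloud.ovh.net/v1/chat/completions',
//           'gpt-oss-20b', process.env.OVH_AI_ENDPOINTS_ACCESS_TOKEN);
```

Running one `pingModel` per flattened entry under `Promise.allSettled` is the obvious way to get the parallel-ping behavior the README describes.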
@@ -60,6 +60,7 @@ export const ENV_VAR_NAMES = {
  zai: 'ZAI_API_KEY',
  gemini: 'GEMINI_API_KEY',
  chutes: 'CHUTES_API_KEY',
+ ovhcloud: 'OVH_AI_ENDPOINTS_ACCESS_TOKEN',
  }
 
  // 📖 OPENCODE_MODEL_MAP: sparse table of model IDs that differ between sources.js and OpenCode's
@@ -257,4 +258,11 @@ export const PROVIDER_METADATA = {
  signupHint: 'Sign up and generate an API key',
  rateLimits: 'Free (community GPU-powered), no hard cap',
  },
+ ovhcloud: {
+ label: 'OVHcloud AI 🆕',
+ color: chalk.rgb(100, 149, 205),
+ signupUrl: 'https://endpoints.ai.cloud.ovh.net',
+ signupHint: 'Manager → Public Cloud → AI Endpoints → API keys (optional: sandbox works without key)',
+ rateLimits: 'Free sandbox: 2 req/min per IP per model (no key). With API key: 400 RPM',
+ },
  }