free-coding-models 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,49 @@
MIT License with NVIDIA Disclaimer

Copyright (c) 2025 vava

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

NVIDIA DISCLAIMER

This project is NOT affiliated with, endorsed by, or sponsored by NVIDIA
Corporation.

"NVIDIA", "NIM", "NVIDIA NIM", and related trademarks are the property of
NVIDIA Corporation. Use of these trademarks is for descriptive purposes only
to indicate compatibility with NVIDIA's services.

This software is an independent, community-developed tool that interfaces
with NVIDIA's publicly documented APIs. The author makes no claims about
the availability, reliability, or performance of NVIDIA's services.

Users of this software are solely responsible for:
- Complying with NVIDIA's Terms of Service
- Managing their own API keys securely
- Any costs or usage associated with their NVIDIA account

The author shall not be held liable for any issues arising from the use of
NVIDIA's services, including but not limited to service interruptions,
API changes, billing disputes, or account-related issues.

For official NVIDIA NIM documentation and terms, visit:
https://build.nvidia.com
https://www.nvidia.com
package/README.md ADDED
@@ -0,0 +1,332 @@
<p align="center">
  <img src="https://img.shields.io/npm/v/free-coding-models?color=76b900&label=npm&logo=npm" alt="npm version">
  <img src="https://img.shields.io/node/v/free-coding-models?color=76b900&logo=node.js" alt="node version">
  <img src="https://img.shields.io/npm/l/free-coding-models?color=76b900" alt="license">
  <img src="https://img.shields.io/badge/models-44-76b900?logo=nvidia" alt="models count">
</p>

<h1 align="center">⚡ Free Coding Models</h1>

<p align="center">
  <strong>Find the fastest coding LLMs in seconds</strong><br>
  <sub>Ping free models from multiple providers — pick the best one for OpenCode, Cursor, or any AI coding assistant</sub>
</p>

<p align="center">
  <img src="demo.gif" alt="free-coding-models demo" width="100%">
</p>

<p align="center">
  <a href="#-features">Features</a> •
  <a href="#-requirements">Requirements</a> •
  <a href="#-installation">Installation</a> •
  <a href="#-usage">Usage</a> •
  <a href="#-coding-models">Models</a> •
  <a href="#️-how-it-works">How it works</a>
</p>

---

## ✨ Features

- **🎯 Coding-focused** — Only models optimized for code generation, not chat or vision
- **🚀 Parallel pings** — All 44 models tested simultaneously via native `fetch`
- **📊 Real-time animation** — Watch latency appear live in an alternate screen buffer
- **🏆 Smart ranking** — Top 3 fastest models highlighted with medals 🥇🥈🥉
- **⏱ Continuous monitoring** — Pings all models every 2 seconds until you exit
- **📈 Rolling averages** — Avg calculated from ALL successful pings since start
- **📊 Uptime tracking** — Percentage of successful pings shown in real time
- **🔄 Auto-retry** — Timed-out models keep getting retried; nothing is ever given up on
- **🎮 Interactive selection** — Navigate with the arrow keys directly in the table, press Enter to launch OpenCode
- **🔌 Auto-configuration** — Detects the NVIDIA NIM setup, shows install instructions if it's missing, and sets your pick as the default model
- **🎨 Clean output** — Zero scrollback pollution; the interface stays open until Ctrl+C
- **📶 Status indicators** — UP ✅ · Timeout ⏳ · Overloaded 🔥 · Not Found 🚫
- **🔧 Multi-source support** — Extensible architecture via `sources.js` (add new providers easily)

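Each ping result reduces to one of the states above. A minimal sketch of such a mapping (a hypothetical helper assuming conventional HTTP status semantics; the package's actual logic may differ):

```javascript
// Map an HTTP status to the table's status labels.
// `null` means the request was aborted before any response arrived (timeout).
function classifyStatus(httpStatus) {
  if (httpStatus === null) return 'Timeout ⏳';
  if (httpStatus === 200) return 'UP ✅';
  if (httpStatus === 404) return 'Not Found 🚫';
  if (httpStatus === 429 || httpStatus >= 500) return 'Overloaded 🔥';
  return `HTTP ${httpStatus}`; // surface anything unexpected as-is
}
```

Note that a timeout never produces an HTTP status at all: the request is aborted client-side, which is why it is handled before the status checks.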
---

## 📋 Requirements

Before using `free-coding-models`, make sure you have:

1. **Node.js 18+** — Required for the native `fetch` API
2. **OpenCode installed** — [Install OpenCode](https://github.com/opencode-ai/opencode) (`npm install -g opencode`)
3. **NVIDIA NIM account** — Free tier available at [build.nvidia.com](https://build.nvidia.com)
4. **API key** — Generate one from Profile → API Keys → Generate API Key

> 💡 **Tip:** Without OpenCode installed, you can still use the tool to benchmark models. OpenCode is only needed for the auto-launch feature.

---

## 📦 Installation

```bash
# npm (global install — recommended)
npm install -g free-coding-models

# pnpm
pnpm add -g free-coding-models

# bun
bun add -g free-coding-models

# Or use directly with npx/pnpx/bunx
npx free-coding-models YOUR_API_KEY
pnpx free-coding-models YOUR_API_KEY
bunx free-coding-models YOUR_API_KEY
```

---

## 🚀 Usage

```bash
# Just run it — it will prompt for an API key if none is set
free-coding-models
```

**How it works:**
1. **Ping phase** — All 44 models are pinged in parallel
2. **Continuous monitoring** — Models are re-pinged every 2 seconds until you exit
3. **Real-time updates** — Watch the "Latest", "Avg", and "Up%" columns update live
4. **Select anytime** — Use ↑↓ arrows to navigate, press Enter on a model to launch OpenCode
5. **Smart detection** — Automatically detects whether NVIDIA NIM is configured in OpenCode:
   - ✅ If configured → Sets the model as default and launches OpenCode
   - ⚠️ If missing → Shows installation instructions and launches OpenCode

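The parallel ping phase in step 1 can be sketched with `Promise.allSettled` plus `AbortSignal.timeout` (both built into Node 18+). The endpoint URL, headers, and request body below are illustrative assumptions, not the package's exact request; `fetchFn` is injected so the sketch can be exercised without network access:

```javascript
// Ping every model once, in parallel, recording per-model round-trip time.
// Pass the global `fetch` as `fetchFn` in real use.
async function pingAll(models, fetchFn, apiKey = '', timeoutMs = 15000) {
  const settled = await Promise.allSettled(
    models.map(async (model) => {
      const start = Date.now();
      const res = await fetchFn('https://integrate.api.nvidia.com/v1/chat/completions', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
        body: JSON.stringify({ model, messages: [{ role: 'user', content: 'ping' }], max_tokens: 1 }),
        signal: AbortSignal.timeout(timeoutMs), // abort slow pings; they count as timeouts
      });
      return { model, ok: res.ok, ms: Date.now() - start };
    })
  );
  // A rejected promise means a timeout or network error: keep the row, mark it down.
  return settled.map((r, i) =>
    r.status === 'fulfilled' ? r.value : { model: models[i], ok: false, ms: null }
  );
}
```

Because `Promise.allSettled` never short-circuits, one slow or failing model cannot delay or hide results for the others.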
Setup wizard:

```
🔑 Setup your NVIDIA API key
📝 Get a free key at: https://build.nvidia.com
💾 Key will be saved to ~/.free-coding-models

Enter your API key: nvapi-xxxx-xxxx

✅ API key saved to ~/.free-coding-models
```

### Other ways to provide the key

```bash
# Pass directly
free-coding-models nvapi-xxxx-your-key-here

# Use environment variable
NVIDIA_API_KEY=nvapi-xxx free-coding-models

# Or add to your shell profile
export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
free-coding-models
```

### Get your free API key

1. **Create an NVIDIA account** — Sign up at [build.nvidia.com](https://build.nvidia.com) with your email
2. **Verify** — Confirm your email, set privacy options, create an NGC account, verify your phone
3. **Generate a key** — Go to Profile → API Keys → Generate API Key
4. **Name it** — e.g., "free-coding-models" or "OpenCode-NIM"
5. **Set expiration** — Choose "Never" for convenience
6. **Copy it securely** — The key is shown only once!

> 💡 **Free credits** — NVIDIA offers free credits for NIM models via their API Catalog for developers.

---

## 🤖 Coding Models

**44 coding models** across 8 tiers, ranked by the [Aider Polyglot benchmark](https://aider.chat/docs/leaderboards) (225 coding exercises across C++/Go/Java/JS/Python/Rust). Models without a confirmed Aider score are estimated from model family, size, and published release benchmarks.

| Tier | Score | Count | Models |
|------|-------|-------|--------|
| **S+** | 75%+ | 7 | DeepSeek V3.1/Terminus, DeepSeek V3.2, Kimi K2.5, Devstral 2, Nemotron Ultra 253B, Mistral Large 675B |
| **S** | 62–74% | 7 | Qwen2.5 Coder 32B, GLM 5, Qwen3.5 400B VLM, Qwen3 Coder 480B, Qwen3 80B Thinking, Llama 3.1 405B, MiniMax M2.1 |
| **A+** | 54–62% | 6 | Kimi K2 Thinking/Instruct, Qwen3 235B, Llama 3.3 70B, GLM 4.7, Qwen3 80B Instruct |
| **A** | 44–54% | 5 | MiniMax M2, Mistral Medium 3, Magistral Small, Nemotron Nano 30B, R1 Distill 32B |
| **A-** | 36–44% | 5 | GPT OSS 120B, Nemotron Super 49B, Llama 4 Scout, R1 Distill 14B, Colosseum 355B |
| **B+** | 25–36% | 5 | QwQ 32B, GPT OSS 20B, Stockmark 100B, Seed OSS 36B, Step 3.5 Flash |
| **B** | 14–25% | 5 | Llama 4 Maverick, Mixtral 8x22B, Ministral 14B, Granite 34B Code, R1 Distill 8B |
| **C** | <14% | 4 | R1 Distill 7B, Gemma 2 9B, Phi 3.5 Mini, Phi 4 Mini |

### Tier scale

- **S+/S** — Frontier coders, top Aider Polyglot scores, best for complex refactors
- **A+/A** — Excellent alternatives, strong at most coding tasks
- **A-/B+** — Solid performers, good for targeted programming tasks
- **B/C** — Lightweight or older models, good for code completion on constrained infra

---

## 🔌 Use with OpenCode

**The easiest way** — let `free-coding-models` do everything:

1. **Run**: `free-coding-models`
2. **Wait** for models to be pinged (green ✅ status)
3. **Navigate** with ↑↓ arrows to your preferred model
4. **Press Enter** — the tool automatically:
   - Detects whether NVIDIA NIM is configured in OpenCode
   - Sets your selected model as the default in `~/.config/opencode/opencode.json`
   - Launches OpenCode with the model ready to use

That's it! No manual config needed.

### Manual Setup (Optional)

If you prefer to configure OpenCode yourself:

#### Prerequisites

1. **OpenCode installed**: `npm install -g opencode` (or equivalent)
2. **NVIDIA NIM account**: Get a free account at [build.nvidia.com](https://build.nvidia.com)
3. **API key generated**: Go to Profile → API Keys → Generate API Key

#### 1. Find your model

Run `free-coding-models` to see which models are available and fast. The "Latest" column shows real-time latency, "Avg" shows the rolling average, and "Up%" shows the uptime percentage (reliability over time).

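The three columns are simple to derive from a model's ping history. A sketch of the arithmetic, assuming a hypothetical history shape of one latency in ms per successful ping and `null` per failure:

```javascript
// Derive the table's Latest/Avg/Up% columns from a model's full ping history.
function columns(history) {
  const ok = history.filter((ms) => ms !== null); // successful pings only
  return {
    latest: history.length ? history[history.length - 1] : null,                   // most recent ping
    avg: ok.length ? Math.round(ok.reduce((a, b) => a + b, 0) / ok.length) : null, // mean of successes
    upPct: history.length ? Math.round((ok.length / history.length) * 100) : 0,    // reliability
  };
}
```

For example, `columns([120, null, 80])` yields a Latest of 80 ms, an Avg of 100 ms, and 67% uptime: the average ignores the failed ping, while the uptime counts it against the model.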
#### 2. Configure OpenCode

Create or edit `~/.config/opencode/opencode.json`:

```json
{
  "provider": {
    "nvidia": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "NVIDIA NIM",
      "options": {
        "baseURL": "https://integrate.api.nvidia.com/v1",
        "apiKey": "{env:NVIDIA_API_KEY}"
      }
    }
  },
  "model": "nvidia/deepseek-ai/deepseek-v3.2"
}
```

#### 3. Set environment variable

```bash
export NVIDIA_API_KEY=nvapi-xxxx-your-key-here
# Add to ~/.bashrc or ~/.zshrc for persistence
```

#### 4. Use it

Run `/models` in OpenCode and select the **NVIDIA NIM** provider and your chosen model.

> ⚠️ **Note:** Free models have usage limits based on NVIDIA's tier — check [build.nvidia.com](https://build.nvidia.com) for quotas.

### Automatic Installation

The tool includes a **smart fallback mechanism**:

1. **Primary**: Try to launch OpenCode with the selected model
2. **Fallback**: If NVIDIA NIM is not detected in `~/.config/opencode/opencode.json`, the tool:
   - Shows installation instructions in your terminal
   - Creates a `prompt` file at `$HOME/prompt` with the exact configuration to add
   - Launches OpenCode, which will detect and display the prompt automatically

This **"prompt" fallback** ensures that even if NVIDIA NIM isn't pre-configured, OpenCode will guide you through installation with a ready-to-use configuration already prepared.

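The detection step can be as small as a lookup in the parsed config file. A hypothetical sketch (the package's actual check may inspect more fields):

```javascript
// Given the parsed contents of ~/.config/opencode/opencode.json, decide
// whether an NVIDIA NIM provider entry is already present.
function hasNvidiaProvider(config) {
  const nvidia = config?.provider?.nvidia;
  return Boolean(nvidia?.options?.baseURL);
}
```

Checking for `options.baseURL` rather than the bare `nvidia` key guards against a half-written provider entry being mistaken for a working one.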
#### Example prompt file created at `$HOME/prompt`:

```text
Please install NVIDIA NIM provider in OpenCode by adding this to ~/.config/opencode/opencode.json:

{
  "provider": {
    "nvidia": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "NVIDIA NIM",
      "options": {
        "baseURL": "https://integrate.api.nvidia.com/v1",
        "apiKey": "{env:NVIDIA_API_KEY}"
      }
    }
  }
}

Then set env var: export NVIDIA_API_KEY=your_key_here
```

OpenCode will automatically detect this file when launched and guide you through the installation.

---

## ⚙️ How it works

```
┌──────────────────────────────────────────────────────────┐
│ 1. Enter alternate screen buffer (like vim/htop/less)    │
│ 2. Ping ALL models in parallel                           │
│ 3. Display real-time table with Latest/Avg/Up% columns   │
│ 4. Re-ping ALL models every 2 seconds (forever)          │
│ 5. Update rolling averages from ALL successful pings     │
│ 6. User can navigate with ↑↓ and select with Enter       │
│ 7. On Enter: stop monitoring, exit alt screen            │
│ 8. Detect NVIDIA NIM config in OpenCode                  │
│ 9. If configured: update default model, launch OpenCode  │
│ 10. If missing: show install prompt, launch OpenCode     │
└──────────────────────────────────────────────────────────┘
```

**Result:** Continuous monitoring interface that stays open until you select a model or press Ctrl+C. Rolling averages give you accurate long-term latency data, uptime percentage tracks reliability, and you can launch OpenCode with your chosen model in one keystroke.

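Step 1 of the diagram comes down to two standard xterm escape sequences. A sketch of the enter/draw/restore cycle (hypothetical helper, not the package's actual code):

```javascript
// The monitor draws inside the terminal's alternate screen buffer, the same
// mechanism vim/htop/less use, so quitting leaves your scrollback untouched.
const ALT_SCREEN_ON = '\x1b[?1049h';  // switch to the alternate buffer
const ALT_SCREEN_OFF = '\x1b[?1049l'; // return to the normal buffer

function withAltScreen(draw, out = process.stdout) {
  out.write(ALT_SCREEN_ON);
  try {
    draw(out);
  } finally {
    out.write(ALT_SCREEN_OFF); // always restore, even if drawing throws
  }
}
```

The `finally` block is the important part: if the restore sequence is ever skipped, the user's terminal is left stuck on the alternate screen.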
---

## 📋 API Reference

| Parameter | Description |
|-----------|-------------|
| `NVIDIA_API_KEY` | Environment variable for the API key |
| `<api-key>` | First positional argument |

**Configuration:**
- **Ping timeout**: 15 seconds per attempt (slow models get more time)
- **Ping interval**: 2 seconds between complete re-pings of all models (adjustable with the W/X keys)
- **Monitor mode**: The interface stays open indefinitely; press Ctrl+C to exit

**Keyboard shortcuts:**
- **↑↓** — Navigate models
- **Enter** — Select a model and launch OpenCode
- **R/T/O/M/P/A/S/V/U** — Sort by Rank/Tier/Origin/Model/Ping/Avg/Status/Verdict/Uptime
- **W** — Decrease the ping interval (faster pings)
- **X** — Increase the ping interval (slower pings)
- **Ctrl+C** — Exit

---

## 🔧 Development

```bash
git clone https://github.com/vava-nessa/free-coding-models
cd free-coding-models
npm install
npm start -- YOUR_API_KEY
```

---

## 📄 License

MIT © [vava](https://github.com/vava-nessa)

---

<p align="center">
  <sub>Built with ☕ and 🌹 by <a href="https://github.com/vava-nessa">vava</a></sub>
</p>

## 📬 Contribute

We welcome contributions! Feel free to open issues, submit pull requests, or get involved in the project.

## ❓ FAQ

**Q:** Can I use this with other providers?
**A:** Yes, the tool is designed to be extensible; see the source for examples of customizing endpoints.

**Q:** How accurate are the latency numbers?
**A:** They are averages of round-trip times measured from your machine; actual performance varies with network conditions.

## 📧 Support

For questions or issues, open a GitHub issue or join our community Discord: https://discord.gg/QnR8xq9p