ai-speedometer 2.1.1 → 2.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
# Models.dev Integration Guide

## Overview

This document explains how the AI benchmark CLI integrates with models.dev to provide a rich, up-to-date provider and model ecosystem. The integration lets users access verified AI providers and their models without manual configuration, while also supporting custom providers for complete flexibility.

## What is Models.dev?

Models.dev is a centralized registry of AI model providers and their models. It serves as a comprehensive source of truth for:

- **Provider Information:** Names, base URLs, API endpoints, and provider types
- **Model Catalogs:** Available models for each provider with their names and IDs
- **Provider Classification:** Categorization by provider type (OpenAI-compatible, Anthropic, Google, etc.)

**Note:** For information about custom verified providers (pre-configured providers not in models.dev), see [Custom Verified Providers](custom-verified-providers.md).

## Integration Architecture

### Core Components

#### 1. `models-dev.js` - Models.dev API Client
This module handles all communication with the models.dev API and provides caching functionality.

**Key Functions:**
- `getAllProviders()` - Fetch all providers from models.dev
- `searchProviders(query)` - Search providers by name or ID
- `getModelsForProvider(providerId)` - Get models for a specific provider
- `refreshData()` - Force refresh from API (bypassing cache)

#### 2. `opencode-integration.js` - Verified Provider Management
This module handles **verified providers only** from models.dev and manages their authentication.

**Key Functions:**
- `getAuthenticatedProviders()` - Get verified providers with valid API keys from auth.json
- `addApiKey(providerId, apiKey)` - Store API keys securely in auth.json
- `getAllAvailableProviders()` - Combine verified providers with custom providers
- `migrateFromOldConfig()` - Migrate data from old config format

#### 3. `ai-config.js` - Custom Provider Management
This module handles **user-defined custom providers** and their models.

**Key Functions:**
- `addCustomProvider()` - Add new custom providers
- `addModelToCustomProvider()` - Add models to existing custom providers
- `getCustomProvidersFromConfig()` - Retrieve custom providers from config
- `readAIConfig()` / `writeAIConfig()` - Manage ai-benchmark-config.json

#### 4. `cli.js` - User Interface
The main CLI interface uses both provider systems for model selection and benchmarking.

## Data Flow

### 1. Provider Discovery and Loading

```
User Request → CLI → getAllProviders() → Models.dev API → Provider Data
```

**Step-by-step process:**

1. **Cache Check:** The system first checks for cached provider data
2. **API Fallback:** If cache is missing or expired (>1 hour), fetch from models.dev API
3. **Fallback Data:** If API fails, use built-in fallback provider data
4. **Data Transformation:** Convert models.dev format to internal format
5. **Caching:** Store successful responses in local cache

**Cache Implementation:**
- **Location:** `~/.cache/ai-speedometer/models.json`
- **Expiration:** 1 hour (3600 seconds)
- **Format:** JSON with timestamp
- **Fallback:** Built-in provider list for offline operation

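The cache policy above can be sketched as follows (function names and the exact JSON layout are assumptions; the actual implementation in `models-dev.js` may differ):

```javascript
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

const CACHE_FILE = path.join(os.homedir(), '.cache', 'ai-speedometer', 'models.json');
const CACHE_TTL_MS = 3600 * 1000; // 1 hour

// Returns cached provider data, or null on a miss (absent, unparsable, or expired).
function readCache(file = CACHE_FILE, now = Date.now()) {
  try {
    const { timestamp, data } = JSON.parse(fs.readFileSync(file, 'utf8'));
    return now - timestamp < CACHE_TTL_MS ? data : null;
  } catch {
    return null;
  }
}

// Stores a successful API response alongside a timestamp for expiry checks.
function writeCache(data, file = CACHE_FILE) {
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, JSON.stringify({ timestamp: Date.now(), data }));
}
```

Treating an unparsable cache file as a miss (rather than an error) is what lets step 2 fall through to the API cleanly.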
### 2. Dual Provider System Flow

The system now manages three distinct provider types:

#### Verified Providers (models.dev integration)
```
Provider Selection → API Key Input → auth.json Storage → Provider Activation
```

**Authentication Process:**

1. **Provider Selection:** User selects a provider from models.dev list
2. **API Key Input:** User enters their API key for the provider
3. **Secure Storage:** Key is stored in `~/.local/share/opencode/auth.json`
4. **Provider Activation:** Provider becomes available in model selection

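A minimal sketch of steps 2-3 (the stored shape matches the auth.json example later in this document; the real `addApiKey()` in `opencode-integration.js` may differ):

```javascript
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

const AUTH_FILE = path.join(os.homedir(), '.local', 'share', 'opencode', 'auth.json');

// Pure helper: returns the auth map with one provider's key added.
function withApiKey(auth, providerId, apiKey) {
  return { ...auth, [providerId]: { type: 'api', key: apiKey } };
}

// Reads the existing auth.json (if any), adds the key, and writes it back.
function addApiKey(providerId, apiKey, file = AUTH_FILE) {
  let auth = {};
  try {
    auth = JSON.parse(fs.readFileSync(file, 'utf8'));
  } catch {
    // first run: no auth.json yet
  }
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, JSON.stringify(withApiKey(auth, providerId, apiKey), null, 2));
}
```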
#### Custom Verified Providers (pre-configured)
```
Pre-configured Definition → custom-verified-providers.json → API Key Input → Integration
```

**Custom Verified Provider Process:**

1. **Pre-configured Definition:** Providers defined in `custom-verified-providers.json`
2. **API Key Input:** User enters API key for the pre-configured provider
3. **Authentication Storage:** Key stored in auth.json alongside verified providers
4. **Integration:** Available immediately in model selection interface

#### Custom Providers (user-defined)
```
Custom Provider Setup → ai-benchmark-config.json Storage → Direct Integration
```

**Custom Provider Process:**

1. **Provider Definition:** User defines custom provider with base URL and models
2. **Config Storage:** Provider stored in `~/.config/ai-speedometer/ai-benchmark-config.json`
3. **Model Management:** Users can add/remove models from custom providers
4. **Direct Integration:** Available immediately in model selection

### 3. Model Loading and Filtering

```
Verified Providers → Custom Providers → Combine → Present to User
```

**Model Assembly Process:**

1. **Verified Models:** Load models from authenticated providers in auth.json
2. **Custom Verified Models:** Load models from custom-verified-providers.json with API keys
3. **Custom Models:** Load user-defined models from ai-benchmark-config.json
4. **Deduplication:** Ensure no duplicate models in final list
5. **Presentation:** Display combined list in model selection interface

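The combine-and-deduplicate steps can be sketched as follows (the model shape and function name are illustrative, not the CLI's actual internals):

```javascript
// Combines models from all three sources, keeping the first occurrence of
// each provider/model pair so earlier (verified) entries win over duplicates.
function assembleModels(verifiedModels, customVerifiedModels, customModels) {
  const seen = new Set();
  const combined = [];
  for (const model of [...verifiedModels, ...customVerifiedModels, ...customModels]) {
    const key = `${model.providerId}/${model.id}`;
    if (!seen.has(key)) {
      seen.add(key);
      combined.push(model);
    }
  }
  return combined;
}
```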
## Configuration Files and Locations

### File Locations

#### OpenCode Integration (Verified Providers)
- **auth.json:** `~/.local/share/opencode/auth.json` (API keys for verified providers)
- **opencode.json:** `~/.config/opencode/opencode.json` (deprecated, no longer used)

#### AI Speedometer Config (Custom Providers)
- **ai-benchmark-config.json:** `~/.config/ai-speedometer/ai-benchmark-config.json` (custom providers)
- **Cache:** `~/.cache/ai-speedometer/models.json` (models.dev API cache)

#### Custom Verified Providers
- **custom-verified-providers.json:** `./custom-verified-providers.json` (pre-configured provider definitions)
- **Authentication:** Same as verified providers (stored in auth.json)

### Configuration Structures

#### Verified Providers (auth.json)
```json
{
  "openai": {
    "type": "api",
    "key": "sk-..."
  },
  "anthropic": {
    "type": "api",
    "key": "sk-ant-..."
  }
}
```

#### Custom Providers (ai-benchmark-config.json)
```json
{
  "verifiedProviders": {},
  "customProviders": [
    {
      "id": "my-custom-provider",
      "name": "My Custom Provider",
      "type": "openai-compatible",
      "baseUrl": "https://api.custom.com/v1",
      "apiKey": "custom-api-key",
      "models": [
        {
          "name": "Custom Model 1",
          "id": "custom-model-1"
        }
      ]
    }
  ]
}
```

## Provider Types and SDK Integration

### Supported Provider Types

#### 1. OpenAI-Compatible
- **SDK:** `@ai-sdk/openai-compatible`
- **Examples:** OpenAI, Groq, Together AI, Anyscale, Fireworks AI
- **Base URL Pattern:** `https://api.example.com/v1`
- **Authentication:** Bearer token via Authorization header

#### 2. Anthropic
- **SDK:** `@ai-sdk/anthropic`
- **Examples:** Anthropic Claude models
- **Base URL Pattern:** `https://api.anthropic.com`
- **Authentication:** Custom headers with x-api-key and anthropic-version

#### 3. Google
- **SDK:** `@ai-sdk/google`
- **Examples:** Google Gemini models
- **Base URL Pattern:** `https://generativelanguage.googleapis.com`
- **Authentication:** x-goog-api-key header

### Provider Type Detection

The system automatically detects provider types based on:

1. **NPM Package:** The `npm` field in models.dev data
2. **Fallback Logic:** Package name patterns for type detection

```javascript
// Automatic type detection logic
let providerType = 'openai-compatible'; // default
if (providerConfig.npm === '@ai-sdk/anthropic') {
  providerType = 'anthropic';
} else if (providerConfig.npm === '@ai-sdk/google') {
  providerType = 'google';
} else if (providerConfig.npm === '@ai-sdk/openai') {
  providerType = 'openai';
}
```

## User Interface Improvements

### Search Bar Visibility
- **Enhanced Visibility:** Search bar now includes a 🔍 emoji indicator for clear visual identification
- **Proper Sizing:** Header height calculations correctly account for the search interface
- **Consistent Implementation:** Search functionality works across all provider selection interfaces

### Screen Rendering Optimization
- **Double Buffering:** Screen content is built in memory before display to eliminate flickering
- **Optimized Clearing:** Single screen clear operation per render cycle
- **Smooth Navigation:** Pagination and scrolling work without visual artifacts

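The double-buffering idea reduces to building the whole frame as one string and emitting it in a single write; a sketch (the CLI's actual renderer is more involved):

```javascript
// Build the entire frame in memory, then clear and repaint in one write so
// the terminal never displays a partially drawn screen.
function renderFrame(lines, out = process.stdout) {
  const frame = lines.join('\n');
  // '\x1b[2J' clears the screen, '\x1b[H' homes the cursor.
  out.write('\x1b[2J\x1b[H' + frame);
  return frame;
}
```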
### Menu Structure
The CLI now follows the structure defined in `plan/models.md`:

```
Main Menu
├── Set Model
│   ├── Add Verified Provider (models.dev)
│   ├── Add Custom Models
│   │   ├── Add Models to Existing Provider
│   │   ├── Add Custom Provider
│   │   └── Back to Model Management
│   ├── List Existing Providers
│   ├── Debug Info
│   └── Back to Main Menu
├── Run Benchmark (AI SDK)
├── Run Benchmark (REST API)
└── Exit
```

## Adding Custom Providers

### Custom Provider Setup Process

1. **Navigate to Add Custom Models** from the Model Management menu
2. **Choose "Add Custom Provider"** option
3. **Select Provider Type:** OpenAI Compatible or Anthropic
4. **Enter Provider Details:**
   - Provider ID (e.g., my-openai)
   - Provider Name (e.g., MyOpenAI)
   - Base URL (e.g., https://api.openai.com/v1)
   - API Key
5. **Add Models:** Choose single or multiple model mode
6. **Automatic Save:** Provider saved to ai-benchmark-config.json

268
- ### Example Custom Provider Configuration
269
-
270
- ```json
271
- {
272
- "id": "my-custom-openai",
273
- "name": "My Custom OpenAI",
274
- "type": "openai-compatible",
275
- "baseUrl": "https://api.custom.com/v1",
276
- "apiKey": "your-api-key",
277
- "models": [
278
- {
279
- "name": "gpt-4",
280
- "id": "gpt-4_1234567890"
281
- },
282
- {
283
- "name": "gpt-3.5-turbo",
284
- "id": "gpt-3-5-turbo_1234567891"
285
- }
286
- ]
287
- }
288
- ```
289
-
## Migration System

### Automatic Migration
The system includes migration functionality for users transitioning from the old config format:

- **Detection:** Automatically detects old `ai-benchmark-config.json` in current directory
- **Migration:** Splits verified providers to auth.json and custom providers to new config location
- **Backup:** Creates backup of old config file
- **Reporting:** Shows migration results and any errors encountered

### Migration Process
1. **Old Config Detection:** Checks for `./ai-benchmark-config.json`
2. **Data Splitting:**
   - Verified providers → `~/.local/share/opencode/auth.json`
   - Custom providers → `~/.config/ai-speedometer/ai-benchmark-config.json`
3. **Backup:** Renames old file to `ai-benchmark-config.json.backup`
4. **Confirmation:** Shows migration summary to user

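The data-splitting step can be sketched as a pure function; the old config's field names here are assumptions based on the structures shown earlier, not a confirmed schema:

```javascript
// Splits an old-format config into the two new stores: API keys destined for
// auth.json, and user-defined providers for the new config file.
function splitOldConfig(oldConfig) {
  const auth = {};
  for (const [id, provider] of Object.entries(oldConfig.verifiedProviders ?? {})) {
    auth[id] = { type: 'api', key: provider.apiKey };
  }
  return {
    auth, // → ~/.local/share/opencode/auth.json
    config: {
      verifiedProviders: {},
      customProviders: oldConfig.customProviders ?? [],
    }, // → ~/.config/ai-speedometer/ai-benchmark-config.json
  };
}
```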
## Error Handling and Resilience

### Cache Management
- **Automatic Cache Refresh:** Cache expires after 1 hour
- **Manual Refresh:** `refreshData()` function for forced updates
- **Cache Clearing:** `clearCache()` function for troubleshooting
- **Graceful Degradation:** Falls back to built-in data on API failure

### API Failure Handling
- **Network Errors:** Automatic retry with exponential backoff
- **Invalid Responses:** Fallback to cached data when available
- **Rate Limiting:** Respect API rate limits with proper delay handling
- **Offline Mode:** Built-in provider list for offline operation

### Data Validation
- **Schema Validation:** Validate incoming API responses
- **Type Checking:** Ensure provider types are recognized
- **Model Validation:** Verify model data integrity
- **URL Validation:** Confirm base URLs are properly formatted

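The retry-with-exponential-backoff behavior can be sketched as follows (the attempt count and base delay are illustrative, not the CLI's actual values):

```javascript
// Retries an async operation, doubling the delay after each failure
// (base, 2*base, 4*base, ...). The last error is rethrown so the caller
// can fall back to cached or built-in data.
async function withRetry(fn, attempts = 3, baseDelayMs = 1000,
                         sleep = ms => new Promise(r => setTimeout(r, ms))) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < attempts - 1) await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  throw lastError;
}
```

Injecting `sleep` keeps the backoff schedule testable without real waits.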
## Security Considerations

### API Key Storage
- **Secure File Permissions:** Auth files restricted to owner (0o600)
- **XDG Compliance:** Follows system standards for config locations
- **No Key Logging:** API keys never appear in logs or output
- **Separation of Concerns:** Verified and custom provider keys stored separately

### Network Security
- **HTTPS Only:** All API communications use HTTPS
- **Certificate Validation:** Proper SSL certificate verification
- **No Credential Exposure:** Keys sent only in headers, never URLs
- **Request Signing:** Proper authentication headers for all requests

### Data Privacy
- **Local Storage Only:** No data sent to external services except the models.dev API
- **Minimal Data Collection:** Only necessary provider and model information
- **User Control:** Users can clear cache and data at any time
- **Transparent Operation:** All operations are visible to the user

## Troubleshooting

### Common Issues

#### 1. Search Bar Not Visible
**Symptoms:** Search interface missing from provider selection
**Solution:**
- This is now fixed with the 🔍 emoji indicator and proper header calculations
- Ensure you're using the updated CLI version

#### 2. Screen Flickering During Navigation
**Symptoms:** Visual artifacts during scrolling or pagination
**Solution:**
- This has been resolved with the double buffering implementation
- Update to the latest CLI version if experiencing issues

#### 3. Models Not Loading
**Symptoms:** Model selection shows an empty or outdated list
**Solution:**
```bash
# Clear cache and force refresh
node -e "import('./models-dev.js').then(m => m.clearCache())"
```

#### 4. Authentication Issues
**Symptoms:** Provider appears but models fail during benchmark
**Solution:**
```bash
# Check auth.json contents
cat ~/.local/share/opencode/auth.json

# Verify API key format and permissions
```

#### 5. Custom Provider Issues
**Symptoms:** Custom providers not appearing or models not working
**Solution:**
```bash
# Check custom provider config
cat ~/.config/ai-speedometer/ai-benchmark-config.json

# Verify config directory exists
ls -la ~/.config/ai-speedometer/
```

### Debugging Commands

```bash
# View cache status
ls -la ~/.cache/ai-speedometer/

# View verified provider configuration
cat ~/.local/share/opencode/auth.json

# View custom providers
cat ~/.config/ai-speedometer/ai-benchmark-config.json

# View all config locations
node cli.js
# Then select "Debug Info" from the menu

# Clear all caches
rm -rf ~/.cache/ai-speedometer/
```

### Logging and Debugging

Enable debug logging for troubleshooting:
```bash
npm run cli:debug
```

This creates detailed logs in `debug.log` including:
- API request/response details
- Cache operations
- Provider loading process
- Authentication flow
- Config system operations

## Performance Optimization

### Caching Strategy
- **Local Cache:** Reduces API calls and improves load times
- **Conditional Updates:** Only fetch when data is stale
- **Memory Efficiency:** Cache data is compactly stored
- **Fast Access:** In-memory filtering and searching

### Search Optimization
- **Debounced Input:** 50ms delay reduces unnecessary filtering
- **Efficient Algorithms:** Optimized search across multiple fields
- **Partial Results:** Show results immediately during typing
- **Memory Management:** Clean up unused search timeouts

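The debounce described above is only a few lines; a sketch using the 50 ms figure given here (the CLI's own helper may be shaped differently):

```javascript
// Returns a wrapper that postpones `fn` until input has been quiet for
// `delayMs`, so filtering runs once per pause instead of once per keystroke.
function debounce(fn, delayMs = 50) {
  let timer = null;
  return (...args) => {
    if (timer !== null) clearTimeout(timer); // clean up the superseded timeout
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

The `clearTimeout` call is what the "clean up unused search timeouts" point refers to: each keystroke cancels the previous pending filter run.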
### Screen Rendering
- **Double Buffering:** Eliminates screen flickering
- **Optimized Clearing:** Single screen clear per render cycle
- **Memory Efficient:** Screen content built in memory
- **Responsive:** Immediate feedback for user interactions

### Concurrent Loading
- **Parallel Requests:** Fetch verified and custom providers concurrently
- **Non-blocking UI:** Interface remains responsive during data loading
- **Progressive Enhancement:** Show available data while loading more

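The parallel fetch can be sketched with `Promise.all` (the loader parameters are hypothetical stand-ins for the verified and custom provider sources):

```javascript
// Kicks off both loads at once; the total wait is the slower of the two,
// not the sum, and neither source blocks the other.
async function loadAllProviders(loadVerified, loadCustom) {
  const [verified, custom] = await Promise.all([loadVerified(), loadCustom()]);
  return [...verified, ...custom];
}
```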
## Future Enhancements

### Planned Features

1. **Enhanced Caching:**
   - Configurable cache expiration times
   - Cache size limits and management
   - Offline mode with extended cache

2. **Provider Management:**
   - Provider health monitoring
   - Automatic provider failover
   - Provider performance metrics

3. **User Experience:**
   - Provider favorites/bookmarking
   - Model capability filtering
   - Advanced search options
   - Visual provider status indicators

4. **Integration Enhancements:**
   - Real-time provider updates
   - Provider webhooks for changes
   - Community provider submissions

### API Considerations

- **Rate Limiting:** Implement proper rate limit handling
- **Pagination:** Support for paginated model lists
- **Versioning:** API version compatibility
- **Deprecation:** Graceful handling of deprecated providers/models

## Conclusion

The models.dev integration provides a powerful, extensible foundation for AI provider and model management. The new dual-system architecture separates verified providers (from models.dev) from custom providers (user-defined), providing both convenience and flexibility. The recent improvements in search visibility, screen rendering, and configuration management ensure a reliable and user-friendly experience for benchmarking AI models across multiple providers.

The separation of concerns between verified and custom providers, combined with the robust migration system and optimized user interface, makes this integration both powerful and approachable for users at all levels of expertise.

# AI Speedometer - Publishing Guide

## Table of Contents
1. [Publishing to npm](#publishing-to-npm)
2. [Building the Binary](#building-the-binary)

## Publishing to npm

### Current Setup
The AI Speedometer CLI is now published as a global npm package with the following configuration:

### Package.json Configuration
```json
{
  "name": "ai-speedometer",
  "version": "1.0.0",
  "bin": {
    "ai-speedometer": "./dist/cli.js",
    "aispeed": "./dist/cli.js"
  },
  "files": ["dist/"],
  "scripts": {
    "build": "esbuild cli.js --bundle --platform=node --outfile=dist/cli.js",
    "prepublishOnly": "npm run build"
  }
}
```

### Publishing Process
1. **Build Step:** Run `npm run build` to bundle the CLI using esbuild
2. **Publish:** Run `npm publish` to publish to the npm registry
3. **Global Installation:** Users install with `npm install -g ai-speedometer`
4. **Usage:** Users can run `ai-speedometer` or `aispeed` from anywhere

### Key Files Modified
- `package.json` - Added esbuild config, binary commands, and npm metadata
- `cli.js` - Fixed entry point detection and interactive mode
- `benchmark-rest.js` - Fixed main function execution conflicts
- `.gitignore` - Added `dist/` to exclude bundled binaries

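Entry point detection in an ESM CLI usually hinges on comparing the module URL with `process.argv[1]`; a sketch of that pattern (the actual check in `cli.js` may differ):

```javascript
import { pathToFileURL } from 'node:url';

// True only when this module is what `node` was asked to run directly,
// not when it is merely imported (or pulled into a bundle).
function isEntryPoint(moduleUrl, argv1 = process.argv[1]) {
  return argv1 !== undefined && moduleUrl === pathToFileURL(argv1).href;
}

// Typical usage at the bottom of a CLI module:
// if (isEntryPoint(import.meta.url)) main();
```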
## Building the Binary

### Why Use esbuild?
- **Single File:** Bundles all dependencies into one executable file
- **Performance:** Faster startup time compared to Node.js module resolution
- **Distribution:** Easy to distribute as a global npm binary
- **Compatibility:** Works across different Node.js versions

### Build Configuration
```bash
esbuild cli.js --bundle --platform=node --outfile=dist/cli.js
```

### Build Process
1. **Entry Point:** `cli.js` is the main entry point
2. **Bundling:** All required modules are bundled into `dist/cli.js`
3. **Platform:** Targeted for the Node.js platform
4. **Output:** Single executable file in the `dist/` directory

### Git Strategy
- `dist/` directory is in `.gitignore`
- Binary is generated during the npm build process
- Only source code is tracked in git
- Built binary is distributed via npm

This documentation covers the complete publishing process for the AI Speedometer CLI, including npm configuration, esbuild bundling, and distribution setup.