ai-speedometer 1.0.0

package/docs/README.md ADDED
# AI Speedometer Documentation

Welcome to the documentation for the AI Speedometer benchmark CLI. This documentation covers everything you need to know about using, configuring, and understanding the system.

## Quick Start

- [User Guide](../README.md) - Main README with installation and usage
- [Bug Fixes v1.0](bug-fixes-v1.md) - Critical issues resolved in the latest version
- [Models.dev Integration](models-dev-integration.md) - How provider and model loading works

## Documentation Structure

### 📚 User Documentation

#### Getting Started
- [Main README](../README.md) - Installation, setup, and basic usage
- [Configuration Guide](../README.md#setup-guide) - Setting up providers and API keys

#### Features
- **Parallel Benchmarking** - Run multiple models simultaneously
- **Provider Management** - Add verified and custom providers
- **Model Selection** - Interactive search and selection interface
- **Performance Metrics** - Comprehensive benchmark results and charts

### 🔧 Technical Documentation

#### Architecture
- [Models.dev Integration](models-dev-integration.md) - Provider ecosystem and API integration
- [Bug Fixes](bug-fixes-v1.md) - Critical issues and their solutions

#### Configuration
- **Provider Configuration** - Setting up different AI providers
- **Authentication** - API key management and security
- **Custom Providers** - Adding your own AI providers

#### Development
- **Code Structure** - Organization of modules and components
- **Testing Guide** - Running and writing tests
- **Contributing** - How to contribute to the project

## Key Concepts

### Providers and Models

The system supports two types of providers:

1. **Verified Providers** - From the models.dev ecosystem
   - OpenAI, Anthropic, Google, and other major providers
   - Automatically updated with the latest models
   - Pre-configured endpoints and authentication

2. **Custom Providers** - User-defined providers
   - Your own AI endpoints or local models
   - Full configuration flexibility
   - Support for OpenAI-compatible APIs

### Benchmarking Methods

- **AI SDK Method** - Uses the Vercel AI SDK with streaming
  - Real-time token counting
  - Time to First Token (TTFT) metrics
  - Streaming response analysis

- **REST API Method** - Direct HTTP API calls
  - No streaming; complete response timing
  - Consistent across all providers
  - Fallback for compatibility

### Performance Metrics

- **Total Time** - Complete request duration
- **TTFT** - Time to First Token (streaming only)
- **Tokens/Second** - Real-time throughput calculation
- **Token Counts** - Input, output, and total tokens
- **Provider Rankings** - Performance comparison across providers

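The relationships between these metrics are simple arithmetic over the recorded timestamps. A minimal sketch (a hypothetical helper, not the CLI's actual code):

```javascript
// Derive benchmark metrics from raw timing data (timestamps in ms).
// Hypothetical helper for illustration; the CLI's internals may differ.
function computeMetrics({ startTime, firstTokenTime, endTime, outputTokens }) {
  const totalMs = endTime - startTime;
  // TTFT is only meaningful for streaming runs; null otherwise.
  const ttftMs = firstTokenTime != null ? firstTokenTime - startTime : null;
  // Throughput is measured over the generation window, not the full request.
  const genMs = firstTokenTime != null ? endTime - firstTokenTime : totalMs;
  const tokensPerSecond = genMs > 0 ? (outputTokens * 1000) / genMs : 0;
  return { totalMs, ttftMs, tokensPerSecond };
}

const m = computeMetrics({ startTime: 0, firstTokenTime: 200, endTime: 1200, outputTokens: 50 });
// m.totalMs === 1200, m.ttftMs === 200, m.tokensPerSecond === 50
```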
## Recent Updates

### Version 1.0 Bug Fixes

The latest release includes critical fixes for:

- ✅ **Parallel Model Execution** - Multi-model selection now works correctly
- ✅ **AI SDK Model Support** - Verified providers work in benchmarks
- ✅ **Search Performance** - Reduced lag and flickering in search
- ✅ **Screen Rendering** - Fixed text overlapping issues

See [Bug Fixes v1.0](bug-fixes-v1.md) for detailed technical information.

### Models.dev Integration Enhancement

- **Improved Caching** - Better performance with 1-hour cache expiration
- **Enhanced Search** - Debounced filtering with 50ms delay
- **Provider Type Detection** - Automatic SDK selection based on provider
- **Error Handling** - Graceful degradation and fallback mechanisms

See [Models.dev Integration](models-dev-integration.md) for complete architectural details.

## Getting Help

### Troubleshooting

#### Common Issues
1. **Models Not Showing** - Check API key configuration and provider authentication
2. **Search Lag** - Clear cache and ensure proper file permissions
3. **Benchmark Failures** - Verify API keys and network connectivity

#### Debug Mode
Enable detailed logging for troubleshooting:
```bash
npm run cli:debug
```

This creates `debug.log` with API request/response details.

### Community Support

- **GitHub Issues** - Report bugs and request features
- **Documentation** - Check these docs for common questions
- **Debug Logs** - Include debug logs when reporting issues

### Contributing

We welcome contributions! See the main project README for guidelines on:
- Reporting bugs
- Requesting features
- Submitting pull requests
- Improving documentation

## Documentation Index

| Document | Description | Audience |
|----------|-------------|----------|
| [Main README](../README.md) | Installation, setup, and basic usage | Users |
| [Bug Fixes v1.0](bug-fixes-v1.md) | Critical issues and solutions | Developers/Users |
| [Models.dev Integration](models-dev-integration.md) | Provider ecosystem architecture | Developers |

## Quick Links

- **Project Home:** [GitHub Repository](https://github.com/aptdnfapt/Ai-speedometer)
- **Issues:** [GitHub Issues](https://github.com/aptdnfapt/Ai-speedometer/issues)
- **Models.dev:** [Provider Registry](https://models.dev)

---

*Documentation last updated: September 2025*
*Version: 1.0.0*
package/docs/models-dev-integration.md ADDED
# Models.dev Integration Guide

## Overview

This document provides a comprehensive explanation of how the AI benchmark CLI integrates with models.dev to provide a rich, up-to-date provider and model ecosystem. The integration enables users to easily access verified AI providers and their models without manual configuration.

## What is Models.dev?

Models.dev is a centralized registry of AI model providers and their models. It serves as a comprehensive source of truth for:

- **Provider Information:** Names, base URLs, API endpoints, and provider types
- **Model Catalogs:** Available models for each provider with their names and IDs
- **Provider Classification:** Categorization by provider type (OpenAI-compatible, Anthropic, Google, etc.)

## Integration Architecture

### Core Components

#### 1. `models-dev.js` - Models.dev API Client
This module handles all communication with the models.dev API and provides caching functionality.

**Key Functions:**
- `getAllProviders()` - Fetch all providers from models.dev
- `searchProviders(query)` - Search providers by name or ID
- `getModelsForProvider(providerId)` - Get models for a specific provider
- `refreshData()` - Force refresh from API (bypassing cache)

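As an illustration, the search behavior of `searchProviders(query)` amounts to a case-insensitive match against provider IDs and names. A standalone sketch (not the module's actual implementation, which also consults the cache):

```javascript
// Case-insensitive provider search across id and name fields.
// Sketch only; assumes providers shaped like { id, name }.
function filterProviders(providers, query) {
  const q = query.trim().toLowerCase();
  if (!q) return providers; // empty query returns everything
  return providers.filter(
    (p) => p.id.toLowerCase().includes(q) || p.name.toLowerCase().includes(q)
  );
}

const providers = [
  { id: 'openai', name: 'OpenAI' },
  { id: 'anthropic', name: 'Anthropic' },
  { id: 'groq', name: 'Groq' },
];
// filterProviders(providers, 'open') matches only the OpenAI entry
```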
#### 2. `opencode-integration.js` - Provider Management
This module bridges the models.dev data with the opencode authentication system.

**Key Functions:**
- `getAuthenticatedProviders()` - Get providers with valid API keys
- `getCustomProviders()` - Get user-defined custom providers
- `getAllAvailableProviders()` - Combine both authenticated and custom providers
- `addApiKey(providerId, apiKey)` - Store API keys securely

#### 3. `cli.js` - User Interface
The main CLI interface uses the integrated provider data for model selection and benchmarking.

## Data Flow

### 1. Provider Discovery and Loading

```
User Request → CLI → getAllProviders() → Models.dev API → Provider Data
```

**Step-by-step process:**

1. **Cache Check:** The system first checks for cached provider data
2. **API Fallback:** If cache is missing or expired (>1 hour), fetch from models.dev API
3. **Fallback Data:** If API fails, use built-in fallback provider data
4. **Data Transformation:** Convert models.dev format to internal format
5. **Caching:** Store successful responses in local cache

**Cache Implementation:**
- **Location:** `~/.cache/ai-speedometer/models.json`
- **Expiration:** 1 hour (3600 seconds)
- **Format:** JSON with timestamp
- **Fallback:** Built-in provider list for offline operation

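Under these rules, the freshness check reduces to comparing the stored timestamp against the 1-hour TTL. A sketch assuming the `{ timestamp, data }` cache shape implied by the "JSON with timestamp" format above:

```javascript
const CACHE_TTL_MS = 60 * 60 * 1000; // 1 hour, per the expiration policy above

// Decide whether a cached payload ({ timestamp, data }) is still usable.
// Assumed shape for illustration; sketch only.
function isCacheFresh(cache, nowMs = Date.now()) {
  if (!cache || typeof cache.timestamp !== 'number') return false;
  return nowMs - cache.timestamp < CACHE_TTL_MS;
}

// isCacheFresh({ timestamp: Date.now() - 30 * 60 * 1000, data: [] }) → true
// isCacheFresh({ timestamp: Date.now() - 2 * 60 * 60 * 1000, data: [] }) → false
```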
### 2. Provider Authentication Flow

```
Provider Selection → API Key Input → Auth Storage → Provider Activation
```

**Authentication Process:**

1. **Provider Selection:** User selects a provider from models.dev list
2. **API Key Input:** User enters their API key for the provider
3. **Secure Storage:** Key is stored in `~/.local/share/opencode/auth.json`
4. **Provider Activation:** Provider becomes available in model selection

**Security Features:**
- **File Permissions:** Auth files have 0o600 permissions (read/write for owner only)
- **XDG Compliance:** Follows XDG Base Directory specification
- **No Key Exposure:** Keys are never logged or displayed in plain text

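Storing a key then amounts to merging one entry into the parsed auth.json and writing it back with owner-only permissions. A pure sketch of the merge step (the real `addApiKey()` also performs the file write; the `{ type: "api", key }` entry shape follows the verified-provider example later in this document):

```javascript
// Merge a new API key into the parsed contents of auth.json.
// Pure sketch: the real code would also write the result back, e.g.
// fs.writeFileSync(authPath, JSON.stringify(auth), { mode: 0o600 })
// so the file is readable by the owner only.
function withApiKey(auth, providerId, apiKey) {
  return { ...auth, [providerId]: { type: 'api', key: apiKey } };
}

const auth = withApiKey({}, 'openai', 'sk-example');
// auth → { openai: { type: 'api', key: 'sk-example' } }
```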
### 3. Model Loading and Filtering

```
All Providers → Filter by Auth → Combine with Custom → Present to User
```

**Model Assembly Process:**

1. **Authenticated Models:** Load models from providers with valid API keys
2. **Custom Models:** Load user-defined models from opencode.json
3. **Deduplication:** Ensure no duplicate models in final list
4. **Presentation:** Display combined list in model selection interface

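The assembly steps above can be sketched as follows (the `{ providerId, modelId }` model shape is an assumption for illustration):

```javascript
// Combine authenticated and custom model lists, dropping duplicates
// by provider/model pair. Sketch of steps 1-3 above.
function assembleModels(authenticatedModels, customModels) {
  const seen = new Set();
  const combined = [];
  for (const m of [...authenticatedModels, ...customModels]) {
    const key = `${m.providerId}/${m.modelId}`;
    if (seen.has(key)) continue; // deduplication
    seen.add(key);
    combined.push(m);
  }
  return combined;
}
```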
## Provider Types and SDK Integration

### Supported Provider Types

#### 1. OpenAI-Compatible
- **SDK:** `@ai-sdk/openai-compatible`
- **Examples:** OpenAI, Groq, Together AI, Anyscale, Fireworks AI
- **Base URL Pattern:** `https://api.example.com/v1`
- **Authentication:** Bearer token via Authorization header

#### 2. Anthropic
- **SDK:** `@ai-sdk/anthropic`
- **Examples:** Anthropic Claude models
- **Base URL Pattern:** `https://api.anthropic.com`
- **Authentication:** Custom headers with x-api-key and anthropic-version

#### 3. Google
- **SDK:** `@ai-sdk/google`
- **Examples:** Google Gemini models
- **Base URL Pattern:** `https://generativelanguage.googleapis.com`
- **Authentication:** x-goog-api-key header

### Provider Type Detection

The system automatically detects provider types based on:

1. **NPM Package:** The `npm` field in models.dev data
2. **Fallback Logic:** Package name patterns for type detection

```javascript
// Automatic type detection logic
let providerType = 'openai-compatible'; // default
if (providerConfig.npm === '@ai-sdk/anthropic') {
  providerType = 'anthropic';
} else if (providerConfig.npm === '@ai-sdk/google') {
  providerType = 'google';
} else if (providerConfig.npm === '@ai-sdk/openai') {
  providerType = 'openai';
}
```

## Configuration and Customization

### Adding Custom Providers

Users can extend the models.dev ecosystem with custom providers:

1. **Custom Provider Setup:**
   ```json
   {
     "provider": {
       "my-custom-provider": {
         "name": "My Custom Provider",
         "npm": "@ai-sdk/openai-compatible",
         "options": {
           "apiKey": "your-api-key",
           "baseURL": "https://api.custom.com/v1"
         },
         "models": {
           "custom-model-1": {
             "name": "Custom Model 1"
           }
         }
       }
     }
   }
   ```

2. **Custom Provider Location:** `~/.config/opencode/opencode.json`

### Provider Configuration Structure

#### Verified Providers (from models.dev)
```json
{
  "providerId": {
    "type": "api",
    "key": "api-key-here"
  }
}
```

#### Custom Providers (user-defined)
```json
{
  "providerId": {
    "name": "Provider Name",
    "npm": "@ai-sdk/package-name",
    "options": {
      "apiKey": "api-key",
      "baseURL": "https://api.example.com/v1"
    },
    "models": {
      "modelId": {
        "name": "Model Name"
      }
    }
  }
}
```

## Error Handling and Resilience

### Cache Management
- **Automatic Cache Refresh:** Cache expires after 1 hour
- **Manual Refresh:** `refreshData()` function for forced updates
- **Cache Clearing:** `clearCache()` function for troubleshooting
- **Graceful Degradation:** Falls back to built-in data on API failure

### API Failure Handling
- **Network Errors:** Automatic retry with exponential backoff
- **Invalid Responses:** Fallback to cached data when available
- **Rate Limiting:** Respect API rate limits with proper delay handling
- **Offline Mode:** Built-in provider list for offline operation

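The retry behavior can be sketched as a generic helper in which the delay doubles on each failed attempt; the CLI's actual retry counts and delays are not specified here:

```javascript
// Retry an async operation with exponential backoff: the delay doubles
// after each failure (base, 2x, 4x, ...). Generic sketch with assumed limits.
function retryWithBackoff(fn, { retries = 3, baseDelayMs = 250 } = {}) {
  return fn().catch((err) => {
    if (retries <= 0) throw err; // out of attempts: surface the last error
    return new Promise((resolve) => setTimeout(resolve, baseDelayMs)).then(() =>
      retryWithBackoff(fn, { retries: retries - 1, baseDelayMs: baseDelayMs * 2 })
    );
  });
}
```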
### Data Validation
- **Schema Validation:** Validate incoming API responses
- **Type Checking:** Ensure provider types are recognized
- **Model Validation:** Verify model data integrity
- **URL Validation:** Confirm base URLs are properly formatted

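Base-URL validation can lean on the built-in `URL` parser. A sketch (the https-only check mirrors the network-security policy described later in this document):

```javascript
// Accept only well-formed https:// base URLs. Sketch only; the real
// validation may apply additional checks.
function isValidBaseURL(value) {
  try {
    const url = new URL(value);
    return url.protocol === 'https:';
  } catch {
    return false; // not parseable as a URL at all
  }
}

// isValidBaseURL('https://api.example.com/v1') → true
// isValidBaseURL('not a url') → false
```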
## Performance Optimization

### Caching Strategy
- **Local Cache:** Reduces API calls and improves load times
- **Conditional Updates:** Only fetch when data is stale
- **Memory Efficiency:** Cache data is compactly stored
- **Fast Access:** In-memory filtering and searching

### Search Optimization
- **Debounced Input:** 50ms delay reduces unnecessary filtering
- **Efficient Algorithms:** Optimized search across multiple fields
- **Partial Results:** Show results immediately during typing
- **Memory Management:** Clean up unused search timeouts

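The debounced input described above can be sketched as a classic debounce wrapper (50 ms window, per the note above; sketch only):

```javascript
// Delay invoking fn until `waitMs` has passed without another call,
// so rapid keystrokes trigger only one filter pass.
function debounce(fn, waitMs = 50) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer); // drop the previously scheduled call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```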
### Concurrent Loading
- **Parallel Requests:** Fetch authenticated and custom providers concurrently
- **Non-blocking UI:** Interface remains responsive during data loading
- **Progressive Enhancement:** Show available data while loading more

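Fetching both provider sources concurrently is a plain `Promise.all`. A sketch with stand-in loader functions (in the real module these would be `getAuthenticatedProviders()` and `getCustomProviders()`):

```javascript
// Load authenticated and custom providers in parallel rather than
// sequentially. The loader arguments are stand-ins for the real calls.
function loadAllProviders(loadAuthenticated, loadCustom) {
  return Promise.all([loadAuthenticated(), loadCustom()]).then(
    ([authenticated, custom]) => [...authenticated, ...custom]
  );
}
```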
## Security Considerations

### API Key Storage
- **Secure File Permissions:** Auth files restricted to owner (0o600)
- **XDG Compliance:** Follows system standards for config locations
- **No Key Logging:** API keys never appear in logs or output
- **Environment Variables:** Support for environment-based configuration

### Network Security
- **HTTPS Only:** All API communications use HTTPS
- **Certificate Validation:** Proper SSL certificate verification
- **No Credential Exposure:** Keys sent only in headers, never URLs
- **Request Signing:** Proper authentication headers for all requests

### Data Privacy
- **Local Storage Only:** No data sent to external services except models.dev API
- **Minimal Data Collection:** Only necessary provider and model information
- **User Control:** Users can clear cache and data at any time
- **Transparent Operation:** All operations are visible to the user

## Troubleshooting

### Common Issues

#### 1. Models Not Loading
**Symptoms:** Model selection shows empty or outdated list
**Solution:**
```bash
# Clear cache and force refresh
node -e "import('./models-dev.js').then(m => m.clearCache())"
```

#### 2. Authentication Issues
**Symptoms:** Provider appears but models fail during benchmark
**Solution:**
```bash
# Check auth.json contents
cat ~/.local/share/opencode/auth.json

# Verify API key format and permissions
```

#### 3. Search Performance Issues
**Symptoms:** Lag or flickering during search typing
**Solution:**
- Ensure cache directory exists and is writable
- Check for large model lists (1000+ models)
- Verify system resources are adequate

### Debugging Commands

```bash
# View cache status
ls -la ~/.cache/ai-speedometer/

# View auth configuration
cat ~/.local/share/opencode/auth.json

# View custom providers
cat ~/.config/opencode/opencode.json

# Clear all caches
rm -rf ~/.cache/ai-speedometer/
```

### Logging and Debugging

Enable debug logging for troubleshooting:
```bash
npm run cli:debug
```

This creates detailed logs in `debug.log` including:
- API request/response details
- Cache operations
- Provider loading process
- Authentication flow

## Future Enhancements

### Planned Features

1. **Enhanced Caching:**
   - Configurable cache expiration times
   - Cache size limits and management
   - Offline mode with extended cache

2. **Provider Management:**
   - Provider health monitoring
   - Automatic provider failover
   - Provider performance metrics

3. **User Experience:**
   - Provider favorites/bookmarking
   - Model capability filtering
   - Advanced search options

4. **Integration Enhancements:**
   - Real-time provider updates
   - Provider webhooks for changes
   - Community provider submissions

### API Considerations

- **Rate Limiting:** Implement proper rate limit handling
- **Pagination:** Support for paginated model lists
- **Versioning:** API version compatibility
- **Deprecation:** Graceful handling of deprecated providers/models

## Conclusion

The models.dev integration provides a powerful, extensible foundation for AI provider and model management. By combining centralized provider data with local authentication and customization, the system offers both convenience and flexibility. The caching, performance optimizations, and security features ensure a reliable and user-friendly experience for benchmarking AI models across multiple providers.