ai-speedometer 2.1.2 → 2.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/docs/README.md DELETED
@@ -1,191 +0,0 @@
# AI Speedometer Documentation

Welcome to the documentation for the AI Speedometer benchmark CLI. This documentation covers everything you need to know about using, configuring, and understanding the system.

## Quick Start

- [User Guide](README.md) - Main README with installation and usage
- [Bug Fixes v1.0](bug-fixes-v1.md) - Critical issues resolved in latest version
- [Models.dev Integration](models-dev-integration.md) - How provider and model loading works
- [Custom Verified Providers](custom-verified-providers.md) - Adding and configuring custom verified providers

## Documentation Structure

### 📚 User Documentation

#### Getting Started
- [Main README](../README.md) - Installation, setup, and basic usage
- [Configuration Guide](../README.md#setup-guide) - Setting up providers and API keys

#### Features
- **Parallel Benchmarking** - Run multiple models simultaneously
- **Provider Management** - Add verified and custom providers
- **Model Selection** - Interactive search and selection interface
- **Performance Metrics** - Comprehensive benchmark results and charts

### 🔧 Technical Documentation

#### Architecture
- [Models.dev Integration](models-dev-integration.md) - Provider ecosystem and API integration
- [Custom Verified Providers](custom-verified-providers.md) - Pre-configured provider setup and management
- [Bug Fixes](bug-fixes-v1.md) - Critical issues and their solutions

#### Configuration
- **Provider Configuration** - Setting up different AI providers
- **Authentication** - API key management and security
- **Custom Providers** - Adding your own AI providers
- **Custom Verified Providers** - Pre-configured trusted providers

#### Development
- **Code Structure** - Organization of modules and components
- **Testing Guide** - Running and writing tests
- **Contributing** - How to contribute to the project

## Key Concepts

### Providers and Models

The system supports three types of providers:

1. **Verified Providers** - From the models.dev ecosystem
   - OpenAI, Anthropic, Google, and other major providers
   - Automatically updated with the latest models
   - Pre-configured endpoints and authentication

2. **Custom Verified Providers** - Pre-configured trusted providers
   - Curated providers not in models.dev but treated as verified
   - Pre-configured with specific models and endpoints
   - Available in `custom-verified-providers.json`

3. **Custom Providers** - User-defined providers
   - Your own AI endpoints or local models
   - Full configuration flexibility
   - Support for OpenAI-compatible APIs

### Benchmarking Methods

- **AI SDK Method** - Uses the Vercel AI SDK with streaming
  - Real-time token counting
  - Time to First Token (TTFT) metrics
  - Streaming response analysis

- **REST API Method** - Direct HTTP API calls
  - No streaming; complete response timing
  - Consistent across all providers
  - Fallback for compatibility

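The streaming measurement can be sketched as follows. This is a hypothetical helper, not the project's actual code: it records when the first chunk arrives (TTFT) and when the stream ends (total time), with an `AsyncIterable<string>` standing in for a provider's streamed response body.

```typescript
// Hypothetical sketch of the AI SDK-style streaming measurement: record the
// timestamp of the first chunk (TTFT) and of stream completion (total time).
// None of these names come from the ai-speedometer source.
async function benchmarkStream(
  chunks: AsyncIterable<string>,
): Promise<{ ttftMs: number; totalMs: number; text: string }> {
  const start = performance.now();
  let firstChunkAt: number | null = null;
  let text = "";
  for await (const chunk of chunks) {
    if (firstChunkAt === null) firstChunkAt = performance.now();
    text += chunk;
  }
  const end = performance.now();
  return {
    ttftMs: (firstChunkAt ?? end) - start, // empty stream: TTFT = total
    totalMs: end - start,
    text,
  };
}

// A tiny fake stream, just to demonstrate the shape.
async function* fakeChunks(): AsyncGenerator<string> {
  yield "Hello";
  yield ", world";
}
```

The REST API method skips the per-chunk bookkeeping and only measures the gap between request start and the complete response, which is why it cannot report TTFT.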
### Performance Metrics

- **Total Time** - Complete request duration
- **TTFT** - Time to First Token (streaming only)
- **Tokens/Second** - Real-time throughput calculation
- **Token Counts** - Input, output, and total tokens
- **Provider Rankings** - Performance comparison across providers

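How these figures relate can be shown with a small worked sketch (illustrative only; the interface and function names are assumptions, not the project's own):

```typescript
// Illustrative derivation of the reported metrics from one benchmark run.
interface RunTimings {
  totalMs: number;       // Total Time
  ttftMs?: number;       // Time to First Token (streaming runs only)
  inputTokens: number;
  outputTokens: number;
}

function deriveMetrics(run: RunTimings) {
  const seconds = run.totalMs / 1000;
  return {
    totalTokens: run.inputTokens + run.outputTokens,
    // Throughput counts generated (output) tokens over the whole request.
    tokensPerSecond: seconds > 0 ? run.outputTokens / seconds : 0,
    ttftMs: run.ttftMs ?? null, // null for non-streaming (REST) runs
  };
}
```

For example, 100 output tokens produced in a 2-second request yields 50 tokens/second, regardless of how many input tokens were sent.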
## Recent Updates

### Phase 8 — Build, Polish & Cleanup

Production-ready build with full TypeScript, clean dependencies, and proper binary distribution:

- **Standalone binary** — `bun build --compile` produces a single ELF executable at `dist/ai-speedometer`
- **Scripts** — `bun run build`, `bun run typecheck`, `bun test`, `bun test --watch`, `bun test --update-snapshots`
- **Removed unused deps** — `@ai-sdk/anthropic`, `@ai-sdk/openai-compatible`, `ai`, `cli-table3`, `dotenv`, `esbuild` removed; only `jsonc-parser` remains
- **SIGINT handler** — `renderer.destroy()` + `process.exit(0)` on `SIGINT` in `src/tui/index.tsx`
- **Loading guard** — `ModelSelectScreen` shows "Loading config..." during initial config load
- **0 TypeScript errors** — `bun tsc --noEmit` passes clean

Build: `bun run build` → `dist/ai-speedometer` (standalone binary, no bun required)

### Phase 7 — Tests

Comprehensive test suite using `bun test` + `@opentui/react/test-utils`:

- **51 tests** across 11 files — all passing
- Component tests: Header, Footer, MenuList, BarChart, ModelRow, ResultsTable
- Screen tests: MainMenuScreen, ModelSelectScreen, AddVerifiedScreen, AddCustomScreen
- Benchmark logic tests with mocked fetch (openai-compatible, anthropic, google)
- 14 snapshots for visual regression detection
- Test factories in `src/tests/setup.ts`: `mockModel`, `mockBenchmarkResult`, `mockProvider`, `mockModelBenchState`

Run with: `bun test`

### Version 1.0 Bug Fixes

The latest release includes critical fixes for:

- ✅ **Parallel Model Execution** - Multi-model selection now works correctly
- ✅ **AI SDK Model Support** - Verified providers work in benchmarks
- ✅ **Search Performance** - Reduced lag and flickering in search
- ✅ **Screen Rendering** - Fixed text overlapping issues

See [Bug Fixes v1.0](bug-fixes-v1.md) for detailed technical information.

### Models.dev Integration Enhancement

- **Improved Caching** - Better performance with 1-hour cache expiration
- **Enhanced Search** - Debounced filtering with 50ms delay
- **Provider Type Detection** - Automatic SDK selection based on provider
- **Error Handling** - Graceful degradation and fallback mechanisms

See [Models.dev Integration](models-dev-integration.md) for complete architectural details.

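The debounced filtering mentioned above follows a standard pattern: keystrokes reset a short timer, and the filter runs only after input pauses for the delay. A minimal sketch (a generic helper, not the project's implementation; the 50 ms figure is from the list above):

```typescript
// Generic debounce: repeated calls within `delayMs` collapse into one trailing call.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  delayMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer); // each keystroke cancels the pending filter run
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage sketch: only the last query within the window triggers filtering.
const runFilter = debounce((query: string) => {
  console.log("filtering models for:", query);
}, 50);
```

This keeps the search list from re-rendering on every keystroke, which is what causes the lag and flickering the fixes address.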
### Custom Verified Providers

- **Pre-configured Providers** - Curated list of trusted providers not in models.dev
- **Structured Configuration** - Standardized format for provider definitions
- **Model Management** - Predefined models with display names and API IDs
- **Authentication Integration** - Seamless integration with existing auth system

See [Custom Verified Providers](custom-verified-providers.md) for the complete setup and configuration guide.

## Getting Help

### Troubleshooting

#### Common Issues
1. **Models Not Showing** - Check API key configuration and provider authentication
2. **Search Lag** - Clear cache and ensure proper file permissions
3. **Benchmark Failures** - Verify API keys and network connectivity

#### Debug Mode
Enable detailed logging for troubleshooting:
```bash
npm run cli:debug
```

This creates `debug.log` with API request/response details.

### Community Support

- **GitHub Issues** - Report bugs and request features
- **Documentation** - Check these docs for common questions
- **Debug Logs** - Include debug logs when reporting issues

### Contributing

We welcome contributions! See the main project README for guidelines on:
- Reporting bugs
- Requesting features
- Submitting pull requests
- Improving documentation

## Documentation Index

| Document | Description | Audience |
|----------|-------------|----------|
| [Main README](../README.md) | Installation, setup, and basic usage | Users |
| [Bug Fixes v1.0](bug-fixes-v1.md) | Critical issues and solutions | Developers/Users |
| [Models.dev Integration](models-dev-integration.md) | Provider ecosystem architecture | Developers |
| [Custom Verified Providers](custom-verified-providers.md) | Pre-configured provider setup | Developers/Users |

## Quick Links

- **Project Home:** [GitHub Repository](https://github.com/aptdnfapt/Ai-speedometer)
- **Issues:** [GitHub Issues](https://github.com/aptdnfapt/Ai-speedometer/issues)
- **Models.dev:** [Provider Registry](https://models.dev)

---

*Documentation last updated: September 2025*
*Version: 1.0.0*
@@ -1,463 +0,0 @@
# Custom Verified Providers Guide

## Overview

Custom Verified Providers are pre-configured AI providers that are not part of the main models.dev ecosystem but are treated as "verified" within the AI Speedometer system. These providers have predefined configurations and models that users can access with their own API keys.

This guide explains how custom verified providers work, their configuration structure, and how to add or modify them.

## What are Custom Verified Providers?

Custom Verified Providers bridge the gap between:

1. **Official Verified Providers** (from models.dev) - OpenAI, Anthropic, Google, etc.
2. **User-Defined Custom Providers** - Completely user-configured endpoints

Custom Verified Providers are:
- **Pre-configured** by the AI Speedometer team
- **Curated** for quality and reliability
- **Structured** with specific models and endpoints
- **Treated as verified** in the user interface

## Configuration Structure

The custom verified providers are stored in `custom-verified-providers.json` in the project root. Here's the complete structure:

### Root Structure
```json
{
  "custom-verified-providers": {
    "provider-id": {
      // Provider configuration
    },
    "another-provider": {
      // Provider configuration
    }
  }
}
```

### Provider Configuration

Each provider has the following structure:

#### Basic Provider
```json
{
  "id": "provider-id",
  "name": "Provider Display Name",
  "baseUrl": "https://api.provider.com/v1",
  "type": "openai-compatible",
  "models": {
    "model-id": {
      "id": "model-id",
      "name": "Model Display Name"
    }
  }
}
```

#### Provider Types

The `type` field determines which SDK and authentication method to use:

| Type | SDK Package | Authentication | Base URL Pattern |
|------|-------------|----------------|------------------|
| `openai-compatible` | `@ai-sdk/openai-compatible` | Bearer token | `https://api.example.com/v1` |
| `anthropic` | `@ai-sdk/anthropic` | x-api-key header | `https://api.anthropic.com` |
| `google` | `@ai-sdk/google` | x-goog-api-key header | `https://generativelanguage.googleapis.com` |

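The table above maps directly to the headers a request would carry. A sketch of that mapping (illustrative, not the project's code; header names follow each provider's public API conventions, and the `anthropic-version` value is an assumed example):

```typescript
// Sketch: which auth headers each provider type implies.
type ProviderType = "openai-compatible" | "anthropic" | "google";

function authHeaders(type: ProviderType, apiKey: string): Record<string, string> {
  switch (type) {
    case "openai-compatible":
      // Bearer token in the Authorization header
      return { Authorization: `Bearer ${apiKey}` };
    case "anthropic":
      // Anthropic also expects a version header alongside the key
      return { "x-api-key": apiKey, "anthropic-version": "2023-06-01" };
    case "google":
      return { "x-goog-api-key": apiKey };
  }
}
```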
### Models Configuration

Models are nested objects within each provider:

```json
"models": {
  "model-unique-id": {
    "id": "model-api-id",
    "name": "Human-Readable Model Name"
  },
  "another-model": {
    "id": "another-model-api-id",
    "name": "Another Model Name"
  }
}
```

**Key Points:**
- **Object Key**: Used internally for identification (should be unique)
- **id**: The actual model ID passed to the API
- **name**: Display name shown in the CLI interface

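Expressed as TypeScript types, the structure above looks like this (a sketch inferred from the JSON examples in this guide, not the project's actual type definitions):

```typescript
// Shape of custom-verified-providers.json, inferred from this guide's examples.
interface CustomVerifiedModel {
  id: string;   // model ID sent to the API
  name: string; // display name in the CLI
}

interface CustomVerifiedProvider {
  id: string;
  name: string;
  baseUrl: string;
  type: "openai-compatible" | "anthropic" | "google";
  models: Record<string, CustomVerifiedModel>; // keyed by an internal unique key
}

interface CustomVerifiedProvidersFile {
  "custom-verified-providers": Record<string, CustomVerifiedProvider>;
}
```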
## Complete Example

Here's a complete example from the current configuration:

```json
{
  "custom-verified-providers": {
    "zai-code-anth": {
      "id": "zai-code-anth",
      "name": "zai-code-anth",
      "baseUrl": "https://api.z.ai/api/anthropic/v1",
      "type": "anthropic",
      "models": {
        "glm-4-5": {
          "id": "glm-4.5",
          "name": "GLM-4.5-anth"
        },
        "glm-4-5-air": {
          "id": "glm-4.5-air",
          "name": "GLM-4.5-air-anth"
        }
      }
    },
    "nanogpt-plan": {
      "id": "nanogpt-plan",
      "name": "nanogpt-plan",
      "baseUrl": "https://nano-gpt.com/api/v1",
      "type": "openai-compatible",
      "models": {
        "deepseek-ai/DeepSeek-V3.1-Terminus": {
          "id": "deepseek-ai/DeepSeek-V3.1-Terminus",
          "name": "DeepSeek V3.1 Terminus"
        },
        "zai-org/GLM-4.5-FP8": {
          "id": "zai-org/GLM-4.5-FP8",
          "name": "GLM 4.5 FP8"
        }
      }
    }
  }
}
```

## How Custom Verified Providers Work

### Integration Flow

1. **Loading**: The system reads `custom-verified-providers.json` on startup
2. **Merging**: Custom verified providers are merged with models.dev providers
3. **Authentication**: Users add API keys through the CLI interface
4. **Availability**: Providers appear in the model selection interface

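The merge in step 2 can be sketched as folding the custom verified entries into the models.dev provider map. This is illustrative only; the names are not the project's, and the conflict-resolution rule (models.dev wins on a duplicate ID) is an assumption:

```typescript
// Sketch of the merge step: combine models.dev providers with custom verified ones.
interface ProviderInfo {
  id: string;
  name: string;
  source: "models.dev" | "custom-verified";
}

function mergeProviders(
  modelsDev: ProviderInfo[],
  customVerified: ProviderInfo[],
): ProviderInfo[] {
  const byId = new Map<string, ProviderInfo>();
  for (const p of modelsDev) byId.set(p.id, p);
  // Assumption: custom verified entries are added only when the ID is free,
  // so models.dev definitions win on conflict.
  for (const p of customVerified) {
    if (!byId.has(p.id)) byId.set(p.id, p);
  }
  return [...byId.values()];
}
```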
### User Experience

Users see custom verified providers alongside official models.dev providers:

```
Available Providers:
├── OpenAI (models.dev)
├── Anthropic (models.dev)
├── zai-code-anth (custom verified)
├── nanogpt-plan (custom verified)
└── [Add Custom Provider]
```

### Authentication Storage

API keys for custom verified providers are stored in **both locations** for redundancy:

```json
// Primary storage: ~/.local/share/opencode/auth.json
{
  "zai-code-anth": {
    "type": "api",
    "key": "user-api-key-here"
  }
}

// Backup storage: ~/.config/ai-speedometer/ai-benchmark-config.json
{
  "verifiedProviders": {
    "zai-code-anth": "user-api-key-here"
  }
}
```

With multiple custom verified providers configured, `auth.json` holds one entry per provider:

```json
{
  "zai-code-anth": {
    "type": "api",
    "key": "user-api-key-here"
  },
  "nanogpt-plan": {
    "type": "api",
    "key": "another-api-key"
  }
}
```

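Reading a key back follows from the two locations above: `auth.json` is primary and the ai-speedometer config is the backup. A sketch of that lookup (shapes follow the JSON examples; the function and field names are illustrative, not the project's):

```typescript
// Sketch: resolve a provider's API key, preferring the primary auth.json entry.
type AuthFile = Record<string, { type: "api"; key: string }>;

interface BackupConfig {
  verifiedProviders?: Record<string, string>;
}

function resolveApiKey(
  providerId: string,
  auth: AuthFile,
  backup: BackupConfig,
): string | undefined {
  // Primary: auth.json entry; fallback: the backup config's flat key map.
  return auth[providerId]?.key ?? backup.verifiedProviders?.[providerId];
}
```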
## Adding New Custom Verified Providers

### Step-by-Step Process

1. **Research the Provider**
   - Determine the provider type (OpenAI-compatible, Anthropic, Google)
   - Find the base API endpoint URL
   - Identify available models and their API IDs

2. **Create Provider Configuration**
   ```json
   {
     "id": "unique-provider-id",
     "name": "Human Readable Name",
     "baseUrl": "https://api.provider.com/v1",
     "type": "openai-compatible",
     "models": {
       "model-key-1": {
         "id": "actual-model-id-1",
         "name": "Model 1 Display Name"
       }
     }
   }
   ```

3. **Add to custom-verified-providers.json**
   - Add your provider configuration to the file
   - Ensure the provider ID is unique
   - Test the configuration

4. **Test the Provider**
   ```bash
   # Start CLI
   ai-speedometer

   # Add your API key for the new provider
   # Select "Set Model" → "Add Verified Provider"
   # Choose your new provider from the list
   ```

### Validation Checklist

Before adding a new provider, verify:

- [ ] **Base URL Accessibility**: The endpoint is reachable and responds to HTTPS requests
- [ ] **Authentication**: The correct authentication method is configured
- [ ] **Model Availability**: Models exist and are accessible with the API
- [ ] **Rate Limits**: Provider allows reasonable request rates for benchmarking
- [ ] **API Compatibility**: Endpoint follows the expected SDK format

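The static parts of this checklist can be automated. A sketch of such a check (illustrative only; it validates the configuration's shape and is not a substitute for testing the live API):

```typescript
// Sketch: static sanity checks on a candidate provider entry.
interface CandidateProvider {
  id: string;
  name: string;
  baseUrl: string;
  type: string;
  models: Record<string, { id: string; name: string }>;
}

function staticIssues(p: CandidateProvider): string[] {
  const issues: string[] = [];
  if (!/^[a-z0-9-]+$/.test(p.id))
    issues.push("id should use lowercase letters, digits, and hyphens");
  if (!p.baseUrl.startsWith("https://"))
    issues.push("baseUrl must use HTTPS");
  if (!["openai-compatible", "anthropic", "google"].includes(p.type))
    issues.push(`unsupported type: ${p.type}`);
  if (Object.keys(p.models).length === 0)
    issues.push("at least one model is required");
  return issues;
}
```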
### Best Practices

1. **Provider ID Naming**
   - Use lowercase letters, numbers, and hyphens
   - Make it descriptive and unique
   - Example: `my-provider-name`, not `MyProviderName`

2. **Model Naming**
   - Use clear, human-readable names
   - Include provider context if helpful
   - Example: `GPT-4 via MyProvider`, not just `GPT-4`

3. **Base URL Configuration**
   - Include the full API path
   - Use HTTPS only
   - Be consistent about trailing slashes

4. **Type Selection**
   - Choose the most specific type available
   - When in doubt, use `openai-compatible`
   - Test with the actual SDK if possible

## Configuration Options Reference

### Provider Fields

| Field | Required | Type | Description |
|-------|----------|------|-------------|
| `id` | Yes | String | Unique identifier for the provider |
| `name` | Yes | String | Display name shown in the CLI |
| `baseUrl` | Yes | String | API endpoint URL |
| `type` | Yes | String | Provider type (SDK to use) |
| `models` | Yes | Object | Models available for this provider |

### Model Fields

| Field | Required | Type | Description |
|-------|----------|------|-------------|
| `id` | Yes | String | API model identifier |
| `name` | Yes | String | Display name for the model |

### Supported Provider Types

#### openai-compatible
- **SDK**: `@ai-sdk/openai-compatible`
- **Authentication**: Bearer token in the Authorization header
- **Base URL Format**: `https://api.example.com/v1`
- **Model Format**: Standard OpenAI model IDs
- **Examples**: Groq, Together AI, Fireworks AI

#### anthropic
- **SDK**: `@ai-sdk/anthropic`
- **Authentication**: `x-api-key` and `anthropic-version` headers
- **Base URL Format**: `https://api.anthropic.com`
- **Model Format**: Claude model IDs (`claude-3-sonnet-20240229`, etc.)
- **Examples**: Anthropic Claude API, Anthropic-compatible endpoints

#### google
- **SDK**: `@ai-sdk/google`
- **Authentication**: `x-goog-api-key` header
- **Base URL Format**: `https://generativelanguage.googleapis.com`
- **Model Format**: Gemini model IDs (`gemini-pro`, etc.)
- **Examples**: Google Gemini API

## Troubleshooting

### Common Issues

#### 1. Provider Not Showing in List
**Symptoms**: Custom verified provider doesn't appear in provider selection

**Solutions**:
- Check the JSON syntax in `custom-verified-providers.json`
- Verify the provider ID is unique
- Ensure the file is in the correct project root directory
- Restart the CLI application

#### 2. Authentication Failures
**Symptoms**: API key accepted but models fail during benchmark

**Solutions**:
- Verify the provider type matches the actual API
- Check the base URL format and trailing slashes
- Confirm the API key has the correct permissions
- Test the API endpoint manually with curl

#### 3. Model Not Found Errors
**Symptoms**: Selected model returns "not found" from the provider

**Solutions**:
- Verify the model `id` matches the provider's API exactly
- Check model availability in your account/region
- Remember that the model object key is only an internal label; the `id` field is what's sent to the API
- Test with a simple curl request

### Debugging Commands

```bash
# Check custom verified providers configuration
cat custom-verified-providers.json

# Validate JSON syntax
python -m json.tool custom-verified-providers.json > /dev/null

# Test API endpoint accessibility
curl -I https://api.provider.com/v1/models

# Check authentication storage
cat ~/.config/ai-speedometer/ai-benchmark-config.json

# Run CLI in debug mode
ai-speedometer --debug
```

### Testing New Providers

Before adding to the main configuration:

1. **Manual API Test**
   ```bash
   curl -H "Authorization: Bearer YOUR_KEY" \
     https://api.provider.com/v1/models
   ```

2. **Single Model Test**
   ```bash
   curl -H "Authorization: Bearer YOUR_KEY" \
     -H "Content-Type: application/json" \
     -d '{"model": "model-id", "messages": [{"role": "user", "content": "Hello"}]}' \
     https://api.provider.com/v1/chat/completions
   ```

3. **CLI Integration Test**
   - Add the provider temporarily to your local config
   - Test through the CLI interface
   - Verify benchmark results

## Security Considerations

### API Key Management
- Custom verified providers never store API keys in the main configuration
- Keys are stored in user-specific config files with proper permissions
- No API keys are logged or transmitted to external services

### Provider Vetting
- Only add providers from trusted sources
- Verify provider privacy policies and data-handling practices
- Ensure providers follow security best practices
- Regularly review provider configurations for updates

### Network Security
- All providers must use HTTPS endpoints
- Certificate validation is enforced
- No plaintext authentication is supported
- API keys are transmitted only via headers

## Migration and Updates

### Adding New Providers
1. **Test in Development**: Verify the provider works locally
2. **Update Configuration**: Add it to `custom-verified-providers.json`
3. **Update Documentation**: Document any special requirements
4. **Communicate Changes**: Inform users of new provider options

### Removing Providers
1. **Deprecation Period**: Keep the provider for at least one release cycle
2. **User Communication**: Announce the removal in advance
3. **Migration Support**: Help users migrate to alternatives
4. **Clean Removal**: Remove it from the configuration and documentation

### Configuration Updates
- When provider APIs change, update the configuration
- Maintain backward compatibility when possible
- Document breaking changes clearly
- Test updates thoroughly before deployment

## Contributing

### Adding New Providers
To contribute a new custom verified provider:

1. **Research**: Thoroughly test the provider's API
2. **Documentation**: Include any special setup requirements
3. **Testing**: Verify the provider works with the CLI
4. **Pull Request**: Submit the configuration change

### Provider Requirements
- Must be a reliable AI service provider
- Should have reasonable API rate limits
- Must support standard AI SDK patterns
- Should provide documentation for its API
- Must be accessible via HTTPS

### Code of Conduct
- Only add providers that you personally use and recommend
- Ensure providers have clear privacy policies
- Avoid providers with known security issues
- Respect provider terms of service

## Future Enhancements

### Planned Features
1. **Provider Validation**: Automatic testing of provider configurations
2. **Version Management**: Support for multiple provider API versions
3. **Dynamic Loading**: Load providers from external sources
4. **Provider Metrics**: Performance and reliability tracking
5. **User Feedback**: Allow users to rate and review providers

### Configuration Improvements
1. **Schema Validation**: JSON schema validation for configurations
2. **Default Values**: Support for optional configuration fields
3. **Environment Variables**: Override configurations via environment variables
4. **Conditional Models**: Model availability based on user account

### Integration Enhancements
1. **Auto-discovery**: Automatically detect provider capabilities
2. **Webhooks**: Real-time provider status updates
3. **Health Checks**: Monitor provider availability
4. **Failover**: Automatic fallback to alternative providers

---

This guide provides comprehensive documentation for custom verified providers in the AI Speedometer system. For additional questions, or to contribute new providers, refer to the main project documentation or open an issue on the GitHub repository.