ai-speedometer 1.4.3 → 2.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,38 +2,60 @@
 
  A CLI tool for benchmarking AI models across multiple providers with parallel execution and performance metrics.
 
- ![Ai-speedometer showcase](./pics/image.png)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Community-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/6S7HwCxbMy)
+
+ Track OSS model speeds over time: [ai-speedometer.oliveowl.xyz](https://ai-speedometer.oliveowl.xyz/)
+
+ ![Ai-speedometer main menu](./pics/main.png)
+
+ ![Ai-speedometer benchmark](./pics/benchmark.png)
 
  ## Install
 
+ Requires [Bun](https://bun.sh) runtime.
+
+ ```bash
+ bun install -g ai-speedometer
+ ```
+
+ Or with npm (Bun still required at runtime):
+
  ```bash
  npm install -g ai-speedometer
  ```
 
+ Or run directly from source:
+
+ ```bash
+ bun src/index.ts
+ ```
+
  ## What It Measures
 
  - **TTFT** (Time to First Token) - How fast the first response token arrives
- - **Total Time** - Complete request duration
+ - **Total Time** - Complete request duration
  - **Tokens/Second** - Real-time throughput
  - **Token Counts** - Input, output, and total tokens used
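The four metrics relate in a simple way. A minimal sketch, assuming timestamps in milliseconds — the interface and function names are illustrative, not ai-speedometer's actual internals:

```typescript
// Hypothetical sketch of how the four measurements fit together;
// not the tool's real code.
interface BenchmarkMetrics {
  ttftMs: number;          // Time to First Token
  totalMs: number;         // complete request duration
  tokensPerSecond: number; // throughput over the generation window
  outputTokens: number;
}

function computeMetrics(
  startMs: number,
  firstTokenMs: number,
  endMs: number,
  outputTokens: number,
): BenchmarkMetrics {
  return {
    ttftMs: firstTokenMs - startMs,
    totalMs: endMs - startMs,
    // One common convention: count throughput from the first token onward,
    // so slow connection setup does not dilute the rate.
    tokensPerSecond: outputTokens / ((endMs - firstTokenMs) / 1000),
    outputTokens,
  };
}
```

For example, a request whose first token arrives at 250 ms and that finishes 400 output tokens at 2250 ms gives a TTFT of 250 ms and 200 tokens/second under this convention.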
 
- ## New Features
+ ## Features
 
- - **REST API Default** - REST API benchmarking is now the default method for better compatibility
+ - **Interactive TUI** - Full terminal UI with Tokyo Night theme, menus, search, and live benchmark progress
+ - **REST API Benchmarking** - Default method, works with all OpenAI-compatible providers
  - **Headless Mode** - Run benchmarks without interactive CLI using command-line arguments
- - **Streaming Support** - Full streaming support now available in REST API benchmarks
+ - **Parallel Execution** - Benchmark multiple models simultaneously
+ - **Provider Management** - Add verified, custom verified, and custom providers
 
  ## Quick Setup
 
  1. **Set Model**
  ```bash
  ai-speedometer
- # Select "Set Model" → "Add Verified Provider" → Choose provider (OpenAI, Anthropic, etc.)
+ # Select "Run Benchmark" → "Add Verified Provider" → Choose provider (OpenAI, Anthropic, etc.)
  # Enter your API key when prompted
  ```
 
  2. **Choose Model Provider**
- - Verified providers (OpenAI, Anthropic, Google) - auto-configured
+ - Verified providers (OpenAI, Anthropic, Google) - auto-configured via models.dev
  - Custom verified providers (pre-configured trusted providers) - add API key
  - Custom providers (Ollama, local models) - add your base URL
 
@@ -42,47 +64,64 @@ npm install -g ai-speedometer
  - Enter when prompted - stored securely in:
  - `~/.local/share/opencode/auth.json` (primary storage)
  - `~/.config/ai-speedometer/ai-benchmark-config.json` (backup storage)
- - Both files store verified and custom verified provider keys
 
  4. **Run Benchmark**
  ```bash
  ai-speedometer
- # Select "Run Benchmark (REST API)" → Choose models → Press ENTER
- # Note: REST API is now the default benchmark method
+ # Select "Run Benchmark" → choose models → press Enter
  ```
 
  ## Usage
 
  ```bash
- # Start CLI
+ # Start interactive TUI
  ai-speedometer
 
- # Or use short alias
+ # Short alias
  aispeed
 
  # Debug mode
  ai-speedometer --debug
 
- # Headless mode - run benchmark directly
+ # Headless benchmark
  ai-speedometer --bench openai:gpt-4
  # With custom API key
  ai-speedometer --bench openai:gpt-4 --api-key "sk-your-key"
- # Use AI SDK instead of REST API
- ai-speedometer --bench openai:gpt-4 --ai-sdk
+ # Custom provider
+ ai-speedometer --bench-custom myprovider:mymodel --base-url https://... --api-key "..."
+ ```
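Headless mode takes a `provider:model` spec such as `openai:gpt-4`. As a sketch of how such a spec splits — the helper below is hypothetical, not the tool's actual parser:

```typescript
// Hypothetical helper: split "provider:model" on the FIRST colon, so model
// ids that themselves contain colons (e.g. Ollama tags like "llama3:8b")
// stay intact in the model part.
function parseBenchSpec(spec: string): { provider: string; model: string } {
  const idx = spec.indexOf(":");
  if (idx <= 0 || idx === spec.length - 1) {
    throw new Error(`expected "provider:model", got "${spec}"`);
  }
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}
```

Under this reading, `ollama:llama3:8b` would resolve to provider `ollama` and model `llama3:8b`.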
+
+ ## Development
+
+ ```bash
+ # Run from source
+ bun src/index.ts
+
+ # Run with auto-reload
+ bun --watch src/index.ts
+
+ # Run tests
+ bun test
+
+ # Typecheck
+ bun run typecheck
+
+ # Build standalone binary
+ bun run build # → dist/ai-speedometer
  ```
 
  ## Configuration Files
 
  API keys and configuration are stored in:
 
- - **Verified + Custom Verified Providers**:
+ - **Verified + Custom Verified Providers**:
  - Primary: `~/.local/share/opencode/auth.json`
  - Backup: `~/.config/ai-speedometer/ai-benchmark-config.json` (verifiedProviders section)
  - **Custom Providers**: `~/.config/ai-speedometer/ai-benchmark-config.json` (customProviders section)
- - **Provider Definitions**: `./custom-verified-providers.json`
+ - **Provider Definitions**: `./custom-verified-providers.json` (bundled at build time)
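The two section names above suggest a backup file shaped roughly like the following. Only `verifiedProviders` and `customProviders` come from the README; every key inside them is a guess for illustration, not the file's documented schema:

```json
{
  "verifiedProviders": {
    "openai": { "apiKey": "sk-placeholder" }
  },
  "customProviders": {
    "myprovider": {
      "baseUrl": "http://localhost:11434/v1",
      "apiKey": "placeholder"
    }
  }
}
```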
 
  ## Requirements
 
- - Node.js 18+
+ - **Runtime**: Bun 1.0+ (required — install from [bun.sh](https://bun.sh))
  - API keys for AI providers
  - Terminal with arrow keys and ANSI colors