llm-usage-metrics 0.3.3 → 0.3.5
This diff shows the content of publicly available package versions as they appear in their respective public registries, and is provided for informational purposes only.
- package/README.md +63 -30
- package/dist/index.js +588 -87
- package/dist/index.js.map +1 -1
- package/package.json +5 -2
package/README.md
CHANGED
@@ -21,11 +21,11 @@
 
 ---
 
-Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, and **OpenCode** with zero configuration required.
+Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, **Gemini CLI**, **Droid CLI**, and **OpenCode** with zero configuration required.
 
 ## ✨ Features
 
-- **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, and OpenCode session data
+- **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, `.gemini`, `.factory`, and OpenCode session data
 - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
 - **Flexible Reports** — Daily, weekly, and monthly aggregations
 - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
@@ -39,7 +39,7 @@ Aggregate token usage and costs from your local coding agent sessions. Supports
 npm install -g llm-usage-metrics
 
 # Or run without installing
-npx llm-usage-metrics daily
+npx llm-usage-metrics@latest daily
 
 # Generate your first report
 llm-usage daily
@@ -53,11 +53,13 @@ llm-usage daily
 
 ## 📋 Supported Sources
 
-| Source
-|
-| **pi**
-| **codex**
-| **
+| Source         | Pattern                                   | Discovery                        |
+| -------------- | ----------------------------------------- | -------------------------------- |
+| **pi**         | `~/.pi/agent/sessions/**/*.jsonl`         | Automatic                        |
+| **codex**      | `~/.codex/sessions/**/*.jsonl`            | Automatic                        |
+| **Gemini CLI** | `~/.gemini/tmp/*/chats/*.json`            | Automatic                        |
+| **Droid CLI**  | `~/.factory/sessions/**/*.settings.json`  | Automatic                        |
+| **OpenCode**   | `~/.opencode/opencode.db`                 | Auto or explicit `--opencode-db` |
 
 OpenCode source support requires Node.js 24+ runtime with built-in `node:sqlite`.
 
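The discovery globs in the new table can be spot-checked from a shell. A minimal sketch, not part of the package — the patterns are taken verbatim from the table above, and bash's `globstar` option is assumed for `**` expansion:

```bash
# Count session files at the documented default locations.
shopt -s globstar nullglob
pi=(~/.pi/agent/sessions/**/*.jsonl)
codex=(~/.codex/sessions/**/*.jsonl)
gemini=(~/.gemini/tmp/*/chats/*.json)
droid=(~/.factory/sessions/**/*.settings.json)
echo "pi=${#pi[@]} codex=${#codex[@]} gemini=${#gemini[@]} droid=${#droid[@]}"
```

A nonzero count for a source suggests zero-config discovery will find data there; OpenCode is omitted since it reads a SQLite database rather than a file glob.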
@@ -123,16 +125,18 @@ For source-by-source comparisons, run the same report per source:
 ```bash
 llm-usage efficiency monthly --repo-dir /path/to/repo --source pi
 llm-usage efficiency monthly --repo-dir /path/to/repo --source codex
+llm-usage efficiency monthly --repo-dir /path/to/repo --source gemini
+llm-usage efficiency monthly --repo-dir /path/to/repo --source droid
 llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode
 ```
 
-Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
+Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--droid-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
 
 ### Filtering
 
 ```bash
 # By source
-llm-usage monthly --source pi,codex
+llm-usage monthly --source pi,codex,gemini,droid
 
 # By provider
 llm-usage monthly --provider openai
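The five per-source invocations added above differ only in `--source`, so a loop is an equivalent shorthand (a sketch; the flags are verbatim from the diff):

```bash
# Run the same efficiency report once for each supported source.
for src in pi codex gemini droid opencode; do
  llm-usage efficiency monthly --repo-dir /path/to/repo --source "$src"
done
```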
@@ -148,9 +152,11 @@ llm-usage monthly --source opencode --provider openai --model gpt-4.1
 
 ```bash
 # Custom directories
-llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex
+llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini --source-dir droid=/path/to/.factory/sessions
 
-# Explicit OpenCode
+# Explicit Gemini/Droid/OpenCode paths
+llm-usage daily --gemini-dir /path/to/.gemini
+llm-usage daily --droid-dir /path/to/.factory/sessions
 llm-usage daily --opencode-db /path/to/opencode.db
 ```
 
@@ -166,41 +172,61 @@ llm-usage monthly --ignore-pricing-failures
 
 ## 🧪 Production Benchmarks
 
-Benchmarked on **February
+Benchmarked on **February 27, 2026** on a local production machine:
 
 - OS: CachyOS (Linux 6.19.2-2-cachyos)
 - CPU: Intel Core Ultra 9 185H (22 logical CPUs)
 - RAM: 62 GiB
 - Storage: NVMe SSD
 
-Compared
+Compared scenarios:
 
 ```bash
+# direct source-to-source parity (openai provider)
 ccusage-codex monthly
-llm-usage monthly --provider openai
+llm-usage monthly --provider openai --source codex
+
+# multi-source comparison for one provider (openai)
+ccusage-codex monthly
+llm-usage monthly --provider openai --source pi,codex,gemini,opencode
 ```
 
-Timed benchmark summary (5 runs per scenario)
+Timed benchmark summary (5 runs per scenario).
+
+Direct source-to-source parity (`--source codex`):
 
-| Tool
-|
-| `ccusage-codex monthly`
-| `ccusage-codex monthly --offline`
-| `llm-usage monthly --provider openai` | no cache |
-| `llm-usage monthly --provider openai --pricing-offline` | with cache | 0.
+| Tool                                                                    | Cache mode | Median (s) | Mean (s) |
+| ----------------------------------------------------------------------- | ---------- | ---------: | -------: |
+| `ccusage-codex monthly`                                                 | no cache   |     16.785 |   17.288 |
+| `ccusage-codex monthly --offline`                                       | with cache |     16.995 |   17.594 |
+| `llm-usage monthly --provider openai --source codex`                    | no cache   |      3.651 |    3.760 |
+| `llm-usage monthly --provider openai --source codex --pricing-offline`  | with cache |      0.746 |    0.724 |
 
-
+Speedups (median): `4.60x` faster cold, `22.78x` faster cached.
 
--
-
-
+Multi-source OpenAI (`--source pi,codex,gemini,opencode`):
+
+| Tool                                                                                       | Cache mode | Median (s) | Mean (s) |
+| ------------------------------------------------------------------------------------------ | ---------- | ---------: | -------: |
+| `ccusage-codex monthly`                                                                    | no cache   |     17.297 |   17.463 |
+| `ccusage-codex monthly --offline`                                                          | with cache |     16.698 |   16.745 |
+| `llm-usage monthly --provider openai --source pi,codex,gemini,opencode`                    | no cache   |      4.767 |    4.864 |
+| `llm-usage monthly --provider openai --source pi,codex,gemini,opencode --pricing-offline`  | with cache |      0.941 |    0.951 |
+
+Speedups (median): `3.63x` faster cold, `17.75x` faster cached.
 
 Full methodology, cache-mode definition, and scope caveats are documented in the Astro docs: [Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/).
 
-Re-run benchmark locally:
+Re-run direct parity benchmark locally:
 
 ```bash
-pnpm run perf:production-benchmark -- --runs 5
+pnpm run perf:production-benchmark -- --runs 5 --llm-source codex
+```
+
+Re-run multi-source OpenAI benchmark locally:
+
+```bash
+pnpm run perf:production-benchmark -- --runs 5 --llm-source pi,codex,gemini,opencode
 ```
 
 Generate machine-readable artifacts:
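The new speedup lines are consistent with dividing the baseline median by the llm-usage median in each table. A quick check (an assumed derivation; the README does not state the formula):

```bash
# Recompute the claimed speedups from the published median timings.
awk 'BEGIN {
  printf "codex cold:   %.2fx\n", 16.785 / 3.651   # ~4.60, matches the README
  printf "codex cached: %.2fx\n", 16.995 / 0.746   # ~22.78, matches the README
  printf "multi cold:   %.2fx\n", 17.297 / 4.767   # ~3.63, matches the README
  printf "multi cached: %.2fx\n", 16.698 / 0.941   # prints 17.74; README says 17.75x, presumably from unrounded medians
}'
```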
@@ -208,8 +234,15 @@ Generate machine-readable artifacts:
 ```bash
 pnpm run perf:production-benchmark -- \
   --runs 5 \
-  --
-  --
+  --llm-source codex \
+  --json-output ./tmp/production-benchmark-openai-codex.json \
+  --markdown-output ./tmp/production-benchmark-openai-codex.md
+
+pnpm run perf:production-benchmark -- \
+  --runs 5 \
+  --llm-source pi,codex,gemini,opencode \
+  --json-output ./tmp/production-benchmark-openai-multi-source.json \
+  --markdown-output ./tmp/production-benchmark-openai-multi-source.md
 ```
 
 ## ⚙️ Configuration
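The artifact schema is not shown in this diff; assuming only that the `--json-output` file is valid JSON, a minimal smoke test could be:

```bash
# Verify the benchmark artifact parses before feeding it to other tooling.
jq . ./tmp/production-benchmark-openai-codex.json > /dev/null && echo "artifact OK"
```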