llm-usage-metrics 0.3.4 → 0.3.6
- package/README.md +65 -31
- package/dist/index.js +457 -108
- package/dist/index.js.map +1 -1
- package/package.json +4 -2
package/README.md
CHANGED
@@ -21,11 +21,11 @@
 
 ---
 
-Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, **Gemini CLI**, and **OpenCode** with zero configuration required.
+Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, **Gemini CLI**, **Droid CLI**, and **OpenCode** with zero configuration required.
 
 ## ✨ Features
 
-- **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, `.gemini`, and OpenCode session data
+- **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, `.gemini`, `.factory`, and OpenCode session data
 - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
 - **Flexible Reports** — Daily, weekly, and monthly aggregations
 - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
@@ -39,7 +39,7 @@ Aggregate token usage and costs from your local coding agent sessions. Supports
 npm install -g llm-usage-metrics
 
 # Or run without installing
-npx llm-usage-metrics daily
+npx llm-usage-metrics@latest daily
 
 # Generate your first report
 llm-usage daily
@@ -53,15 +53,18 @@ llm-usage daily
 
 ## 📋 Supported Sources
 
-| Source | Pattern
-| -------------- |
-| **pi** | `~/.pi/agent/sessions/**/*.jsonl`
-| **codex** | `~/.codex/sessions/**/*.jsonl`
-| **Gemini CLI** | `~/.gemini/tmp/*/chats/*.json`
-| **
+| Source | Pattern | Discovery |
+| -------------- | ---------------------------------------- | -------------------------------- |
+| **pi** | `~/.pi/agent/sessions/**/*.jsonl` | Automatic |
+| **codex** | `~/.codex/sessions/**/*.jsonl` | Automatic |
+| **Gemini CLI** | `~/.gemini/tmp/*/chats/*.json` | Automatic |
+| **Droid CLI** | `~/.factory/sessions/**/*.settings.json` | Automatic |
+| **OpenCode** | `~/.opencode/opencode.db` | Auto or explicit `--opencode-db` |
 
 OpenCode source support requires Node.js 24+ runtime with built-in `node:sqlite`.
 
+For `droid`, `Input`, `Output`, `Reasoning`, `Cache Read`, and `Cache Write` come directly from session files, and `totalTokens` is billable raw tokens (`Input + Output + Cache Read + Cache Write`, excluding `Reasoning`). Factory dashboard totals may differ because Factory applies standard-token normalization/multipliers.
+
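The droid token-accounting rule added in this hunk can be sketched as a small helper. The function and field names below are hypothetical illustrations (the package's internal shapes may differ), and the numbers are made up:

```javascript
// Sketch of the billable-token rule for droid sessions described above:
// totalTokens = Input + Output + Cache Read + Cache Write, excluding Reasoning.
function billableTokens({ input, output, cacheRead, cacheWrite }) {
  // Reasoning tokens are reported separately and intentionally left out of the sum.
  return input + output + cacheRead + cacheWrite;
}

// Made-up session entry: 300 reasoning tokens are present but not billed as raw tokens.
const usage = { input: 1200, output: 400, reasoning: 300, cacheRead: 5000, cacheWrite: 800 };
console.log(billableTokens(usage)); // 7400
```

This is also why the README warns that Factory's dashboard can disagree: Factory normalizes to standard tokens, while this sum is raw tokens.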
 ## 🎯 Usage
 
 ### Basic Reports
@@ -125,16 +128,17 @@ For source-by-source comparisons, run the same report per source:
 llm-usage efficiency monthly --repo-dir /path/to/repo --source pi
 llm-usage efficiency monthly --repo-dir /path/to/repo --source codex
 llm-usage efficiency monthly --repo-dir /path/to/repo --source gemini
+llm-usage efficiency monthly --repo-dir /path/to/repo --source droid
 llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode
 ```
 
-Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
+Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--droid-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
 
 ### Filtering
 
 ```bash
 # By source
-llm-usage monthly --source pi,codex,gemini
+llm-usage monthly --source pi,codex,gemini,droid
 
 # By provider
 llm-usage monthly --provider openai
@@ -150,10 +154,11 @@ llm-usage monthly --source opencode --provider openai --model gpt-4.1
 
 ```bash
 # Custom directories
-llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini
+llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini --source-dir droid=/path/to/.factory/sessions
 
-# Explicit Gemini/OpenCode paths
+# Explicit Gemini/Droid/OpenCode paths
 llm-usage daily --gemini-dir /path/to/.gemini
+llm-usage daily --droid-dir /path/to/.factory/sessions
 llm-usage daily --opencode-db /path/to/opencode.db
 ```
 
@@ -169,41 +174,61 @@ llm-usage monthly --ignore-pricing-failures
 
 ## 🧪 Production Benchmarks
 
-Benchmarked on **February
+Benchmarked on **February 27, 2026** on a local production machine:
 
 - OS: CachyOS (Linux 6.19.2-2-cachyos)
 - CPU: Intel Core Ultra 9 185H (22 logical CPUs)
 - RAM: 62 GiB
 - Storage: NVMe SSD
 
-Compared
+Compared scenarios:
 
 ```bash
+# direct source-to-source parity (openai provider)
 ccusage-codex monthly
-llm-usage monthly --provider openai
+llm-usage monthly --provider openai --source codex
+
+# multi-source comparison for one provider (openai)
+ccusage-codex monthly
+llm-usage monthly --provider openai --source pi,codex,gemini,opencode
 ```
 
-Timed benchmark summary (5 runs per scenario)
+Timed benchmark summary (5 runs per scenario).
 
-
-| ------------------------------------------------------- | ---------- | ---------: | -------: |
-| `ccusage-codex monthly` | no cache | 14.247 | 14.456 |
-| `ccusage-codex monthly --offline` | with cache | 14.043 | 14.268 |
-| `llm-usage monthly --provider openai` | no cache | 4.192 | 4.196 |
-| `llm-usage monthly --provider openai --pricing-offline` | with cache | 0.793 | 0.784 |
+Direct source-to-source parity (`--source codex`):
 
-
+| Tool | Cache mode | Median (s) | Mean (s) |
+| ---------------------------------------------------------------------- | ---------- | ---------: | -------: |
+| `ccusage-codex monthly` | no cache | 16.785 | 17.288 |
+| `ccusage-codex monthly --offline` | with cache | 16.995 | 17.594 |
+| `llm-usage monthly --provider openai --source codex` | no cache | 3.651 | 3.760 |
+| `llm-usage monthly --provider openai --source codex --pricing-offline` | with cache | 0.746 | 0.724 |
 
-
-
--
+Speedups (median): `4.60x` faster cold, `22.78x` faster cached.
+
+Multi-source OpenAI (`--source pi,codex,gemini,opencode`):
+
+| Tool | Cache mode | Median (s) | Mean (s) |
+| ----------------------------------------------------------------------------------------- | ---------- | ---------: | -------: |
+| `ccusage-codex monthly` | no cache | 17.297 | 17.463 |
+| `ccusage-codex monthly --offline` | with cache | 16.698 | 16.745 |
+| `llm-usage monthly --provider openai --source pi,codex,gemini,opencode` | no cache | 4.767 | 4.864 |
+| `llm-usage monthly --provider openai --source pi,codex,gemini,opencode --pricing-offline` | with cache | 0.941 | 0.951 |
+
+Speedups (median): `3.63x` faster cold, `17.75x` faster cached.
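The quoted speedups follow from dividing the medians in the tables above. A quick sanity check (the `speedup` helper is a hypothetical illustration, not part of the package):

```javascript
// Recompute the quoted median speedups from the published table values.
function speedup(baselineSeconds, candidateSeconds) {
  return (baselineSeconds / candidateSeconds).toFixed(2) + "x";
}

// Direct parity (--source codex): ccusage-codex median vs llm-usage median
console.log(speedup(16.785, 3.651)); // "4.60x" cold
console.log(speedup(16.995, 0.746)); // "22.78x" cached

// Multi-source OpenAI medians
console.log(speedup(17.297, 4.767)); // "3.63x" cold
console.log(speedup(16.698, 0.941)); // "17.74x"; the quoted 17.75x presumably uses unrounded medians
```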
 
 Full methodology, cache-mode definition, and scope caveats are documented in the Astro docs: [Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/).
 
-Re-run benchmark locally:
+Re-run direct parity benchmark locally:
 
 ```bash
-pnpm run perf:production-benchmark -- --runs 5
+pnpm run perf:production-benchmark -- --runs 5 --llm-source codex
+```
+
+Re-run multi-source OpenAI benchmark locally:
+
+```bash
+pnpm run perf:production-benchmark -- --runs 5 --llm-source pi,codex,gemini,opencode
 ```
 
 Generate machine-readable artifacts:
@@ -211,8 +236,15 @@ Generate machine-readable artifacts:
 ```bash
 pnpm run perf:production-benchmark -- \
 --runs 5 \
---
---
+--llm-source codex \
+--json-output ./tmp/production-benchmark-openai-codex.json \
+--markdown-output ./tmp/production-benchmark-openai-codex.md
+
+pnpm run perf:production-benchmark -- \
+--runs 5 \
+--llm-source pi,codex,gemini,opencode \
+--json-output ./tmp/production-benchmark-openai-multi-source.json \
+--markdown-output ./tmp/production-benchmark-openai-multi-source.md
 ```
 
 ## ⚙️ Configuration
@@ -226,6 +258,8 @@ pnpm run perf:production-benchmark -- \
 | `LLM_USAGE_PARSE_MAX_PARALLEL` | Max parallel file parses (`1-64`) |
 | `LLM_USAGE_PARSE_CACHE_ENABLED` | Enable parse cache (`1/0`) |
 
+Parse cache is source-sharded on disk (`parse-file-cache.<source>.json`) so source-scoped runs avoid loading unrelated cache blobs.
+
 See full environment variable reference in the [documentation](https://ayagmar.github.io/llm-usage-metrics/configuration/).
 
 ### Update Checks