llm-usage-metrics 0.3.4 → 0.3.5

This diff represents the content of publicly available package versions released to one of the supported registries; it is provided for informational purposes only and reflects the changes between those versions as they appear in their public registries.
package/README.md CHANGED
@@ -21,11 +21,11 @@
 
 ---
 
-Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, **Gemini CLI**, and **OpenCode** with zero configuration required.
+Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, **Gemini CLI**, **Droid CLI**, and **OpenCode** with zero configuration required.
 
 ## ✨ Features
 
-- **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, `.gemini`, and OpenCode session data
+- **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, `.gemini`, `.factory`, and OpenCode session data
 - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
 - **Flexible Reports** — Daily, weekly, and monthly aggregations
 - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
@@ -39,7 +39,7 @@ Aggregate token usage and costs from your local coding agent sessions. Supports
 npm install -g llm-usage-metrics
 
 # Or run without installing
-npx llm-usage-metrics daily
+npx llm-usage-metrics@latest daily
 
 # Generate your first report
 llm-usage daily
@@ -53,12 +53,13 @@ llm-usage daily
 
 ## 📋 Supported Sources
 
-| Source         | Pattern                           | Discovery                        |
-| -------------- | --------------------------------- | -------------------------------- |
-| **pi**         | `~/.pi/agent/sessions/**/*.jsonl` | Automatic                        |
-| **codex**      | `~/.codex/sessions/**/*.jsonl`    | Automatic                        |
-| **Gemini CLI** | `~/.gemini/tmp/*/chats/*.json`    | Automatic                        |
-| **OpenCode**   | `~/.opencode/opencode.db`         | Auto or explicit `--opencode-db` |
+| Source         | Pattern                                   | Discovery                        |
+| -------------- | ----------------------------------------- | -------------------------------- |
+| **pi**         | `~/.pi/agent/sessions/**/*.jsonl`         | Automatic                        |
+| **codex**      | `~/.codex/sessions/**/*.jsonl`            | Automatic                        |
+| **Gemini CLI** | `~/.gemini/tmp/*/chats/*.json`            | Automatic                        |
+| **Droid CLI**  | `~/.factory/sessions/**/*.settings.json`  | Automatic                        |
+| **OpenCode**   | `~/.opencode/opencode.db`                 | Auto or explicit `--opencode-db` |
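
Editor's note: the new Droid CLI row relies on the same zero-config file discovery as the other sources. A quick way to confirm the glob would match anything on your machine (an editorial sketch, not part of the package diff; `find` stands in for `**` globbing so no `globstar` shell option is needed):

```bash
# List a few Droid CLI session files where auto-discovery looks.
# The ~/.factory/sessions layout is taken from the table above.
find ~/.factory/sessions -name '*.settings.json' 2>/dev/null | head -5
```
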
 
 OpenCode source support requires Node.js 24+ runtime with built-in `node:sqlite`.
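
Editor's note: a minimal way to verify your runtime meets that requirement (a hedged sketch assuming `node` is on `PATH`; `DatabaseSync` is the documented export of the `node:sqlite` built-in):

```bash
# Exits non-zero on runtimes without the built-in sqlite module.
node -e "const { DatabaseSync } = require('node:sqlite'); console.log(process.version, 'has node:sqlite')"
```
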
 
@@ -125,16 +126,17 @@ For source-by-source comparisons, run the same report per source:
 llm-usage efficiency monthly --repo-dir /path/to/repo --source pi
 llm-usage efficiency monthly --repo-dir /path/to/repo --source codex
 llm-usage efficiency monthly --repo-dir /path/to/repo --source gemini
+llm-usage efficiency monthly --repo-dir /path/to/repo --source droid
 llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode
 ```
 
-Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
+Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--droid-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
 
 ### Filtering
 
 ```bash
 # By source
-llm-usage monthly --source pi,codex,gemini
+llm-usage monthly --source pi,codex,gemini,droid
 
 # By provider
 llm-usage monthly --provider openai
@@ -150,10 +152,11 @@ llm-usage monthly --source opencode --provider openai --model gpt-4.1
 
 ```bash
 # Custom directories
-llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini
+llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini --source-dir droid=/path/to/.factory/sessions
 
-# Explicit Gemini/OpenCode paths
+# Explicit Gemini/Droid/OpenCode paths
 llm-usage daily --gemini-dir /path/to/.gemini
+llm-usage daily --droid-dir /path/to/.factory/sessions
 llm-usage daily --opencode-db /path/to/opencode.db
 ```
 
@@ -169,41 +172,61 @@ llm-usage monthly --ignore-pricing-failures
 
 ## 🧪 Production Benchmarks
 
-Benchmarked on **February 24, 2026** on a local production machine:
+Benchmarked on **February 27, 2026** on a local production machine:
 
 - OS: CachyOS (Linux 6.19.2-2-cachyos)
 - CPU: Intel Core Ultra 9 185H (22 logical CPUs)
 - RAM: 62 GiB
 - Storage: NVMe SSD
 
-Compared commands:
+Compared scenarios:
 
 ```bash
+# direct source-to-source parity (openai provider)
 ccusage-codex monthly
-llm-usage monthly --provider openai
+llm-usage monthly --provider openai --source codex
+
+# multi-source comparison for one provider (openai)
+ccusage-codex monthly
+llm-usage monthly --provider openai --source pi,codex,gemini,opencode
 ```
 
-Timed benchmark summary (5 runs per scenario):
+Timed benchmark summary (5 runs per scenario).
+
+Direct source-to-source parity (`--source codex`):
 
-| Tool                                                    | Cache mode | Median (s) | Mean (s) |
-| ------------------------------------------------------- | ---------- | ---------: | -------: |
-| `ccusage-codex monthly`                                 | no cache   |     14.247 |   14.456 |
-| `ccusage-codex monthly --offline`                       | with cache |     14.043 |   14.268 |
-| `llm-usage monthly --provider openai`                   | no cache   |      4.192 |    4.196 |
-| `llm-usage monthly --provider openai --pricing-offline` | with cache |      0.793 |    0.784 |
+| Tool                                                                    | Cache mode | Median (s) | Mean (s) |
+| ----------------------------------------------------------------------- | ---------- | ---------: | -------: |
+| `ccusage-codex monthly`                                                 | no cache   |     16.785 |   17.288 |
+| `ccusage-codex monthly --offline`                                       | with cache |     16.995 |   17.594 |
+| `llm-usage monthly --provider openai --source codex`                    | no cache   |      3.651 |    3.760 |
+| `llm-usage monthly --provider openai --source codex --pricing-offline`  | with cache |      0.746 |    0.724 |
 
-On this dataset and machine:
+Speedups (median): `4.60x` faster cold, `22.78x` faster cached.
 
-- `llm-usage` is `3.40x` faster than `ccusage-codex` in no-cache mode.
-- `llm-usage` is `17.71x` faster than `ccusage-codex` in cached mode.
-- `llm-usage` improves `5.29x` with cache; `ccusage-codex` improves `1.01x`.
+Multi-source OpenAI (`--source pi,codex,gemini,opencode`):
+
+| Tool                                                                                        | Cache mode | Median (s) | Mean (s) |
+| ------------------------------------------------------------------------------------------- | ---------- | ---------: | -------: |
+| `ccusage-codex monthly`                                                                     | no cache   |     17.297 |   17.463 |
+| `ccusage-codex monthly --offline`                                                           | with cache |     16.698 |   16.745 |
+| `llm-usage monthly --provider openai --source pi,codex,gemini,opencode`                     | no cache   |      4.767 |    4.864 |
+| `llm-usage monthly --provider openai --source pi,codex,gemini,opencode --pricing-offline`   | with cache |      0.941 |    0.951 |
+
+Speedups (median): `3.63x` faster cold, `17.75x` faster cached.
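
Editor's note: both speedup lines follow from the median columns above. A quick cross-check (plain `awk`, nothing package-specific assumed):

```bash
# Reproduce the quoted median speedups from the two tables above.
awk 'BEGIN {
  printf "direct parity: %.2fx cold, %.2fx cached\n", 16.785/3.651, 16.995/0.746
  printf "multi-source : %.2fx cold, %.2fx cached\n", 17.297/4.767, 16.698/0.941
}'
# → direct parity: 4.60x cold, 22.78x cached
# → multi-source : 3.63x cold, 17.74x cached
```

The multi-source cached figure comes out as `17.74x` from the rounded medians shown; the README's `17.75x` presumably derives from the unrounded timings.
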
 
 Full methodology, cache-mode definition, and scope caveats are documented in the Astro docs: [Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/).
 
-Re-run benchmark locally:
+Re-run direct parity benchmark locally:
 
 ```bash
-pnpm run perf:production-benchmark -- --runs 5
+pnpm run perf:production-benchmark -- --runs 5 --llm-source codex
+```
+
+Re-run multi-source OpenAI benchmark locally:
+
+```bash
+pnpm run perf:production-benchmark -- --runs 5 --llm-source pi,codex,gemini,opencode
 ```
 
 Generate machine-readable artifacts:
@@ -211,8 +234,15 @@ Generate machine-readable artifacts:
 ```bash
 pnpm run perf:production-benchmark -- \
   --runs 5 \
-  --json-output ./tmp/production-benchmark.json \
-  --markdown-output ./tmp/production-benchmark.md
+  --llm-source codex \
+  --json-output ./tmp/production-benchmark-openai-codex.json \
+  --markdown-output ./tmp/production-benchmark-openai-codex.md
+
+pnpm run perf:production-benchmark -- \
+  --runs 5 \
+  --llm-source pi,codex,gemini,opencode \
+  --json-output ./tmp/production-benchmark-openai-multi-source.json \
+  --markdown-output ./tmp/production-benchmark-openai-multi-source.md
 ```
 
 ## ⚙️ Configuration