llm-usage-metrics 0.3.2 → 0.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -28,6 +28,7 @@ Aggregate token usage and costs from your local coding agent sessions. Supports
  - **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, and OpenCode session data
  - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
  - **Flexible Reports** — Daily, weekly, and monthly aggregations
+ - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
  - **Multiple Outputs** — Terminal tables, JSON, or Markdown
  - **Smart Filtering** — By source, provider, model, and date ranges
 
@@ -58,6 +59,8 @@ llm-usage daily
  | **codex** | `~/.codex/sessions/**/*.jsonl` | Automatic |
  | **OpenCode** | `~/.opencode/opencode.db` | Auto or explicit `--opencode-db` |
 
+ OpenCode source support requires a Node.js 24+ runtime with the built-in `node:sqlite` module.
+
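A quick way to confirm that runtime requirement before pointing the CLI at an OpenCode database (this check only assumes `node` is on `PATH`; it is not a command shipped by the package):

```bash
# Print the Node version, then try to load the built-in module;
# the second command fails on runtimes without node:sqlite.
node --version
node -e "require('node:sqlite'); console.log('node:sqlite is available')"
```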
  ## 🎯 Usage
 
  ### Basic Reports
@@ -86,6 +89,45 @@ llm-usage daily --markdown
  llm-usage monthly --per-model-columns
  ```
 
+ ### Efficiency Reports
+
+ ```bash
+ # Daily efficiency in current repository
+ llm-usage efficiency daily
+
+ # Weekly efficiency for a specific repository path
+ llm-usage efficiency weekly --repo-dir /path/to/repo
+
+ # Include merge commits and export JSON
+ llm-usage efficiency monthly --include-merge-commits --json
+ ```
+
+ Efficiency reports are repo-attributed: usage events are mapped to a Git repository root using source metadata (`cwd`/path info), and only events attributed to the selected repo are included in efficiency totals.
+
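As a rough sketch of that attribution step, a session's working directory resolves to a repository root with plain `git` (hypothetical path; the package's own resolution logic may differ in detail):

```bash
# Illustration only: map a recorded session cwd to the Git repository root
# that efficiency totals would be attributed to.
git -C /path/to/some/cwd rev-parse --show-toplevel
```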
+ #### Reading efficiency output
+
+ - `Commits`, `+Lines`, `-Lines`, `ΔLines` come from local Git shortstat outcomes (for your configured Git author).
+ - `Input`, `Output`, `Reasoning`, `Cache Read`, `Cache Write`, `Total`, and `Cost` come from repo-attributed usage events.
+ - `All Tokens/Commit` uses `Total / Commits` and includes cache read/write tokens.
+ - `Non-Cache/Commit` uses `(Input + Output + Reasoning) / Commits` and excludes cache read/write tokens.
+ - `$/Commit` uses `Cost / Commits`.
+ - `$/1k Lines` uses `Cost / (ΔLines / 1000)`.
+ - `Commits/$` uses `Commits / Cost` (shown only when `Cost > 0`).
+
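The Git-side columns in the list above summarize the same kind of data that plain `git log --shortstat` reports for your author identity; a minimal sketch (the author filter and date range here are illustrative, not necessarily what the package runs):

```bash
# Commits and line churn for the configured Git author over the last week,
# roughly the raw material behind the Commits / +Lines / -Lines / ΔLines columns.
git log --author="$(git config user.email)" --since="1 week ago" --shortstat --no-merges
```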
+ Efficiency period rows are emitted only when both Git outcomes and a repo-attributed usage signal exist for that period.
+ When a denominator is zero, derived values in emitted rows render as `-`.
+ When pricing is incomplete, terminal/markdown output prefixes affected USD metrics with `~`.
+
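As a worked example of the ratios above (all figures invented for illustration, not produced by the package):

```bash
# Hypothetical period: 8 commits, ΔLines = 400, Cost = $1.60,
# Input = 30000, Output = 8000, Reasoning = 2000, Cache Read = 70000, Cache Write = 10000.
awk 'BEGIN {
  commits = 8; dlines = 400; cost = 1.60
  input = 30000; output = 8000; reasoning = 2000; cache_r = 70000; cache_w = 10000
  total = input + output + reasoning + cache_r + cache_w                        # 120000
  printf "All Tokens/Commit: %.0f\n", total / commits                           # 15000
  printf "Non-Cache/Commit:  %.0f\n", (input + output + reasoning) / commits    # 5000
  printf "$/Commit:          %.2f\n", cost / commits                            # 0.20
  printf "$/1k Lines:        %.2f\n", cost / (dlines / 1000)                    # 4.00
  printf "Commits/$:         %.2f\n", commits / cost                            # 5.00
}'
```

With zero commits or zero cost, the corresponding derived values would instead render as `-` (or, for `Commits/$`, be hidden), per the rules above.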
+ For source-by-source comparisons, run the same report per source:
+
+ ```bash
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source pi
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source codex
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode
+ ```
+
+ Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
+
  ### Filtering
 
  ```bash
@@ -117,6 +159,57 @@ llm-usage daily --opencode-db /path/to/opencode.db
  ```bash
  # Use cached pricing only
  llm-usage monthly --pricing-offline
+
+ # Continue even if pricing fetch fails
+ llm-usage monthly --ignore-pricing-failures
+ ```
+
+ ## 🧪 Production Benchmarks
+
+ Benchmarked on **February 24, 2026** on a local production machine:
+
+ - OS: CachyOS (Linux 6.19.2-2-cachyos)
+ - CPU: Intel Core Ultra 9 185H (22 logical CPUs)
+ - RAM: 62 GiB
+ - Storage: NVMe SSD
+
+ Compared commands:
+
+ ```bash
+ ccusage-codex monthly
+ llm-usage monthly --provider openai
+ ```
+
+ Timed benchmark summary (5 runs per scenario):
+
+ | Tool | Cache mode | Median (s) | Mean (s) |
+ | ------------------------------------------------------- | ---------- | ---------: | -------: |
+ | `ccusage-codex monthly` | no cache | 14.247 | 14.456 |
+ | `ccusage-codex monthly --offline` | with cache | 14.043 | 14.268 |
+ | `llm-usage monthly --provider openai` | no cache | 4.192 | 4.196 |
+ | `llm-usage monthly --provider openai --pricing-offline` | with cache | 0.793 | 0.784 |
+
+ On this dataset and machine:
+
+ - `llm-usage` is `3.40x` faster than `ccusage-codex` in no-cache mode.
+ - `llm-usage` is `17.71x` faster than `ccusage-codex` in cached mode.
+ - `llm-usage` improves `5.29x` with cache; `ccusage-codex` improves `1.01x`.
+
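Those multipliers follow directly from the medians in the table; a quick recomputation (values copied from the table above):

```bash
# Recompute the speedup multipliers from the benchmark medians.
awk 'BEGIN {
  printf "no-cache speedup:         %.2fx\n", 14.247 / 4.192    # 3.40
  printf "cached speedup:           %.2fx\n", 14.043 / 0.793    # 17.71
  printf "llm-usage cache gain:     %.2fx\n",  4.192 / 0.793    # 5.29
  printf "ccusage-codex cache gain: %.2fx\n", 14.247 / 14.043   # 1.01
}'
```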
+ Full methodology, cache-mode definition, and scope caveats are documented in the Astro docs: [Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/).
+
+ Re-run the benchmark locally:
+
+ ```bash
+ pnpm run perf:production-benchmark -- --runs 5
+ ```
+
+ Generate machine-readable artifacts:
+
+ ```bash
+ pnpm run perf:production-benchmark -- \
+   --runs 5 \
+   --json-output ./tmp/production-benchmark.json \
+   --markdown-output ./tmp/production-benchmark.md
  ```
 
  ## ⚙️ Configuration
@@ -169,8 +262,10 @@ pnpm cli daily
 
  - **[Getting Started](https://ayagmar.github.io/llm-usage-metrics/getting-started/)** — Installation and first steps
  - **[CLI Reference](https://ayagmar.github.io/llm-usage-metrics/cli-reference/)** — Complete command reference
+ - **[Efficiency](https://ayagmar.github.io/llm-usage-metrics/efficiency/)** — Efficiency report semantics and interpretation
  - **[Data Sources](https://ayagmar.github.io/llm-usage-metrics/sources/)** — Source configuration
  - **[Configuration](https://ayagmar.github.io/llm-usage-metrics/configuration/)** — Environment variables
+ - **[Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/)** — Production benchmark methodology and results
  - **[Architecture](https://ayagmar.github.io/llm-usage-metrics/architecture/)** — Technical overview
 
  ## 🤝 Contributing