llm-usage-metrics 0.3.2 → 0.3.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -21,13 +21,14 @@
 
  ---
 
- Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, and **OpenCode** with zero configuration required.
+ Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, **Gemini CLI**, and **OpenCode** with zero configuration required.
 
  ## ✨ Features
 
- - **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, and OpenCode session data
+ - **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, `.gemini`, and OpenCode session data
  - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
  - **Flexible Reports** — Daily, weekly, and monthly aggregations
+ - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
  - **Multiple Outputs** — Terminal tables, JSON, or Markdown
  - **Smart Filtering** — By source, provider, model, and date ranges
 
@@ -52,11 +53,14 @@ llm-usage daily
 
  ## 📋 Supported Sources
 
- | Source       | Pattern                           | Discovery                        |
- | ------------ | --------------------------------- | -------------------------------- |
- | **pi**       | `~/.pi/agent/sessions/**/*.jsonl` | Automatic                        |
- | **codex**    | `~/.codex/sessions/**/*.jsonl`    | Automatic                        |
- | **OpenCode** | `~/.opencode/opencode.db`         | Auto or explicit `--opencode-db` |
+ | Source         | Pattern                           | Discovery                        |
+ | -------------- | --------------------------------- | -------------------------------- |
+ | **pi**         | `~/.pi/agent/sessions/**/*.jsonl` | Automatic                        |
+ | **codex**      | `~/.codex/sessions/**/*.jsonl`    | Automatic                        |
+ | **Gemini CLI** | `~/.gemini/tmp/*/chats/*.json`    | Automatic                        |
+ | **OpenCode**   | `~/.opencode/opencode.db`         | Auto or explicit `--opencode-db` |
+
+ OpenCode source support requires a Node.js 24+ runtime with built-in `node:sqlite`.
 
  ## 🎯 Usage
 
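The zero-config discovery patterns in the table above amount to expanding home-relative globs per source. A minimal sketch, assuming discovery is a straight glob expansion (the `discover` helper and `SOURCE_PATTERNS` mapping are hypothetical, not the package's actual code):

```python
from pathlib import Path

# Hypothetical illustration of zero-config discovery over the documented
# patterns; llm-usage's real implementation is not shown in this diff.
SOURCE_PATTERNS = {
    "pi": ".pi/agent/sessions/**/*.jsonl",
    "codex": ".codex/sessions/**/*.jsonl",
    "gemini": ".gemini/tmp/*/chats/*.json",
}

def discover(home: Path = Path.home()) -> dict[str, list[Path]]:
    """Return session files per source by expanding each glob under the home dir."""
    return {name: sorted(home.glob(pattern)) for name, pattern in SOURCE_PATTERNS.items()}
```

OpenCode is absent here because it is a SQLite database rather than a file glob, which is why it alone supports an explicit `--opencode-db` override.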
@@ -86,11 +90,51 @@ llm-usage daily --markdown
  llm-usage monthly --per-model-columns
  ```
 
+ ### Efficiency Reports
+
+ ```bash
+ # Daily efficiency in current repository
+ llm-usage efficiency daily
+
+ # Weekly efficiency for a specific repository path
+ llm-usage efficiency weekly --repo-dir /path/to/repo
+
+ # Include merge commits and export JSON
+ llm-usage efficiency monthly --include-merge-commits --json
+ ```
+
+ Efficiency reports are repo-attributed: usage events are mapped to a Git repository root using source metadata (`cwd`/path info), and only events attributed to the selected repo are included in efficiency totals.
+
+ #### Reading efficiency output
+
+ - `Commits`, `+Lines`, `-Lines`, `ΔLines` come from local Git shortstat outcomes (for your configured Git author).
+ - `Input`, `Output`, `Reasoning`, `Cache Read`, `Cache Write`, `Total`, and `Cost` come from repo-attributed usage events.
+ - `All Tokens/Commit` uses `Total / Commits` and includes cache read/write tokens.
+ - `Non-Cache/Commit` uses `(Input + Output + Reasoning) / Commits` and excludes cache read/write tokens.
+ - `$/Commit` uses `Cost / Commits`.
+ - `$/1k Lines` uses `Cost / (ΔLines / 1000)`.
+ - `Commits/$` uses `Commits / Cost` (shown only when `Cost > 0`).
+
+ Efficiency period rows are emitted only when both Git outcomes and repo-attributed usage signals exist for that period.
+ When a denominator is zero, derived values in emitted rows render as `-`.
+ When pricing is incomplete, terminal/markdown output prefixes affected USD metrics with `~`.
+
+ For source-by-source comparisons, run the same report per source:
+
+ ```bash
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source pi
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source codex
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source gemini
+ llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode
+ ```
+
+ Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.
+
  ### Filtering
 
  ```bash
  # By source
- llm-usage monthly --source pi,codex
+ llm-usage monthly --source pi,codex,gemini
 
  # By provider
  llm-usage monthly --provider openai
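The derived columns added in this hunk follow directly from the listed formulas. A minimal sketch of that math, including the documented `-` rendering for zero denominators (the `derived_metrics` helper and its signature are hypothetical, for illustration only):

```python
# Hypothetical sketch of the derived efficiency columns; the ratio math
# mirrors the README's formulas, the structure is illustrative only.
def derived_metrics(commits: int, delta_lines: int, total: int,
                    input_t: int, output_t: int, reasoning_t: int,
                    cost: float) -> dict[str, str]:
    def ratio(num: float, den: float) -> str:
        # Per the README, zero denominators render as "-" in emitted rows.
        return f"{num / den:.2f}" if den else "-"

    return {
        "All Tokens/Commit": ratio(total, commits),  # includes cache tokens
        "Non-Cache/Commit": ratio(input_t + output_t + reasoning_t, commits),
        "$/Commit": ratio(cost, commits),
        "$/1k Lines": ratio(cost, delta_lines / 1000),
        "Commits/$": ratio(commits, cost),  # shown only when Cost > 0
    }
```

For example, 4 commits, 2000 changed lines, 8000 total tokens (3000 input + 2000 output + 1000 reasoning) and $2.00 cost yield 2000 all-tokens/commit, 1500 non-cache/commit, $0.50/commit, $1.00 per 1k lines, and 2 commits/$.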
@@ -106,9 +150,10 @@ llm-usage monthly --source opencode --provider openai --model gpt-4.1
 
  ```bash
  # Custom directories
- llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex
+ llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex --source-dir gemini=/path/to/.gemini
 
- # Explicit OpenCode database
+ # Explicit Gemini/OpenCode paths
+ llm-usage daily --gemini-dir /path/to/.gemini
  llm-usage daily --opencode-db /path/to/opencode.db
  ```
 
@@ -117,6 +162,57 @@ llm-usage daily --opencode-db /path/to/opencode.db
  ```bash
  # Use cached pricing only
  llm-usage monthly --pricing-offline
+
+ # Continue even if pricing fetch fails
+ llm-usage monthly --ignore-pricing-failures
+ ```
+
+ ## 🧪 Production Benchmarks
+
+ Benchmarked on **February 24, 2026** on a local production machine:
+
+ - OS: CachyOS (Linux 6.19.2-2-cachyos)
+ - CPU: Intel Core Ultra 9 185H (22 logical CPUs)
+ - RAM: 62 GiB
+ - Storage: NVMe SSD
+
+ Compared commands:
+
+ ```bash
+ ccusage-codex monthly
+ llm-usage monthly --provider openai
+ ```
+
+ Timed benchmark summary (5 runs per scenario):
+
+ | Tool                                                    | Cache mode | Median (s) | Mean (s) |
+ | ------------------------------------------------------- | ---------- | ---------: | -------: |
+ | `ccusage-codex monthly`                                 | no cache   |     14.247 |   14.456 |
+ | `ccusage-codex monthly --offline`                       | with cache |     14.043 |   14.268 |
+ | `llm-usage monthly --provider openai`                   | no cache   |      4.192 |    4.196 |
+ | `llm-usage monthly --provider openai --pricing-offline` | with cache |      0.793 |    0.784 |
+
+ On this dataset and machine:
+
+ - `llm-usage` is `3.40x` faster than `ccusage-codex` in no-cache mode.
+ - `llm-usage` is `17.71x` faster than `ccusage-codex` in cached mode.
+ - `llm-usage` improves `5.29x` with cache; `ccusage-codex` improves `1.01x`.
+
+ Full methodology, cache-mode definition, and scope caveats are documented in the Astro docs: [Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/).
+
+ Re-run the benchmark locally:
+
+ ```bash
+ pnpm run perf:production-benchmark -- --runs 5
+ ```
+
+ Generate machine-readable artifacts:
+
+ ```bash
+ pnpm run perf:production-benchmark -- \
+   --runs 5 \
+   --json-output ./tmp/production-benchmark.json \
+   --markdown-output ./tmp/production-benchmark.md
  ```
 
  ## ⚙️ Configuration
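As a cross-check, the quoted multipliers in this hunk follow from the median column of the benchmark table; a minimal sketch of that arithmetic (median-to-median ratios, rounded to two decimals):

```python
# Reproduce the speedup ratios quoted in the benchmark summary from the
# median timings in the table. Pure arithmetic; not project code.
medians = {
    ("ccusage-codex", "no cache"): 14.247,
    ("ccusage-codex", "with cache"): 14.043,
    ("llm-usage", "no cache"): 4.192,
    ("llm-usage", "with cache"): 0.793,
}

def speedup(slow: float, fast: float) -> float:
    """Ratio of slower to faster median, rounded as in the README."""
    return round(slow / fast, 2)

no_cache = speedup(medians[("ccusage-codex", "no cache")], medians[("llm-usage", "no cache")])            # 3.40
cached = speedup(medians[("ccusage-codex", "with cache")], medians[("llm-usage", "with cache")])          # 17.71
llm_gain = speedup(medians[("llm-usage", "no cache")], medians[("llm-usage", "with cache")])              # 5.29
ccusage_gain = speedup(medians[("ccusage-codex", "no cache")], medians[("ccusage-codex", "with cache")])  # 1.01
```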
@@ -169,8 +265,10 @@ pnpm cli daily
 
  - **[Getting Started](https://ayagmar.github.io/llm-usage-metrics/getting-started/)** — Installation and first steps
  - **[CLI Reference](https://ayagmar.github.io/llm-usage-metrics/cli-reference/)** — Complete command reference
+ - **[Efficiency](https://ayagmar.github.io/llm-usage-metrics/efficiency/)** — Efficiency report semantics and interpretation
  - **[Data Sources](https://ayagmar.github.io/llm-usage-metrics/sources/)** — Source configuration
  - **[Configuration](https://ayagmar.github.io/llm-usage-metrics/configuration/)** — Environment variables
+ - **[Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/)** — Production benchmark methodology and results
  - **[Architecture](https://ayagmar.github.io/llm-usage-metrics/architecture/)** — Technical overview
 
  ## 🤝 Contributing