llm-usage-metrics 0.3.5 → 0.3.7

package/README.md CHANGED
@@ -29,8 +29,9 @@ Aggregate token usage and costs from your local coding agent sessions. Supports
  - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
  - **Flexible Reports** — Daily, weekly, and monthly aggregations
  - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
+ - **Optimize Reports** — Counterfactual candidate-model pricing against observed token mix
  - **Multiple Outputs** — Terminal tables, JSON, or Markdown
- - **Smart Filtering** — By source, provider, model, and date ranges
+ - **Smart Filtering** — By source, billing provider, model, and date ranges

  ## 🚀 Quick Start

@@ -63,6 +64,8 @@ llm-usage daily

  OpenCode source support requires Node.js 24+ runtime with built-in `node:sqlite`.

+ For `droid`, the `Input`, `Output`, `Reasoning`, `Cache Read`, and `Cache Write` values come directly from session files, and `totalTokens` is the billable raw total (`Input + Output + Cache Read + Cache Write`, excluding `Reasoning`). Factory dashboard totals may differ because Factory applies standard-token normalization/multipliers.
+
  ## 🎯 Usage

  ### Basic Reports
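
The added `droid` note pins `totalTokens` to plain arithmetic over the session-file fields. A minimal TypeScript sketch of that formula (field names are assumptions for illustration, not the package's internal types):

```ts
// Sketch of the README's billable-total rule for droid, not the tool's internals.
interface DroidUsage {
  input: number;
  output: number;
  reasoning: number; // reported in session files, but excluded from the billable total
  cacheRead: number;
  cacheWrite: number;
}

// totalTokens = Input + Output + Cache Read + Cache Write (Reasoning excluded)
function billableTotalTokens(u: DroidUsage): number {
  return u.input + u.output + u.cacheRead + u.cacheWrite;
}

// Example: the 500 reasoning tokens do not count toward the billable total.
const total = billableTotalTokens({
  input: 1200, output: 800, reasoning: 500, cacheRead: 3000, cacheWrite: 400,
});
console.log(total); // 5400, not 5900
```

Factory's own dashboard can still disagree with this number, since, per the note above, it applies standard-token normalization/multipliers.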
@@ -132,6 +135,18 @@ llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode

  Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--droid-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.

+ ### Optimize Reports
+
+ ```bash
+ # Counterfactual pricing across candidate models
+ llm-usage optimize monthly --provider openai --candidate-model gpt-4.1 --candidate-model gpt-5-codex
+
+ # Keep only the cheapest candidate in JSON output
+ llm-usage optimize weekly --provider openai --candidate-model gpt-4.1,gpt-5-codex --top 1 --json
+ ```
+
+ `--provider` filters by billing entity. Provider aliases are normalized to billing roots (for example, `openai-codex` is treated as `openai`).
+
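
The counterfactual in the new `optimize` command amounts to repricing the observed token mix under each candidate model's rates, with `--top 1` then keeping the cheapest result. A sketch of that idea in TypeScript, using hypothetical rates and placeholder model names (this is not the tool's implementation; real pricing comes from the LiteLLM sync):

```ts
// Reprice one observed token mix under hypothetical candidate rates
// (USD per million tokens). Placeholder names and numbers throughout.
interface Rates { input: number; output: number }

const observed = { inputTokens: 4_200_000, outputTokens: 950_000 };

const candidates: Record<string, Rates> = {
  "candidate-a": { input: 2.0, output: 8.0 },
  "candidate-b": { input: 1.25, output: 10.0 },
};

const costs = Object.entries(candidates).map(([model, rates]) => ({
  model,
  cost: (observed.inputTokens / 1e6) * rates.input +
        (observed.outputTokens / 1e6) * rates.output,
}));

// "--top 1" analogue: keep only the cheapest candidate.
costs.sort((a, b) => a.cost - b.cost);
console.log(costs[0]); // { model: "candidate-b", cost: 14.75 }
```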
  ### Filtering

  ```bash
@@ -148,6 +163,8 @@ llm-usage monthly --model claude
  llm-usage monthly --source opencode --provider openai --model gpt-4.1
  ```

+ Use `--source` to scope where events came from (`pi`, `codex`, `gemini`, `droid`, `opencode`), and `--provider` to scope the billing entity behind those events.
+
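
To make the distinction in the added note concrete, a toy TypeScript predicate (types and helper are illustrative, not the tool's internals): an event recorded by `opencode` can still be billed by `openai`, so it passes a `--source opencode` filter and a `--provider openai` filter at the same time.

```ts
// Illustrative only: "--source" scopes where an event was recorded,
// "--provider" scopes the billing entity behind it.
interface UsageEvent { source: string; provider: string; model: string }

function matches(e: UsageEvent, f: { source?: string; provider?: string }): boolean {
  return (!f.source || e.source === f.source) &&
         (!f.provider || e.provider === f.provider);
}

const event = { source: "opencode", provider: "openai", model: "gpt-4.1" };
console.log(matches(event, { source: "opencode", provider: "openai" })); // true
console.log(matches(event, { source: "codex" }));                        // false
```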
  ### Custom Paths

  ```bash
@@ -256,6 +273,8 @@ pnpm run perf:production-benchmark -- \
  | `LLM_USAGE_PARSE_MAX_PARALLEL` | Max parallel file parses (`1-64`) |
  | `LLM_USAGE_PARSE_CACHE_ENABLED` | Enable parse cache (`1/0`) |

+ The parse cache is source-sharded on disk (`parse-file-cache.<source>.json`), so source-scoped runs avoid loading unrelated cache blobs.
+
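
A sketch of the sharding layout the added note names, assuming one JSON file per source; the cache-directory parameter and helper names are illustrative, not the package's internals:

```ts
// One cache shard per source means a source-scoped run reads only its own blob.
import { join } from "node:path";
import { readFile } from "node:fs/promises";

function shardPath(cacheDir: string, source: string): string {
  return join(cacheDir, `parse-file-cache.${source}.json`);
}

async function loadShard(cacheDir: string, source: string): Promise<Record<string, unknown>> {
  try {
    return JSON.parse(await readFile(shardPath(cacheDir, source), "utf8"));
  } catch {
    return {}; // missing or unreadable shard: start with an empty cache
  }
}
```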
  See the full environment variable reference in the [documentation](https://ayagmar.github.io/llm-usage-metrics/configuration/).

  ### Update Checks
@@ -296,6 +315,7 @@ pnpm cli daily
  - **[Getting Started](https://ayagmar.github.io/llm-usage-metrics/getting-started/)** — Installation and first steps
  - **[CLI Reference](https://ayagmar.github.io/llm-usage-metrics/cli-reference/)** — Complete command reference
  - **[Efficiency](https://ayagmar.github.io/llm-usage-metrics/efficiency/)** — Efficiency report semantics and interpretation
+ - **[Optimize](https://ayagmar.github.io/llm-usage-metrics/optimize/)** — Counterfactual candidate-model pricing semantics
  - **[Data Sources](https://ayagmar.github.io/llm-usage-metrics/sources/)** — Source configuration
  - **[Configuration](https://ayagmar.github.io/llm-usage-metrics/configuration/)** — Environment variables
  - **[Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/)** — Production benchmark methodology and results