llm-usage-metrics 0.3.6 → 0.4.0

package/README.md CHANGED
@@ -29,8 +29,9 @@ Aggregate token usage and costs from your local coding agent sessions. Supports
  - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
  - **Flexible Reports** — Daily, weekly, and monthly aggregations
  - **Efficiency Reports** — Correlate cost/tokens with repository commit outcomes
+ - **Optimize Reports** — Counterfactual candidate-model pricing against observed token mix
  - **Multiple Outputs** — Terminal tables, JSON, or Markdown
- - **Smart Filtering** — By source, provider, model, and date ranges
+ - **Smart Filtering** — By source, billing provider, model, and date ranges

  ## 🚀 Quick Start

@@ -134,6 +135,18 @@ llm-usage efficiency monthly --repo-dir /path/to/repo --source opencode

  Note: usage filters (`--source`, `--provider`, `--model`, `--pi-dir`, `--codex-dir`, `--gemini-dir`, `--droid-dir`, `--opencode-db`, `--source-dir`) also constrain commit attribution: only commit days with matching repo-attributed usage events are counted.

+ ### Optimize Reports
+
+ ```bash
+ # Counterfactual pricing across candidate models
+ llm-usage optimize monthly --provider openai --candidate-model gpt-4.1 --candidate-model gpt-5-codex
+
+ # Keep only the cheapest candidate in JSON output
+ llm-usage optimize weekly --provider openai --candidate-model gpt-4.1,gpt-5-codex --top 1 --json
+ ```
+
+ `--provider` filters by billing entity. Provider aliases are normalized to billing roots (for example, `openai-codex` is treated as `openai`).
+
  ### Filtering

  ```bash
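The "counterfactual candidate-model pricing" added in the hunk above reprices the observed token mix at each candidate model's rates and ranks the results, with `--top` trimming the ranking. A minimal TypeScript sketch of that idea, assuming illustrative per-million-token prices; the type names and helpers here are hypothetical, not the package's internals:

```typescript
// Hypothetical sketch: reprice an observed token mix at each candidate's
// rates and rank candidates cheapest-first. Prices are illustrative.
type Usage = { inputTokens: number; outputTokens: number };
type Price = { inputPerMTok: number; outputPerMTok: number };

function counterfactualCost(observed: Usage[], price: Price): number {
  return observed.reduce(
    (sum, u) =>
      sum +
      (u.inputTokens / 1_000_000) * price.inputPerMTok +
      (u.outputTokens / 1_000_000) * price.outputPerMTok,
    0,
  );
}

// `top` mirrors the CLI's --top flag: keep only the N cheapest candidates.
function rankCandidates(
  observed: Usage[],
  candidates: Record<string, Price>,
  top: number,
): Array<[string, number]> {
  return Object.entries(candidates)
    .map(([model, price]): [string, number] => [model, counterfactualCost(observed, price)])
    .sort((a, b) => a[1] - b[1])
    .slice(0, top);
}
```

The token mix is held fixed; only the price table varies, which is what makes the comparison counterfactual rather than a forecast.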
@@ -150,6 +163,8 @@ llm-usage monthly --model claude
  llm-usage monthly --source opencode --provider openai --model gpt-4.1
  ```

+ Use `--source` to scope where events came from (`pi`, `codex`, `gemini`, `droid`, `opencode`), and `--provider` to scope the billing entity behind those events.
+
  ### Custom Paths

  ```bash
@@ -300,6 +315,7 @@ pnpm cli daily
  - **[Getting Started](https://ayagmar.github.io/llm-usage-metrics/getting-started/)** — Installation and first steps
  - **[CLI Reference](https://ayagmar.github.io/llm-usage-metrics/cli-reference/)** — Complete command reference
  - **[Efficiency](https://ayagmar.github.io/llm-usage-metrics/efficiency/)** — Efficiency report semantics and interpretation
+ - **[Optimize](https://ayagmar.github.io/llm-usage-metrics/optimize/)** — Counterfactual candidate-model pricing semantics
  - **[Data Sources](https://ayagmar.github.io/llm-usage-metrics/sources/)** — Source configuration
  - **[Configuration](https://ayagmar.github.io/llm-usage-metrics/configuration/)** — Environment variables
  - **[Benchmarks](https://ayagmar.github.io/llm-usage-metrics/benchmarks/)** — Production benchmark methodology and results
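The README changes above mention that provider aliases are normalized to billing roots (e.g. `openai-codex` → `openai`). A minimal TypeScript sketch of such a lookup, assuming an illustrative alias table; the mapping and function name are hypothetical, not the package's actual code:

```typescript
// Hypothetical sketch: collapse provider aliases onto the entity that
// actually bills. The alias table is illustrative only.
const BILLING_ROOTS: Record<string, string> = {
  "openai-codex": "openai",
  openai: "openai",
};

function normalizeProvider(provider: string): string {
  const key = provider.trim().toLowerCase();
  // Unknown providers pass through unchanged rather than erroring.
  return BILLING_ROOTS[key] ?? key;
}
```

A pass-through default keeps filtering permissive: an unrecognized `--provider` value still matches events recorded under that exact name.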