llm-usage-metrics 0.3.0 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +105 -262
  2. package/package.json +4 -3
package/README.md CHANGED
@@ -1,341 +1,184 @@
- # llm-usage-metrics
-
- [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ayagmar/llm-usage-metrics)
- [![CI](https://github.com/ayagmar/llm-usage-metrics/actions/workflows/ci.yml/badge.svg)](https://github.com/ayagmar/llm-usage-metrics/actions/workflows/ci.yml)
- [![Coverage](https://img.shields.io/codecov/c/github/ayagmar/llm-usage-metrics)](https://codecov.io/gh/ayagmar/llm-usage-metrics)
- [![npm version](https://img.shields.io/npm/v/llm-usage-metrics.svg)](https://www.npmjs.com/package/llm-usage-metrics)
- [![npm total downloads](https://img.shields.io/npm/dt/llm-usage-metrics.svg)](https://www.npmjs.com/package/llm-usage-metrics)
- [![install size](https://packagephobia.com/badge?p=llm-usage-metrics)](https://packagephobia.com/result?p=llm-usage-metrics)
-
- CLI to aggregate local LLM usage from:
-
- - `~/.pi/agent/sessions/**/*.jsonl`
- - `~/.codex/sessions/**/*.jsonl`
- - OpenCode SQLite DB (auto-discovered or provided via `--opencode-db`)
-
- Reports are available for daily, weekly (Monday-start), and monthly periods.
-
- **Documentation: [ayagmar.github.io/llm-usage-metrics](https://ayagmar.github.io/llm-usage-metrics/)**
+ <div align="center">

- Built-in adapters currently support 3 sources: `.pi`, `.codex`, and OpenCode SQLite. The codebase is structured to add more sources (for example Claude/Gemini exports) through the `SourceAdapter` pattern. See [`CONTRIBUTING.md`](./CONTRIBUTING.md).
+ <img src="https://ayagmar.github.io/llm-usage-metrics/favicon.svg" width="64" height="64" alt="llm-usage-metrics logo">

- ## Install
-
- ```bash
- npm install -g llm-usage-metrics
- ```
+ # llm-usage-metrics

- Or run without global install:
+ **Track and analyze your local LLM usage across coding agents**

- ```bash
- npx --yes llm-usage-metrics daily
- ```
-
- (`npx llm-usage daily` works when the project is already installed locally.)
+ [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ayagmar/llm-usage-metrics)
+ [![npm version](https://img.shields.io/npm/v/llm-usage-metrics.svg?style=flat-square&color=0ea5e9)](https://www.npmjs.com/package/llm-usage-metrics)
+ [![npm downloads](https://img.shields.io/npm/dt/llm-usage-metrics.svg?style=flat-square&color=10b981)](https://www.npmjs.com/package/llm-usage-metrics)
+ [![CI](https://img.shields.io/github/actions/workflow/status/ayagmar/llm-usage-metrics/ci.yml?style=flat-square&label=CI)](https://github.com/ayagmar/llm-usage-metrics/actions/workflows/ci.yml)
+ [![Coverage](https://img.shields.io/codecov/c/github/ayagmar/llm-usage-metrics?style=flat-square)](https://codecov.io/gh/ayagmar/llm-usage-metrics)

- Runtime notes:
+ [📖 Documentation](https://ayagmar.github.io/llm-usage-metrics/) ·
+ [⚡ Quick Start](#quick-start) ·
+ [📊 Examples](#usage) ·
+ [🤝 Contributing](./CONTRIBUTING.md)

- - OpenCode parsing requires Node.js 24+ (`node:sqlite`).
- - pnpm is the supported dependency/script workflow for this repository.
- - Example local execution against built dist: `node dist/index.js daily --source opencode --opencode-db /path/to/opencode.db`
+ </div>

- ## Update checks
+ ---

- When installed globally, the CLI performs a lightweight npm update check on startup.
+ Aggregate token usage and costs from your local coding agent sessions. Supports **pi**, **codex**, and **OpenCode** with zero configuration required.

- Behavior:
+ ## ✨ Features

- - uses a local cache (`<platform-cache-root>/llm-usage-metrics/update-check.json`; defaults to `~/.cache/llm-usage-metrics/update-check.json` on Linux when `XDG_CACHE_HOME` is unset) with a 1-hour default TTL
- - optional session-scoped cache mode via `LLM_USAGE_UPDATE_CACHE_SCOPE=session`
- - skips checks for `--help` / `--version` invocations
- - skips checks when run through `npx`
- - prompts for install + restart only in interactive TTY sessions
- - prints a one-line notice in non-interactive sessions
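The removed notes above describe a TTL'd cache around the startup update check. The rule can be sketched as follows; the shape of `update-check.json` is an assumption made for illustration, not the package's actual file format:

```typescript
// Hypothetical cache shape; the real update-check.json layout is not shown in this diff.
interface UpdateCheckCache {
  checkedAt: number; // epoch milliseconds of the last registry check
}

const DEFAULT_TTL_MS = 60 * 60 * 1000; // the documented 1-hour default TTL

// An update check runs only when there is no cache yet, or the cache is stale.
function shouldCheckForUpdate(
  cache: UpdateCheckCache | undefined,
  now: number,
  ttlMs: number = DEFAULT_TTL_MS,
): boolean {
  if (cache === undefined) return true; // no cache yet: always check
  return now - cache.checkedAt >= ttlMs; // stale cache: check again
}

// A TTL of 0 (LLM_USAGE_UPDATE_CACHE_TTL_MS=0) makes every run check.
console.log(shouldCheckForUpdate({ checkedAt: 0 }, 1, 0)); // true
```

Under this reading, `LLM_USAGE_UPDATE_CACHE_SCOPE=session` would simply key the cache per shell session rather than globally.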
+ - **Zero-Config Discovery** — Automatically finds `.pi`, `.codex`, and OpenCode session data
+ - **LiteLLM Pricing** — Real-time pricing sync with offline caching support
+ - **Flexible Reports** — Daily, weekly, and monthly aggregations
+ - **Multiple Outputs** — Terminal tables, JSON, or Markdown
+ - **Smart Filtering** — By source, provider, model, and date ranges

- To force-skip startup update checks:
+ ## 🚀 Quick Start

  ```bash
- LLM_USAGE_SKIP_UPDATE_CHECK=1 llm-usage daily
- ```
+ # Install globally
+ npm install -g llm-usage-metrics

- ### Runtime environment overrides
+ # Or run without installing
+ npx llm-usage-metrics daily

- You can tune runtime behavior with environment variables:
+ # Generate your first report
+ llm-usage daily
+ ```

- - `LLM_USAGE_SKIP_UPDATE_CHECK`: skip startup update check when set to `1`
- - `LLM_USAGE_UPDATE_CACHE_SCOPE`: cache scope for update checks (`global` default, `session` to scope by terminal shell session)
- - `LLM_USAGE_UPDATE_CACHE_SESSION_KEY`: optional custom session key when `LLM_USAGE_UPDATE_CACHE_SCOPE=session` (defaults to parent shell PID)
- - `LLM_USAGE_UPDATE_CACHE_TTL_MS`: update-check cache TTL in milliseconds (clamped: `0..2592000000`; use `0` to check on every CLI run)
- - `LLM_USAGE_UPDATE_FETCH_TIMEOUT_MS`: update-check network timeout in milliseconds (clamped: `200..30000`)
- - `LLM_USAGE_PRICING_CACHE_TTL_MS`: pricing cache TTL in milliseconds (clamped: `60000..2592000000`)
- - `LLM_USAGE_PRICING_FETCH_TIMEOUT_MS`: pricing fetch timeout in milliseconds (clamped: `200..30000`)
- - `LLM_USAGE_PARSE_MAX_PARALLEL`: max concurrent file parses per source adapter (clamped: `1..64`)
- - `LLM_USAGE_PARSE_CACHE_ENABLED`: enable file parse cache (`1` by default; accepts `1/0`, `true/false`, `yes/no`)
- - `LLM_USAGE_PARSE_CACHE_TTL_MS`: file parse cache TTL in milliseconds (clamped: `3600000..2592000000`)
- - `LLM_USAGE_PARSE_CACHE_MAX_ENTRIES`: max cached file parse entries (clamped: `100..20000`)
- - `LLM_USAGE_PARSE_CACHE_MAX_BYTES`: max parse-cache file size in bytes (clamped: `1048576..536870912`)
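The "clamped" ranges in the removed list above read most naturally as: out-of-range values are clipped to the nearest bound and unparseable values fall back to a default. A minimal sketch of that presumed behavior (illustrative only, not the package's actual code):

```typescript
// Presumed semantics of a "clamped" env var: clip to [min, max], fall back on junk.
// This is an illustrative sketch, not llm-usage-metrics' implementation.
function readClampedEnvInt(
  raw: string | undefined,
  min: number,
  max: number,
  fallback: number,
): number {
  const parsed = Number.parseInt(raw ?? "", 10);
  if (Number.isNaN(parsed)) return fallback; // unset or unparseable: use default
  return Math.min(max, Math.max(min, parsed)); // clip into the documented range
}

// LLM_USAGE_PARSE_MAX_PARALLEL is documented as clamped to 1..64:
console.log(readClampedEnvInt("128", 1, 64, 8)); // 64
```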
+ <div align="center">

- Parse cache location:
+ ![Terminal output showing token usage and cost breakdown](https://ayagmar.github.io/llm-usage-metrics/screenshot.png)

- - `<platform-cache-root>/llm-usage-metrics/parse-file-cache.json` (defaults to `~/.cache/llm-usage-metrics/parse-file-cache.json` on Linux when `XDG_CACHE_HOME` is unset)
+ </div>

- Example:
+ ## 📋 Supported Sources

- ```bash
- LLM_USAGE_PARSE_MAX_PARALLEL=16 LLM_USAGE_PRICING_FETCH_TIMEOUT_MS=8000 llm-usage monthly
- ```
+ | Source       | Pattern                           | Discovery                        |
+ | ------------ | --------------------------------- | -------------------------------- |
+ | **pi**       | `~/.pi/agent/sessions/**/*.jsonl` | Automatic                        |
+ | **codex**    | `~/.codex/sessions/**/*.jsonl`    | Automatic                        |
+ | **OpenCode** | `~/.opencode/opencode.db`         | Auto or explicit `--opencode-db` |

- ## Usage
+ ## 🎯 Usage

- ### Daily report (default terminal table)
+ ### Basic Reports

  ```bash
+ # Daily report (default terminal table)
  llm-usage daily
- ```

- ### Weekly report with custom timezone
-
- ```bash
+ # Weekly with timezone
  llm-usage weekly --timezone Europe/Paris
- ```

- ### Monthly report with date range
-
- ```bash
+ # Monthly date range
  llm-usage monthly --since 2026-01-01 --until 2026-01-31
  ```

- ### Markdown output
-
- ```bash
- llm-usage daily --markdown
- ```
-
- ### JSON output
+ ### Output Formats

  ```bash
+ # JSON for pipelines
  llm-usage daily --json
- ```

- ### Offline pricing (use cached LiteLLM pricing only)
+ # Markdown for documentation
+ llm-usage daily --markdown

- ```bash
- llm-usage monthly --pricing-offline
+ # Detailed per-model breakdown
+ llm-usage monthly --per-model-columns
  ```

- ### Override pricing URL
+ ### Filtering

  ```bash
- llm-usage monthly --pricing-url https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json
- ```
-
- Pricing behavior notes:
+ # By source
+ llm-usage monthly --source pi,codex

- - LiteLLM is the active pricing source.
- - explicit `costUsd: 0` events are re-priced from LiteLLM when model pricing is available.
- - if all contributing events in a row have unresolved cost, the row `Cost` is rendered as `-`.
- - if only part of a row cost is known, the row `Cost` is rendered as `~$...` to mark incomplete pricing.
- - when pricing cannot be loaded from LiteLLM (or cache in offline mode), report generation fails fast.
+ # By provider
+ llm-usage monthly --provider openai

- ### Custom session directories
+ # By model
+ llm-usage monthly --model claude

- ```bash
- llm-usage daily --pi-dir /path/to/pi/sessions --codex-dir /path/to/codex/sessions
+ # Combined filters
+ llm-usage monthly --source opencode --provider openai --model gpt-4.1
  ```

- Or use generic source-id mapping (repeatable):
+ ### Custom Paths

  ```bash
- llm-usage daily --source-dir pi=/path/to/pi/sessions --source-dir codex=/path/to/codex/sessions
- ```
+ # Custom directories
+ llm-usage daily --source-dir pi=/path/to/pi --source-dir codex=/path/to/codex

- Directory override rules:
-
- - `--source-dir` is directory-only (currently `pi` and `codex`).
- - `--source-dir opencode=...` is invalid and points to `--opencode-db`.
- - `--opencode-db <path>` sets an explicit OpenCode SQLite DB path.
-
- OpenCode DB override:
-
- ```bash
+ # Explicit OpenCode database
  llm-usage daily --opencode-db /path/to/opencode.db
  ```

- OpenCode path precedence:
-
- 1. explicit `--opencode-db`
- 2. deterministic OS-specific default path candidates
-
- Backfill example from a historical DB snapshot:
-
- ```bash
- llm-usage monthly --source opencode --opencode-db /archives/opencode-2026-01.db --since 2026-01-01 --until 2026-01-31
- ```
-
- OpenCode safety notes:
-
- - OpenCode DB is opened in read-only mode
- - unreadable/missing explicit paths fail fast with actionable errors
- - OpenCode CLI is optional for troubleshooting and not required for runtime parsing
-
- ### Filter by source (`--source`)
-
- Use `--source` to limit reports to one or more source ids.
-
- Supported source ids:
-
- - `pi`
- - `codex`
- - `opencode`
-
- Behavior:
-
- - repeatable or comma-separated (`--source pi --source codex` or `--source pi,codex`)
- - case-insensitive source id matching
- - unknown ids fail fast with a validation error
-
- Examples:
+ ### Offline Mode

  ```bash
- # only codex data
- llm-usage monthly --source codex
-
- # only pi data
- llm-usage monthly --source pi
-
- # only OpenCode data
- llm-usage monthly --source opencode
-
- # multiple sources
- llm-usage monthly --source pi --source codex
- llm-usage monthly --source pi,codex
-
- # OpenCode source with explicit DB path
- llm-usage monthly --source opencode --opencode-db /path/to/opencode.db
+ # Use cached pricing only
+ llm-usage monthly --pricing-offline
  ```

- ### Filter by provider (`--provider`)
+ ## ⚙️ Configuration

- Use `--provider` to keep only events whose provider contains the filter text.
+ ### Environment Variables

- Behavior:
+ | Variable                         | Description                       |
+ | -------------------------------- | --------------------------------- |
+ | `LLM_USAGE_SKIP_UPDATE_CHECK`    | Skip update check (`1`)           |
+ | `LLM_USAGE_PRICING_CACHE_TTL_MS` | Pricing cache duration            |
+ | `LLM_USAGE_PARSE_MAX_PARALLEL`   | Max parallel file parses (`1-64`) |
+ | `LLM_USAGE_PARSE_CACHE_ENABLED`  | Enable parse cache (`1/0`)        |

- - case-insensitive substring match
- - optional flag (when omitted, all providers are included)
- - works together with `--source` and `--model`
+ See full environment variable reference in the [documentation](https://ayagmar.github.io/llm-usage-metrics/configuration/).

- Examples:
+ ### Update Checks

- ```bash
- # all OpenAI-family providers
- llm-usage monthly --provider openai
+ The CLI performs lightweight update checks with smart defaults:

- # GitHub Models providers
- llm-usage monthly --provider github
+ - 1-hour cache TTL
+ - Skipped for `--help`, `--version`, and `npx` runs
+ - Prompts only in interactive TTY sessions

- # source + provider together
- llm-usage monthly --source codex --provider openai
- ```
-
- ### Filter by model (`--model`)
-
- `--model` supports repeatable and comma-separated filters. Matching is case-insensitive.
-
- Per filter value:
-
- - if an exact model id exists in the currently selected event set (after source/provider/date filtering), exact matching is used
- - otherwise, substring matching is used
-
- Examples:
+ Disable with:

  ```bash
- # substring match (all Claude-family models)
- llm-usage monthly --model claude
-
- # exact match when present
- llm-usage monthly --model claude-sonnet-4.5
-
- # multiple model filters
- llm-usage monthly --model claude --model gpt-5
- llm-usage monthly --model claude,gpt-5
-
- # source + provider + model together
- llm-usage monthly --source opencode --provider openai --model gpt-4.1
+ LLM_USAGE_SKIP_UPDATE_CHECK=1 llm-usage daily
  ```
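The removed `--model` section above documents a two-step rule: a filter value matches exactly when that exact id exists in the selected event set, and otherwise falls back to case-insensitive substring matching. A sketch of that rule (illustrative only, not the package's code):

```typescript
// Illustrative sketch of the documented --model matching rule, not the
// package's implementation: prefer an exact (case-insensitive) id match,
// fall back to substring matching when no exact id exists.
function filterModels(modelIds: string[], filter: string): string[] {
  const needle = filter.toLowerCase();
  const exact = modelIds.filter((id) => id.toLowerCase() === needle);
  if (exact.length > 0) return exact;
  return modelIds.filter((id) => id.toLowerCase().includes(needle));
}

const models = ["claude-sonnet-4.5", "claude-haiku", "gpt-5"];
console.log(filterModels(models, "claude")); // substring: both claude models
console.log(filterModels(models, "claude-sonnet-4.5")); // exact: one model
```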

- ### Per-model columns (opt-in detailed table layout)
-
- Default output is compact (model names only in the Models column).
-
- Use `--per-model-columns` to render per-model multiline metrics in each numeric column:
+ ## 🛠️ Development

  ```bash
- llm-usage monthly --per-model-columns
- llm-usage monthly --markdown --per-model-columns
- ```
-
- ## Output features
-
- ### Terminal UI
-
- The CLI provides an enhanced terminal output with:
-
- - **Boxed report header** showing the report type and timezone
- - **Session summary** displayed at startup (session files and event counts per source)
- - **Malformed-row summary** when rows are skipped, including per-source reason counts
- - **Source failure summary** when one or more sources fail to parse
- - **Pricing source info** indicating whether data was loaded from cache or fetched remotely
- - **Environment variable overrides** displayed when active
- - **Models displayed as bullet points** for better readability
- - **Rounded table borders** and improved color scheme
-
- Example output:
+ # Install dependencies
+ pnpm install

- ```text
- ℹ Found 12 session file(s) with 45 event(s)
- • pi: 8 file(s), 32 events
- • codex: 4 file(s), 13 events
- ℹ Loaded pricing from cache
+ # Run quality checks
+ pnpm run lint
+ pnpm run typecheck
+ pnpm run test
+ pnpm run format:check

- ┌────────────────────────────┐
- │ Monthly Token Usage Report │
- └────────────────────────────┘
+ # Build
+ pnpm run build

- ╭──────────┬────────┬─────────────────╮
- │ Period   │ Source │ Models          │
- ├──────────┼────────┼─────────────────┤
- │ Feb 2026 │ pi     │ • gpt-5.2       │
- │          │        │ • gpt-5.2-codex │
- ╰──────────┴────────┴─────────────────╯
+ # Run locally
+ pnpm cli daily
  ```

- ### Report structure
+ ## 📚 Documentation

- Each report includes:
+ - **[Getting Started](https://ayagmar.github.io/llm-usage-metrics/getting-started/)** — Installation and first steps
+ - **[CLI Reference](https://ayagmar.github.io/llm-usage-metrics/cli-reference/)** — Complete command reference
+ - **[Data Sources](https://ayagmar.github.io/llm-usage-metrics/sources/)** — Source configuration
+ - **[Configuration](https://ayagmar.github.io/llm-usage-metrics/configuration/)** — Environment variables
+ - **[Architecture](https://ayagmar.github.io/llm-usage-metrics/architecture/)** — Technical overview

- - source rows (`pi`, `codex`, `opencode`) for each period
- - a per-period combined subtotal row (only when multiple sources exist in that period)
- - a final grand total row across all periods
+ ## 🤝 Contributing

- Columns:
+ Contributions are welcome! See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.

- - Period
- - Source
- - Models
- - Input
- - Output
- - Reasoning
- - Cache Read
- - Cache Write
- - Total
- - Cost
+ The codebase is structured to add more sources through the `SourceAdapter` pattern.

- ## Development
+ ## 📄 License

- ```bash
- pnpm install
- pnpm run lint
- pnpm run typecheck
- pnpm run test
- pnpm run format:check
- ```
+ MIT © [Abdeslam Yagmar](https://github.com/ayagmar)
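Both README versions point at a `SourceAdapter` extension pattern for new sources. The real interface lives in the repository; the shapes below are assumptions made for illustration only, using an in-memory JSONL string since the pi and codex sources are JSONL files on disk:

```typescript
// Hypothetical shapes for illustration; the actual SourceAdapter interface
// and event fields in llm-usage-metrics may differ.
interface UsageEvent {
  model: string;
  inputTokens: number;
  outputTokens: number;
}

interface SourceAdapter {
  id: string; // e.g. "pi", "codex", "opencode"
  collectEvents(): Promise<UsageEvent[]>;
}

// Toy adapter that parses one JSON object per non-empty line (JSONL).
class JsonlStringAdapter implements SourceAdapter {
  constructor(public id: string, private jsonl: string) {}

  async collectEvents(): Promise<UsageEvent[]> {
    return this.jsonl
      .split("\n")
      .filter((line) => line.trim().length > 0)
      .map((line) => JSON.parse(line) as UsageEvent);
  }
}

const sample = '{"model":"gpt-5.2","inputTokens":120,"outputTokens":40}';
new JsonlStringAdapter("pi", sample).collectEvents().then((events) => {
  console.log(events.length, events[0].model); // 1 gpt-5.2
});
```

A real adapter would read files or a database instead of a string, but the aggregation layer only needs the `collectEvents()` contract.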
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "llm-usage-metrics",
-   "version": "0.3.0",
+   "version": "0.3.2",
    "description": "CLI for aggregating local LLM usage metrics from pi, codex, and opencode sessions",
    "type": "module",
    "packageManager": "pnpm@10.17.1",
@@ -33,7 +33,8 @@
    "site:build": "pnpm --filter llm-usage-metrics-site build",
    "site:preview": "pnpm --filter llm-usage-metrics-site preview",
    "site:check": "pnpm --filter llm-usage-metrics-site check",
-   "site:docs:generate": "node scripts/generate-cli-reference.mjs"
+   "site:docs:generate": "node scripts/generate-cli-reference.mjs",
+   "docs:mermaid:validate": "node scripts/validate-mermaid.mjs"
  },
  "keywords": [
    "llm",
@@ -51,7 +52,7 @@
  "bugs": {
    "url": "https://github.com/ayagmar/llm-usage-metrics/issues"
  },
- "homepage": "https://github.com/ayagmar/llm-usage-metrics",
+ "homepage": "https://ayagmar.github.io/llm-usage-metrics/",
  "dependencies": {
    "commander": "^14.0.3",
    "markdown-table": "^3.0.4",