llm_cost_tracker 0.2.0.alpha2 → 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +48 -1
- data/README.md +114 -70
- data/Rakefile +2 -0
- data/app/assets/llm_cost_tracker/application.css +760 -0
- data/app/controllers/llm_cost_tracker/application_controller.rb +1 -7
- data/app/controllers/llm_cost_tracker/assets_controller.rb +12 -0
- data/app/controllers/llm_cost_tracker/calls_controller.rb +29 -12
- data/app/controllers/llm_cost_tracker/dashboard_controller.rb +5 -1
- data/app/helpers/llm_cost_tracker/application_helper.rb +46 -5
- data/app/helpers/llm_cost_tracker/chart_helper.rb +133 -0
- data/app/helpers/llm_cost_tracker/dashboard_filter_helper.rb +47 -0
- data/app/helpers/llm_cost_tracker/dashboard_filter_options_helper.rb +34 -0
- data/app/helpers/llm_cost_tracker/dashboard_query_helper.rb +58 -0
- data/app/helpers/llm_cost_tracker/pagination_helper.rb +18 -0
- data/app/services/llm_cost_tracker/dashboard/data_quality.rb +16 -1
- data/app/services/llm_cost_tracker/dashboard/filter.rb +22 -3
- data/app/services/llm_cost_tracker/dashboard/overview_stats.rb +16 -1
- data/app/services/llm_cost_tracker/dashboard/spend_anomaly.rb +79 -0
- data/app/services/llm_cost_tracker/dashboard/tag_key_explorer.rb +19 -46
- data/app/services/llm_cost_tracker/dashboard/top_models.rb +17 -8
- data/app/services/llm_cost_tracker/pagination.rb +6 -0
- data/app/views/layouts/llm_cost_tracker/application.html.erb +35 -333
- data/app/views/llm_cost_tracker/calls/index.html.erb +116 -74
- data/app/views/llm_cost_tracker/calls/show.html.erb +58 -1
- data/app/views/llm_cost_tracker/dashboard/index.html.erb +211 -111
- data/app/views/llm_cost_tracker/data_quality/index.html.erb +224 -78
- data/app/views/llm_cost_tracker/errors/database.html.erb +3 -3
- data/app/views/llm_cost_tracker/errors/invalid_filter.html.erb +3 -3
- data/app/views/llm_cost_tracker/errors/not_found.html.erb +3 -3
- data/app/views/llm_cost_tracker/models/index.html.erb +66 -58
- data/app/views/llm_cost_tracker/shared/_active_filters.html.erb +16 -0
- data/app/views/llm_cost_tracker/shared/_metric_stack.html.erb +23 -0
- data/app/views/llm_cost_tracker/shared/_spend_chart.html.erb +18 -0
- data/app/views/llm_cost_tracker/shared/_tag_chips.html.erb +15 -0
- data/app/views/llm_cost_tracker/shared/setup_required.html.erb +3 -2
- data/app/views/llm_cost_tracker/tags/index.html.erb +55 -12
- data/app/views/llm_cost_tracker/tags/show.html.erb +88 -39
- data/config/routes.rb +3 -0
- data/lib/llm_cost_tracker/assets.rb +19 -0
- data/lib/llm_cost_tracker/configuration.rb +78 -42
- data/lib/llm_cost_tracker/engine.rb +2 -0
- data/lib/llm_cost_tracker/event.rb +2 -0
- data/lib/llm_cost_tracker/generators/llm_cost_tracker/add_streaming_generator.rb +29 -0
- data/lib/llm_cost_tracker/generators/llm_cost_tracker/templates/add_streaming_to_llm_api_calls.rb.erb +25 -0
- data/lib/llm_cost_tracker/generators/llm_cost_tracker/templates/create_llm_api_calls.rb.erb +4 -0
- data/lib/llm_cost_tracker/generators/llm_cost_tracker/templates/llm_cost_tracker_prices.yml.erb +8 -1
- data/lib/llm_cost_tracker/llm_api_call.rb +9 -1
- data/lib/llm_cost_tracker/middleware/faraday.rb +57 -9
- data/lib/llm_cost_tracker/parsed_usage.rb +7 -3
- data/lib/llm_cost_tracker/parsers/anthropic.rb +79 -1
- data/lib/llm_cost_tracker/parsers/base.rb +17 -5
- data/lib/llm_cost_tracker/parsers/gemini.rb +59 -6
- data/lib/llm_cost_tracker/parsers/openai.rb +8 -0
- data/lib/llm_cost_tracker/parsers/openai_compatible.rb +8 -0
- data/lib/llm_cost_tracker/parsers/openai_usage.rb +55 -1
- data/lib/llm_cost_tracker/parsers/registry.rb +15 -3
- data/lib/llm_cost_tracker/parsers/sse.rb +81 -0
- data/lib/llm_cost_tracker/price_registry.rb +18 -7
- data/lib/llm_cost_tracker/price_sync/fetcher.rb +72 -0
- data/lib/llm_cost_tracker/price_sync/merger.rb +72 -0
- data/lib/llm_cost_tracker/price_sync/model_catalog.rb +77 -0
- data/lib/llm_cost_tracker/price_sync/raw_price.rb +35 -0
- data/lib/llm_cost_tracker/price_sync/source.rb +29 -0
- data/lib/llm_cost_tracker/price_sync/source_result.rb +7 -0
- data/lib/llm_cost_tracker/price_sync/sources/litellm.rb +91 -0
- data/lib/llm_cost_tracker/price_sync/sources/open_router.rb +94 -0
- data/lib/llm_cost_tracker/price_sync/validator.rb +66 -0
- data/lib/llm_cost_tracker/price_sync.rb +310 -0
- data/lib/llm_cost_tracker/pricing.rb +19 -6
- data/lib/llm_cost_tracker/retention.rb +34 -0
- data/lib/llm_cost_tracker/storage/active_record_store.rb +3 -1
- data/lib/llm_cost_tracker/stream_collector.rb +158 -0
- data/lib/llm_cost_tracker/tag_query.rb +7 -2
- data/lib/llm_cost_tracker/tags_column.rb +21 -1
- data/lib/llm_cost_tracker/tracker.rb +15 -12
- data/lib/llm_cost_tracker/value_helpers.rb +40 -0
- data/lib/llm_cost_tracker/version.rb +1 -1
- data/lib/llm_cost_tracker.rb +51 -29
- data/lib/tasks/llm_cost_tracker.rake +124 -0
- data/llm_cost_tracker.gemspec +9 -8
- metadata +40 -12
- data/PLAN_0.2.md +0 -488
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 8b20da957651521f022866af9d4735a4ef53d52a2dc3c278b8b2a90e1d7a7f98
+  data.tar.gz: ea98b2a7505d99c5f78d7756d0adc50224c4fdc88000fa5ec81be4450c9200f1
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 9ca709080d46395ac32b9a2931b4b3cb7d4df6016b73bad3579cb1decdd046be21a2fb67c06e96876013a754a113e9ce5987ed0e27792b312716324bdb5f9adb
+  data.tar.gz: 445b77222180802f208246a2e25b30e5e0a5679d2d5b84a2ba00d1e2fc97a5cf3127521be13f415c6d76bbcc056dd0bdfe6ade937eb9d67d737ce6b6548665fa
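The SHA256 / SHA512 values in the checksum diff above are plain digests of the released `metadata.gz` and `data.tar.gz` archives, so a download can be verified locally. A quick sketch using Ruby's stdlib (the archive path is illustrative, not from this diff):

```ruby
require "digest"

# Compute a registry-style checksum for a downloaded archive.
# Digest::SHA256.file reads the file in binary mode, matching the
# values published in checksums.yaml.
def sha256_of(path)
  Digest::SHA256.file(path).hexdigest
end

# e.g. sha256_of("llm_cost_tracker-0.3.0.gem")  # illustrative path

# Sanity check against a well-known digest: SHA-256 of the empty string.
Digest::SHA256.hexdigest("")
# => "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
```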
data/CHANGELOG.md
CHANGED

@@ -2,6 +2,53 @@
 
 Format: [Keep a Changelog](https://keepachangelog.com/en/1.1.0/). Versioning: [SemVer](https://semver.org/spec/v2.0.0.html).
 
+## [Unreleased]
+
+## [0.3.0] - 2026-04-22
+
+### Added
+
+- Streaming capture across OpenAI, Anthropic, and Gemini, including `LlmCostTracker.track_stream` for non-Faraday clients.
+- `stream` / `usage_source` persistence and dashboard coverage for streamed calls.
+- `llm_cost_tracker:prices:sync` and `llm_cost_tracker:prices:check` for keeping local price snapshots current.
+- `LlmCostTracker.enforce_budget!` and opt-in `enforce_budget:` keyword for `track` / `track_stream`.
+
+### Changed
+
+- Price refresh now uses structured JSON sources (LiteLLM primary, OpenRouter secondary) instead of scraping provider HTML pages.
+- Synced price entries now carry source provenance (`_source`, `_source_version`, `_fetched_at`), while `_source: "manual"` entries remain untouched.
+- Manual stream parsing now resolves parsers through the shared registry, so configured OpenAI-compatible providers work the same way as built-in ones.
+- `LlmCostTracker.configure` now treats configuration as an immutable snapshot after the block returns; mutating or replacing shared fields through `LlmCostTracker.configuration` raises `FrozenError`.
+
+### Removed
+
+- Public `LlmCostTracker.configuration=` writer; use `LlmCostTracker.configure` to replace configuration snapshots.
+
+## [0.2.0] - 2026-04-20
+
+### Added
+
+- `LlmCostTracker::Retention.prune(older_than:)` and `llm_cost_tracker:prune` rake task.
+- Overview: budget projection, previous-period daily spend comparison, spend anomaly alerts.
+- Call details: token and cost mix breakdowns.
+- Dashboard CSS served as a fingerprinted, immutably-cached file via `LlmCostTracker::AssetsController`.
+- Filter dropdowns for Provider and Model, scoped to the current slice.
+- Pagination with per-page selector and Stripe-style page window.
+
+### Changed
+
+- Dashboard UI aligned to Tailwind UI Application UI: dot-indicator badges, value-first stat tiles, inset-shadow form inputs, white secondary buttons with `shadow-sm`.
+- CSS fully namespaced under `lct-*`; removed bare `body` selector to avoid host-app leakage.
+
+### Fixed
+
+- Thread-safe price memoization (regression from 0.1.3).
+- `by_tag` on MySQL JSON columns.
+- CSV export escapes formula-prefixed values.
+- Portable dashboard sorting across adapters.
+- Dashboard shows database errors instead of install/setup guidance when the DB is unavailable.
+- Tag key explorer uses SQL discovery on MySQL 8.0+.
+
 ## [0.2.0.alpha1, 0.2.0.alpha2] - 2026-04-20
 
 ### Breaking
@@ -15,7 +62,7 @@ Format: [Keep a Changelog](https://keepachangelog.com/en/1.1.0/). Versioning: [S
 ### Added
 
 - `LlmApiCall.group_by_period(:day/:month)` — SQL-side period grouping.
-- Opt-in `LlmCostTracker::Engine` dashboard (Rails 7.1+): overview with delta-vs-previous-period, provider rollup, models, filterable call list with CSV export and outlier sort modes, call details, tag key explorer, per-key tag breakdown, data quality. PostgreSQL/SQLite use adapter-specific SQL; MySQL
+- Opt-in `LlmCostTracker::Engine` dashboard (Rails 7.1+): overview with delta-vs-previous-period, provider rollup, models, filterable call list with CSV export and outlier sort modes, call details, tag key explorer, per-key tag breakdown, data quality. PostgreSQL/SQLite use adapter-specific SQL; MySQL 8.0+ uses JSON_TABLE-based tag discovery. Core middleware still works without Rails.
 
 ## [0.1.4] - 2026-04-18
 
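One 0.2.0 fix in the changelog above, "CSV export escapes formula-prefixed values", guards spreadsheet users against formula injection from attacker-controlled tag values. A minimal sketch of the general technique (the helper name is illustrative, not the gem's internal API): cells starting with `=`, `+`, `-`, or `@` get a leading apostrophe so spreadsheet applications treat them as text.

```ruby
require "csv"

# Illustrative helper (not the gem's API): neutralize values that a
# spreadsheet would otherwise interpret as formulas.
def escape_formula_cell(value)
  s = value.to_s
  s.match?(/\A[=+\-@]/) ? "'#{s}" : s
end

rows = [
  ["tag_value", "model"],
  ["=HYPERLINK(\"http://evil.test\")", "gpt-4o"] # attacker-controlled tag
]

csv = CSV.generate do |out|
  rows.each { |row| out << row.map { |cell| escape_formula_cell(cell) } }
end
# The formula cell is exported as inert text: '=HYPERLINK(...)
```

One tradeoff of this scheme is that legitimate negative numbers (`-2`) also get quoted; exporters that care usually whitelist numeric cells first.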
data/README.md
CHANGED

@@ -1,35 +1,17 @@
-#
+# LLM Cost Tracker
 
-**Self-hosted LLM cost tracking for Ruby and Rails.** Intercepts Faraday LLM responses, prices
+**Self-hosted LLM cost tracking for Ruby and Rails.** Intercepts Faraday LLM responses or records usage explicitly, prices events locally, and stores them in your database. No proxy, no SaaS.
 
 [](https://rubygems.org/gems/llm_cost_tracker)
 [](https://github.com/sergey-homenko/llm_cost_tracker/actions)
 
-```text
-LLM Cost Report (last 30 days)
-
-Total cost: $127.420000
-Requests: 4,218
-Avg latency: 812ms
-Unknown pricing: 0
-
-By model:
-  gpt-4o $82.100000
-  claude-sonnet-4-6 $31.200000
-  gemini-2.5-flash $14.120000
-
-By tag key "env":
-  production $119.300000
-  staging $8.120000
-```
-
 ## Why
 
-Every Rails app with LLM integrations eventually runs into the same question: where did that invoice come from? Full observability platforms like Langfuse and Helicone
+Every Rails app with LLM integrations eventually runs into the same question: where did that invoice come from? Full observability platforms like Langfuse and Helicone solve a broader set of problems; sometimes you just need a small Rails-native ledger in your own database.
 
-`llm_cost_tracker` is
+`llm_cost_tracker` is built for that. It plugs into Faraday or lets you record usage explicitly with `track` / `track_stream`, looks up pricing locally, and writes an event. You end up with a ledger you can query with plain ActiveRecord, slice by any tag dimension, and optionally surface on a built-in dashboard. No proxy, no SaaS, no separate service to run.
 
-It
+It is not a tracing platform, prompt CMS, eval system, or gateway. The goal is to answer _"what did this app spend on LLM APIs, and where did that spend come from?"_ clearly enough to make spend review routine.
 
 ## Installation
 
@@ -44,23 +26,6 @@ bin/rails generate llm_cost_tracker:install
 bin/rails db:migrate
 ```
 
-## Quick try (no database)
-
-```ruby
-require "llm_cost_tracker"
-
-LlmCostTracker.configure { |c| c.storage_backend = :log }
-
-LlmCostTracker.track(
-  provider: :openai,
-  model: "gpt-4o",
-  input_tokens: 1000,
-  output_tokens: 200,
-  feature: "demo"
-)
-# => [LlmCostTracker] openai/gpt-4o tokens=1000+200 cost=$0.004500 tags={:feature=>"demo"}
-```
-
 ## Usage
 
 ### Patch an existing client's Faraday connection
@@ -78,19 +43,7 @@ OpenAI.configure do |config|
 end
 ```
 
-`tags:` can be a callable
-
-```ruby
-class Current < ActiveSupport::CurrentAttributes
-  attribute :user, :tenant, :workflow
-end
-
-# application_controller.rb
-before_action do
-  Current.user = current_user
-  Current.workflow = "chat"
-end
-```
+`tags:` can be a callable and is evaluated on each request.
 
 ### Raw Faraday
 
@@ -105,7 +58,41 @@ end
 conn.post("/v1/responses", { model: "gpt-5-mini", input: "Hello!" })
 ```
 
-Place `llm_cost_tracker` inside the Faraday stack where it can see the final response body.
+Place `llm_cost_tracker` inside the Faraday stack where it can see the final response body.
+
+### Streaming
+
+Streaming is captured automatically for OpenAI, Anthropic, and Gemini when the request goes through the Faraday middleware. The middleware tees the `on_data` callback, keeps the stream flowing to your code, and records the final usage block once the response completes.
+
+```ruby
+# OpenAI: include usage in the final chunk
+client.chat(parameters: {
+  model: "gpt-4o",
+  messages: [...],
+  stream: proc { |chunk| ... },
+  stream_options: { include_usage: true }
+})
+```
+
+Anthropic emits usage in `message_start` + `message_delta` events. Gemini's `:streamGenerateContent` endpoint includes `usageMetadata`; usage from the final chunk is used.
+
+Streamed calls are stored with `stream: true` and `usage_source: "stream_final"`. If the provider never sends final usage, the call is still recorded with `usage_source: "unknown"` so those calls surface on the Data Quality page.
+
+For non-Faraday clients (raw `Net::HTTP`, custom SSE code, Azure OpenAI), use the explicit helper:
+
+```ruby
+LlmCostTracker.track_stream(provider: "openai", model: "gpt-4o") do |stream|
+  my_client.stream(...) { |chunk| stream.event(chunk) }
+end
+
+# Or skip the chunk parsing entirely if you already know the totals:
+LlmCostTracker.track_stream(provider: "openai", model: "gpt-4o") do |stream|
+  # ... your streaming loop ...
+  stream.usage(input_tokens: 120, output_tokens: 45)
+end
+```
+
+Run `bin/rails g llm_cost_tracker:add_streaming` once on existing installs to add the `stream` and `usage_source` columns.
 
 ### Manual tracking
 
@@ -148,7 +135,7 @@ LlmCostTracker.configure do |config|
 end
 ```
 
-Pricing is best
+Pricing is best effort. OpenRouter-style IDs like `openai/gpt-4o-mini` are normalized to built-in names when possible. Use `prices_file` / `pricing_overrides` for fine-tunes, gateway-specific IDs, enterprise discounts, batch pricing, or models the gem does not know.
 
 `storage_error_behavior = :warn` (default) lets LLM responses continue if storage fails; `:raise` exposes `StorageError#original_error`.
 
@@ -160,7 +147,7 @@ LlmCostTracker::LlmApiCall.unknown_pricing.group(:model).count
 
 ### Keeping prices current
 
-Built-in prices
+Built-in prices live in `lib/llm_cost_tracker/prices.json`. The gem never fetches pricing on boot. For production, keep a local snapshot under `config/` and point the gem at it:
 
 ```bash
 bin/rails generate llm_cost_tracker:prices
@@ -175,7 +162,26 @@ bin/rails generate llm_cost_tracker:prices
 }
 ```
 
-`pricing_overrides` has the highest precedence
+`pricing_overrides` has the highest precedence. Use it for a handful of Ruby-side overrides; use `prices_file` when you want a local pricing table under source control.
+
+To refresh prices on demand:
+
+```bash
+bin/rails llm_cost_tracker:prices:sync
+```
+
+`llm_cost_tracker:prices:sync` refreshes the current registry from two structured sources: LiteLLM first, OpenRouter second. LiteLLM is the primary source; OpenRouter fills gaps and helps surface discrepancies.
+
+`llm_cost_tracker:prices:sync` / `llm_cost_tracker:prices:check` perform HTTP GET requests to:
+
+- LiteLLM pricing JSON: `https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json`
+- OpenRouter Models API: `https://openrouter.ai/api/v1/models`
+
+If `config.prices_file` is configured, the task syncs that file automatically; otherwise it works from the built-in snapshot. `_source: "manual"` entries are never touched. Models that are still in your file but missing from both upstream sources are left alone and reported as orphaned. For intentional custom entries, mark them as manual so they stop showing up in orphaned warnings.
+
+Use `PREVIEW=1` to see the diff without writing. Use `STRICT=1` to fail instead of applying a partial refresh when a source fails or the validator rejects a price. Use `bin/rails llm_cost_tracker:prices:check` in CI to print the current diff and exit non-zero when the snapshot has drifted or refresh fails.
+
+Large price changes are flagged during sync. If a specific entry is expected to move by more than 3x, add `_validator_override: ["skip_relative_change"]` to that entry in your local price file.
 
 ## Budget enforcement
 
@@ -194,7 +200,25 @@ rescue LlmCostTracker::BudgetExceededError => e
   # e.monthly_total, e.budget, e.last_event
 ```
 
-`:block_requests` is
+`:block_requests` is a **guardrail, not a hard cap**. The preflight and the spend-recording write are separate statements, so under Puma / Sidekiq concurrency multiple workers can all pass the preflight and then collectively overshoot the budget. The setting reliably *stops new requests after the overshoot is visible* — it does not prevent the overshoot itself. For strict quotas use a provider- or gateway-level limit, or a database-backed counter outside this gem.
+
+Preflight is wired into the Faraday middleware automatically. When you record events via `LlmCostTracker.track` / `track_stream` and also want the same preflight, opt in:
+
+```ruby
+LlmCostTracker.track(
+  provider: "openai",
+  model: "gpt-4o",
+  input_tokens: 120,
+  output_tokens: 45,
+  enforce_budget: true
+)
+
+LlmCostTracker.track_stream(provider: "openai", model: "gpt-4o", enforce_budget: true) do |stream|
+  # raises BudgetExceededError before the block runs when over budget
+end
+
+LlmCostTracker.enforce_budget! # standalone preflight
+```
 
 ## Querying costs
 
@@ -229,7 +253,15 @@ LlmCostTracker::LlmApiCall.by_tags(user_id: 42, feature: "chat").this_month.tota
 LlmCostTracker::LlmApiCall.between(1.week.ago, Time.current).cost_by_model
 ```
 
-## Tag storage
+## Retention
+
+Retention is not enforced automatically. Use the rake task below if you need to delete older records in batches.
+
+```bash
+DAYS=90 bin/rails llm_cost_tracker:prune # delete calls older than N days in batches
+```
+
+## Tag storage
 
 New installs use `jsonb` + GIN on PostgreSQL:
 
@@ -251,7 +283,7 @@ bin/rails db:migrate
 
 ## Dashboard (optional)
 
-
+Optional Rails Engine. Plain ERB, no JavaScript framework, no asset pipeline required. Requires Rails 7.1+; the core middleware works without Rails.
 
 ```ruby
 # config/application.rb (or an initializer)
@@ -263,15 +295,15 @@ mount LlmCostTracker::Engine => "/llm-costs"
 
 Routes (GET-only; CSV export included):
 
-- `/llm-costs` — overview: spend
+- `/llm-costs` — overview: spend with delta vs previous period, budget projection, spend anomaly banner, daily trend vs previous slice, provider rollup, top models
 - `/llm-costs/models` — by provider + model; sortable by spend, volume, avg cost, latency
 - `/llm-costs/calls` — filterable + paginated; outlier sort modes (expensive, largest input/output, slowest, unknown pricing); CSV export
-- `/llm-costs/calls/:id` — details
-- `/llm-costs/tags` — tag keys present in the dataset (PG/SQLite native
+- `/llm-costs/calls/:id` — details with token mix and cost mix breakdowns
+- `/llm-costs/tags` — tag keys present in the dataset (PG/SQLite native; MySQL 8.0+ via JSON_TABLE)
 - `/llm-costs/tags/:key` — breakdown by values of a given tag key
 - `/llm-costs/data_quality` — unknown pricing share, untagged calls, missing latency
 
-> ⚠️ **No built-in auth.** Tags carry whatever your app puts in them. Protect the mount point with your
+> ⚠️ **No built-in auth.** Tags carry whatever your app puts in them. Protect the mount point with your application's authentication.
 
 ### Basic auth
 
@@ -330,7 +362,7 @@ config.custom_storage = ->(event) {
 config.openai_compatible_providers["gateway.example.com"] = "internal_gateway"
 ```
 
-Configured hosts are parsed
+Configured hosts are parsed using the OpenAI-compatible usage shape (`prompt_tokens` / `completion_tokens` / `total_tokens`, `input_tokens` / `output_tokens`, and optional cached-input details). This covers OpenRouter, DeepSeek, and private gateways exposing Chat Completions / Responses / Completions / Embeddings.
 
 ## Custom parser
 
@@ -372,20 +404,32 @@ LlmCostTracker::Parsers::Registry.register(AcmeParser.new)
 | Google Gemini | ✅ | Gemini 2.5 Pro/Flash/Flash-Lite, 2.0 Flash/Flash-Lite, 1.5 Pro/Flash |
 | Any other | 🔧 | Custom parser |
 
-Endpoints: OpenAI Chat Completions / Responses / Completions / Embeddings; OpenAI-compatible equivalents; Anthropic Messages; Gemini `generateContent`
+Endpoints: OpenAI Chat Completions / Responses / Completions / Embeddings; OpenAI-compatible equivalents; Anthropic Messages; Gemini `generateContent` and `streamGenerateContent`. All endpoints support streaming capture.
 
 ## Safety
 
-- No external HTTP calls.
+- No external HTTP calls at request-tracking time.
 - No prompt or response bodies stored.
 - Faraday responses not modified.
 - Storage failures non-fatal by default (`storage_error_behavior = :warn`).
-- Budget
+- Budget and unknown-pricing errors are raised only when you opt in.
+
+## Thread safety (Puma, Sidekiq)
+
+The gem is designed for multi-threaded hosts — Puma with `max_threads > 1` and Sidekiq with `concurrency > 1` are both supported. A few rules:
+
+- **Configure once at boot.** `LlmCostTracker.configure` deep-freezes `default_tags`, `pricing_overrides`, `report_tag_breakdowns`, and `openai_compatible_providers` when the block returns. Mutating or replacing shared fields through `LlmCostTracker.configuration` raises `FrozenError`.
+- **Use `:active_record` storage for shared ledgers.** Puma workers and Sidekiq processes do not share memory; `:log` and `:custom` backends see per-process state only. `:active_record` writes to a single table and is the right choice for dashboards and budget checks across processes.
+- **Size your connection pool.** Each tracked call on the middleware path issues up to three SQL queries (preflight `SUM`, `INSERT`, post-check `SUM`). Make sure the AR pool covers `puma max_threads + sidekiq concurrency` plus your app's own usage.
+- **Don't share a `StreamCollector` across threads you don't own.** The collector itself is thread-safe — `event`, `usage`, and `finish!` synchronize internally and `finish!` is idempotent — but the documented pattern is one collector per stream.
+- **`finish!` is a barrier.** Once a stream is finished, later `event`, `usage`, or `model=` calls raise `FrozenError` instead of mutating a closed collector.
+- **`ActiveSupport::Notifications` subscribers run synchronously** in the caller's thread. Keep them fast or hand off to a background job; otherwise they add latency to every tracked call.
+- **`storage_error_behavior = :raise` inside Sidekiq** will retry the job, which can duplicate an expensive LLM call. Prefer `:warn` plus a Notifications subscriber, or `:ignore`, for worker contexts.
 
 ## Known limitations
 
-- `:block_requests` is best-effort
-- Streaming
+- `:block_requests` is a best-effort guardrail, not a hard cap. Concurrent workers can pass preflight simultaneously and collectively overshoot the budget. Use an external quota system if you need a transactional cap.
+- Streaming capture relies on the provider emitting a final-usage event (OpenAI needs `stream_options: { include_usage: true }`); missing events are recorded with `usage_source: "unknown"` so they surface on the Data Quality page.
 - Anthropic cache TTL variants (1h vs 5min writes) not modeled separately.
 - OpenAI reasoning tokens included in output totals; separate reasoning-token attribution not stored.
 
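The "guardrail, not a hard cap" limitation in the README diff above comes from a classic check-then-act race. A deterministic plain-Ruby illustration (not gem code; the numbers are made up): both workers read the spend total before either records its own cost, so both pass the preflight and the budget is overshot.

```ruby
# Plain-Ruby illustration of the check-then-act race behind
# `:block_requests` (numbers are made up; this is not gem code).
budget = 10.0
spend  = 9.0 # recorded spend so far, just under budget
cost   = 2.0 # price of each in-flight request

# Both workers run their preflight before either has recorded:
worker_a_passes = spend < budget # 9.0 < 10.0 -> true
worker_b_passes = spend < budget # still 9.0 < 10.0 -> true

# Both then record, because both preflights passed:
spend += cost if worker_a_passes
spend += cost if worker_b_passes

spend # 13.0: budget overshot; only the *next* preflight will block
```

This is why the README recommends a provider- or gateway-level limit, or an external transactional counter, when a strict quota is required.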
data/Rakefile
CHANGED