llm-cost-meter 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (48)
  1. package/CHANGELOG.md +78 -0
  2. package/LICENSE +21 -0
  3. package/README.md +469 -0
  4. package/dashboard/index.html +544 -0
  5. package/dist/adapters/console.d.ts +5 -0
  6. package/dist/adapters/console.d.ts.map +1 -0
  7. package/dist/adapters/console.js +16 -0
  8. package/dist/adapters/console.js.map +1 -0
  9. package/dist/adapters/index.d.ts +9 -0
  10. package/dist/adapters/index.d.ts.map +1 -0
  11. package/dist/adapters/index.js +29 -0
  12. package/dist/adapters/index.js.map +1 -0
  13. package/dist/adapters/local.d.ts +11 -0
  14. package/dist/adapters/local.d.ts.map +1 -0
  15. package/dist/adapters/local.js +72 -0
  16. package/dist/adapters/local.js.map +1 -0
  17. package/dist/cli.d.ts +2 -0
  18. package/dist/cli.d.ts.map +1 -0
  19. package/dist/cli.js +200 -0
  20. package/dist/cli.js.map +1 -0
  21. package/dist/dashboard/server.js +135 -0
  22. package/dist/index.d.ts +65 -0
  23. package/dist/index.d.ts.map +1 -0
  24. package/dist/index.js +291 -0
  25. package/dist/index.js.map +1 -0
  26. package/dist/pricing/anthropic.json +47 -0
  27. package/dist/pricing/index.d.ts +28 -0
  28. package/dist/pricing/index.d.ts.map +1 -0
  29. package/dist/pricing/index.js +92 -0
  30. package/dist/pricing/index.js.map +1 -0
  31. package/dist/pricing/openai.json +67 -0
  32. package/dist/reporters/csv.d.ts +3 -0
  33. package/dist/reporters/csv.d.ts.map +1 -0
  34. package/dist/reporters/csv.js +14 -0
  35. package/dist/reporters/csv.js.map +1 -0
  36. package/dist/reporters/json.d.ts +3 -0
  37. package/dist/reporters/json.d.ts.map +1 -0
  38. package/dist/reporters/json.js +25 -0
  39. package/dist/reporters/json.js.map +1 -0
  40. package/dist/reporters/summary.d.ts +4 -0
  41. package/dist/reporters/summary.d.ts.map +1 -0
  42. package/dist/reporters/summary.js +85 -0
  43. package/dist/reporters/summary.js.map +1 -0
  44. package/dist/types.d.ts +89 -0
  45. package/dist/types.d.ts.map +1 -0
  46. package/dist/types.js +2 -0
  47. package/dist/types.js.map +1 -0
  48. package/package.json +82 -0
package/CHANGELOG.md ADDED

# Changelog

## 0.1.0 (2026-04-05)

Initial release of llm-cost-meter.

### Core Features

- `meter()` wrapper function — wrap any LLM API call to track cost, tokens, and latency
- `CostMeter` class for instance-level configuration (multiple meters, separate pipelines)
- `configure()` / `resetConfig()` for global configuration with merge semantics
- Auto-detection of OpenAI and Anthropic response formats (no explicit provider needed)
- Tagging system: `feature`, `userId`, `sessionId`, `env`, and arbitrary custom `tags`
- `awaitWrites` option for guaranteed event persistence on critical paths
- `flush()` for draining pending writes before process shutdown

### Pricing

- Built-in pricing tables for 22 models:
  - **Anthropic**: Claude Opus 4, Sonnet 4, Haiku 4.5, Claude 3.5 Sonnet/Haiku, Claude 3 Opus/Sonnet/Haiku
  - **OpenAI**: GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-4, GPT-3.5-turbo, o1, o1-mini, o3, o3-mini (including dated variants)
- `configurePricing()` — add custom or fine-tuned model pricing at runtime
- `removePricing()` — remove a model's pricing entry
- `setPricingTable()` — set an entire provider's pricing at once
- Unknown model warnings with `warnOnMissingModel` flag (default: true)

### Adapters

- Console adapter — prints cost per call to stdout (ideal for development)
- Local file adapter — appends events as NDJSON with async write queue
  - Sequential write queue prevents file corruption from concurrent calls
  - Automatic retry (1 retry after 100ms) on write failure
  - Queue recovers from errors instead of stalling
- Bring-your-own adapter via `CostAdapter` interface (with optional `flush()`)

### Error Handling & Observability

- `onError` callback — notified when adapter writes fail (replaces silent swallowing)
- Failed LLM calls tracked as `status: 'error'` events with `errorMessage`
- `getMeterStats()` — monitor eventsTracked, eventsDropped, adapterErrors, unknownModels
- `resetStats()` — reset counters for test isolation
- Memory-safe: unknownModels and warnedModels Sets capped at 1000 entries
- `getAllPricing()` returns a deep copy (mutations don't affect internal state)

### CLI

- `llm-cost-meter report` command with:
  - `--group-by` (feature, userId, model, env, provider, sessionId)
  - `--feature`, `--env`, `--user` filters
  - `--from`, `--to` date range filtering
  - `--format` (table, csv, json)
  - `--top N` to limit results
  - `--file` for custom events file path
- Streaming file reader (handles large NDJSON files without loading into memory)
- Malformed line warnings logged to stderr
- Auto-generated insights (e.g., "chat drives 53% of cost but only 24% of calls")

### Dashboard

- Web dashboard at `http://localhost:3000` (no external services needed)
- Date range picker with from/to date inputs
- Feature, User, Model, Environment dropdown filters
- Active filter pills with one-click removal
- Click-to-drill-down on chart segments and table rows
- Export CSV / Export JSON buttons on every table
- 5 KPI cards: Total Spend, Avg Cost/Call, Total Tokens, Most Expensive Feature, Costliest User
- Charts: Daily Cost Trend, Cost by Feature (doughnut), Calls by Model (bar), Cost by User (bar), Cost vs Calls (bubble)
- Scrollable event log with full model names
- Runs on plain Node.js — no ts-node required

### Package Quality

- TypeScript-first with full type declarations
- 110 tests (unit, integration, CLI subprocess, error handling)
- Real API smoke test (`npm run test:smoke`) for Anthropic and OpenAI
- 24 KB published tarball (no source maps, no tests, no dev files)
- `"type": "commonjs"` + `"exports"` field for modern bundler support
- MIT license
package/LICENSE ADDED

MIT License

Copyright (c) 2026 shmulikdav

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md ADDED

# llm-cost-meter

[![npm version](https://img.shields.io/npm/v/llm-cost-meter.svg)](https://www.npmjs.com/package/llm-cost-meter)
[![license](https://img.shields.io/npm/l/llm-cost-meter.svg)](https://github.com/shmulikdav/llmeter/blob/main/LICENSE)
[![node](https://img.shields.io/node/v/llm-cost-meter.svg)](https://nodejs.org)

**Per-feature, per-user LLM cost attribution for production AI apps.**

Every team running AI in production knows their monthly bill. Far fewer can say which feature is responsible for which cost. `llm-cost-meter` wraps your LLM API calls, calculates actual cost from token usage, and tags every call by feature, user, and environment — so you can finally see where the money goes.

## Installation

```bash
npm install llm-cost-meter
```

Requires Node.js >= 18.

## Quick Start (60 seconds)

### 1. Wrap your LLM call

```typescript
import { meter, configure } from 'llm-cost-meter';
import Anthropic from '@anthropic-ai/sdk';

configure({ adapters: ['console'] });

const client = new Anthropic();

const response = await meter(
  () => client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Summarize this article...' }]
  }),
  {
    feature: 'article-summarizer',
    userId: 'user_abc123',
  }
);

// Your response is unchanged
console.log(response.content);
// Console output: [llm-cost-meter] article-summarizer — $0.00396 (600 tokens, 1240ms)
```

### 2. Enable local storage

```typescript
configure({
  adapters: ['console', 'local'],
  localPath: './.llm-costs/events.ndjson',
});
```

### 3. View your report

```bash
npx llm-cost-meter report
```

```
llm-cost-meter report — 2025-04-01 to 2025-04-05
Source: ./.llm-costs/events.ndjson (1,284 events)

By feature:
┌─────────────────────┬────────┬──────────────┬────────────────┬────────────────┐
│ Feature             │ Calls  │ Total Tokens │ Avg Cost/Call  │ Total Cost     │
├─────────────────────┼────────┼──────────────┼────────────────┼────────────────┤
│ article-summarizer  │ 843    │ 2,104,200    │ $0.0039        │ $3.29          │
│ chat                │ 312    │ 987,400      │ $0.0121        │ $3.78          │
│ tag-classifier      │ 129    │ 198,300      │ $0.0002        │ $0.02          │
├─────────────────────┼────────┼──────────────┼────────────────┼────────────────┤
│ TOTAL               │ 1,284  │ 3,289,900    │ —              │ $7.09          │
└─────────────────────┴────────┴──────────────┴────────────────┴────────────────┘

Insight: 'chat' drives 53% of cost but only 24% of calls.
```

## Tagging Guide

Every `meter()` call accepts these tags:

| Tag | Purpose | Example |
|-----|---------|---------|
| `feature` | Which product feature made the call | `'article-summarizer'` |
| `userId` | Which user triggered it | `'user_abc123'` |
| `sessionId` | Group calls within a session | `'sess_xyz'` |
| `env` | Environment | `'production'` |
| `tags` | Arbitrary key-value pairs | `{ team: 'product', tier: 'pro' }` |

```typescript
await meter(llmCall, {
  feature: 'chat',
  userId: req.user.id,
  sessionId: req.sessionId,
  env: 'production',
  tags: { team: 'product', tier: 'pro' },
});
```

## Global Configuration

Call `configure()` once at app startup:

```typescript
import { configure } from 'llm-cost-meter';

configure({
  adapters: ['console', 'local'],           // Output destinations
  localPath: './.llm-costs/events.ndjson',  // File path for local adapter
  defaultTags: {                            // Applied to every event
    env: process.env.NODE_ENV ?? 'development',
    service: 'my-app',
  },
  verbose: false,             // true = log adapter errors to console
  warnOnMissingModel: true,   // true = warn when model not in pricing table
  onError: (err, event) => {  // Called when adapter writes fail
    myLogger.warn('Cost tracking error', { err, feature: event?.feature });
  },
});
```

`configure()` merges with the current config. To start fresh, call `resetConfig()` first:

```typescript
import { resetConfig, configure } from 'llm-cost-meter';

resetConfig(); // Back to defaults
configure({ adapters: ['local'] });
```

## Error Handling

By default, adapter errors are silent (your LLM calls are never affected). To catch them:

```typescript
// Option 1: onError callback (recommended for production)
configure({
  onError: (err, event) => {
    console.error('Failed to write cost event:', err.message);
    // Send to your error tracker, Sentry, DataDog, etc.
  },
});

// Option 2: verbose mode (logs to console.error)
configure({ verbose: true });

// Option 3: await writes to guarantee persistence
const response = await meter(llmCall, {
  feature: 'billing-critical',
  awaitWrites: true, // Will throw if adapter fails
});
```

### Monitoring meter health

```typescript
import { getMeterStats } from 'llm-cost-meter';

const stats = getMeterStats();
console.log(stats);
// {
//   eventsTracked: 1284,
//   eventsDropped: 0,
//   adapterErrors: 0,
//   unknownModels: ['openai/ft:gpt-4o-mini:my-org']
// }
```

## Custom Pricing

Add pricing for fine-tuned models, new models, or entirely new providers:

```typescript
import { configurePricing, setPricingTable } from 'llm-cost-meter';

// Add a single model
configurePricing('openai', 'ft:gpt-4o-mini:my-org', { input: 0.30, output: 1.20 });

// Add a new provider
configurePricing('mistral', 'mistral-large', { input: 2.00, output: 6.00 });

// Override existing pricing
configurePricing('anthropic', 'claude-sonnet-4-20250514', { input: 3.50, output: 16.00 });

// Set an entire provider at once
setPricingTable('deepseek', {
  'deepseek-chat': { input: 0.14, output: 0.28, unit: 'per_million_tokens' },
  'deepseek-coder': { input: 0.14, output: 0.28, unit: 'per_million_tokens' },
});
```

When a model isn't found in the pricing table, cost is reported as $0.00 and a warning is logged (disable with `warnOnMissingModel: false`).
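
That fallback can be pictured with a small standalone sketch. This mimics the documented behavior only; `table` and `lookupCost` are illustrative names, not the package's internals:

```typescript
// Mimics the documented fallback: models missing from the pricing table
// report $0.00 and trigger a one-time warning. Not the package's internal code.
type Price = { input: number; output: number }; // USD per million tokens

const table = new Map<string, Price>([
  ['gpt-4o-mini', { input: 0.15, output: 0.6 }],
]);
const warned = new Set<string>();

function lookupCost(model: string, inputTok: number, outputTok: number, warn = true): number {
  const p = table.get(model);
  if (!p) {
    if (warn && !warned.has(model)) {
      warned.add(model); // warn only once per model, like warnOnMissingModel
      console.warn(`[llm-cost-meter] no pricing for "${model}", reporting $0.00`);
    }
    return 0;
  }
  return (inputTok / 1e6) * p.input + (outputTok / 1e6) * p.output;
}
```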

## Guaranteed Write Mode

By default, `meter()` is fire-and-forget — adapter writes happen in the background so your LLM response is returned immediately. For billing-critical paths:

```typescript
// Wait for adapters to finish writing before continuing
const response = await meter(llmCall, {
  feature: 'billing',
  awaitWrites: true,
});

// Flush all pending writes before process exit
import { flush } from 'llm-cost-meter';

process.on('SIGTERM', async () => {
  await flush();
  process.exit(0);
});
```

## Advanced: CostMeter Class

Use `CostMeter` when you need multiple independent meters (e.g., different adapters for different teams, separate configs for billing vs analytics):

```typescript
import { CostMeter } from 'llm-cost-meter';

const billingMeter = new CostMeter({
  provider: 'anthropic',
  adapters: ['local'],
  localPath: './billing/events.ndjson',
  onError: (err) => alertOps('billing tracking failed', err),
});

const analyticsMeter = new CostMeter({
  adapters: [new DataDogAdapter()],
  defaultTags: { team: 'analytics' },
});

// Each meter writes to its own destination
const response = await billingMeter.track(
  () => client.messages.create({ ... }),
  { feature: 'chat', userId: req.user.id }
);

// Manual event recording
billingMeter.record({
  model: 'claude-sonnet-4-20250514',
  provider: 'anthropic',
  inputTokens: 450,
  outputTokens: 210,
  feature: 'classifier',
  userId: 'user_123',
});

// Flush before shutdown
await billingMeter.flush();
```

**When to use `meter()` vs `CostMeter`:**
- `meter()` — simple apps, single config, most use cases
- `CostMeter` — multi-team apps, separate billing/analytics pipelines, multiple output destinations

## CLI Reference

```bash
# Default report grouped by feature
npx llm-cost-meter report

# Group by different dimensions
npx llm-cost-meter report --group-by userId
npx llm-cost-meter report --group-by model
npx llm-cost-meter report --group-by env

# Filter by tag
npx llm-cost-meter report --feature article-summarizer
npx llm-cost-meter report --env production
npx llm-cost-meter report --user user_abc123

# Date range
npx llm-cost-meter report --from 2025-04-01 --to 2025-04-05

# Export formats
npx llm-cost-meter report --format csv > costs.csv
npx llm-cost-meter report --format json > costs.json

# Show top N most expensive
npx llm-cost-meter report --top 5

# Custom events file path
npx llm-cost-meter report --file ./path/to/events.ndjson
```

## Adapter Reference

### Console Adapter

Prints cost per call to stdout. Ideal for development.

```typescript
configure({ adapters: ['console'] });
// Output: [llm-cost-meter] article-summarizer — $0.00396 (600 tokens, 1240ms)
```

### Local File Adapter

Appends events as NDJSON (newline-delimited JSON) to a file. Uses an async write queue to prevent file corruption from concurrent calls.

```typescript
configure({
  adapters: ['local'],
  localPath: './.llm-costs/events.ndjson',
});
```
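
Each line of the events file is one JSON object. A line might look like the following; the field names are taken from the `record()` and `CostAdapter` examples elsewhere in this README, so treat the exact event shape as illustrative:

```typescript
// One NDJSON event line. Field names mirror those used in this README's
// record() and CostAdapter examples; the real event may carry more fields.
const line =
  '{"model":"claude-sonnet-4-20250514","provider":"anthropic",' +
  '"inputTokens":450,"outputTokens":210,"feature":"classifier",' +
  '"userId":"user_123","totalCostUSD":0.0045}';

// Reading the file back is plain JSON.parse, one line at a time:
const event = JSON.parse(line);
console.log(`${event.feature}: $${event.totalCostUSD}`); // classifier: $0.0045
```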

### Bring Your Own Adapter

Implement the `CostAdapter` interface:

```typescript
import { CostAdapter, CostEvent, configure } from 'llm-cost-meter';

class DataDogAdapter implements CostAdapter {
  name = 'datadog';

  async write(event: CostEvent): Promise<void> {
    await fetch('https://api.datadoghq.com/api/v1/series', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'DD-API-KEY': process.env.DD_API_KEY ?? '',
      },
      body: JSON.stringify({
        series: [{
          metric: 'llm.cost',
          points: [[Date.now() / 1000, event.totalCostUSD]],
          tags: [`feature:${event.feature}`, `model:${event.model}`],
        }],
      }),
    });
  }

  // Optional: called by flush() before shutdown
  async flush(): Promise<void> {
    // Drain any internal buffers
  }
}

configure({
  adapters: [new DataDogAdapter(), 'local'],
});
```

## Integration Recipes

### Express.js Middleware

```typescript
import { meter } from 'llm-cost-meter';

function withCostTracking(feature: string) {
  return (req, res, next) => {
    req.llmMeter = (fn) =>
      meter(fn, {
        feature,
        userId: req.user?.id,
        sessionId: req.sessionId,
        env: process.env.NODE_ENV,
      });
    next();
  };
}

router.post('/summarize', withCostTracking('article-summarizer'), async (req, res) => {
  const result = await req.llmMeter(() =>
    client.messages.create({ ... })
  );
  res.json(result);
});
```

### Next.js API Route

```typescript
import { meter, configure } from 'llm-cost-meter';

configure({ adapters: ['local'], defaultTags: { env: 'production' } });

export default async function handler(req, res) {
  const response = await meter(
    () => openai.chat.completions.create({ ... }),
    { feature: 'chat', userId: req.body.userId }
  );
  res.json(response);
}
```

## Pricing Tables

Built-in pricing for current models (USD per million tokens):

### Anthropic

| Model | Input | Output |
|-------|-------|--------|
| claude-opus-4-20250514 | $15.00 | $75.00 |
| claude-sonnet-4-20250514 | $3.00 | $15.00 |
| claude-haiku-4-5-20251001 | $0.80 | $4.00 |
| claude-3-5-sonnet-20241022 | $3.00 | $15.00 |
| claude-3-5-haiku-20241022 | $0.80 | $4.00 |
| claude-3-opus-20240229 | $15.00 | $75.00 |
| claude-3-sonnet-20240229 | $3.00 | $15.00 |
| claude-3-haiku-20240307 | $0.25 | $1.25 |

### OpenAI

| Model | Input | Output |
|-------|-------|--------|
| gpt-4o | $2.50 | $10.00 |
| gpt-4o-mini | $0.15 | $0.60 |
| gpt-4-turbo | $10.00 | $30.00 |
| gpt-4 | $30.00 | $60.00 |
| gpt-3.5-turbo | $0.50 | $1.50 |
| o1 | $15.00 | $60.00 |
| o1-mini | $3.00 | $12.00 |
| o3 | $10.00 | $40.00 |
| o3-mini | $1.10 | $4.40 |

Dated model variants (e.g., `gpt-4o-2024-08-06`) are also included. Use `configurePricing()` to add models not in this table.
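
Cost per call follows directly from these tables: (input tokens / 1,000,000) x input price + (output tokens / 1,000,000) x output price. A minimal standalone sketch of that arithmetic (the `costUSD` helper is illustrative, not the package's internal implementation):

```typescript
// USD per million tokens, mirroring the tables above.
const pricing = {
  'claude-sonnet-4-20250514': { input: 3.0, output: 15.0 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
};

// Illustrative helper: per-call cost from token counts.
function costUSD(model: keyof typeof pricing, inputTokens: number, outputTokens: number): number {
  const p = pricing[model];
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// 450 input + 210 output tokens on claude-sonnet-4:
// 450/1e6 * $3.00 + 210/1e6 * $15.00 = $0.00135 + $0.00315 = $0.0045
console.log(costUSD('claude-sonnet-4-20250514', 450, 210).toFixed(4)); // "0.0045"
```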

## Testing Your Code

Use `resetConfig()` and `resetStats()` in test setup to avoid state leaking between tests:

```typescript
import { configure, resetConfig, resetStats, meter, CostAdapter, CostEvent } from 'llm-cost-meter';

class TestAdapter implements CostAdapter {
  name = 'test';
  events: CostEvent[] = [];
  async write(event: CostEvent) { this.events.push(event); }
}

beforeEach(() => {
  resetConfig();
  resetStats();
  const adapter = new TestAdapter();
  configure({ adapters: [adapter], warnOnMissingModel: false });
});
```

## FAQ

**Does it add latency to my API calls?**
No. By default, adapter writes are fire-and-forget. Your LLM response is returned immediately. Use `awaitWrites: true` only when you need guaranteed persistence.

**Does it send my data anywhere?**
No. All data stays local by default. The `console` adapter prints to stdout and the `local` adapter writes to a file on disk. No network calls are made unless you add a custom adapter.

**What happens if the pricing table doesn't have my model?**
A warning is logged (unless `warnOnMissingModel: false`) and cost is reported as $0.00. Use `configurePricing()` to add the model. Check `getMeterStats().unknownModels` to see which models are missing.

**What if an adapter fails?**
By default, errors are silent — your app is never affected. Use the `onError` callback or `verbose: true` to catch errors, and `getMeterStats().adapterErrors` to monitor.

**Can I use it with streaming responses?**
V1 does not support streaming token counting. Streaming support is planned for V2. For now, check the final response's usage field after streaming completes.

**Does it work with fine-tuned models?**
Yes. Use `configurePricing()` to set pricing for your fine-tuned model IDs.

**Is it safe for high-traffic apps?**
Yes. The local file adapter uses an async write queue that serializes writes — no file corruption from concurrent calls. For high throughput, consider a custom adapter that batches writes.
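
A batching adapter along those lines can be sketched against the `CostAdapter` shape shown earlier (`write` plus optional `flush`). The types are declared locally here so the sketch stands alone, and the batching policy itself is illustrative:

```typescript
// Local copies of the shapes described in this README (so the sketch is
// self-contained); import them from 'llm-cost-meter' in a real project.
interface CostEvent { model: string; totalCostUSD: number; [key: string]: unknown; }

interface CostAdapter {
  name: string;
  write(event: CostEvent): Promise<void>;
  flush?(): Promise<void>;
}

// Illustrative batching policy: buffer events and hand them to a bulk sink
// once batchSize is reached, instead of one write per call.
class BatchingAdapter implements CostAdapter {
  name = 'batching';
  private buffer: CostEvent[] = [];

  constructor(
    private sink: (batch: CostEvent[]) => Promise<void>,
    private batchSize = 50,
  ) {}

  async write(event: CostEvent): Promise<void> {
    this.buffer.push(event);
    if (this.buffer.length >= this.batchSize) await this.flush();
  }

  // Drains the buffer; matching the optional flush() means pending events
  // are also written out on shutdown.
  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    await this.sink(batch); // e.g. one bulk HTTP request instead of one per event
  }
}
```

Wire it in with `configure({ adapters: [new BatchingAdapter(sendBatch)] })`, where `sendBatch` is your own bulk uploader.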

## License

MIT