cchubber 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,104 @@
# CC Hubber

**What you spent. Why you spent it. Is that normal.**

Offline CLI that reads your local Claude Code data and generates a diagnostic HTML report. No API keys. No telemetry. Everything stays on your machine.

Built because Claude Code users had zero visibility into the [March 2026 cache bug](https://github.com/anthropics/claude-code/issues/41930) that silently inflated costs by 10-20x. Your `$100 plan` shouldn't feel like a `$20 plan`.

![CC Hubber Report](https://raw.githubusercontent.com/azkhh/cchubber/master/screenshot.png)

## What it does

- **Cost breakdown** — Per-day, per-model, per-project cost calculated from your actual token counts
- **Cache health grade** — Trend-weighted (recent 7 days dominate). If you hit the cache bug, you'll see D/F, not a misleading A
- **Inflection point detection** — "Your efficiency dropped 4.7x starting March 29. Before: 360:1. After: 1,676:1."
- **Anomaly detection** — Flags days where your cost/ratio deviates >2 standard deviations
- **Cache break analysis** — Reads `~/.claude/tmp/cache-break-*.diff` files. Shows why your cache broke and how often
- **CLAUDE.md cost analysis** — How much your rules files cost per message (cached vs uncached)
- **Per-project breakdown** — Which project is eating your budget
- **Live rate limits** — 5-hour and 7-day utilization (if OAuth token available)
- **Shareable card** — Export your report as a PNG

## Install

```bash
npx cchubber
```

Or install globally:

```bash
npm install -g cchubber
cchubber
```

Requires Node.js 18+. Runs on macOS, Windows, and Linux.

## Usage

```bash
cchubber                 # Scan and open HTML report
cchubber --days 7        # Limit to the last 7 days (the default view)
cchubber -o report.html  # Custom output path
cchubber --no-open       # Don't auto-open in browser
cchubber --json          # Machine-readable JSON output
```

## What it reads

All data is local. Nothing leaves your machine.

| Source | Path | What |
|--------|------|------|
| JSONL conversations | `~/.claude/projects/*/` | Token counts per message, per model, per session |
| Stats cache | `~/.claude/stats-cache.json` | Pre-aggregated daily totals |
| Session meta | `~/.claude/usage-data/session-meta/` | Duration, tool counts, lines changed |
| Cache breaks | `~/.claude/tmp/cache-break-*.diff` | Why your prompt cache invalidated |
| CLAUDE.md stack | `~/.claude/CLAUDE.md`, project-level | File sizes and per-message cost impact |
| OAuth usage | `~/.claude/.credentials.json` | Live rate limit utilization |

## The March 2026 cache bug

Between v2.1.69 and v2.1.89, multiple bugs caused Claude Code's prompt cache to silently fail:

- A sentinel replacement bug in the Bun fork dropped cache read rates from ~95% to 4-17%
- The `--resume` flag caused full prompt-cache misses on every resume
- One session generated 652,069 output tokens with no user input — $342 on a single session

**v2.1.90 fixes most of these.** Update immediately: `claude update`

CC Hubber detects whether you were affected by showing your cache efficiency trend over time. If you see a sharp inflection point, that's probably when it hit you.
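
For intuition, the inflection check can be sketched in a few lines. This is a simplified, hypothetical version (not the tool's actual implementation): compute each day's cache-read:output ratio and report the sharpest day-over-day jump.

```javascript
// Hypothetical daily totals; the real data comes from the ~/.claude JSONL files.
const days = [
  { date: '2026-03-27', cacheReadTokens: 36_000_000, outputTokens: 100_000 },  // 360:1
  { date: '2026-03-28', cacheReadTokens: 38_000_000, outputTokens: 100_000 },  // 380:1
  { date: '2026-03-29', cacheReadTokens: 167_600_000, outputTokens: 100_000 }, // 1676:1
];
const ratios = days.map(d => d.cacheReadTokens / d.outputTokens);

// Find the largest day-over-day jump in the ratio.
let worst = { index: 0, factor: 1 };
for (let i = 1; i < ratios.length; i++) {
  const factor = ratios[i] / ratios[i - 1];
  if (factor > worst.factor) worst = { index: i, factor };
}
console.log(`Efficiency dropped ${worst.factor.toFixed(1)}x starting ${days[worst.index].date}`);
```

The real report compares averages across periods rather than only adjacent days, but the idea is the same: a sudden multiple-x jump in the ratio marks the day the bug hit.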

## Best practices (from the community)

These tips surfaced during the March crisis. CC Hubber helps you verify whether they're working:

- **Start fresh sessions per task** — don't try to extend long sessions
- **Avoid `--resume` on older versions** — fixed in v2.1.90
- **Switch to Sonnet 4.6 for routine work** — same quality, fraction of the quota
- **Keep CLAUDE.md under 200 lines** — it's re-read on every message
- **Use `/compact` every 30-40 tool calls** — prevents context bloat
- **Create `.claudeignore`** — exclude `node_modules/`, `dist/`, `*.lock`
- **Shift heavy work to off-peak hours** — outside 5am-11am PT weekdays
85
+ ## How cost is calculated
86
+
87
+ Claude Code doesn't report costs for Max/Pro plans (`costUSD` is always 0). CC Hubber calculates costs from token counts using dynamic pricing from [LiteLLM](https://github.com/BerriAI/litellm), with hardcoded fallbacks.
88
+
89
+ This gives you an **equivalent API cost** — what you would pay on the API tier for the same usage. Useful for understanding relative consumption, not for billing disputes.
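
The arithmetic is plain per-million pricing. A standalone sketch using this package's fallback Sonnet rates ($3/M input, $15/M output, $3.75/M cache write, $0.30/M cache read); the token counts are made up for illustration:

```javascript
// Fallback Sonnet rates, per million tokens (from this package's pricing table).
const rate = { input: 3, output: 15, cacheWrite: 3.75, cacheRead: 0.30 };

// A hypothetical day of usage.
const tokens = {
  input: 2_000_000,       // $6.00
  output: 500_000,        // $7.50
  cacheWrite: 1_000_000,  // $3.75
  cacheRead: 10_000_000,  // $3.00
};

const cost =
  tokens.input / 1e6 * rate.input +
  tokens.output / 1e6 * rate.output +
  tokens.cacheWrite / 1e6 * rate.cacheWrite +
  tokens.cacheRead / 1e6 * rate.cacheRead;

console.log(`Equivalent API cost: $${cost.toFixed(2)}`); // $20.25
```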

## Prior art

- [ccusage](https://github.com/jikyo/ccusage) (12K+ stars) — token tracking and cost visualization
- [Claude-Code-Usage-Monitor](https://github.com/nicobailon/Claude-Code-Usage-Monitor) — basic session tracking

CC Hubber focuses on **diagnosis** — cache health grading, inflection detection, cache break analysis — not just accounting. If ccusage tells you *what* you spent, CC Hubber tells you *why* and whether it's normal.

## License

MIT

## Credits

Built by [@azkhh](https://x.com/asmirkn). Shipped with [Mover OS](https://moveros.dev).
package/package.json ADDED
@@ -0,0 +1,35 @@
{
  "name": "cchubber",
  "version": "0.1.0",
  "description": "What you spent. Why you spent it. Is that normal. — Claude Code usage diagnosis with beautiful HTML reports.",
  "type": "module",
  "bin": {
    "cchubber": "./src/cli/index.js"
  },
  "scripts": {
    "start": "node src/cli/index.js"
  },
  "keywords": [
    "claude",
    "claude-code",
    "anthropic",
    "usage",
    "tokens",
    "cost",
    "diagnosis",
    "cache",
    "analytics"
  ],
  "author": "Asmir Khan (@azkhh)",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "https://github.com/azkhh/cchubber"
  },
  "engines": {
    "node": ">=18.0.0"
  },
  "files": [
    "src/**/*"
  ]
}
@@ -0,0 +1,59 @@
export function detectAnomalies(costAnalysis) {
  const dailyCosts = costAnalysis.dailyCosts || [];
  if (dailyCosts.length < 3) return { anomalies: [], hasAnomalies: false };

  const costs = dailyCosts.filter(d => d.cost > 0.01).map(d => d.cost);
  if (costs.length < 3) return { anomalies: [], hasAnomalies: false };

  const mean = costs.reduce((a, b) => a + b, 0) / costs.length;
  const variance = costs.reduce((sum, c) => sum + Math.pow(c - mean, 2), 0) / costs.length;
  const stdDev = Math.sqrt(variance);

  const anomalies = [];

  for (const day of dailyCosts) {
    if (day.cost < 0.01) continue;

    const zScore = stdDev > 0 ? (day.cost - mean) / stdDev : 0;

    if (Math.abs(zScore) > 2) {
      // Check cache ratio too
      const ratioAnomaly = day.cacheOutputRatio > 2000;

      anomalies.push({
        date: day.date,
        cost: day.cost,
        zScore: Math.round(zScore * 100) / 100,
        severity: Math.abs(zScore) > 3 ? 'critical' : 'warning',
        type: zScore > 0 ? 'spike' : 'dip',
        avgCost: Math.round(mean * 100) / 100,
        deviation: Math.round((day.cost - mean) * 100) / 100,
        cacheRatioAnomaly: ratioAnomaly,
        cacheOutputRatio: day.cacheOutputRatio,
      });
    }
  }

  // Trend detection: are costs increasing over time?
  let trend = 'stable';
  if (dailyCosts.length >= 7) {
    const recent = dailyCosts.slice(-7).filter(d => d.cost > 0.01);
    const older = dailyCosts.slice(0, -7).filter(d => d.cost > 0.01);
    if (recent.length > 0 && older.length > 0) {
      const recentAvg = recent.reduce((s, d) => s + d.cost, 0) / recent.length;
      const olderAvg = older.reduce((s, d) => s + d.cost, 0) / older.length;
      const change = ((recentAvg - olderAvg) / olderAvg) * 100;
      if (change > 50) trend = 'rising_fast';
      else if (change > 20) trend = 'rising';
      else if (change < -50) trend = 'dropping_fast';
      else if (change < -20) trend = 'dropping';
    }
  }

  return {
    anomalies: anomalies.sort((a, b) => b.cost - a.cost),
    hasAnomalies: anomalies.length > 0,
    stats: { mean: Math.round(mean * 100) / 100, stdDev: Math.round(stdDev * 100) / 100 },
    trend,
  };
}
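
For intuition about the 2σ threshold, the core z-score arithmetic in isolation (a standalone sketch mirroring the logic above, with made-up daily costs):

```javascript
// Nine typical $10 days plus one $50 spike; population standard deviation, as above.
const costs = [10, 10, 10, 10, 10, 10, 10, 10, 10, 50];

const mean = costs.reduce((a, b) => a + b, 0) / costs.length;                         // 14
const variance = costs.reduce((s, c) => s + Math.pow(c - mean, 2), 0) / costs.length; // 144
const stdDev = Math.sqrt(variance);                                                   // 12

const zScore = (50 - mean) / stdDev; // 3, well past the 2σ threshold, so the day is flagged
```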
@@ -0,0 +1,135 @@
export function analyzeCacheHealth(statsCache, cacheBreaks, days, dailyFromJSONL) {
  const cutoffDate = new Date();
  cutoffDate.setDate(cutoffDate.getDate() - days);

  // Cache break analysis
  const reasonCounts = {};
  let totalBreaks = 0;

  for (const brk of cacheBreaks) {
    totalBreaks++;
    for (const reason of brk.reasons) {
      reasonCounts[reason] = (reasonCounts[reason] || 0) + 1;
    }
  }

  // Sort reasons by frequency
  const reasonsRanked = Object.entries(reasonCounts)
    .sort((a, b) => b[1] - a[1])
    .map(([reason, count]) => ({ reason, count, percentage: totalBreaks > 0 ? Math.round(count / totalBreaks * 100) : 0 }));

  // Cache efficiency from stats cache
  let totalCacheRead = 0;
  let totalCacheWrite = 0;
  let totalInput = 0;
  let totalOutput = 0;

  // Use JSONL data if available (more accurate); fall back to stats-cache
  if (dailyFromJSONL && dailyFromJSONL.length > 0) {
    const cutoffStr = cutoffDate.toISOString().split('T')[0];
    for (const day of dailyFromJSONL.filter(d => d.date >= cutoffStr)) {
      totalCacheRead += day.cacheReadTokens || 0;
      totalCacheWrite += day.cacheCreationTokens || 0;
      totalInput += day.inputTokens || 0;
      totalOutput += day.outputTokens || 0;
    }
  } else if (statsCache?.modelUsage) {
    for (const usage of Object.values(statsCache.modelUsage)) {
      totalCacheRead += usage.cacheReadInputTokens || 0;
      totalCacheWrite += usage.cacheCreationInputTokens || 0;
      totalInput += usage.inputTokens || 0;
      totalOutput += usage.outputTokens || 0;
    }
  }

  // Cache hit rate: what % of input tokens were served from cache
  const totalInputAttempts = totalCacheRead + totalCacheWrite + totalInput;
  const cacheHitRate = totalInputAttempts > 0 ? (totalCacheRead / totalInputAttempts) * 100 : 0;

  // Cache efficiency ratio: cache reads per output token (lower = more efficient)
  const efficiencyRatio = totalOutput > 0 ? Math.round(totalCacheRead / totalOutput) : 0;

  // Trend-weighted grade: the recent 7 days are weighted 70/30 against older days
  const grade = calculateGrade(efficiencyRatio, totalBreaks, days, dailyFromJSONL);

  // Estimated cost savings from caching
  // Without cache: all cache reads would be standard input ($5/M for Opus)
  // With cache: reads are $0.50/M
  const savingsFromCache = totalCacheRead / 1_000_000 * (5.0 - 0.50);

  // Cost wasted from cache breaks (rough estimate)
  // Each cache break forces a full re-read at write price ($6.25/M) instead of read price ($0.50/M)
  // Estimate ~200K tokens re-cached per break
  const wastedFromBreaks = totalBreaks * 200_000 / 1_000_000 * (6.25 - 0.50);

  return {
    totalCacheBreaks: totalBreaks,
    reasonsRanked,
    cacheHitRate: Math.round(cacheHitRate * 10) / 10,
    efficiencyRatio,
    grade,
    savings: {
      fromCaching: Math.round(savingsFromCache),
      wastedFromBreaks: Math.round(wastedFromBreaks),
    },
    totals: {
      cacheRead: totalCacheRead,
      cacheWrite: totalCacheWrite,
      input: totalInput,
      output: totalOutput,
    },
  };
}

function calculateGrade(allTimeRatio, breaks, days, dailyFromJSONL) {
  // Trend-weighted scoring: recent 7 days dominate the grade.
  // A user with great all-time stats but a recent cache bug spike should get D/F.
  let score = 100;

  // Compute recent 7-day ratio from daily data
  let recentRatio = allTimeRatio;
  let olderRatio = allTimeRatio;

  if (dailyFromJSONL && dailyFromJSONL.length > 0) {
    const sorted = [...dailyFromJSONL].sort((a, b) => a.date.localeCompare(b.date));
    const recent = sorted.slice(-7);
    const older = sorted.slice(0, -7);

    const recentOutput = recent.reduce((s, d) => s + (d.outputTokens || 0), 0);
    const recentCacheRead = recent.reduce((s, d) => s + (d.cacheReadTokens || 0), 0);
    recentRatio = recentOutput > 0 ? Math.round(recentCacheRead / recentOutput) : 0;

    const olderOutput = older.reduce((s, d) => s + (d.outputTokens || 0), 0);
    const olderCacheRead = older.reduce((s, d) => s + (d.cacheReadTokens || 0), 0);
    olderRatio = olderOutput > 0 ? Math.round(olderCacheRead / olderOutput) : 0;
  }

  // Weighted ratio: 70% recent, 30% older (recent dominates)
  const weightedRatio = dailyFromJSONL && dailyFromJSONL.length >= 7
    ? Math.round(recentRatio * 0.7 + olderRatio * 0.3)
    : allTimeRatio;

  // Penalize based on weighted ratio
  if (weightedRatio > 3000) score -= 45;
  else if (weightedRatio > 2000) score -= 35;
  else if (weightedRatio > 1500) score -= 28;
  else if (weightedRatio > 1000) score -= 20;
  else if (weightedRatio > 500) score -= 10;

  // Extra penalty if recent is sharply worse than older (deterioration signal)
  if (olderRatio > 0 && recentRatio > olderRatio * 2) {
    score -= 15; // Recent degradation penalty
  }

  // Penalize high break frequency
  const breaksPerDay = days > 0 ? breaks / days : 0;
  if (breaksPerDay > 20) score -= 30;
  else if (breaksPerDay > 10) score -= 20;
  else if (breaksPerDay > 5) score -= 10;

  if (score >= 90) return { letter: 'A', color: '#10b981', label: 'Excellent' };
  if (score >= 75) return { letter: 'B', color: '#22d3ee', label: 'Good' };
  if (score >= 60) return { letter: 'C', color: '#f59e0b', label: 'Fair' };
  if (score >= 40) return { letter: 'D', color: '#f97316', label: 'Poor' };
  return { letter: 'F', color: '#ef4444', label: 'Critical' };
}
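
To see how the weighting plays out, a worked example mirroring the penalty thresholds above: a hypothetical user whose ratio was roughly 360:1 historically but 1600:1 in the recent week (numbers chosen to echo the README's inflection example).

```javascript
// Standalone sketch of the trend-weighted penalty logic in calculateGrade.
const recentRatio = 1600; // recent 7-day cache-read:output ratio
const olderRatio = 360;   // older-period ratio

const weightedRatio = Math.round(recentRatio * 0.7 + olderRatio * 0.3); // 1228

let score = 100;
if (weightedRatio > 3000) score -= 45;
else if (weightedRatio > 2000) score -= 35;
else if (weightedRatio > 1500) score -= 28;
else if (weightedRatio > 1000) score -= 20;
else if (weightedRatio > 500) score -= 10;

// The recent week is more than 2x worse than the older period: degradation penalty.
if (olderRatio > 0 && recentRatio > olderRatio * 2) score -= 15;

// score is now 65, which maps to grade C ("Fair") in the table above.
```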
@@ -0,0 +1,357 @@
import https from 'https';

// Fallback pricing — used when LiteLLM fetch fails (per million tokens)
const FALLBACK_PRICING = {
  'claude-opus-4-6': { input: 5, output: 25, cacheWrite: 6.25, cacheRead: 0.50 },
  'claude-opus-4-5-20251101': { input: 5, output: 25, cacheWrite: 6.25, cacheRead: 0.50 },
  'claude-sonnet-4-6': { input: 3, output: 15, cacheWrite: 3.75, cacheRead: 0.30 },
  'claude-sonnet-4-5-20250929': { input: 3, output: 15, cacheWrite: 3.75, cacheRead: 0.30 },
  'claude-sonnet-4-20250514': { input: 3, output: 15, cacheWrite: 3.75, cacheRead: 0.30 },
  'claude-haiku-4-5-20251001': { input: 1, output: 5, cacheWrite: 1.25, cacheRead: 0.10 },
  'default': { input: 5, output: 25, cacheWrite: 6.25, cacheRead: 0.50 },
};

// Dynamic pricing cache (populated by fetchPricing)
let dynamicPricing = null;

/**
 * Fetch latest pricing from LiteLLM. Returns map of model -> pricing.
 * Falls back to hardcoded pricing on failure.
 */
export async function fetchPricing() {
  if (dynamicPricing) return dynamicPricing;

  try {
    const data = await new Promise((resolve, reject) => {
      const req = https.get('https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json', (res) => {
        let body = '';
        res.on('data', chunk => body += chunk);
        res.on('end', () => {
          try { resolve(JSON.parse(body)); } catch { reject(new Error('Bad JSON')); }
        });
      });
      req.on('error', reject);
      req.setTimeout(5000, () => { req.destroy(); reject(new Error('Timeout')); });
    });

    // Parse LiteLLM format into our format
    const pricing = {};
    for (const [key, info] of Object.entries(data)) {
      if (!key.includes('claude')) continue;

      // LiteLLM uses per-token pricing, we use per-million
      const inputPerM = (info.input_cost_per_token || 0) * 1_000_000;
      const outputPerM = (info.output_cost_per_token || 0) * 1_000_000;
      const cacheReadPerM = (info.cache_read_input_token_cost || 0) * 1_000_000;
      const cacheWritePerM = (info.cache_creation_input_token_cost || 0) * 1_000_000;

      if (inputPerM > 0) {
        pricing[key] = { input: inputPerM, output: outputPerM, cacheWrite: cacheWritePerM || inputPerM * 1.25, cacheRead: cacheReadPerM || inputPerM * 0.1 };
      }
    }

    if (Object.keys(pricing).length > 0) {
      dynamicPricing = pricing;
      return pricing;
    }
  } catch {
    // Fall through to fallback
  }

  dynamicPricing = FALLBACK_PRICING;
  return FALLBACK_PRICING;
}

const PRICING = FALLBACK_PRICING; // Sync access for calculateCost

function getPricing(modelName) {
  const source = dynamicPricing || PRICING;
  // Try exact match
  if (source[modelName]) return source[modelName];
  // Try with claude/ prefix (LiteLLM uses "claude/claude-opus-4-6" format)
  const altKey = 'claude/' + modelName;
  if (source[altKey]) return source[altKey];
  // Infer from name
  if (modelName.includes('haiku')) return source['claude-haiku-4-5-20251001'] || PRICING['claude-haiku-4-5-20251001'];
  if (modelName.includes('sonnet')) return source['claude-sonnet-4-6'] || PRICING['claude-sonnet-4-6'];
  if (modelName.includes('opus')) return source['claude-opus-4-6'] || PRICING['claude-opus-4-6'];
  return PRICING['default'];
}

function calculateCost(modelName, tokens) {
  const pricing = getPricing(modelName);
  const input = (tokens.inputTokens || 0) / 1_000_000 * pricing.input;
  const output = (tokens.outputTokens || 0) / 1_000_000 * pricing.output;
  const cacheWrite = (tokens.cacheCreationInputTokens || tokens.cacheCreationTokens || 0) / 1_000_000 * pricing.cacheWrite;
  const cacheRead = (tokens.cacheReadInputTokens || tokens.cacheReadTokens || 0) / 1_000_000 * pricing.cacheRead;
  return { input, output, cacheWrite, cacheRead, total: input + output + cacheWrite + cacheRead };
}

+ export function analyzeUsage(statsCache, sessionMeta, days, dailyFromJSONL, modelFromJSONL) {
91
+ const cutoffDate = new Date();
92
+ cutoffDate.setDate(cutoffDate.getDate() - days);
93
+ const cutoffStr = cutoffDate.toISOString().split('T')[0];
94
+
95
+ // PRIMARY: Use JSONL aggregated data (has actual token counts, we calculate costs)
96
+ if (dailyFromJSONL && dailyFromJSONL.length > 0) {
97
+ return analyzeFromJSONL(dailyFromJSONL, modelFromJSONL, sessionMeta, days, cutoffStr);
98
+ }
99
+
100
+ // FALLBACK: Use stats-cache (less detailed)
101
+ const dailyCosts = [];
102
+ let totalCost = 0;
103
+ let totalInput = 0;
104
+ let totalOutput = 0;
105
+ let totalCacheRead = 0;
106
+ let totalCacheWrite = 0;
107
+ let activeDays = 0;
108
+
109
+ if (statsCache?.modelUsage) {
110
+ // Calculate total from aggregate model usage
111
+ for (const [modelName, usage] of Object.entries(statsCache.modelUsage)) {
112
+ const cost = calculateCost(modelName, usage);
113
+ totalCost += cost.total;
114
+ totalInput += usage.inputTokens || 0;
115
+ totalOutput += usage.outputTokens || 0;
116
+ totalCacheRead += usage.cacheReadInputTokens || 0;
117
+ totalCacheWrite += usage.cacheCreationInputTokens || 0;
118
+ }
119
+ }
120
+
121
+ // Build daily view from dailyData
122
+ if (statsCache?.dailyData) {
123
+ for (const day of statsCache.dailyData) {
124
+ if (day.date < cutoffStr) continue;
125
+
126
+ let dayCost = 0;
127
+ let dayOutput = 0;
128
+ let dayCacheRead = 0;
129
+ const modelBreakdowns = [];
130
+
131
+ for (const model of day.models) {
132
+ const tokens = model.tokens || {};
133
+ const cost = calculateCost(model.modelName, {
134
+ inputTokens: tokens.inputTokens || tokens,
135
+ outputTokens: tokens.outputTokens || 0,
136
+ cacheCreationInputTokens: tokens.cacheCreationInputTokens || 0,
137
+ cacheReadInputTokens: tokens.cacheReadInputTokens || 0,
138
+ });
139
+
140
+ // If tokens is just a number (total), estimate breakdown
141
+ if (typeof tokens === 'number') {
142
+ dayCost += tokens / 1_000_000 * 0.50; // Rough estimate at cache read rate
143
+ dayCacheRead += tokens;
144
+ } else {
145
+ dayCost += cost.total;
146
+ dayOutput += tokens.outputTokens || 0;
147
+ dayCacheRead += tokens.cacheReadInputTokens || 0;
148
+ }
149
+
150
+ modelBreakdowns.push({
151
+ model: model.modelName,
152
+ cost: typeof tokens === 'number' ? tokens / 1_000_000 * 0.50 : cost.total,
153
+ tokens: typeof tokens === 'number' ? tokens : {
154
+ input: tokens.inputTokens || 0,
155
+ output: tokens.outputTokens || 0,
156
+ cacheRead: tokens.cacheReadInputTokens || 0,
157
+ cacheWrite: tokens.cacheCreationInputTokens || 0,
158
+ },
159
+ });
160
+ }
161
+
162
+ if (dayCost > 0.01) activeDays++;
163
+
164
+ const ratio = dayOutput > 0 ? Math.round(dayCacheRead / dayOutput) : 0;
165
+
166
+ dailyCosts.push({
167
+ date: day.date,
168
+ cost: dayCost,
169
+ outputTokens: dayOutput,
170
+ cacheReadTokens: dayCacheRead,
171
+ cacheOutputRatio: ratio,
172
+ messageCount: day.messageCount,
173
+ sessionCount: day.sessionCount,
174
+ toolCallCount: day.toolCallCount,
175
+ models: modelBreakdowns,
176
+ });
177
+ }
178
+ }
179
+
180
+ // Session analysis
181
+ const recentSessions = sessionMeta.filter(s => {
182
+ if (!s.startTime) return true;
183
+ return s.startTime >= cutoffStr;
184
+ });
185
+
186
+ const totalSessions = recentSessions.length;
187
+ const avgSessionDuration = totalSessions > 0
188
+ ? recentSessions.reduce((sum, s) => sum + s.durationMinutes, 0) / totalSessions
189
+ : 0;
190
+
191
+ const totalLinesAdded = recentSessions.reduce((sum, s) => sum + s.linesAdded, 0);
192
+ const totalLinesRemoved = recentSessions.reduce((sum, s) => sum + s.linesRemoved, 0);
193
+ const totalFilesModified = recentSessions.reduce((sum, s) => sum + s.filesModified, 0);
194
+
195
+ // Tool usage aggregation
196
+ const toolAgg = {};
197
+ for (const session of recentSessions) {
198
+ for (const [tool, count] of Object.entries(session.toolCounts || {})) {
199
+ toolAgg[tool] = (toolAgg[tool] || 0) + count;
200
+ }
201
+ }
202
+
203
+ // Model cost breakdown
204
+ const modelCosts = {};
205
+ for (const day of dailyCosts) {
206
+ for (const m of day.models) {
207
+ const name = cleanModelName(m.model);
208
+ if (!modelCosts[name]) modelCosts[name] = 0;
209
+ modelCosts[name] += m.cost;
210
+ }
211
+ }
212
+
213
+ const periodCost = dailyCosts.reduce((sum, d) => sum + d.cost, 0);
214
+ const avgDailyCost = activeDays > 0 ? periodCost / activeDays : 0;
215
+ const peakDay = dailyCosts.reduce((max, d) => d.cost > (max?.cost || 0) ? d : max, null);
216
+
217
+ return {
218
+ periodDays: days,
219
+ activeDays,
220
+ totalCost: periodCost,
221
+ avgDailyCost,
222
+ medianDailyCost: median(dailyCosts.filter(d => d.cost > 0.01).map(d => d.cost)),
223
+ peakDay,
224
+ dailyCosts,
225
+ modelCosts,
226
+ sessions: {
227
+ total: totalSessions,
228
+ avgDurationMinutes: avgSessionDuration,
229
+ totalLinesAdded,
230
+ totalLinesRemoved,
231
+ totalFilesModified,
232
+ },
233
+ toolUsage: toolAgg,
234
+ totals: {
235
+ inputTokens: totalInput,
236
+ outputTokens: totalOutput,
237
+ cacheReadTokens: totalCacheRead,
238
+ cacheWriteTokens: totalCacheWrite,
239
+ },
240
+ };
241
+ }
242
+
243
+ function cleanModelName(name) {
244
+ return (name || 'unknown')
245
+ .replace('claude-', '')
246
+ .replace(/-20\d{6}$/, '') // Remove date suffixes like -20251001
247
+ .replace(/^(opus|sonnet|haiku)-(\d+)-(\d+)$/, '$1 $2.$3') // opus-4-6 -> opus 4.6
248
+ .replace(/^(opus|sonnet|haiku)-(\d+)$/, '$1 $2') // opus-4 -> opus 4
249
+ .replace(/^(\w)/, c => c.toUpperCase()); // Capitalize first letter
250
+ }
251
+
252
+ function median(arr) {
253
+ if (arr.length === 0) return 0;
254
+ const sorted = [...arr].sort((a, b) => a - b);
255
+ const mid = Math.floor(sorted.length / 2);
256
+ return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
257
+ }
258
+
259
+ function analyzeFromJSONL(dailyFromJSONL, modelFromJSONL, sessionMeta, days, cutoffStr) {
260
+ const filtered = dailyFromJSONL.filter(d => d.date >= cutoffStr);
261
+
262
+ // Calculate costs from token counts (costUSD is 0 for Max plan users)
263
+ const dailyCosts = filtered.map(d => {
264
+ let dayCost = 0;
265
+ const modelBreakdowns = [];
266
+
267
+ for (const [modelName, m] of Object.entries(d.models)) {
268
+ const cost = calculateCost(modelName, {
269
+ inputTokens: m.inputTokens,
270
+ outputTokens: m.outputTokens,
271
+ cacheCreationTokens: m.cacheCreationTokens,
272
+ cacheReadTokens: m.cacheReadTokens,
273
+ });
274
+ dayCost += cost.total;
275
+ modelBreakdowns.push({
276
+ model: modelName,
277
+ cost: cost.total,
278
+ tokens: {
279
+ input: m.inputTokens,
280
+ output: m.outputTokens,
281
+ cacheRead: m.cacheReadTokens,
282
+ cacheWrite: m.cacheCreationTokens,
283
+ },
284
+ });
285
+ }
286
+
287
+ return {
288
+ date: d.date,
289
+ cost: dayCost,
290
+ outputTokens: d.outputTokens,
291
+ cacheReadTokens: d.cacheReadTokens,
292
+ cacheOutputRatio: d.cacheOutputRatio,
293
+ messageCount: d.messageCount,
294
+ sessionCount: d.sessionCount,
295
+ models: modelBreakdowns,
296
+ };
297
+ });
298
+
299
+ const activeDays = dailyCosts.filter(d => d.cost > 0.01).length;
300
+
301
+ // Model cost breakdown — from filtered daily data (must match period)
302
+ const modelCosts = {};
303
+ for (const day of dailyCosts) {
304
+ for (const m of day.models) {
305
+ const name = cleanModelName(m.model);
306
+ if (!modelCosts[name]) modelCosts[name] = 0;
307
+ modelCosts[name] += m.cost;
308
+ }
309
+ }
310
+
311
+ // Session analysis
312
+ const recentSessions = (sessionMeta || []).filter(s => {
313
+ if (!s.startTime) return true;
314
+ return s.startTime >= cutoffStr;
315
+ });
316
+
317
+ const totalSessions = recentSessions.length;
318
+ const avgSessionDuration = totalSessions > 0
319
+ ? recentSessions.reduce((sum, s) => sum + s.durationMinutes, 0) / totalSessions
320
+ : 0;
321
+
322
+ const periodCost = dailyCosts.reduce((sum, d) => sum + d.cost, 0);
323
+ const avgDailyCost = activeDays > 0 ? periodCost / activeDays : 0;
324
+ const peakDay = dailyCosts.reduce((max, d) => d.cost > (max?.cost || 0) ? d : max, null);
325
+
326
+ // Totals
327
+ const totalInput = filtered.reduce((s, d) => s + d.inputTokens, 0);
328
+ const totalOutput = filtered.reduce((s, d) => s + d.outputTokens, 0);
329
+ const totalCacheRead = filtered.reduce((s, d) => s + d.cacheReadTokens, 0);
330
+ const totalCacheWrite = filtered.reduce((s, d) => s + d.cacheCreationTokens, 0);
331
+
332
+ return {
333
+ periodDays: days,
334
+ activeDays,
335
+ totalCost: periodCost,
336
+ avgDailyCost,
337
+ medianDailyCost: median(dailyCosts.filter(d => d.cost > 0.01).map(d => d.cost)),
338
+ peakDay,
339
+ dailyCosts,
340
+ modelCosts,
341
+ sessions: {
342
+ total: totalSessions,
343
+ avgDurationMinutes: avgSessionDuration,
344
+ totalLinesAdded: recentSessions.reduce((s, x) => s + x.linesAdded, 0),
345
+ totalLinesRemoved: recentSessions.reduce((s, x) => s + x.linesRemoved, 0),
346
+ totalFilesModified: recentSessions.reduce((s, x) => s + x.filesModified, 0),
347
+ },
348
+ totals: {
349
+ inputTokens: totalInput,
350
+ outputTokens: totalOutput,
351
+ cacheReadTokens: totalCacheRead,
352
+ cacheWriteTokens: totalCacheWrite,
353
+ },
354
+ };
355
+ }
356
+
357
+ export { PRICING, calculateCost, cleanModelName };