@vibescore/tracker 0.0.8 → 0.1.0

package/README.md CHANGED
@@ -13,7 +13,7 @@ _Real-time AI Analytics for Codex CLI_
 
  [**English**](README.md) • [**中文说明**](README.zh-CN.md)
 
- [**Documentation**](docs/) • [**Dashboard**](dashboard/) • [**Backend API**](BACKEND_API.md) • [**Dashboard API**](docs/dashboard/api.md)
+ [**Documentation**](docs/) • [**Dashboard**](dashboard/) • [**Backend API**](BACKEND_API.md)
 
  <br/>
 
@@ -25,7 +25,7 @@ _Real-time AI Analytics for Codex CLI_
 
  ## 🌌 Overview
 
- **VibeScore** is an intelligent token usage tracking system designed specifically for macOS developers. It monitors Codex CLI output in real-time, transforming your **AI Output** into quantifiable metrics via a high-fidelity, **Matrix-themed** dashboard.
+ **VibeScore** is an intelligent token usage tracking system designed specifically for macOS developers. Through the all-new **Matrix-A Design System**, it provides a high-fidelity cyberpunk-style dashboard that transforms your **AI Output** into quantifiable metrics, supported by the **Neural Divergence Map** for real-time monitoring of multi-model compute distribution.
 
  > [!TIP] > **Core Index**: Our signature metric that reflects your flow state by analyzing token consumption rates and patterns.
 
@@ -40,9 +40,12 @@ We believe your code and thoughts are your own. VibeScore is built with strict p
 
  ## 🚀 Key Features
 
- - 📡 **Live Sniffer & Auto-Sync**: Real-time interception of Codex CLI pipes with **automatic background synchronization**. Once initialized, your tokens are tracked and synced without any manual commands.
- - 🧭 **Multi-source Ingestion**: Supports Codex CLI and Every Code (tagged as `source=every-code`) without modifying Every Code.
- - 📊 **Matrix Dashboard**: A high-performance React + Vite dashboard featuring heatmaps, trend charts, and live logs.
+ - 📡 **Auto-Sync**: Real-time interception of Codex CLI pipes with **automatic background synchronization**. Once initialized, your tokens are tracked and synced without any manual commands.
+ - 🧭 **Universal-Sync**: Native support for **Codex CLI**, **Every Code**, and the latest **Claude Code**. Whether it's GPT-4, Claude 3.5 Sonnet, or o1/Gemini, token consumption from all models is unified and counted.
+ - 📊 **Matrix Dashboard**: High-performance dashboard built with React + Vite, featuring the new **Matrix-A** design language.
+   - **Neural Divergence Map**: Visualize multi-engine load balancing and compute distribution.
+   - **Cost Intelligence**: Real-time, multi-dimensional cost breakdown and forecasting.
+   - **Smart Notifications**: Non-intrusive system-level alerts using a Golden (Gold/Amber) visual style for high-value information.
  - ⚡ **AI Analytics**: Deep analysis of Input/Output tokens, with dedicated tracking for Cached and Reasoning components.
  - 🔒 **Identity Core**: Robust authentication and permission management to secure your development data.
 
@@ -79,11 +82,13 @@ npx --yes @vibescore/tracker status
 
  - Codex CLI logs: `~/.codex/sessions/**/rollout-*.jsonl` (override with `CODEX_HOME`)
  - Every Code logs: `~/.code/sessions/**/rollout-*.jsonl` (override with `CODE_HOME`)
+ - Gemini CLI logs: `~/.gemini/tmp/**/chats/session-*.json` (override with `GEMINI_HOME`)
 
  ## 🔧 Environment Variables
 
  - `VIBESCORE_HTTP_TIMEOUT_MS`: CLI HTTP timeout in ms (default `20000`, `0` disables, clamped to `1000..120000`).
  - `VITE_VIBESCORE_HTTP_TIMEOUT_MS`: Dashboard request timeout in ms (default `15000`, `0` disables, clamped to `1000..30000`).
+ - `GEMINI_HOME`: Override Gemini CLI home (defaults to `~/.gemini`).
 
  ## 🧰 Troubleshooting
 
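As a concrete illustration of the documented timeout semantics (default when unset, `0` disables, otherwise clamped to the stated range), here is a minimal sketch; `resolveHttpTimeoutMs` is a hypothetical helper, not the package's actual implementation:

```js
// Hypothetical helper illustrating the documented VIBESCORE_HTTP_TIMEOUT_MS
// semantics; the real code inside @vibescore/tracker may differ.
function resolveHttpTimeoutMs(raw, dflt = 20000, min = 1000, max = 120000) {
  if (raw == null || raw === '') return dflt; // unset -> default
  const n = Number(raw);
  if (!Number.isFinite(n)) return dflt;       // invalid -> default
  if (n === 0) return 0;                      // 0 disables the timeout
  return Math.min(max, Math.max(min, n));     // otherwise clamp to min..max
}

// resolveHttpTimeoutMs(undefined) -> 20000
// resolveHttpTimeoutMs('0')       -> 0
// resolveHttpTimeoutMs('500')     -> 1000
// resolveHttpTimeoutMs('999999')  -> 120000
```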
package/README.zh-CN.md CHANGED
@@ -13,7 +13,7 @@ _Real-time AI analytics for Codex CLI_
 
  [**English**](README.md) • [**中文说明**](README.zh-CN.md)
 
- [**Documentation**](docs/) • [**Dashboard**](dashboard/) • [**Backend API**](BACKEND_API.md) • [**Dashboard API**](docs/dashboard/api.md)
+ [**Documentation**](docs/) • [**Dashboard**](dashboard/) • [**Backend API**](BACKEND_API.md)
 
  <br/>
 
@@ -25,7 +25,7 @@ _Real-time AI analytics for Codex CLI_
 
  ## 🌌 Overview
 
- **VibeScore** is an intelligent token usage tracking system designed specifically for macOS developers. It monitors Codex CLI output in real time and turns your **AI Output** into quantifiable metrics through a highly visual **Matrix**-style dashboard.
+ **VibeScore** is an intelligent token usage tracking system designed specifically for macOS developers. Through the all-new **Matrix-A Design System**, it provides a highly visual cyberpunk-style dashboard that turns your **AI Output** into quantifiable metrics, and supports real-time monitoring of multi-model compute distribution via the **Neural Divergence Map**.
 
  > [!TIP] > **Core Index**: Our signature metric, which reflects your development flow state by analyzing token consumption rates and patterns.
 
@@ -41,8 +41,11 @@ _Real-time AI analytics for Codex CLI_
 
  ## 🚀 Key Features
 
  - 📡 **Automatic Sniffing & Sync (Auto-Sync)**: Listens to Codex CLI pipes in real time with **fully automatic background synchronization**. Once initialized, your token output is tracked and synced automatically, with no manual scripts.
- - 🧭 **Multi-source Ingestion**: Supports Codex CLI and Every Code (tagged as `source=every-code`) without modifying the Every Code client.
- - 📊 **Matrix Dashboard**: A high-performance React + Vite dashboard with heatmaps, trend charts, and live logs.
+ - 🧭 **Universal-Sync**: Native support for **Codex CLI**, **Every Code**, and the latest **Claude Code**. Whether it is GPT-4, Claude 3.5 Sonnet, or o1/Gemini, token consumption from every model is captured and counted in one place.
+ - 📊 **Matrix Dashboard**: A high-performance React + Vite dashboard using the new **Matrix-A** design language.
+   - **Neural Divergence Map**: Visualizes multi-engine load balancing and gives an at-a-glance view of compute distribution.
+   - **Cost Intelligence**: Real-time, multi-dimensional cost breakdown and forecasting.
+   - **Smart Notifications**: Non-intrusive system-level notifications that use a Gold/Amber visual style to convey high-value information.
  - ⚡ **AI Analytics**: Deep analysis of Input/Output tokens, with separate monitoring of the Cached and Reasoning components.
  - 🔒 **Identity Core**: Complete authentication and permission management to protect your development data.
 
@@ -79,11 +82,13 @@ npx --yes @vibescore/tracker status
 
  - Codex CLI logs: `~/.codex/sessions/**/rollout-*.jsonl` (override with `CODEX_HOME`)
  - Every Code logs: `~/.code/sessions/**/rollout-*.jsonl` (override with `CODE_HOME`)
+ - Gemini CLI logs: `~/.gemini/tmp/**/chats/session-*.json` (override with `GEMINI_HOME`)
 
  ## 🔧 Environment Variables
 
  - `VIBESCORE_HTTP_TIMEOUT_MS`: CLI request timeout in ms (default `20000`, `0` disables, range `1000..120000`).
  - `VITE_VIBESCORE_HTTP_TIMEOUT_MS`: Dashboard request timeout in ms (default `15000`, `0` disables, range `1000..30000`).
+ - `GEMINI_HOME`: Overrides the Gemini CLI home (defaults to `~/.gemini`).
 
  ## 🧰 Troubleshooting
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@vibescore/tracker",
-   "version": "0.0.8",
+   "version": "0.1.0",
    "description": "Codex CLI token usage tracker (macOS-first, notify-driven).",
    "license": "MIT",
    "publishConfig": {
@@ -387,6 +387,10 @@ function spawnInitSync({ trackerBinPath, packageName }) {
    stdio: 'ignore',
    env: process.env
  });
+ child.on('error', (err) => {
+   const msg = err && err.message ? err.message : 'unknown error';
+   process.stderr.write(`Initial sync spawn failed: ${msg}\n`);
+ });
  child.unref();
  }
 
@@ -4,7 +4,14 @@ const fs = require('node:fs/promises');
  const cp = require('node:child_process');
 
  const { ensureDir, readJson, writeJson, openLock } = require('../lib/fs');
- const { listRolloutFiles, listClaudeProjectFiles, parseRolloutIncremental, parseClaudeIncremental } = require('../lib/rollout');
+ const {
+   listRolloutFiles,
+   listClaudeProjectFiles,
+   listGeminiSessionFiles,
+   parseRolloutIncremental,
+   parseClaudeIncremental,
+   parseGeminiIncremental
+ } = require('../lib/rollout');
  const { drainQueueToCloud } = require('../lib/uploader');
  const { createProgress, renderBar, formatNumber, formatBytes } = require('../lib/progress');
  const { syncHeartbeat } = require('../lib/vibescore-api');
@@ -45,6 +52,8 @@ async function cmdSync(argv) {
    const codexHome = process.env.CODEX_HOME || path.join(home, '.codex');
    const codeHome = process.env.CODE_HOME || path.join(home, '.code');
    const claudeProjectsDir = path.join(home, '.claude', 'projects');
+   const geminiHome = process.env.GEMINI_HOME || path.join(home, '.gemini');
+   const geminiTmpDir = path.join(geminiHome, 'tmp');
 
    const sources = [
      { source: 'codex', sessionsDir: path.join(codexHome, 'sessions') },
@@ -104,6 +113,29 @@ async function cmdSync(argv) {
    });
  }
 
+ const geminiFiles = await listGeminiSessionFiles(geminiTmpDir);
+ let geminiResult = { filesProcessed: 0, eventsAggregated: 0, bucketsQueued: 0 };
+ if (geminiFiles.length > 0) {
+   if (progress?.enabled) {
+     progress.start(`Parsing Gemini ${renderBar(0)} 0/${formatNumber(geminiFiles.length)} files | buckets 0`);
+   }
+   geminiResult = await parseGeminiIncremental({
+     sessionFiles: geminiFiles,
+     cursors,
+     queuePath,
+     onProgress: (p) => {
+       if (!progress?.enabled) return;
+       const pct = p.total > 0 ? p.index / p.total : 1;
+       progress.update(
+         `Parsing Gemini ${renderBar(pct)} ${formatNumber(p.index)}/${formatNumber(p.total)} files | buckets ${formatNumber(
+           p.bucketsQueued
+         )}`
+       );
+     },
+     source: 'gemini'
+   });
+ }
+
  cursors.updatedAt = new Date().toISOString();
  await writeJson(cursorsPath, cursors);
@@ -229,8 +261,8 @@ async function cmdSync(argv) {
    });
 
    if (!opts.auto) {
-     const totalParsed = parseResult.filesProcessed + claudeResult.filesProcessed;
-     const totalBuckets = parseResult.bucketsQueued + claudeResult.bucketsQueued;
+     const totalParsed = parseResult.filesProcessed + claudeResult.filesProcessed + geminiResult.filesProcessed;
+     const totalBuckets = parseResult.bucketsQueued + claudeResult.bucketsQueued + geminiResult.bucketsQueued;
      process.stdout.write(
        [
          'Sync finished:',
@@ -44,6 +44,23 @@ async function listClaudeProjectFiles(projectsDir) {
    return out;
  }
 
+ async function listGeminiSessionFiles(tmpDir) {
+   const out = [];
+   const roots = await safeReadDir(tmpDir);
+   for (const root of roots) {
+     if (!root.isDirectory()) continue;
+     const chatsDir = path.join(tmpDir, root.name, 'chats');
+     const chats = await safeReadDir(chatsDir);
+     for (const entry of chats) {
+       if (!entry.isFile()) continue;
+       if (!entry.name.startsWith('session-') || !entry.name.endsWith('.json')) continue;
+       out.push(path.join(chatsDir, entry.name));
+     }
+   }
+   out.sort((a, b) => a.localeCompare(b));
+   return out;
+ }
+
  async function parseRolloutIncremental({ rolloutFiles, cursors, queuePath, onProgress, source }) {
    await ensureDir(path.dirname(queuePath));
    let filesProcessed = 0;
@@ -181,6 +198,78 @@ async function parseClaudeIncremental({ projectFiles, cursors, queuePath, onProg
    return { filesProcessed, eventsAggregated, bucketsQueued };
  }
 
+ async function parseGeminiIncremental({ sessionFiles, cursors, queuePath, onProgress, source }) {
+   await ensureDir(path.dirname(queuePath));
+   let filesProcessed = 0;
+   let eventsAggregated = 0;
+
+   const cb = typeof onProgress === 'function' ? onProgress : null;
+   const files = Array.isArray(sessionFiles) ? sessionFiles : [];
+   const totalFiles = files.length;
+   const hourlyState = normalizeHourlyState(cursors?.hourly);
+   const touchedBuckets = new Set();
+   const defaultSource = normalizeSourceInput(source) || 'gemini';
+
+   if (!cursors.files || typeof cursors.files !== 'object') {
+     cursors.files = {};
+   }
+
+   for (let idx = 0; idx < files.length; idx++) {
+     const entry = files[idx];
+     const filePath = typeof entry === 'string' ? entry : entry?.path;
+     if (!filePath) continue;
+     const fileSource =
+       typeof entry === 'string' ? defaultSource : normalizeSourceInput(entry?.source) || defaultSource;
+     const st = await fs.stat(filePath).catch(() => null);
+     if (!st || !st.isFile()) continue;
+
+     const key = filePath;
+     const prev = cursors.files[key] || null;
+     const inode = st.ino || 0;
+     let startIndex = prev && prev.inode === inode ? Number(prev.lastIndex || -1) : -1;
+     let lastTotals = prev && prev.inode === inode ? prev.lastTotals || null : null;
+     let lastModel = prev && prev.inode === inode ? prev.lastModel || null : null;
+
+     const result = await parseGeminiFile({
+       filePath,
+       startIndex,
+       lastTotals,
+       lastModel,
+       hourlyState,
+       touchedBuckets,
+       source: fileSource
+     });
+
+     cursors.files[key] = {
+       inode,
+       lastIndex: result.lastIndex,
+       lastTotals: result.lastTotals,
+       lastModel: result.lastModel,
+       updatedAt: new Date().toISOString()
+     };
+
+     filesProcessed += 1;
+     eventsAggregated += result.eventsAggregated;
+
+     if (cb) {
+       cb({
+         index: idx + 1,
+         total: totalFiles,
+         filePath,
+         filesProcessed,
+         eventsAggregated,
+         bucketsQueued: touchedBuckets.size
+       });
+     }
+   }
+
+   const bucketsQueued = await enqueueTouchedBuckets({ queuePath, hourlyState, touchedBuckets });
+   hourlyState.updatedAt = new Date().toISOString();
+   cursors.hourly = hourlyState;
+
+   return { filesProcessed, eventsAggregated, bucketsQueued };
+ }
+
  async function parseRolloutFile({
    filePath,
@@ -296,35 +385,137 @@ async function parseClaudeFile({ filePath, startOffset, hourlyState, touchedBuck
    return { endOffset, eventsAggregated };
  }
 
+ async function parseGeminiFile({
+   filePath,
+   startIndex,
+   lastTotals,
+   lastModel,
+   hourlyState,
+   touchedBuckets,
+   source
+ }) {
+   const raw = await fs.readFile(filePath, 'utf8').catch(() => '');
+   if (!raw.trim()) return { lastIndex: startIndex, lastTotals, lastModel, eventsAggregated: 0 };
+
+   let session;
+   try {
+     session = JSON.parse(raw);
+   } catch (_e) {
+     return { lastIndex: startIndex, lastTotals, lastModel, eventsAggregated: 0 };
+   }
+
+   const messages = Array.isArray(session?.messages) ? session.messages : [];
+   if (startIndex >= messages.length) {
+     startIndex = -1;
+     lastTotals = null;
+     lastModel = null;
+   }
+
+   let eventsAggregated = 0;
+   let model = typeof lastModel === 'string' ? lastModel : null;
+   let totals = lastTotals && typeof lastTotals === 'object' ? lastTotals : null;
+   const begin = Number.isFinite(startIndex) ? startIndex + 1 : 0;
+
+   for (let idx = begin; idx < messages.length; idx++) {
+     const msg = messages[idx];
+     if (!msg || typeof msg !== 'object') continue;
+
+     const normalizedModel = normalizeModelInput(msg.model);
+     if (normalizedModel) model = normalizedModel;
+
+     const timestamp = typeof msg.timestamp === 'string' ? msg.timestamp : null;
+     const currentTotals = normalizeGeminiTokens(msg.tokens);
+     if (!timestamp || !currentTotals) {
+       totals = currentTotals || totals;
+       continue;
+     }
+
+     const delta = diffGeminiTotals(currentTotals, totals);
+     if (!delta || isAllZeroUsage(delta)) {
+       totals = currentTotals;
+       continue;
+     }
+
+     const bucketStart = toUtcHalfHourStart(timestamp);
+     if (!bucketStart) {
+       totals = currentTotals;
+       continue;
+     }
+
+     const bucket = getHourlyBucket(hourlyState, source, model, bucketStart);
+     addTotals(bucket.totals, delta);
+     touchedBuckets.add(bucketKey(source, model, bucketStart));
+     eventsAggregated += 1;
+     totals = currentTotals;
+   }
+
+   return {
+     lastIndex: messages.length - 1,
+     lastTotals: totals,
+     lastModel: model,
+     eventsAggregated
+   };
+ }
+
  async function enqueueTouchedBuckets({ queuePath, hourlyState, touchedBuckets }) {
    if (!touchedBuckets || touchedBuckets.size === 0) return 0;
 
- const toAppend = [];
+ const touchedGroups = new Set();
    for (const bucketStart of touchedBuckets) {
-     const parsedKey = parseBucketKey(bucketStart);
-     const source = parsedKey.source || DEFAULT_SOURCE;
-     const model = parsedKey.model || DEFAULT_MODEL;
-     const hourStart = parsedKey.hourStart;
-     const bucket =
-       hourlyState.buckets[bucketKey(source, model, hourStart)] || hourlyState.buckets[bucketStart];
+     const parsed = parseBucketKey(bucketStart);
+     const hourStart = parsed.hourStart;
+     if (!hourStart) continue;
+     touchedGroups.add(groupBucketKey(parsed.source, hourStart));
+ }
+ if (touchedGroups.size === 0) return 0;
+
+ const grouped = new Map();
+ for (const [key, bucket] of Object.entries(hourlyState.buckets || {})) {
      if (!bucket || !bucket.totals) continue;
-     const key = totalsKey(bucket.totals);
-     if (bucket.queuedKey === key) continue;
+     const parsed = parseBucketKey(key);
+     const hourStart = parsed.hourStart;
+     if (!hourStart) continue;
+     const groupKey = groupBucketKey(parsed.source, hourStart);
+     if (!touchedGroups.has(groupKey)) continue;
+
+     let group = grouped.get(groupKey);
+     if (!group) {
+       group = {
+         source: normalizeSourceInput(parsed.source) || DEFAULT_SOURCE,
+         hourStart,
+         models: new Set(),
+         totals: initTotals()
+       };
+       grouped.set(groupKey, group);
+     }
+     group.models.add(parsed.model || DEFAULT_MODEL);
+     addTotals(group.totals, bucket.totals);
+ }
+
+ const toAppend = [];
+ const groupQueued = hourlyState.groupQueued && typeof hourlyState.groupQueued === 'object' ? hourlyState.groupQueued : {};
+ for (const group of grouped.values()) {
+   const model = group.models.size === 1 ? [...group.models][0] : DEFAULT_MODEL;
+   const key = totalsKey(group.totals);
+   const groupKey = groupBucketKey(group.source, group.hourStart);
+   if (groupQueued[groupKey] === key) continue;
      toAppend.push(
        JSON.stringify({
-         source,
+         source: group.source,
          model,
-         hour_start: hourStart,
-         input_tokens: bucket.totals.input_tokens,
-         cached_input_tokens: bucket.totals.cached_input_tokens,
-         output_tokens: bucket.totals.output_tokens,
-         reasoning_output_tokens: bucket.totals.reasoning_output_tokens,
-         total_tokens: bucket.totals.total_tokens
+         hour_start: group.hourStart,
+         input_tokens: group.totals.input_tokens,
+         cached_input_tokens: group.totals.cached_input_tokens,
+         output_tokens: group.totals.output_tokens,
+         reasoning_output_tokens: group.totals.reasoning_output_tokens,
+         total_tokens: group.totals.total_tokens
        })
      );
-     bucket.queuedKey = key;
+     groupQueued[groupKey] = key;
    }
 
+ hourlyState.groupQueued = groupQueued;
+
    if (toAppend.length > 0) {
      await fs.appendFile(queuePath, toAppend.join('\n') + '\n', 'utf8');
    }
@@ -335,15 +526,30 @@ async function enqueueTouchedBuckets({ queuePath, hourlyState, touchedBuckets })
  function normalizeHourlyState(raw) {
    const state = raw && typeof raw === 'object' ? raw : {};
    const version = Number(state.version || 1);
+   const rawBuckets = state.buckets && typeof state.buckets === 'object' ? state.buckets : {};
+   const buckets = {};
+   const groupQueued = {};
+
    if (!Number.isFinite(version) || version < 2) {
+     for (const [key, value] of Object.entries(rawBuckets)) {
+       const parsed = parseBucketKey(key);
+       const hourStart = parsed.hourStart;
+       if (!hourStart) continue;
+       const source = normalizeSourceInput(parsed.source) || DEFAULT_SOURCE;
+       const normalizedKey = bucketKey(source, DEFAULT_MODEL, hourStart);
+       buckets[normalizedKey] = value;
+       if (value?.queuedKey) {
+         groupQueued[groupBucketKey(source, hourStart)] = value.queuedKey;
+       }
+     }
      return {
-       version: 2,
-       buckets: {},
-       updatedAt: null
+       version: 3,
+       buckets,
+       groupQueued,
+       updatedAt: typeof state.updatedAt === 'string' ? state.updatedAt : null
      };
    }
-   const rawBuckets = state.buckets && typeof state.buckets === 'object' ? state.buckets : {};
-   const buckets = {};
+
    for (const [key, value] of Object.entries(rawBuckets)) {
      const parsed = parseBucketKey(key);
      const hourStart = parsed.hourStart;
@@ -351,9 +557,14 @@ function normalizeHourlyState(raw) {
      const normalizedKey = bucketKey(parsed.source, parsed.model, hourStart);
      buckets[normalizedKey] = value;
    }
+
+   const existingGroupQueued =
+     state.groupQueued && typeof state.groupQueued === 'object' ? state.groupQueued : {};
+
    return {
-     version: 2,
+     version: 3,
      buckets,
+     groupQueued: version >= 3 ? existingGroupQueued : {},
      updatedAt: typeof state.updatedAt === 'string' ? state.updatedAt : null
    };
  }
@@ -434,6 +645,11 @@ function bucketKey(source, model, hourStart) {
    return `${safeSource}${BUCKET_SEPARATOR}${safeModel}${BUCKET_SEPARATOR}${hourStart}`;
  }
 
+ function groupBucketKey(source, hourStart) {
+   const safeSource = normalizeSourceInput(source) || DEFAULT_SOURCE;
+   return `${safeSource}${BUCKET_SEPARATOR}${hourStart}`;
+ }
+
  function parseBucketKey(key) {
    if (typeof key !== 'string') return { source: DEFAULT_SOURCE, model: DEFAULT_MODEL, hourStart: '' };
    const first = key.indexOf(BUCKET_SEPARATOR);
@@ -461,6 +677,54 @@ function normalizeModelInput(value) {
    return trimmed.length > 0 ? trimmed : null;
  }
 
+ function normalizeGeminiTokens(tokens) {
+   if (!tokens || typeof tokens !== 'object') return null;
+   const input = toNonNegativeInt(tokens.input);
+   const cached = toNonNegativeInt(tokens.cached);
+   const output = toNonNegativeInt(tokens.output);
+   const tool = toNonNegativeInt(tokens.tool);
+   const thoughts = toNonNegativeInt(tokens.thoughts);
+   const total = toNonNegativeInt(tokens.total);
+
+   return {
+     input_tokens: input,
+     cached_input_tokens: cached,
+     output_tokens: output + tool,
+     reasoning_output_tokens: thoughts,
+     total_tokens: total
+   };
+ }
+
+ function sameGeminiTotals(a, b) {
+   if (!a || !b) return false;
+   return (
+     a.input_tokens === b.input_tokens &&
+     a.cached_input_tokens === b.cached_input_tokens &&
+     a.output_tokens === b.output_tokens &&
+     a.reasoning_output_tokens === b.reasoning_output_tokens &&
+     a.total_tokens === b.total_tokens
+   );
+ }
+
+ function diffGeminiTotals(current, previous) {
+   if (!current || typeof current !== 'object') return null;
+   if (!previous || typeof previous !== 'object') return current;
+   if (sameGeminiTotals(current, previous)) return null;
+
+   const totalReset = (current.total_tokens || 0) < (previous.total_tokens || 0);
+   if (totalReset) return current;
+
+   const delta = {
+     input_tokens: Math.max(0, (current.input_tokens || 0) - (previous.input_tokens || 0)),
+     cached_input_tokens: Math.max(0, (current.cached_input_tokens || 0) - (previous.cached_input_tokens || 0)),
+     output_tokens: Math.max(0, (current.output_tokens || 0) - (previous.output_tokens || 0)),
+     reasoning_output_tokens: Math.max(0, (current.reasoning_output_tokens || 0) - (previous.reasoning_output_tokens || 0)),
+     total_tokens: Math.max(0, (current.total_tokens || 0) - (previous.total_tokens || 0))
+   };
+
+   return isAllZeroUsage(delta) ? null : delta;
+ }
+
  function extractTokenCount(obj) {
    const payload = obj?.payload;
    if (!payload) return null;
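Because Gemini's counters are cumulative, `diffGeminiTotals` yields the per-message increment, and a shrinking `total_tokens` is treated as a counter reset whose current totals are counted whole. Expected behavior with invented numbers:

```js
// Invented numbers; the expected results follow from diffGeminiTotals above.
const prev = { input_tokens: 100, cached_input_tokens: 0, output_tokens: 50,
               reasoning_output_tokens: 0, total_tokens: 150 };
const grew = { input_tokens: 130, cached_input_tokens: 0, output_tokens: 60,
               reasoning_output_tokens: 0, total_tokens: 190 };
const reset = { input_tokens: 5, cached_input_tokens: 0, output_tokens: 2,
                reasoning_output_tokens: 0, total_tokens: 7 };

// diffGeminiTotals(grew, prev)  -> { input_tokens: 30, output_tokens: 10, total_tokens: 40, ... }
// diffGeminiTotals(reset, prev) -> reset itself (total shrank => counter reset, counted whole)
// diffGeminiTotals(prev, prev)  -> null (no change, nothing to aggregate)
```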
@@ -523,12 +787,16 @@ function normalizeUsage(u) {
  }
 
  function normalizeClaudeUsage(u) {
+   const inputTokens = toNonNegativeInt(u?.input_tokens);
+   const outputTokens = toNonNegativeInt(u?.output_tokens);
+   const hasTotal = u && Object.prototype.hasOwnProperty.call(u, 'total_tokens');
+   const totalTokens = hasTotal ? toNonNegativeInt(u?.total_tokens) : inputTokens + outputTokens;
    return {
-     input_tokens: toNonNegativeInt(u?.input_tokens),
+     input_tokens: inputTokens,
      cached_input_tokens: toNonNegativeInt(u?.cache_read_input_tokens),
-     output_tokens: toNonNegativeInt(u?.output_tokens),
+     output_tokens: outputTokens,
      reasoning_output_tokens: 0,
-     total_tokens: toNonNegativeInt(u?.input_tokens) + toNonNegativeInt(u?.output_tokens)
+     total_tokens: totalTokens
    };
  }
@@ -591,6 +859,8 @@ async function walkClaudeProjects(dir, out) {
  module.exports = {
    listRolloutFiles,
    listClaudeProjectFiles,
+   listGeminiSessionFiles,
    parseRolloutIncremental,
-   parseClaudeIncremental
+   parseClaudeIncremental,
+   parseGeminiIncremental
  };
@@ -78,7 +78,7 @@ async function readBatch(queuePath, startOffset, maxBuckets) {
    const model = normalizeModel(bucket?.model) || DEFAULT_MODEL;
    bucket.source = source;
    bucket.model = model;
-   bucketMap.set(bucketKey(source, model, hourStart), bucket);
+   bucketMap.set(bucketKey(source, hourStart), bucket);
    linesRead += 1;
    if (linesRead >= maxBuckets) break;
  }
@@ -97,8 +97,8 @@ async function safeFileSize(p) {
    }
  }
 
- function bucketKey(source, model, hourStart) {
-   return `${source}${BUCKET_SEPARATOR}${model}${BUCKET_SEPARATOR}${hourStart}`;
+ function bucketKey(source, hourStart) {
+   return `${source}${BUCKET_SEPARATOR}${hourStart}`;
  }
 
  function normalizeSource(value) {
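Dropping `model` from the uploader-side `bucketKey` mirrors the writer-side grouping: queue lines are now one per source + hour, and since `readBatch` inserts into `bucketMap` by that key, a later line for the same source and hour (re-queued after the group's totals changed) replaces the stale one within a batch instead of both being uploaded. Sketch with invented lines:

```js
// Two queue lines for the same source + hour (invented); last write wins:
// line 1: { source: 'gemini', hour_start: '2024-01-01T12:00:00Z', total_tokens: 150, ... }
// line 2: { source: 'gemini', hour_start: '2024-01-01T12:00:00Z', total_tokens: 180, ... }
// bucketMap.set(bucketKey('gemini', '2024-01-01T12:00:00Z'), line2) overwrites line 1,
// so only the freshest totals for that hour are uploaded.
```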