@tspappsen/elamax 1.2.6 → 1.2.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -162,7 +162,7 @@ When developing locally, `npm run dev` starts the daemon in watch mode, but it d
 
 ```bash
 # One-time install
-git clone https://github.com/burkeholland/max.git
+git clone https://github.com/lauraeus/max-assistant.git
 cd max
 npm install
 
@@ -242,11 +242,12 @@ Max-Watchdog is a second, ops-only Max instance that monitors and repairs the ma
 │ @MaxBot on Telegram │ │ Port 7778 │
 │ General-purpose AI │ │ @WatchdogBot │
 │ Skills, workers, etc. │ │ Ops-only: health, │
-└────────────────────────┘ │ restart, logs, shell │
+│ Usage ledger │ │ restart, logs, usage │
+└────────────────────────┘ │ reporting, shell │
 └─────────────────────────┘
 ```
 
-Two fully isolated instances: separate home directories, separate SQLite databases, separate bot tokens, separate ports, separate pm2 processes. Zero shared state.
+Two mostly isolated instances: separate home directories, separate primary SQLite databases, separate bot tokens, separate ports, and separate pm2 processes. For premium-usage reporting, the watchdog may open the main Max SQLite database in **read-only** mode via `MAIN_MAX_HOME` to query the append-only `request_usage` ledger. The watchdog never writes to main Max's DB.
 
 ### Watchdog setup
 
@@ -273,6 +274,8 @@ The watchdog has its own Copilot-powered AI session with these ops tools:
 | Tool | Description |
 |------|-------------|
 | `check_main_max` | Check if main Max is running (pm2 status + HTTP health) |
+| `get_main_usage_summary` | Summarize estimated request usage from main Max over `today`, `24h`, `7d`, or `30d` |
+| `get_main_premium_usage` | Show estimated premium usage plus recent premium-billed events from main Max |
 | `restart_main_max` | Restart the main Max pm2 process |
 | `read_main_logs` | Read the last N lines of main Max's daemon log |
 | `server_health` | Report hostname, uptime, memory, disk, load average |
@@ -281,6 +284,46 @@ The watchdog has its own Copilot-powered AI session with these ops tools:
 
 The watchdog does **not** have workers, skills, or long-term memory — it's purpose-built for ops.
 
+### Premium usage tracking
+
+Main Max now records a compact per-turn usage ledger in its local SQLite database. Each completed turn stores metadata such as:
+
+- selected `model`
+- routed `tier` (`fast`, `standard`, `premium`, or `null`)
+- routing `reason`
+- `billingMultiplier`
+- whether the turn was a routed premium turn and/or an estimated premium-billed turn
+- prompt/response character counts
+
+This ledger is intentionally **metadata-only** — it does not store raw prompt or response bodies for usage accounting.
+
+The watchdog reads this ledger from main Max in **read-only** mode and can answer questions like:
+
+- "How many premium requests did Max use today?"
+- "Which model consumed the most premium traffic this week?"
+- "Show me recent premium-billed events"
+
+Premium usage is reported as a **best-effort estimate** based on routing decisions and Copilot model metadata (`billing.multiplier`). It is useful for ops visibility, but it is **not** an authoritative GitHub billing ledger.
+
+### Example prompts
+
+You usually do **not** need to name the underlying tool. Ask in plain English and let Max-Watchdog choose the right action.
+
+Examples:
+
+- "Show premium usage today."
+- "How many premium requests did Max use in the last 24 hours?"
+- "Give me a 7-day usage summary."
+- "Which model consumed the most premium traffic this week?"
+- "Show recent premium-billed events."
+
+If you want to be more explicit, these also work:
+
+- "Run `get_main_usage_summary` for `today`."
+- "Run `get_main_usage_summary` for `7d`."
+- "Run `get_main_premium_usage` for `24h`."
+- "Run `get_main_premium_usage` for `today` with 10 recent events."
+
 ### Dual-instance pm2 deployment
 
 ```bash
@@ -313,5 +356,5 @@ Watchdog: "Main Max is back online (pid 4521)."
 | `MAX_PROFILE` | _(unset)_ | Profile name; `watchdog` for the ops instance |
 | `MAX_HOME` | `~/.max` or `~/.max-<profile>` | Override the home directory |
 | `MAIN_MAX_PM2_NAME` | `max` | pm2 process name of the main Max instance |
-| `MAIN_MAX_HOME` | `~/.max` | Home directory of the main Max instance |
+| `MAIN_MAX_HOME` | `~/.max` | Home directory of the main Max instance; the watchdog uses this to read logs and query the main usage ledger read-only |
 | `MAIN_MAX_API_PORT` | `7777` | HTTP API port of the main Max instance |
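The summary semantics described in the README section above can be sketched in plain JavaScript over an in-memory event list. This is illustrative only: the field names mirror the `request_usage` columns this release adds, but the real implementation aggregates in SQLite via `getUsageSummary`.

```javascript
// Illustrative only: compute the same counts the watchdog's usage summary
// reports, over an in-memory array instead of the SQLite ledger.
// The sample events are hypothetical.
const events = [
    { model: "gpt-4.1", isPremiumTier: false, billingMultiplier: 0 },
    { model: "claude-sonnet-4.6", isPremiumTier: true, billingMultiplier: 1 },
    { model: "claude-sonnet-4.6", isPremiumTier: false, billingMultiplier: 1 },
];

function summarize(evts) {
    const byModel = {};
    for (const e of evts) byModel[e.model] = (byModel[e.model] ?? 0) + 1;
    return {
        totalRequests: evts.length,
        premiumTierCount: evts.filter((e) => e.isPremiumTier).length,
        premiumBilledEstimateCount: evts.filter((e) => e.billingMultiplier > 0).length,
        byModel,
        estimated: true, // best-effort, not authoritative GitHub billing
    };
}

console.log(summarize(events));
// totalRequests: 3, premiumTierCount: 1, premiumBilledEstimateCount: 2
```

Note the distinction the README draws: a turn can be premium-billed (nonzero multiplier) without having been routed to the `premium` tier, which is why the two counts differ.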
@@ -8,7 +8,7 @@ const SYSTEM_PROMPT = `You are a message complexity classifier for an AI assista
 
 Tiers:
 - FAST: Greetings, thanks, acknowledgments, simple yes/no, trivial factual questions ("what time is it?", "hello", "thanks"), casual chat with no technical depth.
-- STANDARD: Coding tasks, file operations, tool usage requests, moderate reasoning, questions about technical topics, requests to create/check/manage things, anything involving code or development workflow.
+- STANDARD: Coding tasks, file operations, tool usage requests, moderate reasoning, questions about technical topics, requests to create/check/manage things, anything involving code or development workflow. Short operational/reporting requests like "show premium usage today", "run get_main_usage_summary for today", or "show recent premium-billed events" are ALWAYS STANDARD.
 - PREMIUM: Complex architecture decisions, deep analysis, multi-step reasoning, comparing trade-offs, detailed explanations of complex topics, debugging intricate issues, designing systems, strategic planning.
 
 Rules:
@@ -1,5 +1,24 @@
 import { CopilotClient } from "@github/copilot-sdk";
 let client;
+// Billing multiplier cache — populated at init, refreshed on reset
+const modelMultiplierCache = new Map();
+/** Populate the billing multiplier cache from the SDK model catalog. */
+export async function populateModelCache(c) {
+    try {
+        const models = await c.listModels();
+        modelMultiplierCache.clear();
+        for (const m of models) {
+            modelMultiplierCache.set(m.id, m.billing?.multiplier ?? 0);
+        }
+    }
+    catch (err) {
+        console.log(`[max] Failed to populate model cache: ${err instanceof Error ? err.message : err}`);
+    }
+}
+/** Get the billing multiplier for a model. Returns 0 if unknown. */
+export function getBillingMultiplier(modelId) {
+    return modelMultiplierCache.get(modelId) ?? 0;
+}
 export async function getClient() {
     if (!client) {
         client = new CopilotClient({
@@ -19,6 +38,7 @@ export async function resetClient() {
         catch { /* best-effort */ }
         client = undefined;
     }
+    modelMultiplierCache.clear();
     return getClient();
 }
 export async function stopClient() {
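The cache pattern introduced in this hunk can be exercised against a stub client. The stub and its model list below are hypothetical; only the `Map`-based cache and the `billing?.multiplier ?? 0` fallback mirror the shipped code.

```javascript
// Stub exercise of the billing-multiplier cache added in this hunk.
// The stub client is hypothetical; a real CopilotClient exposes
// listModels() with per-model billing metadata.
const modelMultiplierCache = new Map();

async function populateModelCache(c) {
    const models = await c.listModels();
    modelMultiplierCache.clear();
    for (const m of models) {
        modelMultiplierCache.set(m.id, m.billing?.multiplier ?? 0);
    }
}

function getBillingMultiplier(modelId) {
    return modelMultiplierCache.get(modelId) ?? 0;
}

const stubClient = {
    async listModels() {
        return [
            { id: "free-model" }, // no billing metadata -> multiplier 0
            { id: "premium-model", billing: { multiplier: 1 } },
        ];
    },
};

populateModelCache(stubClient).then(() => {
    console.log(getBillingMultiplier("premium-model")); // 1
    console.log(getBillingMultiplier("unknown-model")); // 0
});
```

In the shipped code the cache is filled during orchestrator init and cleared in `resetClient`, so multipliers are refreshed whenever the client is rebuilt.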
@@ -6,10 +6,11 @@ import { config, DEFAULT_MODEL } from "../config.js";
 import { loadMcpConfig } from "./mcp-config.js";
 import { getSkillDirectories } from "./skills.js";
 import { resetClient } from "./client.js";
-import { logConversation, getState, setState, deleteState, getMemorySummary, getRecentConversation } from "../store/db.js";
+import { logConversation, getState, setState, deleteState, getMemorySummary, getRecentConversation, logUsageEvent } from "../store/db.js";
 import { IS_WATCHDOG, INSTRUCTIONS_DIR, SESSIONS_DIR } from "../paths.js";
 import { resolveModel } from "./router.js";
 import { watchInstructions, seedDefaultInstructions } from "./workspace-instructions.js";
+import { getBillingMultiplier, populateModelCache } from "./client.js";
 const MAX_RETRIES = 3;
 const RECONNECT_DELAYS_MS = [1_000, 3_000, 10_000];
 const HEALTH_CHECK_INTERVAL_MS = 30_000;
@@ -244,6 +245,8 @@ export async function initOrchestrator(client) {
     catch (err) {
         console.log(`[max] Could not validate model (will use '${config.copilotModel}' as-is): ${err instanceof Error ? err.message : err}`);
     }
+    // Populate billing multiplier cache for usage tracking
+    await populateModelCache(client);
     console.log(`[max] Loading ${Object.keys(mcpServers).length} MCP server(s): ${Object.keys(mcpServers).join(", ") || "(none)"}`);
     console.log(`[max] Skill directories: ${skillDirectories.join(", ") || "(none)"}`);
     console.log(`[max] Persistent session mode — conversation history maintained by SDK`);
@@ -327,6 +330,7 @@ async function processQueue() {
             tier: null,
             switched: false,
             routerMode: "manual",
+            reason: "attachments",
         };
     }
     else {
@@ -406,6 +410,26 @@ export async function sendToOrchestrator(prompt, source, callback, options) {
                 logConversation("assistant", finalContent, sourceLabel);
             }
             catch { /* best-effort */ }
+            // Record usage event for premium tracking
+            try {
+                const route = getLastRouteResult();
+                if (route) {
+                    const multiplier = getBillingMultiplier(route.model);
+                    logUsageEvent({
+                        source: sourceLabel,
+                        model: route.model,
+                        tier: route.tier,
+                        routerMode: route.routerMode,
+                        reason: route.reason,
+                        billingMultiplier: multiplier,
+                        isPremiumTier: route.tier === "premium",
+                        isPremiumBilledEstimate: multiplier > 0,
+                        promptChars: prompt.length,
+                        responseChars: finalContent.length,
+                    });
+                }
+            }
+            catch { /* best-effort — never fail the hot path */ }
             return;
         }
         catch (err) {
@@ -33,6 +33,15 @@ const FOLLOW_UP_PATTERNS = [
     "perfect", "+1", "please", "yep", "yup", "nope", "nah", "ok", "okay",
     "got it", "cool", "nice", "great", "alright", "right",
 ];
+const SIMPLE_USAGE_TOOL_PATTERNS = [
+    /\bget_main_usage_summary\b/i,
+    /\bget_main_premium_usage\b/i,
+    /\bshow\b.*\bpremium usage\b/i,
+    /\bshow\b.*\busage summary\b/i,
+    /\bhow many\b.*\bpremium requests\b/i,
+    /\brecent premium-billed events?\b/i,
+    /\bwhich model\b.*\bpremium traffic\b/i,
+];
 // ---------------------------------------------------------------------------
 // Helpers
 // ---------------------------------------------------------------------------
@@ -49,6 +58,11 @@ function wordMatch(text, keyword) {
     const escaped = keyword.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
     return new RegExp(`\\b${escaped}\\b`, "i").test(text);
 }
+function isSimpleUsageToolRequest(text) {
+    if (text.length > 200)
+        return false;
+    return SIMPLE_USAGE_TOOL_PATTERNS.some((pattern) => pattern.test(text));
+}
 // ---------------------------------------------------------------------------
 // Config management
 // ---------------------------------------------------------------------------
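The short-circuit gate added above can be exercised directly. The patterns and the 200-character length cap are copied from this hunk; the sample prompts come from the README examples, and the non-matching prompt is a made-up counterexample.

```javascript
// Patterns and length gate copied from the router hunk above.
const SIMPLE_USAGE_TOOL_PATTERNS = [
    /\bget_main_usage_summary\b/i,
    /\bget_main_premium_usage\b/i,
    /\bshow\b.*\bpremium usage\b/i,
    /\bshow\b.*\busage summary\b/i,
    /\bhow many\b.*\bpremium requests\b/i,
    /\brecent premium-billed events?\b/i,
    /\bwhich model\b.*\bpremium traffic\b/i,
];

function isSimpleUsageToolRequest(text) {
    // Long prompts never take the shortcut, even if a pattern matches.
    if (text.length > 200)
        return false;
    return SIMPLE_USAGE_TOOL_PATTERNS.some((pattern) => pattern.test(text));
}

console.log(isSimpleUsageToolRequest("Show premium usage today."));          // true
console.log(isSimpleUsageToolRequest("Run get_main_usage_summary for 7d.")); // true
console.log(isSimpleUsageToolRequest("Design a sharded usage pipeline."));   // false
```

The length cap matters: a long architectural question that merely mentions "premium usage" should still reach the LLM classifier rather than being pinned to the standard tier.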
@@ -90,24 +104,28 @@ async function classifyMessage(prompt, recentTiers, client) {
     const lower = text.toLowerCase();
     // Background tasks → always standard
     if (lower.startsWith("[background task completed]"))
-        return "standard";
+        return { tier: "standard", reason: "background" };
     // Short follow-ups inherit the previous tier
     if (text.length < 20 && recentTiers.length > 0) {
         const isFollowUp = FOLLOW_UP_PATTERNS.some((p) => lower === p || lower === p + ".");
         if (isFollowUp)
-            return recentTiers[0];
+            return { tier: recentTiers[0], reason: "follow-up" };
+    }
+    // Short reporting / tool-invocation prompts should never burn premium.
+    if (isSimpleUsageToolRequest(text)) {
+        return { tier: "standard", reason: "usage-tool" };
     }
     // LLM classification
     if (client) {
         const tier = await classifyWithLLM(client, text);
         if (tier) {
             console.log(`[max] Classifier: ${tier}`);
-            return tier;
+            return { tier, reason: "classifier" };
         }
     }
     // Fallback — standard is always safe
     console.log(`[max] Classifier (fallback): standard`);
-    return "standard";
+    return { tier: "standard", reason: "fallback" };
 }
 // ---------------------------------------------------------------------------
 // Main entry point
@@ -117,7 +135,7 @@ export async function resolveModel(prompt, currentModel, recentTiers, client) {
     // Router disabled → manual mode
     if (!config.enabled) {
         messagesSinceSwitch = Infinity;
-        return { model: currentModel, tier: null, switched: false, routerMode: "manual" };
+        return { model: currentModel, tier: null, switched: false, routerMode: "manual", reason: "manual" };
     }
     const text = sanitize(prompt);
     // 1. Check overrides first — they bypass cooldown
@@ -126,22 +144,22 @@ export async function resolveModel(prompt, currentModel, recentTiers, client) {
             const switched = rule.model !== currentModel;
             if (switched)
                 messagesSinceSwitch = 0;
-            return { model: rule.model, tier: null, overrideName: rule.name, switched, routerMode: "auto" };
+            return { model: rule.model, tier: null, overrideName: rule.name, switched, routerMode: "auto", reason: `override:${rule.name}` };
         }
     }
     // 2. Classify the message
-    const tier = await classifyMessage(prompt, recentTiers, client);
+    const { tier, reason: classificationReason } = await classifyMessage(prompt, recentTiers, client);
     const targetModel = config.tierModels[tier];
     const wouldSwitch = targetModel !== currentModel;
     // 3. Cooldown — prevent rapid switching
     if (wouldSwitch && messagesSinceSwitch < config.cooldownMessages) {
         messagesSinceSwitch++;
-        return { model: currentModel, tier, switched: false, routerMode: "auto" };
+        return { model: currentModel, tier, switched: false, routerMode: "auto", reason: "cooldown" };
     }
     if (wouldSwitch)
         messagesSinceSwitch = 0;
     else
         messagesSinceSwitch++;
-    return { model: targetModel, tier, switched: wouldSwitch, routerMode: "auto" };
+    return { model: targetModel, tier, switched: wouldSwitch, routerMode: "auto", reason: classificationReason === "classifier" ? `tier:${tier}` : classificationReason };
 }
 //# sourceMappingURL=router.js.map
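The `reason` strings in the ledger come from two places: fixed labels assigned in `resolveModel` (`manual`, `cooldown`, `override:<name>`) and classification reasons returned by `classifyMessage`. The mapping in the last changed line above can be isolated as a tiny pure function; this helper is hypothetical and only restates the ternary for clarity.

```javascript
// Hypothetical helper restating the final reason mapping in resolveModel:
// only a real classifier verdict is rewritten to "tier:<tier>"; all other
// classification reasons (background, follow-up, usage-tool, fallback)
// pass through unchanged.
function routeReason(classificationReason, tier) {
    return classificationReason === "classifier" ? `tier:${tier}` : classificationReason;
}

console.log(routeReason("classifier", "premium")); // "tier:premium"
console.log(routeReason("usage-tool", "standard")); // "usage-tool"
console.log(routeReason("follow-up", "fast"));      // "follow-up"
```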
@@ -7,6 +7,8 @@ import { existsSync, readFileSync } from "fs";
 import { join } from "path";
 import { hostname, uptime, totalmem, freemem, platform, loadavg } from "os";
 import http from "http";
+import Database from "better-sqlite3";
+import { getUsageSummary, getRecentUsage } from "../store/db.js";
 /** All known pm2 names for the watchdog's own process.
  * Includes the derived name (max-<profile>) AND the pm2-injected process name. */
 function getOwnPm2Names() {
@@ -158,6 +160,18 @@ function buildMainStatus(status, httpStatus) {
         http: httpStatus.detail,
     };
 }
+/** Open main Max's SQLite database in read-only mode for usage queries. */
+function openMainMaxDb() {
+    const dbPath = join(MAIN_MAX_HOME, "max.db");
+    if (!existsSync(dbPath))
+        return null;
+    try {
+        return new Database(dbPath, { readonly: true });
+    }
+    catch {
+        return null;
+    }
+}
 export function createWatchdogTools() {
     return [
         defineTool("check_main_max", {
@@ -307,6 +321,62 @@ export function createWatchdogTools() {
                 };
             },
         }),
+        defineTool("get_main_usage_summary", {
+            description: "Get a summary of main Max's request usage for a time window. Reports estimated premium usage — not authoritative GitHub billing.",
+            parameters: z.object({
+                window: z.enum(["today", "24h", "7d", "30d"]).default("today").describe("Time window for the summary"),
+            }),
+            handler: async (args) => {
+                const db = openMainMaxDb();
+                if (!db) {
+                    return { ok: false, error: `Main Max database not found at ${join(MAIN_MAX_HOME, "max.db")}. Main Max may not have started yet.` };
+                }
+                try {
+                    const summary = getUsageSummary(args.window, db);
+                    return { ok: true, ...summary };
+                }
+                catch (err) {
+                    return { ok: false, error: `Failed to query usage: ${err instanceof Error ? err.message : String(err)}` };
+                }
+                finally {
+                    db.close();
+                }
+            },
+        }),
+        defineTool("get_main_premium_usage", {
+            description: "Get recent premium request events from main Max with an optional summary. Reports estimated premium usage — not authoritative GitHub billing.",
+            parameters: z.object({
+                window: z.enum(["today", "24h", "7d", "30d"]).default("today").describe("Time window for the summary"),
+                limit: z.number().int().min(1).max(100).default(20).describe("Max recent premium events to return"),
+            }),
+            handler: async (args) => {
+                const db = openMainMaxDb();
+                if (!db) {
+                    return { ok: false, error: `Main Max database not found at ${join(MAIN_MAX_HOME, "max.db")}. Main Max may not have started yet.` };
+                }
+                try {
+                    const summary = getUsageSummary(args.window, db);
+                    const recentPremium = getRecentUsage(args.limit, { premiumOnly: true }, db);
+                    return {
+                        ok: true,
+                        summary: {
+                            window: summary.window,
+                            totalRequests: summary.totalRequests,
+                            premiumTierCount: summary.premiumTierCount,
+                            premiumBilledEstimateCount: summary.premiumBilledEstimateCount,
+                        },
+                        recentPremiumEvents: recentPremium,
+                        estimated: true,
+                    };
+                }
+                catch (err) {
+                    return { ok: false, error: `Failed to query usage: ${err instanceof Error ? err.message : String(err)}` };
+                }
+                finally {
+                    db.close();
+                }
+            },
+        }),
     ];
 }
 //# sourceMappingURL=watchdog-tools.js.map
package/dist/setup.js CHANGED
@@ -10,7 +10,7 @@ const CYAN = "\x1b[36m";
 const RESET = "\x1b[0m";
 const FALLBACK_MODELS = [
     { id: "claude-sonnet-4.6", label: "Claude Sonnet 4.6", desc: "Fast, great for most tasks" },
-    { id: "gpt-5.1", label: "GPT-5.1", desc: "OpenAI's fast model" },
+    { id: "gpt-5.4", label: "GPT-5.4", desc: "OpenAI's fast model" },
     { id: "gpt-4.1", label: "GPT-4.1", desc: "Free included model" },
 ];
 async function fetchModels() {
package/dist/store/db.js CHANGED
@@ -44,6 +44,23 @@ export function getDb() {
             last_accessed DATETIME DEFAULT CURRENT_TIMESTAMP
         )
         `);
+        db.exec(`
+        CREATE TABLE IF NOT EXISTS request_usage (
+            id INTEGER PRIMARY KEY AUTOINCREMENT,
+            ts DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
+            source TEXT NOT NULL,
+            model TEXT NOT NULL,
+            tier TEXT,
+            router_mode TEXT NOT NULL,
+            reason TEXT,
+            billing_multiplier REAL NOT NULL DEFAULT 0,
+            is_premium_tier INTEGER NOT NULL DEFAULT 0,
+            is_premium_billed_estimate INTEGER NOT NULL DEFAULT 0,
+            prompt_chars INTEGER NOT NULL DEFAULT 0,
+            response_chars INTEGER NOT NULL DEFAULT 0
+        )
+        `);
+        db.exec(`CREATE INDEX IF NOT EXISTS idx_request_usage_ts ON request_usage(ts)`);
         // Migrate: if the table already existed with a stricter CHECK, recreate it
         try {
             db.prepare(`INSERT INTO conversation_log (role, content, source) VALUES ('system', '__migration_test__', 'test')`).run();
@@ -66,6 +83,8 @@ export function getDb() {
         }
         // Prune conversation log at startup
         db.prepare(`DELETE FROM conversation_log WHERE id NOT IN (SELECT id FROM conversation_log ORDER BY id DESC LIMIT 200)`).run();
+        // Prune usage events older than 90 days
+        db.prepare(`DELETE FROM request_usage WHERE ts < datetime('now', '-90 days')`).run();
     }
     return db;
 }
@@ -164,6 +183,62 @@ export function getMemorySummary() {
     });
     return sections.join("\n");
 }
+export function logUsageEvent(event) {
+    const db = getDb();
+    db.prepare(`
+        INSERT INTO request_usage (source, model, tier, router_mode, reason, billing_multiplier, is_premium_tier, is_premium_billed_estimate, prompt_chars, response_chars)
+        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+    `).run(event.source, event.model, event.tier, event.routerMode, event.reason ?? null, event.billingMultiplier, event.isPremiumTier ? 1 : 0, event.isPremiumBilledEstimate ? 1 : 0, event.promptChars, event.responseChars);
+}
+/** Compute a usage summary for the given time window. */
+export function getUsageSummary(window, dbInstance) {
+    const d = dbInstance ?? getDb();
+    const windowStart = resolveWindowStart(window);
+    const totals = d.prepare(`
+        SELECT
+            COUNT(*) as total,
+            SUM(is_premium_tier) as premium_tier,
+            SUM(is_premium_billed_estimate) as premium_billed
+        FROM request_usage WHERE ts >= ?
+    `).get(windowStart);
+    const byModel = {};
+    const modelRows = d.prepare(`
+        SELECT model, COUNT(*) as cnt FROM request_usage WHERE ts >= ? GROUP BY model ORDER BY cnt DESC
+    `).all(windowStart);
+    for (const r of modelRows)
+        byModel[r.model] = r.cnt;
+    const bySource = {};
+    const sourceRows = d.prepare(`
+        SELECT source, COUNT(*) as cnt FROM request_usage WHERE ts >= ? GROUP BY source ORDER BY cnt DESC
+    `).all(windowStart);
+    for (const r of sourceRows)
+        bySource[r.source] = r.cnt;
+    return {
+        window,
+        totalRequests: totals.total,
+        premiumTierCount: totals.premium_tier ?? 0,
+        premiumBilledEstimateCount: totals.premium_billed ?? 0,
+        byModel,
+        bySource,
+        estimated: true,
+    };
+}
+/** Get recent usage events, newest first. */
+export function getRecentUsage(limit, options, dbInstance) {
+    const d = dbInstance ?? getDb();
+    const boundedLimit = Math.max(1, Math.min(limit, 200));
+    const where = options?.premiumOnly ? `WHERE is_premium_billed_estimate = 1` : ``;
+    return d.prepare(`SELECT * FROM request_usage ${where} ORDER BY id DESC LIMIT ?`).all(boundedLimit);
+}
+function resolveWindowStart(window) {
+    switch (window) {
+        case "today": return new Date(new Date().setUTCHours(0, 0, 0, 0)).toISOString().replace("T", " ").replace("Z", "");
+        case "24h": return new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
+        case "7d": return new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
+        case "30d": return new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
+        default: return new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
+    }
+}
 export function closeDb() {
     if (db) {
         db.close();
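`resolveWindowStart` (copied from the hunk above) renders window starts as UTC `YYYY-MM-DD HH:MM:SS.mmm` strings, so the `ts >= ?` comparisons work as plain string comparisons against the `CURRENT_TIMESTAMP` values SQLite stores in the `ts` column:

```javascript
// Copied from the db.js hunk above: window starts are formatted to match
// SQLite's UTC timestamp layout so they compare lexically against ts.
function resolveWindowStart(window) {
    switch (window) {
        case "today": return new Date(new Date().setUTCHours(0, 0, 0, 0)).toISOString().replace("T", " ").replace("Z", "");
        case "24h": return new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
        case "7d": return new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
        case "30d": return new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
        default: return new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString().replace("T", " ").replace("Z", "");
    }
}

// "today" always starts at UTC midnight, e.g. "2026-02-03 00:00:00.000"
console.log(resolveWindowStart("today"));
```

One edge worth knowing: SQLite's `CURRENT_TIMESTAMP` has second precision, so an event logged in the exact boundary second compares as earlier than the `.000`-suffixed window start; in practice this shifts a window edge by at most one second.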
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@tspappsen/elamax",
-  "version": "1.2.6",
+  "version": "1.2.7",
   "description": "Max — a personal AI assistant for developers, built on the GitHub Copilot SDK",
   "bin": {
     "max": "dist/cli.js"