engramx 0.2.0 → 0.3.0

package/README.md CHANGED
@@ -15,79 +15,148 @@
  <a href="https://github.com/NickCirv/engram/actions"><img src="https://github.com/NickCirv/engram/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
  <img src="https://img.shields.io/badge/license-Apache%202.0-blue" alt="License">
  <img src="https://img.shields.io/badge/node-%3E%3D20-brightgreen" alt="Node">
- <img src="https://img.shields.io/badge/tests-132%20passing-brightgreen" alt="Tests">
+ <img src="https://img.shields.io/badge/tests-439%20passing-brightgreen" alt="Tests">
  <img src="https://img.shields.io/badge/LLM%20cost-$0-green" alt="Zero LLM cost">
  <img src="https://img.shields.io/badge/native%20deps-zero-green" alt="Zero native deps">
  </p>

  ---

- **Your AI coding assistant forgets everything. We fixed that.**
+ **Context as infra for your AI coding tools.**

- engram gives AI coding tools persistent memory. One command scans your codebase, builds a knowledge graph, and makes every session start where the last one left off.
+ engram installs a Claude Code hook layer that intercepts every `Read`, `Edit`, `Write`, and `Bash cat` — replacing full file reads with ~300-token structural graph summaries *before the agent even sees them*. No more re-exploring the codebase every session. No more agents forgetting to use the tool you gave them.

- Zero LLM cost. Zero cloud. Works with Claude Code, Cursor, Codex, aider, and any MCP client.
+ **v0.3 "Sentinel":** the agent can't forget to use engram because engram sits between the agent and the filesystem.
+
+ Zero LLM cost. Zero cloud. Zero native deps. Works today in Claude Code.

  ```bash
- npx engram init
+ npm install -g engramx
+ cd ~/my-project
+ engram init # scan codebase → .engram/graph.db
+ engram install-hook # wire into Claude Code (project-local)
  ```

- ```
- 🔍 Scanning codebase...
- 🌳 AST extraction complete (42ms, 0 tokens used)
- 60 nodes, 96 edges from 14 files (2,155 lines)
+ That's it. The next Claude Code session in that directory automatically:

- 📊 Token savings: 11x fewer tokens vs relevant files
- Full corpus: ~15,445 tokens | Graph query: ~285 tokens
+ - **Replaces file reads with graph summaries** (Read intercept, deny+reason)
+ - **Warns before edits that hit known mistakes** (Edit landmine injection)
+ - **Pre-loads relevant context when you ask a question** (UserPromptSubmit pre-query)
+ - **Injects a project brief at session start** (SessionStart additionalContext)
+ - **Logs every decision for `engram hook-stats`** (PostToolUse observer)

- Ready. Your AI now has persistent memory.
- ```
+ ## The Problem

- ## Why
+ Every Claude Code session burns ~52,500 tokens on things you already told the agent yesterday. Reading the same files, re-exploring the same modules, re-discovering the same patterns. Even with a great CLAUDE.md, the agent still falls back to `Read` because `Read` is what it knows.

- Every AI coding session starts from zero. Claude Code re-reads your files. Cursor reindexes. Copilot has no memory. CLAUDE.md is a sticky note you write by hand.
+ The ceiling isn't the graph's accuracy. It's that the agent has to *remember* to ask. v0.2 of engram was a tool the agent queried ~5 times per session. The other 25 Reads happened uninterrupted.

- engram fixes this with five things no other tool combines:
+ v0.3 flips this. The hook intercepts at the tool-call boundary, not at the agent's discretion.
+
+ ```
+ v0.2: agent → (remembers to call query_graph) → engram returns summary
+ v0.3: agent → Read → Claude Code hook → engram intercepts → summary delivered
+ ```

- 1. **Persistent knowledge graph** survives across sessions, stored in `.engram/graph.db`
- 2. **Learns from every session** decisions, patterns, mistakes are extracted and remembered
- 3. **Universal protocol** — MCP server + CLI + auto-generates CLAUDE.md, .cursorrules, AGENTS.md
- 4. **Skill-aware** (v0.2) — indexes your `~/.claude/skills/` directory into the graph so queries return code *and* the skill to apply
- 5. **Regret buffer** (v0.2) — surfaces past mistakes at the top of query results so your AI stops re-making the same wrong turns
+ **Projected savings: -42,500 tokens per session** (~80% reduction vs v0.2.1 baseline).
+ Every number is arithmetic on empirically verified hook mechanisms, not estimates.

  ## Install

  ```bash
- npx engramx init
+ npm install -g engramx
  ```

- Or install globally:
+ Requires Node.js 20+. Zero native dependencies. No build tools needed.
+
+ ## Quickstart (v0.3 Sentinel)

  ```bash
- npm install -g engramx
- engram init
+ cd ~/my-project
+ engram init # scan codebase, build knowledge graph
+ engram install-hook # install Sentinel hooks into .claude/settings.local.json
+ engram hook-preview src/auth.ts # dry-run: see what the hook would do
  ```

- Requires Node.js 20+. Zero native dependencies. No build tools needed.
+ Open a Claude Code session in that project. When it reads a well-covered file, you'll see a system-reminder with engram's structural summary instead of the full file contents. Run `engram hook-stats` afterwards to see how many reads were intercepted.
+
+ ```bash
+ engram hook-stats # summarize hook-log.jsonl
+ engram hook-disable # kill switch (keeps install, disables intercepts)
+ engram hook-enable # re-enable
+ engram uninstall-hook # surgical removal, preserves other hooks
+ ```
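`engram hook-stats` summarizes `.engram/hook-log.jsonl`, which is one JSON object per line. As a rough sketch of how such a JSONL log could be tallied (the `decision` field name here is an assumption based on the PostToolUse logger's description elsewhere in this README, not a documented schema):

```javascript
// Sketch: tally hook decisions from a JSONL log. The entry shape is assumed,
// not engram's exact schema; blank lines are skipped.
function tallyDecisions(jsonlText) {
  const counts = {};
  for (const line of jsonlText.split("\n")) {
    if (!line.trim()) continue; // ignore trailing/blank lines
    const entry = JSON.parse(line);
    counts[entry.decision] = (counts[entry.decision] ?? 0) + 1;
  }
  return counts;
}

const sample =
  '{"tool":"Read","decision":"deny"}\n' +
  '{"tool":"Read","decision":"passthrough"}\n';
console.log(tallyDecisions(sample));
```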
+
+ ## All Commands

- ## Usage
+ ### Core (v0.1/v0.2 — unchanged)

  ```bash
  engram init [path] # Scan codebase, build knowledge graph
  engram init --with-skills # Also index ~/.claude/skills/ (v0.2)
  engram query "how does auth" # Query the graph (BFS, token-budgeted)
- engram query "auth" --dfs # DFS traversal (trace specific paths)
+ engram query "auth" --dfs # DFS traversal
  engram gods # Show most connected entities
  engram stats # Node/edge counts, token savings
  engram bench # Token reduction benchmark
  engram path "auth" "database" # Shortest path between concepts
  engram learn "chose JWT..." # Teach a decision or pattern
- engram mistakes # List known mistakes (v0.2)
+ engram mistakes # List known landmines
  engram gen # Generate CLAUDE.md section from graph
- engram gen --task bug-fix # Task-aware view (v0.2: general|bug-fix|feature|refactor)
- engram hooks install # Auto-rebuild on git commit
+ engram gen --task bug-fix # Task-aware view (general|bug-fix|feature|refactor)
+ engram hooks install # Auto-rebuild graph on git commit
  ```

+ ### Sentinel (v0.3 — new)
+
+ ```bash
+ engram intercept # Hook entry point (called by Claude Code, reads stdin)
+ engram install-hook [--scope <s>] # Install hooks into Claude Code settings
+ # --scope local (default, gitignored)
+ # --scope project (committed)
+ # --scope user (global ~/.claude/settings.json)
+ engram install-hook --dry-run # Preview changes without writing
+ engram uninstall-hook # Remove engram entries (preserves other hooks)
+ engram hook-stats # Summarize .engram/hook-log.jsonl
+ engram hook-stats --json # Machine-readable output
+ engram hook-preview <file> # Dry-run Read handler for a specific file
+ engram hook-disable # Kill switch (touch .engram/hook-disabled)
+ engram hook-enable # Remove kill switch
+ ```
+
+ ## How the Sentinel Layer Works
+
+ Seven hook handlers compose the interception stack:
+
+ | Hook | Mechanism | What it does |
+ |---|---|---|
+ | **`PreToolUse:Read`** | `deny + permissionDecisionReason` | If the file is in the graph with ≥0.7 confidence, blocks the Read and delivers a ~300-token structural summary as the block reason. Claude sees the reason as a system-reminder and uses it as context. The file is never actually read. |
+ | **`PreToolUse:Edit`** | `allow + additionalContext` | Never blocks writes. If the file has known past mistakes, injects them as a landmine warning alongside the edit. |
+ | **`PreToolUse:Write`** | Same as Edit | Advisory landmine injection. |
+ | **`PreToolUse:Bash`** | Parse + delegate | Detects `cat|head|tail|less|more <single-file>` invocations (strict parser, rejects any shell metacharacter) and delegates to the Read handler. Closes the Bash workaround loophole. |
+ | **`SessionStart`** | `additionalContext` | Injects a compact project brief (god nodes + graph stats + top landmines + git branch) on source=startup/clear/compact. Passes through on resume. |
+ | **`UserPromptSubmit`** | `additionalContext` | Extracts keywords from the user's message, runs a ≤500-token pre-query, injects results. Skipped for short or generic prompts. Raw prompt content is never logged. |
+ | **`PostToolUse`** | Observer | Pure logger. Writes tool/path/outputSize/success/decision to `.engram/hook-log.jsonl` for `hook-stats` and v0.3.1 self-tuning. |
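The `deny + permissionDecisionReason` mechanism in the first row maps onto Claude Code's JSON hook protocol: a PreToolUse hook prints a JSON object on stdout, and the reason string is surfaced to the model. A minimal sketch of building such a response (field names follow Claude Code's documented hook output schema as of this writing; the summary text is a placeholder, not engram's actual output):

```javascript
// Sketch: the JSON a PreToolUse hook prints to stdout to block a tool call
// and hand the model a replacement summary. permissionDecision and
// permissionDecisionReason are Claude Code hook fields; the text is illustrative.
function buildDenyResponse(summaryText) {
  return JSON.stringify({
    hookSpecificOutput: {
      hookEventName: "PreToolUse",
      permissionDecision: "deny",
      permissionDecisionReason: summaryText, // delivered to Claude as context
    },
  });
}

const out = buildDenyResponse("[engram] Structural summary for src/auth.ts ...");
process.stdout.write(out + "\n");
```

A `hook-preview` style dry run would amount to printing this object without Claude Code in the loop.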
+
+ ### Ten safety invariants, enforced at runtime
+
+ 1. Any handler error → passthrough (never block Claude Code)
+ 2. 2-second per-handler timeout
+ 3. Kill switch (`.engram/hook-disabled`) respected by every handler
+ 4. Atomic settings.json writes with timestamped backups
+ 5. Never intercept outside the project root
+ 6. Never intercept binary files or secrets (.env, .pem, .key, credentials, id_rsa, ...)
+ 7. Never log user prompt content (privacy invariant asserted in tests)
+ 8. Never inject >8000 chars per hook response
+ 9. Stale graph detection (file mtime > graph mtime → passthrough)
+ 10. Partial-read bypass (Read with explicit `offset` or `limit` → passthrough)
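Invariants 1 and 2 boil down to racing each handler against a timer and degrading to a no-op on error or timeout. A minimal sketch of that pattern, with illustrative names rather than engram's internals:

```javascript
// Sketch of invariants 1-2: any handler error, or a handler that takes longer
// than 2 seconds, resolves to a passthrough result so Claude Code is never blocked.
const HANDLER_TIMEOUT_MS = 2000;
const PASSTHROUGH = { decision: "passthrough" };

async function runHandlerSafely(handler, input) {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve(PASSTHROUGH), HANDLER_TIMEOUT_MS)
  );
  try {
    // Whichever settles first wins; a hung handler loses to the timer.
    return await Promise.race([handler(input), timeout]);
  } catch {
    return PASSTHROUGH; // invariant 1: handler errors never propagate
  }
}
```

The same wrapper is a natural place to check the kill-switch file (invariant 3) before ever invoking the handler.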
+
+ ### What you can safely install
+
+ Default scope is `.claude/settings.local.json` — gitignored, project-local, zero risk of committing hook config to a shared repo. Idempotent install. Non-destructive uninstall. `--dry-run` shows the diff before writing.
+
+ If anything goes wrong, `engram hook-disable` flips the kill switch without uninstalling.
+
  ## How It Works

  engram runs three miners on your codebase. None of them use an LLM.
@@ -310,7 +310,7 @@ function writeToFile(filePath, summary) {
  writeFileSync2(filePath, newContent);
  }
  async function autogen(projectRoot, target, task) {
- const { getStore } = await import("./core-H72MM256.js");
+ const { getStore } = await import("./core-MPNNCPFW.js");
  const store = await getStore(projectRoot);
  try {
  let view = VIEWS.general;
@@ -1,6 +1,6 @@
  // src/core.ts
- import { join as join4, resolve as resolve2 } from "path";
- import { existsSync as existsSync5, mkdirSync as mkdirSync2, readFileSync as readFileSync5, writeFileSync as writeFileSync2, unlinkSync } from "fs";
+ import { join as join4, resolve as resolve2, relative as relative2 } from "path";
+ import { existsSync as existsSync5, mkdirSync as mkdirSync2, readFileSync as readFileSync5, writeFileSync as writeFileSync2, unlinkSync, statSync as statSync2 } from "fs";
  import { homedir } from "os";

  // src/graph/store.ts
@@ -315,7 +315,13 @@ function truncateGraphemeSafe(s, max) {

  // src/graph/query.ts
  var MISTAKE_SCORE_BOOST = 2.5;
+ var KEYWORD_SCORE_DOWNWEIGHT = 0.5;
  var MAX_MISTAKE_LABEL_CHARS = 500;
+ function isHiddenKeyword(node) {
+ if (node.kind !== "concept") return false;
+ const meta = node.metadata;
+ return meta?.subkind === "keyword";
+ }
  var CHARS_PER_TOKEN = 4;
  function scoreNodes(store, terms) {
  const allNodes = store.getAllNodes();
@@ -330,6 +336,7 @@ function scoreNodes(store, terms) {
  }
  if (score > 0) {
  if (node.kind === "mistake") score *= MISTAKE_SCORE_BOOST;
+ if (isHiddenKeyword(node)) score *= KEYWORD_SCORE_DOWNWEIGHT;
  scored.push({ score, node });
  }
  }
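The two score multipliers compose per node: mistake nodes are boosted so landmines surface first, and hidden keyword concepts are halved so they can steer traversal without crowding results. A standalone restatement of that adjustment step, using the same constants as the diff:

```javascript
// Mirror of the score adjustments in scoreNodes: mistakes boosted 2.5x,
// hidden keyword concepts downweighted 0.5x. Constants match the diff.
const MISTAKE_SCORE_BOOST = 2.5;
const KEYWORD_SCORE_DOWNWEIGHT = 0.5;

function adjustScore(baseScore, node) {
  let score = baseScore;
  if (node.kind === "mistake") score *= MISTAKE_SCORE_BOOST;
  if (node.kind === "concept" && node.metadata?.subkind === "keyword") {
    score *= KEYWORD_SCORE_DOWNWEIGHT;
  }
  return score;
}
```

So a mistake node and a keyword node with the same base match score end up 5x apart in the ranking.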
@@ -346,6 +353,12 @@ function queryGraph(store, question, options = {}) {
  for (const n of startNodes) store.incrementQueryCount(n.id);
  const visited = new Set(startNodes.map((n) => n.id));
  const collectedEdges = [];
+ const shouldSkipEdgeFrom = (currentNodeId, edge) => {
+ if (edge.relation !== "triggered_by") return false;
+ const currentNode = store.getNode(currentNodeId);
+ if (!currentNode) return false;
+ return !isHiddenKeyword(currentNode);
+ };
  if (mode === "bfs") {
  let frontier = new Set(startNodes.map((n) => n.id));
  for (let d = 0; d < depth; d++) {
@@ -353,6 +366,7 @@ function queryGraph(store, question, options = {}) {
  for (const nid of frontier) {
  const neighbors = store.getNeighbors(nid);
  for (const { node, edge } of neighbors) {
+ if (shouldSkipEdgeFrom(nid, edge)) continue;
  if (!visited.has(node.id)) {
  nextFrontier.add(node.id);
  collectedEdges.push(edge);
@@ -369,6 +383,7 @@ function queryGraph(store, question, options = {}) {
  if (d > depth) continue;
  const neighbors = store.getNeighbors(id);
  for (const { node, edge } of neighbors) {
+ if (shouldSkipEdgeFrom(id, edge)) continue;
  if (!visited.has(node.id)) {
  visited.add(node.id);
  stack.push({ id: node.id, d: d + 1 });
@@ -446,7 +461,12 @@ function renderSubgraph(nodes, edges, tokenBudget) {
  const charBudget = tokenBudget * CHARS_PER_TOKEN;
  const lines = [];
  const mistakes2 = nodes.filter((n) => n.kind === "mistake");
- const nonMistakes = nodes.filter((n) => n.kind !== "mistake");
+ const visible = nodes.filter(
+ (n) => n.kind !== "mistake" && !isHiddenKeyword(n)
+ );
+ const hiddenKeywordIds = new Set(
+ nodes.filter(isHiddenKeyword).map((n) => n.id)
+ );
  if (mistakes2.length > 0) {
  lines.push("\u26A0\uFE0F PAST MISTAKES (relevant to your query):");
  for (const m of mistakes2) {
@@ -461,7 +481,7 @@ function renderSubgraph(nodes, edges, tokenBudget) {
  degreeMap.set(e.source, (degreeMap.get(e.source) ?? 0) + 1);
  degreeMap.set(e.target, (degreeMap.get(e.target) ?? 0) + 1);
  }
- const sorted = [...nonMistakes].sort(
+ const sorted = [...visible].sort(
  (a, b) => (degreeMap.get(b.id) ?? 0) - (degreeMap.get(a.id) ?? 0)
  );
  for (const n of sorted) {
@@ -469,7 +489,18 @@ function renderSubgraph(nodes, edges, tokenBudget) {
  `NODE ${n.label} [${n.kind}] src=${n.sourceFile} ${n.sourceLocation ?? ""}`
  );
  }
+ const skillConceptIds = new Set(
+ nodes.filter(
+ (n) => n.kind === "concept" && n.metadata?.subkind === "skill"
+ ).map((n) => n.id)
+ );
  for (const e of edges) {
+ if (hiddenKeywordIds.has(e.source) || hiddenKeywordIds.has(e.target)) {
+ continue;
+ }
+ if (e.relation === "similar_to" && skillConceptIds.has(e.source) && skillConceptIds.has(e.target)) {
+ continue;
+ }
  const srcNode = nodes.find((n) => n.id === e.source);
  const tgtNode = nodes.find((n) => n.id === e.target);
  if (srcNode && tgtNode) {
@@ -495,6 +526,106 @@ function renderPath(nodes, edges) {
  }
  return `Path (${edges.length} hops): ${segments.join(" ")}`;
  }
+ function renderFileStructure(store, relativeFilePath, tokenBudget = 600) {
+ const allNodes = store.getAllNodes();
+ const fileNodes = allNodes.filter(
+ (n) => n.sourceFile === relativeFilePath && !isHiddenKeyword(n)
+ );
+ if (fileNodes.length === 0) {
+ return {
+ text: "",
+ nodeCount: 0,
+ codeNodeCount: 0,
+ avgConfidence: 0,
+ estimatedTokens: 0
+ };
+ }
+ const codeNodeCount = fileNodes.filter(
+ (n) => n.kind !== "file" && n.kind !== "module"
+ ).length;
+ const avgConfidence = fileNodes.reduce((s, n) => s + n.confidenceScore, 0) / fileNodes.length;
+ const allEdges = store.getAllEdges();
+ const fileNodeIds = new Set(fileNodes.map((n) => n.id));
+ const degreeMap = /* @__PURE__ */ new Map();
+ for (const e of allEdges) {
+ if (fileNodeIds.has(e.source)) {
+ degreeMap.set(e.source, (degreeMap.get(e.source) ?? 0) + 1);
+ }
+ if (fileNodeIds.has(e.target)) {
+ degreeMap.set(e.target, (degreeMap.get(e.target) ?? 0) + 1);
+ }
+ }
+ const byKind = /* @__PURE__ */ new Map();
+ for (const n of fileNodes) {
+ const list = byKind.get(n.kind) ?? [];
+ list.push(n);
+ byKind.set(n.kind, list);
+ }
+ for (const list of byKind.values()) {
+ list.sort(
+ (a, b) => (degreeMap.get(b.id) ?? 0) - (degreeMap.get(a.id) ?? 0)
+ );
+ }
+ const kindOrder = [
+ "class",
+ "interface",
+ "type",
+ "function",
+ "method",
+ "variable",
+ "import",
+ "module",
+ "file",
+ "decision",
+ "pattern",
+ "mistake",
+ "concept"
+ ];
+ const lines = [];
+ lines.push(`[engram] Structural summary for ${relativeFilePath}`);
+ lines.push(
+ `Nodes: ${fileNodes.length} | avg extraction confidence: ${avgConfidence.toFixed(2)}`
+ );
+ lines.push("");
+ for (const kind of kindOrder) {
+ const group = byKind.get(kind);
+ if (!group || group.length === 0) continue;
+ for (const n of group) {
+ const loc = n.sourceLocation ?? "";
+ lines.push(`NODE ${n.label} [${n.kind}] ${loc}`.trim());
+ }
+ }
+ const relevantEdges = allEdges.filter(
+ (e) => fileNodeIds.has(e.source) || fileNodeIds.has(e.target)
+ ).slice(0, 10);
+ if (relevantEdges.length > 0) {
+ lines.push("");
+ lines.push("Key relationships:");
+ for (const e of relevantEdges) {
+ const src = allNodes.find((n) => n.id === e.source);
+ const tgt = allNodes.find((n) => n.id === e.target);
+ if (src && tgt) {
+ lines.push(`EDGE ${src.label} --${e.relation}--> ${tgt.label}`);
+ }
+ }
+ }
+ lines.push("");
+ lines.push(
+ "Note: engram replaced a full-file read with this structural view to save tokens. If you need specific lines, Read this file again with explicit offset/limit parameters \u2014 engram passes partial reads through unchanged."
+ );
+ let text = lines.join("\n");
+ const charBudget = tokenBudget * CHARS_PER_TOKEN;
+ if (text.length > charBudget) {
+ text = sliceGraphemeSafe(text, charBudget) + "\n... (truncated to fit summary budget)";
+ }
+ return {
+ text,
+ nodeCount: fileNodes.length,
+ codeNodeCount,
+ avgConfidence,
+ estimatedTokens: Math.ceil(text.length / CHARS_PER_TOKEN)
+ };
+ }

  // src/miners/ast-miner.ts
  import { readFileSync as readFileSync2, readdirSync, realpathSync } from "fs";
@@ -1459,6 +1590,76 @@ async function stats(projectRoot) {
  store.close();
  }
  }
+ var FILE_CONTEXT_COVERAGE_CEILING = 3;
+ async function getFileContext(projectRoot, absFilePath) {
+ const empty = {
+ found: false,
+ confidence: 0,
+ summary: "",
+ nodeCount: 0,
+ codeNodeCount: 0,
+ avgNodeConfidence: 0,
+ graphMtimeMs: 0,
+ fileMtimeMs: null,
+ isStale: false
+ };
+ try {
+ const root = resolve2(projectRoot);
+ const abs = resolve2(absFilePath);
+ const relPath = relative2(root, abs);
+ if (relPath.startsWith("..") || relPath === "") {
+ return empty;
+ }
+ const dbPath = getDbPath(root);
+ let graphMtimeMs = 0;
+ try {
+ graphMtimeMs = statSync2(dbPath).mtimeMs;
+ } catch {
+ return empty;
+ }
+ let fileMtimeMs = null;
+ try {
+ fileMtimeMs = statSync2(abs).mtimeMs;
+ } catch {
+ fileMtimeMs = null;
+ }
+ const isStale = fileMtimeMs !== null && fileMtimeMs > graphMtimeMs;
+ const store = await getStore(root);
+ try {
+ const summary = renderFileStructure(store, relPath);
+ if (summary.codeNodeCount === 0) {
+ return {
+ ...empty,
+ nodeCount: summary.nodeCount,
+ codeNodeCount: 0,
+ graphMtimeMs,
+ fileMtimeMs,
+ isStale
+ };
+ }
+ const coverageScore = Math.min(
+ summary.codeNodeCount / FILE_CONTEXT_COVERAGE_CEILING,
+ 1
+ );
+ const confidence = coverageScore * summary.avgConfidence;
+ return {
+ found: true,
+ confidence,
+ summary: summary.text,
+ nodeCount: summary.nodeCount,
+ codeNodeCount: summary.codeNodeCount,
+ avgNodeConfidence: summary.avgConfidence,
+ graphMtimeMs,
+ fileMtimeMs,
+ isStale
+ };
+ } finally {
+ store.close();
+ }
+ } catch {
+ return empty;
+ }
+ }
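The confidence arithmetic above is small enough to check by hand: coverage saturates once a file has `FILE_CONTEXT_COVERAGE_CEILING = 3` extracted code nodes, and per the README's hook table the Read intercept only fires at confidence ≥0.7. A standalone restatement with two worked examples:

```javascript
// confidence = min(codeNodeCount / 3, 1) * avgConfidence, as in getFileContext.
const FILE_CONTEXT_COVERAGE_CEILING = 3;

function fileConfidence(codeNodeCount, avgConfidence) {
  const coverage = Math.min(codeNodeCount / FILE_CONTEXT_COVERAGE_CEILING, 1);
  return coverage * avgConfidence;
}

// 2 code nodes at 0.9 avg confidence: ~0.6, below 0.7, so passthrough.
// 4 code nodes at 0.8 avg confidence: 0.8, at or above 0.7, so intercept.
```

So sparsely indexed files fall through to a normal Read even when their few nodes were extracted with high confidence.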
  async function learn(projectRoot, text, sourceLabel = "manual") {
  const { nodes, edges } = learnFromSession(text, sourceLabel);
  if (nodes.length === 0 && edges.length === 0) return { nodesAdded: 0 };
@@ -1474,6 +1675,10 @@ async function mistakes(projectRoot, options = {}) {
  const store = await getStore(projectRoot);
  try {
  let items = store.getAllNodes().filter((n) => n.kind === "mistake");
+ if (options.sourceFile !== void 0) {
+ const target = options.sourceFile;
+ items = items.filter((m) => m.sourceFile === target);
+ }
  if (options.sinceDays !== void 0) {
  const cutoff = Date.now() - options.sinceDays * 24 * 60 * 60 * 1e3;
  items = items.filter((m) => m.lastVerified >= cutoff);
@@ -1572,6 +1777,7 @@ export {
  path,
  godNodes,
  stats,
+ getFileContext,
  learn,
  mistakes,
  benchmark