engramx 0.1.0 → 0.2.0

package/README.md CHANGED
@@ -7,6 +7,7 @@
7
7
  <a href="#usage"><strong>Usage</strong></a> ·
8
8
  <a href="#mcp-server"><strong>MCP Server</strong></a> ·
9
9
  <a href="#how-it-works"><strong>How It Works</strong></a> ·
10
+ <a href="docs/INTEGRATION.md"><strong>Integration Guide</strong></a> ·
10
11
  <a href="#contributing"><strong>Contributing</strong></a>
11
12
  </p>
12
13
 
@@ -14,7 +15,7 @@
14
15
  <a href="https://github.com/NickCirv/engram/actions"><img src="https://github.com/NickCirv/engram/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
15
16
  <img src="https://img.shields.io/badge/license-Apache%202.0-blue" alt="License">
16
17
  <img src="https://img.shields.io/badge/node-%3E%3D20-brightgreen" alt="Node">
17
- <img src="https://img.shields.io/badge/tests-63%20passing-brightgreen" alt="Tests">
18
+ <img src="https://img.shields.io/badge/tests-132%20passing-brightgreen" alt="Tests">
18
19
  <img src="https://img.shields.io/badge/LLM%20cost-$0-green" alt="Zero LLM cost">
19
20
  <img src="https://img.shields.io/badge/native%20deps-zero-green" alt="Zero native deps">
20
21
  </p>
@@ -46,22 +47,24 @@ npx engram init
46
47
 
47
48
  Every AI coding session starts from zero. Claude Code re-reads your files. Cursor reindexes. Copilot has no memory. CLAUDE.md is a sticky note you write by hand.
48
49
 
49
- engram fixes this with three things no other tool combines:
50
+ engram fixes this with five things no other tool combines:
50
51
 
51
52
  1. **Persistent knowledge graph** — survives across sessions, stored in `.engram/graph.db`
52
53
  2. **Learns from every session** — decisions, patterns, mistakes are extracted and remembered
53
54
  3. **Universal protocol** — MCP server + CLI + auto-generates CLAUDE.md, .cursorrules, AGENTS.md
55
+ 4. **Skill-aware** (v0.2) — indexes your `~/.claude/skills/` directory into the graph so queries return code *and* the skill to apply
56
+ 5. **Regret buffer** (v0.2) — surfaces past mistakes at the top of query results so your AI stops repeating the same wrong turns
54
57
 
55
58
  ## Install
56
59
 
57
60
  ```bash
58
- npx engram-ai init
61
+ npx engramx init
59
62
  ```
60
63
 
61
64
  Or install globally:
62
65
 
63
66
  ```bash
64
- npm install -g engram-ai
67
+ npm install -g engramx
65
68
  engram init
66
69
  ```
67
70
 
@@ -71,6 +74,7 @@ Requires Node.js 20+. Zero native dependencies. No build tools needed.
71
74
 
72
75
  ```bash
73
76
  engram init [path] # Scan codebase, build knowledge graph
77
+ engram init --with-skills # Also index ~/.claude/skills/ (v0.2)
74
78
  engram query "how does auth" # Query the graph (BFS, token-budgeted)
75
79
  engram query "auth" --dfs # DFS traversal (trace specific paths)
76
80
  engram gods # Show most connected entities
@@ -78,7 +82,9 @@ engram stats # Node/edge counts, token savings
78
82
  engram bench # Token reduction benchmark
79
83
  engram path "auth" "database" # Shortest path between concepts
80
84
  engram learn "chose JWT..." # Teach a decision or pattern
85
+ engram mistakes # List known mistakes (v0.2)
81
86
  engram gen # Generate CLAUDE.md section from graph
87
+ engram gen --task bug-fix # Task-aware view (v0.2: general|bug-fix|feature|refactor)
82
88
  engram hooks install # Auto-rebuild on git commit
83
89
  ```
84
90
 
@@ -103,7 +109,7 @@ Connect engram to Claude Code, Windsurf, or any MCP client:
103
109
  "mcpServers": {
104
110
  "engram": {
105
111
  "command": "npx",
106
- "args": ["-y", "engram-ai", "serve", "/path/to/your/project"]
112
+ "args": ["-y", "engramx", "serve", "/path/to/your/project"]
107
113
  }
108
114
  }
109
115
  }
@@ -122,12 +128,24 @@ Or if installed globally:
122
128
  }
123
129
  ```
124
130
 
125
- **MCP Tools:**
131
+ **MCP Tools** (6 total):
126
132
  - `query_graph` — Search the knowledge graph with natural language
127
133
  - `god_nodes` — Core abstractions (most connected entities)
128
134
  - `graph_stats` — Node/edge counts, confidence breakdown
129
135
  - `shortest_path` — Trace connections between two concepts
130
136
  - `benchmark` — Token reduction measurement
137
+ - `list_mistakes` — Known failure modes from past sessions (v0.2)
138
+
139
+ ### Shell Wrapper (for Bash-based agents)
140
+
141
+ If your agent stack runs shell commands instead of JSON-RPC MCP, use the reference wrapper at [`scripts/mcp-engram`](scripts/mcp-engram). One command handles all projects via `-p <path>` — no per-project MCP server needed.
142
+
143
+ ```bash
144
+ cp scripts/mcp-engram ~/bin/mcp-engram && chmod +x ~/bin/mcp-engram
145
+ mcp-engram query "how does auth work" -p ~/myrepo
146
+ ```
147
+
148
+ See [`docs/INTEGRATION.md`](docs/INTEGRATION.md) for multi-machine setups, rule-file integration, and gotchas.
131
149
 
132
150
  ## Auto-Generated AI Instructions
133
151
 
@@ -142,6 +160,19 @@ engram gen --target agents # Write to AGENTS.md
142
160
 
143
161
  This writes a structured codebase summary — god nodes, file structure, key dependencies, decisions — so your AI assistant navigates by structure instead of grepping.
144
162
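The emitted section might look roughly like this — a hypothetical illustration only (headings follow the section names used elsewhere in this README; entity names, counts, and files come from your own graph):

```markdown
<!-- engram:begin -->
## God nodes
- GraphStore (src/graph/store.ts)
- queryGraph (src/graph/query.ts)

## Decisions
- chose JWT for session auth

## Key dependencies
- sql.js
<!-- engram:end -->
```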
 
163
+ ### Task-Aware Views (v0.2)
164
+
165
+ `engram gen --task <name>` emits different content based on what you're about to do. The four preset views are defined in `src/autogen.ts` as a data table — no branching logic — so you can add your own task modes without touching the renderer.
166
+
167
+ ```bash
168
+ engram gen --task general # default — balanced mix of sections
169
+ engram gen --task bug-fix # emphasizes hot files + past mistakes
170
+ engram gen --task feature # emphasizes architecture + decisions
171
+ engram gen --task refactor # emphasizes god nodes + dependency graph
172
+ ```
173
+
174
+ Each view picks a different set of sections with different limits. For example, `bug-fix` omits `## Decisions` and `## Key dependencies` entirely (they'd just be noise when you're chasing a regression) and leads with `🔥 Hot files` and `⚠️ Past mistakes`.
175
+
145
176
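A table-driven design like the one described above can be sketched as follows — names and limits here are assumptions for illustration, not the actual contents of `src/autogen.ts`:

```javascript
// Hypothetical shape of the task-view table: each view is pure data
// (section names + an item limit), so adding a mode means adding a row,
// not editing the renderer.
const TASK_VIEWS = {
  general: { sections: ["god-nodes", "structure", "dependencies", "decisions"], limit: 10 },
  "bug-fix": { sections: ["hot-files", "mistakes", "structure"], limit: 15 },
  feature: { sections: ["architecture", "decisions", "dependencies"], limit: 10 },
  refactor: { sections: ["god-nodes", "dependencies"], limit: 20 },
};

// The renderer walks the selected row — no per-task branching.
function sectionsFor(task) {
  const view = TASK_VIEWS[task] ?? TASK_VIEWS.general;
  return view.sections;
}

console.log(sectionsFor("bug-fix")); // leads with hot files + past mistakes
```

Note how `bug-fix` simply has no `decisions` or `dependencies` entries, which is how a data table expresses "omit those sections entirely" without any conditional code.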
  ## How engram Compares
146
177
 
147
178
  | | engram | Mem0 | Graphify | aider repo-map | CLAUDE.md |
@@ -234,7 +265,26 @@ src/
234
265
 
235
266
  TypeScript, JavaScript, Python, Go, Rust, Java, C, C++, Ruby, PHP.
236
267
 
237
- Tree-sitter WASM integration (20+ languages with full call-graph precision) is planned for v0.2.
268
+ ## Roadmap
269
+
270
+ ### v0.2 (current) — **shipped April 2026**
271
+ - ✅ Skills miner — index `~/.claude/skills/` into the graph
272
+ - ✅ Adaptive gen — task-aware views (`--task general|bug-fix|feature|refactor`)
273
+ - ✅ Regret buffer — surface past mistakes at the top of query results
274
+ - ✅ `list_mistakes` MCP tool
275
+ - ✅ Atomic init lockfile
276
+ - ✅ Marker-safe `writeToFile` + surrogate-safe truncation
277
+
278
+ ### v0.3
279
+ - Tree-sitter WASM (20+ languages with full call-graph precision)
280
+ - Cross-project graph (query patterns across *all* your projects)
281
+ - Temporal graph (commit-snapshot deltas — "what changed in auth this week?")
282
+ - Token enforcement PreToolUse hook for Claude Code
283
+
284
+ ### v0.4+
285
+ - LLM-free semantic search (locality-sensitive hashing over n-grams)
286
+ - Graph-as-IR experimental spike
287
+ - Team memory sync (paid tier)
238
288
 
239
289
  ## Privacy
240
290
 
@@ -1,6 +1,7 @@
1
1
  // src/core.ts
2
- import { join as join3, resolve as resolve2 } from "path";
3
- import { existsSync as existsSync4, mkdirSync as mkdirSync2, readFileSync as readFileSync4 } from "fs";
2
+ import { join as join4, resolve as resolve2 } from "path";
3
+ import { existsSync as existsSync5, mkdirSync as mkdirSync2, readFileSync as readFileSync5, writeFileSync as writeFileSync2, unlinkSync } from "fs";
4
+ import { homedir } from "os";
4
5
 
5
6
  // src/graph/store.ts
6
7
  import initSqlJs from "sql.js";
@@ -164,7 +165,7 @@ var GraphStore = class _GraphStore {
164
165
  `SELECT n.*, COUNT(*) as degree
165
166
  FROM nodes n
166
167
  JOIN edges e ON e.source = n.id OR e.target = n.id
167
- WHERE n.kind NOT IN ('file', 'import', 'module')
168
+ WHERE n.kind NOT IN ('file', 'import', 'module', 'concept')
168
169
  GROUP BY n.id
169
170
  ORDER BY degree DESC
170
171
  LIMIT ?`
@@ -293,7 +294,28 @@ var GraphStore = class _GraphStore {
293
294
  }
294
295
  };
295
296
 
297
+ // src/graph/render-utils.ts
298
+ function sliceGraphemeSafe(s, max) {
299
+ if (max <= 0) return "";
300
+ if (s.length <= max) return s;
301
+ let cut = max;
302
+ const code = s.charCodeAt(cut - 1);
303
+ if (code >= 55296 && code <= 56319) cut--;
304
+ return s.slice(0, cut);
305
+ }
306
+ function truncateGraphemeSafe(s, max) {
307
+ if (max <= 0) return "";
308
+ if (s.length <= max) return s;
309
+ let cut = max - 1;
310
+ if (cut <= 0) return "";
311
+ const code = s.charCodeAt(cut - 1);
312
+ if (code >= 55296 && code <= 56319) cut--;
313
+ return s.slice(0, cut) + "\u2026";
314
+ }
315
+
296
316
  // src/graph/query.ts
317
+ var MISTAKE_SCORE_BOOST = 2.5;
318
+ var MAX_MISTAKE_LABEL_CHARS = 500;
297
319
  var CHARS_PER_TOKEN = 4;
298
320
  function scoreNodes(store, terms) {
299
321
  const allNodes = store.getAllNodes();
@@ -306,7 +328,10 @@ function scoreNodes(store, terms) {
306
328
  if (label.includes(t)) score += 2;
307
329
  if (file.includes(t)) score += 1;
308
330
  }
309
- if (score > 0) scored.push({ score, node });
331
+ if (score > 0) {
332
+ if (node.kind === "mistake") score *= MISTAKE_SCORE_BOOST;
333
+ scored.push({ score, node });
334
+ }
310
335
  }
311
336
  return scored.sort((a, b) => b.score - a.score);
312
337
  }
@@ -420,12 +445,23 @@ function shortestPath(store, sourceTerm, targetTerm, maxHops = 8) {
420
445
  function renderSubgraph(nodes, edges, tokenBudget) {
421
446
  const charBudget = tokenBudget * CHARS_PER_TOKEN;
422
447
  const lines = [];
448
+ const mistakes2 = nodes.filter((n) => n.kind === "mistake");
449
+ const nonMistakes = nodes.filter((n) => n.kind !== "mistake");
450
+ if (mistakes2.length > 0) {
451
+ lines.push("\u26A0\uFE0F PAST MISTAKES (relevant to your query):");
452
+ for (const m of mistakes2) {
453
+ const label = truncateGraphemeSafe(m.label, MAX_MISTAKE_LABEL_CHARS);
454
+ const confNote = m.confidence === "EXTRACTED" ? "" : ` [confidence ${m.confidenceScore}]`;
455
+ lines.push(` - ${label} (from ${m.sourceFile})${confNote}`);
456
+ }
457
+ lines.push("");
458
+ }
423
459
  const degreeMap = /* @__PURE__ */ new Map();
424
460
  for (const e of edges) {
425
461
  degreeMap.set(e.source, (degreeMap.get(e.source) ?? 0) + 1);
426
462
  degreeMap.set(e.target, (degreeMap.get(e.target) ?? 0) + 1);
427
463
  }
428
- const sorted = [...nodes].sort(
464
+ const sorted = [...nonMistakes].sort(
429
465
  (a, b) => (degreeMap.get(b.id) ?? 0) - (degreeMap.get(a.id) ?? 0)
430
466
  );
431
467
  for (const n of sorted) {
@@ -445,7 +481,7 @@ function renderSubgraph(nodes, edges, tokenBudget) {
445
481
  }
446
482
  let output = lines.join("\n");
447
483
  if (output.length > charBudget) {
448
- output = output.slice(0, charBudget) + `
484
+ output = sliceGraphemeSafe(output, charBudget) + `
449
485
  ... (truncated to ~${tokenBudget} token budget)`;
450
486
  }
451
487
  return output;
@@ -1031,34 +1067,358 @@ function learnFromSession(text, sourceLabel = "session") {
1031
1067
  return mineText(text, sourceLabel);
1032
1068
  }
1033
1069
 
1070
+ // src/miners/skills-miner.ts
1071
+ import {
1072
+ existsSync as existsSync4,
1073
+ readFileSync as readFileSync4,
1074
+ readdirSync as readdirSync3,
1075
+ realpathSync as realpathSync2,
1076
+ statSync
1077
+ } from "fs";
1078
+ import { basename as basename3, dirname as dirname2, join as join3 } from "path";
1079
+ function makeId4(...parts) {
1080
+ return parts.filter(Boolean).join("_").replace(/[^a-zA-Z0-9]+/g, "_").replace(/^_|_$/g, "").toLowerCase().slice(0, 120);
1081
+ }
1082
+ function parseFrontmatter(content) {
1083
+ if (!content.startsWith("---")) {
1084
+ return { data: {}, body: content, parseOk: false };
1085
+ }
1086
+ const closeMatch = content.slice(3).match(/\n---\s*(\n|$)/);
1087
+ if (!closeMatch || closeMatch.index === void 0) {
1088
+ return { data: {}, body: content, parseOk: false };
1089
+ }
1090
+ const yamlBlock = content.slice(3, 3 + closeMatch.index).trim();
1091
+ const bodyStart = 3 + closeMatch.index + closeMatch[0].length;
1092
+ const body = content.slice(bodyStart);
1093
+ try {
1094
+ const data = parseYaml(yamlBlock);
1095
+ return { data, body, parseOk: true };
1096
+ } catch {
1097
+ return { data: {}, body, parseOk: false };
1098
+ }
1099
+ }
1100
+ function parseYaml(block) {
1101
+ const data = {};
1102
+ const lines = block.split("\n");
1103
+ let i = 0;
1104
+ while (i < lines.length) {
1105
+ const line = lines[i];
1106
+ if (!line.trim() || line.trim().startsWith("#")) {
1107
+ i++;
1108
+ continue;
1109
+ }
1110
+ const topMatch = line.match(/^([A-Za-z_][A-Za-z0-9_-]*)\s*:\s*(.*)$/);
1111
+ if (!topMatch) {
1112
+ throw new Error(`YAML parse: unexpected line ${i}: ${line}`);
1113
+ }
1114
+ const [, key, rest] = topMatch;
1115
+ if (rest === ">" || rest === "|") {
1116
+ const style = rest;
1117
+ const collected = [];
1118
+ i++;
1119
+ while (i < lines.length) {
1120
+ const next = lines[i];
1121
+ if (next === "" || /^\s/.test(next)) {
1122
+ collected.push(next.replace(/^ {2}/, "").replace(/^\t/, ""));
1123
+ i++;
1124
+ } else {
1125
+ break;
1126
+ }
1127
+ }
1128
+ data[key] = style === ">" ? collected.filter((l) => l.trim()).join(" ").trim() : collected.join("\n").trim();
1129
+ } else if (rest === "") {
1130
+ const nested = {};
1131
+ i++;
1132
+ while (i < lines.length && /^\s+\S/.test(lines[i])) {
1133
+ const childMatch = lines[i].match(
1134
+ /^\s+([A-Za-z_][A-Za-z0-9_-]*)\s*:\s*(.*)$/
1135
+ );
1136
+ if (childMatch) {
1137
+ nested[childMatch[1]] = stripQuotes(childMatch[2]);
1138
+ }
1139
+ i++;
1140
+ }
1141
+ data[key] = nested;
1142
+ } else {
1143
+ data[key] = stripQuotes(rest);
1144
+ i++;
1145
+ }
1146
+ }
1147
+ return data;
1148
+ }
1149
+ function stripQuotes(s) {
1150
+ const trimmed = s.trim();
1151
+ if (trimmed.length < 2) return trimmed;
1152
+ const first = trimmed[0];
1153
+ const last = trimmed[trimmed.length - 1];
1154
+ if (first === '"' && last === '"') {
1155
+ return trimmed.slice(1, -1).replace(
1156
+ /\\u([0-9a-fA-F]{4})/g,
1157
+ (_, hex) => String.fromCharCode(parseInt(hex, 16))
1158
+ ).replace(/\\"/g, '"').replace(/\\'/g, "'").replace(/\\n/g, "\n").replace(/\\t/g, " ").replace(/\\\\/g, "\\");
1159
+ }
1160
+ if (first === "'" && last === "'") {
1161
+ return trimmed.slice(1, -1).replace(/''/g, "'");
1162
+ }
1163
+ return trimmed;
1164
+ }
1165
+ var QUOTED_PHRASE_RE = /[\u0022\u0027\u201C\u201D\u2018\u2019]([^\u0022\u0027\u201C\u201D\u2018\u2019\n]{4,100}?)[\u0022\u0027\u201C\u201D\u2018\u2019]/g;
1166
+ var USE_WHEN_RE = /\bUse when\s+(.+?)(?=\.\s+[A-Z]|[\n\r]|$)/g;
1167
+ function extractTriggers(text) {
1168
+ const triggers = /* @__PURE__ */ new Set();
1169
+ for (const m of text.matchAll(QUOTED_PHRASE_RE)) {
1170
+ const t = m[1]?.trim();
1171
+ if (t && t.length >= 4) triggers.add(t);
1172
+ }
1173
+ for (const m of text.matchAll(USE_WHEN_RE)) {
1174
+ const t = m[1]?.trim().replace(/[.,;]+$/, "");
1175
+ if (t && t.length > 0 && t.length < 120) triggers.add(t);
1176
+ }
1177
+ return [...triggers];
1178
+ }
1179
+ function extractRelatedSkills(body) {
1180
+ const sectionMatch = body.match(
1181
+ /##\s*Related Skills\s*\r?\n([\s\S]*?)(?=\r?\n##|\r?\n#[^#]|$)/i
1182
+ );
1183
+ if (!sectionMatch) return [];
1184
+ const section = sectionMatch[1];
1185
+ const names = [];
1186
+ for (const line of section.split(/\r?\n/)) {
1187
+ const m = line.match(/^[\s]*[-*+]\s+`?([a-z][a-z0-9-]*)`?/i);
1188
+ if (m) names.push(m[1].toLowerCase());
1189
+ }
1190
+ return [...new Set(names)];
1191
+ }
1192
+ function discoverSkillFiles(skillsDir) {
1193
+ if (!existsSync4(skillsDir)) return [];
1194
+ let entries;
1195
+ try {
1196
+ entries = readdirSync3(skillsDir, {
1197
+ withFileTypes: true,
1198
+ encoding: "utf-8"
1199
+ });
1200
+ } catch {
1201
+ return [];
1202
+ }
1203
+ const sorted = [...entries].sort((a, b) => a.name.localeCompare(b.name));
1204
+ const results = [];
1205
+ for (const entry of sorted) {
1206
+ if (entry.name.startsWith(".")) continue;
1207
+ let isDir = entry.isDirectory();
1208
+ if (entry.isSymbolicLink()) {
1209
+ try {
1210
+ const resolved = realpathSync2(join3(skillsDir, entry.name));
1211
+ isDir = statSync(resolved).isDirectory();
1212
+ } catch {
1213
+ continue;
1214
+ }
1215
+ }
1216
+ if (!isDir) continue;
1217
+ const skillMdPath = join3(skillsDir, entry.name, "SKILL.md");
1218
+ if (existsSync4(skillMdPath)) {
1219
+ results.push(skillMdPath);
1220
+ }
1221
+ }
1222
+ return results;
1223
+ }
1224
+ function mineSkills(skillsDir) {
1225
+ const result = {
1226
+ nodes: [],
1227
+ edges: [],
1228
+ skillCount: 0,
1229
+ anomalies: []
1230
+ };
1231
+ const skillFiles = discoverSkillFiles(skillsDir);
1232
+ if (skillFiles.length === 0) return result;
1233
+ const now = Date.now();
1234
+ const keywordIds = /* @__PURE__ */ new Set();
1235
+ const skillIdByDirName = /* @__PURE__ */ new Map();
1236
+ const pendingRelated = [];
1237
+ for (const skillPath of skillFiles) {
1238
+ let content;
1239
+ try {
1240
+ content = readFileSync4(skillPath, "utf-8");
1241
+ } catch {
1242
+ continue;
1243
+ }
1244
+ const skillDirName = basename3(dirname2(skillPath));
1245
+ const { data, body, parseOk } = parseFrontmatter(content);
1246
+ const hasFrontmatter = parseOk && Object.keys(data).length > 0;
1247
+ let name;
1248
+ let description;
1249
+ let version;
1250
+ if (hasFrontmatter) {
1251
+ name = String(data.name ?? skillDirName);
1252
+ description = String(data.description ?? "");
1253
+ const meta = data.metadata;
1254
+ if (meta && typeof meta === "object" && "version" in meta) {
1255
+ version = String(meta.version);
1256
+ }
1257
+ } else {
1258
+ result.anomalies.push(skillPath);
1259
+ const headingMatch = content.match(/^#\s+(.+)$/m);
1260
+ name = headingMatch?.[1]?.trim() ?? skillDirName;
1261
+ const paragraphs = content.split(/\r?\n\s*\r?\n/).map((p) => p.trim()).filter((p) => p && !p.startsWith("#"));
1262
+ description = paragraphs[0] ?? "";
1263
+ }
1264
+ const skillId = makeId4("skill", skillDirName);
1265
+ skillIdByDirName.set(skillDirName.toLowerCase(), skillId);
1266
+ const sourceFileRel = `${skillDirName}/SKILL.md`;
1267
+ const sizeLines = content.split("\n").length;
1268
+ result.nodes.push({
1269
+ id: skillId,
1270
+ label: name,
1271
+ kind: "concept",
1272
+ sourceFile: sourceFileRel,
1273
+ sourceLocation: skillPath,
1274
+ confidence: "EXTRACTED",
1275
+ confidenceScore: 1,
1276
+ lastVerified: now,
1277
+ queryCount: 0,
1278
+ metadata: {
1279
+ miner: "skills",
1280
+ subkind: "skill",
1281
+ description: truncateGraphemeSafe(description, 500),
1282
+ sizeLines,
1283
+ hasFrontmatter,
1284
+ version,
1285
+ skillDir: skillDirName
1286
+ }
1287
+ });
1288
+ result.skillCount++;
1289
+ const triggers = extractTriggers(description + "\n\n" + body);
1290
+ for (const trigger of triggers) {
1291
+ const normalized = trigger.toLowerCase().trim();
1292
+ if (normalized.length === 0 || normalized.length > 120) continue;
1293
+ const keywordId = makeId4("keyword", normalized);
1294
+ if (!keywordIds.has(keywordId)) {
1295
+ result.nodes.push({
1296
+ id: keywordId,
1297
+ label: trigger,
1298
+ kind: "concept",
1299
+ sourceFile: sourceFileRel,
1300
+ sourceLocation: null,
1301
+ confidence: "EXTRACTED",
1302
+ confidenceScore: 1,
1303
+ lastVerified: now,
1304
+ queryCount: 0,
1305
+ metadata: { miner: "skills", subkind: "keyword" }
1306
+ });
1307
+ keywordIds.add(keywordId);
1308
+ }
1309
+ result.edges.push({
1310
+ source: keywordId,
1311
+ target: skillId,
1312
+ relation: "triggered_by",
1313
+ confidence: "EXTRACTED",
1314
+ confidenceScore: 1,
1315
+ sourceFile: sourceFileRel,
1316
+ sourceLocation: skillPath,
1317
+ lastVerified: now,
1318
+ metadata: { miner: "skills" }
1319
+ });
1320
+ }
1321
+ if (hasFrontmatter || body.length > 0) {
1322
+ const relatedNames = extractRelatedSkills(body);
1323
+ for (const relatedName of relatedNames) {
1324
+ pendingRelated.push({ fromId: skillId, toName: relatedName });
1325
+ }
1326
+ }
1327
+ }
1328
+ for (const { fromId, toName } of pendingRelated) {
1329
+ const toId = skillIdByDirName.get(toName.toLowerCase());
1330
+ if (toId && toId !== fromId) {
1331
+ result.edges.push({
1332
+ source: fromId,
1333
+ target: toId,
1334
+ relation: "similar_to",
1335
+ confidence: "INFERRED",
1336
+ confidenceScore: 0.8,
1337
+ sourceFile: "SKILL.md",
1338
+ sourceLocation: null,
1339
+ lastVerified: Date.now(),
1340
+ metadata: { miner: "skills", via: "related_skills_section" }
1341
+ });
1342
+ }
1343
+ }
1344
+ return result;
1345
+ }
1346
+
1034
1347
  // src/core.ts
1035
1348
  var ENGRAM_DIR = ".engram";
1036
1349
  var DB_FILE = "graph.db";
1350
+ var LOCK_FILE = "init.lock";
1351
+ var DEFAULT_SKILLS_DIR = join4(homedir(), ".claude", "skills");
1037
1352
  function getDbPath(projectRoot) {
1038
- return join3(projectRoot, ENGRAM_DIR, DB_FILE);
1353
+ return join4(projectRoot, ENGRAM_DIR, DB_FILE);
1039
1354
  }
1040
1355
  async function getStore(projectRoot) {
1041
1356
  return GraphStore.open(getDbPath(projectRoot));
1042
1357
  }
1043
- async function init(projectRoot) {
1358
+ async function init(projectRoot, options = {}) {
1044
1359
  const root = resolve2(projectRoot);
1045
1360
  const start = Date.now();
1046
- mkdirSync2(join3(root, ENGRAM_DIR), { recursive: true });
1047
- const { nodes, edges, fileCount, totalLines } = extractDirectory(root);
1048
- const gitResult = mineGitHistory(root);
1049
- const sessionResult = mineSessionHistory(root);
1050
- const allNodes = [...nodes, ...gitResult.nodes, ...sessionResult.nodes];
1051
- const allEdges = [...edges, ...gitResult.edges, ...sessionResult.edges];
1052
- const store = await getStore(root);
1361
+ mkdirSync2(join4(root, ENGRAM_DIR), { recursive: true });
1362
+ const lockPath = join4(root, ENGRAM_DIR, LOCK_FILE);
1053
1363
  try {
1054
- store.clearAll();
1055
- store.bulkUpsert(allNodes, allEdges);
1056
- store.setStat("last_mined", String(Date.now()));
1057
- store.setStat("project_root", root);
1364
+ writeFileSync2(lockPath, String(process.pid), { flag: "wx" });
1365
+ } catch (err) {
1366
+ if (err.code === "EEXIST") {
1367
+ throw new Error(
1368
+ `engram: another init is running on ${root} (lock: ${lockPath}). If no other process is active, delete the lock file manually.`
1369
+ );
1370
+ }
1371
+ throw err;
1372
+ }
1373
+ try {
1374
+ const { nodes, edges, fileCount, totalLines } = extractDirectory(root);
1375
+ const gitResult = mineGitHistory(root);
1376
+ const sessionResult = mineSessionHistory(root);
1377
+ let skillCount = 0;
1378
+ let skillNodes = [];
1379
+ let skillEdges = [];
1380
+ if (options.withSkills) {
1381
+ const skillsDir = typeof options.withSkills === "string" ? options.withSkills : DEFAULT_SKILLS_DIR;
1382
+ const skillsResult = mineSkills(skillsDir);
1383
+ skillCount = skillsResult.skillCount;
1384
+ skillNodes = skillsResult.nodes;
1385
+ skillEdges = skillsResult.edges;
1386
+ }
1387
+ const allNodes = [
1388
+ ...nodes,
1389
+ ...gitResult.nodes,
1390
+ ...sessionResult.nodes,
1391
+ ...skillNodes
1392
+ ];
1393
+ const allEdges = [
1394
+ ...edges,
1395
+ ...gitResult.edges,
1396
+ ...sessionResult.edges,
1397
+ ...skillEdges
1398
+ ];
1399
+ const store = await getStore(root);
1400
+ try {
1401
+ store.clearAll();
1402
+ store.bulkUpsert(allNodes, allEdges);
1403
+ store.setStat("last_mined", String(Date.now()));
1404
+ store.setStat("project_root", root);
1405
+ } finally {
1406
+ store.close();
1407
+ }
1408
+ return {
1409
+ nodes: allNodes.length,
1410
+ edges: allEdges.length,
1411
+ fileCount,
1412
+ totalLines,
1413
+ timeMs: Date.now() - start,
1414
+ skillCount
1415
+ };
1058
1416
  } finally {
1059
- store.close();
1417
+ try {
1418
+ unlinkSync(lockPath);
1419
+ } catch {
1420
+ }
1060
1421
  }
1061
- return { nodes: allNodes.length, edges: allEdges.length, fileCount, totalLines, timeMs: Date.now() - start };
1062
1422
  }
1063
1423
  async function query(projectRoot, question, options = {}) {
1064
1424
  const store = await getStore(projectRoot);
@@ -1110,6 +1470,28 @@ async function learn(projectRoot, text, sourceLabel = "manual") {
1110
1470
  }
1111
1471
  return { nodesAdded: nodes.length };
1112
1472
  }
1473
+ async function mistakes(projectRoot, options = {}) {
1474
+ const store = await getStore(projectRoot);
1475
+ try {
1476
+ let items = store.getAllNodes().filter((n) => n.kind === "mistake");
1477
+ if (options.sinceDays !== void 0) {
1478
+ const cutoff = Date.now() - options.sinceDays * 24 * 60 * 60 * 1e3;
1479
+ items = items.filter((m) => m.lastVerified >= cutoff);
1480
+ }
1481
+ items.sort((a, b) => b.lastVerified - a.lastVerified);
1482
+ const limit = options.limit ?? 20;
1483
+ return items.slice(0, limit).map((m) => ({
1484
+ id: m.id,
1485
+ label: m.label,
1486
+ confidence: m.confidence,
1487
+ confidenceScore: m.confidenceScore,
1488
+ sourceFile: m.sourceFile,
1489
+ lastVerified: m.lastVerified
1490
+ }));
1491
+ } finally {
1492
+ store.close();
1493
+ }
1494
+ }
1113
1495
  async function benchmark(projectRoot, questions) {
1114
1496
  const root = resolve2(projectRoot);
1115
1497
  const store = await getStore(root);
@@ -1121,8 +1503,8 @@ async function benchmark(projectRoot, questions) {
1121
1503
  if (node.sourceFile && !seenFiles.has(node.sourceFile)) {
1122
1504
  seenFiles.add(node.sourceFile);
1123
1505
  try {
1124
- const fullPath = join3(root, node.sourceFile);
1125
- if (existsSync4(fullPath)) fullCorpusChars += readFileSync4(fullPath, "utf-8").length;
1506
+ const fullPath = join4(root, node.sourceFile);
1507
+ if (existsSync5(fullPath)) fullCorpusChars += readFileSync5(fullPath, "utf-8").length;
1126
1508
  } catch {
1127
1509
  }
1128
1510
  }
@@ -1143,8 +1525,8 @@ async function benchmark(projectRoot, questions) {
1143
1525
  let relevantChars = 0;
1144
1526
  for (const f of matchedFiles) {
1145
1527
  try {
1146
- const fullPath = join3(root, f);
1147
- if (existsSync4(fullPath)) relevantChars += readFileSync4(fullPath, "utf-8").length;
1528
+ const fullPath = join4(root, f);
1529
+ if (existsSync5(fullPath)) relevantChars += readFileSync5(fullPath, "utf-8").length;
1148
1530
  } catch {
1149
1531
  }
1150
1532
  }
@@ -1174,11 +1556,15 @@ async function benchmark(projectRoot, questions) {
1174
1556
 
1175
1557
  export {
1176
1558
  GraphStore,
1559
+ sliceGraphemeSafe,
1560
+ truncateGraphemeSafe,
1561
+ MAX_MISTAKE_LABEL_CHARS,
1177
1562
  queryGraph,
1178
1563
  shortestPath,
1179
1564
  SUPPORTED_EXTENSIONS,
1180
1565
  extractFile,
1181
1566
  extractDirectory,
1567
+ mineSkills,
1182
1568
  getDbPath,
1183
1569
  getStore,
1184
1570
  init,
@@ -1187,5 +1573,6 @@ export {
1187
1573
  godNodes,
1188
1574
  stats,
1189
1575
  learn,
1576
+ mistakes,
1190
1577
  benchmark
1191
1578
  };