@alaarab/cortex 1.13.6 → 1.14.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -19,9 +19,9 @@ Supports Claude Code, Copilot CLI, Cursor, and Codex.
19
19
 
20
20
  <br>
21
21
 
22
- Project knowledge, field findings, task queues stored as markdown in a git repo you own. No vendor lock-in, no cloud dependency. One command to set up. Zero commands to use after that.
22
+ Project knowledge, field findings, task queues. Stored as markdown in a git repo you own. No vendor lock-in, no cloud dependency. One command to set up. Zero commands to use after that.
23
23
 
24
- > **Quick start:** `npx @alaarab/cortex init` -- takes 30 seconds, no account needed.
24
+ > **Quick start:** `npx @alaarab/cortex init` takes 30 seconds, no account needed.
25
25
 
26
26
  <br>
27
27
  </div>
@@ -98,24 +98,24 @@ On a new machine: clone, run init, done.
98
98
 
99
99
  ## What's new
100
100
 
101
- - **Bulk MCP tools** -- `add_findings`, `add_backlog_items`, `complete_backlog_items`, `remove_findings` for batch operations
102
- - **TUI shell** -- interactive terminal UI with Backlog, Findings, Memory Queue, and Health tabs (`cortex shell`)
103
- - **Tiered reference** -- `reference/` subdirectories for deep reference docs, indexed separately from findings
104
- - **FTS5 full-text search** with synonym expansion and keyword extraction
105
- - **Multi-agent governance** -- role-based access control for teams (admins, maintainers, contributors, viewers)
106
- - **Starter templates** -- `cortex init --template python-project|monorepo|library|frontend`
107
- - **Memory quality** -- confidence scoring, age decay, trust filtering, and a feedback loop
108
- - **Data portability** -- export/import projects as JSON, archive/unarchive projects
101
+ - **Terminal shell**: open `cortex` and get tabs for Backlog, Findings, Memory Queue, and Health. No agent needed
102
+ - **Synonym search**: type "throttling" and find "rate limit" and "429". You don't need to remember what you called it
103
+ - **Bulk operations**: `add_findings`, `add_backlog_items`, `complete_backlog_items`, `remove_findings` for batch work
104
+ - **Memory quality**: confidence scoring, age decay, and a feedback loop. Stale or low-signal entries stop appearing
105
+ - **Starter templates**: `cortex init --template python-project|monorepo|library|frontend`
106
+ - **Multi-agent access control**: four roles (admin, maintainer, contributor, viewer) for shared cortex repos
107
+ - **Deep reference**: `reference/` subdirectories indexed separately so API docs don't drown out your findings
108
+ - **Data portability**: export/import projects as JSON, archive/unarchive anytime
109
109
 
110
110
  ---
111
111
 
112
112
  ## What makes this different
113
113
 
114
- **It runs itself.** Hooks inject context before every prompt and auto-save after every response. No manual calls needed. Trust filtering gates what gets injected based on confidence, age decay, and citation validity.
114
+ **It runs itself.** Hooks inject context before every prompt and auto-save after every response. Trust filtering checks confidence, age decay, and citation validity before anything lands in your context.
115
115
 
116
116
  **It's just files.** Markdown in a git repo you own. No database, no vector store, no account. `git log` shows how it grew.
117
117
 
118
- **Search that works.** Type "throttling" and it finds "rate limit" and "429". Synonym expansion means you don't need exact phrases.
118
+ **Search that works.** Type "throttling" and it finds "rate limit" and "429". You don't need to remember what you called it.
119
119
 
120
120
  **Every machine, same brain.** Push to a private repo, clone on a new machine, run init. Profiles control which projects each machine sees.
121
121
 
@@ -123,15 +123,15 @@ On a new machine: clone, run init, done.
123
123
 
124
124
  ## What lives in your cortex
125
125
 
126
- `cortex init` creates your project store with starter templates. Each project gets its own directory add files as the project grows.
126
+ `cortex init` creates your project store with starter templates. Each project gets its own directory. Add files as the project grows.
127
127
 
128
128
  | File | What it's for |
129
129
  |------|--------------|
130
- | `summary.md` | Five-line card: what, stack, status, how to run, the finding |
130
+ | `summary.md` | Five-line card: what, stack, status, how to run, key insight |
131
131
  | `CLAUDE.md` | Full context: architecture, commands, conventions |
132
132
  | `REFERENCE.md` | Deep reference: API details, data models, things too long for CLAUDE.md |
133
133
  | `FINDINGS.md` | Bugs hit, patterns discovered, things to avoid next time |
134
- | `CANONICAL_MEMORIES.md` | Pinned high-signal memories that bypass decay |
134
+ | `CANONICAL_MEMORIES.md` | Pinned memories that never expire and always inject |
135
135
  | `backlog.md` | Task queue that persists across sessions |
136
136
  | `MEMORY_QUEUE.md` | Items waiting for your review (see [Memory queue](#memory-queue) below) |
137
137
  | `.claude/skills/` | Project-specific slash commands |
@@ -209,6 +209,7 @@ The server indexes your cortex into a local SQLite FTS5 database. Tools are grou
209
209
  | `get_related_docs` | Get docs linked to a named entity. |
210
210
  | `read_graph` | Read the entity graph for a project or all projects. |
211
211
  | `link_findings` | Manually link a finding to an entity. Persists to manual-links.json and survives rebuilds. |
212
+ | `cross_project_entities` | Find entities shared across multiple projects. |
212
213
 
213
214
  ### Session management
214
215
 
@@ -2,7 +2,12 @@ import { ensureCortexPath } from "./shared.js";
2
2
  import { getIndexPolicy, updateIndexPolicy, getRetentionPolicy, updateRetentionPolicy, getWorkflowPolicy, updateWorkflowPolicy, getAccessControl, updateAccessControl, } from "./shared-governance.js";
3
3
  import { listMachines as listMachinesStore, listProfiles as listProfilesStore } from "./data-access.js";
4
4
  import { setTelemetryEnabled, getTelemetrySummary, resetTelemetry } from "./telemetry.js";
5
- const cortexPath = ensureCortexPath();
5
+ let _cortexPath;
6
+ function getCortexPath() {
7
+ if (!_cortexPath)
8
+ _cortexPath = ensureCortexPath();
9
+ return _cortexPath;
10
+ }
6
11
  const profile = process.env.CORTEX_PROFILE || "";
7
12
  // ── Config router ────────────────────────────────────────────────────────────
8
13
  export async function handleConfig(args) {
@@ -43,7 +48,7 @@ Subcommands:
43
48
  // ── Index policy ─────────────────────────────────────────────────────────────
44
49
  export async function handleIndexPolicy(args) {
45
50
  if (!args.length || args[0] === "get") {
46
- console.log(JSON.stringify(getIndexPolicy(cortexPath), null, 2));
51
+ console.log(JSON.stringify(getIndexPolicy(getCortexPath()), null, 2));
47
52
  return;
48
53
  }
49
54
  if (args[0] === "set") {
@@ -64,7 +69,7 @@ export async function handleIndexPolicy(args) {
64
69
  patch.includeHidden = /^(1|true|yes|on)$/i.test(v);
65
70
  }
66
71
  }
67
- const result = updateIndexPolicy(cortexPath, patch);
72
+ const result = updateIndexPolicy(getCortexPath(), patch);
68
73
  if (!result.ok) {
69
74
  console.log(result.error);
70
75
  if (result.code === "PERMISSION_DENIED")
@@ -80,7 +85,7 @@ export async function handleIndexPolicy(args) {
80
85
  // ── Memory policy ────────────────────────────────────────────────────────────
81
86
  export async function handleRetentionPolicy(args) {
82
87
  if (!args.length || args[0] === "get") {
83
- console.log(JSON.stringify(getRetentionPolicy(cortexPath), null, 2));
88
+ console.log(JSON.stringify(getRetentionPolicy(getCortexPath()), null, 2));
84
89
  return;
85
90
  }
86
91
  if (args[0] === "set") {
@@ -101,7 +106,7 @@ export async function handleRetentionPolicy(args) {
101
106
  patch[k] = value;
102
107
  }
103
108
  }
104
- const result = updateRetentionPolicy(cortexPath, patch);
109
+ const result = updateRetentionPolicy(getCortexPath(), patch);
105
110
  if (!result.ok) {
106
111
  console.log(result.error);
107
112
  if (result.code === "PERMISSION_DENIED")
@@ -117,7 +122,7 @@ export async function handleRetentionPolicy(args) {
117
122
  // ── Memory workflow ──────────────────────────────────────────────────────────
118
123
  export async function handleWorkflowPolicy(args) {
119
124
  if (!args.length || args[0] === "get") {
120
- console.log(JSON.stringify(getWorkflowPolicy(cortexPath), null, 2));
125
+ console.log(JSON.stringify(getWorkflowPolicy(getCortexPath()), null, 2));
121
126
  return;
122
127
  }
123
128
  if (args[0] === "set") {
@@ -139,7 +144,7 @@ export async function handleWorkflowPolicy(args) {
139
144
  patch[k] = Number.isNaN(num) ? v : num;
140
145
  }
141
146
  }
142
- const result = updateWorkflowPolicy(cortexPath, patch);
147
+ const result = updateWorkflowPolicy(getCortexPath(), patch);
143
148
  if (!result.ok) {
144
149
  console.log(result.error);
145
150
  if (result.code === "PERMISSION_DENIED")
@@ -155,7 +160,7 @@ export async function handleWorkflowPolicy(args) {
155
160
  // ── Memory access ────────────────────────────────────────────────────────────
156
161
  export async function handleAccessControl(args) {
157
162
  if (!args.length || args[0] === "get") {
158
- console.log(JSON.stringify(getAccessControl(cortexPath), null, 2));
163
+ console.log(JSON.stringify(getAccessControl(getCortexPath()), null, 2));
159
164
  return;
160
165
  }
161
166
  if (args[0] === "set") {
@@ -168,7 +173,7 @@ export async function handleAccessControl(args) {
168
173
  continue;
169
174
  patch[k] = v.split(",").map((s) => s.trim()).filter(Boolean);
170
175
  }
171
- const result = updateAccessControl(cortexPath, patch);
176
+ const result = updateAccessControl(getCortexPath(), patch);
172
177
  if (!result.ok) {
173
178
  console.log(result.error);
174
179
  if (result.code === "PERMISSION_DENIED")
@@ -183,7 +188,7 @@ export async function handleAccessControl(args) {
183
188
  }
184
189
  // ── Machines and profiles ────────────────────────────────────────────────────
185
190
  export function handleConfigMachines() {
186
- const result = listMachinesStore(cortexPath);
191
+ const result = listMachinesStore(getCortexPath());
187
192
  if (!result.ok) {
188
193
  console.log(result.error);
189
194
  return;
@@ -192,7 +197,7 @@ export function handleConfigMachines() {
192
197
  console.log(`Registered Machines\n${lines.join("\n")}`);
193
198
  }
194
199
  export function handleConfigProfiles() {
195
- const result = listProfilesStore(cortexPath);
200
+ const result = listProfilesStore(getCortexPath());
196
201
  if (!result.ok) {
197
202
  console.log(result.error);
198
203
  return;
@@ -209,19 +214,19 @@ function handleConfigTelemetry(args) {
209
214
  const action = args[0];
210
215
  switch (action) {
211
216
  case "on":
212
- setTelemetryEnabled(cortexPath, true);
217
+ setTelemetryEnabled(getCortexPath(), true);
213
218
  console.log("Telemetry enabled. Local usage stats will be collected.");
214
219
  console.log("No data is sent externally. Stats are stored in .runtime/telemetry.json.");
215
220
  return;
216
221
  case "off":
217
- setTelemetryEnabled(cortexPath, false);
222
+ setTelemetryEnabled(getCortexPath(), false);
218
223
  console.log("Telemetry disabled.");
219
224
  return;
220
225
  case "reset":
221
- resetTelemetry(cortexPath);
226
+ resetTelemetry(getCortexPath());
222
227
  console.log("Telemetry stats reset.");
223
228
  return;
224
229
  default:
225
- console.log(getTelemetrySummary(cortexPath));
230
+ console.log(getTelemetrySummary(getCortexPath()));
226
231
  }
227
232
  }
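The recurring change across the files in this release replaces an eager module-level `const cortexPath = ensureCortexPath()` with a lazy memoized getter, so merely importing the module no longer resolves (or fails on) the cortex path before any command actually needs it. A minimal sketch of the pattern, with a hypothetical stand-in for the real `ensureCortexPath`:

```javascript
// Hypothetical stand-in for the package's ensureCortexPath(), which
// resolves and validates the cortex directory (and may throw if missing).
function ensureCortexPath() {
  return process.env.CORTEX_PATH || "/tmp/cortex";
}

// Before: evaluated at import time, so loading the module could throw.
// const cortexPath = ensureCortexPath();

// After: resolved once, on first use, then cached for later calls.
let _cortexPath;
function getCortexPath() {
  if (!_cortexPath) _cortexPath = ensureCortexPath();
  return _cortexPath;
}
```

Every former `cortexPath` reference in these files becomes a `getCortexPath()` call, deferring the path check to the first command that actually touches the store.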
@@ -8,7 +8,12 @@ import * as fs from "fs";
8
8
  import * as os from "os";
9
9
  import * as path from "path";
10
10
  import { execFileSync } from "child_process";
11
- const cortexPath = ensureCortexPath();
11
+ let _cortexPath;
12
+ function getCortexPath() {
13
+ if (!_cortexPath)
14
+ _cortexPath = ensureCortexPath();
15
+ return _cortexPath;
16
+ }
12
17
  const profile = process.env.CORTEX_PROFILE || "";
13
18
  function runGit(cwd, args) {
14
19
  return runGitShared(cwd, args, EXEC_TIMEOUT_MS, debugLog);
@@ -20,7 +25,7 @@ function shouldRetryGh(err) {
20
25
  function inferProject(arg) {
21
26
  if (arg)
22
27
  return arg;
23
- return detectProject(cortexPath, process.cwd(), profile);
28
+ return detectProject(getCortexPath(), process.cwd(), profile);
24
29
  }
25
30
  // ── Git log parsing ──────────────────────────────────────────────────────────
26
31
  export function parseGitLogRecords(cwd, days) {
@@ -219,7 +224,7 @@ export async function handleExtractMemories(projectArg, cwdArg, silent = false)
219
224
  return;
220
225
  }
221
226
  const days = Number.parseInt(process.env.CORTEX_MEMORY_EXTRACT_WINDOW_DAYS || "30", 10);
222
- const threshold = Number.parseFloat(process.env.CORTEX_MEMORY_AUTO_ACCEPT || String(getRetentionPolicy(cortexPath).autoAcceptThreshold));
227
+ const threshold = Number.parseFloat(process.env.CORTEX_MEMORY_AUTO_ACCEPT || String(getRetentionPolicy(getCortexPath()).autoAcceptThreshold));
223
228
  const records = parseGitLogRecords(repoRoot, Number.isNaN(days) ? 30 : days);
224
229
  const ghCandidates = isFeatureEnabled("CORTEX_FEATURE_GH_MINING", false)
225
230
  ? await mineGithubCandidates(repoRoot)
@@ -232,14 +237,14 @@ export async function handleExtractMemories(projectArg, cwdArg, silent = false)
232
237
  continue;
233
238
  const line = `${candidate.text} (source commit ${rec.hash.slice(0, 8)})`;
234
239
  if (candidate.score >= threshold) {
235
- addFindingToFile(cortexPath, project, line, {
240
+ addFindingToFile(getCortexPath(), project, line, {
236
241
  repo: repoRoot,
237
242
  commit: rec.hash,
238
243
  });
239
244
  accepted++;
240
245
  }
241
246
  else {
242
- const qr1 = appendReviewQueue(cortexPath, project, "Review", [`[confidence ${candidate.score.toFixed(2)}] ${line}`]);
247
+ const qr1 = appendReviewQueue(getCortexPath(), project, "Review", [`[confidence ${candidate.score.toFixed(2)}] ${line}`]);
243
248
  if (qr1.ok)
244
249
  queued += qr1.data;
245
250
  }
@@ -248,10 +253,10 @@ export async function handleExtractMemories(projectArg, cwdArg, silent = false)
248
253
  const line = `${c.text}${c.commit ? ` (source commit ${c.commit.slice(0, 8)})` : ""}`;
249
254
  if (c.text.startsWith("CI failure pattern:")) {
250
255
  const key = entryScoreKey(project, "FINDINGS.md", line);
251
- recordFeedback(cortexPath, key, "regression");
256
+ recordFeedback(getCortexPath(), key, "regression");
252
257
  }
253
258
  if (c.score >= threshold) {
254
- addFindingToFile(cortexPath, project, line, {
259
+ addFindingToFile(getCortexPath(), project, line, {
255
260
  repo: repoRoot,
256
261
  commit: c.commit,
257
262
  file: c.file,
@@ -259,13 +264,13 @@ export async function handleExtractMemories(projectArg, cwdArg, silent = false)
259
264
  accepted++;
260
265
  }
261
266
  else {
262
- const qr2 = appendReviewQueue(cortexPath, project, "Review", [`[confidence ${c.score.toFixed(2)}] ${line}`]);
267
+ const qr2 = appendReviewQueue(getCortexPath(), project, "Review", [`[confidence ${c.score.toFixed(2)}] ${line}`]);
263
268
  if (qr2.ok)
264
269
  queued += qr2.data;
265
270
  }
266
271
  }
267
- flushEntryScores(cortexPath);
268
- appendAuditLog(cortexPath, "extract_memories", `project=${project} accepted=${accepted} queued=${queued} window_days=${days}`);
272
+ flushEntryScores(getCortexPath());
273
+ appendAuditLog(getCortexPath(), "extract_memories", `project=${project} accepted=${accepted} queued=${queued} window_days=${days}`);
269
274
  if (!silent)
270
275
  console.log(`Extracted memory candidates for ${project}: accepted=${accepted}, queued=${queued}, window=${days}d`);
271
276
  }
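The extract-memories hunks above gate each candidate on a confidence threshold: scores at or above it are written straight to findings, the rest are queued for review with a `[confidence ...]` prefix. A simplified sketch of that gate, using hypothetical candidate objects in place of the parsed git-log records (the real code reads the threshold from `CORTEX_MEMORY_AUTO_ACCEPT` or the retention policy's `autoAcceptThreshold`):

```javascript
// Accept-or-queue gate: high-confidence candidates are accepted outright,
// the rest are annotated with their score and queued for human review.
function triage(candidates, threshold) {
  const accepted = [];
  const queued = [];
  for (const c of candidates) {
    if (c.score >= threshold) {
      accepted.push(c.text); // real code: addFindingToFile(...)
    } else {
      // real code: appendReviewQueue(..., "Review", [...])
      queued.push(`[confidence ${c.score.toFixed(2)}] ${c.text}`);
    }
  }
  return { accepted, queued };
}
```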
@@ -4,13 +4,18 @@ import { filterTrustedFindingsDetailed, migrateLegacyFindings, } from "./shared-
4
4
  import * as fs from "fs";
5
5
  import * as path from "path";
6
6
  import { handleExtractMemories } from "./cli-extract.js";
7
- const cortexPath = ensureCortexPath();
7
+ let _cortexPath;
8
+ function getCortexPath() {
9
+ if (!_cortexPath)
10
+ _cortexPath = ensureCortexPath();
11
+ return _cortexPath;
12
+ }
8
13
  const profile = process.env.CORTEX_PROFILE || "";
9
14
  // ── Shared helpers ───────────────────────────────────────────────────────────
10
15
  function targetProjects(projectArg) {
11
16
  return projectArg
12
17
  ? [projectArg]
13
- : getProjectDirs(cortexPath, profile).map((p) => path.basename(p)).filter((p) => p !== "global");
18
+ : getProjectDirs(getCortexPath(), profile).map((p) => path.basename(p)).filter((p) => p !== "global");
14
19
  }
15
20
  function parseProjectDryRunArgs(args, command, usage) {
16
21
  let projectArg;
@@ -36,7 +41,7 @@ function parseProjectDryRunArgs(args, command, usage) {
36
41
  function captureFindingBackups(projects) {
37
42
  const snapshots = new Map();
38
43
  for (const project of projects) {
39
- const backup = path.join(cortexPath, project, "FINDINGS.md.bak");
44
+ const backup = path.join(getCortexPath(), project, "FINDINGS.md.bak");
40
45
  if (!fs.existsSync(backup))
41
46
  continue;
42
47
  snapshots.set(backup, fs.statSync(backup).mtimeMs);
@@ -46,14 +51,14 @@ function captureFindingBackups(projects) {
46
51
  function summarizeBackupChanges(before, projects) {
47
52
  const changed = [];
48
53
  for (const project of projects) {
49
- const backup = path.join(cortexPath, project, "FINDINGS.md.bak");
54
+ const backup = path.join(getCortexPath(), project, "FINDINGS.md.bak");
50
55
  if (!fs.existsSync(backup))
51
56
  continue;
52
57
  const current = fs.statSync(backup).mtimeMs;
53
58
  const previous = before.get(backup);
54
59
  if (previous === undefined || current !== previous) {
55
60
  // Normalize to forward slashes for consistent output across platforms
56
- changed.push(path.relative(cortexPath, backup).replace(/\\/g, "/"));
61
+ changed.push(path.relative(getCortexPath(), backup).replace(/\\/g, "/"));
57
62
  }
58
63
  }
59
64
  return changed.sort();
@@ -66,16 +71,16 @@ function qualityMarkers(cortexPathLocal) {
66
71
  };
67
72
  }
68
73
  export async function handleGovernMemories(projectArg, silent = false, dryRun = false) {
69
- const policy = getRetentionPolicy(cortexPath);
74
+ const policy = getRetentionPolicy(getCortexPath());
70
75
  const ttlDays = Number.parseInt(process.env.CORTEX_MEMORY_TTL_DAYS || String(policy.ttlDays), 10);
71
76
  const projects = projectArg
72
77
  ? [projectArg]
73
- : getProjectDirs(cortexPath, profile).map((p) => path.basename(p)).filter((p) => p !== "global");
78
+ : getProjectDirs(getCortexPath(), profile).map((p) => path.basename(p)).filter((p) => p !== "global");
74
79
  let staleCount = 0;
75
80
  let conflictCount = 0;
76
81
  let reviewCount = 0;
77
82
  for (const project of projects) {
78
- const learningsPath = path.join(cortexPath, project, "FINDINGS.md");
83
+ const learningsPath = path.join(getCortexPath(), project, "FINDINGS.md");
79
84
  if (!fs.existsSync(learningsPath))
80
85
  continue;
81
86
  const content = fs.readFileSync(learningsPath, "utf8");
@@ -93,18 +98,18 @@ export async function handleGovernMemories(projectArg, silent = false, dryRun =
93
98
  .filter((l) => /(fixed stuff|updated things|misc|temp|wip|quick note)/i.test(l) || l.length < 16);
94
99
  reviewCount += lowValue.length;
95
100
  if (!dryRun) {
96
- appendReviewQueue(cortexPath, project, "Stale", stale);
97
- appendReviewQueue(cortexPath, project, "Conflicts", conflicts);
98
- appendReviewQueue(cortexPath, project, "Review", lowValue);
101
+ appendReviewQueue(getCortexPath(), project, "Stale", stale);
102
+ appendReviewQueue(getCortexPath(), project, "Conflicts", conflicts);
103
+ appendReviewQueue(getCortexPath(), project, "Review", lowValue);
99
104
  }
100
105
  }
101
106
  if (!dryRun) {
102
- appendAuditLog(cortexPath, "govern_memories", `projects=${projects.length} stale=${staleCount} conflicts=${conflictCount} review=${reviewCount}`);
107
+ appendAuditLog(getCortexPath(), "govern_memories", `projects=${projects.length} stale=${staleCount} conflicts=${conflictCount} review=${reviewCount}`);
103
108
  for (const project of projects) {
104
- consolidateProjectFindings(cortexPath, project);
109
+ consolidateProjectFindings(getCortexPath(), project);
105
110
  }
106
111
  }
107
- const lockSummary = dryRun ? "" : enforceCanonicalLocks(cortexPath, projectArg);
112
+ const lockSummary = dryRun ? "" : enforceCanonicalLocks(getCortexPath(), projectArg);
108
113
  if (!silent) {
109
114
  const prefix = dryRun ? "[dry-run] Would govern" : "Governed";
110
115
  console.log(`${prefix} memories: stale=${staleCount}, conflicts=${conflictCount}, review=${reviewCount}`);
@@ -123,19 +128,19 @@ export async function handlePruneMemories(args = []) {
123
128
  const { projectArg, dryRun } = parseProjectDryRunArgs(args, "prune-memories", usage);
124
129
  const projects = targetProjects(projectArg);
125
130
  const beforeBackups = dryRun ? new Map() : captureFindingBackups(projects);
126
- const result = pruneDeadMemories(cortexPath, projectArg, dryRun);
131
+ const result = pruneDeadMemories(getCortexPath(), projectArg, dryRun);
127
132
  if (!result.ok) {
128
133
  console.log(result.error);
129
134
  return;
130
135
  }
131
136
  console.log(result.data);
132
137
  // TTL enforcement: move entries older than ttlDays that haven't been retrieved recently
133
- const policy = getRetentionPolicy(cortexPath);
138
+ const policy = getRetentionPolicy(getCortexPath());
134
139
  const ttlDays = policy.ttlDays;
135
140
  const retrievalGraceDays = Math.floor(ttlDays / 2);
136
141
  const now = Date.now();
137
142
  // Load retrieval log once for all projects
138
- const retrievalLogPath = path.join(cortexPath, ".runtime", "retrieval-log.jsonl");
143
+ const retrievalLogPath = path.join(getCortexPath(), ".runtime", "retrieval-log.jsonl");
139
144
  let retrievalEntries = [];
140
145
  if (fs.existsSync(retrievalLogPath)) {
141
146
  try {
@@ -165,7 +170,7 @@ export async function handlePruneMemories(args = []) {
165
170
  }
166
171
  let ttlExpired = 0;
167
172
  for (const project of projects) {
168
- const learningsPath = path.join(cortexPath, project, "FINDINGS.md");
173
+ const learningsPath = path.join(getCortexPath(), project, "FINDINGS.md");
169
174
  if (!fs.existsSync(learningsPath))
170
175
  continue;
171
176
  const content = fs.readFileSync(learningsPath, "utf8");
@@ -197,7 +202,7 @@ export async function handlePruneMemories(args = []) {
197
202
  ttlExpired++;
198
203
  }
199
204
  if (expiredEntries.length > 0 && !dryRun) {
200
- appendReviewQueue(cortexPath, project, "Stale", expiredEntries);
205
+ appendReviewQueue(getCortexPath(), project, "Stale", expiredEntries);
201
206
  }
202
207
  if (expiredEntries.length > 0 && dryRun) {
203
208
  for (const entry of expiredEntries) {
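The TTL pass in this hunk moves findings older than `ttlDays` to the Stale queue unless they were retrieved within a grace window of half the TTL (`retrievalGraceDays = Math.floor(ttlDays / 2)`). A minimal sketch of that age check, with a hypothetical entry shape standing in for the parsed `FINDINGS.md` lines and `retrieval-log.jsonl` timestamps:

```javascript
// An entry expires when it is older than ttlDays AND has not been
// retrieved within the grace window (half the TTL, floored).
const DAY_MS = 24 * 60 * 60 * 1000;

function isExpired(entry, ttlDays, now = Date.now()) {
  const graceDays = Math.floor(ttlDays / 2);
  const olderThanTtl = now - entry.createdAt > ttlDays * DAY_MS;
  const retrievedRecently =
    entry.lastRetrievedAt !== undefined &&
    now - entry.lastRetrievedAt <= graceDays * DAY_MS;
  return olderThanTtl && !retrievedRecently;
}
```

A recent retrieval thus rescues an otherwise-expired entry, which is why the real code loads the retrieval log once before looping over projects.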
@@ -221,7 +226,7 @@ export async function handleConsolidateMemories(args = []) {
221
226
  const { projectArg, dryRun } = parseProjectDryRunArgs(args, "consolidate-memories", usage);
222
227
  const projects = targetProjects(projectArg);
223
228
  const beforeBackups = dryRun ? new Map() : captureFindingBackups(projects);
224
- const results = projects.map((p) => consolidateProjectFindings(cortexPath, p, dryRun));
229
+ const results = projects.map((p) => consolidateProjectFindings(getCortexPath(), p, dryRun));
225
230
  console.log(results.map((r) => r.ok ? r.data : r.error).join("\n"));
226
231
  if (dryRun)
227
232
  return;
@@ -238,7 +243,7 @@ export async function handleMigrateFindings(args) {
238
243
  }
239
244
  const pinCanonical = args.includes("--pin");
240
245
  const dryRun = args.includes("--dry-run");
241
- const result = migrateLegacyFindings(cortexPath, project, { pinCanonical, dryRun });
246
+ const result = migrateLegacyFindings(getCortexPath(), project, { pinCanonical, dryRun });
242
247
  console.log(result.ok ? result.data : result.error);
243
248
  }
244
249
  function printMaintainMigrationUsage() {
@@ -299,7 +304,7 @@ function parseMaintainMigrationArgs(args) {
299
304
  return { kind: "data", project: positional[0], pinCanonical, dryRun };
300
305
  }
301
306
  function describeGovernanceMigrationPlan() {
302
- const govDir = path.join(cortexPath, ".governance");
307
+ const govDir = path.join(getCortexPath(), ".governance");
303
308
  if (!fs.existsSync(govDir))
304
309
  return [];
305
310
  const files = [
@@ -335,7 +340,7 @@ function runGovernanceMigration(dryRun) {
335
340
  const details = pending.map((entry) => `${entry.file} (${entry.from} -> ${entry.to})`).join(", ");
336
341
  return `[dry-run] Would migrate ${pending.length} governance file(s): ${details}`;
337
342
  }
338
- const migrated = migrateGovernanceFiles(cortexPath);
343
+ const migrated = migrateGovernanceFiles(getCortexPath());
339
344
  if (!migrated.length)
340
345
  return "Governance files are already up to date.";
341
346
  return `Migrated ${migrated.length} governance file(s): ${migrated.join(", ")}`;
@@ -347,7 +352,7 @@ export async function handleMaintainMigrate(args) {
347
352
  lines.push(`Governance migration: ${runGovernanceMigration(parsed.dryRun)}`);
348
353
  }
349
354
  if (parsed.kind === "data" || parsed.kind === "all") {
350
- const result = migrateLegacyFindings(cortexPath, parsed.project, {
355
+ const result = migrateLegacyFindings(getCortexPath(), parsed.project, {
351
356
  pinCanonical: parsed.pinCanonical,
352
357
  dryRun: parsed.dryRun,
353
358
  });
@@ -404,7 +409,7 @@ function findBackups(projects) {
404
409
  const results = [];
405
410
  const now = Date.now();
406
411
  for (const project of projects) {
407
- const dir = path.join(cortexPath, project);
412
+ const dir = path.join(getCortexPath(), project);
408
413
  if (!fs.existsSync(dir))
409
414
  continue;
410
415
  for (const f of fs.readdirSync(dir)) {
@@ -450,35 +455,35 @@ async function handleRestoreBackup(args) {
450
455
  fs.copyFileSync(b.fullPath, target);
451
456
  console.log(`Restored ${b.project}/${b.file.replace(/\.bak$/, "")} from backup`);
452
457
  }
453
- appendAuditLog(cortexPath, "restore_backup", `project=${projectArg} files=${projectBackups.length}`);
458
+ appendAuditLog(getCortexPath(), "restore_backup", `project=${projectArg} files=${projectBackups.length}`);
454
459
  }
455
460
  // ── Background maintenance ───────────────────────────────────────────────────
456
461
  export async function handleBackgroundMaintenance(projectArg) {
457
- const markers = qualityMarkers(cortexPath);
462
+ const markers = qualityMarkers(getCortexPath());
458
463
  const startedAt = new Date().toISOString();
459
464
  try {
460
465
  const governance = await handleGovernMemories(projectArg, true);
461
- const pruneResult = pruneDeadMemories(cortexPath, projectArg);
466
+ const pruneResult = pruneDeadMemories(getCortexPath(), projectArg);
462
467
  const pruneMsg = pruneResult.ok ? pruneResult.data : pruneResult.error;
463
468
  fs.writeFileSync(markers.done, new Date().toISOString() + "\n");
464
- updateRuntimeHealth(cortexPath, {
469
+ updateRuntimeHealth(getCortexPath(), {
465
470
  lastGovernance: {
466
471
  at: startedAt,
467
472
  status: "ok",
468
473
  detail: `projects=${governance.projects} stale=${governance.staleCount} conflicts=${governance.conflictCount} review=${governance.reviewCount}; ${pruneMsg}`,
469
474
  },
470
475
  });
471
- appendAuditLog(cortexPath, "background_maintenance", `status=ok projects=${governance.projects} stale=${governance.staleCount} conflicts=${governance.conflictCount} review=${governance.reviewCount}`);
476
+ appendAuditLog(getCortexPath(), "background_maintenance", `status=ok projects=${governance.projects} stale=${governance.staleCount} conflicts=${governance.conflictCount} review=${governance.reviewCount}`);
472
477
  }
473
478
  catch (err) {
474
- updateRuntimeHealth(cortexPath, {
479
+ updateRuntimeHealth(getCortexPath(), {
475
480
  lastGovernance: {
476
481
  at: startedAt,
477
482
  status: "error",
478
483
  detail: err?.message || String(err),
479
484
  },
480
485
  });
481
- appendAuditLog(cortexPath, "background_maintenance_failed", `error=${err?.message || String(err)}`);
486
+ appendAuditLog(getCortexPath(), "background_maintenance_failed", `error=${err?.message || String(err)}`);
482
487
  }
483
488
  finally {
484
489
  try {