cfsa-antigravity 3.0.1 → 3.0.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +22 -6
- package/package.json +1 -1
- package/template/.agent/kit-sync.md +3 -3
- package/template/.claude/README.md +2 -0
- package/template/.claude/kit-sync.md +3 -3
- package/template/.factory/kit-sync.md +3 -3
- package/template/.memory/mcp-server/README.md +10 -2
- package/template/.memory/mcp-server/client.mjs +61 -18
- package/template/.memory/mcp-server/daemon.mjs +8 -3
- package/template/.memory/mcp-server/runtime.mjs +51 -14
- package/template/.memory/mcp-server/start.mjs +4 -2
- package/template/.memory/pipeline/compile.mjs +76 -11
- package/template/docs/README.md +2 -0
- package/template/docs/kit-architecture.md +2 -0
package/README.md
CHANGED
@@ -57,7 +57,11 @@ Every install now scaffolds a shared `.memory/` root at the project level. It is
 └── migrate/          # legacy memory import helpers
 ```
 
-The installer ships the shared memory runtime into `.memory/`, including the project-local MCP server under `.memory/mcp-server/`. Tool-specific MCP client config is intentionally **not** managed by the kit — users wire their own `.mcp.json` / editor settings for the tools they actually use. The daemon writes runtime state into `.memory/runtime/`, and
+The installer ships the shared memory runtime into `.memory/`, including the project-local MCP server under `.memory/mcp-server/`. Tool-specific MCP client config is intentionally **not** managed by the kit — users wire their own `.mcp.json` / editor settings for the tools they actually use. The daemon writes workspace-local runtime state into `.memory/runtime/`, and clients resolve that state before proxying requests so one workspace does not silently talk to another workspace's daemon. The semantic index writes retrieval artifacts into `.memory/schema/semantic-index.json` plus `.memory/schema/semantic-manifest.json`.
+
+The practical contract is: point your tool at `.memory/mcp-server/client.mjs`, start the daemon for that workspace, and let the client discover the correct local daemon from `.memory/runtime/cfsa-memory-daemon.json`.
+
+If the daemon health payload reports a different `projectRoot` than the client expects, the client now fails loudly instead of proxying requests to the wrong workspace.
 
 ### MCP client setup and first graph compile
 
@@ -65,6 +69,21 @@ The kit installs the **server/runtime** only. You must configure your tool's MCP
 
 For tools that use a workspace `.mcp.json`, point the tool at the shared server entrypoint:
 
+```json
+{
+  "mcpServers": {
+    "cfsa-memory": {
+      "command": "node",
+      "args": [".memory/mcp-server/client.mjs"]
+    }
+  }
+}
+```
+
+This is the preferred default because the client resolves the correct workspace-local daemon from `.memory/runtime/cfsa-memory-daemon.json`.
+
+If you need a custom host/port, use env overrides:
+
 ```json
 {
   "mcpServers": {
@@ -72,18 +91,15 @@ For tools that use a workspace `.mcp.json`, point the tool at the shared server entrypoint:
       "command": "node",
       "args": [".memory/mcp-server/client.mjs"],
       "env": {
-        "MEMORY_ROOT": ".memory",
         "CFSA_MEMORY_HOST": "127.0.0.1",
-        "CFSA_MEMORY_PORT": "4317"
-        "CFSA_MEMORY_URL": "http://127.0.0.1:4317/mcp",
-        "CFSA_MEMORY_HEALTH_URL": "http://127.0.0.1:4317/health"
+        "CFSA_MEMORY_PORT": "4317"
       }
     }
   }
 }
 ```
 
-For an existing project, after wiring your MCP client, trigger the initial graph/index build by running the memory compile path (`memory_compile` via MCP, or the direct compile script fallback) before opening Obsidian. Verify files like `.memory/schema/spec-graph.json` and `.memory/wiki/hubs/spec-graph.md` exist, then open Obsidian at `.memory/`.
+For an existing project, after wiring your MCP client, start the workspace-local daemon so it writes `.memory/runtime/cfsa-memory-daemon.json`, then trigger the initial graph/index build by running the memory compile path (`memory_compile` via MCP, or the direct compile script fallback) before opening Obsidian. Verify files like `.memory/schema/spec-graph.json` and `.memory/wiki/hubs/spec-graph.md` exist, then open Obsidian at `.memory/`.
 
 This replaces fragmented runtime-local memory as the canonical project memory layer. Runtime-local memory files remain legacy inputs for migration.
package/package.json
CHANGED

package/template/.agent/kit-sync.md
CHANGED
@@ -1,6 +1,6 @@
 # Kit Sync State
 
 upstream: https://github.com/RepairYourTech/cfsa-antigravity
-last_synced_commit:
-last_synced_at: 2026-04-
-kit_version: 3.0.
+last_synced_commit: 719622791838f3cc3da429ab87ff4bc303c81961
+last_synced_at: 2026-04-15T20:06:45Z
+kit_version: 3.0.3
package/template/.claude/README.md
CHANGED
@@ -37,6 +37,8 @@ A Claude install also gets the shared `.memory/` runtime scaffold, including:
 
 Tool-specific MCP client config and Claude hook wiring are user-managed. If you want Claude to talk to the shared memory daemon, add the appropriate MCP client config yourself and then run the initial compile before opening Obsidian at `.memory/`.
 
+Claude should point at `.memory/mcp-server/client.mjs`. That client now resolves the daemon from the current workspace's `.memory/runtime/cfsa-memory-daemon.json` and validates the daemon's `projectRoot` from `/health`, so a Claude session in one repo will not silently proxy into another repo's daemon.
+
 All runtimes should read and write shared project memory through `.memory/` and the MCP bridge.
 
 ```text
package/template/.claude/kit-sync.md
CHANGED
@@ -1,6 +1,6 @@
 # Kit Sync State
 
 upstream: https://github.com/RepairYourTech/cfsa-antigravity
-last_synced_commit:
-last_synced_at: 2026-04-
-kit_version: 3.0.
+last_synced_commit: 719622791838f3cc3da429ab87ff4bc303c81961
+last_synced_at: 2026-04-15T20:06:45Z
+kit_version: 3.0.3
package/template/.factory/kit-sync.md
CHANGED
@@ -1,6 +1,6 @@
 # Kit Sync State
 
 upstream: https://github.com/RepairYourTech/cfsa-antigravity
-last_synced_commit:
-last_synced_at: 2026-04-
-kit_version: 3.0.
+last_synced_commit: 719622791838f3cc3da429ab87ff4bc303c81961
+last_synced_at: 2026-04-15T20:06:45Z
+kit_version: 3.0.3
package/template/.memory/mcp-server/README.md
CHANGED
@@ -12,8 +12,16 @@ The kit installs the MCP **server/runtime** into the workspace. Tool-specific MC
 ## Initial bootstrap
 For an existing project:
 1. wire your tool's MCP client to this server entrypoint
-2.
-3.
+2. start the workspace-local daemon so it writes `.memory/runtime/cfsa-memory-daemon.json`
+3. run the initial memory compile (`memory_compile` or the direct compile fallback)
+4. verify graph/index artifacts exist under `.memory/schema/` and `.memory/wiki/`
+
+## Workspace-local routing contract
+- preferred client config is just `command: node` plus `args: [".memory/mcp-server/client.mjs"]`
+- `daemon.mjs` publishes `projectRoot`, `memoryRoot`, `endpoint`, and `healthUrl` into `.memory/runtime/cfsa-memory-daemon.json`
+- `client.mjs` reads that runtime state for the current workspace before proxying MCP requests
+- `client.mjs` validates `/health` and rejects mismatched `projectRoot` values instead of silently talking to another workspace's daemon
+- host/port env can still be used when you intentionally need custom daemon startup behavior
 
 ## Extension pattern
 Expose new memory capabilities by adding a tool file under `tools/` and wiring it into `index.mjs`.
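The routing-contract precedence described above (workspace runtime state first, then explicit env overrides, then the default host/port) can be sketched as a standalone resolver. This is an illustrative sketch, not kit code: the function name `resolveEndpoint` is hypothetical, while the state-file path and env variable names come from the contract above.

```javascript
import { existsSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical resolver mirroring the documented precedence:
// 1. workspace runtime state, 2. explicit env URL, 3. default host/port.
function resolveEndpoint(projectRoot, env = process.env) {
  const stateFile = join(projectRoot, ".memory", "runtime", "cfsa-memory-daemon.json");
  if (existsSync(stateFile)) {
    // The daemon published its own endpoint; trust the workspace state.
    const state = JSON.parse(readFileSync(stateFile, "utf8"));
    return state.endpoint || `http://${state.host || "127.0.0.1"}:${state.port}/mcp`;
  }
  if (env.CFSA_MEMORY_URL) return env.CFSA_MEMORY_URL;
  return `http://${env.CFSA_MEMORY_HOST || "127.0.0.1"}:${env.CFSA_MEMORY_PORT || "4317"}/mcp`;
}
```

With no state file and no env overrides this falls back to `http://127.0.0.1:4317/mcp`, matching the defaults shown in the README config examples.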
package/template/.memory/mcp-server/client.mjs
CHANGED
@@ -1,15 +1,51 @@
-
-
+import { buildDaemonEndpoints, fetchHealth, readDaemonState, resolveProjectRoot } from "./runtime.mjs";
+
+const projectRoot = resolveProjectRoot();
+const state = readDaemonState(projectRoot);
+const envEndpoint = process.env.CFSA_MEMORY_URL || null;
+const envHealthUrl = process.env.CFSA_MEMORY_HEALTH_URL || null;
+const fallbackEndpoint = `http://${process.env.CFSA_MEMORY_HOST || "127.0.0.1"}:${process.env.CFSA_MEMORY_PORT || "4317"}/mcp`;
+const fallbackHealthUrl = envHealthUrl || fallbackEndpoint.replace(/\/mcp$/, "/health");
+
+function resolveConnectionTarget() {
+  if (state) {
+    return { source: "runtime-state", ...buildDaemonEndpoints(state) };
+  }
+  if (envEndpoint || envHealthUrl) {
+    return {
+      source: "environment",
+      endpoint: envEndpoint || fallbackEndpoint,
+      healthUrl: envHealthUrl || (envEndpoint || fallbackEndpoint).replace(/\/mcp$/, "/health"),
+    };
+  }
+  return {
+    source: "fallback",
+    endpoint: fallbackEndpoint,
+    healthUrl: fallbackHealthUrl,
+  };
+}
+
+const connection = resolveConnectionTarget();
 
 async function ensureReady() {
   for (let attempt = 0; attempt < 20; attempt += 1) {
     try {
-      const
-
-
+      const health = await fetchHealth(connection.healthUrl);
+      const daemonProjectRoot = health?.projectRoot;
+      if (daemonProjectRoot && daemonProjectRoot !== projectRoot) {
+        throw new Error(`Workspace mismatch: client for ${projectRoot} is pointing at daemon for ${daemonProjectRoot}`);
+      }
+      if (!daemonProjectRoot && state?.projectRoot) {
+        throw new Error(`Workspace identity missing from daemon health response at ${connection.healthUrl}`);
+      }
+      return true;
+    } catch (error) {
+      if (error instanceof Error && error.message.startsWith("Workspace ")) {
+        throw error;
+      }
+      if (error instanceof Error && error.message.startsWith("Workspace identity missing")) {
+        throw error;
       }
-    } catch {
-      // retry
     }
     await new Promise((resolve) => setTimeout(resolve, 150));
   }
@@ -25,17 +61,24 @@ process.stdin.on("data", async (chunk) => {
   for (const line of lines) {
     const trimmed = line.trim();
     if (!trimmed) continue;
-
-
-
-
+    let requestId = null;
+    try {
+      const parsed = JSON.parse(trimmed);
+      requestId = parsed?.id ?? null;
+      const ready = await ensureReady();
+      if (!ready) {
+        process.stdout.write(`${JSON.stringify({ jsonrpc: "2.0", id: requestId, error: { code: -32000, message: "cfsa-memory-daemon is not ready" } })}\n`);
+        continue;
+      }
+      const response = await fetch(connection.endpoint, {
+        method: "POST",
+        headers: { "content-type": "application/json" },
+        body: trimmed,
+      });
+      const text = await response.text();
+      process.stdout.write(`${text}\n`);
+    } catch (error) {
+      process.stdout.write(`${JSON.stringify({ jsonrpc: "2.0", id: requestId, error: { code: -32000, message: error instanceof Error ? error.message : String(error) } })}\n`);
     }
-    const response = await fetch(endpoint, {
-      method: "POST",
-      headers: { "content-type": "application/json" },
-      body: trimmed,
-    });
-    const text = await response.text();
-    process.stdout.write(`${text}\n`);
   }
 });
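The workspace-identity check that `ensureReady` performs above can be isolated into a small sketch. The helper name `checkWorkspaceIdentity` is illustrative (the real client inlines this logic), but the branch conditions and error messages mirror the diff.

```javascript
// Given the client's own projectRoot and a daemon /health payload, either
// accept the daemon or fail loudly instead of proxying cross-workspace.
function checkWorkspaceIdentity(projectRoot, health, stateHasProjectRoot) {
  const daemonProjectRoot = health?.projectRoot;
  if (daemonProjectRoot && daemonProjectRoot !== projectRoot) {
    // A daemon for another repo answered on this endpoint.
    throw new Error(`Workspace mismatch: client for ${projectRoot} is pointing at daemon for ${daemonProjectRoot}`);
  }
  if (!daemonProjectRoot && stateHasProjectRoot) {
    // Runtime state promised an identity the daemon did not report.
    throw new Error("Workspace identity missing from daemon health response");
  }
  return true;
}

// Matching roots pass; mismatched roots throw instead of proxying.
checkWorkspaceIdentity("/repo/a", { projectRoot: "/repo/a" }, true);
```

This is why the mismatch errors are rethrown out of the retry loop: a wrong-workspace daemon will never become the right one, so retrying would only hide the misconfiguration.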
package/template/.memory/mcp-server/daemon.mjs
CHANGED
@@ -5,12 +5,14 @@ import { memoryGraphQuery } from "./tools/graph.mjs";
 import { memoryIngest, memoryLogDaily, memoryGetActiveBlockers } from "./tools/ingest.mjs";
 import { memoryLint } from "./tools/lint.mjs";
 import { memoryContext, memoryQuery, memorySemanticStatus } from "./tools/query.mjs";
-import { clearDaemonState, writeDaemonState } from "./runtime.mjs";
+import { clearDaemonState, resolveProjectRoot, workspaceMemoryRoot, writeDaemonState } from "./runtime.mjs";
 
 const host = process.env.CFSA_MEMORY_HOST || "127.0.0.1";
 const preferredPort = Number(process.env.CFSA_MEMORY_PORT || 4317);
-const projectRoot =
+const projectRoot = resolveProjectRoot();
+const memoryRoot = workspaceMemoryRoot(projectRoot);
 let actualPort = preferredPort;
+const startedAt = new Date().toISOString();
 
 const tools = {
   memory_flush: memoryFlush,
@@ -44,8 +46,11 @@ function statePayload(extra = {}) {
     host,
     port: actualPort,
     requestedPort: preferredPort,
+    endpoint: `http://${host}:${actualPort}/mcp`,
     healthUrl: `http://${host}:${actualPort}/health`,
-
+    projectRoot,
+    memoryRoot,
+    startedAt,
     ...extra,
   };
 }
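The shape of the state the daemon now publishes can be sketched as a pure function. The real `statePayload` closes over module-level constants; this sketch takes them as a parameter instead so it is self-contained, and the example values are illustrative, not real daemon output.

```javascript
// Sketch of the 3.0.3 state payload: the same object is written to
// .memory/runtime/cfsa-memory-daemon.json and returned from /health.
function statePayload({ host, actualPort, preferredPort, projectRoot, memoryRoot, startedAt }, extra = {}) {
  return {
    host,
    port: actualPort,               // port actually bound (may differ from the request)
    requestedPort: preferredPort,
    endpoint: `http://${host}:${actualPort}/mcp`,
    healthUrl: `http://${host}:${actualPort}/health`,
    projectRoot,                    // workspace identity the client validates
    memoryRoot,
    startedAt,
    ...extra,
  };
}
```

Because `endpoint`, `projectRoot`, and `memoryRoot` are in both the state file and the health payload, the client can resolve a daemon from disk and then cross-check that the live process really belongs to the same workspace.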
package/template/.memory/mcp-server/runtime.mjs
CHANGED
@@ -1,11 +1,15 @@
 import { existsSync, mkdirSync, readFileSync, rmSync, writeFileSync } from "node:fs";
-import {
+import { join, resolve } from "node:path";
 
-export function
+export function resolveProjectRoot(root = process.env.CFSA_MEMORY_PROJECT_ROOT || process.cwd()) {
+  return resolve(root);
+}
+
+export function getRuntimeDir(root = resolveProjectRoot()) {
   return join(root, ".memory", "runtime");
 }
 
-export function ensureRuntimeDir(root =
+export function ensureRuntimeDir(root = resolveProjectRoot()) {
   const runtimeDir = getRuntimeDir(root);
   if (!existsSync(runtimeDir)) {
     mkdirSync(runtimeDir, { recursive: true });
@@ -13,28 +17,55 @@ export function ensureRuntimeDir(root = process.cwd()) {
   return runtimeDir;
 }
 
-export function pidFile(root =
+export function pidFile(root = resolveProjectRoot()) {
   return join(getRuntimeDir(root), "cfsa-memory-daemon.pid");
 }
 
-export function stateFile(root =
+export function stateFile(root = resolveProjectRoot()) {
   return join(getRuntimeDir(root), "cfsa-memory-daemon.json");
 }
 
+export function workspaceMemoryRoot(root = resolveProjectRoot()) {
+  return join(root, ".memory");
+}
+
+export function buildDaemonEndpoints(state) {
+  const host = state.host || "127.0.0.1";
+  const port = state.port;
+  const healthUrl = state.healthUrl || `http://${host}:${port}/health`;
+  const endpoint = state.endpoint || `http://${host}:${port}/mcp`;
+  return { endpoint, healthUrl };
+}
+
+export function normalizeDaemonState(root, state = {}) {
+  const projectRoot = resolveProjectRoot(root);
+  const memoryRoot = state.memoryRoot ? resolve(state.memoryRoot) : workspaceMemoryRoot(projectRoot);
+  const normalized = {
+    ...state,
+    projectRoot,
+    memoryRoot,
+  };
+  const { endpoint, healthUrl } = buildDaemonEndpoints(normalized);
+  normalized.endpoint = endpoint;
+  normalized.healthUrl = healthUrl;
+  return normalized;
+}
+
 export function writeDaemonState(root, state) {
   ensureRuntimeDir(root);
-
-  writeFileSync(
+  const normalized = normalizeDaemonState(root, state);
+  writeFileSync(stateFile(root), `${JSON.stringify(normalized, null, 2)}\n`, "utf8");
+  writeFileSync(pidFile(root), `${normalized.pid}\n`, "utf8");
 }
 
-export function readDaemonState(root =
+export function readDaemonState(root = resolveProjectRoot()) {
   if (!existsSync(stateFile(root))) {
     return null;
   }
-  return JSON.parse(readFileSync(stateFile(root), "utf8"));
+  return normalizeDaemonState(root, JSON.parse(readFileSync(stateFile(root), "utf8")));
 }
 
-export function clearDaemonState(root =
+export function clearDaemonState(root = resolveProjectRoot()) {
   if (existsSync(pidFile(root))) rmSync(pidFile(root), { force: true });
   if (existsSync(stateFile(root))) rmSync(stateFile(root), { force: true });
 }
@@ -48,13 +79,19 @@ export function processExists(pid) {
   }
 }
 
+export async function fetchHealth(url) {
+  const response = await fetch(url);
+  if (!response.ok) {
+    throw new Error(`Health check failed with status ${response.status}`);
+  }
+  return response.json();
+}
+
 export async function waitForHealth(url, retries = 20, delayMs = 150) {
   for (let attempt = 0; attempt < retries; attempt += 1) {
     try {
-
-
-      return true;
-    }
+      await fetchHealth(url);
+      return true;
     } catch {
       // keep retrying
     }
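The defaulting behavior of `buildDaemonEndpoints` is worth seeing in isolation: explicit `endpoint`/`healthUrl` values in the state file win, and otherwise both URLs are derived from `host` and `port`. The function body below is copied from the diff; only the example inputs are invented.

```javascript
// Copied from the runtime.mjs diff: derive MCP and health URLs from
// published daemon state, preferring explicit values when present.
function buildDaemonEndpoints(state) {
  const host = state.host || "127.0.0.1";
  const port = state.port;
  const healthUrl = state.healthUrl || `http://${host}:${port}/health`;
  const endpoint = state.endpoint || `http://${host}:${port}/mcp`;
  return { endpoint, healthUrl };
}

// Minimal state, as a daemon might publish without explicit URLs:
console.log(buildDaemonEndpoints({ host: "127.0.0.1", port: 4317 }));
// → { endpoint: 'http://127.0.0.1:4317/mcp', healthUrl: 'http://127.0.0.1:4317/health' }
```

Because `normalizeDaemonState` runs on both write and read, even a state file written by an older daemon without `endpoint`/`healthUrl` fields comes back fully populated.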
package/template/.memory/mcp-server/start.mjs
CHANGED
@@ -1,9 +1,9 @@
 import { spawn } from "node:child_process";
 import { existsSync } from "node:fs";
 import { join } from "node:path";
-import { clearDaemonState, processExists, readDaemonState, waitForHealth } from "./runtime.mjs";
+import { clearDaemonState, processExists, readDaemonState, resolveProjectRoot, waitForHealth } from "./runtime.mjs";
 
-const root =
+const root = resolveProjectRoot();
 const daemonPath = join(root, ".memory", "mcp-server", "daemon.mjs");
 if (!existsSync(daemonPath)) {
   console.error(`Missing daemon at ${daemonPath}`);
@@ -25,10 +25,12 @@ if (current?.pid && processExists(current.pid)) {
 }
 
 const child = spawn("node", [daemonPath], {
+  cwd: root,
   detached: true,
   stdio: "ignore",
   env: {
     ...process.env,
+    CFSA_MEMORY_PROJECT_ROOT: root,
     CFSA_MEMORY_HOST: host,
     CFSA_MEMORY_PORT: String(port),
   },
package/template/.memory/pipeline/compile.mjs
CHANGED
@@ -2,6 +2,7 @@ import { join } from "node:path";
 import {
   chunkText,
   ensureMemoryScaffold,
+  getFileText,
   getMemoryRoot,
   listFilesRecursively,
   readJsonl,
@@ -84,8 +85,64 @@ export function compileMemory(options = {}) {
   const knowledge = records.filter((record) => !["pattern", "decision", "blocker"].includes(record.type));
 
   const knowledgeDir = join(memoryRoot, "wiki", "knowledge");
+  const specsRoot = join(memoryRoot, "wiki", "specs");
   const indexEntries = [];
   const chunkEntries = [];
+  const specFiles = listFilesRecursively(specsRoot)
+    .filter((filePath) => filePath.endsWith(".md"))
+    .filter((filePath) => !filePath.endsWith("README.md"))
+    .filter((filePath) => !filePath.endsWith(".gitkeep"));
+
+  const specTypeForPath = (relativePath) => {
+    if (relativePath.includes('/specs/ia/')) return 'ia-spec';
+    if (relativePath.includes('/specs/be/')) return 'be-spec';
+    if (relativePath.includes('/specs/fe/')) return 'fe-spec';
+    if (relativePath.includes('/specs/phases/')) return 'phase-plan';
+    if (relativePath.includes('/specs/ideation/')) return 'ideation';
+    if (relativePath.includes('/specs/audits/')) return 'audit';
+    if (relativePath.includes('/specs/architecture/')) return 'architecture';
+    return 'spec';
+  };
+
+  const specTitleForText = (relativePath, text) => {
+    const heading = text.split("\n").find((line) => line.startsWith("# "));
+    if (heading) {
+      return heading.replace(/^#\s+/, '').trim();
+    }
+    return relativePath.split('/').at(-1)?.replace(/\.md$/, '') ?? relativePath;
+  };
+
+  for (const filePath of specFiles) {
+    const text = getFileText(filePath, '');
+    if (!text.trim()) {
+      continue;
+    }
+    const relativePath = toRelativePath(projectRoot, filePath);
+    const type = specTypeForPath(relativePath);
+    const title = specTitleForText(relativePath, text);
+    indexEntries.push({
+      id: `spec:${relativePath.replace(/^\.memory\/wiki\/specs\//, '').replace(/\.md$/, '')}`,
+      type,
+      title,
+      path: relativePath,
+      source: relativePath,
+      agent: 'spec-vault',
+      timestamp: 'spec-vault',
+      excerpt: text.slice(0, 240),
+    });
+
+    for (const [index, chunk] of chunkText(text).entries()) {
+      chunkEntries.push({
+        id: `spec:${relativePath}:${index + 1}`,
+        parentId: `spec:${relativePath.replace(/^\.memory\/wiki\/specs\//, '').replace(/\.md$/, '')}`,
+        path: relativePath,
+        text: chunk,
+      });
+    }
+  }
+
+  const seenIndexEntryIds = new Set(indexEntries.map((entry) => entry.id));
+  const seenChunkEntryIds = new Set(chunkEntries.map((entry) => entry.id));
 
   const getRecordPath = (record) => {
     if (record.type === "pattern") {
@@ -111,24 +168,32 @@ export function compileMemory(options = {}) {
 
   for (const record of records) {
     const absolutePath = getRecordPath(record);
-
-
-
-
-
-
-
-
-
-
+    if (!seenIndexEntryIds.has(record.id)) {
+      indexEntries.push({
+        id: record.id,
+        type: record.type,
+        title: record.title ?? record.id,
+        path: toRelativePath(projectRoot, absolutePath),
+        source: record.source,
+        agent: record.agent,
+        timestamp: record.timestamp,
+        excerpt: record.text.slice(0, 240),
+      });
+      seenIndexEntryIds.add(record.id);
+    }
 
     for (const [index, chunk] of chunkText(record.text).entries()) {
+      const chunkId = `${record.id}:${index + 1}`;
+      if (seenChunkEntryIds.has(chunkId)) {
+        continue;
+      }
      chunkEntries.push({
-        id:
+        id: chunkId,
        parentId: record.id,
        path: toRelativePath(projectRoot, absolutePath),
        text: chunk,
      });
+      seenChunkEntryIds.add(chunkId);
    }
  }
 
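The spec-type classification added to `compileMemory` is purely path-based, keyed on the directory segment under `.memory/wiki/specs/`. The function body below is copied from the diff; the example spec paths (`be/api.md`, `overview.md`) are invented for illustration.

```javascript
// Copied from the compile.mjs diff: map a spec's relative path to the
// node type used in the semantic index and Obsidian graph.
const specTypeForPath = (relativePath) => {
  if (relativePath.includes('/specs/ia/')) return 'ia-spec';
  if (relativePath.includes('/specs/be/')) return 'be-spec';
  if (relativePath.includes('/specs/fe/')) return 'fe-spec';
  if (relativePath.includes('/specs/phases/')) return 'phase-plan';
  if (relativePath.includes('/specs/ideation/')) return 'ideation';
  if (relativePath.includes('/specs/audits/')) return 'audit';
  if (relativePath.includes('/specs/architecture/')) return 'architecture';
  return 'spec';
};

console.log(specTypeForPath('.memory/wiki/specs/be/api.md'));   // → 'be-spec'
console.log(specTypeForPath('.memory/wiki/specs/overview.md')); // → 'spec'
```

Files directly under `specs/` (or in an unrecognized subdirectory) fall through to the generic `'spec'` type, so adding a new subdirectory degrades gracefully rather than breaking the compile.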
package/template/docs/README.md
CHANGED
@@ -176,6 +176,8 @@ npx cfsa-antigravity init
 
 The kit installs the memory **server/runtime** only. `.mcp.json` and other tool-specific MCP client settings are user-managed.
 
+The MCP routing contract is workspace-local: the daemon writes `.memory/runtime/cfsa-memory-daemon.json`, the client reads that file for the current workspace, then validates the daemon's `projectRoot` from `/health` before proxying requests. This prevents cross-workspace collisions when multiple projects have daemons running.
+
 ### Initial graph compile for existing projects
 
 After installing or updating the `.memory` runtime in an existing project:
package/template/docs/kit-architecture.md
CHANGED
@@ -48,6 +48,8 @@ The kit ships multiple runtime trees in this repository:
 
 The `.memory/` directory is not just an internal store; it is intended to be an Obsidian-friendly vault within the project space so humans and agents can browse the same memory corpus directly. It also mirrors durable `.memory/wiki/specs/` artifacts into graph-friendly vault notes so IA/BE/FE specs, phase plans, and related knowledge become traversable Obsidian graph nodes. Runtime clients should connect to one shared project-local memory daemon rather than each spawning their own isolated server process.
 
+That routing is now workspace-safe: the daemon publishes `projectRoot`, `memoryRoot`, and endpoint metadata into `.memory/runtime/cfsa-memory-daemon.json` and its `/health` payload; the client resolves the local workspace daemon from that runtime state and rejects mismatched workspace identity instead of trusting a shared default port.
+
 ### Antigravity Runtime Components
 
 ```text