@rubytech/create-maxy 1.0.875 → 1.0.877

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (61)
  1. package/dist/index.js +51 -37
  2. package/package.json +1 -1
  3. package/payload/platform/lib/graph-search/src/__tests__/brochure-threshold.test.ts +136 -0
  4. package/payload/platform/neo4j/edge-annotations.json +11 -3
  5. package/payload/platform/plugins/admin/hooks/archive-ingest-surface-gate.sh +11 -5
  6. package/payload/platform/plugins/admin/mcp/dist/index.js +19 -2
  7. package/payload/platform/plugins/admin/mcp/dist/index.js.map +1 -1
  8. package/payload/platform/plugins/admin/skills/onboarding/SKILL.md +6 -2
  9. package/payload/platform/plugins/cloudflare/scripts/setup-tunnel.sh +88 -9
  10. package/payload/platform/plugins/cloudflare/skills/setup-tunnel/SKILL.md +1 -1
  11. package/payload/platform/plugins/docs/references/admin-session.md +80 -0
  12. package/payload/platform/plugins/docs/references/deployment.md +1 -1
  13. package/payload/platform/plugins/docs/references/platform.md +1 -1
  14. package/payload/platform/plugins/docs/references/plugins-guide.md +3 -0
  15. package/payload/platform/plugins/memory/PLUGIN.md +4 -1
  16. package/payload/platform/plugins/memory/mcp/dist/index.js +127 -0
  17. package/payload/platform/plugins/memory/mcp/dist/index.js.map +1 -1
  18. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-derive-insights.test.d.ts +2 -0
  19. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-derive-insights.test.d.ts.map +1 -0
  20. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-derive-insights.test.js +97 -0
  21. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-derive-insights.test.js.map +1 -0
  22. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-enrich-rejection.test.d.ts +2 -0
  23. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-enrich-rejection.test.d.ts.map +1 -0
  24. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-enrich-rejection.test.js +184 -0
  25. package/payload/platform/plugins/memory/mcp/dist/tools/__tests__/conversation-archive-enrich-rejection.test.js.map +1 -0
  26. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-derive-insights.d.ts +89 -0
  27. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-derive-insights.d.ts.map +1 -0
  28. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-derive-insights.js +542 -0
  29. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-derive-insights.js.map +1 -0
  30. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-enrich-rejection.d.ts +41 -0
  31. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-enrich-rejection.d.ts.map +1 -0
  32. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-enrich-rejection.js +116 -0
  33. package/payload/platform/plugins/memory/mcp/dist/tools/conversation-archive-enrich-rejection.js.map +1 -0
  34. package/payload/platform/plugins/memory/skills/conversation-archive-enrich/SKILL.md +159 -0
  35. package/payload/platform/templates/specialists/agents/database-operator.md +3 -2
  36. package/payload/server/chunk-GOZP57CX.js +1373 -0
  37. package/payload/server/chunk-I4AQMEJA.js +11265 -0
  38. package/payload/server/chunk-LU6TUP3E.js +2169 -0
  39. package/payload/server/chunk-RRVBWC66.js +667 -0
  40. package/payload/server/client-pool-VYDOIFG7.js +34 -0
  41. package/payload/server/cloudflare-task-tracker-M7APAYEF.js +20 -0
  42. package/payload/server/maxy-edge.js +6 -5
  43. package/payload/server/public/assets/{Checkbox-CiL1E1ss.js → Checkbox-m3yLBLrp.js} +1 -1
  44. package/payload/server/public/assets/{admin-DJodbJ_C.js → admin-DEm0CCga.js} +7 -7
  45. package/payload/server/public/assets/data-BkbjVYwP.js +1 -0
  46. package/payload/server/public/assets/graph-Cic-rDfg.js +1 -0
  47. package/payload/server/public/assets/{graph-labels-B3yLO-UT.js → graph-labels-C13OVh5P.js} +1 -1
  48. package/payload/server/public/assets/{jsx-runtime-CstxPbUG.css → jsx-runtime-DJwgVAMg.css} +1 -1
  49. package/payload/server/public/assets/page-BLRjaAoU.js +50 -0
  50. package/payload/server/public/assets/{page-C_FMEZcW.js → page-p-Fj8Guk.js} +1 -1
  51. package/payload/server/public/assets/{public-nLRQYsaG.js → public-4udeVi_T.js} +1 -1
  52. package/payload/server/public/assets/{useVoiceRecorder-PQASU6Eq.js → useVoiceRecorder-JwwBC5pd.js} +1 -1
  53. package/payload/server/public/data.html +5 -5
  54. package/payload/server/public/graph.html +6 -6
  55. package/payload/server/public/index.html +8 -8
  56. package/payload/server/public/public.html +5 -5
  57. package/payload/server/server.js +54 -24
  58. package/payload/server/public/assets/data-CXF4YQ99.js +0 -1
  59. package/payload/server/public/assets/graph-CtSKqC3V.js +0 -1
  60. package/payload/server/public/assets/page-XR25fnQN.js +0 -50
  61. package/payload/server/public/assets/{jsx-runtime-BRxv9PO7.js → jsx-runtime-Bd3TJ8Bg.js} +0 -0
@@ -0,0 +1,41 @@
+ /**
+  * conversation-archive-enrich-rejection MCP tool — Task 980.
+  *
+  * Durable rejection memory for conversation-archive-enrich. Writes to a
+  * sidecar JSONL file alongside the account's state directory; the read side
+  * (filter step) lives in `conversation-archive-derive-insights.ts` so the
+  * path constant has a single source of truth.
+  *
+  * Modes:
+  *   record — append a rejection line. O_APPEND is POSIX-atomic for writes
+  *            ≤ PIPE_BUF; one JSONL line is well under that, so concurrent
+  *            appends never corrupt the file.
+  *   undo   — read all lines, drop matches by (chunkElementId, kind,
+  *            contentHash), then atomic-rename the rewritten file. A `wire`
+  *            of the same key triggers undo so the operator's change of mind
+  *            doesn't leave stale rejections behind.
+  *
+  * Residual race: a `record` append landing between the `undo`'s read and
+  * rename is overwritten. The per-row operator gate is single-threaded
+  * (one chat-turn at a time), so this window is not exploitable in normal
+  * operation. A `flock`-based fix is deferred until rejection volume warrants.
+  */
+ export type RejectionMode = "record" | "undo";
+ export interface RecordRejectionParams {
+     accountId: string;
+     archiveElementId: string;
+     chunkElementId: string;
+     kind: string;
+     contentHash: string;
+     mode: RejectionMode;
+     sessionId?: string;
+ }
+ export interface RecordRejectionResult {
+     ok: true;
+     mode: RejectionMode;
+     beforeCount: number;
+     afterCount: number;
+     path: string;
+ }
+ export declare function conversationArchiveEnrichRejection(params: RecordRejectionParams): RecordRejectionResult;
+ //# sourceMappingURL=conversation-archive-enrich-rejection.d.ts.map
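The `beforeCount`/`afterCount` contract in this declaration can be exercised without touching the filesystem. A minimal sketch, assuming a `rejectionKey` that simply concatenates the three fields (the real implementation lives in `conversation-archive-derive-insights` and is not part of this diff):

```javascript
// Assumed stand-in: any stable composite of the three fields works as the key.
const rejectionKey = (chunkElementId, kind, contentHash) =>
  [chunkElementId, kind, contentHash].join("\u0000");

// record-mode semantics from the contract above: afterCount bumps only when
// the key is new; a duplicate record is a silent no-op.
function record(lines, entry) {
  const target = rejectionKey(entry.chunkElementId, entry.kind, entry.contentHash);
  const beforeCount = lines.filter((line) => {
    const o = JSON.parse(line);
    return rejectionKey(o.chunkElementId, o.kind, o.contentHash) === target;
  }).length;
  if (beforeCount > 0) return { lines, beforeCount, afterCount: beforeCount };
  return { lines: [...lines, JSON.stringify(entry)], beforeCount: 0, afterCount: 1 };
}

const entry = { chunkElementId: "4:ab:7", kind: "mention", contentHash: "c1" };
const first = record([], entry);
const second = record(first.lines, entry);
console.log(first.afterCount, second.afterCount); // → 1 1
```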
@@ -0,0 +1 @@
+ {"version":3,"file":"conversation-archive-enrich-rejection.d.ts","sourceRoot":"","sources":["../../src/tools/conversation-archive-enrich-rejection.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;;;;;;;;;;GAqBG;AASH,MAAM,MAAM,aAAa,GAAG,QAAQ,GAAG,MAAM,CAAC;AAE9C,MAAM,WAAW,qBAAqB;IACpC,SAAS,EAAE,MAAM,CAAC;IAClB,gBAAgB,EAAE,MAAM,CAAC;IACzB,cAAc,EAAE,MAAM,CAAC;IACvB,IAAI,EAAE,MAAM,CAAC;IACb,WAAW,EAAE,MAAM,CAAC;IACpB,IAAI,EAAE,aAAa,CAAC;IACpB,SAAS,CAAC,EAAE,MAAM,CAAC;CACpB;AAED,MAAM,WAAW,qBAAqB;IACpC,EAAE,EAAE,IAAI,CAAC;IACT,IAAI,EAAE,aAAa,CAAC;IACpB,WAAW,EAAE,MAAM,CAAC;IACpB,UAAU,EAAE,MAAM,CAAC;IACnB,IAAI,EAAE,MAAM,CAAC;CACd;AA4CD,wBAAgB,kCAAkC,CAChD,MAAM,EAAE,qBAAqB,GAC5B,qBAAqB,CAkFvB"}
@@ -0,0 +1,116 @@
+ /**
+  * conversation-archive-enrich-rejection MCP tool — Task 980.
+  *
+  * Durable rejection memory for conversation-archive-enrich. Writes to a
+  * sidecar JSONL file alongside the account's state directory; the read side
+  * (filter step) lives in `conversation-archive-derive-insights.ts` so the
+  * path constant has a single source of truth.
+  *
+  * Modes:
+  *   record — append a rejection line. O_APPEND is POSIX-atomic for writes
+  *            ≤ PIPE_BUF; one JSONL line is well under that, so concurrent
+  *            appends never corrupt the file.
+  *   undo   — read all lines, drop matches by (chunkElementId, kind,
+  *            contentHash), then atomic-rename the rewritten file. A `wire`
+  *            of the same key triggers undo so the operator's change of mind
+  *            doesn't leave stale rejections behind.
+  *
+  * Residual race: a `record` append landing between the `undo`'s read and
+  * rename is overwritten. The per-row operator gate is single-threaded
+  * (one chat-turn at a time), so this window is not exploitable in normal
+  * operation. A `flock`-based fix is deferred until rejection volume warrants.
+  */
+ import { appendFileSync, existsSync, mkdirSync, readFileSync, renameSync, writeFileSync } from "node:fs";
+ import { dirname } from "node:path";
+ import { rejectionsJsonlPath, rejectionKey, } from "./conversation-archive-derive-insights.js";
+ function readAllLines(path) {
+     if (!existsSync(path))
+         return [];
+     const raw = readFileSync(path, "utf8");
+     return raw.split("\n").filter((line) => line.trim().length > 0);
+ }
+ function countMatches(lines, chunkElementId, kind, contentHash) {
+     const target = rejectionKey(chunkElementId, kind, contentHash);
+     let n = 0;
+     for (const line of lines) {
+         try {
+             const obj = JSON.parse(line);
+             if (typeof obj.chunkElementId === "string" &&
+                 typeof obj.kind === "string" &&
+                 typeof obj.contentHash === "string" &&
+                 rejectionKey(obj.chunkElementId, obj.kind, obj.contentHash) === target) {
+                 n += 1;
+             }
+         }
+         catch {
+             // Parse failure here is a counting concern, not a correctness one —
+             // the filter side already emits rejection-parse-skip.
+         }
+     }
+     return n;
+ }
+ export function conversationArchiveEnrichRejection(params) {
+     const { accountId, archiveElementId, chunkElementId, kind, contentHash, mode, sessionId, } = params;
+     const path = rejectionsJsonlPath(accountId);
+     mkdirSync(dirname(path), { recursive: true });
+     const before = readAllLines(path);
+     const beforeCount = countMatches(before, chunkElementId, kind, contentHash);
+     if (mode === "record") {
+         if (beforeCount > 0) {
+             // Idempotent: a second `record` for the same key is a silent no-op.
+             process.stderr.write(`[conversation-archive-enrich] rejected chunk=${chunkElementId} kind=${kind} contentHash=${contentHash} stored (already-present)\n`);
+             return { ok: true, mode, beforeCount, afterCount: beforeCount, path };
+         }
+         const line = {
+             archiveElementId,
+             chunkElementId,
+             kind,
+             contentHash,
+             rejectedAt: new Date().toISOString(),
+             rejectedBySession: sessionId ?? null,
+         };
+         appendFileSync(path, JSON.stringify(line) + "\n");
+         process.stderr.write(`[conversation-archive-enrich] rejected chunk=${chunkElementId} kind=${kind} contentHash=${contentHash} stored\n`);
+         return { ok: true, mode, beforeCount, afterCount: beforeCount + 1, path };
+     }
+     // mode === "undo"
+     if (beforeCount === 0) {
+         return { ok: true, mode, beforeCount: 0, afterCount: 0, path };
+     }
+     const target = rejectionKey(chunkElementId, kind, contentHash);
+     const remaining = [];
+     for (const line of before) {
+         try {
+             const obj = JSON.parse(line);
+             if (typeof obj.chunkElementId === "string" &&
+                 typeof obj.kind === "string" &&
+                 typeof obj.contentHash === "string" &&
+                 rejectionKey(obj.chunkElementId, obj.kind, obj.contentHash) === target) {
+                 continue;
+             }
+             remaining.push(line);
+         }
+         catch {
+             // Preserve malformed lines through undo — corruption is the filter
+             // side's problem to surface, not this tool's to silently drop.
+             remaining.push(line);
+         }
+     }
+     const tmp = path + ".tmp";
+     writeFileSync(tmp, remaining.length ? remaining.join("\n") + "\n" : "");
+     renameSync(tmp, path);
+     process.stderr.write(`[conversation-archive-enrich] rejection-undone chunk=${chunkElementId} kind=${kind} contentHash=${contentHash}\n`);
+     // The filter loop above removes EVERY matching line, not just one. Under
+     // the residual record-vs-undo race documented at the top of this file, a
+     // single key can briefly have multiple JSONL lines; afterCount must reflect
+     // the actual remaining count for that key (always 0 after a successful
+     // undo) rather than `beforeCount - 1` (only correct when beforeCount==1).
+     return {
+         ok: true,
+         mode,
+         beforeCount,
+         afterCount: 0,
+         path,
+     };
+ }
+ //# sourceMappingURL=conversation-archive-enrich-rejection.js.map
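The undo path in this file is easiest to check with the filesystem stripped away. A minimal in-memory sketch of the same rewrite semantics (every matching line dropped, malformed lines preserved); the composite `rejectionKey` below is an assumed stand-in, since the real implementation is not in this diff:

```javascript
// Assumed stand-in for the imported rejectionKey.
const rejectionKey = (chunk, kind, hash) => `${chunk}|${kind}|${hash}`;

function undo(lines, chunkElementId, kind, contentHash) {
  const target = rejectionKey(chunkElementId, kind, contentHash);
  const remaining = [];
  for (const line of lines) {
    try {
      const o = JSON.parse(line);
      if (rejectionKey(o.chunkElementId, o.kind, o.contentHash) === target) continue;
      remaining.push(line);
    } catch {
      remaining.push(line); // malformed lines survive undo untouched
    }
  }
  return remaining;
}

const lines = [
  JSON.stringify({ chunkElementId: "c1", kind: "task", contentHash: "h1" }),
  JSON.stringify({ chunkElementId: "c1", kind: "task", contentHash: "h1" }), // race duplicate
  "not json", // corrupt line
  JSON.stringify({ chunkElementId: "c2", kind: "task", contentHash: "h2" }),
];
const after = undo(lines, "c1", "task", "h1");
console.log(after.length); // → 2 (both duplicates gone, corrupt line kept)
```

This is exactly why the shipped code returns `afterCount: 0` rather than `beforeCount - 1`: undo clears every line for the key, duplicates included.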
@@ -0,0 +1 @@
+ {"version":3,"file":"conversation-archive-enrich-rejection.js","sourceRoot":"","sources":["../../src/tools/conversation-archive-enrich-rejection.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;;;;;;;;;;GAqBG;AAEH,OAAO,EAAE,cAAc,EAAE,UAAU,EAAE,SAAS,EAAE,YAAY,EAAE,UAAU,EAAE,aAAa,EAAE,MAAM,SAAS,CAAC;AACzG,OAAO,EAAE,OAAO,EAAE,MAAM,WAAW,CAAC;AACpC,OAAO,EACL,mBAAmB,EACnB,YAAY,GACb,MAAM,2CAA2C,CAAC;AA+BnD,SAAS,YAAY,CAAC,IAAY;IAChC,IAAI,CAAC,UAAU,CAAC,IAAI,CAAC;QAAE,OAAO,EAAE,CAAC;IACjC,MAAM,GAAG,GAAG,YAAY,CAAC,IAAI,EAAE,MAAM,CAAC,CAAC;IACvC,OAAO,GAAG,CAAC,KAAK,CAAC,IAAI,CAAC,CAAC,MAAM,CAAC,CAAC,IAAI,EAAE,EAAE,CAAC,IAAI,CAAC,IAAI,EAAE,CAAC,MAAM,GAAG,CAAC,CAAC,CAAC;AAClE,CAAC;AAED,SAAS,YAAY,CACnB,KAAe,EACf,cAAsB,EACtB,IAAY,EACZ,WAAmB;IAEnB,MAAM,MAAM,GAAG,YAAY,CAAC,cAAc,EAAE,IAAI,EAAE,WAAW,CAAC,CAAC;IAC/D,IAAI,CAAC,GAAG,CAAC,CAAC;IACV,KAAK,MAAM,IAAI,IAAI,KAAK,EAAE,CAAC;QACzB,IAAI,CAAC;YACH,MAAM,GAAG,GAAG,IAAI,CAAC,KAAK,CAAC,IAAI,CAA2B,CAAC;YACvD,IACE,OAAO,GAAG,CAAC,cAAc,KAAK,QAAQ;gBACtC,OAAO,GAAG,CAAC,IAAI,KAAK,QAAQ;gBAC5B,OAAO,GAAG,CAAC,WAAW,KAAK,QAAQ;gBACnC,YAAY,CAAC,GAAG,CAAC,cAAc,EAAE,GAAG,CAAC,IAAI,EAAE,GAAG,CAAC,WAAW,CAAC,KAAK,MAAM,EACtE,CAAC;gBACD,CAAC,IAAI,CAAC,CAAC;YACT,CAAC;QACH,CAAC;QAAC,MAAM,CAAC;YACP,oEAAoE;YACpE,sDAAsD;QACxD,CAAC;IACH,CAAC;IACD,OAAO,CAAC,CAAC;AACX,CAAC;AAED,MAAM,UAAU,kCAAkC,CAChD,MAA6B;IAE7B,MAAM,EACJ,SAAS,EACT,gBAAgB,EAChB,cAAc,EACd,IAAI,EACJ,WAAW,EACX,IAAI,EACJ,SAAS,GACV,GAAG,MAAM,CAAC;IAEX,MAAM,IAAI,GAAG,mBAAmB,CAAC,SAAS,CAAC,CAAC;IAC5C,SAAS,CAAC,OAAO,CAAC,IAAI,CAAC,EAAE,EAAE,SAAS,EAAE,IAAI,EAAE,CAAC,CAAC;IAE9C,MAAM,MAAM,GAAG,YAAY,CAAC,IAAI,CAAC,CAAC;IAClC,MAAM,WAAW,GAAG,YAAY,CAAC,MAAM,EAAE,cAAc,EAAE,IAAI,EAAE,WAAW,CAAC,CAAC;IAE5E,IAAI,IAAI,KAAK,QAAQ,EAAE,CAAC;QACtB,IAAI,WAAW,GAAG,CAAC,EAAE,CAAC;YACpB,oEAAoE;YACpE,OAAO,CAAC,MAAM,CAAC,KAAK,CAClB,gDAAgD,cAAc,SAAS,IAAI,gBAAgB,WAAW,6BAA6B,CACpI,CAAC;YACF,OAAO,EAAE,EAAE,EAAE,IAAI,EAAE,IAAI,EAAE,WAAW,EAAE,UAAU,EAAE,WAAW,EAAE,IAAI,EAAE,CAAC;QACxE,CAAC;QACD,MAAM,IAAI,GAAkB;YAC1B,gBAAgB;YAChB,cAAc;YACd,
IAAI;YACJ,WAAW;YACX,UAAU,EAAE,IAAI,IAAI,EAAE,CAAC,WAAW,EAAE;YACpC,iBAAiB,EAAE,SAAS,IAAI,IAAI;SACrC,CAAC;QACF,cAAc,CAAC,IAAI,EAAE,IAAI,CAAC,SAAS,CAAC,IAAI,CAAC,GAAG,IAAI,CAAC,CAAC;QAClD,OAAO,CAAC,MAAM,CAAC,KAAK,CAClB,gDAAgD,cAAc,SAAS,IAAI,gBAAgB,WAAW,WAAW,CAClH,CAAC;QACF,OAAO,EAAE,EAAE,EAAE,IAAI,EAAE,IAAI,EAAE,WAAW,EAAE,UAAU,EAAE,WAAW,GAAG,CAAC,EAAE,IAAI,EAAE,CAAC;IAC5E,CAAC;IAED,kBAAkB;IAClB,IAAI,WAAW,KAAK,CAAC,EAAE,CAAC;QACtB,OAAO,EAAE,EAAE,EAAE,IAAI,EAAE,IAAI,EAAE,WAAW,EAAE,CAAC,EAAE,UAAU,EAAE,CAAC,EAAE,IAAI,EAAE,CAAC;IACjE,CAAC;IACD,MAAM,MAAM,GAAG,YAAY,CAAC,cAAc,EAAE,IAAI,EAAE,WAAW,CAAC,CAAC;IAC/D,MAAM,SAAS,GAAa,EAAE,CAAC;IAC/B,KAAK,MAAM,IAAI,IAAI,MAAM,EAAE,CAAC;QAC1B,IAAI,CAAC;YACH,MAAM,GAAG,GAAG,IAAI,CAAC,KAAK,CAAC,IAAI,CAA2B,CAAC;YACvD,IACE,OAAO,GAAG,CAAC,cAAc,KAAK,QAAQ;gBACtC,OAAO,GAAG,CAAC,IAAI,KAAK,QAAQ;gBAC5B,OAAO,GAAG,CAAC,WAAW,KAAK,QAAQ;gBACnC,YAAY,CAAC,GAAG,CAAC,cAAc,EAAE,GAAG,CAAC,IAAI,EAAE,GAAG,CAAC,WAAW,CAAC,KAAK,MAAM,EACtE,CAAC;gBACD,SAAS;YACX,CAAC;YACD,SAAS,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC;QACvB,CAAC;QAAC,MAAM,CAAC;YACP,mEAAmE;YACnE,+DAA+D;YAC/D,SAAS,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC;QACvB,CAAC;IACH,CAAC;IACD,MAAM,GAAG,GAAG,IAAI,GAAG,MAAM,CAAC;IAC1B,aAAa,CAAC,GAAG,EAAE,SAAS,CAAC,MAAM,CAAC,CAAC,CAAC,SAAS,CAAC,IAAI,CAAC,IAAI,CAAC,GAAG,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC;IACxE,UAAU,CAAC,GAAG,EAAE,IAAI,CAAC,CAAC;IACtB,OAAO,CAAC,MAAM,CAAC,KAAK,CAClB,wDAAwD,cAAc,SAAS,IAAI,gBAAgB,WAAW,IAAI,CACnH,CAAC;IACF,yEAAyE;IACzE,yEAAyE;IACzE,4EAA4E;IAC5E,uEAAuE;IACvE,0EAA0E;IAC1E,OAAO;QACL,EAAE,EAAE,IAAI;QACR,IAAI;QACJ,WAAW;QACX,UAAU,EAAE,CAAC;QACb,IAAI;KACL,CAAC;AACJ,CAAC"}
@@ -0,0 +1,159 @@
+ ---
+ name: conversation-archive-enrich
+ description: Source-agnostic Phase 2 for any `:ConversationArchive` — walks the `:Section:Conversation` chunks of one named archive and asks the operator to confirm derived claims row-by-row. Fires only when the operator names a specific archive and asks for insights/enrichment; never auto-fires on Phase 1 completion. Per-row gate (wire / skip / reject) over four kinds — `mention`, `task`, `preference`, `observed-relationship`. Re-runs are idempotent on `(elementId(chunk), kind, contentHash)`, so collapsing duplicates is structural. Invokes the read-only MCP tool `mcp__memory__conversation-archive-derive-insights` to obtain proposals; never writes from the tool's return path. Each `wire` decision runs the proposal's returned `mergeCypher` via `mcp__graph__maxy-graph-write_neo4j_cypher`, or, for `task`, mints the `:Task` via `mcp__tasks__task-create` first and then runs the REFERENCES merge.
+ ---
+
+ # Conversation Archive — chunk-anchored insight derivation
+
+ One skill. Walks one `:ConversationArchive`'s chunks in pages. Per chunk, Haiku surfaces high-confidence claims via `mcp__memory__conversation-archive-derive-insights`; per claim, the operator decides `wire / skip / reject`. The tool is read-only; the only writes happen through this skill's per-row dispatch.
+
+ ## When to invoke
+
+ When the operator names a specific archive AND asks for insights, enrichment, or derived claims against it. Concrete triggers:
+
+ - "Derive insights from the Adam Mackay WhatsApp archive."
+ - "Enrich the Joel × Adam conversation archive — surface anything actionable."
+ - "Run Phase 2 on `:ConversationArchive elementId=4:abcd…:42`."
+
+ Never auto-fire after a Phase 1 ingest completes — `conversation-archive` (the ingest skill) explicitly writes ZERO observations. Phase 2 is operator-gated by design.
+
+ ## Resolve the archive
+
+ Operator phrasing usually names participants or the source file, not an `elementId`. Resolve via `mcp__memory__memory-search`:
+
+ ```
+ memory-search { query: "<operator's phrasing>", labels: ["ConversationArchive"], expandHops: 0 }
+ ```
+
+ If multiple `:ConversationArchive` rows match, ask the operator to pick by `title` (DM and group archives have stable titles per the predecessor task). If zero match, surface the empty result and yield — never run Phase 2 against the wrong archive.
+
+ ## Walk the chunks in pages
+
+ Page through the chunks with the read-only MCP tool. Default page size is 5 chunks.
+
+ ```
+ mcp__memory__conversation-archive-derive-insights {
+   archiveElementId: "<elementId>",
+   chunkOffset: 0,
+   chunkLimit: 5,
+ }
+ ```
+
+ The tool returns:
+
+ - A header line with `totalChunks`, `walked=from..to`, `proposalsRemaining`.
+ - Zero or more `Proposal N/M` blocks, each carrying `chunkElementId`, `kind`, `contentHash`, `evidenceSnippet`, `proposedAction`, `mergeCypher`, `mergeParams`, and (for `mention` / `observed-relationship`) `resolve:` hints for the operator-confirmed entity resolution.
+
+ After processing the page's proposals, advance `chunkOffset` by the page size and re-call until `proposalsRemaining=false`. Empty chunks (no high-confidence claims) are silent — the tool already filtered them.
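The walk contract above reduces to a plain loop. Everything below is illustrative: `deriveInsights` is a stub standing in for the MCP tool, and the 12-chunk archive is invented for the example:

```javascript
const PAGE = 5;
const TOTAL_CHUNKS = 12; // invented archive size for the sketch

// Stub with the response shape described above: proposals + proposalsRemaining.
function deriveInsights({ chunkOffset, chunkLimit }) {
  const walked = Math.max(0, Math.min(chunkLimit, TOTAL_CHUNKS - chunkOffset));
  return {
    proposals: Array.from({ length: walked }, (_, i) => ({ chunk: chunkOffset + i })),
    proposalsRemaining: chunkOffset + walked < TOTAL_CHUNKS,
  };
}

const seen = [];
let chunkOffset = 0;
for (;;) {
  const page = deriveInsights({ chunkOffset, chunkLimit: PAGE });
  seen.push(...page.proposals);
  if (!page.proposalsRemaining) break;
  chunkOffset += PAGE; // advance by the page size, per the walk contract
}
console.log(seen.length); // → 12
```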
+
+ ## Per-row gate
+
+ Surface one operator question per proposal. The operator decides `wire`, `skip`, or `reject`. **No batch-confirm.** The doctrine is row-by-row review.
+
+ Message shape:
+
+ ```
+ Chunk K/N — kind=<X> — evidence: "<snippet>"
+ Proposed: <proposedAction>
+ wire / skip / reject ?
+ ```
+
+ Operator answers `wire`, `skip`, or `reject`. Anything else: re-ask the same question.
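The accept/re-ask rule above can be sketched in a few lines. The prompt builder mirrors the message shape; the field names passed to it are illustrative, not the tool's actual return shape:

```javascript
const VERBS = new Set(["wire", "skip", "reject"]);

// Illustrative prompt builder following the message template above.
function gatePrompt(p) {
  return `Chunk ${p.index}/${p.total} — kind=${p.kind} — evidence: "${p.snippet}"\n` +
         `Proposed: ${p.proposedAction}\n` +
         `wire / skip / reject ?`;
}

// null means "re-ask the same question": there is no fourth verb, no batch form.
function gateDecision(answer) {
  const verb = answer.trim().toLowerCase();
  return VERBS.has(verb) ? verb : null;
}

console.log(gateDecision("wire"));     // → wire
console.log(gateDecision("wire all")); // → null
```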
62
+
63
+ ## On `wire`
64
+
65
+ Dispatch per kind. Provenance (`createdByTool`, `createdBySession`, `createdAt`) is baked into the proposal's `mergeCypher`; the skill only provides the resolved entity elementIds and the optional `:Task` elementId.
66
+
67
+ ### `mention`
68
+
69
+ The proposal carries `disambiguation.needsResolution[0]` naming the entity to resolve.
70
+
71
+ 1. Resolve via `mcp__memory__memory-search { query: "<displayName>", labels: ["Person", "Organization"], expandHops: 0 }`.
72
+ 2. If multiple candidates: ask the operator to pick by `elementId`. If zero candidates: ask the operator `mint via contact-create / reject`. Minting goes through `mcp__contacts__contact-create` (operator-confirmed canonical Person/Organization — never auto-mint).
73
+ 3. Run the `mergeCypher` via `mcp__graph__maxy-graph-write_neo4j_cypher` with `mergeParams` plus `targetElementId=<resolved elementId>`.
74
+
75
+ A re-run against the same `(chunkElementId, target, contentHash)` collapses on the MERGE — no duplicate edges.
76
+
77
+ ### `task`
78
+
79
+ 1. Mint the `:Task` via `mcp__tasks__task-create` with `title=mergeParams.taskTitle`, optional due hint from `mergeParams.taskDueHint`, `affects=archiveElementId`, `raisedDuringConversationKey=chunkElementId`.
80
+ 2. Run the proposal's `mergeCypher` via `mcp__graph__maxy-graph-write_neo4j_cypher` with `mergeParams` plus `taskElementId=<minted Task elementId>`.
81
+
82
+ The `:Task` itself is the durable artefact; the `REFERENCES` edge to the chunk records the evidence that produced it.
83
+
84
+ ### `preference`
85
+
86
+ No disambiguation step — the `:Preference` node is account-scoped and MERGE-keyed on `(accountId, contentHash)`. Run the proposal's `mergeCypher` directly via `mcp__graph__maxy-graph-write_neo4j_cypher` with `mergeParams` (no additional substitutions).
87
+
88
+ The `mergeCypher` MERGEs the `:Preference`, sets category/key/value on first create, and creates the `(:Preference)-[:OBSERVED_IN]->(chunk)` edge that ties the preference to its evidence.
89
+
90
+ ### `observed-relationship`
91
+
92
+ Two-entity resolution. The proposal carries `disambiguation.needsResolution` with `subject` and `object`.
93
+
94
+ 1. Resolve each via `mcp__memory__memory-search` as in `mention`. Reject the proposal if either entity does not resolve and the operator declines to mint via `contact-create`.
95
+ 2. Run the `mergeCypher` with `mergeParams` plus `subjectElementId=<resolved>` and `objectElementId=<resolved>`.
96
+
97
+ The merged `:RELATED_TO` edge carries `operatorConfirmed: true` so downstream queries can filter speculative relationships out.
98
+
99
+ ## On `skip` or `reject`
100
+
101
+ `skip` — the operator wants to move on without writing. Note locally and continue. Skips are not persisted; the proposal re-surfaces on the next run.
102
+
103
+ `reject` — the operator deems the claim wrong. Call `mcp__memory__conversation-archive-enrich-rejection` with `mode: "record"` and the proposal's `chunkElementId`, `kind`, `contentHash`, `archiveElementId`. The tool appends a line to the account's `data/accounts/<accountId>/state/conversation-archive-enrich-rejections.jsonl` sidecar; on every subsequent walk the `conversation-archive-derive-insights` tool filters proposals matching any recorded `(chunkElementId, kind, contentHash)` tuple. Duplicate `record` calls for the same key are silent no-ops.
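The walk-time filter that the `reject` path feeds is, in effect, a set lookup. The JSONL field names below follow the record shape shipped in this version; the filter function itself is an assumption, since the real one lives in `conversation-archive-derive-insights` and is not shown here:

```javascript
const key = (o) => `${o.chunkElementId}|${o.kind}|${o.contentHash}`;

// Proposals whose tuple appears in the rejections sidecar are dropped
// before they ever reach the operator.
function filterRejected(proposals, jsonlLines) {
  const rejected = new Set();
  for (const line of jsonlLines) {
    try { rejected.add(key(JSON.parse(line))); } catch { /* parse-skip */ }
  }
  return proposals.filter((p) => !rejected.has(key(p)));
}

const sidecar = [JSON.stringify({ chunkElementId: "c1", kind: "preference", contentHash: "h1" })];
const proposals = [
  { chunkElementId: "c1", kind: "preference", contentHash: "h1" }, // recorded rejection
  { chunkElementId: "c1", kind: "task", contentHash: "h2" },       // survives
];
console.log(filterRejected(proposals, sidecar).length); // → 1
```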
+
+ ## Wire undoes prior rejection
+
+ After a successful `wire` (any kind), call `mcp__memory__conversation-archive-enrich-rejection` with `mode: "undo"` and the same `(chunkElementId, kind, contentHash)` triple. Operators who change their minds and wire a once-rejected claim should not have to manage rejection state by hand — the wire-then-undo pairing keeps the JSONL aligned with the live graph. Calling `undo` for a key that was never rejected is a silent no-op (`beforeCount=0, afterCount=0`).
+
+ ## Observability
+
+ Per the spec, the tool already emits:
+
+ - `[conversation-archive-derive-insights] start accountId=<…> archiveElementId=<…> offset=<n> limit=<n> totalChunks=<N>`
+ - `[conversation-archive-derive-insights] chunk=<i>/<N> proposals=<k> elementId=<…> rejections-filtered=<r>` (the `rejections-filtered` field counts proposals dropped by the rejection sidecar on this page; field absence = filter not wired)
+ - `[conversation-archive-derive-insights] FAIL phase=<haiku|…> reason="..."` (per failing chunk; failures do not abort the walk)
+ - `[conversation-archive-derive-insights] done accountId=<…> archiveElementId=<…> walked=<n> proposals=<n> emptyChunks=<n> ms=<n>`
+
+ The rejection tool emits one line per write/undo (and one per parse-skip / load-fail when the JSONL is malformed):
+
+ - `[conversation-archive-enrich] rejected chunk=<eid> kind=<k> contentHash=<sha> stored` — new rejection appended (or `stored (already-present)` for an idempotent duplicate)
+ - `[conversation-archive-enrich] rejection-undone chunk=<eid> kind=<k> contentHash=<sha>` — rejection removed after a wire
+ - `[conversation-archive-enrich] rejection-parse-skip line=<n>` — corrupt JSONL line dropped during a walk-time load
+ - `[conversation-archive-enrich] rejection-load-fail accountId=<…> reason="…"` — JSONL read errored; walk continues with an empty filter set
+
+ The skill itself emits one operator-facing line per row decision:
+
+ - `[conversation-archive-enrich] chunk=<i>/<N> kind=<X> action=<wired|skipped|rejected> elementId=<chunkElementId>`
+
+ And one end-of-walk summary:
+
+ - `[conversation-archive-enrich] done archiveElementId=<…> wired=<n> skipped=<n> rejected=<n> ms=<n>`
+
+ ## Failure paths
+
+ - **`archive-not-found`** — the named elementId did not resolve, or the archive has zero `:Section:Conversation` chunks. LOUD-FAIL surfaced verbatim; the skill yields back to admin. Never silently substitute a different archive.
+ - **`haiku` per-chunk fail** — the Haiku call failed (rate limit, network, malformed output). The tool continues to the next chunk; the operator sees a `FAIL phase=haiku chunk=i/N` line. Once the page finishes, the skill can be re-invoked with the same `chunkOffset` to retry only the failed chunks.
+ - **Entity resolution zero-or-multi** — handled inline in the per-row flow above. Never auto-resolve.
+
+ ## Verification (post-enrich)
+
+ Run via `mcp__graph__maxy-graph-read_neo4j_cypher`:
+
+ - All wired writes carry `createdByTool='conversation-archive-derive-insights'`. Count per kind:
+
+   ```
+   MATCH (c:Section:Conversation)-[m:MENTIONS]->()
+   WHERE m.createdByTool = 'conversation-archive-derive-insights'
+     AND elementId(c) IN $chunkElementIds
+   RETURN count(m)
+   ```
+
+   (Repeat for `:Task -[:REFERENCES]->`, `:Preference -[:OBSERVED_IN]->`, and `:RELATED_TO`.)
+
+ - Idempotency: re-running this skill against the same archive (with the same operator decisions) MUST NOT increase any of the above counts — every MERGE collapses on `contentHash`.
+
+ ## What this is not
+
+ - **Not** an automatic step. The operator names the archive and asks for Phase 2; the skill runs.
+ - **Not** a batch-confirm UI. Per-row gate is doctrine; do not synthesise a "wire all visible mentions" shortcut.
+ - **Not** a numeric-confidence filter. The Haiku prompt absorbs the confidence floor as a hedging-avoidance instruction — never trust a model-emitted confidence score in code.
+ - **Not** a writer of `:Observation` umbrella nodes. The four kinds carry their own provenance via `createdByTool` + `chunkElementId` + `contentHash`; there is no umbrella label.
@@ -3,7 +3,7 @@ name: database-operator
  description: "Owner of the memory graph — any raw cypher (read or write) against Neo4j routes here, plus all document and archive ingestion (running the universal `document-ingest` skill for unstructured documents — PDF, text, transcript, web page, audio, video — and per-source archive-import skills — LinkedIn Basic Data Export today; CRM-type seed archives as each plugin ships), plus operator-driven graph hygiene (prune orphans, deduplicate entities, add edges, normalise labels), plus one-off raw reads (property-name lookups, edge-shape audits, multi-statement queries) when admin's wrapped read tools — `memory-search`, `memory-rank`, `conversation-search`, `profile-read` — do not expose the property or relationship being asked about. Delegate when the operator uploads any document, drops an archive directory into chat, asks for any graph operation that is not a routine per-turn wrapped write, or asks a factual question whose answer requires a property or relationship admin's wrapped read tools cannot reach."
  summary: "Ingests every unstructured document and external archive into your graph (LinkedIn today; other CRM sources in future) and handles ad-hoc graph tidy-ups on request. For example, when you upload a CV, a pricing guide, or a contract; when you drop a LinkedIn export folder into chat; or when you ask to prune orphan nodes, merge duplicate people, or add edges between entities."
  model: claude-sonnet-4-6
- tools: Read, Bash, Glob, Grep, mcp__graph__maxy-graph-read_neo4j_cypher, mcp__graph__maxy-graph-write_neo4j_cypher, mcp__graph__maxy-graph-get_neo4j_schema, mcp__memory__memory-write, mcp__memory__memory-update, mcp__memory__memory-delete, mcp__memory__memory-search, mcp__memory__memory-rank, mcp__memory__memory-reindex, mcp__memory__memory-find-candidates, mcp__memory__memory-ingest, mcp__memory__memory-ingest-extract, mcp__memory__memory-ingest-web, mcp__memory__memory-classify, mcp__memory__memory-archive-write, mcp__memory__graph-prune-denylist-list, mcp__memory__graph-prune-denylist-add, mcp__memory__graph-prune-denylist-remove, mcp__contacts__contact-create, mcp__contacts__contact-update, mcp__contacts__contact-lookup, mcp__contacts__contact-list, mcp__tasks__task-create, mcp__admin__file-attach, mcp__admin__plugin-read
+ tools: Read, Bash, Glob, Grep, mcp__graph__maxy-graph-read_neo4j_cypher, mcp__graph__maxy-graph-write_neo4j_cypher, mcp__graph__maxy-graph-get_neo4j_schema, mcp__memory__memory-write, mcp__memory__memory-update, mcp__memory__memory-delete, mcp__memory__memory-search, mcp__memory__memory-rank, mcp__memory__memory-reindex, mcp__memory__memory-find-candidates, mcp__memory__memory-ingest, mcp__memory__memory-ingest-extract, mcp__memory__memory-ingest-web, mcp__memory__memory-classify, mcp__memory__memory-archive-write, mcp__memory__conversation-archive-derive-insights, mcp__memory__conversation-archive-enrich-rejection, mcp__memory__graph-prune-denylist-list, mcp__memory__graph-prune-denylist-add, mcp__memory__graph-prune-denylist-remove, mcp__contacts__contact-create, mcp__contacts__contact-update, mcp__contacts__contact-lookup, mcp__contacts__contact-list, mcp__tasks__task-create, mcp__admin__file-attach, mcp__admin__plugin-read
  ---
 
  # Database Operator
@@ -119,7 +119,8 @@ The classifier maps document sections to typed ontology labels. It does not inve
  Per-source archive imports keep their own skill because their CSVs already encode entity types deterministically and need no LLM classifier. Currently shipped:
 
  - **linkedin-import** — LinkedIn Basic Data Export. Ships with references for `Profile.csv` and `Connections.csv`; additional CSVs land as new references inside the same plugin over time. Path: `platform/plugins/linkedin-import/skills/linkedin-import/SKILL.md`. Load via `plugin-read` before any ingestion.
- - **conversation-archive** — Conversation transcripts (any source) ingest via the `conversation-archive` skill. One skill, one bash entry, one writer, with `--source <enum>` selecting the per-source normaliser (`whatsapp`, `telegram`, `signal`, `linkedin-messages`, `zoom-transcript`, `meeting-minutes`, `imessage`, `slack`, `other`). Phase 0 ships only `whatsapp`; other normalisers land per-source. Pipeline: operator confirms owner + every distinct sender → `bash platform/plugins/memory/bin/conversation-archive-ingest.sh <archive> --source <enum> --owner-element-id <id> --participant-person-ids <id1>,<id2>,... --scope <admin|public>`. The script normalises (per source), sessionizes at gap-hours boundaries (default 12h), classifies each session via Haiku (`memory-classify` with `mode='chat'`) into topic-bounded `:Section:Conversation` chunks, and writes them under a parent `:ConversationArchive` MERGEd on `conversationIdentity = sha256(accountId + ":" + sortedParticipantElementIds)`. Re-imports are delta-append; the writer is bound to the operator-confirmed sender set (parser-miss = LOUD-FAIL). SKILL: `platform/plugins/memory/skills/conversation-archive/SKILL.md`. Phase 2 insight derivation against chunks is a separate follow-up task. Distinct from the live `whatsapp` plugin (Baileys QR pairing, in-memory store).
+ - **conversation-archive** — Conversation transcripts (any source) ingest via the `conversation-archive` skill. One skill, one bash entry, one writer, with `--source <enum>` selecting the per-source normaliser (`whatsapp`, `telegram`, `signal`, `linkedin-messages`, `zoom-transcript`, `meeting-minutes`, `imessage`, `slack`, `other`). Phase 0 ships only `whatsapp`; other normalisers land per-source. Pipeline: operator confirms owner + every distinct sender → `bash platform/plugins/memory/bin/conversation-archive-ingest.sh <archive> --source <enum> --owner-element-id <id> --participant-person-ids <id1>,<id2>,... --scope <admin|public>`. The script normalises (per source), sessionizes at gap-hours boundaries (default 12h), classifies each session via Haiku (`memory-classify` with `mode='chat'`) into topic-bounded `:Section:Conversation` chunks, and writes them under a parent `:ConversationArchive` MERGEd on `conversationIdentity = sha256(accountId + ":" + sortedParticipantElementIds)`. Re-imports are delta-append; the writer is bound to the operator-confirmed sender set (parser-miss = LOUD-FAIL). SKILL: `platform/plugins/memory/skills/conversation-archive/SKILL.md`. Distinct from the live `whatsapp` plugin (Baileys QR pairing, in-memory store).
+ - **conversation-archive-enrich** — Phase 2 over a named `:ConversationArchive`. Source-agnostic — runs against any archive produced by the `conversation-archive` skill regardless of source. Operator-triggered only (never auto-fires on Phase 1 completion). Walks `:Section:Conversation` chunks in pages via the read-only MCP tool `mcp__memory__conversation-archive-derive-insights` (which calls Haiku per chunk on OAuth, NOT the API key) and surfaces high-confidence claims for per-row operator gate (`wire / skip / reject`) over four kinds — `mention`, `task`, `preference`, `observed-relationship`. Idempotent on `(elementId(chunk), kind, contentHash)`: re-runs collapse identical claims via MERGE. Load: `platform/plugins/memory/skills/conversation-archive-enrich/SKILL.md`. Trigger phrasing: operator names a specific archive AND asks for insights / enrichment / derived claims — never the ingest skill's completion event.
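The gap-hours sessionization step in the `conversation-archive` pipeline above can be sketched roughly as follows. This is a hypothetical standalone sketch, not the ingest script's actual implementation; the `Msg` shape and the `sessionize` helper name are illustrative, and only the 12h default boundary comes from the skill description:

```typescript
interface Msg {
  ts: number; // epoch millis
  sender: string;
  text: string;
}

// A new session starts whenever the gap to the previous message
// exceeds the boundary (default 12h, per the skill's gap-hours default).
function sessionize(messages: Msg[], gapHours = 12): Msg[][] {
  const gapMs = gapHours * 3_600_000;
  const sessions: Msg[][] = [];
  for (const msg of messages) {
    const current = sessions[sessions.length - 1];
    const last = current?.[current.length - 1];
    if (!last || msg.ts - last.ts > gapMs) sessions.push([msg]);
    else current!.push(msg);
  }
  return sessions;
}
```

Each resulting session is what then gets handed to the Haiku classifier to be cut into topic-bounded `:Section:Conversation` chunks.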
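The parent-archive MERGE key described above (`conversationIdentity = sha256(accountId + ":" + sortedParticipantElementIds)`) can be sketched like this. The helper name and the `","` delimiter used to join the sorted element IDs are assumptions; only the sha256-over-sorted-participants shape comes from the text:

```typescript
import { createHash } from "node:crypto";

// Participant order must not affect the key, so element IDs are
// sorted before concatenation. The "," delimiter is an assumed detail.
function conversationIdentity(
  accountId: string,
  participantElementIds: string[],
): string {
  const sorted = [...participantElementIds].sort().join(",");
  return createHash("sha256").update(`${accountId}:${sorted}`).digest("hex");
}
```

Because a re-import of the same account and participant set recomputes the same key, the writer's delta-append lands under the existing `:ConversationArchive` node instead of creating a duplicate parent.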
 
  Future CRM-type seed plugins (HubSpot, Salesforce, Pipedrive, iCloud contacts, Gmail CSV, etc.) will ship under the same pattern — each as its own opt-in plugin, each with its own `SKILL.md` path under `platform/plugins/<name>/skills/`. When the admin adds a new archive-import skill, its PLUGIN.md will name itself here and in the admin's `<plugin-manifest>`. No prompt change required.
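The enrich skill's idempotency contract — collapse on `(elementId(chunk), kind, contentHash)` — can be illustrated with a small in-memory analogue of the graph-side MERGE. All names here are hypothetical, not the MCP tool's code; only the three-part key comes from the skill description:

```typescript
import { createHash } from "node:crypto";

interface Claim {
  chunkElementId: string;
  kind: "mention" | "task" | "preference" | "observed-relationship";
  content: string;
}

// Re-running derivation over the same chunks yields identical
// (chunkElementId, kind, contentHash) triples, so keying a Map on that
// triple collapses duplicates the way the MERGE does in the graph.
function dedupeClaims(claims: Claim[]): Claim[] {
  const seen = new Map<string, Claim>();
  for (const c of claims) {
    const contentHash = createHash("sha256").update(c.content).digest("hex");
    seen.set(`${c.chunkElementId}|${c.kind}|${contentHash}`, c);
  }
  return [...seen.values()];
}
```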