prism-mcp-server 6.1.9 → 6.2.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +81 -75
- package/dist/backgroundScheduler.js +218 -1
- package/dist/config.js +32 -11
- package/dist/dashboard/graphRouter.js +324 -0
- package/dist/dashboard/server.js +16 -231
- package/dist/dashboard/ui.js +414 -51
- package/dist/observability/graphMetrics.js +347 -0
- package/dist/sdm/conceptDictionary.js +79 -0
- package/dist/sdm/hdc.js +77 -0
- package/dist/sdm/policyGateway.js +70 -0
- package/dist/sdm/sdmDecoder.js +9 -3
- package/dist/sdm/sdmEngine.js +176 -22
- package/dist/sdm/stateMachine.js +86 -0
- package/dist/server.js +8 -2
- package/dist/storage/index.js +3 -1
- package/dist/storage/sqlite.js +148 -10
- package/dist/storage/supabase.js +404 -74
- package/dist/storage/supabaseMigrations.js +324 -1
- package/dist/test-cli.js +18 -0
- package/dist/tools/compactionHandler.js +1 -1
- package/dist/tools/graphHandlers.js +227 -3
- package/dist/tools/index.js +2 -2
- package/dist/tools/ledgerHandlers.js +1 -1
- package/dist/tools/sessionMemoryDefinitions.js +50 -0
- package/dist/utils/autoLinker.js +3 -3
- package/dist/utils/cognitiveMemory.js +11 -5
- package/package.json +1 -1
package/README.md
CHANGED

@@ -259,7 +259,21 @@ To sync memory across machines or teams:
 }
 ```
 
-
+#### Schema Migrations
+
+Prism auto-applies its schema on first connect — no manual step required. If you need to apply or re-apply migrations manually (e.g. for a fresh project or after a version bump), run the SQL files in `supabase/migrations/` in numbered order via the **Supabase SQL Editor** or the CLI:
+
+```bash
+# Via CLI (requires supabase CLI + project linked)
+supabase db push
+
+# Or apply a single migration via the Supabase dashboard SQL Editor
+# Paste the contents of supabase/migrations/0NN_*.sql and click Run
+```
+
+> **Key migrations:**
+> - `020_*` — Core schema (ledger, handoff, FTS, TTL, CRDT)
+> - `033_memory_links.sql` — Associative Memory Graph (MemoryLinks) — required for `session_backfill_links`
 
 > **Anon key vs. service role key:** The anon key works for personal use (Supabase RLS policies apply). Use the service role key for team deployments where multiple users share the same Supabase project — it bypasses RLS and allows Prism to manage all rows regardless of auth context. Never expose the service role key client-side.
 
@@ -381,51 +395,30 @@ Soft/hard delete (Art. 17), full export in JSON, Markdown, or Obsidian vault `.z
 
 ## 🆕 What's New
 
-### v6.
-> **Current stable release (v6.1
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-- 🔄 **Rollback on Save Failure** — `saveSetting()` now returns `Promise<boolean>`; UI toggles (Hivemind, Auto-Capture) roll back their optimistic state if the server request fails.
-- 🚫 **Cache-Busting** — `loadSettings()` appends `?t=<timestamp>` to bypass stale browser/service-worker caches.
-- 🔔 **HTTP Error Detection** — Explicit 4xx/5xx catching in `saveSetting()` surfaces failed saves as user-visible toast notifications.
-
-#### v6.1.6 — Type Guard Audit (Round 1)
-- 🛡️ **11 Type Guards Hardened** — Audited and refactored all MCP tool argument guards to include explicit `typeof` validation for optional fields, preventing LLM-hallucinated payloads from causing runtime type coercion errors.
-
-#### v6.1.5 — SQLite Deep Storage TTL
-- 🧪 **Comprehensive Edge-Case Test Suite** — 425 tests across 20 files covering CRDT merges, TurboQuant mathematical invariants, prototype pollution guards, and SQLite retention TTL boundary conditions.
-- 🔒 **Prototype Pollution Guards** — CRDT merge pipeline hardened against `__proto__` / `constructor` injection via `Object.create(null)` scratchpads.
-- 🗜️ **`maintenance_vacuum` Tool** — New tool to reclaim SQLite disk space after large purge operations.
-
-#### v6.1.4 — Production Hardening
-- 🔒 **Embedding Binary Strip** — Both `embedding` (raw float32) and `embedding_compressed` (TurboQuant binary blob) are now stripped from all export formats, preventing ~400 bytes of raw binary per entry from appearing in vault/JSON exports.
-- 🔗 **Vault Wikilink Fix** — Keyword backlink paths now use vault-relative `Ledger/filename.md` instead of `../Ledger/filename.md` — ensuring correct internal link resolution in Obsidian and Logseq.
-- 🖼️ **Visual Memory Key Fix** — Export correctly reads `filename` and `timestamp` (the keys written by `session_save_image`), resolving a mismatch that produced `"Unknown"` values in the vault visual memory index.
-- 🛡️ **OOM Guard on Large Exports** — `getLedgerEntries` in the export handler now has a 10,000-entry ceiling with explicit `ORDER BY created_at ASC`, preventing unbounded heap allocation on high-volume projects.
-- ⚡ **O(1) Filename Dedup** — Vault filename collision resolution upgraded from O(n²) loop to O(1) `Map<string, number>` counter. Important for projects with many same-day sessions.
-- 🔧 **TurboQuant Guard** — `bits` parameter now validated to `[2, 6]` range at construction time, preventing accidental multi-second Lloyd-Max initialization at higher bit depths.
-
-
+### v6.2 — The "Synthesize & Prune" Phase ✅
+> **Current stable release (v6.2.1).** The Mind Palace becomes self-organizing.
+
+- 🕸️ **Edge Synthesis ("The Dream Procedure")** — Automated background linker discovers semantically similar but disconnected memory nodes via cosine similarity (≥ 0.7 threshold). Batch-limited to 50 sources × 3 neighbors. New `session_synthesize_edges` tool for on-demand graph enrichment.
+- ✂️ **Graph Pruning (Soft-Prune)** — Configurable strength-based pruning soft-deletes weak links. Includes per-project cooldown, backpressure guards, and sweep budget controls. Enable with `PRISM_GRAPH_PRUNING_ENABLED=true`.
+- 📊 **SLO Observability** — New `graphMetrics.ts` module tracks synthesis success rate, net new links, prune ratio, and sweep duration. Exposes `slo` and `warnings` fields at `GET /api/graph/metrics` for proactive health monitoring.
+- 🗓️ **Temporal Decay Heatmaps** — UI overlay toggle where un-accessed nodes desaturate while Graduated nodes stay vibrant. Makes the Ebbinghaus curve visceral.
+- 📝 **Active Recall ("Test Me")** — Node editor panel generates synthetic quizzes from semantic neighbors for knowledge activation.
+- ⚡ **Supabase Weak-Link RPC (WS4.1)** — New `prism_summarize_weak_links` Postgres function (migration 036) aggregates pruning server-side, eliminating N+1 network roundtrips.
+- 🔒 **Migration 035** — Tenant-safe graph writes + soft-delete hardening for MemoryLinks.
+
+<details>
+<summary><strong>v6.1 — Prism-Port, Cognitive Load & Semantic Search</strong></summary>
+
+- 📦 **Prism-Port Vault Export** — `.zip` of interlinked Markdown files with YAML frontmatter, `[[Wikilinks]]`, and `Keywords/` backlink indices for Obsidian/Logseq.
+- 🧠 **Smart Memory Merge UI** — Merge duplicate knowledge nodes from the Graph Editor.
+- ✨ **Semantic Search Highlighting** — RegEx-powered match engine wraps exact keyword matches in `<mark>` tags.
+- 📊 **Deep Purge Visualization** — "Memory Density" analytic for signal-to-noise ratio.
+- 🛡️ **Context-Boosted Search** — Biases semantic queries by current project workspace.
+- 🌐 **Tavily Web Scholar** — `@tavily/core` as alternative to Brave+Firecrawl.
+- 🛡️ **Type Guard Hardening** — Full audit of all 11+ MCP tool argument guards.
+- 🔄 **Dashboard Toggle Persistence** — Optimistic rollback on save failure.
+
+</details>
 
 <details>
 <summary><strong>Earlier releases (v5.x and below)</strong></summary>
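Edge Synthesis, per the entry above, links memories whose embeddings clear a cosine-similarity threshold of 0.7. A minimal sketch of that gate, with hypothetical helper names (the shipped logic lives in `dist/tools/graphHandlers.js` and is not reproduced here):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    if (normA === 0 || normB === 0) return 0; // degenerate embeddings never link
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A candidate pair qualifies for a synthesized edge only at or above the threshold.
function shouldLink(embeddingA, embeddingB, threshold = 0.7) {
    return cosineSimilarity(embeddingA, embeddingB) >= threshold;
}
```

The 50-source × 3-neighbor batch limits in the changelog bound how many such comparisons one background pass may perform.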
@@ -452,29 +445,43 @@ Soft/hard delete (Art. 17), full export in JSON, Markdown, or Obsidian vault `.z
 
 ---
 
-##
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+## ⚔️ How Prism Compares
+
+While standard Memory MCPs act as passive filing cabinets, Prism is an **active cognitive architecture** that manages its own health, compresses its own data, and learns autonomously in the background.
+
+| Feature | 🧠 **Prism MCP** | Official Anthropic Memory | Cloud APIs (Mem0 / Zep) | Simple SQLite/File MCPs |
+|:---|:---:|:---:|:---:|:---:|
+| **Paradigm** | **Active & Autonomous** | Passive Entity Graph | Passive Vector Store | Passive Log |
+| **Context Assembly** | **Progressive (Quick/Std/Deep)** | Manual JSON retrieval | Similarity Search only | Dump all / exact match |
+| **Graph Generation** | **Auto-Synthesized (Edges)** | Manual (LLM must write JSON) | Often none / black box | None |
+| **Context Window Mgmt** | **Auto-Compaction & Decay** | Endless unbounded growth | Requires paid API logic | Manual deletion required |
+| **Storage Engine** | **Local SQLite OR Supabase** | Local File | Cloud Only (Vendor lock-in) | Local SQLite |
+| **Vector Search** | **Three-Tier (Native / TQ / FTS5)** | ❌ None | ✅ Yes (Remote) | ❌ None |
+| **Vector Compression** | **TurboQuant (10× smaller)** | ❌ N/A | ❌ Expensive/Opaque | ❌ N/A |
+| **Multi-Agent Sync** | **CRDTs + Hivemind Watchdog** | ❌ Single-agent only | ✅ Paid feature | ❌ Data collisions |
+| **Observability** | **OTel Traces + Web Dashboard** | ❌ None | ✅ Web Dashboard | ❌ None |
+| **Data Portability** | **Prism-Port (Obsidian Vault ZIP)** | ❌ Raw JSON | ❌ API Export | ❌ Raw DB file |
+| **Cost Model** | **Free + BYOM (Ollama)** | Free (limited) | Per-API-call pricing | Free (limited) |
+
+### 🏆 Why Prism Wins: The "Big Three" Differentiators
+
+**1. Zero Cold-Starts with Progressive Loading & OCC**
+Other systems require the LLM to waste tokens and reasoning steps asking "What was I doing?" and calling tools to fetch memory. Prism uses MCP Resources to instantly inject the live project state into the context window *before* the LLM generates its first token. CRDT-backed Optimistic Concurrency Control ensures multiple agents (e.g., Claude + Cursor) can work on the same project simultaneously without data collisions.
+
+**2. Self-Cleaning & Self-Optimizing**
+If you use a standard memory tool long enough, it clogs the LLM's context window with thousands of obsolete tokens. Prism runs an autonomous Background Scheduler that:
+- **Ebbinghaus Decays** older, unreferenced memories — importance fades unless reinforced.
+- **Auto-Compacts** large session histories into dense, LLM-generated summaries.
+- **Deep Purges** high-precision vectors, replacing them with 400-byte TurboQuant compressed blobs, saving ~90% of disk space.
+
+> 💰 **Token Economics:** Progressive Context Loading (Quick ~50 tokens / Standard ~200 / Deep ~1000+) plus auto-compaction means you never blow your Claude/OpenAI token budget fetching 50 pages of raw chat history.
+
+**3. The Associative Memory Graph**
+Prism doesn't just store logs; it connects them. When a session is saved, Prism automatically creates temporal chains (what happened next) and keyword overlap edges. In the background, Edge Synthesis actively scans for latent relationships and *synthesizes* new graph edges between conceptually similar but disconnected memories — turning passive storage into an active, self-organizing knowledge graph.
+
+> 🔌 **BYOM (Bring Your Own Model):** While tools like Mem0 charge per API call, Prism's pluggable architecture lets you run `nomic-embed-text` locally via Ollama for **free vectors**, while using Claude or GPT for high-level reasoning. Zero vendor lock-in.
+>
+> 🏛️ **Prism-Port for PKM:** Prism turns your AI's brain into a readable [Obsidian](https://obsidian.md) / [Logseq](https://logseq.com) vault. Export with `session_export_memory(format='vault')` — complete with YAML frontmatter, `[[Wikilinks]]`, and keyword backlink indices. No more black-box AI memory.
 
 ---
 
@@ -680,7 +687,7 @@ Prism is evolving from smart session logging toward a **cognitive memory archite
 | **v6.2+** | Full Superposed Memory (SDM) — O(1) key-value retrieval via Hamming correlation | Kanerva's SDM | 🔬 In Progress |
 | **v6.1** | Prism-Port Vault Export — Obsidian/Logseq `.zip` with YAML frontmatter & `[[Wikilinks]]` | Data sovereignty, PKM interop | ✅ Shipped |
 | **v6.1** | Cognitive Load & Semantic Search — dynamic graph thinning, search highlights | Contextual working memory | ✅ Shipped |
-| **v6.2** | Synthesize & Prune — automated edge synthesis
+| **v6.2** | Synthesize & Prune — automated edge synthesis, graph pruning, SLO observability | Implicit associative memory | ✅ Shipped |
 | **v7.x** | Affect-Tagged Memory — sentiment shapes what gets recalled | Affect-modulated retrieval (neuroscience) | 🔭 Horizon |
 | **v8+** | Zero-Search Retrieval — no index, no ANN, just ask the vector | Holographic Reduced Representations | 🔭 Horizon |

@@ -692,12 +699,11 @@ Prism is evolving from smart session logging toward a **cognitive memory archite
 
 > **[Full ROADMAP.md →](ROADMAP.md)**
 
-### v6.2: The "Synthesize & Prune" Phase
-
+### v6.2: The "Synthesize & Prune" Phase ✅
+Shipped in v6.2.0. Edge synthesis, graph pruning with SLO observability, temporal decay heatmaps, active recall prompt generation, and full dashboard metrics integration.
 
-
-
-3. 📝 **Active Recall Prompt Generation (Knowledge Activation):** A "Test Me" utility in the `nodeEditorPanel`. Using a node's semantic neighbors, the dashboard generates synthetic quizzes to ensure context retention, pushing the product away from pure "storage" into genuine "active learning" capabilities.
+### v7.x: Affect-Tagged Memory
+Sentiment and emotional valence shape what gets recalled — bringing affect-modulated retrieval from neuroscience into agentic memory.
 
 ---
 
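The roadmap's SDM row above promises O(1) key-value retrieval via Hamming correlation over binary hypervectors. A toy illustration of the underlying comparison (function names are hypothetical; the actual implementation lives in `dist/sdm/` and is not shown in this diff):

```javascript
// Hamming distance: number of positions where two equal-length bit arrays differ.
function hammingDistance(a, b) {
    let d = 0;
    for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) d++;
    return d;
}

// Retrieve the stored key whose hypervector is nearest (in Hamming distance)
// to a possibly noisy query vector.
function nearestKey(query, entries) {
    let best = null, bestDist = Infinity;
    for (const [key, vec] of entries) {
        const d = hammingDistance(query, vec);
        if (d < bestDist) { bestDist = d; best = key; }
    }
    return best;
}
```

The sketch scans linearly; the SDM claim is that superposing entries into one vector lets the correlation step avoid per-entry scans.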
package/dist/backgroundScheduler.js
CHANGED

@@ -17,7 +17,7 @@
  * storage backends during maintenance windows.
  */
 import { getStorage } from "./storage/index.js";
-import { PRISM_USER_ID, PRISM_SCHOLAR_ENABLED, PRISM_SCHOLAR_INTERVAL_MS } from "./config.js";
+import { PRISM_USER_ID, PRISM_SCHOLAR_ENABLED, PRISM_SCHOLAR_INTERVAL_MS, PRISM_GRAPH_PRUNING_ENABLED, PRISM_GRAPH_PRUNE_MIN_STRENGTH, PRISM_GRAPH_PRUNE_PROJECT_COOLDOWN_MS, PRISM_GRAPH_PRUNE_SWEEP_BUDGET_MS, PRISM_GRAPH_PRUNE_MAX_PROJECTS_PER_SWEEP, } from "./config.js";
 import { debugLog } from "./utils/logger.js";
 import { runWebScholar } from "./scholar/webScholar.js";
 import { getAllActiveSdmProjects, getSdmEngine } from "./sdm/sdmEngine.js";
@@ -86,10 +86,20 @@ export const DEFAULT_SCHEDULER_CONFIG = {
     enableCompaction: true,
     enableDeepPurge: true,
     enableSdmFlush: true,
+    enableEdgeSynthesis: true,
     purgeOlderThanDays: 30,
     compactionThreshold: 50,
     compactionKeepRecent: 10,
     decayDays: 30,
+    edgeSynthesisCooldownMs: 10 * 60_000,
+    edgeSynthesisBudgetMs: 60_000,
+    edgeSynthesisMaxRetries: 1,
+    edgeSynthesisBackoffMs: 30 * 60_000,
+    enableGraphPruning: PRISM_GRAPH_PRUNING_ENABLED,
+    graphPruneMinStrength: PRISM_GRAPH_PRUNE_MIN_STRENGTH,
+    graphPruneProjectCooldownMs: PRISM_GRAPH_PRUNE_PROJECT_COOLDOWN_MS,
+    graphPruneSweepBudgetMs: PRISM_GRAPH_PRUNE_SWEEP_BUDGET_MS,
+    graphPruneMaxProjectsPerSweep: PRISM_GRAPH_PRUNE_MAX_PROJECTS_PER_SWEEP,
 };
 // ─── Scheduler State ─────────────────────────────────────────
 let schedulerInterval = null;
@@ -97,6 +107,14 @@ let schedulerInterval = null;
 let lastSweepResult = null;
 /** When the scheduler was started */
 let schedulerStartedAt = null;
+/** Backpressure: tracks projects currently undergoing edge synthesis */
+const runningSynthesis = new Set();
+/** Per-project last successful synthesis timestamp (ms since epoch) */
+const lastSynthesisAt = new Map();
+/** Per-project synthesis backoff-until timestamp (ms since epoch) */
+const synthesisBackoffUntil = new Map();
+/** Per-project last prune summary timestamp (ms since epoch) */
+const lastPruneAt = new Map();
 // ─── Public API ──────────────────────────────────────────────
 /**
  * Start the background scheduler.
@@ -127,6 +145,8 @@ export function startScheduler(config) {
         cfg.enableCompaction && "Compaction",
         cfg.enableDeepPurge && "DeepPurge",
         cfg.enableSdmFlush && "SdmFlush",
+        cfg.enableEdgeSynthesis && "EdgeSynthesis",
+        cfg.enableGraphPruning && "GraphPruning",
     ].filter(Boolean).join(", ");
     console.error(`[Scheduler] ⏰ Started (interval=${formatDuration(cfg.intervalMs)}, tasks=[${enabledTasks}])`);
     return () => {
@@ -205,6 +225,30 @@ export async function runSchedulerSweep(cfg = DEFAULT_SCHEDULER_CONFIG) {
             deepPurge: { ran: false, purged: 0, reclaimedBytes: 0 },
             sdmFlush: { ran: false, projectsFlushed: 0 },
             linkDecay: { ran: false, linksDecayed: 0 },
+            edgeSynthesis: {
+                ran: false,
+                projectsAttempted: 0,
+                projectsSynthesized: 0,
+                projectsFailed: 0,
+                retries: 0,
+                skippedBackpressure: 0,
+                skippedCooldown: 0,
+                skippedBudget: 0,
+                skippedBackoff: 0,
+                newLinks: 0,
+            },
+            graphPruning: {
+                ran: false,
+                projectsConsidered: 0,
+                projectsPruned: 0,
+                linksScanned: 0,
+                linksSoftPruned: 0,
+                skippedBackpressure: 0,
+                skippedCooldown: 0,
+                skippedBudget: 0,
+                durationMs: 0,
+                minStrength: cfg.graphPruneMinStrength,
+            },
         },
     };
     debugLog("[Scheduler] 🔄 Sweep starting...");
@@ -406,6 +450,168 @@ export async function runSchedulerSweep(cfg = DEFAULT_SCHEDULER_CONFIG) {
             result.tasks.linkDecay.error = err instanceof Error ? err.message : String(err);
             debugLog(`[Scheduler] Link decay error (non-fatal): ${err instanceof Error ? err.message : String(err)}`);
         }
+        // ── Task 7: Edge Synthesis ──────────────────────────────────
+        if (cfg.enableEdgeSynthesis) {
+            const synthTaskStart = Date.now();
+            try {
+                result.tasks.edgeSynthesis.ran = true;
+                const projects = await storage.listProjects();
+                // Dynamic import to avoid circular dependencies
+                const { synthesizeEdgesCore } = await import("./tools/graphHandlers.js");
+                for (const project of projects) {
+                    const now = Date.now();
+                    if (runningSynthesis.has(project)) {
+                        debugLog(`[Scheduler] Skipping edge synthesis for "${project}" — already running`);
+                        result.tasks.edgeSynthesis.skippedBackpressure++;
+                        continue;
+                    }
+                    const backoffUntil = synthesisBackoffUntil.get(project);
+                    if (typeof backoffUntil === "number" && now < backoffUntil) {
+                        result.tasks.edgeSynthesis.skippedBackoff++;
+                        continue;
+                    }
+                    const lastRun = lastSynthesisAt.get(project);
+                    if (typeof lastRun === "number" && now - lastRun < cfg.edgeSynthesisCooldownMs) {
+                        result.tasks.edgeSynthesis.skippedCooldown++;
+                        continue;
+                    }
+                    if (now - synthTaskStart >= cfg.edgeSynthesisBudgetMs) {
+                        result.tasks.edgeSynthesis.skippedBudget++;
+                        continue;
+                    }
+                    result.tasks.edgeSynthesis.projectsAttempted++;
+                    try {
+                        runningSynthesis.add(project);
+                        for (let attempt = 0; attempt <= cfg.edgeSynthesisMaxRetries; attempt++) {
+                            try {
+                                if (attempt > 0) {
+                                    result.tasks.edgeSynthesis.retries++;
+                                    debugLog(`[Scheduler] Retrying edge synthesis for "${project}" (attempt ${attempt + 1})`);
+                                }
+                                else {
+                                    debugLog(`[Scheduler] Synthesizing edges for "${project}"...`);
+                                }
+                                const synthRes = await synthesizeEdgesCore({
+                                    project,
+                                    similarity_threshold: 0.7,
+                                    max_entries: 50,
+                                    max_neighbors_per_entry: 3,
+                                    randomize_selection: true,
+                                });
+                                if (synthRes && synthRes.success) {
+                                    result.tasks.edgeSynthesis.projectsSynthesized++;
+                                    result.tasks.edgeSynthesis.newLinks += synthRes.newLinks;
+                                    lastSynthesisAt.set(project, Date.now());
+                                    synthesisBackoffUntil.delete(project);
+                                    if (synthRes.newLinks > 0) {
+                                        debugLog(`[Scheduler] Edge Synthesis: created ${synthRes.newLinks} links for "${project}"`);
+                                    }
+                                    break;
+                                }
+                                throw new Error("Synthesis returned unsuccessful result");
+                            }
+                            catch (err) {
+                                const isLast = attempt >= cfg.edgeSynthesisMaxRetries;
+                                if (isLast) {
+                                    result.tasks.edgeSynthesis.projectsFailed++;
+                                    synthesisBackoffUntil.set(project, Date.now() + cfg.edgeSynthesisBackoffMs);
+                                    debugLog(`[Scheduler] Edge Synthesis failed for "${project}": ${err instanceof Error ? err.message : String(err)}`);
+                                }
+                            }
+                        }
+                    }
+                    finally {
+                        runningSynthesis.delete(project);
+                    }
+                }
+            }
+            catch (err) {
+                result.tasks.edgeSynthesis.error = err instanceof Error ? err.message : String(err);
+                console.error(`[Scheduler] Edge Synthesis error: ${err instanceof Error ? err.message : String(err)}`);
+            }
+            // Emit scheduler-level synthesis telemetry
+            try {
+                const { recordSchedulerSynthesis } = await import("./observability/graphMetrics.js");
+                recordSchedulerSynthesis({
+                    projects_processed: result.tasks.edgeSynthesis.projectsAttempted,
+                    projects_succeeded: result.tasks.edgeSynthesis.projectsSynthesized,
+                    projects_failed: result.tasks.edgeSynthesis.projectsFailed,
+                    retries: result.tasks.edgeSynthesis.retries,
+                    links_created: result.tasks.edgeSynthesis.newLinks,
+                    duration_ms: Date.now() - synthTaskStart,
+                    skipped_backpressure: result.tasks.edgeSynthesis.skippedBackpressure,
+                    skipped_cooldown: result.tasks.edgeSynthesis.skippedCooldown,
+                    skipped_budget: result.tasks.edgeSynthesis.skippedBudget,
+                    skipped_backoff: result.tasks.edgeSynthesis.skippedBackoff,
+                });
+            }
+            catch {
+                // Non-critical — don't let metrics failure break the scheduler
+            }
+        }
+        // ── Task 8: Graph Soft-Prune Summary (WS3) ─────────────────
+        if (cfg.enableGraphPruning) {
+            const pruneTaskStart = Date.now();
+            try {
+                result.tasks.graphPruning.ran = true;
+                const projects = await storage.listProjects();
+                const maxProjects = Math.max(1, cfg.graphPruneMaxProjectsPerSweep);
+                for (const project of projects) {
+                    const now = Date.now();
+                    if (result.tasks.graphPruning.projectsConsidered >= maxProjects) {
+                        break;
+                    }
+                    if (runningSynthesis.has(project)) {
+                        result.tasks.graphPruning.skippedBackpressure++;
+                        continue;
+                    }
+                    const lastPrune = lastPruneAt.get(project);
+                    if (typeof lastPrune === "number" && now - lastPrune < cfg.graphPruneProjectCooldownMs) {
+                        result.tasks.graphPruning.skippedCooldown++;
+                        continue;
+                    }
+                    if (now - pruneTaskStart >= cfg.graphPruneSweepBudgetMs) {
+                        result.tasks.graphPruning.skippedBudget++;
+                        continue;
+                    }
+                    result.tasks.graphPruning.projectsConsidered++;
+                    try {
+                        const summary = await storage.summarizeWeakLinks(project, PRISM_USER_ID, cfg.graphPruneMinStrength, 25, 25);
+                        result.tasks.graphPruning.linksScanned += summary.links_scanned;
+                        result.tasks.graphPruning.linksSoftPruned += summary.links_soft_pruned;
+                        if (summary.links_soft_pruned > 0) {
+                            result.tasks.graphPruning.projectsPruned++;
+                        }
+                        lastPruneAt.set(project, Date.now());
+                    }
+                    catch (err) {
+                        debugLog(`[Scheduler] Graph pruning summary failed for "${project}": ${err instanceof Error ? err.message : String(err)}`);
+                    }
+                }
+            }
+            catch (err) {
+                result.tasks.graphPruning.error = err instanceof Error ? err.message : String(err);
+                console.error(`[Scheduler] Graph pruning summary error: ${err instanceof Error ? err.message : String(err)}`);
+            }
+            result.tasks.graphPruning.durationMs = Date.now() - pruneTaskStart;
+            try {
+                const { recordPruningRun } = await import("./observability/graphMetrics.js");
+                recordPruningRun({
+                    projects_considered: result.tasks.graphPruning.projectsConsidered,
+                    projects_pruned: result.tasks.graphPruning.projectsPruned,
+                    links_scanned: result.tasks.graphPruning.linksScanned,
+                    links_soft_pruned: result.tasks.graphPruning.linksSoftPruned,
+                    min_strength: cfg.graphPruneMinStrength,
+                    duration_ms: result.tasks.graphPruning.durationMs,
+                    skipped_backpressure: result.tasks.graphPruning.skippedBackpressure,
+                    skipped_cooldown: result.tasks.graphPruning.skippedCooldown,
+                    skipped_budget: result.tasks.graphPruning.skippedBudget,
+                });
+            }
+            catch {
+                // Non-critical — don't let metrics failure break the scheduler
+            }
+        }
     }
     finally {
         clearInterval(heartbeatInterval);
@@ -420,6 +626,14 @@ export async function runSchedulerSweep(cfg = DEFAULT_SCHEDULER_CONFIG) {
     result.completedAt = new Date().toISOString();
     result.durationMs = Date.now() - sweepStart;
     lastSweepResult = result;
+    // WS4: Record total sweep duration into graph metrics for SLO exposure
+    try {
+        const { recordSweepDuration } = await import("./observability/graphMetrics.js");
+        recordSweepDuration(result.durationMs);
+    }
+    catch {
+        // Non-critical — don't let metrics recording break the scheduler
+    }
     // Build summary line
     const parts = [];
     if (result.tasks.ttlSweep.ran && result.tasks.ttlSweep.totalExpired > 0) {
@@ -440,6 +654,9 @@ export async function runSchedulerSweep(cfg = DEFAULT_SCHEDULER_CONFIG) {
     if (result.tasks.linkDecay.ran && result.tasks.linkDecay.linksDecayed > 0) {
         parts.push(`LinkDecay:${result.tasks.linkDecay.linksDecayed} links`);
     }
+    if (result.tasks.edgeSynthesis.ran && result.tasks.edgeSynthesis.projectsSynthesized > 0) {
+        parts.push(`Synthesis:${result.tasks.edgeSynthesis.newLinks} links in ${result.tasks.edgeSynthesis.projectsSynthesized} projects`);
+    }
     const summaryLine = parts.length > 0
         ? parts.join(" | ")
         : "no maintenance actions needed";
package/dist/config.js
CHANGED

@@ -60,27 +60,39 @@ export const BRAVE_ANSWERS_API_KEY = process.env.BRAVE_ANSWERS_API_KEY;
 if (!BRAVE_ANSWERS_API_KEY) {
     console.error("Warning: BRAVE_ANSWERS_API_KEY environment variable is missing. Brave Answers tool will be unavailable.");
 }
+// ─── v2.0: Storage Backend Selection ─────────────────────────
+// REVIEWER NOTE: Step 1 of v2.0 introduces a storage abstraction.
+// Currently only "supabase" is implemented. "local" (SQLite) is
+// coming in Step 2. Default is "supabase" for backward compat.
+//
+// Set PRISM_STORAGE=local to use SQLite (once implemented).
+// Set PRISM_STORAGE=supabase to use Supabase REST API (default).
+export const PRISM_STORAGE = process.env.PRISM_STORAGE || "supabase";
+// Logged at debug level — see debug() at bottom of file
 // ─── Optional: Supabase (Session Memory Module) ───────────────
 // When both SUPABASE_URL and SUPABASE_KEY are set, session memory tools
 // are registered. These tools allow AI agents to persist and recover
 // context between sessions.
 export const SUPABASE_URL = process.env.SUPABASE_URL;
 export const SUPABASE_KEY = process.env.SUPABASE_KEY;
-
+/**
+ * SESSION_MEMORY_ENABLED — Master toggle for session persistence tools.
+ *
+ * Hardcoded to `true` since v3.0. This flag was originally used to gate
+ * session memory tools when Supabase credentials were optional. Now that
+ * session memory is a core feature (both SQLite and Supabase backends),
+ * it is always enabled.
+ *
+ * The flag is kept (rather than removed) because several modules import
+ * it for conditional registration of MCP tools. Removing it would require
+ * a broader refactor with no functional benefit.
+ */
+export const SESSION_MEMORY_ENABLED = true;
 // Note: debug() is defined at the bottom of this file; these lines
 // execute at import time after the full module is loaded by Node.
 if (!SESSION_MEMORY_ENABLED) {
-    console.error("Info: Session memory disabled (set
+    console.error("Info: Session memory disabled (set PRISM_STORAGE=local or configure Supabase)");
 }
-// ─── v2.0: Storage Backend Selection ─────────────────────────
-// REVIEWER NOTE: Step 1 of v2.0 introduces a storage abstraction.
-// Currently only "supabase" is implemented. "local" (SQLite) is
-// coming in Step 2. Default is "supabase" for backward compat.
-//
-// Set PRISM_STORAGE=local to use SQLite (once implemented).
-// Set PRISM_STORAGE=supabase to use Supabase REST API (default).
-export const PRISM_STORAGE = process.env.PRISM_STORAGE || "supabase";
-// Logged at debug level — see debug() at bottom of file
 // ─── Optional: Multi-Tenant User ID ──────────────────────────
 // REVIEWER NOTE: When multiple users share the same Supabase instance,
 // PRISM_USER_ID isolates their data. Each user sets a unique ID in their
@@ -163,3 +175,12 @@ export const PRISM_SCHOLAR_TOPICS = (process.env.PRISM_SCHOLAR_TOPICS || "ai,age
 // Controls the age threshold for link strength decay.
 // Links not traversed in the last N days lose 0.1 strength per sweep.
 export const PRISM_LINK_DECAY_DAYS = parseInt(process.env.PRISM_LINK_DECAY_DAYS || "30", 10);
+// ─── v6.2: Graph Soft-Pruning ───────────────────────────────
+// Soft-pruning filters weak links from graph/retrieval reads while preserving
+// underlying rows for provenance. This does NOT delete links.
+export const PRISM_GRAPH_PRUNING_ENABLED = process.env.PRISM_GRAPH_PRUNING_ENABLED === "true";
+export const PRISM_GRAPH_PRUNE_MIN_STRENGTH = parseFloat(process.env.PRISM_GRAPH_PRUNE_MIN_STRENGTH || "0.15");
+// Scheduler-driven prune sweep controls (WS3)
+export const PRISM_GRAPH_PRUNE_PROJECT_COOLDOWN_MS = parseInt(process.env.PRISM_GRAPH_PRUNE_PROJECT_COOLDOWN_MS || "600000", 10);
+export const PRISM_GRAPH_PRUNE_SWEEP_BUDGET_MS = parseInt(process.env.PRISM_GRAPH_PRUNE_SWEEP_BUDGET_MS || "30000", 10);
+export const PRISM_GRAPH_PRUNE_MAX_PROJECTS_PER_SWEEP = parseInt(process.env.PRISM_GRAPH_PRUNE_MAX_PROJECTS_PER_SWEEP || "25", 10);