prism-mcp-server 9.12.0 → 9.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -12,7 +12,7 @@
 
  **Your AI agent forgets everything between sessions. Prism fixes that — then teaches it to think.**
 
- Prism v9.12 is a true **Cognitive Architecture** inspired by human brain mechanics. Beyond flat vector search, your agent now forms principles from experience, follows causal trains of thought, and possesses the self-awareness to know when it lacks information. **Your agents don't just remember; they learn.**
+ Prism v9.13 is a true **Cognitive Architecture** inspired by human brain mechanics. Beyond flat vector search, your agent now forms principles from experience, follows causal trains of thought, and possesses the self-awareness to know when it lacks information. **Your agents don't just remember; they learn.** With v9.13, semantic search works **100% offline** — no API keys required.
 
  ```bash
  npx -y prism-mcp-server
@@ -124,16 +124,16 @@ Then open `http://localhost:3001` instead.
  | Time travel & versioning | ✅ | ✅ |
  | Mind Palace Dashboard | ✅ | ✅ |
  | GDPR export (JSON/Markdown/Vault) | ✅ | ✅ |
- | Semantic vector search | ❌ | ✅ `GOOGLE_API_KEY` |
- | Morning Briefings | ❌ | ✅ `GOOGLE_API_KEY` |
- | Auto-compaction | ❌ | ✅ `GOOGLE_API_KEY` |
+ | Semantic vector search | ✅ (`embedding_provider=local`) | ✅ (gemini, openai, or voyage) |
+ | Morning Briefings | ❌ | ✅ Text provider key |
+ | Auto-compaction | ❌ | ✅ Text provider key |
  | Web Scholar research | ❌ | ✅ [`BRAVE_API_KEY`](#environment-variables) + [`FIRECRAWL_API_KEY`](#environment-variables) (or `TAVILY_API_KEY`) |
  | VLM image captioning | ❌ | ✅ Provider key |
- | Autonomous Pipelines (Dark Factory) | ❌ | ✅ `GOOGLE_API_KEY` (or LLM override) |
+ | Autonomous Pipelines (Dark Factory) | ❌ | ✅ Text provider key |
 
- > 🔑 The core Mind Palace works **100% offline** with zero API keys. Cloud keys unlock intelligence features. See [Environment Variables](#environment-variables).
+ > 🔑 The core Mind Palace works **100% offline** with zero API keys — including semantic vector search with `embedding_provider=local`. Cloud keys unlock text generation features (Briefings, compaction, pipelines). See [Environment Variables](#environment-variables).
 
- > 💰 **API Cost Note:** `GOOGLE_API_KEY` (Gemini) has a generous free tier that covers most individual use. `BRAVE_API_KEY` offers 2,000 free searches/month. `FIRECRAWL_API_KEY` has a free plan with 500 credits. For typical solo development, expect **$0/month** on the free tiers. Only high-volume teams or heavy autonomous pipeline usage will incur meaningful costs.
+ > 💰 **API Cost Note:** With `embedding_provider=local`, semantic search is fully free and offline. Cloud providers (`GOOGLE_API_KEY` for Gemini, `VOYAGE_API_KEY`, `OPENAI_API_KEY`) have generous free tiers. `BRAVE_API_KEY` offers 2,000 free searches/month. `FIRECRAWL_API_KEY` has a free plan with 500 credits. For typical solo development, expect **$0/month** on the free tiers.
 
  ---
 
@@ -377,8 +377,7 @@ Then add to your MCP config:
     "command": "node",
     "args": ["/path/to/prism-mcp/dist/server.js"],
     "env": {
-      "BRAVE_API_KEY": "your-key",
-      "GOOGLE_API_KEY": "your-gemini-key"
+      "BRAVE_API_KEY": "your-key"
     }
   }
 }
@@ -432,7 +431,7 @@ Prism can be deployed natively to cloud platforms like [Render](https://render.c
  > `npx` resolves the correct binary automatically, always fetches the latest version, and works identically on macOS, Linux, and Windows. Already installed globally? Run `npm uninstall -g prism-mcp-server` first.
 
  > **❓ Seeing warnings about missing API keys on startup?**
- > That's expected and not an error. `BRAVE_API_KEY` / `GOOGLE_API_KEY` warnings are informational only — core session memory works with zero keys. See [Environment Variables](#environment-variables) for what each key unlocks.
+ > That's expected and not an error. API key warnings are informational only — core session memory and semantic search (with `embedding_provider=local`) work with zero keys. See [Environment Variables](#environment-variables) for what each key unlocks.
 
  > 💡 **Do agents auto-load Prism?** Agents using Cursor, Windsurf, or other MCP clients will see the `session_load_context` tool automatically, but may not call it unprompted. Add this to your project's `.cursorrules` (or equivalent system prompt) to guarantee auto-load:
  > ```
@@ -567,12 +566,12 @@ When you trigger a Dark Factory pipeline, Prism doesn't just run your task — i
  Most AI agents have an infinite memory budget. They dump massive, repetitive logs into vector databases until they bankrupt your API budget and choke their own context windows. Prism v9.0 fixes this by introducing **Token-Economic Reinforcement Learning** and **Affect-Tagged Memory**.
 
  ### 💰 Memory-as-an-Economy (The Surprisal Gate)
- Prism assigns every project a strict **Cognitive Budget** (e.g., 2,000 tokens) that persists across sessions. Every time the agent saves a memory, it costs tokens.
+ Prism assigns every project a strict **Cognitive Budget** (e.g., 2,000 tokens) that persists across sessions. Every time the agent saves a memory, it costs tokens.
 
  But not all memories are priced equally. Prism intercepts the save and runs a **Vector-Based Surprisal** calculation against recent memories:
  * **High Surprisal (Novel thought):** Costs 0.5× tokens. The agent is rewarded for new insights.
  * **Low Surprisal (Boilerplate):** Costs 2.0× tokens. The agent is penalized for repeating itself.
- * **Universal Basic Income (UBI):** The budget recovers passively over time (+100 tokens/hour).
+ * **Universal Basic Income (UBI):** The budget recovers passively over time (+100 tokens/hour).
 
  If an agent is too verbose, it goes into **Cognitive Debt**. You don't need to prompt the agent to "be concise." The physics of the system force the LLM to learn data compression to avoid bankruptcy.
 
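The economy described in the hunk above can be sketched in a few lines. This is an editor's illustration of the multipliers and UBI rate quoted in the README — the function names and the 0.6 surprisal threshold are assumptions, not Prism's actual API.

```javascript
// Sketch of the Surprisal Gate pricing (illustrative only). The 0.5×/2.0×
// multipliers and +100 tokens/hour come from the README; everything else
// (names, the 0.6 threshold, the 2,000-token cap default) is assumed.
const HIGH_SURPRISAL_MULT = 0.5; // novel thought: discounted
const LOW_SURPRISAL_MULT = 2.0;  // boilerplate: penalized
const UBI_TOKENS_PER_HOUR = 100; // passive budget recovery

function memoryCost(baseTokens, surprisal, threshold = 0.6) {
  // High-surprisal (novel) saves are cheap; low-surprisal saves cost double.
  const mult = surprisal >= threshold ? HIGH_SURPRISAL_MULT : LOW_SURPRISAL_MULT;
  return Math.ceil(baseTokens * mult);
}

function recoverBudget(budget, hoursElapsed, cap = 2000) {
  // Universal Basic Income: the budget drifts back toward the cap over time.
  return Math.min(cap, budget + hoursElapsed * UBI_TOKENS_PER_HOUR);
}
```

Under this pricing, a 100-token novel memory costs 50 tokens while a 100-token boilerplate memory costs 200, which is the asymmetry that pushes the agent toward compression.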
@@ -625,7 +624,7 @@ Standard RAG (Retrieval-Augmented Generation) is now a commodity. Everyone has v
 
        ┌─────────────┼─────────────┐
        ▼             ▼             ▼
-  [Memory: API    [Memory:       [Memory:
+  [Memory: API    [Memory:       [Memory:
   timeout error]   DB pool        rate limiter
                    exhaustion]    misconfigured]
        │                              │
@@ -680,9 +679,9 @@ rm -rf ~/.prism-mcp
  Prism will recreate the directory with empty databases on next startup.
 
  **What leaves your machine?**
- - **Local mode (default):** Nothing. Zero network calls. All data is on-disk SQLite.
- - **With `GOOGLE_API_KEY`:** Text snippets are sent to Gemini for embedding generation, summaries, and Morning Briefings. No session data is stored on Google's servers beyond the API call.
- - **With `VOYAGE_API_KEY` / `OPENAI_API_KEY`:** Text snippets are sent to providers if selected as your embedding endpoints.
+ - **Local mode (default):** Nothing. Zero network calls. All data is on-disk SQLite. With `embedding_provider=local`, even semantic search stays fully offline.
+ - **With `GOOGLE_API_KEY`:** Text snippets are sent to Gemini for text generation (summaries, Morning Briefings) and optionally embeddings. No session data is stored on Google's servers beyond the API call.
+ - **With `VOYAGE_API_KEY` / `OPENAI_API_KEY`:** Text snippets are sent to providers if selected as your embedding or text endpoints.
  - **With `BRAVE_API_KEY` / `FIRECRAWL_API_KEY`:** Web Scholar queries are sent to Brave/Firecrawl for search and scraping.
  - **With Supabase:** Session data syncs to your own Supabase instance (you control the Postgres database).
 
@@ -1072,13 +1071,17 @@ Requires `PRISM_DARK_FACTORY_ENABLED=true`.
 
  ## Environment Variables
 
- > **🚦 TL;DR — Just want the best experience fast?** Set these three keys and you're done:
+ > **🚦 TL;DR — Just want the best experience fast?** Two options:
  > ```
- > GOOGLE_API_KEY=...    # Unlocks: semantic search, Morning Briefings, auto-compaction
+ > # Option A: Fully offline (no API keys needed)
+ > # Set embedding_provider=local in the Mind Palace dashboard — semantic search works out of the box.
+ >
+ > # Option B: Cloud-powered (best quality)
+ > GOOGLE_API_KEY=...    # Unlocks: Gemini embeddings, Morning Briefings, auto-compaction
  > BRAVE_API_KEY=...     # Unlocks: Web Scholar research + Brave Answers
  > FIRECRAWL_API_KEY=... # Unlocks: Web Scholar deep scraping (or use TAVILY_API_KEY instead)
  > ```
- > **Zero keys = zero problem.** Core session memory, keyword search, time travel, and the full dashboard work 100% offline. Cloud keys are optional power-ups.
+ > **Zero keys = zero problem.** Core session memory, keyword search, semantic search (local embeddings), time travel, and the full dashboard work 100% offline. Cloud keys are optional power-ups.
 
  <details>
  <summary><strong>Full variable reference</strong></summary>
@@ -1091,7 +1094,7 @@ Requires `PRISM_DARK_FACTORY_ENABLED=true`.
  | `PRISM_STORAGE` | No | `"local"` (default) or `"supabase"` — restart required |
  | `PRISM_ENABLE_HIVEMIND` | No | `"true"` to enable multi-agent tools — restart required |
  | `PRISM_INSTANCE` | No | Instance name for multi-server PID isolation |
- | `GOOGLE_API_KEY` | No | Gemini — enables semantic search, Briefings, compaction |
+ | `GOOGLE_API_KEY` | No | Gemini — enables Briefings, compaction, and cloud embeddings (not needed with `embedding_provider=local`) |
  | `VOYAGE_API_KEY` | No | Voyage AI — optional premium embedding provider |
  | `OPENAI_API_KEY` | No | OpenAI — optional proxy model and embedding provider |
  | `BRAVE_ANSWERS_API_KEY` | No | Separate Brave Answers key |
@@ -1277,7 +1280,7 @@ Prism MCP is open-source and free for individual developers. For teams and enter
  * **What's included:** Active Directory / custom JWKS auth integration, Air-gapped on-premise deployment, custom OTel Grafana dashboards for cognitive observability, and custom skills/tools development.
  * **Model:** Custom enterprise quote.
 
- **Interested in accelerating your team's autonomous workflows?**
+ **Interested in accelerating your team's autonomous workflows?**
  [📧 Contact us for a consultation](mailto:inquiries@prism-mcp.com) — let's build your organization's cognitive memory engine.
 
  ---
@@ -1332,11 +1335,11 @@ A: Run `npm run build && npm test`, then open the Mind Palace dashboard (`localh
 
  ### 💡 Known Limitations & Quirks
 
- - **LLM-dependent features require an API key.** Semantic search, Morning Briefings, auto-compaction, and VLM captioning need a `GOOGLE_API_KEY` (your Gemini API key) or equivalent provider key. Without one, Prism falls back to keyword-only search (FTS5).
+ - **Text generation features require an API key.** Morning Briefings, auto-compaction, and VLM captioning need a cloud provider key (`GOOGLE_API_KEY`, `OPENAI_API_KEY`, or `ANTHROPIC_API_KEY`). Semantic search works offline with `embedding_provider=local` (no key needed). Without any embedding provider, Prism falls back to keyword-only search (FTS5).
  - **Auto-load is model- and client-dependent.** Session auto-loading relies on both the LLM following system prompt instructions *and* the MCP client completing tool registration before the model's first turn. Prism provides platform-specific [Setup Guides](#-setup-guides) and a server-side fallback (v5.2.1) that auto-pushes context after 10 seconds.
  - **MCP client race conditions.** Some MCP clients may not finish tool enumeration before the model generates its first response, causing transient `unknown_tool` errors. This is a client-side timing issue — Prism's server completes the MCP handshake in ~60ms. Workaround: the server-side auto-push fallback and the startup skill's retry logic.
  - **No real-time sync without Supabase.** Local SQLite mode is single-machine only. Multi-device or team sync requires a Supabase backend.
- - **Embedding quality varies by provider.** Gemini `text-embedding-004` and OpenAI `text-embedding-3-small` produce high-quality 768-dim vectors. Prism passes `dimensions: 768` via the Matryoshka API for OpenAI models (native output is 1536-dim; this truncation is lossless and outperforms ada-002 at full 1536 dims). Ollama embeddings (e.g., `nomic-embed-text`) are usable but may reduce retrieval accuracy.
+ - **Embedding quality varies by provider.** Gemini `text-embedding-004` and OpenAI `text-embedding-3-small` produce high-quality 768-dim vectors. Prism passes `dimensions: 768` via the Matryoshka API for OpenAI models (native output is 1536-dim; this truncation is lossless and outperforms ada-002 at full 1536 dims). Local embeddings (`nomic-embed-text-v1.5` via `@huggingface/transformers`) provide good quality with zero API cost. Ollama embeddings are usable but may reduce retrieval accuracy.
  - **Dashboard is HTTP-only.** The Mind Palace dashboard at `localhost:3000` does not support HTTPS. For remote access, use a reverse proxy (nginx/Caddy) or SSH tunnel. Basic auth is available via `PRISM_DASHBOARD_USER` / `PRISM_DASHBOARD_PASS`. JWKS JWT auth is available via `PRISM_JWKS_URI` for agent-native authentication (works with Auth0, AgentLair ([llms.txt](https://agentlair.com/llms.txt)), Keycloak, Cognito, or any standard JWKS endpoint).
  - **Long-lived clients can accumulate zombie processes.** MCP clients that run for extended periods (e.g., Claude CLI) may leave orphaned Prism server processes. The lifecycle manager detects true orphans (PPID=1) but allows coexistence for active parent processes. Use `PRISM_INSTANCE` to isolate instances across clients.
  - **Migration is one-way.** Universal Import ingests sessions *into* Prism but does not export back to Claude/Gemini/OpenAI formats. Use `session_export_memory` for portable JSON/Markdown export, or the `vault` format for Obsidian/Logseq-compatible `.zip` archives.
@@ -971,10 +971,10 @@ return false;}
     }
     catch {
         res.writeHead(503, { "Content-Type": "application/json" });
-         return res.end(JSON.stringify({ error: "LLM Provider not configured for semantic search. Provide a GOOGLE_API_KEY or equivalent." }));
+         return res.end(JSON.stringify({ error: "LLM Provider not configured for semantic search. Configure an embedding provider in the Mind Palace dashboard." }));
     }
     const queryEmbedding = await llm.generateEmbedding(queryText);
-     // We query limit + offset, then slice manually since the storage
+     // We query limit + offset, then slice manually since the storage
     // layer interface limit parameter doesn't natively expose offset.
     const results = await s.searchMemory({
         queryEmbedding: JSON.stringify(queryEmbedding),
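The comment in the hunk above describes an over-fetch-and-slice pattern for pagination. A minimal standalone sketch of that idea — `searchWithOffset` and the injected `search` function are illustrative names, not Prism's API:

```javascript
// Over-fetch-and-slice: when a storage layer only accepts `limit`, request
// limit + offset rows and discard the first `offset` client-side.
async function searchWithOffset(search, query, limit, offset) {
  const rows = await search({ query, limit: limit + offset });
  return rows.slice(offset, offset + limit);
}
```

The trade-off is extra rows transferred per page, which is acceptable for small offsets but degrades for deep pagination.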
@@ -1205,7 +1205,7 @@ self.addEventListener('message', (e) => {
 .bg {
     position: fixed;
     inset: 0;
-     background-image:
+     background-image:
         radial-gradient(circle at 20% 30%, rgba(139, 92, 246, 0.08) 0%, transparent 50%),
         radial-gradient(circle at 80% 70%, rgba(59, 130, 246, 0.06) 0%, transparent 50%);
     z-index: 0;
@@ -228,7 +228,7 @@ export const BRAVE_ANSWERS_TOOL = {
 };
 // Analyzes academic research papers using Google's Gemini model.
 // Supports multiple analysis types: summary, critique, literature review, key findings.
- // Requires GOOGLE_API_KEY to be configured.
+ // Requires a configured text provider (Gemini, OpenAI, or Anthropic).
 export const RESEARCH_PAPER_ANALYSIS_TOOL = {
     name: "gemini_research_paper_analysis",
     description: "Performs in-depth analysis of research papers using Google's Gemini-2.0-flash model. " +
@@ -29,7 +29,7 @@ import { getSetting } from "../storage/configStorage.js";
 // containing: strategy, scores, latency breakdown (embedding/storage/total), and metadata.
 // See src/utils/tracing.ts for full type definitions and design decisions.
 import { createMemoryTrace, traceToContentBlock } from "../utils/tracing.js";
- import { GOOGLE_API_KEY, PRISM_USER_ID } from "../config.js";
+ import { PRISM_USER_ID } from "../config.js";
 import { isKnowledgeSearchArgs, isKnowledgeForgetArgs, isSessionSearchMemoryArgs, isKnowledgeVoteArgs,
 // v4.2: Sync Rules type guard
 isKnowledgeSyncRulesArgs, isSessionIntuitiveRecallArgs, isSessionSynthesizeEdgesArgs, isSessionCognitiveRouteArgs, } from "./sessionMemoryDefinitions.js";
@@ -290,17 +290,6 @@ export async function sessionSearchMemoryHandler(args) {
     // Phase 1: Start total latency timer BEFORE any work (embedding + storage)
     const totalStart = performance.now();
     // Step 1: Generate embedding for the search query
-     if (!GOOGLE_API_KEY) {
-         return {
-             content: [{
-                     type: "text",
-                     text: `❌ Semantic search requires GOOGLE_API_KEY for embedding generation.\n` +
-                         `Set this environment variable and restart the server.\n\n` +
-                         `💡 As a workaround, try knowledge_search (keyword-based) instead.`,
-                 }],
-             isError: true,
-         };
-     }
     let queryEmbedding;
     // Phase 1: Start embedding latency timer — isolates Gemini API call time.
     // This is the most variable component: 50ms on a good day, 2000ms under load.
@@ -390,7 +379,7 @@ export async function sessionSearchMemoryHandler(args) {
         `Tips:\n` +
         `• Lower the similarity_threshold (e.g., 0.5) for broader results\n` +
         `• Try knowledge_search for keyword-based matching\n` +
-         `• Ensure sessions have been saved with embeddings (requires GOOGLE_API_KEY)`,
+         `• Ensure sessions have been saved with embeddings (requires a configured embedding provider)`,
     }];
     // Phase 1: Trace is still valuable on empty results — it proves the search
     // executed and reveals whether the bottleneck was embedding or storage.
@@ -29,7 +29,7 @@ import { getLLMProvider } from "../utils/llm/factory.js";
 import { getCurrentGitState, getGitDrift } from "../utils/git.js";
 import { getSetting, getAllSettings } from "../storage/configStorage.js";
 import { mergeHandoff, dbToHandoffSchema, sanitizeForMerge } from "../utils/crdtMerge.js";
- import { GOOGLE_API_KEY, PRISM_USER_ID, PRISM_AUTO_CAPTURE, PRISM_CAPTURE_PORTS } from "../config.js";
+ import { PRISM_USER_ID, PRISM_AUTO_CAPTURE, PRISM_CAPTURE_PORTS } from "../config.js";
 import { captureLocalEnvironment } from "../utils/autoCapture.js";
 import { fireCaptionAsync } from "../utils/imageCaptioner.js";
 import { isSessionSaveLedgerArgs, isSessionSaveHandoffArgs, isSessionLoadContextArgs, isMemoryHistoryArgs, isMemoryCheckoutArgs, // v2.2.0: health check type guard
@@ -134,7 +134,7 @@ export async function sessionSaveLedgerHandler(args) {
         role: effectiveRole, // v3.0: Hivemind role scoping (dashboard fallback)
     });
     // ─── Fire-and-forget embedding generation ───
-     if (GOOGLE_API_KEY && result) {
+     if (result) {
         const embeddingText = [summary, ...(decisions || [])].join("\n");
         const savedEntry = Array.isArray(result) ? result[0] : result;
         const entryId = savedEntry?.id;
@@ -230,7 +230,7 @@ export async function sessionSaveLedgerHandler(args) {
         (todos?.length ? `TODOs: ${todos.length} items\n` : "") +
         (files_changed?.length ? `Files changed: ${files_changed.length}\n` : "") +
         (decisions?.length ? `Decisions: ${decisions.length}\n` : "") +
-         (GOOGLE_API_KEY ? `📊 Embedding generation queued for semantic search.\n` : "") +
+         `📊 Embedding generation queued for semantic search.\n` +
         repoPathWarning +
         `\nRaw response: ${JSON.stringify(result)}`,
     }],
@@ -450,15 +450,14 @@ export async function sessionSaveHandoffHandler(args, server) {
     // merges contradicting facts in the background (~2-3s).
     //
     // TRIGGER CONDITIONS (all must be true):
-     //   1. GOOGLE_API_KEY is configured (Gemini is available)
-     //   2. The handoff was an UPDATE (not a brand-new project)
-     //   3. key_context was provided (something to merge)
+     //   1. The handoff was an UPDATE (not a brand-new project)
+     //   2. key_context was provided (something to merge)
     //
     // OCC SAFETY:
     // If the user saves another handoff while the merger runs,
     // the merger's save will fail with a version conflict. This is
     // intentional — active user input always wins over background merging.
-     if (GOOGLE_API_KEY && data.status === "updated" && key_context) {
+     if (data.status === "updated" && key_context) {
         // Use dynamic import to avoid loading Gemini SDK if not needed
         import("../utils/factMerger.js").then(async ({ consolidateFacts }) => {
             try {
@@ -805,7 +804,7 @@ export async function sessionLoadContextHandler(args) {
     // ─── SDM Intuitive Recall (v5.5) ───
     // Generate embedding of current context and fetch latent SDM patterns
     let sdmRecallBlock = "";
-     if (level !== "quick" && GOOGLE_API_KEY) {
+     if (level !== "quick") {
         try {
             const activeText = [d.last_summary, d.key_context, ...(d.keywords || [])].filter(Boolean).join(" ");
             if (activeText.length > 10) {
@@ -1233,7 +1232,7 @@ export async function sessionSaveExperienceHandler(args) {
         importance: event_type === "correction" ? 1 : 0,
     });
     // Fire-and-forget embedding generation
-     if (GOOGLE_API_KEY && result) {
+     if (result) {
         const embeddingText = summary;
         const savedEntry = Array.isArray(result) ? result[0] : result;
         const entryId = savedEntry?.id;
@@ -29,7 +29,7 @@
  * "Merge skipped due to active session."
  *
  * REQUIREMENTS:
-  * - GOOGLE_API_KEY must be set (skips gracefully if not)
+  * - A text provider must be configured (skips gracefully if not)
  * - Uses gemini-2.5-flash for speed (~2-3s per merge)
  * ═══════════════════════════════════════════════════════════════════
  */
@@ -0,0 +1,9 @@
+ export class DisabledTextAdapter {
+     async generateText(_prompt, _systemInstruction) {
+         throw new Error("Text generation is not available. " +
+             "Configure an AI provider in the Mind Palace dashboard.");
+     }
+     async generateEmbedding(_text) {
+         throw new Error("[DisabledTextAdapter] Embedding is handled by a separate adapter — this method should not be called directly.");
+     }
+ }
@@ -0,0 +1,114 @@
+ import { getSettingSync } from "../../../storage/configStorage.js";
+ import { debugLog } from "../../logger.js";
+ const EMBEDDING_DIMS = 768;
+ const MAX_EMBEDDING_CHARS = 8000;
+ const DEFAULT_MODEL = "nomic-ai/nomic-embed-text-v1.5";
+ const DEFAULT_REVISION = "main";
+ // MODEL_ID_PATTERN allows '.' in the name segment — the separate '..' check below
+ // handles directory traversal (e.g., "owner/foo..bar" passes the regex but is invalid).
+ const MODEL_ID_PATTERN = /^[a-zA-Z0-9_-]{1,64}\/[a-zA-Z0-9._-]{1,128}$/;
+ // Allowed: "main", 40-char commit SHA, semver tag like "v1.5" or "v1.5.0"
+ const REVISION_PATTERN = /^(main|[a-f0-9]{40}|v\d+(\.\d+){0,2})$/;
+ export class LocalEmbeddingAdapter {
+     /** @internal Resolves once pipeline initialization completes. Callers and tests await this for readiness. */
+     loadPromise;
+     pipe = null;
+     loadError = null;
+     constructor() {
+         this.loadPromise = this.initPipeline();
+     }
+     async generateText(_prompt, _systemInstruction) {
+         throw new Error("LocalEmbeddingAdapter does not support text generation. " +
+             "It is an embedding-only provider. Configure a text provider in the Mind Palace dashboard.");
+     }
+     async generateEmbedding(text) {
+         if (!text || !text.trim()) {
+             throw new Error("[LocalEmbeddingAdapter] generateEmbedding called with empty text");
+         }
+         let inputText = text;
+         if (inputText.length > MAX_EMBEDDING_CHARS) {
+             inputText = inputText.substring(0, MAX_EMBEDDING_CHARS);
+             const lastSpace = inputText.lastIndexOf(" ");
+             if (lastSpace > 0)
+                 inputText = inputText.substring(0, lastSpace);
+         }
+         await this.loadPromise;
+         if (this.loadError)
+             throw this.loadError;
+         if (!this.pipe) {
+             throw new Error("[LocalEmbeddingAdapter] Pipeline not initialized and no load error recorded");
+         }
+         const result = await this.pipe(`search_document: ${inputText}`, { pooling: "mean", normalize: true });
+         const tensorData = result.data;
+         if (!tensorData || !(tensorData instanceof Float32Array)) {
+             throw new Error("[LocalEmbeddingAdapter] Unexpected pipeline output shape — expected { data: Float32Array }. " +
+                 "This may indicate an incompatible @huggingface/transformers version.");
+         }
+         const vec = Array.from(tensorData);
+         if (vec.length !== EMBEDDING_DIMS) {
+             throw new Error(`[LocalEmbeddingAdapter] Embedding dimension mismatch: expected ${EMBEDDING_DIMS}, got ${vec.length}. ` +
+                 `Check the local_embedding_model setting.`);
+         }
+         return vec;
+     }
+     async initPipeline() {
+         const model = process.env.LOCAL_EMBEDDING_MODEL ?? getSettingSync("local_embedding_model", DEFAULT_MODEL);
+         if (!MODEL_ID_PATTERN.test(model) || model.includes("..")) {
+             this.loadError = new Error(`[LocalEmbeddingAdapter] Invalid local_embedding_model: "${model}". ` +
+                 `Must be a HuggingFace model ID in "owner/name" format.`);
+             return;
+         }
+         const hfEndpoint = process.env.HF_ENDPOINT;
+         if (hfEndpoint) {
+             try {
+                 const parsed = new URL(hfEndpoint);
+                 const isTrusted = parsed.hostname === "huggingface.co" ||
+                     parsed.hostname.endsWith(".huggingface.co");
+                 if (!isTrusted) {
+                     console.warn(`[LocalEmbeddingAdapter] HF_ENDPOINT hostname "${parsed.hostname}" is not huggingface.co — ` +
+                         `model downloads are redirected. Only set if you control and trust this server.`);
+                 }
+             }
+             catch {
+                 console.warn(`[LocalEmbeddingAdapter] HF_ENDPOINT is not a valid URL: "${hfEndpoint}". Ignoring.`);
+             }
+         }
+         let transformers;
+         try {
+             transformers = await import("@huggingface/transformers");
+         }
+         catch (err) {
+             const e = err instanceof Error ? err : new Error(String(err));
+             this.loadError = e.code === "ERR_MODULE_NOT_FOUND"
+                 ? new Error("[LocalEmbeddingAdapter] @huggingface/transformers is not installed. " +
+                     "Run: npm install @huggingface/transformers")
+                 : e;
+             return;
+         }
+         const quantized = getSettingSync("local_embedding_quantized", "true") !== "false";
+         const dtype = quantized ? "q8" : "fp32";
+         const revision = getSettingSync("local_embedding_revision", DEFAULT_REVISION);
+         if (!REVISION_PATTERN.test(revision)) {
+             this.loadError = new Error(`[LocalEmbeddingAdapter] Invalid local_embedding_revision: "${revision}". ` +
+                 `Allowed values: "main", a 40-char commit SHA, or a semver tag like "v1.5".`);
+             return;
+         }
+         try {
+             const pipelineInstance = await transformers.pipeline("feature-extraction", model, { dtype, revision });
+             this.pipe = pipelineInstance;
+             try {
+                 await this.pipe("warmup text", { pooling: "mean", normalize: true });
+                 debugLog(`[LocalEmbeddingAdapter] Pipeline ready and warmed up: ${model} (${dtype})`);
+             }
+             catch (warmupErr) {
+                 const we = warmupErr instanceof Error ? warmupErr : new Error(String(warmupErr));
+                 console.warn(`[LocalEmbeddingAdapter] Warmup failed (non-fatal): ${we.message}. ` +
+                     `First embedding call may be slightly slower.`);
+             }
+         }
+         catch (err) {
+             this.loadError = err instanceof Error ? err : new Error(String(err));
+             console.error(`[LocalEmbeddingAdapter] Failed to load pipeline: ${this.loadError.message}`);
+         }
+     }
+ }
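The adapter's input truncation above (hard-cut at `MAX_EMBEDDING_CHARS`, then back off to the last space) is easy to isolate. A standalone sketch of that step — `truncateForEmbedding` is an illustrative name, not part of Prism's API:

```javascript
// Mirrors the adapter's truncation: cut at the character limit, then back
// off to the last space so no word is split mid-token before embedding.
// (Helper name is assumed; the logic matches the diff above.)
const MAX_EMBEDDING_CHARS = 8000;

function truncateForEmbedding(text, maxChars = MAX_EMBEDDING_CHARS) {
  if (text.length <= maxChars) return text;
  let cut = text.substring(0, maxChars);
  const lastSpace = cut.lastIndexOf(" ");
  if (lastSpace > 0) cut = cut.substring(0, lastSpace);
  return cut;
}
```

Backing off to a word boundary matters because a half-word tail can tokenize into noise that skews the mean-pooled embedding.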
@@ -11,7 +11,7 @@
  * Two independent settings control text and embedding routing:
  *
  *   text_provider      — "gemini" (default) | "openai" | "anthropic"
-  *   embedding_provider — "auto" (default) | "gemini" | "openai" | "voyage"
+  *   embedding_provider — "auto" (default) | "gemini" | "openai" | "voyage" | "local"
  *
  * When embedding_provider = "auto":
  *   * If text_provider is gemini or openai → use same provider for embeddings
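The routing rule quoted above can be sketched as a pure function. This is an editor's illustration: the function name is assumed, and the gemini fallback for text providers without an embedding endpoint (e.g., anthropic) is an inference from the factory's `default` case, not a documented rule.

```javascript
// Sketch of embedding_provider routing (illustrative only).
// "auto" reuses the text provider when it can also embed; the gemini
// fallback mirrors the factory's switch default (an assumption).
function resolveEmbeddingProvider(embeddingProvider, textProvider) {
  if (embeddingProvider !== "auto") return embeddingProvider; // explicit choice wins
  return textProvider === "gemini" || textProvider === "openai"
    ? textProvider
    : "gemini";
}
```

So `embedding_provider=local` always routes to the local adapter regardless of the text provider, which is what lets semantic search run with zero keys.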
@@ -44,6 +44,8 @@ import { GeminiAdapter } from "./adapters/gemini.js";
 import { OpenAIAdapter } from "./adapters/openai.js";
 import { AnthropicAdapter } from "./adapters/anthropic.js";
 import { VoyageAdapter } from "./adapters/voyage.js";
+ import { LocalEmbeddingAdapter } from "./adapters/local.js";
+ import { DisabledTextAdapter } from "./adapters/disabledText.js";
 import { TracingLLMProvider } from "./adapters/traced.js";
 // Module-level singleton — one composed provider per MCP server process.
 let providerInstance = null;
@@ -54,6 +56,7 @@ function buildTextAdapter(type) {
     switch (type) {
         case "anthropic": return new AnthropicAdapter();
         case "openai": return new OpenAIAdapter();
+         case "none": return new DisabledTextAdapter();
         case "gemini":
         default: return new GeminiAdapter();
     }
@@ -66,6 +69,7 @@ function buildEmbeddingAdapter(type) {
     switch (type) {
         case "openai": return new OpenAIAdapter();
         case "voyage": return new VoyageAdapter();
+         case "local": return new LocalEmbeddingAdapter();
         case "gemini":
         default: return new GeminiAdapter();
     }
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "prism-mcp-server",
-   "version": "9.12.0",
+   "version": "9.13.0",
   "mcpName": "io.github.dcostenco/prism-mcp",
   "description": "The Mind Palace for AI Agents — a true Cognitive Architecture with Hebbian learning (episodic→semantic consolidation), ACT-R spreading activation (multi-hop causal reasoning), uncertainty-aware rejection gates (agents that know when they don't know), adversarial evaluation (anti-sycophancy), fail-closed Dark Factory pipelines, persistent memory (SQLite/Supabase), multi-agent Hivemind, time travel & visual dashboard. Zero-config local mode.",
   "module": "index.ts",
@@ -80,7 +80,10 @@
     "dark-factory",
     "autonomous-pipeline",
     "fail-closed",
-     "anti-sycophancy"
+     "anti-sycophancy",
+     "local-embeddings",
+     "transformers-js",
+     "nomic-embed"
   ],
   "homepage": "https://github.com/dcostenco/prism-mcp",
   "repository": {
@@ -90,6 +93,7 @@
   "author": "Dmitri Costenco",
   "license": "MIT",
   "devDependencies": {
+     "@huggingface/transformers": "3.1.0",
     "@types/bun": "latest",
     "@types/jsdom": "^28.0.1",
     "@types/mozilla-readability": "^0.2.1",
@@ -99,7 +103,13 @@
     "vitest": "^4.1.1"
   },
   "peerDependencies": {
-     "typescript": "^5.0.0"
+     "typescript": "^5.0.0",
+     "@huggingface/transformers": "~3.1.0"
+   },
+   "peerDependenciesMeta": {
+     "@huggingface/transformers": {
+       "optional": true
+     }
   },
   "dependencies": {
     "@anthropic-ai/sdk": "^0.81.0",