@rixter145/open-brain 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,204 @@
# Open Brain – MCP server

One shared memory layer for Cursor, Claude, and any MCP client: thoughts are stored in **Postgres + pgvector** and exposed via an **MCP server** with semantic search and capture.

- **Capture**: Save thoughts from any client; each is embedded (OpenAI `text-embedding-3-small`) and stored.
- **Retrieve**: Search semantically by meaning, list recent thoughts, or view stats.
- **Same brain everywhere**: One Postgres DB and one MCP server; point Cursor, Claude Desktop, and other clients at it.

## Prerequisites

- **Node.js** 18+
- **Postgres** 15+ with the **pgvector** extension (e.g. the [Supabase](https://supabase.com) free tier, or self‑hosted).
- An **OpenAI API key** (for embeddings), a **Google AI Studio API key** (see [Using Google AI Studio](#using-google-ai-studio)), or **Ollama** for free local embeddings (see [Using Ollama](#using-ollama-free-embeddings) below).

## Finish setup on this machine

1. **Build** (in a terminal where Node/npm are available):
   ```powershell
   cd "c:\Users\rix\OneDrive\open-brain"
   npm install
   npm run build
   ```
2. **Env**: Edit `.env` in the project root and set `DATABASE_URL` and your embedding provider key (`OPENAI_API_KEY`, `GOOGLE_API_KEY`, or use `EMBEDDING_PROVIDER=ollama`).
3. **Database**: If you don’t have a DB yet, follow [Database setup](#1-database-setup) below (the Supabase free tier is the easiest). Then run [schema.sql](schema.sql) once.
4. **Client**: Add the Open Brain MCP server in your MCP client (see [Connect clients](#3-connect-clients) below), then reload.

---

## Using Ollama (free embeddings)

To avoid paying for OpenAI, you can use **Ollama** with **nomic-embed-text** (runs locally, no API key).

1. **Install Ollama** from [ollama.com](https://ollama.com) and start it.
2. **Pull the embedding model:**
   `ollama pull nomic-embed-text`
3. **In `.env`:** set `EMBEDDING_PROVIDER=ollama`. Optionally set `OLLAMA_HOST=http://localhost:11434` if Ollama runs elsewhere. You do **not** need `OPENAI_API_KEY`.
4. **Database:** The Ollama model uses **768 dimensions** (not 1536). If you already have a `thoughts` table from the OpenAI schema, drop it and re-run the Ollama schema: in the Supabase SQL Editor, run `DROP TABLE IF EXISTS thoughts;`, then paste and run the contents of [schema-ollama.sql](schema-ollama.sql).
5. **Cursor MCP:** In the Open Brain server env, include `EMBEDDING_PROVIDER=ollama` (and optionally `OLLAMA_HOST`). No `OPENAI_API_KEY` is needed.
6. Restart Cursor (or reload the window).

After that, capture and search use your local Ollama embeddings. You cannot mix OpenAI (1536) and Ollama (768) embeddings in the same table.

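Under the hood, the server posts each thought to Ollama's `/api/embed` endpoint. A minimal sketch of the request payload it sends (mirroring `dist/embeddings.js`; the helper name here is illustrative, not part of the package):

```typescript
// Build the JSON body the server POSTs to {OLLAMA_HOST}/api/embed.
// Ollama replies with { embeddings: number[][] }; nomic-embed-text
// returns 768-entry vectors, which is why the schema must be 768-dim.
function buildOllamaEmbedRequest(text: string): { model: string; input: string } {
  // Input is truncated to 8192 characters before embedding.
  return { model: "nomic-embed-text", input: text.slice(0, 8192) };
}

console.log(JSON.stringify(buildOllamaEmbedRequest("remember to email Sarah")));
```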
---

## Using Google AI Studio

To use **Google AI Studio (Gemini)** for embeddings instead of OpenAI:

1. **Get an API key** at [Google AI Studio](https://aistudio.google.com/app/apikey). Sign in, create or select a project, and create an API key.
2. **In `.env`:** set `EMBEDDING_PROVIDER=google` and `GOOGLE_API_KEY=your-key`. You do **not** need `OPENAI_API_KEY`.
3. **Database:** Google embeddings use **768 dimensions** (same as Ollama), so the same schema applies: if you don’t already have a 768-dim table, run [schema-ollama.sql](schema-ollama.sql) (or run `DROP TABLE IF EXISTS thoughts;` first, then the contents of schema-ollama.sql).
4. **Cursor MCP:** In the Open Brain server env, include `EMBEDDING_PROVIDER=google` and `GOOGLE_API_KEY`. No `OPENAI_API_KEY` is needed.
5. Restart Cursor (or reload the window).

You cannot mix different embedding dimensions in the same table (OpenAI 1536 vs Ollama/Google 768).

---

## 1. Database setup

Open Brain needs **Postgres 15+** with the **pgvector** extension. The easiest option is [Supabase](https://supabase.com) (free tier, pgvector included).

### Option A: Supabase (recommended, free)

1. **Create an account**
   Go to [supabase.com](https://supabase.com) and sign up (GitHub or email).

2. **Create a project**
   - Click **New project**.
   - Choose your **organization** (or create one).
   - Set **Name** (e.g. `open-brain`), **Database password** (save it somewhere safe), and **Region**.
   - Click **Create new project** and wait until it’s ready.

3. **Get the connection string**
   - In the left sidebar: **Project Settings** (gear) → **Database**.
   - Under **Connection string**, choose **URI**.
   - Copy the URI. It looks like:
     `postgresql://postgres.[ref]:[YOUR-PASSWORD]@aws-0-[region].pooler.supabase.com:6543/postgres`
   - Replace `[YOUR-PASSWORD]` with the database password you set in step 2.
   - **If your password has special characters** (`@`, `#`, `/`, `%`, etc.), they must be URL-encoded in the URI. Run:
     `node scripts/encode-password.mjs "YourPassword"`
     and use the output in place of the password in the URL.
   - Put this full URI in your `.env` as `DATABASE_URL`.

4. **Run the schema**
   - In the left sidebar: **SQL Editor**.
   - Click **New query**.
   - Open [schema.sql](schema.sql) in this repo, copy its **entire** contents, paste it into the editor, and click **Run** (or Ctrl+Enter).
   - You should see “Success. No rows returned.” The `thoughts` table and vector index are now created.

5. **Use the same `DATABASE_URL`** in your `.env` and in Cursor’s MCP config for the Open Brain server.

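The encoding the helper script performs is standard percent-encoding; Node's built-in `encodeURIComponent` does the same job. A minimal sketch (the function name is illustrative):

```typescript
// Percent-encode a password so it can be spliced into a Postgres URI.
// encodeURIComponent escapes @, #, /, % and the other URI-reserved characters.
function encodePassword(raw: string): string {
  return encodeURIComponent(raw);
}

// "p@ss#1" becomes "p%40ss%231", which is safe inside the connection string.
console.log(encodePassword("p@ss#1"));
```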
### Option B: Other Postgres (with pgvector)

If you use another host (e.g. Neon, Railway, or your own server):

- Ensure **pgvector** is enabled (run `CREATE EXTENSION IF NOT EXISTS vector;` if needed).
- Connection string format: `postgresql://USER:PASSWORD@HOST:PORT/DATABASE`.
- Run the contents of [schema.sql](schema.sql) once (e.g. via `psql $DATABASE_URL -f schema.sql` or your host’s SQL runner).

## 2. Install and run the MCP server

```bash
cd open-brain
npm install
npm run build
```

Set environment variables (or put them in a `.env` file in the project root; the server loads it via `dotenv`):

- **`DATABASE_URL`** – Postgres connection string (e.g. `postgresql://user:pass@host:5432/dbname`).
- **`OPENAI_API_KEY`** – OpenAI API key for embeddings (default provider).
- **`GOOGLE_API_KEY`** – Google AI Studio API key when `EMBEDDING_PROVIDER=google` (get one at [Google AI Studio](https://aistudio.google.com/app/apikey)).
- **`EMBEDDING_PROVIDER`** – Optional. Set to `ollama`, `google`, or `gemini` to use that provider instead of OpenAI.

Run the server (stdio; clients will start it as a subprocess):

```bash
npm start
# or for development: npm run dev
```

## 3. Connect clients

Any MCP client (Cursor, Claude Desktop, etc.) can connect to Open Brain. Add a server entry that runs the built server and passes the required env.

**Server entry shape:** `command`: `"node"`, `args`: path to `dist/index.js` (absolute, or relative to the workspace if your client uses this repo as the working directory). Include `env` with `DATABASE_URL` and your embedding provider key (`OPENAI_API_KEY`; or `GOOGLE_API_KEY` plus `EMBEDDING_PROVIDER=google`; or `EMBEDDING_PROVIDER=ollama` for Ollama).

Example (replace the path and env values with your own):

```json
{
  "mcpServers": {
    "open-brain": {
      "command": "node",
      "args": ["/path/to/open_brain/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://user:password@host:5432/database",
        "GOOGLE_API_KEY": "your-key",
        "EMBEDDING_PROVIDER": "google"
      }
    }
  }
}
```

- **Cursor:** Settings → MCP, or a project-level `.cursor/mcp.json` (see your client’s docs for the config location).
- **Claude Desktop:** e.g. `%APPDATA%\Claude\claude_desktop_config.json` on Windows.
- **Abacus AI Deep Agent:** [MCP Servers How-to](https://abacus.ai/help/chatllm-ai-super-assistant/mcp-servers) — use the npx config below in MCP JSON Config.

**Using npx (Abacus AI or any stdio client):** Paste this into your client's MCP config (Abacus: only the inner object; Cursor/Claude: nest it under `mcpServers`). The package is published as `@rixter145/open-brain` (see [Publishing to npm](#publishing-to-npm)).

```json
{
  "open_brain": {
    "command": "npx",
    "args": ["-y", "@rixter145/open-brain"],
    "env": {
      "DATABASE_URL": "postgresql://user:password@host:5432/database",
      "OPENAI_API_KEY": "your-key"
    }
  }
}
```

For Google or Ollama embeddings, add `EMBEDDING_PROVIDER` and the matching key to `env` (see the sections above).

Restart the client after changing the config.

## MCP tools

| Tool | Purpose |
|------|---------|
| **capture_thought** | Save a thought (content + optional source). It is embedded and stored. |
| **search_brain** | Semantic search by meaning (e.g. “career change”, “meeting with Sarah”). |
| **list_recent** | List recent thoughts (optionally limited to the last N days). |
| **brain_stats** | Count of thoughts and activity in the last 7 and 30 days. |

All tools return plain text (plus optional metadata) so any model can interpret the results.

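`search_brain` ranks rows with pgvector's `<=>` cosine-distance operator (see `dist/db.js`). A minimal sketch of that distance, to show why thoughts closer in meaning sort first:

```typescript
// Cosine distance as pgvector's <=> operator computes it: 1 - cosine similarity.
// 0 means the vectors point the same way; 1 means they are orthogonal (unrelated).
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosineDistance([1, 0], [1, 0])); // 0 – identical direction
console.log(cosineDistance([1, 0], [0, 1])); // 1 – unrelated
```

`ORDER BY embedding <=> $1::vector LIMIT n` therefore returns the n thoughts with the smallest distance to the query embedding.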
## Embedding model

By default the server uses **OpenAI `text-embedding-3-small`** (1536 dimensions). If `EMBEDDING_PROVIDER=ollama` (or `OPENAI_API_KEY` is unset and `OLLAMA_HOST` is set), it uses **Ollama `nomic-embed-text`** (768 dimensions). If `EMBEDDING_PROVIDER=google` or `gemini`, it uses **Google `gemini-embedding-001`** (768 dimensions via `outputDimensionality`). Use the matching schema: [schema.sql](schema.sql) for OpenAI (1536 dims), [schema-ollama.sql](schema-ollama.sql) for Ollama or Google (768 dims). Do not change the embedding model without re-embedding existing rows.

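The selection rules above can be sketched as a pure function (mirroring the logic in `dist/embeddings.js`; the function and table names here are illustrative):

```typescript
type Provider = "openai" | "ollama" | "google";

// Mirror of the provider selection in dist/embeddings.js: EMBEDDING_PROVIDER wins;
// otherwise Ollama is used only when no OpenAI key is set but OLLAMA_HOST is.
function pickProvider(env: { EMBEDDING_PROVIDER?: string; OPENAI_API_KEY?: string; OLLAMA_HOST?: string }): Provider {
  if (env.EMBEDDING_PROVIDER === "google" || env.EMBEDDING_PROVIDER === "gemini") return "google";
  if (env.EMBEDDING_PROVIDER === "ollama") return "ollama";
  if (!env.OPENAI_API_KEY?.trim() && env.OLLAMA_HOST?.trim()) return "ollama";
  return "openai";
}

// Each provider implies a fixed vector width; the table schema must match it.
const DIMS: Record<Provider, number> = { openai: 1536, ollama: 768, google: 768 };

console.log(pickProvider({ OPENAI_API_KEY: "sk-test" }));       // "openai" → schema.sql (1536)
console.log(pickProvider({ EMBEDDING_PROVIDER: "ollama" }));    // "ollama" → schema-ollama.sql (768)
```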
## Project layout

- `schema.sql` – Postgres + pgvector schema for OpenAI (1536 dims); `schema-ollama.sql` – the same schema for Ollama or Google (768 dims).
- `src/index.ts` – MCP server and tool handlers.
- `src/db.ts` – Postgres + pgvector access (insert, search, list, stats).
- `src/embeddings.ts` – Embedding calls (OpenAI, Google Gemini, or Ollama; env-driven).

## Publishing to npm

The package is scoped as `@rixter145/open-brain` (npm rejects an unscoped `open-brain` as too similar to the existing `openbrain`). To publish so clients can use `npx @rixter145/open-brain` (e.g. Abacus AI):

1. Log in: `npm login` (username, password, email, and OTP if 2FA is enabled).
2. From the repo root: `npm publish --access=public`.

The `prepublishOnly` script builds before publishing; the package ships only `dist/`.

## Optional: metadata extraction

The plan mentioned optional metadata (people, topics, type, action items) extracted via an LLM call. The schema and table already have these columns, but the server currently stores only `content`, `embedding`, and `source`. You can extend `capture_thought` to call an LLM, parse the response, and pass `people`, `topics`, `type`, and `action_items` to `insertThought`.
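A sketch of the parsing half of that extension, assuming the LLM is prompted to answer with a JSON object (the reply shape and helper names below are assumptions, not an existing API):

```typescript
// Hypothetical metadata shape matching the optional columns insertThought accepts.
interface ThoughtMetadata {
  people: string[];
  topics: string[];
  type: string | null;
  action_items: string[];
}

// Defensively parse an LLM reply into those columns; anything missing or
// malformed falls back to an empty value rather than failing the capture.
function parseMetadata(llmReply: string): ThoughtMetadata {
  try {
    const raw = JSON.parse(llmReply);
    const strings = (v: unknown) =>
      Array.isArray(v) ? v.filter((x): x is string => typeof x === "string") : [];
    return {
      people: strings(raw.people),
      topics: strings(raw.topics),
      type: typeof raw.type === "string" ? raw.type : null,
      action_items: strings(raw.action_items),
    };
  } catch {
    return { people: [], topics: [], type: null, action_items: [] };
  }
}
```

`capture_thought` could then spread this result into its existing `insertThought` call.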
package/dist/db.d.ts ADDED
@@ -0,0 +1,28 @@
export interface ThoughtRow {
    id: string;
    content: string;
    embedding: number[];
    people: string[];
    topics: string[];
    type: string | null;
    action_items: string[];
    source: string | null;
    created_at: Date;
}
export declare function insertThought(params: {
    content: string;
    embedding: number[];
    people?: string[];
    topics?: string[];
    type?: string | null;
    action_items?: string[];
    source?: string | null;
}): Promise<ThoughtRow>;
export declare function searchThoughts(queryEmbedding: number[], limit?: number): Promise<ThoughtRow[]>;
export declare function listRecentThoughts(limit?: number, days?: number): Promise<ThoughtRow[]>;
export declare function getBrainStats(): Promise<{
    total: number;
    last_7_days: number;
    last_30_days: number;
}>;
export declare function formatThoughtForOutput(t: ThoughtRow): string;
package/dist/db.js ADDED
@@ -0,0 +1,131 @@
import pg from "pg";
import pgvector from "pgvector/pg";
let pool = null;
function getPool() {
    const url = process.env.DATABASE_URL;
    if (!url)
        throw new Error("DATABASE_URL is required");
    if (!pool) {
        pool = new pg.Pool({ connectionString: url, max: 5 });
    }
    return pool;
}
export async function insertThought(params) {
    const client = await getPool().connect();
    try {
        pgvector.registerTypes(client);
        const embeddingSql = pgvector.toSql(params.embedding);
        const res = await client.query(`INSERT INTO thoughts (content, embedding, people, topics, type, action_items, source)
       VALUES ($1, $2::vector, $3::text[], $4::text[], $5, $6::text[], $7)
       RETURNING id, content, embedding::text, people, topics, type, action_items, source, created_at`, [
            params.content,
            embeddingSql,
            params.people ?? [],
            params.topics ?? [],
            params.type ?? null,
            params.action_items ?? [],
            params.source ?? null,
        ]);
        const row = res.rows[0];
        if (!row)
            throw new Error("Insert failed");
        return parseThoughtRow(row);
    }
    finally {
        client.release();
    }
}
function parseThoughtRow(row) {
    let vec = [];
    const e = row.embedding;
    if (Array.isArray(e))
        vec = e;
    else if (typeof e === "string")
        vec = e.startsWith("[") ? JSON.parse(e) : parsePgVector(e);
    return {
        id: row.id,
        content: row.content,
        embedding: vec,
        people: row.people ?? [],
        topics: row.topics ?? [],
        type: row.type ?? null,
        action_items: row.action_items ?? [],
        source: row.source ?? null,
        created_at: new Date(row.created_at),
    };
}
function parsePgVector(s) {
    if (typeof s !== "string")
        return [];
    const trimmed = s.replace(/^\[|\]$/g, "").trim();
    if (!trimmed)
        return [];
    return trimmed.split(",").map((n) => parseFloat(n.trim()));
}
export async function searchThoughts(queryEmbedding, limit = 10) {
    const client = await getPool().connect();
    try {
        const embeddingSql = pgvector.toSql(queryEmbedding);
        const res = await client.query(`SELECT id, content, embedding::text, people, topics, type, action_items, source, created_at
       FROM thoughts
       ORDER BY embedding <=> $1::vector
       LIMIT $2`, [embeddingSql, limit]);
        return res.rows.map(parseThoughtRow);
    }
    finally {
        client.release();
    }
}
export async function listRecentThoughts(limit = 20, days) {
    const client = await getPool().connect();
    try {
        if (days != null) {
            const res = await client.query(`SELECT id, content, embedding::text, people, topics, type, action_items, source, created_at
         FROM thoughts
         WHERE created_at >= now() - ($2::text || ' days')::interval
         ORDER BY created_at DESC
         LIMIT $1`, [limit, days]);
            return res.rows.map(parseThoughtRow);
        }
        const res = await client.query(`SELECT id, content, embedding::text, people, topics, type, action_items, source, created_at
       FROM thoughts
       ORDER BY created_at DESC
       LIMIT $1`, [limit]);
        return res.rows.map(parseThoughtRow);
    }
    finally {
        client.release();
    }
}
export async function getBrainStats() {
    const client = await getPool().connect();
    try {
        const res = await client.query(`SELECT
         count(*)::text AS total,
         count(*) FILTER (WHERE created_at >= now() - interval '7 days')::text AS last_7,
         count(*) FILTER (WHERE created_at >= now() - interval '30 days')::text AS last_30
       FROM thoughts`);
        const row = res.rows[0];
        return {
            total: parseInt(row?.total ?? "0", 10),
            last_7_days: parseInt(row?.last_7 ?? "0", 10),
            last_30_days: parseInt(row?.last_30 ?? "0", 10),
        };
    }
    finally {
        client.release();
    }
}
export function formatThoughtForOutput(t) {
    const meta = [];
    if (t.people?.length)
        meta.push(`People: ${t.people.join(", ")}`);
    if (t.topics?.length)
        meta.push(`Topics: ${t.topics.join(", ")}`);
    if (t.type)
        meta.push(`Type: ${t.type}`);
    if (t.action_items?.length)
        meta.push(`Action items: ${t.action_items.join("; ")}`);
    const metaLine = meta.length ? `[${meta.join(" | ")}]\n` : "";
    return `${metaLine}${t.content}\n— ${t.created_at.toISOString().slice(0, 10)}${t.source ? ` (${t.source})` : ""}`;
}
package/dist/embeddings.d.ts ADDED
@@ -0,0 +1,2 @@
export declare function embed(text: string): Promise<number[]>;
export declare const EMBEDDING_DIMS: number;
package/dist/embeddings.js ADDED
@@ -0,0 +1,95 @@
import { GoogleGenAI } from "@google/genai";
import OpenAI from "openai";
const OPENAI_MODEL = "text-embedding-3-small";
const OPENAI_DIMS = 1536;
const OLLAMA_MODEL = "nomic-embed-text";
const OLLAMA_DIMS = 768;
const GOOGLE_MODEL = "gemini-embedding-001";
const GOOGLE_DIMS = 768;
let openaiClient = null;
let googleClient = null;
function useGoogle() {
    const p = process.env.EMBEDDING_PROVIDER;
    return p === "google" || p === "gemini";
}
function useOllama() {
    if (process.env.EMBEDDING_PROVIDER === "ollama")
        return true;
    if (!useGoogle() && !process.env.OPENAI_API_KEY?.trim() && process.env.OLLAMA_HOST?.trim())
        return true;
    return false;
}
function getOpenAIClient() {
    const key = process.env.OPENAI_API_KEY;
    if (!key)
        throw new Error("OPENAI_API_KEY is required for embeddings");
    if (!openaiClient)
        openaiClient = new OpenAI({ apiKey: key });
    return openaiClient;
}
function getGoogleClient() {
    const key = process.env.GOOGLE_API_KEY || process.env.GEMINI_API_KEY;
    if (!key?.trim())
        throw new Error("GOOGLE_API_KEY or GEMINI_API_KEY is required when EMBEDDING_PROVIDER=google");
    if (!googleClient)
        googleClient = new GoogleGenAI({ apiKey: key });
    return googleClient;
}
async function embedGoogle(text) {
    const ai = getGoogleClient();
    const response = await ai.models.embedContent({
        model: GOOGLE_MODEL,
        contents: text.slice(0, 8192),
        config: { outputDimensionality: GOOGLE_DIMS },
    });
    const vec = response.embeddings?.[0]?.values;
    if (!vec || !Array.isArray(vec) || vec.length !== GOOGLE_DIMS) {
        throw new Error(`Invalid Google embedding response (expected ${GOOGLE_DIMS} dimensions)`);
    }
    return vec;
}
async function embedOllama(text) {
    const base = (process.env.OLLAMA_HOST || "http://localhost:11434").replace(/\/$/, "");
    const res = await fetch(`${base}/api/embed`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: OLLAMA_MODEL, input: text.slice(0, 8192) }),
    });
    if (!res.ok) {
        const err = await res.text();
        throw new Error(`Ollama embeddings failed: ${res.status} ${err}`);
    }
    const data = (await res.json());
    const vec = data.embeddings?.[0];
    if (!vec || !Array.isArray(vec) || vec.length !== OLLAMA_DIMS) {
        throw new Error(`Invalid Ollama embedding response (expected ${OLLAMA_DIMS} dimensions)`);
    }
    return vec;
}
async function embedOpenAI(text) {
    const openai = getOpenAIClient();
    const res = await openai.embeddings.create({
        model: OPENAI_MODEL,
        input: text.slice(0, 8191),
        dimensions: OPENAI_DIMS,
    });
    const vec = res.data[0]?.embedding;
    if (!vec || vec.length !== OPENAI_DIMS)
        throw new Error("Invalid embedding response");
    return vec;
}
export async function embed(text) {
    if (useGoogle())
        return embedGoogle(text);
    if (useOllama())
        return embedOllama(text);
    return embedOpenAI(text);
}
function getEmbeddingDims() {
    if (useGoogle())
        return GOOGLE_DIMS;
    if (useOllama())
        return OLLAMA_DIMS;
    return OPENAI_DIMS;
}
export const EMBEDDING_DIMS = getEmbeddingDims();
@@ -0,0 +1,2 @@
#!/usr/bin/env node
import "dotenv/config";
package/dist/index.js ADDED
@@ -0,0 +1,126 @@
#!/usr/bin/env node
import "dotenv/config";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { embed } from "./embeddings.js";
import { insertThought, searchThoughts, listRecentThoughts, getBrainStats, formatThoughtForOutput, } from "./db.js";
const server = new McpServer({
    name: "open-brain",
    version: "1.0.0",
});
// capture_thought: add a thought (content + optional source); embed and store in Postgres
server.registerTool("capture_thought", {
    title: "Capture thought",
    description: "Save a thought into your Open Brain. The content is embedded and stored so it can be found by semantic search later. Optional: set source (e.g. 'cursor', 'claude', 'slack').",
    inputSchema: z.object({
        content: z.string().describe("The thought or note to save"),
        source: z.string().optional().describe("Where this thought came from (e.g. cursor, claude)"),
    }),
}, async ({ content, source }) => {
    try {
        const vector = await embed(content);
        const row = await insertThought({
            content,
            embedding: vector,
            source: source ?? null,
        });
        return {
            content: [
                {
                    type: "text",
                    text: `Captured thought (id: ${row.id}). You can find it later with search_brain or list_recent.`,
                },
            ],
        };
    }
    catch (err) {
        const msg = err instanceof Error ? err.message : String(err);
        return {
            content: [{ type: "text", text: `Error: ${msg}` }],
            isError: true,
        };
    }
});
// search_brain: semantic search by meaning
server.registerTool("search_brain", {
    title: "Search brain",
    description: "Search your Open Brain by meaning. Give a natural language query (e.g. 'career change', 'meeting with Sarah'); returns the most relevant thoughts.",
    inputSchema: z.object({
        query: z.string().describe("What to search for (by meaning)"),
        limit: z.number().min(1).max(50).optional().default(10),
    }),
}, async ({ query, limit }) => {
    try {
        const queryVector = await embed(query);
        const thoughts = await searchThoughts(queryVector, limit);
        const text = thoughts.length === 0
            ? "No matching thoughts found."
            : thoughts.map(formatThoughtForOutput).join("\n\n---\n\n");
        return {
            content: [{ type: "text", text }],
        };
    }
    catch (err) {
        const msg = err instanceof Error ? err.message : String(err);
        return {
            content: [{ type: "text", text: `Error: ${msg}` }],
            isError: true,
        };
    }
});
// list_recent: list recent thoughts (optionally filter by days)
server.registerTool("list_recent", {
    title: "List recent thoughts",
    description: "List thoughts you captured recently. Optionally limit to the last N days (e.g. 7 for this week).",
    inputSchema: z.object({
        limit: z.number().min(1).max(100).optional().default(20),
        days: z.number().min(1).optional().describe("Only show thoughts from the last N days"),
    }),
}, async ({ limit, days }) => {
    try {
        const thoughts = await listRecentThoughts(limit, days);
        const text = thoughts.length === 0
            ? "No thoughts in this range."
            : thoughts.map(formatThoughtForOutput).join("\n\n---\n\n");
        return {
            content: [{ type: "text", text }],
        };
    }
    catch (err) {
        const msg = err instanceof Error ? err.message : String(err);
        return {
            content: [{ type: "text", text: `Error: ${msg}` }],
            isError: true,
        };
    }
});
// brain_stats: counts and activity
server.registerTool("brain_stats", {
    title: "Brain stats",
    description: "See how many thoughts are in your Open Brain and recent activity (last 7 and 30 days).",
    inputSchema: z.object({}),
}, async () => {
    try {
        const stats = await getBrainStats();
        const text = `Total thoughts: ${stats.total}\nLast 7 days: ${stats.last_7_days}\nLast 30 days: ${stats.last_30_days}`;
        return {
            content: [{ type: "text", text }],
        };
    }
    catch (err) {
        const msg = err instanceof Error ? err.message : String(err);
        return {
            content: [{ type: "text", text: `Error: ${msg}` }],
            isError: true,
        };
    }
});
async function main() {
    const transport = new StdioServerTransport();
    await server.connect(transport);
}
main().catch((err) => {
    console.error("Open Brain MCP server error:", err);
    process.exit(1);
});
package/package.json ADDED
@@ -0,0 +1,47 @@
{
  "name": "@rixter145/open-brain",
  "version": "1.0.0",
  "description": "MCP server for Open Brain: Postgres + pgvector shared memory for Cursor, Claude, and any MCP client",
  "type": "module",
  "main": "dist/index.js",
  "bin": {
    "open-brain": "dist/index.js"
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "prepare": "npm run build",
    "prepublishOnly": "npm run build",
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts",
    "test:env": "node scripts/test-env.mjs",
    "init-db": "node scripts/init-db.mjs"
  },
  "keywords": [
    "mcp",
    "open-brain",
    "pgvector",
    "embeddings"
  ],
  "license": "MIT",
  "engines": {
    "node": ">=18"
  },
  "dependencies": {
    "@google/genai": "^1.0.0",
    "@modelcontextprotocol/sdk": "^1.27.1",
    "dotenv": "^16.4.5",
    "openai": "^4.77.0",
    "pg": "^8.13.0",
    "pgvector": "^0.2.0",
    "zod": "^3.23.0"
  },
  "devDependencies": {
    "@types/node": "^22.0.0",
    "@types/pg": "^8.18.0",
    "tsx": "^4.19.0",
    "typescript": "^5.6.0"
  }
}