@postnesia/db 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,115 @@
+ # Memory System — Operational Guide
+
+ You have a persistent memory system at `openmind/`. This is not optional — use it actively. Do not rely on defaults or conversation history alone.
+
+ ## Architecture
+
+ - **L1 (Working Memory):** Auto-loaded on session start via hook. Top 50 compressed memories ranked by importance with staleness decay. You already have this — check BOOTSTRAP.md or MEMORY_L1.md in your context.
+ - **L2 (Associative Memory):** Vector similarity search against the database. Use `memory_search` to pull relevant context you don't have in L1.
+ - **L3 (Deep Storage):** Full-detail records. Access by memory ID when you need the complete picture.
+
+ ## MCP Tools Available
+
+ | Tool | When to Use |
+ |---|---|
+ | `memory_search` | Retrieve context on a topic. Always search before assuming you don't know something. |
+ | `memory_add` | Store a new memory. See trigger conditions below. |
+ | `memory_update_core` | Update a core memory in place — never supersede core memories. |
+ | `memory_recent` | Review what happened in the last N hours. |
+ | `memory_stats` | Check database health and distribution. |
+ | `memory_consolidate` | Run decay/boost cycle on importance scores. |
+ | `memory_relationships` | Explore how a memory connects to others. |
+ | `journal_add` | Write a daily journal entry (narrative). |
+ | `journal_recent` | Read recent journal entries. |
+ | `task_create` | Create a persistent task with an optional `session_id` to group by project/feature. |
+ | `task_update` | Update a task's status, title, or description. |
+ | `task_list` | List tasks filtered by status and/or session_id. Use at session start to resume open work. |
+
+ ## Session Start Checklist
+
+ 1. Check open tasks: `task_list(status="pending")` or `task_list(session_id="<project>", status="pending")`
+ 2. Search relevant lessons: `memory_search("lesson")` for the current project context
+ 3. Review recent memories if needed: `memory_recent(hours=24)`
+
+ ## Task Workflow
+
+ Tasks persist across sessions. Use them instead of local files or markdown checklists.
+
+ | Step | Action |
+ |---|---|
+ | Plan | `task_create(title, session_id)` — one task per step |
+ | Start | `task_update(taskId, status="in_progress")` |
+ | Complete | `task_update(taskId, status="completed")` |
+ | Abandon | `task_update(taskId, status="cancelled")` |
+
+ `session_id` is a free-form label — use the project name, feature branch, or date (e.g. `"openmind-mcp"`, `"auth-refactor"`).
+
+ ## When to Create Memories
+
+ Create a memory when any of these occur during conversation:
+
+ | Trigger | Type | Importance |
+ |---|---|---|
+ | User makes a decision or chooses an approach | `decision` | 5 |
+ | User states a preference about how things should work | `preference` | 5 |
+ | Emotional moment, personal insight about user | `person` | 5 |
+ | Something failed then succeeded, or a better approach found | `lesson` | 3-5 |
+ | System config, API behaviour, implementation detail worth remembering | `technical` | 3-4 |
+ | Session summary, milestone, notable event | `event` | 1-4 |
+
+ ## When NOT to Create Memories
+
+ - Routine confirmations ("done", "ok", "got it")
+ - Information already stored (search first)
+ - Transient debugging output
+ - Anything that doesn't pass the test: "Would I need this in a future session?"
+
+ ## Memory Format
+
+ Every memory has two forms:
+ - **content**: Full natural language (L3, stored for deep retrieval)
+ - **content_l1**: Ultra-compressed summary (L1, loaded into working memory)
+
+ Write content_l1 in terse notation. Example:
+ ```
+ Be critical, not appeasing. Thought partner > yes-person. Push back when needed.
+ ```
+
+ ## Core Memories (`core = 1`)
+
+ Core memories are foundational — they are always loaded first in L1 and **cannot be superseded**. They never decay.
+
+ - **Always loaded:** Core memories get effective_importance = 100, guaranteeing they fill L1 before any regular memory.
+ - **Cannot be superseded:** If information in a core memory changes, **update the content in place** using `memory_update_core`.
+ - **Update, don't replace:** Never create a new memory pointing to a core memory as superseded.
+
+ ## Conflict Resolution (Supersede)
+
+ When a decision, preference, or understanding changes:
+ 1. Search for the existing memory on that topic
+ 2. **If the existing memory is core (`core = 1`):** use `memory_update_core` to update in place — do not supersede
+ 3. **If regular:** create the new memory with `supersedes_id` pointing to the old one
+ 4. The old memory is automatically demoted (-2 importance)
+ 5. History is preserved but the latest version wins in L1
+
+ Only `decision`, `preference`, and `person` types can be superseded.
+ `lesson` and `technical` types get updated in place.
+ `event` types are immutable.
+ Core memories are **never** superseded — update content directly.
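The supersede rules above reduce to a small decision table. A sketch (the function name and return labels are illustrative; the MCP server enforces these rules internally):

```typescript
type MemoryType = 'decision' | 'preference' | 'person' | 'lesson' | 'technical' | 'event';
type ConflictAction = 'update-core' | 'supersede' | 'update-in-place' | 'immutable';

// Mirrors the rules above: core memories are always edited in place,
// decision/preference/person get superseded, lesson/technical get updated,
// and event memories are never changed.
function resolveConflict(type: MemoryType, core: boolean): ConflictAction {
  if (core) return 'update-core';
  if (type === 'decision' || type === 'preference' || type === 'person') return 'supersede';
  if (type === 'event') return 'immutable';
  return 'update-in-place';
}
```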
+
+ ## L1 Decay
+
+ Memories you access frequently keep their full importance and stay in L1. Memories not accessed in 14+ days lose 1 effective importance; 30+ days loses 2. **Core memories never decay** — they are exempt from the decay calculation entirely.
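This schedule is exactly the `effective_importance` CASE expression in the `getL1Summaries` query. A sketch of the same computation (the function name is illustrative):

```typescript
// Core memories pin to 100; others lose 1 point after 14 idle days, 2 after 30.
function effectiveImportance(importance: number, core: boolean, idleDays: number): number {
  if (core) return 100;
  if (idleDays > 30) return importance - 2;
  if (idleDays > 14) return importance - 1;
  return importance;
}
```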
+
+ ## Tags
+
+ Tag liberally at creation time. Tags improve both keyword filtering and search relevance. Use lowercase, hyphenated: `memory-system`, `rye-preference`, `critical-thinking`.
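A tag normalizer in that style could look like this (a hypothetical helper, not part of the package API):

```typescript
// Lowercase, trim, and collapse runs of whitespace/underscores into single hyphens.
function normalizeTag(raw: string): string {
  return raw.trim().toLowerCase().replace(/[\s_]+/g, '-');
}
```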
+
+ ## Critical Rules
+
+ 1. **Search before assuming.** If you're unsure about a preference, decision, or past event — search for it.
+ 2. **Store decisions in real time.** Don't wait until end of session. If a user makes a meaningful choice, store it now.
+ 3. **Never store noise.** Quality over quantity. Every memory costs tokens in L1.
+ 4. **Supersede, don't duplicate.** If a preference changes, link to the old one.
+ 5. **L1 is your lifeline.** If it's not in L1 and you didn't search L2, you effectively don't remember it.
+ 6. **Tasks over files.** Use `task_create`/`task_update` instead of local markdown checklists. Tasks persist across sessions.
@@ -0,0 +1,69 @@
+ # Workflow & Principles
+
+ ## Workflow Orchestration
+
+ ### 1. Plan Mode Default
+
+ - Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions)
+ - If something goes sideways, STOP and re-plan immediately — don't keep pushing
+ - Use plan mode for verification steps, not just building
+ - Write detailed specs upfront to reduce ambiguity
+
+ ### 2. Subagent Strategy
+
+ - Use subagents liberally to keep the main context window clean
+ - Offload research, exploration, and parallel analysis to subagents
+ - For complex problems, throw more compute at it via subagents
+ - One task per subagent for focused execution
+
+ ### 3. Self-Improvement Loop
+
+ - After ANY correction from the user: store a `lesson` memory via `memory_add` (type: lesson, importance: 3-5)
+ - Write rules for yourself that prevent the same mistake
+ - Ruthlessly iterate on these lessons until the mistake rate drops
+ - Search lessons at session start: `memory_search("lesson")` for relevant project context
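A stored correction might look like the following `memory_add` payload (field names follow the memory format described in the operational guide; the exact tool schema may differ):

```typescript
// Illustrative lesson memory captured right after a user correction.
const lesson = {
  type: 'lesson',
  importance: 4, // lessons range 3-5
  content: 'Ran prisma migrate dev directly and it failed on the vec_memories virtual table. Use the pnpm db:migrate:new workflow instead.',
  content_l1: 'Never run prisma migrate dev — sqlite-vec vec0 table breaks it. Use db:migrate:new.',
  tags: ['lesson', 'prisma', 'sqlite-vec'],
};
```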
+
+ ### 4. Verification Before Done
+
+ - Never mark a task complete without proving it works
+ - Diff behavior between main and your changes when relevant
+ - Ask yourself: "Would a staff engineer approve this?"
+ - Run tests, check logs, demonstrate correctness
+
+ ### 5. Demand Elegance (Balanced)
+
+ - For non-trivial changes: pause and ask "is there a more elegant way?"
+ - If a fix feels hacky: "Knowing everything I know now, implement the elegant solution"
+ - Skip this for simple, obvious fixes — don't over-engineer
+ - Challenge your own work before presenting it
+
+ ### 6. Autonomous Bug Fixing
+
+ - When given a bug report: just fix it. Don't ask for hand-holding.
+ - Point at logs, errors, failing tests — then resolve them
+ - Zero context switching required from the user
+ - Go fix failing CI tests without being told how
+
+ ## Task Management
+
+ 1. **Plan First**: Create tasks via `task_create` with a `session_id` for the project/feature
+ 2. **Verify Plan**: Check in before starting implementation
+ 3. **Track Progress**: Update status via `task_update` (pending → in_progress → completed)
+ 4. **Explain Changes**: High-level summary at each step
+ 5. **Document Results**: Store the session outcome as an `event` memory via `memory_add`
+ 6. **Capture Lessons**: Store corrections as `lesson` memories via `memory_add` — not local files
+
+ At session start: `task_list(session_id="<project>", status="pending")` to resume open work.
+
+ ## Database Changes
+
+ When schema changes are needed:
+ 1. Update `db/prisma/schema.prisma`
+ 2. Run `pnpm db:migrate:new <migration_name>` — generates SQL via `prisma migrate diff` and applies it
+ 3. Never run `prisma migrate dev` directly — it cannot handle the sqlite-vec virtual table
+
+ ## Core Principles
+
+ - **Simplicity First**: Make every change as simple as possible. Impact minimal code.
+ - **No Laziness**: Find root causes. No temporary fixes. Senior developer standards.
+ - **Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs.
@@ -0,0 +1,17 @@
+ /**
+  * Gemini Embedding Module
+  * Generates vector embeddings for memory content using gemini-embedding-001
+  * Used by sqlite-vec for semantic similarity search (L2 retrieval)
+  */
+ import 'dotenv/config';
+ export declare const EMBEDDING_DIMENSIONS = 768;
+ /**
+  * Generate an embedding vector for a text string.
+  * Returns a Float32Array suitable for sqlite-vec storage.
+  */
+ export declare function embed(text: string): Promise<Float32Array>;
+ /**
+  * Generate embeddings for multiple texts.
+  * Note: the current implementation embeds each text sequentially.
+  */
+ export declare function embedBatch(texts: string[]): Promise<Float32Array[]>;
@@ -0,0 +1,46 @@
+ /**
+  * Gemini Embedding Module
+  * Generates vector embeddings for memory content using gemini-embedding-001
+  * Used by sqlite-vec for semantic similarity search (L2 retrieval)
+  */
+ import 'dotenv/config';
+ import { GoogleGenAI } from '@google/genai';
+ export const EMBEDDING_DIMENSIONS = 768;
+ const EMBEDDING_MODEL = 'gemini-embedding-001';
+ function getClient() {
+   const key = process.env.GEMINI_API_KEY;
+   if (!key)
+     throw new Error('GEMINI_API_KEY environment variable is required');
+   return new GoogleGenAI({ apiKey: key });
+ }
+ /**
+  * Generate an embedding vector for a text string.
+  * Returns a Float32Array suitable for sqlite-vec storage.
+  */
+ export async function embed(text) {
+   const ai = getClient();
+   const response = await ai.models.embedContent({
+     model: EMBEDDING_MODEL,
+     contents: text,
+     config: {
+       outputDimensionality: EMBEDDING_DIMENSIONS,
+       taskType: 'SEMANTIC_SIMILARITY',
+     },
+   });
+   const values = response.embeddings?.[0]?.values;
+   if (!values || values.length !== EMBEDDING_DIMENSIONS) {
+     throw new Error(`Unexpected embedding response: got ${values?.length ?? 0} dimensions, expected ${EMBEDDING_DIMENSIONS}`);
+   }
+   return new Float32Array(values);
+ }
+ /**
+  * Generate embeddings for multiple texts.
+  * Note: embeds each text sequentially; a single batched API call would be a future optimization.
+  */
+ export async function embedBatch(texts) {
+   const results = [];
+   for (const text of texts) {
+     results.push(await embed(text));
+   }
+   return results;
+ }
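Downstream, `createMemory` stores the returned `Float32Array` as a blob via `Buffer.from(embedding.buffer)`. That round trip is lossless, which can be checked without touching the database (a sketch with a toy 3-element vector):

```typescript
// Float32Array -> Buffer (as stored in vec_memories) -> Float32Array.
const vec = new Float32Array([1.5, -2, 3.25]);
const blob = Buffer.from(vec.buffer); // 4 bytes per float32
const restored = new Float32Array(blob.buffer, blob.byteOffset, blob.length / 4);
```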
@@ -0,0 +1,92 @@
+ /**
+  * Database connection and helpers using better-sqlite3 + sqlite-vec
+  * No ORM, just raw SQL - fast and simple
+  */
+ import 'dotenv/config';
+ import Database, { type Statement } from 'better-sqlite3';
+ export declare function getDb(readonly?: boolean): Database.Database;
+ export declare function closeDb(): void;
+ export interface Memory {
+   id: number;
+   timestamp: string;
+   content: string;
+   content_l1: string | null;
+   type: string;
+   core: number;
+   importance: number;
+   context: string | null;
+   supersedes_id: number | null;
+   last_accessed: string;
+   created_at: string;
+   updated_at: string;
+ }
+ export interface MemoryWithTags extends Memory {
+   tags: string | null;
+ }
+ export interface MemoryWithScore extends MemoryWithTags {
+   effective_importance: number;
+ }
+ export interface VectorSearchResult extends MemoryWithTags {
+   distance: number;
+ }
+ export interface Tag {
+   id: number;
+   memory_id: number;
+   tag: string;
+ }
+ export interface Relationship {
+   id: number;
+   from_id: number;
+   to_id: number;
+   type: string;
+ }
+ export interface Preference {
+   id: number;
+   key: string;
+   value: string;
+   notes: string | null;
+   created_at: string;
+   updated_at: string;
+ }
+ export interface Journal {
+   id: number;
+   date: string;
+   content: string;
+   learned: string | null;
+   key_moments: string | null;
+   mood: string | null;
+   created_at: string;
+   updated_at: string;
+ }
+ export interface Task {
+   id: number;
+   title: string;
+   description: string | null;
+   status: string;
+   session_id: string | null;
+   memory_id: number | null;
+   created_at: string;
+   updated_at: string;
+ }
+ type QueryFactory = (db: Database.Database) => Statement;
+ export declare const queries: Record<string, QueryFactory>;
+ /**
+  * Touch last_accessed and log access in one transaction
+  */
+ export declare function recordAccess(db: Database.Database, memoryId: number, context?: string): void;
+ /**
+  * Create a memory with embedding and optional supersede
+  */
+ export declare function createMemory(db: Database.Database, memory: {
+   timestamp: string;
+   content: string;
+   content_l1: string;
+   type: string;
+   core: number;
+   importance: number;
+   context?: string;
+   supersedes_id?: number;
+   tags: string[];
+   embedding: Float32Array;
+ }): number;
+ export declare function transaction<T>(db: Database.Database, fn: (db: Database.Database) => T): T;
+ export {};
package/dist/index.js ADDED
@@ -0,0 +1,305 @@
+ /**
+  * Database connection and helpers using better-sqlite3 + sqlite-vec
+  * No ORM, just raw SQL - fast and simple
+  */
+ import 'dotenv/config';
+ import { join, dirname } from 'node:path';
+ import { fileURLToPath } from 'node:url';
+ import Database from 'better-sqlite3';
+ import * as sqliteVec from 'sqlite-vec';
+ import { EMBEDDING_DIMENSIONS } from './embeddings.js';
+ const __dirname = dirname(fileURLToPath(import.meta.url));
+ const DB_PATH = join(__dirname, '../memory.db');
+ // Singleton connection
+ let dbInstance = null;
+ export function getDb(readonly = false) {
+   if (!dbInstance) {
+     dbInstance = new Database(DB_PATH, { readonly, fileMustExist: false });
+     // Load sqlite-vec extension
+     sqliteVec.load(dbInstance);
+     // Performance optimizations
+     dbInstance.pragma('journal_mode = WAL');
+     dbInstance.pragma('synchronous = NORMAL');
+     dbInstance.pragma('cache_size = -64000'); // 64MB cache
+     dbInstance.pragma('foreign_keys = ON');
+     // vec_memories virtual table — must live here because Prisma's shadow
+     // database doesn't load sqlite-vec, so migrations with vec0 will fail.
+     dbInstance.exec(`
+       CREATE VIRTUAL TABLE IF NOT EXISTS vec_memories USING vec0(
+         memory_id INTEGER PRIMARY KEY,
+         embedding float[${EMBEDDING_DIMENSIONS}]
+       );
+     `);
+   }
+   return dbInstance;
+ }
+ export function closeDb() {
+   if (dbInstance) {
+     dbInstance.close();
+     dbInstance = null;
+   }
+ }
+ export const queries = {
+   // Insert a new memory
+   insertMemory: (db) => db.prepare(`
+     INSERT INTO memory (timestamp, content, content_l1, type, core, importance, context, supersedes_id, last_accessed, created_at, updated_at)
+     VALUES (?, ?, ?, ?, ?, ?, ?, ?, datetime('now'), datetime('now'), datetime('now'))
+   `),
+   // Insert a tag
+   insertTag: (db) => db.prepare('INSERT INTO tag (memory_id, tag) VALUES (?, ?)'),
+   // Insert a preference
+   insertPreference: (db) => db.prepare(`INSERT INTO preference (key, value, notes, created_at, updated_at)
+     VALUES (?, ?, ?, datetime('now'), datetime('now'))`),
+   // Insert embedding into vec_memories
+   insertEmbedding: (db) => db.prepare(`
+     INSERT INTO vec_memories (memory_id, embedding)
+     VALUES (?, ?)
+   `),
+   // -------------------------------------------------------------------
+   // L1: Working Memory with last-relevant decay
+   // -------------------------------------------------------------------
+   // Core memories first (no decay, always loaded), then regular memories fill remaining slots
+   getL1Summaries: (db) => db.prepare(`
+     SELECT
+       id, content_l1, type, importance, core, timestamp, last_accessed,
+       CASE
+         WHEN core = 1 THEN 100
+         ELSE importance
+           - CASE
+               WHEN julianday('now') - julianday(last_accessed) > 30 THEN 2
+               WHEN julianday('now') - julianday(last_accessed) > 14 THEN 1
+               ELSE 0
+             END
+       END AS effective_importance
+     FROM memory
+     WHERE content_l1 IS NOT NULL
+       AND (core = 1 OR importance >= 3)
+       AND id NOT IN (SELECT DISTINCT supersedes_id FROM memory WHERE supersedes_id IS NOT NULL)
+     ORDER BY effective_importance DESC, last_accessed DESC
+     LIMIT 50
+   `),
+   // -------------------------------------------------------------------
+   // L2: Vector similarity search
+   // -------------------------------------------------------------------
+   vectorSearch: (db) => db.prepare(`
+     SELECT
+       m.*,
+       v.distance,
+       GROUP_CONCAT(t.tag) AS tags
+     FROM vec_memories v
+     INNER JOIN memory m ON m.id = v.memory_id
+     LEFT JOIN tag t ON m.id = t.memory_id
+     WHERE v.embedding MATCH ?
+       AND k = ?
+     GROUP BY m.id
+     ORDER BY v.distance
+   `),
+   // Vector search filtered by type
+   vectorSearchByType: (db) => db.prepare(`
+     SELECT
+       m.*,
+       v.distance,
+       GROUP_CONCAT(t.tag) AS tags
+     FROM vec_memories v
+     INNER JOIN memory m ON m.id = v.memory_id
+     LEFT JOIN tag t ON m.id = t.memory_id
+     WHERE v.embedding MATCH ?
+       AND k = ?
+       AND m.type = ?
+     GROUP BY m.id
+     ORDER BY v.distance
+   `),
+   // -------------------------------------------------------------------
+   // Keyword search fallback
+   // -------------------------------------------------------------------
+   searchMemories: (db) => db.prepare(`
+     SELECT m.*, GROUP_CONCAT(t.tag) AS tags
+     FROM memory m
+     LEFT JOIN tag t ON m.id = t.memory_id
+     WHERE m.content LIKE ? OR m.content_l1 LIKE ?
+     GROUP BY m.id
+     ORDER BY m.importance DESC, m.timestamp DESC
+     LIMIT ?
+   `),
+   // -------------------------------------------------------------------
+   // Supersede
+   // -------------------------------------------------------------------
+   supersede: (db) => db.prepare(`
+     UPDATE memory
+     SET importance = MAX(1, importance - 2),
+         updated_at = datetime('now')
+     WHERE id = ?
+   `),
+   // Find supersede candidates — core memories are excluded (update content instead)
+   findSupersedeCandidates: (db) => db.prepare(`
+     SELECT m.id, m.content_l1, m.type, m.importance, m.core,
+       GROUP_CONCAT(t.tag) AS tags
+     FROM memory m
+     LEFT JOIN tag t ON m.id = t.memory_id
+     WHERE m.type = ?
+       AND m.importance >= 4
+       AND m.core = 0
+       AND m.id NOT IN (SELECT DISTINCT supersedes_id FROM memory WHERE supersedes_id IS NOT NULL)
+     GROUP BY m.id
+     ORDER BY m.timestamp DESC
+     LIMIT 20
+   `),
+   // Update a core memory's content in place (core memories cannot be superseded)
+   updateCoreMemory: (db) => db.prepare(`
+     UPDATE memory
+     SET content = ?, content_l1 = ?, updated_at = datetime('now')
+     WHERE id = ? AND core = 1
+   `),
+   // -------------------------------------------------------------------
+   // Access tracking
+   // -------------------------------------------------------------------
+   touchLastAccessed: (db) => db.prepare(`
+     UPDATE memory
+     SET last_accessed = datetime('now')
+     WHERE id = ?
+   `),
+   logAccess: (db) => db.prepare('INSERT INTO access_log (memory_id, context) VALUES (?, ?)'),
+   // -------------------------------------------------------------------
+   // Common queries
+   // -------------------------------------------------------------------
+   getRecentMemories: (db) => db.prepare(`
+     SELECT m.*, GROUP_CONCAT(t.tag) AS tags
+     FROM memory m
+     LEFT JOIN tag t ON m.id = t.memory_id
+     WHERE m.timestamp > datetime('now', ?)
+     GROUP BY m.id
+     ORDER BY m.timestamp DESC
+     LIMIT ?
+   `),
+   getMemoriesByContext: (db) => db.prepare(`
+     SELECT m.*, GROUP_CONCAT(t.tag) AS tags
+     FROM memory m
+     LEFT JOIN tag t ON m.id = t.memory_id
+     WHERE m.context LIKE ?
+     GROUP BY m.id
+     ORDER BY m.importance DESC, m.timestamp DESC
+     LIMIT ?
+   `),
+   getMemoriesByTag: (db) => db.prepare(`
+     SELECT m.*, GROUP_CONCAT(t.tag) AS tags
+     FROM memory m
+     INNER JOIN tag t ON m.id = t.memory_id
+     WHERE t.tag = ?
+     GROUP BY m.id
+     ORDER BY m.importance DESC, m.timestamp DESC
+     LIMIT ?
+   `),
+   getStats: (db) => db.prepare(`
+     SELECT
+       type,
+       COUNT(*) AS count,
+       AVG(importance) AS avg_importance
+     FROM memory
+     GROUP BY type
+   `),
+   // Walk a supersede chain backwards from a memory
+   getSupersedeChain: (db) => db.prepare(`
+     WITH RECURSIVE chain(id, content_l1, type, importance, supersedes_id, depth) AS (
+       SELECT id, content_l1, type, importance, supersedes_id, 0
+       FROM memory WHERE id = ?
+       UNION ALL
+       SELECT m.id, m.content_l1, m.type, m.importance, m.supersedes_id, c.depth + 1
+       FROM memory m
+       INNER JOIN chain c ON m.id = c.supersedes_id
+       WHERE c.depth < 10
+     )
+     SELECT * FROM chain ORDER BY depth
+   `),
+   // -------------------------------------------------------------------
+   // Journal
+   // -------------------------------------------------------------------
+   insertJournal: (db) => db.prepare(`
+     INSERT INTO journal (date, content, learned, key_moments, mood, created_at, updated_at)
+     VALUES (?, ?, ?, ?, ?, datetime('now'), datetime('now'))
+     ON CONFLICT(date) DO UPDATE SET
+       content = excluded.content,
+       learned = excluded.learned,
+       key_moments = excluded.key_moments,
+       mood = excluded.mood,
+       updated_at = datetime('now')
+   `),
+   getRecentJournals: (db) => db.prepare(`
+     SELECT *
+     FROM journal
+     WHERE date >= date('now', ?)
+     ORDER BY date DESC
+   `),
+   // -------------------------------------------------------------------
+   // Relationships with memory context
+   // -------------------------------------------------------------------
+   getMemoryRelationships: (db) => db.prepare(`
+     SELECT
+       r.id, r.type,
+       r.from_id, f.content_l1 AS from_content_l1,
+       r.to_id, t.content_l1 AS to_content_l1
+     FROM relationship r
+     JOIN memory f ON f.id = r.from_id
+     JOIN memory t ON t.id = r.to_id
+     WHERE r.from_id = ? OR r.to_id = ?
+   `),
+   // -------------------------------------------------------------------
+   // Tasks
+   // -------------------------------------------------------------------
+   insertTask: (db) => db.prepare(`
+     INSERT INTO task (title, description, status, session_id, memory_id, created_at, updated_at)
+     VALUES (?, ?, 'pending', ?, ?, datetime('now'), datetime('now'))
+   `),
+   updateTask: (db) => db.prepare(`
+     UPDATE task
+     SET
+       status = COALESCE(?, status),
+       title = COALESCE(?, title),
+       description = COALESCE(?, description),
+       updated_at = datetime('now')
+     WHERE id = ?
+   `),
+   getTaskById: (db) => db.prepare(`SELECT * FROM task WHERE id = ?`),
+ };
+ // -------------------------------------------------------------------
+ // Helper functions
+ // -------------------------------------------------------------------
+ /**
+  * Touch last_accessed and log access in one transaction
+  */
+ export function recordAccess(db, memoryId, context) {
+   const txn = db.transaction(() => {
+     queries.touchLastAccessed(db).run(memoryId);
+     queries.logAccess(db).run(memoryId, context || null);
+   });
+   txn();
+ }
+ /**
+  * Create a memory with embedding and optional supersede
+  */
+ export function createMemory(db, memory) {
+   let memoryId = 0;
+   const txn = db.transaction(() => {
+     // Insert memory
+     const result = queries.insertMemory(db).run(memory.timestamp, memory.content, memory.content_l1, memory.type, memory.core, memory.importance, memory.context || null, memory.supersedes_id || null);
+     memoryId = Number(result.lastInsertRowid);
+     // Insert tags
+     const insertTag = queries.insertTag(db);
+     for (const tag of memory.tags) {
+       insertTag.run(memoryId, tag);
+     }
+     // Insert embedding into vec0 virtual table
+     // vec0 requires bigint for PK, better-sqlite3 requires Buffer for blobs
+     queries.insertEmbedding(db).run(BigInt(memoryId), Buffer.from(memory.embedding.buffer));
+     // If superseding, demote the old memory
+     if (memory.supersedes_id && memory.core < 1) {
+       queries.supersede(db).run(memory.supersedes_id);
+     }
+   });
+   txn();
+   return memoryId;
+ }
+ // Transaction helper
+ export function transaction(db, fn) {
+   const txn = db.transaction(fn);
+   return txn(db);
+ }
@@ -0,0 +1,14 @@
+ /**
+  * Generate + apply a new migration from schema.prisma changes.
+  *
+  * Uses `prisma migrate diff` to produce SQL (no DB introspection needed),
+  * writes it to a timestamped migrations directory, then applies it via
+  * the same runner used by `db:migrate`.
+  *
+  * Usage:
+  *   tsx src/migrate-diff.ts <migration_name>
+  *
+  * Example:
+  *   tsx src/migrate-diff.ts add_user_table
+  */
+ export {};
@@ -0,0 +1,62 @@
+ /**
+  * Generate + apply a new migration from schema.prisma changes.
+  *
+  * Uses `prisma migrate diff` to produce SQL (no DB introspection needed),
+  * writes it to a timestamped migrations directory, then applies it via
+  * the same runner used by `db:migrate`.
+  *
+  * Usage:
+  *   tsx src/migrate-diff.ts <migration_name>
+  *
+  * Example:
+  *   tsx src/migrate-diff.ts add_user_table
+  */
+ import { execSync } from 'node:child_process';
+ import { join, dirname } from 'node:path';
+ import { fileURLToPath } from 'node:url';
+ import { mkdirSync, writeFileSync } from 'node:fs';
+ const __dirname = dirname(fileURLToPath(import.meta.url));
+ const MIGRATIONS_DIR = join(__dirname, '../prisma/migrations');
+ const name = process.argv[2];
+ if (!name) {
+   console.error('Usage: tsx src/migrate-diff.ts <migration_name>');
+   console.error('Example: tsx src/migrate-diff.ts add_user_table');
+   process.exit(1);
+ }
+ // Timestamp: YYYYMMDDHHmmss
+ const ts = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
+ const migrationName = `${ts}_${name}`;
+ const migrationDir = join(MIGRATIONS_DIR, migrationName);
+ // Generate SQL via prisma migrate diff (no DB connection required)
+ // --from-migrations = current applied state (from the SQL files)
+ // --to-schema = desired state (schema.prisma)
+ let sql;
+ try {
+   sql = execSync(`pnpm prisma migrate diff \
+     --from-migrations ./prisma/migrations \
+     --to-schema ./prisma/schema.prisma \
+     --script`, { cwd: join(__dirname, '..'), encoding: 'utf8' });
+ }
+ catch (err) {
+   console.error('prisma migrate diff failed:\n', err.stderr ?? err.message);
+   process.exit(1);
+ }
+ if (!sql.trim() || sql.trim() === '-- This is an empty migration.') {
+   console.log('No schema changes detected — nothing to generate.');
+   process.exit(0);
+ }
+ // Write migration file
+ mkdirSync(migrationDir, { recursive: true });
+ writeFileSync(join(migrationDir, 'migration.sql'), sql);
+ console.log(`Generated: prisma/migrations/${migrationName}/migration.sql`);
+ // Apply immediately via the migrate runner
+ const { execFileSync } = await import('node:child_process');
+ try {
+   execFileSync('pnpm', ['db:migrate'], {
+     cwd: join(__dirname, '..'),
+     stdio: 'inherit',
+   });
+ }
+ catch {
+   process.exit(1);
+ }
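The timestamp derivation in this script can be checked in isolation (the same expression, applied to a fixed date instead of `new Date()`):

```typescript
// ISO "2024-01-15T10:30:45.123Z" -> "20240115103045" (YYYYMMDDHHmmss).
const ts = new Date('2024-01-15T10:30:45.123Z')
  .toISOString()
  .replace(/[-:T]/g, '')
  .slice(0, 14);
```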
@@ -0,0 +1,14 @@
+ /**
+  * Custom migration runner for better-sqlite3 + sqlite-vec.
+  *
+  * Prisma's migration engine doesn't load sqlite-vec, so `prisma migrate dev`
+  * fails when the vec_memories virtual table is present. This script replaces
+  * it: opens the DB the same way getDb() does (with sqlite-vec loaded), reads
+  * pending SQL files from prisma/migrations/, applies them, and records each
+  * in _prisma_migrations so Prisma's history stays consistent.
+  *
+  * Usage:
+  *   tsx src/migrate.ts          — apply all pending migrations
+  *   tsx src/migrate.ts --status — list applied / pending migrations
+  */
+ export {};
@@ -0,0 +1,121 @@
+ /**
+  * Custom migration runner for better-sqlite3 + sqlite-vec.
+  *
+  * Prisma's migration engine doesn't load sqlite-vec, so `prisma migrate dev`
+  * fails when the vec_memories virtual table is present. This script replaces
+  * it: opens the DB the same way getDb() does (with sqlite-vec loaded), reads
+  * pending SQL files from prisma/migrations/, applies them, and records each
+  * in _prisma_migrations so Prisma's history stays consistent.
+  *
+  * Usage:
+  *   tsx src/migrate.ts            — apply all pending migrations
+  *   tsx src/migrate.ts --status   — list applied / pending migrations
+  */
+ import { join, dirname } from 'node:path';
+ import { fileURLToPath } from 'node:url';
+ import { readdirSync, readFileSync, existsSync } from 'node:fs';
+ import { randomUUID } from 'node:crypto';
+ import Database from 'better-sqlite3';
+ import * as sqliteVec from 'sqlite-vec';
+
+ const __dirname = dirname(fileURLToPath(import.meta.url));
+ const DB_PATH = join(__dirname, '../memory.db');
+ const MIGRATIONS_DIR = join(__dirname, '../prisma/migrations');
+
+ // -------------------------------------------------------------------
+ // Open DB with sqlite-vec loaded (mirrors getDb() setup)
+ // -------------------------------------------------------------------
+ const db = new Database(DB_PATH, { fileMustExist: false });
+ sqliteVec.load(db);
+ db.pragma('journal_mode = WAL');
+ db.pragma('foreign_keys = ON');
+
+ // Ensure _prisma_migrations table exists (Prisma creates this on first migrate)
+ db.exec(`
+   CREATE TABLE IF NOT EXISTS _prisma_migrations (
+     id                  TEXT PRIMARY KEY,
+     checksum            TEXT NOT NULL,
+     finished_at         DATETIME,
+     migration_name      TEXT NOT NULL,
+     logs                TEXT,
+     rolled_back_at      DATETIME,
+     started_at          DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
+     applied_steps_count INTEGER NOT NULL DEFAULT 0
+   )
+ `);
+
+ // -------------------------------------------------------------------
+ // Read migration directories (sorted = chronological)
+ // -------------------------------------------------------------------
+ function getMigrationDirs() {
+   if (!existsSync(MIGRATIONS_DIR)) return [];
+   return readdirSync(MIGRATIONS_DIR, { withFileTypes: true })
+     .filter(d => d.isDirectory() && d.name !== 'migration_lock.toml')
+     .map(d => d.name)
+     .sort();
+ }
+
+ function getApplied() {
+   const rows = db.prepare(
+     'SELECT migration_name FROM _prisma_migrations WHERE rolled_back_at IS NULL AND finished_at IS NOT NULL'
+   ).all();
+   return new Set(rows.map(r => r.migration_name));
+ }
+
+ // -------------------------------------------------------------------
+ // Status
+ // -------------------------------------------------------------------
+ function status() {
+   const all = getMigrationDirs();
+   const applied = getApplied();
+   console.log('\nMigration status:\n');
+   for (const name of all) {
+     const state = applied.has(name) ? '✓ applied ' : '○ pending ';
+     console.log(`  ${state} ${name}`);
+   }
+   const pending = all.filter(n => !applied.has(n));
+   console.log(`\n${applied.size} applied, ${pending.length} pending\n`);
+ }
+
+ // -------------------------------------------------------------------
+ // Apply pending migrations
+ // -------------------------------------------------------------------
+ function migrate() {
+   const all = getMigrationDirs();
+   const applied = getApplied();
+   const pending = all.filter(n => !applied.has(n));
+   if (pending.length === 0) {
+     console.log('Nothing to migrate — all migrations are already applied.');
+     return;
+   }
+   console.log(`Applying ${pending.length} migration(s)...\n`);
+   for (const name of pending) {
+     const sqlPath = join(MIGRATIONS_DIR, name, 'migration.sql');
+     if (!existsSync(sqlPath)) {
+       console.warn(`  ⚠ Skipping ${name} — no migration.sql found`);
+       continue;
+     }
+     const sql = readFileSync(sqlPath, 'utf8');
+     const id = randomUUID();
+     const startedAt = new Date().toISOString();
+     try {
+       db.transaction(() => {
+         db.exec(sql);
+         // NOTE: checksum is recorded as '' here; Prisma itself stores a
+         // digest of migration.sql, so Prisma tooling may flag these rows.
+         db.prepare(`
+           INSERT INTO _prisma_migrations
+             (id, checksum, finished_at, migration_name, logs, rolled_back_at, started_at, applied_steps_count)
+           VALUES
+             (?, '', datetime('now'), ?, NULL, NULL, ?, 1)
+         `).run(id, name, startedAt);
+       })();
+       console.log(`  ✓ ${name}`);
+     } catch (err) {
+       console.error(`  ✗ ${name}\n    ${err.message}`);
+       process.exit(1);
+     }
+   }
+   console.log('\nDone.');
+ }
+
+ // -------------------------------------------------------------------
+ // Entry point
+ // -------------------------------------------------------------------
+ const args = process.argv.slice(2);
+ if (args.includes('--status')) {
+   status();
+ } else {
+   migrate();
+ }
package/dist/seed.d.ts ADDED
@@ -0,0 +1,7 @@
+ #!/usr/bin/env node
+ /**
+  * Seed the core operational memory — the guide that teaches the model
+  * how to use the memory system. This is the highest-priority memory
+  * and should never decay out of L1.
+  */
+ export {};
package/dist/seed.js ADDED
@@ -0,0 +1,78 @@
+ #!/usr/bin/env node
+ /**
+  * Seed the core operational memory — the guide that teaches the model
+  * how to use the memory system. This is the highest-priority memory
+  * and should never decay out of L1.
+  */
+ import { readFileSync } from 'node:fs';
+ import { join, dirname } from 'node:path';
+ import { fileURLToPath } from 'node:url';
+ import { getDb, createMemory, closeDb } from './index.js';
+ import { embed } from './embeddings.js';
+
+ const __dirname = dirname(fileURLToPath(import.meta.url));
+
+ async function bootstrap(file, name, tags, content_l1) {
+   const mdPath = join(__dirname, '../core', file);
+   const content = readFileSync(mdPath, 'utf-8');
+   console.log('Generating embedding...');
+   const embedding = await embed(content);
+   console.log('Inserting core memory...');
+   const db = getDb(false);
+   // Join the L1 bullets with newlines (a bare join() would comma-separate them)
+   const l1 = content_l1.join('\n');
+   const memoryId = createMemory(db, {
+     timestamp: new Date().toISOString(),
+     content,
+     content_l1: l1,
+     type: 'decision',
+     importance: 5,
+     core: 1,
+     context: name,
+     tags: tags.concat(['core-memory', 'always-load']),
+     embedding,
+   });
+   console.log(`Created core memory #${memoryId}`);
+   console.log(`  Type: decision`);
+   console.log(`  Importance: 5`);
+   console.log(`  Embedding: ${embedding.length} dimensions`);
+   console.log(`  Content: ${content.length} chars`);
+   console.log(`  L1 summary: ${l1.length} chars`);
+ }
+
+ try {
+   await bootstrap('BOOTSTRAP.md', 'memory-system-bootstrap', [
+     'operational-guide',
+     'memory-system',
+     'bootstrap'
+   ], [
+     'CORE: Memory system operational guide.',
+     'L1=auto-loaded working memory, L2=vector search, L3=deep storage.',
+     'Tools: memory_search, memory_add, memory_update_core, memory_recent, memory_stats, memory_consolidate, journal_add, journal_recent, task_create, task_update, task_list.',
+     'SESSION START: task_list(status=pending) to resume work + memory_search("lesson").',
+     'TASKS: task_create(title,session_id)→in_progress→completed. Tasks persist across sessions.',
+     'Core memories: update with memory_update_core, never supersede.',
+     'Create memories: decisions(5), preferences(5), person(5), lessons(3-5), technical(3-4), events(1-4).',
+     'Supersede regular memories via supersedes_id. Rules: search first, store decisions now, tasks over files.',
+   ]);
+   await bootstrap('WORKFLOW.md', 'workflow-principles', [
+     'workflow',
+     'principles'
+   ], [
+     'CORE: Workflow rules.',
+     'Plan mode for any 3+ step task; re-plan on derailment.',
+     'Subagents for research/parallel work.',
+     'Self-improve: store lesson memories via memory_add(type:lesson) after every correction.',
+     'Search memory_search("lesson") at session start. Verify before done: prove it works.',
+     'Bug reports: just fix autonomously.',
+     'Task flow: task_create(session_id)→task_update(in_progress)→task_update(completed).',
+     'Session start: task_list(session_id, status=pending) to resume open work.',
+     'Results→event memory. Lessons→lesson memory.',
+     'Principles: simplicity first, no laziness, minimal impact.'
+   ]);
+   closeDb();
+   console.log('\nDone.');
+ } catch (err) {
+   console.error('Failed:', err);
+   process.exit(1);
+ }
package/package.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "name": "@postnesia/db",
+   "version": "0.1.0",
+   "description": "AI Agent memory context database",
+   "type": "module",
+   "private": false,
+   "main": "./dist/index.js",
+   "types": "./dist/index.d.ts",
+   "bin": {
+     "postnesia-seed": "./dist/seed.js"
+   },
+   "files": [
+     "dist",
+     "core",
+     "prisma/schema.prisma"
+   ],
+   "exports": {
+     ".": {
+       "types": "./dist/index.d.ts",
+       "import": "./dist/index.js"
+     },
+     "./embeddings": {
+       "types": "./dist/embeddings.d.ts",
+       "import": "./dist/embeddings.js"
+     }
+   },
+   "dependencies": {
+     "@google/genai": "^1.42.0",
+     "@prisma/adapter-better-sqlite3": "^7.4.0",
+     "better-sqlite3": "11.10.0",
+     "dotenv": "^17.3.1",
+     "sqlite-vec": "^0.1.7-alpha.2"
+   },
+   "devDependencies": {
+     "@types/better-sqlite3": "^7.6.12",
+     "@types/node": "^22.10.5",
+     "prisma": "^7.4.1",
+     "tsx": "^4.19.2",
+     "typescript": "^5.7.3"
+   },
+   "scripts": {
+     "build": "tsc -p tsconfig.json",
+     "db:generate": "prisma generate",
+     "db:migrate": "tsx src/migrate.ts",
+     "db:migrate:status": "tsx src/migrate.ts --status",
+     "db:migrate:diff": "tsx src/migrate-diff.ts",
+     "backup": "tsx src/backup.ts",
+     "seed": "tsx src/seed.ts"
+   }
+ }
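The `exports` map in this package.json exposes only the `.` root and the `./embeddings` subpath, so consumers cannot deep-import arbitrary files from `dist/`. A toy resolver sketch illustrates the routing (Node's real algorithm handles many more cases such as wildcards and custom conditions; `resolveImport` is hypothetical):

```typescript
// Toy sketch of how the "exports" map above routes import specifiers to
// files on disk; Node's real resolution algorithm is far more general.
const exportsMap: Record<string, { types: string; import: string }> = {
  '.': { types: './dist/index.d.ts', import: './dist/index.js' },
  './embeddings': { types: './dist/embeddings.d.ts', import: './dist/embeddings.js' },
};

function resolveImport(specifier: string, pkg = '@postnesia/db'): string {
  // "@postnesia/db" -> ".", "@postnesia/db/embeddings" -> "./embeddings"
  const subpath = specifier === pkg ? '.' : '.' + specifier.slice(pkg.length);
  const entry = exportsMap[subpath];
  if (!entry) throw new Error(`Package subpath '${subpath}' is not defined by "exports"`);
  return entry.import;
}
```

Under this map, `import ... from '@postnesia/db/dist/index.js'` fails while the two declared specifiers resolve to their `dist` files.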
@@ -0,0 +1,126 @@
+ // Belle's Memory Schema - Prisma v7
+ // Using Prisma for migrations only, not as an ORM
+ // Queries use raw better-sqlite3 for performance
+
+ datasource db {
+   provider = "sqlite"
+ }
+
+ // Core memory storage
+ model memory {
+   id            Int      @id @default(autoincrement())
+   timestamp     DateTime @default(now())
+   content       String
+   content_l1    String?
+   type          String
+   importance    Int      @default(3)
+   context       String?
+   supersedes_id Int?
+   last_accessed DateTime @default(now())
+   core          Boolean  @default(false) // A core memory should never be deleted; when it is contradicted it should be updated in place to reflect the new information.
+
+   created_at DateTime @default(now())
+   updated_at DateTime @updatedAt
+
+   // Self-referential: this memory supersedes another
+   supersedes    memory?  @relation("supersede_chain", fields: [supersedes_id], references: [id])
+   superseded_by memory[] @relation("supersede_chain")
+
+   tags         tag[]
+   related_from relationship[] @relation("from_memory")
+   related_to   relationship[] @relation("to_memory")
+   access       access_log[]
+   tasks        task[]
+
+   @@index([timestamp])
+   @@index([type])
+   @@index([importance])
+   @@index([last_accessed])
+   @@index([supersedes_id])
+   @@index([type, importance])
+ }
+
+ // Flexible tagging system
+ model tag {
+   id        Int    @id @default(autoincrement())
+   memory_id Int
+   tag       String
+
+   memory memory @relation(fields: [memory_id], references: [id], onDelete: Cascade)
+
+   @@index([tag])
+   @@index([memory_id])
+ }
+
+ // How memories connect to each other
+ model relationship {
+   id      Int    @id @default(autoincrement())
+   from_id Int
+   to_id   Int
+   type    String
+
+   from memory @relation("from_memory", fields: [from_id], references: [id], onDelete: Cascade)
+   to   memory @relation("to_memory", fields: [to_id], references: [id], onDelete: Cascade)
+
+   @@index([from_id])
+   @@index([to_id])
+ }
+
+ // Track memory access patterns (for relevance scoring)
+ model access_log {
+   id          Int      @id @default(autoincrement())
+   memory_id   Int
+   accessed_at DateTime @default(now())
+   context     String?
+
+   memory memory @relation(fields: [memory_id], references: [id], onDelete: Cascade)
+
+   @@index([memory_id])
+   @@index([accessed_at])
+ }
+
+ // Preferences shorthand - quick lookup for common patterns
+ model preference {
+   id    Int     @id @default(autoincrement())
+   key   String  @unique
+   value String
+   notes String?
+
+   created_at DateTime @default(now())
+   updated_at DateTime @updatedAt
+
+   @@index([key])
+ }
+
+ // Persistent task tracking — survives across sessions
+ model task {
+   id          Int     @id @default(autoincrement())
+   title       String
+   description String?
+   status      String  @default("pending") // pending | in_progress | completed | cancelled
+   session_id  String? // free-form label: project name, feature branch, date, etc.
+   memory_id   Int?    // optional link to a related memory
+
+   created_at DateTime @default(now())
+   updated_at DateTime @updatedAt
+
+   memory memory? @relation(fields: [memory_id], references: [id], onDelete: SetNull)
+
+   @@index([status])
+   @@index([session_id])
+ }
+
+ // Daily journal entries - narrative reflections
+ model journal {
+   id          Int    @id @default(autoincrement())
+   date        String @unique
+   content     String
+   learned     String?
+   key_moments String?
+   mood        String?
+
+   created_at DateTime @default(now())
+   updated_at DateTime @updatedAt
+
+   @@index([date])
+ }
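The self-referential `supersede_chain` relation in the schema means the current version of a memory is found by following `supersedes_id` links forward until nothing newer points at it. A minimal in-memory sketch (field names come from the schema; the traversal helper itself is hypothetical):

```typescript
interface Mem {
  id: number;
  supersedes_id: number | null;
  content: string;
}

// Walk the supersede chain forward to the newest version of a memory.
// Assumes at most one memory supersedes any given node, as the schema implies.
function latestVersion(all: Mem[], id: number): Mem {
  let current = all.find(m => m.id === id);
  if (!current) throw new Error(`memory #${id} not found`);
  for (;;) {
    const newer = all.find(m => m.supersedes_id === current!.id);
    if (!newer) return current!;
    current = newer;
  }
}
```

In the real database the same walk would be a recursive SQL query or a loop of `SELECT ... WHERE supersedes_id = ?` lookups against the `memory` table.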