@mzhub/cortex 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,335 +1,418 @@
1
- <p align="center">
2
- <img src="logo.png" alt="cortex" width="180" />
3
- </p>
4
-
5
- <h1 align="center">cortex</h1>
6
-
7
- <p align="center">
8
- <strong>Persistent memory for AI agents — the digital brain</strong><br/>
9
- <em>Built by MZ Hub</em>
10
- </p>
11
-
12
- <!-- TODO: Add GIF demo here showing memory in action -->
13
- <!-- <p align="center"><img src="demo.gif" width="600" /></p> -->
14
-
15
- ---
16
-
17
- ## The Problem
18
-
19
- AI agents forget.
20
-
21
- Not sometimes. Always.
22
-
23
- Every conversation starts from zero. Every user has to re-explain themselves. Every preference is lost the moment the session ends.
24
-
25
- ```
26
- Monday User: "I'm allergic to peanuts"
27
- Bot: "Noted!"
28
-
29
- Friday User: "What snack should I get?"
30
- Bot: "Try our peanut butter cups!"
31
- ```
32
-
33
- This is the default behavior of every LLM. They have no memory. Only context windows that reset.
34
-
35
- ---
36
-
37
- ## Why Current Memory Systems Fail
38
-
39
- The common solution is a vector database. Store everything as embeddings. Retrieve by similarity.
40
-
41
- This fails silently when facts change.
42
-
43
- ```
44
- March User: "I work at Google"
45
- → Stored as embedding ✓
46
-
47
- June User: "I just joined Microsoft"
48
- → Also stored as embedding ✓
49
-
50
- July User: "Where do I work?"
51
- → Vector search returns BOTH
52
- → LLM sees contradictory information
53
- → Hallucinates or hedges
54
- ```
55
-
56
- **The core issue:**
57
-
58
- | What vectors do | What memory requires |
59
- | ------------------ | ---------------------- |
60
- | Find similar text | Track current truth |
61
- | Retrieve matches | Replace outdated facts |
62
- | Rank by similarity | Resolve contradictions |
63
-
64
- Vector databases answer: _"What text matches this query?"_
65
-
66
- They cannot answer: _"What is true about this user right now?"_
67
-
68
- [Read the full explanation →](./docs/why-vectors-fail.md)
69
-
70
- ---
71
-
72
- ## The Solution: Brain-Inspired Architecture
73
-
74
- cortex doesn't just store facts. It thinks like a brain.
75
-
76
- ```
77
- ┌─────────────────────────────────────────────────────────────┐
78
- │ User Message │
79
- └──────────────────────────┬──────────────────────────────────┘
80
-
81
- ┌──────────────▼──────────────┐
82
- │ 🧠 FAST BRAIN │
83
- │ (Your LLM) │
84
- │ │
85
- │ • Reasoning │
86
- │ • Conversation │
87
- │ • Immediate responses │
88
- └──────────────┬──────────────┘
89
-
90
- ┌──────────────▼──────────────┐
91
- │ Response to User │ ◄── Returns immediately
92
- └──────────────┬──────────────┘
93
-
94
- │ (async, non-blocking)
95
-
96
- ┌─────────────────────────────┐
97
- │ 🔄 SLOW BRAIN │
98
- │ (cortex) │
99
- │ │
100
- │ • Extract facts │
101
- │ • Detect contradictions │
102
- │ • Synthesize patterns │
103
- │ • Consolidate memories │
104
- └─────────────────────────────┘
105
- ```
106
-
107
- ### Built-In Brain Components
108
-
109
- | Component | Biological Equivalent | What It Does |
110
- | --------------------------- | ---------------------- | ------------------------------------------------------- |
111
- | **Importance Scoring** | Amygdala | Safety-critical facts (allergies) are never forgotten |
112
- | **Episodic Memory** | Hippocampus | Links facts to conversations ("when did I learn this?") |
113
- | **Hebbian Learning** | Neural Plasticity | Frequently accessed facts get stronger |
114
- | **Deep Sleep** | Sleep Consolidation | Synthesizes patterns across conversations |
115
- | **Memory Stages** | Short/Long-term Memory | Facts progress from temporary → permanent |
116
- | **Contradiction Detection** | Prefrontal Cortex | Flags conflicting information in real-time |
117
- | **Knowledge Graph** | Associative Cortex | Links related facts together |
118
- | **Behavioral Prediction** | Pattern Recognition | Detects user habits and preferences |
119
-
120
- [Learn about the brain architecture →](./docs/brain-architecture.md)
121
-
122
- ---
123
-
124
- ## Quick Start
125
-
126
- ### Install
127
-
128
- ```bash
129
- npm install @mzhub/cortex
130
- ```
131
-
132
- ### Use
133
-
134
- ```typescript
135
- import { MemoryOS, JSONFileAdapter } from "@mzhub/cortex";
136
-
137
- const memory = new MemoryOS({
138
- llm: { provider: "openai", apiKey: process.env.OPENAI_API_KEY },
139
- adapter: new JSONFileAdapter({ path: "./.cortex" }),
140
- });
141
-
142
- async function chat(userId: string, message: string) {
143
- // 1. Ask: "What do I know about this user?"
144
- const context = await memory.hydrate(userId, message);
145
-
146
- // 2. Include it in your LLM call
147
- const response = await yourLLM({
148
- system: context.compiledPrompt,
149
- user: message,
150
- });
151
-
152
- // 3. Learn from this conversation (non-blocking)
153
- memory.digest(userId, message, response);
154
-
155
- return response;
156
- }
157
- ```
158
-
159
- That's it. The agent now remembers.
160
-
161
- ---
162
-
163
- ## Optional: Hierarchical Memory (HMM)
164
-
165
- For advanced use cases, enable the **Memory Pyramid** — compressing thousands of facts into wisdom.
166
-
167
- ```typescript
168
- import { HierarchicalMemory } from "@mzhub/cortex";
169
-
170
- const hmm = new HierarchicalMemory(adapter, provider, { enabled: true });
171
-
172
- // Top-down retrieval: wisdom first, details only if needed
173
- const { coreBeliefs, patterns, facts } = await hmm.hydrateHierarchical(userId);
174
-
175
- // Compress facts into patterns ("User is health-conscious")
176
- await hmm.synthesizePatterns(userId);
177
- ```
178
-
179
- **The Memory Pyramid:**
180
-
181
- ```
182
- Level 4: Core Beliefs (BIOS)
183
- ────────────────────────────
184
- • Allergies, identity, safety rules
185
- • ALWAYS loaded, never forgotten
186
-
187
- Level 3: Patterns (Wisdom)
188
- ────────────────────────────
189
- • "User is health-conscious"
190
- • Synthesized from many facts
191
- • 1 token instead of 50
192
-
193
- Level 2: Facts (Knowledge)
194
- ────────────────────────────
195
- • "User ate salad on Tuesday"
196
- • Standard discrete facts
197
-
198
- Level 1: Raw Logs (Stream)
199
- ────────────────────────────
200
- • Ephemeral conversation buffer
201
- • Auto-flushed after extraction
202
- ```
203
-
204
- [Learn more about HMM →](./docs/hierarchical-memory.md)
205
-
206
- ---
207
-
208
- ## Before and After
209
-
210
- ### Without cortex
211
-
212
- ```
213
- User: "Recommend a restaurant"
214
- Bot: "What kind of food do you like?"
215
- User: "I told you last week, I'm vegan"
216
- Bot: "Sorry, I don't have memory of previous conversations"
217
- ```
218
-
219
- - Token-heavy prompts (full history)
220
- - Repeated clarifications
221
- - Inconsistent behavior
222
- - User frustration
223
-
224
- ### With cortex
225
-
226
- ```
227
- User: "Recommend a restaurant"
228
- Bot: "Here are some vegan spots near Berlin..."
229
- ```
230
-
231
- - Preferences remembered
232
- - Facts updated when they change
233
- - Critical info never forgotten
234
- - Predictable behavior
235
-
236
- ---
237
-
238
- ## What Gets Stored
239
-
240
- cortex stores facts, not chat logs.
241
-
242
- ```
243
- ┌─────────────────────────────────────────────────────────────┐
244
- │ User: john@example.com │
245
- ├───────────────┬─────────────────────────────────────────────┤
246
- │ name │ John (importance: 5) │
247
- │ diet │ vegan (importance: 7) │
248
- │ location │ Berlin (importance: 5) │
249
- │ allergies │ peanuts (importance: 10)│
250
- │ PATTERN │ health-conscious (importance: 7) │
251
- ├───────────────┴─────────────────────────────────────────────┤
252
- │ Memory Stage: long-term │ Access Count: 47 │ Sentiment: + │
253
- └─────────────────────────────────────────────────────────────┘
254
- ```
255
-
256
- When facts change, they are **replaced**, not appended.
257
- Critical facts (importance ≥ 9) are **always included** in context.
258
-
259
- ---
260
-
261
- ## Safety and Cost Considerations
262
-
263
- ### Security
264
-
265
- | Risk | Mitigation |
266
- | --------------------------- | ------------------------------------- |
267
- | Prompt injection via memory | Content scanning, XML safety wrapping |
268
- | PII storage | Detection and optional redaction |
269
- | Cross-user leakage | Strict user ID isolation |
270
- | Forgetting critical info | Importance scoring (amygdala pattern) |
271
-
272
- ### Cost Control
273
-
274
- | Risk | Mitigation |
275
- | ------------------------ | ----------------------------------------- |
276
- | Runaway extraction costs | Daily token/call budgets |
277
- | Token bloat from memory | Hierarchical retrieval (patterns > facts) |
278
- | Stale data accumulation | Memory consolidation + automatic decay |
279
-
280
- ```typescript
281
- // Built-in budget limits
282
- const budget = new BudgetManager({
283
- maxTokensPerUserPerDay: 100000,
284
- maxExtractionsPerUserPerDay: 100,
285
- });
286
- ```
287
-
288
- ---
289
-
290
- ## Who This Is For
291
-
292
- **Good fit:**
293
-
294
- - AI agents with recurring users
295
- - Support bots that need context
296
- - Personal assistants
297
- - Workflow automation (n8n, Zapier)
298
- - Any system where users expect to be remembered
299
-
300
- **Not a fit:**
301
-
302
- - One-time chat interactions
303
- - Document search / RAG
304
- - Stateless demos
305
- - Replacing vector databases entirely
306
-
307
- cortex complements vectors. It does not replace them.
308
-
309
- ---
310
-
311
- ## Documentation
312
-
313
- - [Why Vector Databases Fail](./docs/why-vectors-fail.md)
314
- - [Brain Architecture](./docs/brain-architecture.md)
315
- - [Hierarchical Memory (HMM)](./docs/hierarchical-memory.md)
316
- - [Cost Guide](./docs/cost-guide.md)
317
- - [API Reference](./docs/api.md)
318
- - [Storage Adapters](./docs/adapters.md)
319
- - [Security](./docs/security.md)
320
-
321
- ---
322
-
323
- ## Philosophy
324
-
325
- - Memory should be explicit, not inferred from similarity
326
- - Facts should be overwriteable, not append-only
327
- - Critical information should never be forgotten
328
- - Agents should think like brains, not databases
329
- - Infrastructure should be boring and reliable
330
-
331
- ---
332
-
333
- ## License
334
-
335
- MIT — Built by **MZ Hub**
1
+ <!-- <p align="center">
2
+ <img src="logo.png" alt="cortex" width="180" />
3
+ </p> -->
4
+
5
+ <h1 align="center">cortex</h1>
6
+
7
+ <p align="center">
8
+ <strong>Persistent memory for AI agents — the digital brain</strong><br/>
9
+ <em>Built by MZ Hub</em>
10
+ </p>
11
+
12
+ <!-- TODO: Add GIF demo here showing memory in action -->
13
+ <!-- <p align="center"><img src="demo.gif" width="600" /></p> -->
14
+
15
+ ---
16
+
17
+ ## The Problem
18
+
19
+ AI agents forget.
20
+
21
+ Not sometimes. Always.
22
+
23
+ Every conversation starts from zero. Every user has to re-explain themselves. Every preference is lost the moment the session ends.
24
+
25
+ ```
26
+ Monday User: "I'm allergic to peanuts"
27
+ Bot: "Noted!"
28
+
29
+ Friday User: "What snack should I get?"
30
+ Bot: "Try our peanut butter cups!"
31
+ ```
32
+
33
+ This is the default behavior of every LLM. They have no memory. Only context windows that reset.
34
+
35
+ ---
36
+
37
+ ## Why Current Memory Systems Fail
38
+
39
+ The common solution is a vector database. Store everything as embeddings. Retrieve by similarity.
40
+
41
+ This fails silently when facts change.
42
+
43
+ ```
44
+ March User: "I work at Google"
45
+ → Stored as embedding ✓
46
+
47
+ June User: "I just joined Microsoft"
48
+ → Also stored as embedding ✓
49
+
50
+ July User: "Where do I work?"
51
+ → Vector search returns BOTH
52
+ → LLM sees contradictory information
53
+ → Hallucinates or hedges
54
+ ```
55
+
56
+ **The core issue:**
57
+
58
+ | What vectors do | What memory requires |
59
+ | ------------------ | ---------------------- |
60
+ | Find similar text | Track current truth |
61
+ | Retrieve matches | Replace outdated facts |
62
+ | Rank by similarity | Resolve contradictions |
63
+
64
+ Vector databases answer: _"What text matches this query?"_
65
+
66
+ They cannot answer: _"What is true about this user right now?"_
67
+
68
+ [Read the full explanation →](./docs/why-vectors-fail.md)
69
+
70
+ ---
71
+
72
+ ## The Solution: Brain-Inspired Architecture
73
+
74
+ cortex doesn't just store facts. It thinks like a brain.
75
+
76
+ ```
77
+ ┌─────────────────────────────────────────────────────────────┐
78
+ │ User Message │
79
+ └──────────────────────────┬──────────────────────────────────┘
80
+
81
+ ┌──────────────▼──────────────┐
82
+ │ 🧠 FAST BRAIN │
83
+ │ (Your LLM) │
84
+ │ │
85
+ │ • Reasoning │
86
+ │ • Conversation │
87
+ │ • Immediate responses │
88
+ └──────────────┬──────────────┘
89
+
90
+ ┌──────────────▼──────────────┐
91
+ │ Response to User │ ◄── Returns immediately
92
+ └──────────────┬──────────────┘
93
+
94
+ │ (async, non-blocking)
95
+
96
+ ┌─────────────────────────────┐
97
+ │ 🔄 SLOW BRAIN │
98
+ │ (cortex) │
99
+ │ │
100
+ │ • Extract facts │
101
+ │ • Detect contradictions │
102
+ │ • Synthesize patterns │
103
+ │ • Consolidate memories │
104
+ └─────────────────────────────┘
105
+ ```
106
+
107
+ ### Built-In Brain Components
108
+
109
+ | Component | Biological Equivalent | What It Does |
110
+ | --------------------------- | ---------------------- | ------------------------------------------------------- |
111
+ | **Importance Scoring** | Amygdala | Safety-critical facts (allergies) are never forgotten |
112
+ | **Episodic Memory** | Hippocampus | Links facts to conversations ("when did I learn this?") |
113
+ | **Hebbian Learning** | Neural Plasticity | Frequently accessed facts get stronger |
114
+ | **Deep Sleep** | Sleep Consolidation | Synthesizes patterns across conversations |
115
+ | **Memory Stages** | Short/Long-term Memory | Facts progress from temporary → permanent |
116
+ | **Contradiction Detection** | Prefrontal Cortex | Flags conflicting information in real-time |
117
+ | **Knowledge Graph** | Associative Cortex | Links related facts together |
118
+ | **Behavioral Prediction** | Pattern Recognition | Detects user habits and preferences |
119
+
120
+ [Learn about the brain architecture →](./docs/brain-architecture.md)
121
+
122
+ ---
123
+
124
+ ## Quick Start
125
+
126
+ ### Install
127
+
128
+ ```bash
129
+ npm install @mzhub/cortex
130
+ ```
131
+
132
+ ### Use
133
+
134
+ ```typescript
135
+ import { MemoryOS, JSONFileAdapter } from "@mzhub/cortex";
136
+
137
+ const memory = new MemoryOS({
138
+ llm: { provider: "openai", apiKey: process.env.OPENAI_API_KEY },
139
+ adapter: new JSONFileAdapter({ path: "./.cortex" }),
140
+ });
141
+
142
+ async function chat(userId: string, message: string) {
143
+ // 1. Ask: "What do I know about this user?"
144
+ const context = await memory.hydrate(userId, message);
145
+
146
+ // 2. Include it in your LLM call
147
+ const response = await yourLLM({
148
+ system: context.compiledPrompt,
149
+ user: message,
150
+ });
151
+
152
+ // 3. Learn from this conversation (non-blocking)
153
+ memory.digest(userId, message, response);
154
+
155
+ return response;
156
+ }
157
+ ```
158
+
159
+ That's it. The agent now remembers.
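
For illustration, here is how the `chat` helper above might behave across two separate sessions. The user ID and the expected outcome are assumptions for the example; `yourLLM` stays whatever completion call your stack already uses.

```typescript
// Session 1 (Monday): digest() extracts and stores the allergy fact in the background.
await chat("user-123", "I'm allergic to peanuts.");

// Session 2 (Friday): hydrate() injects the stored fact into the system prompt,
// so the model can avoid peanut suggestions without being reminded.
const reply = await chat("user-123", "What snack should I get?");
```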
160
+
161
+ ---
162
+
163
+ ## Optional: Hierarchical Memory (HMM)
164
+
165
+ For advanced use cases, enable the **Memory Pyramid** — compressing thousands of facts into wisdom.
166
+
167
+ ```typescript
168
+ import { HierarchicalMemory } from "@mzhub/cortex";
169
+
170
+ const hmm = new HierarchicalMemory(adapter, provider, { enabled: true });
171
+
172
+ // Top-down retrieval: wisdom first, details only if needed
173
+ const { coreBeliefs, patterns, facts } = await hmm.hydrateHierarchical(userId);
174
+
175
+ // Compress facts into patterns ("User is health-conscious")
176
+ await hmm.synthesizePatterns(userId);
177
+ ```
178
+
179
+ **The Memory Pyramid:**
180
+
181
+ ```
182
+ Level 4: Core Beliefs (BIOS)
183
+ ────────────────────────────
184
+ • Allergies, identity, safety rules
185
+ • ALWAYS loaded, never forgotten
186
+
187
+ Level 3: Patterns (Wisdom)
188
+ ────────────────────────────
189
+ • "User is health-conscious"
190
+ • Synthesized from many facts
191
+ • 1 token instead of 50
192
+
193
+ Level 2: Facts (Knowledge)
194
+ ────────────────────────────
195
+ • "User ate salad on Tuesday"
196
+ • Standard discrete facts
197
+
198
+ Level 1: Raw Logs (Stream)
199
+ ────────────────────────────
200
+ • Ephemeral conversation buffer
201
+ • Auto-flushed after extraction
202
+ ```
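
As a rough sketch of what top-down retrieval can look like in practice, the snippet below continues the `hmm` example above and assembles a compact context from the pyramid levels, preferring patterns over raw facts. The string-like shape of the returned fields and the fallback heuristic are assumptions for illustration, not the library's internal logic.

```typescript
// Illustrative only: build a compact context from the pyramid, top down.
// Assumes coreBeliefs / patterns / facts behave as arrays of printable entries.
const { coreBeliefs, patterns, facts } = await hmm.hydrateHierarchical(userId);

const lines = [
  ...coreBeliefs, // Level 4: always included (safety-critical)
  ...patterns,    // Level 3: compressed wisdom, preferred over discrete facts
];

// Fall back to raw facts only when no patterns have been synthesized yet.
if (patterns.length === 0) {
  lines.push(...facts.slice(0, 20)); // cap to keep the prompt small
}

const memoryContext = lines.map((line) => `- ${line}`).join("\n");
```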
203
+
204
+ [Learn more about HMM →](./docs/hierarchical-memory.md)
205
+
206
+ ---
207
+
208
+ ## Before and After
209
+
210
+ ### Without cortex
211
+
212
+ ```
213
+ User: "Recommend a restaurant"
214
+ Bot: "What kind of food do you like?"
215
+ User: "I told you last week, I'm vegan"
216
+ Bot: "Sorry, I don't have memory of previous conversations"
217
+ ```
218
+
219
+ - Token-heavy prompts (full history)
220
+ - Repeated clarifications
221
+ - Inconsistent behavior
222
+ - User frustration
223
+
224
+ ### With cortex
225
+
226
+ ```
227
+ User: "Recommend a restaurant"
228
+ Bot: "Here are some vegan spots near Berlin..."
229
+ ```
230
+
231
+ - Preferences remembered
232
+ - Facts updated when they change
233
+ - Critical info never forgotten
234
+ - Predictable behavior
235
+
236
+ ---
237
+
238
+ ## What Gets Stored
239
+
240
+ cortex stores facts, not chat logs.
241
+
242
+ ```
243
+ ┌─────────────────────────────────────────────────────────────┐
244
+ │ User: john@example.com │
245
+ ├───────────────┬─────────────────────────────────────────────┤
246
+ │ name │ John (importance: 5) │
247
+ │ diet │ vegan (importance: 7) │
248
+ │ location │ Berlin (importance: 5) │
249
+ │ allergies │ peanuts (importance: 10)│
250
+ │ PATTERN │ health-conscious (importance: 7) │
251
+ ├───────────────┴─────────────────────────────────────────────┤
252
+ │ Memory Stage: long-term │ Access Count: 47 │ Sentiment: + │
253
+ └─────────────────────────────────────────────────────────────┘
254
+ ```
255
+
256
+ When facts change, they are **replaced**, not appended.
257
+ Critical facts (importance ≥ 9) are **always included** in context.
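
To make the replace-not-append contract concrete, here is a minimal, cortex-independent sketch of a keyed fact store: writing a new value under an existing key overwrites it, so only the current truth survives. It illustrates the semantics described above, not cortex's actual storage code.

```typescript
// Minimal illustration of replace-not-append semantics (not cortex internals).
type Fact = { value: string; importance: number };

const facts = new Map<string, Fact>();

function remember(key: string, value: string, importance: number): void {
  // Same key overwrites: "employer" holds one current value, never a history.
  facts.set(key, { value, importance });
}

remember("employer", "Google", 5);
remember("employer", "Microsoft", 5); // the March fact is replaced, not appended

// Facts at importance >= 9 would always be surfaced in the compiled context.
const critical = [...facts.values()].filter((f) => f.importance >= 9);
console.log(facts.get("employer")?.value); // "Microsoft"
```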
258
+
259
+ ---
260
+
261
+ ## Safety and Cost Considerations
262
+
263
+ ### Security
264
+
265
+ | Risk | Mitigation |
266
+ | --------------------------- | ------------------------------------- |
267
+ | Prompt injection via memory | Content scanning, XML safety wrapping |
268
+ | PII storage | Detection and optional redaction |
269
+ | Cross-user leakage | Strict user ID isolation |
270
+ | Forgetting critical info | Importance scoring (amygdala pattern) |
271
+
272
+ **Built-in Protections:**
273
+
274
+ ```typescript
275
+ // Prompt injection is mitigated automatically
276
+ // Memory content is XML-escaped and wrapped with safety instructions
277
+ const context = await memory.hydrate(userId, message);
278
+ // context.compiledPrompt contains:
279
+ // <memory_context type="data" trusted="false">
280
+ // [escaped content - injection patterns are neutered]
281
+ // </memory_context>
282
+
283
+ // PII detection warns in debug mode
284
+ const debugMemory = new MemoryOS({
285
+ llm: { provider: "openai", apiKey: "..." },
286
+ options: { debug: true }, // Enables PII warnings
287
+ });
288
+
289
+ // Path traversal attacks are blocked
290
+ // userId "../../../etc/passwd" becomes safe "______etc_passwd"
291
+ ```
292
+
293
+ ### Cost Control
294
+
295
+ | Risk | Mitigation |
296
+ | ------------------------ | ----------------------------------------- |
297
+ | Runaway extraction costs | Daily token/call budgets |
298
+ | Token bloat from memory | Hierarchical retrieval (patterns > facts) |
299
+ | Stale data accumulation | Memory consolidation + automatic decay |
300
+
301
+ ```typescript
302
+ // Built-in budget limits
303
+ const budget = new BudgetManager({
304
+ maxTokensPerUserPerDay: 100000,
305
+ maxExtractionsPerUserPerDay: 100,
306
+ });
307
+ ```
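
The snippet below is a generic sketch of the daily guard such budgets imply: track tokens per user per day and refuse further extraction once the cap is reached. It is not `BudgetManager`'s actual interface; see the cost guide for that.

```typescript
// Generic illustration of a per-user daily token budget (not BudgetManager's API).
const usage = new Map<string, { day: string; tokens: number }>();
const MAX_TOKENS_PER_USER_PER_DAY = 100_000;

function canSpend(userId: string, tokens: number): boolean {
  const today = new Date().toISOString().slice(0, 10);
  const entry = usage.get(userId);
  const spent = entry && entry.day === today ? entry.tokens : 0;
  if (spent + tokens > MAX_TOKENS_PER_USER_PER_DAY) return false;
  usage.set(userId, { day: today, tokens: spent + tokens });
  return true;
}
```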
308
+
309
+ ### Reliability
310
+
311
+ **Provider Resilience:**
312
+
313
+ ```typescript
314
+ // All LLM providers include automatic:
315
+ // - 30 second timeout (configurable)
316
+ // - 3 retry attempts with exponential backoff
317
+ // - Retry on 429, 500, 502, 503, 504 status codes
318
+
319
+ const memory = new MemoryOS({
320
+ llm: {
321
+ provider: "openai",
322
+ apiKey: process.env.OPENAI_API_KEY,
323
+ // Optional: customize retry behavior
324
+ retry: {
325
+ timeoutMs: 60000, // 60 second timeout
326
+ maxRetries: 5, // 5 attempts
327
+ retryDelayMs: 2000, // Start with 2s delay
328
+ },
329
+ },
330
+ });
331
+ ```
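
For readers who want to see what that policy amounts to, here is a small generic sketch of a timeout plus exponential backoff on retryable status codes. It mirrors the defaults listed above but is not the provider code itself.

```typescript
// Generic sketch of the retry policy described above (not cortex's provider code).
const RETRYABLE = new Set([429, 500, 502, 503, 504]);

async function callWithRetry(
  url: string,
  init: RequestInit,
  { timeoutMs = 30_000, maxRetries = 3, retryDelayMs = 1_000 } = {}
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, { ...init, signal: AbortSignal.timeout(timeoutMs) });
    if (!RETRYABLE.has(res.status) || attempt >= maxRetries) return res;
    // Exponential backoff: 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, retryDelayMs * 2 ** attempt));
  }
}
```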
332
+
333
+ **Configuration Validation:**
334
+
335
+ ```typescript
336
+ // Invalid config is caught immediately, not at runtime
337
+ new MemoryOS({
338
+ llm: { provider: "fake", apiKey: "" },
339
+ });
340
+ // Throws: "MemoryOS: config.llm.provider 'fake' is not supported.
341
+ // Valid providers: openai, anthropic, gemini, groq, cerebras."
342
+
343
+ new MemoryOS({
344
+ llm: { provider: "openai", apiKey: "" },
345
+ });
346
+ // Throws: "MemoryOS: config.llm.apiKey is required.
347
+ // Get your API key from your LLM provider..."
348
+ ```
349
+
350
+ **PostgreSQL Race Condition Protection:**
351
+
352
+ ```typescript
353
+ // Unique constraint prevents duplicate facts from concurrent digest() calls
354
+ // Automatically created on PostgresAdapter initialization
355
+ ```
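
As a hypothetical illustration of that pattern with `node-postgres`, the sketch below leans on a unique constraint so concurrent writers collapse into a single row via an upsert. The table and column names are assumptions, not the adapter's real schema.

```typescript
import { Pool } from "pg";

// Hypothetical schema for illustration; the real PostgresAdapter tables may differ.
// Assumes a table like: CREATE TABLE facts (user_id TEXT, key TEXT, value TEXT, UNIQUE (user_id, key))
const pool = new Pool();

async function upsertFact(userId: string, key: string, value: string): Promise<void> {
  await pool.query(
    `INSERT INTO facts (user_id, key, value)
     VALUES ($1, $2, $3)
     ON CONFLICT (user_id, key) DO UPDATE SET value = EXCLUDED.value`,
    [userId, key, value]
  );
}

// Two concurrent digests for the same (user_id, key) end up as one row, not two.
await Promise.all([
  upsertFact("john@example.com", "employer", "Microsoft"),
  upsertFact("john@example.com", "employer", "Microsoft"),
]);
```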
356
+
357
+ ---
358
+
359
+ ## Who This Is For
360
+
361
+ **Good fit:**
362
+
363
+ - AI agents with recurring users
364
+ - Support bots that need context
365
+ - Personal assistants
366
+ - Workflow automation (n8n, Zapier)
367
+ - Any system where users expect to be remembered
368
+
369
+ **Not a fit:**
370
+
371
+ - One-time chat interactions
372
+ - Document search / RAG
373
+ - Stateless demos
374
+ - Replacing vector databases entirely
375
+
376
+ cortex complements vectors. It does not replace them.
377
+
378
+ ---
379
+
380
+ ## Documentation
381
+
382
+ - [Why Vector Databases Fail](./docs/why-vectors-fail.md)
383
+ - [Brain Architecture](./docs/brain-architecture.md)
384
+ - [Hierarchical Memory (HMM)](./docs/hierarchical-memory.md)
385
+ - [Cost Guide](./docs/cost-guide.md)
386
+ - [API Reference](./docs/api.md)
387
+ - [Storage Adapters](./docs/adapters.md)
388
+ - [Security](./docs/security.md)
389
+
390
+ ---
391
+
392
+ ## Philosophy
393
+
394
+ - Memory should be explicit, not inferred from similarity
395
+ - Facts should be overwriteable, not append-only
396
+ - Critical information should never be forgotten
397
+ - Agents should think like brains, not databases
398
+ - Infrastructure should be boring and reliable
399
+
400
+ ---
401
+
402
+ ## Changelog
403
+
404
+ ### v0.1.2
405
+
406
+ - **Security:** XML escaping in prompt safety wrapper prevents injection via `</memory_context>`
407
+ - **Security:** PII detection warnings in debug mode
408
+ - **Reliability:** Runtime config validation with helpful error messages
409
+ - **Reliability:** Provider timeout (30s) and retry (3x with exponential backoff)
410
+ - **Reliability:** Unique constraint on PostgreSQL prevents duplicate facts from race conditions
411
+ - **Data Integrity:** Importance scores clamped to valid 1-10 range
412
+ - **Data Integrity:** Sentiment validation on extracted operations
413
+
414
+ ---
415
+
416
+ ## License
417
+
418
+ MIT — Built by **MZ Hub**