audrey 0.15.0 → 0.16.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,681 +1,808 @@
1
- # Audrey
2
-
3
- Biological memory architecture for AI agents. Memory that decays, consolidates, feels, and learns — not just a database.
4
-
5
- ## Why Audrey Exists
6
-
7
- Every AI memory tool today (Mem0, Zep, LangChain Memory) is a filing cabinet. Store stuff, retrieve stuff. None of them do what biological memory actually does:
8
-
9
- - Memories don't decay. A fact from 6 months ago has the same weight as one from today.
10
- - No consolidation. Raw events never become general principles.
11
- - No contradiction detection. Conflicting facts coexist silently.
12
- - No self-defense. If an agent hallucinates and encodes the hallucination, it becomes "truth."
13
-
14
- Audrey fixes all of this by modeling memory the way the brain does:
15
-
16
- | Brain Structure | Audrey Component | What It Does |
17
- |---|---|---|
18
- | Hippocampus | Episodic Memory | Fast capture of raw events and observations |
19
- | Neocortex | Semantic Memory | Consolidated principles and patterns |
20
- | Sleep Replay | Consolidation Engine | Extracts patterns from episodes, promotes to principles |
21
- | Prefrontal Cortex | Validation Engine | Truth-checking, contradiction detection |
22
- | Amygdala | Affect System | Emotional encoding, arousal-salience coupling, mood-congruent recall |
23
-
24
- ## Install
25
-
26
- ### MCP Server for Claude Code (one command)
27
-
28
- ```bash
29
- npx audrey install
30
- ```
31
-
32
- That's it. Audrey auto-detects API keys from your environment:
33
-
34
- - `OPENAI_API_KEY` set? Uses real OpenAI embeddings (1536d) for semantic search.
35
- - `ANTHROPIC_API_KEY` set? Enables LLM-powered consolidation and contradiction detection.
36
- - Neither? Runs with mock embeddings — fully functional, upgrade anytime.
37
-
38
- To upgrade later, set the keys and re-run `npx audrey install`.
39
-
40
- ```bash
41
- # Check status
42
- npx audrey status
43
-
44
- # Uninstall
45
- npx audrey uninstall
46
- ```
47
-
48
- Every Claude Code session now has 9 memory tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_introspect`, `memory_resolve_truth`, `memory_export`, `memory_import`, `memory_forget`, `memory_decay`.
49
-
50
- ### SDK in Your Code
51
-
52
- ```bash
53
- npm install audrey
54
- ```
55
-
56
- Zero external infrastructure. One SQLite file.
57
-
58
- ## Usage
59
-
60
- ```js
61
- import { Audrey } from 'audrey';
62
-
63
- // 1. Create a brain
64
- const brain = new Audrey({
65
- dataDir: './agent-memory',
66
- agent: 'my-agent',
67
- embedding: { provider: 'mock', dimensions: 8 }, // or 'openai' for production
68
- });
69
-
70
- // 2. Encode observations — with optional emotional context
71
- await brain.encode({
72
- content: 'Stripe API returns 429 above 100 req/s',
73
- source: 'direct-observation',
74
- tags: ['stripe', 'rate-limit'],
75
- affect: { valence: -0.4, arousal: 0.7, label: 'frustration' },
76
- });
77
-
78
- // 3. Recall what you know — mood-congruent retrieval
79
- const memories = await brain.recall('stripe rate limits', {
80
- limit: 5,
81
- mood: { valence: -0.3 }, // frustrated right now? memories encoded in frustration surface first
82
- });
83
-
84
- // 4. Filtered recall — by tag, source, or date range
85
- const recent = await brain.recall('stripe', {
86
- tags: ['rate-limit'],
87
- sources: ['direct-observation'],
88
- after: '2026-02-01T00:00:00Z',
89
- context: { task: 'debugging', domain: 'payments' }, // context-dependent retrieval
90
- });
91
-
92
- // 5. Consolidate episodes into principles (the "sleep" cycle)
93
- await brain.consolidate();
94
-
95
- // 6. Forget something
96
- brain.forget(memoryId); // soft-delete
97
- brain.forget(memoryId, { purge: true }); // hard-delete
98
- await brain.forgetByQuery('old API endpoint', { minSimilarity: 0.9 });
99
-
100
- // 7. Check brain health
101
- const stats = brain.introspect();
102
- // { episodic: 47, semantic: 12, procedural: 3, dormant: 8, ... }
103
-
104
- // 8. Clean up
105
- brain.close();
106
- ```
107
-
108
- ### Configuration
109
-
110
- ```js
111
- const brain = new Audrey({
112
- dataDir: './audrey-data', // SQLite database directory
113
- agent: 'my-agent', // Agent identifier
114
-
115
- // Embedding provider (required)
116
- embedding: {
117
- provider: 'mock', // 'mock' for testing, 'openai' for production
118
- dimensions: 8, // 8 for mock, 1536 for openai text-embedding-3-small
119
- apiKey: '...', // Required for openai
120
- },
121
-
122
- // LLM provider (optional — enables smart consolidation + contradiction detection)
123
- llm: {
124
- provider: 'anthropic', // 'mock', 'anthropic', or 'openai'
125
- apiKey: '...', // Required for anthropic/openai
126
- model: 'claude-sonnet-4-6', // Optional model override
127
- },
128
-
129
- // Consolidation settings
130
- consolidation: {
131
- minEpisodes: 3, // Minimum cluster size for principle extraction
132
- },
133
-
134
- // Context-dependent retrieval (v0.8.0)
135
- context: {
136
- enabled: true, // Enable encoding-specificity principle
137
- weight: 0.3, // Max 30% confidence boost on full context match
138
- },
139
-
140
- // Emotional memory (v0.9.0)
141
- affect: {
142
- enabled: true, // Enable affect system
143
- weight: 0.2, // Max 20% mood-congruence boost
144
- arousalWeight: 0.3, // Yerkes-Dodson arousal-salience coupling
145
- resonance: { // Detect emotional echoes across experiences
146
- enabled: true,
147
- k: 5, // How many past episodes to check
148
- threshold: 0.5, // Semantic similarity threshold
149
- affectThreshold: 0.6, // Emotional similarity threshold
150
- },
151
- },
152
-
153
- // Interference-based forgetting (v0.7.0)
154
- interference: {
155
- enabled: true, // New episodes suppress similar existing memories
156
- weight: 0.15, // Suppression strength
157
- },
158
-
159
- // Decay settings
160
- decay: {
161
- dormantThreshold: 0.1, // Below this confidence = dormant
162
- },
163
- });
164
- ```
165
-
166
- **Without an LLM provider**, consolidation uses a default text-based extractor and contradiction detection is similarity-only. **With an LLM provider**, Audrey extracts real generalized principles, detects semantic contradictions, and resolves context-dependent truths.
167
-
168
- ### Environment Variables (MCP Server)
169
-
170
- | Variable | Default | Purpose |
171
- |---|---|---|
172
- | `AUDREY_DATA_DIR` | `~/.audrey/data` | SQLite database directory |
173
- | `AUDREY_AGENT` | `claude-code` | Agent identifier |
174
- | `AUDREY_EMBEDDING_PROVIDER` | `mock` | `mock` or `openai` |
175
- | `AUDREY_EMBEDDING_DIMENSIONS` | `8` | Vector dimensions (1536 for openai) |
176
- | `OPENAI_API_KEY` | — | Required when embedding/LLM provider is openai |
177
- | `AUDREY_LLM_PROVIDER` | — | `mock`, `anthropic`, or `openai` |
178
- | `ANTHROPIC_API_KEY` | — | Required when LLM provider is anthropic |
179
-
180
- ## Core Concepts
181
-
182
- ### Four Memory Types
183
-
184
- **Episodic** (hot, fast decay) — Raw events. "Stripe returned 429 at 3pm." Immutable. Append-only. Never modified.
185
-
186
- **Semantic** (warm, slow decay) — Consolidated principles. "Stripe enforces 100 req/s rate limit." Extracted automatically from clusters of episodic memories.
187
-
188
- **Procedural** (cold, slowest decay) — Learned workflows. "When Stripe rate-limits, implement exponential backoff." Skills the agent has acquired.
189
-
190
- **Causal** — Why things happened. Not just "A then B" but "A caused B because of mechanism C." Prevents correlation-as-causation.
191
-
192
- ### Confidence Formula
193
-
194
- Every memory has a compositional confidence score:
195
-
196
- ```
197
- C(m, t) = w_s * S + w_e * E + w_r * R(t) + w_ret * Ret(t)
198
- ```
199
-
200
- | Component | What It Measures | Default Weight |
201
- |---|---|---|
202
- | **S** Source reliability | How trustworthy is the origin? | 0.30 |
203
- | **E** Evidence agreement | Do observations agree or contradict? | 0.35 |
204
- | **R(t)** Recency decay | How old is the memory? (Ebbinghaus curve) | 0.20 |
205
- | **Ret(t)** Retrieval reinforcement | How often is this memory accessed? | 0.15 |
206
-
207
- Source reliability hierarchy:
208
-
209
- | Source Type | Reliability |
210
- |---|---|
211
- | `direct-observation` | 0.95 |
212
- | `told-by-user` | 0.90 |
213
- | `tool-result` | 0.85 |
214
- | `inference` | 0.60 |
215
- | `model-generated` | 0.40 (capped at 0.6 confidence) |
216
-
217
- The `model-generated` cap prevents circular self-confirmation: an agent can't boost its own hallucinations into high-confidence "facts."
218
-
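As a sketch (illustrative names and a standalone function, not Audrey's internal code), the formula with its default weights works out like this:

```javascript
// Illustrative sketch of the compositional confidence formula above.
// Each component is normalized to [0, 1]; weights are the README defaults.
const WEIGHTS = { source: 0.30, evidence: 0.35, recency: 0.20, retrieval: 0.15 };

function confidence({ source, evidence, recency, retrieval }) {
  return (
    WEIGHTS.source * source +
    WEIGHTS.evidence * evidence +
    WEIGHTS.recency * recency +
    WEIGHTS.retrieval * retrieval
  );
}

// A fresh direct observation (S = 0.95), full evidence agreement, no retrievals yet:
confidence({ source: 0.95, evidence: 1.0, recency: 1.0, retrieval: 0.0 }); // ≈ 0.835
```

Because the weights sum to 1, confidence stays in [0, 1] whenever each component does.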
219
- ### Decay (Forgetting Curves)
220
-
221
- Unreinforced memories lose confidence over time following Ebbinghaus exponential decay:
222
-
223
- | Memory Type | Half-Life | Rationale |
224
- |---|---|---|
225
- | Episodic | 7 days | Raw events go stale fast |
226
- | Semantic | 30 days | Principles are hard-won |
227
- | Procedural | 90 days | Skills are slowest to forget |
228
-
229
- Retrieval resets the decay clock. Frequently accessed memories persist. Memories below the dormant threshold (0.1) become dormant — still searchable with `includeDormant: true`, but excluded from default recall.
230
-
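The half-life math can be sketched like this (illustrative only; Audrey's actual decay implementation may differ in detail):

```javascript
// Exponential forgetting: remaining recency R(t) = 0.5 ** (ageDays / halfLife),
// using the half-lives from the table above.
const HALF_LIFE_DAYS = { episodic: 7, semantic: 30, procedural: 90 };

function recency(type, ageDays) {
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS[type]);
}

recency('episodic', 7);    // 0.5   — one half-life gone in a week
recency('semantic', 7);    // ~0.85 — a principle barely fades in the same week
recency('procedural', 90); // 0.5   — a skill takes a full quarter to halve
```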
231
- ### Consolidation (The "Sleep" Cycle)
232
-
233
- Audrey's consolidation engine periodically clusters similar episodic memories and extracts general principles:
234
-
235
- ```
236
- 3 episodes about Stripe 429 errors
237
- → 1 semantic principle: "Stripe enforces ~100 req/s rate limit"
238
- ```
239
-
240
- The pipeline: **Cluster** (embedding similarity) → **Extract** (LLM or callback) → **Validate** (check for contradictions) → **Promote** (write semantic memory) → **Audit** (log everything).
241
-
242
- Consolidation is idempotent. Re-running on the same data produces no duplicates. Every run creates an audit record with input/output IDs for full traceability.
243
-
244
- ### Contradiction Handling
245
-
246
- When memories conflict, Audrey doesn't force a winner. Contradictions have a lifecycle:
247
-
248
- ```
249
- open → resolved | context_dependent | reopened
250
- ```
251
-
252
- Context-dependent truths are modeled explicitly:
253
-
254
- ```js
255
- // "Stripe rate limit is 100 req/s" (live keys)
256
- // "Stripe rate limit is 25 req/s" (test keys)
257
- // Both true — under different conditions
258
- ```
259
-
260
- New high-confidence evidence can reopen resolved disputes.
261
-
262
- ### Forget and Purge
263
-
264
- Memories can be explicitly forgotten by ID or by semantic query:
265
-
266
- **Soft-delete** (default) — Marks the memory as forgotten/superseded and removes its vector index. The record stays in the database but is excluded from recall. Reversible via direct database access.
267
-
268
- **Hard-delete** (`purge: true`) — Permanently removes the memory from both the main table and the vector index. Irreversible.
269
-
270
- **Bulk purge** — Removes all forgotten, dormant, superseded, and rolled-back memories in one operation. Useful for GDPR compliance or storage cleanup.
271
-
272
- ### Rollback
273
-
274
- Bad consolidation? Undo it:
275
-
276
- ```js
277
- const history = brain.consolidationHistory();
278
- brain.rollback(history[0].id);
279
- // Semantic memories → rolled_back state
280
- // Source episodes → un-consolidated
281
- // Full audit trail preserved
282
- ```
283
-
284
- ### Circular Self-Confirmation Defense
285
-
286
- The most dangerous exploit in AI memory: agent hallucinates X, encodes it, later retrieves it, "reinforcement" boosts confidence, X eventually consolidates as "established truth."
287
-
288
- Audrey's defenses:
289
-
290
- 1. **Source diversity requirement** — Consolidation requires evidence from 2+ distinct source types
291
- 2. **Model-generated cap** — Memories from `model-generated` sources are capped at 0.6 confidence
292
- 3. **Source lineage tracking** — Provenance chains detect when all evidence traces back to a single inference
293
- 4. **Source diversity score** — Every semantic memory tracks how many different source types contributed
294
-
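Defense #2 is simple to picture. A minimal sketch (illustrative names, not the actual implementation):

```javascript
// Sketch of the model-generated confidence cap (defense #2 above).
// The 0.6 cap comes from the source reliability table; names are illustrative.
const MODEL_GENERATED_CAP = 0.6;

function applySourceCap(sourceType, rawConfidence) {
  return sourceType === 'model-generated'
    ? Math.min(rawConfidence, MODEL_GENERATED_CAP)
    : rawConfidence;
}

applySourceCap('model-generated', 0.9);    // 0.6 — repeated retrieval can't push it higher
applySourceCap('direct-observation', 0.9); // 0.9 — trusted sources keep their score
```

The cap only ever lowers a score; a low-confidence model-generated memory stays low.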
295
- ## API Reference
296
-
297
- ### `new Audrey(config)`
298
-
299
- See [Configuration](#configuration) above for all options.
300
-
301
- ### `brain.encode(params)` → `Promise<string>`
302
-
303
- Encode an episodic memory. Returns the memory ID.
304
-
305
- ```js
306
- const id = await brain.encode({
307
- content: 'What happened', // Required. Non-empty string.
308
- source: 'direct-observation', // Required. See source types above.
309
- salience: 0.8, // Optional. 0-1. Default: 0.5
310
- causal: { // Optional. What caused this / what it caused.
311
- trigger: 'batch-processing',
312
- consequence: 'queue-backed-up',
313
- },
314
- tags: ['stripe', 'production'], // Optional. Array of strings.
315
- supersedes: 'previous-id', // Optional. ID of episode this corrects.
316
- context: { task: 'debugging' }, // Optional. Situational context for retrieval.
317
- affect: { // Optional. Emotional context.
318
- valence: -0.5, // -1 (negative) to 1 (positive)
319
- arousal: 0.7, // 0 (calm) to 1 (activated)
320
- label: 'frustration', // Human-readable emotion label
321
- },
322
- });
323
- ```
324
-
325
- Episodes are **immutable**. Corrections create new records with `supersedes` links. The original is preserved.
326
-
327
- ### `brain.encodeBatch(paramsList)` → `Promise<string[]>`
328
-
329
- Encode multiple episodes in one call. Same params as `encode()`, but as an array.
330
-
331
- ```js
332
- const ids = await brain.encodeBatch([
333
- { content: 'Stripe returned 429', source: 'direct-observation' },
334
- { content: 'Redis timed out', source: 'tool-result' },
335
- { content: 'User reports slow checkout', source: 'told-by-user' },
336
- ]);
337
- ```
338
-
339
- ### `brain.recall(query, options)` → `Promise<Memory[]>`
340
-
341
- Retrieve memories ranked by `similarity * confidence`.
342
-
343
- ```js
344
- const memories = await brain.recall('stripe rate limits', {
345
- limit: 5, // Max results (default 10)
346
- minConfidence: 0.5, // Filter below this confidence
347
- types: ['semantic'], // Filter by memory type
348
- includeProvenance: true, // Include evidence chains
349
- includeDormant: false, // Include dormant memories
350
- tags: ['rate-limit'], // Only episodic memories with these tags
351
- sources: ['direct-observation'], // Only episodic memories from these sources
352
- after: '2026-02-01T00:00:00Z', // Only memories created after this date
353
- before: '2026-03-01T00:00:00Z', // Only memories created before this date
354
- context: { task: 'debugging' }, // Boost memories encoded in matching context
355
- mood: { valence: -0.3, arousal: 0.5 }, // Mood-congruent retrieval
356
- });
357
- ```
358
-
359
- Tag and source filters only apply to episodic memories (semantic and procedural memories don't have tags or sources). Date filters apply to all memory types.
360
-
361
- Each result:
362
-
363
- ```js
364
- {
365
- id: '01ABC...',
366
- content: 'Stripe enforces ~100 req/s rate limit',
367
- type: 'semantic',
368
- confidence: 0.87,
369
- score: 0.74, // similarity * confidence
370
- source: 'consolidation',
371
- state: 'active',
372
- contextMatch: 0.8, // When retrieval context provided
373
- moodCongruence: 0.7, // When mood provided
374
- provenance: { // When includeProvenance: true
375
- evidenceEpisodeIds: ['01XYZ...', '01DEF...'],
376
- evidenceCount: 3,
377
- supportingCount: 3,
378
- contradictingCount: 0,
379
- },
380
- }
381
- ```
382
-
383
- Retrieval automatically reinforces matched memories (boosts confidence, resets decay clock).
384
-
385
- ### `brain.recallStream(query, options)` → `AsyncGenerator<Memory>`
386
-
387
- Streaming version of `recall()`. Yields results one at a time. Supports early `break`. Same options as `recall()`.
388
-
389
- ```js
390
- for await (const memory of brain.recallStream('stripe issues', { limit: 10 })) {
391
- console.log(memory.content, memory.score);
392
- if (memory.score > 0.9) break;
393
- }
394
- ```
395
-
396
- ### `brain.forget(id, options)` → `ForgetResult`
397
-
398
- Forget a memory by ID. Works on any memory type (episodic, semantic, procedural).
399
-
400
- ```js
401
- brain.forget(memoryId); // soft-delete
402
- brain.forget(memoryId, { purge: true }); // hard-delete (permanent)
403
- // { id, type: 'episodic', purged: false }
404
- ```
405
-
406
- ### `brain.forgetByQuery(query, options)` → `Promise<ForgetResult | null>`
407
-
408
- Find the closest matching memory by semantic search and forget it. Searches all three memory types, picks the best match.
409
-
410
- ```js
411
- const result = await brain.forgetByQuery('old API endpoint', {
412
- minSimilarity: 0.9, // Threshold for match (default 0.9)
413
- purge: false, // Hard-delete? (default false)
414
- });
415
- // null if no match above threshold
416
- ```
417
-
418
- ### `brain.purge()` → `PurgeCounts`
419
-
420
- Bulk hard-delete all dead memories: forgotten episodes, dormant/superseded/rolled-back semantics and procedures.
421
-
422
- ```js
423
- const counts = brain.purge();
424
- // { episodes: 12, semantics: 3, procedures: 0 }
425
- ```
426
-
427
- ### `brain.consolidate(options)` → `Promise<ConsolidationResult>`
428
-
429
- Run the consolidation engine manually.
430
-
431
- ```js
432
- const result = await brain.consolidate({
433
- minClusterSize: 3,
434
- similarityThreshold: 0.80,
435
- extractPrinciple: (episodes) => ({ // Optional LLM callback
436
- content: 'Extracted principle text',
437
- type: 'semantic',
438
- }),
439
- });
440
- // { runId, status, episodesEvaluated, clustersFound, principlesExtracted }
441
- ```
442
-
443
- ### `brain.decay(options)` → `DecayResult`
444
-
445
- Apply forgetting curves. Transitions low-confidence memories to dormant.
446
-
447
- ```js
448
- const result = brain.decay({ dormantThreshold: 0.1 });
449
- // { totalEvaluated, transitionedToDormant, timestamp }
450
- ```
451
-
452
- ### `brain.rollback(runId)` → `RollbackResult`
453
-
454
- Undo a consolidation run.
455
-
456
- ```js
457
- brain.rollback('01ABC...');
458
- // { rolledBackMemories: 3, restoredEpisodes: 9 }
459
- ```
460
-
461
- ### `brain.resolveTruth(contradictionId)` → `Promise<Resolution>`
462
-
463
- Resolve an open contradiction using LLM reasoning. Requires an LLM provider configured.
464
-
465
- ```js
466
- const resolution = await brain.resolveTruth('contradiction-id');
467
- // { resolution: 'context_dependent', conditions: { a: 'live keys', b: 'test keys' }, explanation: '...' }
468
- ```
469
-
470
- ### `brain.introspect()` → `Stats`
471
-
472
- Get memory system health stats.
473
-
474
- ```js
475
- brain.introspect();
476
- // {
477
- // episodic: 247, semantic: 31, procedural: 8,
478
- // causalLinks: 42, dormant: 15,
479
- // contradictions: { open: 2, resolved: 7, context_dependent: 3, reopened: 0 },
480
- // lastConsolidation: '2026-02-18T22:00:00Z',
481
- // totalConsolidationRuns: 14,
482
- // }
483
- ```
484
-
485
- ### `brain.consolidationHistory()` → `ConsolidationRun[]`
486
-
487
- Full audit trail of all consolidation runs.
488
-
489
- ### `brain.export()` / `brain.import(snapshot)`
490
-
491
- Export all memories as a JSON snapshot, or import from one.
492
-
493
- ```js
494
- const snapshot = brain.export(); // { version, episodes, semantics, procedures, ... }
495
- await brain.import(snapshot); // Re-embeds everything with current provider
496
- ```
497
-
498
- ### Events
499
-
500
- ```js
501
- brain.on('encode', ({ id, content, source }) => { ... });
502
- brain.on('reinforcement', ({ episodeId, targetId, similarity }) => { ... });
503
- brain.on('contradiction', ({ episodeId, contradictionId, semanticId, resolution }) => { ... });
504
- brain.on('consolidation', ({ runId, principlesExtracted }) => { ... });
505
- brain.on('decay', ({ totalEvaluated, transitionedToDormant }) => { ... });
506
- brain.on('rollback', ({ runId, rolledBackMemories }) => { ... });
507
- brain.on('forget', ({ id, type, purged }) => { ... });
508
- brain.on('purge', ({ episodes, semantics, procedures }) => { ... });
509
- brain.on('interference', ({ newEpisodeId, suppressedId, similarity }) => { ... });
510
- brain.on('resonance', ({ episodeId, resonances }) => { ... });
511
- brain.on('migration', ({ episodes, semantics, procedures }) => { ... });
512
- brain.on('error', (err) => { ... });
513
- ```
514
-
515
- ### `brain.close()`
516
-
517
- Close the database connection.
518
-
519
- ## Architecture
520
-
521
- ```
522
- audrey-data/
523
- audrey.db <- Single SQLite file. WAL mode. That's your brain.
524
- ```
525
-
526
- ```
527
- src/
528
- audrey.js Main class. EventEmitter. Public API surface.
529
- causal.js Causal graph management. LLM-powered mechanism articulation.
530
- confidence.js Compositional confidence formula. Pure math.
531
- consolidate.js "Sleep" cycle. KNN clustering -> LLM extraction -> promote.
532
- db.js SQLite + sqlite-vec. Schema, vec0 tables, migrations.
533
- decay.js Ebbinghaus forgetting curves.
534
- embedding.js Pluggable providers (Mock, OpenAI). Batch embedding.
535
- encode.js Immutable episodic memory creation + vec0 writes.
536
- affect.js Emotional memory: arousal-salience coupling, mood-congruent recall, resonance.
537
- context.js Context-dependent retrieval modifier (encoding specificity).
538
- interference.js Competitive memory suppression (engram competition).
539
- forget.js Soft-delete, hard-delete, query-based forget, bulk purge.
540
- introspect.js Health dashboard queries.
541
- llm.js Pluggable LLM providers (Mock, Anthropic, OpenAI).
542
- prompts.js Structured prompt templates for LLM operations.
543
- recall.js KNN retrieval + confidence scoring + filtered recall + streaming.
544
- rollback.js Undo consolidation runs.
545
- utils.js Date math, safe JSON parse.
546
- validate.js KNN validation + LLM contradiction detection.
547
- migrate.js Dimension migration re-embedding.
548
- adaptive.js Adaptive consolidation parameter suggestions.
549
- export.js Memory export (JSON snapshots).
550
- import.js Memory import with re-embedding.
551
- index.js Barrel export.
552
-
553
- mcp-server/
554
- index.js MCP tool server (9 tools, stdio transport) + CLI subcommands.
555
- config.js Shared config (env var parsing, install arg builder).
556
- ```
557
-
558
- ### Database Schema
559
-
560
- | Table | Purpose |
561
- |---|---|
562
- | `episodes` | Immutable raw events (content, source, salience, causal context) |
563
- | `semantics` | Consolidated principles (content, state, evidence chain) |
564
- | `procedures` | Learned workflows (trigger conditions, success/failure counts) |
565
- | `causal_links` | Causal relationships (cause, effect, mechanism, link type) |
566
- | `contradictions` | Dispute tracking (claims, state, resolution) |
567
- | `consolidation_runs` | Audit trail (inputs, outputs, status) |
568
- | `vec_episodes` | sqlite-vec KNN index for episode embeddings |
569
- | `vec_semantics` | sqlite-vec KNN index for semantic embeddings |
570
- | `vec_procedures` | sqlite-vec KNN index for procedural embeddings |
571
- | `audrey_config` | Dimension configuration and metadata |
572
-
573
- All mutations use SQLite transactions. CHECK constraints enforce valid states and source types. Vector search uses sqlite-vec with cosine distance.
574
-
575
- ## Running Tests
576
-
577
- ```bash
578
- npm test # 379 tests across 28 files
579
- npm run test:watch
580
- ```
581
-
582
- ## Running the Demo
583
-
584
- ```bash
585
- node examples/stripe-demo.js
586
- ```
587
-
588
- Demonstrates the full pipeline: encode 3 rate-limit observations, consolidate into principle, recall proactively.
589
-
590
- ---
591
-
592
- ## Changelog
593
-
594
- ### v0.9.0 — Emotional Memory (current)
595
-
596
- - Valence-arousal affect model (Russell's circumplex) on every episode
597
- - Arousal-salience coupling via Yerkes-Dodson inverted-U curve
598
- - Mood-congruent recall — matching emotional state boosts retrieval confidence
599
- - Emotional resonance detection — new experiences that echo past emotional patterns emit events
600
- - MCP server: `memory_encode` accepts `affect`, `memory_recall` accepts `mood`
601
- - 379 tests across 28 test files
602
-
603
- ### v0.8.0 — Context-Dependent Retrieval
604
-
605
- - Encoding specificity principle: context stored with memory, matching context boosts recall
606
- - MCP server: `memory_encode` and `memory_recall` accept `context`
607
- - 340 tests across 27 test files
608
-
609
- ### v0.7.0 — Interference + Salience
610
-
611
- - Interference-based forgetting: new memories competitively suppress similar existing ones
612
- - Salience-weighted confidence: high-salience memories resist decay
613
- - Spaced-repetition reconsolidation: retrieval intervals affect reinforcement strength
614
- - 310 tests across 25 test files
615
-
616
- ### v0.6.0 — Filtered Recall + Forget
617
-
618
- - Filtered recall: tag, source, and date-range filters on `recall()` and `recallStream()`
619
- `forget()` — soft-delete any memory by ID
620
- `forgetByQuery()` — find closest match by semantic search and forget it
621
- - `purge()` — bulk hard-delete all forgotten/dormant/superseded memories
622
- - `memory_forget` and `memory_decay` MCP tools (9 tools total)
623
- - 278 tests across 23 files
624
-
625
- ### v0.5.0 — Feature Depth
626
-
627
- - Configurable confidence weights and decay rates per instance
628
- - Memory export/import (JSON snapshots with re-embedding)
629
- - `memory_export` and `memory_import` MCP tools
630
- - Auto-consolidation scheduling
631
- - Adaptive consolidation parameter suggestions
632
- - 243 tests across 22 files
633
-
634
- ### v0.3.1 — MCP Server
635
-
636
- - MCP tool server via `@modelcontextprotocol/sdk` with stdio transport
637
- - One-command install: `npx audrey install` (auto-detects API keys)
638
- - CLI subcommands: `install`, `uninstall`, `status`
639
- - JSDoc type annotations on all public exports
640
- - Published to npm
641
- - 194 tests across 17 files
642
-
643
- ### v0.3.0 — Vector Performance
644
-
645
- - sqlite-vec native vector indexing (vec0 virtual tables with cosine distance)
646
- - KNN queries for recall, validation, and consolidation clustering
647
- - Batch encoding API and streaming recall with async generators
648
- - Dimension configuration and automatic migration from v0.2.0
649
- - 168 tests across 16 files
650
-
651
- ### v0.2.0 — LLM Integration
652
-
653
- - LLM-powered principle extraction, contradiction detection, causal articulation
654
- - Context-dependent truth resolution
655
- - Configurable LLM providers (Mock, Anthropic, OpenAI)
656
- - 142 tests across 15 files
657
-
658
- ### v0.1.0 — Foundation
659
-
660
- - Immutable episodic memory, compositional confidence, Ebbinghaus forgetting curves
661
- - Consolidation engine, contradiction lifecycle, rollback
662
- - Circular self-confirmation defense, causal context, introspection
663
- - 104 tests across 12 files
664
-
665
- ## Design Decisions
666
-
667
- **Why SQLite, not Postgres?** Zero infrastructure. `npm install` and you have a brain. The adapter pattern means you can migrate to pgvector when you need to scale.
668
-
669
- **Why append-only episodes?** Immutability creates a reliable audit trail. Corrections use `supersedes` links rather than mutations. You can always trace back to what actually happened.
670
-
671
- **Why Ebbinghaus curves?** Biological forgetting is an adaptive feature, not a bug. It prevents cognitive overload, maintains relevance, and enables generalization. Audrey's forgetting works the same way.
672
-
673
- **Why model-generated cap at 0.6?** Prevents the most dangerous exploit in AI memory: circular self-confirmation where an agent's own inferences bootstrap themselves into high-confidence "facts" through repeated retrieval.
674
-
675
- **Why soft-delete by default?** Hard-deletes are irreversible. Soft-delete preserves data integrity and audit trails while excluding the memory from recall. Use `purge: true` or `brain.purge()` when you need permanent removal (GDPR, storage cleanup).
676
-
677
- **Why emotional memory?** Every memory system stores facts. Biological memory stores facts with emotional context — and that context changes how memories are retrieved. Emotional arousal modulates encoding strength (amygdala-hippocampal interaction). Current mood biases which memories surface (Bower, 1981). This isn't a novelty feature — it's the foundation for AI that remembers like it cares.
678
-
679
- ## License
680
-
681
- MIT
1
+ # Audrey
2
+
3
+ Biological memory architecture for AI agents. Memory that decays, consolidates, feels, and learns — not just a database.
4
+
5
+ ## Why Audrey Exists
6
+
7
+ Every AI memory tool today (Mem0, Zep, LangChain Memory) is a filing cabinet. Store stuff, retrieve stuff. None of them do what biological memory actually does:
8
+
9
+ - Memories don't decay. A fact from 6 months ago has the same weight as one from today.
10
+ - No consolidation. Raw events never become general principles.
11
+ - No contradiction detection. Conflicting facts coexist silently.
12
+ - No self-defense. If an agent hallucinates and encodes the hallucination, it becomes "truth."
13
+
14
+ Audrey fixes all of this by modeling memory the way the brain does:
15
+
16
+ | Brain Structure | Audrey Component | What It Does |
17
+ |---|---|---|
18
+ | Hippocampus | Episodic Memory | Fast capture of raw events and observations |
19
+ | Neocortex | Semantic Memory | Consolidated principles and patterns |
20
+ | Cerebellum | Procedural Memory | Learned workflows and conditional behaviors |
21
+ | Sleep Replay | Dream Cycle | Consolidates episodes into principles, applies decay |
22
+ | Prefrontal Cortex | Validation Engine | Truth-checking, contradiction detection |
23
+ | Amygdala | Affect System | Emotional encoding, arousal-salience coupling, mood-congruent recall |
24
+
25
+ ## Install
26
+
27
+ ### MCP Server for Claude Code (one command)
28
+
29
+ ```bash
30
+ npx audrey install
31
+ ```
32
+
33
+ That's it. Audrey auto-detects API keys from your environment:
34
+
35
+ - `GOOGLE_API_KEY` or `GEMINI_API_KEY` set? Uses Gemini embeddings (3072d).
36
+ - Neither? Runs with local embeddings (384d, MiniLM via @huggingface/transformers — zero API key, works offline).
37
+ - `AUDREY_EMBEDDING_PROVIDER=openai` for explicit OpenAI embeddings (1536d).
38
+ - `ANTHROPIC_API_KEY` set? Enables LLM-powered consolidation, contradiction detection, and reflection.
39
+
40
+ ```bash
41
+ # Check status
42
+ npx audrey status
43
+
44
+ # Uninstall
45
+ npx audrey uninstall
46
+ ```
47
+
48
+ Every Claude Code session now has 13 memory tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_dream`, `memory_introspect`, `memory_resolve_truth`, `memory_export`, `memory_import`, `memory_forget`, `memory_decay`, `memory_status`, `memory_reflect`, `memory_greeting`.
49
+
50
+ ### CLI Subcommands
51
+
52
+ ```bash
53
+ npx audrey install # Register MCP server with Claude Code
54
+ npx audrey uninstall # Remove MCP server registration
55
+ npx audrey status # Show memory store health and stats
56
+ npx audrey greeting # Output session briefing (mood, principles, recent memories)
57
+ npx audrey greeting "auth" # Briefing + context-relevant memories for "auth"
58
+ npx audrey reflect # Reflect on conversation + dream cycle (reads turns from stdin)
59
+ npx audrey dream # Run consolidation + decay cycle
60
+ npx audrey reembed # Re-embed all memories with current provider
61
+ ```
62
+
63
+ `greeting` and `reflect` are designed for Claude Code hooks — wire them into SessionStart and Stop events for automatic memory lifecycle.

### SDK in Your Code

```bash
npm install audrey
```

Zero external infrastructure. One SQLite file.

## Usage

```js
import { Audrey } from 'audrey';

// 1. Create a brain
const brain = new Audrey({
  dataDir: './agent-memory',
  agent: 'my-agent',
  embedding: { provider: 'local', dimensions: 384 }, // or 'gemini', 'openai'
});

// 2. Encode observations with optional emotional context
await brain.encode({
  content: 'Stripe API returns 429 above 100 req/s',
  source: 'direct-observation',
  tags: ['stripe', 'rate-limit'],
  affect: { valence: -0.4, arousal: 0.7, label: 'frustration' },
});

// 3. Recall what you know — mood-congruent retrieval
const memories = await brain.recall('stripe rate limits', {
  limit: 5,
  mood: { valence: -0.3 }, // frustrated right now? memories encoded in frustration surface first
});

// 4. Filtered recall — by tag, source, or date range
const recent = await brain.recall('stripe', {
  tags: ['rate-limit'],
  sources: ['direct-observation'],
  after: '2026-02-01T00:00:00Z',
  context: { task: 'debugging', domain: 'payments' }, // context-dependent retrieval
});

// 5. Dream — the biological sleep cycle
const dream = await brain.dream();
// Consolidates episodes into principles, applies forgetting curves, reports health

// 6. Reflect on a conversation — form lasting memories
const result = await brain.reflect([
  { role: 'user', content: 'How do I handle rate limits?' },
  { role: 'assistant', content: 'Use exponential backoff with jitter...' },
]);
// LLM extracts what matters, encodes it as lasting memories

// 7. Session greeting — wake up with context
const briefing = await brain.greeting({ context: 'debugging stripe' });
// Returns mood, principles, recent memories, identity, unresolved threads

// 8. Forget something
brain.forget(memoryId); // soft-delete
brain.forget(memoryId, { purge: true }); // hard-delete
await brain.forgetByQuery('old API endpoint', { minSimilarity: 0.9 });

// 9. Check brain health
const stats = brain.introspect();
// { episodic: 47, semantic: 12, procedural: 3, dormant: 8, ... }

// 10. Clean up
brain.close();
```

### Configuration

```js
const brain = new Audrey({
  dataDir: './audrey-data', // SQLite database directory
  agent: 'my-agent', // Agent identifier

  // Embedding provider (required)
  embedding: {
    provider: 'local', // 'mock' (test), 'local' (384d MiniLM), 'gemini' (3072d), 'openai' (1536d)
    dimensions: 384, // Must match provider
    apiKey: '...', // Required for gemini/openai
    device: 'gpu', // 'gpu' or 'cpu' for local provider only
  },

  // LLM provider (optional — enables smart consolidation + contradiction detection + reflection)
  llm: {
    provider: 'anthropic', // 'mock', 'anthropic', or 'openai'
    apiKey: '...', // Required for anthropic/openai
    model: 'claude-sonnet-4-6', // Optional model override
  },

  // Consolidation settings
  consolidation: {
    minEpisodes: 3, // Minimum cluster size for principle extraction
  },

  // Context-dependent retrieval
  context: {
    enabled: true, // Enable encoding-specificity principle
    weight: 0.3, // Max 30% confidence boost on full context match
  },

  // Emotional memory
  affect: {
    enabled: true, // Enable affect system
    weight: 0.2, // Max 20% mood-congruence boost
    arousalWeight: 0.3, // Yerkes-Dodson arousal-salience coupling
    resonance: { // Detect emotional echoes across experiences
      enabled: true,
      k: 5, // How many past episodes to check
      threshold: 0.5, // Semantic similarity threshold
      affectThreshold: 0.6, // Emotional similarity threshold
    },
  },

  // Interference-based forgetting
  interference: {
    enabled: true, // New episodes suppress similar existing memories
    weight: 0.15, // Suppression strength
  },

  // Decay settings
  decay: {
    dormantThreshold: 0.1, // Below this confidence = dormant
  },
});
```
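
The `arousalWeight` option above controls arousal-salience coupling. As a rough sketch of the inverted-U idea — the exact curve Audrey uses is internal; this quadratic is an assumption for illustration:

```js
// Inverted-U (Yerkes-Dodson) sketch: moderate arousal boosts effective
// salience the most; flat calm and extreme panic contribute least.
function arousalBoost(arousal, weight = 0.3) {
  // 4 * a * (1 - a) peaks at 1 when a = 0.5 and hits 0 at both extremes.
  return 4 * arousal * (1 - arousal) * weight;
}

arousalBoost(0.5); // peak boost of 0.3
arousalBoost(0.0); // 0
arousalBoost(1.0); // 0
```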

**Without an LLM provider**, consolidation uses a default text-based extractor and contradiction detection is similarity-only. **With an LLM provider**, Audrey extracts real generalized principles (semantic and procedural), detects semantic contradictions, resolves context-dependent truths, and reflects on conversations to form lasting memories.

### Environment Variables (MCP Server)

| Variable | Default | Purpose |
|---|---|---|
| `AUDREY_DATA_DIR` | `~/.audrey/data` | SQLite database directory |
| `AUDREY_AGENT` | `claude-code` | Agent identifier |
| `AUDREY_EMBEDDING_PROVIDER` | auto-detect | `local`, `gemini`, `openai`, or `mock` |
| `AUDREY_LLM_PROVIDER` | auto-detect | `anthropic`, `openai`, or `mock` |
| `AUDREY_DEVICE` | `gpu` | Device for local embedding provider |
| `GOOGLE_API_KEY` | — | Gemini embeddings (auto-selected when present) |
| `ANTHROPIC_API_KEY` | — | Anthropic LLM (consolidation, reflection, contradiction detection) |
| `OPENAI_API_KEY` | — | OpenAI embeddings/LLM (must be explicitly selected for embeddings) |

## Core Concepts

### Four Memory Types

**Episodic** (hot, fast decay) — Raw events. "Stripe returned 429 at 3pm." Immutable. Append-only. Never modified.

**Semantic** (warm, slow decay) — Consolidated principles. "Stripe enforces 100 req/s rate limit." Extracted automatically from clusters of episodic memories.

**Procedural** (cold, slowest decay) — Learned workflows. "When Stripe rate-limits, implement exponential backoff." Skills the agent has acquired. Routed automatically when the LLM identifies a principle as procedural.

**Causal** — Why things happened. Not just "A then B" but "A caused B because of mechanism C." Prevents correlation-as-causation.

### Confidence Formula

Every memory has a compositional confidence score:

```
C(m, t) = w_s * S + w_e * E + w_r * R(t) + w_ret * Ret(t)
```

| Component | What It Measures | Default Weight |
|---|---|---|
| **S** — Source reliability | How trustworthy is the origin? | 0.30 |
| **E** — Evidence agreement | Do observations agree or contradict? | 0.35 |
| **R(t)** — Recency decay | How old is the memory? (Ebbinghaus curve) | 0.20 |
| **Ret(t)** — Retrieval reinforcement | How often is this memory accessed? | 0.15 |

Source reliability hierarchy:

| Source Type | Reliability |
|---|---|
| `direct-observation` | 0.95 |
| `told-by-user` | 0.90 |
| `tool-result` | 0.85 |
| `inference` | 0.60 |
| `model-generated` | 0.40 (capped at 0.6 confidence) |

The `model-generated` cap prevents circular self-confirmation — an agent can't boost its own hallucinations into high-confidence "facts."
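
As arithmetic, the formula with the default weights works out like this — a sketch where `S`, `E`, `R`, and `Ret` are passed in precomputed, whereas Audrey derives them internally:

```js
// Compositional confidence with the default weights from the table above.
const WEIGHTS = { source: 0.30, evidence: 0.35, recency: 0.20, retrieval: 0.15 };

function confidence({ S, E, R, Ret }) {
  return WEIGHTS.source * S
    + WEIGHTS.evidence * E
    + WEIGHTS.recency * R
    + WEIGHTS.retrieval * Ret;
}

// A fresh direct observation: reliable source, neutral evidence so far,
// no decay yet, never retrieved.
confidence({ S: 0.95, E: 0.5, R: 1.0, Ret: 0.0 }); // ≈ 0.66
```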

### Decay (Forgetting Curves)

Unreinforced memories lose confidence over time following Ebbinghaus exponential decay:

| Memory Type | Half-Life | Rationale |
|---|---|---|
| Episodic | 7 days | Raw events go stale fast |
| Semantic | 30 days | Principles are hard-won |
| Procedural | 90 days | Skills are slowest to forget |

Retrieval resets the decay clock. Frequently accessed memories persist. Memories below the dormant threshold (0.1) become dormant — still searchable with `includeDormant: true`, but excluded from default recall.
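
The half-lives translate directly into an exponential curve. A sketch, assuming the standard half-life form (Audrey's internal decay function may differ in detail):

```js
// confidence(t) = c0 * 0.5^(elapsedDays / halfLifeDays)
const HALF_LIFE_DAYS = { episodic: 7, semantic: 30, procedural: 90 };

function decayedConfidence(c0, type, elapsedDays) {
  return c0 * Math.pow(0.5, elapsedDays / HALF_LIFE_DAYS[type]);
}

decayedConfidence(0.8, 'episodic', 7); // 0.4 — one half-life gone
decayedConfidence(0.8, 'semantic', 7); // ≈ 0.68 — principles barely fade in a week
```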

### Dream Cycle

`brain.dream()` runs the full biological sleep analog:

1. **Consolidate** — Cluster similar episodic memories via KNN, extract principles via LLM, route to semantic or procedural tables
2. **Decay** — Apply forgetting curves, transition low-confidence memories to dormant
3. **Introspect** — Report memory system health

The pipeline is fully transactional — if any cluster fails mid-run, all writes roll back. Consolidation is idempotent: re-running on the same data produces no duplicates.

### Consolidation Routing

When the LLM extracts a principle, it classifies it:

- `type: 'semantic'` → goes to the `semantics` table (general knowledge)
- `type: 'procedural'` → goes to the `procedures` table with `trigger_conditions` (actionable skills)

### Contradiction Handling

When memories conflict, Audrey doesn't force a winner. Contradictions have a lifecycle:

```
open -> resolved | context_dependent | reopened
```

Context-dependent truths are modeled explicitly:

```js
// "Stripe rate limit is 100 req/s" (live keys)
// "Stripe rate limit is 25 req/s" (test keys)
// Both true — under different conditions
```

New high-confidence evidence can reopen resolved disputes.
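
The lifecycle above can be read as a small state machine. A sketch — the transition table is inferred from the diagram and the reopening rule, and the real implementation lives in the contradiction tracker:

```js
// Which states a contradiction may move to next.
const TRANSITIONS = {
  open: ['resolved', 'context_dependent'],
  resolved: ['reopened'],
  context_dependent: ['reopened'],
  reopened: ['resolved', 'context_dependent'],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] ?? []).includes(to);
}

canTransition('open', 'resolved');     // true
canTransition('resolved', 'reopened'); // true — new evidence reopens the dispute
canTransition('resolved', 'open');     // false
```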

### Forget and Purge

Memories can be explicitly forgotten — by ID or by semantic query:

**Soft-delete** (default) — Marks the memory as forgotten/superseded and removes its vector index. The record stays in the database but is excluded from recall. Reversible via direct database access.

**Hard-delete** (`purge: true`) — Permanently removes the memory from both the main table and the vector index. Irreversible.

**Bulk purge** — Removes all forgotten, dormant, superseded, and rolled-back memories in one operation. Useful for GDPR compliance or storage cleanup.

### Rollback

Bad consolidation? Undo it:

```js
const history = brain.consolidationHistory();
brain.rollback(history[0].id);
// Semantic memories -> rolled_back state
// Source episodes -> un-consolidated
// Full audit trail preserved
```

### Circular Self-Confirmation Defense

The most dangerous exploit in AI memory: agent hallucinates X, encodes it, later retrieves it, "reinforcement" boosts confidence, X eventually consolidates as "established truth."

Audrey's defenses:

1. **Source diversity requirement** — Consolidation requires evidence from 2+ distinct source types
2. **Model-generated cap** — Memories from `model-generated` sources are capped at 0.6 confidence
3. **Source lineage tracking** — Provenance chains detect when all evidence traces back to a single inference
4. **Source diversity score** — Every semantic memory tracks how many different source types contributed
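
Defense #1 is simple to picture. A sketch — the episode shape and function names are illustrative; Audrey enforces this inside the consolidation engine:

```js
// Count distinct source types across a cluster's evidence.
function sourceDiversity(episodes) {
  return new Set(episodes.map((e) => e.source)).size;
}

// Consolidation requires evidence from 2+ distinct source types.
function mayConsolidate(episodes) {
  return sourceDiversity(episodes) >= 2;
}

mayConsolidate([
  { source: 'model-generated' },
  { source: 'model-generated' },
  { source: 'model-generated' },
]); // false — every trail leads back to the model itself

mayConsolidate([
  { source: 'model-generated' },
  { source: 'tool-result' },
]); // true
```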

## API Reference

### `new Audrey(config)`

See [Configuration](#configuration) above for all options.

### `brain.encode(params)` -> `Promise<string>`

Encode an episodic memory. Returns the memory ID.

```js
const id = await brain.encode({
  content: 'What happened', // Required. Non-empty string, max 50000 chars.
  source: 'direct-observation', // Required. See source types above.
  salience: 0.8, // Optional. 0-1. Default: 0.5
  causal: { // Optional. What caused this / what it caused.
    trigger: 'batch-processing',
    consequence: 'queue-backed-up',
  },
  tags: ['stripe', 'production'], // Optional. Array of strings.
  supersedes: 'previous-id', // Optional. ID of episode this corrects.
  context: { task: 'debugging' }, // Optional. Situational context for retrieval.
  affect: { // Optional. Emotional context.
    valence: -0.5, // -1 (negative) to 1 (positive)
    arousal: 0.7, // 0 (calm) to 1 (activated)
    label: 'frustration', // Human-readable emotion label
  },
  private: true, // Optional. If true, excluded from public recall.
});
```

Episodes are **immutable**. Corrections create new records with `supersedes` links. The original is preserved.

### `brain.encodeBatch(paramsList)` -> `Promise<string[]>`

Encode multiple episodes in one call. Same params as `encode()`, but as an array.

```js
const ids = await brain.encodeBatch([
  { content: 'Stripe returned 429', source: 'direct-observation' },
  { content: 'Redis timed out', source: 'tool-result' },
  { content: 'User reports slow checkout', source: 'told-by-user' },
]);
```

### `brain.recall(query, options)` -> `Promise<Memory[]>`

Retrieve memories ranked by `similarity * confidence`.

```js
const memories = await brain.recall('stripe rate limits', {
  limit: 5, // Max results (default 10, max 50)
  minConfidence: 0.5, // Filter below this confidence
  types: ['semantic'], // Filter by memory type
  includeProvenance: true, // Include evidence chains
  includeDormant: false, // Include dormant memories
  tags: ['rate-limit'], // Only episodic memories with these tags
  sources: ['direct-observation'], // Only episodic memories from these sources
  after: '2026-02-01T00:00:00Z', // Only memories created after this date
  before: '2026-03-01T00:00:00Z', // Only memories created before this date
  context: { task: 'debugging' }, // Boost memories encoded in matching context
  mood: { valence: -0.3, arousal: 0.5 }, // Mood-congruent retrieval
});
```

Tag and source filters only apply to episodic memories (semantic and procedural memories don't have tags or sources). Date filters apply to all memory types. Recall gracefully degrades — if one memory type's vector search fails, the others still return results.

Each result:

```js
{
  id: '01ABC...',
  content: 'Stripe enforces ~100 req/s rate limit',
  type: 'semantic',
  confidence: 0.87,
  score: 0.74, // similarity * confidence
  source: 'consolidation',
  state: 'active',
  contextMatch: 0.8, // When retrieval context provided
  moodCongruence: 0.7, // When mood provided
  provenance: { // When includeProvenance: true
    evidenceEpisodeIds: ['01XYZ...', '01DEF...'],
    evidenceCount: 3,
    supportingCount: 3,
    contradictingCount: 0,
  },
}
```

Retrieval automatically reinforces matched memories (boosts confidence, resets decay clock).
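
The `score` ranking is just the product described above. A sketch of how results are ordered — field names mirror the result shape; the sort itself is illustrative:

```js
// Order candidates by similarity * confidence, highest first.
function rank(candidates) {
  return candidates
    .map((m) => ({ ...m, score: m.similarity * m.confidence }))
    .sort((a, b) => b.score - a.score);
}

const ranked = rank([
  { id: 'a', similarity: 0.9, confidence: 0.5 }, // score 0.45
  { id: 'b', similarity: 0.7, confidence: 0.8 }, // score ≈ 0.56
]);
ranked[0].id; // 'b' — a confident principle beats a close-but-shaky match
```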

### `brain.recallStream(query, options)` -> `AsyncGenerator<Memory>`

Streaming version of `recall()`. Yields results one at a time. Supports early `break`. Same options as `recall()`.

```js
for await (const memory of brain.recallStream('stripe issues', { limit: 10 })) {
  console.log(memory.content, memory.score);
  if (memory.score > 0.9) break;
}
```

### `brain.dream(options)` -> `Promise<DreamResult>`

Run the full biological sleep cycle: consolidate + decay + introspect.

```js
const result = await brain.dream({
  minClusterSize: 3, // Min episodes per cluster
  similarityThreshold: 0.85, // KNN clustering threshold
  dormantThreshold: 0.1, // Below this = dormant
});
// {
//   consolidation: { episodesEvaluated, clustersFound, principlesExtracted, semanticsCreated, proceduresCreated },
//   decay: { totalEvaluated, transitionedToDormant },
//   stats: { episodic, semantic, procedural, ... },
// }
```

### `brain.reflect(turns)` -> `Promise<ReflectResult>`

Feed a conversation to the LLM and extract lasting memories. Requires an LLM provider.

```js
const result = await brain.reflect([
  { role: 'user', content: 'How do I handle rate limits?' },
  { role: 'assistant', content: 'Use exponential backoff...' },
]);
// { encoded: 2, memories: [...] }
```

### `brain.greeting(options)` -> `Promise<GreetingResult>`

Session-start briefing. Returns mood, principles, identity, recent memories, and unresolved threads.

```js
const briefing = await brain.greeting({
  context: 'debugging stripe', // Optional — also returns relevant memories
  recentLimit: 10,
  principleLimit: 5,
  identityLimit: 5,
});
// { recent, principles, mood, unresolved, identity, contextual }
```

### `brain.forget(id, options)` -> `ForgetResult`

Forget a memory by ID. Works on any memory type (episodic, semantic, procedural).

```js
brain.forget(memoryId); // soft-delete
brain.forget(memoryId, { purge: true }); // hard-delete (permanent)
// { id, type: 'episodic', purged: false }
```

### `brain.forgetByQuery(query, options)` -> `Promise<ForgetResult | null>`

Find the closest matching memory by semantic search and forget it. Searches all three memory types, picks the best match.

```js
const result = await brain.forgetByQuery('old API endpoint', {
  minSimilarity: 0.9, // Threshold for match (default 0.9)
  purge: false, // Hard-delete? (default false)
});
// null if no match above threshold
```

### `brain.purge()` -> `PurgeCounts`

Bulk hard-delete all dead memories: forgotten episodes, dormant/superseded/rolled-back semantics and procedures.

```js
const counts = brain.purge();
// { episodes: 12, semantics: 3, procedures: 0 }
```

### `brain.consolidate(options)` -> `Promise<ConsolidationResult>`

Run the consolidation engine manually. Fully transactional — if any cluster fails, all writes roll back.
507
+
508
+ ```js
509
+ const result = await brain.consolidate({
510
+ minClusterSize: 3,
511
+ similarityThreshold: 0.80,
512
+ extractPrinciple: (episodes) => ({ // Optional LLM callback
513
+ content: 'Extracted principle text',
514
+ type: 'semantic', // or 'procedural'
515
+ conditions: ['trigger conditions'], // for procedural only
516
+ }),
517
+ });
518
+ // { runId, status, episodesEvaluated, clustersFound, principlesExtracted, semanticsCreated, proceduresCreated }
519
+ ```
520
+
521
+ ### `brain.decay(options)` -> `DecayResult`
522
+
523
+ Apply forgetting curves. Transitions low-confidence memories to dormant.
524
+
525
+ ```js
526
+ const result = brain.decay({ dormantThreshold: 0.1 });
527
+ // { totalEvaluated, transitionedToDormant, timestamp }
528
+ ```
529
+
530
+ ### `brain.memoryStatus()` -> `HealthStatus`
531
+
532
+ Check brain health: vector index sync, dimension consistency, re-embed recommendations.
533
+
534
+ ```js
535
+ brain.memoryStatus();
536
+ // { healthy, vec_episodes, searchable_episodes, vec_semantics, ..., reembed_recommended }
537
+ ```
538
+
539
+ ### `brain.rollback(runId)` -> `RollbackResult`
540
+
541
+ Undo a consolidation run.
542
+
543
+ ```js
544
+ brain.rollback('01ABC...');
545
+ // { rolledBackMemories: 3, restoredEpisodes: 9 }
546
+ ```
547
+
548
+ ### `brain.resolveTruth(contradictionId)` -> `Promise<Resolution>`
549
+
550
+ Resolve an open contradiction using LLM reasoning. Requires an LLM provider configured.
551
+
552
+ ```js
553
+ const resolution = await brain.resolveTruth('contradiction-id');
554
+ // { resolution: 'context_dependent', conditions: { a: 'live keys', b: 'test keys' }, explanation: '...' }
555
+ ```
556
+
557
+ ### `brain.introspect()` -> `Stats`
558
+
559
+ Get memory system health stats.
560
+
561
+ ```js
562
+ brain.introspect();
563
+ // {
564
+ // episodic: 247, semantic: 31, procedural: 8,
565
+ // causalLinks: 42, dormant: 15,
566
+ // contradictions: { open: 2, resolved: 7, context_dependent: 3, reopened: 0 },
567
+ // lastConsolidation: '2026-02-18T22:00:00Z',
568
+ // totalConsolidationRuns: 14,
569
+ // }
570
+ ```
571
+
572
+ ### `brain.consolidationHistory()` -> `ConsolidationRun[]`
573
+
574
+ Full audit trail of all consolidation runs.
575
+
576
+ ### `brain.export()` / `brain.import(snapshot)`
577
+
578
+ Export all memories as a JSON snapshot, or import from one. Full-fidelity: preserves consolidation metrics, run metadata, and config. Import re-embeds everything with the current provider in a single atomic transaction.
579
+
580
+ ```js
581
+ const snapshot = brain.export(); // { version, episodes, semantics, procedures, consolidationMetrics, ... }
582
+ await brain.import(snapshot); // Re-embeds everything with current provider
583
+ ```
584
+
585
+ ### Events
586
+
587
+ ```js
588
+ brain.on('encode', ({ id, content, source }) => { ... });
589
+ brain.on('reinforcement', ({ episodeId, targetId, similarity }) => { ... });
590
+ brain.on('contradiction', ({ episodeId, contradictionId, semanticId, resolution }) => { ... });
591
+ brain.on('consolidation', ({ runId, principlesExtracted }) => { ... });
592
+ brain.on('decay', ({ totalEvaluated, transitionedToDormant }) => { ... });
593
+ brain.on('dream', ({ consolidation, decay, stats }) => { ... });
594
+ brain.on('rollback', ({ runId, rolledBackMemories }) => { ... });
595
+ brain.on('forget', ({ id, type, purged }) => { ... });
596
+ brain.on('purge', ({ episodes, semantics, procedures }) => { ... });
597
+ brain.on('interference', ({ newEpisodeId, suppressedId, similarity }) => { ... });
598
+ brain.on('resonance', ({ episodeId, resonances }) => { ... });
599
+ brain.on('migration', ({ episodes, semantics, procedures }) => { ... });
600
+ brain.on('error', (err) => { ... });
601
+ ```
602
+
603
+ ### `brain.close()`
604
+
605
+ Close the database connection.
606
+
607
+ ## Architecture
608
+
609
+ ```
610
+ audrey-data/
611
+ audrey.db <- Single SQLite file. WAL mode. That's your brain.
612
+ ```
613
+
614
+ ```
615
+ src/
616
+ audrey.js Main class. EventEmitter. Public API surface.
617
+ causal.js Causal graph management. LLM-powered mechanism articulation.
618
+ confidence.js Compositional confidence formula. Pure math.
619
+ consolidate.js "Sleep" cycle. KNN clustering -> LLM extraction -> promote.
620
+ db.js SQLite + sqlite-vec. Schema, vec0 tables, migrations.
621
+ decay.js Ebbinghaus forgetting curves.
622
+ embedding.js Pluggable providers (Mock, Local/MiniLM, Gemini, OpenAI). Batch embedding.
623
+ encode.js Immutable episodic memory creation + vec0 writes.
624
+ affect.js Emotional memory: arousal-salience coupling, mood-congruent recall, resonance.
625
+ context.js Context-dependent retrieval modifier (encoding specificity).
626
+ interference.js Competitive memory suppression (engram competition).
627
+ forget.js Soft-delete, hard-delete, query-based forget, bulk purge.
628
+ introspect.js Health dashboard queries.
629
+ llm.js Pluggable LLM providers (Mock, Anthropic, OpenAI).
630
+ prompts.js Structured prompt templates for LLM operations.
631
+ recall.js KNN retrieval + confidence scoring + filtered recall + streaming.
632
+ rollback.js Undo consolidation runs.
633
+ utils.js Date math, safe JSON parse.
634
+ validate.js KNN validation + LLM contradiction detection.
635
+ migrate.js Dimension migration re-embedding.
636
+ adaptive.js Adaptive consolidation parameter suggestions.
637
+ export.js Memory export (JSON snapshots with consolidation metrics).
638
+ import.js Memory import with batch re-embedding in atomic transactions.
639
+ index.js SDK barrel export (all providers, database utilities).
640
+
641
+ mcp-server/
642
+ index.js MCP tool server (13 tools, stdio transport) + CLI subcommands.
643
+ config.js Shared config (env var parsing, provider resolution, install arg builder).
644
+ ```

### Database Schema

| Table | Purpose |
|---|---|
| `episodes` | Immutable raw events (content, source, salience, causal context, affect, private flag) |
| `semantics` | Consolidated principles (content, state, evidence chain) |
| `procedures` | Learned workflows (trigger conditions, success/failure counts) |
| `causal_links` | Causal relationships (cause, effect, mechanism, link type) |
| `contradictions` | Dispute tracking (claims, state, resolution) |
| `consolidation_runs` | Audit trail (inputs, outputs, status, checkpoint cursor) |
| `consolidation_metrics` | Per-run metrics and confidence deltas |
| `vec_episodes` | sqlite-vec KNN index for episode embeddings |
| `vec_semantics` | sqlite-vec KNN index for semantic embeddings |
| `vec_procedures` | sqlite-vec KNN index for procedural embeddings |
| `audrey_config` | Dimension configuration, embedding model info, metadata |

All mutations use SQLite transactions. CHECK constraints enforce valid states and source types. Vector search uses sqlite-vec with cosine distance.

## Running Tests

```bash
npm test # 463 tests across 29 files
npm run test:watch
```

## Running the Demo

```bash
node examples/stripe-demo.js
```

Demonstrates the full pipeline: encode 3 rate-limit observations, consolidate them into a principle, recall proactively.

---

## Changelog

### v0.16.0 (current)

- Version bump for npm publish with all v0.15.0 features included
- 463 tests across 29 test files

### v0.15.0 — Production Hardening + Dream Cycle

- `dream()` method: consolidation + decay + introspect (biological sleep analog)
- `memory_dream` MCP tool with configurable thresholds
- `greeting` and `reflect` CLI subcommands for hook integration
- Consolidation routes procedural principles to `procedures` table (previously all went to semantics)
- Fully transactional consolidation — mid-run failures roll back all writes
- Recall gracefully degrades per memory type (independent try/catch per KNN search)
- sqlite-vec crash guard for empty vector tables
- LLM JSON parsing strips markdown code fences from any provider
- Input validation: empty content rejected, 50K char limit, forget requires exactly one target
- Full-fidelity export/import: preserves consolidation metrics, run metadata, config
- Import uses batch embedding in a single atomic transaction
- Expanded SDK exports: all embedding/LLM providers, database utilities
- Shared `resolveLLMConfig()` for CLI commands
- 463 tests across 29 test files

### v0.14.0 — Memory Intelligence

- `memory_reflect` MCP tool — form lasting memories from conversation turns
- `memory_greeting` MCP tool — session-start context briefing
- `greeting()` method: mood, principles, identity, recent memories, unresolved threads
- `reflect()` method: LLM-powered conversation analysis and memory formation
- Rewritten consolidation prompt for deeper principle extraction
- Rewritten reflection prompt for relational and emotional depth
- `npx audrey status` shows last consolidation time

### v0.13.0 — GPU-Accelerated Embeddings

- GPU device configuration for LocalEmbeddingProvider
- True single-forward-pass batch embedding for LocalEmbeddingProvider
- Gemini `batchEmbedContents` API for batch embedding
- `reembedAll` uses `embedBatch` for performance
- `AUDREY_DEVICE` env var, `memoryStatus` reports device

### v0.11.0 — Multi-Provider Embeddings + Privacy

- `LocalEmbeddingProvider` — 384d MiniLM via @huggingface/transformers (zero API key, works offline)
- `GeminiEmbeddingProvider` — 3072d via Google text-embedding-004
- `private: true` memory flag — memories visible to AI only, excluded from public recall
- Auto-select embedding provider: local -> gemini (if API key present) -> explicit openai
- `npx audrey reembed` CLI subcommand for provider migration
- `reflect()` method for post-conversation memory formation
- 409 tests across 29 test files

### v0.9.0 — Emotional Memory

- Valence-arousal affect model (Russell's circumplex) on every episode
- Arousal-salience coupling via Yerkes-Dodson inverted-U curve
- Mood-congruent recall — matching emotional state boosts retrieval confidence
- Emotional resonance detection — new experiences that echo past emotional patterns emit events
- MCP server: `memory_encode` accepts `affect`, `memory_recall` accepts `mood`

### v0.8.0 — Context-Dependent Retrieval

- Encoding specificity principle: context stored with memory, matching context boosts recall
- MCP server: `memory_encode` and `memory_recall` accept `context`

### v0.7.0 — Interference + Salience

- Interference-based forgetting: new memories competitively suppress similar existing ones
- Salience-weighted confidence: high-salience memories resist decay
- Spaced-repetition reconsolidation: retrieval intervals affect reinforcement strength

### v0.6.0 — Filtered Recall + Forget

- Filtered recall: tag, source, and date-range filters on `recall()` and `recallStream()`
- `forget()`, `forgetByQuery()`, `purge()`
- `memory_forget` and `memory_decay` MCP tools

### v0.5.0 — Feature Depth

- Configurable confidence weights and decay rates per instance
- Memory export/import (JSON snapshots with re-embedding)
- `memory_export` and `memory_import` MCP tools
- Auto-consolidation scheduling
- Adaptive consolidation parameter suggestions

### v0.3.1 — MCP Server

- MCP tool server via `@modelcontextprotocol/sdk` with stdio transport
- One-command install: `npx audrey install` (auto-detects API keys)
- CLI subcommands: `install`, `uninstall`, `status`

### v0.3.0 — Vector Performance

- sqlite-vec native vector indexing (vec0 virtual tables with cosine distance)
- KNN queries for recall, validation, and consolidation clustering
- Batch encoding API and streaming recall with async generators

### v0.2.0 — LLM Integration

- LLM-powered principle extraction, contradiction detection, causal articulation
- Context-dependent truth resolution
- Configurable LLM providers (Mock, Anthropic, OpenAI)

### v0.1.0 — Foundation

- Immutable episodic memory, compositional confidence, Ebbinghaus forgetting curves
- Consolidation engine, contradiction lifecycle, rollback
- Circular self-confirmation defense, causal context, introspection

## Design Decisions

**Why SQLite, not Postgres?** Zero infrastructure. `npm install` and you have a brain. The adapter pattern means you can migrate to pgvector when you need to scale.

**Why append-only episodes?** Immutability creates a reliable audit trail. Corrections use `supersedes` links rather than mutations. You can always trace back to what actually happened.

**Why Ebbinghaus curves?** Biological forgetting is an adaptive feature, not a bug. It prevents cognitive overload, maintains relevance, and enables generalization. Audrey's forgetting works the same way.

**Why model-generated cap at 0.6?** Prevents the most dangerous exploit in AI memory: circular self-confirmation where an agent's own inferences bootstrap themselves into high-confidence "facts" through repeated retrieval.

**Why soft-delete by default?** Hard-deletes are irreversible. Soft-delete preserves data integrity and audit trails while excluding the memory from recall. Use `purge: true` or `brain.purge()` when you need permanent removal (GDPR, storage cleanup).

**Why emotional memory?** Every memory system stores facts. Biological memory stores facts with emotional context — and that context changes how memories are retrieved. Emotional arousal modulates encoding strength (amygdala-hippocampal interaction). Current mood biases which memories surface (Bower, 1981). This isn't a novelty feature — it's the foundation for AI that remembers like it cares.

**Why a dream cycle?** Biological sleep isn't downtime — it's when the brain consolidates episodic memories into long-term semantic knowledge, prunes weak connections, and strengthens important ones. Audrey's `dream()` does the same: cluster episodes, extract principles, apply decay, report health. Wire it into session hooks and your agent gets smarter every time it sleeps.

## License

MIT