audrey 0.5.1 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,7 +1,6 @@
  # Audrey
 
- Biological memory architecture for AI agents. Gives agents cognitive memory that decays, consolidates, self-validates, and learns from experience — not just a database.
-
+ Biological memory architecture for AI agents. Memory that decays, consolidates, feels, and learns — not just a database.
 
  ## Why Audrey Exists
 
@@ -20,7 +19,7 @@ Audrey fixes all of this by modeling memory the way the brain does:
  | Neocortex | Semantic Memory | Consolidated principles and patterns |
  | Sleep Replay | Consolidation Engine | Extracts patterns from episodes, promotes to principles |
  | Prefrontal Cortex | Validation Engine | Truth-checking, contradiction detection |
- | Amygdala | Salience Scorer | Importance weighting for retention priority |
+ | Amygdala | Affect System | Emotional encoding, arousal-salience coupling, mood-congruent recall |
 
  ## Install
 
@@ -46,7 +45,7 @@ npx audrey status
  npx audrey uninstall
  ```
 
- Every Claude Code session now has 7 memory tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_introspect`, `memory_resolve_truth`, `memory_export`, `memory_import`.
+ Every Claude Code session now has 9 memory tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_introspect`, `memory_resolve_truth`, `memory_export`, `memory_import`, `memory_forget`, `memory_decay`.
 
  ### SDK in Your Code
 
@@ -68,25 +67,41 @@ const brain = new Audrey({
  embedding: { provider: 'mock', dimensions: 8 }, // or 'openai' for production
  });
 
- // 2. Encode observations
+ // 2. Encode observations — with optional emotional context
  await brain.encode({
  content: 'Stripe API returns 429 above 100 req/s',
  source: 'direct-observation',
  tags: ['stripe', 'rate-limit'],
+ affect: { valence: -0.4, arousal: 0.7, label: 'frustration' },
+ });
+
+ // 3. Recall what you know — mood-congruent retrieval
+ const memories = await brain.recall('stripe rate limits', {
+ limit: 5,
+ mood: { valence: -0.3 }, // frustrated right now? memories encoded in frustration surface first
  });
 
- // 3. Recall what you know
- const memories = await brain.recall('stripe rate limits', { limit: 5 });
- // Returns: [{ content, type, confidence, score, ... }]
+ // 4. Filtered recall by tag, source, or date range
+ const recent = await brain.recall('stripe', {
+ tags: ['rate-limit'],
+ sources: ['direct-observation'],
+ after: '2026-02-01T00:00:00Z',
+ context: { task: 'debugging', domain: 'payments' }, // context-dependent retrieval
+ });
 
- // 4. Consolidate episodes into principles (the "sleep" cycle)
+ // 5. Consolidate episodes into principles (the "sleep" cycle)
  await brain.consolidate();
 
- // 5. Check brain health
+ // 6. Forget something
+ brain.forget(memoryId); // soft-delete
+ brain.forget(memoryId, { purge: true }); // hard-delete
+ await brain.forgetByQuery('old API endpoint', { minSimilarity: 0.9 });
+
+ // 7. Check brain health
  const stats = brain.introspect();
  // { episodic: 47, semantic: 12, procedural: 3, dormant: 8, ... }
 
- // 6. Clean up
+ // 8. Clean up
  brain.close();
  ```
 
@@ -116,6 +131,31 @@ const brain = new Audrey({
  minEpisodes: 3, // Minimum cluster size for principle extraction
  },
 
+ // Context-dependent retrieval (v0.8.0)
+ context: {
+ enabled: true, // Enable encoding-specificity principle
+ weight: 0.3, // Max 30% confidence boost on full context match
+ },
+
+ // Emotional memory (v0.9.0)
+ affect: {
+ enabled: true, // Enable affect system
+ weight: 0.2, // Max 20% mood-congruence boost
+ arousalWeight: 0.3, // Yerkes-Dodson arousal-salience coupling
+ resonance: { // Detect emotional echoes across experiences
+ enabled: true,
+ k: 5, // How many past episodes to check
+ threshold: 0.5, // Semantic similarity threshold
+ affectThreshold: 0.6, // Emotional similarity threshold
+ },
+ },
+
+ // Interference-based forgetting (v0.7.0)
+ interference: {
+ enabled: true, // New episodes suppress similar existing memories
+ weight: 0.15, // Suppression strength
+ },
+
  // Decay settings
  decay: {
  dormantThreshold: 0.1, // Below this confidence = dormant
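The `arousalWeight` knob above couples arousal to salience along a Yerkes-Dodson inverted-U: moderate arousal strengthens encoding the most, extremes the least. Audrey's exact curve is internal; a minimal sketch of the shape, with the peak position chosen arbitrarily for illustration:

```js
// Illustrative inverted-U (Yerkes-Dodson) arousal-salience coupling.
// NOT Audrey's internal formula: the quadratic shape and the peak at
// arousal = 0.65 are assumptions made only to show the curve's character.
function arousalBoost(arousal, weight = 0.3, peak = 0.65) {
  const distance = Math.abs(arousal - peak);              // distance from the optimum
  const shape = Math.max(0, 1 - (distance / peak) ** 2);  // inverted-U, clamped at 0
  return weight * shape;                                  // scaled by arousalWeight
}

arousalBoost(0.65); // maximum boost (equals the configured weight)
arousalBoost(0.0);  // no boost: a calm episode gains nothing
arousalBoost(1.0);  // partial boost: extreme arousal helps less than moderate
```

Under this reading, a memory encoded at moderate arousal resists decay better than one encoded in either a flat or a panicked state.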
@@ -219,6 +259,16 @@ Context-dependent truths are modeled explicitly:
 
  New high-confidence evidence can reopen resolved disputes.
 
+ ### Forget and Purge
+
+ Memories can be explicitly forgotten — by ID or by semantic query:
+
+ **Soft-delete** (default) — Marks the memory as forgotten/superseded and removes its vector index. The record stays in the database but is excluded from recall. Reversible via direct database access.
+
+ **Hard-delete** (`purge: true`) — Permanently removes the memory from both the main table and the vector index. Irreversible.
+
+ **Bulk purge** — Removes all forgotten, dormant, superseded, and rolled-back memories in one operation. Useful for GDPR compliance or storage cleanup.
+
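The soft- vs hard-delete semantics can be pictured with a plain `Map` standing in for the SQLite table. This is an illustration of the behavior described above, not Audrey's implementation:

```js
// A Map stands in for the memory table; the 'state' field mirrors the
// documented soft-delete behavior (row kept, excluded from recall).
const store = new Map([
  ['m1', { content: 'old endpoint', state: 'active' }],
  ['m2', { content: 'new endpoint', state: 'active' }],
]);

function forget(id, { purge = false } = {}) {
  if (purge) return store.delete(id);  // hard-delete: the row is gone for good
  store.get(id).state = 'forgotten';   // soft-delete: the row stays in place,
  return true;                         // but recall filters it out
}

const recall = () => [...store.values()].filter((m) => m.state === 'active');

forget('m1');                 // soft-delete
console.log(recall().length); // 1: m1 no longer surfaces in recall
console.log(store.has('m1')); // true: still in the database, reversible
forget('m1', { purge: true }); // hard-delete
console.log(store.has('m1')); // false: permanently removed
```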
  ### Rollback
 
  Bad consolidation? Undo it:
@@ -263,25 +313,51 @@ const id = await brain.encode({
  },
  tags: ['stripe', 'production'], // Optional. Array of strings.
  supersedes: 'previous-id', // Optional. ID of episode this corrects.
+ context: { task: 'debugging' }, // Optional. Situational context for retrieval.
+ affect: { // Optional. Emotional context.
+ valence: -0.5, // -1 (negative) to 1 (positive)
+ arousal: 0.7, // 0 (calm) to 1 (activated)
+ label: 'frustration', // Human-readable emotion label
+ },
  });
  ```
 
  Episodes are **immutable**. Corrections create new records with `supersedes` links. The original is preserved.
 
+ ### `brain.encodeBatch(paramsList)` → `Promise<string[]>`
+
+ Encode multiple episodes in one call. Same params as `encode()`, but as an array.
+
+ ```js
+ const ids = await brain.encodeBatch([
+ { content: 'Stripe returned 429', source: 'direct-observation' },
+ { content: 'Redis timed out', source: 'tool-result' },
+ { content: 'User reports slow checkout', source: 'told-by-user' },
+ ]);
+ ```
+
  ### `brain.recall(query, options)` → `Promise<Memory[]>`
 
  Retrieve memories ranked by `similarity * confidence`.
 
  ```js
  const memories = await brain.recall('stripe rate limits', {
- minConfidence: 0.5, // Filter below this confidence
- types: ['semantic'], // Filter by memory type
- limit: 5, // Max results
- includeProvenance: true, // Include evidence chains
- includeDormant: false, // Include dormant memories
+ limit: 5, // Max results (default 10)
+ minConfidence: 0.5, // Filter below this confidence
+ types: ['semantic'], // Filter by memory type
+ includeProvenance: true, // Include evidence chains
+ includeDormant: false, // Include dormant memories
+ tags: ['rate-limit'], // Only episodic memories with these tags
+ sources: ['direct-observation'], // Only episodic memories from these sources
+ after: '2026-02-01T00:00:00Z', // Only memories created after this date
+ before: '2026-03-01T00:00:00Z', // Only memories created before this date
+ context: { task: 'debugging' }, // Boost memories encoded in matching context
+ mood: { valence: -0.3, arousal: 0.5 }, // Mood-congruent retrieval
  });
  ```
 
+ Tag and source filters only apply to episodic memories (semantic and procedural memories don't have tags or sources). Date filters apply to all memory types.
+
  Each result:
 
  ```js
@@ -293,6 +369,8 @@ Each result:
  score: 0.74, // similarity * confidence
  source: 'consolidation',
  state: 'active',
+ contextMatch: 0.8, // When retrieval context provided
+ moodCongruence: 0.7, // When mood provided
  provenance: { // When includeProvenance: true
  evidenceEpisodeIds: ['01XYZ...', '01DEF...'],
  evidenceCount: 3,
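One plausible reading of how these fields compose into a ranking. The exact formula is Audrey's internals; this sketch only combines the documented base score `similarity * confidence` with bounded boosts whose weights mirror the config defaults (`context.weight: 0.3`, `affect.weight: 0.2`):

```js
// Hypothetical composition of the documented score ingredients.
// Assumption: boosts are multiplicative on the base score; Audrey's
// real ranking may differ.
function rankScore(
  { similarity, confidence, contextMatch = 0, moodCongruence = 0 },
  { contextWeight = 0.3, moodWeight = 0.2 } = {},
) {
  const base = similarity * confidence;            // documented base score
  const boost = 1 + contextWeight * contextMatch   // up to +30% on full context match
              + moodWeight * moodCongruence;       // up to +20% mood-congruence
  return base * boost;
}

const plain = rankScore({ similarity: 0.9, confidence: 0.8 });
const boosted = rankScore({ similarity: 0.9, confidence: 0.8, contextMatch: 0.8, moodCongruence: 0.7 });
console.log(plain < boosted); // a context- and mood-matched memory ranks higher
```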
@@ -304,21 +382,9 @@ Each result:
 
  Retrieval automatically reinforces matched memories (boosts confidence, resets decay clock).
 
- ### `brain.encodeBatch(paramsList)` → `Promise<string[]>`
-
- Encode multiple episodes in one call. Same params as `encode()`, but as an array.
-
- ```js
- const ids = await brain.encodeBatch([
- { content: 'Stripe returned 429', source: 'direct-observation' },
- { content: 'Redis timed out', source: 'tool-result' },
- { content: 'User reports slow checkout', source: 'told-by-user' },
- ]);
- ```
-
  ### `brain.recallStream(query, options)` → `AsyncGenerator<Memory>`
 
- Streaming version of `recall()`. Yields results one at a time. Supports early `break`.
+ Streaming version of `recall()`. Yields results one at a time. Supports early `break`. Same options as `recall()`.
 
  ```js
  for await (const memory of brain.recallStream('stripe issues', { limit: 10 })) {
@@ -327,6 +393,37 @@ for await (const memory of brain.recallStream('stripe issues', { limit: 10 })) {
  }
  ```
 
+ ### `brain.forget(id, options)` → `ForgetResult`
+
+ Forget a memory by ID. Works on any memory type (episodic, semantic, procedural).
+
+ ```js
+ brain.forget(memoryId); // soft-delete
+ brain.forget(memoryId, { purge: true }); // hard-delete (permanent)
+ // { id, type: 'episodic', purged: false }
+ ```
+
+ ### `brain.forgetByQuery(query, options)` → `Promise<ForgetResult | null>`
+
+ Find the closest matching memory by semantic search and forget it. Searches all three memory types, picks the best match.
+
+ ```js
+ const result = await brain.forgetByQuery('old API endpoint', {
+ minSimilarity: 0.9, // Threshold for match (default 0.9)
+ purge: false, // Hard-delete? (default false)
+ });
+ // null if no match above threshold
+ ```
+
+ ### `brain.purge()` → `PurgeCounts`
+
+ Bulk hard-delete all dead memories: forgotten episodes, dormant/superseded/rolled-back semantics and procedures.
+
+ ```js
+ const counts = brain.purge();
+ // { episodes: 12, semantics: 3, procedures: 0 }
+ ```
+
  ### `brain.consolidate(options)` → `Promise<ConsolidationResult>`
 
  Run the consolidation engine manually.
@@ -389,6 +486,15 @@ brain.introspect();
 
  Full audit trail of all consolidation runs.
 
+ ### `brain.export()` / `brain.import(snapshot)`
+
+ Export all memories as a JSON snapshot, or import from one.
+
+ ```js
+ const snapshot = brain.export(); // { version, episodes, semantics, procedures, ... }
+ await brain.import(snapshot); // Re-embeds everything with current provider
+ ```
+
  ### Events
 
  ```js
@@ -398,6 +504,10 @@ brain.on('contradiction', ({ episodeId, contradictionId, semanticId, resolution
  brain.on('consolidation', ({ runId, principlesExtracted }) => { ... });
  brain.on('decay', ({ totalEvaluated, transitionedToDormant }) => { ... });
  brain.on('rollback', ({ runId, rolledBackMemories }) => { ... });
+ brain.on('forget', ({ id, type, purged }) => { ... });
+ brain.on('purge', ({ episodes, semantics, procedures }) => { ... });
+ brain.on('interference', ({ newEpisodeId, suppressedId, similarity }) => { ... });
+ brain.on('resonance', ({ episodeId, resonances }) => { ... });
  brain.on('migration', ({ episodes, semantics, procedures }) => { ... });
  brain.on('error', (err) => { ... });
  ```
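The new events slot into the same listener pattern. A runnable sketch with a bare `EventEmitter` standing in for an Audrey instance (payload shapes follow the signatures above; the emits are simulated, since a real instance fires them itself):

```js
import { EventEmitter } from 'node:events';

// Stand-in for an Audrey instance: same event names and payload shapes
// as documented above, but nothing behind them.
const brain = new EventEmitter();
const log = [];

brain.on('forget', ({ id, type, purged }) =>
  log.push(`forgot ${type} ${id} (purged=${purged})`));
brain.on('purge', ({ episodes, semantics, procedures }) =>
  log.push(`purged ${episodes + semantics + procedures} memories`));

// Simulated: a real brain would emit these from forget()/purge().
brain.emit('forget', { id: '01ABC', type: 'episodic', purged: false });
brain.emit('purge', { episodes: 12, semantics: 3, procedures: 0 });

console.log(log);
// [ 'forgot episodic 01ABC (purged=false)', 'purged 15 memories' ]
```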
@@ -410,7 +520,7 @@ Close the database connection.
 
  ```
  audrey-data/
- audrey.db Single SQLite file. WAL mode. That's your brain.
+ audrey.db <- Single SQLite file. WAL mode. That's your brain.
  ```
 
  ```
@@ -418,15 +528,19 @@ src/
  audrey.js Main class. EventEmitter. Public API surface.
  causal.js Causal graph management. LLM-powered mechanism articulation.
  confidence.js Compositional confidence formula. Pure math.
- consolidate.js "Sleep" cycle. KNN clustering → LLM extraction → promote.
+ consolidate.js "Sleep" cycle. KNN clustering -> LLM extraction -> promote.
  db.js SQLite + sqlite-vec. Schema, vec0 tables, migrations.
  decay.js Ebbinghaus forgetting curves.
  embedding.js Pluggable providers (Mock, OpenAI). Batch embedding.
  encode.js Immutable episodic memory creation + vec0 writes.
+ affect.js Emotional memory: arousal-salience coupling, mood-congruent recall, resonance.
+ context.js Context-dependent retrieval modifier (encoding specificity).
+ interference.js Competitive memory suppression (engram competition).
+ forget.js Soft-delete, hard-delete, query-based forget, bulk purge.
  introspect.js Health dashboard queries.
  llm.js Pluggable LLM providers (Mock, Anthropic, OpenAI).
  prompts.js Structured prompt templates for LLM operations.
- recall.js KNN retrieval + confidence scoring + async streaming.
+ recall.js KNN retrieval + confidence scoring + filtered recall + streaming.
  rollback.js Undo consolidation runs.
  utils.js Date math, safe JSON parse.
  validate.js KNN validation + LLM contradiction detection.
@@ -437,7 +551,7 @@ src/
  index.js Barrel export.
 
  mcp-server/
- index.js MCP tool server (7 tools, stdio transport) + CLI subcommands.
+ index.js MCP tool server (9 tools, stdio transport) + CLI subcommands.
  config.js Shared config (env var parsing, install arg builder).
  ```
 
@@ -461,7 +575,7 @@ All mutations use SQLite transactions. CHECK constraints enforce valid states an
  ## Running Tests
 
  ```bash
- npm test # 243 tests across 22 files
+ npm test # 379 tests across 28 files
  npm run test:watch
  ```
 
@@ -471,115 +585,82 @@ npm run test:watch
  node examples/stripe-demo.js
  ```
 
- Demonstrates the full pipeline: encode 3 rate-limit observations → consolidate into principle → recall proactively.
+ Demonstrates the full pipeline: encode 3 rate-limit observations, consolidate into principle, recall proactively.
 
  ---
 
- ## Roadmap
+ ## Changelog
 
- ### v0.1.0 — Foundation
+ ### v0.9.0 — Emotional Memory (current)
 
- - [x] Immutable episodic memory with append-only records
- - [x] Compositional confidence formula (source + evidence + recency + retrieval)
- - [x] Ebbinghaus-inspired forgetting curves with configurable half-lives
- - [x] Dormancy transitions for low-confidence memories
- - [x] Confidence-weighted recall across episodic/semantic/procedural types
- - [x] Provenance chains (which episodes contributed to which principles)
- - [x] Retrieval reinforcement (frequently accessed memories resist decay)
- - [x] Consolidation engine with clustering and principle extraction
- - [x] Idempotent consolidation with checkpoint cursors
- - [x] Full consolidation audit trail (input/output IDs per run)
- - [x] Consolidation rollback (undo bad runs, restore episodes)
- - [x] Contradiction lifecycle (open/resolved/context_dependent/reopened)
- - [x] Circular self-confirmation defense (model-generated cap at 0.6)
- - [x] Source type diversity tracking on semantic memories
- - [x] Supersedes links for correcting episodic memories
- - [x] Pluggable embedding providers (Mock for tests, OpenAI for production)
- - [x] Causal context storage (trigger/consequence per episode)
- - [x] Introspection API (memory counts, contradiction stats, consolidation history)
- - [x] EventEmitter lifecycle hooks (encode, reinforcement, consolidation, decay, rollback, error)
- - [x] SQLite with WAL mode, CHECK constraints, indexes, foreign keys
- - [x] Transaction safety on all multi-step mutations
- - [x] Input validation on public API (content, salience, tags, source)
- - [x] Shared utility extraction (cosine similarity, date math, safe JSON parse)
- - [x] 104 tests across 12 test files
- - [x] Proof-of-concept demo (Stripe rate limit scenario)
+ - Valence-arousal affect model (Russell's circumplex) on every episode
+ - Arousal-salience coupling via Yerkes-Dodson inverted-U curve
+ - Mood-congruent recall — matching emotional state boosts retrieval confidence
+ - Emotional resonance detection — new experiences that echo past emotional patterns emit events
+ - MCP server: `memory_encode` accepts `affect`, `memory_recall` accepts `mood`
+ - 379 tests across 28 test files
 
- ### v0.2.0 — LLM Integration
+ ### v0.8.0 — Context-Dependent Retrieval
+
+ - Encoding specificity principle: context stored with memory, matching context boosts recall
+ - MCP server: `memory_encode` and `memory_recall` accept `context`
+ - 340 tests across 27 test files
+
+ ### v0.7.0 — Interference + Salience
+
+ - Interference-based forgetting: new memories competitively suppress similar existing ones
+ - Salience-weighted confidence: high-salience memories resist decay
+ - Spaced-repetition reconsolidation: retrieval intervals affect reinforcement strength
+ - 310 tests across 25 test files
+
+ ### v0.6.0 — Filtered Recall + Forget
+
+ - Filtered recall: tag, source, and date-range filters on `recall()` and `recallStream()`
+ - `forget()` — soft-delete any memory by ID
+ - `forgetByQuery()` — find closest match by semantic search and forget it
+ - `purge()` — bulk hard-delete all forgotten/dormant/superseded memories
+ - `memory_forget` and `memory_decay` MCP tools (9 tools total)
+ - 278 tests across 23 files
 
- - [x] LLM-powered principle extraction (replace callback with Anthropic/OpenAI calls)
- - [x] LLM-based contradiction detection during validation
- - [x] Causal mechanism articulation via LLM (not just trigger/consequence)
- - [x] Spurious correlation detection (require mechanistic explanation for causal links)
- - [x] Context-dependent truth resolution via LLM
- - [x] Configurable LLM provider for consolidation (Mock, Anthropic, OpenAI)
- - [x] Structured prompt templates for all LLM operations
- - [x] 142 tests across 15 test files
+ ### v0.5.0 — Feature Depth
+
+ - Configurable confidence weights and decay rates per instance
+ - Memory export/import (JSON snapshots with re-embedding)
+ - `memory_export` and `memory_import` MCP tools
+ - Auto-consolidation scheduling
+ - Adaptive consolidation parameter suggestions
+ - 243 tests across 22 files
+
+ ### v0.3.1 — MCP Server
+
+ - MCP tool server via `@modelcontextprotocol/sdk` with stdio transport
+ - One-command install: `npx audrey install` (auto-detects API keys)
+ - CLI subcommands: `install`, `uninstall`, `status`
+ - JSDoc type annotations on all public exports
+ - Published to npm
+ - 194 tests across 17 files
 
  ### v0.3.0 — Vector Performance
 
- - [x] sqlite-vec native vector indexing (vec0 virtual tables with cosine distance)
- - [x] KNN queries for recall, validation, and consolidation clustering (all vector math in C)
- - [x] SQL-native metadata filtering in KNN (state, source, consolidated)
- - [x] Batch encoding API (`encodeBatch` — encode N episodes in one call)
- - [x] Streaming recall with async generators (`recallStream`)
- - [x] Dimension configuration and mismatch validation
- - [x] Automatic migration from v0.2.0 embedding BLOBs to vec0 tables
- - [x] 168 tests across 16 test files
-
- ### v0.3.1 — MCP Server + JSDoc Types
-
- - [x] MCP tool server via `@modelcontextprotocol/sdk` with stdio transport
- - [x] 5 tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_introspect`, `memory_resolve_truth`
- - [x] Configuration via environment variables (data dir, embedding provider, LLM provider)
- - [x] One-command install: `npx audrey install` (auto-detects API keys)
- - [x] CLI subcommands: `install`, `uninstall`, `status`
- - [x] JSDoc type annotations on all public exports (16 source files)
- - [x] Published to npm with proper package metadata
- - [x] 194 tests across 17 test files
-
- ### v0.5.0 — Feature Depth (current)
-
- - [x] Configurable confidence weights per Audrey instance
- - [x] Configurable decay rates (half-lives) per Audrey instance
- - [x] Confidence config wired through constructor to recall and decay
- - [x] Memory export (JSON snapshot of all tables, no raw embeddings)
- - [x] Memory import with automatic re-embedding via current provider
- - [x] `memory_export` and `memory_import` MCP tools (7 tools total)
- - [x] Auto-consolidation scheduling (`startAutoConsolidate` / `stopAutoConsolidate`)
- - [x] Consolidation metrics tracking (per-run params and results)
- - [x] Adaptive consolidation parameter suggestions based on historical yield
- - [x] 243 tests across 22 test files
-
- ### v0.4.0 — Type Safety & Developer Experience
-
- - [ ] Full TypeScript conversion with strict mode
- - [ ] Published type declarations (.d.ts)
- - [ ] Schema versioning and migration system
- - [ ] Structured logging (optional, pluggable)
-
- ### v0.4.5 — Embedding Migration (deferred from v0.3.0)
-
- - [ ] Embedding migration pipeline (re-embed when models change)
- - [ ] Re-consolidation queue (re-run consolidation with new embedding model)
-
- ### v0.6.0 — Scale
-
- - [ ] pgvector adapter for PostgreSQL backend
- - [ ] Redis adapter for distributed caching
- - [ ] Connection pooling for concurrent agent access
- - [ ] Pagination on recall queries (cursor-based)
- - [ ] Benchmarks: encode throughput, recall latency at 10k/100k/1M memories
-
- ### v1.0.0 — Production Ready
-
- - [ ] Comprehensive error handling at all boundaries
- - [ ] Rate limiting on embedding API calls
- - [ ] Memory usage profiling and optimization
- - [ ] Security audit (injection, data isolation)
- - [ ] Cross-agent knowledge sharing protocol (Hivemind)
- - [ ] Documentation site
- - [ ] Integration guides (LangChain, CrewAI, Claude Code, custom agents)
+ - sqlite-vec native vector indexing (vec0 virtual tables with cosine distance)
+ - KNN queries for recall, validation, and consolidation clustering
+ - Batch encoding API and streaming recall with async generators
+ - Dimension configuration and automatic migration from v0.2.0
+ - 168 tests across 16 files
+
+ ### v0.2.0 — LLM Integration
+
+ - LLM-powered principle extraction, contradiction detection, causal articulation
+ - Context-dependent truth resolution
+ - Configurable LLM providers (Mock, Anthropic, OpenAI)
+ - 142 tests across 15 files
+
+ ### v0.1.0 — Foundation
+
+ - Immutable episodic memory, compositional confidence, Ebbinghaus forgetting curves
+ - Consolidation engine, contradiction lifecycle, rollback
+ - Circular self-confirmation defense, causal context, introspection
+ - 104 tests across 12 files
 
  ## Design Decisions
 
@@ -591,7 +672,9 @@ Demonstrates the full pipeline: encode 3 rate-limit observations → consolidate
 
  **Why model-generated cap at 0.6?** Prevents the most dangerous exploit in AI memory: circular self-confirmation where an agent's own inferences bootstrap themselves into high-confidence "facts" through repeated retrieval.
 
- **Why no TypeScript yet?** Prototyping speed. TypeScript conversion is on the roadmap for v0.4.0. The pure-math modules (`confidence.js`, `utils.js`) are already type-safe in practice.
+ **Why soft-delete by default?** Hard-deletes are irreversible. Soft-delete preserves data integrity and audit trails while excluding the memory from recall. Use `purge: true` or `brain.purge()` when you need permanent removal (GDPR, storage cleanup).
+
+ **Why emotional memory?** Every memory system stores facts. Biological memory stores facts with emotional context — and that context changes how memories are retrieved. Emotional arousal modulates encoding strength (amygdala-hippocampal interaction). Current mood biases which memories surface (Bower, 1981). This isn't a novelty feature — it's the foundation for AI that remembers like it cares.
 
  ## License
 
@@ -1,7 +1,7 @@
  import { homedir } from 'node:os';
  import { join } from 'node:path';
 
- export const VERSION = '0.5.1';
+ export const VERSION = '0.9.0';
  export const SERVER_NAME = 'audrey-memory';
  export const DEFAULT_DATA_DIR = join(homedir(), '.audrey', 'data');
 
@@ -65,7 +65,7 @@ function install() {
  console.log(`
  Audrey registered as "${SERVER_NAME}" with Claude Code.
 
- 7 tools available in every session:
+ 9 tools available in every session:
  memory_encode — Store observations, facts, preferences
  memory_recall — Search memories by semantic similarity
  memory_consolidate — Extract principles from accumulated episodes
@@ -73,6 +73,8 @@ Audrey registered as "${SERVER_NAME}" with Claude Code.
  memory_resolve_truth — Resolve contradictions between claims
  memory_export — Export all memories as JSON snapshot
  memory_import — Import a snapshot into a fresh database
+ memory_forget — Forget a specific memory by ID or query
+ memory_decay — Apply forgetting curves, transition low-confidence to dormant
 
  Data stored in: ${DEFAULT_DATA_DIR}
  Verify: claude mcp list
@@ -161,10 +163,16 @@ async function main() {
  source: z.enum(VALID_SOURCES).describe('Source type of the memory'),
  tags: z.array(z.string()).optional().describe('Optional tags for categorization'),
  salience: z.number().min(0).max(1).optional().describe('Importance weight 0-1'),
+ context: z.record(z.string()).optional().describe('Situational context as key-value pairs (e.g., {task: "debugging", domain: "payments"})'),
+ affect: z.object({
+ valence: z.number().min(-1).max(1).describe('Emotional valence: -1 (very negative) to 1 (very positive)'),
+ arousal: z.number().min(0).max(1).optional().describe('Emotional arousal: 0 (calm) to 1 (highly activated)'),
+ label: z.string().optional().describe('Human-readable emotion label (e.g., "curiosity", "frustration", "relief")'),
+ }).optional().describe('Emotional affect — how this memory feels'),
  },
- async ({ content, source, tags, salience }) => {
+ async ({ content, source, tags, salience, context, affect }) => {
  try {
- const id = await audrey.encode({ content, source, tags, salience });
+ const id = await audrey.encode({ content, source, tags, salience, context, affect });
  return toolResult({ id, content, source });
  } catch (err) {
  return toolError(err);
@@ -179,13 +187,28 @@ async function main() {
  limit: z.number().min(1).max(50).optional().describe('Max results (default 10)'),
  types: z.array(z.enum(VALID_TYPES)).optional().describe('Memory types to search'),
  min_confidence: z.number().min(0).max(1).optional().describe('Minimum confidence threshold'),
+ tags: z.array(z.string()).optional().describe('Only return episodic memories with these tags'),
+ sources: z.array(z.enum(VALID_SOURCES)).optional().describe('Only return episodic memories from these sources'),
+ after: z.string().optional().describe('Only return memories created after this ISO date'),
+ before: z.string().optional().describe('Only return memories created before this ISO date'),
+ context: z.record(z.string()).optional().describe('Retrieval context — memories encoded in matching context get boosted'),
+ mood: z.object({
+ valence: z.number().min(-1).max(1).describe('Current emotional valence: -1 (negative) to 1 (positive)'),
+ arousal: z.number().min(0).max(1).optional().describe('Current arousal: 0 (calm) to 1 (activated)'),
+ }).optional().describe('Current mood — boosts recall of memories encoded in similar emotional state'),
  },
- async ({ query, limit, types, min_confidence }) => {
+ async ({ query, limit, types, min_confidence, tags, sources, after, before, context, mood }) => {
  try {
  const results = await audrey.recall(query, {
  limit: limit ?? 10,
  types,
  minConfidence: min_confidence,
+ tags,
+ sources,
+ after,
+ before,
+ context,
+ mood,
  });
  return toolResult(results);
  } catch (err) {
@@ -279,6 +302,53 @@ async function main() {
  },
  );
 
+ server.tool(
+ 'memory_forget',
+ {
+ id: z.string().optional().describe('ID of the memory to forget'),
+ query: z.string().optional().describe('Semantic query to find and forget the closest matching memory'),
+ min_similarity: z.number().min(0).max(1).optional().describe('Minimum similarity for query-based forget (default 0.9)'),
+ purge: z.boolean().optional().describe('Hard-delete the memory permanently (default false, soft-delete)'),
+ },
+ async ({ id, query, min_similarity, purge }) => {
+ try {
+ if (!id && !query) {
+ return toolError(new Error('Provide either id or query'));
+ }
+ let result;
+ if (id) {
+ result = audrey.forget(id, { purge: purge ?? false });
+ } else {
+ result = await audrey.forgetByQuery(query, {
+ minSimilarity: min_similarity ?? 0.9,
+ purge: purge ?? false,
+ });
+ if (!result) {
+ return toolResult({ forgotten: false, reason: 'No memory found above similarity threshold' });
+ }
+ }
+ return toolResult({ forgotten: true, ...result });
+ } catch (err) {
+ return toolError(err);
+ }
+ },
+ );
+
+ server.tool(
+ 'memory_decay',
+ {
+ dormant_threshold: z.number().min(0).max(1).optional().describe('Confidence below which memories go dormant (default 0.1)'),
+ },
+ async ({ dormant_threshold }) => {
+ try {
+ const result = audrey.decay({ dormantThreshold: dormant_threshold });
+ return toolResult(result);
+ } catch (err) {
+ return toolError(err);
+ }
+ },
+ );
+
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error('[audrey-mcp] connected via stdio');
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "audrey",
- "version": "0.5.1",
+ "version": "0.9.0",
  "description": "Biological memory architecture for AI agents — encode, consolidate, and recall memories with confidence decay, contradiction detection, and causal graphs",
  "type": "module",
  "main": "src/index.js"