audrey 0.3.0 → 0.3.1

package/README.md CHANGED
@@ -24,49 +24,119 @@ Audrey fixes all of this by modeling memory the way the brain does:
  
  ## Install
  
+ ### MCP Server for Claude Code (one command)
+
+ ```bash
+ npx audrey install
+ ```
+
+ That's it. Audrey auto-detects API keys from your environment:
+
+ - `OPENAI_API_KEY` set? Uses real OpenAI embeddings (1536d) for semantic search.
+ - `ANTHROPIC_API_KEY` set? Enables LLM-powered consolidation and contradiction detection.
+ - Neither? Runs with mock embeddings — fully functional, upgrade anytime.
+
+ To upgrade later, set the keys and re-run `npx audrey install`.
+
+ ```bash
+ # Check status
+ npx audrey status
+
+ # Uninstall
+ npx audrey uninstall
+ ```
+
+ Every Claude Code session now has 5 memory tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_introspect`, `memory_resolve_truth`.
+
+ ### SDK in Your Code
+
  ```bash
  npm install audrey
  ```
  
- Zero external infrastructure. One SQLite file. That's it.
+ Zero external infrastructure. One SQLite file.
  
- ## Quick Start
+ ## Usage
  
  ```js
  import { Audrey } from 'audrey';
  
+ // 1. Create a brain
  const brain = new Audrey({
    dataDir: './agent-memory',
    agent: 'my-agent',
-   embedding: { provider: 'openai', model: 'text-embedding-3-small' },
+   embedding: { provider: 'mock', dimensions: 8 }, // or 'openai' for production
  });
  
- // Agent observes something
+ // 2. Encode observations
  await brain.encode({
    content: 'Stripe API returns 429 above 100 req/s',
    source: 'direct-observation',
-   salience: 0.9,
-   causal: { trigger: 'batch-payment-job', consequence: 'queue-stalled' },
    tags: ['stripe', 'rate-limit'],
  });
  
- // Later agent encounters Stripe again
- const memories = await brain.recall('stripe rate limits', {
-   minConfidence: 0.5,
-   types: ['semantic', 'procedural'],
-   limit: 5,
- });
+ // 3. Recall what you know
+ const memories = await brain.recall('stripe rate limits', { limit: 5 });
+ // Returns: [{ content, type, confidence, score, ... }]
  
- // Run consolidation (the "sleep" cycle)
+ // 4. Consolidate episodes into principles (the "sleep" cycle)
  await brain.consolidate();
  
- // Check brain health
+ // 5. Check brain health
  const stats = brain.introspect();
  // { episodic: 47, semantic: 12, procedural: 3, dormant: 8, ... }
  
+ // 6. Clean up
  brain.close();
  ```
  
+ ### Configuration
+
+ ```js
+ const brain = new Audrey({
+   dataDir: './audrey-data',     // SQLite database directory
+   agent: 'my-agent',            // Agent identifier
+
+   // Embedding provider (required)
+   embedding: {
+     provider: 'mock',           // 'mock' for testing, 'openai' for production
+     dimensions: 8,              // 8 for mock, 1536 for openai text-embedding-3-small
+     apiKey: '...',              // Required for openai
+   },
+
+   // LLM provider (optional — enables smart consolidation + contradiction detection)
+   llm: {
+     provider: 'anthropic',      // 'mock', 'anthropic', or 'openai'
+     apiKey: '...',              // Required for anthropic/openai
+     model: 'claude-sonnet-4-6', // Optional model override
+   },
+
+   // Consolidation settings
+   consolidation: {
+     minEpisodes: 3,             // Minimum cluster size for principle extraction
+   },
+
+   // Decay settings
+   decay: {
+     dormantThreshold: 0.1,      // Below this confidence = dormant
+   },
+ });
+ ```
+
+ **Without an LLM provider**, consolidation uses a default text-based extractor and contradiction detection is similarity-only. **With an LLM provider**, Audrey extracts real generalized principles, detects semantic contradictions, and resolves context-dependent truths.
+
+ ### Environment Variables (MCP Server)
+
+ | Variable | Default | Purpose |
+ |---|---|---|
+ | `AUDREY_DATA_DIR` | `~/.audrey/data` | SQLite database directory |
+ | `AUDREY_AGENT` | `claude-code` | Agent identifier |
+ | `AUDREY_EMBEDDING_PROVIDER` | `mock` | `mock` or `openai` |
+ | `AUDREY_EMBEDDING_DIMENSIONS` | `8` | Vector dimensions (1536 for openai) |
+ | `OPENAI_API_KEY` | — | Required when embedding/LLM provider is openai |
+ | `AUDREY_LLM_PROVIDER` | — | `mock`, `anthropic`, or `openai` |
+ | `ANTHROPIC_API_KEY` | — | Required when LLM provider is anthropic |
+
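As a concrete sketch of the upgrade path described above (the key values are placeholders, not real keys):

```shell
# Hypothetical upgrade: provide real keys, then re-run the installer.
export OPENAI_API_KEY=sk-placeholder        # enables real embeddings (1536d)
export ANTHROPIC_API_KEY=sk-ant-placeholder # enables LLM consolidation
npx audrey install   # re-registers the MCP server with the detected keys
npx audrey status    # verify registration and the data directory
```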
  ## Core Concepts
  
  ### Four Memory Types
@@ -176,25 +246,7 @@ Audrey's defenses:
  
  ### `new Audrey(config)`
  
- ```js
- const brain = new Audrey({
-   dataDir: './audrey-data',     // Where the SQLite DB lives
-   agent: 'my-agent',            // Agent identifier
-   embedding: {
-     provider: 'openai',         // 'openai' | 'mock'
-     model: 'text-embedding-3-small',
-     apiKey: process.env.OPENAI_API_KEY,
-   },
-   consolidation: {
-     interval: '1h',             // Auto-consolidation interval
-     minEpisodes: 3,             // Minimum cluster size
-     confidenceTarget: 2.0,      // Adaptive threshold multiplier
-   },
-   decay: {
-     dormantThreshold: 0.1,      // Below this → dormant
-   },
- });
- ```
+ See [Configuration](#configuration) above for all options.
  
  ### `brain.encode(params)` → `Promise<string>`
  
@@ -252,6 +304,29 @@ Each result:
  
  Retrieval automatically reinforces matched memories (boosts confidence, resets decay clock).
  
+ ### `brain.encodeBatch(paramsList)` → `Promise<string[]>`
+
+ Encode multiple episodes in one call. Same params as `encode()`, but as an array.
+
+ ```js
+ const ids = await brain.encodeBatch([
+   { content: 'Stripe returned 429', source: 'direct-observation' },
+   { content: 'Redis timed out', source: 'tool-result' },
+   { content: 'User reports slow checkout', source: 'told-by-user' },
+ ]);
+ ```
+
+ ### `brain.recallStream(query, options)` → `AsyncGenerator<Memory>`
+
+ Streaming version of `recall()`. Yields results one at a time. Supports early `break`.
+
+ ```js
+ for await (const memory of brain.recallStream('stripe issues', { limit: 10 })) {
+   console.log(memory.content, memory.score);
+   if (memory.score > 0.9) break;
+ }
+ ```
+
  ### `brain.consolidate(options)` → `Promise<ConsolidationResult>`
  
  Run the consolidation engine manually.
@@ -286,6 +361,15 @@ brain.rollback('01ABC...');
  // { rolledBackMemories: 3, restoredEpisodes: 9 }
  ```
  
+ ### `brain.resolveTruth(contradictionId)` → `Promise<Resolution>`
+
+ Resolve an open contradiction using LLM reasoning. Requires a configured LLM provider.
+
+ ```js
+ const resolution = await brain.resolveTruth('contradiction-id');
+ // { resolution: 'context_dependent', conditions: { a: 'live keys', b: 'test keys' }, explanation: '...' }
+ ```
+
  ### `brain.introspect()` → `Stats`
  
  Get memory system health stats.
@@ -310,6 +394,7 @@ Full audit trail of all consolidation runs.
  ```js
  brain.on('encode', ({ id, content, source }) => { ... });
  brain.on('reinforcement', ({ episodeId, targetId, similarity }) => { ... });
+ brain.on('contradiction', ({ episodeId, contradictionId, semanticId, resolution }) => { ... });
  brain.on('consolidation', ({ runId, principlesExtracted }) => { ... });
  brain.on('decay', ({ totalEvaluated, transitionedToDormant }) => { ... });
  brain.on('rollback', ({ runId, rolledBackMemories }) => { ... });
@@ -330,37 +415,48 @@ audrey-data/
  ```
  src/
    audrey.js        Main class. EventEmitter. Public API surface.
+   causal.js        Causal graph management. LLM-powered mechanism articulation.
    confidence.js    Compositional confidence formula. Pure math.
-   consolidate.js   "Sleep" cycle. Cluster → extract → promote.
-   db.js            SQLite schema. 6 tables. CHECK constraints. Indexes.
+   consolidate.js   "Sleep" cycle. KNN clustering → LLM extraction → promote.
+   db.js            SQLite + sqlite-vec. Schema, vec0 tables, migrations.
    decay.js         Ebbinghaus forgetting curves.
-   embedding.js     Pluggable providers (Mock, OpenAI).
-   encode.js        Immutable episodic memory creation.
+   embedding.js     Pluggable providers (Mock, OpenAI). Batch embedding.
+   encode.js        Immutable episodic memory creation + vec0 writes.
    introspect.js    Health dashboard queries.
-   recall.js        Confidence-weighted vector retrieval.
+   llm.js           Pluggable LLM providers (Mock, Anthropic, OpenAI).
+   prompts.js       Structured prompt templates for LLM operations.
+   recall.js        KNN retrieval + confidence scoring + async streaming.
    rollback.js      Undo consolidation runs.
-   utils.js         Shared: cosine similarity, date math, safe JSON parse.
-   validate.js      Reinforcement + contradiction lifecycle.
+   utils.js         Date math, safe JSON parse.
+   validate.js      KNN validation + LLM contradiction detection.
    index.js         Barrel export.
- ```
  
- ### Database Schema (6 tables)
+ mcp-server/
+   index.js         MCP tool server (5 tools, stdio transport) + CLI subcommands.
+   config.js        Shared config (env var parsing, install arg builder).
+ ```
  
- | Table | Purpose | Key Columns |
- |---|---|---|
- | `episodes` | Immutable raw events | content, embedding, source, salience, causal_trigger/consequence, supersedes |
- | `semantics` | Consolidated principles | content, embedding, state, evidence_episode_ids, source_type_diversity |
- | `procedures` | Learned workflows | content, embedding, trigger_conditions, success/failure_count |
- | `causal_links` | Why things happened | cause_id, effect_id, link_type (causal/correlational/temporal), mechanism |
- | `contradictions` | Dispute tracking | claim_a/b_id, state (open/resolved/context_dependent/reopened), resolution |
- | `consolidation_runs` | Audit trail | input_episode_ids, output_memory_ids, status, checkpoint_cursor |
+ ### Database Schema
  
- All mutations use SQLite transactions for atomicity. CHECK constraints enforce valid states and source types.
+ | Table | Purpose |
+ |---|---|
+ | `episodes` | Immutable raw events (content, source, salience, causal context) |
+ | `semantics` | Consolidated principles (content, state, evidence chain) |
+ | `procedures` | Learned workflows (trigger conditions, success/failure counts) |
+ | `causal_links` | Causal relationships (cause, effect, mechanism, link type) |
+ | `contradictions` | Dispute tracking (claims, state, resolution) |
+ | `consolidation_runs` | Audit trail (inputs, outputs, status) |
+ | `vec_episodes` | sqlite-vec KNN index for episode embeddings |
+ | `vec_semantics` | sqlite-vec KNN index for semantic embeddings |
+ | `vec_procedures` | sqlite-vec KNN index for procedural embeddings |
+ | `audrey_config` | Dimension configuration and metadata |
+
+ All mutations use SQLite transactions. CHECK constraints enforce valid states and source types. Vector search uses sqlite-vec with cosine distance.
  
  ## Running Tests
  
  ```bash
- npm test           # 104 tests, ~760ms
+ npm test           # 194 tests across 17 files
  npm run test:watch
  ```
  
@@ -431,8 +527,9 @@ Demonstrates the full pipeline: encode 3 rate-limit observations → consolidate
  - [x] MCP tool server via `@modelcontextprotocol/sdk` with stdio transport
  - [x] 5 tools: `memory_encode`, `memory_recall`, `memory_consolidate`, `memory_introspect`, `memory_resolve_truth`
  - [x] Configuration via environment variables (data dir, embedding provider, LLM provider)
- - [x] Registration script for Claude Code (`mcp-server/register.sh`)
- - [x] 184 tests across 17 test files
+ - [x] One-command install: `npx audrey install` (auto-detects API keys)
+ - [x] CLI subcommands: `install`, `uninstall`, `status`
+ - [x] 194 tests across 17 test files
  
  ### v0.3.5 — Embedding Migration (deferred from v0.3.0)
  
package/mcp-server/config.js ADDED
@@ -0,0 +1,60 @@
+ import { homedir } from 'node:os';
+ import { join } from 'node:path';
+
+ export const VERSION = '0.3.1';
+ export const SERVER_NAME = 'audrey-memory';
+ export const DEFAULT_DATA_DIR = join(homedir(), '.audrey', 'data');
+
+ export function buildAudreyConfig() {
+   const dataDir = process.env.AUDREY_DATA_DIR || DEFAULT_DATA_DIR;
+   const agent = process.env.AUDREY_AGENT || 'claude-code';
+   const embProvider = process.env.AUDREY_EMBEDDING_PROVIDER || 'mock';
+   const embDimensions = parseInt(process.env.AUDREY_EMBEDDING_DIMENSIONS || '8', 10);
+   const llmProvider = process.env.AUDREY_LLM_PROVIDER;
+
+   const config = {
+     dataDir,
+     agent,
+     embedding: { provider: embProvider, dimensions: embDimensions },
+   };
+
+   if (embProvider === 'openai') {
+     config.embedding.apiKey = process.env.OPENAI_API_KEY;
+   }
+
+   if (llmProvider === 'anthropic') {
+     config.llm = { provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY };
+   } else if (llmProvider === 'openai') {
+     config.llm = { provider: 'openai', apiKey: process.env.OPENAI_API_KEY };
+   } else if (llmProvider === 'mock') {
+     config.llm = { provider: 'mock' };
+   }
+
+   return config;
+ }
+
+ export function buildInstallArgs(env = process.env) {
+   const envPairs = [`AUDREY_DATA_DIR=${DEFAULT_DATA_DIR}`];
+
+   if (env.OPENAI_API_KEY) {
+     envPairs.push('AUDREY_EMBEDDING_PROVIDER=openai');
+     envPairs.push('AUDREY_EMBEDDING_DIMENSIONS=1536');
+     envPairs.push(`OPENAI_API_KEY=${env.OPENAI_API_KEY}`);
+   } else {
+     envPairs.push('AUDREY_EMBEDDING_PROVIDER=mock');
+     envPairs.push('AUDREY_EMBEDDING_DIMENSIONS=8');
+   }
+
+   if (env.ANTHROPIC_API_KEY) {
+     envPairs.push('AUDREY_LLM_PROVIDER=anthropic');
+     envPairs.push(`ANTHROPIC_API_KEY=${env.ANTHROPIC_API_KEY}`);
+   }
+
+   const args = ['mcp', 'add', '-s', 'user', SERVER_NAME];
+   for (const pair of envPairs) {
+     args.push('-e', pair);
+   }
+   args.push('--', 'npx', 'audrey');
+
+   return args;
+ }
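To illustrate what `buildInstallArgs` produces, here is a trimmed standalone sketch of the same argument-building logic (illustrative only, not the package export; `DEFAULT_DATA_DIR` is hardcoded here, while the real module derives it from `homedir()`):

```javascript
// Standalone sketch mirroring buildInstallArgs above (illustrative only).
const DEFAULT_DATA_DIR = '/home/user/.audrey/data'; // hardcoded for the example
const SERVER_NAME = 'audrey-memory';

function sketchInstallArgs(env = {}) {
  const envPairs = [`AUDREY_DATA_DIR=${DEFAULT_DATA_DIR}`];
  if (env.OPENAI_API_KEY) {
    // Real OpenAI embeddings: 1536 dimensions, key forwarded to the server.
    envPairs.push('AUDREY_EMBEDDING_PROVIDER=openai', 'AUDREY_EMBEDDING_DIMENSIONS=1536', `OPENAI_API_KEY=${env.OPENAI_API_KEY}`);
  } else {
    envPairs.push('AUDREY_EMBEDDING_PROVIDER=mock', 'AUDREY_EMBEDDING_DIMENSIONS=8');
  }
  if (env.ANTHROPIC_API_KEY) {
    envPairs.push('AUDREY_LLM_PROVIDER=anthropic', `ANTHROPIC_API_KEY=${env.ANTHROPIC_API_KEY}`);
  }
  const args = ['mcp', 'add', '-s', 'user', SERVER_NAME];
  for (const pair of envPairs) args.push('-e', pair);
  args.push('--', 'npx', 'audrey');
  return args;
}

// With no keys in the environment, the resulting registration command is:
console.log(['claude', ...sketchInstallArgs()].join(' '));
// claude mcp add -s user audrey-memory -e AUDREY_DATA_DIR=/home/user/.audrey/data -e AUDREY_EMBEDDING_PROVIDER=mock -e AUDREY_EMBEDDING_DIMENSIONS=8 -- npx audrey
```

Each `-e KEY=VALUE` pair becomes an environment variable for the spawned MCP server, which is why the installer only needs to snapshot keys at install time.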
package/mcp-server/index.js CHANGED
@@ -4,33 +4,121 @@ import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
  import { z } from 'zod';
  import { homedir } from 'node:os';
  import { join } from 'node:path';
+ import { existsSync, readFileSync } from 'node:fs';
+ import { execFileSync } from 'node:child_process';
  import { Audrey } from '../src/index.js';
+ import { VERSION, SERVER_NAME, DEFAULT_DATA_DIR, buildAudreyConfig, buildInstallArgs } from './config.js';
  
  const VALID_SOURCES = ['direct-observation', 'told-by-user', 'tool-result', 'inference', 'model-generated'];
  const VALID_TYPES = ['episodic', 'semantic', 'procedural'];
  
- function buildAudreyConfig() {
-   const dataDir = process.env.AUDREY_DATA_DIR || join(homedir(), '.audrey', 'data');
-   const agent = process.env.AUDREY_AGENT || 'claude-code';
-   const embProvider = process.env.AUDREY_EMBEDDING_PROVIDER || 'mock';
-   const embDimensions = parseInt(process.env.AUDREY_EMBEDDING_DIMENSIONS || '8', 10);
-   const llmProvider = process.env.AUDREY_LLM_PROVIDER;
-
-   const config = {
-     dataDir,
-     agent,
-     embedding: { provider: embProvider, dimensions: embDimensions },
-   };
-
-   if (llmProvider === 'anthropic') {
-     config.llm = { provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY };
-   } else if (llmProvider === 'openai') {
-     config.llm = { provider: 'openai', apiKey: process.env.OPENAI_API_KEY };
-   } else if (llmProvider === 'mock') {
-     config.llm = { provider: 'mock' };
+ const subcommand = process.argv[2];
+
+ if (subcommand === 'install') {
+   install();
+ } else if (subcommand === 'uninstall') {
+   uninstall();
+ } else if (subcommand === 'status') {
+   status();
+ } else {
+   main().catch(err => {
+     console.error('[audrey-mcp] fatal:', err);
+     process.exit(1);
+   });
+ }
+
+ function install() {
+   try {
+     execFileSync('claude', ['--version'], { stdio: 'ignore' });
+   } catch {
+     console.error('Error: claude CLI not found. Install Claude Code first: https://docs.anthropic.com/en/docs/claude-code');
+     process.exit(1);
+   }
+
+   if (process.env.OPENAI_API_KEY) {
+     console.log('Detected OPENAI_API_KEY — using OpenAI embeddings (1536d)');
+   } else {
+     console.log('No OPENAI_API_KEY found — using mock embeddings (upgrade anytime by re-running with the key set)');
+   }
+
+   if (process.env.ANTHROPIC_API_KEY) {
+     console.log('Detected ANTHROPIC_API_KEY — enabling LLM-powered consolidation + contradiction detection');
+   }
+
+   const args = buildInstallArgs(process.env);
+
+   try {
+     execFileSync('claude', args, { stdio: 'inherit' });
+   } catch {
+     console.error('Failed to register MCP server. Is Claude Code installed and on your PATH?');
+     process.exit(1);
+   }
+
+   console.log(`
+ Audrey registered as "${SERVER_NAME}" with Claude Code.
+
+ 5 tools available in every session:
+   memory_encode        — Store observations, facts, preferences
+   memory_recall        — Search memories by semantic similarity
+   memory_consolidate   — Extract principles from accumulated episodes
+   memory_introspect    — Check memory system health
+   memory_resolve_truth — Resolve contradictions between claims
+
+ Data stored in: ${DEFAULT_DATA_DIR}
+ Verify: claude mcp list
+ `);
+ }
+
+ function uninstall() {
+   try {
+     execFileSync('claude', ['--version'], { stdio: 'ignore' });
+   } catch {
+     console.error('Error: claude CLI not found.');
+     process.exit(1);
+   }
+
+   try {
+     execFileSync('claude', ['mcp', 'remove', SERVER_NAME], { stdio: 'inherit' });
+     console.log(`Removed "${SERVER_NAME}" from Claude Code.`);
+   } catch {
+     console.error(`Failed to remove "${SERVER_NAME}". It may not be registered.`);
+     process.exit(1);
+   }
+ }
+
+ function status() {
+   let registered = false;
+   const claudeJsonPath = join(homedir(), '.claude.json');
+   try {
+     const claudeConfig = JSON.parse(readFileSync(claudeJsonPath, 'utf-8'));
+     registered = SERVER_NAME in (claudeConfig.mcpServers || {});
+   } catch {
+     // .claude.json doesn't exist or isn't readable
    }
  
-   return config;
+   console.log(`Registration: ${registered ? 'active' : 'not registered'}`);
+
+   if (existsSync(DEFAULT_DATA_DIR)) {
+     try {
+       const audrey = new Audrey({
+         dataDir: DEFAULT_DATA_DIR,
+         agent: 'status-check',
+         embedding: { provider: 'mock', dimensions: 8 },
+       });
+       const stats = audrey.introspect();
+       audrey.close();
+       console.log(`Data directory: ${DEFAULT_DATA_DIR}`);
+       console.log(`Memories: ${stats.episodic} episodic, ${stats.semantic} semantic, ${stats.procedural} procedural`);
+       console.log(`Dormant: ${stats.dormant}`);
+       console.log(`Causal links: ${stats.causalLinks}`);
+       console.log(`Contradictions: ${stats.contradictions.open} open, ${stats.contradictions.resolved} resolved`);
+       console.log(`Consolidation runs: ${stats.totalConsolidationRuns}`);
+     } catch (err) {
+       console.log(`Data directory: ${DEFAULT_DATA_DIR} (exists but could not read: ${err.message})`);
+     }
+   } else {
+     console.log(`Data directory: ${DEFAULT_DATA_DIR} (not yet created — will be created on first use)`);
+   }
  }
  
  function toolResult(data) {
@@ -44,11 +132,15 @@ function toolError(err) {
  async function main() {
    const config = buildAudreyConfig();
    const audrey = new Audrey(config);
-   console.error(`[audrey-mcp] started — agent=${config.agent} dataDir=${config.dataDir}`);
+
+   const embLabel = config.embedding.provider === 'mock'
+     ? 'mock embeddings — set OPENAI_API_KEY for real semantic search'
+     : `${config.embedding.provider} embeddings (${config.embedding.dimensions}d)`;
+   console.error(`[audrey-mcp] v${VERSION} started — agent=${config.agent} dataDir=${config.dataDir} (${embLabel})`);
  
    const server = new McpServer({
-     name: 'audrey-memory',
-     version: '0.3.0',
+     name: SERVER_NAME,
+     version: VERSION,
    });
  
    server.tool(
@@ -148,8 +240,3 @@ async function main() {
      process.exit(0);
    });
  }
- 
- main().catch(err => {
-   console.error('[audrey-mcp] fatal:', err);
-   process.exit(1);
- });
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "audrey",
-   "version": "0.3.0",
+   "version": "0.3.1",
    "description": "Biological memory architecture for AI agents — encode, consolidate, and recall memories with confidence decay, contradiction detection, and causal graphs",
    "type": "module",
    "main": "src/index.js",
@@ -9,7 +9,8 @@
      "./mcp": "./mcp-server/index.js"
    },
    "bin": {
-     "audrey-mcp": "./mcp-server/index.js"
+     "audrey": "mcp-server/index.js",
+     "audrey-mcp": "mcp-server/index.js"
    },
    "files": [
      "src/",
@@ -50,7 +51,7 @@
    ],
    "repository": {
      "type": "git",
-     "url": "https://github.com/Evilander/Audrey.git"
+     "url": "git+https://github.com/Evilander/Audrey.git"
    },
    "homepage": "https://github.com/Evilander/Audrey",
    "bugs": {