@prmichaelsen/remember-mcp 2.2.1 → 2.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (61)
  1. package/AGENT.md +4 -4
  2. package/CHANGELOG.md +45 -0
  3. package/README.md +43 -3
  4. package/agent/commands/acp.init.md +376 -0
  5. package/agent/commands/acp.proceed.md +311 -0
  6. package/agent/commands/acp.status.md +280 -0
  7. package/agent/commands/acp.version-check-for-updates.md +275 -0
  8. package/agent/commands/acp.version-check.md +190 -0
  9. package/agent/commands/acp.version-update.md +288 -0
  10. package/agent/commands/command.template.md +273 -0
  11. package/agent/design/core-memory-user-profile.md +1253 -0
  12. package/agent/design/ghost-profiles-pseudonymous-identity.md +194 -0
  13. package/agent/design/publish-tools-confirmation-flow.md +922 -0
  14. package/agent/milestones/milestone-10-shared-spaces.md +169 -0
  15. package/agent/progress.yaml +90 -4
  16. package/agent/scripts/install.sh +118 -0
  17. package/agent/scripts/update.sh +22 -10
  18. package/agent/scripts/version.sh +35 -0
  19. package/agent/tasks/task-27-implement-llm-provider-interface.md +51 -0
  20. package/agent/tasks/task-28-implement-llm-provider-factory.md +64 -0
  21. package/agent/tasks/task-29-update-config-for-llm.md +71 -0
  22. package/agent/tasks/task-30-implement-bedrock-provider.md +147 -0
  23. package/agent/tasks/task-31-implement-background-job-service.md +120 -0
  24. package/agent/tasks/task-32-test-llm-provider-integration.md +152 -0
  25. package/agent/tasks/task-34-create-confirmation-token-service.md +191 -0
  26. package/agent/tasks/task-35-create-space-memory-types-schema.md +183 -0
  27. package/agent/tasks/task-36-implement-remember-publish.md +227 -0
  28. package/agent/tasks/task-37-implement-remember-confirm.md +225 -0
  29. package/agent/tasks/task-38-implement-remember-deny.md +161 -0
  30. package/agent/tasks/task-39-implement-remember-search-space.md +188 -0
  31. package/agent/tasks/task-40-implement-remember-query-space.md +193 -0
  32. package/agent/tasks/task-41-configure-firestore-ttl.md +188 -0
  33. package/agent/tasks/task-42-create-tests-shared-spaces.md +216 -0
  34. package/agent/tasks/task-43-update-documentation.md +255 -0
  35. package/dist/llm/types.d.ts +1 -0
  36. package/dist/server-factory.js +914 -1
  37. package/dist/server.js +916 -3
  38. package/dist/services/confirmation-token.service.d.ts +99 -0
  39. package/dist/services/confirmation-token.service.spec.d.ts +5 -0
  40. package/dist/tools/confirm.d.ts +20 -0
  41. package/dist/tools/deny.d.ts +19 -0
  42. package/dist/tools/publish.d.ts +22 -0
  43. package/dist/tools/query-space.d.ts +28 -0
  44. package/dist/tools/search-space.d.ts +29 -0
  45. package/dist/types/space-memory.d.ts +80 -0
  46. package/dist/weaviate/space-schema.d.ts +59 -0
  47. package/dist/weaviate/space-schema.spec.d.ts +5 -0
  48. package/package.json +1 -1
  49. package/src/llm/types.ts +0 -0
  50. package/src/server-factory.ts +33 -0
  51. package/src/server.ts +33 -0
  52. package/src/services/confirmation-token.service.spec.ts +254 -0
  53. package/src/services/confirmation-token.service.ts +232 -0
  54. package/src/tools/confirm.ts +176 -0
  55. package/src/tools/deny.ts +70 -0
  56. package/src/tools/publish.ts +167 -0
  57. package/src/tools/query-space.ts +197 -0
  58. package/src/tools/search-space.ts +189 -0
  59. package/src/types/space-memory.ts +94 -0
  60. package/src/weaviate/space-schema.spec.ts +131 -0
  61. package/src/weaviate/space-schema.ts +275 -0
@@ -0,0 +1,1253 @@
# Core Memory / User Profile

**Concept**: A special system memory that tracks metadata about a user's memories and inferred profile information to enable better discovery and context-aware operations.

**Created**: 2026-02-15
**Status**: Design Specification

---

## Overview

The Core Memory is a special system-type memory document that solves the "cold start" problem where users don't know what to query for. It provides:

1. **Memory Discovery**: Statistics and metadata about what memories exist
2. **User Context**: Inferred profile information about the user
3. **Query Suggestions**: Hints for effective memory searches
4. **Agent Context**: A quick overview for agents to understand the user

This document is automatically created and maintained by the system, requiring no user intervention.

**Key Design**: Uses the simple ID `"core"`, since collections are already scoped to the user ID.

---

## Problem Statement

**Current Issues:**
1. Users don't know what memories they have stored
2. Agents lack context about the user when starting conversations
3. No way to discover what topics/types of memories exist
4. Cold start problem: "What should I search for?"
5. No persistent understanding of user identity across sessions

**User Stories:**
- "What memories do I have?" → Should show an overview
- "Tell me about myself" → Should provide a user profile
- "What can I search for?" → Should suggest topics
- Agents need context about the user's profession, location, and interests

---

## Solution

### Core Memory Document

A special memory document with:
- **Type**: `system` (existing content type)
- **ID**: `"core"` (a simple, well-known ID; collections are already scoped to the user)
- **Auto-created**: On the first memory operation, if it doesn't exist
- **Auto-updated**: Incrementally on memory/relationship operations
- **Searchable**: Like other memories, but marked as system type

### Why Option 1 (System Memory)?

✅ **Advantages:**
- Leverages existing memory infrastructure
- Searchable via vector embeddings
- No separate storage system needed
- Can be queried like any other memory
- Consistent with existing architecture
- Can use relationships to link to key memories

❌ **Disadvantages:**
- Slightly slower than a Firestore lookup
- Counts toward memory storage
- Must be filtered out of normal searches (unless the user wants it)
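
The last point is easy to miss in implementation: ordinary searches must exclude the `system` type unless the caller explicitly opts in. A minimal sketch of the post-filter (the `SearchHit` shape here is illustrative, not the real schema):

```typescript
interface SearchHit {
  id: string;
  type: string; // ContentType in the real schema
  score: number;
}

// Exclude the core/system memory from ordinary search results
// unless the caller explicitly opts in.
function filterSearchResults(
  hits: SearchHit[],
  includeSystem = false
): SearchHit[] {
  return includeSystem ? hits : hits.filter(h => h.type !== 'system');
}
```

The same predicate could instead be pushed down into the vector query as a `where` clause, which avoids fetching the core document at all.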

---

## Implementation

### 1. Core Memory Structure

```typescript
interface CoreMemory extends Memory {
  // Standard Memory fields
  id: 'core'; // Simple ID - collections are scoped to user
  user_id: string;
  doc_type: 'memory';
  type: 'system'; // Uses existing system content type

  // Content (human-readable summary)
  content: string; // Natural language summary for vector search
  title: 'User Profile & Memory Overview';

  // Structured metadata
  structured_content: {
    // Memory Statistics
    statistics: {
      total_memories: number;
      total_relationships: number;
      memory_by_type: Record<ContentType, number>;
      common_tags: Array<{ tag: string; count: number }>;
      date_range: {
        oldest: string; // ISO 8601
        newest: string; // ISO 8601
      };
      last_updated: string; // ISO 8601
    };

    // User Profile (inferred from memories)
    profile: {
      // Professional
      occupation?: string;
      company?: string;
      skills?: string[];

      // Personal
      location?: {
        city?: string;
        country?: string;
        timezone?: string;
      };
      interests?: string[];
      hobbies?: string[];

      // Relationships
      key_people?: Array<{
        name: string;
        relationship: string;
        memory_id?: string; // Link to person memory
      }>;

      // Important Dates
      important_dates?: Array<{
        date: string;
        description: string;
        type: 'birthday' | 'anniversary' | 'event';
      }>;

      // Personality & Communication Style (LLM-inferred)
      personality?: {
        traits?: string[]; // e.g., ["analytical", "creative", "detail-oriented"]
        communication_style?: string; // e.g., "Direct and concise" or "Warm and conversational"
        values?: string[]; // e.g., ["efficiency", "creativity", "collaboration"]
        working_style?: string; // e.g., "Prefers structured planning"
        tone_preferences?: string; // e.g., "Professional but friendly"
      };

      // Confidence scores for inferred data
      confidence: Record<string, number>; // 0-1 for each field
    };

    // Content Categories
    categories: {
      top_types: Array<{ type: ContentType; count: number }>;
      active_projects?: string[];
      recurring_themes?: string[];
      frequent_topics?: string[];
    };

    // Discovery Hints
    discovery: {
      sample_queries: string[]; // Suggested queries that would work well
      key_topics: string[]; // Main topics to explore
      memory_clusters?: Array<{
        topic: string;
        count: number;
        sample_ids: string[];
      }>;
    };

    // Representative Memory Snippets (for context)
    snippets?: {
      recent: Array<{
        memory_id: string;
        type: ContentType;
        excerpt: string; // First 200 chars
        date: string;
      }>;
      significant: Array<{
        memory_id: string;
        type: ContentType;
        excerpt: string;
        weight: number;
        why_significant: string; // LLM-generated explanation
      }>;
      personality_revealing: Array<{
        memory_id: string;
        excerpt: string;
        reveals: string; // What this reveals about the user
      }>;
    };

    // Natural Language Insights (LLM-generated)
    insights?: {
      summary: string; // 2-3 sentence overview of who the user is
      patterns: string[]; // Observed patterns (e.g., "Frequently saves technical documentation on weekends")
      focus_areas: string[]; // What the user focuses on (e.g., "Career development and outdoor activities")
      communication_notes: string; // How to best communicate with this user
      last_generated: string; // ISO 8601
    };

    // Metadata
    version: number; // Schema version
    last_full_rebuild?: string; // ISO 8601
    update_count: number; // Number of incremental updates
  };

  // Standard memory fields
  weight: 1.0; // Always high priority
  trust: 1.0; // System-generated, fully trusted
  tags: ['system', 'profile', 'core'];
  context: MemoryContext; // Standard context
}
```

### 2. Content Generation

The `content` field should be a rich natural language summary for vector search, including personality insights and representative snippets.

**Simple Template-Based Generation** (for incremental updates):
```typescript
function generateCoreMemoryContentSimple(data: CoreMemory['structured_content']): string {
  const parts: string[] = [];

  // User profile summary
  if (data.profile.occupation) {
    parts.push(`User works as a ${data.profile.occupation}`);
  }
  if (data.profile.location?.city) {
    parts.push(`Lives in ${data.profile.location.city}`);
  }
  if (data.profile.interests?.length) {
    parts.push(`Interested in ${data.profile.interests.join(', ')}`);
  }

  // Personality (if available)
  if (data.profile.personality?.traits?.length) {
    parts.push(`Personality traits: ${data.profile.personality.traits.join(', ')}`);
  }

  // Memory statistics
  parts.push(`Has ${data.statistics.total_memories} memories stored`);

  // Top content types
  const topTypes = data.categories.top_types.slice(0, 3);
  if (topTypes.length) {
    const typeList = topTypes.map(t => `${t.count} ${t.type}`).join(', ');
    parts.push(`Most common memory types: ${typeList}`);
  }

  // Key topics
  if (data.discovery.key_topics.length) {
    parts.push(`Key topics: ${data.discovery.key_topics.join(', ')}`);
  }

  // Insights summary (if available)
  if (data.insights?.summary) {
    parts.push(data.insights.summary);
  }

  return parts.join('. ') + '.';
}
```

**LLM-Enhanced Generation** (for full rebuilds):
```typescript
import { completeLLM } from '../llm/factory.js';

async function generateCoreMemoryContentLLM(
  data: CoreMemory['structured_content'],
  recentMemories: Memory[]
): Promise<string> {
  // Sample diverse memories for personality analysis
  const memorySnippets = recentMemories.slice(0, 20).map(m => ({
    type: m.type,
    title: m.title,
    content: m.content.substring(0, 300),
    tags: m.tags,
    date: m.created_at,
  }));

  const context = {
    statistics: data.statistics,
    profile: data.profile,
    categories: data.categories,
    memory_snippets: memorySnippets,
  };

  const prompt = `Generate a comprehensive natural language summary of this user's profile, personality, and memory collection.
This summary will be used for semantic search and agent context, so make it rich and descriptive.

User Data:
${JSON.stringify(context, null, 2)}

Generate a 3-4 paragraph summary that captures:

1. **Who the user is**: Occupation, location, interests, key relationships
2. **Personality & Communication**: Communication style, values, working preferences, tone
3. **Memory patterns**: What types of content they save, recurring themes, focus areas
4. **Notable insights**: Patterns in their behavior, what matters to them, how they organize information

Include specific examples from their memories when relevant. Make it conversational, searchable, and insightful.
Focus on facts evident in the data, but infer personality traits from patterns.`;

  const result = await completeLLM([
    {
      role: 'system',
      content: 'You are a profile and personality analysis assistant. Generate rich, insightful summaries that capture both facts and personality.',
    },
    {
      role: 'user',
      content: prompt,
    },
  ], {
    temperature: 0.7,
    maxTokens: 800,
  });

  return result.content;
}
```

### 3. Auto-Creation

```typescript
async function ensureCoreMemory(userId: string): Promise<CoreMemory> {
  const coreId = 'core'; // Simple ID - collections are scoped to user

  try {
    // Try to fetch existing core memory
    const existing = await getMemoryById(coreId, userId);
    return existing as CoreMemory;
  } catch (error) {
    // Doesn't exist, create it
    const coreMemory: CoreMemory = {
      id: coreId,
      user_id: userId,
      doc_type: 'memory',
      type: 'system',
      title: 'User Profile & Memory Overview',
      content: 'User profile and memory overview. No memories stored yet.',
      structured_content: {
        statistics: {
          total_memories: 0,
          total_relationships: 0,
          memory_by_type: {},
          common_tags: [],
          date_range: { oldest: '', newest: '' },
          last_updated: new Date().toISOString(),
        },
        profile: {
          confidence: {},
        },
        categories: {
          top_types: [],
        },
        discovery: {
          sample_queries: [
            'What memories do I have?',
            'Show me recent notes',
            'What projects am I working on?',
          ],
          key_topics: [],
        },
        version: 1,
        update_count: 0,
      },
      weight: 1.0,
      trust: 1.0,
      base_weight: 1.0,
      tags: ['system', 'profile', 'core'],
      relationships: [],
      access_count: 0,
      created_at: new Date().toISOString(),
      updated_at: new Date().toISOString(),
      version: 1,
      location: {
        gps: null,
        address: null,
        source: 'unavailable',
        confidence: 0,
        is_approximate: false,
      },
      context: {
        timestamp: new Date().toISOString(),
        source: {
          type: 'system',
          platform: 'remember-mcp',
        },
        tags: ['system', 'auto-generated'],
      },
    };

    await createMemory(coreMemory, userId);
    return coreMemory;
  }
}
```

### 4. Incremental Updates

Update the core memory on every memory/relationship operation. **This runs synchronously but is fast** (no LLM calls):

```typescript
async function updateCoreMemoryIncremental(
  userId: string,
  operation: 'create' | 'update' | 'delete',
  memory: Memory | Relationship
): Promise<void> {
  const coreMemory = await ensureCoreMemory(userId);
  const data = coreMemory.structured_content;

  // Update statistics
  if (operation === 'create') {
    if (memory.doc_type === 'memory') {
      data.statistics.total_memories++;
      const type = (memory as Memory).type;
      data.statistics.memory_by_type[type] =
        (data.statistics.memory_by_type[type] || 0) + 1;
    } else {
      data.statistics.total_relationships++;
    }
  } else if (operation === 'delete') {
    if (memory.doc_type === 'memory') {
      // Clamp at zero so a duplicate delete can't drive counts negative
      data.statistics.total_memories = Math.max(0, data.statistics.total_memories - 1);
      const type = (memory as Memory).type;
      data.statistics.memory_by_type[type] =
        Math.max(0, (data.statistics.memory_by_type[type] || 0) - 1);
    } else {
      data.statistics.total_relationships = Math.max(0, data.statistics.total_relationships - 1);
    }
  }

  // Update tags
  if (memory.tags) {
    updateCommonTags(data.statistics.common_tags, memory.tags, operation);
  }

  // Update date range
  updateDateRange(data.statistics.date_range, memory.created_at);

  // Infer profile updates (if memory contains relevant info)
  if (memory.doc_type === 'memory') {
    inferProfileUpdates(data.profile, memory as Memory);
  }

  // Update metadata
  data.statistics.last_updated = new Date().toISOString();
  data.update_count++;

  // Regenerate content for vector search (simple template-based, fast)
  coreMemory.content = generateCoreMemoryContentSimple(data);
  coreMemory.updated_at = new Date().toISOString();
  coreMemory.version++;

  // Save
  await updateMemory(coreMemory.id, coreMemory, userId);

  // Schedule full rebuild if needed (every 100 updates)
  // This runs in the background and doesn't block the response
  if (data.update_count % 100 === 0) {
    scheduleFullRebuildBackground(userId).catch(error => {
      console.error('[Core Memory] Background rebuild failed:', error);
    });
  }
}
```
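
`updateCommonTags` and `updateDateRange` are called above but never specified. One plausible, purely illustrative shape for these helpers (assuming ISO 8601 timestamps, which compare correctly as plain strings):

```typescript
interface TagCount { tag: string; count: number; }
interface DateRange { oldest: string; newest: string; }

// Maintain the running tag histogram as memories are created and deleted.
function updateCommonTags(
  common: TagCount[],
  tags: string[],
  operation: 'create' | 'update' | 'delete'
): void {
  const delta = operation === 'create' ? 1 : operation === 'delete' ? -1 : 0;
  if (delta === 0) return; // updates don't change tag counts here

  for (const tag of tags) {
    const entry = common.find(t => t.tag === tag);
    if (entry) {
      entry.count = Math.max(0, entry.count + delta);
    } else if (delta > 0) {
      common.push({ tag, count: 1 });
    }
  }

  // Keep the most frequent tags first
  common.sort((a, b) => b.count - a.count);
}

// Widen the observed date range to include a new timestamp.
function updateDateRange(range: DateRange, timestamp: string): void {
  if (!range.oldest || timestamp < range.oldest) range.oldest = timestamp;
  if (!range.newest || timestamp > range.newest) range.newest = timestamp;
}
```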

### 5. Profile Inference

**Simple Rule-Based Inference** (for incremental updates):
```typescript
function inferProfileUpdates(profile: CoreMemory['structured_content']['profile'], memory: Memory): void {
  const content = memory.content.toLowerCase();
  const type = memory.type;

  // Infer occupation
  if (type === 'person' && memory.structured_content?.job_title) {
    profile.occupation = memory.structured_content.job_title;
    profile.confidence.occupation = 0.8;
  }

  // Infer location from memory location data
  if (memory.location?.address?.city) {
    profile.location = profile.location || {};
    profile.location.city = memory.location.address.city;
    profile.location.country = memory.location.address.country;
    profile.confidence.location = memory.location.confidence;
  }

  // Infer interests from tags and content
  if (memory.tags) {
    profile.interests = profile.interests || [];
    for (const tag of memory.tags) {
      if (!profile.interests.includes(tag) && isInterestTag(tag)) {
        profile.interests.push(tag);
      }
    }
  }

  // Infer key people from person memories
  if (type === 'person' && memory.structured_content?.name) {
    profile.key_people = profile.key_people || [];
    const person = {
      name: memory.structured_content.name,
      relationship: memory.structured_content.relationship || 'contact',
      memory_id: memory.id,
    };

    // Update or add
    const existing = profile.key_people.find(p => p.name === person.name);
    if (existing) {
      Object.assign(existing, person);
    } else {
      profile.key_people.push(person);
    }
  }

  // More inference rules...
}
```
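
`isInterestTag` is used above but never defined in this spec. A conservative sketch is a deny-list of structural or bookkeeping tags (the list contents here are illustrative, not normative):

```typescript
// Tags that describe bookkeeping rather than the user's interests.
const NON_INTEREST_TAGS = new Set([
  'system', 'profile', 'core', 'auto-generated', 'todo', 'archived',
]);

// Treat a tag as an interest only if it is not structural metadata.
function isInterestTag(tag: string): boolean {
  return !NON_INTEREST_TAGS.has(tag.toLowerCase());
}
```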

**LLM-Enhanced Inference** (for full rebuilds):
```typescript
import { completeLLM } from '../llm/factory.js';

async function inferProfileFromMemoriesLLM(memories: Memory[]): Promise<CoreMemory['structured_content']['profile']> {
  // Sample memories for analysis (don't send all to the LLM)
  const sampleMemories = memories
    .filter(m => m.type !== 'system')
    .slice(0, 50)
    .map(m => ({
      type: m.type,
      content: m.content.substring(0, 300),
      tags: m.tags,
      structured_content: m.structured_content,
      location: m.location?.address,
    }));

  const prompt = `Analyze these memories and infer information about the user. Be conservative - only infer what's clearly evident.

Memories:
${JSON.stringify(sampleMemories, null, 2)}

Infer and return JSON with:
{
  "occupation": "string or null",
  "company": "string or null",
  "skills": ["array of strings"],
  "location": {
    "city": "string or null",
    "country": "string or null"
  },
  "interests": ["array of strings"],
  "hobbies": ["array of strings"],
  "key_people": [
    {
      "name": "string",
      "relationship": "string"
    }
  ],
  "confidence": {
    "occupation": 0.0-1.0,
    "location": 0.0-1.0,
    "interests": 0.0-1.0
  }
}

Only include fields where you have evidence. Use confidence scores to indicate certainty.`;

  const result = await completeLLM([
    {
      role: 'system',
      content: 'You are a profile inference assistant. Analyze memories and extract factual information about the user. Be conservative and accurate.',
    },
    {
      role: 'user',
      content: prompt,
    },
  ], {
    temperature: 0.3, // Low temperature for factual inference
    maxTokens: 1000,
  });

  try {
    const inferred = JSON.parse(result.content);
    return inferred;
  } catch (error) {
    console.error('[Core Memory] Failed to parse LLM inference result:', error);
    // Fall back to empty profile
    return { confidence: {} };
  }
}
```

### 6. Background Processing

**Critical**: Full rebuilds with the LLM must run in the background to avoid blocking tool responses.

#### Background Job Queue

Use a simple in-memory queue with Firestore for persistence:

```typescript
// src/services/background-jobs.service.ts

interface BackgroundJob {
  id: string;
  type: 'core_memory_rebuild';
  userId: string;
  status: 'pending' | 'running' | 'completed' | 'failed';
  created_at: string;
  started_at?: string;
  completed_at?: string;
  error?: string;
}

class BackgroundJobService {
  private runningJobs = new Map<string, Promise<void>>();

  async scheduleJob(type: BackgroundJob['type'], userId: string): Promise<string> {
    const jobId = `${type}-${userId}-${Date.now()}`;

    const job: BackgroundJob = {
      id: jobId,
      type,
      userId,
      status: 'pending',
      created_at: new Date().toISOString(),
    };

    // Store in Firestore for persistence
    await saveJobToFirestore(job);

    // Start processing (don't await - fire and forget)
    this.processJob(job).catch(error => {
      console.error(`[Background Jobs] Job ${jobId} failed:`, error);
    });

    return jobId;
  }

  private async processJob(job: BackgroundJob): Promise<void> {
    // Prevent duplicate processing
    if (this.runningJobs.has(job.id)) {
      return;
    }

    const promise = this.executeJob(job);
    this.runningJobs.set(job.id, promise);

    try {
      await promise;
    } finally {
      this.runningJobs.delete(job.id);
    }
  }

  private async executeJob(job: BackgroundJob): Promise<void> {
    try {
      // Update status to running
      job.status = 'running';
      job.started_at = new Date().toISOString();
      await updateJobInFirestore(job);

      // Execute based on type
      switch (job.type) {
        case 'core_memory_rebuild':
          await rebuildCoreMemoryWithLLM(job.userId);
          break;
        default:
          throw new Error(`Unknown job type: ${job.type}`);
      }

      // Mark as completed
      job.status = 'completed';
      job.completed_at = new Date().toISOString();
      await updateJobInFirestore(job);

      console.log(`[Background Jobs] Job ${job.id} completed`);
    } catch (error) {
      // Mark as failed
      job.status = 'failed';
      job.error = error instanceof Error ? error.message : String(error);
      job.completed_at = new Date().toISOString();
      await updateJobInFirestore(job);

      throw error;
    }
  }

  async getJobStatus(jobId: string): Promise<BackgroundJob | null> {
    return await getJobFromFirestore(jobId);
  }
}

export const backgroundJobs = new BackgroundJobService();
```

#### Schedule Background Rebuild

```typescript
async function scheduleFullRebuildBackground(userId: string): Promise<void> {
  // Fire and forget - don't await
  backgroundJobs.scheduleJob('core_memory_rebuild', userId).catch(error => {
    console.error('[Core Memory] Failed to schedule rebuild:', error);
  });

  console.log(`[Core Memory] Scheduled background rebuild for user ${userId}`);
}
```

#### Full Rebuild with LLM

This runs in the background and can take several seconds:

```typescript
async function rebuildCoreMemoryWithLLM(userId: string): Promise<void> {
  console.log(`[Core Memory] Starting LLM-enhanced rebuild for user ${userId}`);

  // Fetch all memories and relationships
  const allMemories = await getAllMemories(userId);
  const allRelationships = await getAllRelationships(userId);

  // Build statistics from scratch (fast, no LLM)
  const statistics = buildStatistics(allMemories, allRelationships);

  // Infer profile using LLM (slow, 2-5 seconds)
  const profile = await inferProfileFromMemoriesLLM(allMemories);

  // Build categories (fast, no LLM)
  const categories = buildCategories(allMemories);

  // Generate discovery hints (fast, no LLM)
  const discovery = generateDiscoveryHints(allMemories, statistics);

  // Create new core memory
  const coreMemory = await ensureCoreMemory(userId);
  coreMemory.structured_content = {
    statistics,
    profile,
    categories,
    discovery,
    version: 1,
    last_full_rebuild: new Date().toISOString(),
    update_count: 0,
  };

  // Generate content using LLM (slow, 2-5 seconds)
  coreMemory.content = await generateCoreMemoryContentLLM(
    coreMemory.structured_content,
    allMemories.slice(0, 20) // Recent memories for context
  );

  coreMemory.updated_at = new Date().toISOString();
  coreMemory.version++;

  await updateMemory(coreMemory.id, coreMemory, userId);

  console.log(`[Core Memory] Completed LLM-enhanced rebuild for user ${userId}`);
}
```
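
`buildStatistics`, `buildCategories`, and `generateDiscoveryHints` are assumed above but not defined. As an illustration, the statistics pass might look like this (`MemoryLite` is a stand-in for the real `Memory` type, not part of the spec):

```typescript
interface MemoryLite {
  type: string;
  tags?: string[];
  created_at: string; // ISO 8601
}

interface Statistics {
  total_memories: number;
  total_relationships: number;
  memory_by_type: Record<string, number>;
  common_tags: Array<{ tag: string; count: number }>;
  date_range: { oldest: string; newest: string };
  last_updated: string;
}

// Recompute the statistics block from scratch -- no LLM involved.
function buildStatistics(
  memories: MemoryLite[],
  relationships: unknown[]
): Statistics {
  const byType: Record<string, number> = {};
  const tagCounts = new Map<string, number>();
  let oldest = '';
  let newest = '';

  for (const m of memories) {
    byType[m.type] = (byType[m.type] || 0) + 1;
    for (const tag of m.tags ?? []) {
      tagCounts.set(tag, (tagCounts.get(tag) || 0) + 1);
    }
    // ISO 8601 strings compare correctly as plain strings
    if (!oldest || m.created_at < oldest) oldest = m.created_at;
    if (!newest || m.created_at > newest) newest = m.created_at;
  }

  return {
    total_memories: memories.length,
    total_relationships: relationships.length,
    memory_by_type: byType,
    common_tags: [...tagCounts.entries()]
      .map(([tag, count]) => ({ tag, count }))
      .sort((a, b) => b.count - a.count)
      .slice(0, 20),
    date_range: { oldest, newest },
    last_updated: new Date().toISOString(),
  };
}
```

`buildCategories` and `generateDiscoveryHints` would follow the same pattern: pure aggregation over the fetched memories, cheap enough to run on every rebuild.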

#### Process Lifecycle Management

**Important**: Node.js keeps the process alive while the event loop has pending work (timers, sockets, in-flight I/O). A background rebuild's Firestore and LLM requests count as pending I/O, so a properly tracked promise will normally run to completion; it is only lost if the process is killed or exits for another reason.

**Our approach**:
- Store job status in Firestore (persists across restarts)
- Track running jobs in memory
- Log all job lifecycle events
- Handle errors gracefully
- Don't block tool responses

**If the server restarts mid-job**:
- The job status in Firestore still shows "running"
- On restart, stale jobs can be detected and retried
- The core memory retains its last successful state
- The next incremental update will continue normally
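
The stale-job detection mentioned above is not specified. One way to sketch it is a predicate applied to "running" jobs at startup; jobs that match are marked failed and rescheduled. The 10-minute threshold is an arbitrary illustrative choice:

```typescript
const STALE_AFTER_MS = 10 * 60 * 1000; // 10 minutes (illustrative)

// A job is stale if it claims to be running but started too long ago --
// e.g. the process died before it could record completion.
function isStale(
  job: { status: string; started_at?: string },
  now: Date
): boolean {
  if (job.status !== 'running' || !job.started_at) return false;
  return now.getTime() - new Date(job.started_at).getTime() > STALE_AFTER_MS;
}
```

At startup the service would query Firestore for `status == 'running'`, apply this predicate, and re-queue the matches.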

#### Firestore Job Storage

```typescript
// src/services/background-jobs-firestore.ts

async function saveJobToFirestore(job: BackgroundJob): Promise<void> {
  const db = getFirestore();
  await db.collection('background_jobs').doc(job.id).set(job);
}

async function updateJobInFirestore(job: BackgroundJob): Promise<void> {
  const db = getFirestore();
  await db.collection('background_jobs').doc(job.id).update(job);
}

async function getJobFromFirestore(jobId: string): Promise<BackgroundJob | null> {
  const db = getFirestore();
  const doc = await db.collection('background_jobs').doc(jobId).get();
  return doc.exists ? (doc.data() as BackgroundJob) : null;
}

// Clean up old completed jobs (run periodically)
async function cleanupOldJobs(): Promise<void> {
  const db = getFirestore();
  const cutoff = new Date();
  cutoff.setDate(cutoff.getDate() - 7); // Keep 7 days

  const oldJobs = await db
    .collection('background_jobs')
    .where('completed_at', '<', cutoff.toISOString())
    .get();

  const batch = db.batch();
  oldJobs.docs.forEach(doc => batch.delete(doc.ref));
  await batch.commit();

  console.log(`[Background Jobs] Cleaned up ${oldJobs.size} old jobs`);
}
```
813
+
814
+ ---
815
+
816
+ ## Tool Integration
817
+
818
+ ### New Tool: `remember_get_profile`
819
+
820
+ ```typescript
821
+ export const getProfileTool = {
822
+ name: 'remember_get_profile',
823
+ description: `Get user profile and memory overview.
824
+
825
+ Returns a comprehensive overview of:
826
+ - User profile (occupation, location, interests)
827
+ - Memory statistics (counts by type, common tags)
828
+ - Discovery hints (suggested queries, key topics)
829
+ - Content categories (top types, active projects)
830
+
831
+ Use this when:
832
+ - User asks "What memories do I have?"
833
+ - User asks "Tell me about myself"
834
+ - You need context about the user
835
+ - User doesn't know what to search for
836
+
837
+ This is automatically maintained and requires no user input.
838
+ `,
839
+ inputSchema: {
840
+ type: 'object',
841
+ properties: {
842
+ format: {
843
+ type: 'string',
844
+ enum: ['detailed', 'summary'],
845
+ default: 'detailed',
846
+ description: 'Return full profile or just summary',
847
+ },
848
+ },
849
+ },
850
+ };
851
+
852
+ export async function handleGetProfile(
853
+ args: { format?: 'detailed' | 'summary' },
854
+ userId: string
855
+ ): Promise<string> {
856
+ const coreMemory = await ensureCoreMemory(userId);
857
+
858
+ if (args.format === 'summary') {
859
+ // Return just the content (natural language summary)
860
+ return JSON.stringify({
861
+ summary: coreMemory.content,
862
+ total_memories: coreMemory.structured_content.statistics.total_memories,
863
+ key_topics: coreMemory.structured_content.discovery.key_topics,
864
+ }, null, 2);
865
+ }
866
+
867
+ // Return full structured data
868
+ return JSON.stringify(coreMemory.structured_content, null, 2);
869
+ }
870
+ ```

### Update Existing Tools

All memory and relationship tools should call `updateCoreMemoryIncremental` after their existing logic:

```typescript
// In create-memory.ts
export async function handleCreateMemory(args: any, userId: string): Promise<string> {
  // ... existing create logic ...
  const memory = await createMemory(memoryData, userId);

  // Update core memory
  await updateCoreMemoryIncremental(userId, 'create', memory);

  return JSON.stringify(memory, null, 2);
}

// Similar for update-memory, delete-memory, create-relationship, etc.
```
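The incremental update itself can stay cheap. Below is a minimal sketch of the counter-maintenance portion of `updateCoreMemoryIncremental`, using an in-memory map in place of Firestore; the `Memory` shape, store, and field names here are illustrative assumptions, not the actual implementation:

```typescript
interface Memory { id: string; type: string; tags: string[]; }

interface CoreMemoryStats {
  total_memories: number;
  by_type: Record<string, number>;
  tag_counts: Record<string, number>;
}

// Illustrative in-memory store; the real service would read/write Firestore.
const statsByUser = new Map<string, CoreMemoryStats>();

export function updateCoreMemoryIncremental(
  userId: string,
  operation: 'create' | 'delete',
  memory: Memory
): CoreMemoryStats {
  const stats = statsByUser.get(userId) ?? {
    total_memories: 0,
    by_type: {},
    tag_counts: {},
  };
  const delta = operation === 'create' ? 1 : -1;

  // Adjust aggregate counters rather than rescanning all memories.
  stats.total_memories += delta;
  stats.by_type[memory.type] = (stats.by_type[memory.type] ?? 0) + delta;
  for (const tag of memory.tags) {
    stats.tag_counts[tag] = (stats.tag_counts[tag] ?? 0) + delta;
  }

  statsByUser.set(userId, stats);
  return stats;
}
```

Because only counters change per operation, the update cost is O(tags) rather than O(total memories), which is what keeps tool responses fast.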

---

## Benefits

### For Users
1. **Discovery**: "What memories do I have?" → Get overview
2. **Context**: "Tell me about myself" → Get profile
3. **Suggestions**: "What can I search for?" → Get hints
4. **Understanding**: See what the system knows about them

### For Agents
1. **Context**: Understand the user before responding
2. **Better queries**: Know what topics exist
3. **Personalization**: Tailor responses to the user's profile
4. **Efficiency**: Quick overview without scanning all memories

### For System
1. **Analytics**: Track usage patterns
2. **Quality**: Identify gaps in memory coverage
3. **Optimization**: Suggest memory organization improvements
4. **Trust**: Transparent about what's stored

---

## Privacy & Security

### Inference Opt-Out

Users should be able to disable profile inference:

```typescript
// In user preferences
interface UserPreferences {
  // ... existing preferences ...
  core_memory: {
    enable_profile_inference: boolean; // Default: true
    enable_discovery_hints: boolean; // Default: true
    visible_to_user: boolean; // Default: true
  };
}
```
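The opt-out then gates the inference path itself. A minimal sketch, where the rule-based `inferInterests` step and its keyword heuristics are illustrative assumptions (the real path may instead call the LLM provider):

```typescript
interface CoreMemoryPrefs {
  enable_profile_inference: boolean;
}

interface InferredProfile {
  interests: string[];
}

// Illustrative rule-based inference; keyword rules here are placeholders.
function inferInterests(memories: string[]): InferredProfile {
  const interests = new Set<string>();
  for (const m of memories) {
    if (m.toLowerCase().includes('hike')) interests.add('hiking');
    if (m.toLowerCase().includes('typescript')) interests.add('programming');
  }
  return { interests: [...interests] };
}

export function buildInferredProfile(
  prefs: CoreMemoryPrefs,
  memories: string[]
): InferredProfile {
  // Respect the opt-out: skip inference entirely when disabled.
  if (!prefs.enable_profile_inference) {
    return { interests: [] };
  }
  return inferInterests(memories);
}
```

The important property is that disabling the preference short-circuits before any inference runs, so no inferred data is ever produced, not merely hidden.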

### Accuracy Concerns

- All inferred data includes confidence scores
- Users can view and correct inferred profile data
- The system should indicate "inferred" vs "explicit" data
- Provide a tool to manually update profile fields

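Distinguishing inferred from explicit data is simplest if every profile field carries provenance alongside its confidence score. A sketch of one way to model this (the type and helper names are illustrative assumptions):

```typescript
type Provenance = 'explicit' | 'inferred';

interface ProfileField<T> {
  value: T;
  provenance: Provenance;
  confidence: number; // 1.0 for explicit data, below 1.0 for inferred
}

export function explicitField<T>(value: T): ProfileField<T> {
  return { value, provenance: 'explicit', confidence: 1.0 };
}

export function inferredField<T>(value: T, confidence: number): ProfileField<T> {
  return { value, provenance: 'inferred', confidence };
}

// A user correction promotes an inferred field to explicit.
export function correctField<T>(field: ProfileField<T>, value: T): ProfileField<T> {
  return explicitField(value);
}
```

With this shape, the `remember_get_profile` output can label each field, and a manual-update tool maps directly onto `correctField`.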
### Data Minimization

- Only infer data that is useful for discovery
- Don't store sensitive inferred data
- Respect the user's trust levels on source memories
- Allow users to delete profile data

---

## Trade-offs

### Advantages
✅ Solves cold start problem
✅ Provides agent context
✅ Enables discovery
✅ Leverages existing infrastructure
✅ Searchable via vector embeddings
✅ Automatically maintained

### Disadvantages
❌ Additional storage overhead
❌ Update latency on every operation
❌ Inference may be inaccurate
❌ Privacy concerns with automatic inference
❌ Complexity in maintaining consistency

### Mitigations
- Incremental updates keep latency low
- Full rebuilds ensure accuracy
- Confidence scores indicate uncertainty
- User preferences control inference
- Clear documentation about what's tracked

---

## Implementation Plan

### Phase 0: Prerequisites (MUST DO FIRST)
**Implement LLM Provider Abstraction** - See [`llm-provider-abstraction.md`](llm-provider-abstraction.md)

This is a **hard dependency** for core memory. The core memory feature relies heavily on LLM calls for:
- Profile inference (personality, communication style)
- Content generation (natural language summaries)
- Insight generation (patterns, focus areas)
- Snippet analysis (why memories are significant)

**Tasks:**
1. Implement the LLM provider interface (`src/llm/types.ts`)
2. Implement the provider factory (`src/llm/factory.ts`)
3. Implement at least one provider (Bedrock, OpenAI, or Anthropic)
4. Add LLM configuration to `config.ts`
5. Test the LLM provider with a simple completion
6. Add a background job service for async LLM calls

**Estimated Time**: 1-2 days

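The interface and factory from tasks 1-2 might look like the following sketch. The file paths match the task list above, but the exact shapes are assumptions; the real abstraction is specified in `llm-provider-abstraction.md`:

```typescript
// src/llm/types.ts (sketch)
export interface CompletionRequest {
  prompt: string;
  maxTokens?: number;
  temperature?: number;
}

export interface LLMProvider {
  name: string;
  complete(request: CompletionRequest): Promise<string>;
}

// src/llm/factory.ts (sketch) - registry-based so Bedrock, OpenAI, and
// Anthropic providers can be registered independently.
const providers = new Map<string, () => LLMProvider>();

export function registerProvider(name: string, create: () => LLMProvider): void {
  providers.set(name, create);
}

export function createProvider(name: string): LLMProvider {
  const create = providers.get(name);
  if (!create) throw new Error(`Unknown LLM provider: ${name}`);
  return create();
}
```

A registry keeps the core memory code dependent only on `LLMProvider`, so swapping Bedrock for OpenAI is a configuration change rather than a code change.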
---

### Phase 1: Core Infrastructure (Without LLM)
**Goal**: Get basic core memory working with template-based generation

1. Define the CoreMemory type in `src/types/memory.ts`
2. Implement `ensureCoreMemory()` - auto-create the core memory if it doesn't exist
3. Implement `updateCoreMemoryIncremental()` with **simple template-based** content generation
4. Add core memory updates to all memory/relationship operations
5. Implement basic rule-based profile inference (no LLM)
6. Test with real memory operations

**Estimated Time**: 2-3 days

**Note**: This phase uses NO LLM calls - everything is template-based and rule-based. Fast but basic.

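The template-based generation in step 3 can be plain string assembly over the maintained statistics. A sketch, where the `Stats` shape is an illustrative assumption:

```typescript
interface Stats {
  total_memories: number;
  by_type: Record<string, number>;
  top_tags: string[];
}

// Template-based content generation: deterministic, cheap, no LLM calls.
export function generateCoreMemoryContent(stats: Stats): string {
  const types = Object.entries(stats.by_type)
    .sort(([, a], [, b]) => b - a) // most common type first
    .map(([type, count]) => `${count} ${type}`)
    .join(', ');
  const tags = stats.top_tags.join(', ');
  return `This user has ${stats.total_memories} memories (${types}). ` +
    `Common tags: ${tags}.`;
}
```

The output is bland compared to LLM-generated prose (see "LLM Integration Benefits" below), but it is deterministic and fast enough to run on every incremental update.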
---

### Phase 2: LLM Integration
**Goal**: Add LLM-enhanced generation for better quality

**Prerequisites**: Phase 0 complete (LLM provider working)

1. Implement `generateCoreMemoryContentLLM()` - rich natural language summaries
2. Implement `inferProfileFromMemoriesLLM()` - personality and communication style
3. Implement `generateSnippetsAndInsights()` - memory analysis
4. Implement the background job service for async LLM processing
5. Add `rebuildCoreMemoryWithLLM()` for full rebuilds
6. Test LLM-enhanced vs template-based quality

**Estimated Time**: 2-3 days

---

### Phase 3: Background Processing
**Goal**: Make LLM calls non-blocking

1. Implement `BackgroundJobService` with Firestore persistence
2. Implement `scheduleFullRebuildBackground()` - fire and forget
3. Add job status tracking
4. Add job cleanup (remove old completed jobs)
5. Test that tool responses remain fast
6. Test job recovery after server restart

**Estimated Time**: 1-2 days

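The fire-and-forget scheduling in step 2 amounts to recording a job, starting the async work without awaiting it, and returning immediately. A sketch with an in-memory job table standing in for Firestore (job fields and the id scheme are illustrative assumptions):

```typescript
type JobStatus = 'pending' | 'running' | 'completed' | 'failed';

interface Job { id: string; status: JobStatus; }

// Illustrative in-memory job table; the real service persists to Firestore
// so jobs can be recovered after a server restart.
const jobs = new Map<string, Job>();

export function scheduleFullRebuildBackground(
  userId: string,
  rebuild: () => Promise<void>
): string {
  const id = `rebuild-${userId}-${Date.now()}`;
  jobs.set(id, { id, status: 'pending' });

  // Fire and forget: deliberately NOT awaited, so the caller's tool
  // response returns immediately while the rebuild runs in background.
  void (async () => {
    jobs.set(id, { id, status: 'running' });
    try {
      await rebuild();
      jobs.set(id, { id, status: 'completed' });
    } catch {
      jobs.set(id, { id, status: 'failed' });
    }
  })();

  return id;
}

export function getJobStatus(id: string): JobStatus | undefined {
  return jobs.get(id)?.status;
}
```

Returning the job id lets callers poll status (step 3), and the status transitions give the cleanup pass (step 4) a clear definition of "old completed jobs".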
---

### Phase 4: Discovery Features
**Goal**: Help users discover what memories they have

1. Generate sample queries based on memory content
2. Identify key topics (can use LLM for clustering)
3. Cluster memories by theme
4. Build discovery hints
5. Add to core memory content

**Estimated Time**: 1-2 days

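Sample queries (step 1) can be derived mechanically from tag frequencies before any LLM clustering is added. The thresholds and phrasing below are illustrative assumptions:

```typescript
interface TagCount { tag: string; count: number; }

// Turn the most frequent tags into suggested search queries.
export function generateSampleQueries(tags: TagCount[], limit = 3): string[] {
  return tags
    .filter((t) => t.count >= 2) // skip one-off tags
    .sort((a, b) => b.count - a.count)
    .slice(0, limit)
    .map((t) => `Show me my memories about ${t.tag}`);
}
```

These deterministic suggestions can later be replaced or augmented by LLM-clustered topics without changing the discovery hint format.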
---

### Phase 5: Tool Integration
**Goal**: Make core memory accessible to agents

1. Create the `remember_get_profile` tool
2. Update search/query tool descriptions to mention the profile
3. Add profile context to agent system prompts (optional)
4. Test the user experience with real queries
5. Document tool usage

**Estimated Time**: 1 day

---

### Phase 6: Optimization & Polish
**Goal**: Improve performance and quality

1. Add caching for frequent core memory access
2. Optimize update performance
3. Monitor storage impact
4. A/B test LLM vs template-based content generation
5. Tune LLM prompts for better inference
6. Add user preferences for opt-out

**Estimated Time**: 2-3 days

---

## Total Estimated Time

- **Phase 0 (LLM Provider)**: 1-2 days
- **Phase 1 (Core Infrastructure)**: 2-3 days
- **Phase 2 (LLM Integration)**: 2-3 days
- **Phase 3 (Background Processing)**: 1-2 days
- **Phase 4 (Discovery)**: 1-2 days
- **Phase 5 (Tool Integration)**: 1 day
- **Phase 6 (Optimization)**: 2-3 days

**Total**: 10-16 days (2-3 weeks)

---

## Recommended Approach

**Option 1: Full Implementation** (Recommended)
- Implement all phases in order
- Get the LLM provider working first
- Full-featured core memory with personality insights

**Option 2: MVP First**
- Implement Phase 1 only (no LLM)
- Basic core memory with template-based generation
- Add LLM later when the provider is ready
- Faster to market but less powerful

**Option 3: Parallel Development**
- One developer on the LLM provider (Phase 0)
- Another on core infrastructure (Phase 1)
- Merge when both are complete
- Fastest but requires coordination

---

## LLM Integration Benefits

### Why Use LLM for Core Memory?

**1. Better Content Generation**
- Template-based: "User works as a developer. Lives in Seattle. Interested in hiking, coding."
- LLM-based: "This user is a software developer based in Seattle who actively tracks their professional projects and personal interests. They frequently save memories about hiking trips in the Pacific Northwest and technical documentation about web development. Their memory collection shows a balance between work-related notes and outdoor activities."

**2. More Accurate Inference**
- Rule-based: Can only detect explicit patterns
- LLM-based: Can understand context, implicit information, and relationships

**3. Semantic Search Quality**
- Better content → better vector embeddings → better search results
- Natural language summaries are more searchable than structured lists

**4. Adaptive Learning**
- An LLM can identify emerging patterns that rules might miss
- Inference can adapt as the memory content evolves

### Trade-offs

**Advantages:**
- ✅ Higher quality content and inference
- ✅ Better semantic search results
- ✅ More natural language summaries
- ✅ Can understand context and nuance

**Disadvantages:**
- ❌ Slower (LLM API calls)
- ❌ More expensive (LLM costs)
- ❌ Requires LLM provider setup
- ❌ Potential for hallucination (mitigated by low temperature)

### Hybrid Approach (Recommended)

**Incremental Updates**: Use simple template-based generation
- Fast, cheap, deterministic
- Good enough for real-time updates
- No LLM dependency

**Full Rebuilds**: Use LLM-enhanced generation
- High quality, comprehensive
- Run periodically (every 100 updates or weekly)
- Worth the cost for better search quality

This gives us the best of both worlds: fast incremental updates with periodic high-quality LLM enhancement.

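The dispatch rule for the hybrid approach reduces to a counter check on each update (the weekly timer path is omitted, and the threshold handling is an illustrative assumption):

```typescript
const REBUILD_EVERY = 100; // full LLM rebuild every 100 incremental updates

interface CoreMemoryMeta { updates_since_rebuild: number; }

// Incremental updates use cheap template generation; every Nth update
// schedules a full LLM-enhanced rebuild in the background.
export function chooseGenerationPath(meta: CoreMemoryMeta): 'template' | 'llm_rebuild' {
  return meta.updates_since_rebuild + 1 >= REBUILD_EVERY ? 'llm_rebuild' : 'template';
}
```

Because the template path is deterministic, the core memory is always in a usable state between rebuilds; the LLM pass only upgrades quality.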
---

## What Core Memory Stores

The core memory is much more than just statistics. It captures:

### 1. **Factual Profile Data**
- Occupation, company, skills
- Location, timezone
- Interests, hobbies
- Key relationships

### 2. **Personality Insights** (LLM-inferred)
- Personality traits (analytical, creative, detail-oriented, etc.)
- Communication style (direct, warm, formal, casual)
- Values (efficiency, creativity, collaboration)
- Working preferences (structured vs flexible)
- Tone preferences (professional, friendly, technical)

### 3. **Memory Snippets**
- **Recent memories**: The last 5 memories with excerpts
- **Significant memories**: High-weight memories with explanations of why they're significant
- **Personality-revealing memories**: Memories that reveal aspects of the user's character

### 4. **Natural Language Insights** (LLM-generated)
- **Summary**: A 2-3 sentence overview of who the user is
- **Patterns**: Observed behavioral patterns (e.g., "Frequently saves technical docs on weekends")
- **Focus areas**: What the user concentrates on (e.g., "Career development and outdoor activities")
- **Communication notes**: How to best communicate with this user

### 5. **Discovery Metadata**
- Sample queries that would work well
- Key topics to explore
- Memory clusters by theme

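Taken together, categories 1-5 suggest a `structured_content` shape along the following lines. This is a sketch: only `statistics.total_memories` and `discovery.key_topics` are referenced elsewhere in this document; the remaining fields are assumptions drawn from the list above:

```typescript
// Sketch of the structured_content shape implied by categories 1-5 above.
interface StructuredContent {
  profile: { occupation?: string; location?: string; interests: string[] };  // 1
  personality: { traits: string[]; communication_style?: string };           // 2
  snippets: { significant: { id: string; excerpt: string; why: string }[] }; // 3
  insights: { summary: string; patterns: string[]; focus_areas: string[] };  // 4
  discovery: { sample_queries: string[]; key_topics: string[] };             // 5
  statistics: { total_memories: number };
}

// Starting value used by ensureCoreMemory-style auto-creation (illustrative).
export const emptyStructuredContent: StructuredContent = {
  profile: { interests: [] },
  personality: { traits: [] },
  snippets: { significant: [] },
  insights: { summary: '', patterns: [], focus_areas: [] },
  discovery: { sample_queries: [], key_topics: [] },
  statistics: { total_memories: 0 },
};
```

Keeping the five categories as separate top-level keys lets the summary format of `remember_get_profile` cherry-pick fields without reshaping the data.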
### Example Core Memory Content

```
This user is a software engineer based in Seattle who actively tracks both professional
projects and personal interests. They demonstrate strong analytical thinking in their
technical documentation while showing creativity in their project planning. Their
communication style is direct and concise, preferring structured information with clear
action items.

The user maintains a balanced memory collection between work-related notes (45%) and
personal interests (35%), with particular focus on web development, hiking in the Pacific
Northwest, and productivity systems. They value efficiency and organization, as evidenced
by their systematic approach to documenting learnings and maintaining detailed project notes.

Notable patterns include weekend activity in outdoor recreation memories and weekday focus
on technical problem-solving. The user shows consistent interest in continuous learning,
frequently saving articles and tutorials about new technologies.
```

This rich content enables:
- **Better semantic search**: Natural language queries match personality and context
- **Agent personalization**: Agents can adapt tone and style to user preferences
- **Discovery**: Users can explore "What kind of person am I based on my memories?"
- **Context-aware responses**: Agents understand the user's background and preferences

---

## Future Enhancements

1. **Smart Suggestions**: "You might want to remember..." based on patterns (LLM-powered)
2. **Memory Health**: "You haven't recorded any goals lately" (LLM analysis)
3. **Relationship Insights**: "You mention Alice in 15 memories" (LLM relationship extraction)
4. **Temporal Patterns**: "You're most active on weekends" (statistical analysis)
5. **Cross-User Insights**: Anonymized patterns across users (opt-in, LLM clustering)
6. **Export**: Download the profile as JSON/PDF
7. **Visualization**: Graph of memory types and topics over time
8. **Conversational Profile**: "Tell me about my work projects" → LLM queries core memory
9. **Personality Evolution**: Track how personality traits change over time
10. **Memory Recommendations**: "Based on your interests, you might like..."

---

## Status

**Current**: Design Specification
**Next Steps**:
1. Review the design with stakeholders
2. Create implementation tasks
3. Build Phase 1 (Core Infrastructure)
4. Test with real user data

**Recommendation**: Implement this feature. It significantly improves user experience and agent effectiveness while leveraging existing infrastructure.