@yamo/memory-mesh 2.1.3 → 2.3.0

package/README.md CHANGED
@@ -12,6 +12,8 @@ Built on the [YAMO Protocol](https://github.com/yamo-protocol) for transparent a
  - **Portable CLI**: Simple JSON-based interface for any agent or language.
  - **YAMO Skills Integration**: Includes yamo-super workflow system with automatic memory learning.
  - **Pattern Recognition**: Workflows automatically store and retrieve execution patterns for optimization.
+ - **LLM-Powered Reflections**: Generate insights from memories using configurable LLM providers.
+ - **YAMO Audit Trail**: Automatic emission of structured blocks for all memory operations.

  ## Installation

@@ -29,9 +31,6 @@ memory-mesh store "My important memory" '{"tag":"test"}'

  # Search memories
  memory-mesh search "query" 5
-
- # Scrub content only
- scrubber scrub "Raw text content"
  ```

  ### Node.js API
@@ -44,6 +43,63 @@ await mesh.add('Content', { meta: 'data' });
  const results = await mesh.search('query');
  ```

+ ### Enhanced Reflections with LLM
+
+ MemoryMesh supports LLM-powered reflection generation that synthesizes insights from stored memories:
+
+ ```javascript
+ import { MemoryMesh } from '@yamo/memory-mesh';
+
+ // Enable LLM integration (requires API key or local model)
+ const mesh = new MemoryMesh({
+   enableLLM: true,
+   llmProvider: 'openai', // or 'anthropic', 'ollama'
+   llmApiKey: process.env.OPENAI_API_KEY,
+   llmModel: 'gpt-4o-mini'
+ });
+
+ // Store some memories
+ await mesh.add('Bug: type mismatch in keyword search', { type: 'bug' });
+ await mesh.add('Bug: missing content field', { type: 'bug' });
+
+ // Generate reflection (automatically stores result to memory)
+ const reflection = await mesh.reflect({ topic: 'bugs', lookback: 10 });
+
+ console.log(reflection.reflection);
+ // Output: "Synthesized from 2 memories: Bug: type mismatch..., Bug: missing content..."
+
+ console.log(reflection.confidence); // 0.85
+ console.log(reflection.yamoBlock); // YAMO audit trail
+ ```
+
+ **CLI Usage:**
+
+ ```bash
+ # With LLM (default)
+ memory-mesh reflect '{"topic": "bugs", "limit": 10}'
+
+ # Without LLM (prompt-only for external LLM)
+ memory-mesh reflect '{"topic": "bugs", "llm": false}'
+ ```
+
+ ### YAMO Audit Trail
+
+ MemoryMesh automatically emits YAMO blocks for all operations when enabled:
+
+ ```javascript
+ const mesh = new MemoryMesh({ enableYamo: true });
+
+ // All operations now emit YAMO blocks
+ await mesh.add('Memory content', { type: 'event' }); // emits 'retain' block
+ await mesh.search('query'); // emits 'recall' block
+ await mesh.reflect({ topic: 'test' }); // emits 'reflect' block
+
+ // Query YAMO log
+ const yamoLog = await mesh.getYamoLog({ operationType: 'reflect', limit: 10 });
+ console.log(yamoLog);
+ // [{ id, agentId, operationType, yamoText, timestamp, ... }]
+ ```
+
  ## Using in a Project

  To use MemoryMesh with your Claude Code skills (like `yamo-super`) in a new project:
@@ -75,9 +131,6 @@ Your skills are now available in Claude Code with automatic memory integration:
  # Use yamo-super workflow system
  # Automatically retrieves similar past workflows and stores execution patterns
  claude /yamo-super
-
- # Use scrubber skill for content sanitization
- claude /scrubber content="raw text"
  ```

  **Memory Integration Features:**
@@ -87,7 +140,7 @@ claude /scrubber content="raw text"
  - **Review Phase**: Stores code review outcomes and quality metrics
  - **Complete Workflow**: Stores full execution pattern for future optimization

- YAMO agents will automatically find tools in `tools/memory_mesh.js` and `tools/scrubber.js`.
+ YAMO agents will automatically find tools in `tools/memory_mesh.js`.

  ## Docker

@@ -127,3 +180,66 @@ Memory Mesh implements YAMO v2.1.0 compliance with:
  - **Development Guide**: [CLAUDE.md](CLAUDE.md) - Guide for Claude Code development
  - **Marketplace**: [.claude-plugin/marketplace.json](.claude-plugin/marketplace.json) - Plugin metadata

+ ## Configuration
+
+ ### LLM Provider Configuration
+
+ ```bash
+ # Required for LLM-powered reflections
+ LLM_PROVIDER=openai      # Provider: 'openai', 'anthropic', 'ollama'
+ LLM_API_KEY=sk-...       # API key for OpenAI/Anthropic
+ LLM_MODEL=gpt-4o-mini    # Model name
+ LLM_BASE_URL=https://... # Optional: Custom API base URL
+ ```
+
+ **Supported Providers:**
+ - **OpenAI**: GPT-4, GPT-4o-mini, etc.
+ - **Anthropic**: Claude 3.5 Haiku, Sonnet, Opus
+ - **Ollama**: Local models (llama3.2, mistral, etc.)
+
+ ### YAMO Configuration
+
+ ```bash
+ # Optional YAMO settings
+ ENABLE_YAMO=true # Enable YAMO block emission (default: true)
+ YAMO_DEBUG=true  # Enable verbose YAMO logging
+ ```
+
+ ### LanceDB Configuration
+
+ ```bash
+ # Vector database settings
+ LANCEDB_URI=./runtime/data/lancedb
+ LANCEDB_MEMORY_TABLE=memory_entries
+ ```
+
+ ### Embedding Configuration
+
+ ```bash
+ # Embedding model settings
+ EMBEDDING_MODEL_TYPE=local # 'local', 'openai', 'cohere', 'ollama'
+ EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
+ EMBEDDING_DIMENSION=384
+ ```
+
+ ### Example .env File
+
+ ```bash
+ # LLM for reflections
+ LLM_PROVIDER=openai
+ LLM_API_KEY=sk-your-key-here
+ LLM_MODEL=gpt-4o-mini
+
+ # YAMO audit
+ ENABLE_YAMO=true
+ YAMO_DEBUG=false
+
+ # Vector DB
+ LANCEDB_URI=./data/lancedb
+
+ # Embeddings (local default)
+ EMBEDDING_MODEL_TYPE=local
+ EMBEDDING_MODEL_NAME=Xenova/all-MiniLM-L6-v2
+ ```
+
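A minimal sketch of the local-only setup described in the README's Configuration section above, using the Ollama provider so no API key is needed. The `enableLLM`, `enableYamo`, `llmProvider`, and `llmModel` options follow the README's own examples; `llmBaseUrl` is an assumed option name mirroring the `LLM_BASE_URL` environment variable, and the defaults (`llama3.2` at `http://localhost:11434`) are taken from the bundled LLM client.

```javascript
import { MemoryMesh } from '@yamo/memory-mesh';

// Local-only reflections via Ollama: no API key required.
// llmBaseUrl is an assumed option name; the documented env var is LLM_BASE_URL.
const mesh = new MemoryMesh({
  enableLLM: true,
  enableYamo: true,
  llmProvider: 'ollama',
  llmModel: 'llama3.2',
  llmBaseUrl: 'http://localhost:11434'
});

await mesh.add('Deploy failed: missing LANCEDB_URI', { type: 'incident' });
const reflection = await mesh.reflect({ topic: 'deployments', lookback: 5 });
console.log(reflection.reflection, reflection.confidence);
```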
package/bin/setup.js CHANGED
@@ -133,8 +133,7 @@ async function installTools() {
  }

  const toolFiles = [
-   { src: 'memory_mesh.js', name: 'Memory Mesh CLI' },
-   { src: 'scrubber.js', name: 'Scrubber CLI' }
+   { src: 'memory_mesh.js', name: 'Memory Mesh CLI' }
  ];

  let installed = 0;
@@ -169,7 +168,6 @@ function showUsage() {

  log('\n📚 Usage:', 'bright');
  log(' • Use /yamo-super in Claude or Gemini for workflow automation');
- log(' • Use /scrubber skill for content sanitization');
  log(' • Call tools/memory_mesh.js for memory operations');

  log('\n🔗 Learn more:', 'bright');
package/lib/index.js CHANGED
@@ -2,17 +2,5 @@
  export * from './lancedb/index.js';
  export * from './embeddings/index.js';
  export * from './search/index.js';
- export * from './privacy/index.js';
  export * from './memory/index.js';
  export * from './scrubber/index.js';
- export {
-   HandoffValidator,
-   Spinner,
-   ProgressBar,
-   MultiSpinner,
-   StreamingClient,
-   StreamingLLM,
-   sanitizeErrorForLogging,
-   withSanitizedErrors
- } from './utils/index.js';
- export * from './adapters/index.js';
@@ -0,0 +1,391 @@
+ /**
+  * LLM Client - Multi-provider LLM API client for reflection generation
+  *
+  * Supports:
+  * - OpenAI (GPT-4, GPT-4o-mini, etc.)
+  * - Anthropic (Claude)
+  * - Ollama (local models)
+  * - Graceful fallback when LLM unavailable
+  */
+
+ /**
+  * LLMClient provides unified interface for calling various LLM providers
+  * to generate reflections from memory contexts.
+  */
+ export class LLMClient {
+   /**
+    * Create a new LLMClient instance
+    *
+    * @param {Object} [config={}] - Configuration options
+    * @param {string} [config.provider='openai'] - LLM provider ('openai', 'anthropic', 'ollama')
+    * @param {string} [config.apiKey] - API key (defaults to env var)
+    * @param {string} [config.model] - Model name
+    * @param {string} [config.baseUrl] - Base URL for API (optional)
+    * @param {number} [config.timeout=30000] - Request timeout in ms
+    * @param {number} [config.maxRetries=2] - Max retry attempts
+    */
+   constructor(config = {}) {
+     this.provider = config.provider || process.env.LLM_PROVIDER || 'openai';
+     this.apiKey = config.apiKey || process.env.LLM_API_KEY || '';
+     this.model = config.model || process.env.LLM_MODEL || this._getDefaultModel();
+     this.baseUrl = config.baseUrl || process.env.LLM_BASE_URL || this._getDefaultBaseUrl();
+     this.timeout = config.timeout || 30000;
+     this.maxRetries = config.maxRetries || 2;
+
+     // Statistics
+     this.stats = {
+       totalRequests: 0,
+       successfulRequests: 0,
+       failedRequests: 0,
+       fallbackCount: 0
+     };
+   }
+
+   /**
+    * Get default model for provider
+    * @private
+    * @returns {string} Default model name
+    */
+   _getDefaultModel() {
+     const defaults = {
+       openai: 'gpt-4o-mini',
+       anthropic: 'claude-3-5-haiku-20241022',
+       ollama: 'llama3.2'
+     };
+     return defaults[this.provider] || 'gpt-4o-mini';
+   }
+
+   /**
+    * Get default base URL for provider
+    * @private
+    * @returns {string} Default base URL
+    */
+   _getDefaultBaseUrl() {
+     const defaults = {
+       openai: 'https://api.openai.com/v1',
+       anthropic: 'https://api.anthropic.com/v1',
+       ollama: 'http://localhost:11434'
+     };
+     return defaults[this.provider] || 'https://api.openai.com/v1';
+   }
+
+   /**
+    * Generate reflection from memories
+    * Main entry point for reflection generation
+    *
+    * @param {string} prompt - The reflection prompt
+    * @param {Array} memories - Context memories
+    * @returns {Promise<Object>} { reflection, confidence }
+    */
+   async reflect(prompt, memories) {
+     this.stats.totalRequests++;
+
+     if (!memories || memories.length === 0) {
+       return this._fallback('No memories provided');
+     }
+
+     const systemPrompt = `You are a reflective AI agent. Review the provided memories and synthesize a high-level insight, belief, or observation.
+ Respond ONLY in JSON format with exactly these keys:
+ {
+ "reflection": "a concise insight or observation derived from the memories",
+ "confidence": 0.0 to 1.0
+ }
+
+ Keep the reflection brief (1-2 sentences) and actionable.`;
+
+     const userContent = this._formatMemoriesForLLM(prompt, memories);
+
+     try {
+       const response = await this._callWithRetry(systemPrompt, userContent);
+       const parsed = JSON.parse(response);
+
+       // Validate response structure
+       if (!parsed.reflection || typeof parsed.confidence !== 'number') {
+         throw new Error('Invalid LLM response format');
+       }
+
+       // Clamp confidence to valid range
+       parsed.confidence = Math.max(0, Math.min(1, parsed.confidence));
+
+       this.stats.successfulRequests++;
+       return parsed;
+
+     } catch (error) {
+       this.stats.failedRequests++;
+       const errorMessage = error instanceof Error ? error.message : String(error);
+       console.warn(`[LLMClient] LLM call failed: ${errorMessage}`);
+       return this._fallback('LLM error', memories);
+     }
+   }
+
+   /**
+    * Format memories for LLM consumption
+    * @private
+    * @param {string} prompt - User prompt
+    * @param {Array} memories - Memory array
+    * @returns {string} Formatted content
+    */
+   _formatMemoriesForLLM(prompt, memories) {
+     const memoryList = memories
+       .map((m, i) => `${i + 1}. ${m.content}`)
+       .join('\n');
+
+     return `Prompt: ${prompt}\n\nMemories:\n${memoryList}\n\nBased on these memories, provide a brief reflective insight.`;
+   }
+
+   /**
+    * Call LLM with retry logic
+    * @private
+    * @param {string} systemPrompt - System prompt
+    * @param {string} userContent - User content
+    * @returns {Promise<string>} LLM response text
+    */
+   async _callWithRetry(systemPrompt, userContent) {
+     let lastError = null;
+
+     for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
+       try {
+         return await this._callLLM(systemPrompt, userContent);
+       } catch (error) {
+         lastError = error;
+         if (attempt < this.maxRetries) {
+           const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
+           await this._sleep(delay);
+         }
+       }
+     }
+
+     throw lastError;
+   }
+
+   /**
+    * Call LLM based on provider
+    * @private
+    * @param {string} systemPrompt - System prompt
+    * @param {string} userContent - User content
+    * @returns {Promise<string>} Response text
+    */
+   async _callLLM(systemPrompt, userContent) {
+     switch (this.provider) {
+       case 'openai':
+         return await this._callOpenAI(systemPrompt, userContent);
+       case 'anthropic':
+         return await this._callAnthropic(systemPrompt, userContent);
+       case 'ollama':
+         return await this._callOllama(systemPrompt, userContent);
+       default:
+         throw new Error(`Unsupported provider: ${this.provider}`);
+     }
+   }
+
+   /**
+    * Call OpenAI API
+    * @private
+    */
+   async _callOpenAI(systemPrompt, userContent) {
+     if (!this.apiKey) {
+       throw new Error('OpenAI API key not configured');
+     }
+
+     const controller = new AbortController();
+     const timeoutId = setTimeout(() => controller.abort(), this.timeout);
+
+     try {
+       const response = await fetch(`${this.baseUrl}/chat/completions`, {
+         method: 'POST',
+         headers: {
+           'Content-Type': 'application/json',
+           'Authorization': `Bearer ${this.apiKey}`
+         },
+         body: JSON.stringify({
+           model: this.model,
+           messages: [
+             { role: 'system', content: systemPrompt },
+             { role: 'user', content: userContent }
+           ],
+           temperature: 0.7,
+           max_tokens: 500
+         }),
+         signal: controller.signal
+       });
+
+       clearTimeout(timeoutId);
+
+       if (!response.ok) {
+         const error = await response.text();
+         throw new Error(`OpenAI API error: ${response.status} - ${error}`);
+       }
+
+       const data = await response.json();
+       return data.choices[0].message.content;
+
+     } catch (error) {
+       clearTimeout(timeoutId);
+       if (error instanceof Error && error.name === 'AbortError') {
+         throw new Error('Request timeout');
+       }
+       throw error;
+     }
+   }
+
+   /**
+    * Call Anthropic (Claude) API
+    * @private
+    */
+   async _callAnthropic(systemPrompt, userContent) {
+     if (!this.apiKey) {
+       throw new Error('Anthropic API key not configured');
+     }
+
+     const controller = new AbortController();
+     const timeoutId = setTimeout(() => controller.abort(), this.timeout);
+
+     try {
+       const response = await fetch(`${this.baseUrl}/messages`, {
+         method: 'POST',
+         headers: {
+           'Content-Type': 'application/json',
+           'x-api-key': this.apiKey,
+           'anthropic-version': '2023-06-01'
+         },
+         body: JSON.stringify({
+           model: this.model,
+           max_tokens: 500,
+           system: systemPrompt,
+           messages: [
+             { role: 'user', content: userContent }
+           ]
+         }),
+         signal: controller.signal
+       });
+
+       clearTimeout(timeoutId);
+
+       if (!response.ok) {
+         const error = await response.text();
+         throw new Error(`Anthropic API error: ${response.status} - ${error}`);
+       }
+
+       const data = await response.json();
+       return data.content[0].text;
+
+     } catch (error) {
+       clearTimeout(timeoutId);
+       if (error instanceof Error && error.name === 'AbortError') {
+         throw new Error('Request timeout');
+       }
+       throw error;
+     }
+   }
+
+   /**
+    * Call Ollama (local) API
+    * @private
+    */
+   async _callOllama(systemPrompt, userContent) {
+     const controller = new AbortController();
+     const timeoutId = setTimeout(() => controller.abort(), this.timeout);
+
+     try {
+       const response = await fetch(`${this.baseUrl}/api/chat`, {
+         method: 'POST',
+         headers: {
+           'Content-Type': 'application/json'
+         },
+         body: JSON.stringify({
+           model: this.model,
+           messages: [
+             { role: 'system', content: systemPrompt },
+             { role: 'user', content: userContent }
+           ],
+           stream: false
+         }),
+         signal: controller.signal
+       });
+
+       clearTimeout(timeoutId);
+
+       if (!response.ok) {
+         const error = await response.text();
+         throw new Error(`Ollama API error: ${response.status} - ${error}`);
+       }
+
+       const data = await response.json();
+       return data.message.content;
+
+     } catch (error) {
+       clearTimeout(timeoutId);
+       if (error instanceof Error && error.name === 'AbortError') {
+         throw new Error('Request timeout');
+       }
+       throw error;
+     }
+   }
+
+   /**
+    * Fallback when LLM fails
+    * @private
+    * @param {string} reason - Fallback reason
+    * @param {Array} [memories=[]] - Memory array
+    * @returns {Object} Fallback result
+    */
+   _fallback(reason, memories = []) {
+     this.stats.fallbackCount++;
+
+     if (memories && memories.length > 0) {
+       // Simple aggregation fallback
+       const contents = memories.map(m => m.content);
+       const combined = contents.join('; ');
+       const preview = combined.length > 200
+         ? combined.substring(0, 200) + '...'
+         : combined;
+
+       return {
+         reflection: `Aggregated from ${memories.length} memories: ${preview}`,
+         confidence: 0.5
+       };
+     }
+
+     return {
+       reflection: `Reflection generation unavailable: ${reason}`,
+       confidence: 0.3
+     };
+   }
+
+   /**
+    * Sleep utility
+    * @private
+    * @param {number} ms - Milliseconds to sleep
+    * @returns {Promise<void>}
+    */
+   _sleep(ms) {
+     return new Promise(resolve => setTimeout(resolve, ms));
+   }
+
+   /**
+    * Get client statistics
+    * @returns {Object} Statistics
+    */
+   getStats() {
+     return {
+       ...this.stats,
+       successRate: this.stats.totalRequests > 0
+         ? (this.stats.successfulRequests / this.stats.totalRequests).toFixed(2)
+         : '0.00'
+     };
+   }
+
+   /**
+    * Reset statistics
+    */
+   resetStats() {
+     this.stats = {
+       totalRequests: 0,
+       successfulRequests: 0,
+       failedRequests: 0,
+       fallbackCount: 0
+     };
+   }
+ }
+
+ export default LLMClient;
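The client above retries with exponential backoff and falls back to a simple aggregation (`confidence: 0.5`) when the provider is unreachable or returns malformed JSON. A minimal usage sketch, assuming the new module lives at `lib/llm/client.js` (the diff does not name the file) and runs on Node 18+, where `fetch` is global:

```javascript
import { LLMClient } from './lib/llm/client.js'; // assumed path; not stated in the diff

const client = new LLMClient({ provider: 'ollama', model: 'llama3.2' });

const memories = [
  { content: 'Bug: type mismatch in keyword search' },
  { content: 'Bug: missing content field' }
];

// With Ollama running, this returns the model's { reflection, confidence };
// otherwise the _fallback path returns an aggregated summary with confidence 0.5.
const result = await client.reflect('What do recent bugs have in common?', memories);
console.log(result.reflection, result.confidence);
console.log(client.getStats()); // { totalRequests, successfulRequests, ..., successRate }
```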
@@ -0,0 +1,10 @@
+ /**
+  * LLM Module - LLM client support for yamo-memory-mesh
+  * Exports multi-provider LLM client for reflection generation
+  */
+
+ export { LLMClient } from './client.js';
+
+ export default {
+   LLMClient: (await import('./client.js')).LLMClient
+ };
@@ -1,3 +1 @@
- export { default as VectorMemory } from './vector-memory.js';
  export { default as MemoryMesh } from './memory-mesh.js';
- export { default as MigrateMemory } from './migrate-memory.js';
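Because `lib/index.js` still re-exports `./memory/index.js`, the package-level import shown in the README keeps resolving after this change; only the exports dropped above (`VectorMemory`, `MigrateMemory`, and the `privacy`/`utils`/`adapters` re-exports) leave the public surface:

```javascript
// Still resolves in 2.3.0 (per the README's Node.js API section):
import { MemoryMesh } from '@yamo/memory-mesh';

// No longer exported after this diff; callers would need to remove or replace these:
// import { VectorMemory, MigrateMemory } from '@yamo/memory-mesh';
```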