rust-kgdb 0.5.12 → 0.6.0

This diff shows the content changes between publicly released versions of this package as they appear in their public registry, and is provided for informational purposes only.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,259 @@
2
2
 
3
3
  All notable changes to the rust-kgdb TypeScript SDK will be documented in this file.
4
4
 
5
+ ## [0.6.0] - 2025-12-15
6
+
7
+ ### Memory Hypergraph - AI Agents That Remember
8
+
9
+ This release introduces the **Memory Hypergraph** architecture, solving the fundamental problem that AI agents forget everything between sessions.
10
+
11
+ #### The Problem We Solved
12
+
13
+ Every enterprise AI deployment hits the same wall:
14
+ - **No Memory**: Each session starts from zero - expensive recomputation, no learning
15
+ - **No Context Window Management**: Hit token limits? Lose critical history
16
+ - **No Idempotent Responses**: Same question, different answer - compliance nightmare
17
+ - **No Provenance Chain**: "Why did the agent flag this claim?" - silence
18
+
19
+ LangChain's solution: Vector databases. Store conversations, retrieve via similarity.
20
+
21
+ **The problem**: Similarity isn't memory. When your underwriter asks *"What did we decide about claims from Provider X?"*, you need temporal awareness, semantic edges, epistemological stratification, and proof chains.
22
+
23
+ #### The Solution: Memory Hypergraph
24
+
25
+ Memory is stored in the **same** quad store as your knowledge graph, with hyper-edges connecting episodes to KG entities.
26
+
27
+ ```
28
+ ┌─────────────────────────────────────────────────────────────────┐
29
+ │ AGENT MEMORY LAYER (am: graph) │
30
+ │ Episode:001 ──→ Episode:002 ──→ Episode:003 │
31
+ │ (Fraud ring) (Denied claim) (Investigation) │
32
+ └─────────┬───────────────┬───────────────┬───────────────────────┘
33
+ │ HyperEdge │ │
34
+ ▼ ▼ ▼
35
+ ┌─────────────────────────────────────────────────────────────────┐
36
+ │ KNOWLEDGE GRAPH LAYER (domain graph) │
37
+ │ Provider:P001 ────▶ Claim:C123 ◀──── Claimant:C001 │
38
+ │ SAME QUAD STORE - Single SPARQL query traverses BOTH! │
39
+ └─────────────────────────────────────────────────────────────────┘
40
+ ```
41
+
42
+ #### Key Features
43
+
44
+ **Temporal Scoring Formula**
45
+ ```
46
+ Score = α × Recency + β × Relevance + γ × Importance
47
+
48
+ where:
49
+ Recency = 0.995^hours (≈11% decay per day)
50
+ Relevance = cosine_similarity(query, episode)
51
+ Importance = log10(access_count + 1) / log10(max + 1)
52
+
53
+ Default weights: α=0.3, β=0.5, γ=0.2
54
+ ```
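+
+ A minimal JavaScript sketch of this scoring. The episode fields (`hoursSinceAccess`, `embedding`, `accessCount`) and the `cosineSimilarity` helper are illustrative assumptions, not the SDK's internal API:
+
+ ```javascript
+ // Hedged sketch: recomputes the documented formula with assumed episode fields.
+ function retrievalScore(episode, queryVector, maxAccessCount,
+                         weights = { recency: 0.3, relevance: 0.5, importance: 0.2 }) {
+   const recency = Math.pow(0.995, episode.hoursSinceAccess)            // 0.995^hours
+   const relevance = cosineSimilarity(queryVector, episode.embedding)   // 0..1
+   const importance = Math.log10(episode.accessCount + 1) / Math.log10(maxAccessCount + 1)
+   return weights.recency * recency + weights.relevance * relevance + weights.importance * importance
+ }
+
+ function cosineSimilarity(a, b) {
+   let dot = 0, normA = 0, normB = 0
+   for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; normA += a[i] ** 2; normB += b[i] ** 2 }
+   return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1)
+ }
+ ```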
55
+
56
+ **Rolling Context Window**
57
+ Progressive search expands the time range until it finds sufficient context:
58
+ - Pass 1: Last 1 hour → 0 episodes → expand
59
+ - Pass 2: Last 24 hours → 1 episode → expand
60
+ - Pass 3: Last 7 days → 3 episodes → within token budget ✓
61
+
62
+ **Idempotent Responses via Query Cache**
63
+ Same question = Same answer. A cryptographic hash of the query keys the response cache, so repeated questions return identical, auditable answers.
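+
+ A minimal sketch of what such a hash-keyed cache can look like (illustrative only; the SDK's cache internals are not shown here), assuming an `agent.call(prompt)` method as used in the README examples:
+
+ ```javascript
+ // Hedged sketch: cache keyed by a SHA-256 hash of the normalised prompt.
+ const { createHash } = require('node:crypto')
+ const cache = new Map()
+
+ async function idempotentCall(agent, prompt) {
+   const key = createHash('sha256').update(prompt.trim()).digest('hex')
+   if (!cache.has(key)) {
+     cache.set(key, { hash: key, answer: await agent.call(prompt) })   // compute once
+   }
+   return cache.get(key)                                               // same question, same answer
+ }
+ ```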
64
+
65
+ **SPARQL Across Memory + KG**
66
+ Single query traverses both memory graph and knowledge graph:
67
+ ```sparql
68
+ PREFIX am: <https://gonnect.ai/ontology/agent-memory#>
69
+ PREFIX ins: <http://insurance.org/>
70
+
71
+ SELECT ?episode ?finding ?claimAmount WHERE {
72
+ # Search memory graph
73
+ GRAPH <https://gonnect.ai/memory/> {
74
+ ?episode a am:Episode ;
75
+ am:prompt ?finding .
76
+ ?edge am:source ?episode ;
77
+ am:target ?provider .
78
+ }
79
+ # Join with knowledge graph
80
+ ?claim ins:provider ?provider ;
81
+ ins:amount ?claimAmount .
82
+ }
83
+ ```
84
+
85
+ #### New Example
86
+
87
+ **fraud-memory-hypergraph.js** - Complete demo showing:
88
+ 1. Knowledge graph initialization with fraud ontology
89
+ 2. Memory Hypergraph initialization with temporal scoring
90
+ 3. First investigation - Fraud ring detection (stores episode)
91
+ 4. Second investigation - Underwriting decision (recalls context)
92
+ 5. Memory recall - "What did we find last week?"
93
+ 6. Idempotent response demonstration
94
+ 7. OpenAI integration with memory context
95
+ 8. SPARQL across memory + KG
96
+
97
+ #### Updated README.md
98
+
99
+ - Added "The Deeper Problem: AI Agents Forget" section
100
+ - Added "Memory Hypergraph: How AI Agents Remember" with architecture diagram
101
+ - Added "Rolling Context Window" strategy documentation
102
+ - Added "Idempotent Responses via Query Cache" section
103
+ - Added fraud/underwriting examples showing memory → KG connection
104
+
105
+ #### Test Results
106
+
107
+ All tests passing:
108
+ - npm test: 42/42 ✅
109
+ - Memory Hypergraph demo: ✅
110
+ - Fraud detection with memory: ✅
111
+ - Underwriting with memory context: ✅
112
+
113
+ ## [0.5.13] - 2025-12-15
114
+
115
+ ### Memory Layer - GraphDB-Powered Agent Memory
116
+
117
+ This release introduces a comprehensive Memory Layer for the HyperMind Agentic Framework, enabling agents to maintain state, recall past executions, and access knowledge graphs with weighted retrieval scoring.
118
+
119
+ #### New Memory Layer Components
120
+
121
+ **AgentState Enum**
122
+ - `CREATED` - Initial state after instantiation
123
+ - `READY` - Configured and ready to execute
124
+ - `RUNNING` - Currently executing a task
125
+ - `PAUSED` - Execution temporarily suspended
126
+ - `COMPLETED` - Successfully finished
127
+ - `FAILED` - Execution failed with error
128
+
129
+ **AgentRuntime Class**
130
+ - UUID-based agent identity (`agent_{uuid}`)
131
+ - State machine with valid transitions (CREATED→READY→RUNNING→COMPLETED/FAILED)
132
+ - Execution tracking with `startExecution(prompt)` and `completeExecution()`
133
+ - Memory management integration
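+
+ The lifecycle can be pictured as a transition table. This is only an illustration of the documented path (CREATED→READY→RUNNING→COMPLETED/FAILED); the PAUSED transitions shown are an assumption, and the SDK's own validation may differ:
+
+ ```javascript
+ // Illustrative transition table, not the SDK's internal implementation.
+ const VALID_TRANSITIONS = {
+   CREATED:   ['READY'],
+   READY:     ['RUNNING'],
+   RUNNING:   ['PAUSED', 'COMPLETED', 'FAILED'],
+   PAUSED:    ['RUNNING', 'FAILED'],              // assumption: resume or fail from PAUSED
+   COMPLETED: [],
+   FAILED:    []
+ }
+
+ const canTransition = (from, to) => (VALID_TRANSITIONS[from] || []).includes(to)
+
+ canTransition('CREATED', 'READY')    // true
+ canTransition('CREATED', 'RUNNING')  // false: must pass through READY first
+ ```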
134
+
135
+ **WorkingMemory Class**
136
+ - In-memory context storage with configurable capacity
137
+ - Variable bindings for agent reasoning
138
+ - Context item management with timestamps and IDs
139
+ - Automatic eviction of oldest items when over capacity
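+
+ A sketch of the capacity/eviction behaviour described above (illustrative only; the item shape and method names are assumptions, not the SDK's API):
+
+ ```javascript
+ // Hedged sketch: bounded context store that evicts the oldest item when over capacity.
+ class BoundedContext {
+   constructor(capacity = 100) {
+     this.capacity = capacity
+     this.items = []        // oldest first
+     this.nextId = 1
+   }
+   add(content) {
+     const item = { id: `ctx_${this.nextId++}`, content, timestamp: Date.now() }
+     this.items.push(item)
+     while (this.items.length > this.capacity) this.items.shift()   // evict oldest
+     return item.id
+   }
+ }
+ ```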
140
+
141
+ **EpisodicMemory Class**
142
+ - Execution history stored as RDF triples in GraphDB
143
+ - Query episodes by time range, tool, or success status
144
+ - Full execution metadata: tool, prompt, output, duration, success
145
+ - Episode limit configuration
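+
+ Because episodes live in the graph, they can be filtered with SPARQL. The query below is a sketch given a `GraphDB` instance `db`: `am:Episode` and `am:prompt` appear in the examples above, while `am:tool` and `am:success` are assumed predicate names used only for illustration:
+
+ ```javascript
+ // Hedged sketch: list successful SPARQL-tool episodes from the memory graph.
+ const episodes = db.querySelect(`
+   PREFIX am: <https://gonnect.ai/ontology/agent-memory#>
+   SELECT ?episode ?prompt WHERE {
+     GRAPH <https://gonnect.ai/memory/> {
+       ?episode a am:Episode ;
+                am:prompt ?prompt ;
+                am:tool "kg.sparql.query" ;    # assumed predicate
+                am:success true .              # assumed predicate
+     }
+   }
+ `)
+ ```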
146
+
147
+ **LongTermMemory Class**
148
+ - Read-only access to source-of-truth knowledge graph
149
+ - SPARQL query interface
150
+ - Explicit separation from agent memory graph
151
+
152
+ **MemoryManager Class**
153
+ - Unified retrieval across all memory types
154
+ - Weighted scoring: `Score = α × Recency + β × Relevance + γ × Importance`
155
+ - Default weights: recency=0.3, relevance=0.5, importance=0.2
156
+ - Configurable weighting for different use cases
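+
+ Typical usage, mirroring the fraud-detection example updated in this release (the `runtime` and `db` objects are the ones constructed in the usage example below; the literal values are illustrative):
+
+ ```javascript
+ const memory = new MemoryManager(runtime, db, {
+   recencyWeight: 0.3,
+   relevanceWeight: 0.5,
+   importanceWeight: 0.2
+ })
+
+ // Record a tool execution as an episode (as in fraud-detection-agent.js)
+ await memory.storeExecution({
+   id: 'exec-001-sparql',
+   prompt: 'SELECT high-risk claimants',
+   tool: 'kg.sparql.query',
+   output: { count: 3 },
+   success: true,
+   durationMs: 50
+ })
+
+ // Keep intermediate findings in working memory and read the stats back
+ memory.addToWorking({ type: 'findings', highRiskCount: 3 })
+ const stats = memory.getStats()   // e.g. stats.working.contextSize, stats.episodic.episodeCount
+ ```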
157
+
158
+ #### New Governance Layer
159
+
160
+ **GovernancePolicy Class**
161
+ - Capability-based access control: `ReadKG`, `WriteKG`, `ExecuteTool`, `UseEmbeddings`
162
+ - Resource limits: `maxMemoryMB`, `maxExecutionTimeMs`, `maxToolCalls`
163
+ - Policy validation and enforcement
164
+
165
+ **GovernanceEngine Class**
166
+ - Policy enforcement with capability checking
167
+ - Complete audit trail via `auditLog` array
168
+ - Tool execution gating
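+
+ A conceptual sketch of capability gating. Only `new GovernanceEngine(policy)` and the `auditLog` array are shown elsewhere in this release; the gating helper and the `capabilities.has(...)` check are assumptions made for illustration:
+
+ ```javascript
+ const policy = new GovernancePolicy({
+   capabilities: ['ReadKG', 'ExecuteTool'],
+   maxToolCalls: 100
+ })
+ const governance = new GovernanceEngine(policy)
+
+ // Hedged sketch: gate a tool call on a capability before running it.
+ async function gatedToolCall(capability, run) {
+   if (!policy.capabilities.has(capability)) {        // assumes capabilities is Set-like
+     throw new Error(`Capability not granted: ${capability}`)
+   }
+   const result = await run()
+   console.log(`audit entries so far: ${governance.auditLog.length}`)
+   return result
+ }
+ ```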
169
+
170
+ #### New Scope Layer
171
+
172
+ **AgentScope Class**
173
+ - Namespace isolation between agents
174
+ - Resource tracking with `trackUsage(type, amount)`
175
+ - Remaining resource calculation via `getRemainingResources()`
176
+ - Memory and tool call budget management
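+
+ Resource tracking in practice, following the usage example below and the fraud-detection example (the remaining values shown are illustrative):
+
+ ```javascript
+ const scope = new AgentScope({
+   namespace: 'fraud-detection',
+   resources: { maxToolCalls: 50, maxMemoryMB: 256 }
+ })
+
+ scope.trackUsage('toolCalls', 4)       // four tool executions so far
+ scope.trackUsage('graphQueries', 2)
+
+ const remaining = scope.getRemainingResources()
+ console.log(remaining.toolCalls)       // e.g. 46
+ ```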
177
+
178
+ #### Updated Examples
179
+
180
+ **fraud-detection-agent.js**
181
+ - Added Phase 2.5: Memory Layer initialization
182
+ - Episodic memory storage for each analysis step
183
+ - Memory Layer statistics in execution report
184
+
185
+ **underwriting-agent.js**
186
+ - Complete Memory Layer integration
187
+ - Governance policy enforcement
188
+ - Scope-based resource tracking
189
+
190
+ #### Test Coverage
191
+
192
+ - 65 comprehensive tests for Memory Layer components
193
+ - All tests passing (100%)
194
+ - Tests cover: AgentState, AgentRuntime, WorkingMemory, EpisodicMemory, LongTermMemory, MemoryManager, GovernancePolicy, GovernanceEngine, AgentScope
195
+
196
+ #### New Exports
197
+
198
+ ```javascript
199
+ const {
200
+ // Memory Layer (v0.5.13+)
201
+ AgentState,
202
+ AgentRuntime,
203
+ WorkingMemory,
204
+ EpisodicMemory,
205
+ LongTermMemory,
206
+ MemoryManager,
207
+ // Governance Layer (v0.5.13+)
208
+ GovernancePolicy,
209
+ GovernanceEngine,
210
+ // Scope Layer (v0.5.13+)
211
+ AgentScope,
212
+ } = require('rust-kgdb')
213
+ ```
214
+
215
+ #### Usage Example
216
+
217
+ ```javascript
218
+ const { GraphDB, AgentRuntime, MemoryManager, GovernancePolicy, AgentScope, AgentState } = require('rust-kgdb')
219
+
220
+ // Initialize GraphDB for memory storage
221
+ const db = new GraphDB('http://example.org/memory')
222
+
223
+ // Create agent runtime with identity
224
+ const runtime = new AgentRuntime({
225
+ name: 'fraud-detector',
226
+ model: 'claude-sonnet-4',
227
+ tools: ['kg.sparql.query', 'kg.motif.find'],
228
+ memoryCapacity: 100,
229
+ episodeLimit: 1000
230
+ })
231
+
232
+ // Initialize memory manager with weighted retrieval
233
+ const memory = new MemoryManager(runtime, db, {
234
+ recencyWeight: 0.3,
235
+ relevanceWeight: 0.5,
236
+ importanceWeight: 0.2
237
+ })
238
+
239
+ // Create governance policy
240
+ const policy = new GovernancePolicy({
241
+ capabilities: ['ReadKG', 'ExecuteTool'],
242
+ maxToolCalls: 100
243
+ })
244
+
245
+ // Create scoped execution environment
246
+ const scope = new AgentScope({
247
+ namespace: 'fraud-detection',
248
+ resources: { maxToolCalls: 50, maxMemoryMB: 256 }
249
+ })
250
+
251
+ // Execute with full Memory Layer
252
+ runtime.transitionTo(AgentState.READY)
253
+ const executionId = runtime.startExecution('Analyze claims for fraud patterns')
254
+ // ... agent execution ...
255
+ runtime.completeExecution()
256
+ ```
257
+
5
258
  ## [0.5.12] - 2025-12-15
6
259
 
7
260
  ### Benchmark Section Cleanup
package/README.md CHANGED
@@ -108,6 +108,176 @@ The difference? HyperMind treats tools as **typed morphisms** (category theory),
108
108
 
109
109
  ---
110
110
 
111
+ ## The Deeper Problem: AI Agents Forget
112
+
113
+ Fixing SPARQL syntax is table stakes. Here's what keeps enterprise architects up at night:
114
+
115
+ **Scenario**: Your fraud detection agent correctly identified a circular payment ring last Tuesday. Today, an analyst asks: *"Show me similar patterns to what we found last week."*
116
+
117
+ The LLM response: *"I don't have access to previous conversations. Can you describe what you're looking for?"*
118
+
119
+ **The agent forgot everything.**
120
+
121
+ Every enterprise AI deployment hits the same wall:
122
+ - **No Memory**: Each session starts from zero - expensive recomputation, no learning
123
+ - **No Context Window Management**: Hit token limits? Lose critical history
124
+ - **No Idempotent Responses**: Same question, different answer - compliance nightmare
125
+ - **No Provenance Chain**: "Why did the agent flag this claim?" - silence
126
+
127
+ LangChain's solution: Vector databases. Store conversations, retrieve via similarity.
128
+
129
+ **The problem**: Similarity isn't memory. When your underwriter asks *"What did we decide about claims from Provider X?"*, you need:
130
+ 1. **Temporal awareness** - What we decided *last month* vs *yesterday*
131
+ 2. **Semantic edges** - The decision *relates to* these specific claims
132
+ 3. **Epistemological stratification** - Fact vs inference vs hypothesis
133
+ 4. **Proof chain** - *Why* we decided this, not just *that* we did
134
+
135
+ This requires a **Memory Hypergraph** - not a vector store.
136
+
137
+ ---
138
+
139
+ ## Memory Hypergraph: How AI Agents Remember
140
+
141
+ rust-kgdb introduces the **Memory Hypergraph** - a temporal knowledge graph where agent memory is stored in the *same* quad store as your domain knowledge, with hyper-edges connecting episodes to KG entities.
142
+
143
+ ```
144
+ ┌─────────────────────────────────────────────────────────────────────────────────┐
145
+ │ MEMORY HYPERGRAPH ARCHITECTURE │
146
+ │ │
147
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
148
+ │ │ AGENT MEMORY LAYER (am: graph) │ │
149
+ │ │ │ │
150
+ │ │ Episode:001 Episode:002 Episode:003 │ │
151
+ │ │ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │ │
152
+ │ │ │ Fraud ring │ │ Underwriting │ │ Follow-up │ │ │
153
+ │ │ │ detected in │ │ denied claim │ │ investigation │ │ │
154
+ │ │ │ Provider P001 │ │ from P001 │ │ on P001 │ │ │
155
+ │ │ │ │ │ │ │ │ │ │
156
+ │ │ │ Dec 10, 14:30 │ │ Dec 12, 09:15 │ │ Dec 15, 11:00 │ │ │
157
+ │ │ │ Score: 0.95 │ │ Score: 0.87 │ │ Score: 0.92 │ │ │
158
+ │ │ └───────┬───────┘ └───────┬───────┘ └───────┬───────┘ │ │
159
+ │ │ │ │ │ │ │
160
+ │ └───────────┼─────────────────────────┼─────────────────────────┼─────────┘ │
161
+ │ │ HyperEdge: │ HyperEdge: │ │
162
+ │ │ "QueriedKG" │ "DeniedClaim" │ │
163
+ │ ▼ ▼ ▼ │
164
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
165
+ │ │ KNOWLEDGE GRAPH LAYER (domain graph) │ │
166
+ │ │ │ │
167
+ │ │ Provider:P001 ──────────────▶ Claim:C123 ◀────────── Claimant:C001 │ │
168
+ │ │ │ │ │ │ │
169
+ │ │ │ :hasRiskScore │ :amount │ :name │ │
170
+ │ │ ▼ ▼ ▼ │ │
171
+ │ │ "0.87" "50000" "John Doe" │ │
172
+ │ │ │ │
173
+ │ │ ┌─────────────────────────────────────────────────────────────┐ │ │
174
+ │ │ │ SAME QUAD STORE - Single SPARQL query traverses BOTH │ │ │
175
+ │ │ │ memory graph AND knowledge graph! │ │ │
176
+ │ │ └─────────────────────────────────────────────────────────────┘ │ │
177
+ │ │ │ │
178
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
179
+ │ │
180
+ │ ┌─────────────────────────────────────────────────────────────────────────┐ │
181
+ │ │ TEMPORAL SCORING FORMULA │ │
182
+ │ │ │ │
183
+ │ │ Score = α × Recency + β × Relevance + γ × Importance │ │
184
+ │ │ │ │
185
+ │ │ where: │ │
186
+ │ │ Recency = 0.995^hours (≈11% decay per day) │ │
187
+ │ │ Relevance = cosine_similarity(query, episode) │ │
188
+ │ │ Importance = log10(access_count + 1) / log10(max + 1) │ │
189
+ │ │ │ │
190
+ │ │ Default: α=0.3, β=0.5, γ=0.2 │ │
191
+ │ └─────────────────────────────────────────────────────────────────────────┘ │
192
+ │ │
193
+ └─────────────────────────────────────────────────────────────────────────────────┘
194
+ ```
195
+
196
+ ### Why This Matters for Enterprise AI
197
+
198
+ **Without Memory Hypergraph** (LangChain, LlamaIndex):
199
+ ```javascript
200
+ // Ask about last week's findings
201
+ agent.chat("What fraud patterns did we find with Provider P001?")
202
+ // Response: "I don't have that information. Could you describe what you're looking for?"
203
+ // Cost: Re-run entire fraud detection pipeline ($5 in API calls, 30 seconds)
204
+ ```
205
+
206
+ **With Memory Hypergraph** (rust-kgdb):
207
+ ```javascript
208
+ // Memories are automatically linked to KG entities
209
+ const memories = await agent.recall("Provider P001 fraud", 10)
210
+ // Returns: Episodes 001, 002, 003 - all linked to Provider:P001 in KG
211
+
212
+ // Even better: SPARQL traverses BOTH memory and KG
213
+ const results = db.querySelect(`
214
+ PREFIX am: <https://gonnect.ai/ontology/agent-memory#>
215
+ PREFIX : <http://insurance.org/>
216
+
217
+ SELECT ?episode ?finding ?claimAmount WHERE {
218
+ # Search memory graph
219
+ GRAPH <https://gonnect.ai/memory/> {
220
+ ?episode a am:Episode ;
221
+ am:prompt ?finding .
222
+ ?edge am:source ?episode ;
223
+ am:target ?provider .
224
+ }
225
+ # Join with knowledge graph
226
+ ?claim :provider ?provider ;
227
+ :amount ?claimAmount .
228
+ FILTER(?claimAmount > 25000)
229
+ }
230
+ `)
231
+ // Returns: Episode findings + actual claim data - in ONE query!
232
+ ```
233
+
234
+ ### Rolling Context Window
235
+
236
+ Token limits are real. rust-kgdb uses a **rolling time window strategy** to find the right context:
237
+
238
+ ```
239
+ ┌─────────────────────────────────────────────────────────────────────────────────┐
240
+ │ ROLLING CONTEXT WINDOW │
241
+ │ │
242
+ │ Query: "What did we find about Provider P001?" │
243
+ │ │
244
+ │ Pass 1: Search last 1 hour → 0 episodes found → expand │
245
+ │ Pass 2: Search last 24 hours → 1 episode found (not enough) → expand │
246
+ │ Pass 3: Search last 7 days → 3 episodes found → within token budget ✓ │
247
+ │ │
248
+ │ Context returned: │
249
+ │ ┌──────────────────────────────────────────────────────────────────────────┐ │
250
+ │ │ Episode 003 (Dec 15): "Follow-up investigation on P001..." │ │
251
+ │ │ Episode 002 (Dec 12): "Underwriting denied claim from P001..." │ │
252
+ │ │ Episode 001 (Dec 10): "Fraud ring detected in Provider P001..." │ │
253
+ │ │ │ │
254
+ │ │ Estimated tokens: 847 / 8192 max │ │
255
+ │ │ Time window: 7 days │ │
256
+ │ │ Search passes: 3 │ │
257
+ │ └──────────────────────────────────────────────────────────────────────────┘ │
258
+ │ │
259
+ └─────────────────────────────────────────────────────────────────────────────────┘
260
+ ```
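+
+ A sketch of the expansion loop behind this strategy. `searchEpisodes` and `estimateTokens` stand in for whatever retrieval and token-counting helpers you have; they are not rust-kgdb API:
+
+ ```javascript
+ // Hedged sketch: widen the time window until enough episodes fit the token budget.
+ async function buildContext(query, { windowsHours = [1, 24, 168], tokenBudget = 8192, minEpisodes = 2 } = {}) {
+   let passes = 0
+   for (const hours of windowsHours) {
+     passes++
+     const episodes = await searchEpisodes(query, { sinceHours: hours })   // assumed helper
+     const lastWindow = hours === windowsHours[windowsHours.length - 1]
+     if (episodes.length < minEpisodes && !lastWindow) continue            // not enough yet: expand
+
+     const selected = []
+     let tokens = 0
+     for (const ep of episodes) {                                          // keep what fits the budget
+       const t = estimateTokens(ep.prompt)                                 // assumed helper
+       if (tokens + t > tokenBudget) break
+       selected.push(ep)
+       tokens += t
+     }
+     return { episodes: selected, tokens, windowHours: hours, passes }
+   }
+ }
+ ```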
261
+
262
+ ### Idempotent Responses via Query Cache
263
+
264
+ Same question = Same answer. Critical for compliance.
265
+
266
+ ```javascript
267
+ // First call: Compute answer, cache result
268
+ const result1 = await agent.call("Analyze claims from Provider P001")
269
+ // SHA-256 hash: 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
270
+
271
+ // Second call (10 minutes later): Return cached result
272
+ const result2 = await agent.call("Analyze claims from Provider P001")
273
+ // Same hash: 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
274
+
275
+ // Compliance officer: "Why are these identical?"
276
+ // You: "Idempotent responses - same input, same output, cryptographic proof."
277
+ ```
278
+
279
+ ---
280
+
111
281
  ## What This Is
112
282
 
113
283
  **World's first mobile-native knowledge graph database with clustered distribution and mathematically-grounded HyperMind agent framework.**
@@ -41,7 +41,19 @@ const {
41
41
  DatalogProgram,
42
42
  evaluateDatalog,
43
43
  GraphFrame,
44
- getVersion
44
+ getVersion,
45
+ // Memory Layer (v0.5.13+) - GraphDB-Powered Agent Memory
46
+ AgentState,
47
+ AgentRuntime,
48
+ WorkingMemory,
49
+ EpisodicMemory,
50
+ LongTermMemory,
51
+ MemoryManager,
52
+ // Governance Layer
53
+ GovernancePolicy,
54
+ GovernanceEngine,
55
+ // Scope Layer
56
+ AgentScope
45
57
  } = require('../index.js')
46
58
 
47
59
  // ═══════════════════════════════════════════════════════════════════════════════
@@ -335,6 +347,67 @@ async function main() {
335
347
  console.log(` ✓ Tools: ${CONFIG.agent.tools.join(', ')}`)
336
348
  console.log()
337
349
 
350
+ // ───────────────────────────────────────────────────────────────────────────
351
+ // PHASE 2.5: Memory Layer Initialization (NEW in v0.5.13)
352
+ // ───────────────────────────────────────────────────────────────────────────
353
+
354
+ console.log('┌─ PHASE 2.5: Memory Layer Initialization ─────────────────────────────────┐')
355
+ console.log('│ AgentRuntime + MemoryManager + GovernanceEngine + AgentScope │')
356
+ console.log('└─────────────────────────────────────────────────────────────────────────┘')
357
+
358
+ // Create agent runtime with identity
359
+ const runtime = new AgentRuntime({
360
+ name: CONFIG.agent.name,
361
+ model: CONFIG.llm.model,
362
+ tools: CONFIG.agent.tools,
363
+ memoryCapacity: 100,
364
+ episodeLimit: 1000
365
+ })
366
+
367
+ // Initialize memory manager with GraphDB
368
+ const memoryManager = new MemoryManager(runtime, db, {
369
+ recencyWeight: 0.3,
370
+ relevanceWeight: 0.5,
371
+ importanceWeight: 0.2
372
+ })
373
+
374
+ // Create governance policy for fraud detection
375
+ const policy = new GovernancePolicy({
376
+ capabilities: ['ReadKG', 'WriteKG', 'ExecuteTool', 'UseEmbeddings'],
377
+ maxMemoryMB: 256,
378
+ maxExecutionTimeMs: 60000,
379
+ maxToolCalls: 100
380
+ })
381
+
382
+ // Initialize governance engine
383
+ const governance = new GovernanceEngine(policy)
384
+
385
+ // Create agent scope with namespace isolation
386
+ const scope = new AgentScope({
387
+ name: 'fraud-detection-scope',
388
+ graphUri: CONFIG.kg.graphUri,
389
+ allowedGraphs: [CONFIG.kg.graphUri, `${CONFIG.kg.graphUri}/memory`],
390
+ maxToolCalls: 50
391
+ })
392
+
393
+ // Transition to READY state
394
+ runtime.transitionTo(AgentState.READY)
395
+
396
+ console.log(` ✓ Runtime ID: ${runtime.id}`)
397
+ console.log(` ✓ State: ${runtime.state}`)
398
+ console.log(` ✓ Memory Manager: recency=${memoryManager.weights.recency}, relevance=${memoryManager.weights.relevance}, importance=${memoryManager.weights.importance}`)
399
+ console.log(` ✓ Governance: ${policy.capabilities.size} capabilities, ${policy.limits.maxToolCalls} max tool calls`)
400
+ console.log(` ✓ Scope: ${scope.name} (graphs: ${scope.namespace.allowedGraphs.length})`)
401
+ console.log()
402
+
403
+ // Store initial context in working memory
404
+ memoryManager.addToWorking({
405
+ type: 'session_start',
406
+ task: 'fraud_detection',
407
+ target_graph: CONFIG.kg.graphUri,
408
+ triple_count: tripleCount
409
+ })
410
+
338
411
  // ───────────────────────────────────────────────────────────────────────────
339
412
  // PHASE 3: Tool Execution Pipeline
340
413
  // ───────────────────────────────────────────────────────────────────────────
@@ -527,6 +600,62 @@ async function main() {
527
600
  }
528
601
  console.log()
529
602
 
603
+ // ───────────────────────────────────────────────────────────────────────────
604
+ // Memory Layer: Store Tool Executions as Episodes (NEW in v0.5.13)
605
+ // ───────────────────────────────────────────────────────────────────────────
606
+
607
+ // Store each tool execution in episodic memory for audit trail
608
+ const executionId = runtime.startExecution('Fraud detection analysis on insurance claims')
609
+
610
+ await memoryManager.storeExecution({
611
+ id: `${executionId}-sparql`,
612
+ prompt: 'SELECT high-risk claimants',
613
+ tool: 'kg.sparql.query',
614
+ output: { count: highRiskResults.length },
615
+ success: true,
616
+ durationMs: 50
617
+ })
618
+
619
+ await memoryManager.storeExecution({
620
+ id: `${executionId}-triangles`,
621
+ prompt: 'Triangle detection on fraud network',
622
+ tool: 'kg.graphframe.triangles',
623
+ output: { triangles: triangleCount, pageRank: Object.keys(pageRank).length },
624
+ success: true,
625
+ durationMs: 30
626
+ })
627
+
628
+ await memoryManager.storeExecution({
629
+ id: `${executionId}-embeddings`,
630
+ prompt: 'Find similar claims to CLM001',
631
+ tool: 'kg.embeddings.search',
632
+ output: { similarCount: similarClaims.length },
633
+ success: true,
634
+ durationMs: 25
635
+ })
636
+
637
+ await memoryManager.storeExecution({
638
+ id: `${executionId}-datalog`,
639
+ prompt: 'NICB fraud detection rules',
640
+ tool: 'kg.datalog.infer',
641
+ output: { collusions: collusions.length, addressFraud: addressFraud.length },
642
+ success: true,
643
+ durationMs: 15
644
+ })
645
+
646
+ // Track resource usage in scope
647
+ scope.trackUsage('toolCalls', 4)
648
+ scope.trackUsage('graphQueries', 2)
649
+
650
+ // Add findings to working memory
651
+ memoryManager.addToWorking({
652
+ type: 'findings',
653
+ highRiskCount: highRiskResults.length,
654
+ triangles: triangleCount,
655
+ collusions: collusions.length,
656
+ addressFraud: addressFraud.length
657
+ })
658
+
530
659
  // ───────────────────────────────────────────────────────────────────────────
531
660
  // PHASE 4: LLM-Powered Analysis (if API key available)
532
661
  // ───────────────────────────────────────────────────────────────────────────
@@ -589,6 +718,24 @@ async function main() {
589
718
  console.log()
590
719
  }
591
720
 
721
+ // Complete execution and get memory statistics
722
+ runtime.completeExecution({ riskLevel, findings }, true)
723
+ const memStats = memoryManager.getStats()
724
+ const remaining = scope.getRemainingResources()
725
+
726
+ // Memory Layer Report (NEW in v0.5.13)
727
+ console.log(' ┌──────────────────────────────────────────────────────────────────────┐')
728
+ console.log(' │ MEMORY LAYER STATISTICS (v0.5.13+) │')
729
+ console.log(' ├──────────────────────────────────────────────────────────────────────┤')
730
+ console.log(` │ Agent Runtime ID: ${(runtime.id || 'unknown').substring(0, 36).padEnd(46)}│`)
731
+ console.log(` │ Final State: ${runtime.state.padEnd(46)}│`)
732
+ console.log(` │ Working Memory Items: ${memStats.working.contextSize.toString().padEnd(46)}│`)
733
+ console.log(` │ Episodic Memory Episodes: ${memStats.episodic.episodeCount.toString().padEnd(46)}│`)
734
+ console.log(` │ Tool Calls Remaining: ${remaining.toolCalls.toString().padEnd(46)}│`)
735
+ console.log(` │ Graph Queries Remaining: ${remaining.graphQueries.toString().padEnd(46)}│`)
736
+ console.log(' └──────────────────────────────────────────────────────────────────────┘')
737
+ console.log()
738
+
592
739
  // Execution Witness (Proof Theory: Curry-Howard Correspondence)
593
740
  const witness = {
594
741
  agent: agent.getName(),
@@ -609,6 +756,15 @@ async function main() {
609
756
  triangles: findings.triangles,
610
757
  collusions: findings.collusions.length,
611
758
  addressFraud: findings.addressFraud.length
759
+ },
760
+ // Memory Layer (v0.5.13+)
761
+ memory: {
762
+ runtimeId: runtime.id,
763
+ finalState: runtime.state,
764
+ workingMemoryItems: memStats.working.contextSize,
765
+ episodicMemoryEpisodes: memStats.episodic.episodeCount,
766
+ toolCallsUsed: scope.usage.toolCalls,
767
+ governanceAuditEntries: governance.auditLog.length
612
768
  }
613
769
  }
614
770