@juspay/neurolink 9.12.2 → 9.13.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,3 +1,15 @@
+## [9.13.0](https://github.com/juspay/neurolink/compare/v9.12.3...v9.13.0) (2026-02-25)
+
+### Features
+
+- **(memory):** integrate Hippocampus SDK for enhanced user memory management ([4da4e63](https://github.com/juspay/neurolink/commit/4da4e635cb5175c10c2efc63643d97cdee301e25))
+
+## [9.12.3](https://github.com/juspay/neurolink/compare/v9.12.2...v9.12.3) (2026-02-24)
+
+### Bug Fixes
+
+- **(package):** resolve consumer bundling errors for server adapters ([0f4f71d](https://github.com/juspay/neurolink/commit/0f4f71de0467835a17146c2ff540b5d2009319fb))
+
 ## [9.12.2](https://github.com/juspay/neurolink/compare/v9.12.1...v9.12.2) (2026-02-23)
 
 ### Bug Fixes
package/README.md CHANGED
@@ -37,6 +37,7 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal
 
 | Feature | Version | Description | Guide |
 | ----------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
+| **Memory** | v9.12.0 | Per-user condensed memory that persists across conversations. LLM-powered condensation with S3, Redis, or SQLite backends. | [Memory Guide](docs/features/memory.md) |
 | **Context Window Management** | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | [Context Compaction Guide](docs/features/context-compaction.md) |
 | **Tool Execution Control** | v9.3.0 | `prepareStep` and `toolChoice` support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls. | [API Reference](docs/api/type-aliases/GenerateOptions.md#preparestep) |
 | **File Processor System** | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | [File Processors Guide](docs/features/file-processors.md) |
@@ -48,6 +49,7 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal
 | **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
 | **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |
 
+- **Memory** – Per-user condensed memory that persists across all conversations. Automatically retrieves and stores memory on each `generate()`/`stream()` call. Supports S3, Redis, and SQLite storage with LLM-powered condensation. → [Memory Guide](docs/features/memory.md)
 - **External TracerProvider Support** – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → [Observability Guide](docs/features/observability.md)
 - **Server Adapters** – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with `serve` and `server` commands for foreground/background modes, route management, and OpenAPI generation. → [Server Adapters Guide](docs/guides/server-adapters/index.md)
 - **Title Generation Events** – Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)
@@ -240,7 +242,7 @@ const result = await neurolink.generate({
 | --------------------------- | --------------------------------------------------------------------------------- | --------------------------------------------------------- |
 | **Auto Provider Selection** | Intelligent provider fallback | [SDK Guide](docs/sdk/index.md#auto-selection) |
 | **Streaming Responses** | Real-time token streaming | [Streaming Guide](docs/advanced/streaming.md) |
-| **Conversation Memory** | Automatic context management | [Memory Guide](docs/sdk/index.md#memory) |
+| **Conversation Memory** | Automatic context management with embedded per-user memory | [Memory Guide](docs/sdk/index.md#memory) |
 | **Full Type Safety** | Complete TypeScript types | [Type Reference](docs/sdk/api-reference.md) |
 | **Error Handling** | Graceful provider fallback | [Error Guide](docs/reference/troubleshooting.md) |
 | **Analytics & Evaluation** | Usage tracking, quality scores | [Analytics Guide](docs/advanced/analytics.md) |
@@ -297,16 +299,17 @@ const result = await neurolink.generate({
 
 **Production-ready capabilities for regulated industries:**
 
-| Feature | Description | Use Case | Documentation |
-| --------------------------- | ---------------------------------- | ------------------------- | ----------------------------------------------------------- |
-| **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/enterprise-proxy-setup.md) |
-| **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
-| **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
-| **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
-| **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/telemetry-guide.md) |
-| **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
-| **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/sagemaker-integration.md) |
-| **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/litellm-integration.md) |
+| Feature | Description | Use Case | Documentation |
+| --------------------------- | ------------------------------------------- | ------------------------- | ----------------------------------------------------------- |
+| **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/enterprise-proxy-setup.md) |
+| **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
+| **Memory** | Per-user condensed memory (S3/Redis/SQLite) | Long-term user context | [Memory Guide](docs/features/memory.md) |
+| **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
+| **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
+| **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/telemetry-guide.md) |
+| **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
+| **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/sagemaker-integration.md) |
+| **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/litellm-integration.md) |
 
 **Security & Compliance:**
 
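For orientation, the Memory feature described in the README rows above can be wired up roughly as follows. This is a configuration sketch assembled only from names visible in this diff (`conversationMemory.memory`, `context.userId`, `input.text`, the sqlite/redis/s3 storage backends, and the `maxWords` default of 50); the exact option shapes are assumptions, so consult the linked Memory Guide for the authoritative API.

```typescript
// Hypothetical usage sketch; option shapes are inferred from this diff and
// may differ from the published Memory Guide.
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    memory: {
      enabled: true, // required opt-in: the diff gates every memory path on this flag
      storage: { type: "sqlite" }, // the README also lists redis and s3 backends
      maxWords: 50, // condensation budget; 50 is the default logged by the initializer
    },
  },
});

// Retrieval and background storage only run when context.userId is present.
const result = await neurolink.generate({
  input: { text: "What do you remember about me?" },
  context: { userId: "user-123" },
});
```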
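A detail worth noting in the `ensureMemoryReady` method of this diff is its tri-state cache: `undefined` means "initialization not yet attempted", `null` means "attempted and disabled", and an object means "live client". The standalone sketch below reproduces that pattern under illustrative names (nothing here is the package's API) to show why the diff checks `!== undefined` rather than plain truthiness.

```typescript
// Illustrative tri-state lazy-initialization pattern (names are not the
// package's API): undefined = not attempted, null = disabled, object = live.
type MemoryClient = { add(userId: string, content: string): Promise<void> };

class LazyMemory {
  // Stays undefined until the first ensureReady() call resolves it.
  private instance: MemoryClient | null | undefined;
  public initAttempts = 0;

  constructor(private readonly enabled: boolean) {}

  ensureReady(): MemoryClient | null {
    if (this.instance !== undefined) {
      return this.instance; // cached result: init runs at most once
    }
    this.initAttempts++;
    // A disabled config caches `null`, so later calls skip initialization
    // instead of retrying on every generate()/stream() invocation.
    this.instance = this.enabled ? { add: async () => {} } : null;
    return this.instance;
  }
}
```

With a single falsy check, a disabled instance (`null`) would be indistinguishable from "never tried", and initialization would be retried on every call; distinguishing `undefined` from `null` makes the failure or opt-out sticky.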
@@ -0,0 +1,6 @@
+import { Hippocampus, type HippocampusConfig } from "@juspay/hippocampus";
+export type { HippocampusConfig };
+export type Memory = HippocampusConfig & {
+    enabled?: boolean;
+};
+export declare function initializeHippocampus(config: HippocampusConfig): Hippocampus | null;
@@ -0,0 +1,19 @@
+import { Hippocampus } from "@juspay/hippocampus";
+import { logger } from "../utils/logger.js";
+export function initializeHippocampus(config) {
+    try {
+        const instance = new Hippocampus(config);
+        logger.info("[memoryInitializer] Memory initialized successfully", {
+            storageType: config.storage?.type || "sqlite",
+            maxWords: config.maxWords || 50,
+            hasCustomPrompt: !!config.prompt,
+        });
+        return instance;
+    }
+    catch (error) {
+        logger.warn("[memoryInitializer] Failed to initialize memory; disabling", {
+            error: error instanceof Error ? error.message : String(error),
+        });
+        return null;
+    }
+}
@@ -126,6 +126,8 @@ export declare class NeuroLink {
     private _sessionCostUsd;
     private fileRegistry;
     private cachedFileTools;
+    private memoryInstance?;
+    private memorySDKConfig?;
     /**
      * Extract and set Langfuse context from options with proper async scoping
      */
@@ -138,6 +140,11 @@ export declare class NeuroLink {
      * Async initialization called during generate/stream
      */
     private ensureMem0Ready;
+    private initializeMemoryConfig;
+    /**
+     * Lazy initialization for memory — called during generate/stream.
+     */
+    private ensureMemoryReady;
     /**
      * Context storage for tool execution
      * This context will be merged with any runtime context passed by the AI model
@@ -228,6 +235,16 @@ export declare class NeuroLink {
     private extractMemoryContext;
     /** Store conversation turn in mem0 */
     private storeMem0ConversationTurn;
+    /**
+     * Retrieve condensed memory for a user.
+     * Returns the input text enhanced with memory context, or unchanged if no memory.
+     */
+    private retrieveMemory;
+    /**
+     * Store a conversation turn in memory (non-blocking).
+     * Calls add(userId, content) which internally condenses old + new via LLM.
+     */
+    private storeMemoryInBackground;
     /**
      * Set up HITL event forwarding to main emitter
      */
package/dist/neurolink.js CHANGED
@@ -34,6 +34,7 @@ import { directToolsServer } from "./mcp/servers/agent/directToolsServer.js";
 import { MCPToolRegistry } from "./mcp/toolRegistry.js";
 import { initializeMem0 } from "./memory/mem0Initializer.js";
 import { createMemoryRetrievalTools } from "./memory/memoryRetrievalTools.js";
+import { initializeHippocampus, } from "./memory/hippocampusInitializer.js";
 import { flushOpenTelemetry, getLangfuseHealthStatus, initializeOpenTelemetry, isOpenTelemetryInitialized, setLangfuseContext, shutdownOpenTelemetry, } from "./services/server/ai/observability/instrumentation.js";
 import { getConversationMessages, storeConversationTurn, } from "./utils/conversationMemory.js";
 // Enhanced error handling imports
@@ -180,6 +181,9 @@ export class NeuroLink {
     fileRegistry;
     // Cached file tools to avoid redundant createFileTools() calls per generate/stream
     cachedFileTools = null;
+    // Memory instance and config
+    memoryInstance;
+    memorySDKConfig;
     /**
      * Extract and set Langfuse context from options with proper async scoping
      */
@@ -274,6 +278,32 @@ export class NeuroLink {
         this.mem0Instance = await initializeMem0(this.mem0Config);
         return this.mem0Instance;
     }
+    initializeMemoryConfig() {
+        const memory = this.conversationMemoryConfig?.conversationMemory?.memory;
+        if (!memory?.enabled) {
+            return false;
+        }
+        this.memorySDKConfig = memory;
+        return true;
+    }
+    /**
+     * Lazy initialization for memory — called during generate/stream.
+     */
+    ensureMemoryReady() {
+        if (this.memoryInstance !== undefined) {
+            return this.memoryInstance;
+        }
+        if (!this.initializeMemoryConfig()) {
+            this.memoryInstance = null;
+            return null;
+        }
+        if (!this.memorySDKConfig) {
+            this.memoryInstance = null;
+            return null;
+        }
+        this.memoryInstance = initializeHippocampus(this.memorySDKConfig);
+        return this.memoryInstance;
+    }
     /**
      * Context storage for tool execution
      * This context will be merged with any runtime context passed by the AI model
@@ -682,6 +712,39 @@ Current user's request: ${currentInput}`;
             async_mode: true,
         });
     }
+    /**
+     * Retrieve condensed memory for a user.
+     * Returns the input text enhanced with memory context, or unchanged if no memory.
+     */
+    async retrieveMemory(inputText, userId) {
+        const client = this.ensureMemoryReady();
+        if (!client) {
+            return inputText;
+        }
+        const memory = await client.get(userId);
+        if (!memory) {
+            return inputText;
+        }
+        return this.formatMemoryContext(memory, inputText);
+    }
+    /**
+     * Store a conversation turn in memory (non-blocking).
+     * Calls add(userId, content) which internally condenses old + new via LLM.
+     */
+    storeMemoryInBackground(originalPrompt, responseContent, userId) {
+        setImmediate(async () => {
+            try {
+                const client = this.ensureMemoryReady();
+                if (client) {
+                    const content = `User: ${originalPrompt}\nAssistant: ${responseContent}`;
+                    await client.add(userId, content);
+                }
+            }
+            catch (error) {
+                logger.warn("Memory storage failed:", error);
+            }
+        });
+    }
     /**
      * Set up HITL event forwarding to main emitter
      */
@@ -1635,6 +1698,17 @@ Current user's request: ${currentInput}`;
                 logger.warn("Mem0 memory retrieval failed:", error);
             }
         }
+        // Memory retrieval
+        if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+            options.context?.userId) {
+            try {
+                options.input.text = await this.retrieveMemory(options.input.text, options.context.userId);
+                logger.debug("Memory retrieval successful");
+            }
+            catch (error) {
+                logger.warn("Memory retrieval failed:", error);
+            }
+        }
         const startTime = Date.now();
         // Apply orchestration if enabled and no specific provider/model requested
         if (this.enableOrchestration && !options.provider && !options.model) {
@@ -1863,6 +1937,12 @@ Current user's request: ${currentInput}`;
                 }
             });
         }
+        // Memory storage
+        if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+            options.context?.userId &&
+            generateResult.content?.trim()) {
+            this.storeMemoryInBackground(originalPrompt ?? "", generateResult.content.trim(), options.context.userId);
+        }
     }
     /**
      * Handle PPT generation mode
@@ -3150,6 +3230,17 @@ Current user's request: ${currentInput}`;
                 logger.warn("Mem0 memory retrieval failed:", error);
             }
         }
+        // Memory retrieval
+        if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+            options.context?.userId) {
+            try {
+                options.input.text = await this.retrieveMemory(options.input.text, options.context.userId);
+                logger.debug("Memory retrieval successful");
+            }
+            catch (error) {
+                logger.warn("Memory retrieval failed:", error);
+            }
+        }
         // Apply orchestration if enabled and no specific provider/model requested
         if (this.enableOrchestration && !options.provider && !options.model) {
             try {
@@ -3405,6 +3496,11 @@ Current user's request: ${currentInput}`;
                 }
             });
         }
+        if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+            enhancedOptions.context?.userId &&
+            accumulatedContent?.trim()) {
+            this.storeMemoryInBackground(originalPrompt ?? "", accumulatedContent.trim(), enhancedOptions.context?.userId);
+        }
     }
     /**
      * Validate stream input with comprehensive error reporting
@@ -33,6 +33,8 @@
  * - Current time (ISO): `new Date().toISOString()`
  */
 import type { Mem0Config } from "../memory/mem0Initializer.js";
+import type { Memory } from "../memory/hippocampusInitializer.js";
+export type { Memory };
 /**
  * Configuration for conversation memory feature
  */
@@ -53,6 +55,8 @@ export type ConversationMemoryConfig = {
     mem0Enabled?: boolean;
     /** Configuration for mem0 cloud API integration */
     mem0Config?: Mem0Config;
+    /** Memory SDK config (condensed key-value memory per user). Set enabled: true to activate. */
+    memory?: Memory;
     /** Redis configuration (optional) - overrides environment variables */
     redisConfig?: RedisStorageConfig;
     /** Context compaction configuration */
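To make the new `memory` option concrete, here is a minimal sketch of the type composition declared in the diff above, `Memory = HippocampusConfig & { enabled?: boolean }`. The `HippocampusConfig` fields shown are inferred from the initializer's log call (`storage.type`, `maxWords`, `prompt`) and are assumptions, not the SDK's full API.

```typescript
// Assumed partial shape of HippocampusConfig, reconstructed from the fields
// the initializer logs; the real SDK type may have more members.
type HippocampusConfigSketch = {
  storage?: { type: "sqlite" | "redis" | "s3" };
  maxWords?: number;
  prompt?: string;
};

type Memory = HippocampusConfigSketch & { enabled?: boolean };

// Opt in explicitly: without `enabled: true`, initializeMemoryConfig()
// returns false and the memory subsystem stays off.
const memoryConfig: Memory = {
  enabled: true,
  storage: { type: "redis" },
  maxWords: 50,
};
```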
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@juspay/neurolink",
-  "version": "9.12.2",
+  "version": "9.13.0",
   "description": "Universal AI Development Platform with working MCP integration, multi-provider support, and professional CLI. Built-in tools operational, 58+ external MCP servers discoverable. Connect to filesystem, GitHub, database operations, and more. Build, test, and deploy AI applications with 13 providers: OpenAI, Anthropic, Google AI, AWS Bedrock, Azure, Hugging Face, Ollama, and Mistral AI.",
   "author": {
     "name": "Juspay Technologies",
@@ -182,6 +182,7 @@
     "@google/genai": "^1.34.0",
     "@google/generative-ai": "^0.24.1",
     "@huggingface/inference": "^2.8.1",
+    "@juspay/hippocampus": "^0.1.2",
     "@langfuse/otel": "^4.2.0",
     "@modelcontextprotocol/sdk": "^1.26.0",
     "@openrouter/ai-sdk-provider": "^0.7.5",
@@ -235,7 +236,16 @@
     "express-rate-limit": "^7.4.0",
     "ffmpeg-static": "^5.3.0",
     "ffprobe-static": "^3.1.0",
-    "sharp": "^0.34.5"
+    "sharp": "^0.34.5",
+    "@fastify/rate-limit": "^10.3.0",
+    "fastify": "^5.7.2",
+    "@fastify/cors": "^11.2.0",
+    "@koa/cors": "^5.0.0",
+    "@koa/router": "^15.3.0",
+    "koa": "^3.1.1",
+    "koa-bodyparser": "^4.4.1",
+    "express": "^5.1.0",
+    "cors": "^2.8.5"
   },
   "devDependencies": {
     "@actions/core": "^2.0.2",
@@ -245,10 +255,6 @@
     "@changesets/changelog-github": "^0.5.1",
     "@changesets/cli": "^2.29.7",
     "@eslint/js": "^9.35.0",
-    "@fastify/cors": "^11.2.0",
-    "@fastify/rate-limit": "^10.3.0",
-    "@koa/cors": "^5.0.0",
-    "@koa/router": "^15.3.0",
     "@opentelemetry/api": "^1.9.0",
     "@opentelemetry/sdk-trace-base": "^2.1.0",
     "@opentelemetry/sdk-trace-node": "^2.1.0",
@@ -282,13 +288,8 @@
     "@vitest/coverage-v8": "^2.1.9",
     "concurrently": "^8.2.2",
     "conventional-changelog-conventionalcommits": "^9.1.0",
-    "cors": "^2.8.5",
     "eslint": "^9.35.0",
-    "express": "^5.1.0",
-    "fastify": "^5.7.2",
     "husky": "^9.1.7",
-    "koa": "^3.1.1",
-    "koa-bodyparser": "^4.4.1",
     "lint-staged": "^16.1.6",
     "playwright": "^1.55.0",
     "prettier": "^3.6.2",