@juspay/neurolink 9.12.3 → 9.14.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,3 +1,15 @@
+ ## [9.14.0](https://github.com/juspay/neurolink/compare/v9.13.0...v9.14.0) (2026-02-27)
+
+ ### Features
+
+ - **(docs):** documentation site overhaul — MCP docs server, homepage redesign, and SEO improvements ([5b16ed4](https://github.com/juspay/neurolink/commit/5b16ed4d5455568a480aa0389ad934eed9d03360))
+
+ ## [9.13.0](https://github.com/juspay/neurolink/compare/v9.12.3...v9.13.0) (2026-02-25)
+
+ ### Features
+
+ - **(memory):** integrate Hippocampus SDK for enhanced user memory management ([4da4e63](https://github.com/juspay/neurolink/commit/4da4e635cb5175c10c2efc63643d97cdee301e25))
+
  ## [9.12.3](https://github.com/juspay/neurolink/compare/v9.12.2...v9.12.3) (2026-02-24)

  ### Bug Fixes
package/README.md CHANGED
@@ -37,6 +37,7 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal

  | Feature | Version | Description | Guide |
  | ----------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
+ | **Memory** | v9.12.0 | Per-user condensed memory that persists across conversations. LLM-powered condensation with S3, Redis, or SQLite backends. | [Memory Guide](docs/features/memory.md) |
  | **Context Window Management** | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | [Context Compaction Guide](docs/features/context-compaction.md) |
  | **Tool Execution Control** | v9.3.0 | `prepareStep` and `toolChoice` support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls. | [API Reference](docs/api/type-aliases/GenerateOptions.md#preparestep) |
  | **File Processor System** | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | [File Processors Guide](docs/features/file-processors.md) |
@@ -48,6 +49,7 @@ Extracted from production systems at Juspay and battle-tested at enterprise scal
  | **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | [Image Generation Guide](docs/image-generation-streaming.md) |
  | **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | [HTTP Transport Guide](docs/mcp-http-transport.md) |

+ - **Memory** – Per-user condensed memory that persists across all conversations. Automatically retrieves and stores memory on each `generate()`/`stream()` call. Supports S3, Redis, and SQLite storage with LLM-powered condensation. → [Memory Guide](docs/features/memory.md)
  - **External TracerProvider Support** – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → [Observability Guide](docs/features/observability.md)
  - **Server Adapters** – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with `serve` and `server` commands for foreground/background modes, route management, and OpenAPI generation. → [Server Adapters Guide](docs/guides/server-adapters/index.md)
  - **Title Generation Events** – Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → [Conversation Memory Guide](docs/conversation-memory.md#title-generation-events)
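The Memory bullet above describes an automatic retrieve-and-enhance step on each `generate()`/`stream()` call. A minimal sketch of that flow, with a stand-in `client` object in place of the real Hippocampus instance; the "Relevant memory" wording is an assumption, since the SDK's actual `formatMemoryContext` output is not shown in this diff:

```javascript
// Sketch only: `client` stands in for the memory backend, and the
// context-prefix wording below is illustrative, not the SDK's exact format.
async function withMemory(client, userId, inputText) {
  const memory = await client.get(userId); // condensed per-user memory, or null
  if (!memory) {
    return inputText; // no memory yet: the prompt passes through unchanged
  }
  return `Relevant memory for this user:\n${memory}\n\n${inputText}`;
}
```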
@@ -240,7 +242,7 @@ const result = await neurolink.generate({
  | --------------------------- | --------------------------------------------------------------------------------- | --------------------------------------------------------- |
  | **Auto Provider Selection** | Intelligent provider fallback | [SDK Guide](docs/sdk/index.md#auto-selection) |
  | **Streaming Responses** | Real-time token streaming | [Streaming Guide](docs/advanced/streaming.md) |
- | **Conversation Memory** | Automatic context management | [Memory Guide](docs/sdk/index.md#memory) |
+ | **Conversation Memory** | Automatic context management with embedded per-user memory | [Memory Guide](docs/sdk/index.md#memory) |
  | **Full Type Safety** | Complete TypeScript types | [Type Reference](docs/sdk/api-reference.md) |
  | **Error Handling** | Graceful provider fallback | [Error Guide](docs/reference/troubleshooting.md) |
  | **Analytics & Evaluation** | Usage tracking, quality scores | [Analytics Guide](docs/advanced/analytics.md) |
@@ -297,16 +299,17 @@ const result = await neurolink.generate({

  **Production-ready capabilities for regulated industries:**

- | Feature | Description | Use Case | Documentation |
- | --------------------------- | ---------------------------------- | ------------------------- | ----------------------------------------------------------- |
- | **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/enterprise-proxy-setup.md) |
- | **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
- | **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
- | **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
- | **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/telemetry-guide.md) |
- | **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
- | **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/sagemaker-integration.md) |
- | **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/litellm-integration.md) |
+ | Feature | Description | Use Case | Documentation |
+ | --------------------------- | ------------------------------------------- | ------------------------- | ----------------------------------------------------------- |
+ | **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/enterprise-proxy-setup.md) |
+ | **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
+ | **Memory** | Per-user condensed memory (S3/Redis/SQLite) | Long-term user context | [Memory Guide](docs/features/memory.md) |
+ | **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
+ | **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
+ | **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/telemetry-guide.md) |
+ | **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
+ | **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/sagemaker-integration.md) |
+ | **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/litellm-integration.md) |

  **Security & Compliance:**

@@ -0,0 +1,10 @@
+ import type { CommandModule } from "yargs";
+ type DocsCommandArgs = {
+   transport?: "stdio" | "http";
+   port?: number;
+ };
+ export declare class DocsCommandFactory {
+   static createDocsCommand(): CommandModule<object, DocsCommandArgs>;
+   private static executeDocs;
+ }
+ export {};
@@ -0,0 +1,48 @@
+ import chalk from "chalk";
+ export class DocsCommandFactory {
+   static createDocsCommand() {
+     return {
+       command: "docs",
+       describe: "Start the NeuroLink documentation MCP server",
+       builder: (yargs) => yargs
+         .option("transport", {
+           alias: "t",
+           type: "string",
+           choices: ["stdio", "http"],
+           default: "stdio",
+           description: "Transport protocol (stdio for local, http for remote)",
+         })
+         .option("port", {
+           alias: "p",
+           type: "number",
+           default: 3001,
+           description: "Port for HTTP transport (ignored for stdio)",
+         }),
+       handler: async (argv) => {
+         await DocsCommandFactory.executeDocs(argv);
+       },
+     };
+   }
+   static async executeDocs(argv) {
+     try {
+       // Dynamic path prevents TypeScript from resolving outside rootDir
+       const mcpServerPath = "../../../docs-site/mcp-server/index.js";
+       const { startDocsServer } = await import(mcpServerPath);
+       await startDocsServer({
+         transport: argv.transport,
+         port: argv.port,
+       });
+     }
+     catch (err) {
+       if (err instanceof Error &&
+           err.message.includes("search-index.json not found")) {
+         console.error(chalk.red("\nSearch index not found. Build the docs site first:\n"));
+         console.error(chalk.cyan("  cd docs-site && pnpm build\n"));
+         process.exit(1);
+       }
+       console.error(chalk.red("Failed to start docs server:"), err);
+       process.exit(1);
+     }
+   }
+ }
+ //# sourceMappingURL=docs.js.map
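The `executeDocs` handler above pairs a runtime dynamic import with a friendly message for one known failure mode. The shape of that pattern can be sketched with an injected loader function (all names here are illustrative stand-ins, not the package's API):

```javascript
// startWithGuidance: run a lazily-loaded server, translating one known
// error message into actionable guidance instead of a raw stack trace.
// `load` is a stand-in for `() => import(mcpServerPath)`.
async function startWithGuidance(load, options) {
  try {
    const { startDocsServer } = await load();
    return await startDocsServer(options);
  } catch (err) {
    if (err instanceof Error && err.message.includes("search-index.json not found")) {
      console.error("Search index not found. Build the docs site first:");
      console.error("  cd docs-site && pnpm build");
      return null; // the real CLI calls process.exit(1) here
    }
    throw err; // unknown errors propagate to the generic handler
  }
}
```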
@@ -10,6 +10,7 @@ import { SetupCommandFactory } from "./factories/setupCommandFactory.js";
  import { ServerCommandFactory } from "./commands/server.js";
  import { ServeCommandFactory } from "./commands/serve.js";
  import { ragCommand } from "./commands/rag.js";
+ import { DocsCommandFactory } from "./commands/docs.js";
  // Enhanced CLI with Professional UX
  export function initializeCliParser() {
    return (yargs(hideBin(process.argv))
@@ -166,6 +167,8 @@ export function initializeCliParser() {
    // Serve Command - Simplified server start - Using ServeCommandFactory
    .command(ServeCommandFactory.createServeCommands())
    // RAG Document Processing Commands
-   .command(ragCommand)); // Close the main return statement
+   .command(ragCommand)
+   // Docs MCP Server Command
+   .command(DocsCommandFactory.createDocsCommand())); // Close the main return statement
  }
  //# sourceMappingURL=parser.js.map
package/dist/index.d.ts CHANGED
@@ -1,7 +1,7 @@
  /**
   * NeuroLink AI Toolkit
   *
-  * A unified AI provider interface with support for 13+ providers,
+  * A unified AI provider interface with support for 14+ providers,
   * automatic fallback, streaming, MCP tool integration, HITL security,
   * Redis persistence, and enterprise-grade middleware.
   *
@@ -60,9 +60,9 @@ export declare const VERSION = "1.0.0";
   * Quick start factory function for creating AI provider instances.
   *
   * Creates a configured AI provider instance ready for immediate use.
-  * Supports all 13 providers: OpenAI, Anthropic, Google AI Studio,
+  * Supports 14+ providers: OpenAI, Anthropic, Google AI Studio,
   * Google Vertex, AWS Bedrock, AWS SageMaker, Azure OpenAI, Hugging Face,
-  * LiteLLM, Mistral, Ollama, OpenAI Compatible, and OpenRouter.
+  * LiteLLM, Mistral, Ollama, OpenAI Compatible, OpenRouter, and more.
   *
   * @category Factory
   *
package/dist/index.js CHANGED
@@ -1,7 +1,7 @@
  /**
   * NeuroLink AI Toolkit
   *
-  * A unified AI provider interface with support for 13+ providers,
+  * A unified AI provider interface with support for 14+ providers,
   * automatic fallback, streaming, MCP tool integration, HITL security,
   * Redis persistence, and enterprise-grade middleware.
   *
@@ -76,9 +76,9 @@ export const VERSION = "1.0.0";
   * Quick start factory function for creating AI provider instances.
   *
   * Creates a configured AI provider instance ready for immediate use.
-  * Supports all 13 providers: OpenAI, Anthropic, Google AI Studio,
+  * Supports 14+ providers: OpenAI, Anthropic, Google AI Studio,
   * Google Vertex, AWS Bedrock, AWS SageMaker, Azure OpenAI, Hugging Face,
-  * LiteLLM, Mistral, Ollama, OpenAI Compatible, and OpenRouter.
+  * LiteLLM, Mistral, Ollama, OpenAI Compatible, OpenRouter, and more.
   *
   * @category Factory
   *
@@ -1,7 +1,7 @@
  /**
   * NeuroLink AI Toolkit
   *
-  * A unified AI provider interface with support for 13+ providers,
+  * A unified AI provider interface with support for 14+ providers,
   * automatic fallback, streaming, MCP tool integration, HITL security,
   * Redis persistence, and enterprise-grade middleware.
   *
@@ -60,9 +60,9 @@ export declare const VERSION = "1.0.0";
   * Quick start factory function for creating AI provider instances.
   *
   * Creates a configured AI provider instance ready for immediate use.
-  * Supports all 13 providers: OpenAI, Anthropic, Google AI Studio,
+  * Supports 14+ providers: OpenAI, Anthropic, Google AI Studio,
   * Google Vertex, AWS Bedrock, AWS SageMaker, Azure OpenAI, Hugging Face,
-  * LiteLLM, Mistral, Ollama, OpenAI Compatible, and OpenRouter.
+  * LiteLLM, Mistral, Ollama, OpenAI Compatible, OpenRouter, and more.
   *
   * @category Factory
   *
package/dist/lib/index.js CHANGED
@@ -1,7 +1,7 @@
  /**
   * NeuroLink AI Toolkit
   *
-  * A unified AI provider interface with support for 13+ providers,
+  * A unified AI provider interface with support for 14+ providers,
   * automatic fallback, streaming, MCP tool integration, HITL security,
   * Redis persistence, and enterprise-grade middleware.
   *
@@ -76,9 +76,9 @@ export const VERSION = "1.0.0";
   * Quick start factory function for creating AI provider instances.
   *
   * Creates a configured AI provider instance ready for immediate use.
-  * Supports all 13 providers: OpenAI, Anthropic, Google AI Studio,
+  * Supports 14+ providers: OpenAI, Anthropic, Google AI Studio,
   * Google Vertex, AWS Bedrock, AWS SageMaker, Azure OpenAI, Hugging Face,
-  * LiteLLM, Mistral, Ollama, OpenAI Compatible, and OpenRouter.
+  * LiteLLM, Mistral, Ollama, OpenAI Compatible, OpenRouter, and more.
   *
   * @category Factory
   *
@@ -0,0 +1,6 @@
+ import { Hippocampus, type HippocampusConfig } from "@juspay/hippocampus";
+ export type { HippocampusConfig };
+ export type Memory = HippocampusConfig & {
+   enabled?: boolean;
+ };
+ export declare function initializeHippocampus(config: HippocampusConfig): Hippocampus | null;
@@ -0,0 +1,20 @@
+ import { Hippocampus } from "@juspay/hippocampus";
+ import { logger } from "../utils/logger.js";
+ export function initializeHippocampus(config) {
+   try {
+     const instance = new Hippocampus(config);
+     logger.info("[memoryInitializer] Memory initialized successfully", {
+       storageType: config.storage?.type || "sqlite",
+       maxWords: config.maxWords || 50,
+       hasCustomPrompt: !!config.prompt,
+     });
+     return instance;
+   }
+   catch (error) {
+     logger.warn("[memoryInitializer] Failed to initialize memory; disabling", {
+       error: error instanceof Error ? error.message : String(error),
+     });
+     return null;
+   }
+ }
+ //# sourceMappingURL=hippocampusInitializer.js.map
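`initializeHippocampus` above fails open: a constructor error is logged and converted to `null` rather than thrown, so callers can continue without memory. The contract can be sketched generically; the constructor is injected here so no `@juspay/hippocampus` dependency is assumed:

```javascript
// Fail-open construction: callers treat a null return as "memory disabled"
// and proceed without it, mirroring the initializer in this diff.
function initializeMemory(Ctor, config) {
  try {
    return new Ctor(config);
  } catch (error) {
    console.warn("memory disabled:", error instanceof Error ? error.message : String(error));
    return null;
  }
}
```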
@@ -24,7 +24,7 @@ import type { BatchOperationResult } from "./types/typeAliases.js";
  /**
   * NeuroLink - Universal AI Development Platform
   *
-  * Main SDK class providing unified access to 13+ AI providers with enterprise features:
+  * Main SDK class providing unified access to 14+ AI providers with enterprise features:
   * - Multi-provider support (OpenAI, Anthropic, Google AI Studio, Google Vertex, AWS Bedrock, etc.)
   * - MCP (Model Context Protocol) tool integration with 58+ external servers
   * - Human-in-the-Loop (HITL) security workflows for regulated industries
@@ -126,6 +126,8 @@ export declare class NeuroLink {
    private _sessionCostUsd;
    private fileRegistry;
    private cachedFileTools;
+   private memoryInstance?;
+   private memorySDKConfig?;
    /**
     * Extract and set Langfuse context from options with proper async scoping
     */
@@ -138,6 +140,11 @@ export declare class NeuroLink {
     * Async initialization called during generate/stream
     */
    private ensureMem0Ready;
+   private initializeMemoryConfig;
+   /**
+    * Lazy initialization for memory — called during generate/stream.
+    */
+   private ensureMemoryReady;
    /**
     * Context storage for tool execution
     * This context will be merged with any runtime context passed by the AI model
@@ -228,6 +235,16 @@ export declare class NeuroLink {
    private extractMemoryContext;
    /** Store conversation turn in mem0 */
    private storeMem0ConversationTurn;
+   /**
+    * Retrieve condensed memory for a user.
+    * Returns the input text enhanced with memory context, or unchanged if no memory.
+    */
+   private retrieveMemory;
+   /**
+    * Store a conversation turn in memory (non-blocking).
+    * Calls add(userId, content) which internally condenses old + new via LLM.
+    */
+   private storeMemoryInBackground;
    /**
     * Set up HITL event forwarding to main emitter
     */
@@ -375,7 +392,7 @@ export declare class NeuroLink {
   * Generate AI response with comprehensive feature support.
   *
   * Primary method for AI generation with support for all NeuroLink features:
-  * - Multi-provider support (13+ providers)
+  * - Multi-provider support (14+ providers)
   * - MCP tool integration
   * - Structured JSON output with Zod schemas
   * - Conversation memory (Redis or in-memory)
@@ -34,6 +34,7 @@ import { directToolsServer } from "./mcp/servers/agent/directToolsServer.js";
  import { MCPToolRegistry } from "./mcp/toolRegistry.js";
  import { initializeMem0 } from "./memory/mem0Initializer.js";
  import { createMemoryRetrievalTools } from "./memory/memoryRetrievalTools.js";
+ import { initializeHippocampus, } from "./memory/hippocampusInitializer.js";
  import { flushOpenTelemetry, getLangfuseHealthStatus, initializeOpenTelemetry, isOpenTelemetryInitialized, setLangfuseContext, shutdownOpenTelemetry, } from "./services/server/ai/observability/instrumentation.js";
  import { getConversationMessages, storeConversationTurn, } from "./utils/conversationMemory.js";
  // Enhanced error handling imports
@@ -56,7 +57,7 @@ import { runWorkflow } from "./workflow/core/workflowRunner.js";
  /**
   * NeuroLink - Universal AI Development Platform
   *
-  * Main SDK class providing unified access to 13+ AI providers with enterprise features:
+  * Main SDK class providing unified access to 14+ AI providers with enterprise features:
   * - Multi-provider support (OpenAI, Anthropic, Google AI Studio, Google Vertex, AWS Bedrock, etc.)
   * - MCP (Model Context Protocol) tool integration with 58+ external servers
   * - Human-in-the-Loop (HITL) security workflows for regulated industries
@@ -180,6 +181,9 @@ export class NeuroLink {
    fileRegistry;
    // Cached file tools to avoid redundant createFileTools() calls per generate/stream
    cachedFileTools = null;
+   // Memory instance and config
+   memoryInstance;
+   memorySDKConfig;
    /**
     * Extract and set Langfuse context from options with proper async scoping
     */
@@ -274,6 +278,32 @@ export class NeuroLink {
      this.mem0Instance = await initializeMem0(this.mem0Config);
      return this.mem0Instance;
    }
+   initializeMemoryConfig() {
+     const memory = this.conversationMemoryConfig?.conversationMemory?.memory;
+     if (!memory?.enabled) {
+       return false;
+     }
+     this.memorySDKConfig = memory;
+     return true;
+   }
+   /**
+    * Lazy initialization for memory — called during generate/stream.
+    */
+   ensureMemoryReady() {
+     if (this.memoryInstance !== undefined) {
+       return this.memoryInstance;
+     }
+     if (!this.initializeMemoryConfig()) {
+       this.memoryInstance = null;
+       return null;
+     }
+     if (!this.memorySDKConfig) {
+       this.memoryInstance = null;
+       return null;
+     }
+     this.memoryInstance = initializeHippocampus(this.memorySDKConfig);
+     return this.memoryInstance;
+   }
    /**
     * Context storage for tool execution
     * This context will be merged with any runtime context passed by the AI model
@@ -682,6 +712,39 @@ Current user's request: ${currentInput}`;
        async_mode: true,
      });
    }
+   /**
+    * Retrieve condensed memory for a user.
+    * Returns the input text enhanced with memory context, or unchanged if no memory.
+    */
+   async retrieveMemory(inputText, userId) {
+     const client = this.ensureMemoryReady();
+     if (!client) {
+       return inputText;
+     }
+     const memory = await client.get(userId);
+     if (!memory) {
+       return inputText;
+     }
+     return this.formatMemoryContext(memory, inputText);
+   }
+   /**
+    * Store a conversation turn in memory (non-blocking).
+    * Calls add(userId, content) which internally condenses old + new via LLM.
+    */
+   storeMemoryInBackground(originalPrompt, responseContent, userId) {
+     setImmediate(async () => {
+       try {
+         const client = this.ensureMemoryReady();
+         if (client) {
+           const content = `User: ${originalPrompt}\nAssistant: ${responseContent}`;
+           await client.add(userId, content);
+         }
+       }
+       catch (error) {
+         logger.warn("Memory storage failed:", error);
+       }
+     });
+   }
    /**
     * Set up HITL event forwarding to main emitter
     */
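`storeMemoryInBackground` above defers the write with `setImmediate` so `generate()` can return before the LLM-powered condensation runs, and failures only produce a warning. A runnable sketch of that fire-and-forget shape, with a fake client in place of the Hippocampus instance:

```javascript
// Defer the memory write to the next event-loop turn; the caller never
// awaits it, and errors are swallowed after a warning (as in the SDK hunk).
function storeInBackground(client, userId, prompt, response) {
  setImmediate(async () => {
    try {
      await client.add(userId, `User: ${prompt}\nAssistant: ${response}`);
    } catch (error) {
      console.warn("Memory storage failed:", error);
    }
  });
}
```

The trade-off is durability: a process that exits before the next tick loses the turn, which the SDK accepts in exchange for never blocking the response path.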
@@ -1480,7 +1543,7 @@ Current user's request: ${currentInput}`;
   * Generate AI response with comprehensive feature support.
   *
   * Primary method for AI generation with support for all NeuroLink features:
-  * - Multi-provider support (13+ providers)
+  * - Multi-provider support (14+ providers)
   * - MCP tool integration
   * - Structured JSON output with Zod schemas
   * - Conversation memory (Redis or in-memory)
@@ -1635,6 +1698,17 @@ Current user's request: ${currentInput}`;
        logger.warn("Mem0 memory retrieval failed:", error);
      }
    }
+   // Memory retrieval
+   if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+       options.context?.userId) {
+     try {
+       options.input.text = await this.retrieveMemory(options.input.text, options.context.userId);
+       logger.debug("Memory retrieval successful");
+     }
+     catch (error) {
+       logger.warn("Memory retrieval failed:", error);
+     }
+   }
    const startTime = Date.now();
    // Apply orchestration if enabled and no specific provider/model requested
    if (this.enableOrchestration && !options.provider && !options.model) {
@@ -1863,6 +1937,12 @@ Current user's request: ${currentInput}`;
      }
    });
  }
+ // Memory storage
+ if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+     options.context?.userId &&
+     generateResult.content?.trim()) {
+   this.storeMemoryInBackground(originalPrompt ?? "", generateResult.content.trim(), options.context.userId);
+ }
  }
  /**
   * Handle PPT generation mode
@@ -3150,6 +3230,17 @@ Current user's request: ${currentInput}`;
        logger.warn("Mem0 memory retrieval failed:", error);
      }
    }
+   // Memory retrieval
+   if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+       options.context?.userId) {
+     try {
+       options.input.text = await this.retrieveMemory(options.input.text, options.context.userId);
+       logger.debug("Memory retrieval successful");
+     }
+     catch (error) {
+       logger.warn("Memory retrieval failed:", error);
+     }
+   }
    // Apply orchestration if enabled and no specific provider/model requested
    if (this.enableOrchestration && !options.provider && !options.model) {
      try {
@@ -3405,6 +3496,11 @@ Current user's request: ${currentInput}`;
    }
  });
  }
+ if (this.conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
+     enhancedOptions.context?.userId &&
+     accumulatedContent?.trim()) {
+   this.storeMemoryInBackground(originalPrompt ?? "", accumulatedContent.trim(), enhancedOptions.context?.userId);
+ }
  }
  /**
   * Validate stream input with comprehensive error reporting
@@ -33,6 +33,8 @@
   * - Current time (ISO): `new Date().toISOString()`
   */
  import type { Mem0Config } from "../memory/mem0Initializer.js";
+ import type { Memory } from "../memory/hippocampusInitializer.js";
+ export type { Memory };
  /**
   * Configuration for conversation memory feature
   */
@@ -53,6 +55,8 @@ export type ConversationMemoryConfig = {
    mem0Enabled?: boolean;
    /** Configuration for mem0 cloud API integration */
    mem0Config?: Mem0Config;
+   /** Memory SDK config (condensed key-value memory per user). Set enabled: true to activate. */
+   memory?: Memory;
    /** Redis configuration (optional) - overrides environment variables */
    redisConfig?: RedisStorageConfig;
    /** Context compaction configuration */
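The new `memory` field above is read by the SDK behind a double gate: `memory.enabled` must be true and a `userId` must be present in the call context. A small predicate mirroring that gating condition from the generate/stream hunks (the config shape follows the type diff; everything else is illustrative):

```javascript
// True only when the optional chain from the diff resolves to enabled: true
// AND a userId is available; otherwise memory retrieval/storage is skipped.
function memoryActive(conversationMemoryConfig, context) {
  return Boolean(
    conversationMemoryConfig?.conversationMemory?.memory?.enabled &&
      context?.userId,
  );
}
```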
@@ -0,0 +1,6 @@
+ import { Hippocampus, type HippocampusConfig } from "@juspay/hippocampus";
+ export type { HippocampusConfig };
+ export type Memory = HippocampusConfig & {
+   enabled?: boolean;
+ };
+ export declare function initializeHippocampus(config: HippocampusConfig): Hippocampus | null;
@@ -0,0 +1,19 @@
+ import { Hippocampus } from "@juspay/hippocampus";
+ import { logger } from "../utils/logger.js";
+ export function initializeHippocampus(config) {
+   try {
+     const instance = new Hippocampus(config);
+     logger.info("[memoryInitializer] Memory initialized successfully", {
+       storageType: config.storage?.type || "sqlite",
+       maxWords: config.maxWords || 50,
+       hasCustomPrompt: !!config.prompt,
+     });
+     return instance;
+   }
+   catch (error) {
+     logger.warn("[memoryInitializer] Failed to initialize memory; disabling", {
+       error: error instanceof Error ? error.message : String(error),
+     });
+     return null;
+   }
+ }
@@ -24,7 +24,7 @@ import type { BatchOperationResult } from "./types/typeAliases.js";
  /**
   * NeuroLink - Universal AI Development Platform
   *
-  * Main SDK class providing unified access to 13+ AI providers with enterprise features:
+  * Main SDK class providing unified access to 14+ AI providers with enterprise features:
   * - Multi-provider support (OpenAI, Anthropic, Google AI Studio, Google Vertex, AWS Bedrock, etc.)
   * - MCP (Model Context Protocol) tool integration with 58+ external servers
   * - Human-in-the-Loop (HITL) security workflows for regulated industries
@@ -126,6 +126,8 @@ export declare class NeuroLink {
    private _sessionCostUsd;
    private fileRegistry;
    private cachedFileTools;
+   private memoryInstance?;
+   private memorySDKConfig?;
    /**
     * Extract and set Langfuse context from options with proper async scoping
     */
@@ -138,6 +140,11 @@ export declare class NeuroLink {
     * Async initialization called during generate/stream
     */
    private ensureMem0Ready;
+   private initializeMemoryConfig;
+   /**
+    * Lazy initialization for memory — called during generate/stream.
+    */
+   private ensureMemoryReady;
    /**
     * Context storage for tool execution
     * This context will be merged with any runtime context passed by the AI model
@@ -228,6 +235,16 @@ export declare class NeuroLink {
    private extractMemoryContext;
    /** Store conversation turn in mem0 */
    private storeMem0ConversationTurn;
+   /**
+    * Retrieve condensed memory for a user.
+    * Returns the input text enhanced with memory context, or unchanged if no memory.
+    */
+   private retrieveMemory;
+   /**
+    * Store a conversation turn in memory (non-blocking).
+    * Calls add(userId, content) which internally condenses old + new via LLM.
+    */
+   private storeMemoryInBackground;
    /**
     * Set up HITL event forwarding to main emitter
     */
@@ -375,7 +392,7 @@ export declare class NeuroLink {
   * Generate AI response with comprehensive feature support.
   *
   * Primary method for AI generation with support for all NeuroLink features:
-  * - Multi-provider support (13+ providers)
+  * - Multi-provider support (14+ providers)
   * - MCP tool integration
   * - Structured JSON output with Zod schemas
   * - Conversation memory (Redis or in-memory)