@juspay/neurolink 7.47.3 → 7.48.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (31)
  1. package/CHANGELOG.md +8 -0
  2. package/README.md +68 -874
  3. package/dist/cli/loop/session.d.ts +13 -0
  4. package/dist/cli/loop/session.js +78 -17
  5. package/dist/lib/providers/sagemaker/adaptive-semaphore.d.ts +1 -13
  6. package/dist/lib/providers/sagemaker/client.d.ts +1 -1
  7. package/dist/lib/providers/sagemaker/config.d.ts +1 -1
  8. package/dist/lib/providers/sagemaker/detection.d.ts +1 -1
  9. package/dist/lib/providers/sagemaker/errors.d.ts +1 -1
  10. package/dist/lib/providers/sagemaker/index.d.ts +1 -1
  11. package/dist/lib/providers/sagemaker/language-model.d.ts +1 -1
  12. package/dist/lib/providers/sagemaker/parsers.d.ts +1 -1
  13. package/dist/lib/providers/sagemaker/streaming.d.ts +1 -1
  14. package/dist/lib/providers/sagemaker/structured-parser.d.ts +1 -1
  15. package/dist/lib/types/providers.d.ts +469 -0
  16. package/dist/providers/sagemaker/adaptive-semaphore.d.ts +1 -13
  17. package/dist/providers/sagemaker/client.d.ts +1 -1
  18. package/dist/providers/sagemaker/config.d.ts +1 -1
  19. package/dist/providers/sagemaker/detection.d.ts +1 -1
  20. package/dist/providers/sagemaker/errors.d.ts +1 -1
  21. package/dist/providers/sagemaker/index.d.ts +1 -1
  22. package/dist/providers/sagemaker/language-model.d.ts +3 -3
  23. package/dist/providers/sagemaker/parsers.d.ts +1 -1
  24. package/dist/providers/sagemaker/streaming.d.ts +1 -1
  25. package/dist/providers/sagemaker/structured-parser.d.ts +1 -1
  26. package/dist/types/providers.d.ts +469 -0
  27. package/package.json +1 -1
  28. package/dist/lib/providers/sagemaker/types.d.ts +0 -456
  29. package/dist/lib/providers/sagemaker/types.js +0 -7
  30. package/dist/providers/sagemaker/types.d.ts +0 -456
  31. package/dist/providers/sagemaker/types.js +0 -7
package/README.md CHANGED
@@ -7,924 +7,118 @@
  [![TypeScript](https://img.shields.io/badge/TypeScript-Ready-blue)](https://www.typescriptlang.org/)
  [![CI](https://github.com/juspay/neurolink/workflows/CI/badge.svg)](https://github.com/juspay/neurolink/actions)
 
- > **Enterprise AI Development Platform** with universal provider support, factory pattern architecture, and **access to 100+ AI models** through LiteLLM integration. Production-ready with TypeScript support.
+ Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.
 
- **NeuroLink** is an Enterprise AI Development Platform that unifies **12 major AI providers** with intelligent fallback and built-in tool support. Available as both a **programmatic SDK** and **professional CLI tool**. Features LiteLLM integration for **100+ models**, plus 6 core tools working across all providers. Extracted from production use at Juspay.
+ ## What's New (Q4 2025)
 
- ## 🎉 **NEW: LiteLLM Integration - Access 100+ AI Models**
+ - **Human-in-the-loop workflows** – Pause generation for user approval/input before tool execution or continuing. → [`docs/features/hitl.md`](docs/features/hitl.md)
+ - **Guardrails middleware** – Block PII, profanity, and unsafe content with built-in content filtering. → [`docs/features/guardrails.md`](docs/features/guardrails.md)
+ - **Context summarization** – Automatic conversation compression for long-running sessions with memory. → [`docs/features/context-summarization.md`](docs/features/context-summarization.md)
+ - **Redis conversation export** – Export full session history as JSON for analytics and debugging. → [`docs/features/conversation-history.md`](docs/features/conversation-history.md) (configuration sketch below)
 
- **NeuroLink now supports LiteLLM**, providing unified access to **100+ AI models** from all major providers through a single interface:
+ > **Q3 highlights** (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in [Platform Capabilities](#platform-capabilities-at-a-glance) below.
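A minimal sketch of wiring these additions from the SDK: the `conversationMemory` shape below is copied from the SDK example later in this diff, while the commented `guardrails` and `humanInTheLoop` keys are hypothetical placeholders for the options documented in the linked feature guides.

```typescript
import { NeuroLink } from "@juspay/neurolink";

// conversationMemory mirrors the SDK example further down this diff.
// The commented keys are assumed names, NOT confirmed NeuroLink API —
// see docs/features/guardrails.md and docs/features/hitl.md for the real options.
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Redis-backed history, exportable as JSON
  },
  // guardrails: { blockPII: true, blockProfanity: true }, // hypothetical shape
  // humanInTheLoop: { approveToolCalls: true },           // hypothetical shape
});
```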
 
- - **🔄 Universal Access**: OpenAI, Anthropic, Google, Mistral, Meta, and more
- - **🎯 Unified Interface**: OpenAI-compatible API for all models
- - **💰 Cost Optimization**: Automatic routing to cost-effective models
- - **⚡ Load Balancing**: Automatic failover and load distribution
- - **📊 Analytics**: Built-in usage tracking and monitoring
+ ## Get Started in Two Steps
 
  ```bash
- # Quick start with LiteLLM
- pip install litellm && litellm --port 4000
+ # 1. Run the interactive setup wizard (select providers, validate keys)
+ pnpm dlx @juspay/neurolink setup
 
- # Use any of 100+ models through one interface
- npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
- npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
- npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
+ # 2. Start generating with automatic provider selection
+ npx @juspay/neurolink generate "Write a launch plan for multimodal chat"
  ```
 
- **[📖 Complete LiteLLM Integration Guide](./docs/LITELLM-INTEGRATION.md)** - Setup, configuration, and 100+ model access
-
- ## 🎉 **NEW: SageMaker Integration - Deploy Your Custom AI Models**
-
- **NeuroLink now supports Amazon SageMaker**, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:
-
- - **🏗️ Custom Model Hosting** - Deploy your fine-tuned models on AWS infrastructure
- - **💰 Cost Control** - Pay only for inference usage with auto-scaling capabilities
- - **🔒 Enterprise Security** - Full control over model infrastructure and data privacy
- - **⚡ Performance** - Dedicated compute resources with predictable latency
- - **📊 Monitoring** - Built-in CloudWatch metrics and logging
+ Need a persistent workspace? Launch loop mode:
 
  ```bash
- # Quick start with SageMaker
- export AWS_ACCESS_KEY_ID="your-access-key"
- export AWS_SECRET_ACCESS_KEY="your-secret-key"
- export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
-
- # Use your custom deployed models
- npx @juspay/neurolink generate "Analyze this data" --provider sagemaker
- npx @juspay/neurolink sagemaker status # Check endpoint health
- npx @juspay/neurolink sagemaker benchmark my-endpoint # Performance testing
- ```
-
- **[📖 Complete SageMaker Integration Guide](./docs/SAGEMAKER-INTEGRATION.md)** - Setup, deployment, and custom model access
-
- ## 🚀 Enterprise Platform Features
-
- - **🏭 Factory Pattern Architecture** - Unified provider management through BaseProvider inheritance
- - **🔧 Tools-First Design** - All providers include built-in tool support without additional configuration
- - **🔗 LiteLLM Integration** - **100+ models** from all major providers through unified interface
- - **🏢 Enterprise Proxy Support** - Comprehensive corporate proxy support with MCP compatibility
- - **🏗️ Enterprise Architecture** - Production-ready with clean abstractions
- - **🔄 Configuration Management** - Flexible provider configuration with automatic backups
- - **✅ Type Safety** - Industry-standard TypeScript interfaces
- - **⚡ Performance** - Fast response times with streaming support and 68% improved status checks
- - **🛡️ Error Recovery** - Graceful failures with provider fallback and retry logic
- - **📊 Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
- - **🎯 Real-time Event Monitoring** - EventEmitter integration for progress tracking and debugging
- - **🔧 External MCP Integration** - Model Context Protocol with 6 built-in tools + full external MCP server support
- - **🚀 Lighthouse Integration** - Unified tool registration API supporting both object and array formats for seamless Lighthouse tool import
-
- ---
-
- ## 🚀 Quick Start
-
- ### 🎉 **NEW: Revolutionary Interactive Setup** - Transform Your Developer Experience!
-
- **🚀 BREAKTHROUGH: Setup in 2-3 minutes (vs 15+ minutes manual setup)**
-
- ```bash
- # 🎯 **MAIN SETUP WIZARD** - Beautiful guided experience
- pnpm cli setup
-
- # ✨ **REVOLUTIONARY FEATURES:**
- # 🎨 Beautiful ASCII art welcome screen
- # 📊 Interactive provider comparison table
- # ⚡ Real-time credential validation with format checking
- # 🔄 Atomic .env file management (preserves existing content)
- # 🧠 Smart recommendations (Google AI free tier, OpenAI for pro users)
- # 🛡️ Cross-platform compatibility with graceful error recovery
- # 📈 90% reduction in setup errors vs manual configuration
-
- # 🚀 **INSTANT PRODUCTIVITY** - Use any AI provider immediately:
- npx @juspay/neurolink generate "Hello, AI" # Auto-selects best provider
- npx @juspay/neurolink gen "Write code" # Shortest form
- npx @juspay/neurolink stream "Tell a story" # Real-time streaming
- npx @juspay/neurolink status # Check all providers
+ npx @juspay/neurolink loop --enable-conversation-memory
  ```
 
- **🎯 Why This Changes Everything:**
-
- - **⏱️ Time Savings**: 15+ minutes → 2-3 minutes (83% faster)
- - **🛡️ Error Reduction**: 90% fewer credential/configuration errors
- - **🎨 Professional UX**: Beautiful terminal interface with colors and animations
- - **🔍 Smart Validation**: Real-time API key format checking and endpoint testing
- - **🔄 Safe Management**: Preserves existing .env content, creates backups automatically
- - **🧠 Intelligent Guidance**: Context-aware recommendations based on use case
+ Skip the wizard and configure manually? See [`docs/getting-started/provider-setup.md`](docs/getting-started/provider-setup.md).
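If you do configure manually, the pre-7.48 pattern still applies: export a provider key (the variable names appear in the removed setup sections below), then pin the matching provider id. A minimal sketch using only calls shown elsewhere in this diff:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Assumes GOOGLE_AI_API_KEY is already exported in your shell or .env.
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Hello, AI" },
  provider: "google-ai", // pin a provider instead of auto-selection
});
console.log(result.content);
```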
 
- > **Developer Feedback**: _"Setup went from the most frustrating part to the most delightful part of using NeuroLink"_
+ ## CLI & SDK Essentials
 
- ### Provider-Specific Setup (if you prefer targeted setup)
+ The `neurolink` CLI mirrors the SDK, so teams can script experiments and codify them later.
 
  ```bash
- # Setup individual providers with guided wizards
- npx @juspay/neurolink setup --provider google-ai # Free tier, perfect for beginners
- or pnpm cli setup-google-ai
-
- npx @juspay/neurolink setup --provider openai # Industry standard, professional use
- or pnpm cli setup-openai
-
- npx @juspay/neurolink setup --provider anthropic # Advanced reasoning, safety-focused
- or pnpm cli setup-anthropic
-
- npx @juspay/neurolink setup --provider azure # Enterprise features, compliance
- or pnpm cli setup-azure
-
- npx @juspay/neurolink setup --provider bedrock # AWS ecosystem integration
- or pnpm cli setup-bedrock
-
- npx @juspay/neurolink setup --provider huggingface # Open source models, 100k+ options
- or pnpm cli setup-huggingface
-
- pnpm cli setup-gcp # For using Vertex
- # Check setup status anytime
- npx @juspay/neurolink setup --status
- npx @juspay/neurolink setup --list # View all available providers
- ```
-
- ### Alternative: Manual Setup (Advanced Users)
-
- ```bash
- # Option 1: LiteLLM - Access 100+ models through one interface
- pip install litellm && litellm --port 4000
- export LITELLM_BASE_URL="http://localhost:4000"
- export LITELLM_API_KEY="sk-anything"
-
- # Use any of 100+ models
- npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
- npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
- # Option 2: OpenAI Compatible - Use any OpenAI-compatible endpoint with auto-discovery
- export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
- export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"
- # Auto-discovers available models via /v1/models endpoint
- npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
-
- # Or specify a model explicitly
- export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
- npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
-
- # Option 3: Direct Provider - Quick setup with Google AI Studio (free tier)
- export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
- npx @juspay/neurolink generate "Hello, AI" --provider google-ai
-
- # Option 4: Amazon SageMaker - Use your custom deployed models
- export AWS_ACCESS_KEY_ID="your-access-key"
- export AWS_SECRET_ACCESS_KEY="your-secret-key"
- export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
- npx @juspay/neurolink generate "Hello, AI" --provider sagemaker
- ```
-
- ```bash
- # SDK Installation for using in your typescript projects
- npm install @juspay/neurolink
-
- # 🆕 NEW: External MCP Server Integration Quick Test
- node -e "
- const { NeuroLink } = require('@juspay/neurolink');
- (async () => {
- const neurolink = new NeuroLink();
+ # Discover available providers and models
+ npx @juspay/neurolink status
+ npx @juspay/neurolink models list --provider google-ai
 
- // Add external filesystem MCP server
- await neurolink.addExternalMCPServer('filesystem', {
- command: 'npx',
- args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
- transport: 'stdio'
- });
+ # Route to a specific provider/model
+ npx @juspay/neurolink generate "Summarize customer feedback" \
+ --provider azure --model gpt-4o-mini
 
- // External tools automatically available in generate()
- const result = await neurolink.generate({
- input: { text: 'List files in the current directory' }
- });
- console.log('🎉 External MCP integration working!');
- console.log(result.content);
- })();
- "
+ # Turn on analytics + evaluation for observability
+ npx @juspay/neurolink generate "Draft release notes" \
+ --enable-analytics --enable-evaluation --format json
  ```
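Since `--format json` emits a structured payload (field names such as `content`, `provider`, and `usage` appear in the JSON example removed further down), the CLI output can be consumed from scripts. A sketch, assuming the CLI writes only the JSON document to stdout:

```typescript
import { execFileSync } from "node:child_process";

// Assumption: the JSON payload is the only thing written to stdout.
const raw = execFileSync(
  "npx",
  ["@juspay/neurolink", "generate", "Summary of AI trends", "--format", "json"],
  { encoding: "utf8" },
);
const res = JSON.parse(raw);
console.log(`${res.provider}: ${res.usage?.totalTokens} tokens`);
console.log(res.content);
```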
 
- ### Basic Usage
-
  ```typescript
  import { NeuroLink } from "@juspay/neurolink";
 
- // Auto-select best available provider
- const neurolink = new NeuroLink();
- const autoResult = await neurolink.generate({
- input: { text: "Write a business email" },
- provider: "google-ai", // or let it auto-select
- timeout: "30s",
- });
-
- console.log(autoResult.content);
- console.log(`Used: ${autoResult.provider}`);
- ```
-
- ### Conversation Memory
-
- NeuroLink supports automatic conversation history management that maintains context across multiple turns within sessions. This enables AI to remember previous interactions and provide contextually aware responses. Session-based memory isolation ensures privacy between different conversations.
-
- ```typescript
- // Enable conversation memory with configurable limits
  const neurolink = new NeuroLink({
  conversationMemory: {
  enabled: true,
- maxSessions: 50, // Keep last 50 sessions
- maxTurnsPerSession: 20, // Keep last 20 turns per session
+ store: "redis",
  },
+ enableOrchestration: true,
  });
- ```
-
- #### 🔗 CLI-SDK Consistency (NEW! ✨)
 
- Method aliases that match CLI command names:
-
- ```typescript
- // The following methods are equivalent:
- const result1 = await provider.generate({ input: { text: "Hello" } }); // Original
- const result2 = await provider.gen({ input: { text: "Hello" } }); // Matches CLI 'gen'
-
- // Use whichever style you prefer:
- const provider = createBestAIProvider();
-
- // Detailed method name
- const story = await provider.generate({
- input: { text: "Write a short story about AI" },
- maxTokens: 200,
- });
-
- // CLI-style method names
- const poem = await provider.generate({ input: { text: "Write a poem" } });
- const joke = await provider.gen({ input: { text: "Tell me a joke" } });
- ```
-
- ### Enhanced Features
-
- #### CLI with Analytics & Evaluation
-
- ```bash
- # Basic AI generation with auto-provider selection
- npx @juspay/neurolink generate "Write a business email"
-
- # LiteLLM with specific model
- npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
- # With analytics and evaluation
- npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug
-
- # Streaming with tools (default behavior)
- npx @juspay/neurolink stream "What time is it and write a file with the current date"
- ```
-
- #### SDK and Enhancement Features
-
- ```typescript
- import { NeuroLink } from "@juspay/neurolink";
-
- // Enhanced generation with analytics
- const neurolink = new NeuroLink();
  const result = await neurolink.generate({
- input: { text: "Write a business proposal" },
- enableAnalytics: true, // Get usage & cost data
- enableEvaluation: true, // Get AI quality scores
- context: { project: "Q1-sales" },
- });
-
- console.log("📊 Usage:", result.analytics);
- console.log("⭐ Quality:", result.evaluation);
- console.log("Response:", result.content);
- ```
-
- ### Environment Setup
-
- ```bash
- # Create .env file (automatically loaded by CLI)
- echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
- echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
- echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
-
- # 🆕 NEW: Google Vertex AI for Websearch Tool
- echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
- echo 'GOOGLE_VERTEX_PROJECT="your-gcp-project-id"' >> .env
- echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env
-
- # Test configuration
- npx @juspay/neurolink status
-
- # SDK Env Provider Check - Advanced provider testing with fallback detection
- pnpm run test:providers
- ```
-
- #### 🔍 SDK Env Provider Check Output
-
- ```bash
- # Example output:
- ✅ Google AI: Working (197 tokens)
- ⚠️ OpenAI: Failed (Fallback to google-ai)
- ⚠️ AWS Bedrock: Failed (Fallback to google-ai)
- ```
-
- ### JSON Format Support (Complete)
-
- NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:
-
- ```bash
- # CLI JSON Output - Structured data for scripts
- npx @juspay/neurolink generate "Summary of AI trends" --format json
- npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai
-
- # Example JSON Output:
- {
- "content": "AI trends include increased automation...",
- "provider": "google-ai",
- "model": "gemini-2.5-flash",
- "usage": {
- "promptTokens": 15,
- "completionTokens": 127,
- "totalTokens": 142
- },
- "responseTime": 1234
- }
- ```
-
- ```typescript
- // SDK JSON Input/Output - Full TypeScript support
- import { createBestAIProvider } from "@juspay/neurolink";
-
- const provider = createBestAIProvider();
-
- // Structured input
- const result = await provider.generate({
- input: { text: "Create a product specification" },
- schema: {
- type: "object",
- properties: {
- name: { type: "string" },
- price: { type: "number" },
- features: { type: "array", items: { type: "string" } },
- },
+ input: {
+ text: "Create a multimodal onboarding script",
+ images: ["./diagrams/architecture.png"],
  },
+ enableEvaluation: true,
+ region: "us-east-1",
  });
 
- // Access structured response
- const productData = JSON.parse(result.content);
- console.log(productData.name, productData.price, productData.features);
- ```
-
- **[📖 Complete Setup Guide](./docs/CONFIGURATION.md)** - All providers with detailed instructions
-
- ## 🔍 **NEW: Websearch Tool with Google Vertex AI Grounding**
-
- **NeuroLink now includes a powerful websearch tool** that uses Google's native search grounding technology for real-time web information:
-
- - **🔍 Native Google Search** - Uses Google's search grounding via Vertex AI
- - **🎯 Real-time Results** - Access current web information during AI conversations
- - **🔒 Credential Protection** - Only activates when Google Vertex AI credentials are properly configured
-
- ### Quick Setup & Test
-
- ```bash
- # 1. Build the project first
- pnpm run build
-
- # 2. Set up environment variables (see detailed setup below)
- cp .env.example .env
- # Edit .env with your Google Vertex AI credentials
-
- # 3. Test the websearch tool directly
- node test-websearch-grounding.js
+ console.log(result.content);
+ console.log(result.evaluation?.overallScore);
  ```
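For the analytics counterpart, the removed "SDK and Enhancement Features" example below shows `enableAnalytics` populating `result.analytics` alongside `enableEvaluation`; a sketch assuming that flag still behaves the same in 7.48:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// enableAnalytics is taken from the pre-7.48 example below; assumed unchanged.
const neurolink = new NeuroLink();
const tracked = await neurolink.generate({
  input: { text: "Write a business proposal" },
  enableAnalytics: true, // usage & cost data -> tracked.analytics
  enableEvaluation: true, // quality scores -> tracked.evaluation
});
console.log("Usage:", tracked.analytics);
console.log("Quality:", tracked.evaluation);
```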
 
- ### Complete Google Vertex AI Setup
-
- #### Configure Environment Variables
-
- ```bash
- # Add to your .env file
- GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/neurolink-service-account.json"
- GOOGLE_VERTEX_PROJECT="YOUR-PROJECT-ID"
- GOOGLE_VERTEX_LOCATION="us-central1"
- ```
-
- #### Test the Setup
-
- ```bash
- # Build the project first
- pnpm run build
-
- # Run the dedicated test script
- node test-websearch-grounding.js
- ```
-
- ### Using the Websearch Tool
-
- #### CLI Usage (Works with All Providers)
-
- ```bash
- # With specific providers - websearch works across all providers
- npx @juspay/neurolink generate "Weather in Tokyo now" --provider vertex
- ```
-
- **Note:** The websearch tool gracefully handles missing credentials - it only activates when valid Google Vertex AI credentials are configured. Without proper credentials, other tools continue to work normally and AI responses fall back to training data.
-
- ## ✨ Key Features
+ Full command and API breakdown lives in [`docs/cli/commands.md`](docs/cli/commands.md) and [`docs/sdk/api-reference.md`](docs/sdk/api-reference.md).
 
- - 🔗 **LiteLLM Integration** - **Access 100+ AI models** from all major providers through unified interface
- - 🔍 **Smart Model Auto-Discovery** - OpenAI Compatible provider automatically detects available models via `/v1/models` endpoint
- - 🏭 **Factory Pattern Architecture** - Unified provider management with BaseProvider inheritance
- - 🔧 **Tools-First Design** - All providers automatically include 7 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles, websearchGrounding)
- - 🔄 **12 AI Providers** - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, **LiteLLM**, **OpenAI Compatible**, Hugging Face, Ollama, Mistral AI, **SageMaker**
- - 💰 **Cost Optimization** - Automatic selection of cheapest models and LiteLLM routing
- - ⚡ **Automatic Fallback** - Never fail when providers are down, intelligent provider switching
- - 🖥️ **CLI + SDK** - Use from command line or integrate programmatically with TypeScript support
- - 🛡️ **Production Ready** - Enterprise-grade error handling, performance optimization, extracted from production
- - 🏢 **Enterprise Proxy Support** - Comprehensive corporate proxy support with zero configuration
- - ✅ **External MCP Integration** - Model Context Protocol with built-in tools + full external MCP server support
- - 🔍 **Smart Model Resolution** - Fuzzy matching, aliases, and capability-based search across all providers
- - 🏠 **Local AI Support** - Run completely offline with Ollama or through LiteLLM proxy
- - 🌍 **Universal Model Access** - Direct providers + 100,000+ models via Hugging Face + 100+ models via LiteLLM
- - 🧠 **Automatic Context Summarization** - Stateful, long-running conversations with automatic history summarization.
- - 📊 **Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
+ ## Platform Capabilities at a Glance
 
- ## 🛠️ External MCP Integration Status ✅ **PRODUCTION READY**
-
- | Component | Status | Description |
- | ---------------------- | -------------- | ---------------------------------------------------------------- |
- | Built-in Tools | **Working** | 6 core tools fully functional across all providers |
- | SDK Custom Tools | **Working** | Register custom tools programmatically |
- | **External MCP Tools** | **Working** | **Full external MCP server support with dynamic tool discovery** |
- | Tool Execution | **Working** | Real-time AI tool calling with all tool types |
- | **Streaming Support** | **Working** | **External MCP tools work with streaming generation** |
- | **Multi-Provider** | ✅ **Working** | **External tools work across all AI providers** |
- | **CLI Integration** | ✅ **READY** | **Production-ready with external MCP support** |
-
- ### ✅ External MCP Integration Demo
-
- ```bash
- # Test built-in tools (works immediately)
- npx @juspay/neurolink generate "What time is it?" --debug
-
- # Discover available MCP servers
- npx @juspay/neurolink mcp discover --format table
- ```
-
- ```typescript
- // 🆕 NEW: External MCP server integration (SDK)
- import { NeuroLink } from '@juspay/neurolink';
-
- const neurolink = new NeuroLink();
-
- // Add external MCP server (e.g., Bitbucket)
- await neurolink.addExternalMCPServer('bitbucket', {
- command: 'npx',
- args: ['-y', '@nexus2520/bitbucket-mcp-server'],
- transport: 'stdio',
- env: {
- BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
- BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
- BITBUCKET_BASE_URL: 'https://bitbucket.example.com'
- }
- });
-
- // Use external MCP tools in generation
- const result = await neurolink.generate({
- input: { text: 'Get pull request #123 details from the main repository' },
- disableTools: false // External MCP tools automatically available
- });
- ```
-
- ### 🔧 SDK Custom Tool Registration (NEW!)
-
- Register your own tools programmatically with the SDK:
-
- ```typescript
- import { NeuroLink } from "@juspay/neurolink";
- const neurolink = new NeuroLink();
-
- // Register a simple tool
- neurolink.registerTool("weatherLookup", {
- description: "Get current weather for a city",
- parameters: z.object({
- city: z.string().describe("City name"),
- units: z.enum(["celsius", "fahrenheit"]).optional(),
- }),
- execute: async ({ city, units = "celsius" }) => {
- // Your implementation here
- return {
- city,
- temperature: 22,
- units,
- condition: "sunny",
- };
- },
- });
-
- // Use it in generation
- const result = await neurolink.generate({
- input: { text: "What's the weather in London?" },
- provider: "google-ai",
- });
-
- // Register multiple tools - Object format (existing)
- neurolink.registerTools({
- stockPrice: {
- description: "Get stock price",
- execute: async () => ({ price: 150.25 }),
- },
- calculator: {
- description: "Calculate math",
- execute: async () => ({ result: 42 }),
- },
- });
-
- // Register multiple tools - Array format (Lighthouse compatible)
- neurolink.registerTools([
- {
- name: "lighthouseTool1",
- tool: {
- description: "Lighthouse analytics tool",
- parameters: z.object({
- merchantId: z.string(),
- dateRange: z.string().optional(),
- }),
- execute: async ({ merchantId, dateRange }) => {
- // Lighthouse tool implementation with Zod schema
- return { data: "analytics result" };
- },
- },
- },
- {
- name: "lighthouseTool2",
- tool: {
- description: "Payment processing tool",
- execute: async () => ({ status: "processed" }),
- },
- },
- ]);
- ```
-
- ## 💰 Smart Model Selection
-
- NeuroLink features intelligent model selection and cost optimization:
-
- ### Cost Optimization Features
-
- - **💰 Automatic Cost Optimization**: Selects cheapest models for simple tasks
- - **🔄 LiteLLM Model Routing**: Access 100+ models with automatic load balancing
- - **🔍 Capability-Based Selection**: Find models with specific features (vision, function calling)
- - **⚡ Intelligent Fallback**: Seamless switching when providers fail
-
- ```bash
- # Cost optimization - automatically use cheapest model
- npx @juspay/neurolink generate "Hello" --optimize-cost
-
- # LiteLLM specific model selection
- npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
- # Auto-select best available provider
- npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
- ```
-
- ## ✨ Interactive Loop Mode
-
- NeuroLink features a powerful **interactive loop mode** that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.
-
- ### Start the Loop
-
- ```bash
- npx @juspay/neurolink loop
- ```
-
- ### Example Session
-
- ```bash
- # Start the interactive session
- $ npx @juspay/neurolink loop
-
- neurolink » set provider google-ai
- ✓ provider set to google-ai
-
- neurolink » set temperature 0.8
- ✓ temperature set to 0.8
-
- neurolink » generate "Tell me a fun fact about space"
- The quietest place on Earth is an anechoic chamber at Microsoft's headquarters in Redmond, Washington. The background noise is so low that it's measured in negative decibels, and you can hear your own heartbeat.
-
- # Exit the session
- neurolink » exit
- ```
-
- ### Conversation Memory in Loop Mode
-
- Start the loop with conversation memory to have the AI remember the context of your previous commands.
-
- ```bash
- npx @juspay/neurolink loop --enable-conversation-memory
- ```
-
- ## 💻 Essential Examples
-
- ### CLI Commands
-
- ```bash
- # Text generation with automatic MCP tool detection (default)
- npx @juspay/neurolink generate "What time is it?"
-
- # Alternative short form
- npx @juspay/neurolink gen "What time is it?"
-
- # Disable tools for training-data-only responses
- npx @juspay/neurolink generate "What time is it?" --disable-tools
-
- # With custom timeout for complex prompts
- npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m
-
- # Real-time streaming with agent support (default)
- npx @juspay/neurolink stream "What time is it?"
-
- # Streaming without tools (traditional mode)
- npx @juspay/neurolink stream "Tell me a story" --disable-tools
-
- # Streaming with extended timeout
- npx @juspay/neurolink stream "Write a long story" --timeout 5m
-
- # Provider diagnostics
- npx @juspay/neurolink status --verbose
-
- # Batch processing
- echo -e "Write a haiku\nExplain gravity" > prompts.txt
- npx @juspay/neurolink batch prompts.txt --output results.json
-
- # Batch with custom timeout per request
- npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
- ```
-
- ### SDK Integration
-
- ```typescript
- // SvelteKit API route with timeout handling
- export const POST: RequestHandler = async ({ request }) => {
- const { message } = await request.json();
- const provider = createBestAIProvider();
-
- try {
- // NEW: Primary streaming method (recommended)
- const result = await provider.stream({
- input: { text: message },
- timeout: "2m", // 2 minutes for streaming
- });
-
- // Process stream
- for await (const chunk of result.stream) {
- // Handle streaming content
- console.log(chunk.content);
- }
-
- // LEGACY: Backward compatibility (still works)
- const legacyResult = await provider.stream({
- prompt: message,
- timeout: "2m", // 2 minutes for streaming
- });
- return new Response(result.toReadableStream());
- } catch (error) {
- if (error.name === "TimeoutError") {
- return new Response("Request timed out", { status: 408 });
- }
- throw error;
- }
- };
-
- // Next.js API route with timeout
- export async function POST(request: NextRequest) {
- const { prompt } = await request.json();
- const provider = createBestAIProvider();
-
- const result = await provider.generate({
- prompt,
- timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
- });
-
- return NextResponse.json({ text: result.content });
- }
- ```
-
- ## 🎬 See It In Action
-
- **No installation required!** Experience NeuroLink through comprehensive visual documentation:
-
- ### 📱 Interactive Web Demo
-
- ```bash
- cd neurolink-demo && node server.js
- # Visit http://localhost:9876 for live demo
- ```
-
- - **Real AI Integration**: All 9 providers functional with live generation
- - **Complete Use Cases**: Business, creative, and developer scenarios
- - **Performance Metrics**: Live provider analytics and response times
- - **Privacy Options**: Test local AI with Ollama
-
- ### 🖥️ CLI Demonstrations
-
- - **[CLI Help & Commands](./docs/visual-content/cli-videos/cli-01-cli-help.mp4)** - Complete command reference
- - **[Provider Status Check](./docs/visual-content/cli-videos/cli-02-provider-status.mp4)** - Connectivity verification (now with authentication and model availability checks)
- - **[Text Generation](./docs/visual-content/cli-videos/cli-03-text-generation.mp4)** - Real AI content creation
-
- ### 🌐 Web Interface Videos
-
- - **[Business Use Cases](./neurolink-demo/videos/business-use-cases.mp4)** - Professional applications
- - **[Developer Tools](./neurolink-demo/videos/developer-tools.mp4)** - Code generation and APIs
- - **[Creative Tools](./neurolink-demo/videos/creative-tools.mp4)** - Content creation
-
- **[📖 Complete Visual Documentation](./docs/VISUAL-DEMOS.md)** - All screenshots and videos
-
- ## 📚 Documentation
-
- ### Getting Started
-
- - **[🔧 Provider Setup](./docs/PROVIDER-CONFIGURATION.md)** - Complete environment configuration
- - **[🖥️ CLI Guide](./docs/CLI-GUIDE.md)** - All commands and options
- - **[🏗️ SDK Integration](./docs/FRAMEWORK-INTEGRATION.md)** - Next.js, SvelteKit, React
- - **[⚙️ Environment Variables](./docs/ENVIRONMENT-VARIABLES.md)** - Full configuration guide
-
- ### Advanced Features
-
- - **[🏭 Factory Pattern Migration](./docs/FACTORY-PATTERN-MIGRATION.md)** - Guide to the new unified provider architecture
- - **[🔄 MCP Foundation](./docs/MCP-FOUNDATION.md)** - Model Context Protocol architecture
- - **[⚡ Dynamic Models](./docs/DYNAMIC-MODELS.md)** - Self-updating model configurations and cost optimization
- - **[🧠 AI Analysis Tools](./docs/AI-ANALYSIS-TOOLS.md)** - Usage optimization and benchmarking
- - **[🛠️ AI Workflow Tools](./docs/AI-WORKFLOW-TOOLS.md)** - Development lifecycle assistance
- - **[🎬 Visual Demos](./docs/VISUAL-DEMOS.md)** - Screenshots and videos
-
- ### Reference
-
- - **[📚 API Reference](./docs/API-REFERENCE.md)** - Complete TypeScript API
- - **[🔗 Framework Integration](./docs/FRAMEWORK-INTEGRATION.md)** - SvelteKit, Next.js, Express.js
-
- ## 🏗️ Supported Providers & Models
-
- | Provider | Models | Auth Method | Free Tier | Tool Support | Key Benefit |
- | --------------------------- | ---------------------------------- | ------------------ | --------- | ------------ | -------------------------------- |
- | **🔗 LiteLLM** 🆕 | **100+ Models** (All Providers) | Proxy Server | Varies | ✅ Full | **Universal Access** |
- | **🔗 OpenAI Compatible** 🆕 | **Any OpenAI-compatible endpoint** | API Key + Base URL | Varies | ✅ Full | **Auto-Discovery + Flexibility** |
- | **Google AI Studio** | Gemini 2.5 Flash/Pro | API Key | ✅ | ✅ Full | Free Tier Available |
- | **OpenAI** | GPT-4o, GPT-4o-mini | API Key | ❌ | ✅ Full | Industry Standard |
- | **Anthropic** | Claude 3.5 Sonnet | API Key | ❌ | ✅ Full | Advanced Reasoning |
- | **Amazon Bedrock** | Claude 3.5/3.7 Sonnet | AWS Credentials | ❌ | ✅ Full\* | Enterprise Scale |
- | **Google Vertex AI** | Gemini 2.5 Flash | Service Account | ❌ | ✅ Full | Enterprise Google |
- | **Azure OpenAI** | GPT-4, GPT-3.5 | API Key + Endpoint | ❌ | ✅ Full | Microsoft Ecosystem |
- | **Ollama** 🆕 | Llama 3.2, Gemma, Mistral (Local) | None (Local) | ✅ | ⚠️ Partial | Complete Privacy |
- | **Hugging Face** 🆕 | 100,000+ open source models | API Key | ✅ | ⚠️ Partial | Open Source |
- | **Mistral AI** 🆕 | Tiny, Small, Medium, Large | API Key | ✅ | ✅ Full | European/GDPR |
- | **Amazon SageMaker** 🆕 | Custom Models (Your Endpoints) | AWS Credentials | ❌ | ✅ Full | Custom Model Hosting |
-
- **Tool Support Legend:**
-
- - ✅ Full: All tools working correctly
- - ⚠️ Partial: Tools visible but may not execute properly
- - ❌ Limited: Issues with model or configuration
- - \* Bedrock requires valid AWS credentials, Ollama requires specific models like gemma3n for tool support
-
- **✨ Auto-Selection**: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
-
- ### 🔍 Smart Model Auto-Discovery (OpenAI Compatible)
-
- The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:
-
- ```bash
- # Setup - no model specified
- export OPENAI_COMPATIBLE_BASE_URL="https://api.your-endpoint.ai/v1"
- export OPENAI_COMPATIBLE_API_KEY="your-api-key"
-
- # Auto-discovers and uses first available model
- npx @juspay/neurolink generate "Hello!" --provider openai-compatible
- # → 🔍 Auto-discovered model: claude-sonnet-4 from 3 available models
-
- # Or specify explicitly to skip discovery
- export OPENAI_COMPATIBLE_MODEL="gemini-2.5-pro"
- npx @juspay/neurolink generate "Hello!" --provider openai-compatible
- ```
-
- **How it works:**
-
- - Queries `/v1/models` endpoint to discover available models
- - Automatically selects the first available model when none specified
- - Falls back gracefully if discovery fails
- - Works with any OpenAI-compatible service (OpenRouter, vLLM, LiteLLM, etc.)
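The handshake described above is plain OpenAI-compatible HTTP, so it can be sketched independently; this is an illustration of the documented flow, not NeuroLink's internal implementation:

```typescript
// Sketch: query /v1/models and fall back to the first advertised model
// when OPENAI_COMPATIBLE_MODEL is not set. Illustration only.
async function discoverModel(baseUrl: string, apiKey: string): Promise<string | undefined> {
  try {
    const res = await fetch(`${baseUrl}/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) return undefined; // graceful fallback, as described above
    const body = (await res.json()) as { data?: Array<{ id: string }> };
    return process.env.OPENAI_COMPATIBLE_MODEL ?? body.data?.[0]?.id;
  } catch {
    return undefined;
  }
}
```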
-
- ## 🎯 Production Features
-
- ### Enterprise-Grade Reliability
-
- - **Automatic Failover**: Seamless provider switching on failures
- - **Error Recovery**: Comprehensive error handling and logging
- - **Performance Monitoring**: Built-in analytics and metrics
- - **Type Safety**: Full TypeScript support with IntelliSense
-
- ### AI Platform Capabilities
-
- - **MCP Foundation**: Universal AI development platform with 10+ specialized tools
- - **Analysis Tools**: Usage optimization, performance benchmarking, parameter tuning
- - **Workflow Tools**: Test generation, code refactoring, documentation, debugging
- - **Extensibility**: Connect external tools and services via MCP protocol
- - **🆕 Dynamic Server Management**: Programmatically add MCP servers at runtime
-
- ### 🔧 External MCP Server Management ✅ **AVAILABLE NOW**
-
- **External MCP integration is now production-ready:**
-
- - ✅ 6 built-in tools working across all providers
- - ✅ SDK custom tool registration
- - ✅ **External MCP server management** (add, remove, list, test servers)
- - ✅ **Dynamic tool discovery** (automatic tool registration from external servers)
- - ✅ **Multi-provider support** (external tools work with all AI providers)
- - ✅ **Streaming integration** (external tools work with real-time streaming)
- - ✅ **Enhanced tool tracking** (proper parameter extraction and execution logging)
-
- ```typescript
- // Complete external MCP server API
- const neurolink = new NeuroLink();
-
- // Server management
- await neurolink.addExternalMCPServer(serverId, config);
- await neurolink.removeExternalMCPServer(serverId);
- const servers = neurolink.listExternalMCPServers();
- const server = neurolink.getExternalMCPServer(serverId);
-
- // Tool management
- const tools = neurolink.getExternalMCPTools();
- const serverTools = neurolink.getExternalMCPServerTools(serverId);
-
- // Direct tool execution
- const result = await neurolink.executeExternalMCPTool(
- serverId,
- toolName,
- params,
- );
-
- // Statistics and monitoring
- const stats = neurolink.getExternalMCPStatistics();
- await neurolink.shutdownExternalMCPServers();
- ```
-
- ## 🤝 Contributing
-
- We welcome contributions! Please see our [Contributing Guidelines](./CONTRIBUTING.md) for details.
-
- ### Development Setup
-
- ```bash
- git clone https://github.com/juspay/neurolink
- cd neurolink
- pnpm install
- npx husky install # Setup git hooks for build rule enforcement
- pnpm setup:complete # One-command setup with all automation
- pnpm test:adaptive # Intelligent testing
- pnpm build:complete # Full build pipeline
- ```
-
- ### Enterprise Developer Experience
-
- NeuroLink features **enterprise-grade build rule enforcement** with comprehensive quality validation:
-
- ```bash
- # Quality & Validation (required for all commits)
- pnpm run validate:all # Run all validation checks
- pnpm run validate:security # Security scanning with gitleaks
- pnpm run validate:env # Environment consistency checks
- pnpm run quality:metrics # Generate quality score report
-
- # Development Workflow
- pnpm run check:all # Pre-commit validation simulation
- pnpm run format # Auto-fix code formatting
- pnpm run lint # ESLint validation with zero-error tolerance
-
- # Environment & Setup (2-minute initialization)
- pnpm setup:complete # Complete project setup
- pnpm env:setup # Safe .env configuration
- pnpm env:backup # Environment backup
-
- # Testing (60-80% faster)
- pnpm test:adaptive # Intelligent test selection
- pnpm test:providers # AI provider validation
-
- # Documentation & Content
- pnpm docs:sync # Cross-file documentation sync
- pnpm content:generate # Automated content creation
-
- # Build & Deployment
- pnpm build:complete # 7-phase enterprise pipeline
- pnpm dev:health # System health monitoring
- ```
+ | Capability | Highlights |
+ | ------------------------ | --------------------------------------------------------------------------------------------- |
+ | **Provider unification** | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
+ | **Multimodal pipeline** | Stream images + text across providers with local/remote assets (Q3 2025). |
+ | **Quality & governance** | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
+ | **Memory & context** | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
+ | **CLI tooling** | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
+ | **Enterprise ops** | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
+ | **Tool ecosystem** | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search. |
 
- **Build Rule Enforcement:** All commits automatically validated with pre-commit hooks. See [Contributing Guidelines](./CONTRIBUTING.md) for complete requirements.
+ ## Documentation Map
 
- **[📖 Complete Automation Guide](./docs/CLI-GUIDE.md)** - All 72+ commands and automation features
+ | Area | When to Use | Link |
+ | --------------- | ----------------------------------------------- | ---------------------------------------------------------------- |
+ | Getting started | Install, configure, run first prompt | [`docs/getting-started/index.md`](docs/getting-started/index.md) |
+ | Feature guides | Understand new functionality front-to-back | [`docs/features/index.md`](docs/features/index.md) |
+ | CLI reference | Command syntax, flags, loop sessions | [`docs/cli/index.md`](docs/cli/index.md) |
+ | SDK reference | Classes, methods, options | [`docs/sdk/index.md`](docs/sdk/index.md) |
+ | Integrations | LiteLLM, SageMaker, MCP, Mem0 | [`docs/LITELLM-INTEGRATION.md`](docs/LITELLM-INTEGRATION.md) |
+ | Operations | Configuration, troubleshooting, provider matrix | [`docs/reference/index.md`](docs/reference/index.md) |
+ | Visual demos | Screens, GIFs, interactive tours | [`docs/demos/index.md`](docs/demos/index.md) |
 
- ## 📄 License
+ ## Integrations
 
- MIT © [Juspay Technologies](https://juspay.in)
+ - **LiteLLM 100+ model hub** – Unified access to third-party models via LiteLLM routing. → [`docs/LITELLM-INTEGRATION.md`](docs/LITELLM-INTEGRATION.md)
+ - **Amazon SageMaker** – Deploy and call custom endpoints directly from NeuroLink CLI/SDK. → [`docs/SAGEMAKER-INTEGRATION.md`](docs/SAGEMAKER-INTEGRATION.md)
+ - **Mem0 conversational memory** – Persistent semantic memory with vector store support. → [`docs/MEM0_INTEGRATION.md`](docs/MEM0_INTEGRATION.md)
+ - **Enterprise proxy & security** – Configure outbound policies and compliance posture. → [`docs/ENTERPRISE-PROXY-SETUP.md`](docs/ENTERPRISE-PROXY-SETUP.md)
+ - **Configuration automation** – Manage environments, regions, and credentials safely. → [`docs/CONFIGURATION-MANAGEMENT.md`](docs/CONFIGURATION-MANAGEMENT.md)
+ - **MCP tool ecosystem** – Auto-discover Model Context Protocol tools and extend workflows. → [`docs/advanced/mcp-integration.md`](docs/advanced/mcp-integration.md)
 
- ## 🔗 Related Projects
+ ## Contributing & Support
 
- - [Vercel AI SDK](https://github.com/vercel/ai) - Underlying provider implementations
- - [SvelteKit](https://kit.svelte.dev) - Web framework used in this project
- - [Model Context Protocol](https://modelcontextprotocol.io) - Tool integration standard
+ - Bug reports and feature requests → [GitHub Issues](https://github.com/juspay/neurolink/issues)
+ - Development workflow, testing, and pull request guidelines → [`docs/development/contributing.md`](docs/development/contributing.md)
+ - Documentation improvements → open a PR referencing the [documentation matrix](docs/tracking/FEATURE-DOC-MATRIX.md).
 
  ---
 
- <p align="center">
- <strong>Built with ❤️ by <a href="https://juspay.in">Juspay Technologies</a></strong>
- </p>
+ NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.