@juspay/neurolink 7.48.0 → 7.49.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (156)
  1. package/CHANGELOG.md +15 -0
  2. package/README.md +177 -784
  3. package/dist/agent/directTools.d.ts +55 -0
  4. package/dist/agent/directTools.js +266 -0
  5. package/dist/cli/factories/commandFactory.d.ts +2 -0
  6. package/dist/cli/factories/commandFactory.js +130 -16
  7. package/dist/cli/index.js +0 -0
  8. package/dist/cli/loop/conversationSelector.d.ts +45 -0
  9. package/dist/cli/loop/conversationSelector.js +222 -0
  10. package/dist/cli/loop/optionsSchema.d.ts +1 -1
  11. package/dist/cli/loop/session.d.ts +36 -8
  12. package/dist/cli/loop/session.js +257 -61
  13. package/dist/core/baseProvider.js +9 -2
  14. package/dist/core/evaluation.js +5 -2
  15. package/dist/factories/providerRegistry.js +2 -2
  16. package/dist/lib/agent/directTools.d.ts +55 -0
  17. package/dist/lib/agent/directTools.js +266 -0
  18. package/dist/lib/core/baseProvider.js +9 -2
  19. package/dist/lib/core/evaluation.js +5 -2
  20. package/dist/lib/factories/providerRegistry.js +2 -2
  21. package/dist/lib/mcp/factory.d.ts +2 -157
  22. package/dist/lib/mcp/flexibleToolValidator.d.ts +1 -5
  23. package/dist/lib/mcp/index.d.ts +3 -2
  24. package/dist/lib/mcp/mcpCircuitBreaker.d.ts +1 -75
  25. package/dist/lib/mcp/mcpClientFactory.d.ts +1 -20
  26. package/dist/lib/mcp/mcpClientFactory.js +1 -0
  27. package/dist/lib/mcp/registry.d.ts +3 -10
  28. package/dist/lib/mcp/servers/agent/directToolsServer.d.ts +1 -1
  29. package/dist/lib/mcp/servers/aiProviders/aiCoreServer.d.ts +1 -1
  30. package/dist/lib/mcp/servers/utilities/utilityServer.d.ts +1 -1
  31. package/dist/lib/mcp/toolDiscoveryService.d.ts +3 -84
  32. package/dist/lib/mcp/toolRegistry.d.ts +2 -24
  33. package/dist/lib/middleware/builtin/guardrails.d.ts +5 -16
  34. package/dist/lib/middleware/builtin/guardrails.js +44 -39
  35. package/dist/lib/middleware/utils/guardrailsUtils.d.ts +64 -0
  36. package/dist/lib/middleware/utils/guardrailsUtils.js +387 -0
  37. package/dist/lib/neurolink.d.ts +1 -1
  38. package/dist/lib/providers/anthropic.js +46 -3
  39. package/dist/lib/providers/azureOpenai.js +8 -2
  40. package/dist/lib/providers/googleAiStudio.js +8 -2
  41. package/dist/lib/providers/googleVertex.js +11 -2
  42. package/dist/lib/providers/huggingFace.js +1 -1
  43. package/dist/lib/providers/litellm.js +1 -1
  44. package/dist/lib/providers/mistral.js +1 -1
  45. package/dist/lib/providers/openAI.js +46 -3
  46. package/dist/lib/providers/sagemaker/adaptive-semaphore.d.ts +1 -13
  47. package/dist/lib/providers/sagemaker/client.d.ts +1 -1
  48. package/dist/lib/providers/sagemaker/config.d.ts +1 -1
  49. package/dist/lib/providers/sagemaker/detection.d.ts +1 -1
  50. package/dist/lib/providers/sagemaker/errors.d.ts +1 -1
  51. package/dist/lib/providers/sagemaker/index.d.ts +1 -1
  52. package/dist/lib/providers/sagemaker/language-model.d.ts +1 -1
  53. package/dist/lib/providers/sagemaker/parsers.d.ts +1 -1
  54. package/dist/lib/providers/sagemaker/streaming.d.ts +1 -1
  55. package/dist/lib/providers/sagemaker/structured-parser.d.ts +1 -1
  56. package/dist/lib/session/globalSessionState.d.ts +26 -0
  57. package/dist/lib/session/globalSessionState.js +49 -0
  58. package/dist/lib/types/cli.d.ts +28 -0
  59. package/dist/lib/types/content.d.ts +18 -5
  60. package/dist/lib/types/contextTypes.d.ts +1 -1
  61. package/dist/lib/types/conversation.d.ts +55 -4
  62. package/dist/lib/types/fileTypes.d.ts +65 -0
  63. package/dist/lib/types/fileTypes.js +4 -0
  64. package/dist/lib/types/generateTypes.d.ts +12 -0
  65. package/dist/lib/types/guardrails.d.ts +103 -0
  66. package/dist/lib/types/guardrails.js +1 -0
  67. package/dist/lib/types/index.d.ts +4 -2
  68. package/dist/lib/types/index.js +4 -0
  69. package/dist/lib/types/mcpTypes.d.ts +407 -14
  70. package/dist/lib/types/providers.d.ts +469 -0
  71. package/dist/lib/types/streamTypes.d.ts +7 -0
  72. package/dist/lib/types/tools.d.ts +132 -35
  73. package/dist/lib/utils/csvProcessor.d.ts +68 -0
  74. package/dist/lib/utils/csvProcessor.js +277 -0
  75. package/dist/lib/utils/fileDetector.d.ts +57 -0
  76. package/dist/lib/utils/fileDetector.js +457 -0
  77. package/dist/lib/utils/imageProcessor.d.ts +10 -0
  78. package/dist/lib/utils/imageProcessor.js +22 -0
  79. package/dist/lib/utils/loopUtils.d.ts +71 -0
  80. package/dist/lib/utils/loopUtils.js +262 -0
  81. package/dist/lib/utils/messageBuilder.d.ts +2 -1
  82. package/dist/lib/utils/messageBuilder.js +197 -2
  83. package/dist/lib/utils/optionsUtils.d.ts +1 -1
  84. package/dist/mcp/factory.d.ts +2 -157
  85. package/dist/mcp/flexibleToolValidator.d.ts +1 -5
  86. package/dist/mcp/index.d.ts +3 -2
  87. package/dist/mcp/mcpCircuitBreaker.d.ts +1 -75
  88. package/dist/mcp/mcpClientFactory.d.ts +1 -20
  89. package/dist/mcp/mcpClientFactory.js +1 -0
  90. package/dist/mcp/registry.d.ts +3 -10
  91. package/dist/mcp/servers/agent/directToolsServer.d.ts +1 -1
  92. package/dist/mcp/servers/aiProviders/aiCoreServer.d.ts +1 -1
  93. package/dist/mcp/servers/utilities/utilityServer.d.ts +1 -1
  94. package/dist/mcp/toolDiscoveryService.d.ts +3 -84
  95. package/dist/mcp/toolRegistry.d.ts +2 -24
  96. package/dist/middleware/builtin/guardrails.d.ts +5 -16
  97. package/dist/middleware/builtin/guardrails.js +44 -39
  98. package/dist/middleware/utils/guardrailsUtils.d.ts +64 -0
  99. package/dist/middleware/utils/guardrailsUtils.js +387 -0
  100. package/dist/neurolink.d.ts +1 -1
  101. package/dist/providers/anthropic.js +46 -3
  102. package/dist/providers/azureOpenai.js +8 -2
  103. package/dist/providers/googleAiStudio.js +8 -2
  104. package/dist/providers/googleVertex.js +11 -2
  105. package/dist/providers/huggingFace.js +1 -1
  106. package/dist/providers/litellm.js +1 -1
  107. package/dist/providers/mistral.js +1 -1
  108. package/dist/providers/openAI.js +46 -3
  109. package/dist/providers/sagemaker/adaptive-semaphore.d.ts +1 -13
  110. package/dist/providers/sagemaker/client.d.ts +1 -1
  111. package/dist/providers/sagemaker/config.d.ts +1 -1
  112. package/dist/providers/sagemaker/detection.d.ts +1 -1
  113. package/dist/providers/sagemaker/errors.d.ts +1 -1
  114. package/dist/providers/sagemaker/index.d.ts +1 -1
  115. package/dist/providers/sagemaker/language-model.d.ts +3 -3
  116. package/dist/providers/sagemaker/parsers.d.ts +1 -1
  117. package/dist/providers/sagemaker/streaming.d.ts +1 -1
  118. package/dist/providers/sagemaker/structured-parser.d.ts +1 -1
  119. package/dist/session/globalSessionState.d.ts +26 -0
  120. package/dist/session/globalSessionState.js +49 -0
  121. package/dist/types/cli.d.ts +28 -0
  122. package/dist/types/content.d.ts +18 -5
  123. package/dist/types/contextTypes.d.ts +1 -1
  124. package/dist/types/conversation.d.ts +55 -4
  125. package/dist/types/fileTypes.d.ts +65 -0
  126. package/dist/types/fileTypes.js +4 -0
  127. package/dist/types/generateTypes.d.ts +12 -0
  128. package/dist/types/guardrails.d.ts +103 -0
  129. package/dist/types/guardrails.js +1 -0
  130. package/dist/types/index.d.ts +4 -2
  131. package/dist/types/index.js +4 -0
  132. package/dist/types/mcpTypes.d.ts +407 -14
  133. package/dist/types/modelTypes.d.ts +6 -6
  134. package/dist/types/providers.d.ts +469 -0
  135. package/dist/types/streamTypes.d.ts +7 -0
  136. package/dist/types/tools.d.ts +132 -35
  137. package/dist/utils/csvProcessor.d.ts +68 -0
  138. package/dist/utils/csvProcessor.js +277 -0
  139. package/dist/utils/fileDetector.d.ts +57 -0
  140. package/dist/utils/fileDetector.js +457 -0
  141. package/dist/utils/imageProcessor.d.ts +10 -0
  142. package/dist/utils/imageProcessor.js +22 -0
  143. package/dist/utils/loopUtils.d.ts +71 -0
  144. package/dist/utils/loopUtils.js +262 -0
  145. package/dist/utils/messageBuilder.d.ts +2 -1
  146. package/dist/utils/messageBuilder.js +197 -2
  147. package/dist/utils/optionsUtils.d.ts +1 -1
  148. package/package.json +9 -3
  149. package/dist/lib/mcp/contracts/mcpContract.d.ts +0 -106
  150. package/dist/lib/mcp/contracts/mcpContract.js +0 -5
  151. package/dist/lib/providers/sagemaker/types.d.ts +0 -456
  152. package/dist/lib/providers/sagemaker/types.js +0 -7
  153. package/dist/mcp/contracts/mcpContract.d.ts +0 -106
  154. package/dist/mcp/contracts/mcpContract.js +0 -5
  155. package/dist/providers/sagemaker/types.d.ts +0 -456
  156. package/dist/providers/sagemaker/types.js +0 -7
package/README.md CHANGED
@@ -7,553 +7,169 @@
  [![TypeScript](https://img.shields.io/badge/TypeScript-Ready-blue)](https://www.typescriptlang.org/)
  [![CI](https://github.com/juspay/neurolink/workflows/CI/badge.svg)](https://github.com/juspay/neurolink/actions)

- > **Enterprise AI Development Platform** with universal provider support, factory pattern architecture, and **access to 100+ AI models** through LiteLLM integration. Production-ready with TypeScript support.
+ Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

- **NeuroLink** is an Enterprise AI Development Platform that unifies **12 major AI providers** with intelligent fallback and built-in tool support. Available as both a **programmatic SDK** and **professional CLI tool**. Features LiteLLM integration for **100+ models**, plus 6 core tools working across all providers. Extracted from production use at Juspay.
+ ## 🧠 What is NeuroLink?

- ## 🎉 **NEW: LiteLLM Integration - Access 100+ AI Models**
+ **NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.**

- **NeuroLink now supports LiteLLM**, providing unified access to **100+ AI models** from all major providers through a single interface:
+ Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

- - **🔄 Universal Access**: OpenAI, Anthropic, Google, Mistral, Meta, and more
- - **🎯 Unified Interface**: OpenAI-compatible API for all models
- - **💰 Cost Optimization**: Automatic routing to cost-effective models
- - **⚡ Load Balancing**: Automatic failover and load distribution
- - **📊 Analytics**: Built-in usage tracking and monitoring
+ **Why NeuroLink?** Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.

- ```bash
- # Quick start with LiteLLM
- pip install litellm && litellm --port 4000
-
- # Use any of 100+ models through one interface
- npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
- npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
- npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
- ```
-
- **[📖 Complete LiteLLM Integration Guide](./docs/LITELLM-INTEGRATION.md)** - Setup, configuration, and 100+ model access
-
- ## 🎉 **NEW: SageMaker Integration - Deploy Your Custom AI Models**
-
- **NeuroLink now supports Amazon SageMaker**, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:
-
- - **🏗️ Custom Model Hosting** - Deploy your fine-tuned models on AWS infrastructure
- - **💰 Cost Control** - Pay only for inference usage with auto-scaling capabilities
- - **🔒 Enterprise Security** - Full control over model infrastructure and data privacy
- - **⚡ Performance** - Dedicated compute resources with predictable latency
- - **📊 Monitoring** - Built-in CloudWatch metrics and logging
-
- ```bash
- # Quick start with SageMaker
- export AWS_ACCESS_KEY_ID="your-access-key"
- export AWS_SECRET_ACCESS_KEY="your-secret-key"
- export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
-
- # Use your custom deployed models
- npx @juspay/neurolink generate "Analyze this data" --provider sagemaker
- npx @juspay/neurolink sagemaker status # Check endpoint health
- npx @juspay/neurolink sagemaker benchmark my-endpoint # Performance testing
- ```
-
- **[📖 Complete SageMaker Integration Guide](./docs/SAGEMAKER-INTEGRATION.md)** - Setup, deployment, and custom model access
-
- ## 🚀 Enterprise Platform Features
+ **Where we're headed:** We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. **[Read our vision →](docs/about/vision.md)**

- - **🏭 Factory Pattern Architecture** - Unified provider management through BaseProvider inheritance
- - **🔧 Tools-First Design** - All providers include built-in tool support without additional configuration
- - **🔗 LiteLLM Integration** - **100+ models** from all major providers through unified interface
- - **🏢 Enterprise Proxy Support** - Comprehensive corporate proxy support with MCP compatibility
- - **🏗️ Enterprise Architecture** - Production-ready with clean abstractions
- - **🔄 Configuration Management** - Flexible provider configuration with automatic backups
- - **✅ Type Safety** - Industry-standard TypeScript interfaces
- - **⚡ Performance** - Fast response times with streaming support and 68% improved status checks
- - **🛡️ Error Recovery** - Graceful failures with provider fallback and retry logic
- - **📊 Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
- - **🎯 Real-time Event Monitoring** - EventEmitter integration for progress tracking and debugging
- - **🔧 External MCP Integration** - Model Context Protocol with 6 built-in tools + full external MCP server support
- - **🚀 Lighthouse Integration** - Unified tool registration API supporting both object and array formats for seamless Lighthouse tool import
+ **[Get Started in <5 Minutes →](docs/getting-started/quick-start.md)**

  ---

- ## 🚀 Quick Start
+ ## What's New (Q4 2025)

- ### 🎉 **NEW: Revolutionary Interactive Setup** - Transform Your Developer Experience!
+ - **CSV File Support** Attach CSV files to prompts for AI-powered data analysis with auto-detection. → [CSV Guide](docs/features/multimodal-chat.md#csv-file-support)
+ - **LiteLLM Integration** – Access 100+ AI models from all major providers through unified interface. → [Setup Guide](docs/LITELLM-INTEGRATION.md)
+ - **SageMaker Integration** – Deploy and use custom trained models on AWS infrastructure. → [Setup Guide](docs/SAGEMAKER-INTEGRATION.md)
+ - **Human-in-the-loop workflows** – Pause generation for user approval/input before tool execution. → [HITL Guide](docs/features/hitl.md)
+ - **Guardrails middleware** – Block PII, profanity, and unsafe content with built-in filtering. → [Guardrails Guide](docs/features/guardrails.md)
+ - **Context summarization** – Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/CONTEXT-SUMMARIZATION.md)
+ - **Redis conversation export** – Export full session history as JSON for analytics and debugging. → [History Guide](docs/features/conversation-history.md)

- **🚀 BREAKTHROUGH: Setup in 2-3 minutes (vs 15+ minutes manual setup)**
+ > **Q3 highlights** (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in [Platform Capabilities](#platform-capabilities-at-a-glance) below.

- ```bash
- # 🎯 **MAIN SETUP WIZARD** - Beautiful guided experience
- pnpm cli setup
-
- # ✨ **REVOLUTIONARY FEATURES:**
- # 🎨 Beautiful ASCII art welcome screen
- # 📊 Interactive provider comparison table
- # ⚡ Real-time credential validation with format checking
- # 🔄 Atomic .env file management (preserves existing content)
- # 🧠 Smart recommendations (Google AI free tier, OpenAI for pro users)
- # 🛡️ Cross-platform compatibility with graceful error recovery
- # 📈 90% reduction in setup errors vs manual configuration
-
- # 🚀 **INSTANT PRODUCTIVITY** - Use any AI provider immediately:
- npx @juspay/neurolink generate "Hello, AI" # Auto-selects best provider
- npx @juspay/neurolink gen "Write code" # Shortest form
- npx @juspay/neurolink stream "Tell a story" # Real-time streaming
- npx @juspay/neurolink status # Check all providers
- ```
-
- **🎯 Why This Changes Everything:**
-
- - **⏱️ Time Savings**: 15+ minutes → 2-3 minutes (83% faster)
- - **🛡️ Error Reduction**: 90% fewer credential/configuration errors
- - **🎨 Professional UX**: Beautiful terminal interface with colors and animations
- - **🔍 Smart Validation**: Real-time API key format checking and endpoint testing
- - **🔄 Safe Management**: Preserves existing .env content, creates backups automatically
- - **🧠 Intelligent Guidance**: Context-aware recommendations based on use case
-
- > **Developer Feedback**: _"Setup went from the most frustrating part to the most delightful part of using NeuroLink"_
-
- ### Provider-Specific Setup (if you prefer targeted setup)
+ ## Get Started in Two Steps

  ```bash
- # Setup individual providers with guided wizards
- npx @juspay/neurolink setup --provider google-ai # Free tier, perfect for beginners
- or pnpm cli setup-google-ai
-
- npx @juspay/neurolink setup --provider openai # Industry standard, professional use
- or pnpm cli setup-openai
-
- npx @juspay/neurolink setup --provider anthropic # Advanced reasoning, safety-focused
- or pnpm cli setup-anthropic
-
- npx @juspay/neurolink setup --provider azure # Enterprise features, compliance
- or pnpm cli setup-azure
-
- npx @juspay/neurolink setup --provider bedrock # AWS ecosystem integration
- or pnpm cli setup-bedrock
-
- npx @juspay/neurolink setup --provider huggingface # Open source models, 100k+ options
- or pnpm cli setup-huggingface
+ # 1. Run the interactive setup wizard (select providers, validate keys)
+ pnpm dlx @juspay/neurolink setup

- pnpm cli setup-gcp # For using Vertex
- # Check setup status anytime
- npx @juspay/neurolink setup --status
- npx @juspay/neurolink setup --list # View all available providers
+ # 2. Start generating with automatic provider selection
+ npx @juspay/neurolink generate "Write a launch plan for multimodal chat"
  ```

- ### Alternative: Manual Setup (Advanced Users)
+ Need a persistent workspace? Launch loop mode with `npx @juspay/neurolink loop` - [Learn more →](docs/features/cli-loop-sessions.md)

- ```bash
- # Option 1: LiteLLM - Access 100+ models through one interface
- pip install litellm && litellm --port 4000
- export LITELLM_BASE_URL="http://localhost:4000"
- export LITELLM_API_KEY="sk-anything"
-
- # Use any of 100+ models
- npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
- npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
- # Option 2: OpenAI Compatible - Use any OpenAI-compatible endpoint with auto-discovery
- export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
- export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"
- # Auto-discovers available models via /v1/models endpoint
- npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
-
- # Or specify a model explicitly
- export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
- npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
-
- # Option 3: Direct Provider - Quick setup with Google AI Studio (free tier)
- export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
- npx @juspay/neurolink generate "Hello, AI" --provider google-ai
-
- # Option 4: Amazon SageMaker - Use your custom deployed models
- export AWS_ACCESS_KEY_ID="your-access-key"
- export AWS_SECRET_ACCESS_KEY="your-secret-key"
- export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
- npx @juspay/neurolink generate "Hello, AI" --provider sagemaker
- ```
+ ## 🌟 Complete Feature Set

- ```bash
- # SDK Installation for using in your typescript projects
- npm install @juspay/neurolink
-
- # 🆕 NEW: External MCP Server Integration Quick Test
- node -e "
- const { NeuroLink } = require('@juspay/neurolink');
- (async () => {
- const neurolink = new NeuroLink();
-
- // Add external filesystem MCP server
- await neurolink.addExternalMCPServer('filesystem', {
- command: 'npx',
- args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
- transport: 'stdio'
- });
-
- // External tools automatically available in generate()
- const result = await neurolink.generate({
- input: { text: 'List files in the current directory' }
- });
- console.log('🎉 External MCP integration working!');
- console.log(result.content);
- })();
- "
- ```
+ NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

- ### Basic Usage
+ ### 🤖 AI Provider Integration

- ```typescript
- import { NeuroLink } from "@juspay/neurolink";
+ **12 providers unified under one API** - Switch providers with a single parameter change.

- // Auto-select best available provider
- const neurolink = new NeuroLink();
- const autoResult = await neurolink.generate({
- input: { text: "Write a business email" },
- provider: "google-ai", // or let it auto-select
- timeout: "30s",
- });
+ | Provider | Models | Free Tier | Tool Support | Status | Documentation |
+ | --------------------- | ------------------------------ | --------------- | ------------ | ------------- | ----------------------------------------------------------------------- |
+ | **OpenAI** | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai) |
+ | **Anthropic** | Claude 3.5/3.7 Sonnet, Opus | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#anthropic) |
+ | **Google AI Studio** | Gemini 2.5 Flash/Pro | Free Tier | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#google-ai) |
+ | **AWS Bedrock** | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#bedrock) |
+ | **Google Vertex** | Gemini via GCP | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#vertex) |
+ | **Azure OpenAI** | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#azure) |
+ | **LiteLLM** | 100+ models unified | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/LITELLM-INTEGRATION.md) |
+ | **AWS SageMaker** | Custom deployed models | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/SAGEMAKER-INTEGRATION.md) |
+ | **Mistral AI** | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#mistral) |
+ | **Hugging Face** | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#huggingface) |
+ | **Ollama** | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#ollama) |
+ | **OpenAI Compatible** | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai-compatible) |

- console.log(autoResult.content);
- console.log(`Used: ${autoResult.provider}`);
- ```
+ **[📖 Provider Comparison Guide](docs/reference/provider-comparison.md)** - Detailed feature matrix and selection criteria

- ### Conversation Memory
+ ---

- NeuroLink supports automatic conversation history management that maintains context across multiple turns within sessions. This enables AI to remember previous interactions and provide contextually aware responses. Session-based memory isolation ensures privacy between different conversations.
+ ### 🔧 Built-in Tools & MCP Integration

- ```typescript
- // Enable conversation memory with configurable limits
- const neurolink = new NeuroLink({
- conversationMemory: {
- enabled: true,
- maxSessions: 50, // Keep last 50 sessions
- maxTurnsPerSession: 20, // Keep last 20 turns per session
- },
- });
- ```
+ **6 Core Tools** (work across all providers, zero configuration):

- #### 🔗 CLI-SDK Consistency (NEW! ✨)
+ | Tool | Purpose | Auto-Available | Documentation |
+ | -------------------- | ------------------------ | ----------------------- | --------------------------------------------------------- |
+ | `getCurrentTime` | Real-time clock access | ✅ | [Tool Reference](docs/sdk/custom-tools.md#getCurrentTime) |
+ | `readFile` | File system reading | ✅ | [Tool Reference](docs/sdk/custom-tools.md#readFile) |
+ | `writeFile` | File system writing | ✅ | [Tool Reference](docs/sdk/custom-tools.md#writeFile) |
+ | `listDirectory` | Directory listing | ✅ | [Tool Reference](docs/sdk/custom-tools.md#listDirectory) |
+ | `calculateMath` | Mathematical operations | ✅ | [Tool Reference](docs/sdk/custom-tools.md#calculateMath) |
+ | `websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | [Tool Reference](docs/sdk/custom-tools.md#websearch) |

- Method aliases that match CLI command names:
+ **58+ External MCP Servers** supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

  ```typescript
- // The following methods are equivalent:
- const result1 = await provider.generate({ input: { text: "Hello" } }); // Original
- const result2 = await provider.gen({ input: { text: "Hello" } }); // Matches CLI 'gen'
-
- // Use whichever style you prefer:
- const provider = createBestAIProvider();
-
- // Detailed method name
- const story = await provider.generate({
- input: { text: "Write a short story about AI" },
- maxTokens: 200,
+ // Add any MCP server dynamically
+ await neurolink.addExternalMCPServer("github", {
+ command: "npx",
+ args: ["-y", "@modelcontextprotocol/server-github"],
+ transport: "stdio",
+ env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
  });

- // CLI-style method names
- const poem = await provider.generate({ input: { text: "Write a poem" } });
- const joke = await provider.gen({ input: { text: "Tell me a joke" } });
- ```
-
- ### Enhanced Features
-
- #### CLI with Analytics & Evaluation
-
- ```bash
- # Basic AI generation with auto-provider selection
- npx @juspay/neurolink generate "Write a business email"
-
- # LiteLLM with specific model
- npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
- # With analytics and evaluation
- npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug
-
- # Streaming with tools (default behavior)
- npx @juspay/neurolink stream "What time is it and write a file with the current date"
- ```
-
- #### SDK and Enhancement Features
-
- ```typescript
- import { NeuroLink } from "@juspay/neurolink";
-
- // Enhanced generation with analytics
- const neurolink = new NeuroLink();
+ // Tools automatically available to AI
  const result = await neurolink.generate({
- input: { text: "Write a business proposal" },
- enableAnalytics: true, // Get usage & cost data
- enableEvaluation: true, // Get AI quality scores
- context: { project: "Q1-sales" },
- });
-
- console.log("📊 Usage:", result.analytics);
- console.log("⭐ Quality:", result.evaluation);
- console.log("Response:", result.content);
- ```
-
- ### Environment Setup
-
- ```bash
- # Create .env file (automatically loaded by CLI)
- echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
- echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
- echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
-
- # 🆕 NEW: Google Vertex AI for Websearch Tool
- echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
- echo 'GOOGLE_VERTEX_PROJECT="your-gcp-project-id"' >> .env
- echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env
-
- # Test configuration
- npx @juspay/neurolink status
-
- # SDK Env Provider Check - Advanced provider testing with fallback detection
- pnpm run test:providers
- ```
-
- #### 🔍 SDK Env Provider Check Output
-
- ```bash
- # Example output:
- ✅ Google AI: Working (197 tokens)
- ⚠️ OpenAI: Failed (Fallback to google-ai)
- ⚠️ AWS Bedrock: Failed (Fallback to google-ai)
- ```
-
- ### JSON Format Support (Complete)
-
- NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:
-
- ```bash
- # CLI JSON Output - Structured data for scripts
- npx @juspay/neurolink generate "Summary of AI trends" --format json
- npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai
-
- # Example JSON Output:
- {
- "content": "AI trends include increased automation...",
- "provider": "google-ai",
- "model": "gemini-2.5-flash",
- "usage": {
- "promptTokens": 15,
- "completionTokens": 127,
- "totalTokens": 142
- },
- "responseTime": 1234
- }
- ```
346
-
347
- ```typescript
348
- // SDK JSON Input/Output - Full TypeScript support
349
- import { createBestAIProvider } from "@juspay/neurolink";
350
-
351
- const provider = createBestAIProvider();
352
-
353
- // Structured input
354
- const result = await provider.generate({
355
- input: { text: "Create a product specification" },
356
- schema: {
357
- type: "object",
358
- properties: {
359
- name: { type: "string" },
360
- price: { type: "number" },
361
- features: { type: "array", items: { type: "string" } },
362
- },
363
- },
103
+ input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
364
104
  });
365
-
366
- // Access structured response
367
- const productData = JSON.parse(result.content);
368
- console.log(productData.name, productData.price, productData.features);
369
- ```
370
-
371
- **📖 [Complete Setup Guide](./docs/CONFIGURATION.md)** - All providers with detailed instructions
372
-
- ## 🔍 **NEW: Websearch Tool with Google Vertex AI Grounding**
-
- **NeuroLink now includes a powerful websearch tool** that uses Google's native search grounding technology for real-time web information:
-
- - **🔍 Native Google Search** - Uses Google's search grounding via Vertex AI
- - **🎯 Real-time Results** - Access current web information during AI conversations
- - **🔒 Credential Protection** - Only activates when Google Vertex AI credentials are properly configured
-
- ### Quick Setup & Test
-
- ```bash
- # 1. Build the project first
- pnpm run build
-
- # 2. Set up environment variables (see detailed setup below)
- cp .env.example .env
- # Edit .env with your Google Vertex AI credentials
-
- # 3. Test the websearch tool directly
- node test-websearch-grounding.js
- ```
-
- ### Complete Google Vertex AI Setup
-
- #### Configure Environment Variables
-
- ```bash
- # Add to your .env file
- GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/neurolink-service-account.json"
- GOOGLE_VERTEX_PROJECT="YOUR-PROJECT-ID"
- GOOGLE_VERTEX_LOCATION="us-central1"
  ```

- #### Step 3: Test the Setup
-
- ```bash
- # Build the project first
- pnpm run build
-
- # Run the dedicated test script
- node test-websearch-grounding.js
- ```
-
- ### Using the Websearch Tool
+ **[📖 MCP Integration Guide](docs/advanced/mcp-integration.md)** - Setup external servers

- #### CLI Usage (Works with All Providers)
+ ---

- # With specific providers - websearch works across all providers
- npx @juspay/neurolink generate "Weather in Tokyo now" --provider vertex
+ ### 💻 Developer Experience Features

- **Note:** The websearch tool gracefully handles missing credentials - it only activates when valid Google Vertex AI credentials are configured. Without proper credentials, other tools continue to work normally and AI responses fall back to training data.
+ **SDK-First Design** with TypeScript, IntelliSense, and type safety:

- ## Key Features
+ | Feature | Description | Documentation |
+ | --------------------------- | ------------------------------ | ----------------------------------------------------- |
+ | **Auto Provider Selection** | Intelligent provider fallback | [SDK Guide](docs/sdk/index.md#auto-selection) |
+ | **Streaming Responses** | Real-time token streaming | [Streaming Guide](docs/advanced/streaming.md) |
+ | **Conversation Memory** | Automatic context management | [Memory Guide](docs/sdk/index.md#memory) |
+ | **Full Type Safety** | Complete TypeScript types | [Type Reference](docs/sdk/api-reference.md) |
+ | **Error Handling** | Graceful provider fallback | [Error Guide](docs/reference/troubleshooting.md) |
+ | **Analytics & Evaluation** | Usage tracking, quality scores | [Analytics Guide](docs/advanced/analytics.md) |
+ | **Middleware System** | Request/response hooks | [Middleware Guide](docs/CUSTOM-MIDDLEWARE-GUIDE.md) |
+ | **Framework Integration** | Next.js, SvelteKit, Express | [Framework Guides](docs/sdk/framework-integration.md) |
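The automatic provider fallback listed above can be sketched as a concept. This is not NeuroLink's internal implementation, just the idea of trying providers in order and returning the first success; the provider stubs are hypothetical:

```typescript
// Concept sketch of automatic provider fallback (not NeuroLink internals).
type Generate = (prompt: string) => Promise<string>;

interface Provider {
  name: string;
  generate: Generate;
}

async function generateWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<{ provider: string; content: string }> {
  const failures: string[] = [];
  for (const p of providers) {
    try {
      // First provider that answers wins.
      return { provider: p.name, content: await p.generate(prompt) };
    } catch (err) {
      failures.push(`${p.name}: ${String(err)}`); // record why it failed
    }
  }
  throw new Error(`all providers failed:\n${failures.join("\n")}`);
}

// Demo with stub providers: the first fails, the second answers.
const stubs: Provider[] = [
  { name: "openai", generate: async () => { throw new Error("quota exceeded"); } },
  { name: "google-ai", generate: async () => "Hello from the fallback provider" },
];

generateWithFallback(stubs, "Hello").then((r) =>
  console.log(`${r.provider}: ${r.content}`),
);
```

The real SDK adds health checks and cost-aware ordering on top of this basic loop.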

- - 🔗 **LiteLLM Integration** - **Access 100+ AI models** from all major providers through unified interface
- - 🔍 **Smart Model Auto-Discovery** - OpenAI Compatible provider automatically detects available models via `/v1/models` endpoint
- - 🏭 **Factory Pattern Architecture** - Unified provider management with BaseProvider inheritance
- - 🔧 **Tools-First Design** - All providers automatically include 7 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles, websearchGrounding)
- - 🔄 **12 AI Providers** - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, **LiteLLM**, **OpenAI Compatible**, Hugging Face, Ollama, Mistral AI, **SageMaker**
- - 💰 **Cost Optimization** - Automatic selection of cheapest models and LiteLLM routing
- - ⚡ **Automatic Fallback** - Never fail when providers are down, intelligent provider switching
- - 🖥️ **CLI + SDK** - Use from command line or integrate programmatically with TypeScript support
- - 🛡️ **Production Ready** - Enterprise-grade error handling, performance optimization, extracted from production
- - 🏢 **Enterprise Proxy Support** - Comprehensive corporate proxy support with zero configuration
- - ✅ **External MCP Integration** - Model Context Protocol with built-in tools + full external MCP server support
- - 🔍 **Smart Model Resolution** - Fuzzy matching, aliases, and capability-based search across all providers
- - 🏠 **Local AI Support** - Run completely offline with Ollama or through LiteLLM proxy
- - 🌍 **Universal Model Access** - Direct providers + 100,000+ models via Hugging Face + 100+ models via LiteLLM
- - 🧠 **Automatic Context Summarization** - Stateful, long-running conversations with automatic history summarization.
- - 📊 **Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
+ ---

- ## 🛠️ External MCP Integration Status ✅ **PRODUCTION READY**
+ ### 🏢 Enterprise & Production Features

- | Component | Status | Description |
- | ---------------------- | -------------- | ---------------------------------------------------------------- |
- | Built-in Tools | ✅ **Working** | 6 core tools fully functional across all providers |
- | SDK Custom Tools | ✅ **Working** | Register custom tools programmatically |
- | **External MCP Tools** | ✅ **Working** | **Full external MCP server support with dynamic tool discovery** |
- | Tool Execution | ✅ **Working** | Real-time AI tool calling with all tool types |
- | **Streaming Support** | ✅ **Working** | **External MCP tools work with streaming generation** |
- | **Multi-Provider** | ✅ **Working** | **External tools work across all AI providers** |
- | **CLI Integration** | ✅ **READY** | **Production-ready with external MCP support** |
+ **Production-ready capabilities for regulated industries:**

- ### External MCP Integration Demo
+ | Feature | Description | Use Case | Documentation |
+ | --------------------------- | ---------------------------------- | ------------------------- | ----------------------------------------------------------- |
+ | **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/ENTERPRISE-PROXY-SETUP.md) |
+ | **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
+ | **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
+ | **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
+ | **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/TELEMETRY-GUIDE.md) |
+ | **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
+ | **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/SAGEMAKER-INTEGRATION.md) |
+ | **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/LITELLM-INTEGRATION.md) |
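The cost-optimization row above ("automatic cheapest model selection") reduces to a price-ordered pick. A concept sketch with made-up placeholder prices, not real rates and not NeuroLink's actual routing code:

```typescript
// Concept sketch of cost-aware model selection. Prices are placeholders.
interface ModelPrice {
  model: string;
  usdPerMillionTokens: number;
}

function cheapestModel(candidates: ModelPrice[]): string {
  if (candidates.length === 0) {
    throw new Error("no candidate models");
  }
  // Keep whichever candidate has the lowest per-token price.
  return candidates.reduce((best, c) =>
    c.usdPerMillionTokens < best.usdPerMillionTokens ? c : best,
  ).model;
}

const choice = cheapestModel([
  { model: "gpt-4o", usdPerMillionTokens: 5.0 },
  { model: "gemini-2.5-flash", usdPerMillionTokens: 0.3 },
  { model: "claude-3.5-sonnet", usdPerMillionTokens: 3.0 },
]);
console.log(choice); // picks the cheapest entry in the placeholder list
```

Real routing would also weigh capability and latency, not price alone.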

- ```bash
- # Test built-in tools (works immediately)
- npx @juspay/neurolink generate "What time is it?" --debug
-
- # 🆕 NEW: External MCP server integration (SDK)
- import { NeuroLink } from '@juspay/neurolink';
-
- const neurolink = new NeuroLink();
-
- // Add external MCP server (e.g., Bitbucket)
- await neurolink.addExternalMCPServer('bitbucket', {
-   command: 'npx',
-   args: ['-y', '@nexus2520/bitbucket-mcp-server'],
-   transport: 'stdio',
-   env: {
-     BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
-     BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
-     BITBUCKET_BASE_URL: 'https://bitbucket.example.com'
-   }
- });
+ **Security & Compliance:**

- // Use external MCP tools in generation
- const result = await neurolink.generate({
-   input: { text: 'Get pull request #123 details from the main repository' },
-   disableTools: false // External MCP tools automatically available
- });
+ - SOC2 Type II compliant deployments
+ - ISO 27001 certified infrastructure compatible
+ - GDPR-compliant data handling (EU providers available)
+ - HIPAA compatible (with proper configuration)
+ - ✅ Hardened OS verified (SELinux, AppArmor)
+ - ✅ Zero credential logging
+ - ✅ Encrypted configuration storage

- # Discover available MCP servers
- npx @juspay/neurolink mcp discover --format table
- ````
+ **[📖 Enterprise Deployment Guide](docs/advanced/enterprise.md)** - Complete production checklist

- ### 🔧 SDK Custom Tool Registration (NEW!)
+ ---

- Register your own tools programmatically with the SDK:
+ ### 🎨 Professional CLI

- ```typescript
- import { NeuroLink } from "@juspay/neurolink";
- import { z } from "zod";
- const neurolink = new NeuroLink();
-
- // Register a simple tool
- neurolink.registerTool("weatherLookup", {
-   description: "Get current weather for a city",
-   parameters: z.object({
-     city: z.string().describe("City name"),
-     units: z.enum(["celsius", "fahrenheit"]).optional(),
-   }),
-   execute: async ({ city, units = "celsius" }) => {
-     // Your implementation here
-     return {
-       city,
-       temperature: 22,
-       units,
-       condition: "sunny",
-     };
-   },
- });
+ **15+ commands** for every workflow:

- // Use it in generation
- const result = await neurolink.generate({
-   input: { text: "What's the weather in London?" },
-   provider: "google-ai",
- });
-
- // Register multiple tools - Object format (existing)
- neurolink.registerTools({
-   stockPrice: {
-     description: "Get stock price",
-     execute: async () => ({ price: 150.25 }),
-   },
-   calculator: {
-     description: "Calculate math",
-     execute: async () => ({ result: 42 }),
-   },
- });
+ | Command | Purpose | Example | Documentation |
+ | ---------- | ---------------------------------- | -------------------------- | ----------------------------------------- |
+ | `setup` | Interactive provider configuration | `neurolink setup` | [Setup Guide](docs/cli/index.md) |
+ | `generate` | Text generation | `neurolink gen "Hello"` | [Generate](docs/cli/commands.md#generate) |
+ | `stream` | Streaming generation | `neurolink stream "Story"` | [Stream](docs/cli/commands.md#stream) |
+ | `status` | Provider health check | `neurolink status` | [Status](docs/cli/commands.md#status) |
+ | `loop` | Interactive session | `neurolink loop` | [Loop](docs/cli/commands.md#loop) |
+ | `mcp` | MCP server management | `neurolink mcp discover` | [MCP CLI](docs/cli/commands.md#mcp) |
+ | `models` | Model listing | `neurolink models` | [Models](docs/cli/commands.md#models) |
+ | `eval` | Model evaluation | `neurolink eval` | [Eval](docs/cli/commands.md#eval) |

- // Register multiple tools - Array format (Lighthouse compatible)
- neurolink.registerTools([
-   {
-     name: "lighthouseTool1",
-     tool: {
-       description: "Lighthouse analytics tool",
-       parameters: z.object({
-         merchantId: z.string(),
-         dateRange: z.string().optional(),
-       }),
-       execute: async ({ merchantId, dateRange }) => {
-         // Lighthouse tool implementation with Zod schema
-         return { data: "analytics result" };
-       },
-     },
-   },
-   {
-     name: "lighthouseTool2",
-     tool: {
-       description: "Payment processing tool",
-       execute: async () => ({ status: "processed" }),
-     },
-   },
- ]);
- ```
+ **[📖 Complete CLI Reference](docs/cli/commands.md)** - All commands and options

  ## 💰 Smart Model Selection

@@ -614,317 +230,94 @@ Start the loop with conversation memory to have the AI remember the context of y
  npx @juspay/neurolink loop --enable-conversation-memory
  ```
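The conversation-memory behavior behind `--enable-conversation-memory` can be illustrated as a concept: keep prior turns and replay them as context with each new prompt. NeuroLink manages this internally; the class below is purely illustrative, not the SDK's API:

```typescript
// Concept sketch of conversation memory (illustrative, not NeuroLink's code).
interface Turn {
  role: "user" | "assistant";
  text: string;
}

class ConversationMemory {
  private turns: Turn[] = [];

  add(role: Turn["role"], text: string): void {
    this.turns.push({ role, text });
  }

  // Render the accumulated history as a context block for the next request.
  asContext(): string {
    return this.turns.map((t) => `${t.role}: ${t.text}`).join("\n");
  }
}

const memory = new ConversationMemory();
memory.add("user", "My name is Asha.");
memory.add("assistant", "Nice to meet you, Asha!");
memory.add("user", "What is my name?");

// A provider call would receive this context plus the latest prompt,
// which is how the model can answer "What is my name?" correctly.
console.log(memory.asContext());
```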

- ## 💻 Essential Examples
-
- ### CLI Commands
-
- ```bash
- # Text generation with automatic MCP tool detection (default)
- npx @juspay/neurolink generate "What time is it?"
-
- # Alternative short form
- npx @juspay/neurolink gen "What time is it?"
-
- # Disable tools for training-data-only responses
- npx @juspay/neurolink generate "What time is it?" --disable-tools
-
- # With custom timeout for complex prompts
- npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m
-
- # Real-time streaming with agent support (default)
- npx @juspay/neurolink stream "What time is it?"
-
- # Streaming without tools (traditional mode)
- npx @juspay/neurolink stream "Tell me a story" --disable-tools
-
- # Streaming with extended timeout
- npx @juspay/neurolink stream "Write a long story" --timeout 5m
-
- # Provider diagnostics
- npx @juspay/neurolink status --verbose
-
- # Batch processing
- echo -e "Write a haiku\nExplain gravity" > prompts.txt
- npx @juspay/neurolink batch prompts.txt --output results.json
-
- # Batch with custom timeout per request
- npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
- ```
-
- ### SDK Integration
-
- ```typescript
- // SvelteKit API route with timeout handling
- export const POST: RequestHandler = async ({ request }) => {
-   const { message } = await request.json();
-   const provider = createBestAIProvider();
-
-   try {
-     // NEW: Primary streaming method (recommended)
-     const result = await provider.stream({
-       input: { text: message },
-       timeout: "2m", // 2 minutes for streaming
-     });
-
-     // Process stream
-     for await (const chunk of result.stream) {
-       // Handle streaming content
-       console.log(chunk.content);
-     }
-
-     // LEGACY: Backward compatibility (still works)
-     const legacyResult = await provider.stream({
-       prompt: message,
-       timeout: "2m", // 2 minutes for streaming
-     });
-     return new Response(result.toReadableStream());
-   } catch (error) {
-     if (error.name === "TimeoutError") {
-       return new Response("Request timed out", { status: 408 });
-     }
-     throw error;
-   }
- };
-
- // Next.js API route with timeout
- export async function POST(request: NextRequest) {
-   const { prompt } = await request.json();
-   const provider = createBestAIProvider();
-
-   const result = await provider.generate({
-     prompt,
-     timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
-   });
-
-   return NextResponse.json({ text: result.content });
- }
- ```
-
- ## 🎬 See It In Action
-
- **No installation required!** Experience NeuroLink through comprehensive visual documentation:
-
- ### 📱 Interactive Web Demo
-
- ```bash
- cd neurolink-demo && node server.js
- # Visit http://localhost:9876 for live demo
- ```
-
- - **Real AI Integration**: All 9 providers functional with live generation
- - **Complete Use Cases**: Business, creative, and developer scenarios
- - **Performance Metrics**: Live provider analytics and response times
- - **Privacy Options**: Test local AI with Ollama
-
- ### 🖥️ CLI Demonstrations
-
- - **[CLI Help & Commands](./docs/visual-content/cli-videos/cli-01-cli-help.mp4)** - Complete command reference
- - **[Provider Status Check](./docs/visual-content/cli-videos/cli-02-provider-status.mp4)** - Connectivity verification (now with authentication and model availability checks)
- - **[Text Generation](./docs/visual-content/cli-videos/cli-03-text-generation.mp4)** - Real AI content creation
-
- ### 🌐 Web Interface Videos
-
- - **[Business Use Cases](./neurolink-demo/videos/business-use-cases.mp4)** - Professional applications
- - **[Developer Tools](./neurolink-demo/videos/developer-tools.mp4)** - Code generation and APIs
- - **[Creative Tools](./neurolink-demo/videos/creative-tools.mp4)** - Content creation
-
- **[📖 Complete Visual Documentation](./docs/VISUAL-DEMOS.md)** - All screenshots and videos
-
- ## 📚 Documentation
-
- ### Getting Started
-
- - **[🔧 Provider Setup](./docs/PROVIDER-CONFIGURATION.md)** - Complete environment configuration
- - **[🖥️ CLI Guide](./docs/CLI-GUIDE.md)** - All commands and options
- - **[🏗️ SDK Integration](./docs/FRAMEWORK-INTEGRATION.md)** - Next.js, SvelteKit, React
- - **[⚙️ Environment Variables](./docs/ENVIRONMENT-VARIABLES.md)** - Full configuration guide
+ Skip the wizard and configure manually? See [`docs/getting-started/provider-setup.md`](docs/getting-started/provider-setup.md).

- ### Advanced Features
+ ## CLI & SDK Essentials

- - **[🏭 Factory Pattern Migration](./docs/FACTORY-PATTERN-MIGRATION.md)** - Guide to the new unified provider architecture
- - **[🔄 MCP Foundation](./docs/MCP-FOUNDATION.md)** - Model Context Protocol architecture
- - **[⚡ Dynamic Models](./docs/DYNAMIC-MODELS.md)** - Self-updating model configurations and cost optimization
- - **[🧠 AI Analysis Tools](./docs/AI-ANALYSIS-TOOLS.md)** - Usage optimization and benchmarking
- - **[🛠️ AI Workflow Tools](./docs/AI-WORKFLOW-TOOLS.md)** - Development lifecycle assistance
- - **[🎬 Visual Demos](./docs/VISUAL-DEMOS.md)** - Screenshots and videos
-
- ### Reference
-
- - **[📚 API Reference](./docs/API-REFERENCE.md)** - Complete TypeScript API
- - **[🔗 Framework Integration](./docs/FRAMEWORK-INTEGRATION.md)** - SvelteKit, Next.js, Express.js
-
- ## 🏗️ Supported Providers & Models
-
- | Provider | Models | Auth Method | Free Tier | Tool Support | Key Benefit |
- | --------------------------- | ---------------------------------- | ------------------ | --------- | ------------ | -------------------------------- |
- | **🔗 LiteLLM** 🆕 | **100+ Models** (All Providers) | Proxy Server | Varies | ✅ Full | **Universal Access** |
- | **🔗 OpenAI Compatible** 🆕 | **Any OpenAI-compatible endpoint** | API Key + Base URL | Varies | ✅ Full | **Auto-Discovery + Flexibility** |
- | **Google AI Studio** | Gemini 2.5 Flash/Pro | API Key | ✅ | ✅ Full | Free Tier Available |
- | **OpenAI** | GPT-4o, GPT-4o-mini | API Key | ❌ | ✅ Full | Industry Standard |
- | **Anthropic** | Claude 3.5 Sonnet | API Key | ❌ | ✅ Full | Advanced Reasoning |
- | **Amazon Bedrock** | Claude 3.5/3.7 Sonnet | AWS Credentials | ❌ | ✅ Full\* | Enterprise Scale |
- | **Google Vertex AI** | Gemini 2.5 Flash | Service Account | ❌ | ✅ Full | Enterprise Google |
- | **Azure OpenAI** | GPT-4, GPT-3.5 | API Key + Endpoint | ❌ | ✅ Full | Microsoft Ecosystem |
- | **Ollama** 🆕 | Llama 3.2, Gemma, Mistral (Local) | None (Local) | ✅ | ⚠️ Partial | Complete Privacy |
- | **Hugging Face** 🆕 | 100,000+ open source models | API Key | ✅ | ⚠️ Partial | Open Source |
- | **Mistral AI** 🆕 | Tiny, Small, Medium, Large | API Key | ✅ | ✅ Full | European/GDPR |
- | **Amazon SageMaker** 🆕 | Custom Models (Your Endpoints) | AWS Credentials | ❌ | ✅ Full | Custom Model Hosting |
-
- **Tool Support Legend:**
-
- - ✅ Full: All tools working correctly
- - ⚠️ Partial: Tools visible but may not execute properly
- - ❌ Limited: Issues with model or configuration
- - \* Bedrock requires valid AWS credentials, Ollama requires specific models like gemma3n for tool support
-
- **✨ Auto-Selection**: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
-
- ### 🔍 Smart Model Auto-Discovery (OpenAI Compatible)
-
- The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:
+ `neurolink` CLI mirrors the SDK so teams can script experiments and codify them later.

  ```bash
- # Setup - no model specified
- export OPENAI_COMPATIBLE_BASE_URL="https://api.your-endpoint.ai/v1"
- export OPENAI_COMPATIBLE_API_KEY="your-api-key"
+ # Discover available providers and models
+ npx @juspay/neurolink status
+ npx @juspay/neurolink models list --provider google-ai

- # Auto-discovers and uses first available model
- npx @juspay/neurolink generate "Hello!" --provider openai-compatible
- # 🔍 Auto-discovered model: claude-sonnet-4 from 3 available models
+ # Route to a specific provider/model
+ npx @juspay/neurolink generate "Summarize customer feedback" \
+   --provider azure --model gpt-4o-mini

- # Or specify explicitly to skip discovery
- export OPENAI_COMPATIBLE_MODEL="gemini-2.5-pro"
- npx @juspay/neurolink generate "Hello!" --provider openai-compatible
+ # Turn on analytics + evaluation for observability
+ npx @juspay/neurolink generate "Draft release notes" \
+   --enable-analytics --enable-evaluation --format json
  ```

- **How it works:**
-
- - Queries `/v1/models` endpoint to discover available models
- - Automatically selects the first available model when none specified
- - Falls back gracefully if discovery fails
- - Works with any OpenAI-compatible service (OpenRouter, vLLM, LiteLLM, etc.)
-
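The discovery behavior described above reduces to a simple selection rule: prefer an explicitly configured model, otherwise use the first model the `/v1/models` endpoint reported. A sketch under those assumptions; `pickModel` is an illustrative name, not part of NeuroLink's API:

```typescript
// Sketch of the model-selection rule (illustrative, not NeuroLink's API).
function pickModel(discovered: string[], configured?: string): string {
  if (configured) {
    return configured; // an explicit setting skips discovery
  }
  if (discovered.length === 0) {
    throw new Error("no models discovered and none configured");
  }
  return discovered[0]; // first model reported by /v1/models
}

console.log(pickModel(["claude-sonnet-4", "gemini-2.5-pro"])); // first discovered
console.log(pickModel(["claude-sonnet-4"], "gemini-2.5-pro")); // explicit override
```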
- ## 🎯 Production Features
-
- ### Enterprise-Grade Reliability
-
- - **Automatic Failover**: Seamless provider switching on failures
- - **Error Recovery**: Comprehensive error handling and logging
- - **Performance Monitoring**: Built-in analytics and metrics
- - **Type Safety**: Full TypeScript support with IntelliSense
-
- ### AI Platform Capabilities
-
- - **MCP Foundation**: Universal AI development platform with 10+ specialized tools
- - **Analysis Tools**: Usage optimization, performance benchmarking, parameter tuning
- - **Workflow Tools**: Test generation, code refactoring, documentation, debugging
- - **Extensibility**: Connect external tools and services via MCP protocol
- - **🆕 Dynamic Server Management**: Programmatically add MCP servers at runtime
-
- ### 🔧 External MCP Server Management ✅ **AVAILABLE NOW**
-
- **External MCP integration is now production-ready:**
-
- - ✅ 6 built-in tools working across all providers
- - ✅ SDK custom tool registration
- - ✅ **External MCP server management** (add, remove, list, test servers)
- - ✅ **Dynamic tool discovery** (automatic tool registration from external servers)
- - ✅ **Multi-provider support** (external tools work with all AI providers)
- - ✅ **Streaming integration** (external tools work with real-time streaming)
- - ✅ **Enhanced tool tracking** (proper parameter extraction and execution logging)
-
  ```typescript
- // Complete external MCP server API
- const neurolink = new NeuroLink();
-
- // Server management
- await neurolink.addExternalMCPServer(serverId, config);
- await neurolink.removeExternalMCPServer(serverId);
- const servers = neurolink.listExternalMCPServers();
- const server = neurolink.getExternalMCPServer(serverId);
-
- // Tool management
- const tools = neurolink.getExternalMCPTools();
- const serverTools = neurolink.getExternalMCPServerTools(serverId);
-
- // Direct tool execution
- const result = await neurolink.executeExternalMCPTool(
-   serverId,
-   toolName,
-   params,
- );
-
- // Statistics and monitoring
- const stats = neurolink.getExternalMCPStatistics();
- await neurolink.shutdownExternalMCPServers();
- ```
-
- ## 🤝 Contributing
+ import { NeuroLink } from "@juspay/neurolink";

- We welcome contributions! Please see our [Contributing Guidelines](./CONTRIBUTING.md) for details.
+ const neurolink = new NeuroLink({
+   conversationMemory: {
+     enabled: true,
+     store: "redis",
+   },
+   enableOrchestration: true,
+ });

- ### Development Setup
+ const result = await neurolink.generate({
+   input: {
+     text: "Create a comprehensive analysis",
+     files: [
+       "./sales_data.csv", // Auto-detected as CSV
+       "./diagrams/architecture.png", // Auto-detected as image
+     ],
+   },
+   enableEvaluation: true,
+   region: "us-east-1",
+ });

- ```bash
- git clone https://github.com/juspay/neurolink
- cd neurolink
- pnpm install
- npx husky install # Setup git hooks for build rule enforcement
- pnpm setup:complete # One-command setup with all automation
- pnpm test:adaptive # Intelligent testing
- pnpm build:complete # Full build pipeline
+ console.log(result.content);
+ console.log(result.evaluation?.overallScore);
  ```

- ### Enterprise Developer Experience
+ Full command and API breakdown lives in [`docs/cli/commands.md`](docs/cli/commands.md) and [`docs/sdk/api-reference.md`](docs/sdk/api-reference.md).

- NeuroLink features **enterprise-grade build rule enforcement** with comprehensive quality validation:
+ ## Platform Capabilities at a Glance

- ```bash
- # Quality & Validation (required for all commits)
- pnpm run validate:all # Run all validation checks
- pnpm run validate:security # Security scanning with gitleaks
- pnpm run validate:env # Environment consistency checks
- pnpm run quality:metrics # Generate quality score report
-
- # Development Workflow
- pnpm run check:all # Pre-commit validation simulation
- pnpm run format # Auto-fix code formatting
- pnpm run lint # ESLint validation with zero-error tolerance
-
- # Environment & Setup (2-minute initialization)
- pnpm setup:complete # Complete project setup
- pnpm env:setup # Safe .env configuration
- pnpm env:backup # Environment backup
-
- # Testing (60-80% faster)
- pnpm test:adaptive # Intelligent test selection
- pnpm test:providers # AI provider validation
-
- # Documentation & Content
- pnpm docs:sync # Cross-file documentation sync
- pnpm content:generate # Automated content creation
-
- # Build & Deployment
- pnpm build:complete # 7-phase enterprise pipeline
- pnpm dev:health # System health monitoring
- ```
+ | Capability | Highlights |
+ | ------------------------ | -------------------------------------------------------------------------------------------------------- |
+ | **Provider unification** | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
+ | **Multimodal pipeline** | Stream images + CSV data across providers with local/remote assets. Auto-detection for mixed file types. |
+ | **Quality & governance** | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
+ | **Memory & context** | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
+ | **CLI tooling** | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
+ | **Enterprise ops** | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
+ | **Tool ecosystem** | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search. |

- **Build Rule Enforcement:** All commits automatically validated with pre-commit hooks. See [Contributing Guidelines](./CONTRIBUTING.md) for complete requirements.
+ ## Documentation Map

- **[📖 Complete Automation Guide](./docs/CLI-GUIDE.md)** - All 72+ commands and automation features
+ | Area | When to Use | Link |
+ | --------------- | ----------------------------------------------- | ---------------------------------------------------------------- |
+ | Getting started | Install, configure, run first prompt | [`docs/getting-started/index.md`](docs/getting-started/index.md) |
+ | Feature guides | Understand new functionality front-to-back | [`docs/features/index.md`](docs/features/index.md) |
+ | CLI reference | Command syntax, flags, loop sessions | [`docs/cli/index.md`](docs/cli/index.md) |
+ | SDK reference | Classes, methods, options | [`docs/sdk/index.md`](docs/sdk/index.md) |
+ | Integrations | LiteLLM, SageMaker, MCP, Mem0 | [`docs/LITELLM-INTEGRATION.md`](docs/LITELLM-INTEGRATION.md) |
+ | Operations | Configuration, troubleshooting, provider matrix | [`docs/reference/index.md`](docs/reference/index.md) |
+ | Visual demos | Screens, GIFs, interactive tours | [`docs/demos/index.md`](docs/demos/index.md) |

- ## 📄 License
+ ## Integrations

- MIT © [Juspay Technologies](https://juspay.in)
+ - **LiteLLM 100+ model hub** – Unified access to third-party models via LiteLLM routing. → [`docs/LITELLM-INTEGRATION.md`](docs/LITELLM-INTEGRATION.md)
+ - **Amazon SageMaker** – Deploy and call custom endpoints directly from NeuroLink CLI/SDK. → [`docs/SAGEMAKER-INTEGRATION.md`](docs/SAGEMAKER-INTEGRATION.md)
+ - **Mem0 conversational memory** – Persistent semantic memory with vector store support. → [`docs/MEM0_INTEGRATION.md`](docs/MEM0_INTEGRATION.md)
+ - **Enterprise proxy & security** – Configure outbound policies and compliance posture. → [`docs/ENTERPRISE-PROXY-SETUP.md`](docs/ENTERPRISE-PROXY-SETUP.md)
+ - **Configuration automation** – Manage environments, regions, and credentials safely. → [`docs/CONFIGURATION-MANAGEMENT.md`](docs/CONFIGURATION-MANAGEMENT.md)
+ - **MCP tool ecosystem** – Auto-discover Model Context Protocol tools and extend workflows. → [`docs/advanced/mcp-integration.md`](docs/advanced/mcp-integration.md)

- ## 🔗 Related Projects
+ ## Contributing & Support

- - [Vercel AI SDK](https://github.com/vercel/ai) - Underlying provider implementations
- - [SvelteKit](https://kit.svelte.dev) - Web framework used in this project
- - [Model Context Protocol](https://modelcontextprotocol.io) - Tool integration standard
+ - Bug reports and feature requests → [GitHub Issues](https://github.com/juspay/neurolink/issues)
+ - Development workflow, testing, and pull request guidelines → [`docs/development/contributing.md`](docs/development/contributing.md)
+ - Documentation improvements → open a PR referencing the [documentation matrix](docs/tracking/FEATURE-DOC-MATRIX.md).

  ---

- <p align="center">
-   <strong>Built with ❤️ by <a href="https://juspay.in">Juspay Technologies</a></strong>
- </p>
+ NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.