@juspay/neurolink 7.48.0 → 7.49.0
This diff shows the content of publicly available package versions that have been released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +15 -0
- package/README.md +177 -784
- package/dist/agent/directTools.d.ts +55 -0
- package/dist/agent/directTools.js +266 -0
- package/dist/cli/factories/commandFactory.d.ts +2 -0
- package/dist/cli/factories/commandFactory.js +130 -16
- package/dist/cli/index.js +0 -0
- package/dist/cli/loop/conversationSelector.d.ts +45 -0
- package/dist/cli/loop/conversationSelector.js +222 -0
- package/dist/cli/loop/optionsSchema.d.ts +1 -1
- package/dist/cli/loop/session.d.ts +36 -8
- package/dist/cli/loop/session.js +257 -61
- package/dist/core/baseProvider.js +9 -2
- package/dist/core/evaluation.js +5 -2
- package/dist/factories/providerRegistry.js +2 -2
- package/dist/lib/agent/directTools.d.ts +55 -0
- package/dist/lib/agent/directTools.js +266 -0
- package/dist/lib/core/baseProvider.js +9 -2
- package/dist/lib/core/evaluation.js +5 -2
- package/dist/lib/factories/providerRegistry.js +2 -2
- package/dist/lib/mcp/factory.d.ts +2 -157
- package/dist/lib/mcp/flexibleToolValidator.d.ts +1 -5
- package/dist/lib/mcp/index.d.ts +3 -2
- package/dist/lib/mcp/mcpCircuitBreaker.d.ts +1 -75
- package/dist/lib/mcp/mcpClientFactory.d.ts +1 -20
- package/dist/lib/mcp/mcpClientFactory.js +1 -0
- package/dist/lib/mcp/registry.d.ts +3 -10
- package/dist/lib/mcp/servers/agent/directToolsServer.d.ts +1 -1
- package/dist/lib/mcp/servers/aiProviders/aiCoreServer.d.ts +1 -1
- package/dist/lib/mcp/servers/utilities/utilityServer.d.ts +1 -1
- package/dist/lib/mcp/toolDiscoveryService.d.ts +3 -84
- package/dist/lib/mcp/toolRegistry.d.ts +2 -24
- package/dist/lib/middleware/builtin/guardrails.d.ts +5 -16
- package/dist/lib/middleware/builtin/guardrails.js +44 -39
- package/dist/lib/middleware/utils/guardrailsUtils.d.ts +64 -0
- package/dist/lib/middleware/utils/guardrailsUtils.js +387 -0
- package/dist/lib/neurolink.d.ts +1 -1
- package/dist/lib/providers/anthropic.js +46 -3
- package/dist/lib/providers/azureOpenai.js +8 -2
- package/dist/lib/providers/googleAiStudio.js +8 -2
- package/dist/lib/providers/googleVertex.js +11 -2
- package/dist/lib/providers/huggingFace.js +1 -1
- package/dist/lib/providers/litellm.js +1 -1
- package/dist/lib/providers/mistral.js +1 -1
- package/dist/lib/providers/openAI.js +46 -3
- package/dist/lib/providers/sagemaker/adaptive-semaphore.d.ts +1 -13
- package/dist/lib/providers/sagemaker/client.d.ts +1 -1
- package/dist/lib/providers/sagemaker/config.d.ts +1 -1
- package/dist/lib/providers/sagemaker/detection.d.ts +1 -1
- package/dist/lib/providers/sagemaker/errors.d.ts +1 -1
- package/dist/lib/providers/sagemaker/index.d.ts +1 -1
- package/dist/lib/providers/sagemaker/language-model.d.ts +1 -1
- package/dist/lib/providers/sagemaker/parsers.d.ts +1 -1
- package/dist/lib/providers/sagemaker/streaming.d.ts +1 -1
- package/dist/lib/providers/sagemaker/structured-parser.d.ts +1 -1
- package/dist/lib/session/globalSessionState.d.ts +26 -0
- package/dist/lib/session/globalSessionState.js +49 -0
- package/dist/lib/types/cli.d.ts +28 -0
- package/dist/lib/types/content.d.ts +18 -5
- package/dist/lib/types/contextTypes.d.ts +1 -1
- package/dist/lib/types/conversation.d.ts +55 -4
- package/dist/lib/types/fileTypes.d.ts +65 -0
- package/dist/lib/types/fileTypes.js +4 -0
- package/dist/lib/types/generateTypes.d.ts +12 -0
- package/dist/lib/types/guardrails.d.ts +103 -0
- package/dist/lib/types/guardrails.js +1 -0
- package/dist/lib/types/index.d.ts +4 -2
- package/dist/lib/types/index.js +4 -0
- package/dist/lib/types/mcpTypes.d.ts +407 -14
- package/dist/lib/types/providers.d.ts +469 -0
- package/dist/lib/types/streamTypes.d.ts +7 -0
- package/dist/lib/types/tools.d.ts +132 -35
- package/dist/lib/utils/csvProcessor.d.ts +68 -0
- package/dist/lib/utils/csvProcessor.js +277 -0
- package/dist/lib/utils/fileDetector.d.ts +57 -0
- package/dist/lib/utils/fileDetector.js +457 -0
- package/dist/lib/utils/imageProcessor.d.ts +10 -0
- package/dist/lib/utils/imageProcessor.js +22 -0
- package/dist/lib/utils/loopUtils.d.ts +71 -0
- package/dist/lib/utils/loopUtils.js +262 -0
- package/dist/lib/utils/messageBuilder.d.ts +2 -1
- package/dist/lib/utils/messageBuilder.js +197 -2
- package/dist/lib/utils/optionsUtils.d.ts +1 -1
- package/dist/mcp/factory.d.ts +2 -157
- package/dist/mcp/flexibleToolValidator.d.ts +1 -5
- package/dist/mcp/index.d.ts +3 -2
- package/dist/mcp/mcpCircuitBreaker.d.ts +1 -75
- package/dist/mcp/mcpClientFactory.d.ts +1 -20
- package/dist/mcp/mcpClientFactory.js +1 -0
- package/dist/mcp/registry.d.ts +3 -10
- package/dist/mcp/servers/agent/directToolsServer.d.ts +1 -1
- package/dist/mcp/servers/aiProviders/aiCoreServer.d.ts +1 -1
- package/dist/mcp/servers/utilities/utilityServer.d.ts +1 -1
- package/dist/mcp/toolDiscoveryService.d.ts +3 -84
- package/dist/mcp/toolRegistry.d.ts +2 -24
- package/dist/middleware/builtin/guardrails.d.ts +5 -16
- package/dist/middleware/builtin/guardrails.js +44 -39
- package/dist/middleware/utils/guardrailsUtils.d.ts +64 -0
- package/dist/middleware/utils/guardrailsUtils.js +387 -0
- package/dist/neurolink.d.ts +1 -1
- package/dist/providers/anthropic.js +46 -3
- package/dist/providers/azureOpenai.js +8 -2
- package/dist/providers/googleAiStudio.js +8 -2
- package/dist/providers/googleVertex.js +11 -2
- package/dist/providers/huggingFace.js +1 -1
- package/dist/providers/litellm.js +1 -1
- package/dist/providers/mistral.js +1 -1
- package/dist/providers/openAI.js +46 -3
- package/dist/providers/sagemaker/adaptive-semaphore.d.ts +1 -13
- package/dist/providers/sagemaker/client.d.ts +1 -1
- package/dist/providers/sagemaker/config.d.ts +1 -1
- package/dist/providers/sagemaker/detection.d.ts +1 -1
- package/dist/providers/sagemaker/errors.d.ts +1 -1
- package/dist/providers/sagemaker/index.d.ts +1 -1
- package/dist/providers/sagemaker/language-model.d.ts +3 -3
- package/dist/providers/sagemaker/parsers.d.ts +1 -1
- package/dist/providers/sagemaker/streaming.d.ts +1 -1
- package/dist/providers/sagemaker/structured-parser.d.ts +1 -1
- package/dist/session/globalSessionState.d.ts +26 -0
- package/dist/session/globalSessionState.js +49 -0
- package/dist/types/cli.d.ts +28 -0
- package/dist/types/content.d.ts +18 -5
- package/dist/types/contextTypes.d.ts +1 -1
- package/dist/types/conversation.d.ts +55 -4
- package/dist/types/fileTypes.d.ts +65 -0
- package/dist/types/fileTypes.js +4 -0
- package/dist/types/generateTypes.d.ts +12 -0
- package/dist/types/guardrails.d.ts +103 -0
- package/dist/types/guardrails.js +1 -0
- package/dist/types/index.d.ts +4 -2
- package/dist/types/index.js +4 -0
- package/dist/types/mcpTypes.d.ts +407 -14
- package/dist/types/modelTypes.d.ts +6 -6
- package/dist/types/providers.d.ts +469 -0
- package/dist/types/streamTypes.d.ts +7 -0
- package/dist/types/tools.d.ts +132 -35
- package/dist/utils/csvProcessor.d.ts +68 -0
- package/dist/utils/csvProcessor.js +277 -0
- package/dist/utils/fileDetector.d.ts +57 -0
- package/dist/utils/fileDetector.js +457 -0
- package/dist/utils/imageProcessor.d.ts +10 -0
- package/dist/utils/imageProcessor.js +22 -0
- package/dist/utils/loopUtils.d.ts +71 -0
- package/dist/utils/loopUtils.js +262 -0
- package/dist/utils/messageBuilder.d.ts +2 -1
- package/dist/utils/messageBuilder.js +197 -2
- package/dist/utils/optionsUtils.d.ts +1 -1
- package/package.json +9 -3
- package/dist/lib/mcp/contracts/mcpContract.d.ts +0 -106
- package/dist/lib/mcp/contracts/mcpContract.js +0 -5
- package/dist/lib/providers/sagemaker/types.d.ts +0 -456
- package/dist/lib/providers/sagemaker/types.js +0 -7
- package/dist/mcp/contracts/mcpContract.d.ts +0 -106
- package/dist/mcp/contracts/mcpContract.js +0 -5
- package/dist/providers/sagemaker/types.d.ts +0 -456
- package/dist/providers/sagemaker/types.js +0 -7
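The most visible runtime additions in this release are multimodal file attachments (CSV and image auto-detection via the new `fileDetector` and `csvProcessor` utilities), guardrails middleware, and expanded conversation/session types. A minimal sketch of the new SDK surface, mirroring the example in the updated README diff below; the Redis-backed memory option and file paths are illustrative, not part of this diff's API documentation:

```typescript
import { NeuroLink } from "@juspay/neurolink";

async function main() {
  // Options taken from the example in the updated README;
  // the Redis store and file paths below are placeholders.
  const neurolink = new NeuroLink({
    conversationMemory: { enabled: true, store: "redis" },
    enableOrchestration: true,
  });

  const result = await neurolink.generate({
    input: {
      text: "Create a comprehensive analysis",
      files: [
        "./sales_data.csv", // auto-detected as CSV
        "./diagrams/architecture.png", // auto-detected as image
      ],
    },
    enableEvaluation: true, // attach AI quality scores to the result
  });

  console.log(result.content);
  console.log(result.evaluation?.overallScore);
}

main().catch(console.error);
```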
package/README.md
CHANGED
@@ -7,553 +7,169 @@
 [](https://www.typescriptlang.org/)
 [](https://github.com/juspay/neurolink/actions)

-
+Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

-
+## 🧠 What is NeuroLink?

-
+**NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.**

-
+Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

-
-- **🎯 Unified Interface**: OpenAI-compatible API for all models
-- **💰 Cost Optimization**: Automatic routing to cost-effective models
-- **⚡ Load Balancing**: Automatic failover and load distribution
-- **📊 Analytics**: Built-in usage tracking and monitoring
+**Why NeuroLink?** Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.

-
-# Quick start with LiteLLM
-pip install litellm && litellm --port 4000
-
-# Use any of 100+ models through one interface
-npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
-npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
-npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
-```
-
-**[📖 Complete LiteLLM Integration Guide](./docs/LITELLM-INTEGRATION.md)** - Setup, configuration, and 100+ model access
-
-## 🎉 **NEW: SageMaker Integration - Deploy Your Custom AI Models**
-
-**NeuroLink now supports Amazon SageMaker**, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:
-
-- **🏗️ Custom Model Hosting** - Deploy your fine-tuned models on AWS infrastructure
-- **💰 Cost Control** - Pay only for inference usage with auto-scaling capabilities
-- **🔒 Enterprise Security** - Full control over model infrastructure and data privacy
-- **⚡ Performance** - Dedicated compute resources with predictable latency
-- **📊 Monitoring** - Built-in CloudWatch metrics and logging
-
-```bash
-# Quick start with SageMaker
-export AWS_ACCESS_KEY_ID="your-access-key"
-export AWS_SECRET_ACCESS_KEY="your-secret-key"
-export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
-
-# Use your custom deployed models
-npx @juspay/neurolink generate "Analyze this data" --provider sagemaker
-npx @juspay/neurolink sagemaker status # Check endpoint health
-npx @juspay/neurolink sagemaker benchmark my-endpoint # Performance testing
-```
-
-**[📖 Complete SageMaker Integration Guide](./docs/SAGEMAKER-INTEGRATION.md)** - Setup, deployment, and custom model access
-
-## 🚀 Enterprise Platform Features
+**Where we're headed:** We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. **[Read our vision →](docs/about/vision.md)**

-
-- **🔧 Tools-First Design** - All providers include built-in tool support without additional configuration
-- **🔗 LiteLLM Integration** - **100+ models** from all major providers through unified interface
-- **🏢 Enterprise Proxy Support** - Comprehensive corporate proxy support with MCP compatibility
-- **🏗️ Enterprise Architecture** - Production-ready with clean abstractions
-- **🔄 Configuration Management** - Flexible provider configuration with automatic backups
-- **✅ Type Safety** - Industry-standard TypeScript interfaces
-- **⚡ Performance** - Fast response times with streaming support and 68% improved status checks
-- **🛡️ Error Recovery** - Graceful failures with provider fallback and retry logic
-- **📊 Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
-- **🎯 Real-time Event Monitoring** - EventEmitter integration for progress tracking and debugging
-- **🔧 External MCP Integration** - Model Context Protocol with 6 built-in tools + full external MCP server support
-- **🚀 Lighthouse Integration** - Unified tool registration API supporting both object and array formats for seamless Lighthouse tool import
+**[Get Started in <5 Minutes →](docs/getting-started/quick-start.md)**

 ---

-##
+## What's New (Q4 2025)

-
+- **CSV File Support** – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → [CSV Guide](docs/features/multimodal-chat.md#csv-file-support)
+- **LiteLLM Integration** – Access 100+ AI models from all major providers through unified interface. → [Setup Guide](docs/LITELLM-INTEGRATION.md)
+- **SageMaker Integration** – Deploy and use custom trained models on AWS infrastructure. → [Setup Guide](docs/SAGEMAKER-INTEGRATION.md)
+- **Human-in-the-loop workflows** – Pause generation for user approval/input before tool execution. → [HITL Guide](docs/features/hitl.md)
+- **Guardrails middleware** – Block PII, profanity, and unsafe content with built-in filtering. → [Guardrails Guide](docs/features/guardrails.md)
+- **Context summarization** – Automatic conversation compression for long-running sessions. → [Summarization Guide](docs/CONTEXT-SUMMARIZATION.md)
+- **Redis conversation export** – Export full session history as JSON for analytics and debugging. → [History Guide](docs/features/conversation-history.md)

-
+> **Q3 highlights** (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in [Platform Capabilities](#platform-capabilities-at-a-glance) below.

-
-# 🎯 **MAIN SETUP WIZARD** - Beautiful guided experience
-pnpm cli setup
-
-# ✨ **REVOLUTIONARY FEATURES:**
-# 🎨 Beautiful ASCII art welcome screen
-# 📊 Interactive provider comparison table
-# ⚡ Real-time credential validation with format checking
-# 🔄 Atomic .env file management (preserves existing content)
-# 🧠 Smart recommendations (Google AI free tier, OpenAI for pro users)
-# 🛡️ Cross-platform compatibility with graceful error recovery
-# 📈 90% reduction in setup errors vs manual configuration
-
-# 🚀 **INSTANT PRODUCTIVITY** - Use any AI provider immediately:
-npx @juspay/neurolink generate "Hello, AI" # Auto-selects best provider
-npx @juspay/neurolink gen "Write code" # Shortest form
-npx @juspay/neurolink stream "Tell a story" # Real-time streaming
-npx @juspay/neurolink status # Check all providers
-```
-
-**🎯 Why This Changes Everything:**
-
-- **⏱️ Time Savings**: 15+ minutes → 2-3 minutes (83% faster)
-- **🛡️ Error Reduction**: 90% fewer credential/configuration errors
-- **🎨 Professional UX**: Beautiful terminal interface with colors and animations
-- **🔍 Smart Validation**: Real-time API key format checking and endpoint testing
-- **🔄 Safe Management**: Preserves existing .env content, creates backups automatically
-- **🧠 Intelligent Guidance**: Context-aware recommendations based on use case
-
-> **Developer Feedback**: _"Setup went from the most frustrating part to the most delightful part of using NeuroLink"_
-
-### Provider-Specific Setup (if you prefer targeted setup)
+## Get Started in Two Steps

 ```bash
-#
-
-or pnpm cli setup-google-ai
-
-npx @juspay/neurolink setup --provider openai # Industry standard, professional use
-or pnpm cli setup-openai
-
-npx @juspay/neurolink setup --provider anthropic # Advanced reasoning, safety-focused
-or pnpm cli setup-anthropic
-
-npx @juspay/neurolink setup --provider azure # Enterprise features, compliance
-or pnpm cli setup-azure
-
-npx @juspay/neurolink setup --provider bedrock # AWS ecosystem integration
-or pnpm cli setup-bedrock
-
-npx @juspay/neurolink setup --provider huggingface # Open source models, 100k+ options
-or pnpm cli setup-huggingface
+# 1. Run the interactive setup wizard (select providers, validate keys)
+pnpm dlx @juspay/neurolink setup

-
-
-npx @juspay/neurolink setup --status
-npx @juspay/neurolink setup --list # View all available providers
+# 2. Start generating with automatic provider selection
+npx @juspay/neurolink generate "Write a launch plan for multimodal chat"
 ```

-
+Need a persistent workspace? Launch loop mode with `npx @juspay/neurolink loop` - [Learn more →](docs/features/cli-loop-sessions.md)

-
-# Option 1: LiteLLM - Access 100+ models through one interface
-pip install litellm && litellm --port 4000
-export LITELLM_BASE_URL="http://localhost:4000"
-export LITELLM_API_KEY="sk-anything"
-
-# Use any of 100+ models
-npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
-npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
-# Option 2: OpenAI Compatible - Use any OpenAI-compatible endpoint with auto-discovery
-export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
-export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"
-# Auto-discovers available models via /v1/models endpoint
-npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
-
-# Or specify a model explicitly
-export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
-npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible
-
-# Option 3: Direct Provider - Quick setup with Google AI Studio (free tier)
-export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
-npx @juspay/neurolink generate "Hello, AI" --provider google-ai
-
-# Option 4: Amazon SageMaker - Use your custom deployed models
-export AWS_ACCESS_KEY_ID="your-access-key"
-export AWS_SECRET_ACCESS_KEY="your-secret-key"
-export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
-npx @juspay/neurolink generate "Hello, AI" --provider sagemaker
-```
+## 🌟 Complete Feature Set

-
-# SDK Installation for using in your typescript projects
-npm install @juspay/neurolink
-
-# 🆕 NEW: External MCP Server Integration Quick Test
-node -e "
-const { NeuroLink } = require('@juspay/neurolink');
-(async () => {
-const neurolink = new NeuroLink();
-
-// Add external filesystem MCP server
-await neurolink.addExternalMCPServer('filesystem', {
-command: 'npx',
-args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
-transport: 'stdio'
-});
-
-// External tools automatically available in generate()
-const result = await neurolink.generate({
-input: { text: 'List files in the current directory' }
-});
-console.log('🎉 External MCP integration working!');
-console.log(result.content);
-})();
-"
-```
+NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

-###
+### 🤖 AI Provider Integration

-
-import { NeuroLink } from "@juspay/neurolink";
+**12 providers unified under one API** - Switch providers with a single parameter change.

-
-
-
-
-
-
-
+| Provider | Models | Free Tier | Tool Support | Status | Documentation |
+| --- | --- | --- | --- | --- | --- |
+| **OpenAI** | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai) |
+| **Anthropic** | Claude 3.5/3.7 Sonnet, Opus | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#anthropic) |
+| **Google AI Studio** | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#google-ai) |
+| **AWS Bedrock** | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#bedrock) |
+| **Google Vertex** | Gemini via GCP | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#vertex) |
+| **Azure OpenAI** | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#azure) |
+| **LiteLLM** | 100+ models unified | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/LITELLM-INTEGRATION.md) |
+| **AWS SageMaker** | Custom deployed models | ❌ | ✅ Full | ✅ Production | [Setup Guide](docs/SAGEMAKER-INTEGRATION.md) |
+| **Mistral AI** | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#mistral) |
+| **Hugging Face** | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#huggingface) |
+| **Ollama** | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#ollama) |
+| **OpenAI Compatible** | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | [Setup Guide](docs/getting-started/provider-setup.md#openai-compatible) |

-
-console.log(`Used: ${autoResult.provider}`);
-```
+**[📖 Provider Comparison Guide](docs/reference/provider-comparison.md)** - Detailed feature matrix and selection criteria

-
+---

-
+### 🔧 Built-in Tools & MCP Integration

-
-// Enable conversation memory with configurable limits
-const neurolink = new NeuroLink({
-conversationMemory: {
-enabled: true,
-maxSessions: 50, // Keep last 50 sessions
-maxTurnsPerSession: 20, // Keep last 20 turns per session
-},
-});
-```
+**6 Core Tools** (work across all providers, zero configuration):

-
+| Tool | Purpose | Auto-Available | Documentation |
+| --- | --- | --- | --- |
+| `getCurrentTime` | Real-time clock access | ✅ | [Tool Reference](docs/sdk/custom-tools.md#getCurrentTime) |
+| `readFile` | File system reading | ✅ | [Tool Reference](docs/sdk/custom-tools.md#readFile) |
+| `writeFile` | File system writing | ✅ | [Tool Reference](docs/sdk/custom-tools.md#writeFile) |
+| `listDirectory` | Directory listing | ✅ | [Tool Reference](docs/sdk/custom-tools.md#listDirectory) |
+| `calculateMath` | Mathematical operations | ✅ | [Tool Reference](docs/sdk/custom-tools.md#calculateMath) |
+| `websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | [Tool Reference](docs/sdk/custom-tools.md#websearch) |

-
+**58+ External MCP Servers** supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

 ```typescript
-//
-
-
-
-
-
-// Detailed method name
-const story = await provider.generate({
-input: { text: "Write a short story about AI" },
-maxTokens: 200,
+// Add any MCP server dynamically
+await neurolink.addExternalMCPServer("github", {
+command: "npx",
+args: ["-y", "@modelcontextprotocol/server-github"],
+transport: "stdio",
+env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
 });

-//
-const poem = await provider.generate({ input: { text: "Write a poem" } });
-const joke = await provider.gen({ input: { text: "Tell me a joke" } });
-```
-
-### Enhanced Features
-
-#### CLI with Analytics & Evaluation
-
-```bash
-# Basic AI generation with auto-provider selection
-npx @juspay/neurolink generate "Write a business email"
-
-# LiteLLM with specific model
-npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"
-
-# With analytics and evaluation
-npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug
-
-# Streaming with tools (default behavior)
-npx @juspay/neurolink stream "What time is it and write a file with the current date"
-```
-
-#### SDK and Enhancement Features
-
-```typescript
-import { NeuroLink } from "@juspay/neurolink";
-
-// Enhanced generation with analytics
-const neurolink = new NeuroLink();
+// Tools automatically available to AI
 const result = await neurolink.generate({
-input: { text:
-enableAnalytics: true, // Get usage & cost data
-enableEvaluation: true, // Get AI quality scores
-context: { project: "Q1-sales" },
-});
-
-console.log("📊 Usage:", result.analytics);
-console.log("⭐ Quality:", result.evaluation);
-console.log("Response:", result.content);
-```
-
-### Environment Setup
-
-```bash
-# Create .env file (automatically loaded by CLI)
-echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
-echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
-echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
-
-# 🆕 NEW: Google Vertex AI for Websearch Tool
-echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
-echo 'GOOGLE_VERTEX_PROJECT="your-gcp-project-id"' >> .env
-echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env
-
-# Test configuration
-npx @juspay/neurolink status
-
-# SDK Env Provider Check - Advanced provider testing with fallback detection
-pnpm run test:providers
-```
-
-#### 🔍 SDK Env Provider Check Output
-
-```bash
-# Example output:
-✅ Google AI: Working (197 tokens)
-⚠️ OpenAI: Failed (Fallback to google-ai)
-⚠️ AWS Bedrock: Failed (Fallback to google-ai)
-```
-
-### JSON Format Support (Complete)
-
-NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:
-
-```bash
-# CLI JSON Output - Structured data for scripts
-npx @juspay/neurolink generate "Summary of AI trends" --format json
-npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai
-
-# Example JSON Output:
-{
-"content": "AI trends include increased automation...",
-"provider": "google-ai",
-"model": "gemini-2.5-flash",
-"usage": {
-"promptTokens": 15,
-"completionTokens": 127,
-"totalTokens": 142
-},
-"responseTime": 1234
-}
-```
-
-```typescript
-// SDK JSON Input/Output - Full TypeScript support
-import { createBestAIProvider } from "@juspay/neurolink";
-
-const provider = createBestAIProvider();
-
-// Structured input
-const result = await provider.generate({
-input: { text: "Create a product specification" },
-schema: {
-type: "object",
-properties: {
-name: { type: "string" },
-price: { type: "number" },
-features: { type: "array", items: { type: "string" } },
-},
-},
+input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
 });

-
-// Access structured response
-const productData = JSON.parse(result.content);
-console.log(productData.name, productData.price, productData.features);
-```
-
-**📖 [Complete Setup Guide](./docs/CONFIGURATION.md)** - All providers with detailed instructions
-
-## 🔍 **NEW: Websearch Tool with Google Vertex AI Grounding**
-
-**NeuroLink now includes a powerful websearch tool** that uses Google's native search grounding technology for real-time web information:
-
-- **🔍 Native Google Search** - Uses Google's search grounding via Vertex AI
-- **🎯 Real-time Results** - Access current web information during AI conversations
-- **🔒 Credential Protection** - Only activates when Google Vertex AI credentials are properly configured
-
-### Quick Setup & Test
-
-```bash
-# 1. Build the project first
-pnpm run build
-
-# 2. Set up environment variables (see detailed setup below)
-cp .env.example .env
-# Edit .env with your Google Vertex AI credentials
-
-# 3. Test the websearch tool directly
-node test-websearch-grounding.j
-```
-
-### Complete Google Vertex AI Setup
-
-#### Configure Environment Variables
-
-```bash
-# Add to your .env file
-GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/neurolink-service-account.json"
-GOOGLE_VERTEX_PROJECT="YOUR-PROJECT-ID"
-GOOGLE_VERTEX_LOCATION="us-central1"
 ```

-
-
-````bash
-# Build the project first
-pnpm run build
-
-# Run the dedicated test script
-node test-websearch-grounding.js
-
-### Using the Websearch Tool
+**[📖 MCP Integration Guide](docs/advanced/mcp-integration.md)** - Setup external servers

-
+---

-
-npx @juspay/neurolink generate "Weather in Tokyo now" --provider vertex
+### 💻 Developer Experience Features

-**
+**SDK-First Design** with TypeScript, IntelliSense, and type safety:

-
+| Feature | Description | Documentation |
+| --- | --- | --- |
+| **Auto Provider Selection** | Intelligent provider fallback | [SDK Guide](docs/sdk/index.md#auto-selection) |
+| **Streaming Responses** | Real-time token streaming | [Streaming Guide](docs/advanced/streaming.md) |
+| **Conversation Memory** | Automatic context management | [Memory Guide](docs/sdk/index.md#memory) |
+| **Full Type Safety** | Complete TypeScript types | [Type Reference](docs/sdk/api-reference.md) |
+| **Error Handling** | Graceful provider fallback | [Error Guide](docs/reference/troubleshooting.md) |
+| **Analytics & Evaluation** | Usage tracking, quality scores | [Analytics Guide](docs/advanced/analytics.md) |
+| **Middleware System** | Request/response hooks | [Middleware Guide](docs/CUSTOM-MIDDLEWARE-GUIDE.md) |
+| **Framework Integration** | Next.js, SvelteKit, Express | [Framework Guides](docs/sdk/framework-integration.md) |

-
-- 🔍 **Smart Model Auto-Discovery** - OpenAI Compatible provider automatically detects available models via `/v1/models` endpoint
-- 🏭 **Factory Pattern Architecture** - Unified provider management with BaseProvider inheritance
-- 🔧 **Tools-First Design** - All providers automatically include 7 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles, websearchGrounding)
-- 🔄 **12 AI Providers** - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, **LiteLLM**, **OpenAI Compatible**, Hugging Face, Ollama, Mistral AI, **SageMaker**
-- 💰 **Cost Optimization** - Automatic selection of cheapest models and LiteLLM routing
-- ⚡ **Automatic Fallback** - Never fail when providers are down, intelligent provider switching
-- 🖥️ **CLI + SDK** - Use from command line or integrate programmatically with TypeScript support
-- 🛡️ **Production Ready** - Enterprise-grade error handling, performance optimization, extracted from production
-- 🏢 **Enterprise Proxy Support** - Comprehensive corporate proxy support with zero configuration
-- ✅ **External MCP Integration** - Model Context Protocol with built-in tools + full external MCP server support
-- 🔍 **Smart Model Resolution** - Fuzzy matching, aliases, and capability-based search across all providers
-- 🏠 **Local AI Support** - Run completely offline with Ollama or through LiteLLM proxy
-- 🌍 **Universal Model Access** - Direct providers + 100,000+ models via Hugging Face + 100+ models via LiteLLM
-- 🧠 **Automatic Context Summarization** - Stateful, long-running conversations with automatic history summarization.
-- 📊 **Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
+---

-
+### 🏢 Enterprise & Production Features

-
-| --- | --- | --- |
-| Built-in Tools | ✅ **Working** | 6 core tools fully functional across all providers |
-| SDK Custom Tools | ✅ **Working** | Register custom tools programmatically |
-| **External MCP Tools** | ✅ **Working** | **Full external MCP server support with dynamic tool discovery** |
-| Tool Execution | ✅ **Working** | Real-time AI tool calling with all tool types |
-| **Streaming Support** | ✅ **Working** | **External MCP tools work with streaming generation** |
-| **Multi-Provider** | ✅ **Working** | **External tools work across all AI providers** |
-| **CLI Integration** | ✅ **READY** | **Production-ready with external MCP support** |
+**Production-ready capabilities for regulated industries:**

-
+| Feature | Description | Use Case | Documentation |
+| --- | --- | --- | --- |
+| **Enterprise Proxy** | Corporate proxy support | Behind firewalls | [Proxy Setup](docs/ENTERPRISE-PROXY-SETUP.md) |
+| **Redis Memory** | Distributed conversation state | Multi-instance deployment | [Redis Guide](docs/getting-started/provider-setup.md#redis) |
+| **Cost Optimization** | Automatic cheapest model selection | Budget control | [Cost Guide](docs/advanced/index.md) |
+| **Multi-Provider Failover** | Automatic provider switching | High availability | [Failover Guide](docs/advanced/index.md) |
+| **Telemetry & Monitoring** | OpenTelemetry integration | Observability | [Telemetry Guide](docs/TELEMETRY-GUIDE.md) |
+| **Security Hardening** | Credential management, auditing | Compliance | [Security Guide](docs/advanced/enterprise.md) |
+| **Custom Model Hosting** | SageMaker integration | Private models | [SageMaker Guide](docs/SAGEMAKER-INTEGRATION.md) |
+| **Load Balancing** | LiteLLM proxy integration | Scale & routing | [Load Balancing](docs/LITELLM-INTEGRATION.md) |

-
-# Test built-in tools (works immediately)
-npx @juspay/neurolink generate "What time is it?" --debug
-
-# 🆕 NEW: External MCP server integration (SDK)
-import { NeuroLink } from '@juspay/neurolink';
-
-const neurolink = new NeuroLink();
-
-// Add external MCP server (e.g., Bitbucket)
-await neurolink.addExternalMCPServer('bitbucket', {
-command: 'npx',
-args: ['-y', '@nexus2520/bitbucket-mcp-server'],
-transport: 'stdio',
-env: {
-BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
-BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
-BITBUCKET_BASE_URL: 'https://bitbucket.example.com'
-}
-});
+**Security & Compliance:**

-
-
-
-
-
+- ✅ SOC2 Type II compliant deployments
+- ✅ ISO 27001 certified infrastructure compatible
+- ✅ GDPR-compliant data handling (EU providers available)
+- ✅ HIPAA compatible (with proper configuration)
+- ✅ Hardened OS verified (SELinux, AppArmor)
+- ✅ Zero credential logging
+- ✅ Encrypted configuration storage

-
-npx @juspay/neurolink mcp discover --format table
-````
+**[📖 Enterprise Deployment Guide](docs/advanced/enterprise.md)** - Complete production checklist

-
+---

-
+### 🎨 Professional CLI

-
-import { NeuroLink } from "@juspay/neurolink";
-const neurolink = new NeuroLink();
-
-// Register a simple tool
-neurolink.registerTool("weatherLookup", {
-description: "Get current weather for a city",
-parameters: z.object({
-city: z.string().describe("City name"),
-units: z.enum(["celsius", "fahrenheit"]).optional(),
-}),
-execute: async ({ city, units = "celsius" }) => {
-// Your implementation here
-return {
-city,
-temperature: 22,
-units,
-condition: "sunny",
-};
-},
-});
+**15+ commands** for every workflow:

-
-
-
-
-
-
-
-neurolink.
-
-
-execute: async () => ({ price: 150.25 }),
-},
-calculator: {
-description: "Calculate math",
-execute: async () => ({ result: 42 }),
-},
-});
+| Command | Purpose | Example | Documentation |
+| --- | --- | --- | --- |
+| `setup` | Interactive provider configuration | `neurolink setup` | [Setup Guide](docs/cli/index.md) |
+| `generate` | Text generation | `neurolink gen "Hello"` | [Generate](docs/cli/commands.md#generate) |
+| `stream` | Streaming generation | `neurolink stream "Story"` | [Stream](docs/cli/commands.md#stream) |
+| `status` | Provider health check | `neurolink status` | [Status](docs/cli/commands.md#status) |
+| `loop` | Interactive session | `neurolink loop` | [Loop](docs/cli/commands.md#loop) |
+| `mcp` | MCP server management | `neurolink mcp discover` | [MCP CLI](docs/cli/commands.md#mcp) |
+| `models` | Model listing | `neurolink models` | [Models](docs/cli/commands.md#models) |
+| `eval` | Model evaluation | `neurolink eval` | [Eval](docs/cli/commands.md#eval) |

-
-neurolink.registerTools([
-{
-name: "lighthouseTool1",
-tool: {
-description: "Lighthouse analytics tool",
-parameters: z.object({
-merchantId: z.string(),
-dateRange: z.string().optional(),
-}),
-execute: async ({ merchantId, dateRange }) => {
-// Lighthouse tool implementation with Zod schema
-return { data: "analytics result" };
-},
-},
-},
-{
-name: "lighthouseTool2",
-tool: {
-description: "Payment processing tool",
-execute: async () => ({ status: "processed" }),
-},
-},
-]);
-```
+**[📖 Complete CLI Reference](docs/cli/commands.md)** - All commands and options

 ## 💰 Smart Model Selection

@@ -614,317 +230,94 @@ Start the loop with conversation memory to have the AI remember the context of y
 npx @juspay/neurolink loop --enable-conversation-memory
 ```

-
-
-### CLI Commands
-
-```bash
-# Text generation with automatic MCP tool detection (default)
-npx @juspay/neurolink generate "What time is it?"
-
-# Alternative short form
-npx @juspay/neurolink gen "What time is it?"
-
-# Disable tools for training-data-only responses
-npx @juspay/neurolink generate "What time is it?" --disable-tools
-
-# With custom timeout for complex prompts
-npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m
-
-# Real-time streaming with agent support (default)
-npx @juspay/neurolink stream "What time is it?"
-
-# Streaming without tools (traditional mode)
-npx @juspay/neurolink stream "Tell me a story" --disable-tools
-
-# Streaming with extended timeout
-npx @juspay/neurolink stream "Write a long story" --timeout 5m
-
-# Provider diagnostics
-npx @juspay/neurolink status --verbose
-
-# Batch processing
-echo -e "Write a haiku\nExplain gravity" > prompts.txt
-npx @juspay/neurolink batch prompts.txt --output results.json
-
-# Batch with custom timeout per request
-npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
-```
-
-### SDK Integration
-
-```typescript
-// SvelteKit API route with timeout handling
-export const POST: RequestHandler = async ({ request }) => {
-const { message } = await request.json();
-const provider = createBestAIProvider();
-
-try {
-// NEW: Primary streaming method (recommended)
-const result = await provider.stream({
-input: { text: message },
-timeout: "2m", // 2 minutes for streaming
-});
-
-// Process stream
-for await (const chunk of result.stream) {
-// Handle streaming content
-console.log(chunk.content);
-}
-
-// LEGACY: Backward compatibility (still works)
-const legacyResult = await provider.stream({ input: { text:
-prompt: message,
-timeout: "2m", // 2 minutes for streaming
-});
-return new Response(result.toReadableStream());
-} catch (error) {
-if (error.name === "TimeoutError") {
-return new Response("Request timed out", { status: 408 });
-}
-throw error;
-}
-};
-
-// Next.js API route with timeout
-export async function POST(request: NextRequest) {
-const { prompt } = await request.json();
-const provider = createBestAIProvider();
-
-const result = await provider.generate({
-prompt,
-timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
-});
-
-return NextResponse.json({ text: result.content });
-}
-```
-
-## 🎬 See It In Action
-
-**No installation required!** Experience NeuroLink through comprehensive visual documentation:
-
-### 📱 Interactive Web Demo
-
-```bash
-cd neurolink-demo && node server.js
-# Visit http://localhost:9876 for live demo
-```
-
-- **Real AI Integration**: All 9 providers functional with live generation
-- **Complete Use Cases**: Business, creative, and developer scenarios
-- **Performance Metrics**: Live provider analytics and response times
-- **Privacy Options**: Test local AI with Ollama
-
-### 🖥️ CLI Demonstrations
-
-- **[CLI Help & Commands](./docs/visual-content/cli-videos/cli-01-cli-help.mp4)** - Complete command reference
-- **[Provider Status Check](./docs/visual-content/cli-videos/cli-02-provider-status.mp4)** - Connectivity verification (now with authentication and model availability checks)
-- **[Text Generation](./docs/visual-content/cli-videos/cli-03-text-generation.mp4)** - Real AI content creation
-
-### 🌐 Web Interface Videos
-
-- **[Business Use Cases](./neurolink-demo/videos/business-use-cases.mp4)** - Professional applications
-- **[Developer Tools](./neurolink-demo/videos/developer-tools.mp4)** - Code generation and APIs
-- **[Creative Tools](./neurolink-demo/videos/creative-tools.mp4)** - Content creation
-
-**[📖 Complete Visual Documentation](./docs/VISUAL-DEMOS.md)** - All screenshots and videos
-
-## 📚 Documentation
-
-### Getting Started
-
-- **[🔧 Provider Setup](./docs/PROVIDER-CONFIGURATION.md)** - Complete environment configuration
-- **[🖥️ CLI Guide](./docs/CLI-GUIDE.md)** - All commands and options
-- **[🏗️ SDK Integration](./docs/FRAMEWORK-INTEGRATION.md)** - Next.js, SvelteKit, React
-- **[⚙️ Environment Variables](./docs/ENVIRONMENT-VARIABLES.md)** - Full configuration guide
+Skip the wizard and configure manually? See [`docs/getting-started/provider-setup.md`](docs/getting-started/provider-setup.md).

-
+## CLI & SDK Essentials

-
-- **[🔄 MCP Foundation](./docs/MCP-FOUNDATION.md)** - Model Context Protocol architecture
-- **[⚡ Dynamic Models](./docs/DYNAMIC-MODELS.md)** - Self-updating model configurations and cost optimization
-- **[🧠 AI Analysis Tools](./docs/AI-ANALYSIS-TOOLS.md)** - Usage optimization and benchmarking
-- **[🛠️ AI Workflow Tools](./docs/AI-WORKFLOW-TOOLS.md)** - Development lifecycle assistance
-- **[🎬 Visual Demos](./docs/VISUAL-DEMOS.md)** - Screenshots and videos
-
-### Reference
-
-- **[📚 API Reference](./docs/API-REFERENCE.md)** - Complete TypeScript API
-- **[🔗 Framework Integration](./docs/FRAMEWORK-INTEGRATION.md)** - SvelteKit, Next.js, Express.js
-
-## 🏗️ Supported Providers & Models
-
-| Provider | Models | Auth Method | Free Tier | Tool Support | Key Benefit |
-| --- | --- | --- | --- | --- | --- |
-| **🔗 LiteLLM** 🆕 | **100+ Models** (All Providers) | Proxy Server | Varies | ✅ Full | **Universal Access** |
-| **🔗 OpenAI Compatible** 🆕 | **Any OpenAI-compatible endpoint** | API Key + Base URL | Varies | ✅ Full | **Auto-Discovery + Flexibility** |
-| **Google AI Studio** | Gemini 2.5 Flash/Pro | API Key | ✅ | ✅ Full | Free Tier Available |
-| **OpenAI** | GPT-4o, GPT-4o-mini | API Key | ❌ | ✅ Full | Industry Standard |
-| **Anthropic** | Claude 3.5 Sonnet | API Key | ❌ | ✅ Full | Advanced Reasoning |
-| **Amazon Bedrock** | Claude 3.5/3.7 Sonnet | AWS Credentials | ❌ | ✅ Full\* | Enterprise Scale |
-| **Google Vertex AI** | Gemini 2.5 Flash | Service Account | ❌ | ✅ Full | Enterprise Google |
-| **Azure OpenAI** | GPT-4, GPT-3.5 | API Key + Endpoint | ❌ | ✅ Full | Microsoft Ecosystem |
-| **Ollama** 🆕 | Llama 3.2, Gemma, Mistral (Local) | None (Local) | ✅ | ⚠️ Partial | Complete Privacy |
-| **Hugging Face** 🆕 | 100,000+ open source models | API Key | ✅ | ⚠️ Partial | Open Source |
-| **Mistral AI** 🆕 | Tiny, Small, Medium, Large | API Key | ✅ | ✅ Full | European/GDPR |
-| **Amazon SageMaker** 🆕 | Custom Models (Your Endpoints) | AWS Credentials | ❌ | ✅ Full | Custom Model Hosting |
-
-**Tool Support Legend:**
-
-- ✅ Full: All tools working correctly
-- ⚠️ Partial: Tools visible but may not execute properly
-- ❌ Limited: Issues with model or configuration
-- \* Bedrock requires valid AWS credentials, Ollama requires specific models like gemma3n for tool support
-
-**✨ Auto-Selection**: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
-
-### 🔍 Smart Model Auto-Discovery (OpenAI Compatible)
-
-The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:
+`neurolink` CLI mirrors the SDK so teams can script experiments and codify them later.

 ```bash
-#
-
-
+# Discover available providers and models
+npx @juspay/neurolink status
+npx @juspay/neurolink models list --provider google-ai

-#
-npx @juspay/neurolink generate "
-
+# Route to a specific provider/model
+npx @juspay/neurolink generate "Summarize customer feedback" \
+--provider azure --model gpt-4o-mini

-#
-
-
+# Turn on analytics + evaluation for observability
+npx @juspay/neurolink generate "Draft release notes" \
+--enable-analytics --enable-evaluation --format json
 ```

-**How it works:**
-
-- Queries `/v1/models` endpoint to discover available models
-- Automatically selects the first available model when none specified
-- Falls back gracefully if discovery fails
-- Works with any OpenAI-compatible service (OpenRouter, vLLM, LiteLLM, etc.)
-
-## 🎯 Production Features
-
-### Enterprise-Grade Reliability
-
-- **Automatic Failover**: Seamless provider switching on failures
-- **Error Recovery**: Comprehensive error handling and logging
-- **Performance Monitoring**: Built-in analytics and metrics
-- **Type Safety**: Full TypeScript support with IntelliSense
-
-### AI Platform Capabilities
-
-- **MCP Foundation**: Universal AI development platform with 10+ specialized tools
-- **Analysis Tools**: Usage optimization, performance benchmarking, parameter tuning
-- **Workflow Tools**: Test generation, code refactoring, documentation, debugging
-- **Extensibility**: Connect external tools and services via MCP protocol
-- **🆕 Dynamic Server Management**: Programmatically add MCP servers at runtime
-
-### 🔧 External MCP Server Management ✅ **AVAILABLE NOW**
-
-**External MCP integration is now production-ready:**
-
-- ✅ 6 built-in tools working across all providers
-- ✅ SDK custom tool registration
-- ✅ **External MCP server management** (add, remove, list, test servers)
-- ✅ **Dynamic tool discovery** (automatic tool registration from external servers)
-- ✅ **Multi-provider support** (external tools work with all AI providers)
-- ✅ **Streaming integration** (external tools work with real-time streaming)
-- ✅ **Enhanced tool tracking** (proper parameter extraction and execution logging)

 ```typescript
-
-const neurolink = new NeuroLink();
-
-// Server management
-await neurolink.addExternalMCPServer(serverId, config);
-await neurolink.removeExternalMCPServer(serverId);
-const servers = neurolink.listExternalMCPServers();
-const server = neurolink.getExternalMCPServer(serverId);
-
-// Tool management
-const tools = neurolink.getExternalMCPTools();
-const serverTools = neurolink.getExternalMCPServerTools(serverId);
-
-// Direct tool execution
-const result = await neurolink.executeExternalMCPTool(
-serverId,
-toolName,
-params,
-);
-
-// Statistics and monitoring
-const stats = neurolink.getExternalMCPStatistics();
-await neurolink.shutdownExternalMCPServers();
-```
-
-## 🤝 Contributing
+import { NeuroLink } from "@juspay/neurolink";

-
+const neurolink = new NeuroLink({
+conversationMemory: {
+enabled: true,
+store: "redis",
+},
+enableOrchestration: true,
+});

-
+const result = await neurolink.generate({
+input: {
+text: "Create a comprehensive analysis",
+files: [
+"./sales_data.csv", // Auto-detected as CSV
+"./diagrams/architecture.png", // Auto-detected as image
+],
+},
+enableEvaluation: true,
+region: "us-east-1",
+});

-
-
-cd neurolink
-pnpm install
-npx husky install # Setup git hooks for build rule enforcement
-pnpm setup:complete # One-command setup with all automation
-pnpm test:adaptive # Intelligent testing
-pnpm build:complete # Full build pipeline
+console.log(result.content);
+console.log(result.evaluation?.overallScore);
 ```

-
+Full command and API breakdown lives in [`docs/cli/commands.md`](docs/cli/commands.md) and [`docs/sdk/api-reference.md`](docs/sdk/api-reference.md).

-
+## Platform Capabilities at a Glance

-
-
-
-
-
-
-
-
-
-pnpm run format # Auto-fix code formatting
-pnpm run lint # ESLint validation with zero-error tolerance
-
-# Environment & Setup (2-minute initialization)
-pnpm setup:complete # Complete project setup
-pnpm env:setup # Safe .env configuration
-pnpm env:backup # Environment backup
-
-# Testing (60-80% faster)
-pnpm test:adaptive # Intelligent test selection
-pnpm test:providers # AI provider validation
-
-# Documentation & Content
-pnpm docs:sync # Cross-file documentation sync
-pnpm content:generate # Automated content creation
-
-# Build & Deployment
-pnpm build:complete # 7-phase enterprise pipeline
-pnpm dev:health # System health monitoring
-```
+| Capability | Highlights |
+| --- | --- |
+| **Provider unification** | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
+| **Multimodal pipeline** | Stream images + CSV data across providers with local/remote assets. Auto-detection for mixed file types. |
+| **Quality & governance** | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
+| **Memory & context** | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
+| **CLI tooling** | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
+| **Enterprise ops** | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
+| **Tool ecosystem** | MCP auto discovery, LiteLLM hub access, SageMaker custom deployment, web search. |

-
+## Documentation Map

-
+| Area | When to Use | Link |
+| --- | --- | --- |
+| Getting started | Install, configure, run first prompt | [`docs/getting-started/index.md`](docs/getting-started/index.md) |
+| Feature guides | Understand new functionality front-to-back | [`docs/features/index.md`](docs/features/index.md) |
+| CLI reference | Command syntax, flags, loop sessions | [`docs/cli/index.md`](docs/cli/index.md) |
+| SDK reference | Classes, methods, options | [`docs/sdk/index.md`](docs/sdk/index.md) |
+| Integrations | LiteLLM, SageMaker, MCP, Mem0 | [`docs/LITELLM-INTEGRATION.md`](docs/LITELLM-INTEGRATION.md) |
+| Operations | Configuration, troubleshooting, provider matrix | [`docs/reference/index.md`](docs/reference/index.md) |
+| Visual demos | Screens, GIFs, interactive tours | [`docs/demos/index.md`](docs/demos/index.md) |

-##
+## Integrations

-
+- **LiteLLM 100+ model hub** – Unified access to third-party models via LiteLLM routing. → [`docs/LITELLM-INTEGRATION.md`](docs/LITELLM-INTEGRATION.md)
+- **Amazon SageMaker** – Deploy and call custom endpoints directly from NeuroLink CLI/SDK. → [`docs/SAGEMAKER-INTEGRATION.md`](docs/SAGEMAKER-INTEGRATION.md)
+- **Mem0 conversational memory** – Persistent semantic memory with vector store support. → [`docs/MEM0_INTEGRATION.md`](docs/MEM0_INTEGRATION.md)
+- **Enterprise proxy & security** – Configure outbound policies and compliance posture. → [`docs/ENTERPRISE-PROXY-SETUP.md`](docs/ENTERPRISE-PROXY-SETUP.md)
+- **Configuration automation** – Manage environments, regions, and credentials safely. → [`docs/CONFIGURATION-MANAGEMENT.md`](docs/CONFIGURATION-MANAGEMENT.md)
+- **MCP tool ecosystem** – Auto-discover Model Context Protocol tools and extend workflows. → [`docs/advanced/mcp-integration.md`](docs/advanced/mcp-integration.md)

-##
+## Contributing & Support

-- [
--
-- [
+- Bug reports and feature requests → [GitHub Issues](https://github.com/juspay/neurolink/issues)
+- Development workflow, testing, and pull request guidelines → [`docs/development/contributing.md`](docs/development/contributing.md)
+- Documentation improvements → open a PR referencing the [documentation matrix](docs/tracking/FEATURE-DOC-MATRIX.md).

 ---

-
-<strong>Built with ❤️ by <a href="https://juspay.in">Juspay Technologies</a></strong>
-</p>
+NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.
|