@juspay/neurolink 1.2.3 → 1.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +108 -0
- package/README.md +213 -1138
- package/dist/cli/commands/config.d.ts +373 -0
- package/dist/cli/commands/config.js +532 -0
- package/dist/cli/commands/mcp.d.ts +7 -0
- package/dist/cli/commands/mcp.js +434 -0
- package/dist/cli/index.d.ts +9 -0
- package/dist/cli/index.js +451 -169
- package/dist/core/factory.js +10 -2
- package/dist/core/types.d.ts +3 -1
- package/dist/core/types.js +2 -0
- package/dist/index.d.ts +1 -1
- package/dist/index.js +1 -1
- package/dist/mcp/context-manager.d.ts +164 -0
- package/dist/mcp/context-manager.js +273 -0
- package/dist/mcp/factory.d.ts +144 -0
- package/dist/mcp/factory.js +141 -0
- package/dist/mcp/orchestrator.d.ts +170 -0
- package/dist/mcp/orchestrator.js +372 -0
- package/dist/mcp/registry.d.ts +188 -0
- package/dist/mcp/registry.js +373 -0
- package/dist/mcp/servers/ai-providers/ai-core-server.d.ts +10 -0
- package/dist/mcp/servers/ai-providers/ai-core-server.js +280 -0
- package/dist/neurolink.d.ts +2 -2
- package/dist/neurolink.js +1 -1
- package/dist/providers/anthropic.d.ts +34 -0
- package/dist/providers/anthropic.js +307 -0
- package/dist/providers/azureOpenAI.d.ts +37 -0
- package/dist/providers/azureOpenAI.js +338 -0
- package/dist/providers/index.d.ts +4 -0
- package/dist/providers/index.js +5 -1
- package/dist/utils/providerUtils.js +8 -2
- package/package.json +163 -97
package/README.md
CHANGED

@@ -4,32 +4,60 @@
 [](http://www.typescriptlang.org/)
 [](https://opensource.org/licenses/MIT)

-> Production-ready AI toolkit with multi-provider support, automatic fallback, and full TypeScript integration. **Now with
+> Production-ready AI toolkit with multi-provider support, automatic fallback, and full TypeScript integration. **Now with MCP Foundation and professional CLI!**

 **NeuroLink** provides a unified interface for AI providers (OpenAI, Amazon Bedrock, Google Vertex AI) with intelligent fallback, streaming support, and type-safe APIs. Available as both a **programmatic SDK** and a **professional CLI tool**. Extracted from production use at Juspay.

+## 🎉 **NEW: MCP Foundation (Model Context Protocol)**
+
+**NeuroLink v1.3.0** introduces a groundbreaking **MCP Foundation** that transforms NeuroLink from an AI SDK into a **Universal AI Development Platform** while maintaining the simple factory method interface.
+
+### **🏆 Phase 1 Complete: 27/27 Tests Passing**
+- ✅ **Factory-First Architecture**: MCP tools work internally, users see simple factory methods
+- ✅ **MCP Compatible**: 99% compatible with existing MCP tools and servers
+- ✅ **Enterprise Ready**: Rich context, permissions, tool orchestration, analytics
+- ✅ **Production Tested**: <1ms tool execution, comprehensive error handling
+
+### **🚀 What This Means for You**
+```typescript
+// Same simple interface you love
+const result = await provider.generateText("Create a React component");
+
+// But now powered by enterprise-grade MCP tool orchestration internally:
+// - Context tracking across tool chains
+// - Permission-based security
+// - Tool registry and discovery
+// - Pipeline execution with error recovery
+// - Rich analytics and monitoring
+```
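
The "factory-first" idea in the added snippet (the caller sees only a plain `generateText()` surface while context tracking happens behind it) can be pictured with a small stub. Everything below — `SimpleProvider`, `withToolContext`, the stub backend — is an illustrative sketch, not part of the published NeuroLink API:

```typescript
// Minimal shape of a provider as the caller sees it (illustrative only).
interface SimpleProvider {
  generateText(prompt: string): Promise<{ text: string; provider: string }>;
}

// Wrap a provider so every call is recorded in a tool-chain context,
// without changing the interface the caller uses.
function withToolContext(inner: SimpleProvider, chain: string[]): SimpleProvider {
  return {
    async generateText(prompt: string) {
      chain.push(`generateText:${prompt.slice(0, 24)}`); // context tracking
      return inner.generateText(prompt);
    },
  };
}

// Stub backend standing in for a real OpenAI/Bedrock/Vertex provider.
const stub: SimpleProvider = {
  async generateText(prompt: string) {
    return { text: `echo: ${prompt}`, provider: "stub" };
  },
};

const chain: string[] = [];
const provider = withToolContext(stub, chain);

provider.generateText("Create a React component").then((r) => {
  console.log(r.provider, chain.length); // stub 1
});
```

The caller's code is unchanged by the wrapping; only the internals gained bookkeeping, which is the property the diff's "Factory-First Architecture" bullet describes.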
+
+### **🔧 MCP Architecture**
+- **🏭 MCP Server Factory**: Standard MCP-compatible server creation
+- **🧠 Context Management**: Rich context with 15+ fields + tool chain tracking
+- **📋 Tool Registry**: Discovery, registration, execution + statistics
+- **🎼 Tool Orchestration**: Single tools + sequential pipelines + error handling
+- **🤖 AI Provider Integration**: Core AI tools with schema validation
+
+**Ready for Phase 2**: MCP tool migration enabling unlimited extensibility while preserving the simple interface developers love.
+
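
A minimal sketch of what "sequential pipelines + error handling" can mean in practice. `Tool` and `runPipeline` are assumed names for illustration; the diff does not expose NeuroLink's actual orchestrator interface:

```typescript
// A tool transforms a string input into a string output (illustrative shape).
type Tool = (input: string) => Promise<string>;

// Run tools in order, feeding each output into the next tool; stop on the
// first failure and report which step failed instead of throwing.
async function runPipeline(
  tools: Tool[],
  input: string
): Promise<{ ok: boolean; output: string; failedAt?: number }> {
  let current = input;
  for (let i = 0; i < tools.length; i++) {
    try {
      current = await tools[i](current);
    } catch {
      return { ok: false, output: current, failedAt: i };
    }
  }
  return { ok: true, output: current };
}

const upper: Tool = async (s) => s.toUpperCase();
const exclaim: Tool = async (s) => s + "!";

runPipeline([upper, exclaim], "hello").then((r) => console.log(r.output)); // HELLO!
```

Returning a result object rather than throwing lets a caller decide whether to retry, skip, or surface the failed step — one plausible reading of the "error recovery" bullet above.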
 ## 🚀 Quick Start

 ### 📦 Installation

 ```bash
-#
-npm install -g @juspay/neurolink
-
-# Or use directly with npx (no installation required)
+# CLI Usage (No Installation Required)
 npx @juspay/neurolink generate-text "Hello, AI!"
+npx @juspay/neurolink status

-#
+# Global CLI Installation
 npm install -g @juspay/neurolink
 neurolink generate-text "Write a haiku about programming"
-neurolink status --verbose
-```

-
-```bash
+# SDK Installation
 npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
 ```

+### Programmatic Usage
 ```typescript
 import { createBestAIProvider } from '@juspay/neurolink';

@@ -40,8 +68,41 @@ const result = await provider.generateText({
 });

 console.log(result.text);
+console.log(`Used: ${result.provider}`);
+```
+
+### Environment Setup
+```bash
+# Create .env file (automatically loaded by CLI) ✨ NEW!
+# OpenAI
+echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
+echo 'OPENAI_MODEL="gpt-4o"' >> .env
+
+# Amazon Bedrock
+echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
+echo 'AWS_SECRET_ACCESS_KEY="your-aws-secret-key"' >> .env
+echo 'AWS_REGION="us-east-1"' >> .env
+echo 'BEDROCK_MODEL="arn:aws:bedrock:region:account:inference-profile/model"' >> .env
+
+# Google Vertex AI
+echo 'GOOGLE_VERTEX_PROJECT="your-project-id"' >> .env
+echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env
+echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
+
+# Anthropic
+echo 'ANTHROPIC_API_KEY="sk-ant-api03-your-key"' >> .env
+
+# Azure OpenAI
+echo 'AZURE_OPENAI_API_KEY="your-azure-key"' >> .env
+echo 'AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"' >> .env
+echo 'AZURE_OPENAI_DEPLOYMENT_ID="your-deployment-name"' >> .env
+
+# Test configuration (automatically loads .env)
+npx @juspay/neurolink status
 ```

+**📖 [Complete Environment Variables Guide](./docs/ENVIRONMENT-VARIABLES.md)** - Detailed setup instructions for all providers
+
 ## 🎬 Complete Visual Documentation

 **No installation required!** Experience NeuroLink's capabilities through our comprehensive visual ecosystem:
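
As a rough sketch of how the variables above could drive provider detection: the real selection logic inside `createBestAIProvider()` is not shown in this diff, so `detectProviders` here is purely illustrative, keyed off the same environment variable names the added README section documents:

```typescript
// Map of environment variables to configured providers (illustrative helper,
// not a NeuroLink export; the required-variable combinations are assumptions).
type Env = Record<string, string | undefined>;

function detectProviders(env: Env): string[] {
  const available: string[] = [];
  if (env.OPENAI_API_KEY) available.push("openai");
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) available.push("bedrock");
  if (env.GOOGLE_VERTEX_PROJECT) available.push("vertex");
  if (env.ANTHROPIC_API_KEY) available.push("anthropic");
  if (env.AZURE_OPENAI_API_KEY && env.AZURE_OPENAI_ENDPOINT) available.push("azure");
  return available;
}

// e.g. with only an OpenAI key set, only "openai" is reported configured.
console.log(detectProviders({ OPENAI_API_KEY: "sk-test" }));
```

In a real process you would pass `process.env`; the point is only that each provider in the list above maps to a distinct set of variables.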
@@ -58,30 +119,55 @@ console.log(result.text);
 | **Developer Tools** |  | Code generation and API docs |
 | **Analytics & Monitoring** |  | Real-time provider analytics |

-#### **🎥 Complete Demo Videos** *(
-- **[Basic Examples](./neurolink-demo/videos/basic-examples/)** -
-- **[Business Use Cases](./neurolink-demo/videos/business-use-cases/)** -
-- **[Creative Tools](./neurolink-demo/videos/creative-tools/)** -
-- **[Developer Tools](./neurolink-demo/videos/developer-tools/)** - React
-- **[Monitoring & Analytics](./neurolink-demo/videos/monitoring/)** -
+#### **🎥 Complete Demo Videos** *(Real AI generation showing SDK use cases)*
+- **[Basic Examples WebM](./neurolink-demo/videos/basic-examples.webm) | [MP4](./neurolink-demo/videos/basic-examples.mp4)** - Core SDK functionality: text generation, streaming, provider selection, status checks
+- **[Business Use Cases WebM](./neurolink-demo/videos/business-use-cases.webm) | [MP4](./neurolink-demo/videos/business-use-cases.mp4)** - Professional applications: marketing emails, quarterly data analysis, executive summaries
+- **[Creative Tools WebM](./neurolink-demo/videos/creative-tools.webm) | [MP4](./neurolink-demo/videos/creative-tools.mp4)** - Content creation: storytelling, translation, blog post ideas
+- **[Developer Tools WebM](./neurolink-demo/videos/developer-tools.webm) | [MP4](./neurolink-demo/videos/developer-tools.mp4)** - Technical applications: React components, API documentation, error debugging
+- **[Monitoring & Analytics WebM](./neurolink-demo/videos/monitoring-analytics.webm) | [MP4](./neurolink-demo/videos/monitoring-analytics.mp4)** - SDK features: performance benchmarks, provider fallback, structured data generation
+
+**Available formats:**
+- **WebM** (web-optimized): All videos available as `.webm` for web embedding
+- **MP4** (universal): All videos available as `.mp4` for desktop and mobile compatibility

 ### 🖥️ CLI Tool Screenshots & Videos

-#### **📸 Professional CLI Screenshots**
+#### **📸 Professional CLI Screenshots** *(Latest: June 8, 2025)*
+| Command | Screenshot | Description |
+|---------|------------|-------------|
+| **CLI Help Overview** |  | Complete command reference |
+| **Provider Status Check** |  | All provider connectivity verified |
+| **Text Generation** |  | Real AI haiku generation with JSON |
+| **Auto Provider Selection** |  | Automatic provider selection working |
+| **Batch Processing** |  | Multi-prompt processing with results |
+
+#### **🎥 CLI Demonstration Videos** *(Professional H.264 MP4 format)*
+- **[CLI Help & Overview](./docs/visual-content/cli-videos/cli-help.mp4)** (44KB) - Complete command reference and usage examples
+- **[Provider Status Check](./docs/visual-content/cli-videos/cli-provider-status.mp4)** (496KB) - Connectivity testing and response time measurement
+- **[Text Generation](./docs/visual-content/cli-videos/cli-text-generation.mp4)** (100KB) - Real AI content generation with different providers
+- **[MCP Command Help](./docs/visual-content/cli-videos/mcp-help.mp4)** (36KB) - MCP server management commands
+- **[MCP Server Listing](./docs/visual-content/cli-videos/mcp-list.mp4)** (16KB) - MCP server discovery and status
+
+### 🔧 MCP (Model Context Protocol) Visual Documentation
+
+#### **📸 MCP CLI Screenshots** *(Generated Jan 10, 2025)*
 | Command | Screenshot | Description |
 |---------|------------|-------------|
-| **
-| **
-| **
-| **
-| **
-
-
-
-- **[
-- **[
-- **[
-
+| **MCP Help Overview** |  | Complete MCP command reference |
+| **Server Installation** |  | Installing external MCP servers |
+| **Server Status Check** |  | MCP server connectivity and status |
+| **Server Testing** |  | Testing MCP server connectivity |
+| **Custom Server Setup** |  | Adding custom MCP server configurations |
+| **Workflow Integration** |  | Complete MCP workflow demonstrations |
+
+#### **🎥 MCP Demo Videos** *(Real MCP server integration)*
+- **[Server Management WebM](./neurolink-demo/videos/mcp-demos/mcp-server-management-demo.webm) | [MP4](./neurolink-demo/videos/mcp-demos/mcp-server-management-demo.mp4)** - Installing, configuring, and testing MCP servers (~45s)
+- **[Tool Execution WebM](./neurolink-demo/videos/mcp-demos/mcp-tool-execution-demo.webm) | [MP4](./neurolink-demo/videos/mcp-demos/mcp-tool-execution-demo.mp4)** - Executing tools from external MCP servers (~60s)
+- **[Workflow Integration WebM](./neurolink-demo/videos/mcp-demos/mcp-workflow-integration-demo.webm) | [MP4](./neurolink-demo/videos/mcp-demos/mcp-workflow-integration-demo.mp4)** - Complete workflow using multiple MCP servers (~90s)
+
+**MCP Documentation**: All MCP visual content demonstrates real external server integration and tool execution capabilities.
+
+**Video Quality**: All CLI videos now use professional H.264 encoding with universal compatibility across platforms and documentation systems.

 ### 💻 Live Interactive Demo
 - **Working Express.js server** with real API integration

@@ -89,33 +175,19 @@ console.log(result.text);
 - **15+ use cases** demonstrated across business, creative, and developer tools
 - **Real-time provider analytics** with performance metrics

-
-- ✅ **No Installation Required** - See everything in action before installing
-- ✅ **Real AI Content** - All screenshots and videos show actual AI generation
-- ✅ **Professional Quality** - 1920x1080 resolution suitable for documentation
-- ✅ **Complete Coverage** - Every major feature visually documented
-- ✅ **Production Validation** - Demonstrates real-world usage patterns
-
-[📁 View complete visual documentation](./neurolink-demo/) including all screenshots, videos, and interactive examples.
+**Access**: `cd neurolink-demo && npm start` - [📁 View complete visual documentation](./neurolink-demo/)

-##
+## 📚 Documentation

-
-- [
-- [
-- [
-
-
-
-- [React Hook](#react-hook)
-- [API Reference](#api-reference)
-- [Provider Configuration](#provider-configuration)
-- [Advanced Patterns](#advanced-patterns)
-- [Error Handling](#error-handling)
-- [Performance](#performance)
-- [Contributing](#contributing)
+### Quick Reference
+- **[🖥️ CLI Guide](./docs/CLI-GUIDE.md)** - Complete CLI commands, options, and examples
+- **[🏗️ Framework Integration](./docs/FRAMEWORK-INTEGRATION.md)** - SvelteKit, Next.js, Express.js, React hooks
+- **[🔧 Environment Variables](./docs/ENVIRONMENT-VARIABLES.md)** - Complete setup guide for all AI providers
+- **[⚙️ Provider Configuration](./docs/PROVIDER-CONFIGURATION.md)** - OpenAI, Bedrock, Vertex AI setup guides
+- **[📚 API Reference](./docs/API-REFERENCE.md)** - Complete TypeScript API documentation
+- **[🎬 Visual Demos](./docs/VISUAL-DEMOS.md)** - Screenshots, videos, and interactive examples

-
+### Key Features

 🔄 **Multi-Provider Support** - OpenAI, Amazon Bedrock, Google Vertex AI
 ⚡ **Automatic Fallback** - Seamless provider switching on failures

@@ -124,1189 +196,192 @@ console.log(result.text);
 🛡️ **Production Ready** - Extracted from proven production systems
 🔧 **Zero Config** - Works out of the box with environment variables

-## Installation
-
-### Package Installation
-```bash
-# npm
-npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
-
-# yarn
-yarn add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
-
-# pnpm (recommended)
-pnpm add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
-```
-
-### Environment Setup
-```bash
-# Choose one or more providers
-export OPENAI_API_KEY="sk-your-openai-key"
-export AWS_ACCESS_KEY_ID="your-aws-key"
-export AWS_SECRET_ACCESS_KEY="your-aws-secret"
-export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
-```
-
-## Basic Usage
-
-### Simple Text Generation
-```typescript
-import { createBestAIProvider } from '@juspay/neurolink';
-
-const provider = createBestAIProvider();
-
-// Basic generation
-const result = await provider.generateText({
-  prompt: "Explain TypeScript generics",
-  temperature: 0.7,
-  maxTokens: 500
-});
-
-console.log(result.text);
-console.log(`Used: ${result.provider}`);
-```
-
-### Streaming Responses
-```typescript
-import { createBestAIProvider } from '@juspay/neurolink';
-
-const provider = createBestAIProvider();
-
-const result = await provider.streamText({
-  prompt: "Write a story about AI",
-  temperature: 0.8,
-  maxTokens: 1000
-});
-
-// Handle streaming chunks
-for await (const chunk of result.textStream) {
-  process.stdout.write(chunk);
-}
-```
-
-### Provider Selection
-```typescript
-import { AIProviderFactory } from '@juspay/neurolink';
-
-// Use specific provider
-const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
-const bedrock = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
-
-// With fallback
-const { primary, fallback } = AIProviderFactory.createProviderWithFallback(
-  'bedrock', 'openai'
-);
-```
-
 ## 🖥️ CLI Tool

-
-### Installation & Usage
-
-#### Option 1: NPX (No Installation Required)
-```bash
-# Use directly without installation
-npx @juspay/neurolink --help
-npx @juspay/neurolink generate-text "Hello, AI!"
-npx @juspay/neurolink status
-```
-
-#### Option 2: Global Installation
-```bash
-# Install globally for convenient access
-npm install -g @juspay/neurolink
-
-# Then use anywhere
-neurolink --help
-neurolink generate-text "Write a haiku about programming"
-neurolink status --verbose
-```
-
-#### Option 3: Local Project Usage
-```bash
-# Add to project and use via npm scripts
-npm install @juspay/neurolink
-npx neurolink generate-text "Explain TypeScript"
-```
-
-### CLI Commands
-
-#### `generate-text <prompt>` - Core Text Generation
-```bash
-# Basic text generation
-neurolink generate-text "Explain quantum computing"
-
-# With provider selection
-neurolink generate-text "Write a story" --provider openai
-
-# With temperature and token control
-neurolink generate-text "Creative writing" --temperature 0.9 --max-tokens 1000
-
-# JSON output for scripting
-neurolink generate-text "Summary of AI" --format json
-```
-
-**Output Example:**
-```
-🤖 Generating text...
-✅ Text generated successfully!
-Quantum computing represents a revolutionary approach to information processing...
-ℹ️ 127 tokens used
-```
-
-#### `stream <prompt>` - Real-time Streaming
-```bash
-# Stream text generation in real-time
-neurolink stream "Tell me a story about robots"
-
-# With provider selection
-neurolink stream "Explain machine learning" --provider vertex --temperature 0.8
-```
-
-**Output Example:**
-```
-🔄 Streaming from auto provider...
-
-Once upon a time, in a world where technology had advanced beyond...
-[text streams in real-time as it's generated]
-```
-
-#### `batch <file>` - Process Multiple Prompts
-```bash
-# Create a file with prompts (one per line)
-echo -e "Write a haiku\nExplain gravity\nDescribe the ocean" > prompts.txt
-
-# Process all prompts
-neurolink batch prompts.txt
-
-# Save results to JSON file
-neurolink batch prompts.txt --output results.json
-
-# Add delay between requests (rate limiting)
-neurolink batch prompts.txt --delay 2000
-```
-
-**Output Example:**
-```
-📦 Processing 3 prompts...
-
-✅ 1/3 completed
-✅ 2/3 completed
-✅ 3/3 completed
-✅ Results saved to results.json
-```
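
For scripted use of the flags documented in this hunk (`--provider`, `--temperature`, `--max-tokens`, `--format`), an argument list can be assembled before shelling out to `npx @juspay/neurolink`. `buildArgs` is an illustrative helper, not part of the package:

```typescript
// Options mirroring the CLI flags described in the README diff.
interface GenOptions {
  provider?: string;    // auto | openai | bedrock | vertex
  temperature?: number; // 0.0 (focused) .. 1.0 (creative)
  maxTokens?: number;
  format?: "text" | "json";
}

// Build the argv for `neurolink generate-text` from typed options.
function buildArgs(prompt: string, opts: GenOptions = {}): string[] {
  const args = ["generate-text", prompt];
  if (opts.provider) args.push("--provider", opts.provider);
  if (opts.temperature !== undefined) args.push("--temperature", String(opts.temperature));
  if (opts.maxTokens !== undefined) args.push("--max-tokens", String(opts.maxTokens));
  if (opts.format) args.push("--format", opts.format);
  return args;
}

console.log(buildArgs("Write a story", { provider: "openai", temperature: 0.9 }).join(" "));
// generate-text Write a story --provider openai --temperature 0.9
```

Passing the array to something like `child_process.execFile` (rather than interpolating into a shell string) keeps prompts with quotes or newlines intact.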
### Core Commands
|
|
299
202
|
|
|
300
|
-
#### `status` - Provider Diagnostics
|
|
301
203
|
```bash
|
|
302
|
-
#
|
|
303
|
-
neurolink
|
|
304
|
-
|
|
305
|
-
# Verbose output with detailed information
|
|
306
|
-
neurolink status --verbose
|
|
307
|
-
```
|
|
308
|
-
|
|
309
|
-
**Output Example:**
|
|
310
|
-
```
|
|
311
|
-
🔍 Checking AI provider status...
|
|
204
|
+
# Text Generation
|
|
205
|
+
npx @juspay/neurolink generate-text "Explain quantum computing"
|
|
206
|
+
npx @juspay/neurolink generate-text "Write a story" --provider openai --temperature 0.9
|
|
312
207
|
|
|
313
|
-
|
|
314
|
-
|
|
315
|
-
❌ vertex: ❌ Authentication failed
|
|
208
|
+
# Real-time Streaming
|
|
209
|
+
npx @juspay/neurolink stream "Tell me a story about robots"
|
|
316
210
|
|
|
317
|
-
|
|
318
|
-
|
|
319
|
-
|
|
320
|
-
#### `get-best-provider` - Auto-selection Testing
|
|
321
|
-
```bash
|
|
322
|
-
# Test which provider would be auto-selected
|
|
323
|
-
neurolink get-best-provider
|
|
324
|
-
```
|
|
211
|
+
# Batch Processing
|
|
212
|
+
echo -e "Write a haiku\nExplain gravity" > prompts.txt
|
|
213
|
+
npx @juspay/neurolink batch prompts.txt --output results.json
|
|
325
214
|
|
|
326
|
-
|
|
327
|
-
|
|
328
|
-
|
|
329
|
-
✅ Best provider: bedrock
|
|
215
|
+
# Provider Diagnostics
|
|
216
|
+
npx @juspay/neurolink status --verbose
|
|
217
|
+
npx @juspay/neurolink get-best-provider
|
|
330
218
|
```
|
|
331
219
|
|
|
332
|
-
### CLI Options & Arguments
|
|
333
|
-
|
|
334
|
-
#### Global Options
|
|
335
|
-
- `--help, -h` - Show help information
|
|
336
|
-
- `--version, -v` - Show version number
|
|
337
|
-
|
|
338
|
-
#### Generation Options
|
|
339
|
-
- `--provider <name>` - Choose provider: `auto` (default), `openai`, `bedrock`, `vertex`
|
|
340
|
-
- `--temperature <number>` - Creativity level: `0.0` (focused) to `1.0` (creative), default: `0.7`
|
|
341
|
-
- `--max-tokens <number>` - Maximum tokens to generate, default: `500`
|
|
342
|
-
- `--format <type>` - Output format: `text` (default) or `json`
|
|
343
|
-
|
|
344
|
-
#### Batch Processing Options
|
|
345
|
-
- `--output <file>` - Save results to JSON file
|
|
346
|
-
- `--delay <ms>` - Delay between requests in milliseconds, default: `1000`
|
|
347
|
-
|
|
348
|
-
#### Status Options
|
|
349
|
-
- `--verbose, -v` - Show detailed diagnostic information
|
|
350
|
-
|
|
351
220
|
### CLI Features
|
|
352
221
|
|
|
353
|
-
|
|
354
|
-
|
|
355
|
-
|
|
356
|
-
- **Progress Tracking**: Real-time progress for batch operations
|
|
357
|
-
- **Smart Error Messages**: Helpful hints for common issues
|
|
358
|
-
|
|
359
|
-
#### 🛠️ Developer-Friendly
|
|
360
|
-
- **Multiple Output Formats**: Text for humans, JSON for scripts
|
|
361
|
-
- **Provider Selection**: Test specific providers or use auto-selection
|
|
362
|
-
- **Batch Processing**: Handle multiple prompts efficiently
|
|
363
|
-
- **Status Monitoring**: Check provider health and connectivity
|
|
222
|
+
✨ **Professional UX** - Animated spinners, colorized output, progress tracking
|
|
223
|
+
🛠️ **Developer-Friendly** - Multiple output formats, provider selection, status monitoring
|
|
224
|
+
🔧 **Automation Ready** - JSON output, exit codes, scriptable for CI/CD pipelines
|
|
364
225
|
|
|
365
|
-
|
|
366
|
-
- **Exit Codes**: Standard exit codes for scripting
|
|
367
|
-
- **JSON Output**: Structured data for automated workflows
|
|
368
|
-
- **Environment Variable**: All SDK environment variables work with CLI
|
|
369
|
-
- **Scriptable**: Perfect for CI/CD pipelines and automation
|
|
226
|
+
**[📖 View complete CLI documentation](./docs/CLI-GUIDE.md)**
|
|
370
227
|
|
|
371
|
-
|
|
372
|
-
|
|
373
|
-
#### Creative Writing Workflow
|
|
374
|
-
```bash
|
|
375
|
-
# Generate creative content with high temperature
|
|
376
|
-
neurolink generate-text "Write a sci-fi story opening" \
|
|
377
|
-
--provider openai \
|
|
378
|
-
--temperature 0.9 \
|
|
379
|
-
--max-tokens 1000 \
|
|
380
|
-
--format json > story.json
|
|
381
|
-
|
|
382
|
-
# Check what was generated
|
|
383
|
-
cat story.json | jq '.content'
|
|
384
|
-
```
|
|
385
|
-
|
|
386
|
-
#### Batch Content Processing
|
|
387
|
-
```bash
|
|
388
|
-
# Create prompts file
|
|
389
|
-
cat > content-prompts.txt << EOF
|
|
390
|
-
Write a product description for AI software
|
|
391
|
-
Create a social media post about technology
|
|
392
|
-
Draft an email about our new features
|
|
393
|
-
Write a blog post title about machine learning
|
|
394
|
-
EOF
|
|
395
|
-
|
|
396
|
-
# Process all prompts and save results
|
|
397
|
-
neurolink batch content-prompts.txt \
|
|
398
|
-
--output content-results.json \
|
|
399
|
-
--provider bedrock \
|
|
400
|
-
--delay 2000
|
|
401
|
-
|
|
402
|
-
# Extract just the content
|
|
403
|
-
cat content-results.json | jq -r '.[].response'
|
|
404
|
-
```
|
|
405
|
-
|
|
406
|
-
#### Provider Health Monitoring
|
|
407
|
-
```bash
|
|
408
|
-
# Check provider status (useful for monitoring scripts)
|
|
409
|
-
neurolink status --format json > status.json
|
|
410
|
-
|
|
411
|
-
# Parse results in scripts
|
|
412
|
-
working_providers=$(cat status.json | jq '[.[] | select(.status == "working")] | length')
|
|
413
|
-
echo "Working providers: $working_providers"
|
|
414
|
-
```
|
|
415
|
-
|
|
416
|
-
#### Integration with Shell Scripts
|
|
417
|
-
```bash
|
|
418
|
-
#!/bin/bash
|
|
419
|
-
# AI-powered commit message generator
|
|
420
|
-
|
|
421
|
-
# Get git diff
|
|
422
|
-
diff=$(git diff --cached --name-only)
|
|
423
|
-
|
|
424
|
-
if [ -z "$diff" ]; then
|
|
425
|
-
echo "No staged changes found"
|
|
426
|
-
exit 1
|
|
427
|
-
fi
|
|
428
|
-
|
|
429
|
-
# Generate commit message
|
|
430
|
-
commit_msg=$(neurolink generate-text \
|
|
431
|
-
"Generate a concise git commit message for these changes: $diff" \
|
|
432
|
-
--max-tokens 50 \
|
|
433
|
-
--temperature 0.3)
|
|
434
|
-
|
|
435
|
-
echo "Suggested commit message:"
|
|
436
|
-
echo "$commit_msg"
|
|
437
|
-
|
|
438
|
-
# Optionally auto-commit
|
|
439
|
-
read -p "Use this commit message? (y/N): " -n 1 -r
|
|
440
|
-
if [[ $REPLY =~ ^[Yy]$ ]]; then
|
|
441
|
-
git commit -m "$commit_msg"
|
|
442
|
-
fi
|
|
443
|
-
```
|
|
444
|
-
|
|
445
|
-
### Environment Setup for CLI
|
|
446
|
-
|
|
447
|
-
The CLI uses the same environment variables as the SDK:
|
|
448
|
-
|
|
449
|
-
```bash
|
|
450
|
-
# Set up your providers (same as SDK)
|
|
451
|
-
export OPENAI_API_KEY="sk-your-key"
|
|
452
|
-
export AWS_ACCESS_KEY_ID="your-aws-key"
|
|
453
|
-
export AWS_SECRET_ACCESS_KEY="your-aws-secret"
|
|
454
|
-
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
|
|
455
|
-
|
|
456
|
-
# Test configuration
|
|
457
|
-
neurolink status
|
|
458
|
-
```
|
|
459
|
-
|
|
460
|
-
### CLI vs SDK Comparison
|
|
461
|
-
|
|
462
|
-
| Feature | CLI | SDK |
|
|
463
|
-
|---------|-----|-----|
|
|
464
|
-
| **Text Generation** | ✅ `generate-text` | ✅ `generateText()` |
|
|
465
|
-
| **Streaming** | ✅ `stream` | ✅ `streamText()` |
|
|
466
|
-
| **Provider Selection** | ✅ `--provider` flag | ✅ `createProvider()` |
|
|
467
|
-
| **Batch Processing** | ✅ `batch` command | ✅ Manual implementation |
|
|
468
|
-
| **Status Monitoring** | ✅ `status` command | ✅ Manual testing |
|
|
469
|
-
| **JSON Output** | ✅ `--format json` | ✅ Native objects |
|
|
470
|
-
| **Automation** | ✅ Perfect for scripts | ✅ Perfect for apps |
|
|
471
|
-
| **Learning Curve** | 🟢 Low | 🟡 Medium |
|
|
472
|
-
|
|
473
|
-
### When to Use CLI vs SDK
|
|
474
|
-
|
|
475
|
-
#### Use the CLI when:
|
|
476
|
-
- 🔧 **Prototyping**: Quick testing of prompts and providers
|
|
477
|
-
- 📜 **Scripting**: Shell scripts and automation workflows
|
|
478
|
-
- 🔍 **Debugging**: Checking provider status and testing connectivity
|
|
479
|
-
- 📊 **Batch Processing**: Processing multiple prompts from files
|
|
480
|
-
- 🎯 **One-off Tasks**: Generating content without writing code
|
|
481
|
-
|
|
482
|
-
#### Use the SDK when:
|
|
483
|
-
- 🏗️ **Application Development**: Building web apps, APIs, or services
|
|
484
|
-
- 🔄 **Real-time Integration**: Chat interfaces, streaming responses
|
|
485
|
-
- ⚙️ **Complex Logic**: Custom provider fallback, error handling
|
|
486
|
-
- 🎨 **UI Integration**: React components, Svelte stores
|
|
487
|
-
- 📈 **Production Applications**: Full-featured applications
|
|
488
|
-
|
|
489
|
-
## Framework Integration
|
|
228
|
+
## 🏗️ Framework Integration
|
|
490
229
|
|
|
491
230
|
### SvelteKit
|
|
492
|
-
|
|
493
|
-
#### API Route (`src/routes/api/chat/+server.ts`)
|
|
494
231
|
```typescript
|
|
495
232
|
import { createBestAIProvider } from '@juspay/neurolink';
|
|
496
|
-
import type { RequestHandler } from './$types';
|
|
497
233
|
|
|
498
234
|
export const POST: RequestHandler = async ({ request }) => {
|
|
499
|
-
|
|
500
|
-
|
|
501
|
-
|
|
502
|
-
|
|
503
|
-
const result = await provider.streamText({
|
|
504
|
-
prompt: message,
|
|
505
|
-
temperature: 0.7,
|
|
506
|
-
maxTokens: 1000
|
|
507
|
-
});
|
|
508
|
-
|
|
509
|
-
return new Response(result.toReadableStream(), {
|
|
510
|
-
headers: {
|
|
511
|
-
'Content-Type': 'text/plain; charset=utf-8',
|
|
512
|
-
'Cache-Control': 'no-cache'
|
|
513
|
-
}
|
|
514
|
-
});
|
|
515
|
-
} catch (error) {
|
|
516
|
-
return new Response(JSON.stringify({ error: error.message }), {
|
|
517
|
-
status: 500,
|
|
518
|
-
headers: { 'Content-Type': 'application/json' }
|
|
519
|
-
});
|
|
520
|
-
}
|
|
235
|
+
const { message } = await request.json();
|
|
236
|
+
const provider = createBestAIProvider();
|
|
237
|
+
const result = await provider.streamText({ prompt: message });
|
|
238
|
+
return new Response(result.toReadableStream());
|
|
521
239
|
};
|
|
522
240
|
```
|
|
523
241
|
|
|
524
|
-
- #### Svelte Component (`src/routes/chat/+page.svelte`)
- ```svelte
- <script lang="ts">
-   let message = '';
-   let response = '';
-   let isLoading = false;
-
-   async function sendMessage() {
-     if (!message.trim()) return;
-
-     isLoading = true;
-     response = '';
-
-     try {
-       const res = await fetch('/api/chat', {
-         method: 'POST',
-         headers: { 'Content-Type': 'application/json' },
-         body: JSON.stringify({ message })
-       });
-
-       if (!res.body) throw new Error('No response');
-
-       const reader = res.body.getReader();
-       const decoder = new TextDecoder();
-
-       while (true) {
-         const { done, value } = await reader.read();
-         if (done) break;
-         response += decoder.decode(value, { stream: true });
-       }
-     } catch (error) {
-       response = `Error: ${error.message}`;
-     } finally {
-       isLoading = false;
-     }
-   }
- </script>
-
- <div class="chat">
-   <input bind:value={message} placeholder="Ask something..." />
-   <button on:click={sendMessage} disabled={isLoading}>
-     {isLoading ? 'Sending...' : 'Send'}
-   </button>
-
-   {#if response}
-     <div class="response">{response}</div>
-   {/if}
- </div>
- ```
-
  ### Next.js
-
- #### App Router API (`app/api/ai/route.ts`)
  ```typescript
  import { createBestAIProvider } from '@juspay/neurolink';
- import { NextRequest, NextResponse } from 'next/server';

  export async function POST(request: NextRequest) {
-
-
-
-
-   const result = await provider.generateText({
-     prompt,
-     temperature: 0.7,
-     maxTokens: 1000,
-     ...options
-   });
-
-   return NextResponse.json({
-     text: result.text,
-     provider: result.provider,
-     usage: result.usage
-   });
- } catch (error) {
-   return NextResponse.json(
-     { error: error.message },
-     { status: 500 }
-   );
- }
- }
- ```
-
- #### React Component (`components/AIChat.tsx`)
- ```typescript
- 'use client';
- import { useState } from 'react';
-
- export default function AIChat() {
-   const [prompt, setPrompt] = useState('');
-   const [result, setResult] = useState<string>('');
-   const [loading, setLoading] = useState(false);
-
-   const generate = async () => {
-     if (!prompt.trim()) return;
-
-     setLoading(true);
-     try {
-       const response = await fetch('/api/ai', {
-         method: 'POST',
-         headers: { 'Content-Type': 'application/json' },
-         body: JSON.stringify({ prompt })
-       });
-
-       const data = await response.json();
-       setResult(data.text);
-     } catch (error) {
-       setResult(`Error: ${error.message}`);
-     } finally {
-       setLoading(false);
-     }
-   };
-
-   return (
-     <div className="space-y-4">
-       <div className="flex gap-2">
-         <input
-           value={prompt}
-           onChange={(e) => setPrompt(e.target.value)}
-           placeholder="Enter your prompt..."
-           className="flex-1 p-2 border rounded"
-         />
-         <button
-           onClick={generate}
-           disabled={loading}
-           className="px-4 py-2 bg-blue-500 text-white rounded disabled:opacity-50"
-         >
-           {loading ? 'Generating...' : 'Generate'}
-         </button>
-       </div>
-
-       {result && (
-         <div className="p-4 bg-gray-100 rounded">
-           {result}
-         </div>
-       )}
-     </div>
-   );
+   const { prompt } = await request.json();
+   const provider = createBestAIProvider();
+   const result = await provider.generateText({ prompt });
+   return NextResponse.json({ text: result.text });
  }
  ```

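The simplified Next.js route now trusts `request.json()` directly. A defensive variant would validate the body before calling the provider; the `validatePrompt` helper below is purely illustrative (it is not part of NeuroLink or Next.js):

```typescript
// Illustrative helper: check a parsed JSON body before handing its
// prompt to an AI provider, throwing a descriptive error otherwise.
function validatePrompt(body: unknown): string {
  if (typeof body !== 'object' || body === null) {
    throw new Error('Request body must be a JSON object');
  }
  const prompt = (body as Record<string, unknown>).prompt;
  if (typeof prompt !== 'string' || prompt.trim().length === 0) {
    throw new Error('"prompt" must be a non-empty string');
  }
  return prompt.trim();
}
```

In the route, a caught validation error would map naturally to a 400 response instead of the generic 500.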
- ### Express.js
-
- ```typescript
- import express from 'express';
- import { createBestAIProvider, AIProviderFactory } from '@juspay/neurolink';
-
- const app = express();
- app.use(express.json());
-
- // Simple generation endpoint
- app.post('/api/generate', async (req, res) => {
-   try {
-     const { prompt, options = {} } = req.body;
-
-     const provider = createBestAIProvider();
-     const result = await provider.generateText({
-       prompt,
-       ...options
-     });
-
-     res.json({
-       success: true,
-       text: result.text,
-       provider: result.provider
-     });
-   } catch (error) {
-     res.status(500).json({
-       success: false,
-       error: error.message
-     });
-   }
- });
-
- // Streaming endpoint
- app.post('/api/stream', async (req, res) => {
-   try {
-     const { prompt } = req.body;
-
-     const provider = createBestAIProvider();
-     const result = await provider.streamText({ prompt });
-
-     res.setHeader('Content-Type', 'text/plain');
-     res.setHeader('Cache-Control', 'no-cache');
-
-     for await (const chunk of result.textStream) {
-       res.write(chunk);
-     }
-     res.end();
-   } catch (error) {
-     res.status(500).json({ error: error.message });
-   }
- });
-
- app.listen(3000, () => {
-   console.log('Server running on http://localhost:3000');
- });
- ```
-
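The removed Express streaming endpoint iterates `result.textStream` with `for await`. That pattern works against any async iterable of string chunks, so it can be sketched and checked without a provider; `fakeTextStream` below is a stand-in, not a NeuroLink API:

```typescript
// `fakeTextStream` stands in for `result.textStream` (an async iterable
// of string chunks) so the consumption loop can run offline.
async function* fakeTextStream(): AsyncGenerator<string> {
  yield 'Neuro';
  yield 'Link';
}

// Mirrors the removed Express handler's loop: an HTTP handler would
// call res.write(chunk) where this appends to a string.
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let out = '';
  for await (const chunk of stream) {
    out += chunk;
  }
  return out;
}
```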
  ### React Hook
-
  ```typescript
- import { useState
-
- interface AIOptions {
-   temperature?: number;
-   maxTokens?: number;
-   provider?: string;
- }
+ import { useState } from 'react';

  export function useAI() {
    const [loading, setLoading] = useState(false);
-   const [error, setError] = useState<string | null>(null);

-   const generate =
-     prompt: string,
-     options: AIOptions = {}
-   ) => {
+   const generate = async (prompt: string) => {
    setLoading(true);
-
-
-
-     const response = await fetch('/api/ai', {
-       method: 'POST',
-       headers: { 'Content-Type': 'application/json' },
-       body: JSON.stringify({ prompt, ...options })
-     });
-
-     if (!response.ok) {
-       throw new Error(`Request failed: ${response.statusText}`);
-     }
-
-     const data = await response.json();
-     return data.text;
-   } catch (err) {
-     const message = err instanceof Error ? err.message : 'Unknown error';
-     setError(message);
-     return null;
-   } finally {
-     setLoading(false);
-   }
- }, []);
-
- return { generate, loading, error };
- }
-
- // Usage
- function MyComponent() {
-   const { generate, loading, error } = useAI();
-
-   const handleClick = async () => {
-     const result = await generate("Explain React hooks", {
-       temperature: 0.7,
-       maxTokens: 500
+     const response = await fetch('/api/ai', {
+       method: 'POST',
+       body: JSON.stringify({ prompt })
      });
-
+     const data = await response.json();
+     setLoading(false);
+     return data.text;
    };

- return
-   <button onClick={handleClick} disabled={loading}>
-     {loading ? 'Generating...' : 'Generate'}
-   </button>
- );
- }
- ```
-
- ## API Reference
-
- ### Core Functions
-
- #### `createBestAIProvider(requestedProvider?, modelName?)`
- Creates the best available AI provider based on environment configuration.
-
- ```typescript
- const provider = createBestAIProvider();
- const provider = createBestAIProvider('openai'); // Prefer OpenAI
- const provider = createBestAIProvider('bedrock', 'claude-3-7-sonnet');
- ```
-
- #### `createAIProviderWithFallback(primary, fallback, modelName?)`
- Creates a provider with automatic fallback.
-
- ```typescript
- const { primary, fallback } = createAIProviderWithFallback('bedrock', 'openai');
-
- try {
-   const result = await primary.generateText({ prompt });
- } catch {
-   const result = await fallback.generateText({ prompt });
+   return { generate, loading };
  }
  ```

-
-
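The simplified hook drops the error state the old hook tracked. One framework-free way to keep error reporting without extra React state is to fold failures into the return value; `safeGenerate` below is a hypothetical wrapper, not part of NeuroLink:

```typescript
// Hypothetical wrapper: run any async generate function and return
// either its text or a captured error message, never throwing.
async function safeGenerate(
  generate: (prompt: string) => Promise<string>,
  prompt: string
): Promise<{ text: string | null; error: string | null }> {
  try {
    return { text: await generate(prompt), error: null };
  } catch (err) {
    const message = err instanceof Error ? err.message : 'Unknown error';
    return { text: null, error: message };
  }
}
```

A component can then branch on `result.error` instead of wrapping every call site in try/catch.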
- #### `createProvider(providerName, modelName?)`
- Creates a specific provider instance.
-
- ```typescript
- const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
- const bedrock = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
- const vertex = AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash');
- ```
-
- ### Provider Interface
+ **[📖 View complete framework integration guide](./docs/FRAMEWORK-INTEGRATION.md)**

-
+ ## ⚙️ Provider Configuration

-
- interface AIProvider {
-   generateText(options: GenerateTextOptions): Promise<GenerateTextResult>;
-   streamText(options: StreamTextOptions): Promise<StreamTextResult>;
- }
-
- interface GenerateTextOptions {
-   prompt: string;
-   temperature?: number;
-   maxTokens?: number;
-   systemPrompt?: string;
- }
-
- interface GenerateTextResult {
-   text: string;
-   provider: string;
-   model: string;
-   usage?: {
-     promptTokens: number;
-     completionTokens: number;
-     totalTokens: number;
-   };
- }
- ```
-
- ### Supported Models
-
- #### OpenAI
- - `gpt-4o` (default)
- - `gpt-4o-mini`
- - `gpt-4-turbo`
-
- #### Amazon Bedrock
- - `claude-3-7-sonnet` (default)
- - `claude-3-5-sonnet`
- - `claude-3-haiku`
-
- #### Google Vertex AI
- - `gemini-2.5-flash` (default)
- - `claude-4.0-sonnet`
-
- ## Provider Configuration
-
- ### OpenAI Setup
+ ### OpenAI
  ```bash
- export OPENAI_API_KEY="sk-your-key
+ export OPENAI_API_KEY="sk-your-openai-key"
  ```

- ### Amazon Bedrock
-
- **⚠️ CRITICAL: Anthropic Models Require Inference Profile ARN**
-
- For Anthropic Claude models in Bedrock, you **MUST** use the full inference profile ARN, not simple model names:
-
+ ### Amazon Bedrock (⚠️ Requires Inference Profile ARN)
  ```bash
  export AWS_ACCESS_KEY_ID="your-access-key"
  export AWS_SECRET_ACCESS_KEY="your-secret-key"
- export AWS_REGION="us-east-2"
-
- # ✅ CORRECT: Use full inference profile ARN for Anthropic models
  export BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
-
- # ❌ WRONG: Simple model names cause "not authorized to invoke this API" errors
- # export BEDROCK_MODEL="anthropic.claude-3-sonnet-20240229-v1:0"
- ```
-
- #### Why Inference Profiles?
- - **Cross-Region Access**: Faster access across AWS regions
- - **Better Performance**: Optimized routing and response times
- - **Higher Availability**: Improved model availability and reliability
- - **Different Permissions**: Separate permission model from base models
-
- #### Available Inference Profile ARNs
- ```bash
- # Claude 3.7 Sonnet (Latest - Recommended)
- BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
-
- # Claude 3.5 Sonnet
- BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0"
-
- # Claude 3 Haiku
- BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-haiku-20240307-v1:0"
  ```

-
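Because the "simple model name" mistake only surfaces as an AWS authorization error at call time, a startup-time sanity check on `BEDROCK_MODEL` can catch it earlier. The regex below is an illustrative approximation of the inference-profile ARN shape shown in the docs, not an official AWS validator:

```typescript
// Illustrative check: does a BEDROCK_MODEL value look like an
// inference profile ARN (as required for Anthropic models) rather
// than a bare model id?
function isInferenceProfileArn(value: string): boolean {
  return /^arn:aws:bedrock:[a-z0-9-]+:\d+:inference-profile\/.+$/.test(value);
}
```

An application could log a warning at boot when `process.env.BEDROCK_MODEL` fails this check, pointing at the configuration section above.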
- For temporary credentials (common in development):
- ```bash
- export AWS_SESSION_TOKEN="your-session-token" # Required for temporary credentials
- ```
-
- ### Google Vertex AI Setup
-
- NeuroLink supports **three authentication methods** for Google Vertex AI:
-
- #### Method 1: Service Account File (Recommended for Production)
+ ### Google Vertex AI (Multiple Auth Methods)
  ```bash
+ # Method 1: Service Account File
  export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
  export GOOGLE_VERTEX_PROJECT="your-project-id"
- export GOOGLE_VERTEX_LOCATION="us-central1"
- ```

-
-
- export GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account","project_id":"your-project",...}'
+ # Method 2: JSON String
+ export GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account",...}'
  export GOOGLE_VERTEX_PROJECT="your-project-id"
- export GOOGLE_VERTEX_LOCATION="us-central1"
- ```

-
- ```bash
+ # Method 3: Individual Variables
  export GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
- export GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\
+ export GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n..."
  export GOOGLE_VERTEX_PROJECT="your-project-id"
- export GOOGLE_VERTEX_LOCATION="us-central1"
- ```
-
- ### Complete Environment Variables Reference
-
- #### OpenAI Configuration
- ```bash
- # Required
- OPENAI_API_KEY="sk-your-openai-api-key"
-
- # Optional
- OPENAI_MODEL="gpt-4o" # Default model to use
  ```

-
- ```bash
- # Required
- AWS_ACCESS_KEY_ID="your-aws-access-key"
- AWS_SECRET_ACCESS_KEY="your-aws-secret-key"
-
- # Optional
- AWS_REGION="us-east-2" # Default: us-east-2
- AWS_SESSION_TOKEN="your-session-token" # Required for temporary credentials
- BEDROCK_MODEL_ID="anthropic.claude-3-7-sonnet-20250219-v1:0" # Default model
- ```
-
- #### Google Vertex AI Configuration
- ```bash
- # Required (choose one authentication method)
- # Method 1: Service Account File
- GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
-
- # Method 2: Service Account JSON String
- GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account",...}'
-
- # Method 3: Individual Environment Variables
- GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
- GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIE..."
-
- # Required for all methods
- GOOGLE_VERTEX_PROJECT="your-gcp-project-id"
-
- # Optional
- GOOGLE_VERTEX_LOCATION="us-east5" # Default: us-east5
- VERTEX_MODEL_ID="claude-sonnet-4@20250514" # Default model
- ```
-
- #### General Configuration
- ```bash
- # Provider Selection (optional)
- DEFAULT_PROVIDER="bedrock" # Primary provider preference
- FALLBACK_PROVIDER="openai" # Fallback provider
-
- # Application Settings
- PUBLIC_APP_ENVIRONMENT="dev" # dev, staging, production
- ENABLE_STREAMING="true" # Enable streaming responses
- ENABLE_FALLBACK="true" # Enable automatic fallback
-
- # Debug and Logging
- NEUROLINK_DEBUG="true" # Enable debug logging
- LOG_LEVEL="info" # error, warn, info, debug
- ```
-
- #### Environment File Example (.env)
- ```bash
- # Copy this to your .env file and fill in your credentials
-
- # OpenAI
- OPENAI_API_KEY=sk-your-openai-key-here
- OPENAI_MODEL=gpt-4o
-
- # Amazon Bedrock
- AWS_ACCESS_KEY_ID=your-aws-access-key
- AWS_SECRET_ACCESS_KEY=your-aws-secret-key
- AWS_REGION=us-east-2
- BEDROCK_MODEL_ID=anthropic.claude-3-7-sonnet-20250219-v1:0
-
- # Google Vertex AI (choose one method)
- # Method 1: File path
- GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/service-account.json
-
- # Method 2: JSON string (uncomment to use)
- # GOOGLE_SERVICE_ACCOUNT_KEY={"type":"service_account","project_id":"your-project",...}
-
- # Method 3: Individual variables (uncomment to use)
- # GOOGLE_AUTH_CLIENT_EMAIL=service-account@your-project.iam.gserviceaccount.com
- # GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nYOUR_PRIVATE_KEY_HERE\n-----END PRIVATE KEY-----"
-
- # Required for all Google Vertex AI methods
- GOOGLE_VERTEX_PROJECT=your-gcp-project-id
- GOOGLE_VERTEX_LOCATION=us-east5
- VERTEX_MODEL_ID=claude-sonnet-4@20250514
-
- # Application Settings
- DEFAULT_PROVIDER=auto
- ENABLE_STREAMING=true
- ENABLE_FALLBACK=true
- NEUROLINK_DEBUG=false
- ```
-
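The variables above drive which provider is picked at runtime. The sketch below shows the *kind* of resolution order an application might apply; the priority shown is illustrative only — NeuroLink's actual selection logic lives inside `createBestAIProvider`:

```typescript
// Illustrative resolution: honor an explicit DEFAULT_PROVIDER override,
// otherwise pick the first provider whose credentials are present.
type Env = Record<string, string | undefined>;

function pickProvider(env: Env): 'openai' | 'bedrock' | 'vertex' | null {
  if (env.DEFAULT_PROVIDER && env.DEFAULT_PROVIDER !== 'auto') {
    return env.DEFAULT_PROVIDER as 'openai' | 'bedrock' | 'vertex';
  }
  if (env.OPENAI_API_KEY) return 'openai';
  if (env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY) return 'bedrock';
  if (env.GOOGLE_VERTEX_PROJECT) return 'vertex';
  return null; // nothing configured
}
```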
- ## Advanced Patterns
-
- ### Custom Configuration
- ```typescript
- import { AIProviderFactory } from '@juspay/neurolink';
-
- // Environment-based provider selection
- const isDev = process.env.NODE_ENV === 'development';
- const provider = isDev
-   ? AIProviderFactory.createProvider('openai', 'gpt-4o-mini') // Cheaper for dev
-   : AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet'); // Production
-
- // Multiple providers for different use cases
- const providers = {
-   creative: AIProviderFactory.createProvider('openai', 'gpt-4o'),
-   analytical: AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet'),
-   fast: AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash')
- };
-
- async function generateCreativeContent(prompt: string) {
-   return await providers.creative.generateText({
-     prompt,
-     temperature: 0.9,
-     maxTokens: 2000
-   });
- }
- ```
-
- ### Response Caching
- ```typescript
- const cache = new Map<string, { text: string; timestamp: number }>();
- const CACHE_DURATION = 5 * 60 * 1000; // 5 minutes
-
- async function cachedGenerate(prompt: string) {
-   const key = prompt.toLowerCase().trim();
-   const cached = cache.get(key);
-
-   if (cached && Date.now() - cached.timestamp < CACHE_DURATION) {
-     return { ...cached, fromCache: true };
-   }
-
-   const provider = createBestAIProvider();
-   const result = await provider.generateText({ prompt });
-
-   cache.set(key, { text: result.text, timestamp: Date.now() });
-   return { text: result.text, fromCache: false };
- }
- ```
-
- ### Batch Processing
- ```typescript
- async function processBatch(prompts: string[]) {
-   const provider = createBestAIProvider();
-   const chunkSize = 5;
-   const results = [];
-
-   for (let i = 0; i < prompts.length; i += chunkSize) {
-     const chunk = prompts.slice(i, i + chunkSize);
-
-     const chunkResults = await Promise.allSettled(
-       chunk.map(prompt => provider.generateText({ prompt, maxTokens: 500 }))
-     );
-
-     results.push(...chunkResults);
-
-     // Rate limiting
-     if (i + chunkSize < prompts.length) {
-       await new Promise(resolve => setTimeout(resolve, 1000));
-     }
-   }
-
-   return results.map((result, index) => ({
-     prompt: prompts[index],
-     success: result.status === 'fulfilled',
-     result: result.status === 'fulfilled' ? result.value : result.reason
-   }));
- }
- ```
-
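The removed batch-processing pattern slices prompts into fixed-size chunks before calling the provider. That slicing step can be sketched (and verified) independently of any AI call:

```typescript
// Split an array into consecutive chunks of at most `size` items,
// mirroring the chunking step of the removed processBatch example.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```

Each chunk would then be dispatched with `Promise.allSettled`, with a pause between chunks for rate limiting, as the removed example showed.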
- ## Error Handling
+ **[📖 View complete provider configuration guide](./docs/PROVIDER-CONFIGURATION.md)**

-
+ ## 📚 API Reference

-
- ```
- ValidationException: Your account is not authorized to invoke this API operation.
- ```
- - **Cause**: The AWS account doesn't have access to Bedrock or the specific model
- - **Solution**:
-   - Verify your AWS account has Bedrock enabled
-   - Check model availability in your AWS region
-   - Ensure your IAM role has `bedrock:InvokeModel` permissions
-
- #### Missing or Invalid Credentials
- ```
- Error: Cannot find API key for OpenAI provider
- ```
- - **Cause**: The environment variable for API credentials is missing
- - **Solution**: Set the appropriate environment variable (OPENAI_API_KEY, etc.)
-
- #### Google Vertex Import Issues
- ```
- Cannot find package '@google-cloud/vertexai' imported from...
- ```
- - **Cause**: Missing Google Vertex AI peer dependency
- - **Solution**: Install the package with `npm install @google-cloud/vertexai`
-
- #### Session Token Expired
- ```
- The security token included in the request is expired
- ```
- - **Cause**: AWS session token has expired
- - **Solution**: Generate new AWS credentials with a fresh session token
-
- ### Comprehensive Error Handling
+ ### Core Functions
  ```typescript
-
+ // Auto-select best provider
+ const provider = createBestAIProvider();

-
-
-
- while (attempt < maxRetries) {
-   try {
-     const provider = createBestAIProvider();
-     return await provider.generateText({ prompt });
-   } catch (error) {
-     attempt++;
-     console.error(`Attempt ${attempt} failed:`, error.message);
-
-     if (attempt >= maxRetries) {
-       throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
-     }
-
-     // Exponential backoff
-     await new Promise(resolve =>
-       setTimeout(resolve, Math.pow(2, attempt) * 1000)
-     );
-   }
- }
- }
- ```
+ // Specific provider
+ const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');

-
-
- async function generateWithFallback(prompt: string) {
-   const providers = ['bedrock', 'openai', 'vertex'];
-
-   for (const providerName of providers) {
-     try {
-       const provider = AIProviderFactory.createProvider(providerName);
-       return await provider.generateText({ prompt });
-     } catch (error) {
-       console.warn(`${providerName} failed:`, error.message);
-
-       if (error.message.includes('API key') || error.message.includes('credentials')) {
-         console.log(`${providerName} not configured, trying next...`);
-         continue;
-       }
-     }
-   }
-
-   throw new Error('All providers failed or are not configured');
- }
+ // With fallback
+ const { primary, fallback } = createAIProviderWithFallback('bedrock', 'openai');
  ```

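The removed retry loop lost its enclosing function in this diff view. A self-contained version of the same exponential-backoff pattern follows; the names are illustrative, and the delay function is injectable so the logic can be exercised without real waits:

```typescript
// Retry an async operation with exponential backoff, mirroring the
// removed "Comprehensive Error Handling" example. `delay` defaults to
// a real setTimeout wait but can be replaced in tests.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  delay: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms))
): Promise<T> {
  let attempt = 0;
  while (true) {
    try {
      return await operation();
    } catch (error) {
      attempt++;
      if (attempt >= maxRetries) throw error;
      await delay(Math.pow(2, attempt) * 1000); // 2s, 4s, 8s, ...
    }
  }
}
```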
- ###
+ ### Provider Interface
  ```typescript
-
-
-
- }
-
- // Rate limiting
- if (error.message.includes('rate limit')) {
-   console.error('Rate limit exceeded, implement backoff');
- }
-
- // Model not available
- if (error.message.includes('model')) {
-   console.error('Requested model not available');
+ interface AIProvider {
+   generateText(options: GenerateTextOptions): Promise<GenerateTextResult>;
+   streamText(options: StreamTextOptions): Promise<StreamTextResult>;
  }

-
-
-
+ interface GenerateTextOptions {
+   prompt: string;
+   temperature?: number; // 0.0 to 1.0, default: 0.7
+   maxTokens?: number; // Default: 500
+   systemPrompt?: string;
  }
  ```

-
-
-
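One practical consequence of the `AIProvider` interface shown above is easy test doubles: any object exposing a matching `generateText` can stand in for a real provider. The mock below is a sketch with types trimmed to the fields it exercises (the full `GenerateTextResult` carries more):

```typescript
// Minimal mock of the generateText side of the AIProvider shape,
// useful for unit-testing code that consumes a provider.
interface MockResult {
  text: string;
  provider: string;
}

const mockProvider = {
  async generateText(options: { prompt: string }): Promise<MockResult> {
    return { text: `echo: ${options.prompt}`, provider: 'mock' };
  },
};
```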
+ ### Supported Models
+ - **OpenAI**: `gpt-4o` (default), `gpt-4o-mini`, `gpt-4-turbo`
+ - **Bedrock**: `claude-3-7-sonnet` (default), `claude-3-5-sonnet`, `claude-3-haiku`
+ - **Vertex AI**: `gemini-2.5-flash` (default), `claude-sonnet-4@20250514`

-
- ```typescript
- // Fast responses for simple tasks
- const fast = AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash');
+ **[📖 View complete API reference](./docs/API-REFERENCE.md)**

-
- const quality = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
+ ## 🎯 Visual Content Benefits

-
-
-
+ - ✅ **No Installation Required** - See everything in action before installing
+ - ✅ **Real AI Content** - All screenshots and videos show actual AI generation
+ - ✅ **Professional Quality** - 1920x1080 resolution suitable for documentation
+ - ✅ **Complete Coverage** - Every major feature visually documented
+ - ✅ **Production Validation** - Demonstrates real-world usage patterns

-
- ```typescript
- // Use streaming for better UX on long content
- const result = await provider.streamText({
-   prompt: "Write a detailed article...",
-   maxTokens: 2000
- });
- ```
+ **[📖 View complete visual demonstrations](./docs/VISUAL-DEMOS.md)**

-
- ```typescript
- // Set reasonable limits to control costs
- const result = await provider.generateText({
-   prompt: "Summarize this text",
-   maxTokens: 150 // Just enough for a summary
- });
- ```
+ ## 🚀 Getting Started

-
-
-
-
+ 1. **Try CLI immediately**: `npx @juspay/neurolink status`
+ 2. **View live demo**: `cd neurolink-demo && npm start`
+ 3. **Set up providers**: See [Provider Configuration Guide](./docs/PROVIDER-CONFIGURATION.md)
+ 4. **Integrate with your framework**: See [Framework Integration Guide](./docs/FRAMEWORK-INTEGRATION.md)
+ 5. **Build with the SDK**: See [API Reference](./docs/API-REFERENCE.md)

- ## Contributing
+ ## 🤝 Contributing

- We welcome contributions!
+ We welcome contributions! Please see our [Contributing Guidelines](./CONTRIBUTING.md) for details.

  ### Development Setup
  ```bash
  git clone https://github.com/juspay/neurolink
  cd neurolink
  pnpm install
+ pnpm test
+ pnpm build
  ```

-
- ```bash
- pnpm test          # Run all tests
- pnpm test:watch    # Watch mode
- pnpm test:coverage # Coverage report
- ```
-
- ### Building
- ```bash
- pnpm build # Build the library
- pnpm check # Type checking
- pnpm lint  # Lint code
- ```
-
- ### Guidelines
- - Follow existing TypeScript patterns
- - Add tests for new features
- - Update documentation
- - Ensure all providers work consistently
-
- ## License
+ ## 📄 License

  MIT © [Juspay Technologies](https://juspay.in)

- ## Related Projects
+ ## 🔗 Related Projects

  - [Vercel AI SDK](https://github.com/vercel/ai) - Underlying provider implementations
- - [SvelteKit](https://kit.svelte.dev) - Web framework
+ - [SvelteKit](https://kit.svelte.dev) - Web framework used in this project
  - [Lighthouse](https://github.com/juspay/lighthouse) - Original source project

  ---