@juspay/neurolink 1.2.2 → 1.2.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -13,23 +13,19 @@
  ### 📦 Installation

  ```bash
- # Install globally for CLI usage
- npm install -g @juspay/neurolink
-
- # Or use directly with npx (no installation required)
+ # CLI Usage (No Installation Required)
  npx @juspay/neurolink generate-text "Hello, AI!"
+ npx @juspay/neurolink status

- # Or install globally
+ # Global CLI Installation
  npm install -g @juspay/neurolink
  neurolink generate-text "Write a haiku about programming"
- neurolink status --verbose
- ```

- ### Programmatic Usage
- ```bash
+ # SDK Installation
  npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
  ```

+ ### Programmatic Usage
  ```typescript
  import { createBestAIProvider } from '@juspay/neurolink';

@@ -40,8 +36,41 @@ const result = await provider.generateText({
  });

  console.log(result.text);
+ console.log(`Used: ${result.provider}`);
  ```

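+ Streaming uses the same provider via `streamText()`; a condensed sketch based on the SDK's own streaming example:
+
+ ```typescript
+ import { createBestAIProvider } from '@juspay/neurolink';
+
+ const provider = createBestAIProvider();
+ const result = await provider.streamText({ prompt: "Write a story about AI" });
+
+ // Chunks arrive incrementally through the async-iterable text stream
+ for await (const chunk of result.textStream) {
+   process.stdout.write(chunk);
+ }
+ ```
+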
+ ### Environment Setup
+ ```bash
+ # Create .env file (automatically loaded by CLI) ✨ NEW!
+ # OpenAI
+ echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
+ echo 'OPENAI_MODEL="gpt-4o"' >> .env
+
+ # Amazon Bedrock
+ echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
+ echo 'AWS_SECRET_ACCESS_KEY="your-aws-secret-key"' >> .env
+ echo 'AWS_REGION="us-east-1"' >> .env
+ echo 'BEDROCK_MODEL="arn:aws:bedrock:region:account:inference-profile/model"' >> .env
+
+ # Google Vertex AI
+ echo 'GOOGLE_VERTEX_PROJECT="your-project-id"' >> .env
+ echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env
+ echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
+
+ # Anthropic
+ echo 'ANTHROPIC_API_KEY="sk-ant-api03-your-key"' >> .env
+
+ # Azure OpenAI
+ echo 'AZURE_OPENAI_API_KEY="your-azure-key"' >> .env
+ echo 'AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"' >> .env
+ echo 'AZURE_OPENAI_DEPLOYMENT_ID="your-deployment-name"' >> .env
+
+ # Test configuration (automatically loads .env)
+ npx @juspay/neurolink status
+ ```
+
+ **📖 [Complete Environment Variables Guide](./docs/ENVIRONMENT-VARIABLES.md)** - Detailed setup instructions for all providers
+
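+ The CLI loads `.env` automatically; standalone SDK scripts may not. A common pattern (an assumption here, not a NeuroLink API) is to load it explicitly with `dotenv`:
+
+ ```typescript
+ // Hypothetical standalone script; `dotenv` is a separate install.
+ import 'dotenv/config';
+ import { createBestAIProvider } from '@juspay/neurolink';
+
+ // Provider keys (OPENAI_API_KEY, AWS_*, GOOGLE_*) are now on process.env
+ const provider = createBestAIProvider();
+ ```
+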
  ## 🎬 Complete Visual Documentation

  **No installation required!** Experience NeuroLink's capabilities through our comprehensive visual ecosystem:
@@ -58,30 +87,34 @@ console.log(result.text);
  | **Developer Tools** | ![Developer Tools](./neurolink-demo/screenshots/05-developer-tools/05-developer-tools-2025-06-04T13-59-43-322Z.png) | Code generation and API docs |
  | **Analytics & Monitoring** | ![Monitoring](./neurolink-demo/screenshots/06-monitoring/06-monitoring-analytics-2025-06-04T14-00-08-919Z.png) | Real-time provider analytics |

- #### **🎥 Complete Demo Videos** *(5,681+ tokens of real AI generation)*
- - **[Basic Examples](./neurolink-demo/videos/basic-examples/)** - Text generation, haiku creation, storytelling
- - **[Business Use Cases](./neurolink-demo/videos/business-use-cases/)** - Email generation, analysis, summaries
- - **[Creative Tools](./neurolink-demo/videos/creative-tools/)** - Stories, translation, creative ideas
- - **[Developer Tools](./neurolink-demo/videos/developer-tools/)** - React code, API docs, debugging help
- - **[Monitoring & Analytics](./neurolink-demo/videos/monitoring/)** - Live provider status and performance
+ #### **🎥 Complete Demo Videos** *(Real AI generation showing SDK use cases)*
+ - **[Basic Examples](./neurolink-demo/videos/basic-examples.webm)** - Core SDK functionality: text generation, streaming, provider selection, status checks
+ - **[Business Use Cases](./neurolink-demo/videos/business-use-cases.webm)** - Professional applications: marketing emails, quarterly data analysis, executive summaries
+ - **[Creative Tools](./neurolink-demo/videos/creative-tools.webm)** - Content creation: storytelling, translation, blog post ideas
+ - **[Developer Tools](./neurolink-demo/videos/developer-tools.webm)** - Technical applications: React components, API documentation, error debugging
+ - **[Monitoring & Analytics](./neurolink-demo/videos/monitoring-analytics.webm)** - SDK features: performance benchmarks, provider fallback, structured data generation
+
+ **Available formats:**
+ - **WebM** (web-optimized): All videos available as `.webm` for web embedding
+ - **MP4** (universal): All videos available as `.mp4` for desktop and mobile compatibility

  ### 🖥️ CLI Tool Screenshots & Videos

  #### **📸 Professional CLI Screenshots**
  | Command | Screenshot | Description |
  |---------|------------|-------------|
- | **CLI Help Overview** | ![CLI Help](./cli-screenshots/01-cli-help-2025-06-04T19-38-12.png) | Complete command reference |
- | **Provider Status Check** | ![Provider Status](./cli-screenshots/02-provider-status-2025-06-04T19-38-25.png) | All provider connectivity verified |
- | **Text Generation** | ![Text Generation](./cli-screenshots/03-text-generation-2025-06-04T19-38-30.png) | Real AI haiku generation with JSON |
- | **Auto Provider Selection** | ![Best Provider](./cli-screenshots/04-best-provider-2025-06-04T19-38-33.png) | Automatic provider selection working |
- | **Batch Processing** | ![Batch Results](./cli-screenshots/05-batch-results-2025-06-04T19-38-37.png) | Multi-prompt processing with results |
+ | **CLI Help Overview** | ![CLI Help](./docs/visual-content/screenshots/cli-screenshots/01-cli-help-2025-06-04T19-38-12.png) | Complete command reference |
+ | **Provider Status Check** | ![Provider Status](./docs/visual-content/screenshots/cli-screenshots/02-provider-status-2025-06-04T19-38-25.png) | All provider connectivity verified |
+ | **Text Generation** | ![Text Generation](./docs/visual-content/screenshots/cli-screenshots/03-text-generation-2025-06-04T19-38-30.png) | Real AI haiku generation with JSON |
+ | **Auto Provider Selection** | ![Best Provider](./docs/visual-content/screenshots/cli-screenshots/04-best-provider-2025-06-04T19-38-33.png) | Automatic provider selection working |
+ | **Batch Processing** | ![Batch Results](./docs/visual-content/screenshots/cli-screenshots/05-batch-results-2025-06-04T19-38-37.png) | Multi-prompt processing with results |

  #### **🎥 CLI Demonstration Videos** *(Real command execution)*
- - **[CLI Overview](./cli-videos/cli-overview/)** - Help, status, provider selection commands
- - **[Basic Generation](./cli-videos/cli-basic-generation/)** - Text generation with different providers
- - **[Batch Processing](./cli-videos/cli-batch-processing/)** - File-based multi-prompt processing
- - **[Real-time Streaming](./cli-videos/cli-streaming/)** - Live AI content streaming
- - **[Advanced Features](./cli-videos/cli-advanced-features/)** - Verbose diagnostics and provider options
+ - **[CLI Overview](./docs/visual-content/videos/cli-videos/cli-overview/)** - Help, status, provider selection commands
+ - **[Basic Generation](./docs/visual-content/videos/cli-videos/cli-basic-generation/)** - Text generation with different providers
+ - **[Batch Processing](./docs/visual-content/videos/cli-videos/cli-batch-processing/)** - File-based multi-prompt processing
+ - **[Real-time Streaming](./docs/visual-content/videos/cli-videos/cli-streaming/)** - Live AI content streaming
+ - **[Advanced Features](./docs/visual-content/videos/cli-videos/cli-advanced-features/)** - Verbose diagnostics and provider options

  ### 💻 Live Interactive Demo
  - **Working Express.js server** with real API integration
@@ -89,33 +122,19 @@ console.log(result.text);
  - **15+ use cases** demonstrated across business, creative, and developer tools
  - **Real-time provider analytics** with performance metrics

- ### 🎯 Visual Content Benefits
- - ✅ **No Installation Required** - See everything in action before installing
- - ✅ **Real AI Content** - All screenshots and videos show actual AI generation
- - ✅ **Professional Quality** - 1920x1080 resolution suitable for documentation
- - ✅ **Complete Coverage** - Every major feature visually documented
- - ✅ **Production Validation** - Demonstrates real-world usage patterns
-
- [📁 View complete visual documentation](./neurolink-demo/) including all screenshots, videos, and interactive examples.
+ **Access**: `cd neurolink-demo && npm start` - [📁 View complete visual documentation](./neurolink-demo/)

- ## Table of Contents
+ ## 📚 Documentation

- - [Features](#features)
- - [Installation](#installation)
- - [Basic Usage](#basic-usage)
- - [Framework Integration](#framework-integration)
- - [SvelteKit](#sveltekit)
- - [Next.js](#nextjs)
- - [Express.js](#expressjs)
- - [React Hook](#react-hook)
- - [API Reference](#api-reference)
- - [Provider Configuration](#provider-configuration)
- - [Advanced Patterns](#advanced-patterns)
- - [Error Handling](#error-handling)
- - [Performance](#performance)
- - [Contributing](#contributing)
+ ### Quick Reference
+ - **[🖥️ CLI Guide](./docs/CLI-GUIDE.md)** - Complete CLI commands, options, and examples
+ - **[🏗️ Framework Integration](./docs/FRAMEWORK-INTEGRATION.md)** - SvelteKit, Next.js, Express.js, React hooks
+ - **[🔧 Environment Variables](./docs/ENVIRONMENT-VARIABLES.md)** - Complete setup guide for all AI providers
+ - **[⚙️ Provider Configuration](./docs/PROVIDER-CONFIGURATION.md)** - OpenAI, Bedrock, Vertex AI setup guides
+ - **[📚 API Reference](./docs/API-REFERENCE.md)** - Complete TypeScript API documentation
+ - **[🎬 Visual Demos](./docs/VISUAL-DEMOS.md)** - Screenshots, videos, and interactive examples

- ## Features
+ ### Key Features

  🔄 **Multi-Provider Support** - OpenAI, Amazon Bedrock, Google Vertex AI
  ⚡ **Automatic Fallback** - Seamless provider switching on failures
@@ -124,1189 +143,192 @@ console.log(result.text);
  🛡️ **Production Ready** - Extracted from proven production systems
  🔧 **Zero Config** - Works out of the box with environment variables

- ## Installation
-
- ### Package Installation
- ```bash
- # npm
- npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
-
- # yarn
- yarn add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
-
- # pnpm (recommended)
- pnpm add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
- ```
-
- ### Environment Setup
- ```bash
- # Choose one or more providers
- export OPENAI_API_KEY="sk-your-openai-key"
- export AWS_ACCESS_KEY_ID="your-aws-key"
- export AWS_SECRET_ACCESS_KEY="your-aws-secret"
- export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
- ```
-
- ## Basic Usage
-
- ### Simple Text Generation
- ```typescript
- import { createBestAIProvider } from '@juspay/neurolink';
-
- const provider = createBestAIProvider();
-
- // Basic generation
- const result = await provider.generateText({
- prompt: "Explain TypeScript generics",
- temperature: 0.7,
- maxTokens: 500
- });
-
- console.log(result.text);
- console.log(`Used: ${result.provider}`);
- ```
-
- ### Streaming Responses
- ```typescript
- import { createBestAIProvider } from '@juspay/neurolink';
-
- const provider = createBestAIProvider();
-
- const result = await provider.streamText({
- prompt: "Write a story about AI",
- temperature: 0.8,
- maxTokens: 1000
- });
-
- // Handle streaming chunks
- for await (const chunk of result.textStream) {
- process.stdout.write(chunk);
- }
- ```
-
- ### Provider Selection
- ```typescript
- import { AIProviderFactory } from '@juspay/neurolink';
-
- // Use specific provider
- const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
- const bedrock = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
-
- // With fallback
- const { primary, fallback } = AIProviderFactory.createProviderWithFallback(
- 'bedrock', 'openai'
- );
- ```
-
  ## 🖥️ CLI Tool

- NeuroLink includes a professional CLI tool that provides all SDK functionality through an elegant command-line interface.
-
- ### Installation & Usage
+ ### Core Commands

- #### Option 1: NPX (No Installation Required)
  ```bash
- # Use directly without installation
- npx @juspay/neurolink --help
- npx @juspay/neurolink generate-text "Hello, AI!"
- npx @juspay/neurolink status
- ```
+ # Text Generation
+ npx @juspay/neurolink generate-text "Explain quantum computing"
+ npx @juspay/neurolink generate-text "Write a story" --provider openai --temperature 0.9

- #### Option 2: Global Installation
- ```bash
- # Install globally for convenient access
- npm install -g @juspay/neurolink
+ # Real-time Streaming
+ npx @juspay/neurolink stream "Tell me a story about robots"

- # Then use anywhere
- neurolink --help
- neurolink generate-text "Write a haiku about programming"
- neurolink status --verbose
- ```
+ # Batch Processing
+ echo -e "Write a haiku\nExplain gravity" > prompts.txt
+ npx @juspay/neurolink batch prompts.txt --output results.json

- #### Option 3: Local Project Usage
- ```bash
- # Add to project and use via npm scripts
- npm install @juspay/neurolink
- npx neurolink generate-text "Explain TypeScript"
+ # Provider Diagnostics
+ npx @juspay/neurolink status --verbose
+ npx @juspay/neurolink get-best-provider
  ```

- ### CLI Commands
-
- #### `generate-text <prompt>` - Core Text Generation
- ```bash
- # Basic text generation
- neurolink generate-text "Explain quantum computing"
-
- # With provider selection
- neurolink generate-text "Write a story" --provider openai
-
- # With temperature and token control
- neurolink generate-text "Creative writing" --temperature 0.9 --max-tokens 1000
-
- # JSON output for scripting
- neurolink generate-text "Summary of AI" --format json
- ```
-
- **Output Example:**
- ```
- 🤖 Generating text...
- ✅ Text generated successfully!
- Quantum computing represents a revolutionary approach to information processing...
- ℹ️ 127 tokens used
- ```
-
- #### `stream <prompt>` - Real-time Streaming
- ```bash
- # Stream text generation in real-time
- neurolink stream "Tell me a story about robots"
-
- # With provider selection
- neurolink stream "Explain machine learning" --provider vertex --temperature 0.8
- ```
-
- **Output Example:**
- ```
- 🔄 Streaming from auto provider...
-
- Once upon a time, in a world where technology had advanced beyond...
- [text streams in real-time as it's generated]
- ```
-
- #### `batch <file>` - Process Multiple Prompts
- ```bash
- # Create a file with prompts (one per line)
- echo -e "Write a haiku\nExplain gravity\nDescribe the ocean" > prompts.txt
-
- # Process all prompts
- neurolink batch prompts.txt
-
- # Save results to JSON file
- neurolink batch prompts.txt --output results.json
-
- # Add delay between requests (rate limiting)
- neurolink batch prompts.txt --delay 2000
- ```
-
- **Output Example:**
- ```
- 📦 Processing 3 prompts...
-
- ✅ 1/3 completed
- ✅ 2/3 completed
- ✅ 3/3 completed
- ✅ Results saved to results.json
- ```
-
- #### `status` - Provider Diagnostics
- ```bash
- # Check all provider connectivity
- neurolink status
-
- # Verbose output with detailed information
- neurolink status --verbose
- ```
-
- **Output Example:**
- ```
- 🔍 Checking AI provider status...
-
- ✅ openai: ✅ Working (234ms)
- ✅ bedrock: ✅ Working (456ms)
- ❌ vertex: ❌ Authentication failed
-
- 📊 Summary: 2/3 providers working
- ```
-
- #### `get-best-provider` - Auto-selection Testing
- ```bash
- # Test which provider would be auto-selected
- neurolink get-best-provider
- ```
-
- **Output Example:**
- ```
- 🎯 Finding best provider...
- ✅ Best provider: bedrock
- ```
-
- ### CLI Options & Arguments
-
- #### Global Options
- - `--help, -h` - Show help information
- - `--version, -v` - Show version number
-
- #### Generation Options
- - `--provider <name>` - Choose provider: `auto` (default), `openai`, `bedrock`, `vertex`
- - `--temperature <number>` - Creativity level: `0.0` (focused) to `1.0` (creative), default: `0.7`
- - `--max-tokens <number>` - Maximum tokens to generate, default: `500`
- - `--format <type>` - Output format: `text` (default) or `json`
-
- #### Batch Processing Options
- - `--output <file>` - Save results to JSON file
- - `--delay <ms>` - Delay between requests in milliseconds, default: `1000`
-
- #### Status Options
- - `--verbose, -v` - Show detailed diagnostic information
-
  ### CLI Features

- #### ✨ Professional UX
- - **Animated Spinners**: Beautiful animations during AI generation
- - **Colorized Output**: Green for success, red ❌ for errors, blue ℹ️ for info
- - **Progress Tracking**: Real-time progress for batch operations
- - **Smart Error Messages**: Helpful hints for common issues
+ ✨ **Professional UX** - Animated spinners, colorized output, progress tracking
+ 🛠️ **Developer-Friendly** - Multiple output formats, provider selection, status monitoring
+ 🔧 **Automation Ready** - JSON output, exit codes, scriptable for CI/CD pipelines

- #### 🛠️ Developer-Friendly
- - **Multiple Output Formats**: Text for humans, JSON for scripts
- - **Provider Selection**: Test specific providers or use auto-selection
- - **Batch Processing**: Handle multiple prompts efficiently
- - **Status Monitoring**: Check provider health and connectivity
+ **[📖 View complete CLI documentation](./docs/CLI-GUIDE.md)**
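+
+ For automation, the CLI's `--format json` flag (documented in the CLI guide) produces machine-readable output. A minimal sketch; the exact output shape is an assumption, so parse defensively:
+
+ ```typescript
+ // Hypothetical CI helper that shells out to the CLI and reads its JSON output.
+ import { execFile } from 'node:child_process';
+ import { promisify } from 'node:util';
+
+ const run = promisify(execFile);
+
+ const { stdout } = await run('npx', [
+   '@juspay/neurolink', 'generate-text', 'Summarize the release notes', '--format', 'json'
+ ]);
+ console.log(JSON.parse(stdout));
+ ```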

- #### 🔧 Automation Ready
- - **Exit Codes**: Standard exit codes for scripting
- - **JSON Output**: Structured data for automated workflows
- - **Environment Variable**: All SDK environment variables work with CLI
- - **Scriptable**: Perfect for CI/CD pipelines and automation
-
- ### CLI Usage Examples
-
- #### Creative Writing Workflow
- ```bash
- # Generate creative content with high temperature
- neurolink generate-text "Write a sci-fi story opening" \
- --provider openai \
- --temperature 0.9 \
- --max-tokens 1000 \
- --format json > story.json
-
- # Check what was generated
- cat story.json | jq '.content'
- ```
-
- #### Batch Content Processing
- ```bash
- # Create prompts file
- cat > content-prompts.txt << EOF
- Write a product description for AI software
- Create a social media post about technology
- Draft an email about our new features
- Write a blog post title about machine learning
- EOF
-
- # Process all prompts and save results
- neurolink batch content-prompts.txt \
- --output content-results.json \
- --provider bedrock \
- --delay 2000
-
- # Extract just the content
- cat content-results.json | jq -r '.[].response'
- ```
-
- #### Provider Health Monitoring
- ```bash
- # Check provider status (useful for monitoring scripts)
- neurolink status --format json > status.json
-
- # Parse results in scripts
- working_providers=$(cat status.json | jq '[.[] | select(.status == "working")] | length')
- echo "Working providers: $working_providers"
- ```
-
- #### Integration with Shell Scripts
- ```bash
- #!/bin/bash
- # AI-powered commit message generator
-
- # Get git diff
- diff=$(git diff --cached --name-only)
-
- if [ -z "$diff" ]; then
- echo "No staged changes found"
- exit 1
- fi
-
- # Generate commit message
- commit_msg=$(neurolink generate-text \
- "Generate a concise git commit message for these changes: $diff" \
- --max-tokens 50 \
- --temperature 0.3)
-
- echo "Suggested commit message:"
- echo "$commit_msg"
-
- # Optionally auto-commit
- read -p "Use this commit message? (y/N): " -n 1 -r
- if [[ $REPLY =~ ^[Yy]$ ]]; then
- git commit -m "$commit_msg"
- fi
- ```
-
- ### Environment Setup for CLI
-
- The CLI uses the same environment variables as the SDK:
-
- ```bash
- # Set up your providers (same as SDK)
- export OPENAI_API_KEY="sk-your-key"
- export AWS_ACCESS_KEY_ID="your-aws-key"
- export AWS_SECRET_ACCESS_KEY="your-aws-secret"
- export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
-
- # Test configuration
- neurolink status
- ```
-
- ### CLI vs SDK Comparison
-
- | Feature | CLI | SDK |
- |---------|-----|-----|
- | **Text Generation** | ✅ `generate-text` | ✅ `generateText()` |
- | **Streaming** | ✅ `stream` | ✅ `streamText()` |
- | **Provider Selection** | ✅ `--provider` flag | ✅ `createProvider()` |
- | **Batch Processing** | ✅ `batch` command | ✅ Manual implementation |
- | **Status Monitoring** | ✅ `status` command | ✅ Manual testing |
- | **JSON Output** | ✅ `--format json` | ✅ Native objects |
- | **Automation** | ✅ Perfect for scripts | ✅ Perfect for apps |
- | **Learning Curve** | 🟢 Low | 🟡 Medium |
-
- ### When to Use CLI vs SDK
-
- #### Use the CLI when:
- - 🔧 **Prototyping**: Quick testing of prompts and providers
- - 📜 **Scripting**: Shell scripts and automation workflows
- - 🔍 **Debugging**: Checking provider status and testing connectivity
- - 📊 **Batch Processing**: Processing multiple prompts from files
- - 🎯 **One-off Tasks**: Generating content without writing code
-
- #### Use the SDK when:
- - 🏗️ **Application Development**: Building web apps, APIs, or services
- - 🔄 **Real-time Integration**: Chat interfaces, streaming responses
- - ⚙️ **Complex Logic**: Custom provider fallback, error handling
- - 🎨 **UI Integration**: React components, Svelte stores
- - 📈 **Production Applications**: Full-featured applications
-
- ## Framework Integration
+ ## 🏗️ Framework Integration

  ### SvelteKit
-
- #### API Route (`src/routes/api/chat/+server.ts`)
  ```typescript
  import { createBestAIProvider } from '@juspay/neurolink';
  import type { RequestHandler } from './$types';

  export const POST: RequestHandler = async ({ request }) => {
- try {
- const { message } = await request.json();
-
- const provider = createBestAIProvider();
- const result = await provider.streamText({
- prompt: message,
- temperature: 0.7,
- maxTokens: 1000
- });
-
- return new Response(result.toReadableStream(), {
- headers: {
- 'Content-Type': 'text/plain; charset=utf-8',
- 'Cache-Control': 'no-cache'
- }
- });
- } catch (error) {
- return new Response(JSON.stringify({ error: error.message }), {
- status: 500,
- headers: { 'Content-Type': 'application/json' }
- });
- }
+   const { message } = await request.json();
+   const provider = createBestAIProvider();
+   const result = await provider.streamText({ prompt: message });
+   return new Response(result.toReadableStream());
  };
  ```

- #### Svelte Component (`src/routes/chat/+page.svelte`)
- ```svelte
- <script lang="ts">
- let message = '';
- let response = '';
- let isLoading = false;
-
- async function sendMessage() {
- if (!message.trim()) return;
-
- isLoading = true;
- response = '';
-
- try {
- const res = await fetch('/api/chat', {
- method: 'POST',
- headers: { 'Content-Type': 'application/json' },
- body: JSON.stringify({ message })
- });
-
- if (!res.body) throw new Error('No response');
-
- const reader = res.body.getReader();
- const decoder = new TextDecoder();
-
- while (true) {
- const { done, value } = await reader.read();
- if (done) break;
- response += decoder.decode(value, { stream: true });
- }
- } catch (error) {
- response = `Error: ${error.message}`;
- } finally {
- isLoading = false;
- }
- }
- </script>
-
- <div class="chat">
- <input bind:value={message} placeholder="Ask something..." />
- <button on:click={sendMessage} disabled={isLoading}>
- {isLoading ? 'Sending...' : 'Send'}
- </button>
-
- {#if response}
- <div class="response">{response}</div>
- {/if}
- </div>
- ```
-
  ### Next.js
-
- #### App Router API (`app/api/ai/route.ts`)
  ```typescript
  import { createBestAIProvider } from '@juspay/neurolink';
  import { NextRequest, NextResponse } from 'next/server';

  export async function POST(request: NextRequest) {
- try {
- const { prompt, ...options } = await request.json();
-
- const provider = createBestAIProvider();
- const result = await provider.generateText({
- prompt,
- temperature: 0.7,
- maxTokens: 1000,
- ...options
- });
-
- return NextResponse.json({
- text: result.text,
- provider: result.provider,
- usage: result.usage
- });
- } catch (error) {
- return NextResponse.json(
- { error: error.message },
- { status: 500 }
- );
- }
- }
- ```
-
- #### React Component (`components/AIChat.tsx`)
- ```typescript
- 'use client';
- import { useState } from 'react';
-
- export default function AIChat() {
- const [prompt, setPrompt] = useState('');
- const [result, setResult] = useState<string>('');
- const [loading, setLoading] = useState(false);
-
- const generate = async () => {
- if (!prompt.trim()) return;
-
- setLoading(true);
- try {
- const response = await fetch('/api/ai', {
- method: 'POST',
- headers: { 'Content-Type': 'application/json' },
- body: JSON.stringify({ prompt })
- });
-
- const data = await response.json();
- setResult(data.text);
- } catch (error) {
- setResult(`Error: ${error.message}`);
- } finally {
- setLoading(false);
- }
- };
-
- return (
- <div className="space-y-4">
- <div className="flex gap-2">
- <input
- value={prompt}
- onChange={(e) => setPrompt(e.target.value)}
- placeholder="Enter your prompt..."
- className="flex-1 p-2 border rounded"
- />
- <button
- onClick={generate}
- disabled={loading}
- className="px-4 py-2 bg-blue-500 text-white rounded disabled:opacity-50"
- >
- {loading ? 'Generating...' : 'Generate'}
- </button>
- </div>
-
- {result && (
- <div className="p-4 bg-gray-100 rounded">
- {result}
- </div>
- )}
- </div>
- );
+   const { prompt } = await request.json();
+   const provider = createBestAIProvider();
+   const result = await provider.generateText({ prompt });
+   return NextResponse.json({ text: result.text });
  }
  ```

- ### Express.js
-
- ```typescript
- import express from 'express';
- import { createBestAIProvider, AIProviderFactory } from '@juspay/neurolink';
-
- const app = express();
- app.use(express.json());
-
- // Simple generation endpoint
- app.post('/api/generate', async (req, res) => {
- try {
- const { prompt, options = {} } = req.body;
-
- const provider = createBestAIProvider();
- const result = await provider.generateText({
- prompt,
- ...options
- });
-
- res.json({
- success: true,
- text: result.text,
- provider: result.provider
- });
- } catch (error) {
- res.status(500).json({
- success: false,
- error: error.message
- });
- }
- });
-
- // Streaming endpoint
- app.post('/api/stream', async (req, res) => {
- try {
- const { prompt } = req.body;
-
- const provider = createBestAIProvider();
- const result = await provider.streamText({ prompt });
-
- res.setHeader('Content-Type', 'text/plain');
- res.setHeader('Cache-Control', 'no-cache');
-
- for await (const chunk of result.textStream) {
- res.write(chunk);
- }
- res.end();
- } catch (error) {
- res.status(500).json({ error: error.message });
- }
- });
-
- app.listen(3000, () => {
- console.log('Server running on http://localhost:3000');
- });
- ```
-
  ### React Hook
-
  ```typescript
- import { useState, useCallback } from 'react';
-
- interface AIOptions {
- temperature?: number;
- maxTokens?: number;
- provider?: string;
- }
+ import { useState } from 'react';

  export function useAI() {
    const [loading, setLoading] = useState(false);
- const [error, setError] = useState<string | null>(null);

- const generate = useCallback(async (
- prompt: string,
- options: AIOptions = {}
- ) => {
+   const generate = async (prompt: string) => {
      setLoading(true);
- setError(null);
-
- try {
- const response = await fetch('/api/ai', {
- method: 'POST',
- headers: { 'Content-Type': 'application/json' },
- body: JSON.stringify({ prompt, ...options })
- });
-
- if (!response.ok) {
- throw new Error(`Request failed: ${response.statusText}`);
- }
-
- const data = await response.json();
- return data.text;
- } catch (err) {
- const message = err instanceof Error ? err.message : 'Unknown error';
- setError(message);
- return null;
- } finally {
- setLoading(false);
- }
- }, []);
-
- return { generate, loading, error };
- }
-
- // Usage
- function MyComponent() {
- const { generate, loading, error } = useAI();
-
- const handleClick = async () => {
- const result = await generate("Explain React hooks", {
- temperature: 0.7,
- maxTokens: 500
+     const response = await fetch('/api/ai', {
+       method: 'POST',
+       headers: { 'Content-Type': 'application/json' },
+       body: JSON.stringify({ prompt })
      });
- console.log(result);
+     const data = await response.json();
+     setLoading(false);
+     return data.text;
    };

- return (
- <button onClick={handleClick} disabled={loading}>
- {loading ? 'Generating...' : 'Generate'}
- </button>
- );
- }
- ```
-
- ## API Reference
-
- ### Core Functions
-
- #### `createBestAIProvider(requestedProvider?, modelName?)`
- Creates the best available AI provider based on environment configuration.
-
- ```typescript
- const provider = createBestAIProvider();
- const provider = createBestAIProvider('openai'); // Prefer OpenAI
- const provider = createBestAIProvider('bedrock', 'claude-3-7-sonnet');
- ```
-
- #### `createAIProviderWithFallback(primary, fallback, modelName?)`
- Creates a provider with automatic fallback.
-
- ```typescript
- const { primary, fallback } = createAIProviderWithFallback('bedrock', 'openai');
-
- try {
- const result = await primary.generateText({ prompt });
- } catch {
- const result = await fallback.generateText({ prompt });
- }
- ```
-
- ### AIProviderFactory
-
- #### `createProvider(providerName, modelName?)`
- Creates a specific provider instance.
-
- ```typescript
- const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
- const bedrock = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
- const vertex = AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash');
- ```
-
- ### Provider Interface
-
- All providers implement the same interface:
-
- ```typescript
- interface AIProvider {
- generateText(options: GenerateTextOptions): Promise<GenerateTextResult>;
- streamText(options: StreamTextOptions): Promise<StreamTextResult>;
- }
-
- interface GenerateTextOptions {
- prompt: string;
- temperature?: number;
- maxTokens?: number;
- systemPrompt?: string;
- }
-
- interface GenerateTextResult {
- text: string;
- provider: string;
- model: string;
- usage?: {
- promptTokens: number;
- completionTokens: number;
- totalTokens: number;
- };
+   return { generate, loading };
  }
  ```

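+ A minimal usage sketch for the hook, condensed from the fuller component example above (the component name is illustrative):
+
+ ```typescript
+ function AskButton() {
+   const { generate, loading } = useAI();
+   return (
+     <button onClick={async () => console.log(await generate("Explain React hooks"))} disabled={loading}>
+       {loading ? 'Generating...' : 'Generate'}
+     </button>
+   );
+ }
+ ```
+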
- ### Supported Models
+ **[📖 View complete framework integration guide](./docs/FRAMEWORK-INTEGRATION.md)**

- #### OpenAI
- - `gpt-4o` (default)
- - `gpt-4o-mini`
- - `gpt-4-turbo`
+ ## ⚙️ Provider Configuration

- #### Amazon Bedrock
- - `claude-3-7-sonnet` (default)
- - `claude-3-5-sonnet`
- - `claude-3-haiku`
-
- #### Google Vertex AI
- - `gemini-2.5-flash` (default)
- - `claude-4.0-sonnet`
-
- ## Provider Configuration
-
- ### OpenAI Setup
+ ### OpenAI
  ```bash
- export OPENAI_API_KEY="sk-your-key-here"
+ export OPENAI_API_KEY="sk-your-openai-key"
  ```

- ### Amazon Bedrock Setup
-
- **⚠️ CRITICAL: Anthropic Models Require Inference Profile ARN**
-
- For Anthropic Claude models in Bedrock, you **MUST** use the full inference profile ARN, not simple model names:
-
+ ### Amazon Bedrock (⚠️ Requires Inference Profile ARN)
  ```bash
  export AWS_ACCESS_KEY_ID="your-access-key"
  export AWS_SECRET_ACCESS_KEY="your-secret-key"
- export AWS_REGION="us-east-2"
-
- # ✅ CORRECT: Use full inference profile ARN for Anthropic models
  export BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
-
- # ❌ WRONG: Simple model names cause "not authorized to invoke this API" errors
- # export BEDROCK_MODEL="anthropic.claude-3-sonnet-20240229-v1:0"
- ```
-
- #### Why Inference Profiles?
- - **Cross-Region Access**: Faster access across AWS regions
- - **Better Performance**: Optimized routing and response times
- - **Higher Availability**: Improved model availability and reliability
- - **Different Permissions**: Separate permission model from base models
-
- #### Available Inference Profile ARNs
- ```bash
- # Claude 3.7 Sonnet (Latest - Recommended)
- BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
-
- # Claude 3.5 Sonnet
- BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0"
-
- # Claude 3 Haiku
- BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-haiku-20240307-v1:0"
- ```
-
- #### Session Token Support
- For temporary credentials (common in development):
- ```bash
- export AWS_SESSION_TOKEN="your-session-token" # Required for temporary credentials
  ```

- ### Google Vertex AI Setup
-
- NeuroLink supports **three authentication methods** for Google Vertex AI:
-
- #### Method 1: Service Account File (Recommended for Production)
+ ### Google Vertex AI (Multiple Auth Methods)
  ```bash
+ # Method 1: Service Account File
  export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
  export GOOGLE_VERTEX_PROJECT="your-project-id"
- export GOOGLE_VERTEX_LOCATION="us-central1"
- ```

- #### Method 2: Service Account JSON String (Good for Containers/Cloud)
- ```bash
- export GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account","project_id":"your-project",...}'
+ # Method 2: JSON String
+ export GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account",...}'
  export GOOGLE_VERTEX_PROJECT="your-project-id"
- export GOOGLE_VERTEX_LOCATION="us-central1"
- ```

- #### Method 3: Individual Environment Variables (Good for CI/CD)
- ```bash
+ # Method 3: Individual Variables
  export GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
- export GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIE..."
+ export GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n..."
  export GOOGLE_VERTEX_PROJECT="your-project-id"
- export GOOGLE_VERTEX_LOCATION="us-central1"
  ```

- ### Complete Environment Variables Reference
-
- #### OpenAI Configuration
- ```bash
- # Required
- OPENAI_API_KEY="sk-your-openai-api-key"
-
- # Optional
- OPENAI_MODEL="gpt-4o" # Default model to use
- ```
+ **[📖 View complete provider configuration guide](./docs/PROVIDER-CONFIGURATION.md)**

- #### Amazon Bedrock Configuration
- ```bash
- # Required
- AWS_ACCESS_KEY_ID="your-aws-access-key"
- AWS_SECRET_ACCESS_KEY="your-aws-secret-key"
-
- # Optional
- AWS_REGION="us-east-2" # Default: us-east-2
- AWS_SESSION_TOKEN="your-session-token" # Required for temporary credentials
- BEDROCK_MODEL_ID="anthropic.claude-3-7-sonnet-20250219-v1:0" # Default model
- ```
-
- #### Google Vertex AI Configuration
- ```bash
- # Required (choose one authentication method)
- # Method 1: Service Account File
- GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
+ ## 📚 API Reference

- # Method 2: Service Account JSON String
- GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account",...}'
-
- # Method 3: Individual Environment Variables
- GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
- GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIE..."
-
- # Required for all methods
- GOOGLE_VERTEX_PROJECT="your-gcp-project-id"
-
- # Optional
- GOOGLE_VERTEX_LOCATION="us-east5" # Default: us-east5
- VERTEX_MODEL_ID="claude-sonnet-4@20250514" # Default model
- ```
-
- #### General Configuration
- ```bash
- # Provider Selection (optional)
- DEFAULT_PROVIDER="bedrock" # Primary provider preference
- FALLBACK_PROVIDER="openai" # Fallback provider
-
- # Application Settings
- PUBLIC_APP_ENVIRONMENT="dev" # dev, staging, production
- ENABLE_STREAMING="true" # Enable streaming responses
- ENABLE_FALLBACK="true" # Enable automatic fallback
-
- # Debug and Logging
- NEUROLINK_DEBUG="true" # Enable debug logging
- LOG_LEVEL="info" # error, warn, info, debug
- ```
-
- #### Environment File Example (.env)
- ```bash
- # Copy this to your .env file and fill in your credentials
-
- # OpenAI
- OPENAI_API_KEY=sk-your-openai-key-here
- OPENAI_MODEL=gpt-4o
-
- # Amazon Bedrock
- AWS_ACCESS_KEY_ID=your-aws-access-key
- AWS_SECRET_ACCESS_KEY=your-aws-secret-key
- AWS_REGION=us-east-2
- BEDROCK_MODEL_ID=anthropic.claude-3-7-sonnet-20250219-v1:0
-
- # Google Vertex AI (choose one method)
- # Method 1: File path
- GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/service-account.json
-
- # Method 2: JSON string (uncomment to use)
- # GOOGLE_SERVICE_ACCOUNT_KEY={"type":"service_account","project_id":"your-project",...}
-
- # Method 3: Individual variables (uncomment to use)
- # GOOGLE_AUTH_CLIENT_EMAIL=service-account@your-project.iam.gserviceaccount.com
- # GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nYOUR_PRIVATE_KEY_HERE\n-----END PRIVATE KEY-----"
-
- # Required for all Google Vertex AI methods
- GOOGLE_VERTEX_PROJECT=your-gcp-project-id
- GOOGLE_VERTEX_LOCATION=us-east5
- VERTEX_MODEL_ID=claude-sonnet-4@20250514
-
- # Application Settings
- DEFAULT_PROVIDER=auto
- ENABLE_STREAMING=true
- ENABLE_FALLBACK=true
- NEUROLINK_DEBUG=false
- ```
-
- ## Advanced Patterns
-
- ### Custom Configuration
- ```typescript
- import { AIProviderFactory } from '@juspay/neurolink';
-
- // Environment-based provider selection
- const isDev = process.env.NODE_ENV === 'development';
- const provider = isDev
- ? AIProviderFactory.createProvider('openai', 'gpt-4o-mini') // Cheaper for dev
- : AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet'); // Production
-
- // Multiple providers for different use cases
- const providers = {
- creative: AIProviderFactory.createProvider('openai', 'gpt-4o'),
- analytical: AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet'),
- fast: AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash')
- };
-
- async function generateCreativeContent(prompt: string) {
- return await providers.creative.generateText({
- prompt,
- temperature: 0.9,
- maxTokens: 2000
- });
- }
- ```
-
- ### Response Caching
- ```typescript
- const cache = new Map<string, { text: string; timestamp: number }>();
- const CACHE_DURATION = 5 * 60 * 1000; // 5 minutes
-
- async function cachedGenerate(prompt: string) {
- const key = prompt.toLowerCase().trim();
- const cached = cache.get(key);
-
- if (cached && Date.now() - cached.timestamp < CACHE_DURATION) {
- return { ...cached, fromCache: true };
- }
-
- const provider = createBestAIProvider();
- const result = await provider.generateText({ prompt });
-
- cache.set(key, { text: result.text, timestamp: Date.now() });
- return { text: result.text, fromCache: false };
- }
- ```
-
- ### Batch Processing
- ```typescript
- async function processBatch(prompts: string[]) {
- const provider = createBestAIProvider();
- const chunkSize = 5;
- const results = [];
-
- for (let i = 0; i < prompts.length; i += chunkSize) {
- const chunk = prompts.slice(i, i + chunkSize);
-
- const chunkResults = await Promise.allSettled(
- chunk.map(prompt => provider.generateText({ prompt, maxTokens: 500 }))
- );
-
- results.push(...chunkResults);
-
- // Rate limiting
- if (i + chunkSize < prompts.length) {
- await new Promise(resolve => setTimeout(resolve, 1000));
- }
- }
-
- return results.map((result, index) => ({
- prompt: prompts[index],
- success: result.status === 'fulfilled',
- result: result.status === 'fulfilled' ? result.value : result.reason
- }));
- }
- ```
-
- ## Error Handling
-
- ### Troubleshooting Common Issues
-
- #### AWS Credentials and Authorization
- ```
- ValidationException: Your account is not authorized to invoke this API operation.
- ```
- - **Cause**: The AWS account doesn't have access to Bedrock or the specific model
- - **Solution**:
- - Verify your AWS account has Bedrock enabled
- - Check model availability in your AWS region
- - Ensure your IAM role has `bedrock:InvokeModel` permissions
-
- #### Missing or Invalid Credentials
- ```
- Error: Cannot find API key for OpenAI provider
- ```
- - **Cause**: The environment variable for API credentials is missing
- - **Solution**: Set the appropriate environment variable (OPENAI_API_KEY, etc.)
-
- #### Google Vertex Import Issues
- ```
- Cannot find package '@google-cloud/vertexai' imported from...
- ```
- - **Cause**: Missing Google Vertex AI peer dependency
- - **Solution**: Install the package with `npm install @google-cloud/vertexai`
-
- #### Session Token Expired
- ```
- The security token included in the request is expired
- ```
- - **Cause**: AWS session token has expired
- - **Solution**: Generate new AWS credentials with a fresh session token
-
- ### Comprehensive Error Handling
+ ### Core Functions
  ```typescript
- import { createBestAIProvider } from '@juspay/neurolink';
+ // Auto-select best provider
+ const provider = createBestAIProvider();

- async function robustGenerate(prompt: string, maxRetries = 3) {
- let attempt = 0;
-
- while (attempt < maxRetries) {
- try {
- const provider = createBestAIProvider();
- return await provider.generateText({ prompt });
- } catch (error) {
- attempt++;
- console.error(`Attempt ${attempt} failed:`, error.message);
-
- if (attempt >= maxRetries) {
- throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
- }
-
- // Exponential backoff
- await new Promise(resolve =>
- setTimeout(resolve, Math.pow(2, attempt) * 1000)
- );
- }
- }
- }
- ```
+ // Specific provider
+ const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');

- ### Provider Fallback
- ```typescript
- async function generateWithFallback(prompt: string) {
- const providers = ['bedrock', 'openai', 'vertex'];
-
- for (const providerName of providers) {
- try {
- const provider = AIProviderFactory.createProvider(providerName);
- return await provider.generateText({ prompt });
- } catch (error) {
- console.warn(`${providerName} failed:`, error.message);
-
- if (error.message.includes('API key') || error.message.includes('credentials')) {
- console.log(`${providerName} not configured, trying next...`);
- continue;
- }
- }
- }
-
- throw new Error('All providers failed or are not configured');
- }
+ // With fallback
+ const { primary, fallback } = createAIProviderWithFallback('bedrock', 'openai');
  ```

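+ The `primary`/`fallback` pair is used manually; a condensed sketch based on the original `createAIProviderWithFallback` example:
+
+ ```typescript
+ const { primary, fallback } = createAIProviderWithFallback('bedrock', 'openai');
+
+ let result;
+ try {
+   result = await primary.generateText({ prompt: "Hello" });
+ } catch {
+   // Primary provider failed; retry once on the fallback
+   result = await fallback.generateText({ prompt: "Hello" });
+ }
+ ```
+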
- ### Common Error Types
+ ### Provider Interface
  ```typescript
- // Provider not configured
- if (error.message.includes('API key')) {
- console.error('Provider API key not set');
- }
-
- // Rate limiting
- if (error.message.includes('rate limit')) {
- console.error('Rate limit exceeded, implement backoff');
- }
-
- // Model not available
- if (error.message.includes('model')) {
- console.error('Requested model not available');
+ interface AIProvider {
+   generateText(options: GenerateTextOptions): Promise<GenerateTextResult>;
+   streamText(options: StreamTextOptions): Promise<StreamTextResult>;
  }

- // Network issues
- if (error.message.includes('network') || error.message.includes('timeout')) {
- console.error('Network connectivity issue');
+ interface GenerateTextOptions {
+   prompt: string;
+   temperature?: number; // 0.0 to 1.0, default: 0.7
+   maxTokens?: number; // Default: 500
+   systemPrompt?: string;
  }
  ```

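+ Every provider returns the same result shape, so calls are interchangeable; a minimal call using the options above (values are illustrative, reused from the original basic-usage example):
+
+ ```typescript
+ const result = await provider.generateText({
+   prompt: "Explain TypeScript generics",
+   temperature: 0.7,
+   maxTokens: 500
+ });
+
+ console.log(result.text);
+ console.log(`Used: ${result.provider}`);
+ ```
+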
- ## Performance
-
- ### Optimization Tips
+ ### Supported Models
+ - **OpenAI**: `gpt-4o` (default), `gpt-4o-mini`, `gpt-4-turbo`
+ - **Bedrock**: `claude-3-7-sonnet` (default), `claude-3-5-sonnet`, `claude-3-haiku`
+ - **Vertex AI**: `gemini-2.5-flash` (default), `claude-sonnet-4@20250514`

- 1. **Choose Right Models for Use Case**
- ```typescript
- // Fast responses for simple tasks
- const fast = AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash');
+ **[📖 View complete API reference](./docs/API-REFERENCE.md)**

- // High quality for complex tasks
- const quality = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
+ ## 🎯 Visual Content Benefits

- // Cost-effective for development
- const dev = AIProviderFactory.createProvider('openai', 'gpt-4o-mini');
- ```
+ - ✅ **No Installation Required** - See everything in action before installing
+ - ✅ **Real AI Content** - All screenshots and videos show actual AI generation
+ - ✅ **Professional Quality** - 1920x1080 resolution suitable for documentation
+ - ✅ **Complete Coverage** - Every major feature visually documented
+ - ✅ **Production Validation** - Demonstrates real-world usage patterns

- 2. **Streaming for Long Responses**
- ```typescript
- // Use streaming for better UX on long content
- const result = await provider.streamText({
- prompt: "Write a detailed article...",
- maxTokens: 2000
- });
- ```
+ **[📖 View complete visual demonstrations](./docs/VISUAL-DEMOS.md)**

- 3. **Appropriate Token Limits**
- ```typescript
- // Set reasonable limits to control costs
- const result = await provider.generateText({
- prompt: "Summarize this text",
- maxTokens: 150 // Just enough for a summary
- });
- ```
+ ## 🚀 Getting Started

- ### Provider Limits
- - **OpenAI**: Rate limits based on tier (TPM/RPM)
- - **Bedrock**: Regional quotas and model availability
- - **Vertex AI**: Project-based quotas and rate limits
+ 1. **Try CLI immediately**: `npx @juspay/neurolink status`
+ 2. **View live demo**: `cd neurolink-demo && npm start`
+ 3. **Set up providers**: See [Provider Configuration Guide](./docs/PROVIDER-CONFIGURATION.md)
+ 4. **Integrate with your framework**: See [Framework Integration Guide](./docs/FRAMEWORK-INTEGRATION.md)
+ 5. **Build with the SDK**: See [API Reference](./docs/API-REFERENCE.md)

- ## Contributing
+ ## 🤝 Contributing

- We welcome contributions! Here's how to get started:
+ We welcome contributions! Please see our [Contributing Guidelines](./CONTRIBUTING.md) for details.

  ### Development Setup
  ```bash
  git clone https://github.com/juspay/neurolink
  cd neurolink
  pnpm install
+ pnpm test
+ pnpm build
  ```

- ### Running Tests
- ```bash
- pnpm test # Run all tests
- pnpm test:watch # Watch mode
- pnpm test:coverage # Coverage report
- ```
-
- ### Building
- ```bash
- pnpm build # Build the library
- pnpm check # Type checking
- pnpm lint # Lint code
- ```
-
- ### Guidelines
- - Follow existing TypeScript patterns
- - Add tests for new features
- - Update documentation
- - Ensure all providers work consistently
-
- ## License
+ ## 📄 License

  MIT © [Juspay Technologies](https://juspay.in)

- ## Related Projects
+ ## 🔗 Related Projects

  - [Vercel AI SDK](https://github.com/vercel/ai) - Underlying provider implementations
- - [SvelteKit](https://kit.svelte.dev) - Web framework
+ - [SvelteKit](https://kit.svelte.dev) - Web framework used in this project
  - [Lighthouse](https://github.com/juspay/lighthouse) - Original source project

  ---