@juspay/neurolink 1.0.0 → 1.2.0

package/README.md CHANGED
@@ -1,21 +1,37 @@
1
1
  # 🧠 NeuroLink
2
2
 
3
- [![npm version](https://badge.fury.io/js/neurolink.svg)](https://badge.fury.io/js/neurolink)
3
+ [![npm version](https://badge.fury.io/js/%40juspay%2Fneurolink.svg)](https://badge.fury.io/js/%40juspay%2Fneurolink)
4
4
  [![TypeScript](https://img.shields.io/badge/%3C%2F%3E-TypeScript-%230074c1.svg)](http://www.typescriptlang.org/)
5
5
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
6
6
 
7
- > Production-ready AI toolkit with multi-provider support, automatic fallback, and full TypeScript integration.
7
+ > Production-ready AI toolkit with multi-provider support, automatic fallback, and full TypeScript integration. **Now with a professional CLI!**
8
8
 
9
- **NeuroLink** provides a unified interface for AI providers (OpenAI, Amazon Bedrock, Google Vertex AI) with intelligent fallback, streaming support, and type-safe APIs. Extracted from production use at Juspay.
9
+ **NeuroLink** provides a unified interface for AI providers (OpenAI, Amazon Bedrock, Google Vertex AI) with intelligent fallback, streaming support, and type-safe APIs. Available as both a **programmatic SDK** and a **professional CLI tool**. Extracted from production use at Juspay.
10
10
 
11
- ## Quick Start
11
+ ## 🚀 Quick Start
12
+
13
+ ### 📦 Installation
12
14
 
13
15
  ```bash
14
- npm install neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
16
+ # Install globally for CLI usage
17
+ npm install -g @juspay/neurolink
18
+
19
+ # Or use directly with npx (no installation required)
20
+ npx @juspay/neurolink generate-text "Hello, AI!"
21
+
22
+ # After global installation, use the CLI anywhere
24
+ neurolink generate-text "Write a haiku about programming"
25
+ neurolink status --verbose
26
+ ```
27
+
28
+ ### Programmatic Usage
29
+ ```bash
30
+ npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
15
31
  ```
16
32
 
17
33
  ```typescript
18
- import { createBestAIProvider } from 'neurolink';
34
+ import { createBestAIProvider } from '@juspay/neurolink';
19
35
 
20
36
  // Auto-selects best available provider
21
37
  const provider = createBestAIProvider();
@@ -26,6 +42,62 @@ const result = await provider.generateText({
26
42
  console.log(result.text);
27
43
  ```
28
44
 
45
+ ## 🎬 Complete Visual Documentation
46
+
47
+ **No installation required!** Explore NeuroLink's capabilities through a comprehensive set of screenshots, videos, and live demos:
48
+
49
+ ### 🌐 Web Demo Screenshots & Videos
50
+
51
+ #### **📸 Interactive Web Interface Screenshots**
52
+ | Feature | Screenshot | Description |
53
+ |---------|------------|-------------|
54
+ | **Main Interface** | ![Main Interface](./neurolink-demo/screenshots/01-overview/01-main-interface-overview-2025-06-04T13-56-43-628Z.png) | Complete web interface showing all features |
55
+ | **AI Generation Results** | ![AI Generation](./neurolink-demo/screenshots/02-basic-examples/02-ai-generation-results-2025-06-04T13-57-13-156Z.png) | Real AI content generation in action |
56
+ | **Business Use Cases** | ![Business Cases](./neurolink-demo/screenshots/03-business-use-cases/03-business-use-cases-2025-06-04T13-59-07-846Z.png) | Professional business applications |
57
+ | **Creative Tools** | ![Creative Tools](./neurolink-demo/screenshots/04-creative-tools/04-creative-tools-2025-06-04T13-59-24-346Z.png) | Creative content generation |
58
+ | **Developer Tools** | ![Developer Tools](./neurolink-demo/screenshots/05-developer-tools/05-developer-tools-2025-06-04T13-59-43-322Z.png) | Code generation and API docs |
59
+ | **Analytics & Monitoring** | ![Monitoring](./neurolink-demo/screenshots/06-monitoring/06-monitoring-analytics-2025-06-04T14-00-08-919Z.png) | Real-time provider analytics |
60
+
61
+ #### **🎥 Complete Demo Videos** *(5,681+ tokens of real AI generation)*
62
+ - **[Basic Examples](./neurolink-demo/videos/basic-examples/)** - Text generation, haiku creation, storytelling
63
+ - **[Business Use Cases](./neurolink-demo/videos/business-use-cases/)** - Email generation, analysis, summaries
64
+ - **[Creative Tools](./neurolink-demo/videos/creative-tools/)** - Stories, translation, creative ideas
65
+ - **[Developer Tools](./neurolink-demo/videos/developer-tools/)** - React code, API docs, debugging help
66
+ - **[Monitoring & Analytics](./neurolink-demo/videos/monitoring/)** - Live provider status and performance
67
+
68
+ ### 🖥️ CLI Tool Screenshots & Videos
69
+
70
+ #### **📸 Professional CLI Screenshots**
71
+ | Command | Screenshot | Description |
72
+ |---------|------------|-------------|
73
+ | **CLI Help Overview** | ![CLI Help](./cli-screenshots/01-cli-help-2025-06-04T19-38-12.png) | Complete command reference |
74
+ | **Provider Status Check** | ![Provider Status](./cli-screenshots/02-provider-status-2025-06-04T19-38-25.png) | All provider connectivity verified |
75
+ | **Text Generation** | ![Text Generation](./cli-screenshots/03-text-generation-2025-06-04T19-38-30.png) | Real AI haiku generation with JSON |
76
+ | **Auto Provider Selection** | ![Best Provider](./cli-screenshots/04-best-provider-2025-06-04T19-38-33.png) | Automatic provider selection working |
77
+ | **Batch Processing** | ![Batch Results](./cli-screenshots/05-batch-results-2025-06-04T19-38-37.png) | Multi-prompt processing with results |
78
+
79
+ #### **🎥 CLI Demonstration Videos** *(Real command execution)*
80
+ - **[CLI Overview](./cli-videos/cli-overview/)** - Help, status, provider selection commands
81
+ - **[Basic Generation](./cli-videos/cli-basic-generation/)** - Text generation with different providers
82
+ - **[Batch Processing](./cli-videos/cli-batch-processing/)** - File-based multi-prompt processing
83
+ - **[Real-time Streaming](./cli-videos/cli-streaming/)** - Live AI content streaming
84
+ - **[Advanced Features](./cli-videos/cli-advanced-features/)** - Verbose diagnostics and provider options
85
+
86
+ ### 💻 Live Interactive Demo
87
+ - **Working Express.js server** with real API integration
88
+ - **All 3 providers functional** (OpenAI, Bedrock, Vertex AI)
89
+ - **15+ use cases** demonstrated across business, creative, and developer tools
90
+ - **Real-time provider analytics** with performance metrics
91
+
92
+ ### 🎯 Visual Content Benefits
93
+ - ✅ **No Installation Required** - See everything in action before installing
94
+ - ✅ **Real AI Content** - All screenshots and videos show actual AI generation
95
+ - ✅ **Professional Quality** - 1920x1080 resolution suitable for documentation
96
+ - ✅ **Complete Coverage** - Every major feature visually documented
97
+ - ✅ **Production Validation** - Demonstrates real-world usage patterns
98
+
99
+ [📁 View complete visual documentation](./neurolink-demo/) including all screenshots, videos, and interactive examples.
100
+
29
101
  ## Table of Contents
30
102
 
31
103
  - [Features](#features)
@@ -57,13 +129,13 @@ console.log(result.text);
57
129
  ### Package Installation
58
130
  ```bash
59
131
  # npm
60
- npm install neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
132
+ npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
61
133
 
62
134
  # yarn
63
- yarn add neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
135
+ yarn add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
64
136
 
65
137
  # pnpm (recommended)
66
- pnpm add neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
138
+ pnpm add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
67
139
  ```
68
140
 
69
141
  ### Environment Setup
@@ -79,7 +151,7 @@ export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
79
151
 
80
152
  ### Simple Text Generation
81
153
  ```typescript
82
- import { createBestAIProvider } from 'neurolink';
154
+ import { createBestAIProvider } from '@juspay/neurolink';
83
155
 
84
156
  const provider = createBestAIProvider();
85
157
 
@@ -96,7 +168,7 @@ console.log(`Used: ${result.provider}`);
96
168
 
97
169
  ### Streaming Responses
98
170
  ```typescript
99
- import { createBestAIProvider } from 'neurolink';
171
+ import { createBestAIProvider } from '@juspay/neurolink';
100
172
 
101
173
  const provider = createBestAIProvider();
102
174
 
@@ -114,7 +186,7 @@ for await (const chunk of result.textStream) {
114
186
 
115
187
  ### Provider Selection
116
188
  ```typescript
117
- import { AIProviderFactory } from 'neurolink';
189
+ import { AIProviderFactory } from '@juspay/neurolink';
118
190
 
119
191
  // Use specific provider
120
192
  const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
@@ -126,13 +198,301 @@ const { primary, fallback } = AIProviderFactory.createProviderWithFallback(
126
198
  );
127
199
  ```
128
200
 
201
+ ## 🖥️ CLI Tool
202
+
203
+ NeuroLink includes a professional command-line tool that exposes all SDK functionality directly from your terminal.
204
+
205
+ ### Installation & Usage
206
+
207
+ #### Option 1: NPX (No Installation Required)
208
+ ```bash
209
+ # Use directly without installation
210
+ npx @juspay/neurolink --help
211
+ npx @juspay/neurolink generate-text "Hello, AI!"
212
+ npx @juspay/neurolink status
213
+ ```
214
+
215
+ #### Option 2: Global Installation
216
+ ```bash
217
+ # Install globally for convenient access
218
+ npm install -g @juspay/neurolink
219
+
220
+ # Then use anywhere
221
+ neurolink --help
222
+ neurolink generate-text "Write a haiku about programming"
223
+ neurolink status --verbose
224
+ ```
225
+
226
+ #### Option 3: Local Project Usage
227
+ ```bash
228
+ # Add to project and use via npm scripts
229
+ npm install @juspay/neurolink
230
+ npx neurolink generate-text "Explain TypeScript"
231
+ ```
232
+
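+ The comment above mentions npm scripts; one way to wire that up is with `npm pkg set` (available in npm 7.24+). This is an illustrative sketch; the script names are arbitrary:
+
+ ```bash
+ # Register convenience scripts in package.json (names here are just examples)
+ npm pkg set scripts.ai="neurolink generate-text"
+ npm pkg set scripts.ai-status="neurolink status"
+
+ # Run them through npm; arguments after -- are forwarded to the CLI
+ npm run ai -- "Explain TypeScript"
+ npm run ai-status
+ ```
+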
233
+ ### CLI Commands
234
+
235
+ #### `generate-text <prompt>` - Core Text Generation
236
+ ```bash
237
+ # Basic text generation
238
+ neurolink generate-text "Explain quantum computing"
239
+
240
+ # With provider selection
241
+ neurolink generate-text "Write a story" --provider openai
242
+
243
+ # With temperature and token control
244
+ neurolink generate-text "Creative writing" --temperature 0.9 --max-tokens 1000
245
+
246
+ # JSON output for scripting
247
+ neurolink generate-text "Summary of AI" --format json
248
+ ```
249
+
250
+ **Output Example:**
251
+ ```
252
+ 🤖 Generating text...
253
+ ✅ Text generated successfully!
254
+ Quantum computing represents a revolutionary approach to information processing...
255
+ ℹ️ 127 tokens used
256
+ ```
257
+
258
+ #### `stream <prompt>` - Real-time Streaming
259
+ ```bash
260
+ # Stream text generation in real-time
261
+ neurolink stream "Tell me a story about robots"
262
+
263
+ # With provider selection
264
+ neurolink stream "Explain machine learning" --provider vertex --temperature 0.8
265
+ ```
266
+
267
+ **Output Example:**
268
+ ```
269
+ 🔄 Streaming from auto provider...
270
+
271
+ Once upon a time, in a world where technology had advanced beyond...
272
+ [text streams in real-time as it's generated]
273
+ ```
274
+
275
+ #### `batch <file>` - Process Multiple Prompts
276
+ ```bash
277
+ # Create a file with prompts (one per line)
278
+ echo -e "Write a haiku\nExplain gravity\nDescribe the ocean" > prompts.txt
279
+
280
+ # Process all prompts
281
+ neurolink batch prompts.txt
282
+
283
+ # Save results to JSON file
284
+ neurolink batch prompts.txt --output results.json
285
+
286
+ # Add delay between requests (rate limiting)
287
+ neurolink batch prompts.txt --delay 2000
288
+ ```
289
+
290
+ **Output Example:**
291
+ ```
292
+ 📦 Processing 3 prompts...
293
+
294
+ ✅ 1/3 completed
295
+ ✅ 2/3 completed
296
+ ✅ 3/3 completed
297
+ ✅ Results saved to results.json
298
+ ```
299
+
300
+ #### `status` - Provider Diagnostics
301
+ ```bash
302
+ # Check all provider connectivity
303
+ neurolink status
304
+
305
+ # Verbose output with detailed information
306
+ neurolink status --verbose
307
+ ```
308
+
309
+ **Output Example:**
310
+ ```
311
+ 🔍 Checking AI provider status...
312
+
313
+ ✅ openai: ✅ Working (234ms)
314
+ ✅ bedrock: ✅ Working (456ms)
315
+ ❌ vertex: ❌ Authentication failed
316
+
317
+ 📊 Summary: 2/3 providers working
318
+ ```
319
+
320
+ #### `get-best-provider` - Auto-selection Testing
321
+ ```bash
322
+ # Test which provider would be auto-selected
323
+ neurolink get-best-provider
324
+ ```
325
+
326
+ **Output Example:**
327
+ ```
328
+ 🎯 Finding best provider...
329
+ ✅ Best provider: bedrock
330
+ ```
331
+
332
+ ### CLI Options & Arguments
333
+
334
+ #### Global Options
335
+ - `--help, -h` - Show help information
336
+ - `--version, -v` - Show version number
337
+
338
+ #### Generation Options
339
+ - `--provider <name>` - Choose provider: `auto` (default), `openai`, `bedrock`, `vertex`
340
+ - `--temperature <number>` - Creativity level: `0.0` (focused) to `1.0` (creative), default: `0.7`
341
+ - `--max-tokens <number>` - Maximum tokens to generate, default: `500`
342
+ - `--format <type>` - Output format: `text` (default) or `json`
343
+
344
+ #### Batch Processing Options
345
+ - `--output <file>` - Save results to JSON file
346
+ - `--delay <ms>` - Delay between requests in milliseconds, default: `1000`
347
+
348
+ #### Status Options
349
+ - `--verbose, -v` - Show detailed diagnostic information
350
+
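+ These options can be combined in a single invocation. For example, a focused, script-friendly request against a specific provider might look like this (all flags are documented above; the prompt is only an example):
+
+ ```bash
+ # Low-temperature JSON output from OpenAI, capped at 300 tokens
+ neurolink generate-text "List three practical uses of TypeScript" \
+   --provider openai \
+   --temperature 0.2 \
+   --max-tokens 300 \
+   --format json
+ ```
+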
351
+ ### CLI Features
352
+
353
+ #### ✨ Professional UX
354
+ - **Animated Spinners**: Beautiful animations during AI generation
355
+ - **Colorized Output**: Green ✅ for success, red ❌ for errors, blue ℹ️ for info
356
+ - **Progress Tracking**: Real-time progress for batch operations
357
+ - **Smart Error Messages**: Helpful hints for common issues
358
+
359
+ #### 🛠️ Developer-Friendly
360
+ - **Multiple Output Formats**: Text for humans, JSON for scripts
361
+ - **Provider Selection**: Test specific providers or use auto-selection
362
+ - **Batch Processing**: Handle multiple prompts efficiently
363
+ - **Status Monitoring**: Check provider health and connectivity
364
+
365
+ #### 🔧 Automation Ready
366
+ - **Exit Codes**: Standard exit codes for scripting (see the sketch after this list)
367
+ - **JSON Output**: Structured data for automated workflows
368
+ - **Environment Variables**: All SDK environment variables work with the CLI
369
+ - **Scriptable**: Perfect for CI/CD pipelines and automation
370
+
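+ Because the CLI reports failures through its exit code, a CI step can gate on it directly. A minimal sketch, assuming a non-zero exit code on failure as noted above:
+
+ ```bash
+ # Smoke-test AI connectivity in CI; fail the job if generation does not succeed
+ if neurolink generate-text "ping" --max-tokens 20 > /dev/null; then
+   echo "AI provider reachable"
+ else
+   echo "AI generation failed" >&2
+   exit 1
+ fi
+ ```
+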
371
+ ### CLI Usage Examples
372
+
373
+ #### Creative Writing Workflow
374
+ ```bash
375
+ # Generate creative content with high temperature
376
+ neurolink generate-text "Write a sci-fi story opening" \
377
+ --provider openai \
378
+ --temperature 0.9 \
379
+ --max-tokens 1000 \
380
+ --format json > story.json
381
+
382
+ # Check what was generated
383
+ cat story.json | jq '.content'
384
+ ```
385
+
386
+ #### Batch Content Processing
387
+ ```bash
388
+ # Create prompts file
389
+ cat > content-prompts.txt << EOF
390
+ Write a product description for AI software
391
+ Create a social media post about technology
392
+ Draft an email about our new features
393
+ Write a blog post title about machine learning
394
+ EOF
395
+
396
+ # Process all prompts and save results
397
+ neurolink batch content-prompts.txt \
398
+ --output content-results.json \
399
+ --provider bedrock \
400
+ --delay 2000
401
+
402
+ # Extract just the content
403
+ cat content-results.json | jq -r '.[].response'
404
+ ```
405
+
406
+ #### Provider Health Monitoring
407
+ ```bash
408
+ # Check provider status (useful for monitoring scripts)
409
+ neurolink status --format json > status.json
410
+
411
+ # Parse results in scripts
412
+ working_providers=$(cat status.json | jq '[.[] | select(.status == "working")] | length')
413
+ echo "Working providers: $working_providers"
414
+ ```
415
+
416
+ #### Integration with Shell Scripts
417
+ ```bash
418
+ #!/bin/bash
419
+ # AI-powered commit message generator
420
+
421
+ # Get the names of the staged files
422
+ diff=$(git diff --cached --name-only)
423
+
424
+ if [ -z "$diff" ]; then
425
+ echo "No staged changes found"
426
+ exit 1
427
+ fi
428
+
429
+ # Generate commit message
430
+ commit_msg=$(neurolink generate-text \
431
+ "Generate a concise git commit message for these changes: $diff" \
432
+ --max-tokens 50 \
433
+ --temperature 0.3)
434
+
435
+ echo "Suggested commit message:"
436
+ echo "$commit_msg"
437
+
438
+ # Optionally auto-commit
439
+ read -p "Use this commit message? (y/N): " -n 1 -r
440
+ if [[ $REPLY =~ ^[Yy]$ ]]; then
441
+ git commit -m "$commit_msg"
442
+ fi
443
+ ```
444
+
445
+ ### Environment Setup for CLI
446
+
447
+ The CLI uses the same environment variables as the SDK:
448
+
449
+ ```bash
450
+ # Set up your providers (same as SDK)
451
+ export OPENAI_API_KEY="sk-your-key"
452
+ export AWS_ACCESS_KEY_ID="your-aws-key"
453
+ export AWS_SECRET_ACCESS_KEY="your-aws-secret"
454
+ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
455
+
456
+ # Test configuration
457
+ neurolink status
458
+ ```
459
+
460
+ ### CLI vs SDK Comparison
461
+
462
+ | Feature | CLI | SDK |
463
+ |---------|-----|-----|
464
+ | **Text Generation** | ✅ `generate-text` | ✅ `generateText()` |
465
+ | **Streaming** | ✅ `stream` | ✅ `streamText()` |
466
+ | **Provider Selection** | ✅ `--provider` flag | ✅ `createProvider()` |
467
+ | **Batch Processing** | ✅ `batch` command | ✅ Manual implementation |
468
+ | **Status Monitoring** | ✅ `status` command | ✅ Manual testing |
469
+ | **JSON Output** | ✅ `--format json` | ✅ Native objects |
470
+ | **Automation** | ✅ Perfect for scripts | ✅ Perfect for apps |
471
+ | **Learning Curve** | 🟢 Low | 🟡 Medium |
472
+
473
+ ### When to Use CLI vs SDK
474
+
475
+ #### Use the CLI when:
476
+ - 🔧 **Prototyping**: Quick testing of prompts and providers
477
+ - 📜 **Scripting**: Shell scripts and automation workflows
478
+ - 🔍 **Debugging**: Checking provider status and testing connectivity
479
+ - 📊 **Batch Processing**: Processing multiple prompts from files
480
+ - 🎯 **One-off Tasks**: Generating content without writing code
481
+
482
+ #### Use the SDK when:
483
+ - 🏗️ **Application Development**: Building web apps, APIs, or services
484
+ - 🔄 **Real-time Integration**: Chat interfaces, streaming responses
485
+ - ⚙️ **Complex Logic**: Custom provider fallback, error handling
486
+ - 🎨 **UI Integration**: React components, Svelte stores
487
+ - 📈 **Production Applications**: Full-featured applications
488
+
129
489
  ## Framework Integration
130
490
 
131
491
  ### SvelteKit
132
492
 
133
493
  #### API Route (`src/routes/api/chat/+server.ts`)
134
494
  ```typescript
135
- import { createBestAIProvider } from 'neurolink';
495
+ import { createBestAIProvider } from '@juspay/neurolink';
136
496
  import type { RequestHandler } from './$types';
137
497
 
138
498
  export const POST: RequestHandler = async ({ request }) => {
@@ -215,7 +575,7 @@ export const POST: RequestHandler = async ({ request }) => {
215
575
 
216
576
  #### App Router API (`app/api/ai/route.ts`)
217
577
  ```typescript
218
- import { createBestAIProvider } from 'neurolink';
578
+ import { createBestAIProvider } from '@juspay/neurolink';
219
579
  import { NextRequest, NextResponse } from 'next/server';
220
580
 
221
581
  export async function POST(request: NextRequest) {
@@ -306,7 +666,7 @@ export default function AIChat() {
306
666
 
307
667
  ```typescript
308
668
  import express from 'express';
309
- import { createBestAIProvider, AIProviderFactory } from 'neurolink';
669
+ import { createBestAIProvider, AIProviderFactory } from '@juspay/neurolink';
310
670
 
311
671
  const app = express();
312
672
  app.use(express.json());
@@ -517,34 +877,175 @@ export OPENAI_API_KEY="sk-your-key-here"
517
877
  ```
518
878
 
519
879
  ### Amazon Bedrock Setup
880
+
881
+ **⚠️ CRITICAL: Anthropic Models Require Inference Profile ARN**
882
+
883
+ For Anthropic Claude models in Bedrock, you **MUST** use the full inference profile ARN, not simple model names:
884
+
520
885
  ```bash
521
886
  export AWS_ACCESS_KEY_ID="your-access-key"
522
887
  export AWS_SECRET_ACCESS_KEY="your-secret-key"
523
- export AWS_REGION="us-east-1"
888
+ export AWS_REGION="us-east-2"
889
+
890
+ # ✅ CORRECT: Use full inference profile ARN for Anthropic models
891
+ export BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
892
+
893
+ # ❌ WRONG: Simple model names cause "not authorized to invoke this API" errors
894
+ # export BEDROCK_MODEL="anthropic.claude-3-sonnet-20240229-v1:0"
895
+ ```
896
+
897
+ #### Why Inference Profiles?
898
+ - **Cross-Region Access**: Faster access across AWS regions
899
+ - **Better Performance**: Optimized routing and response times
900
+ - **Higher Availability**: Improved model availability and reliability
901
+ - **Different Permissions**: Separate permission model from base models
902
+
903
+ #### Available Inference Profile ARNs
904
+ ```bash
905
+ # Claude 3.7 Sonnet (Latest - Recommended)
906
+ BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
907
+
908
+ # Claude 3.5 Sonnet
909
+ BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-5-sonnet-20241022-v2:0"
910
+
911
+ # Claude 3 Haiku
912
+ BEDROCK_MODEL="arn:aws:bedrock:us-east-2:<account_id>:inference-profile/us.anthropic.claude-3-haiku-20240307-v1:0"
913
+ ```
914
+
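+ The `<account_id>` placeholder above is your own AWS account ID. A quick way to look it up and compose the ARN (a sketch assuming the AWS CLI is configured; the model segment is the Claude 3.7 Sonnet profile shown above):
+
+ ```bash
+ # Look up the current account ID and build the inference profile ARN
+ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
+ export BEDROCK_MODEL="arn:aws:bedrock:us-east-2:${ACCOUNT_ID}:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
+ echo "$BEDROCK_MODEL"
+ ```
+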
915
+ #### Session Token Support
916
+ For temporary credentials (common in development):
917
+ ```bash
918
+ export AWS_SESSION_TOKEN="your-session-token" # Required for temporary credentials
524
919
  ```
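+
+ If your temporary credentials come from assuming a role, one way to populate all three variables is via the AWS CLI and `jq` (an illustrative sketch; the role ARN and session name are placeholders):
+
+ ```bash
+ # Assume a role and export the temporary credentials NeuroLink will pick up
+ creds=$(aws sts assume-role \
+   --role-arn "arn:aws:iam::<account_id>:role/<bedrock-role>" \
+   --role-session-name neurolink-session)
+
+ export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
+ export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
+ export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')
+ ```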
525
920
 
526
921
  ### Google Vertex AI Setup
922
+
923
+ NeuroLink supports **three authentication methods** for Google Vertex AI:
924
+
925
+ #### Method 1: Service Account File (Recommended for Production)
527
926
  ```bash
528
927
  export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
529
928
  export GOOGLE_VERTEX_PROJECT="your-project-id"
530
929
  export GOOGLE_VERTEX_LOCATION="us-central1"
531
930
  ```
532
931
 
533
- ### Environment Variables Reference
932
+ #### Method 2: Service Account JSON String (Good for Containers/Cloud)
933
+ ```bash
934
+ export GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account","project_id":"your-project",...}'
935
+ export GOOGLE_VERTEX_PROJECT="your-project-id"
936
+ export GOOGLE_VERTEX_LOCATION="us-central1"
937
+ ```
938
+
939
+ #### Method 3: Individual Environment Variables (Good for CI/CD)
940
+ ```bash
941
+ export GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
942
+ export GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIE..."
943
+ export GOOGLE_VERTEX_PROJECT="your-project-id"
944
+ export GOOGLE_VERTEX_LOCATION="us-central1"
945
+ ```
946
+
947
+ ### Complete Environment Variables Reference
948
+
949
+ #### OpenAI Configuration
950
+ ```bash
951
+ # Required
952
+ OPENAI_API_KEY="sk-your-openai-api-key"
953
+
954
+ # Optional
955
+ OPENAI_MODEL="gpt-4o" # Default model to use
956
+ ```
957
+
958
+ #### Amazon Bedrock Configuration
534
959
  ```bash
535
- # Provider selection (optional)
536
- AI_DEFAULT_PROVIDER="bedrock"
537
- AI_FALLBACK_PROVIDER="openai"
960
+ # Required
961
+ AWS_ACCESS_KEY_ID="your-aws-access-key"
962
+ AWS_SECRET_ACCESS_KEY="your-aws-secret-key"
963
+
964
+ # Optional
965
+ AWS_REGION="us-east-2" # Default: us-east-2
966
+ AWS_SESSION_TOKEN="your-session-token" # Required for temporary credentials
967
+ BEDROCK_MODEL_ID="anthropic.claude-3-7-sonnet-20250219-v1:0" # Default model
968
+ ```
969
+
970
+ #### Google Vertex AI Configuration
971
+ ```bash
972
+ # Required (choose one authentication method)
973
+ # Method 1: Service Account File
974
+ GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
975
+
976
+ # Method 2: Service Account JSON String
977
+ GOOGLE_SERVICE_ACCOUNT_KEY='{"type":"service_account",...}'
538
978
 
539
- # Debug mode
540
- NEUROLINK_DEBUG="true"
979
+ # Method 3: Individual Environment Variables
980
+ GOOGLE_AUTH_CLIENT_EMAIL="service-account@project.iam.gserviceaccount.com"
981
+ GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIE..."
982
+
983
+ # Required for all methods
984
+ GOOGLE_VERTEX_PROJECT="your-gcp-project-id"
985
+
986
+ # Optional
987
+ GOOGLE_VERTEX_LOCATION="us-east5" # Default: us-east5
988
+ VERTEX_MODEL_ID="claude-sonnet-4@20250514" # Default model
989
+ ```
990
+
991
+ #### General Configuration
992
+ ```bash
993
+ # Provider Selection (optional)
994
+ DEFAULT_PROVIDER="bedrock" # Primary provider preference
995
+ FALLBACK_PROVIDER="openai" # Fallback provider
996
+
997
+ # Application Settings
998
+ PUBLIC_APP_ENVIRONMENT="dev" # dev, staging, production
999
+ ENABLE_STREAMING="true" # Enable streaming responses
1000
+ ENABLE_FALLBACK="true" # Enable automatic fallback
1001
+
1002
+ # Debug and Logging
1003
+ NEUROLINK_DEBUG="true" # Enable debug logging
1004
+ LOG_LEVEL="info" # error, warn, info, debug
1005
+ ```
1006
+
1007
+ #### Environment File Example (.env)
1008
+ ```bash
1009
+ # Copy this to your .env file and fill in your credentials
1010
+
1011
+ # OpenAI
1012
+ OPENAI_API_KEY=sk-your-openai-key-here
1013
+ OPENAI_MODEL=gpt-4o
1014
+
1015
+ # Amazon Bedrock
1016
+ AWS_ACCESS_KEY_ID=your-aws-access-key
1017
+ AWS_SECRET_ACCESS_KEY=your-aws-secret-key
1018
+ AWS_REGION=us-east-2
1019
+ BEDROCK_MODEL_ID=anthropic.claude-3-7-sonnet-20250219-v1:0
1020
+
1021
+ # Google Vertex AI (choose one method)
1022
+ # Method 1: File path
1023
+ GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/service-account.json
1024
+
1025
+ # Method 2: JSON string (uncomment to use)
1026
+ # GOOGLE_SERVICE_ACCOUNT_KEY={"type":"service_account","project_id":"your-project",...}
1027
+
1028
+ # Method 3: Individual variables (uncomment to use)
1029
+ # GOOGLE_AUTH_CLIENT_EMAIL=service-account@your-project.iam.gserviceaccount.com
1030
+ # GOOGLE_AUTH_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nYOUR_PRIVATE_KEY_HERE\n-----END PRIVATE KEY-----"
1031
+
1032
+ # Required for all Google Vertex AI methods
1033
+ GOOGLE_VERTEX_PROJECT=your-gcp-project-id
1034
+ GOOGLE_VERTEX_LOCATION=us-east5
1035
+ VERTEX_MODEL_ID=claude-sonnet-4@20250514
1036
+
1037
+ # Application Settings
1038
+ DEFAULT_PROVIDER=auto
1039
+ ENABLE_STREAMING=true
1040
+ ENABLE_FALLBACK=true
1041
+ NEUROLINK_DEBUG=false
541
1042
  ```
542
1043
 
543
1044
  ## Advanced Patterns
544
1045
 
545
1046
  ### Custom Configuration
546
1047
  ```typescript
547
- import { AIProviderFactory } from 'neurolink';
1048
+ import { AIProviderFactory } from '@juspay/neurolink';
548
1049
 
549
1050
  // Environment-based provider selection
550
1051
  const isDev = process.env.NODE_ENV === 'development';
@@ -656,7 +1157,7 @@ The security token included in the request is expired
656
1157
 
657
1158
  ### Comprehensive Error Handling
658
1159
  ```typescript
659
- import { createBestAIProvider } from 'neurolink';
1160
+ import { createBestAIProvider } from '@juspay/neurolink';
660
1161
 
661
1162
  async function robustGenerate(prompt: string, maxRetries = 3) {
662
1163
  let attempt = 0;