@juspay/neurolink 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,815 @@
1
+ # 🧠 NeuroLink
2
+
3
+ [![npm version](https://badge.fury.io/js/%40juspay%2Fneurolink.svg)](https://badge.fury.io/js/%40juspay%2Fneurolink)
4
+ [![TypeScript](https://img.shields.io/badge/%3C%2F%3E-TypeScript-%230074c1.svg)](https://www.typescriptlang.org/)
5
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
6
+
7
+ > Production-ready AI toolkit with multi-provider support, automatic fallback, and full TypeScript integration.
8
+
9
+ **NeuroLink** provides a unified interface for AI providers (OpenAI, Amazon Bedrock, Google Vertex AI) with intelligent fallback, streaming support, and type-safe APIs. Extracted from production use at Juspay.
10
+
11
+ ## Quick Start
12
+
13
+ ```bash
14
+ npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
15
+ ```
16
+
17
+ ```typescript
18
+ import { createBestAIProvider } from '@juspay/neurolink';
19
+
20
+ // Auto-selects best available provider
21
+ const provider = createBestAIProvider();
22
+ const result = await provider.generateText({
23
+ prompt: "Hello, AI!"
24
+ });
25
+
26
+ console.log(result.text);
27
+ ```
28
+
29
+ ## Table of Contents
30
+
31
+ - [Features](#features)
32
+ - [Installation](#installation)
33
+ - [Basic Usage](#basic-usage)
34
+ - [Framework Integration](#framework-integration)
35
+ - [SvelteKit](#sveltekit)
36
+ - [Next.js](#nextjs)
37
+ - [Express.js](#expressjs)
38
+ - [React Hook](#react-hook)
39
+ - [API Reference](#api-reference)
40
+ - [Provider Configuration](#provider-configuration)
41
+ - [Advanced Patterns](#advanced-patterns)
42
+ - [Error Handling](#error-handling)
43
+ - [Performance](#performance)
44
+ - [Contributing](#contributing)
45
+
46
+ ## Features
47
+
48
+ - 🔄 **Multi-Provider Support** - OpenAI, Amazon Bedrock, Google Vertex AI
49
+ - ⚡ **Automatic Fallback** - Seamless provider switching on failures
50
+ - 📡 **Streaming & Non-Streaming** - Real-time responses and standard generation
51
+ - 🎯 **TypeScript First** - Full type safety and IntelliSense support
52
+ - 🛡️ **Production Ready** - Extracted from proven production systems
53
+ - 🔧 **Zero Config** - Works out of the box with environment variables
54
+
55
+ ## Installation
56
+
57
+ ### Package Installation
58
+ ```bash
59
+ # npm
60
+ npm install @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
61
+
62
+ # yarn
63
+ yarn add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
64
+
65
+ # pnpm (recommended)
66
+ pnpm add @juspay/neurolink ai @ai-sdk/amazon-bedrock @ai-sdk/openai @ai-sdk/google-vertex zod
67
+ ```
68
+
69
+ ### Environment Setup
70
+ ```bash
71
+ # Choose one or more providers
72
+ export OPENAI_API_KEY="sk-your-openai-key"
73
+ export AWS_ACCESS_KEY_ID="your-aws-key"
74
+ export AWS_SECRET_ACCESS_KEY="your-aws-secret"
75
+ export GOOGLE_APPLICATION_CREDENTIALS="path/to/service-account.json"
76
+ ```
77
+
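As a quick sanity check, you can print which of the variables above are visible to your Node.js process before wiring up providers. This is a standalone sketch; the `configured` list is illustrative and not a NeuroLink API (the library reads these variables internally):

```typescript
// Sanity check: report which provider credentials the process can see.
const configured: Array<[string, boolean]> = [
  ["OpenAI", Boolean(process.env.OPENAI_API_KEY)],
  [
    "Amazon Bedrock",
    Boolean(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY),
  ],
  ["Google Vertex AI", Boolean(process.env.GOOGLE_APPLICATION_CREDENTIALS)],
];

for (const [name, ok] of configured) {
  console.log(`${name}: ${ok ? "configured" : "missing"}`);
}
```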
78
+ ## Basic Usage
79
+
80
+ ### Simple Text Generation
81
+ ```typescript
82
+ import { createBestAIProvider } from '@juspay/neurolink';
83
+
84
+ const provider = createBestAIProvider();
85
+
86
+ // Basic generation
87
+ const result = await provider.generateText({
88
+ prompt: "Explain TypeScript generics",
89
+ temperature: 0.7,
90
+ maxTokens: 500
91
+ });
92
+
93
+ console.log(result.text);
94
+ console.log(`Used: ${result.provider}`);
95
+ ```
96
+
97
+ ### Streaming Responses
98
+ ```typescript
99
+ import { createBestAIProvider } from '@juspay/neurolink';
100
+
101
+ const provider = createBestAIProvider();
102
+
103
+ const result = await provider.streamText({
104
+ prompt: "Write a story about AI",
105
+ temperature: 0.8,
106
+ maxTokens: 1000
107
+ });
108
+
109
+ // Handle streaming chunks
110
+ for await (const chunk of result.textStream) {
111
+ process.stdout.write(chunk);
112
+ }
113
+ ```
114
+
115
+ ### Provider Selection
116
+ ```typescript
117
+ import { AIProviderFactory } from '@juspay/neurolink';
118
+
119
+ // Use specific provider
120
+ const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
121
+ const bedrock = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
122
+
123
+ // With fallback
124
+ const { primary, fallback } = AIProviderFactory.createProviderWithFallback(
125
+ 'bedrock', 'openai'
126
+ );
127
+ ```
128
+
129
+ ## Framework Integration
130
+
131
+ ### SvelteKit
132
+
133
+ #### API Route (`src/routes/api/chat/+server.ts`)
134
+ ```typescript
135
+ import { createBestAIProvider } from '@juspay/neurolink';
136
+ import type { RequestHandler } from './$types';
137
+
138
+ export const POST: RequestHandler = async ({ request }) => {
139
+ try {
140
+ const { message } = await request.json();
141
+
142
+ const provider = createBestAIProvider();
143
+ const result = await provider.streamText({
144
+ prompt: message,
145
+ temperature: 0.7,
146
+ maxTokens: 1000
147
+ });
148
+
149
+ return new Response(result.toReadableStream(), {
150
+ headers: {
151
+ 'Content-Type': 'text/plain; charset=utf-8',
152
+ 'Cache-Control': 'no-cache'
153
+ }
154
+ });
155
+ } catch (error) {
156
+ return new Response(JSON.stringify({ error: error instanceof Error ? error.message : 'Unknown error' }), {
157
+ status: 500,
158
+ headers: { 'Content-Type': 'application/json' }
159
+ });
160
+ }
161
+ };
162
+ ```
163
+
164
+ #### Svelte Component (`src/routes/chat/+page.svelte`)
165
+ ```svelte
166
+ <script lang="ts">
167
+ let message = '';
168
+ let response = '';
169
+ let isLoading = false;
170
+
171
+ async function sendMessage() {
172
+ if (!message.trim()) return;
173
+
174
+ isLoading = true;
175
+ response = '';
176
+
177
+ try {
178
+ const res = await fetch('/api/chat', {
179
+ method: 'POST',
180
+ headers: { 'Content-Type': 'application/json' },
181
+ body: JSON.stringify({ message })
182
+ });
183
+
184
+ if (!res.body) throw new Error('No response');
185
+
186
+ const reader = res.body.getReader();
187
+ const decoder = new TextDecoder();
188
+
189
+ while (true) {
190
+ const { done, value } = await reader.read();
191
+ if (done) break;
192
+ response += decoder.decode(value, { stream: true });
193
+ }
194
+ } catch (error) {
195
+ response = `Error: ${error instanceof Error ? error.message : 'Unknown error'}`;
196
+ } finally {
197
+ isLoading = false;
198
+ }
199
+ }
200
+ </script>
201
+
202
+ <div class="chat">
203
+ <input bind:value={message} placeholder="Ask something..." />
204
+ <button on:click={sendMessage} disabled={isLoading}>
205
+ {isLoading ? 'Sending...' : 'Send'}
206
+ </button>
207
+
208
+ {#if response}
209
+ <div class="response">{response}</div>
210
+ {/if}
211
+ </div>
212
+ ```
213
+
214
+ ### Next.js
215
+
216
+ #### App Router API (`app/api/ai/route.ts`)
217
+ ```typescript
218
+ import { createBestAIProvider } from '@juspay/neurolink';
219
+ import { NextRequest, NextResponse } from 'next/server';
220
+
221
+ export async function POST(request: NextRequest) {
222
+ try {
223
+ const { prompt, ...options } = await request.json();
224
+
225
+ const provider = createBestAIProvider();
226
+ const result = await provider.generateText({
227
+ prompt,
228
+ temperature: 0.7,
229
+ maxTokens: 1000,
230
+ ...options
231
+ });
232
+
233
+ return NextResponse.json({
234
+ text: result.text,
235
+ provider: result.provider,
236
+ usage: result.usage
237
+ });
238
+ } catch (error) {
239
+ return NextResponse.json(
240
+ { error: error instanceof Error ? error.message : 'Unknown error' },
241
+ { status: 500 }
242
+ );
243
+ }
244
+ }
245
+ ```
246
+
247
+ #### React Component (`components/AIChat.tsx`)
248
+ ```typescript
249
+ 'use client';
250
+ import { useState } from 'react';
251
+
252
+ export default function AIChat() {
253
+ const [prompt, setPrompt] = useState('');
254
+ const [result, setResult] = useState<string>('');
255
+ const [loading, setLoading] = useState(false);
256
+
257
+ const generate = async () => {
258
+ if (!prompt.trim()) return;
259
+
260
+ setLoading(true);
261
+ try {
262
+ const response = await fetch('/api/ai', {
263
+ method: 'POST',
264
+ headers: { 'Content-Type': 'application/json' },
265
+ body: JSON.stringify({ prompt })
266
+ });
267
+
268
+ const data = await response.json();
269
+ setResult(data.text);
270
+ } catch (error) {
271
+ setResult(`Error: ${error instanceof Error ? error.message : 'Unknown error'}`);
272
+ } finally {
273
+ setLoading(false);
274
+ }
275
+ };
276
+
277
+ return (
278
+ <div className="space-y-4">
279
+ <div className="flex gap-2">
280
+ <input
281
+ value={prompt}
282
+ onChange={(e) => setPrompt(e.target.value)}
283
+ placeholder="Enter your prompt..."
284
+ className="flex-1 p-2 border rounded"
285
+ />
286
+ <button
287
+ onClick={generate}
288
+ disabled={loading}
289
+ className="px-4 py-2 bg-blue-500 text-white rounded disabled:opacity-50"
290
+ >
291
+ {loading ? 'Generating...' : 'Generate'}
292
+ </button>
293
+ </div>
294
+
295
+ {result && (
296
+ <div className="p-4 bg-gray-100 rounded">
297
+ {result}
298
+ </div>
299
+ )}
300
+ </div>
301
+ );
302
+ }
303
+ ```
304
+
305
+ ### Express.js
306
+
307
+ ```typescript
308
+ import express from 'express';
309
+ import { createBestAIProvider, AIProviderFactory } from '@juspay/neurolink';
310
+
311
+ const app = express();
312
+ app.use(express.json());
313
+
314
+ // Simple generation endpoint
315
+ app.post('/api/generate', async (req, res) => {
316
+ try {
317
+ const { prompt, options = {} } = req.body;
318
+
319
+ const provider = createBestAIProvider();
320
+ const result = await provider.generateText({
321
+ prompt,
322
+ ...options
323
+ });
324
+
325
+ res.json({
326
+ success: true,
327
+ text: result.text,
328
+ provider: result.provider
329
+ });
330
+ } catch (error) {
331
+ res.status(500).json({
332
+ success: false,
333
+ error: error instanceof Error ? error.message : 'Unknown error'
334
+ });
335
+ }
336
+ });
337
+
338
+ // Streaming endpoint
339
+ app.post('/api/stream', async (req, res) => {
340
+ try {
341
+ const { prompt } = req.body;
342
+
343
+ const provider = createBestAIProvider();
344
+ const result = await provider.streamText({ prompt });
345
+
346
+ res.setHeader('Content-Type', 'text/plain');
347
+ res.setHeader('Cache-Control', 'no-cache');
348
+
349
+ for await (const chunk of result.textStream) {
350
+ res.write(chunk);
351
+ }
352
+ res.end();
353
+ } catch (error) {
354
+ res.status(500).json({ error: error instanceof Error ? error.message : 'Unknown error' });
355
+ }
356
+ });
357
+
358
+ app.listen(3000, () => {
359
+ console.log('Server running on http://localhost:3000');
360
+ });
361
+ ```
362
+
363
+ ### React Hook
364
+
365
+ ```typescript
366
+ import { useState, useCallback } from 'react';
367
+
368
+ interface AIOptions {
369
+ temperature?: number;
370
+ maxTokens?: number;
371
+ provider?: string;
372
+ }
373
+
374
+ export function useAI() {
375
+ const [loading, setLoading] = useState(false);
376
+ const [error, setError] = useState<string | null>(null);
377
+
378
+ const generate = useCallback(async (
379
+ prompt: string,
380
+ options: AIOptions = {}
381
+ ) => {
382
+ setLoading(true);
383
+ setError(null);
384
+
385
+ try {
386
+ const response = await fetch('/api/ai', {
387
+ method: 'POST',
388
+ headers: { 'Content-Type': 'application/json' },
389
+ body: JSON.stringify({ prompt, ...options })
390
+ });
391
+
392
+ if (!response.ok) {
393
+ throw new Error(`Request failed: ${response.statusText}`);
394
+ }
395
+
396
+ const data = await response.json();
397
+ return data.text;
398
+ } catch (err) {
399
+ const message = err instanceof Error ? err.message : 'Unknown error';
400
+ setError(message);
401
+ return null;
402
+ } finally {
403
+ setLoading(false);
404
+ }
405
+ }, []);
406
+
407
+ return { generate, loading, error };
408
+ }
409
+
410
+ // Usage
411
+ function MyComponent() {
412
+ const { generate, loading, error } = useAI();
413
+
414
+ const handleClick = async () => {
415
+ const result = await generate("Explain React hooks", {
416
+ temperature: 0.7,
417
+ maxTokens: 500
418
+ });
419
+ console.log(result);
420
+ };
421
+
422
+ return (
423
+ <button onClick={handleClick} disabled={loading}>
424
+ {loading ? 'Generating...' : 'Generate'}
425
+ </button>
426
+ );
427
+ }
428
+ ```
429
+
430
+ ## API Reference
431
+
432
+ ### Core Functions
433
+
434
+ #### `createBestAIProvider(requestedProvider?, modelName?)`
435
+ Creates the best available AI provider based on environment configuration.
436
+
437
+ ```typescript
438
+ const provider = createBestAIProvider();
439
+ const openaiFirst = createBestAIProvider('openai'); // Prefer OpenAI
440
+ const bedrockClaude = createBestAIProvider('bedrock', 'claude-3-7-sonnet');
441
+ ```
442
+
443
+ #### `createAIProviderWithFallback(primary, fallback, modelName?)`
444
+ Creates a provider with automatic fallback.
445
+
446
+ ```typescript
447
+ const { primary, fallback } = createAIProviderWithFallback('bedrock', 'openai');
448
+
449
+ try {
450
+ const result = await primary.generateText({ prompt });
451
+ } catch {
452
+ const result = await fallback.generateText({ prompt });
453
+ }
454
+ ```
455
+
456
+ ### AIProviderFactory
457
+
458
+ #### `createProvider(providerName, modelName?)`
459
+ Creates a specific provider instance.
460
+
461
+ ```typescript
462
+ const openai = AIProviderFactory.createProvider('openai', 'gpt-4o');
463
+ const bedrock = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
464
+ const vertex = AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash');
465
+ ```
466
+
467
+ ### Provider Interface
468
+
469
+ All providers implement the same interface:
470
+
471
+ ```typescript
472
+ interface AIProvider {
473
+ generateText(options: GenerateTextOptions): Promise<GenerateTextResult>;
474
+ streamText(options: StreamTextOptions): Promise<StreamTextResult>;
475
+ }
476
+
477
+ interface GenerateTextOptions {
478
+ prompt: string;
479
+ temperature?: number;
480
+ maxTokens?: number;
481
+ systemPrompt?: string;
482
+ }
483
+
484
+ interface GenerateTextResult {
485
+ text: string;
486
+ provider: string;
487
+ model: string;
488
+ usage?: {
489
+ promptTokens: number;
490
+ completionTokens: number;
491
+ totalTokens: number;
492
+ };
493
+ }
494
+ ```
495
+
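Because every provider satisfies `AIProvider`, application code can be written once against the interface and exercised with a stub in tests. The sketch below re-declares the interfaces locally so it runs standalone; `summarize` and `mockProvider` are illustrative names, not NeuroLink exports:

```typescript
// Local copies of the interfaces above so this sketch is self-contained.
interface GenerateTextOptions {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
}

interface GenerateTextResult {
  text: string;
  provider: string;
  model: string;
}

interface AIProvider {
  generateText(options: GenerateTextOptions): Promise<GenerateTextResult>;
}

// Written once against the interface; works with OpenAI, Bedrock, Vertex, or a stub.
async function summarize(provider: AIProvider, text: string): Promise<string> {
  const result = await provider.generateText({
    prompt: `Summarize in one sentence: ${text}`,
    maxTokens: 100,
  });
  return result.text;
}

// Hypothetical stub provider for unit tests.
const mockProvider: AIProvider = {
  async generateText({ prompt }) {
    return { text: `[mock] ${prompt}`, provider: "mock", model: "none" };
  },
};

summarize(mockProvider, "TypeScript adds static types to JavaScript.").then(
  (summary) => console.log(summary),
);
```

The same `summarize` helper can later be handed a real provider from `createBestAIProvider()` without changes.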
496
+ ### Supported Models
497
+
498
+ #### OpenAI
499
+ - `gpt-4o` (default)
500
+ - `gpt-4o-mini`
501
+ - `gpt-4-turbo`
502
+
503
+ #### Amazon Bedrock
504
+ - `claude-3-7-sonnet` (default)
505
+ - `claude-3-5-sonnet`
506
+ - `claude-3-haiku`
507
+
508
+ #### Google Vertex AI
509
+ - `gemini-2.5-flash` (default)
510
+ - `claude-4.0-sonnet`
511
+
512
+ ## Provider Configuration
513
+
514
+ ### OpenAI Setup
515
+ ```bash
516
+ export OPENAI_API_KEY="sk-your-key-here"
517
+ ```
518
+
519
+ ### Amazon Bedrock Setup
520
+ ```bash
521
+ export AWS_ACCESS_KEY_ID="your-access-key"
522
+ export AWS_SECRET_ACCESS_KEY="your-secret-key"
523
+ export AWS_REGION="us-east-1"
524
+ ```
525
+
526
+ ### Google Vertex AI Setup
527
+ ```bash
528
+ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
529
+ export GOOGLE_VERTEX_PROJECT="your-project-id"
530
+ export GOOGLE_VERTEX_LOCATION="us-central1"
531
+ ```
532
+
533
+ ### Environment Variables Reference
534
+ ```bash
535
+ # Provider selection (optional)
536
+ AI_DEFAULT_PROVIDER="bedrock"
537
+ AI_FALLBACK_PROVIDER="openai"
538
+
539
+ # Debug mode
540
+ NEUROLINK_DEBUG="true"
541
+ ```
542
+
543
+ ## Advanced Patterns
544
+
545
+ ### Custom Configuration
546
+ ```typescript
547
+ import { AIProviderFactory } from '@juspay/neurolink';
548
+
549
+ // Environment-based provider selection
550
+ const isDev = process.env.NODE_ENV === 'development';
551
+ const provider = isDev
552
+ ? AIProviderFactory.createProvider('openai', 'gpt-4o-mini') // Cheaper for dev
553
+ : AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet'); // Production
554
+
555
+ // Multiple providers for different use cases
556
+ const providers = {
557
+ creative: AIProviderFactory.createProvider('openai', 'gpt-4o'),
558
+ analytical: AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet'),
559
+ fast: AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash')
560
+ };
561
+
562
+ async function generateCreativeContent(prompt: string) {
563
+ return await providers.creative.generateText({
564
+ prompt,
565
+ temperature: 0.9,
566
+ maxTokens: 2000
567
+ });
568
+ }
569
+ ```
570
+
571
+ ### Response Caching
572
+ ```typescript
573
+ const cache = new Map<string, { text: string; timestamp: number }>();
574
+ const CACHE_DURATION = 5 * 60 * 1000; // 5 minutes
575
+
576
+ async function cachedGenerate(prompt: string) {
577
+ const key = prompt.toLowerCase().trim();
578
+ const cached = cache.get(key);
579
+
580
+ if (cached && Date.now() - cached.timestamp < CACHE_DURATION) {
581
+ return { ...cached, fromCache: true };
582
+ }
583
+
584
+ const provider = createBestAIProvider();
585
+ const result = await provider.generateText({ prompt });
586
+
587
+ cache.set(key, { text: result.text, timestamp: Date.now() });
588
+ return { text: result.text, fromCache: false };
589
+ }
590
+ ```
591
+
592
+ ### Batch Processing
593
+ ```typescript
594
+ async function processBatch(prompts: string[]) {
595
+ const provider = createBestAIProvider();
596
+ const chunkSize = 5;
597
+ const results = [];
598
+
599
+ for (let i = 0; i < prompts.length; i += chunkSize) {
600
+ const chunk = prompts.slice(i, i + chunkSize);
601
+
602
+ const chunkResults = await Promise.allSettled(
603
+ chunk.map(prompt => provider.generateText({ prompt, maxTokens: 500 }))
604
+ );
605
+
606
+ results.push(...chunkResults);
607
+
608
+ // Rate limiting
609
+ if (i + chunkSize < prompts.length) {
610
+ await new Promise(resolve => setTimeout(resolve, 1000));
611
+ }
612
+ }
613
+
614
+ return results.map((result, index) => ({
615
+ prompt: prompts[index],
616
+ success: result.status === 'fulfilled',
617
+ result: result.status === 'fulfilled' ? result.value : result.reason
618
+ }));
619
+ }
620
+ ```
621
+
622
+ ## Error Handling
623
+
624
+ ### Troubleshooting Common Issues
625
+
626
+ #### AWS Credentials and Authorization
627
+ ```
628
+ ValidationException: Your account is not authorized to invoke this API operation.
629
+ ```
630
+ - **Cause**: The AWS account doesn't have access to Bedrock or the specific model
631
+ - **Solution**:
632
+ - Verify your AWS account has Bedrock enabled
633
+ - Check model availability in your AWS region
634
+ - Ensure your IAM role has `bedrock:InvokeModel` permissions
635
+
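For reference, a minimal IAM policy granting invoke access (including streaming) might look like the following; in production, scope the `Resource` ARN down to the specific models you use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}
```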
636
+ #### Missing or Invalid Credentials
637
+ ```
638
+ Error: Cannot find API key for OpenAI provider
639
+ ```
640
+ - **Cause**: The environment variable for API credentials is missing
641
+ - **Solution**: Set the appropriate environment variable (OPENAI_API_KEY, etc.)
642
+
643
+ #### Google Vertex Import Issues
644
+ ```
645
+ Cannot find package '@google-cloud/vertexai' imported from...
646
+ ```
647
+ - **Cause**: Missing Google Vertex AI peer dependency
648
+ - **Solution**: Install the package with `npm install @google-cloud/vertexai`
649
+
650
+ #### Session Token Expired
651
+ ```
652
+ The security token included in the request is expired
653
+ ```
654
+ - **Cause**: AWS session token has expired
655
+ - **Solution**: Generate new AWS credentials with a fresh session token
656
+
657
+ ### Comprehensive Error Handling
658
+ ```typescript
659
+ import { createBestAIProvider } from '@juspay/neurolink';
660
+
661
+ async function robustGenerate(prompt: string, maxRetries = 3) {
662
+ let attempt = 0;
663
+
664
+ while (attempt < maxRetries) {
665
+ try {
666
+ const provider = createBestAIProvider();
667
+ return await provider.generateText({ prompt });
668
+ } catch (error) {
669
+ attempt++;
670
+ const message = error instanceof Error ? error.message : String(error);
+ console.error(`Attempt ${attempt} failed:`, message);
671
+
672
+ if (attempt >= maxRetries) {
673
+ throw new Error(`Failed after ${maxRetries} attempts: ${message}`);
674
+ }
675
+
676
+ // Exponential backoff
677
+ await new Promise(resolve =>
678
+ setTimeout(resolve, Math.pow(2, attempt) * 1000)
679
+ );
680
+ }
681
+ }
682
+ }
683
+ ```
684
+
685
+ ### Provider Fallback
686
+ ```typescript
687
+ import { AIProviderFactory } from '@juspay/neurolink';
+
+ async function generateWithFallback(prompt: string) {
688
+ const providers = ['bedrock', 'openai', 'vertex'];
689
+
690
+ for (const providerName of providers) {
691
+ try {
692
+ const provider = AIProviderFactory.createProvider(providerName);
693
+ return await provider.generateText({ prompt });
694
+ } catch (error) {
695
+ const message = error instanceof Error ? error.message : String(error);
+ console.warn(`${providerName} failed:`, message);
696
+
697
+ if (message.includes('API key') || message.includes('credentials')) {
698
+ console.log(`${providerName} not configured, trying next...`);
699
+ continue;
700
+ }
701
+ }
702
+ }
703
+
704
+ throw new Error('All providers failed or are not configured');
705
+ }
706
+ ```
707
+
708
+ ### Common Error Types
709
+ ```typescript
710
+ // Provider not configured
711
+ if (error.message.includes('API key')) {
712
+ console.error('Provider API key not set');
713
+ }
714
+
715
+ // Rate limiting
716
+ if (error.message.includes('rate limit')) {
717
+ console.error('Rate limit exceeded, implement backoff');
718
+ }
719
+
720
+ // Model not available
721
+ if (error.message.includes('model')) {
722
+ console.error('Requested model not available');
723
+ }
724
+
725
+ // Network issues
726
+ if (error.message.includes('network') || error.message.includes('timeout')) {
727
+ console.error('Network connectivity issue');
728
+ }
729
+ ```
730
+
731
+ ## Performance
732
+
733
+ ### Optimization Tips
734
+
735
+ 1. **Choose the Right Model for the Use Case**
736
+ ```typescript
737
+ // Fast responses for simple tasks
738
+ const fast = AIProviderFactory.createProvider('vertex', 'gemini-2.5-flash');
739
+
740
+ // High quality for complex tasks
741
+ const quality = AIProviderFactory.createProvider('bedrock', 'claude-3-7-sonnet');
742
+
743
+ // Cost-effective for development
744
+ const dev = AIProviderFactory.createProvider('openai', 'gpt-4o-mini');
745
+ ```
746
+
747
+ 2. **Streaming for Long Responses**
748
+ ```typescript
749
+ // Use streaming for better UX on long content
750
+ const result = await provider.streamText({
751
+ prompt: "Write a detailed article...",
752
+ maxTokens: 2000
753
+ });
754
+ ```
755
+
756
+ 3. **Appropriate Token Limits**
757
+ ```typescript
758
+ // Set reasonable limits to control costs
759
+ const result = await provider.generateText({
760
+ prompt: "Summarize this text",
761
+ maxTokens: 150 // Just enough for a summary
762
+ });
763
+ ```
764
+
765
+ ### Provider Limits
766
+ - **OpenAI**: Rate limits based on tier (TPM/RPM)
767
+ - **Bedrock**: Regional quotas and model availability
768
+ - **Vertex AI**: Project-based quotas and rate limits
769
+
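To stay under these limits when firing many requests at once, a small concurrency cap helps. This is a generic utility sketch, not a NeuroLink API; the limit of 2 below is illustrative and should match your provider's quota:

```typescript
// Minimal concurrency limiter: at most `maxConcurrent` tasks run at once.
function createLimiter(maxConcurrent: number) {
  let active = 0;
  const waiters: Array<() => void> = [];

  const release = () => {
    const next = waiters.shift();
    if (next) next(); // hand the slot directly to a waiter; `active` is unchanged
    else active--;
  };

  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= maxConcurrent) {
      await new Promise<void>((resolve) => waiters.push(resolve));
    } else {
      active++;
    }
    try {
      return await task();
    } finally {
      release();
    }
  };
}

// Usage: wrap each provider call so bursts never exceed the cap.
const limit = createLimiter(2);
const prompts = ["a", "b", "c", "d"];
Promise.all(prompts.map((p) => limit(async () => `processed ${p}`))).then(
  (results) => console.log(results.length),
);
```

Note that the slot is handed directly from a finishing task to the next waiter, so the count never drifts above the cap even under bursty completion.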
770
+ ## Contributing
771
+
772
+ We welcome contributions! Here's how to get started:
773
+
774
+ ### Development Setup
775
+ ```bash
776
+ git clone https://github.com/juspay/neurolink
777
+ cd neurolink
778
+ pnpm install
779
+ ```
780
+
781
+ ### Running Tests
782
+ ```bash
783
+ pnpm test # Run all tests
784
+ pnpm test:watch # Watch mode
785
+ pnpm test:coverage # Coverage report
786
+ ```
787
+
788
+ ### Building
789
+ ```bash
790
+ pnpm build # Build the library
791
+ pnpm check # Type checking
792
+ pnpm lint # Lint code
793
+ ```
794
+
795
+ ### Guidelines
796
+ - Follow existing TypeScript patterns
797
+ - Add tests for new features
798
+ - Update documentation
799
+ - Ensure all providers work consistently
800
+
801
+ ## License
802
+
803
+ MIT © [Juspay Technologies](https://juspay.in)
804
+
805
+ ## Related Projects
806
+
807
+ - [Vercel AI SDK](https://github.com/vercel/ai) - Underlying provider implementations
808
+ - [SvelteKit](https://kit.svelte.dev) - Web framework
809
+ - [Lighthouse](https://github.com/juspay/lighthouse) - Original source project
810
+
811
+ ---
812
+
813
+ <p align="center">
814
+ <strong>Built with ❤️ by <a href="https://juspay.in">Juspay Technologies</a></strong>
815
+ </p>