react-native-ai-hooks 0.3.0 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. package/.github/workflows/ci.yml +34 -0
  2. package/CONTRIBUTING.md +122 -0
  3. package/README.md +73 -20
  4. package/docs/ARCHITECTURE.md +301 -0
  5. package/docs/ARCHITECTURE_GUIDE.md +467 -0
  6. package/docs/IMPLEMENTATION_COMPLETE.md +349 -0
  7. package/docs/README.md +17 -0
  8. package/docs/TECHNICAL_SPECIFICATION.md +748 -0
  9. package/example/App.tsx +95 -0
  10. package/example/README.md +27 -0
  11. package/example/index.js +5 -0
  12. package/example/package.json +22 -0
  13. package/example/src/components/ProviderPicker.tsx +62 -0
  14. package/example/src/context/APIKeysContext.tsx +96 -0
  15. package/example/src/screens/ChatScreen.tsx +205 -0
  16. package/example/src/screens/SettingsScreen.tsx +124 -0
  17. package/example/tsconfig.json +7 -0
  18. package/jest.config.cjs +7 -0
  19. package/jest.setup.ts +28 -0
  20. package/package.json +17 -3
  21. package/src/hooks/__tests__/useAIForm.test.ts +345 -0
  22. package/src/hooks/__tests__/useAIStream.test.ts +427 -0
  23. package/src/hooks/useAIChat.ts +111 -51
  24. package/src/hooks/useAICode.ts +8 -0
  25. package/src/hooks/useAIForm.ts +92 -202
  26. package/src/hooks/useAIStream.ts +114 -58
  27. package/src/hooks/useAISummarize.ts +8 -0
  28. package/src/hooks/useAITranslate.ts +9 -0
  29. package/src/hooks/useAIVoice.ts +8 -0
  30. package/src/hooks/useImageAnalysis.ts +134 -79
  31. package/src/index.ts +25 -1
  32. package/src/types/index.ts +178 -4
  33. package/src/utils/__tests__/fetchWithRetry.test.ts +168 -0
  34. package/src/utils/__tests__/providerFactory.test.ts +493 -0
  35. package/src/utils/fetchWithRetry.ts +100 -0
  36. package/src/utils/index.ts +8 -0
  37. package/src/utils/providerFactory.ts +288 -0
package/docs/TECHNICAL_SPECIFICATION.md
@@ -0,0 +1,748 @@
# Technical Specification: react-native-ai-hooks v1.0

## 1. System Design Overview

### 1.1 Architecture Layers

```
┌─────────────────────────────────────────┐
│ React Native Application                │
├─────────────────────────────────────────┤
│ useAIChat / useAIStream / useAIForm /   │
│ useImageAnalysis / ... (8 hooks)        │
├─────────────────────────────────────────┤
│ ProviderFactory (Unified Interface)     │
├─────────────────────────────────────────┤
│ fetchWithRetry (Resilience Layer)       │
├─────────────────────────────────────────┤
│ fetch() / HTTP Network Layer            │
├─────────────────────────────────────────┤
│ Anthropic / OpenAI / Gemini APIs        │
└─────────────────────────────────────────┘
```

### 1.2 Request Flow Diagram

```
User Action (sendMessage)
        ↓
Hook captures input + state
        ↓
ProviderFactory.makeRequest()
        ↓
Validate input (prompt non-empty, apiKey present)
        ↓
Build provider-specific request body
        ↓
fetchWithRetry(url, options)
        ↓
AbortController timeout setup
        ↓
Fetch attempt #1
  ├─ Success → Normalize response → Return AIResponse
  ├─ 429 Rate Limit → Calculate backoff → Retry
  ├─ 5xx Server Error → Wait → Retry
  ├─ Timeout → Backoff → Retry
  └─ Max retries exceeded → Throw error
        ↓
Hook catches error (stored in state)
        ↓
Component re-render with error + results
```

---

## 2. Core Components Specification

### 2.1 ProviderFactory Class

**Interface:**
```typescript
class ProviderFactory {
  constructor(config: ProviderConfig);
  makeRequest(request: ProviderRequest): Promise<AIResponse>;

  // Private provider-specific methods
  private makeAnthropicRequest(request: ProviderRequest): Promise<AIResponse>;
  private makeOpenAIRequest(request: ProviderRequest): Promise<AIResponse>;
  private makeGeminiRequest(request: ProviderRequest): Promise<AIResponse>;

  // Private normalizers
  private normalizeAnthropicResponse(data: AnthropicResponse): AIResponse;
  private normalizeOpenAIResponse(data: OpenAIResponse): AIResponse;
  private normalizeGeminiResponse(data: GeminiResponse): AIResponse;
}
```

**Request Flow (makeRequest):**
1. Route by `config.provider` to the specific handler
2. Build provider-specific request body
3. Call `fetchWithRetry` with provider headers
4. Parse raw response
5. Call provider-specific normalizer
6. Return standardized `AIResponse`

**Response Normalization:**
```
Provider Response (varying format)
        ↓
Extract text content (provider-specific path)
        ↓
Extract token usage (if available)
        ↓
Return standardized { text, raw, usage }
```

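As a concrete sketch of the normalization step for the Anthropic path — the interfaces below are trimmed-down local shapes for illustration, not the full API types:

```typescript
interface AIUsage { inputTokens?: number; outputTokens?: number; totalTokens?: number }
interface AIResponse { text: string; raw: Record<string, any>; usage?: AIUsage }

interface AnthropicContentBlock { type: string; text?: string }
interface AnthropicResponse {
  content: AnthropicContentBlock[];
  usage?: { input_tokens?: number; output_tokens?: number };
}

// Pull the text blocks out of Anthropic's content array and map the
// snake_case token counts onto the shared usage shape.
function normalizeAnthropicResponse(data: AnthropicResponse): AIResponse {
  const text = data.content
    .filter((block) => block.type === 'text')
    .map((block) => block.text ?? '')
    .join('');
  return {
    text,
    raw: data as Record<string, any>,
    usage: data.usage
      ? { inputTokens: data.usage.input_tokens, outputTokens: data.usage.output_tokens }
      : undefined,
  };
}
```

The OpenAI and Gemini normalizers follow the same pattern with their own extraction paths (see §3).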
### 2.2 fetchWithRetry Function

**Retry Strategy:**
```
┌──────────────────────────────────────────┐
│ Fetch attempt with AbortController       │
├──────────────────────────────────────────┤
│ Response OK?  YES → Return response      │
│               NO  → Check error type     │
├──────────────────────────────────────────┤
│ Error Type Analysis:                     │
│  ├─ 429 (Rate Limit) → Calculate wait    │
│  ├─ 5xx (Server Error) → Backoff retry   │
│  ├─ Timeout → Backoff retry              │
│  └─ Other (4xx, etc.) → Maybe retry      │
├──────────────────────────────────────────┤
│ Retry Count < Max?                       │
│  YES → Sleep(delay) → Retry              │
│  NO  → Throw final error                 │
└──────────────────────────────────────────┘
```

**Backoff Calculation:**
- Initial: `baseDelay` (default 1000ms)
- After attempt N: `baseDelay * (multiplier ^ N)`
- Capped at: `maxDelay` (default 10000ms)
- Special case: 429 responses check the `Retry-After` header first

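The rule above can be sketched as a small pure function; the parameter names are assumptions for illustration, not the library's exported signature:

```typescript
interface BackoffOptions { baseDelay?: number; multiplier?: number; maxDelay?: number }

// attempt 0 → baseDelay, attempt N → baseDelay * multiplier^N, capped at maxDelay.
function backoffDelay(attempt: number, opts: BackoffOptions = {}): number {
  const { baseDelay = 1000, multiplier = 2, maxDelay = 10000 } = opts;
  return Math.min(baseDelay * Math.pow(multiplier, attempt), maxDelay);
}
```

A real implementation would first consult the `Retry-After` header on 429 responses, as noted above, and fall back to this exponential schedule.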
### 2.3 Hook Pattern (All 8 Hooks)

**Common Structure:**
```typescript
export function useAI*(options: UseAI*Options): UseAI*Return {
  // 1. State hooks (data, loading, error)
  const [data, setData] = useState(...);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);

  // 2. Refs for lifecycle management
  const isMountedRef = useRef(true);

  // 3. Memoized provider config
  const providerConfig = useMemo(() => ({...}), [deps]);
  const provider = useMemo(() => createProvider(providerConfig), [providerConfig]);

  // 4. Callback functions (memoized)
  const primaryAction = useCallback(async (...) => {
    // Try → update state → catch → finally
  }, [deps]);

  // 5. Cleanup (effect with unmount teardown)
  useEffect(() => {
    isMountedRef.current = true;
    return () => { isMountedRef.current = false; };
  }, []);

  // 6. Return public API
  return { data, isLoading, error, primaryAction, ... };
}
```

**Error Safety Pattern:**
```typescript
try {
  const result = await provider.makeRequest(params);
  if (isMountedRef.current) setData(result); // Guard
} catch (err) {
  if (isMountedRef.current) setError(err.message); // Guard
} finally {
  if (isMountedRef.current) setIsLoading(false); // Guard
}
```

---

## 3. Provider Integration Details

### 3.1 Anthropic (Claude)

**Endpoint:** `https://api.anthropic.com/v1/messages`
**Auth:** Header `x-api-key: {apiKey}`
**Request Body:**
```json
{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 1024,
  "temperature": 0.7,
  "system": "optional system prompt",
  "messages": [
    {"role": "user", "content": "prompt"},
    {"role": "assistant", "content": "response"},
    ...
  ]
}
```

**Response Structure:**
```json
{
  "content": [
    {"type": "text", "text": "response text"}
  ],
  "usage": {
    "input_tokens": 10,
    "output_tokens": 20
  }
}
```

**Streaming Format:**
```
event: content_block_delta
data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"hello"}}

event: content_block_delta
data: {"type":"content_block_delta","delta":{"type":"text_delta","text":" world"}}

event: message_stop
data: {"type":"message_stop"}
```

Note that Anthropic's SSE stream uses named events and terminates with a `message_stop` event; the `data: [DONE]` sentinel used by OpenAI does not apply here.

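The request body above can be assembled from the shared `Message`/`AIRequestOptions` shapes in §4.1; a sketch, with the helper name `buildAnthropicBody` being illustrative rather than the library's actual API:

```typescript
interface Message { role: 'user' | 'assistant'; content: string; timestamp?: number }
interface AIRequestOptions { system?: string; temperature?: number; maxTokens?: number }

// Map the shared message/options shapes onto Anthropic's wire format,
// stripping client-only fields (timestamp) and omitting unset options.
function buildAnthropicBody(model: string, messages: Message[], opts: AIRequestOptions = {}) {
  const body: Record<string, unknown> = {
    model,
    max_tokens: opts.maxTokens ?? 1024,
    messages: messages.map(({ role, content }) => ({ role, content })),
  };
  if (opts.temperature !== undefined) body.temperature = opts.temperature;
  if (opts.system) body.system = opts.system;
  return body;
}
```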
### 3.2 OpenAI (GPT)

**Endpoint:** `https://api.openai.com/v1/chat/completions`
**Auth:** Header `Authorization: Bearer {apiKey}`
**Request Body:**
```json
{
  "model": "gpt-4",
  "max_tokens": 1024,
  "temperature": 0.7,
  "messages": [
    {"role": "user", "content": "prompt"},
    ...
  ]
}
```

**Response Structure:**
```json
{
  "choices": [
    {"message": {"content": "response text"}}
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

**Streaming Format:**
```
data: {"choices":[{"delta":{"content":"hello"}}]}
data: {"choices":[{"delta":{"content":" world"}}]}
data: [DONE]
```

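A client consuming this stream splits the body into lines and feeds each `data:` line to a small parser that extracts the text delta; a hedged sketch of that step (function name is illustrative):

```typescript
// Parse one line of OpenAI's chat-completions SSE stream and return the
// text delta it carries, or null for the [DONE] sentinel and non-data lines.
function parseOpenAIStreamLine(line: string): string | null {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length).trim();
  if (payload === '[DONE]') return null;
  const parsed = JSON.parse(payload);
  return parsed.choices?.[0]?.delta?.content ?? null;
}
```

`useAIStream` would accumulate the non-null results into its `response` string as chunks arrive.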
### 3.3 Google Gemini

**Endpoint:** `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={apiKey}`
**Auth:** URL parameter `key`
**Request Body:**
```json
{
  "contents": [
    {
      "role": "user",
      "parts": [{"text": "prompt"}]
    }
  ],
  "generationConfig": {
    "maxOutputTokens": 1024,
    "temperature": 0.7
  }
}
```

**Response Structure:**
```json
{
  "candidates": [
    {
      "content": {
        "parts": [
          {"text": "response text"}
        ]
      }
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 20
  }
}
```

---

## 4. Type System Specification

### 4.1 Core Types

```typescript
// Provider identification
type AIProviderType = 'anthropic' | 'openai' | 'gemini';

// Configuration
interface ProviderConfig {
  provider: AIProviderType;
  apiKey: string;
  model: string;
  baseUrl?: string;     // For proxy routing
  timeout?: number;     // ms, default 30000
  maxRetries?: number;  // default 3
}

// Standardized response
interface AIResponse {
  text: string;               // The AI's response
  raw: Record<string, any>;   // Provider's original response
  usage?: {
    inputTokens?: number;
    outputTokens?: number;
    totalTokens?: number;
  };
  error?: string;             // Error message if failed
}

// Request options
interface AIRequestOptions {
  system?: string;           // System prompt
  temperature?: number;      // 0.0-2.0
  maxTokens?: number;        // Output limit
  topP?: number;             // Nucleus sampling
  stopSequences?: string[];  // Stop tokens
}

// Message protocol
interface Message {
  role: 'user' | 'assistant';
  content: string;
  timestamp?: number;  // Unix ms
}
```

### 4.2 Hook-Specific Types

```typescript
// useAIChat
interface UseAIChatReturn {
  messages: Message[];
  isLoading: boolean;
  error: string | null;
  sendMessage: (content: string) => Promise<void>;
  abort: () => void;  // Cancel in-flight request
  clearMessages: () => void;
}

// useAIStream
interface UseAIStreamReturn {
  response: string;  // Accumulated response
  isLoading: boolean;
  error: string | null;
  streamResponse: (prompt: string) => Promise<void>;
  abort: () => void;
  clearResponse: () => void;
}

// useAIForm
interface FormValidationRequest {
  formData: Record<string, unknown>;
  validationSchema?: Record<string, string>;
  customInstructions?: string;
}

interface FormValidationResult {
  isValid: boolean;
  errors: Record<string, string>;  // Field → error message
  raw: unknown;                    // AI's raw response
}

interface UseAIFormReturn {
  validationResult: FormValidationResult | null;
  isLoading: boolean;
  error: string | null;
  validateForm: (input: FormValidationRequest) => Promise<FormValidationResult | null>;
  clearValidation: () => void;
}
```

---

## 5. State Management Patterns

### 5.1 Loading State Transitions

```
Initial
   ↓
User action (sendMessage)
   ↓
setIsLoading(true)
   ↓
Make API request via ProviderFactory
   ↓
Response received / Error thrown
   ↓
setIsLoading(false)
setData(result) OR setError(message)
   ↓
Component re-renders with new state
```

### 5.2 Error Lifecycle

```
No error (null)
   ↓
API request fails (caught in try/catch)
   ↓
setError(err.message)
   ↓
User sees error UI
   ↓
clearMessages() / clearValidation() / etc.
   ↓
Back to null
```

### 5.3 Message History (useAIChat specific)

```
Initial: messages = []
   ↓
User: sendMessage("Hello")
   ↓
Add to state: messages = [{role: 'user', content: 'Hello', timestamp}]
   ↓
API responds
   ↓
Add to state: messages = [..., {role: 'assistant', content: 'Hi!', timestamp}]
   ↓
Next turn reuses messages array as context
```

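The append step above can be sketched as a pure helper (name is illustrative) that returns a fresh array, which is what lets React detect the state change by reference:

```typescript
interface Message { role: 'user' | 'assistant'; content: string; timestamp?: number }

// Pure helper: returns a new array rather than mutating, so a
// setMessages(appendMessage(...)) call always triggers a re-render.
function appendMessage(history: Message[], role: Message['role'], content: string): Message[] {
  return [...history, { role, content, timestamp: Date.now() }];
}
```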
---

## 6. Performance Characteristics

### 6.1 Hook Initialization

| Phase | Time | Notes |
|-------|------|-------|
| Component mount | 0ms | setState calls queued |
| useMemo (config) | <1ms | Runs during render |
| useMemo (provider) | <1ms | Runs during render |
| Return from hook | 1ms | Ready for use |

### 6.2 First Request

| Phase | Time | Notes |
|-------|------|-------|
| User calls sendMessage | 0ms | Sync |
| Hook updates state | 0ms | Sync (queued) |
| Provider.makeRequest | 0ms | Sync (builds request) |
| Network request | 100-300ms | Provider latency |
| Response parsing | 1-10ms | JSON parse |
| State update | 0ms | Sync (queued) |
| React re-render | 5-20ms | Component update |

### 6.3 Retry Overhead (Worst Case)

| Attempt | Delay | Total | Notes |
|---------|-------|-------|-------|
| 1 | 0ms | 0ms | First try |
| 2 | 1s | 1s | After first fail |
| 3 | 2s | 3s | After second fail |
| 4 | 4s | 7s | After third fail |
| Throw | - | 7s | Max total |

---

## 7. Security Considerations

### 7.1 API Key Protection

**❌ Unsafe:**
```typescript
const apiKey = 'sk-abc123...'; // Hardcoded
export { useAIChat };
// Key exposed in client code
```

**✅ Safe:**
```typescript
// Pattern 1: Environment variable
const apiKey = process.env.EXPO_PUBLIC_CLAUDE_API_KEY!;

// Pattern 2: Backend proxy (recommended)
const { sendMessage } = useAIChat({
  baseUrl: 'https://my-backend.com/api/ai',
  apiKey: 'client-token' // Short-lived
});
// Backend:
// 1. Validates client token
// 2. Adds real API key from env
// 3. Makes call to provider
// 4. Returns response to client
```

### 7.2 Rate Limiting Protection

**Automatic:**
- `fetchWithRetry` handles 429 responses
- Respects the `Retry-After` header
- Exponential backoff keeps the client from flooding the provider

**Manual (app level):**
- Per-user token buckets
- Time-window rate limiters
- Backend request throttling

### 7.3 Timeout Protection

**Request Level:**
```typescript
const hook = useAIChat({
  timeout: 30000 // 30-second max per request
});
```

**Application Level:**
- `AbortController` used internally
- Automatic cleanup on component unmount
- No memory leaked by pending requests

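The internal timeout wiring can be sketched as a small factory (names are illustrative): it aborts the request after `ms`, and exposes a `cancel` that the hook can call from its unmount cleanup:

```typescript
// Create a signal that aborts after `ms`; callers (e.g. a hook's effect
// cleanup or abort()) can also cancel early, which clears the timer.
function createTimeoutSignal(ms: number): { signal: AbortSignal; cancel: () => void } {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  return {
    signal: controller.signal,
    cancel: () => {
      clearTimeout(timer);   // no dangling timer after unmount
      controller.abort();    // abandon the in-flight fetch
    },
  };
}
```

The `signal` would be passed to `fetch(url, { signal })` so both timeout and unmount cancel the same request.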
---

## 8. Error Recovery Mechanisms

### 8.1 Network Error Recovery

```
Network Error
   ↓ (caught in try/catch)
Increment attempt counter
   ↓ (attempt < maxRetries)
Calculate backoff delay
   ↓
Sleep for delay
   ↓
Retry fetch
   ├─ Success → Update state with data
   └─ Max retries exceeded → Store error in state
```

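The loop above can be sketched with the fetch and sleep functions injected, so the retry behavior is testable without a network. This is an illustrative skeleton following the rules in §2.2 and §8.2, not the library's actual `fetchWithRetry` source:

```typescript
type FetchLike = () => Promise<{ ok: boolean; status: number }>;

async function retryLoop(
  doFetch: FetchLike,
  maxRetries = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
) {
  let lastError: unknown = null;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    // Backoff before every retry (attempt 1 → 1s, 2 → 2s, 3 → 4s, capped).
    if (attempt > 0) await sleep(Math.min(1000 * 2 ** (attempt - 1), 10000));
    let res;
    try {
      res = await doFetch();
    } catch (err) {
      lastError = err;       // network error / timeout → retry
      continue;
    }
    if (res.ok) return res;
    if (res.status !== 429 && res.status < 500) {
      throw new Error(`HTTP ${res.status}`); // plain 4xx → fail fast, no retry
    }
    lastError = new Error(`HTTP ${res.status}`); // 429 / 5xx → retry
  }
  throw lastError instanceof Error ? lastError : new Error('request failed');
}
```

With `maxRetries = 3` this makes at most 4 attempts and waits 1s + 2s + 4s = 7s in the worst case, matching the table in §6.3.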
### 8.2 Provider Error Recovery

| Error | Response | User Action |
|-------|----------|-------------|
| 400 (Bad Request) | Not retried | Fix request |
| 401 (Unauthorized) | Not retried | Check API key |
| 429 (Rate Limited) | Retried w/ backoff | Wait or upgrade plan |
| 500 (Server Error) | Retried automatically | Retry after delay |
| Timeout | Retried w/ backoff | Check network |

---

## 9. Extension Architecture

### 9.1 Adding Provider X

```typescript
// 1. Add to the type union
export type AIProviderType = 'anthropic' | 'openai' | 'gemini' | 'provider-x';

// 2. Implement in ProviderFactory
class ProviderFactory {
  private async makeProviderXRequest(request: ProviderRequest): Promise<AIResponse> {
    // Build request with Provider X's spec
    // Call fetchWithRetry
    // Normalize response
  }

  private normalizeProviderXResponse(data: ProviderXResponse): AIResponse {
    // Extract text from provider-specific format
    // Extract usage if available
    // Return AIResponse
  }
}

// 3. Add to makeRequest routing
async makeRequest(request: ProviderRequest): Promise<AIResponse> {
  switch (this.config.provider) {
    case 'provider-x':
      return this.makeProviderXRequest(request);
    // ...
  }
}

// 4. Add a default model to the hooks
const DEFAULT_MODEL_MAP = {
  // ...
  'provider-x': 'provider-x-default-model'
};

// 5. Test all 8 hooks with the new provider
```

### 9.2 Adding Hook Y

```typescript
// 1. Define types
interface UseAIYOptions {
  apiKey: string;
  provider?: AIProviderType;
  model?: string;
  // ... hook-specific options
}

interface UseAIYReturn {
  // ... hook-specific state
  performAction: (input: YInput) => Promise<YOutput>;
  clearState: () => void;
}

// 2. Implement the hook
export function useAIY(options: UseAIYOptions): UseAIYReturn {
  // Follow the standard hook pattern
  const provider = useMemo(() => createProvider(config), [config]);
  const performAction = useCallback(..., [deps]);
  // ... etc
}

// 3. Export from index.ts
export { useAIY } from './hooks/useAIY';
```

---

## 10. Deployment Architecture

### 10.1 Local Development

```
React Native App (Expo)
   ↓
[API key in .env.local]
   ↓
useAIChat({ apiKey: process.env.EXPO_PUBLIC_CLAUDE_API_KEY })
   ↓
Direct request to anthropic.com
```

### 10.2 Production with Proxy

```
React Native App (Production)
   ↓
[No API key stored]
   ↓
useAIChat({ baseUrl: 'https://api.company.com/ai' })
   ↓
Backend API Proxy (request validation)
   ↓
Backend adds API key from secure storage
   ↓
Request to anthropic.com / openai.com / etc.
   ↓
Response back through proxy
   ↓
Client receives normalized response
```

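The proxy's key-injection step can be sketched as a pure transform; the endpoint and header names follow §3.1, and everything else (token check, request shape) is an illustrative assumption about the backend:

```typescript
interface ClientRequest { path: string; body: unknown; clientToken: string }

// Validate the short-lived client token, then forward the body to the real
// provider endpoint with the server-side API key attached.
function buildUpstreamRequest(
  req: ClientRequest,
  serverApiKey: string,
  validTokens: Set<string>,
) {
  if (!validTokens.has(req.clientToken)) throw new Error('invalid client token');
  return {
    url: `https://api.anthropic.com${req.path}`,
    headers: {
      'content-type': 'application/json',
      'x-api-key': serverApiKey, // real key never leaves the backend
    },
    body: JSON.stringify(req.body),
  };
}
```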
### 10.3 Multi-Region Deployment

```
useAIChat({ baseUrl: process.env.REACT_APP_API_ENDPOINT })

env = production-us → https://api-us.company.com
env = production-eu → https://api-eu.company.com
env = production-ap → https://api-ap.company.com

[Same client code, different backends]
```

---

## 11. Testing Strategy

### 11.1 Unit Tests

```typescript
// Provider factory normalization
expect(normalizeAnthropicResponse(rawResponse)).toEqual({
  text: 'hello',
  raw: rawResponse,
  usage: {...}
});

// Retry logic
expect(retryCount).toBe(3);
expect(lastDelay).toBeCloseTo(4000, -3); // within ±500ms

// Hook state management
expect(hook.isLoading).toBe(false);
expect(hook.messages).toHaveLength(2);
expect(hook.error).toBe(null);
```

### 11.2 Integration Tests

```typescript
// Multi-turn conversation
await sendMessage('Hello');
expect(messages).toHaveLength(2);
await sendMessage('How are you?');
expect(messages).toHaveLength(4);

// Provider switching
const chatWithOpenAI = useAIChat({ provider: 'openai' });
const chatWithClaude = useAIChat({ provider: 'anthropic' });
// Both should work with the same API
```

### 11.3 E2E Tests

```typescript
// Real API call with credentials
const { messages } = useAIChat({ apiKey: realKey });
await sendMessage('test');
expect(messages[1].content).toMatch(/test|hello|hi/i);

// Rate-limit retry behavior
// (Send 10+ requests in quick succession)
// Expect all to eventually succeed via retries
```

---

## 12. Version & Compatibility

| Component | Min Version | Notes |
|-----------|-------------|-------|
| React | 18.0.0 | Hooks required |
| React Native | 0.70.0 | Fetch API required |
| Node.js | 14.0.0 | For builds |
| TypeScript | 4.5.0 | For types |

---

**Specification Version:** 1.0
**Last Updated:** April 13, 2026
**Status:** Complete and Production Ready