react-native-ai-hooks 0.3.0 → 0.5.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/workflows/ci.yml +34 -0
- package/CONTRIBUTING.md +122 -0
- package/README.md +73 -20
- package/docs/ARCHITECTURE.md +301 -0
- package/docs/ARCHITECTURE_GUIDE.md +467 -0
- package/docs/IMPLEMENTATION_COMPLETE.md +349 -0
- package/docs/README.md +17 -0
- package/docs/TECHNICAL_SPECIFICATION.md +748 -0
- package/example/App.tsx +95 -0
- package/example/README.md +27 -0
- package/example/index.js +5 -0
- package/example/package.json +22 -0
- package/example/src/components/ProviderPicker.tsx +62 -0
- package/example/src/context/APIKeysContext.tsx +96 -0
- package/example/src/screens/ChatScreen.tsx +205 -0
- package/example/src/screens/SettingsScreen.tsx +124 -0
- package/example/tsconfig.json +7 -0
- package/jest.config.cjs +7 -0
- package/jest.setup.ts +28 -0
- package/package.json +17 -3
- package/src/hooks/__tests__/useAIForm.test.ts +345 -0
- package/src/hooks/__tests__/useAIStream.test.ts +427 -0
- package/src/hooks/useAIChat.ts +111 -51
- package/src/hooks/useAICode.ts +8 -0
- package/src/hooks/useAIForm.ts +92 -202
- package/src/hooks/useAIStream.ts +114 -58
- package/src/hooks/useAISummarize.ts +8 -0
- package/src/hooks/useAITranslate.ts +9 -0
- package/src/hooks/useAIVoice.ts +8 -0
- package/src/hooks/useImageAnalysis.ts +134 -79
- package/src/index.ts +25 -1
- package/src/types/index.ts +178 -4
- package/src/utils/__tests__/fetchWithRetry.test.ts +168 -0
- package/src/utils/__tests__/providerFactory.test.ts +493 -0
- package/src/utils/fetchWithRetry.ts +100 -0
- package/src/utils/index.ts +8 -0
- package/src/utils/providerFactory.ts +288 -0

package/.github/workflows/ci.yml
ADDED

@@ -0,0 +1,34 @@
+name: CI
+
+on:
+  push:
+    branches:
+      - main
+      - master
+  pull_request:
+    branches:
+      - main
+      - master
+  workflow_dispatch:
+
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    env:
+      NODE_ENV: test
+
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: 22
+          cache: npm
+
+      - name: Install dependencies
+        run: npm ci --include=dev
+
+      - name: Run Jest tests
+        run: npm test -- --config jest.config.cjs --ci
package/CONTRIBUTING.md
ADDED

@@ -0,0 +1,122 @@
+# Contributing to react-native-ai-hooks
+
+Thank you for your interest in contributing to react-native-ai-hooks.
+
+We are building a practical, provider-agnostic AI hooks toolkit for React Native, and contributions from the community are a big part of making it better. Whether you are fixing a bug, improving docs, adding a new hook, or integrating a new provider, your help is welcome.
+
+## Ways to Contribute
+
+- Report bugs and edge cases
+- Suggest new features or API improvements
+- Improve examples and documentation
+- Add support for new AI providers (for example Groq, Perplexity, or Mistral)
+- Create new hooks for common AI workflows
+- Add or improve tests
+
+## Reporting Bugs
+
+Please report bugs through GitHub Issues:
+
+- https://github.com/nikapkh/react-native-ai-hooks/issues
+
+Before opening an issue:
+
+- Check existing issues to avoid duplicates
+- Confirm the problem on the latest version if possible
+
+When opening a bug report, include:
+
+- A clear title and short summary
+- Steps to reproduce
+- Expected behavior
+- Actual behavior
+- Environment details (OS, React Native version, package version)
+- Minimal code snippet or repo that reproduces the problem
+- Relevant logs, stack traces, or screenshots
+
+## Development Setup
+
+1. Fork the repository on GitHub.
+2. Clone your fork.
+3. Install dependencies:
+
+```bash
+npm install
+```
+
+4. Run tests:
+
+```bash
+npm test
+```
+
+## Git Workflow
+
+Use this workflow for all contributions:
+
+1. Fork repository
+2. Create branch
+3. Commit changes
+4. Push branch
+5. Open pull request
+
+Example commands:
+
+```bash
+git checkout -b feat/add-mistral-provider
+# make changes
+git add .
+git commit -m "feat: add mistral provider adapter"
+git push origin feat/add-mistral-provider
+```
+
+## Pull Request Guidelines
+
+- Keep PRs focused and small when possible
+- Write clear commit messages
+- Describe the problem and solution in the PR description
+- Link related issues when relevant
+- Update docs when behavior or API changes
+
+All pull requests must pass existing tests before they can be merged.
+
+## Testing
+
+We use Jest for testing.
+
+- Test command: `npm test`
+- New functionality should include tests where practical
+- Bug fixes should include a regression test when possible
+
+Relevant test setup files:
+
+- [jest.config.cjs](jest.config.cjs)
+- [jest.setup.ts](jest.setup.ts)
+
+## Adding a New Provider
+
+If you want to add a provider such as Groq, Perplexity, or Mistral, a typical path is:
+
+- Add provider types and interfaces in [src/types/index.ts](src/types/index.ts)
+- Add request/response adapter logic in [src/utils/providerFactory.ts](src/utils/providerFactory.ts)
+- Ensure resilience behavior remains compatible with [src/utils/fetchWithRetry.ts](src/utils/fetchWithRetry.ts)
+- Add tests in [src/utils/__tests__](src/utils/__tests__)
+- Update docs in [README.md](README.md) and [docs/README.md](docs/README.md)
+
+## Adding a New Hook
+
+If you want to add a new hook:
+
+- Create the hook in [src/hooks](src/hooks)
+- Export it from [src/index.ts](src/index.ts)
+- Add or update relevant types in [src/types/index.ts](src/types/index.ts)
+- Add tests and docs for the new behavior
+
+## Communication and Conduct
+
+Please be respectful and constructive in issues and pull requests. Thoughtful feedback and collaboration help everyone ship better software.
+
+## Thank You
+
+Thanks again for helping improve react-native-ai-hooks.
+Your contributions make the library more reliable, more useful, and more accessible for the entire community.
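To make the "Adding a New Provider" checklist above concrete, here is a hedged sketch of the normalization step a new adapter would implement. The names below (`MistralRawResponse`, `normalizeMistralResponse`, `NormalizedAIResponse`) are invented for illustration and are not the package's actual API; the real types live in src/types/index.ts and the real adapter logic in src/utils/providerFactory.ts.

```typescript
// Hypothetical sketch only: a new provider adapter normalizing its raw
// response into a unified { text, raw, usage } shape. All names here are
// assumptions for illustration, not exports of react-native-ai-hooks.
interface NormalizedAIResponse {
  text: string;
  raw: unknown;
  usage?: { inputTokens?: number; outputTokens?: number; totalTokens?: number };
}

interface MistralRawResponse {
  choices: { message: { content: string } }[];
  usage?: { prompt_tokens?: number; completion_tokens?: number; total_tokens?: number };
}

function normalizeMistralResponse(raw: MistralRawResponse): NormalizedAIResponse {
  return {
    // Empty string fallback keeps the unified shape total even for odd payloads
    text: raw.choices[0]?.message.content ?? '',
    raw,
    usage: {
      inputTokens: raw.usage?.prompt_tokens,
      outputTokens: raw.usage?.completion_tokens,
      totalTokens: raw.usage?.total_tokens,
    },
  };
}
```

A unit test for such an adapter would assert that the normalized `text` and `usage` fields round-trip correctly from a fixture payload.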
package/README.md
CHANGED

@@ -5,40 +5,93 @@
 
 # react-native-ai-hooks
 
-AI
+Build AI features in React Native without rebuilding the same plumbing every sprint.
 
-
+One hooks-first API for Claude, OpenAI, and Gemini.
 
-
-npm install react-native-ai-hooks
-```
+## Why use this?
 
-
+Most teams burn time on the same AI integration work:
+
+- Provider-specific request/response wiring
+- Retry, timeout, and error edge cases
+- Streaming token parsing
+- State handling for loading, cancellation, and failures
 
-
-
-
-
+This library gives you that foundation out of the box so you can ship product features, not infra glue.
+
+| What you want | What this gives you |
+|---|---|
+| Ship chat quickly | Drop-in hooks with minimal setup |
+| Avoid provider lock-in | Unified interface across providers |
+| Handle real-world failures | Built-in retries, backoff, timeout, abort |
+| Keep code clean | Strong TypeScript types and predictable APIs |
 
 ## Quick Start
 
 ```tsx
+// npm install react-native-ai-hooks
+
 import { useAIChat } from 'react-native-ai-hooks';
 
-
-
-
+export function Assistant() {
+  const { messages, sendMessage, isLoading, error } = useAIChat({
+    provider: 'anthropic',
+    apiKey: process.env.EXPO_PUBLIC_AI_KEY ?? '',
+    model: 'claude-sonnet-4-20250514',
+  });
+
+  // Example action
+  async function onAsk() {
+    await sendMessage('Draft a warm onboarding message for new users.');
+  }
+
+  return null;
+}
+```
+
+## Hooks
+
+- 💬 useAIChat: multi-turn conversation
+- ⚡ useAIStream: token streaming
+- 👁️ useImageAnalysis: image and vision workflows
+- 📝 useAIForm: AI-assisted form validation
+- 🎙️ useAIVoice: speech-to-text plus AI response
+- 🌍 useAITranslate: real-time translation
+- 📄 useAISummarize: concise text summaries
+- 🧠 useAICode: generate and explain code
+
+## Provider Support
+
+| Provider | Chat | Stream | Vision | Voice |
+|---|---|---|---|---|
+| Anthropic Claude | ✅ | ✅ | ✅ | ✅ |
+| OpenAI | ✅ | ✅ | ✅ | ✅ |
+| Gemini | ✅ | ✅ | 🔜 | 🔜 |
+
+## Security
+
+Use a backend proxy in production. Do not ship permanent provider API keys in app binaries.
+
+```tsx
+const { sendMessage } = useAIChat({
+  baseUrl: 'https://your-backend.com/api/ai',
 });
 ```
 
-##
+## Example App
+
+See [example](./example) for a full app with provider switching, API key settings, and streaming chat.
+
+## Deep Technical Docs
+
+Detailed architecture and implementation references now live in [docs](./docs):
 
-
-
-
-
-| Gemini | 🔜 |
+- [Architecture Guide](./docs/ARCHITECTURE_GUIDE.md)
+- [Technical Specification](./docs/TECHNICAL_SPECIFICATION.md)
+- [Implementation Summary](./docs/IMPLEMENTATION_COMPLETE.md)
+- [Internal Architecture Notes](./docs/ARCHITECTURE.md)
 
 ## License
 
-MIT
+MIT © [nikapkh](https://github.com/nikapkh)

package/docs/ARCHITECTURE.md
ADDED

@@ -0,0 +1,301 @@
+/**
+ * React Native AI Hooks - Production Architecture
+ *
+ * This file documents the complete internal architecture of the react-native-ai-hooks
+ * library, designed for type-safety, multi-provider support, and optimal performance.
+ */
+
+/**
+ * CORE ARCHITECTURE PRINCIPLES
+ * ============================
+ *
+ * 1. Provider Abstraction Layer
+ *    - All API calls go through ProviderFactory
+ *    - Supports Anthropic, OpenAI, Gemini with uniform interface
+ *    - Easy to extend with new providers
+ *
+ * 2. Unified Response Normalization
+ *    - Every provider returns standardized AIResponse object
+ *    - Includes text content, raw response, and token usage
+ *    - Enables seamless provider switching
+ *
+ * 3. Resilience & Retry Logic
+ *    - fetchWithRetry handles exponential backoff
+ *    - Automatic rate-limit (429) handling with Retry-After header
+ *    - Timeout support using AbortController
+ *    - Configurable max retries and delays
+ *
+ * 4. Performance Optimization
+ *    - useMemo for provider config to prevent recreations
+ *    - useCallback for all callback functions
+ *    - Proper cleanup for abort controllers and timers
+ *    - Minimal re-renders through optimized dependencies
+ *
+ * 5. Error Handling Consistency
+ *    - All hooks follow same error pattern
+ *    - Errors caught and stored in hook state
+ *    - Abort errors handled gracefully (no-op vs throw)
+ *
+ *
+ * PROVIDER FACTORY ARCHITECTURE
+ * =============================
+ *
+ * The ProviderFactory class (src/utils/providerFactory.ts) is the central hub
+ * for all API communications. It:
+ *
+ * - Normalizes request/response formats across providers
+ * - Handles authentication (API keys, OAuth for different providers)
+ * - Manages baseUrl configuration for proxy/backend integration
+ * - Applies consistent rate-limit and timeout handling
+ *
+ * Usage:
+ *   const provider = createProvider({
+ *     provider: 'anthropic',
+ *     apiKey: 'your-key',
+ *     model: 'claude-sonnet-4-20250514',
+ *     baseUrl: 'https://your-proxy.com', // Optional
+ *     timeout: 30000,
+ *     maxRetries: 3
+ *   });
+ *
+ *   const response = await provider.makeRequest({
+ *     prompt: 'Hello, world!',
+ *     options: { temperature: 0.7, maxTokens: 1024 },
+ *     context: [] // Previous messages
+ *   });
+ *
+ * Response Structure:
+ *   {
+ *     text: string,  // The AI response
+ *     raw: object,   // Raw provider response
+ *     usage: {
+ *       inputTokens?: number,
+ *       outputTokens?: number,
+ *       totalTokens?: number
+ *     }
+ *   }
+ *
+ *
+ * FETCH WITH RETRY UTILITY
+ * ========================
+ *
+ * The fetchWithRetry function (src/utils/fetchWithRetry.ts) wraps fetch with:
+ *
+ * - Exponential backoff: baseDelay * (backoffMultiplier ^ attempt)
+ * - Max delay cap: prevents excessive wait times
+ * - Rate limit handling: respects Retry-After header (429 status)
+ * - Timeout support: AbortController with configurable timeout
+ * - Server error retries: automatic retry on 5xx errors
+ *
+ * Configuration:
+ *   {
+ *     maxRetries: 3,        // Total attempts
+ *     baseDelay: 1000,      // Initial delay (ms)
+ *     maxDelay: 10000,      // Cap delay (ms)
+ *     timeout: 30000,       // Per-request timeout (ms)
+ *     backoffMultiplier: 2  // Exponential backoff factor
+ *   }
+ *
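As an aside on the retry configuration above, the documented delay formula (baseDelay scaled by the backoff multiplier per attempt, capped at maxDelay) can be sketched as a small pure function. This is an illustrative sketch of the formula only, not the package's actual fetchWithRetry code; the function name `backoffDelay` is an assumption.

```typescript
// Sketch of the documented backoff schedule:
// delay = min(baseDelay * backoffMultiplier ^ attempt, maxDelay).
// Illustration only; backoffDelay is not part of the package API.
interface BackoffConfig {
  baseDelay: number;         // Initial delay (ms)
  maxDelay: number;          // Cap delay (ms)
  backoffMultiplier: number; // Exponential backoff factor
}

function backoffDelay(attempt: number, cfg: BackoffConfig): number {
  // attempt is 0-based, so the first retry waits exactly baseDelay
  const raw = cfg.baseDelay * Math.pow(cfg.backoffMultiplier, attempt);
  return Math.min(raw, cfg.maxDelay);
}

const cfg: BackoffConfig = { baseDelay: 1000, maxDelay: 10000, backoffMultiplier: 2 };
// Attempts 0..4 with the defaults above yield 1000, 2000, 4000, 8000, 10000 (capped)
```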
+ *
+ * HOOK ARCHITECTURE
+ * =================
+ *
+ * All hooks follow a consistent pattern:
+ *
+ * 1. useAIChat - Multi-turn conversations
+ *    - Manages message history
+ *    - Auto-includes system prompt and context
+ *    - Returns messages array + send/abort/clear functions
+ *
+ * 2. useAIStream - Real-time token streaming
+ *    - Streams responses token-by-token
+ *    - Handles both Anthropic and OpenAI stream formats
+ *    - Supports abort and cleanup
+ *
+ * 3. useAIForm - Form validation against AI schema
+ *    - Validates entire form at once
+ *    - Parses AI response into errors object
+ *    - Returns FormValidationResult with isValid flag
+ *
+ * 4. useImageAnalysis - Vision model integration
+ *    - Accepts URI or base64 image
+ *    - Supports Anthropic and OpenAI vision models
+ *    - Auto-converts URIs to base64
+ *
+ * 5. useAITranslate - Real-time translation
+ *    - Auto-detects source language
+ *    - Supports configurable target language
+ *    - Debounced auto-translate option
+ *
+ * 6. useAISummarize - Text summarization
+ *    - Adjustable summary length (short/medium/long)
+ *    - Maintains text accuracy and fidelity
+ *
+ * 7. useAICode - Code generation and explanation
+ *    - Generate code in any language
+ *    - Explain existing code with focus options
+ *
+ *
+ * TYPE DEFINITIONS
+ * ================
+ *
+ * Core types (src/types/index.ts):
+ *
+ * - Message: Single message object with role, content, timestamp
+ * - AIProviderType: Union of 'anthropic' | 'openai' | 'gemini'
+ * - ProviderConfig: Configuration for creating providers
+ * - AIResponse: Normalized response structure
+ * - AIRequestOptions: Parameters for AI requests
+ * - UseAI*Options: Hook configuration interfaces
+ * - UseAI*Return: Hook return type interfaces
+ * - FormValidationRequest/Result: Form validation types
+ * - *Response: Provider-specific response interfaces
+ *
+ *
+ * MULTI-PROVIDER SUPPORT
+ * ======================
+ *
+ * Supported Providers:
+ *
+ * Provider    | Base URL                           | Auth Header
+ * ------------|------------------------------------|-----------------------
+ * Anthropic   | api.anthropic.com/v1/messages      | x-api-key
+ * OpenAI      | api.openai.com/v1/chat/completions | Authorization: Bearer
+ * Gemini      | generativelanguage.googleapis.com  | Key in URL param
+ *
+ * To use a different provider:
+ *   const { sendMessage } = useAIChat({
+ *     apiKey: 'your-key',
+ *     provider: 'openai',  // ← Change provider
+ *     model: 'gpt-4'       // ← Use provider-specific model
+ *   });
+ *
+ * The router automatically selects the matching endpoint and auth method.
+ *
+ *
+ * SECURITY BEST PRACTICES
+ * =======================
+ *
+ * 1. API Key Management
+ *    - Store keys in environment variables, never hardcode
+ *    - Consider passing through a backend proxy (baseUrl option)
+ *
+ * 2. Backend Proxy Pattern
+ *    - Set baseUrl to your backend endpoint
+ *    - Backend validates and authenticates requests
+ *    - Example: https://my-api.com/ai (then /v1/messages appended)
+ *
+ * 3. Rate Limiting
+ *    - All providers have rate limits
+ *    - fetchWithRetry handles 429 responses automatically
+ *    - Implement client-side throttling for high-volume apps
+ *
+ * 4. Timeout Configuration
+ *    - Default: 30 seconds per request
+ *    - Adjust based on model complexity and network
+ *    - Lower the timeout for real-time UX requirements
+ *
+ *
+ * PERFORMANCE TUNING
+ * ==================
+ *
+ * 1. Hook Dependencies
+ *    - Memoized provider configs via useMemo
+ *    - Wrapped callbacks with useCallback
+ *    - Deps lists carefully curated to prevent recreations
+ *
+ * 2. Message Management
+ *    - Store message history in component state
+ *    - Consider pagination for large conversations
+ *    - useCallback for sendMessage prevents parent re-renders
+ *
+ * 3. Streaming Performance
+ *    - Streaming in useAIStream is incremental
+ *    - Response state updates are batched by React
+ *    - Large responses stream smoothly token-by-token
+ *
+ * 4. Image Analysis
+ *    - Image conversion to base64 happens asynchronously
+ *    - Large images may take time to convert
+ *    - Consider client-side file size limits
+ *
+ *
+ * EXTENDING THE LIBRARY
+ * =====================
+ *
+ * To add a new AI provider:
+ *
+ * 1. Add the provider type to the AIProviderType union
+ * 2. Implement a makeXyzRequest method in ProviderFactory
+ * 3. Implement a normalizeXyzResponse method
+ * 4. Add a default model to DEFAULT_MODEL_MAP in the hooks
+ * 5. Test with all hook types
+ *
+ * To add a new hook:
+ *
+ * 1. Define a UseAIXyzOptions interface in types
+ * 2. Define a UseAIXyzReturn interface in types
+ * 3. Create src/hooks/useAIXyz.ts
+ * 4. Use ProviderFactory for all API calls
+ * 5. Follow the same error/loading/cleanup patterns
+ * 6. Export from src/index.ts
+ *
+ *
+ * ERROR HANDLING PATTERNS
+ * =======================
+ *
+ * All hooks follow this pattern:
+ *
+ *   try {
+ *     // API call via ProviderFactory
+ *   } catch (err) {
+ *     if (isMountedRef.current) {
+ *       setError(err.message);
+ *     }
+ *   } finally {
+ *     if (isMountedRef.current) {
+ *       setIsLoading(false);
+ *     }
+ *   }
+ *
+ * The isMountedRef prevents state updates on unmounted components.
+ *
+ *
+ * STREAMING IMPLEMENTATION
+ * ========================
+ *
+ * Streaming works by parsing newline-delimited JSON from response.body:
+ *
+ * Anthropic format:
+ *   data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"hello"}}
+ *
+ * OpenAI format:
+ *   data: {"choices":[{"delta":{"content":"hello"}}]}
+ *
+ * Both formats are handled in useAIStream with provider-specific parsing.
+ *
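The two `data:` line formats documented above can be exercised with a small stand-alone parsing sketch. This is an illustration built only from the example lines shown, not the package's actual useAIStream parser; the function name `extractToken` is an assumption.

```typescript
// Illustrative parser for the two documented stream line formats.
// Not the library's useAIStream code; extractToken is an assumed name.
function extractToken(line: string, provider: 'anthropic' | 'openai'): string | null {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length);
  try {
    const json = JSON.parse(payload);
    if (provider === 'anthropic') {
      // {"type":"content_block_delta","delta":{"type":"text_delta","text":"hello"}}
      return json.type === 'content_block_delta' ? json.delta?.text ?? null : null;
    }
    // {"choices":[{"delta":{"content":"hello"}}]}
    return json.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // Ignore non-JSON lines such as "data: [DONE]"
  }
}
```

In a real stream the response body would be decoded, split on newlines, and each line fed through a parser like this, appending non-null tokens to state.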
+ *
+ * TESTING STRATEGY
+ * ================
+ *
+ * Unit tests should verify:
+ * - Provider factory normalization for each provider
+ * - Retry logic with mock fetch
+ * - Hook state management (loading, error, data)
+ * - Callback cleanup on unmount
+ * - JSON parsing in form validation
+ *
+ * Integration tests should verify:
+ * - Multi-turn conversation flow
+ * - Image analysis with different mime types
+ * - Form validation with complex schemas
+ * - Streaming response handling
+ *
+ * E2E tests should verify:
+ * - Real API calls with live keys
+ * - Provider switching with real credentials
+ * - Rate limit retry behavior
+ * - Error recovery workflows
+ */
+
+export {};
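Pulling together the retry rules the architecture notes describe (429 handled via Retry-After, automatic retries on 5xx, exponential backoff with a cap, a bounded number of attempts), the decision logic can be sketched as one pure function. This is an assumption-laden illustration, not the package's fetchWithRetry implementation; `shouldRetry` and its signature are invented for this sketch.

```typescript
// Illustrative retry decision combining the documented rules:
// - 429: retry, preferring the server's Retry-After hint when present
// - 5xx: retry with exponential backoff capped at maxDelay
// - other 4xx: do not retry; stop entirely after maxRetries attempts.
// shouldRetry is an assumed name, not the package API.
interface RetryConfig {
  maxRetries: number;
  baseDelay: number;
  maxDelay: number;
  backoffMultiplier: number;
}

function shouldRetry(
  status: number,
  attempt: number, // 0-based attempt index
  cfg: RetryConfig,
  retryAfterSeconds?: number,
): number | null {
  if (attempt >= cfg.maxRetries) return null; // Out of attempts
  if (status === 429) {
    // Respect the Retry-After header when the server sends one
    if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  } else if (status < 500) {
    return null; // 4xx other than 429 is not retryable
  }
  const backoff = cfg.baseDelay * Math.pow(cfg.backoffMultiplier, attempt);
  return Math.min(backoff, cfg.maxDelay);
}
```

A caller would sleep for the returned number of milliseconds before the next attempt, and surface the error once `null` comes back.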