@ahoo-wang/fetcher-openai 2.9.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +201 -0
- package/README.md +1223 -0
- package/README.zh-CN.md +1217 -0
- package/dist/chat/chatClient.d.ts +225 -0
- package/dist/chat/chatClient.d.ts.map +1 -0
- package/dist/chat/completionStreamResultExtractor.d.ts +67 -0
- package/dist/chat/completionStreamResultExtractor.d.ts.map +1 -0
- package/dist/chat/index.d.ts +4 -0
- package/dist/chat/index.d.ts.map +1 -0
- package/dist/chat/types.d.ts +114 -0
- package/dist/chat/types.d.ts.map +1 -0
- package/dist/index.d.ts +3 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.es.js +128 -0
- package/dist/index.es.js.map +1 -0
- package/dist/index.umd.js +2 -0
- package/dist/index.umd.js.map +1 -0
- package/dist/openai.d.ts +82 -0
- package/dist/openai.d.ts.map +1 -0
- package/package.json +80 -0
package/README.md
ADDED
@@ -0,0 +1,1223 @@
# @ahoo-wang/fetcher-openai

[npm version](https://www.npmjs.com/package/@ahoo-wang/fetcher-openai)
[Build Status](https://github.com/Ahoo-Wang/fetcher/actions)
[codecov](https://codecov.io/gh/Ahoo-Wang/fetcher)
[License](https://github.com/Ahoo-Wang/fetcher/blob/main/LICENSE)
[npm downloads](https://www.npmjs.com/package/@ahoo-wang/fetcher-openai)
[npm bundle size](https://www.npmjs.com/package/@ahoo-wang/fetcher-openai)
[Node.js](https://nodejs.org/)
[TypeScript](https://www.typescriptlang.org/)
[Ask DeepWiki](https://deepwiki.com/Ahoo-Wang/fetcher)

> 🚀 **Modern • Type-Safe • Streaming-Ready** - A comprehensive OpenAI client built on the Fetcher ecosystem

A modern, type-safe OpenAI client library built on the Fetcher ecosystem. Provides seamless integration with OpenAI's Chat Completions API, supporting both streaming and non-streaming responses with full TypeScript support and automatic request handling.

## ✨ Features

- 🔒 **Full TypeScript Support**: Complete type safety with strict typing and IntelliSense
- 📡 **Native Streaming**: Built-in support for server-sent event streams with automatic termination
- 🎯 **Declarative API**: Clean, readable code using decorator patterns
- 🔧 **Fetcher Ecosystem**: Built on the robust Fetcher HTTP client with advanced features
- 📦 **Optimized Bundle**: Full tree shaking support with minimal bundle size
- 🧪 **Comprehensive Testing**: 100% test coverage with Vitest
- 🔄 **Conditional Types**: Smart return types based on streaming configuration
- 🛡️ **Error Handling**: Robust error handling with detailed error messages
- ⚡ **Performance**: Optimized for both development and production environments
- 🔌 **Extensible**: Easy integration with custom interceptors and middleware

## 📦 Installation

### Prerequisites

- **Node.js**: >= 16.0.0
- **TypeScript**: >= 5.0 (recommended)

### Install with npm

```bash
npm install @ahoo-wang/fetcher-openai @ahoo-wang/fetcher @ahoo-wang/fetcher-decorator @ahoo-wang/fetcher-eventstream
```

### Install with yarn

```bash
yarn add @ahoo-wang/fetcher-openai @ahoo-wang/fetcher @ahoo-wang/fetcher-decorator @ahoo-wang/fetcher-eventstream
```

### Install with pnpm

```bash
pnpm add @ahoo-wang/fetcher-openai @ahoo-wang/fetcher @ahoo-wang/fetcher-decorator @ahoo-wang/fetcher-eventstream
```

### Peer Dependencies

This package requires the following peer dependencies:

- `@ahoo-wang/fetcher`: Core HTTP client functionality
- `@ahoo-wang/fetcher-decorator`: Declarative API decorators
- `@ahoo-wang/fetcher-eventstream`: Server-sent events support

These are automatically installed when using the commands above.
## 🚀 Quick Start

### Basic Setup

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

// Initialize the client with your API key
const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!, // Your OpenAI API key
});

// Create a simple chat completion
const response = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello, how are you?' },
  ],
  temperature: 0.7,
  max_tokens: 150,
});

console.log(response.choices[0].message.content);
// Output: "Hello! I'm doing well, thank you for asking. How can I help you today?"
```

### Environment Variables Setup

Create a `.env` file in your project root:

```bash
# .env
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_BASE_URL=https://api.openai.com/v1
```

Then use in your code:

```typescript
import { config } from 'dotenv';
config(); // Load environment variables

const openai = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL!,
  apiKey: process.env.OPENAI_API_KEY!,
});
```

## 📡 Streaming Examples

### Basic Streaming

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Create a streaming chat completion
const stream = await openai.chat.completions({
  model: 'gpt-4', // Use GPT-4 for better quality
  messages: [
    { role: 'system', content: 'You are a creative storyteller.' },
    {
      role: 'user',
      content: 'Tell me a short story about a robot learning to paint',
    },
  ],
  stream: true,
  temperature: 0.8, // Higher creativity
  max_tokens: 1000,
});

// Process the streaming response in real-time
let fullResponse = '';
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  if (content) {
    process.stdout.write(content); // Real-time output
    fullResponse += content;
  }
}

console.log('\n\n--- Stream Complete ---');
console.log('Total characters:', fullResponse.length);
```

### Advanced Streaming with Progress Tracking

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

const stream = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Write a haiku about programming' }],
  stream: true,
});

// Track streaming progress
let chunksReceived = 0;
let totalContent = '';

for await (const chunk of stream) {
  chunksReceived++;
  const content = chunk.choices[0]?.delta?.content || '';

  if (content) {
    totalContent += content;
    // Show progress every 5 chunks
    if (chunksReceived % 5 === 0) {
      console.log(
        `Received ${chunksReceived} chunks, ${totalContent.length} chars`,
      );
    }
  }

  // Check for completion
  if (chunk.choices[0]?.finish_reason) {
    console.log(`Stream finished: ${chunk.choices[0].finish_reason}`);
    break;
  }
}

console.log('Final content:', totalContent);
```

## 📚 API Reference

### OpenAI Class

The main client class that provides access to all OpenAI API features.

#### Constructor

```typescript
new OpenAI(options: OpenAIOptions)
```

Creates a new OpenAI client instance with the specified configuration.

**Parameters:**

| Parameter         | Type     | Required | Description                                                            |
| ----------------- | -------- | -------- | ---------------------------------------------------------------------- |
| `options.baseURL` | `string` | ✅       | The base URL for the OpenAI API (e.g., `'https://api.openai.com/v1'`) |
| `options.apiKey`  | `string` | ✅       | Your OpenAI API key for authentication                                 |

**Example:**

```typescript
const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: 'sk-your-api-key-here',
});
```

**Throws:**

- `TypeError`: If `apiKey` or `baseURL` are not provided or are not strings

#### Properties

| Property  | Type         | Description                                                         |
| --------- | ------------ | ------------------------------------------------------------------- |
| `fetcher` | `Fetcher`    | The underlying HTTP client instance configured with authentication  |
| `chat`    | `ChatClient` | Chat completion client for interacting with chat models             |

### ChatClient

Specialized client for OpenAI's Chat Completions API with support for both streaming and non-streaming responses.

#### Methods

##### `completions<T extends ChatRequest>(chatRequest: T)`

Creates a chat completion with conditional return types based on the streaming configuration.

**Type Parameters:**

- `T`: Extends `ChatRequest` - The request type that determines return type

**Parameters:**

- `chatRequest: T` - Chat completion request configuration

**Returns:**

- `Promise<ChatResponse>` when `T['stream']` is `false` or `undefined`
- `Promise<JsonServerSentEventStream<ChatResponse>>` when `T['stream']` is `true`

**Throws:**

- `Error`: Network errors, authentication failures, or API errors
- `EventStreamConvertError`: When streaming response cannot be processed
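
In practice the conditional return type means the compiler picks the right result shape from the request literal itself; a minimal sketch of the two call shapes (variable names are illustrative):

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Without a stream flag the documented type resolves to Promise<ChatResponse>
const completion = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hi!' }],
});
console.log(completion.choices[0].message.content);

// With stream: true it resolves to the event-stream variant instead
const stream = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hi!' }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```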

### Core Interfaces

#### ChatRequest

Configuration object for chat completion requests.

```typescript
interface ChatRequest {
  // Core parameters
  model?: string; // Model ID (e.g., 'gpt-3.5-turbo', 'gpt-4')
  messages: Message[]; // Conversation messages
  stream?: boolean; // Enable streaming responses

  // Generation parameters
  temperature?: number; // Sampling temperature (0.0 - 2.0)
  max_tokens?: number; // Maximum tokens to generate
  top_p?: number; // Nucleus sampling parameter (0.0 - 1.0)

  // Penalty parameters
  frequency_penalty?: number; // Repetition penalty (-2.0 - 2.0)
  presence_penalty?: number; // Topic diversity penalty (-2.0 - 2.0)

  // Advanced parameters
  n?: number; // Number of completions to generate
  stop?: string | string[]; // Stop sequences
  logit_bias?: Record<string, number>; // Token bias adjustments
  user?: string; // End-user identifier

  // Response format
  response_format?: object; // Response format specification

  // Function calling (beta)
  tools?: any[]; // Available tools/functions
  tool_choice?: any; // Tool selection strategy

  // Other OpenAI parameters
  [key: string]: any;
}
```
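
Most of these fields pass straight through to the underlying API; a sketch combining several of them (the `response_format` value follows OpenAI's `json_object` convention and assumes the target model supports it):

```typescript
const response = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'Reply with a JSON object.' },
    { role: 'user', content: 'List three primary colors.' },
  ],
  temperature: 0.2, // low randomness for structured output
  max_tokens: 200, // cap the completion length
  stop: ['\n\n'], // cut generation at a blank line
  response_format: { type: 'json_object' },
  user: 'user-1234', // end-user identifier for abuse monitoring
});

console.log(JSON.parse(response.choices[0].message.content));
```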

#### Message

Represents a single message in the conversation.

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant' | 'function';
  content: string;
  name?: string; // For function messages
  function_call?: any; // For function call results
}
```

#### ChatResponse

Response object for non-streaming chat completions.

```typescript
interface ChatResponse {
  id: string; // Unique response identifier
  object: string; // Object type (usually 'chat.completion')
  created: number; // Unix timestamp of creation
  model: string; // Model used for completion
  choices: Choice[]; // Array of completion choices
  usage: Usage; // Token usage statistics
}
```

#### Choice

Represents a single completion choice.

```typescript
interface Choice {
  index: number; // Choice index (0-based)
  message: Message; // The completion message
  finish_reason: string; // Reason completion stopped
}
```

#### Usage

Token usage statistics for the request.

```typescript
interface Usage {
  prompt_tokens: number; // Tokens in the prompt
  completion_tokens: number; // Tokens in the completion
  total_tokens: number; // Total tokens used
}
```
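
The `usage` block makes quick token accounting trivial on non-streaming calls; a minimal sketch, reusing the `response` from the Quick Start example:

```typescript
// Destructure the usage statistics from a completed response
const { prompt_tokens, completion_tokens, total_tokens } = response.usage;
console.log(
  `tokens - prompt: ${prompt_tokens}, completion: ${completion_tokens}, total: ${total_tokens}`,
);
```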

## ⚙️ Configuration

### Environment Variables

Set up your environment variables for easy configuration:

```bash
# .env
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_BASE_URL=https://api.openai.com/v1
```

Load them in your application:

```typescript
import { config } from 'dotenv';
config();

const openai = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL!,
  apiKey: process.env.OPENAI_API_KEY!,
});
```

### Custom Base URL

Use with OpenAI-compatible APIs, proxies, or custom deployments:

```typescript
// Using Azure OpenAI
const openai = new OpenAI({
  baseURL:
    'https://your-resource.openai.azure.com/openai/deployments/your-deployment',
  apiKey: 'your-azure-api-key',
});

// Using a proxy service
const openai = new OpenAI({
  baseURL: 'https://your-proxy-service.com/api/openai',
  apiKey: 'your-proxy-api-key',
});

// Using a local OpenAI-compatible server
const openai = new OpenAI({
  baseURL: 'http://localhost:8000/v1',
  apiKey: 'not-needed-for-local',
});
```

### Advanced Configuration

#### Custom HTTP Client Configuration

```typescript
import { Fetcher } from '@ahoo-wang/fetcher';

const customFetcher = new Fetcher({
  baseURL: 'https://api.openai.com/v1',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Custom-Header': 'value',
  },
  timeout: 30000, // 30 second timeout
  retry: {
    attempts: 3,
    delay: 1000,
  },
});

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Replace the default fetcher
openai.fetcher = customFetcher;
```

#### Request Interceptors

```typescript
import { Fetcher } from '@ahoo-wang/fetcher';

// Add request logging
const loggingFetcher = new Fetcher({
  baseURL: 'https://api.openai.com/v1',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  interceptors: [
    {
      request: config => {
        console.log('Making request to:', config.url);
        return config;
      },
      response: response => {
        console.log('Response status:', response.status);
        return response;
      },
    },
  ],
});

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

openai.fetcher = loggingFetcher;
```

## 🛡️ Error Handling

The library provides comprehensive error handling with detailed error messages and proper error types.

### Basic Error Handling

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

try {
  const response = await openai.chat.completions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello, world!' }],
  });

  console.log('Success:', response.choices[0].message.content);
} catch (error: any) {
  console.error('OpenAI API Error:', error.message);
  console.error('Error details:', error);
}
```

### Advanced Error Handling with Status Codes

```typescript
try {
  const response = await openai.chat.completions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error: any) {
  // Handle different error types
  if (error.response) {
    // API returned an error response
    const status = error.response.status;
    const data = error.response.data;

    switch (status) {
      case 401:
        console.error('Authentication failed - check your API key');
        break;
      case 429:
        console.error('Rate limit exceeded - implement backoff strategy');
        console.log('Retry after:', data.retry_after, 'seconds');
        break;
      case 400:
        console.error('Bad request - check your parameters');
        console.log('Error details:', data.error);
        break;
      case 500:
      case 502:
      case 503:
        console.error('OpenAI server error - retry with exponential backoff');
        break;
      default:
        console.error(`Unexpected error (${status}):`, data.error?.message);
    }
  } else if (error.request) {
    // Network error
    console.error('Network error - check your internet connection');
    console.error('Request details:', error.request);
  } else {
    // Other error
    console.error('Unexpected error:', error.message);
  }
}
```

### Streaming Error Handling

```typescript
try {
  const stream = await openai.chat.completions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true,
  });

  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content;
    if (content) {
      process.stdout.write(content);
    }
  }
} catch (error: any) {
  if (error.name === 'EventStreamConvertError') {
    console.error('Streaming error - response may not be a valid event stream');
  } else {
    console.error('Streaming failed:', error.message);
  }
}
```

### Retry Logic Implementation

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

class ResilientOpenAI {
  private client: OpenAI;
  private maxRetries: number;
  private baseDelay: number;

  constructor(apiKey: string, maxRetries = 3, baseDelay = 1000) {
    this.client = new OpenAI({
      baseURL: 'https://api.openai.com/v1',
      apiKey,
    });
    this.maxRetries = maxRetries;
    this.baseDelay = baseDelay;
  }

  private async delay(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  async completions(request: any, attempt = 1): Promise<any> {
    try {
      return await this.client.chat.completions(request);
    } catch (error: any) {
      const isRetryable =
        error.response?.status >= 500 ||
        error.response?.status === 429 ||
        !error.response; // Network errors

      if (isRetryable && attempt <= this.maxRetries) {
        const delay = this.baseDelay * Math.pow(2, attempt - 1); // Exponential backoff
        console.log(`Attempt ${attempt} failed, retrying in ${delay}ms...`);
        await this.delay(delay);
        return this.completions(request, attempt + 1);
      }

      throw error;
    }
  }
}

// Usage
const resilientClient = new ResilientOpenAI(process.env.OPENAI_API_KEY!);

try {
  const response = await resilientClient.completions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
  console.log(response.choices[0].message.content);
} catch (error: any) {
  console.error('All retry attempts failed:', error.message);
}
```

## 🔧 Advanced Usage

### Custom Fetcher Configuration

```typescript
import { Fetcher } from '@ahoo-wang/fetcher';

const customFetcher = new Fetcher({
  baseURL: 'https://api.openai.com/v1',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Custom-Header': 'value',
    'X-Custom-Client': 'my-app/1.0.0',
  },
  timeout: 30000, // 30 second timeout
  retry: {
    attempts: 3, // Retry failed requests
    delay: 1000, // Initial delay between retries
    backoff: 'exponential', // Exponential backoff strategy
  },
  interceptors: [
    {
      request: config => {
        console.log(`Making ${config.method} request to ${config.url}`);
        return config;
      },
      response: response => {
        console.log(`Response: ${response.status}`);
        return response;
      },
    },
  ],
});

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Replace the default fetcher
openai.fetcher = customFetcher;
```

### Function Calling (Beta)

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Define available functions
const functions = [
  {
    name: 'get_weather',
    description: 'Get the current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'The city and state, e.g. San Francisco, CA',
        },
      },
      required: ['location'],
    },
  },
];

// Make a request with function calling
const response = await openai.chat.completions({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: "What's the weather like in San Francisco?" },
  ],
  functions: functions,
  function_call: 'auto', // Let the model decide when to call functions
});

// Handle function calls
if (response.choices[0].message.function_call) {
  const functionCall = response.choices[0].message.function_call;
  console.log('Function called:', functionCall.name);
  console.log('Arguments:', JSON.parse(functionCall.arguments));
}
```
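
After running the function yourself, its result goes back to the model as a `function` role message so it can produce the final answer; a sketch of that second round trip (the `getWeather` helper is hypothetical):

```typescript
// Hypothetical local implementation of the declared function
async function getWeather(location: string): Promise<string> {
  return JSON.stringify({ location, forecast: 'sunny', temperature: 22 });
}

const functionCall = response.choices[0].message.function_call;
if (functionCall) {
  const args = JSON.parse(functionCall.arguments);
  const result = await getWeather(args.location);

  // Feed the function result back for a final, natural-language answer
  const followUp = await openai.chat.completions({
    model: 'gpt-4',
    messages: [
      { role: 'user', content: "What's the weather like in San Francisco?" },
      response.choices[0].message, // the assistant turn containing the call
      { role: 'function', name: functionCall.name, content: result },
    ],
    functions: functions,
  });

  console.log(followUp.choices[0].message.content);
}
```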

### Conversation Management

```typescript
import { OpenAI, type ChatRequest } from '@ahoo-wang/fetcher-openai';

class ChatConversation {
  private client: OpenAI;
  private messages: Array<{ role: string; content: string }> = [];

  constructor(apiKey: string) {
    this.client = new OpenAI({
      baseURL: 'https://api.openai.com/v1',
      apiKey,
    });
  }

  async addMessage(role: 'system' | 'user' | 'assistant', content: string) {
    this.messages.push({ role, content });
  }

  async sendMessage(content: string, options: Partial<ChatRequest> = {}) {
    await this.addMessage('user', content);

    const response = await this.client.chat.completions({
      model: 'gpt-3.5-turbo',
      messages: this.messages,
      ...options,
    });

    const assistantMessage = response.choices[0].message;
    await this.addMessage('assistant', assistantMessage.content);

    return assistantMessage;
  }

  getHistory() {
    return [...this.messages];
  }

  clearHistory() {
    this.messages = [];
  }
}

// Usage
const conversation = new ChatConversation(process.env.OPENAI_API_KEY!);

// Set system prompt
await conversation.addMessage('system', 'You are a helpful coding assistant.');

// Have a conversation
const response1 = await conversation.sendMessage('How do I use TypeScript?');
console.log('Assistant:', response1.content);

const response2 = await conversation.sendMessage('Can you show me an example?');
console.log('Assistant:', response2.content);
```

### Batch Processing

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

async function processBatch(prompts: string[], batchSize = 5) {
  const results = [];

  // Process in batches to avoid rate limits
  for (let i = 0; i < prompts.length; i += batchSize) {
    const batch = prompts.slice(i, i + batchSize);

    const batchPromises = batch.map(prompt =>
      openai.chat.completions({
        model: 'gpt-3.5-turbo',
        messages: [{ role: 'user', content: prompt }],
        temperature: 0.7,
      }),
    );

    try {
      const batchResults = await Promise.all(batchPromises);
      results.push(...batchResults);

      // Add delay between batches to respect rate limits
      if (i + batchSize < prompts.length) {
        await new Promise(resolve => setTimeout(resolve, 1000));
      }
    } catch (error) {
      console.error(`Batch ${Math.floor(i / batchSize) + 1} failed:`, error);
      // Continue with next batch or implement retry logic
    }
  }

  return results;
}

// Usage
const prompts = [
  'Explain quantum computing',
  'What is machine learning?',
  'How does blockchain work?',
  'Describe cloud computing',
];

const results = await processBatch(prompts);
results.forEach((result, index) => {
  console.log(`Prompt ${index + 1}:`, result.choices[0].message.content);
});
```

### Integration with Other Fetcher Features

Since this library is built on the Fetcher ecosystem, you can leverage all Fetcher features:

#### Request/Response Interceptors

```typescript
const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Add logging interceptor
openai.fetcher.interceptors.request.use(config => {
  console.log(`[${new Date().toISOString()}] ${config.method} ${config.url}`);
  return config;
});

openai.fetcher.interceptors.response.use(response => {
  console.log(`[${new Date().toISOString()}] Response: ${response.status}`);
  return response;
});
```

#### Custom Result Extractors

```typescript
import { ResultExtractor } from '@ahoo-wang/fetcher';

// Create a custom extractor that adds metadata
const metadataExtractor: ResultExtractor = exchange => {
  const response = exchange.response;
  return {
    ...response,
    _metadata: {
      requestId: response.headers.get('x-request-id'),
      processingTime: Date.now() - exchange.startTime,
      model: response.model,
    },
  };
};

// Use with chat completions
const response = await openai.chat.completions(
  {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
  },
  {
    resultExtractor: metadataExtractor,
  },
);
```

#### Request Deduplication

```typescript
// Enable request deduplication for identical requests
openai.fetcher.defaults.deduplicate = true;

// This will reuse the response from identical concurrent requests
const [response1, response2] = await Promise.all([
  openai.chat.completions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
  openai.chat.completions({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
]);
```

## 🔄 Migration Guide

### Migrating from OpenAI SDK

If you're migrating from the official OpenAI SDK:

```typescript
// Before (OpenAI SDK)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// After (Fetcher OpenAI)
import { OpenAI } from '@ahoo-wang/fetcher-openai';

const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

const response = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
});
```

### Key Differences

| Feature            | OpenAI SDK            | Fetcher OpenAI                      |
| ------------------ | --------------------- | ----------------------------------- |
| **Streaming**      | `for await (...)`     | `for await (...)` (same)            |
| **Error Handling** | Custom error types    | Standard JavaScript errors          |
| **Configuration**  | `new OpenAI(options)` | `new OpenAI(options)`               |
| **TypeScript**     | Full support          | Full support with conditional types |
| **Interceptors**   | Limited               | Full Fetcher interceptor support    |
| **Bundle Size**    | Larger                | Optimized with tree shaking         |

## 🐛 Troubleshooting

### Common Issues

#### Authentication Errors

**Problem:** `401 Unauthorized` error

**Solutions:**

```typescript
// 1. Check your API key
const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!, // Make sure this is set
});

// 2. Verify API key format
if (!process.env.OPENAI_API_KEY?.startsWith('sk-')) {
  throw new Error('Invalid API key format');
}
```

#### Streaming Not Working

**Problem:** Streaming responses not working as expected

**Solutions:**

```typescript
// 1. Ensure stream parameter is set to true
const stream = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true, // This is required for streaming
});

// 2. Handle the stream properly
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    process.stdout.write(content); // Use process.stdout.write for real-time output
  }
}
```

#### Rate Limiting

**Problem:** `429 Too Many Requests` error

**Solutions:**

```typescript
// Implement exponential backoff
async function completionsWithRetry(request: any, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await openai.chat.completions(request);
    } catch (error: any) {
      if (error.response?.status === 429 && attempt < maxRetries) {
        const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
```

#### Network Errors

**Problem:** Connection timeouts or network failures

**Solutions:**

```typescript
// Configure timeout and retry
const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Set custom timeout
openai.fetcher.defaults.timeout = 60000; // 60 seconds

// Add retry logic
openai.fetcher.defaults.retry = {
  attempts: 3,
  delay: 1000,
  backoff: 'exponential',
};
```

### Debug Mode

Enable debug logging to troubleshoot issues:

```typescript
// Enable request logging
openai.fetcher.interceptors.request.use(config => {
  console.log('Request:', {
    url: config.url,
    method: config.method,
    headers: config.headers,
    body: config.body,
  });
  return config;
});

openai.fetcher.interceptors.response.use(response => {
  console.log('Response:', {
    status: response.status,
    headers: response.headers,
    data: response.data,
  });
  return response;
});
```

## ⚡ Performance Tips

### Optimize Bundle Size

```typescript
// Only import what you need
import { OpenAI } from '@ahoo-wang/fetcher-openai';

// Avoid importing unused features
// ❌ Don't do this if you only need chat completions
// import * as OpenAI from '@ahoo-wang/fetcher-openai';
```

### Connection Pooling

For high-throughput applications:

```typescript
// Use HTTP/2 compatible clients for better performance
const openai = new OpenAI({
  baseURL: 'https://api.openai.com/v1',
  apiKey: process.env.OPENAI_API_KEY!,
});

// Note: Node flags such as --max-http-header-size must be set in
// NODE_OPTIONS before the process starts; assigning the variable at
// runtime only affects child processes spawned afterwards.
process.env.NODE_OPTIONS = '--max-http-header-size=81920';
```
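
If you run on Node.js 18+, the built-in `fetch` is backed by undici, so one concrete pooling lever is a global undici `Agent` with keep-alive connections; a sketch under that assumption:

```typescript
import { Agent, setGlobalDispatcher } from 'undici';

// Reuse up to 64 keep-alive connections per origin for all fetch calls
setGlobalDispatcher(
  new Agent({
    connections: 64,
    keepAliveTimeout: 30_000, // hold idle sockets open for 30s
  }),
);
```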

### Streaming Optimization

```typescript
// Process streaming responses efficiently
const stream = await openai.chat.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Long response here...' }],
  stream: true,
  max_tokens: 1000, // Limit tokens for faster responses
});

// Use efficient streaming processing
let buffer = '';
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || '';
  buffer += content;

  // Process in chunks rather than character by character
  if (buffer.length >= 100) {
    processChunk(buffer); // processChunk is your own handler
    buffer = '';
  }
}

// Flush whatever is left in the buffer once the stream ends
if (buffer) {
  processChunk(buffer);
}
```

### Caching Strategy

```typescript
import { OpenAI } from '@ahoo-wang/fetcher-openai';

// Implement response caching for similar requests
class CachedOpenAI {
  private cache = new Map<string, any>();
  private client: OpenAI;

  constructor(apiKey: string) {
    this.client = new OpenAI({
      baseURL: 'https://api.openai.com/v1',
      apiKey,
    });
  }

  private getCacheKey(request: any): string {
    return JSON.stringify({
      model: request.model,
      messages: request.messages,
      temperature: request.temperature,
    });
  }

  async completions(request: any, useCache = true) {
    const cacheKey = this.getCacheKey(request);

    if (useCache && this.cache.has(cacheKey)) {
      return this.cache.get(cacheKey);
    }

    const response = await this.client.chat.completions(request);

    if (useCache) {
      this.cache.set(cacheKey, response);
    }

    return response;
  }
}
```
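
Usage mirrors the plain client; note the cache key only covers `model`, `messages`, and `temperature`, so extend `getCacheKey` if other parameters matter for your workload:

```typescript
const cached = new CachedOpenAI(process.env.OPENAI_API_KEY!);

// The first call hits the API; the identical second call is served from cache
const first = await cached.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Define memoization in one line.' }],
});
const second = await cached.completions({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Define memoization in one line.' }],
});
console.log(first === second); // true - the exact same cached object
```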

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](../../CONTRIBUTING.md) for details.

### Development Setup

```bash
# Clone the repository
git clone https://github.com/Ahoo-Wang/fetcher.git
cd fetcher

# Install dependencies
pnpm install

# Run tests for this package
pnpm --filter @ahoo-wang/fetcher-openai test

# Build the package
pnpm --filter @ahoo-wang/fetcher-openai build
```

### Code Style

This project uses ESLint and Prettier for code formatting. Please ensure your code follows the established patterns:

```bash
# Lint the code
pnpm --filter @ahoo-wang/fetcher-openai lint

# Format code
pnpm format
```

## 📄 License

Licensed under the Apache License, Version 2.0. See [LICENSE](../../LICENSE) for details.

## 🔗 Related Packages

- [**@ahoo-wang/fetcher**](https://github.com/Ahoo-Wang/fetcher/tree/master/packages/fetcher) - Core HTTP client with advanced features
- [**@ahoo-wang/fetcher-decorator**](https://github.com/Ahoo-Wang/fetcher/tree/master/packages/fetcher-decorator) - Declarative API decorators for type-safe requests
- [**@ahoo-wang/fetcher-eventstream**](https://github.com/Ahoo-Wang/fetcher/tree/master/packages/fetcher-eventstream) - Server-sent events support for real-time streaming
- [**@ahoo-wang/fetcher-openapi**](https://github.com/Ahoo-Wang/fetcher/tree/master/packages/openapi) - OpenAPI specification client generation

## 📊 Project Status

[npm version](https://www.npmjs.com/package/@ahoo-wang/fetcher-openai)
[Build Status](https://github.com/Ahoo-Wang/fetcher/actions)
[codecov](https://codecov.io/gh/Ahoo-Wang/fetcher)
[License](https://github.com/Ahoo-Wang/fetcher/blob/main/LICENSE)

---

<p align="center">
  <strong>Built with ❤️ using the Fetcher ecosystem</strong>
</p>

<p align="center">
  <a href="https://github.com/Ahoo-Wang/fetcher">GitHub</a> •
  <a href="https://www.npmjs.com/package/@ahoo-wang/fetcher-openai">NPM</a> •
  <a href="https://deepwiki.com/Ahoo-Wang/fetcher">Documentation</a>
</p>