@ai-sdk/openai-compatible 2.0.21 → 2.0.22

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/docs/index.mdx ADDED
---
title: OpenAI Compatible Providers
description: Use OpenAI compatible providers with the AI SDK.
---

# OpenAI Compatible Providers

You can use the [OpenAI Compatible Provider](https://www.npmjs.com/package/@ai-sdk/openai-compatible) package to use language model providers that implement the OpenAI API.

Below we focus on the general setup and provider instance creation. You can also [write a custom provider package leveraging the OpenAI Compatible package](/providers/openai-compatible-providers/custom-providers).

We provide detailed documentation for the following OpenAI compatible providers:

- [LM Studio](/providers/openai-compatible-providers/lmstudio)
- [NIM](/providers/openai-compatible-providers/nim)
- [Heroku](/providers/openai-compatible-providers/heroku)
- [Clarifai](/providers/openai-compatible-providers/clarifai)

The general setup and provider instance creation are the same for all of these providers.

## Setup

The OpenAI Compatible provider is available via the `@ai-sdk/openai-compatible` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @ai-sdk/openai-compatible" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/openai-compatible" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/openai-compatible" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @ai-sdk/openai-compatible" dark />
  </Tab>
</Tabs>

## Provider Instance

To use an OpenAI compatible provider, you can create a custom provider instance with the `createOpenAICompatible` function from `@ai-sdk/openai-compatible`:

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
  includeUsage: true, // Include usage information in streaming responses
});
```

You can use the following optional settings to customize the provider instance:

- **baseURL** _string_

  Set the URL prefix for API calls.

- **apiKey** _string_

  API key for authenticating requests. If specified, adds an `Authorization`
  header to request headers with the value `Bearer <apiKey>`. This header is added
  before any headers specified in the `headers` option.

- **headers** _Record&lt;string,string&gt;_

  Optional custom headers to include in requests. These are added to request headers
  after any header added by the `apiKey` option.

- **queryParams** _Record&lt;string,string&gt;_

  Optional custom URL query parameters to include in request URLs.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_

  Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
  Defaults to the global `fetch` function.
  You can use it as middleware to intercept requests,
  or to provide a custom fetch implementation for e.g. testing.

- **includeUsage** _boolean_

  Include usage information in streaming responses. When enabled, usage data is included in the response metadata for streaming requests. Defaults to `undefined`, which behaves like `false`.

- **supportsStructuredOutputs** _boolean_

  Set to `true` if the provider supports structured outputs. Only relevant for `provider()`, `provider.chatModel()`, and `provider.languageModel()`.

- **transformRequestBody** _(args: Record&lt;string, any&gt;) =&gt; Record&lt;string, any&gt;_

  Optional function to transform the request body before sending it to the API.
  This is useful for proxy providers that require a different request format
  than the official OpenAI API.

- **metadataExtractor** _MetadataExtractor_

  Optional metadata extractor to capture provider-specific metadata from API responses.
  See [Custom Metadata Extraction](#custom-metadata-extraction) for details.

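As a concrete sketch of the `transformRequestBody` setting: suppose a proxy expects a `max_output_tokens` field where the OpenAI API uses `max_tokens`. The field names here are hypothetical, chosen only to illustrate the transform:

```typescript
// Hypothetical transform: rename `max_tokens` to `max_output_tokens`
// for a proxy that uses a different field name than the OpenAI API.
const transformRequestBody = (
  args: Record<string, any>,
): Record<string, any> => {
  const { max_tokens, ...rest } = args;
  return max_tokens === undefined
    ? rest
    : { ...rest, max_output_tokens: max_tokens };
};
```

A function like this can then be passed as the `transformRequestBody` option when calling `createOpenAICompatible`.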
## Language Models

You can create provider models using a provider instance.
The first argument is the model id, e.g. `model-id`.

```ts
const model = provider('model-id');
```

You can also use the following factory methods:

- `provider.languageModel('model-id')` - creates a chat language model (same as `provider('model-id')`)
- `provider.chatModel('model-id')` - creates a chat language model

### Supported Capabilities

Chat models created with this provider support the following capabilities:

- **Text generation** - Generate text completions
- **Streaming** - Stream text responses in real-time
- **Tool calling** - Call tools/functions with streaming support
- **Structured outputs** - Generate JSON with schema validation (when `supportsStructuredOutputs` is enabled)
- **Reasoning content** - Support for models that return reasoning/thinking tokens (e.g., DeepSeek R1)
- **System messages** - Support for system prompts
- **Multi-modal inputs** - Support for images and other content types (provider-dependent)

### Example

You can use provider language models to generate text with the `generateText` function:

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const { text } = await generateText({
  model: provider('model-id'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

### Including model ids for auto-completion

You can pass model id unions as type parameters to `createOpenAICompatible` to get editor auto-completion for known model ids while still allowing free-form strings:

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

type ExampleChatModelIds =
  | 'meta-llama/Llama-3-70b-chat-hf'
  | 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'
  | (string & {});

type ExampleCompletionModelIds =
  | 'codellama/CodeLlama-34b-Instruct-hf'
  | 'Qwen/Qwen2.5-Coder-32B-Instruct'
  | (string & {});

type ExampleEmbeddingModelIds =
  | 'BAAI/bge-large-en-v1.5'
  | 'bert-base-uncased'
  | (string & {});

type ExampleImageModelIds = 'dall-e-3' | 'stable-diffusion-xl' | (string & {});

const provider = createOpenAICompatible<
  ExampleChatModelIds,
  ExampleCompletionModelIds,
  ExampleEmbeddingModelIds,
  ExampleImageModelIds
>({
  name: 'example',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.example.com/v1',
});

// Subsequent calls to e.g. `provider.chatModel` will auto-complete the model id
// from the list of `ExampleChatModelIds` while still allowing free-form
// strings as well.

const { text } = await generateText({
  model: provider.chatModel('meta-llama/Llama-3-70b-chat-hf'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

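The `(string & {})` member in each union is what keeps the model id types open: it preserves auto-completion for the listed literals while still accepting arbitrary strings (a plain `'a' | string` would collapse to `string` and lose the suggestions). A minimal standalone sketch of the pattern:

```typescript
// Literal members drive editor auto-completion; `(string & {})` keeps
// the union open so free-form model ids still type-check.
type ChatModelId =
  | 'meta-llama/Llama-3-70b-chat-hf'
  | 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo'
  | (string & {});

function selectModel(id: ChatModelId): string {
  return id;
}

// Both known and free-form ids are accepted:
const known = selectModel('meta-llama/Llama-3-70b-chat-hf');
const custom = selectModel('my-org/my-custom-model');
```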
### Custom query parameters

Some providers may require custom query parameters. An example is the
[Azure AI Model Inference API](https://learn.microsoft.com/en-us/azure/machine-learning/reference-model-inference-chat-completions?view=azureml-api-2),
which requires an `api-version` query parameter.

You can set these via the optional `queryParams` provider setting. They will be
added to all requests made by the provider.

```ts highlight="7-9"
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
  queryParams: {
    'api-version': '1.0.0',
  },
});
```

For example, with the above configuration, API requests would include the query parameter in the URL like:
`https://api.provider.com/v1/chat/completions?api-version=1.0.0`.

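The way such parameters end up in the request URL can be sketched with the standard `URL` API (this is an illustration of the resulting URL, not the SDK's internal implementation):

```typescript
// Sketch: append configured query parameters to an endpoint URL.
const queryParams: Record<string, string> = { 'api-version': '1.0.0' };

const url = new URL('https://api.provider.com/v1/chat/completions');
for (const [key, value] of Object.entries(queryParams)) {
  url.searchParams.set(key, value);
}

// url.toString() →
// 'https://api.provider.com/v1/chat/completions?api-version=1.0.0'
```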
## Image Models

You can create image models using the `.imageModel()` factory method:

```ts
const model = provider.imageModel('model-id');
```

### Basic Image Generation

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateImage } from 'ai';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const { images } = await generateImage({
  model: provider.imageModel('model-id'),
  prompt: 'A futuristic cityscape at sunset',
  size: '1024x1024',
});
```

### Image Editing

The OpenAI Compatible provider supports image editing through the `/images/edits` endpoint. Pass input images via `prompt.images` to transform or edit existing images.

#### Basic Image Editing

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateImage } from 'ai';
import fs from 'fs';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const imageBuffer = fs.readFileSync('./input-image.png');

const { images } = await generateImage({
  model: provider.imageModel('model-id'),
  prompt: {
    text: 'Turn the cat into a dog but retain the style of the original image',
    images: [imageBuffer],
  },
});
```

#### Inpainting with Mask

Edit specific parts of an image using a mask:

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateImage } from 'ai';
import fs from 'fs';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const image = fs.readFileSync('./input-image.png');
const mask = fs.readFileSync('./mask.png');

const { images } = await generateImage({
  model: provider.imageModel('model-id'),
  prompt: {
    text: 'A sunlit indoor lounge area with a pool containing a flamingo',
    images: [image],
    mask,
  },
});
```

<Note>
  Input images can be provided as `Buffer`, `ArrayBuffer`, `Uint8Array`,
  base64-encoded strings, or URLs. The provider will automatically download
  URL-based images and convert them to the appropriate format.
</Note>

## Embedding Models

You can create embedding models using the `.embeddingModel()` factory method:

```ts
const model = provider.embeddingModel('model-id');
```

### Example

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { embed } from 'ai';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const { embedding } = await embed({
  model: provider.embeddingModel('text-embedding-model'),
  value: 'The quick brown fox jumps over the lazy dog',
});
```

### Embedding Model Options

The following provider options are available for embedding models via `providerOptions`:

- **dimensions** _number_

  The number of dimensions the resulting output embeddings should have.
  Only supported in models that allow dimension configuration.

- **user** _string_

  A unique identifier representing your end-user, which can help providers to
  monitor and detect abuse.

```ts
const { embedding } = await embed({
  model: provider.embeddingModel('text-embedding-model'),
  value: 'The quick brown fox jumps over the lazy dog',
  providerOptions: {
    providerName: {
      dimensions: 512,
      user: 'user-123',
    },
  },
});
```

## Completion Models

You can create completion models (for text completion, not chat) using the `.completionModel()` factory method:

```ts
const model = provider.completionModel('model-id');
```

### Example

```ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const { text } = await generateText({
  model: provider.completionModel('completion-model-id'),
  prompt: 'The quick brown fox',
});
```

### Completion Model Options

The following provider options are available for completion models via `providerOptions`:

- **echo** _boolean_

  Echo back the prompt in addition to the completion.

- **logitBias** _Record&lt;string, number&gt;_

  Modify the likelihood of specified tokens appearing in the completion.
  Accepts a JSON object that maps tokens (specified by their token ID) to an
  associated bias value from -100 to 100.

- **suffix** _string_

  The suffix that comes after a completion of inserted text.

- **user** _string_

  A unique identifier representing your end-user, which can help providers to
  monitor and detect abuse.

```ts
const { text } = await generateText({
  model: provider.completionModel('completion-model-id'),
  prompt: 'The quick brown fox',
  providerOptions: {
    providerName: {
      echo: true,
      suffix: ' The end.',
      user: 'user-123',
    },
  },
});
```

## Chat Model Options

The following provider options are available for chat models via `providerOptions`:

- **user** _string_

  A unique identifier representing your end-user, which can help the provider to
  monitor and detect abuse.

- **reasoningEffort** _string_

  Reasoning effort for reasoning models. The exact values depend on the provider.

- **textVerbosity** _string_

  Controls the verbosity of the generated text. The exact values depend on the provider.

- **strictJsonSchema** _boolean_

  Whether to use strict JSON schema validation. When `true`, the model uses constrained
  decoding to guarantee schema compliance. Only used when the provider supports
  structured outputs and a schema is provided. Defaults to `true`.

```ts
const { text } = await generateText({
  model: provider('model-id'),
  prompt: 'Solve this step by step: What is 15 * 23?',
  providerOptions: {
    providerName: {
      user: 'user-123',
      reasoningEffort: 'high',
    },
  },
});
```

## Provider-specific options

The OpenAI Compatible provider supports adding provider-specific options to the request body. These are specified with the `providerOptions` field in the request body.

For example, if you create a provider instance with the name `providerName`, you can add a `customOption` field to the request body like this:

```ts
const provider = createOpenAICompatible({
  name: 'providerName',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
});

const { text } = await generateText({
  model: provider('model-id'),
  prompt: 'Hello',
  providerOptions: {
    providerName: { customOption: 'magic-value' },
  },
});
```

Note that the `providerOptions` key must be in camelCase. If you set the provider name to `provider-name`, the options still need to be set on `providerOptions.providerName`.

The request body sent to the provider will include the `customOption` field with the value `magic-value`. This gives you an easy way to add provider-specific options to requests without having to modify the provider or AI SDK code.

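The camelCase key can be derived mechanically from the provider name. The helper below is illustrative only (it is not part of the SDK), but it matches the documented `provider-name` → `providerName` behavior:

```typescript
// Illustrative helper: convert a kebab-case provider name to the
// camelCase key expected under `providerOptions`.
function providerOptionsKey(name: string): string {
  return name.replace(/-([a-z])/g, (_, ch: string) => ch.toUpperCase());
}
```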
## Custom Metadata Extraction

The OpenAI Compatible provider supports extracting provider-specific metadata from API responses through metadata extractors.
These extractors allow you to capture additional information returned by the provider beyond the standard response format.

Metadata extractors receive the raw, unprocessed response data from the provider, giving you complete flexibility
to extract any custom fields or experimental features that the provider may include.
This is particularly useful when:

- Working with providers that include non-standard response fields
- Experimenting with beta or preview features
- Capturing provider-specific metrics or debugging information
- Supporting rapid provider API evolution without SDK changes

Metadata extractors work with both streaming and non-streaming chat completions and consist of two main components:

1. A function to extract metadata from complete responses
2. A streaming extractor that can accumulate metadata across chunks in a streaming response

Here's an example metadata extractor that captures both standard and custom provider data:

```typescript
import { MetadataExtractor } from '@ai-sdk/openai-compatible';

const myMetadataExtractor: MetadataExtractor = {
  // Process complete, non-streaming responses
  extractMetadata: ({ parsedBody }) => {
    // You have access to the complete raw response
    // Extract any fields the provider includes
    return {
      myProvider: {
        standardUsage: parsedBody.usage,
        experimentalFeatures: parsedBody.beta_features,
        customMetrics: {
          processingTime: parsedBody.server_timing?.total_ms,
          modelVersion: parsedBody.model_version,
          // ... any other provider-specific data
        },
      },
    };
  },

  // Process streaming responses
  createStreamExtractor: () => {
    const accumulatedData: {
      timing: unknown[];
      customFields: Record<string, unknown>;
    } = {
      timing: [],
      customFields: {},
    };

    return {
      // Process each chunk's raw data
      processChunk: parsedChunk => {
        if (parsedChunk.server_timing) {
          accumulatedData.timing.push(parsedChunk.server_timing);
        }
        if (parsedChunk.custom_data) {
          Object.assign(accumulatedData.customFields, parsedChunk.custom_data);
        }
      },
      // Build final metadata from accumulated data
      buildMetadata: () => ({
        myProvider: {
          streamTiming: accumulatedData.timing,
          customData: accumulatedData.customFields,
        },
      }),
    };
  },
};
```

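To see how the streaming pieces fit together, the accumulate-then-build pattern from the example above can be exercised standalone. The chunk shapes here (`server_timing`, `custom_data`) are hypothetical provider fields, as in the example:

```typescript
// Standalone sketch of the stream-extractor pattern: accumulate
// per-chunk data, then build the final metadata once the stream ends.
type Chunk = {
  server_timing?: { total_ms: number };
  custom_data?: Record<string, unknown>;
};

function createStreamExtractor() {
  const timing: Array<{ total_ms: number }> = [];
  const customFields: Record<string, unknown> = {};
  return {
    processChunk(chunk: Chunk) {
      if (chunk.server_timing) timing.push(chunk.server_timing);
      if (chunk.custom_data) Object.assign(customFields, chunk.custom_data);
    },
    buildMetadata: () => ({
      myProvider: { streamTiming: timing, customData: customFields },
    }),
  };
}

const extractor = createStreamExtractor();
extractor.processChunk({ server_timing: { total_ms: 12 } });
extractor.processChunk({ custom_data: { cacheHit: true } });
const metadata = extractor.buildMetadata();
```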
You can provide a metadata extractor when creating your provider instance:

```typescript
const provider = createOpenAICompatible({
  name: 'my-provider',
  apiKey: process.env.PROVIDER_API_KEY,
  baseURL: 'https://api.provider.com/v1',
  metadataExtractor: myMetadataExtractor,
});
```

The extracted metadata will be included in the response under the `providerMetadata` field:

```typescript
const { text, providerMetadata } = await generateText({
  model: provider('model-id'),
  prompt: 'Hello',
});

console.log(providerMetadata.myProvider.customMetrics);
```

This allows you to access provider-specific information while maintaining a consistent interface across different providers.
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@ai-sdk/openai-compatible",
-  "version": "2.0.21",
+  "version": "2.0.22",
   "license": "Apache-2.0",
   "sideEffects": false,
   "main": "./dist/index.js",
@@ -8,6 +8,7 @@
   "types": "./dist/index.d.ts",
   "files": [
     "dist/**/*",
+    "docs/**/*",
     "src",
     "!src/**/*.test.ts",
     "!src/**/*.test-d.ts",
@@ -17,6 +18,9 @@
     "README.md",
     "internal.d.ts"
   ],
+  "directories": {
+    "doc": "./docs"
+  },
   "exports": {
     "./package.json": "./package.json",
     ".": {
@@ -66,7 +70,7 @@
   "scripts": {
     "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
     "build:watch": "pnpm clean && tsup --watch",
-    "clean": "del-cli dist *.tsbuildinfo",
+    "clean": "del-cli dist docs *.tsbuildinfo",
     "lint": "eslint \"./**/*.ts*\"",
     "type-check": "tsc --build",
     "prettier-check": "prettier --check \"./**/*.ts*\"",
@@ -1,10 +1,10 @@
 import { SharedV3ProviderMetadata } from '@ai-sdk/provider';

 /**
- Extracts provider-specific metadata from API responses.
- Used to standardize metadata handling across different LLM providers while allowing
- provider-specific metadata to be captured.
- */
+ * Extracts provider-specific metadata from API responses.
+ * Used to standardize metadata handling across different LLM providers while allowing
+ * provider-specific metadata to be captured.
+ */
 export type MetadataExtractor = {
   /**
    * Extracts provider metadata from a complete, non-streaming response.
@@ -23,13 +23,13 @@ import {

 type OpenAICompatibleEmbeddingConfig = {
   /**
-  Override the maximum number of embeddings per call.
+   * Override the maximum number of embeddings per call.
    */
   maxEmbeddingsPerCall?: number;

   /**
-  Override the parallelism of embedding calls.
-  */
+   * Override the parallelism of embedding calls.
+   */
   supportsParallelCalls?: boolean;

   provider: string;
@@ -48,41 +48,41 @@ export interface OpenAICompatibleProvider<

 export interface OpenAICompatibleProviderSettings {
   /**
-  Base URL for the API calls.
+   * Base URL for the API calls.
    */
   baseURL: string;

   /**
-  Provider name.
+   * Provider name.
    */
   name: string;

   /**
-  API key for authenticating requests. If specified, adds an `Authorization`
-  header to request headers with the value `Bearer <apiKey>`. This will be added
-  before any headers potentially specified in the `headers` option.
+   * API key for authenticating requests. If specified, adds an `Authorization`
+   * header to request headers with the value `Bearer <apiKey>`. This will be added
+   * before any headers potentially specified in the `headers` option.
    */
   apiKey?: string;

   /**
-  Optional custom headers to include in requests. These will be added to request headers
-  after any headers potentially added by use of the `apiKey` option.
+   * Optional custom headers to include in requests. These will be added to request headers
+   * after any headers potentially added by use of the `apiKey` option.
    */
   headers?: Record<string, string>;

   /**
-  Optional custom url query parameters to include in request urls.
+   * Optional custom url query parameters to include in request urls.
    */
   queryParams?: Record<string, string>;

   /**
-  Custom fetch implementation. You can use it as a middleware to intercept requests,
-  or to provide a custom fetch implementation for e.g. testing.
+   * Custom fetch implementation. You can use it as a middleware to intercept requests,
+   * or to provide a custom fetch implementation for e.g. testing.
    */
   fetch?: FetchFunction;

   /**
-  Include usage information in streaming responses.
+   * Include usage information in streaming responses.
    */
   includeUsage?: boolean;

@@ -107,7 +107,7 @@ Include usage information in streaming responses.
 }

 /**
- Create an OpenAICompatible provider instance.
+ * Create an OpenAICompatible provider instance.
  */
 export function createOpenAICompatible<
   CHAT_MODEL_IDS extends string,