@ai-sdk/openai 4.0.0-beta.2 → 4.0.0-beta.21
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +234 -22
- package/README.md +2 -0
- package/dist/index.d.mts +134 -35
- package/dist/index.d.ts +134 -35
- package/dist/index.js +1700 -1139
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +1697 -1117
- package/dist/index.mjs.map +1 -1
- package/dist/internal/index.d.mts +107 -41
- package/dist/internal/index.d.ts +107 -41
- package/dist/internal/index.js +1380 -939
- package/dist/internal/index.js.map +1 -1
- package/dist/internal/index.mjs +1371 -917
- package/dist/internal/index.mjs.map +1 -1
- package/docs/03-openai.mdx +274 -9
- package/package.json +3 -5
- package/src/chat/convert-openai-chat-usage.ts +2 -2
- package/src/chat/convert-to-openai-chat-messages.ts +26 -15
- package/src/chat/map-openai-finish-reason.ts +2 -2
- package/src/chat/openai-chat-language-model.ts +32 -24
- package/src/chat/openai-chat-options.ts +5 -0
- package/src/chat/openai-chat-prepare-tools.ts +6 -6
- package/src/completion/convert-openai-completion-usage.ts +2 -2
- package/src/completion/convert-to-openai-completion-prompt.ts +2 -2
- package/src/completion/map-openai-finish-reason.ts +2 -2
- package/src/completion/openai-completion-language-model.ts +20 -20
- package/src/embedding/openai-embedding-model.ts +5 -5
- package/src/files/openai-files-api.ts +17 -0
- package/src/files/openai-files-options.ts +18 -0
- package/src/files/openai-files.ts +102 -0
- package/src/image/openai-image-model.ts +9 -9
- package/src/index.ts +2 -0
- package/src/openai-config.ts +5 -5
- package/src/openai-language-model-capabilities.ts +3 -2
- package/src/openai-provider.ts +39 -21
- package/src/openai-tools.ts +12 -1
- package/src/responses/convert-openai-responses-usage.ts +2 -2
- package/src/responses/convert-to-openai-responses-input.ts +188 -14
- package/src/responses/map-openai-responses-finish-reason.ts +2 -2
- package/src/responses/openai-responses-api.ts +136 -2
- package/src/responses/openai-responses-language-model.ts +233 -37
- package/src/responses/openai-responses-options.ts +24 -2
- package/src/responses/openai-responses-prepare-tools.ts +34 -9
- package/src/responses/openai-responses-provider-metadata.ts +10 -0
- package/src/speech/openai-speech-model.ts +7 -7
- package/src/tool/custom.ts +0 -6
- package/src/tool/tool-search.ts +98 -0
- package/src/transcription/openai-transcription-model.ts +8 -8
package/CHANGELOG.md
CHANGED
````diff
@@ -1,5 +1,217 @@
 # @ai-sdk/openai
 
+## 4.0.0-beta.21
+
+### Patch Changes
+
+- c29a26f: feat(provider): add support for provider references and uploading files as supported per provider
+- Updated dependencies [c29a26f]
+  - @ai-sdk/provider-utils@5.0.0-beta.10
+  - @ai-sdk/provider@4.0.0-beta.6
+
+## 4.0.0-beta.20
+
+### Patch Changes
+
+- 38fc777: Add AI Gateway hint to provider READMEs
+
+## 4.0.0-beta.19
+
+### Patch Changes
+
+- Updated dependencies [2e17091]
+  - @ai-sdk/provider-utils@5.0.0-beta.9
+
+## 4.0.0-beta.18
+
+### Patch Changes
+
+- Updated dependencies [986c6fd]
+- Updated dependencies [493295c]
+  - @ai-sdk/provider-utils@5.0.0-beta.8
+
+## 4.0.0-beta.17
+
+### Patch Changes
+
+- 817a1a6: fix(openai): support file-url parts in tool output content
+
+## 4.0.0-beta.16
+
+### Patch Changes
+
+- 1f509d4: fix(ai): force template check on 'kind' param
+- Updated dependencies [1f509d4]
+  - @ai-sdk/provider-utils@5.0.0-beta.7
+  - @ai-sdk/provider@4.0.0-beta.5
+
+## 4.0.0-beta.15
+
+### Patch Changes
+
+- 365da1a: Add `gpt-5.4-mini`, `gpt-5.4-mini-2026-03-17`, `gpt-5.4-nano`, and `gpt-5.4-nano-2026-03-17` models.
+
+## 4.0.0-beta.14
+
+### Patch Changes
+
+- e6376c2: fix(openai): preserve raw finish reason for failed responses stream events
+
+  Handle `response.failed` chunks in Responses API streaming so `finishReason.raw` is preserved from `incomplete_details.reason` (e.g. `max_output_tokens`), and map failed-without-reason cases to unified `error` instead of `other`.
+
+## 4.0.0-beta.13
+
+### Patch Changes
+
+- 3887c70: feat(provider): add new top-level reasoning parameter to spec and support it in `generateText` and `streamText`
+- Updated dependencies [3887c70]
+  - @ai-sdk/provider-utils@5.0.0-beta.6
+  - @ai-sdk/provider@4.0.0-beta.4
+
+## 4.0.0-beta.12
+
+### Patch Changes
+
+- d9a1e9a: feat(openai): add server side compaction for openai
+
+## 4.0.0-beta.11
+
+### Patch Changes
+
+- Updated dependencies [776b617]
+  - @ai-sdk/provider-utils@5.0.0-beta.5
+  - @ai-sdk/provider@4.0.0-beta.3
+
+## 4.0.0-beta.10
+
+### Major Changes
+
+- 61753c3: ### `@ai-sdk/openai`: remove redundant `name` argument from `openai.tools.customTool()`
+
+  `openai.tools.customTool()` no longer accepts a `name` field. the tool name is now derived from the sdk tool key (the object key in the `tools` object).
+
+  migration: remove the `name` property from `customTool()` calls. the object key is now used as the tool name sent to the openai api.
+
+  before:
+
+  ```ts
+  tools: {
+    write_sql: openai.tools.customTool({
+      name: 'write_sql',
+      description: '...',
+    }),
+  }
+  ```
+
+  after:
+
+  ```ts
+  tools: {
+    write_sql: openai.tools.customTool({
+      description: '...',
+    }),
+  }
+  ```
+
+  ### `@ai-sdk/provider-utils`: `createToolNameMapping()` no longer accepts the `resolveProviderToolName` parameter
+
+  before: tool name can be set dynamically
+
+  ```ts
+  const toolNameMapping = createToolNameMapping({
+    tools,
+    providerToolNames: {
+      "openai.code_interpreter": "code_interpreter",
+      "openai.file_search": "file_search",
+      "openai.image_generation": "image_generation",
+      "openai.local_shell": "local_shell",
+      "openai.shell": "shell",
+      "openai.web_search": "web_search",
+      "openai.web_search_preview": "web_search_preview",
+      "openai.mcp": "mcp",
+      "openai.apply_patch": "apply_patch",
+    },
+    resolveProviderToolName: (tool) =>
+      tool.id === "openai.custom"
+        ? (tool.args as { name?: string }).name
+        : undefined,
+  });
+  ```
+
+  after: tool name is static based on `tools` keys
+
+  ```ts
+  const toolNameMapping = createToolNameMapping({
+    tools,
+    providerToolNames: {
+      'openai.code_interpreter': 'code_interpreter',
+      'openai.file_search': 'file_search',
+      'openai.image_generation': 'image_generation',
+      'openai.local_shell': 'local_shell',
+      'openai.shell': 'shell',
+      'openai.web_search': 'web_search',
+      'openai.web_search_preview': 'web_search_preview',
+      'openai.mcp': 'mcp',
+      'openai.apply_patch': 'apply_patch',
+    }
+  });
+  ```
+
+### Patch Changes
+
+- Updated dependencies [61753c3]
+  - @ai-sdk/provider-utils@5.0.0-beta.4
+
````
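The `customTool()` change in 4.0.0-beta.10 above can be illustrated with a small sketch. This is a hypothetical illustration, not the SDK's internal code; the `CustomTool` type and `deriveToolNames` helper are invented here purely to show how the object key can serve as the wire-level tool name, making a separate `name` field redundant.

```typescript
// Hypothetical sketch of the idea behind the beta.10 change: the tool
// name sent to the API comes from the key in the `tools` object, so a
// separate `name` field is redundant and could contradict the key.
type CustomTool = { description: string };

// Derive the wire-level tool names purely from the object keys.
function deriveToolNames(tools: Record<string, CustomTool>): string[] {
  return Object.keys(tools);
}

const names = deriveToolNames({
  write_sql: { description: "Writes a SQL query." },
  run_sql: { description: "Executes a SQL query." },
});
console.log(names); // → ["write_sql", "run_sql"]
```

Because JavaScript preserves insertion order for string object keys, the derived names are stable, which is what allows the mapping to be static (as in the `createToolNameMapping()` change above).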
````diff
+## 4.0.0-beta.9
+
+### Patch Changes
+
+- 156cdf0: feat(openai): add new tool search tool
+
+## 4.0.0-beta.8
+
+### Patch Changes
+
+- Updated dependencies [f7d4f01]
+  - @ai-sdk/provider-utils@5.0.0-beta.3
+  - @ai-sdk/provider@4.0.0-beta.2
+
+## 4.0.0-beta.7
+
+### Patch Changes
+
+- Updated dependencies [5c2a5a2]
+  - @ai-sdk/provider@4.0.0-beta.1
+  - @ai-sdk/provider-utils@5.0.0-beta.2
+
+## 4.0.0-beta.6
+
+### Patch Changes
+
+- 83f9d04: feat(openai): upgrade v3 specs to v4
+
+## 4.0.0-beta.5
+
+### Patch Changes
+
+- ac18f89: feat(provider/openai): add `gpt-5.3-chat-latest`
+
+## 4.0.0-beta.4
+
+### Patch Changes
+
+- a71d345: fix(provider/openai): drop reasoning parts without encrypted content when store: false
+
+## 4.0.0-beta.3
+
+### Patch Changes
+
+- 45b3d76: fix(security): prevent streaming tool calls from finalizing on parsable partial JSON
+
+  Streaming tool call arguments were finalized using `isParsableJson()` as a heuristic for completion. If partial accumulated JSON happened to be valid JSON before all chunks arrived, the tool call would be executed with incomplete arguments. Tool call finalization now only occurs in `flush()` after the stream is fully consumed.
+
+- f7295cb: revert incorrect fix https://github.com/vercel/ai/pull/13172
+
 ## 4.0.0-beta.2
 
 ### Patch Changes
````
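The 4.0.0-beta.3 security fix described above can be sketched as follows. This is a simplified, hypothetical model of the behavior (not the SDK's actual streaming code): argument deltas are only accumulated while streaming, and JSON parsing happens exactly once in a flush step after the stream ends, so a prefix that happens to be valid JSON can no longer finalize the call early.

```typescript
// Simplified model of the beta.3 fix (not the SDK's actual code).
// Tool-call arguments arrive as text deltas; e.g. the full argument
// text "123" may arrive as "12" then "3". The prefix "12" is itself
// valid JSON, so an isParsableJson()-style completion heuristic would
// have finalized the call too early, with the wrong value.
class ToolCallAccumulator {
  private buffer = "";
  constructor(private readonly name: string) {}

  // Streaming path: only accumulate, never parse.
  push(delta: string): void {
    this.buffer += delta;
  }

  // Flush path: parse once, after the stream is fully consumed.
  flush(): { name: string; args: unknown } {
    return { name: this.name, args: JSON.parse(this.buffer) };
  }
}

const acc = new ToolCallAccumulator("set_limit");
acc.push("12"); // valid JSON on its own, but incomplete
acc.push("3");
const call = acc.flush();
console.log(call.args); // → 123
```

Deferring the parse to `flush()` trades a little latency for correctness: no intermediate buffer state can trigger execution, only the fully consumed stream.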
````diff
@@ -360,13 +572,13 @@
 Before
 
 ```ts
-model.textEmbeddingModel(
+model.textEmbeddingModel("my-model-id");
 ```
 
 After
 
 ```ts
-model.embeddingModel(
+model.embeddingModel("my-model-id");
 ```
 
 - 60f4775: fix: remove code for unsuported o1-mini and o1-preview models
@@ -376,15 +588,15 @@
 - 2e86082: feat(provider/openai): `OpenAIChatLanguageModelOptions` type
 
 ```ts
-import { openai, type OpenAIChatLanguageModelOptions } from
-import { generateText } from
+import { openai, type OpenAIChatLanguageModelOptions } from "@ai-sdk/openai";
+import { generateText } from "ai";
 
 await generateText({
-  model: openai.chat(
-  prompt:
+  model: openai.chat("gpt-4o"),
+  prompt: "Invent a new holiday and describe its traditions.",
   providerOptions: {
     openai: {
-      user:
+      user: "user-123",
     } satisfies OpenAIChatLanguageModelOptions,
   },
 });
@@ -785,13 +997,13 @@
 Before
 
 ```ts
-model.textEmbeddingModel(
+model.textEmbeddingModel("my-model-id");
 ```
 
 After
 
 ```ts
-model.embeddingModel(
+model.embeddingModel("my-model-id");
 ```
 
 - Updated dependencies [8d9e8ad]
@@ -1261,15 +1473,15 @@
 - 2e86082: feat(provider/openai): `OpenAIChatLanguageModelOptions` type
 
 ```ts
-import { openai, type OpenAIChatLanguageModelOptions } from
-import { generateText } from
+import { openai, type OpenAIChatLanguageModelOptions } from "@ai-sdk/openai";
+import { generateText } from "ai";
 
 await generateText({
-  model: openai.chat(
-  prompt:
+  model: openai.chat("gpt-4o"),
+  prompt: "Invent a new holiday and describe its traditions.",
   providerOptions: {
     openai: {
-      user:
+      user: "user-123",
     } satisfies OpenAIChatLanguageModelOptions,
   },
 });
@@ -1565,7 +1777,7 @@
 
 ```js
 await generateImage({
-  model: luma.image(
+  model: luma.image("photon-flash-1", {
     maxImagesPerCall: 5,
     pollIntervalMillis: 500,
   }),
@@ -1578,7 +1790,7 @@
 
 ```js
 await generateImage({
-  model: luma.image(
+  model: luma.image("photon-flash-1"),
   prompt,
   n: 10,
   maxImagesPerCall: 5,
@@ -1640,10 +1852,10 @@
 The `experimental_generateImage` method from the `ai` package now returnes revised prompts for OpenAI's image models.
 
 ```js
-const prompt =
+const prompt = "Santa Claus driving a Cadillac";
 
 const { providerMetadata } = await experimental_generateImage({
-  model: openai.image(
+  model: openai.image("dall-e-3"),
   prompt,
 });
 
@@ -1942,7 +2154,7 @@
 
 ```js
 await generateImage({
-  model: luma.image(
+  model: luma.image("photon-flash-1", {
     maxImagesPerCall: 5,
     pollIntervalMillis: 500,
   }),
@@ -1955,7 +2167,7 @@
 
 ```js
 await generateImage({
-  model: luma.image(
+  model: luma.image("photon-flash-1"),
   prompt,
   n: 10,
   maxImagesPerCall: 5,
@@ -2000,10 +2212,10 @@
 The `experimental_generateImage` method from the `ai` package now returnes revised prompts for OpenAI's image models.
 
 ```js
-const prompt =
+const prompt = "Santa Claus driving a Cadillac";
 
 const { providerMetadata } = await experimental_generateImage({
-  model: openai.image(
+  model: openai.image("dall-e-3"),
   prompt,
 });
 
````
package/README.md
CHANGED
````diff
@@ -3,6 +3,8 @@
 The **[OpenAI provider](https://ai-sdk.dev/providers/ai-sdk-providers/openai)** for the [AI SDK](https://ai-sdk.dev/docs)
 contains language model support for the OpenAI chat and completion APIs and embedding model support for the OpenAI embeddings API.
 
+> **Deploying to Vercel?** With Vercel's AI Gateway you can access OpenAI (and hundreds of models from other providers) — no additional packages, API keys, or extra cost. [Get started with AI Gateway](https://vercel.com/ai-gateway).
+
 ## Setup
 
 The OpenAI provider is available in the `@ai-sdk/openai` module. You can install it with
````