@runpod/ai-sdk-provider 0.9.0 → 0.11.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +17 -0
- package/README.md +378 -363
- package/dist/index.js +61 -3
- package/dist/index.js.map +1 -1
- package/dist/index.mjs +61 -3
- package/dist/index.mjs.map +1 -1
- package/package.json +2 -2
package/README.md
CHANGED
@@ -1,363 +1,378 @@
_(previous README content, 363 lines, removed and replaced in full by the version below)_
# Runpod AI SDK Provider

The **Runpod provider** for the [AI SDK](https://ai-sdk.dev/docs) contains language model and image generation support for [Runpod's](https://runpod.io) public endpoints.

## Setup

The Runpod provider is available in the `@runpod/ai-sdk-provider` module. You can install it with:

```bash
# npm
npm install @runpod/ai-sdk-provider

# pnpm
pnpm add @runpod/ai-sdk-provider

# yarn
yarn add @runpod/ai-sdk-provider

# bun
bun add @runpod/ai-sdk-provider
```

## Provider Instance

You can import the default provider instance `runpod` from `@runpod/ai-sdk-provider`:

```ts
import { runpod } from '@runpod/ai-sdk-provider';
```

If you need a customized setup, you can import `createRunpod` and create a provider instance with your settings:

```ts
import { createRunpod } from '@runpod/ai-sdk-provider';

const runpod = createRunpod({
  apiKey: 'your-api-key', // optional, defaults to RUNPOD_API_KEY environment variable
  baseURL: 'custom-url', // optional, for custom endpoints
  headers: {
    /* custom headers */
  }, // optional
});
```

You can use the following optional settings to customize the Runpod provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers or custom endpoints.
  Supports vLLM deployments, SGLang servers, and any OpenAI-compatible API.
  The default prefix is `https://api.runpod.ai/v2`.

- **apiKey** _string_

  API key that is sent using the `Authorization` header.
  It defaults to the `RUNPOD_API_KEY` environment variable.
  You can obtain your API key from the [Runpod Console](https://console.runpod.io/user/settings) under "API Keys".

- **headers** _Record<string,string>_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise<Response>_

  Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
  You can use it as middleware to intercept requests,
  or to provide a custom fetch implementation, e.g. for testing.
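
For example, a minimal sketch of passing a custom `fetch` to `createRunpod` to log outgoing requests; the logging wrapper itself is illustrative and not part of the provider:

```ts
import { createRunpod } from '@runpod/ai-sdk-provider';

// Illustrative wrapper: log each request before delegating to the global fetch.
const loggingFetch = async (
  input: RequestInfo,
  init?: RequestInit,
): Promise<Response> => {
  console.log('Runpod request:', input);
  return fetch(input, init);
};

const runpod = createRunpod({
  // apiKey is omitted here, so it falls back to the RUNPOD_API_KEY environment variable.
  fetch: loggingFetch,
});
```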

## Language Models

You can create language models using the provider instance. The first argument is the model ID:

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: runpod('qwen/qwen3-32b-awq'),
  prompt: 'What is the capital of Germany?',
});
```

**Returns:**

- `text` - Generated text string
- `finishReason` - Why generation stopped ('stop', 'length', etc.)
- `usage` - Token usage information (prompt, completion, total tokens)
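
A minimal sketch of reading these fields from the result of the call above:

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { generateText } from 'ai';

const { text, finishReason, usage } = await generateText({
  model: runpod('qwen/qwen3-32b-awq'),
  prompt: 'What is the capital of Germany?',
});

console.log(text); // the generated answer
console.log(finishReason); // e.g. 'stop' when the model finished naturally
console.log(usage); // token usage object (prompt, completion, total tokens)
```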

### Streaming

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { streamText } from 'ai';

const { textStream } = await streamText({
  model: runpod('qwen/qwen3-32b-awq'),
  prompt:
    'Write a short poem about artificial intelligence in exactly 4 lines.',
  temperature: 0.7,
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}
```

### Model Capabilities

| Model ID | Description | Streaming | Object Generation | Tool Usage | Reasoning Notes |
| --------------------------------- | -------------------------------------------------------------------- | --------- | ----------------- | ---------- | ------------------------- |
| `qwen/qwen3-32b-awq` | 32B parameter multilingual model with strong reasoning capabilities | ✅ | ❌ | ✅ | Standard reasoning events |
| `openai/gpt-oss-120b` | 120B parameter open-source GPT model | ✅ | ❌ | ✅ | Standard reasoning events |
| `deepcogito/cogito-671b-v2.1-fp8` | 671B parameter Cogito model with FP8 quantization | ✅ | ❌ | ✅ | Standard reasoning events |

**Note:** This list is not complete. For a full list of all available models, see the [Runpod Public Endpoint Reference](https://docs.runpod.io/hub/public-endpoint-reference).
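
The Model ID column maps directly to the provider call. For instance, a minimal sketch using the Cogito model from the table (the prompt is illustrative):

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  // Any model ID from the table (or the endpoint reference) can be passed here.
  model: runpod('deepcogito/cogito-671b-v2.1-fp8'),
  prompt: 'Summarize the benefits of FP8 quantization in two sentences.',
});

console.log(text);
```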

### Chat Conversations

```ts
const { text } = await generateText({
  model: runpod('qwen/qwen3-32b-awq'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
});
```

### Tool Calling

```ts
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text, toolCalls } = await generateText({
  model: runpod('openai/gpt-oss-120b'),
  prompt: 'What is the weather like in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get weather information for a city',
      inputSchema: z.object({
        city: z.string().describe('The city name'),
      }),
      execute: async ({ city }) => {
        return `The weather in ${city} is sunny.`;
      },
    }),
  },
});
```

**Additional Returns:**

- `toolCalls` - Array of tool calls made by the model
- `toolResults` - Results from executed tools
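
A sketch of inspecting these values, reusing the `getWeather` tool from above (field access follows the AI SDK's tool-call objects):

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod';

// Same tool as above, assigned to a constant so it can be reused.
const getWeather = tool({
  description: 'Get weather information for a city',
  inputSchema: z.object({ city: z.string().describe('The city name') }),
  execute: async ({ city }) => `The weather in ${city} is sunny.`,
});

const { text, toolCalls, toolResults } = await generateText({
  model: runpod('openai/gpt-oss-120b'),
  prompt: 'What is the weather like in San Francisco?',
  tools: { getWeather },
});

// toolCalls records what the model invoked; toolResults holds what execute() returned.
for (const call of toolCalls) {
  console.log('tool invoked:', call.toolName);
}
console.log(toolResults);
console.log(text);
```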

### Structured output

Using `generateObject` to enforce structured output is not supported by the models listed above.

You can still return structured data by instructing the model to return JSON and validating it yourself.

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { generateText } from 'ai';
import { z } from 'zod';

const RecipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
  steps: z.array(z.string()),
});

const { text } = await generateText({
  model: runpod('qwen/qwen3-32b-awq'),
  messages: [
    {
      role: 'system',
      content:
        'return ONLY valid JSON matching { name: string; ingredients: string[]; steps: string[] }',
    },
    { role: 'user', content: 'generate a lasagna recipe.' },
  ],
  temperature: 0,
});

const parsed = JSON.parse(text);
const result = RecipeSchema.safeParse(parsed);

if (!result.success) {
  // handle invalid JSON shape
}

console.log(result.success ? result.data : parsed);
```

## Image Models

You can create Runpod image models using the `.imageModel()` factory method.

### Basic Usage

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { experimental_generateImage as generateImage } from 'ai';
import { writeFileSync } from 'fs';

const { image } = await generateImage({
  model: runpod.imageModel('qwen/qwen-image'),
  prompt: 'A serene mountain landscape at sunset',
  aspectRatio: '4:3',
});

// Save to filesystem
writeFileSync('landscape.jpg', image.uint8Array);
```

**Returns:**

- `image.uint8Array` - Binary image data (efficient for processing/saving)
- `image.base64` - Base64 encoded string (for web display)
- `image.mediaType` - MIME type ('image/jpeg' or 'image/png')
- `warnings` - Array of any warnings about unsupported parameters
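
As a sketch of the other returned fields, build a data URI for web display from `base64` and `mediaType`, and check `warnings`; the variable names here are illustrative:

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { experimental_generateImage as generateImage } from 'ai';

const { image, warnings } = await generateImage({
  model: runpod.imageModel('qwen/qwen-image'),
  prompt: 'A serene mountain landscape at sunset',
  aspectRatio: '4:3',
});

// Data URI usable directly in an <img src="..."> attribute.
const dataUri = `data:${image.mediaType};base64,${image.base64}`;
console.log(dataUri.slice(0, 64));

if (warnings.length > 0) {
  console.warn('Unsupported parameters:', warnings);
}
```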

### Model Capabilities

| Model ID | Description | Supported Aspect Ratios |
| -------------------------------------- | ------------------------------- | ------------------------------------- |
| `bytedance/seedream-3.0` | Advanced text-to-image model | 1:1, 4:3, 3:4 |
| `bytedance/seedream-4.0` | Text-to-image (v4) | 1:1 (supports 1024, 2048, 4096) |
| `bytedance/seedream-4.0-edit` | Image editing (v4, multi-image) | 1:1 (supports 1024, 1536, 2048, 4096) |
| `black-forest-labs/flux-1-schnell` | Fast image generation (4 steps) | 1:1, 4:3, 3:4 |
| `black-forest-labs/flux-1-dev` | High-quality image generation | 1:1, 4:3, 3:4 |
| `black-forest-labs/flux-1-kontext-dev` | Context-aware image generation | 1:1, 4:3, 3:4 |
| `qwen/qwen-image` | Text-to-image generation | 1:1, 4:3, 3:4 |
| `qwen/qwen-image-edit` | Image editing (prompt-guided) | 1:1, 4:3, 3:4 |
| `nano-banana-edit` | Image editing (multi-image) | 1:1, 4:3, 3:4 |
| `google/nano-banana-pro-edit` | Image editing (Gemini-powered) | Uses resolution param (1k, 2k) |
| `pruna/p-image-t2i` | Pruna text-to-image | 1:1, 16:9, 9:16, 4:3, 3:4, etc. |
| `pruna/p-image-edit` | Pruna image editing | match_input_image, 1:1, 16:9, etc. |

**Note**: The provider uses strict validation for image parameters. Unsupported aspect ratios (like `16:9`, `9:16`, `3:2`, `2:3`) will throw an `InvalidArgumentError` with a clear message about supported alternatives.
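
For example, a defensive sketch around an unsupported ratio for a Flux model; the note above only specifies that an `InvalidArgumentError` is thrown, so the handler just logs whatever error surfaces:

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { experimental_generateImage as generateImage } from 'ai';

try {
  await generateImage({
    model: runpod.imageModel('black-forest-labs/flux-1-dev'),
    prompt: 'A lighthouse at dawn',
    aspectRatio: '16:9', // not supported by this model; use 1:1, 4:3, or 3:4
  });
} catch (error) {
  // Expected: an InvalidArgumentError listing the supported alternatives.
  console.error(error);
}
```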

**Note:** This list is not complete. For a full list of all available models, see the [Runpod Public Endpoint Reference](https://docs.runpod.io/hub/public-endpoint-reference).

### Advanced Parameters

```ts
const { image } = await generateImage({
  model: runpod.imageModel('bytedance/seedream-3.0'),
  prompt: 'A sunset over mountains',
  size: '1328x1328',
  seed: 42,
  providerOptions: {
    runpod: {
      negative_prompt: 'blurry, low quality',
      enable_safety_checker: true,
    },
  },
});
```

#### Modify Image

Transform existing images using text prompts.

```ts
// Example: Transform existing image
const { image } = await generateImage({
  model: runpod.imageModel('black-forest-labs/flux-1-kontext-dev'),
  prompt: 'Transform this into a cyberpunk style with neon lights',
  aspectRatio: '1:1',
  providerOptions: {
    runpod: {
      image: 'https://example.com/input-image.jpg',
    },
  },
});

// Example: Using base64 encoded image
const { image } = await generateImage({
  model: runpod.imageModel('black-forest-labs/flux-1-kontext-dev'),
  prompt: 'Make this image look like a painting',
  providerOptions: {
    runpod: {
      image: 'data:image/png;base64,iVBORw0KGgoAAAANS...',
    },
  },
});
```

```ts
// Example: Combine multiple images using Nano Banana edit
const { image } = await generateImage({
  model: runpod.imageModel('nano-banana-edit'),
  prompt:
    'Combine these four images into a single realistic 3D character scene.',
  // Defaults to 1:1; you can also set size: '1328x1328' or aspectRatio: '4:3'
  providerOptions: {
    runpod: {
      images: [
        'https://image.runpod.ai/uploads/0bz_xzhuLq/a2166199-5bd5-496b-b9ab-a8bae3f73bdc.jpg',
        'https://image.runpod.ai/uploads/Yw86rhY6xi/2ff8435f-f416-4096-9a4d-2f8c838b2d53.jpg',
        'https://image.runpod.ai/uploads/bpCCX9zLY8/3bc27605-6f9a-40ad-83e9-c29bed45fed9.jpg',
        'https://image.runpod.ai/uploads/LPHEY6pyHp/f950ceb8-fafa-4800-bdf1-fd3fd684d843.jpg',
      ],
      enable_safety_checker: true,
    },
  },
});
```

Check out our [examples](https://github.com/runpod/examples/tree/main/ai-sdk/getting-started) for more code snippets on how to use all the different models.

### Advanced Configuration

```ts
// Full control over generation parameters
const { image } = await generateImage({
  model: runpod.imageModel('black-forest-labs/flux-1-dev'),
  prompt: 'A majestic dragon breathing fire in a medieval castle',
  size: '1328x1328',
  seed: 42, // For reproducible results
  providerOptions: {
    runpod: {
      negative_prompt: 'blurry, low quality, distorted, ugly, bad anatomy',
      enable_safety_checker: true,
      num_inference_steps: 50, // Higher quality (default: 28)
      guidance: 3.5, // Stronger prompt adherence (default: 2)
      output_format: 'png', // High quality format
      // Polling settings for long generations
      maxPollAttempts: 30,
      pollIntervalMillis: 4000,
    },
  },
});

// Fast generation with minimal steps
const { image } = await generateImage({
  model: runpod.imageModel('black-forest-labs/flux-1-schnell'),
  prompt: 'A simple red apple',
  aspectRatio: '1:1',
  providerOptions: {
    runpod: {
      num_inference_steps: 2, // Even faster (default: 4)
      guidance: 10, // Higher guidance for simple prompts
      output_format: 'jpg', // Smaller file size
    },
  },
});
```

### Provider Options

Runpod image models support flexible provider options through the `providerOptions.runpod` object:

| Option | Type | Default | Description |
| ------------------------ | ---------- | ------- | ------------------------------------------------------------------------ |
| `negative_prompt` | `string` | `""` | Text describing what you don't want in the image |
| `enable_safety_checker` | `boolean` | `true` | Enable content safety filtering |
| `disable_safety_checker` | `boolean` | `false` | Disable safety checker (Pruna models) |
| `image` | `string` | - | Single input image: URL or base64 data URI (Flux Kontext) |
| `images` | `string[]` | - | Multiple input images (e.g., for `nano-banana-edit` multi-image editing) |
| `aspect_ratio` | `string` | `"1:1"` | Aspect ratio string (Pruna: "16:9", "match_input_image", etc.) |
| `resolution` | `string` | `"1k"` | Output resolution (Nano Banana Pro: "1k", "2k") |
| `num_inference_steps` | `number` | Auto | Number of denoising steps (Flux: 4 for schnell, 28 for others) |
| `guidance` | `number` | Auto | Guidance scale for prompt adherence (Flux: 7 for schnell, 2 for others) |
| `output_format` | `string` | `"png"` | Output image format ("png", "jpg", or "jpeg") |
| `enable_base64_output` | `boolean` | `false` | Return base64 instead of URL (Nano Banana Pro) |
| `enable_sync_mode` | `boolean` | `false` | Enable synchronous mode (some models) |
| `maxPollAttempts` | `number` | `60` | Maximum polling attempts for async generation |
| `pollIntervalMillis` | `number` | `5000` | Polling interval in milliseconds (5 seconds) |
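
As a sketch combining a few of these options for the Pruna text-to-image model; the option names come from the table above, while the prompt is a placeholder:

```ts
import { runpod } from '@runpod/ai-sdk-provider';
import { experimental_generateImage as generateImage } from 'ai';

const { image } = await generateImage({
  model: runpod.imageModel('pruna/p-image-t2i'),
  prompt: 'A watercolor painting of a lighthouse at dawn',
  providerOptions: {
    runpod: {
      aspect_ratio: '16:9', // Pruna models accept wide ratios
      output_format: 'jpeg',
      disable_safety_checker: false, // keep the safety checker enabled
      maxPollAttempts: 30, // up to ~2.5 minutes at the default 5s interval
    },
  },
});

console.log(image.mediaType);
```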

## About Runpod

[Runpod](https://runpod.io) is the foundation for developers to build, deploy, and scale custom AI systems.

Beyond the public endpoints shown above (plus additional generative media APIs), Runpod offers private [serverless endpoints](https://docs.runpod.io/serverless/overview), [pods](https://docs.runpod.io/pods/overview), and [instant clusters](https://docs.runpod.io/instant-clusters), so you can train, fine-tune, or run any open-source or private model on your terms.