@ai-sdk/xai 3.0.30 → 3.0.32

@@ -0,0 +1,747 @@
1
+ ---
2
+ title: xAI Grok
3
+ description: Learn how to use xAI Grok.
4
+ ---
5
+
6
+ # xAI Grok Provider
7
+
8
+ The [xAI Grok](https://x.ai) provider contains language model support for the [xAI API](https://x.ai/api).
9
+
10
+ ## Setup
11
+
12
+ The xAI Grok provider is available via the `@ai-sdk/xai` module. You can
13
+ install it with:
14
+
15
+ <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
16
+ <Tab>
17
+ <Snippet text="pnpm add @ai-sdk/xai" dark />
18
+ </Tab>
19
+ <Tab>
20
+ <Snippet text="npm install @ai-sdk/xai" dark />
21
+ </Tab>
22
+ <Tab>
23
+ <Snippet text="yarn add @ai-sdk/xai" dark />
24
+ </Tab>
26
+ <Tab>
27
+ <Snippet text="bun add @ai-sdk/xai" dark />
28
+ </Tab>
29
+ </Tabs>
30
+
31
+ ## Provider Instance
32
+
33
+ You can import the default provider instance `xai` from `@ai-sdk/xai`:
34
+
35
+ ```ts
36
+ import { xai } from '@ai-sdk/xai';
37
+ ```
38
+
39
+ If you need a customized setup, you can import `createXai` from `@ai-sdk/xai`
40
+ and create a provider instance with your settings:
41
+
42
+ ```ts
43
+ import { createXai } from '@ai-sdk/xai';
44
+
45
+ const xai = createXai({
46
+ apiKey: 'your-api-key',
47
+ });
48
+ ```
49
+
50
+ You can use the following optional settings to customize the xAI provider instance:
51
+
52
+ - **baseURL** _string_
53
+
54
+ Use a different URL prefix for API calls, e.g. to use proxy servers.
55
+ The default prefix is `https://api.x.ai/v1`.
56
+
57
+ - **apiKey** _string_
58
+
59
+ API key that is sent using the `Authorization` header. It defaults to
60
+ the `XAI_API_KEY` environment variable.
61
+
62
+ - **headers** _Record&lt;string,string&gt;_
63
+
64
+ Custom headers to include in the requests.
65
+
66
+ - **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_
67
+
68
+ Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
69
+ Defaults to the global `fetch` function.
70
+ You can use it as a middleware to intercept requests,
71
+ or to provide a custom fetch implementation, e.g. for testing.
72
+
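As a sketch of the `fetch` option, the wrapper below (an illustrative helper, not part of `@ai-sdk/xai`) logs each request before delegating to an underlying implementation:

```typescript
// Illustrative logging wrapper for the `fetch` provider option.
// `withLogging` is a hypothetical helper, not part of @ai-sdk/xai.
type Fetch = typeof globalThis.fetch;

export function withLogging(base: Fetch, log: (msg: string) => void): Fetch {
  return async (input, init) => {
    // Log the request target before delegating to the underlying fetch.
    log(`xai request: ${String(input)}`);
    return base(input, init);
  };
}
```

You could then pass `fetch: withLogging(globalThis.fetch, console.log)` to `createXai` to intercept every API call.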
73
+ ## Language Models
74
+
75
+ You can create [xAI models](https://console.x.ai) using a provider instance. The
76
+ first argument is the model id, e.g. `grok-3`.
77
+
78
+ ```ts
79
+ const model = xai('grok-3');
80
+ ```
81
+
82
+ By default, `xai(modelId)` uses the Chat API. To use the Responses API with server-side agentic tools, explicitly use `xai.responses(modelId)`.
83
+
84
+ ### Example
85
+
86
+ You can use xAI language models to generate text with the `generateText` function:
87
+
88
+ ```ts
89
+ import { xai } from '@ai-sdk/xai';
90
+ import { generateText } from 'ai';
91
+
92
+ const { text } = await generateText({
93
+ model: xai('grok-3'),
94
+ prompt: 'Write a vegetarian lasagna recipe for 4 people.',
95
+ });
96
+ ```
97
+
98
+ xAI language models can also be used in the `streamText`, `generateObject`, and `streamObject` functions
99
+ (see [AI SDK Core](/docs/ai-sdk-core)).
100
+
101
+ ### Provider Options
102
+
103
+ xAI chat models support additional provider options that are not part of
104
+ the [standard call settings](/docs/ai-sdk-core/settings). You can pass them in the `providerOptions` argument:
105
+
106
+ ```ts
107
+ const model = xai('grok-3-mini');
108
+
109
+ await generateText({
110
+ model,
111
+ providerOptions: {
112
+ xai: {
113
+ reasoningEffort: 'high',
114
+ },
115
+ },
116
+ // ...
+ });
117
+ ```
118
+
119
+ The following optional provider options are available for xAI chat models:
120
+
121
+ - **reasoningEffort** _'low' | 'medium' | 'high'_
122
+
123
+ Reasoning effort for reasoning models.
124
+
125
+ - **store** _boolean_
126
+
127
+ Whether to store the generation. Defaults to `true`.
128
+
129
+ - **previousResponseId** _string_
130
+
131
+ The ID of the previous response. You can use it to continue a conversation. Defaults to `undefined`.
132
+
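As a minimal sketch of how `store` and `previousResponseId` fit together (the helper below is illustrative, not part of the SDK), the provider options for continuing a stored conversation could be assembled like this:

```typescript
// Illustrative helper: builds the `providerOptions` payload for
// continuing a stored conversation. Not part of @ai-sdk/xai.
export function continuationOptions(previousResponseId: string) {
  return {
    xai: {
      store: true, // keep the new generation so it can be continued later
      previousResponseId,
    },
  };
}
```

You would pass the result as the `providerOptions` argument of a follow-up call; obtaining the id of the previous response is left out of this sketch.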
133
+ ## Responses API (Agentic Tools)
134
+
135
+ You can use the xAI Responses API with the `xai.responses(modelId)` factory method for server-side agentic tool calling. This enables the model to autonomously orchestrate tool calls and research on xAI's servers.
136
+
137
+ ```ts
138
+ const model = xai.responses('grok-4-fast');
139
+ ```
140
+
141
+ The Responses API provides server-side tools that the model can autonomously execute during its reasoning process:
142
+
143
+ - **web_search**: Real-time web search and page browsing
144
+ - **x_search**: Search X (Twitter) posts, users, and threads
145
+ - **code_execution**: Execute Python code for calculations and data analysis
146
+ - **mcp_server**: Connect to remote MCP servers and use their tools
147
+
148
+ ### Vision
149
+
150
+ The Responses API supports image input with vision models:
151
+
152
+ ```ts
153
+ import { xai } from '@ai-sdk/xai';
154
+ import { generateText } from 'ai';
+ import fs from 'node:fs';
155
+
156
+ const { text } = await generateText({
157
+ model: xai.responses('grok-2-vision-1212'),
158
+ messages: [
159
+ {
160
+ role: 'user',
161
+ content: [
162
+ { type: 'text', text: 'What do you see in this image?' },
163
+ { type: 'image', image: fs.readFileSync('./image.png') },
164
+ ],
165
+ },
166
+ ],
167
+ });
168
+ ```
169
+
170
+ ### Web Search Tool
171
+
172
+ The web search tool enables autonomous web research with optional domain filtering and image understanding:
173
+
174
+ ```ts
175
+ import { xai } from '@ai-sdk/xai';
176
+ import { generateText } from 'ai';
177
+
178
+ const { text, sources } = await generateText({
179
+ model: xai.responses('grok-4-fast'),
180
+ prompt: 'What are the latest developments in AI?',
181
+ tools: {
182
+ web_search: xai.tools.webSearch({
183
+ allowedDomains: ['arxiv.org', 'openai.com'],
184
+ enableImageUnderstanding: true,
185
+ }),
186
+ },
187
+ });
188
+
189
+ console.log(text);
190
+ console.log('Citations:', sources);
191
+ ```
192
+
193
+ #### Web Search Parameters
194
+
195
+ - **allowedDomains** _string[]_
196
+
197
+ Only search within specified domains (max 5). Cannot be used with `excludedDomains`.
198
+
199
+ - **excludedDomains** _string[]_
200
+
201
+ Exclude specified domains from search (max 5). Cannot be used with `allowedDomains`.
202
+
203
+ - **enableImageUnderstanding** _boolean_
204
+
205
+ Enable the model to view and analyze images found during search. Increases token usage.
206
+
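The constraints above (mutual exclusivity, at most 5 domains per list) can be checked client-side before sending a request; the validator below is an illustrative sketch, not part of the SDK:

```typescript
// Illustrative client-side check for web search tool parameters,
// mirroring the documented constraints. Not part of @ai-sdk/xai.
interface WebSearchParams {
  allowedDomains?: string[];
  excludedDomains?: string[];
  enableImageUnderstanding?: boolean;
}

export function validateWebSearchParams(p: WebSearchParams): string[] {
  const errors: string[] = [];
  if (p.allowedDomains && p.excludedDomains) {
    errors.push('allowedDomains cannot be combined with excludedDomains');
  }
  if ((p.allowedDomains?.length ?? 0) > 5) {
    errors.push('allowedDomains supports at most 5 entries');
  }
  if ((p.excludedDomains?.length ?? 0) > 5) {
    errors.push('excludedDomains supports at most 5 entries');
  }
  return errors;
}
```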
207
+ ### X Search Tool
208
+
209
+ The X search tool enables searching X (Twitter) for posts, with filtering by handles and date ranges:
210
+
211
+ ```ts
212
+ const { text, sources } = await generateText({
213
+ model: xai.responses('grok-4-fast'),
214
+ prompt: 'What are people saying about AI on X this week?',
215
+ tools: {
216
+ x_search: xai.tools.xSearch({
217
+ allowedXHandles: ['elonmusk', 'xai'],
218
+ fromDate: '2025-10-23',
219
+ toDate: '2025-10-30',
220
+ enableImageUnderstanding: true,
221
+ enableVideoUnderstanding: true,
222
+ }),
223
+ },
224
+ });
225
+ ```
226
+
227
+ #### X Search Parameters
228
+
229
+ - **allowedXHandles** _string[]_
230
+
231
+ Only search posts from specified X handles (max 10). Cannot be used with `excludedXHandles`.
232
+
233
+ - **excludedXHandles** _string[]_
234
+
235
+ Exclude posts from specified X handles (max 10). Cannot be used with `allowedXHandles`.
236
+
237
+ - **fromDate** _string_
238
+
239
+ Start date for posts in ISO8601 format (`YYYY-MM-DD`).
240
+
241
+ - **toDate** _string_
242
+
243
+ End date for posts in ISO8601 format (`YYYY-MM-DD`).
244
+
245
+ - **enableImageUnderstanding** _boolean_
246
+
247
+ Enable the model to view and analyze images in X posts.
248
+
249
+ - **enableVideoUnderstanding** _boolean_
250
+
251
+ Enable the model to view and analyze videos in X posts.
252
+
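`fromDate` and `toDate` take calendar dates in `YYYY-MM-DD` form; a small illustrative check (not part of the SDK) for a well-formed, ordered range:

```typescript
// Illustrative check that a date range is in YYYY-MM-DD form and ordered.
// Not part of @ai-sdk/xai.
const ISO_DATE = /^\d{4}-\d{2}-\d{2}$/;

export function isValidDateRange(fromDate: string, toDate: string): boolean {
  if (!ISO_DATE.test(fromDate) || !ISO_DATE.test(toDate)) return false;
  // Lexicographic comparison works for zero-padded YYYY-MM-DD dates.
  return fromDate <= toDate;
}
```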
253
+ ### Code Execution Tool
254
+
255
+ The code execution tool enables the model to write and execute Python code for calculations and data analysis:
256
+
257
+ ```ts
258
+ const { text } = await generateText({
259
+ model: xai.responses('grok-4-fast'),
260
+ prompt:
261
+ 'Calculate the compound interest for $10,000 at 5% annually for 10 years',
262
+ tools: {
263
+ code_execution: xai.tools.codeExecution(),
264
+ },
265
+ });
266
+ ```
267
+
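For reference, the calculation requested in the prompt above is what the model's Python sandbox would perform; the equivalent in TypeScript:

```typescript
// Compound interest: A = P * (1 + r)^n, compounded annually.
export function compoundAmount(
  principal: number,
  rate: number,
  years: number,
): number {
  return principal * Math.pow(1 + rate, years);
}

// $10,000 at 5% annually for 10 years ≈ $16,288.95
const amount = compoundAmount(10_000, 0.05, 10);
```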
268
+ ### MCP Server Tool
269
+
270
+ The MCP server tool enables the model to connect to remote [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) servers and use their tools:
271
+
272
+ ```ts
273
+ const { text } = await generateText({
274
+ model: xai.responses('grok-4-fast'),
275
+ prompt: 'Use the weather tool to check conditions in San Francisco',
276
+ tools: {
277
+ weather_server: xai.tools.mcpServer({
278
+ serverUrl: 'https://example.com/mcp',
279
+ serverLabel: 'weather-service',
280
+ serverDescription: 'Weather data provider',
281
+ allowedTools: ['get_weather', 'get_forecast'],
282
+ }),
283
+ },
284
+ });
285
+ ```
286
+
287
+ #### MCP Server Parameters
288
+
289
+ - **serverUrl** _string_ (required)
290
+
291
+ The URL of the remote MCP server.
292
+
293
+ - **serverLabel** _string_ (required)
294
+
295
+ A label to identify the MCP server.
296
+
297
+ - **serverDescription** _string_
298
+
299
+ A description of what the MCP server provides.
300
+
301
+ - **allowedTools** _string[]_
302
+
303
+ List of tool names that the model is allowed to use from the MCP server. If not specified, all tools are allowed.
304
+
305
+ - **headers** _Record&lt;string, string&gt;_
306
+
307
+ Custom headers to include when connecting to the MCP server.
308
+
309
+ - **authorization** _string_
310
+
311
+ Authorization header value for authenticating with the MCP server (e.g., `'Bearer token123'`).
312
+
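As an illustrative sketch (the helper below is not part of the SDK), the parameters above can be assembled into the options object, including an `authorization` value in the `Bearer <token>` form shown:

```typescript
// Option shape mirroring the MCP server parameters documented above.
export interface McpServerOptions {
  serverUrl: string;
  serverLabel: string;
  serverDescription?: string;
  allowedTools?: string[];
  headers?: Record<string, string>;
  authorization?: string;
}

// Illustrative helper, not part of @ai-sdk/xai: wraps a raw token
// in the `Bearer <token>` form the `authorization` parameter expects.
export function withBearerAuth(
  opts: McpServerOptions,
  token: string,
): McpServerOptions {
  return { ...opts, authorization: `Bearer ${token}` };
}
```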
313
+ ### File Search Tool
314
+
315
+ xAI supports file search through OpenAI compatibility. You can use the OpenAI provider with xAI's base URL to search vector stores:
316
+
317
+ ```ts
318
+ import { createOpenAI } from '@ai-sdk/openai';
319
+ import { streamText } from 'ai';
320
+
321
+ const openai = createOpenAI({
322
+ baseURL: 'https://api.x.ai/v1',
323
+ apiKey: process.env.XAI_API_KEY,
324
+ });
325
+
326
+ const result = streamText({
327
+ model: openai('grok-4'),
328
+ prompt: 'What documents do you have access to?',
329
+ tools: {
330
+ file_search: openai.tools.fileSearch({
331
+ vectorStoreIds: ['your-vector-store-id'],
332
+ maxNumResults: 5,
333
+ }),
334
+ },
335
+ });
336
+ ```
337
+
338
+ <Note>
339
+ File search requires grok-4 family models. See the [OpenAI
340
+ provider](/providers/ai-sdk-providers/openai) documentation for additional
341
+ file search options like filters and ranking.
342
+ </Note>
343
+
344
+ ### Multiple Tools
345
+
346
+ You can combine multiple server-side tools for comprehensive research:
347
+
348
+ ```ts
349
+ import { xai } from '@ai-sdk/xai';
350
+ import { streamText } from 'ai';
351
+
352
+ const { fullStream } = streamText({
353
+ model: xai.responses('grok-4-fast'),
354
+ prompt: 'Research AI safety developments and calculate risk metrics',
355
+ tools: {
356
+ web_search: xai.tools.webSearch(),
357
+ x_search: xai.tools.xSearch(),
358
+ code_execution: xai.tools.codeExecution(),
359
+ data_service: xai.tools.mcpServer({
360
+ serverUrl: 'https://data.example.com/mcp',
361
+ serverLabel: 'data-service',
362
+ }),
363
+ },
364
+ });
365
+
366
+ for await (const part of fullStream) {
367
+ if (part.type === 'text-delta') {
368
+ process.stdout.write(part.text);
369
+ } else if (part.type === 'source' && part.sourceType === 'url') {
370
+ console.log('\nSource:', part.url);
371
+ }
372
+ }
373
+ ```
374
+
375
+ ### Provider Options
376
+
377
+ The Responses API supports the following provider options:
378
+
379
+ ```ts
380
+ import { xai } from '@ai-sdk/xai';
381
+ import { generateText } from 'ai';
382
+
383
+ const result = await generateText({
384
+ model: xai.responses('grok-4-fast'),
385
+ providerOptions: {
386
+ xai: {
387
+ reasoningEffort: 'high',
388
+ },
389
+ },
390
+ // ...
391
+ });
392
+ ```
393
+
394
+ The following provider options are available:
395
+
396
+ - **reasoningEffort** _'low' | 'high'_
397
+
398
+ Control the reasoning effort for the model. Higher effort may produce more thorough results at the cost of increased latency and token usage.
399
+
400
+ <Note>
401
+ The Responses API only supports server-side tools. You cannot mix server-side
402
+ tools with client-side function tools in the same request.
403
+ </Note>
404
+
405
+ ## Live Search
406
+
407
+ xAI models support Live Search functionality, allowing them to query real-time data from various sources and include it in responses with citations.
408
+
409
+ ### Basic Search
410
+
411
+ To enable search, specify `searchParameters` with a search mode:
412
+
413
+ ```ts
414
+ import { xai } from '@ai-sdk/xai';
415
+ import { generateText } from 'ai';
416
+
417
+ const { text, sources } = await generateText({
418
+ model: xai('grok-3-latest'),
419
+ prompt: 'What are the latest developments in AI?',
420
+ providerOptions: {
421
+ xai: {
422
+ searchParameters: {
423
+ mode: 'auto', // 'auto', 'on', or 'off'
424
+ returnCitations: true,
425
+ maxSearchResults: 5,
426
+ },
427
+ },
428
+ },
429
+ });
430
+
431
+ console.log(text);
432
+ console.log('Sources:', sources);
433
+ ```
434
+
435
+ ### Search Parameters
436
+
437
+ The following search parameters are available:
438
+
439
+ - **mode** _'auto' | 'on' | 'off'_
440
+
441
+ Search mode preference:
442
+
443
+ - `'auto'` (default): Model decides whether to search
444
+ - `'on'`: Always enables search
445
+ - `'off'`: Disables search completely
446
+
447
+ - **returnCitations** _boolean_
448
+
449
+ Whether to return citations in the response. Defaults to `true`.
450
+
451
+ - **fromDate** _string_
452
+
453
+ Start date for search data in ISO8601 format (`YYYY-MM-DD`).
454
+
455
+ - **toDate** _string_
456
+
457
+ End date for search data in ISO8601 format (`YYYY-MM-DD`).
458
+
459
+ - **maxSearchResults** _number_
460
+
461
+ Maximum number of search results to consider. Defaults to 20, max 50.
462
+
463
+ - **sources** _Array&lt;SearchSource&gt;_
464
+
465
+ Data sources to search from. Defaults to `["web", "x"]` if not specified.
466
+
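The defaults above can be made explicit; an illustrative normalizer (not part of the SDK) that fills in the documented defaults for a partial `searchParameters` object:

```typescript
// Illustrative normalizer applying the documented Live Search defaults.
// Not part of @ai-sdk/xai.
type SearchMode = 'auto' | 'on' | 'off';

interface SearchParameters {
  mode?: SearchMode;
  returnCitations?: boolean;
  fromDate?: string;
  toDate?: string;
  maxSearchResults?: number;
  sources?: Array<{ type: string }>;
}

export function withDefaults(p: SearchParameters) {
  return {
    mode: p.mode ?? 'auto', // model decides whether to search
    returnCitations: p.returnCitations ?? true,
    // Defaults to 20, capped at the documented maximum of 50.
    maxSearchResults: Math.min(p.maxSearchResults ?? 20, 50),
    sources: p.sources ?? [{ type: 'web' }, { type: 'x' }],
    ...(p.fromDate ? { fromDate: p.fromDate } : {}),
    ...(p.toDate ? { toDate: p.toDate } : {}),
  };
}
```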
467
+ ### Search Sources
468
+
469
+ You can specify different types of data sources for search:
470
+
471
+ #### Web Search
472
+
473
+ ```ts
474
+ const result = await generateText({
475
+ model: xai('grok-3-latest'),
476
+ prompt: 'Best ski resorts in Switzerland',
477
+ providerOptions: {
478
+ xai: {
479
+ searchParameters: {
480
+ mode: 'on',
481
+ sources: [
482
+ {
483
+ type: 'web',
484
+ country: 'CH', // ISO alpha-2 country code
485
+ allowedWebsites: ['ski.com', 'snow-forecast.com'],
486
+ safeSearch: true,
487
+ },
488
+ ],
489
+ },
490
+ },
491
+ },
492
+ });
493
+ ```
494
+
495
+ #### Web source parameters
496
+
497
+ - **country** _string_: ISO alpha-2 country code
498
+ - **allowedWebsites** _string[]_: Max 5 allowed websites
499
+ - **excludedWebsites** _string[]_: Max 5 excluded websites
500
+ - **safeSearch** _boolean_: Enable safe search (default: true)
501
+
502
+ #### X (Twitter) Search
503
+
504
+ ```ts
505
+ const result = await generateText({
506
+ model: xai('grok-3-latest'),
507
+ prompt: 'Latest updates on Grok AI',
508
+ providerOptions: {
509
+ xai: {
510
+ searchParameters: {
511
+ mode: 'on',
512
+ sources: [
513
+ {
514
+ type: 'x',
515
+ includedXHandles: ['grok', 'xai'],
516
+ excludedXHandles: ['openai'],
517
+ postFavoriteCount: 10,
518
+ postViewCount: 100,
519
+ },
520
+ ],
521
+ },
522
+ },
523
+ },
524
+ });
525
+ ```
526
+
527
+ #### X source parameters
528
+
529
+ - **includedXHandles** _string[]_: Array of X handles to search (without @ symbol)
530
+ - **excludedXHandles** _string[]_: Array of X handles to exclude from search (without @ symbol)
531
+ - **postFavoriteCount** _number_: Minimum favorite count of the X posts to consider.
532
+ - **postViewCount** _number_: Minimum view count of the X posts to consider.
533
+
534
+ #### News Search
535
+
536
+ ```ts
537
+ const result = await generateText({
538
+ model: xai('grok-3-latest'),
539
+ prompt: 'Recent tech industry news',
540
+ providerOptions: {
541
+ xai: {
542
+ searchParameters: {
543
+ mode: 'on',
544
+ sources: [
545
+ {
546
+ type: 'news',
547
+ country: 'US',
548
+ excludedWebsites: ['tabloid.com'],
549
+ safeSearch: true,
550
+ },
551
+ ],
552
+ },
553
+ },
554
+ },
555
+ });
556
+ ```
557
+
558
+ #### News source parameters
559
+
560
+ - **country** _string_: ISO alpha-2 country code
561
+ - **excludedWebsites** _string[]_: Max 5 excluded websites
562
+ - **safeSearch** _boolean_: Enable safe search (default: true)
563
+
564
+ #### RSS Feed Search
565
+
566
+ ```ts
567
+ const result = await generateText({
568
+ model: xai('grok-3-latest'),
569
+ prompt: 'Latest status updates',
570
+ providerOptions: {
571
+ xai: {
572
+ searchParameters: {
573
+ mode: 'on',
574
+ sources: [
575
+ {
576
+ type: 'rss',
577
+ links: ['https://status.x.ai/feed.xml'],
578
+ },
579
+ ],
580
+ },
581
+ },
582
+ },
583
+ });
584
+ ```
585
+
586
+ #### RSS source parameters
587
+
588
+ - **links** _string[]_: Array of RSS feed URLs (max 1 currently supported)
589
+
590
+ ### Multiple Sources
591
+
592
+ You can combine multiple data sources in a single search:
593
+
594
+ ```ts
595
+ const result = await generateText({
596
+ model: xai('grok-3-latest'),
597
+ prompt: 'Comprehensive overview of recent AI breakthroughs',
598
+ providerOptions: {
599
+ xai: {
600
+ searchParameters: {
601
+ mode: 'on',
602
+ returnCitations: true,
603
+ maxSearchResults: 15,
604
+ sources: [
605
+ {
606
+ type: 'web',
607
+ allowedWebsites: ['arxiv.org', 'openai.com'],
608
+ },
609
+ {
610
+ type: 'news',
611
+ country: 'US',
612
+ },
613
+ {
614
+ type: 'x',
615
+ includedXHandles: ['openai', 'deepmind'],
616
+ },
617
+ ],
618
+ },
619
+ },
620
+ },
621
+ });
622
+ ```
623
+
624
+ ### Sources and Citations
625
+
626
+ When search is enabled with `returnCitations: true`, the response includes sources that were used to generate the answer:
627
+
628
+ ```ts
629
+ const { text, sources } = await generateText({
630
+ model: xai('grok-3-latest'),
631
+ prompt: 'What are the latest developments in AI?',
632
+ providerOptions: {
633
+ xai: {
634
+ searchParameters: {
635
+ mode: 'auto',
636
+ returnCitations: true,
637
+ },
638
+ },
639
+ },
640
+ });
641
+
642
+ // Access the sources used
643
+ for (const source of sources) {
644
+ if (source.sourceType === 'url') {
645
+ console.log('Source:', source.url);
646
+ }
647
+ }
648
+ ```
649
+
650
+ ### Streaming with Search
651
+
652
+ Live Search works with streaming responses. Citations are included when the stream completes:
653
+
654
+ ```ts
655
+ import { streamText } from 'ai';
656
+
657
+ const result = streamText({
658
+ model: xai('grok-3-latest'),
659
+ prompt: 'What has happened in tech recently?',
660
+ providerOptions: {
661
+ xai: {
662
+ searchParameters: {
663
+ mode: 'auto',
664
+ returnCitations: true,
665
+ },
666
+ },
667
+ },
668
+ });
669
+
670
+ for await (const textPart of result.textStream) {
671
+ process.stdout.write(textPart);
672
+ }
673
+
674
+ console.log('Sources:', await result.sources);
675
+ ```
676
+
677
+ ## Model Capabilities
678
+
679
+ | Model | Image Input | Object Generation | Tool Usage | Tool Streaming | Reasoning |
680
+ | --------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
681
+ | `grok-4-fast-non-reasoning` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
682
+ | `grok-4-fast-reasoning` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
683
+ | `grok-code-fast-1` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
684
+ | `grok-4` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
685
+ | `grok-3` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
686
+ | `grok-3-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
687
+ | `grok-3-fast` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
688
+ | `grok-3-fast-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
689
+ | `grok-3-mini` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
690
+ | `grok-3-mini-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
691
+ | `grok-3-mini-fast` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
692
+ | `grok-3-mini-fast-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
693
+ | `grok-2` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
694
+ | `grok-2-latest` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
695
+ | `grok-2-1212` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
696
+ | `grok-2-vision` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
697
+ | `grok-2-vision-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
698
+ | `grok-2-vision-1212` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
699
+ | `grok-beta` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
700
+ | `grok-vision-beta` | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
701
+
702
+ <Note>
703
+ The table above lists popular models. Please see the [xAI
704
+ docs](https://docs.x.ai/docs#models) for a full list of available models. You
705
+ can also pass any available provider model ID as a string if needed.
706
+ </Note>
707
+
708
+ ## Image Models
709
+
710
+ You can create xAI image models using the `.image()` factory method. For more on image generation with the AI SDK, see [generateImage()](/docs/reference/ai-sdk-core/generate-image).
711
+
712
+ ```ts
713
+ import { xai } from '@ai-sdk/xai';
714
+ import { generateImage } from 'ai';
715
+
716
+ const { image } = await generateImage({
717
+ model: xai.image('grok-2-image'),
718
+ prompt: 'A futuristic cityscape at sunset',
719
+ });
720
+ ```
721
+
722
+ <Note>
723
+ The xAI image model does not currently support the `aspectRatio` or `size`
724
+ parameters. Image size defaults to 1024x768.
725
+ </Note>
726
+
727
+ ### Model-specific options
728
+
729
+ You can customize the image generation behavior with model-specific settings:
730
+
731
+ ```ts
732
+ import { xai } from '@ai-sdk/xai';
733
+ import { generateImage } from 'ai';
734
+
735
+ const { images } = await generateImage({
736
+ model: xai.image('grok-2-image'),
737
+ prompt: 'A futuristic cityscape at sunset',
738
+ maxImagesPerCall: 5, // Default is 10
739
+ n: 2, // Generate 2 images
740
+ });
741
+ ```
742
+
743
+ ### Model Capabilities
744
+
745
+ | Model | Sizes | Notes |
746
+ | -------------- | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
747
+ | `grok-2-image` | 1024x768 (default) | xAI's text-to-image generation model, designed to create high-quality images from text prompts. It's trained on a diverse dataset and can generate images across various styles, subjects, and settings. |