@ai-sdk/amazon-bedrock 4.0.25 → 4.0.27

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,1453 @@
1
+ ---
2
+ title: Amazon Bedrock
3
+ description: Learn how to use the Amazon Bedrock provider.
4
+ ---
5
+
6
+ # Amazon Bedrock Provider
7
+
8
+ The Amazon Bedrock provider for the [AI SDK](/docs) contains language model support for the [Amazon Bedrock](https://aws.amazon.com/bedrock) APIs.
9
+
10
+ ## Setup
11
+
12
+ The Bedrock provider is available in the `@ai-sdk/amazon-bedrock` module. You can install it with:
13
+
14
+ <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
15
+ <Tab>
16
+ <Snippet text="pnpm add @ai-sdk/amazon-bedrock" dark />
17
+ </Tab>
18
+ <Tab>
19
+ <Snippet text="npm install @ai-sdk/amazon-bedrock" dark />
20
+ </Tab>
21
+ <Tab>
22
+ <Snippet text="yarn add @ai-sdk/amazon-bedrock" dark />
23
+ </Tab>
24
+
25
+ <Tab>
26
+ <Snippet text="bun add @ai-sdk/amazon-bedrock" dark />
27
+ </Tab>
28
+ </Tabs>
29
+
30
+ ### Prerequisites
31
+
32
+ Access to Amazon Bedrock foundation models isn't granted by default. To gain access to a foundation model, an IAM user with sufficient permissions needs to request it through the console. Once access is granted, the model is available to all users in the account.
33
+
34
+ See the [Model Access Docs](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) for more information.
35
+
36
+ ### Authentication
37
+
38
+ #### Using IAM Access Key and Secret Key
39
+
40
+ **Step 1: Creating AWS Access Key and Secret Key**
41
+
42
+ To get started, you'll need to create an AWS access key and secret key. Here's how:
43
+
44
+ **Login to AWS Management Console**
45
+
46
+ - Go to the [AWS Management Console](https://console.aws.amazon.com/) and log in with your AWS account credentials.
47
+
48
+ **Create an IAM User**
49
+
50
+ - Navigate to the [IAM dashboard](https://console.aws.amazon.com/iam/home) and click on "Users" in the left-hand navigation menu.
51
+ - Click on "Create user" and fill in the required details to create a new IAM user.
52
+ - Make sure to select "Programmatic access" as the access type.
53
+ - The user account needs the `AmazonBedrockFullAccess` policy attached to it.
54
+
55
+ **Create Access Key**
56
+
57
+ - Click on the "Security credentials" tab and then click on "Create access key".
58
+ - Click "Create access key" to generate a new access key pair.
59
+ - Download the `.csv` file containing the access key ID and secret access key.
60
+
61
+ **Step 2: Configuring the Access Key and Secret Key**
62
+
63
+ Within your project, add a `.env` file if you don't already have one. This file will be used to set the access key and secret key as environment variables. Add the following lines to the `.env` file:
64
+
65
+ ```bash
66
+ AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
67
+ AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
68
+ AWS_REGION=YOUR_REGION
69
+ ```
70
+
71
+ <Note>
72
+ Many frameworks such as [Next.js](https://nextjs.org/) load the `.env` file
73
+ automatically. If you're using a different framework, you may need to load the
74
+ `.env` file manually using a package like
75
+ [`dotenv`](https://github.com/motdotla/dotenv).
76
+ </Note>
77
+
78
+ Remember to replace `YOUR_ACCESS_KEY_ID`, `YOUR_SECRET_ACCESS_KEY`, and `YOUR_REGION` with the actual values from your AWS account.
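If any of these variables is missing, the provider only fails later with an authentication error at request time. A small validation helper can surface the problem at startup (a minimal sketch; `readBedrockEnv` is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper, not part of @ai-sdk/amazon-bedrock:
// fail fast when the credential variables the provider falls back to are missing.
type BedrockEnv = {
  accessKeyId: string;
  secretAccessKey: string;
  region: string;
};

function readBedrockEnv(env: Record<string, string | undefined>): BedrockEnv {
  const required = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_REGION'];
  const missing = required.filter(name => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing Bedrock environment variables: ${missing.join(', ')}`);
  }
  return {
    accessKeyId: env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY!,
    region: env.AWS_REGION!,
  };
}
```

Call it with `process.env` before creating the provider instance so misconfiguration is reported immediately.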
79
+
80
+ #### Using AWS SDK Credentials Chain (instance profiles, instance roles, ECS roles, EKS Service Accounts, etc.)
81
+
82
+ When using the AWS SDK, it automatically walks the credentials chain to determine which credentials to use. This includes instance profiles, instance roles, ECS roles, EKS Service Accounts, and more. You can achieve similar behavior with the AI SDK by omitting the `accessKeyId`, `secretAccessKey`, and `sessionToken` properties from the provider settings and passing a `credentialProvider` property instead.
83
+
84
+ _Usage:_
85
+
86
+ The `@aws-sdk/credential-providers` package provides a set of credential providers that can be used to build a credential provider chain.
87
+
88
+ <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
89
+ <Tab>
90
+ <Snippet text="pnpm add @aws-sdk/credential-providers" dark />
91
+ </Tab>
92
+ <Tab>
93
+ <Snippet text="npm install @aws-sdk/credential-providers" dark />
94
+ </Tab>
95
+ <Tab>
96
+ <Snippet text="yarn add @aws-sdk/credential-providers" dark />
97
+ </Tab>
98
+
99
+ <Tab>
100
+ <Snippet text="bun add @aws-sdk/credential-providers" dark />
101
+ </Tab>
102
+ </Tabs>
103
+
104
+ ```ts
105
+ import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
106
+ import { fromNodeProviderChain } from '@aws-sdk/credential-providers';
107
+
108
+ const bedrock = createAmazonBedrock({
109
+ region: 'us-east-1',
110
+ credentialProvider: fromNodeProviderChain(),
111
+ });
112
+ ```
113
+
114
+ ## Provider Instance
115
+
116
+ You can import the default provider instance `bedrock` from `@ai-sdk/amazon-bedrock`:
117
+
118
+ ```ts
119
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
120
+ ```
121
+
122
+ If you need a customized setup, you can import `createAmazonBedrock` from `@ai-sdk/amazon-bedrock` and create a provider instance with your settings:
123
+
124
+ ```ts
125
+ import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
126
+
127
+ const bedrock = createAmazonBedrock({
128
+ region: 'us-east-1',
129
+ accessKeyId: 'xxxxxxxxx',
130
+ secretAccessKey: 'xxxxxxxxx',
131
+ sessionToken: 'xxxxxxxxx',
132
+ });
133
+ ```
134
+
135
+ <Note>
136
+ The credentials settings fall back to environment variable defaults described
137
+ below. These may be set by your serverless environment without your awareness,
138
+ which can lead to merged/conflicting credential values and provider errors
139
+ around failed authentication. If you're experiencing issues be sure you are
140
+ explicitly specifying all settings (even if `undefined`) to avoid any
141
+ defaults.
142
+ </Note>
143
+
144
+ You can use the following optional settings to customize the Amazon Bedrock provider instance:
145
+
146
+ - **region** _string_
147
+
148
+ The AWS region that you want to use for the API calls.
149
+ It uses the `AWS_REGION` environment variable by default.
150
+
151
+ - **accessKeyId** _string_
152
+
153
+ The AWS access key ID that you want to use for the API calls.
154
+ It uses the `AWS_ACCESS_KEY_ID` environment variable by default.
155
+
156
+ - **secretAccessKey** _string_
157
+
158
+ The AWS secret access key that you want to use for the API calls.
159
+ It uses the `AWS_SECRET_ACCESS_KEY` environment variable by default.
160
+
161
+ - **sessionToken** _string_
162
+
163
+ Optional. The AWS session token that you want to use for the API calls.
164
+ It uses the `AWS_SESSION_TOKEN` environment variable by default.
165
+
166
+ - **credentialProvider** _() =&gt; Promise&lt;&#123; accessKeyId: string; secretAccessKey: string; sessionToken?: string; &#125;&gt;_
167
+
168
+ Optional. The AWS credential provider chain that you want to use for the API calls.
169
+ It uses the specified credentials by default.
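Any async function with that signature can serve as a `credentialProvider`. A minimal static provider illustrates the shape (illustrative only; in real deployments prefer `fromNodeProviderChain` from `@aws-sdk/credential-providers` so credential rotation keeps working):

```typescript
type AwsCredentials = {
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken?: string;
};

// Illustrative factory matching the `credentialProvider` signature.
// A real provider would fetch and refresh credentials dynamically.
function staticCredentialProvider(
  credentials: AwsCredentials,
): () => Promise<AwsCredentials> {
  return async () => ({ ...credentials });
}
```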
170
+
171
+ ## Language Models
172
+
173
+ You can create models that call the Bedrock API using the provider instance.
174
+ The first argument is the model id, e.g. `meta.llama3-70b-instruct-v1:0`.
175
+
176
+ ```ts
177
+ const model = bedrock('meta.llama3-70b-instruct-v1:0');
178
+ ```
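Bedrock model IDs follow a fairly regular `vendor.model-name-vX:Y` pattern, optionally prefixed with a cross-region inference identifier such as `us.`, `eu.`, or `apac.`. A small parser (hypothetical helper, not part of the SDK) makes the structure explicit:

```typescript
// Hypothetical helper, not part of @ai-sdk/amazon-bedrock:
// split a Bedrock model ID into its parts.
type ParsedModelId = {
  regionPrefix?: string; // cross-region inference prefix, e.g. 'us', 'eu', 'apac'
  vendor: string; // e.g. 'anthropic', 'meta', 'amazon'
  model: string; // e.g. 'claude-3-haiku-20240307-v1:0'
};

function parseBedrockModelId(modelId: string): ParsedModelId {
  const parts = modelId.split('.');
  const regionPrefixes = ['us', 'eu', 'apac'];
  const regionPrefix = regionPrefixes.includes(parts[0]) ? parts.shift() : undefined;
  const vendor = parts.shift();
  if (!vendor || parts.length === 0) {
    throw new Error(`Unexpected model id format: ${modelId}`);
  }
  return { regionPrefix, vendor, model: parts.join('.') };
}
```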
179
+
180
+ Amazon Bedrock models also support some model-specific provider options that are not part of the [standard call settings](/docs/ai-sdk-core/settings).
181
+ You can pass them in the `providerOptions` argument:
182
+
183
+ ```ts
184
+ const model = bedrock('anthropic.claude-3-sonnet-20240229-v1:0');
185
+
186
+ await generateText({
187
+ model,
188
+ providerOptions: {
189
+ anthropic: {
190
+ additionalModelRequestFields: { top_k: 350 },
191
+ },
192
+ },
193
+ });
194
+ ```
195
+
196
+ Documentation for additional settings based on the selected model can be found within the [Amazon Bedrock Inference Parameter Documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html).
197
+
198
+ You can use Amazon Bedrock language models to generate text with the `generateText` function:
199
+
200
+ ```ts
201
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
202
+ import { generateText } from 'ai';
203
+
204
+ const { text } = await generateText({
205
+ model: bedrock('meta.llama3-70b-instruct-v1:0'),
206
+ prompt: 'Write a vegetarian lasagna recipe for 4 people.',
207
+ });
208
+ ```
209
+
210
+ Amazon Bedrock language models can also be used in the `streamText` function
211
+ (see [AI SDK Core](/docs/ai-sdk-core)).
212
+
213
+ ### File Inputs
214
+
215
+ <Note type="warning">
216
+ Amazon Bedrock supports file inputs in combination with specific models, e.g.
217
+ `anthropic.claude-3-haiku-20240307-v1:0`.
218
+ </Note>
219
+
220
+ The Amazon Bedrock provider supports file inputs, e.g. PDF files.
221
+
222
+ ```ts
223
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
224
+ import { generateText } from 'ai';
+ import { readFileSync } from 'node:fs';
225
+
226
+ const result = await generateText({
227
+ model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
228
+ messages: [
229
+ {
230
+ role: 'user',
231
+ content: [
232
+ { type: 'text', text: 'Describe the pdf in detail.' },
233
+ {
234
+ type: 'file',
235
+ data: readFileSync('./data/ai.pdf'),
236
+ mediaType: 'application/pdf',
237
+ },
238
+ ],
239
+ },
240
+ ],
241
+ });
242
+ ```
243
+
244
+ ### Guardrails
245
+
246
+ You can use the `bedrock` provider options to utilize [Amazon Bedrock Guardrails](https://aws.amazon.com/bedrock/guardrails/):
247
+
248
+ ```ts
249
+ const result = await generateText({
250
+ model: bedrock('anthropic.claude-3-sonnet-20240229-v1:0'),
251
+ prompt: 'Write a story about space exploration.',
252
+ providerOptions: {
253
+ bedrock: {
254
+ guardrailConfig: {
255
+ guardrailIdentifier: '1abcd2ef34gh',
256
+ guardrailVersion: '1',
257
+ trace: 'enabled' as const,
258
+ streamProcessingMode: 'async',
259
+ },
260
+ },
261
+ },
262
+ });
263
+ ```
264
+
265
+ Tracing information will be returned in the provider metadata if you have tracing enabled.
266
+
267
+ ```ts
268
+ if (result.providerMetadata?.bedrock?.trace) {
269
+ // ...
270
+ }
271
+ ```
272
+
273
+ See the [Amazon Bedrock Guardrails documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html) for more information.
274
+
275
+ ### Citations
276
+
277
+ Amazon Bedrock supports citations for document-based inputs across compatible models. When enabled:
278
+
279
+ - Some models can read documents with visual understanding, rather than just extracting text
280
+ - Models can cite specific parts of documents you provide, making it easier to trace information back to its source (Not Supported Yet)
281
+
282
+ ```ts
283
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
284
+ import { generateObject } from 'ai';
285
+ import { z } from 'zod';
286
+ import fs from 'fs';
287
+
288
+ const result = await generateObject({
289
+ model: bedrock('apac.anthropic.claude-sonnet-4-20250514-v1:0'),
290
+ schema: z.object({
291
+ summary: z.string().describe('Summary of the PDF document'),
292
+ keyPoints: z.array(z.string()).describe('Key points from the PDF'),
293
+ }),
294
+ messages: [
295
+ {
296
+ role: 'user',
297
+ content: [
298
+ {
299
+ type: 'text',
300
+ text: 'Summarize this PDF and provide key points.',
301
+ },
302
+ {
303
+ type: 'file',
304
+ data: fs.readFileSync('./document.pdf'),
305
+ mediaType: 'application/pdf',
306
+ providerOptions: {
307
+ bedrock: {
308
+ citations: { enabled: true },
309
+ },
310
+ },
311
+ },
312
+ ],
313
+ },
314
+ ],
315
+ });
316
+
317
+ console.log('Response:', result.object);
318
+ ```
319
+
320
+ ### Cache Points
321
+
322
+ <Note>
323
+ Amazon Bedrock prompt caching is currently in preview release. To request
324
+ access, visit the [Amazon Bedrock prompt caching
325
+ page](https://aws.amazon.com/bedrock/prompt-caching/).
326
+ </Note>
327
+
328
+ In messages, you can use the `providerOptions` property to set cache points. Set the `bedrock` property in the `providerOptions` object to `{ cachePoint: { type: 'default' } }` to create a cache point.
329
+
330
+ Cache usage information is returned in the `providerMetadata` object. See the examples below.
331
+
332
+ <Note>
333
+ Cache points have model-specific token minimums and limits. For example,
334
+ Claude 3.5 Sonnet v2 requires at least 1,024 tokens for a cache point and
335
+ allows up to 4 cache points. See the [Amazon Bedrock prompt caching
336
+ documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html)
337
+ for details on supported models, regions, and limits.
338
+ </Note>
339
+
340
+ ```ts
341
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
342
+ import { generateText } from 'ai';
343
+
344
+ const cyberpunkAnalysis =
345
+ '... literary analysis of cyberpunk themes and concepts ...';
346
+
347
+ const result = await generateText({
348
+ model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
349
+ messages: [
350
+ {
351
+ role: 'system',
352
+ content: `You are an expert on William Gibson's cyberpunk literature and themes. You have access to the following academic analysis: ${cyberpunkAnalysis}`,
353
+ providerOptions: {
354
+ bedrock: { cachePoint: { type: 'default' } },
355
+ },
356
+ },
357
+ {
358
+ role: 'user',
359
+ content:
360
+ 'What are the key cyberpunk themes that Gibson explores in Neuromancer?',
361
+ },
362
+ ],
363
+ });
364
+
365
+ console.log(result.text);
366
+ console.log(result.providerMetadata?.bedrock?.usage);
367
+ // Shows cache read/write token usage, e.g.:
368
+ // {
369
+ // cacheReadInputTokens: 1337,
370
+ // cacheWriteInputTokens: 42,
371
+ // }
372
+ ```
373
+
374
+ Cache points also work with streaming responses:
375
+
376
+ ```ts
377
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
378
+ import { streamText } from 'ai';
379
+
380
+ const cyberpunkAnalysis =
381
+ '... literary analysis of cyberpunk themes and concepts ...';
382
+
383
+ const result = streamText({
384
+ model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
385
+ messages: [
386
+ {
387
+ role: 'assistant',
388
+ content: [
389
+ { type: 'text', text: 'You are an expert on cyberpunk literature.' },
390
+ { type: 'text', text: `Academic analysis: ${cyberpunkAnalysis}` },
391
+ ],
392
+ providerOptions: { bedrock: { cachePoint: { type: 'default' } } },
393
+ },
394
+ {
395
+ role: 'user',
396
+ content:
397
+ 'How does Gibson explore the relationship between humanity and technology?',
398
+ },
399
+ ],
400
+ });
401
+
402
+ for await (const textPart of result.textStream) {
403
+ process.stdout.write(textPart);
404
+ }
405
+
406
+ console.log(
407
+ 'Cache token usage:',
408
+ (await result.providerMetadata)?.bedrock?.usage,
409
+ );
410
+ // Shows cache read/write token usage, e.g.:
411
+ // {
412
+ // cacheReadInputTokens: 1337,
413
+ // cacheWriteInputTokens: 42,
414
+ // }
415
+ ```
416
+
417
+ ## Reasoning
418
+
419
+ Amazon Bedrock supports model creator-specific reasoning features:
420
+
421
+ - Anthropic (e.g. `claude-3-7-sonnet-20250219`): enable via the `reasoningConfig` provider option by specifying a thinking budget in tokens (minimum: `1024`, maximum: `64000`).
422
+ - Amazon (e.g. `us.amazon.nova-2-lite-v1:0`): enable via the `reasoningConfig` provider option by specifying a maximum reasoning effort level (`'low' | 'medium' | 'high'`).
423
+
424
+ ```ts
425
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
426
+ import { generateText } from 'ai';
427
+
428
+ // Anthropic example
429
+ const anthropicResult = await generateText({
430
+ model: bedrock('us.anthropic.claude-3-7-sonnet-20250219-v1:0'),
431
+ prompt: 'How many people will live in the world in 2040?',
432
+ providerOptions: {
433
+ bedrock: {
434
+ reasoningConfig: { type: 'enabled', budgetTokens: 1024 },
435
+ },
436
+ },
437
+ });
438
+
439
+ console.log(anthropicResult.reasoningText); // reasoning text
440
+ console.log(anthropicResult.text); // text response
441
+
442
+ // Nova 2 example
443
+ const amazonResult = await generateText({
444
+ model: bedrock('us.amazon.nova-2-lite-v1:0'),
445
+ prompt: 'How many people will live in the world in 2040?',
446
+ providerOptions: {
447
+ bedrock: {
448
+ reasoningConfig: { type: 'enabled', maxReasoningEffort: 'medium' },
449
+ },
450
+ },
451
+ });
452
+
453
+ console.log(amazonResult.reasoningText); // reasoning text
454
+ console.log(amazonResult.text); // text response
455
+ ```
456
+
457
+ See [AI SDK UI: Chatbot](/docs/ai-sdk-ui/chatbot#reasoning) for more details
458
+ on how to integrate reasoning into your chatbot.
459
+
460
+ ## Extended Context Window
461
+
462
+ Claude Sonnet 4 models on Amazon Bedrock support an extended context window of up to 1 million tokens when using the `context-1m-2025-08-07` beta feature.
463
+
464
+ ```ts
465
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
466
+ import { generateText } from 'ai';
467
+
468
+ const result = await generateText({
469
+ model: bedrock('us.anthropic.claude-sonnet-4-20250514-v1:0'),
470
+ prompt: 'analyze this large document...',
471
+ providerOptions: {
472
+ bedrock: {
473
+ anthropicBeta: ['context-1m-2025-08-07'],
474
+ },
475
+ },
476
+ });
477
+ ```
478
+
479
+ ## Computer Use
480
+
481
+ Via Anthropic, Amazon Bedrock provides three provider-defined tools that can be used to interact with external systems:
482
+
483
+ 1. **Bash Tool**: Allows running bash commands.
484
+ 2. **Text Editor Tool**: Provides functionality for viewing and editing text files.
485
+ 3. **Computer Tool**: Enables control of keyboard and mouse actions on a computer.
486
+
487
+ They are available via the `tools` property of the provider instance.
488
+
489
+ ### Bash Tool
490
+
491
+ The Bash Tool allows running bash commands. Here's how to create and use it:
492
+
493
+ ```ts
494
+ import { anthropic } from '@ai-sdk/anthropic';
+
+ const bashTool = anthropic.tools.bash_20241022({
495
+ execute: async ({ command, restart }) => {
496
+ // Implement your bash command execution logic here
497
+ // Return the result of the command execution
498
+ },
499
+ });
500
+ ```
501
+
502
+ Parameters:
503
+
504
+ - `command` (string): The bash command to run. Required unless the tool is being restarted.
505
+ - `restart` (boolean, optional): Specifying true will restart this tool.
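What `execute` does is entirely up to you. A minimal sketch for a trusted Node.js environment, assuming `node:child_process` is available (`runBash` is a hypothetical helper; never run model-chosen commands without sandboxing or an allow-list):

```typescript
import { execSync } from 'node:child_process';

// Hypothetical helper: run a shell command and return its output.
// Dangerous with untrusted input — sandbox or allow-list in real use.
function runBash(command: string): string {
  return execSync(command, { encoding: 'utf8' });
}

// An execute implementation matching the bash tool's parameters.
const bashExecute = async ({
  command,
  restart,
}: {
  command?: string;
  restart?: boolean;
}): Promise<string> => {
  if (restart) return 'tool restarted';
  if (!command) return 'no command provided';
  try {
    return runBash(command);
  } catch (error) {
    return `command failed: ${(error as Error).message}`;
  }
};
```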
506
+
507
+ ### Text Editor Tool
508
+
509
+ The Text Editor Tool provides functionality for viewing and editing text files.
510
+
511
+ **For Claude 4 models (Opus & Sonnet):**
512
+
513
+ ```ts
514
+ const textEditorTool = anthropic.tools.textEditor_20250429({
515
+ execute: async ({
516
+ command,
517
+ path,
518
+ file_text,
519
+ insert_line,
520
+ new_str,
521
+ old_str,
522
+ view_range,
523
+ }) => {
524
+ // Implement your text editing logic here
525
+ // Return the result of the text editing operation
526
+ },
527
+ });
528
+ ```
529
+
530
+ **For Claude 3.5 Sonnet and earlier models:**
531
+
532
+ ```ts
533
+ const textEditorTool = anthropic.tools.textEditor_20241022({
534
+ execute: async ({
535
+ command,
536
+ path,
537
+ file_text,
538
+ insert_line,
539
+ new_str,
540
+ old_str,
541
+ view_range,
542
+ }) => {
543
+ // Implement your text editing logic here
544
+ // Return the result of the text editing operation
545
+ },
546
+ });
547
+ ```
548
+
549
+ Parameters:
550
+
551
+ - `command` ('view' | 'create' | 'str_replace' | 'insert' | 'undo_edit'): The command to run. Note: `undo_edit` is only available in Claude 3.5 Sonnet and earlier models.
552
+ - `path` (string): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
553
+ - `file_text` (string, optional): Required for `create` command, with the content of the file to be created.
554
+ - `insert_line` (number, optional): Required for `insert` command. The line number after which to insert the new string.
555
+ - `new_str` (string, optional): New string for `str_replace` or `insert` commands.
556
+ - `old_str` (string, optional): Required for `str_replace` command, containing the string to replace.
557
+ - `view_range` (number[], optional): Optional for `view` command to specify line range to show.
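The commands above can be sketched against a simple in-memory store (illustrative only; a real `execute` would operate on the file system and enforce path safety, and `undo_edit` is omitted here):

```typescript
// In-memory sketch of the text editor commands. Illustrative only.
const files = new Map<string, string>();

function textEditor(input: {
  command: 'view' | 'create' | 'str_replace' | 'insert';
  path: string;
  file_text?: string;
  insert_line?: number;
  new_str?: string;
  old_str?: string;
}): string {
  const { command, path } = input;
  switch (command) {
    case 'create':
      files.set(path, input.file_text ?? '');
      return `created ${path}`;
    case 'view':
      return files.get(path) ?? `no such file: ${path}`;
    case 'str_replace': {
      const content = files.get(path) ?? '';
      files.set(path, content.replace(input.old_str ?? '', input.new_str ?? ''));
      return `edited ${path}`;
    }
    case 'insert': {
      // insert_line is the line number AFTER which to insert (1-based).
      const lines = (files.get(path) ?? '').split('\n');
      lines.splice(input.insert_line ?? 0, 0, input.new_str ?? '');
      files.set(path, lines.join('\n'));
      return `inserted into ${path}`;
    }
  }
}
```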
558
+
559
+ When using the Text Editor Tool, make sure to name the key in the tools object correctly:
560
+
561
+ - **Claude 4 models**: Use `str_replace_based_edit_tool`
562
+ - **Claude 3.5 Sonnet and earlier**: Use `str_replace_editor`
563
+
564
+ ```ts
565
+ // For Claude 4 models
566
+ const response = await generateText({
567
+ model: bedrock('us.anthropic.claude-sonnet-4-20250514-v1:0'),
568
+ prompt:
569
+ "Create a new file called example.txt, write 'Hello World' to it, and run 'cat example.txt' in the terminal",
570
+ tools: {
571
+ str_replace_based_edit_tool: textEditorTool, // Claude 4 tool name
572
+ },
573
+ });
574
+
575
+ // For Claude 3.5 Sonnet and earlier
576
+ const legacyResponse = await generateText({
577
+ model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
578
+ prompt:
579
+ "Create a new file called example.txt, write 'Hello World' to it, and run 'cat example.txt' in the terminal",
580
+ tools: {
581
+ str_replace_editor: textEditorTool, // Earlier models tool name
582
+ },
583
+ });
584
+ ```
585
+
586
+ ### Computer Tool
587
+
588
+ The Computer Tool enables control of keyboard and mouse actions on a computer:
589
+
590
+ ```ts
591
+ const computerTool = anthropic.tools.computer_20241022({
592
+ displayWidthPx: 1920,
593
+ displayHeightPx: 1080,
594
+ displayNumber: 0, // Optional, for X11 environments
595
+
596
+ execute: async ({ action, coordinate, text }) => {
597
+ // Implement your computer control logic here
598
+ // Return the result of the action
599
+
600
+ // Example code:
601
+ switch (action) {
602
+ case 'screenshot': {
603
+ // multipart result:
604
+ return {
605
+ type: 'image',
606
+ data: fs
607
+ .readFileSync('./data/screenshot-editor.png')
608
+ .toString('base64'),
609
+ };
610
+ }
611
+ default: {
612
+ console.log('Action:', action);
613
+ console.log('Coordinate:', coordinate);
614
+ console.log('Text:', text);
615
+ return `executed ${action}`;
616
+ }
617
+ }
618
+ },
619
+
620
+ // map to tool result content for LLM consumption:
621
+ toModelOutput({ output }) {
622
+ return typeof output === 'string'
623
+ ? [{ type: 'text', text: output }]
624
+ : [{ type: 'image', data: output.data, mediaType: 'image/png' }];
625
+ },
626
+ });
627
+ ```
628
+
629
+ Parameters:
630
+
631
+ - `action` ('key' | 'type' | 'mouse_move' | 'left_click' | 'left_click_drag' | 'right_click' | 'middle_click' | 'double_click' | 'screenshot' | 'cursor_position'): The action to perform.
632
+ - `coordinate` (number[], optional): Required for `mouse_move` and `left_click_drag` actions. Specifies the (x, y) coordinates.
633
+ - `text` (string, optional): Required for `type` and `key` actions.
634
+
635
+ These tools can be used in conjunction with the `anthropic.claude-3-5-sonnet-20240620-v1:0` model to enable more complex interactions and tasks.
636
+
637
+ ### Model Capabilities
638
+
639
+ | Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
640
+ | ---------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
641
+ | `amazon.titan-tg1-large` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
642
+ | `amazon.titan-text-express-v1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
643
+ | `amazon.titan-text-lite-v1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
644
+ | `us.amazon.nova-premier-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
645
+ | `us.amazon.nova-pro-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
646
+ | `us.amazon.nova-lite-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
647
+ | `us.amazon.nova-micro-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
648
+ | `anthropic.claude-haiku-4-5-20251001-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
649
+ | `anthropic.claude-sonnet-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
650
+ | `anthropic.claude-sonnet-4-5-20250929-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
651
+ | `anthropic.claude-opus-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
652
+ | `anthropic.claude-opus-4-1-20250805-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
653
+ | `anthropic.claude-3-7-sonnet-20250219-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
654
+ | `anthropic.claude-3-5-sonnet-20241022-v2:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
655
+ | `anthropic.claude-3-5-sonnet-20240620-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
656
+ | `anthropic.claude-3-5-haiku-20241022-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
657
+ | `anthropic.claude-3-opus-20240229-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
658
+ | `anthropic.claude-3-sonnet-20240229-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
659
+ | `anthropic.claude-3-haiku-20240307-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
660
+ | `us.anthropic.claude-sonnet-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
661
+ | `us.anthropic.claude-sonnet-4-5-20250929-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
662
+ | `us.anthropic.claude-opus-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
663
+ | `us.anthropic.claude-opus-4-1-20250805-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
664
+ | `us.anthropic.claude-3-7-sonnet-20250219-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
665
+ | `us.anthropic.claude-3-5-sonnet-20241022-v2:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
666
+ | `us.anthropic.claude-3-5-sonnet-20240620-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
667
+ | `us.anthropic.claude-3-5-haiku-20241022-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
668
+ | `us.anthropic.claude-3-sonnet-20240229-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
669
+ | `us.anthropic.claude-3-opus-20240229-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
670
+ | `us.anthropic.claude-3-haiku-20240307-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
671
+ | `anthropic.claude-v2` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
672
+ | `anthropic.claude-v2:1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
673
+ | `anthropic.claude-instant-v1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
674
+ | `cohere.command-text-v14` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
675
+ | `cohere.command-light-text-v14` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
676
+ | `cohere.command-r-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Cross size={18} /> |
677
+ | `cohere.command-r-plus-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Cross size={18} /> |
678
+ | `us.deepseek.r1-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
679
+ | `meta.llama3-8b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
680
+ | `meta.llama3-70b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
681
+ | `meta.llama3-1-8b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
682
+ | `meta.llama3-1-70b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
683
+ | `meta.llama3-1-405b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
684
+ | `meta.llama3-2-1b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
685
+ | `meta.llama3-2-3b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
686
+ | `meta.llama3-2-11b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
687
+ | `meta.llama3-2-90b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
688
+ | `us.meta.llama3-2-1b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
689
+ | `us.meta.llama3-2-3b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
690
+ | `us.meta.llama3-2-11b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
691
+ | `us.meta.llama3-2-90b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
692
+ | `us.meta.llama3-1-8b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
693
+ | `us.meta.llama3-1-70b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
694
+ | `us.meta.llama3-3-70b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
695
+ | `us.meta.llama4-scout-17b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
696
+ | `us.meta.llama4-maverick-17b-instruct-v1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
697
+ | `mistral.mistral-7b-instruct-v0:2` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
698
+ | `mistral.mixtral-8x7b-instruct-v0:1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
699
+ | `mistral.mistral-large-2402-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
700
+ | `mistral.mistral-small-2402-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
701
+ | `us.mistral.pixtral-large-2502-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
702
+ | `openai.gpt-oss-120b-1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
703
+ | `openai.gpt-oss-20b-1:0` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
704
+
705
+ <Note>
706
+ The table above lists popular models. Please see the [Amazon Bedrock
707
+ docs](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html)
708
+ for a full list of available models. You can also pass any available provider
709
+ model ID as a string if needed.
710
+ </Note>
711
+
712
+ ## Embedding Models
713
+
714
+ You can create models that call the [Bedrock embedding API](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html)
715
+ using the `.embedding()` factory method.
716
+
717
+ ```ts
718
+ const model = bedrock.embedding('amazon.titan-embed-text-v1');
719
+ ```
720
+
721
+ The Bedrock Titan embedding model `amazon.titan-embed-text-v2:0` supports several additional settings.
722
+ You can pass them via the `providerOptions` argument:
723
+
724
+ ```ts
725
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
726
+ import { embed } from 'ai';
727
+
728
+ const model = bedrock.embedding('amazon.titan-embed-text-v2:0');
729
+
730
+ const { embedding } = await embed({
731
+ model,
732
+ value: 'sunny day at the beach',
733
+ providerOptions: {
734
+ bedrock: {
735
+ dimensions: 512, // optional, number of dimensions for the embedding
736
+ normalize: true, // optional, normalize the output embeddings
737
+ },
738
+ },
739
+ });
740
+ ```
741
+
742
+ The following optional provider options are available for Bedrock Titan embedding models:
743
+
744
+ - **dimensions** _number_
745
+
746
+ The number of dimensions the output embeddings should have. The following values are accepted: 1024 (default), 512, 256.
747
+
748
+ - **normalize** _boolean_
749
+
750
+ Flag indicating whether or not to normalize the output embeddings. Defaults to true.
751
+
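Normalization matters when you compare embeddings: for unit-length vectors, the dot product is the cosine similarity. A minimal, self-contained sketch with made-up values (not real Titan embeddings):

```ts
// Dot product of two equal-length vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

// For normalized (unit-length) vectors, the dot product equals
// the cosine similarity, so no extra division by magnitudes is needed.
const a = [0.6, 0.8]; // |a| = 1
const b = [1, 0]; // |b| = 1
console.log(dot(a, b)); // 0.6
```
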
752
+ ### Model Capabilities
753
+
754
+ | Model | Default Dimensions | Custom Dimensions |
755
+ | ------------------------------ | ------------------ | ------------------- |
756
+ | `amazon.titan-embed-text-v1` | 1536 | <Cross size={18} /> |
757
+ | `amazon.titan-embed-text-v2:0` | 1024 | <Check size={18} /> |
758
+ | `cohere.embed-english-v3` | 1024 | <Cross size={18} /> |
759
+ | `cohere.embed-multilingual-v3` | 1024 | <Cross size={18} /> |
760
+
761
+ ## Reranking Models
762
+
763
+ You can create models that call the [Bedrock Rerank API](https://docs.aws.amazon.com/bedrock/latest/userguide/rerank-api.html)
764
+ using the `.reranking()` factory method.
765
+
766
+ ```ts
767
+ const model = bedrock.reranking('cohere.rerank-v3-5:0');
768
+ ```
769
+
770
+ You can use Amazon Bedrock reranking models to rerank documents with the `rerank` function:
771
+
772
+ ```ts
773
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
774
+ import { rerank } from 'ai';
775
+
776
+ const documents = [
777
+ 'sunny day at the beach',
778
+ 'rainy afternoon in the city',
779
+ 'snowy night in the mountains',
780
+ ];
781
+
782
+ const { ranking } = await rerank({
783
+ model: bedrock.reranking('cohere.rerank-v3-5:0'),
784
+ documents,
785
+ query: 'talk about rain',
786
+ topN: 2,
787
+ });
788
+
789
+ console.log(ranking);
790
+ // [
791
+ // { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
792
+ // { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
793
+ // ]
794
+ ```
795
+
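The `ranking` entries can be consumed directly. For example, to keep only documents above a relevance threshold (a sketch using the result shape shown above, with illustrative scores):

```ts
// Result shape returned by rerank(), as shown above (illustrative values).
type RankedDoc = { originalIndex: number; score: number; document: string };

const ranking: RankedDoc[] = [
  { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
  { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' },
];

// Keep only sufficiently relevant documents.
const relevant = ranking.filter(r => r.score >= 0.5).map(r => r.document);
console.log(relevant); // ['rainy afternoon in the city']
```
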
796
+ Amazon Bedrock reranking models support additional provider options that can be passed via `providerOptions.bedrock`:
797
+
798
+ ```ts
799
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
800
+ import { rerank } from 'ai';
801
+
802
+ const { ranking } = await rerank({
803
+ model: bedrock.reranking('cohere.rerank-v3-5:0'),
804
+ documents: ['sunny day at the beach', 'rainy afternoon in the city'],
805
+ query: 'talk about rain',
806
+ providerOptions: {
807
+ bedrock: {
808
+ nextToken: 'pagination_token_here',
809
+ },
810
+ },
811
+ });
812
+ ```
813
+
814
+ The following provider options are available:
815
+
816
+ - **nextToken** _string_
817
+
818
+ Token for pagination of results.
819
+
820
+ - **additionalModelRequestFields** _Record&lt;string, unknown&gt;_
821
+
822
+ Additional model-specific request fields.
823
+
824
+ ### Model Capabilities
825
+
826
+ | Model |
827
+ | ---------------------- |
828
+ | `amazon.rerank-v1:0` |
829
+ | `cohere.rerank-v3-5:0` |
830
+
831
+ ## Image Models
832
+
833
+ You can create models that call the [Bedrock image generation API](https://docs.aws.amazon.com/nova/latest/userguide/image-generation.html)
834
+ using the `.image()` factory method.
835
+
836
+ For more on the Amazon Nova Canvas image model, see the [Nova Canvas
837
+ Overview](https://docs.aws.amazon.com/ai/responsible-ai/nova-canvas/overview.html).
838
+
839
+ <Note>
840
+ The `amazon.nova-canvas-v1:0` model is available in the `us-east-1`,
841
+ `eu-west-1`, and `ap-northeast-1` regions.
842
+ </Note>
843
+
844
+ ```ts
845
+ const model = bedrock.image('amazon.nova-canvas-v1:0');
846
+ ```
847
+
848
+ You can then generate images with the `generateImage` function:
849
+
850
+ ```ts
851
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
852
+ import { generateImage } from 'ai';
853
+
854
+ const { image } = await generateImage({
855
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
856
+ prompt: 'A beautiful sunset over a calm ocean',
857
+ size: '512x512',
858
+ seed: 42,
859
+ });
860
+ ```
861
+
862
+ You can also pass the `providerOptions` object to the `generateImage` function to customize the generation behavior:
863
+
864
+ ```ts
865
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
866
+ import { generateImage } from 'ai';
867
+
868
+ const { image } = await generateImage({
869
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
870
+ prompt: 'A beautiful sunset over a calm ocean',
871
+ size: '512x512',
872
+ seed: 42,
873
+ providerOptions: {
874
+ bedrock: {
875
+ quality: 'premium',
876
+ negativeText: 'blurry, low quality',
877
+ cfgScale: 7.5,
878
+ style: 'PHOTOREALISM',
879
+ },
880
+ },
881
+ });
882
+ ```
883
+
884
+ The following optional provider options are available for Amazon Nova Canvas:
885
+
886
+ - **quality** _string_
887
+
888
+ The quality level for image generation. Accepts `'standard'` or `'premium'`.
889
+
890
+ - **negativeText** _string_
891
+
892
+ Text describing what you don't want in the generated image.
893
+
894
+ - **cfgScale** _number_
895
+
896
+ Controls how closely the generated image adheres to the prompt; higher values produce closer alignment.
897
+
898
+ - **style** _string_
899
+
900
+ Predefined visual style for image generation.
901
+ Accepts one of:
902
+ `3D_ANIMATED_FAMILY_FILM` · `DESIGN_SKETCH` · `FLAT_VECTOR_ILLUSTRATION` ·
903
+ `GRAPHIC_NOVEL_ILLUSTRATION` · `MAXIMALISM` · `MIDCENTURY_RETRO` ·
904
+ `PHOTOREALISM` · `SOFT_DIGITAL_PAINTING`.
905
+
906
+ Documentation for additional settings can be found in the [Amazon Bedrock
907
+ User Guide for Amazon
908
+ Nova](https://docs.aws.amazon.com/nova/latest/userguide/image-gen-req-resp-structure.html).
909
+
910
+ ### Image Editing
911
+
912
+ Amazon Nova Canvas supports several image editing task types. When you provide input images via `prompt.images`, the model automatically detects the appropriate editing mode, or you can explicitly specify the `taskType` in provider options.
913
+
914
+ #### Image Variation
915
+
916
+ Create variations of an existing image while maintaining its core characteristics:
917
+
918
+ ```ts
919
+ const imageBuffer = readFileSync('./input-image.png');
920
+
921
+ const { images } = await generateImage({
922
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
923
+ prompt: {
924
+ text: 'Modernize the style, photo-realistic, 8k, hdr',
925
+ images: [imageBuffer],
926
+ },
927
+ providerOptions: {
928
+ bedrock: {
929
+ taskType: 'IMAGE_VARIATION',
930
+ similarityStrength: 0.7, // 0-1, higher = closer to original
931
+ negativeText: 'bad quality, low resolution',
932
+ },
933
+ },
934
+ });
935
+ ```
936
+
937
+ - **similarityStrength** _number_
938
+
939
+ Controls how similar the output is to the input image. Values range from 0 to 1, where higher values produce results closer to the original.
940
+
941
+ #### Inpainting
942
+
943
+ Edit specific parts of an image. You can define the area to modify using either a mask image or a text prompt:
944
+
945
+ **Using a mask prompt (text-based selection):**
946
+
947
+ ```ts
948
+ const imageBuffer = readFileSync('./input-image.png');
949
+
950
+ const { images } = await generateImage({
951
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
952
+ prompt: {
953
+ text: 'a cute corgi dog in the same style',
954
+ images: [imageBuffer],
955
+ },
956
+ providerOptions: {
957
+ bedrock: {
958
+ maskPrompt: 'cat', // Describe what to replace
959
+ },
960
+ },
961
+ seed: 42,
962
+ });
963
+ ```
964
+
965
+ **Using a mask image:**
966
+
967
+ ```ts
968
+ const image = readFileSync('./input-image.png');
969
+ const mask = readFileSync('./mask.png'); // White pixels = area to change
970
+
971
+ const { images } = await generateImage({
972
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
973
+ prompt: {
974
+ text: 'A sunlit indoor lounge area with a pool containing a flamingo',
975
+ images: [image],
976
+ mask: mask,
977
+ },
978
+ });
979
+ ```
980
+
981
+ - **maskPrompt** _string_
982
+
983
+ A text description of the area to modify. The model will automatically identify and mask the described region.
984
+
985
+ #### Outpainting
986
+
987
+ Extend an image beyond its original boundaries:
988
+
989
+ ```ts
990
+ const imageBuffer = readFileSync('./input-image.png');
991
+
992
+ const { images } = await generateImage({
993
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
994
+ prompt: {
995
+ text: 'A beautiful sunset landscape with mountains',
996
+ images: [imageBuffer],
997
+ },
998
+ providerOptions: {
999
+ bedrock: {
1000
+ taskType: 'OUTPAINTING',
1001
+ maskPrompt: 'background',
1002
+ outPaintingMode: 'DEFAULT', // or 'PRECISE'
1003
+ },
1004
+ },
1005
+ });
1006
+ ```
1007
+
1008
+ - **outPaintingMode** _string_
1009
+
1010
+ Controls how the outpainting is performed. Accepts `'DEFAULT'` or `'PRECISE'`.
1011
+
1012
+ #### Background Removal
1013
+
1014
+ Remove the background from an image:
1015
+
1016
+ ```ts
1017
+ const imageBuffer = readFileSync('./input-image.png');
1018
+
1019
+ const { images } = await generateImage({
1020
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
1021
+ prompt: {
1022
+ images: [imageBuffer],
1023
+ },
1024
+ providerOptions: {
1025
+ bedrock: {
1026
+ taskType: 'BACKGROUND_REMOVAL',
1027
+ },
1028
+ },
1029
+ });
1030
+ ```
1031
+
1032
+ <Note>
1033
+ Background removal does not require a text prompt; only the input image is
1034
+ needed.
1035
+ </Note>
1036
+
1037
+ #### Image Editing Provider Options
1038
+
1039
+ The following additional provider options are available for image editing:
1040
+
1041
+ - **taskType** _string_
1042
+
1043
+ Explicitly set the editing task type. Accepts `'TEXT_IMAGE'` (default for text-only), `'IMAGE_VARIATION'`, `'INPAINTING'`, `'OUTPAINTING'`, or `'BACKGROUND_REMOVAL'`. When images are provided without an explicit taskType, the model defaults to `'IMAGE_VARIATION'` (or `'INPAINTING'` if a mask is provided).
1044
+
1045
+ - **maskPrompt** _string_
1046
+
1047
+ Text description of the area to modify (for inpainting/outpainting). Alternative to providing a mask image.
1048
+
1049
+ - **similarityStrength** _number_
1050
+
1051
+ For `IMAGE_VARIATION`: Controls similarity to the original (0-1).
1052
+
1053
+ - **outPaintingMode** _string_
1054
+
1055
+ For `OUTPAINTING`: Controls the outpainting behavior (`'DEFAULT'` or `'PRECISE'`).
1056
+
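The default task-type selection described above can be sketched as follows (an illustration of the documented behavior, not the provider's actual source):

```ts
type TaskType =
  | 'TEXT_IMAGE'
  | 'IMAGE_VARIATION'
  | 'INPAINTING'
  | 'OUTPAINTING'
  | 'BACKGROUND_REMOVAL';

// Mirrors the documented defaults: text-only prompts use TEXT_IMAGE;
// input images default to IMAGE_VARIATION, or INPAINTING when a mask is given.
function defaultTaskType(hasImages: boolean, hasMask: boolean): TaskType {
  if (!hasImages) return 'TEXT_IMAGE';
  return hasMask ? 'INPAINTING' : 'IMAGE_VARIATION';
}

console.log(defaultTaskType(true, false)); // 'IMAGE_VARIATION'
```
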
1057
+ ### Image Model Settings
1058
+
1059
+ You can customize the generation behavior with optional settings:
1060
+
1061
+ ```ts
1062
+ await generateImage({
1063
+ model: bedrock.image('amazon.nova-canvas-v1:0'),
1064
+ prompt: 'A beautiful sunset over a calm ocean',
1065
+ size: '512x512',
1066
+ seed: 42,
1067
+ maxImagesPerCall: 1, // Maximum number of images to generate per API call
1068
+ });
1069
+ ```
1070
+
1071
+ - **maxImagesPerCall** _number_
1072
+
1073
+ Override the maximum number of images generated per API call. Default can vary
1074
+ by model; 5 is a common value.
1075
+
1076
+ ### Model Capabilities
1077
+
1078
+ The Amazon Nova Canvas model supports custom sizes with the following constraints:
1079
+
1080
+ - Each side must be between 320-4096 pixels, inclusive.
1081
+ - Each side must be evenly divisible by 16.
1082
+ - The aspect ratio must be between 1:4 and 4:1. That is, one side can't be more than 4 times longer than the other side.
1083
+ - The total pixel count must be less than 4,194,304.
1084
+
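These constraints can be checked before making a request. A minimal, self-contained validation sketch (the helper name is our own, not part of the SDK):

```ts
// Validates a custom size against the Nova Canvas constraints listed above.
function isValidNovaCanvasSize(width: number, height: number): boolean {
  const sides = [width, height];
  return (
    sides.every(s => s >= 320 && s <= 4096) && // each side in 320-4096
    sides.every(s => s % 16 === 0) && // each side divisible by 16
    Math.max(width, height) <= 4 * Math.min(width, height) && // ratio 1:4 to 4:1
    width * height < 4_194_304 // total pixel count under 4,194,304
  );
}

console.log(isValidNovaCanvasSize(512, 512)); // true
console.log(isValidNovaCanvasSize(320, 1600)); // false (5:1 aspect ratio)
```
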
1085
+ For more, see [Image generation access and
1086
+ usage](https://docs.aws.amazon.com/nova/latest/userguide/image-gen-access.html).
1087
+
1088
+ | Model | Sizes |
1089
+ | ------------------------- | ----------------------------------------------------------------------------------------------------- |
1090
+ | `amazon.nova-canvas-v1:0` | Custom sizes: 320-4096px per side (must be divisible by 16), aspect ratio 1:4 to 4:1, max 4.2M pixels |
1091
+
1092
+ ## Response Headers
1093
+
1094
+ The Amazon Bedrock provider will return the response headers associated with
1095
+ network requests made to the Bedrock servers.
1096
+
1097
+ ```ts
1098
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
1099
+ import { generateText } from 'ai';
1100
+
1101
+ const result = await generateText({
1102
+ model: bedrock('meta.llama3-70b-instruct-v1:0'),
1103
+ prompt: 'Write a vegetarian lasagna recipe for 4 people.',
1104
+ });
1105
+
1106
+ console.log(result.response.headers);
1107
+ ```
1108
+
1109
+ Below is sample output where you can see the `x-amzn-requestid` header. This can
1110
+ be useful for correlating Bedrock API calls with requests made by the AI SDK:
1111
+
1112
+ ```js highlight="6"
1113
+ {
1114
+ connection: 'keep-alive',
1115
+ 'content-length': '2399',
1116
+ 'content-type': 'application/json',
1117
+ date: 'Fri, 07 Feb 2025 04:28:30 GMT',
1118
+ 'x-amzn-requestid': 'c9f3ace4-dd5d-49e5-9807-39aedfa47c8e'
1119
+ }
1120
+ ```
1121
+
1122
+ This information is also available with `streamText`:
1123
+
1124
+ ```ts
1125
+ import { bedrock } from '@ai-sdk/amazon-bedrock';
1126
+ import { streamText } from 'ai';
1127
+
1128
+ const result = streamText({
1129
+ model: bedrock('meta.llama3-70b-instruct-v1:0'),
1130
+ prompt: 'Write a vegetarian lasagna recipe for 4 people.',
1131
+ });
1132
+ for await (const textPart of result.textStream) {
1133
+ process.stdout.write(textPart);
1134
+ }
1135
+ console.log('Response headers:', (await result.response).headers);
1136
+ ```
1137
+
1138
+ Sample output:
1139
+
1140
+ ```js highlight="6"
1141
+ {
1142
+ connection: 'keep-alive',
1143
+ 'content-type': 'application/vnd.amazon.eventstream',
1144
+ date: 'Fri, 07 Feb 2025 04:33:37 GMT',
1145
+ 'transfer-encoding': 'chunked',
1146
+ 'x-amzn-requestid': 'a976e3fc-0e45-4241-9954-b9bdd80ab407'
1147
+ }
1148
+ ```
1149
+
1150
+ ## Bedrock Anthropic Provider Usage
1151
+
1152
+ The Bedrock Anthropic provider offers support for Anthropic's Claude models through Amazon Bedrock's native InvokeModel API. This provides full feature parity with the [Anthropic API](https://platform.claude.com/docs/en/build-with-claude/overview), including features that may not be available through the Converse API (such as `stop_sequence` in streaming responses).
1153
+
1154
+ For more information on Claude models available on Amazon Bedrock, see [Claude on Amazon Bedrock](https://platform.claude.com/docs/en/build-with-claude/claude-on-amazon-bedrock).
1155
+
1156
+ ### Provider Instance
1157
+
1158
+ You can import the default provider instance `bedrockAnthropic` from `@ai-sdk/amazon-bedrock/anthropic`:
1159
+
1160
+ ```typescript
1161
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1162
+ ```
1163
+
1164
+ If you need a customized setup, you can import `createBedrockAnthropic` from `@ai-sdk/amazon-bedrock/anthropic` and create a provider instance with your settings:
1165
+
1166
+ ```typescript
1167
+ import { createBedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1168
+
1169
+ const bedrockAnthropic = createBedrockAnthropic({
1170
+ region: 'us-east-1', // optional
1171
+ accessKeyId: 'xxxxxxxxx', // optional
1172
+ secretAccessKey: 'xxxxxxxxx', // optional
1173
+ sessionToken: 'xxxxxxxxx', // optional
1174
+ });
1175
+ ```
1176
+
1177
+ #### Provider Settings
1178
+
1179
+ You can use the following optional settings to customize the Bedrock Anthropic provider instance:
1180
+
1181
+ - **region** _string_
1182
+
1183
+ The AWS region that you want to use for the API calls.
1184
+ It uses the `AWS_REGION` environment variable by default.
1185
+
1186
+ - **accessKeyId** _string_
1187
+
1188
+ The AWS access key ID that you want to use for the API calls.
1189
+ It uses the `AWS_ACCESS_KEY_ID` environment variable by default.
1190
+
1191
+ - **secretAccessKey** _string_
1192
+
1193
+ The AWS secret access key that you want to use for the API calls.
1194
+ It uses the `AWS_SECRET_ACCESS_KEY` environment variable by default.
1195
+
1196
+ - **sessionToken** _string_
1197
+
1198
+ Optional. The AWS session token that you want to use for the API calls.
1199
+ It uses the `AWS_SESSION_TOKEN` environment variable by default.
1200
+
1201
+ - **apiKey** _string_
1202
+
1203
+ API key for authenticating requests using Bearer token authentication.
1204
+ When provided, this will be used instead of AWS SigV4 authentication.
1205
+ It uses the `AWS_BEARER_TOKEN_BEDROCK` environment variable by default.
1206
+
1207
+ - **baseURL** _string_
1208
+
1209
+ Base URL for the Bedrock API calls.
1210
+ Useful for custom endpoints or proxy configurations.
1211
+
1212
+ - **headers** _Resolvable&lt;Record&lt;string, string | undefined&gt;&gt;_
1213
+
1214
+ Headers to include in the requests.
1215
+
1216
+ - **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_
1217
+
1218
+ Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
1219
+ You can use it as a middleware to intercept requests,
1220
+ or to provide a custom fetch implementation, e.g. for testing.
1221
+
1222
+ - **credentialProvider** _() => PromiseLike&lt;BedrockCredentials&gt;_
1223
+
1224
+ The AWS credential provider to use for the Bedrock provider to get dynamic
1225
+ credentials similar to the AWS SDK. Setting a provider here will cause its
1226
+ credential values to be used instead of the `accessKeyId`, `secretAccessKey`,
1227
+ and `sessionToken` settings.
1228
+
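For example, dynamic credentials can be supplied with a provider from the AWS SDK's `@aws-sdk/credential-providers` package (a configuration sketch; that package must be installed separately):

```ts
import { createBedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { fromNodeProviderChain } from '@aws-sdk/credential-providers';

// Resolves credentials the same way the AWS SDK does (environment variables,
// shared config files, instance roles, etc.) instead of static keys.
const bedrockAnthropic = createBedrockAnthropic({
  region: 'us-east-1',
  credentialProvider: fromNodeProviderChain(),
});
```
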
1229
+ ### Language Models
1230
+
1231
+ You can create models that call the [Anthropic Messages API](https://docs.anthropic.com/claude/reference/messages_post) using the provider instance.
1232
+ The first argument is the model id, e.g. `us.anthropic.claude-3-5-sonnet-20241022-v2:0`.
1233
+
1234
+ ```ts
1235
+ const model = bedrockAnthropic('us.anthropic.claude-3-5-sonnet-20241022-v2:0');
1236
+ ```
1237
+
1238
+ You can use Bedrock Anthropic language models to generate text with the `generateText` function:
1239
+
1240
+ ```ts
1241
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1242
+ import { generateText } from 'ai';
1243
+
1244
+ const { text } = await generateText({
1245
+ model: bedrockAnthropic('us.anthropic.claude-3-5-sonnet-20241022-v2:0'),
1246
+ prompt: 'Write a vegetarian lasagna recipe for 4 people.',
1247
+ });
1248
+ ```
1249
+
1250
+ ### Cache Control
1251
+
1252
+ In the messages and message parts, you can use the `providerOptions` property to set cache control breakpoints.
1253
+ Set the `anthropic` property in the `providerOptions` object to `{ cacheControl: { type: 'ephemeral' } }` to create a cache control breakpoint.
1254
+
1255
+ ```ts
1256
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1257
+ import { generateText } from 'ai';
1258
+
1259
+ const result = await generateText({
1260
+ model: bedrockAnthropic('us.anthropic.claude-3-7-sonnet-20250219-v1:0'),
1261
+ messages: [
1262
+ {
1263
+ role: 'system',
1264
+ content: 'You are an expert assistant.',
1265
+ providerOptions: {
1266
+ anthropic: { cacheControl: { type: 'ephemeral' } },
1267
+ },
1268
+ },
1269
+ {
1270
+ role: 'user',
1271
+ content: 'Explain quantum computing.',
1272
+ },
1273
+ ],
1274
+ });
1275
+ ```
1276
+
1277
+ <Note>
1278
+ Cache control requires a minimum of 1024 tokens before the cache checkpoint.
1279
+ See the [Amazon Bedrock prompt caching
1280
+ documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html)
1281
+ for details on supported models and limits.
1282
+ </Note>
1283
+
1284
+ ### Computer Use
1285
+
1286
+ The Bedrock Anthropic provider supports Anthropic's computer use tools:
1287
+
1288
+ 1. **Bash Tool**: Allows running bash commands.
1289
+ 2. **Text Editor Tool**: Provides functionality for viewing and editing text files.
1290
+ 3. **Computer Tool**: Enables control of keyboard and mouse actions on a computer.
1291
+
1292
+ They are available via the `tools` property of the provider instance.
1293
+
1294
+ <Note>
1295
+ Computer use tools require Claude 3.7 Sonnet or newer models. Claude 3.5
1296
+ Sonnet v2 does not support these tools.
1297
+ </Note>
1298
+
1299
+ #### Bash Tool
1300
+
1301
+ ```ts
1302
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1303
+ import { generateText, stepCountIs } from 'ai';
1304
+
1305
+ const result = await generateText({
1306
+ model: bedrockAnthropic('us.anthropic.claude-3-7-sonnet-20250219-v1:0'),
1307
+ tools: {
1308
+ bash: bedrockAnthropic.tools.bash_20241022({
1309
+ execute: async ({ command }) => {
1310
+ // Implement your bash command execution logic here
1311
+ return [{ type: 'text', text: `Executed: ${command}` }];
1312
+ },
1313
+ }),
1314
+ },
1315
+ prompt: 'List the files in my directory.',
1316
+ stopWhen: stepCountIs(2),
1317
+ });
1318
+ ```
1319
+
1320
+ #### Text Editor Tool
1321
+
1322
+ ```ts
1323
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1324
+ import { generateText, stepCountIs } from 'ai';
1325
+
1326
+ const result = await generateText({
1327
+ model: bedrockAnthropic('us.anthropic.claude-3-7-sonnet-20250219-v1:0'),
1328
+ tools: {
1329
+ str_replace_editor: bedrockAnthropic.tools.textEditor_20241022({
1330
+ execute: async ({ command, path, old_str, new_str }) => {
1331
+ // Implement your text editing logic here
1332
+ return 'File updated successfully';
1333
+ },
1334
+ }),
1335
+ },
1336
+ prompt: 'Update my README file.',
1337
+ stopWhen: stepCountIs(5),
1338
+ });
1339
+ ```
1340
+
1341
+ #### Computer Tool
1342
+
1343
+ ```ts
1344
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1345
+ import { generateText, stepCountIs } from 'ai';
1346
+ import fs from 'fs';
1347
+
1348
+ const result = await generateText({
1349
+ model: bedrockAnthropic('us.anthropic.claude-3-7-sonnet-20250219-v1:0'),
1350
+ tools: {
1351
+ computer: bedrockAnthropic.tools.computer_20241022({
1352
+ displayWidthPx: 1024,
1353
+ displayHeightPx: 768,
1354
+ execute: async ({ action, coordinate, text }) => {
1355
+ if (action === 'screenshot') {
1356
+ return {
1357
+ type: 'image',
1358
+ data: fs.readFileSync('./screenshot.png').toString('base64'),
1359
+ };
1360
+ }
1361
+ return `executed ${action}`;
1362
+ },
1363
+ toModelOutput({ output }) {
1364
+ return {
1365
+ type: 'content',
1366
+ value: [
1367
+ typeof output === 'string'
1368
+ ? { type: 'text', text: output }
1369
+ : {
1370
+ type: 'image-data',
1371
+ data: output.data,
1372
+ mediaType: 'image/png',
1373
+ },
1374
+ ],
1375
+ };
1376
+ },
1377
+ }),
1378
+ },
1379
+ prompt: 'Take a screenshot.',
1380
+ stopWhen: stepCountIs(3),
1381
+ });
1382
+ ```
1383
+
1384
+ ### Reasoning
1385
+
1386
+ Anthropic supports reasoning for Claude 3.7 and newer Claude models on Bedrock, including:
1387
+
1388
+ - `us.anthropic.claude-opus-4-5-20251101-v1:0`
1389
+ - `us.anthropic.claude-sonnet-4-5-20250929-v1:0`
1390
+ - `us.anthropic.claude-opus-4-20250514-v1:0`
1391
+ - `us.anthropic.claude-sonnet-4-20250514-v1:0`
1392
+ - `us.anthropic.claude-opus-4-1-20250805-v1:0`
1393
+ - `us.anthropic.claude-haiku-4-5-20251001-v1:0`
1394
+ - `us.anthropic.claude-3-7-sonnet-20250219-v1:0`
1395
+
1396
+ You can enable it using the `thinking` provider option and specifying a thinking budget in tokens.
1397
+
1398
+ ```ts
1399
+ import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
1400
+ import { generateText } from 'ai';
1401
+
1402
+ const { text, reasoningText, reasoning } = await generateText({
1403
+ model: bedrockAnthropic('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
1404
+ prompt: 'How many people will live in the world in 2040?',
1405
+ providerOptions: {
1406
+ anthropic: {
1407
+ thinking: { type: 'enabled', budgetTokens: 12000 },
1408
+ },
1409
+ },
1410
+ });
1411
+
1412
+ console.log(reasoningText); // reasoning text
1413
+ console.log(reasoning); // reasoning details including redacted reasoning
1414
+ console.log(text); // text response
1415
+ ```
1416
+
1417
+ See [AI SDK UI: Chatbot](/docs/ai-sdk-ui/chatbot#reasoning) for more details
1418
+ on how to integrate reasoning into your chatbot.
1419
+
1420
+ ### Model Capabilities
1421
+
1422
+ | Model | Image Input | Object Generation | Tool Usage | Computer Use | Reasoning |
1423
+ | ---------------------------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
1424
+ | `us.anthropic.claude-opus-4-5-20251101-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1425
+ | `us.anthropic.claude-sonnet-4-5-20250929-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1426
+ | `us.anthropic.claude-opus-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1427
+ | `us.anthropic.claude-sonnet-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1428
+ | `us.anthropic.claude-opus-4-1-20250805-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1429
+ | `us.anthropic.claude-haiku-4-5-20251001-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1430
+ | `us.anthropic.claude-3-7-sonnet-20250219-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
1431
+ | `us.anthropic.claude-3-5-sonnet-20241022-v2:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
1432
+ | `us.anthropic.claude-3-5-haiku-20241022-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
1433
+
1434
+ <Note>
1435
+ The Bedrock Anthropic provider uses the native InvokeModel API and supports
1436
+ all features available in the Anthropic API, except for the Files API and MCP
1437
+ Connector which are not supported on Bedrock.
1438
+ </Note>
1439
+
1440
+ ## Migrating to `@ai-sdk/amazon-bedrock` 2.x
1441
+
1442
+ The Amazon Bedrock provider was rewritten in version 2.x to remove the
1443
+ dependency on the `@aws-sdk/client-bedrock-runtime` package.
1444
+
1445
+ The `bedrockOptions` provider setting previously available has been removed. If
1446
+ you were using the `bedrockOptions` object, you should now use the `region`,
1447
+ `accessKeyId`, `secretAccessKey`, and `sessionToken` settings directly instead.
1448
+
1449
+ Note that you may need to set all of these explicitly, even if you're not
1450
+ using `sessionToken` (set it to `undefined`). In serverless environments,
1451
+ default environment variables set by the containing environment may otherwise
1452
+ be picked up by the Amazon Bedrock provider and conflict with the credentials
1453
+ you intend to use.
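As a sketch, a 1.x setup using `bedrockOptions` maps to 2.x top-level settings like this (credentials shown are placeholders):

```ts
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

// 1.x (removed): settings were nested under `bedrockOptions`.
// 2.x: pass the settings directly, setting unused ones explicitly.
const bedrock = createAmazonBedrock({
  region: 'us-east-1',
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  sessionToken: undefined, // set explicitly to avoid ambient env values
});
```
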