@ai-sdk/anthropic 3.0.19 → 3.0.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,11 @@
  # @ai-sdk/anthropic
 
+ ## 3.0.20
+
+ ### Patch Changes
+
+ - 2b8369d: chore: add docs to package dist
+
  ## 3.0.19
 
  ### Patch Changes
package/dist/index.js CHANGED
@@ -32,7 +32,7 @@ var import_provider4 = require("@ai-sdk/provider");
  var import_provider_utils22 = require("@ai-sdk/provider-utils");
 
  // src/version.ts
- var VERSION = true ? "3.0.19" : "0.0.0-test";
+ var VERSION = true ? "3.0.20" : "0.0.0-test";
 
  // src/anthropic-messages-language-model.ts
  var import_provider3 = require("@ai-sdk/provider");
package/dist/index.mjs CHANGED
@@ -11,7 +11,7 @@ import {
  } from "@ai-sdk/provider-utils";
 
  // src/version.ts
- var VERSION = true ? "3.0.19" : "0.0.0-test";
+ var VERSION = true ? "3.0.20" : "0.0.0-test";
 
  // src/anthropic-messages-language-model.ts
  import {
@@ -0,0 +1,1096 @@
---
title: Anthropic
description: Learn how to use the Anthropic provider for the AI SDK.
---

# Anthropic Provider

The [Anthropic](https://www.anthropic.com/) provider contains language model support for the [Anthropic Messages API](https://docs.anthropic.com/claude/reference/messages_post).

## Setup

The Anthropic provider is available in the `@ai-sdk/anthropic` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @ai-sdk/anthropic" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/anthropic" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/anthropic" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @ai-sdk/anthropic" dark />
  </Tab>
</Tabs>

## Provider Instance

You can import the default provider instance `anthropic` from `@ai-sdk/anthropic`:

```ts
import { anthropic } from '@ai-sdk/anthropic';
```

If you need a customized setup, you can import `createAnthropic` from `@ai-sdk/anthropic` and create a provider instance with your settings:

```ts
import { createAnthropic } from '@ai-sdk/anthropic';

const anthropic = createAnthropic({
  // custom settings
});
```

You can use the following optional settings to customize the Anthropic provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://api.anthropic.com/v1`.

- **apiKey** _string_

  API key that is sent using the `x-api-key` header.
  It defaults to the `ANTHROPIC_API_KEY` environment variable.

- **headers** _Record&lt;string,string&gt;_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_

  Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
  Defaults to the global `fetch` function.
  You can use it as a middleware to intercept requests,
  or to provide a custom fetch implementation, e.g. for testing.
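
For example, the `fetch` setting can wrap the global `fetch` to log or modify outgoing requests. A minimal sketch (the extra header name is illustrative, not something the API requires):

```ts
// Wraps an underlying fetch implementation: logs the request URL and
// injects an extra header before delegating.
const withLogging =
  (baseFetch: typeof fetch): typeof fetch =>
  async (input, init) => {
    console.log('Anthropic request:', input.toString());
    const headers = new Headers(init?.headers);
    headers.set('x-request-source', 'my-app'); // illustrative header
    return baseFetch(input, { ...init, headers });
  };

// Usage: createAnthropic({ fetch: withLogging(fetch) })
```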

## Language Models

You can create models that call the [Anthropic Messages API](https://docs.anthropic.com/claude/reference/messages_post) using the provider instance.
The first argument is the model id, e.g. `claude-3-haiku-20240307`.
Some models have multi-modal capabilities.

```ts
const model = anthropic('claude-3-haiku-20240307');
```

You can use Anthropic language models to generate text with the `generateText` function:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text } = await generateText({
  model: anthropic('claude-3-haiku-20240307'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

Anthropic language models can also be used in the `streamText`, `generateObject`, and `streamObject` functions
(see [AI SDK Core](/docs/ai-sdk-core)).

The following optional provider options are available for Anthropic models:

- `disableParallelToolUse` _boolean_

  Optional. Disables the use of parallel tool calls. Defaults to `false`.

  When set to `true`, the model will only call one tool at a time instead of potentially calling multiple tools in parallel.

- `sendReasoning` _boolean_

  Optional. Include reasoning content in requests sent to the model. Defaults to `true`.

  If you are experiencing issues with the model handling requests involving
  reasoning content, you can set this to `false` to omit it from the request.

- `effort` _"high" | "medium" | "low"_

  Optional. See the [Effort section](#effort) for more details.

- `thinking` _object_

  Optional. See the [Reasoning section](#reasoning) for more details.

- `toolStreaming` _boolean_

  Whether to enable tool streaming (and structured output streaming). Defaults to `true`.

- `structuredOutputMode` _"outputFormat" | "jsonTool" | "auto"_

  Optional. Determines how structured outputs are generated.

  - `"outputFormat"`: Use the `output_format` parameter to specify the structured output format.
  - `"jsonTool"`: Use a special `"json"` tool to specify the structured output format.
  - `"auto"`: Use `"outputFormat"` when supported, otherwise fall back to `"jsonTool"` (default).

### Structured Outputs and Tool Input Streaming

Tool call streaming is enabled by default. You can opt out by setting the
`toolStreaming` provider option to `false`.

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  tools: {
    writeFile: tool({
      description: 'Write content to a file',
      inputSchema: z.object({
        path: z.string(),
        content: z.string(),
      }),
      execute: async ({ path, content }) => {
        // Implementation
        return { success: true };
      },
    }),
  },
  prompt: 'Write a short story to story.txt',
});
```

### Effort

Anthropic introduced an `effort` option with `claude-opus-4-5` that affects thinking, text responses, and function calls. Effort defaults to `high`; you can set it to `medium` or `low` to save tokens and to lower time-to-last-token latency (TTLT).

```ts highlight="8-10"
import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text, usage } = await generateText({
  model: anthropic('claude-opus-4-5'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    anthropic: {
      effort: 'low',
    } satisfies AnthropicProviderOptions,
  },
});

console.log(text); // resulting text
console.log(usage); // token usage
```

### Reasoning

Anthropic has reasoning support for the `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, and `claude-3-7-sonnet-20250219` models.

You can enable it using the `thinking` provider option
and specifying a thinking budget in tokens.

```ts highlight="4,8-10"
import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text, reasoningText, reasoning } = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    } satisfies AnthropicProviderOptions,
  },
});

console.log(reasoningText); // reasoning text
console.log(reasoning); // reasoning details including redacted reasoning
console.log(text); // text response
```

See [AI SDK UI: Chatbot](/docs/ai-sdk-ui/chatbot#reasoning) for more details
on how to integrate reasoning into your chatbot.

### Context Management

Anthropic's Context Management feature allows you to automatically manage conversation context by clearing tool uses or thinking content when certain conditions are met. This helps optimize token usage and manage long conversations more efficiently.

You can configure context management using the `contextManagement` provider option:

```ts highlight="7-20"
import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-3-7-sonnet-20250219'),
  prompt: 'Continue our conversation...',
  providerOptions: {
    anthropic: {
      contextManagement: {
        edits: [
          {
            type: 'clear_tool_uses_20250919',
            trigger: { type: 'input_tokens', value: 10000 },
            keep: { type: 'tool_uses', value: 5 },
            clearAtLeast: { type: 'input_tokens', value: 1000 },
            clearToolInputs: true,
            excludeTools: ['important_tool'],
          },
        ],
      },
    } satisfies AnthropicProviderOptions,
  },
});

// Check what was cleared
console.log(result.providerMetadata?.anthropic?.contextManagement);
```

#### Clear Tool Uses

The `clear_tool_uses_20250919` edit type removes old tool calls from the conversation history:

- **trigger** - Condition that triggers the clearing (e.g., `{ type: 'input_tokens', value: 10000 }`)
- **keep** - How many recent tool uses to preserve (e.g., `{ type: 'tool_uses', value: 5 }`)
- **clearAtLeast** - Minimum amount to clear (e.g., `{ type: 'input_tokens', value: 1000 }`)
- **clearToolInputs** - Whether to clear tool input parameters (boolean)
- **excludeTools** - Array of tool names to never clear

#### Clear Thinking

The `clear_thinking_20251015` edit type removes thinking/reasoning content:

```ts
const result = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'Continue reasoning...',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
      contextManagement: {
        edits: [
          {
            type: 'clear_thinking_20251015',
            keep: { type: 'thinking_turns', value: 2 },
          },
        ],
      },
    } satisfies AnthropicProviderOptions,
  },
});
```

#### Applied Edits Metadata

After generation, you can check which edits were applied in the provider metadata:

```ts
const metadata = result.providerMetadata?.anthropic?.contextManagement;

if (metadata?.appliedEdits) {
  metadata.appliedEdits.forEach(edit => {
    if (edit.type === 'clear_tool_uses_20250919') {
      console.log(`Cleared ${edit.clearedToolUses} tool uses`);
      console.log(`Freed ${edit.clearedInputTokens} tokens`);
    } else if (edit.type === 'clear_thinking_20251015') {
      console.log(`Cleared ${edit.clearedThinkingTurns} thinking turns`);
      console.log(`Freed ${edit.clearedInputTokens} tokens`);
    }
  });
}
```

For more details, see [Anthropic's Context Management documentation](https://docs.anthropic.com/en/docs/build-with-claude/context-management).

### Cache Control

In messages and message parts, you can use the `providerOptions` property to set cache control breakpoints.
To set a breakpoint, set the `anthropic` property in the `providerOptions` object to `{ cacheControl: { type: 'ephemeral' } }`.

The cache creation input tokens are then returned in the `providerMetadata` object
for `generateText` and `generateObject`, again under the `anthropic` property.
When you use `streamText` or `streamObject`, the response contains a promise
that resolves to the metadata. Alternatively, you can receive it in the
`onFinish` callback.

```ts highlight="8,18-20,29-30"
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const errorMessage = '... long error message ...';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'You are a JavaScript expert.' },
        {
          type: 'text',
          text: `Error message: ${errorMessage}`,
          providerOptions: {
            anthropic: { cacheControl: { type: 'ephemeral' } },
          },
        },
        { type: 'text', text: 'Explain the error message.' },
      ],
    },
  ],
});

console.log(result.text);
console.log(result.providerMetadata?.anthropic);
// e.g. { cacheCreationInputTokens: 2118 }
```

You can also use cache control on system messages by providing multiple system messages at the head of your messages array:

```ts highlight="3,7-9"
const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20240620'),
  messages: [
    {
      role: 'system',
      content: 'Cached system message part',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    {
      role: 'system',
      content: 'Uncached system message part',
    },
    {
      role: 'user',
      content: 'User prompt',
    },
  ],
});
```

Cache control for tools:

```ts
const result = await generateText({
  model: anthropic('claude-3-5-haiku-latest'),
  tools: {
    cityAttractions: tool({
      inputSchema: z.object({ city: z.string() }),
      providerOptions: {
        anthropic: {
          cacheControl: { type: 'ephemeral' },
        },
      },
    }),
  },
  messages: [
    {
      role: 'user',
      content: 'User prompt',
    },
  ],
});
```

#### Longer cache TTL

Anthropic also supports a longer 1-hour cache duration:

```ts
const result = await generateText({
  model: anthropic('claude-3-5-haiku-latest'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Long cached message',
          providerOptions: {
            anthropic: {
              cacheControl: { type: 'ephemeral', ttl: '1h' },
            },
          },
        },
      ],
    },
  ],
});
```

#### Limitations

The minimum cacheable prompt length is:

- 4096 tokens for Claude Opus 4.5
- 1024 tokens for Claude Opus 4.1, Claude Opus 4, Claude Sonnet 4.5, Claude Sonnet 4, Claude Sonnet 3.7, and Claude Opus 3
- 4096 tokens for Claude Haiku 4.5
- 2048 tokens for Claude Haiku 3.5 and Claude Haiku 3

Shorter prompts cannot be cached, even if they are marked with `cacheControl`: any request to cache fewer than this number of tokens is processed without caching.
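
If you want to guard against setting breakpoints on prompts that are too short, the published minimums can be encoded in a small lookup. The family keys below are informal labels for illustration, not model ids:

```ts
// Minimum cacheable prompt length in tokens, per the list above.
const MIN_CACHEABLE_TOKENS: Record<string, number> = {
  'opus-4.5': 4096,
  'opus-4.1': 1024,
  'opus-4': 1024,
  'sonnet-4.5': 1024,
  'sonnet-4': 1024,
  'sonnet-3.7': 1024,
  'opus-3': 1024,
  'haiku-4.5': 4096,
  'haiku-3.5': 2048,
  'haiku-3': 2048,
};

// True when a prompt of the given token count can be cached on that family.
function isCacheable(family: string, promptTokens: number): boolean {
  const min = MIN_CACHEABLE_TOKENS[family];
  return min !== undefined && promptTokens >= min;
}
```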

For more on prompt caching with Anthropic, see [Anthropic's Cache Control documentation](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching).

<Note type="warning">
  Because the `UIMessage` type (used by AI SDK UI hooks like `useChat`) does not
  support the `providerOptions` property, convert the messages with
  `convertToModelMessages` before passing them to functions like `generateText`
  or `streamText`. For more details on `providerOptions` usage, see
  [here](/docs/foundations/prompts#provider-options).
</Note>

### Bash Tool

The Bash Tool allows running bash commands. Here's how to create and use it:

```ts
const bashTool = anthropic.tools.bash_20241022({
  execute: async ({ command, restart }) => {
    // Implement your bash command execution logic here
    // Return the result of the command execution
  },
});
```

Parameters:

- `command` (string): The bash command to run. Required unless the tool is being restarted.
- `restart` (boolean, optional): Set to `true` to restart the tool.

<Note>Only certain Claude versions are supported.</Note>
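
One way to back `execute` is with Node's `child_process`. A minimal, unsandboxed sketch, assuming restarting only needs to reset local session state (do not run untrusted commands like this in production):

```ts
import { execSync } from 'node:child_process';

// Hypothetical execute body for the bash tool.
function runBashCommand({
  command,
  restart,
}: {
  command?: string;
  restart?: boolean;
}): string {
  if (restart) return 'tool restarted'; // reset any session state here
  if (!command) return 'no command provided';
  try {
    return execSync(command, { encoding: 'utf8', timeout: 10_000 });
  } catch (error) {
    return `command failed: ${(error as Error).message}`;
  }
}
```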

### Memory Tool

The [Memory Tool](https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool) allows Claude to use a local memory, e.g. one stored in the filesystem.
Here's how to create it:

```ts
const memory = anthropic.tools.memory_20250818({
  execute: async action => {
    // Implement your memory command execution logic here
    // Return the result of the command execution
  },
});
```

<Note>Only certain Claude versions are supported.</Note>

### Text Editor Tool

The Text Editor Tool provides functionality for viewing and editing text files.

```ts
const tools = {
  str_replace_based_edit_tool: anthropic.tools.textEditor_20250728({
    maxCharacters: 10000, // optional
    async execute({ command, path, old_str, new_str }) {
      // ...
    },
  }),
} satisfies ToolSet;
```

<Note>
  Different models support different versions of the tool. For Claude Sonnet 3.5
  and 3.7 you need to use older tool versions.
</Note>

Parameters:

- `command` ('view' | 'create' | 'str_replace' | 'insert' | 'undo_edit'): The command to run. Note: `undo_edit` is only available in Claude 3.5 Sonnet and earlier models.
- `path` (string): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
- `file_text` (string, optional): Required for the `create` command, containing the content of the file to be created.
- `insert_line` (number, optional): Required for the `insert` command. The line number after which to insert the new string.
- `new_str` (string, optional): New string for the `str_replace` or `insert` commands.
- `old_str` (string, optional): Required for the `str_replace` command, containing the string to replace.
- `view_range` (number[], optional): Optional for the `view` command to specify the line range to show.
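
To make the commands concrete, here is a minimal in-memory `execute` sketch covering only `view`, `create`, and `str_replace`. A real handler would operate on the filesystem and implement the remaining commands:

```ts
// In-memory stand-in for the filesystem.
const files = new Map<string, string>();

function editorExecute({
  command,
  path,
  file_text,
  old_str,
  new_str,
}: {
  command: 'view' | 'create' | 'str_replace' | 'insert' | 'undo_edit';
  path: string;
  file_text?: string;
  old_str?: string;
  new_str?: string;
}): string {
  switch (command) {
    case 'view':
      return files.get(path) ?? `File not found: ${path}`;
    case 'create':
      files.set(path, file_text ?? '');
      return `Created ${path}`;
    case 'str_replace': {
      const content = files.get(path) ?? '';
      files.set(path, content.replace(old_str ?? '', new_str ?? ''));
      return `Edited ${path}`;
    }
    default:
      return `Command not implemented: ${command}`;
  }
}
```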

### Computer Tool

The Computer Tool enables control of keyboard and mouse actions on a computer:

```ts
const computerTool = anthropic.tools.computer_20241022({
  displayWidthPx: 1920,
  displayHeightPx: 1080,
  displayNumber: 0, // Optional, for X11 environments

  execute: async ({ action, coordinate, text }) => {
    // Implement your computer control logic here
    // Return the result of the action

    // Example code:
    switch (action) {
      case 'screenshot': {
        // multipart result:
        return {
          type: 'image',
          data: fs
            .readFileSync('./data/screenshot-editor.png')
            .toString('base64'),
        };
      }
      default: {
        console.log('Action:', action);
        console.log('Coordinate:', coordinate);
        console.log('Text:', text);
        return `executed ${action}`;
      }
    }
  },

  // map to tool result content for LLM consumption:
  toModelOutput({ output }) {
    return typeof output === 'string'
      ? [{ type: 'text', text: output }]
      : [{ type: 'image', data: output.data, mediaType: 'image/png' }];
  },
});
```

Parameters:

- `action` ('key' | 'type' | 'mouse_move' | 'left_click' | 'left_click_drag' | 'right_click' | 'middle_click' | 'double_click' | 'screenshot' | 'cursor_position'): The action to perform.
- `coordinate` (number[], optional): Required for the `mouse_move` and `left_click_drag` actions. Specifies the (x, y) coordinates.
- `text` (string, optional): Required for the `type` and `key` actions.

These tools can be used in conjunction with the `claude-3-5-sonnet-20240620` model to enable more complex interactions and tasks.

### Web Search Tool

Anthropic provides a provider-defined web search tool that gives Claude direct access to real-time web content, allowing it to answer questions with up-to-date information beyond its knowledge cutoff.

You can enable web search using the provider-defined web search tool:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const webSearchTool = anthropic.tools.webSearch_20250305({
  maxUses: 5,
});

const result = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'What are the latest developments in AI?',
  tools: {
    web_search: webSearchTool,
  },
});
```

<Note>
  Web search must be enabled in your organization's [Console
  settings](https://console.anthropic.com/settings/privacy).
</Note>

#### Configuration Options

The web search tool supports several configuration options:

- **maxUses** _number_

  Maximum number of web searches Claude can perform during the conversation.

- **allowedDomains** _string[]_

  Optional list of domains that Claude is allowed to search. If provided, searches will be restricted to these domains.

- **blockedDomains** _string[]_

  Optional list of domains that Claude should avoid when searching.

- **userLocation** _object_

  Optional user location information to provide geographically relevant search results.

```ts
const webSearchTool = anthropic.tools.webSearch_20250305({
  maxUses: 3,
  allowedDomains: ['techcrunch.com', 'wired.com'],
  blockedDomains: ['example-spam-site.com'],
  userLocation: {
    type: 'approximate',
    country: 'US',
    region: 'California',
    city: 'San Francisco',
    timezone: 'America/Los_Angeles',
  },
});

const result = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'Find local news about technology',
  tools: {
    web_search: webSearchTool,
  },
});
```

### Web Fetch Tool

Anthropic provides a provider-defined web fetch tool that allows Claude to retrieve content from specific URLs. This is useful when you want Claude to analyze or reference content from a particular webpage or document.

You can enable web fetch using the provider-defined web fetch tool:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-sonnet-4-0'),
  prompt:
    'What is this page about? https://en.wikipedia.org/wiki/Maglemosian_culture',
  tools: {
    web_fetch: anthropic.tools.webFetch_20250910({ maxUses: 1 }),
  },
});
```

### Tool Search

Anthropic provides provider-defined tool search tools that enable Claude to work with hundreds or thousands of tools by dynamically discovering and loading them on demand. Instead of loading all tool definitions into the context window upfront, Claude searches your tool catalog and loads only the tools it needs.

There are two variants:

- **BM25 Search** - Uses natural language queries to find tools
- **Regex Search** - Uses regex patterns (Python `re.search()` syntax) to find tools

#### Basic Usage

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    toolSearch: anthropic.tools.toolSearchBm25_20251119(),

    get_weather: tool({
      description: 'Get the current weather at a specific location',
      inputSchema: z.object({
        location: z.string().describe('The city and state'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72,
        condition: 'Sunny',
      }),
      // Defer this tool - Claude discovers it via the tool search tool
      providerOptions: {
        anthropic: { deferLoading: true },
      },
    }),
  },
});
```

#### Using Regex Search

For more precise tool matching, you can use the regex variant:

```ts
const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: 'Get the weather data',
  tools: {
    toolSearch: anthropic.tools.toolSearchRegex_20251119(),
    // ... deferred tools
  },
});
```

Claude will construct regex patterns like `weather|temperature|forecast` to find matching tools.
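
To make the matching concrete, here is what an alternation like that selects from a small catalog of tool names (shown with a JavaScript regex; Anthropic documents Python `re.search()` semantics, which agree for a simple pattern like this):

```ts
const toolNames = [
  'get_weather',
  'get_forecast',
  'send_email',
  'create_calendar_event',
];

// An alternation pattern like the one Claude might construct.
const pattern = /weather|temperature|forecast/;

const matches = toolNames.filter(name => pattern.test(name));
console.log(matches); // ['get_weather', 'get_forecast']
```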

### MCP Connectors

Anthropic supports connecting to [MCP servers](https://docs.claude.com/en/docs/agents-and-tools/mcp-connector) as part of request execution.

You can enable this feature with the `mcpServers` provider option:

```ts
import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  prompt: `Call the echo tool with "hello world". What does it respond with?`,
  providerOptions: {
    anthropic: {
      mcpServers: [
        {
          type: 'url',
          name: 'echo',
          url: 'https://echo.mcp.inevitable.fyi/mcp',
          // optional: authorization token
          authorizationToken: mcpAuthToken,
          // optional: tool configuration
          toolConfiguration: {
            enabled: true,
            allowedTools: ['echo'],
          },
        },
      ],
    } satisfies AnthropicProviderOptions,
  },
});
```

The tool calls and results are dynamic, i.e. the input and output schemas are not known in advance.

#### Web Fetch Configuration Options

The web fetch tool supports several configuration options:

- **maxUses** _number_

  Limits the number of web fetches performed.

- **allowedDomains** _string[]_

  Only fetch from these domains.

- **blockedDomains** _string[]_

  Never fetch from these domains.

- **citations** _object_

  Unlike web search, where citations are always enabled, citations are optional for web fetch. Set `citations: { enabled: true }` to enable Claude to cite specific passages from fetched documents.

- **maxContentTokens** _number_

  Limits the amount of fetched content that will be included in the context.

#### Web Search Error Handling

Web search errors are handled differently depending on whether you're using streaming or non-streaming:

**Non-streaming (`generateText`, `generateObject`):**
Web search errors throw exceptions that you can catch:

```ts
try {
  const result = await generateText({
    model: anthropic('claude-opus-4-20250514'),
    prompt: 'Search for something',
    tools: {
      web_search: webSearchTool,
    },
  });
} catch (error) {
  if (error.message.includes('Web search failed')) {
    console.log('Search error:', error.message);
    // Handle search error appropriately
  }
}
```

**Streaming (`streamText`, `streamObject`):**
Web search errors are delivered as error parts in the stream:

```ts
const result = streamText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'Search for something',
  tools: {
    web_search: webSearchTool,
  },
});

for await (const part of result.fullStream) {
  if (part.type === 'error') {
    console.log('Search error:', part.error);
    // Handle search error appropriately
  }
}
```

### Code Execution

Anthropic provides a provider-defined code execution tool that gives Claude direct access to a real Python environment, allowing it to execute code to inform its responses.

You can enable code execution using the provider-defined code execution tool:

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const codeExecutionTool = anthropic.tools.codeExecution_20250825();

const result = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt:
    'Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]',
  tools: {
    code_execution: codeExecutionTool,
  },
});
```

#### Error Handling

Code execution errors are handled differently depending on whether you're using streaming or non-streaming:

**Non-streaming (`generateText`, `generateObject`):**
Code execution errors are delivered as tool result parts in the response:

```ts
const result = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'Execute some Python script',
  tools: {
    code_execution: codeExecutionTool,
  },
});

const toolErrors = result.content?.filter(
  content => content.type === 'tool-error',
);

toolErrors?.forEach(error => {
  console.error('Tool execution error:', {
    toolName: error.toolName,
    toolCallId: error.toolCallId,
    error: error.error,
  });
});
```

**Streaming (`streamText`, `streamObject`):**
Code execution errors are delivered as error parts in the stream:

```ts
const result = streamText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'Execute some Python script',
  tools: {
    code_execution: codeExecutionTool,
  },
});

for await (const part of result.fullStream) {
  if (part.type === 'error') {
    console.log('Code execution error:', part.error);
    // Handle code execution error appropriately
  }
}
```
+ ### Programmatic Tool Calling
883
+
884
+ [Programmatic Tool Calling](https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/programmatic-tool-calling) allows Claude to write code that calls your tools programmatically within a code execution container, rather than requiring round trips through the model for each tool invocation. This reduces latency for multi-tool workflows and decreases token consumption.
885
+
886
+ To enable programmatic tool calling, use the `allowedCallers` provider option on tools that you want to be callable from within code execution:
887
+
888
```ts highlight="13-17"
import {
  anthropic,
  forwardAnthropicContainerIdFromLastStep,
} from '@ai-sdk/anthropic';
import { generateText, tool, stepCountIs } from 'ai';
import { z } from 'zod';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  stopWhen: stepCountIs(10),
  prompt:
    'Get the weather for Tokyo, Sydney, and London, then calculate the average temperature.',
  tools: {
    code_execution: anthropic.tools.codeExecution_20250825(),

    getWeather: tool({
      description: 'Get current weather data for a city.',
      inputSchema: z.object({
        city: z.string().describe('Name of the city'),
      }),
      execute: async ({ city }) => {
        // Your weather API implementation
        return { temp: 22, condition: 'Sunny' };
      },
      // Enable this tool to be called from within code execution
      providerOptions: {
        anthropic: {
          allowedCallers: ['code_execution_20250825'],
        },
      },
    }),
  },

  // Propagate container ID between steps for code execution continuity
  prepareStep: forwardAnthropicContainerIdFromLastStep,
});
```

In this flow:

1. Claude writes Python code that calls your `getWeather` tool multiple times in parallel
2. The SDK automatically executes your tool and returns results to the code execution container
3. Claude processes the results in code and generates the final response

<Note>
  Programmatic tool calling requires `claude-sonnet-4-5` or `claude-opus-4-5`
  models and uses the `code_execution_20250825` tool.
</Note>

#### Container Persistence

When using programmatic tool calling across multiple steps, you need to preserve the container ID between steps using `prepareStep`. You can use the `forwardAnthropicContainerIdFromLastStep` helper function to do this automatically. The container ID is available in `providerMetadata.anthropic.container.id` after each step completes.

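If you write a custom `prepareStep` instead of using the helper, the core of the work is reading that metadata path. The sketch below shows the lookup in isolation; `extractAnthropicContainerId` and the metadata type are hypothetical names used for illustration, not SDK exports:

```typescript
// Hypothetical helper (not part of @ai-sdk/anthropic): reads the container
// ID from a step's provider metadata at the path documented above.
type StepProviderMetadata =
  | { anthropic?: { container?: { id?: string } } }
  | undefined;

function extractAnthropicContainerId(
  metadata: StepProviderMetadata,
): string | undefined {
  return metadata?.anthropic?.container?.id;
}

// Example: metadata shape after a code-execution step completes.
const metadata = { anthropic: { container: { id: 'container_abc123' } } };
console.log(extractAnthropicContainerId(metadata)); // 'container_abc123'
console.log(extractAnthropicContainerId(undefined)); // undefined
```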
## Agent Skills

[Anthropic Agent Skills](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview) enable Claude to perform specialized tasks like document processing (PPTX, DOCX, PDF, XLSX) and data analysis. Skills run in a sandboxed container and require the code execution tool to be enabled.

### Using Built-in Skills

Anthropic provides several built-in skills:

- **pptx** - Create and edit PowerPoint presentations
- **docx** - Create and edit Word documents
- **pdf** - Process and analyze PDF files
- **xlsx** - Work with Excel spreadsheets

To use skills, you need to:

1. Enable the code execution tool
2. Specify the container with skills in `providerOptions`

```ts highlight="4,9-17,19-23"
import { anthropic, AnthropicProviderOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  tools: {
    code_execution: anthropic.tools.codeExecution_20250825(),
  },
  prompt: 'Create a presentation about renewable energy with 5 slides',
  providerOptions: {
    anthropic: {
      container: {
        skills: [
          {
            type: 'anthropic',
            skillId: 'pptx',
            version: 'latest', // optional
          },
        ],
      },
    } satisfies AnthropicProviderOptions,
  },
});
```

### Custom Skills

You can also use custom skills by specifying `type: 'custom'`:

```ts highlight="9-11"
const result = await generateText({
  model: anthropic('claude-sonnet-4-5'),
  tools: {
    code_execution: anthropic.tools.codeExecution_20250825(),
  },
  prompt: 'Use my custom skill to process this data',
  providerOptions: {
    anthropic: {
      container: {
        skills: [
          {
            type: 'custom',
            skillId: 'my-custom-skill-id',
            version: '1.0', // optional
          },
        ],
      },
    } satisfies AnthropicProviderOptions,
  },
});
```

<Note>
  Skills use progressive context loading and execute within a sandboxed
  container with code execution capabilities.
</Note>

### PDF Support

Anthropic's `claude-3-5-sonnet-20241022` model supports reading PDF files.
You can pass PDF files as part of the message content using the `file` type:

Option 1: URL-based PDF document

```ts
const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'What is an embedding model according to this document?',
        },
        {
          type: 'file',
          data: new URL(
            'https://github.com/vercel/ai/blob/main/examples/ai-functions/data/ai.pdf?raw=true',
          ),
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
});
```


Option 2: Base64-encoded PDF document

```ts
import fs from 'node:fs';

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'What is an embedding model according to this document?',
        },
        {
          type: 'file',
          data: fs.readFileSync('./data/ai.pdf'),
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
});
```


The model will have access to the contents of the PDF file and
respond to questions about it.
The PDF file should be passed using the `data` field,
and the `mediaType` should be set to `'application/pdf'`.

### Model Capabilities

| Model                      | Image Input         | Object Generation   | Tool Usage          | Computer Use        | Web Search          | Tool Search         |
| -------------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| `claude-opus-4-5`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `claude-haiku-4-5`         | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
| `claude-sonnet-4-5`        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `claude-opus-4-1`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
| `claude-opus-4-0`          | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
| `claude-sonnet-4-0`        | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
| `claude-3-7-sonnet-latest` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |
| `claude-3-5-haiku-latest`  | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |                     |


<Note>
  The table above lists popular models. Please see the [Anthropic
  docs](https://docs.anthropic.com/en/docs/about-claude/models) for a full list
  of available models. You can also pass any available provider model ID as a
  string if needed.
</Note>
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@ai-sdk/anthropic",
- "version": "3.0.19",
+ "version": "3.0.20",
  "license": "Apache-2.0",
  "sideEffects": false,
  "main": "./dist/index.js",
@@ -8,11 +8,15 @@
  "types": "./dist/index.d.ts",
  "files": [
  "dist/**/*",
+ "docs/**/*",
  "src",
  "CHANGELOG.md",
  "README.md",
  "internal.d.ts"
  ],
+ "directories": {
+ "doc": "./docs"
+ },
  "exports": {
  "./package.json": "./package.json",
  ".": {
@@ -62,7 +66,7 @@
  "scripts": {
  "build": "pnpm clean && tsup --tsconfig tsconfig.build.json",
  "build:watch": "pnpm clean && tsup --watch --tsconfig tsconfig.build.json",
- "clean": "del-cli dist *.tsbuildinfo",
+ "clean": "del-cli dist docs *.tsbuildinfo",
  "lint": "eslint \"./**/*.ts*\"",
  "type-check": "tsc --build",
  "prettier-check": "prettier --check \"./**/*.ts*\"",