llm-advanced-tools 0.1.1 → 0.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/plan.md DELETED
@@ -1,576 +0,0 @@
Engineering at Anthropic

Introducing advanced tool use on the Claude Developer Platform
Published Nov 24, 2025

We've added three new beta features that let Claude discover, learn, and execute tools dynamically. Here's how they work.

The future of AI agents is one where models work seamlessly across hundreds or thousands of tools. An IDE assistant that integrates git operations, file manipulation, package managers, testing frameworks, and deployment pipelines. An operations coordinator that connects Slack, GitHub, Google Drive, Jira, company databases, and dozens of MCP servers simultaneously.

To be effective, agents need to work with unlimited tool libraries without stuffing every definition into context upfront. Our blog article on using code execution with MCP discussed how tool results and definitions can sometimes consume 50,000+ tokens before an agent reads a request. Agents should discover and load tools on demand, keeping only what's relevant for the current task.

Agents also need the ability to call tools from code. With natural language tool calling, each invocation requires a full inference pass, and intermediate results pile up in context whether they're useful or not. Code is a natural fit for orchestration logic such as loops, conditionals, and data transformations. Agents need the flexibility to choose between code execution and inference based on the task at hand.

Agents also need to learn correct tool usage from examples, not just schema definitions. JSON schemas define what's structurally valid, but they can't express usage patterns: when to include optional parameters, which combinations make sense, or what conventions your API expects.

Today, we're releasing three features that make this possible:

- Tool Search Tool, which allows Claude to use search tools to access thousands of tools without consuming its context window
- Programmatic Tool Calling, which allows Claude to invoke tools in a code execution environment, reducing the impact on the model's context window
- Tool Use Examples, which provide a universal standard for demonstrating how to use a given tool effectively

In internal testing, we've found these features have helped us build things that wouldn't have been possible with conventional tool use patterns. For example, Claude for Excel uses Programmatic Tool Calling to read and modify spreadsheets with thousands of rows without overloading the model's context window.

Based on our experience, we believe these features open up new possibilities for what you can build with Claude.

Tool Search Tool

The challenge

MCP tool definitions provide important context, but as more servers connect, those tokens can add up. Consider a five-server setup:

- GitHub: 35 tools (~26K tokens)
- Slack: 11 tools (~21K tokens)
- Sentry: 5 tools (~3K tokens)
- Grafana: 5 tools (~3K tokens)
- Splunk: 2 tools (~2K tokens)

That's 58 tools consuming approximately 55K tokens before the conversation even starts. Add more servers like Jira (which alone uses ~17K tokens) and you're quickly approaching 100K+ tokens of overhead. At Anthropic, we've seen tool definitions consume 134K tokens before optimization.
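As a back-of-envelope check, that figure is just the sum of the per-server definition costs listed above:

```python
# Rough context math for the five-server setup above (token counts from the list).
server_tokens = {"GitHub": 26_000, "Slack": 21_000, "Sentry": 3_000,
                 "Grafana": 3_000, "Splunk": 2_000}
print(sum(server_tokens.values()))  # ~55,000 tokens before any conversation begins
```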

But token cost isn't the only issue. The most common failures are wrong tool selection and incorrect parameters, especially when tools have similar names like notification-send-user vs. notification-send-channel.

Our solution

Instead of loading all tool definitions upfront, the Tool Search Tool discovers tools on demand. Claude only sees the tools it actually needs for the current task.

[Diagram: the Tool Search Tool preserves 191,300 tokens of context, compared to 122,800 with Claude's traditional approach.]
Traditional approach:

- All tool definitions loaded upfront (~72K tokens for 50+ MCP tools)
- Conversation history and system prompt compete for remaining space
- Total context consumption: ~77K tokens before any work begins

With the Tool Search Tool:

- Only the Tool Search Tool loaded upfront (~500 tokens)
- Tools discovered on demand as needed (3-5 relevant tools, ~3K tokens)
- Total context consumption: ~8.7K tokens, preserving 95% of the context window

This represents an 85% reduction in token usage while maintaining access to your full tool library. Internal testing showed significant accuracy improvements on MCP evaluations when working with large tool libraries: Opus 4 improved from 49% to 74%, and Opus 4.5 improved from 79.5% to 88.1% with the Tool Search Tool enabled.

How the Tool Search Tool works

The Tool Search Tool lets Claude dynamically discover tools instead of loading all definitions upfront. You provide all your tool definitions to the API, but mark tools with defer_loading: true to make them discoverable on demand. Deferred tools aren't loaded into Claude's context initially. Claude only sees the Tool Search Tool itself plus any tools with defer_loading: false (your most critical, frequently used tools).

When Claude needs specific capabilities, it searches for relevant tools. The Tool Search Tool returns references to matching tools, which get expanded into full definitions in Claude's context.

For example, if Claude needs to interact with GitHub, it searches for "github," and only github.createPullRequest and github.listIssues get loaded, not your other 50+ tools from Slack, Jira, and Google Drive.

This way, Claude has access to your full tool library while only paying the token cost for the tools it actually needs.

Prompt caching note: the Tool Search Tool doesn't break prompt caching, because deferred tools are excluded from the initial prompt entirely. They're only added to context after Claude searches for them, so your system prompt and core tool definitions remain cacheable.
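To illustrate, a cache breakpoint can sit on the last always-loaded tool so the stable prefix stays cached across turns. This is a minimal sketch using the standard prompt-caching cache_control marker; the search_files definition is a placeholder:

```python
tools = [
    {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
    {
        "name": "search_files",
        "description": "Search files in Google Drive",
        "input_schema": {"type": "object",
                         "properties": {"query": {"type": "string"}}},
        "defer_loading": False,
        # Standard prompt-caching breakpoint at the end of the stable prefix.
        "cache_control": {"type": "ephemeral"},
    },
    # ...deferred tools (defer_loading: true) follow; they stay out of the
    # initial prompt until Claude searches for them, so the prefix is stable.
]
```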

Implementation:

```json
{
  "tools": [
    // Include a tool search tool (regex, BM25, or custom)
    {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},

    // Mark tools for on-demand discovery
    {
      "name": "github.createPullRequest",
      "description": "Create a pull request",
      "input_schema": {...},
      "defer_loading": true
    }
    // ... hundreds more deferred tools with defer_loading: true
  ]
}
```

For MCP servers, you can defer loading an entire server while keeping specific high-use tools loaded:

```json
{
  "type": "mcp_toolset",
  "mcp_server_name": "google-drive",
  "default_config": {"defer_loading": true},  // defer loading the entire server
  "configs": {
    "search_files": {"defer_loading": false}  // keep the most-used tool loaded
  }
}
```

The Claude Developer Platform provides regex-based and BM25-based search tools out of the box, but you can also implement custom search tools using embeddings or other strategies, as sketched below.
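For instance, a custom strategy might rank deferred tools by embedding similarity before returning the top matches. The sketch below is purely conceptual: embed is a hypothetical embedding function, and the plumbing that returns tool references to the API is omitted.

```python
import numpy as np

def rank_tools(query: str, tool_defs: list[dict], embed, top_k: int = 5) -> list[str]:
    """Score each deferred tool against the query by cosine similarity."""
    q = embed(query)
    scored = []
    for tool in tool_defs:
        doc = f'{tool["name"]}: {tool["description"]}'
        v = embed(doc)
        sim = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((sim, tool["name"]))
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]
```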

When to use the Tool Search Tool

Like any architectural decision, enabling the Tool Search Tool involves trade-offs. The feature adds a search step before tool invocation, so it delivers the best ROI when the context savings and accuracy improvements outweigh the additional latency.

Use it when:

- Tool definitions consume >10K tokens
- You're experiencing tool selection accuracy issues
- You're building MCP-powered systems with multiple servers
- 10+ tools are available

Less beneficial when:

- You have a small tool library (<10 tools)
- All tools are used frequently in every session
- Tool definitions are compact

Programmatic Tool Calling

The challenge

Traditional tool calling creates two fundamental problems as workflows become more complex:

- Context pollution from intermediate results: When Claude analyzes a 10MB log file for error patterns, the entire file enters its context window, even though Claude only needs a summary of error frequencies. When fetching customer data across multiple tables, every record accumulates in context regardless of relevance. These intermediate results consume massive token budgets and can push important information out of the context window entirely.
- Inference overhead and manual synthesis: Each tool call requires a full model inference pass. After receiving results, Claude must "eyeball" the data to extract relevant information, reason about how the pieces fit together, and decide what to do next, all through natural language processing. A five-tool workflow means five inference passes, plus Claude parsing each result, comparing values, and synthesizing conclusions. This is both slow and error-prone.

Our solution

Programmatic Tool Calling enables Claude to orchestrate tools through code rather than through individual API round-trips. Instead of Claude requesting tools one at a time with each result returned to its context, Claude writes code that calls multiple tools, processes their outputs, and controls what information actually enters its context window.

Claude excels at writing code, and by letting it express orchestration logic in Python rather than through natural language tool invocations, you get more reliable, precise control flow. Loops, conditionals, data transformations, and error handling are all explicit in code rather than implicit in Claude's reasoning.

Example: Budget compliance check

Consider a common business task: "Which team members exceeded their Q3 travel budget?"

You have three tools available:

- get_team_members(department) - Returns team member list with IDs and levels
- get_expenses(user_id, quarter) - Returns expense line items for a user
- get_budget_by_level(level) - Returns budget limits for an employee level

Traditional approach:

1. Fetch team members → 20 people
2. For each person, fetch their Q3 expenses → 20 tool calls, each returning 50-100 line items (flights, hotels, meals, receipts)
3. Fetch budget limits by employee level
4. All of this enters Claude's context: 2,000+ expense line items (50KB+)
5. Claude manually sums each person's expenses, looks up their budget, and compares expenses against budget limits

The result: more round-trips to the model and significant context consumption.

With Programmatic Tool Calling:

Instead of each tool result returning to Claude, Claude writes a Python script that orchestrates the entire workflow. The script runs in the Code Execution tool (a sandboxed environment), pausing when it needs results from your tools. When you return tool results via the API, they're processed by the script rather than consumed by the model. The script continues executing, and Claude only sees the final output.

[Diagram: Programmatic Tool Calling orchestrates tools through code rather than individual API round-trips, allowing for parallel tool execution.]
Here's what Claude's orchestration code looks like for the budget compliance task:

```python
import asyncio
import json

team = await get_team_members("engineering")

# Fetch budgets for each unique level
levels = list(set(m["level"] for m in team))
budget_results = await asyncio.gather(*[
    get_budget_by_level(level) for level in levels
])

# Create a lookup dictionary: {"junior": budget1, "senior": budget2, ...}
budgets = {level: budget for level, budget in zip(levels, budget_results)}

# Fetch all expenses in parallel
expenses = await asyncio.gather(*[
    get_expenses(m["id"], "Q3") for m in team
])

# Find employees who exceeded their travel budget
exceeded = []
for member, exp in zip(team, expenses):
    budget = budgets[member["level"]]
    total = sum(e["amount"] for e in exp)
    if total > budget["travel_limit"]:
        exceeded.append({
            "name": member["name"],
            "spent": total,
            "limit": budget["travel_limit"]
        })

print(json.dumps(exceeded))
```

Claude's context receives only the final result: the two to three people who exceeded their budget. The 2,000+ line items, the intermediate sums, and the budget lookups never touch Claude's context, reducing consumption from 200KB of raw expense data to roughly 1KB of results.

The efficiency gains are substantial:

- Token savings: By keeping intermediate results out of Claude's context, Programmatic Tool Calling dramatically reduces token consumption. Average usage dropped from 43,588 to 27,297 tokens, a 37% reduction on complex research tasks.
- Reduced latency: Each API round-trip requires model inference (hundreds of milliseconds to seconds). When Claude orchestrates 20+ tool calls in a single code block, you eliminate 19+ inference passes; the API handles tool execution without returning to the model each time.
- Improved accuracy: By writing explicit orchestration logic, Claude makes fewer errors than when juggling multiple tool results in natural language. Internal knowledge retrieval improved from 25.6% to 28.5%, and GIA benchmarks from 46.5% to 51.2%.

Production workflows involve messy data, conditional logic, and operations that need to scale. Programmatic Tool Calling lets Claude handle that complexity programmatically while keeping its focus on actionable results rather than raw data processing.

How Programmatic Tool Calling works

1. Mark tools as callable from code

Add code_execution to tools, and set allowed_callers to opt tools in to programmatic execution:

```json
{
  "tools": [
    {
      "type": "code_execution_20250825",
      "name": "code_execution"
    },
    {
      "name": "get_team_members",
      "description": "Get all members of a department...",
      "input_schema": {...},
      "allowed_callers": ["code_execution_20250825"]  // opt in to programmatic tool calling
    },
    {
      "name": "get_expenses",
      ...
    },
    {
      "name": "get_budget_by_level",
      ...
    }
  ]
}
```

The API converts these tool definitions into Python functions that Claude can call.
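For intuition, the generated function for get_expenses might look something like the sketch below. This is purely illustrative: the actual generated code is internal to the platform, and _call_tool is a hypothetical bridge that pauses the script until your API response arrives.

```python
# Hypothetical shape of a generated wrapper (illustrative only).
async def get_expenses(user_id: str, quarter: str) -> list[dict]:
    """Returns expense line items for a user."""
    # Suspends the sandboxed script, emits a tool_use request carrying a
    # `caller` field, and resumes with the tool_result you return.
    return await _call_tool("get_expenses", {"user_id": user_id, "quarter": quarter})
```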

2. Claude writes orchestration code

Instead of requesting tools one at a time, Claude generates Python code:

```json
{
  "type": "server_tool_use",
  "id": "srvtoolu_abc",
  "name": "code_execution",
  "input": {
    "code": "team = await get_team_members('engineering')\n..."  // the code example above
  }
}
```

3. Tools execute without hitting Claude's context

When the code calls get_expenses(), you receive a tool request with a caller field:

```json
{
  "type": "tool_use",
  "id": "toolu_xyz",
  "name": "get_expenses",
  "input": {"user_id": "emp_123", "quarter": "Q3"},
  "caller": {
    "type": "code_execution_20250825",
    "tool_id": "srvtoolu_abc"
  }
}
```

You provide the result, which is processed in the Code Execution environment rather than in Claude's context. This request-response cycle repeats for each tool call in the code.
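Client-side, that cycle is an ordinary tool-use loop. Here is a minimal sketch, assuming the standard Messages API tool_use/tool_result shapes; run_local_tool is a hypothetical dispatcher into your own backends:

```python
import json

def run_tool_loop(client, messages, tools, run_local_tool):
    """Drive the request/response cycle until Claude stops requesting tools."""
    response = client.beta.messages.create(
        betas=["advanced-tool-use-2025-11-20"],
        model="claude-sonnet-4-5-20250929",
        max_tokens=4096,
        tools=tools,
        messages=messages,
    )
    while response.stop_reason == "tool_use":
        results = []
        for block in response.content:
            # Requests issued by the code-execution sandbox carry a `caller`
            # field, but they're answered like any other tool_use block.
            if block.type == "tool_use":
                output = run_local_tool(block.name, block.input)
                results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": json.dumps(output),
                })
        messages = messages + [
            {"role": "assistant", "content": response.content},
            {"role": "user", "content": results},
        ]
        response = client.beta.messages.create(
            betas=["advanced-tool-use-2025-11-20"],
            model="claude-sonnet-4-5-20250929",
            max_tokens=4096,
            tools=tools,
            messages=messages,
        )
    return response
```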

4. Only the final output enters context

When the code finishes running, only the results of the code are returned to Claude:

```json
{
  "type": "code_execution_tool_result",
  "tool_use_id": "srvtoolu_abc",
  "content": {
    "stdout": "[{\"name\": \"Alice\", \"spent\": 12500, \"limit\": 10000}...]"
  }
}
```

This is all Claude sees: not the 2,000+ expense line items processed along the way.

When to use Programmatic Tool Calling

Programmatic Tool Calling adds a code execution step to your workflow. This extra overhead pays off when the token savings, latency improvements, and accuracy gains are substantial.

Most beneficial when:

- Processing large datasets where you only need aggregates or summaries
- Running multi-step workflows with three or more dependent tool calls
- Filtering, sorting, or transforming tool results before Claude sees them
- Handling tasks where intermediate data shouldn't influence Claude's reasoning
- Running parallel operations across many items (checking 50 endpoints, for example)

Less beneficial when:

- Making simple single-tool invocations
- Working on tasks where Claude should see and reason about all intermediate results
- Running quick lookups with small responses

Tool Use Examples

The challenge

JSON Schema excels at defining structure (types, required fields, allowed enums), but it can't express usage patterns: when to include optional parameters, which combinations make sense, or what conventions your API expects.

Consider a support ticket API:

```json
{
  "name": "create_ticket",
  "input_schema": {
    "properties": {
      "title": {"type": "string"},
      "priority": {"enum": ["low", "medium", "high", "critical"]},
      "labels": {"type": "array", "items": {"type": "string"}},
      "reporter": {
        "type": "object",
        "properties": {
          "id": {"type": "string"},
          "name": {"type": "string"},
          "contact": {
            "type": "object",
            "properties": {
              "email": {"type": "string"},
              "phone": {"type": "string"}
            }
          }
        }
      },
      "due_date": {"type": "string"},
      "escalation": {
        "type": "object",
        "properties": {
          "level": {"type": "integer"},
          "notify_manager": {"type": "boolean"},
          "sla_hours": {"type": "integer"}
        }
      }
    },
    "required": ["title"]
  }
}
```

The schema defines what's valid but leaves critical questions unanswered:

- Format ambiguity: Should due_date use "2024-11-06", "Nov 6, 2024", or "2024-11-06T00:00:00Z"?
- ID conventions: Is reporter.id a UUID, "USR-12345", or just "12345"?
- Nested structure usage: When should Claude populate reporter.contact?
- Parameter correlations: How do escalation.level and escalation.sla_hours relate to priority?

These ambiguities can lead to malformed tool calls and inconsistent parameter usage.

Our solution

Tool Use Examples let you provide sample tool calls directly in your tool definitions. Instead of relying on the schema alone, you show Claude concrete usage patterns:

```json
{
  "name": "create_ticket",
  "input_schema": { /* same schema as above */ },
  "input_examples": [
    {
      "title": "Login page returns 500 error",
      "priority": "critical",
      "labels": ["bug", "authentication", "production"],
      "reporter": {
        "id": "USR-12345",
        "name": "Jane Smith",
        "contact": {
          "email": "jane@acme.com",
          "phone": "+1-555-0123"
        }
      },
      "due_date": "2024-11-06",
      "escalation": {
        "level": 2,
        "notify_manager": true,
        "sla_hours": 4
      }
    },
    {
      "title": "Add dark mode support",
      "labels": ["feature-request", "ui"],
      "reporter": {
        "id": "USR-67890",
        "name": "Alex Chen"
      }
    },
    {
      "title": "Update API documentation"
    }
  ]
}
```

From these three examples, Claude learns:

- Format conventions: dates use YYYY-MM-DD, user IDs follow USR-XXXXX, labels use kebab-case
- Nested structure patterns: how to construct the reporter object with its nested contact object
- Optional parameter correlations: critical bugs get full contact info plus an escalation with tight SLAs; feature requests have a reporter but no contact or escalation; internal tasks have a title only

In our own internal testing, Tool Use Examples improved accuracy from 72% to 90% on complex parameter handling.

When to use Tool Use Examples

Tool Use Examples add tokens to your tool definitions, so they're most valuable when the accuracy improvements outweigh the additional cost.

Most beneficial when:

- Complex nested structures where valid JSON doesn't imply correct usage
- Tools with many optional parameters where inclusion patterns matter
- APIs with domain-specific conventions not captured in schemas
- Similar tools where examples clarify which one to use (e.g., create_ticket vs. create_incident)

Less beneficial when:

- Simple single-parameter tools with obvious usage
- Standard formats like URLs or emails that Claude already understands
- Validation concerns better handled by JSON Schema constraints

Best practices

Building agents that take real-world actions means handling scale, complexity, and precision simultaneously. These three features work together to solve different bottlenecks in tool use workflows. Here's how to combine them effectively.

Layer features strategically

Not every agent needs all three features for a given task. Start with your biggest bottleneck:

- Context bloat from tool definitions → Tool Search Tool
- Large intermediate results polluting context → Programmatic Tool Calling
- Parameter errors and malformed calls → Tool Use Examples

This focused approach lets you address the specific constraint limiting your agent's performance, rather than adding complexity upfront.

Then layer additional features as needed. They're complementary: Tool Search Tool ensures the right tools are found, Programmatic Tool Calling ensures efficient execution, and Tool Use Examples ensure correct invocation.
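Concretely, all three features can coexist on a single tools array. A minimal sketch, reusing shapes shown earlier in this post (the create_ticket schema is truncated for brevity):

```python
tools = [
    {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
    {"type": "code_execution_20250825", "name": "code_execution"},
    {
        "name": "create_ticket",
        "description": "Create a support ticket",
        "input_schema": {"type": "object",
                         "properties": {"title": {"type": "string"}},
                         "required": ["title"]},
        "defer_loading": True,                           # discovered via tool search
        "allowed_callers": ["code_execution_20250825"],  # callable from orchestration code
        "input_examples": [{"title": "Update API documentation"}],
    },
]
```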

Set up Tool Search Tool for better discovery

Tool search matches against names and descriptions, so clear, descriptive definitions improve discovery accuracy.

```json
// Good
{
  "name": "search_customer_orders",
  "description": "Search for customer orders by date range, status, or total amount. Returns order details including items, shipping, and payment info."
}

// Bad
{
  "name": "query_db_orders",
  "description": "Execute order query"
}
```

Add system prompt guidance so Claude knows what's available:

```
You have access to tools for Slack messaging, Google Drive file management,
Jira ticket tracking, and GitHub repository operations. Use the tool search
to find specific capabilities.
```

Keep your three to five most-used tools always loaded and defer the rest. This balances immediate access for common operations with on-demand discovery for everything else.
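A small helper makes that split explicit. The sketch below assumes you already have plain tool definitions; build_tool_list and always_loaded are our own hypothetical names, not part of the API:

```python
def build_tool_list(all_tools: list[dict],
                    always_loaded: frozenset = frozenset({"search_files"})) -> list[dict]:
    """Load the search tool plus a few hot tools upfront; defer everything else."""
    tools = [{"type": "tool_search_tool_regex_20251119",
              "name": "tool_search_tool_regex"}]
    for tool in all_tools:
        tools.append({**tool, "defer_loading": tool["name"] not in always_loaded})
    return tools
```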

Set up Programmatic Tool Calling for correct execution

Since Claude writes code to parse tool outputs, document return formats clearly. This helps Claude write correct parsing logic:

```json
{
  "name": "get_orders",
  "description": "Retrieve orders for a customer.
    Returns:
      List of order objects, each containing:
      - id (str): Order identifier
      - total (float): Order total in USD
      - status (str): One of 'pending', 'shipped', 'delivered'
      - items (list): Array of {sku, quantity, price}
      - created_at (str): ISO 8601 timestamp"
}
```

Opt in the tools that benefit from programmatic orchestration (see the sketch below):

- Tools that can run in parallel (independent operations)
- Operations that are safe to retry (idempotent)
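A minimal sketch of that policy, assuming you track idempotency yourself; the helper name and flag are hypothetical:

```python
def with_caller_opt_in(tool: dict, idempotent: bool) -> dict:
    """Expose a tool to the code-execution sandbox only when its calls
    are independent and safe to retry."""
    if idempotent:
        return {**tool, "allowed_callers": ["code_execution_20250825"]}
    return tool
```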

Set up Tool Use Examples for parameter accuracy

Craft examples for behavioral clarity:

- Use realistic data (real city names, plausible prices; not "string" or "value")
- Show variety with minimal, partial, and full specification patterns
- Keep it concise: 1-5 examples per tool
- Focus on ambiguity (only add examples where correct usage isn't obvious from the schema)

Getting started

These features are available in beta. To enable them, add the beta header and include the tools you need:

```python
import anthropic  # standard Anthropic Python SDK

client = anthropic.Anthropic()

client.beta.messages.create(
    betas=["advanced-tool-use-2025-11-20"],
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    tools=[
        {"type": "tool_search_tool_regex_20251119", "name": "tool_search_tool_regex"},
        {"type": "code_execution_20250825", "name": "code_execution"},
        # Your tools with defer_loading, allowed_callers, and input_examples
    ],
    messages=[{"role": "user",
               "content": "Which team members exceeded their Q3 travel budget?"}],
)
```

For detailed API documentation and SDK examples, see our:

- Documentation and cookbook for Tool Search Tool
- Documentation and cookbook for Programmatic Tool Calling
- Documentation for Tool Use Examples

These features move tool use from simple function calling toward intelligent orchestration. As agents tackle more complex workflows spanning dozens of tools and large datasets, dynamic discovery, efficient execution, and reliable invocation become foundational.

We're excited to see what you build.

Acknowledgements

Written by Bin Wu, with contributions from Adam Jones, Artur Renault, Henry Tay, Jake Noble, Nathan McCandlish, Noah Picard, Sam Jiang, and the Claude Developer Platform team. This work builds on foundational research by Chris Gorgolewski, Daniel Jiang, Jeremy Fox, and Mike Lambert. We also drew inspiration from across the AI ecosystem, including Joel Pobar's LLMVM, Cloudflare's Code Mode, and Code Execution as MCP. Special thanks to Andy Schumeister, Hamish Kerr, Keir Bradwell, Matt Bleifer, and Molly Vorwerck for their support.
@@ -1,2 +0,0 @@
- export { OpenAIAdapter } from './openai';
- export { VercelAIAdapter } from './vercel-ai';