@fairyhunter13/ai-sdk 6.0.116-fork.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (534)
  1. package/CHANGELOG.md +7582 -0
  2. package/README.md +238 -0
  3. package/dist/index.d.mts +6751 -0
  4. package/dist/index.d.ts +6751 -0
  5. package/dist/index.js +14155 -0
  6. package/dist/index.js.map +1 -0
  7. package/dist/index.mjs +14127 -0
  8. package/dist/index.mjs.map +1 -0
  9. package/dist/internal/index.d.mts +324 -0
  10. package/dist/internal/index.d.ts +324 -0
  11. package/dist/internal/index.js +1352 -0
  12. package/dist/internal/index.js.map +1 -0
  13. package/dist/internal/index.mjs +1336 -0
  14. package/dist/internal/index.mjs.map +1 -0
  15. package/dist/test/index.d.mts +265 -0
  16. package/dist/test/index.d.ts +265 -0
  17. package/dist/test/index.js +509 -0
  18. package/dist/test/index.js.map +1 -0
  19. package/dist/test/index.mjs +472 -0
  20. package/dist/test/index.mjs.map +1 -0
  21. package/docs/00-introduction/index.mdx +76 -0
  22. package/docs/02-foundations/01-overview.mdx +43 -0
  23. package/docs/02-foundations/02-providers-and-models.mdx +158 -0
  24. package/docs/02-foundations/03-prompts.mdx +616 -0
  25. package/docs/02-foundations/04-tools.mdx +251 -0
  26. package/docs/02-foundations/05-streaming.mdx +62 -0
  27. package/docs/02-foundations/06-provider-options.mdx +345 -0
  28. package/docs/02-foundations/index.mdx +49 -0
  29. package/docs/02-getting-started/00-choosing-a-provider.mdx +110 -0
  30. package/docs/02-getting-started/01-navigating-the-library.mdx +85 -0
  31. package/docs/02-getting-started/02-nextjs-app-router.mdx +559 -0
  32. package/docs/02-getting-started/03-nextjs-pages-router.mdx +542 -0
  33. package/docs/02-getting-started/04-svelte.mdx +627 -0
  34. package/docs/02-getting-started/05-nuxt.mdx +566 -0
  35. package/docs/02-getting-started/06-nodejs.mdx +512 -0
  36. package/docs/02-getting-started/07-expo.mdx +766 -0
  37. package/docs/02-getting-started/08-tanstack-start.mdx +583 -0
  38. package/docs/02-getting-started/09-coding-agents.mdx +179 -0
  39. package/docs/02-getting-started/index.mdx +44 -0
  40. package/docs/03-agents/01-overview.mdx +96 -0
  41. package/docs/03-agents/02-building-agents.mdx +449 -0
  42. package/docs/03-agents/03-workflows.mdx +386 -0
  43. package/docs/03-agents/04-loop-control.mdx +394 -0
  44. package/docs/03-agents/05-configuring-call-options.mdx +286 -0
  45. package/docs/03-agents/06-memory.mdx +222 -0
  46. package/docs/03-agents/06-subagents.mdx +362 -0
  47. package/docs/03-agents/index.mdx +46 -0
  48. package/docs/03-ai-sdk-core/01-overview.mdx +31 -0
  49. package/docs/03-ai-sdk-core/05-generating-text.mdx +707 -0
  50. package/docs/03-ai-sdk-core/10-generating-structured-data.mdx +498 -0
  51. package/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx +1144 -0
  52. package/docs/03-ai-sdk-core/16-mcp-tools.mdx +383 -0
  53. package/docs/03-ai-sdk-core/20-prompt-engineering.mdx +146 -0
  54. package/docs/03-ai-sdk-core/25-settings.mdx +216 -0
  55. package/docs/03-ai-sdk-core/26-reasoning.mdx +190 -0
  56. package/docs/03-ai-sdk-core/30-embeddings.mdx +246 -0
  57. package/docs/03-ai-sdk-core/31-reranking.mdx +218 -0
  58. package/docs/03-ai-sdk-core/35-image-generation.mdx +341 -0
  59. package/docs/03-ai-sdk-core/36-transcription.mdx +227 -0
  60. package/docs/03-ai-sdk-core/37-speech.mdx +169 -0
  61. package/docs/03-ai-sdk-core/38-video-generation.mdx +366 -0
  62. package/docs/03-ai-sdk-core/40-middleware.mdx +485 -0
  63. package/docs/03-ai-sdk-core/45-provider-management.mdx +349 -0
  64. package/docs/03-ai-sdk-core/50-error-handling.mdx +149 -0
  65. package/docs/03-ai-sdk-core/55-testing.mdx +219 -0
  66. package/docs/03-ai-sdk-core/60-telemetry.mdx +391 -0
  67. package/docs/03-ai-sdk-core/65-devtools.mdx +107 -0
  68. package/docs/03-ai-sdk-core/65-event-listeners.mdx +1118 -0
  69. package/docs/03-ai-sdk-core/index.mdx +99 -0
  70. package/docs/04-ai-sdk-ui/01-overview.mdx +44 -0
  71. package/docs/04-ai-sdk-ui/02-chatbot.mdx +1320 -0
  72. package/docs/04-ai-sdk-ui/03-chatbot-message-persistence.mdx +535 -0
  73. package/docs/04-ai-sdk-ui/03-chatbot-resume-streams.mdx +263 -0
  74. package/docs/04-ai-sdk-ui/03-chatbot-tool-usage.mdx +682 -0
  75. package/docs/04-ai-sdk-ui/04-generative-user-interfaces.mdx +389 -0
  76. package/docs/04-ai-sdk-ui/05-completion.mdx +181 -0
  77. package/docs/04-ai-sdk-ui/08-object-generation.mdx +344 -0
  78. package/docs/04-ai-sdk-ui/20-streaming-data.mdx +397 -0
  79. package/docs/04-ai-sdk-ui/21-error-handling.mdx +190 -0
  80. package/docs/04-ai-sdk-ui/21-transport.mdx +174 -0
  81. package/docs/04-ai-sdk-ui/24-reading-ui-message-streams.mdx +104 -0
  82. package/docs/04-ai-sdk-ui/25-message-metadata.mdx +152 -0
  83. package/docs/04-ai-sdk-ui/50-stream-protocol.mdx +503 -0
  84. package/docs/04-ai-sdk-ui/index.mdx +64 -0
  85. package/docs/05-ai-sdk-rsc/01-overview.mdx +45 -0
  86. package/docs/05-ai-sdk-rsc/02-streaming-react-components.mdx +209 -0
  87. package/docs/05-ai-sdk-rsc/03-generative-ui-state.mdx +279 -0
  88. package/docs/05-ai-sdk-rsc/03-saving-and-restoring-states.mdx +105 -0
  89. package/docs/05-ai-sdk-rsc/04-multistep-interfaces.mdx +282 -0
  90. package/docs/05-ai-sdk-rsc/05-streaming-values.mdx +157 -0
  91. package/docs/05-ai-sdk-rsc/06-loading-state.mdx +273 -0
  92. package/docs/05-ai-sdk-rsc/08-error-handling.mdx +94 -0
  93. package/docs/05-ai-sdk-rsc/09-authentication.mdx +42 -0
  94. package/docs/05-ai-sdk-rsc/10-migrating-to-ui.mdx +722 -0
  95. package/docs/05-ai-sdk-rsc/index.mdx +63 -0
  96. package/docs/06-advanced/01-prompt-engineering.mdx +96 -0
  97. package/docs/06-advanced/02-stopping-streams.mdx +184 -0
  98. package/docs/06-advanced/03-backpressure.mdx +173 -0
  99. package/docs/06-advanced/04-caching.mdx +169 -0
  100. package/docs/06-advanced/05-multiple-streamables.mdx +68 -0
  101. package/docs/06-advanced/06-rate-limiting.mdx +60 -0
  102. package/docs/06-advanced/07-rendering-ui-with-language-models.mdx +225 -0
  103. package/docs/06-advanced/08-model-as-router.mdx +120 -0
  104. package/docs/06-advanced/09-multistep-interfaces.mdx +115 -0
  105. package/docs/06-advanced/09-sequential-generations.mdx +55 -0
  106. package/docs/06-advanced/10-vercel-deployment-guide.mdx +117 -0
  107. package/docs/06-advanced/index.mdx +11 -0
  108. package/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx +2785 -0
  109. package/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx +3752 -0
  110. package/docs/07-reference/01-ai-sdk-core/05-embed.mdx +332 -0
  111. package/docs/07-reference/01-ai-sdk-core/06-embed-many.mdx +330 -0
  112. package/docs/07-reference/01-ai-sdk-core/06-rerank.mdx +309 -0
  113. package/docs/07-reference/01-ai-sdk-core/10-generate-image.mdx +251 -0
  114. package/docs/07-reference/01-ai-sdk-core/11-transcribe.mdx +152 -0
  115. package/docs/07-reference/01-ai-sdk-core/12-generate-speech.mdx +221 -0
  116. package/docs/07-reference/01-ai-sdk-core/13-generate-video.mdx +264 -0
  117. package/docs/07-reference/01-ai-sdk-core/15-agent.mdx +235 -0
  118. package/docs/07-reference/01-ai-sdk-core/16-tool-loop-agent.mdx +973 -0
  119. package/docs/07-reference/01-ai-sdk-core/17-create-agent-ui-stream.mdx +154 -0
  120. package/docs/07-reference/01-ai-sdk-core/18-create-agent-ui-stream-response.mdx +173 -0
  121. package/docs/07-reference/01-ai-sdk-core/18-pipe-agent-ui-stream-to-response.mdx +150 -0
  122. package/docs/07-reference/01-ai-sdk-core/20-tool.mdx +209 -0
  123. package/docs/07-reference/01-ai-sdk-core/22-dynamic-tool.mdx +223 -0
  124. package/docs/07-reference/01-ai-sdk-core/23-create-mcp-client.mdx +423 -0
  125. package/docs/07-reference/01-ai-sdk-core/24-mcp-stdio-transport.mdx +68 -0
  126. package/docs/07-reference/01-ai-sdk-core/25-json-schema.mdx +94 -0
  127. package/docs/07-reference/01-ai-sdk-core/26-zod-schema.mdx +109 -0
  128. package/docs/07-reference/01-ai-sdk-core/27-valibot-schema.mdx +58 -0
  129. package/docs/07-reference/01-ai-sdk-core/28-output.mdx +342 -0
  130. package/docs/07-reference/01-ai-sdk-core/30-model-message.mdx +435 -0
  131. package/docs/07-reference/01-ai-sdk-core/31-ui-message.mdx +264 -0
  132. package/docs/07-reference/01-ai-sdk-core/32-validate-ui-messages.mdx +101 -0
  133. package/docs/07-reference/01-ai-sdk-core/33-safe-validate-ui-messages.mdx +113 -0
  134. package/docs/07-reference/01-ai-sdk-core/40-provider-registry.mdx +198 -0
  135. package/docs/07-reference/01-ai-sdk-core/42-custom-provider.mdx +157 -0
  136. package/docs/07-reference/01-ai-sdk-core/50-cosine-similarity.mdx +52 -0
  137. package/docs/07-reference/01-ai-sdk-core/60-wrap-language-model.mdx +59 -0
  138. package/docs/07-reference/01-ai-sdk-core/61-wrap-image-model.mdx +64 -0
  139. package/docs/07-reference/01-ai-sdk-core/65-language-model-v2-middleware.mdx +74 -0
  140. package/docs/07-reference/01-ai-sdk-core/66-extract-reasoning-middleware.mdx +68 -0
  141. package/docs/07-reference/01-ai-sdk-core/67-simulate-streaming-middleware.mdx +71 -0
  142. package/docs/07-reference/01-ai-sdk-core/68-default-settings-middleware.mdx +80 -0
  143. package/docs/07-reference/01-ai-sdk-core/69-add-tool-input-examples-middleware.mdx +155 -0
  144. package/docs/07-reference/01-ai-sdk-core/70-extract-json-middleware.mdx +147 -0
  145. package/docs/07-reference/01-ai-sdk-core/70-step-count-is.mdx +84 -0
  146. package/docs/07-reference/01-ai-sdk-core/71-has-tool-call.mdx +120 -0
  147. package/docs/07-reference/01-ai-sdk-core/75-simulate-readable-stream.mdx +94 -0
  148. package/docs/07-reference/01-ai-sdk-core/80-smooth-stream.mdx +145 -0
  149. package/docs/07-reference/01-ai-sdk-core/90-generate-id.mdx +30 -0
  150. package/docs/07-reference/01-ai-sdk-core/91-create-id-generator.mdx +89 -0
  151. package/docs/07-reference/01-ai-sdk-core/92-default-generated-file.mdx +68 -0
  152. package/docs/07-reference/01-ai-sdk-core/index.mdx +160 -0
  153. package/docs/07-reference/02-ai-sdk-ui/01-use-chat.mdx +493 -0
  154. package/docs/07-reference/02-ai-sdk-ui/02-use-completion.mdx +185 -0
  155. package/docs/07-reference/02-ai-sdk-ui/03-use-object.mdx +196 -0
  156. package/docs/07-reference/02-ai-sdk-ui/31-convert-to-model-messages.mdx +231 -0
  157. package/docs/07-reference/02-ai-sdk-ui/32-prune-messages.mdx +108 -0
  158. package/docs/07-reference/02-ai-sdk-ui/40-create-ui-message-stream.mdx +162 -0
  159. package/docs/07-reference/02-ai-sdk-ui/41-create-ui-message-stream-response.mdx +119 -0
  160. package/docs/07-reference/02-ai-sdk-ui/42-pipe-ui-message-stream-to-response.mdx +77 -0
  161. package/docs/07-reference/02-ai-sdk-ui/43-read-ui-message-stream.mdx +57 -0
  162. package/docs/07-reference/02-ai-sdk-ui/46-infer-ui-tools.mdx +99 -0
  163. package/docs/07-reference/02-ai-sdk-ui/47-infer-ui-tool.mdx +75 -0
  164. package/docs/07-reference/02-ai-sdk-ui/50-direct-chat-transport.mdx +333 -0
  165. package/docs/07-reference/02-ai-sdk-ui/index.mdx +89 -0
  166. package/docs/07-reference/03-ai-sdk-rsc/01-stream-ui.mdx +767 -0
  167. package/docs/07-reference/03-ai-sdk-rsc/02-create-ai.mdx +90 -0
  168. package/docs/07-reference/03-ai-sdk-rsc/03-create-streamable-ui.mdx +91 -0
  169. package/docs/07-reference/03-ai-sdk-rsc/04-create-streamable-value.mdx +78 -0
  170. package/docs/07-reference/03-ai-sdk-rsc/05-read-streamable-value.mdx +79 -0
  171. package/docs/07-reference/03-ai-sdk-rsc/06-get-ai-state.mdx +50 -0
  172. package/docs/07-reference/03-ai-sdk-rsc/07-get-mutable-ai-state.mdx +70 -0
  173. package/docs/07-reference/03-ai-sdk-rsc/08-use-ai-state.mdx +26 -0
  174. package/docs/07-reference/03-ai-sdk-rsc/09-use-actions.mdx +42 -0
  175. package/docs/07-reference/03-ai-sdk-rsc/10-use-ui-state.mdx +35 -0
  176. package/docs/07-reference/03-ai-sdk-rsc/11-use-streamable-value.mdx +46 -0
  177. package/docs/07-reference/03-ai-sdk-rsc/20-render.mdx +266 -0
  178. package/docs/07-reference/03-ai-sdk-rsc/index.mdx +67 -0
  179. package/docs/07-reference/05-ai-sdk-errors/ai-api-call-error.mdx +31 -0
  180. package/docs/07-reference/05-ai-sdk-errors/ai-download-error.mdx +28 -0
  181. package/docs/07-reference/05-ai-sdk-errors/ai-empty-response-body-error.mdx +24 -0
  182. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-argument-error.mdx +26 -0
  183. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-data-content-error.mdx +26 -0
  184. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-message-role-error.mdx +25 -0
  185. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-prompt-error.mdx +47 -0
  186. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-response-data-error.mdx +25 -0
  187. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-approval-error.mdx +24 -0
  188. package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-input-error.mdx +27 -0
  189. package/docs/07-reference/05-ai-sdk-errors/ai-json-parse-error.mdx +25 -0
  190. package/docs/07-reference/05-ai-sdk-errors/ai-load-api-key-error.mdx +24 -0
  191. package/docs/07-reference/05-ai-sdk-errors/ai-load-setting-error.mdx +24 -0
  192. package/docs/07-reference/05-ai-sdk-errors/ai-message-conversion-error.mdx +25 -0
  193. package/docs/07-reference/05-ai-sdk-errors/ai-no-content-generated-error.mdx +24 -0
  194. package/docs/07-reference/05-ai-sdk-errors/ai-no-image-generated-error.mdx +36 -0
  195. package/docs/07-reference/05-ai-sdk-errors/ai-no-object-generated-error.mdx +43 -0
  196. package/docs/07-reference/05-ai-sdk-errors/ai-no-output-generated-error.mdx +25 -0
  197. package/docs/07-reference/05-ai-sdk-errors/ai-no-speech-generated-error.mdx +24 -0
  198. package/docs/07-reference/05-ai-sdk-errors/ai-no-such-model-error.mdx +26 -0
  199. package/docs/07-reference/05-ai-sdk-errors/ai-no-such-provider-error.mdx +28 -0
  200. package/docs/07-reference/05-ai-sdk-errors/ai-no-such-tool-error.mdx +26 -0
  201. package/docs/07-reference/05-ai-sdk-errors/ai-no-transcript-generated-error.mdx +24 -0
  202. package/docs/07-reference/05-ai-sdk-errors/ai-no-video-generated-error.mdx +39 -0
  203. package/docs/07-reference/05-ai-sdk-errors/ai-retry-error.mdx +27 -0
  204. package/docs/07-reference/05-ai-sdk-errors/ai-too-many-embedding-values-for-call-error.mdx +27 -0
  205. package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-not-found-for-approval-error.mdx +25 -0
  206. package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-repair-error.mdx +28 -0
  207. package/docs/07-reference/05-ai-sdk-errors/ai-type-validation-error.mdx +25 -0
  208. package/docs/07-reference/05-ai-sdk-errors/ai-ui-message-stream-error.mdx +67 -0
  209. package/docs/07-reference/05-ai-sdk-errors/ai-unsupported-functionality-error.mdx +25 -0
  210. package/docs/07-reference/05-ai-sdk-errors/index.mdx +39 -0
  211. package/docs/07-reference/index.mdx +28 -0
  212. package/docs/08-migration-guides/00-versioning.mdx +46 -0
  213. package/docs/08-migration-guides/23-migration-guide-7-0.mdx +95 -0
  214. package/docs/08-migration-guides/24-migration-guide-6-0.mdx +823 -0
  215. package/docs/08-migration-guides/25-migration-guide-5-0-data.mdx +882 -0
  216. package/docs/08-migration-guides/26-migration-guide-5-0.mdx +3427 -0
  217. package/docs/08-migration-guides/27-migration-guide-4-2.mdx +99 -0
  218. package/docs/08-migration-guides/28-migration-guide-4-1.mdx +14 -0
  219. package/docs/08-migration-guides/29-migration-guide-4-0.mdx +1157 -0
  220. package/docs/08-migration-guides/36-migration-guide-3-4.mdx +14 -0
  221. package/docs/08-migration-guides/37-migration-guide-3-3.mdx +64 -0
  222. package/docs/08-migration-guides/38-migration-guide-3-2.mdx +46 -0
  223. package/docs/08-migration-guides/39-migration-guide-3-1.mdx +168 -0
  224. package/docs/08-migration-guides/index.mdx +22 -0
  225. package/docs/09-troubleshooting/01-azure-stream-slow.mdx +33 -0
  226. package/docs/09-troubleshooting/03-server-actions-in-client-components.mdx +40 -0
  227. package/docs/09-troubleshooting/04-strange-stream-output.mdx +36 -0
  228. package/docs/09-troubleshooting/05-streamable-ui-errors.mdx +16 -0
  229. package/docs/09-troubleshooting/05-tool-invocation-missing-result.mdx +106 -0
  230. package/docs/09-troubleshooting/06-streaming-not-working-when-deployed.mdx +31 -0
  231. package/docs/09-troubleshooting/06-streaming-not-working-when-proxied.mdx +31 -0
  232. package/docs/09-troubleshooting/06-timeout-on-vercel.mdx +60 -0
  233. package/docs/09-troubleshooting/07-unclosed-streams.mdx +34 -0
  234. package/docs/09-troubleshooting/08-use-chat-failed-to-parse-stream.mdx +26 -0
  235. package/docs/09-troubleshooting/09-client-stream-error.mdx +25 -0
  236. package/docs/09-troubleshooting/10-use-chat-tools-no-response.mdx +32 -0
  237. package/docs/09-troubleshooting/11-use-chat-custom-request-options.mdx +149 -0
  238. package/docs/09-troubleshooting/12-typescript-performance-zod.mdx +46 -0
  239. package/docs/09-troubleshooting/12-use-chat-an-error-occurred.mdx +59 -0
  240. package/docs/09-troubleshooting/13-repeated-assistant-messages.mdx +73 -0
  241. package/docs/09-troubleshooting/14-stream-abort-handling.mdx +73 -0
  242. package/docs/09-troubleshooting/14-tool-calling-with-structured-outputs.mdx +48 -0
  243. package/docs/09-troubleshooting/15-abort-breaks-resumable-streams.mdx +55 -0
  244. package/docs/09-troubleshooting/15-stream-text-not-working.mdx +33 -0
  245. package/docs/09-troubleshooting/16-streaming-status-delay.mdx +63 -0
  246. package/docs/09-troubleshooting/17-use-chat-stale-body-data.mdx +141 -0
  247. package/docs/09-troubleshooting/18-ontoolcall-type-narrowing.mdx +66 -0
  248. package/docs/09-troubleshooting/19-unsupported-model-version.mdx +50 -0
  249. package/docs/09-troubleshooting/20-no-object-generated-content-filter.mdx +76 -0
  250. package/docs/09-troubleshooting/21-missing-tool-results-error.mdx +82 -0
  251. package/docs/09-troubleshooting/30-model-is-not-assignable-to-type.mdx +21 -0
  252. package/docs/09-troubleshooting/40-typescript-cannot-find-namespace-jsx.mdx +24 -0
  253. package/docs/09-troubleshooting/50-react-maximum-update-depth-exceeded.mdx +39 -0
  254. package/docs/09-troubleshooting/60-jest-cannot-find-module-ai-rsc.mdx +22 -0
  255. package/docs/09-troubleshooting/70-high-memory-usage-with-images.mdx +108 -0
  256. package/docs/09-troubleshooting/index.mdx +11 -0
  257. package/internal.d.ts +1 -0
  258. package/package.json +120 -0
  259. package/src/agent/agent.ts +156 -0
  260. package/src/agent/create-agent-ui-stream-response.ts +61 -0
  261. package/src/agent/create-agent-ui-stream.ts +84 -0
  262. package/src/agent/index.ts +37 -0
  263. package/src/agent/infer-agent-tools.ts +7 -0
  264. package/src/agent/infer-agent-ui-message.ts +11 -0
  265. package/src/agent/pipe-agent-ui-stream-to-response.ts +64 -0
  266. package/src/agent/tool-loop-agent-settings.ts +244 -0
  267. package/src/agent/tool-loop-agent.ts +205 -0
  268. package/src/embed/embed-events.ts +109 -0
  269. package/src/embed/embed-many-result.ts +53 -0
  270. package/src/embed/embed-many.ts +484 -0
  271. package/src/embed/embed-result.ts +50 -0
  272. package/src/embed/embed.ts +294 -0
  273. package/src/embed/index.ts +5 -0
  274. package/src/error/index.ts +37 -0
  275. package/src/error/invalid-argument-error.ts +34 -0
  276. package/src/error/invalid-stream-part-error.ts +28 -0
  277. package/src/error/invalid-tool-approval-error.ts +26 -0
  278. package/src/error/invalid-tool-input-error.ts +33 -0
  279. package/src/error/missing-tool-result-error.ts +28 -0
  280. package/src/error/no-image-generated-error.ts +39 -0
  281. package/src/error/no-object-generated-error.ts +70 -0
  282. package/src/error/no-output-generated-error.ts +26 -0
  283. package/src/error/no-speech-generated-error.ts +28 -0
  284. package/src/error/no-such-tool-error.ts +35 -0
  285. package/src/error/no-transcript-generated-error.ts +30 -0
  286. package/src/error/no-video-generated-error.ts +57 -0
  287. package/src/error/tool-call-not-found-for-approval-error.ts +32 -0
  288. package/src/error/tool-call-repair-error.ts +30 -0
  289. package/src/error/ui-message-stream-error.ts +48 -0
  290. package/src/error/unsupported-model-version-error.ts +23 -0
  291. package/src/error/verify-no-object-generated-error.ts +27 -0
  292. package/src/generate-image/generate-image-result.ts +42 -0
  293. package/src/generate-image/generate-image.ts +361 -0
  294. package/src/generate-image/index.ts +18 -0
  295. package/src/generate-object/generate-object-result.ts +67 -0
  296. package/src/generate-object/generate-object.ts +514 -0
  297. package/src/generate-object/index.ts +9 -0
  298. package/src/generate-object/inject-json-instruction.ts +30 -0
  299. package/src/generate-object/output-strategy.ts +415 -0
  300. package/src/generate-object/parse-and-validate-object-result.ts +111 -0
  301. package/src/generate-object/repair-text.ts +12 -0
  302. package/src/generate-object/stream-object-result.ts +120 -0
  303. package/src/generate-object/stream-object.ts +984 -0
  304. package/src/generate-object/validate-object-generation-input.ts +144 -0
  305. package/src/generate-speech/generate-speech-result.ts +30 -0
  306. package/src/generate-speech/generate-speech.ts +191 -0
  307. package/src/generate-speech/generated-audio-file.ts +65 -0
  308. package/src/generate-speech/index.ts +3 -0
  309. package/src/generate-text/collect-tool-approvals.ts +116 -0
  310. package/src/generate-text/content-part.ts +31 -0
  311. package/src/generate-text/core-events.ts +390 -0
  312. package/src/generate-text/create-execute-tools-transformation.ts +168 -0
  313. package/src/generate-text/create-stream-text-part-transform.ts +229 -0
  314. package/src/generate-text/execute-tool-call.ts +190 -0
  315. package/src/generate-text/extract-reasoning-content.ts +17 -0
  316. package/src/generate-text/extract-text-content.ts +15 -0
  317. package/src/generate-text/generate-text-result.ts +168 -0
  318. package/src/generate-text/generate-text.ts +1411 -0
  319. package/src/generate-text/generated-file.ts +70 -0
  320. package/src/generate-text/index.ts +74 -0
  321. package/src/generate-text/is-approval-needed.ts +29 -0
  322. package/src/generate-text/output-utils.ts +23 -0
  323. package/src/generate-text/output.ts +590 -0
  324. package/src/generate-text/parse-tool-call.ts +188 -0
  325. package/src/generate-text/prepare-step.ts +103 -0
  326. package/src/generate-text/prune-messages.ts +167 -0
  327. package/src/generate-text/reasoning-output.ts +99 -0
  328. package/src/generate-text/reasoning.ts +10 -0
  329. package/src/generate-text/response-message.ts +10 -0
  330. package/src/generate-text/smooth-stream.ts +162 -0
  331. package/src/generate-text/step-result.ts +310 -0
  332. package/src/generate-text/stop-condition.ts +29 -0
  333. package/src/generate-text/stream-text-result.ts +536 -0
  334. package/src/generate-text/stream-text.ts +2693 -0
  335. package/src/generate-text/to-response-messages.ts +178 -0
  336. package/src/generate-text/tool-approval-request-output.ts +21 -0
  337. package/src/generate-text/tool-call-repair-function.ts +27 -0
  338. package/src/generate-text/tool-call.ts +47 -0
  339. package/src/generate-text/tool-error.ts +34 -0
  340. package/src/generate-text/tool-output-denied.ts +21 -0
  341. package/src/generate-text/tool-output.ts +7 -0
  342. package/src/generate-text/tool-result.ts +36 -0
  343. package/src/generate-text/tool-set.ts +14 -0
  344. package/src/generate-video/generate-video-result.ts +36 -0
  345. package/src/generate-video/generate-video.ts +402 -0
  346. package/src/generate-video/index.ts +3 -0
  347. package/src/global.ts +36 -0
  348. package/src/index.ts +49 -0
  349. package/src/logger/index.ts +6 -0
  350. package/src/logger/log-warnings.ts +140 -0
  351. package/src/middleware/add-tool-input-examples-middleware.ts +90 -0
  352. package/src/middleware/default-embedding-settings-middleware.ts +22 -0
  353. package/src/middleware/default-settings-middleware.ts +33 -0
  354. package/src/middleware/extract-json-middleware.ts +197 -0
  355. package/src/middleware/extract-reasoning-middleware.ts +249 -0
  356. package/src/middleware/index.ts +10 -0
  357. package/src/middleware/simulate-streaming-middleware.ts +79 -0
  358. package/src/middleware/wrap-embedding-model.ts +89 -0
  359. package/src/middleware/wrap-image-model.ts +92 -0
  360. package/src/middleware/wrap-language-model.ts +108 -0
  361. package/src/middleware/wrap-provider.ts +51 -0
  362. package/src/model/as-embedding-model-v3.ts +24 -0
  363. package/src/model/as-embedding-model-v4.ts +25 -0
  364. package/src/model/as-image-model-v3.ts +24 -0
  365. package/src/model/as-image-model-v4.ts +21 -0
  366. package/src/model/as-language-model-v3.ts +103 -0
  367. package/src/model/as-language-model-v4.ts +25 -0
  368. package/src/model/as-provider-v3.ts +36 -0
  369. package/src/model/as-provider-v4.ts +47 -0
  370. package/src/model/as-reranking-model-v4.ts +16 -0
  371. package/src/model/as-speech-model-v3.ts +24 -0
  372. package/src/model/as-speech-model-v4.ts +21 -0
  373. package/src/model/as-transcription-model-v3.ts +24 -0
  374. package/src/model/as-transcription-model-v4.ts +25 -0
  375. package/src/model/as-video-model-v4.ts +19 -0
  376. package/src/model/resolve-model.ts +172 -0
  377. package/src/prompt/call-settings.ts +177 -0
  378. package/src/prompt/content-part.ts +236 -0
  379. package/src/prompt/convert-to-language-model-prompt.ts +548 -0
  380. package/src/prompt/create-tool-model-output.ts +34 -0
  381. package/src/prompt/data-content.ts +134 -0
  382. package/src/prompt/index.ts +27 -0
  383. package/src/prompt/invalid-data-content-error.ts +29 -0
  384. package/src/prompt/invalid-message-role-error.ts +27 -0
  385. package/src/prompt/message-conversion-error.ts +28 -0
  386. package/src/prompt/message.ts +72 -0
  387. package/src/prompt/prepare-call-settings.ts +110 -0
  388. package/src/prompt/prepare-tools-and-tool-choice.ts +86 -0
  389. package/src/prompt/prompt.ts +43 -0
  390. package/src/prompt/split-data-url.ts +17 -0
  391. package/src/prompt/standardize-prompt.ts +99 -0
  392. package/src/prompt/wrap-gateway-error.ts +29 -0
  393. package/src/registry/custom-provider.ts +210 -0
  394. package/src/registry/index.ts +7 -0
  395. package/src/registry/no-such-provider-error.ts +41 -0
  396. package/src/registry/provider-registry.ts +331 -0
  397. package/src/rerank/index.ts +2 -0
  398. package/src/rerank/rerank-result.ts +70 -0
  399. package/src/rerank/rerank.ts +239 -0
  400. package/src/telemetry/assemble-operation-name.ts +21 -0
  401. package/src/telemetry/get-base-telemetry-attributes.ts +55 -0
  402. package/src/telemetry/get-global-telemetry-integration.ts +110 -0
  403. package/src/telemetry/get-tracer.ts +20 -0
  404. package/src/telemetry/index.ts +4 -0
  405. package/src/telemetry/noop-tracer.ts +69 -0
  406. package/src/telemetry/open-telemetry-integration.ts +537 -0
  407. package/src/telemetry/record-span.ts +75 -0
  408. package/src/telemetry/select-telemetry-attributes.ts +78 -0
  409. package/src/telemetry/stringify-for-telemetry.ts +33 -0
  410. package/src/telemetry/telemetry-integration-registry.ts +22 -0
  411. package/src/telemetry/telemetry-integration.ts +100 -0
  412. package/src/telemetry/telemetry-settings.ts +55 -0
  413. package/src/test/mock-embedding-model-v2.ts +35 -0
  414. package/src/test/mock-embedding-model-v3.ts +48 -0
  415. package/src/test/mock-embedding-model-v4.ts +48 -0
  416. package/src/test/mock-image-model-v2.ts +28 -0
  417. package/src/test/mock-image-model-v3.ts +28 -0
  418. package/src/test/mock-image-model-v4.ts +28 -0
  419. package/src/test/mock-language-model-v2.ts +72 -0
  420. package/src/test/mock-language-model-v3.ts +77 -0
  421. package/src/test/mock-language-model-v4.ts +77 -0
  422. package/src/test/mock-provider-v2.ts +68 -0
  423. package/src/test/mock-provider-v3.ts +80 -0
  424. package/src/test/mock-provider-v4.ts +80 -0
  425. package/src/test/mock-reranking-model-v3.ts +25 -0
  426. package/src/test/mock-reranking-model-v4.ts +25 -0
  427. package/src/test/mock-server-response.ts +69 -0
  428. package/src/test/mock-speech-model-v2.ts +24 -0
  429. package/src/test/mock-speech-model-v3.ts +24 -0
  430. package/src/test/mock-speech-model-v4.ts +24 -0
  431. package/src/test/mock-tracer.ts +156 -0
  432. package/src/test/mock-transcription-model-v2.ts +24 -0
  433. package/src/test/mock-transcription-model-v3.ts +24 -0
  434. package/src/test/mock-transcription-model-v4.ts +24 -0
  435. package/src/test/mock-values.ts +4 -0
  436. package/src/test/mock-video-model-v3.ts +28 -0
  437. package/src/test/mock-video-model-v4.ts +28 -0
  438. package/src/test/not-implemented.ts +3 -0
  439. package/src/text-stream/create-text-stream-response.ts +30 -0
  440. package/src/text-stream/index.ts +2 -0
  441. package/src/text-stream/pipe-text-stream-to-response.ts +38 -0
  442. package/src/transcribe/index.ts +2 -0
  443. package/src/transcribe/transcribe-result.ts +60 -0
  444. package/src/transcribe/transcribe.ts +187 -0
  445. package/src/types/embedding-model-middleware.ts +15 -0
  446. package/src/types/embedding-model.ts +20 -0
  447. package/src/types/image-model-middleware.ts +15 -0
  448. package/src/types/image-model-response-metadata.ts +16 -0
  449. package/src/types/image-model.ts +19 -0
  450. package/src/types/index.ts +29 -0
  451. package/src/types/json-value.ts +15 -0
  452. package/src/types/language-model-middleware.ts +15 -0
  453. package/src/types/language-model-request-metadata.ts +6 -0
  454. package/src/types/language-model-response-metadata.ts +21 -0
  455. package/src/types/language-model.ts +106 -0
  456. package/src/types/provider-metadata.ts +16 -0
  457. package/src/types/provider.ts +55 -0
  458. package/src/types/reranking-model.ts +6 -0
  459. package/src/types/speech-model-response-metadata.ts +21 -0
  460. package/src/types/speech-model.ts +10 -0
  461. package/src/types/transcription-model-response-metadata.ts +16 -0
  462. package/src/types/transcription-model.ts +14 -0
  463. package/src/types/usage.ts +200 -0
  464. package/src/types/video-model-response-metadata.ts +28 -0
  465. package/src/types/video-model.ts +15 -0
  466. package/src/types/warning.ts +7 -0
  467. package/src/ui/call-completion-api.ts +157 -0
  468. package/src/ui/chat-transport.ts +83 -0
  469. package/src/ui/chat.ts +786 -0
  470. package/src/ui/convert-file-list-to-file-ui-parts.ts +36 -0
  471. package/src/ui/convert-to-model-messages.ts +403 -0
  472. package/src/ui/default-chat-transport.ts +36 -0
  473. package/src/ui/direct-chat-transport.ts +117 -0
  474. package/src/ui/http-chat-transport.ts +273 -0
  475. package/src/ui/index.ts +76 -0
  476. package/src/ui/last-assistant-message-is-complete-with-approval-responses.ts +44 -0
  477. package/src/ui/last-assistant-message-is-complete-with-tool-calls.ts +39 -0
  478. package/src/ui/process-text-stream.ts +16 -0
  479. package/src/ui/process-ui-message-stream.ts +858 -0
  480. package/src/ui/text-stream-chat-transport.ts +23 -0
  481. package/src/ui/transform-text-to-ui-message-stream.ts +27 -0
  482. package/src/ui/ui-messages.ts +602 -0
  483. package/src/ui/use-completion.ts +84 -0
  484. package/src/ui/validate-ui-messages.ts +521 -0
  485. package/src/ui-message-stream/create-ui-message-stream-response.ts +44 -0
  486. package/src/ui-message-stream/create-ui-message-stream.ts +145 -0
  487. package/src/ui-message-stream/get-response-ui-message-id.ts +35 -0
  488. package/src/ui-message-stream/handle-ui-message-stream-finish.ts +170 -0
  489. package/src/ui-message-stream/index.ts +14 -0
  490. package/src/ui-message-stream/json-to-sse-transform-stream.ts +17 -0
  491. package/src/ui-message-stream/pipe-ui-message-stream-to-response.ts +51 -0
  492. package/src/ui-message-stream/read-ui-message-stream.ts +87 -0
  493. package/src/ui-message-stream/ui-message-chunks.ts +372 -0
  494. package/src/ui-message-stream/ui-message-stream-headers.ts +7 -0
  495. package/src/ui-message-stream/ui-message-stream-on-finish-callback.ts +32 -0
  496. package/src/ui-message-stream/ui-message-stream-on-step-finish-callback.ts +25 -0
  497. package/src/ui-message-stream/ui-message-stream-response-init.ts +14 -0
  498. package/src/ui-message-stream/ui-message-stream-writer.ts +24 -0
  499. package/src/util/as-array.ts +3 -0
  500. package/src/util/async-iterable-stream.ts +94 -0
  501. package/src/util/consume-stream.ts +31 -0
  502. package/src/util/cosine-similarity.ts +46 -0
  503. package/src/util/create-resolvable-promise.ts +30 -0
  504. package/src/util/create-stitchable-stream.ts +112 -0
  505. package/src/util/data-url.ts +17 -0
  506. package/src/util/deep-partial.ts +84 -0
  507. package/src/util/detect-media-type.ts +226 -0
  508. package/src/util/download/create-download.ts +13 -0
  509. package/src/util/download/download-function.ts +45 -0
  510. package/src/util/download/download.ts +74 -0
  511. package/src/util/error-handler.ts +1 -0
  512. package/src/util/fix-json.ts +401 -0
  513. package/src/util/get-potential-start-index.ts +39 -0
  514. package/src/util/index.ts +12 -0
  515. package/src/util/is-deep-equal-data.ts +48 -0
  516. package/src/util/is-non-empty-object.ts +5 -0
  517. package/src/util/job.ts +1 -0
  518. package/src/util/log-v2-compatibility-warning.ts +21 -0
  519. package/src/util/merge-abort-signals.ts +43 -0
  520. package/src/util/merge-objects.ts +79 -0
  521. package/src/util/notify.ts +22 -0
  522. package/src/util/now.ts +4 -0
  523. package/src/util/parse-partial-json.ts +30 -0
  524. package/src/util/prepare-headers.ts +14 -0
  525. package/src/util/prepare-retries.ts +47 -0
  526. package/src/util/retry-error.ts +41 -0
  527. package/src/util/retry-with-exponential-backoff.ts +154 -0
  528. package/src/util/serial-job-executor.ts +36 -0
  529. package/src/util/simulate-readable-stream.ts +39 -0
  530. package/src/util/split-array.ts +20 -0
  531. package/src/util/value-of.ts +65 -0
  532. package/src/util/write-to-server-response.ts +49 -0
  533. package/src/version.ts +5 -0
  534. package/test.d.ts +1 -0
@@ -0,0 +1,1144 @@
1
+ ---
2
+ title: Tool Calling
3
+ description: Learn about tool calling and multi-step calls (using stopWhen) with AI SDK Core.
4
+ ---
5
+
6
+ # Tool Calling
7
+
8
+ As covered under Foundations, [tools](/docs/foundations/tools) are objects that can be called by the model to perform a specific task.
9
+ AI SDK Core tools contain several core elements:
10
+
11
+ - **`description`**: An optional description of the tool that can influence when the tool is picked.
12
+ - **`inputSchema`**: A [Zod schema](/docs/foundations/tools#schemas) or a [JSON schema](/docs/reference/ai-sdk-core/json-schema) that defines the input parameters. The schema is consumed by the LLM, and also used to validate the LLM tool calls.
13
+ - **`execute`**: An optional async function that is called with the inputs from the tool call. It produces a value of type `RESULT` (generic type). It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process.
14
+ - **`strict`**: _(optional, boolean)_ Enables strict tool calling when supported by the provider.
15
+
16
+ <Note className="mb-2">
17
+ You can use the [`tool`](/docs/reference/ai-sdk-core/tool) helper function to
18
+ infer the types of the `execute` parameters.
19
+ </Note>
20
+
21
+ The `tools` parameter of `generateText` and `streamText` is an object that has the tool names as keys and the tools as values:
22
+
23
+ ```ts highlight="6-17"
24
+ import { z } from 'zod';
25
+ import { generateText, tool, stepCountIs } from 'ai';
26
+ __PROVIDER_IMPORT__;
27
+
28
+ const result = await generateText({
29
+ model: __MODEL__,
30
+ tools: {
31
+ weather: tool({
32
+ description: 'Get the weather in a location',
33
+ inputSchema: z.object({
34
+ location: z.string().describe('The location to get the weather for'),
35
+ }),
36
+ execute: async ({ location }) => ({
37
+ location,
38
+ temperature: 72 + Math.floor(Math.random() * 21) - 10,
39
+ }),
40
+ }),
41
+ },
42
+ stopWhen: stepCountIs(5),
43
+ prompt: 'What is the weather in San Francisco?',
44
+ });
45
+ ```
46
+
47
+ <Note>
48
+ When a model uses a tool, it is called a "tool call" and the output of the
49
+ tool is called a "tool result".
50
+ </Note>
51
+
52
+ Tool calling is not restricted to text generation.
53
+ You can also use it to render user interfaces (Generative UI).
54
+
55
+ ## Strict Mode
56
+
57
+ When enabled, language model providers that support strict tool calling will only generate tool calls that are valid according to your defined `inputSchema`.
58
+ This increases the reliability of tool calling.
59
+ However, not all schemas may be supported in strict mode, and what is supported depends on the specific provider.
60
+
61
+ By default, strict mode is disabled. You can enable it per-tool by setting `strict: true`:
62
+
63
+ ```ts
64
+ tool({
65
+ description: 'Get the weather in a location',
66
+ inputSchema: z.object({
67
+ location: z.string(),
68
+ }),
69
+ strict: true, // Enable strict validation for this tool
70
+ execute: async ({ location }) => ({
71
+ // ...
72
+ }),
73
+ });
74
+ ```
75
+
76
+ <Note>
77
+ Not all providers or models support strict mode. For those that do not, this
78
+ option is ignored.
79
+ </Note>
80
+
81
+ ## Input Examples
82
+
83
+ You can specify example inputs for your tools to help guide the model on how input data should be structured.
84
+ When supported by providers, input examples can help when the JSON schema itself does not fully specify the intended
85
+ usage or when there are optional values.
86
+
87
+ ```ts
88
+ tool({
89
+ description: 'Get the weather in a location',
90
+ inputSchema: z.object({
91
+ location: z.string().describe('The location to get the weather for'),
92
+ }),
93
+ inputExamples: [
94
+ { input: { location: 'San Francisco' } },
95
+ { input: { location: 'London' } },
96
+ ],
97
+ execute: async ({ location }) => {
98
+ // ...
99
+ },
100
+ });
101
+ ```
102
+
103
+ <Note>
104
+   Only the Anthropic provider supports tool input examples natively. Other
105
+ providers ignore the setting.
106
+ </Note>
107
+
108
+ ## Tool Execution Approval
109
+
110
+ By default, tools with an `execute` function run automatically as the model calls them. You can require approval before execution by setting `needsApproval`:
111
+
112
+ ```ts highlight="13"
113
+ import { tool } from 'ai';
114
+ import { z } from 'zod';
115
+
116
+ const runCommand = tool({
117
+ description: 'Run a shell command',
118
+ inputSchema: z.object({
119
+ command: z.string().describe('The shell command to execute'),
120
+ }),
121
+ needsApproval: true,
122
+ execute: async ({ command }) => {
123
+ // your command execution logic here
124
+ },
125
+ });
126
+ ```
127
+
128
+ This is useful for tools that perform sensitive operations like executing commands, processing payments, modifying data, and other potentially dangerous actions.
129
+
130
+ ### How It Works
131
+
132
+ When a tool requires approval, `generateText` and `streamText` don't pause execution. Instead, they complete and return `tool-approval-request` parts in the result content. This means the approval flow requires two calls to the model: the first returns the approval request, and the second (after receiving the approval response) either executes the tool or informs the model that approval was denied.
133
+
134
+ Here's the complete flow:
135
+
136
+ 1. Call `generateText` with a tool that has `needsApproval: true`
137
+ 2. Model generates a tool call
138
+ 3. `generateText` returns with `tool-approval-request` parts in `result.content`
139
+ 4. Your app requests an approval and collects the user's decision
140
+ 5. Add a `tool-approval-response` to the messages array
141
+ 6. Call `generateText` again with the updated messages
142
+ 7. If approved, the tool runs and returns a result. If denied, the model sees the denial and responds accordingly.
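Steps 4–5 can be sketched as a pure helper that turns approval-request parts into approval responses using a decision callback. The part shapes are simplified and `answerApprovals` is an illustrative name, not an SDK export:

```typescript
// Simplified shapes for illustration; the real parts come from result.content.
type ApprovalRequest = {
  type: 'tool-approval-request';
  approvalId: string;
  toolCall: { toolName: string };
};
type ApprovalResponse = {
  type: 'tool-approval-response';
  approvalId: string;
  approved: boolean;
};

// Map every approval request in the content to a response,
// delegating the approve/deny decision to a callback.
function answerApprovals(
  content: Array<ApprovalRequest | { type: string }>,
  decide: (request: ApprovalRequest) => boolean,
): ApprovalResponse[] {
  return content
    .filter((part): part is ApprovalRequest => part.type === 'tool-approval-request')
    .map(part => ({
      type: 'tool-approval-response',
      approvalId: part.approvalId,
      approved: decide(part),
    }));
}

const responses = answerApprovals(
  [
    { type: 'text' },
    { type: 'tool-approval-request', approvalId: 'a1', toolCall: { toolName: 'runCommand' } },
  ],
  request => request.toolCall.toolName !== 'runCommand', // deny shell commands in this sketch
);

console.log(responses); // one denial echoing back approvalId 'a1'
```

In a real app, `decide` would be replaced by the user's confirmation from your UI.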
143
+
144
+ ### Handling Approval Requests
145
+
146
+ After calling `generateText` or `streamText`, check `result.content` for `tool-approval-request` parts:
147
+
148
+ ```ts
149
+ import { type ModelMessage, generateText } from 'ai';
150
+
151
+ const messages: ModelMessage[] = [
152
+ { role: 'user', content: 'Remove the most recent file' },
153
+ ];
154
+ const result = await generateText({
155
+ model: __MODEL__,
156
+ tools: { runCommand },
157
+ messages,
158
+ });
159
+
160
+ messages.push(...result.response.messages);
161
+
162
+ for (const part of result.content) {
163
+ if (part.type === 'tool-approval-request') {
164
+ console.log(part.approvalId); // Unique ID for this approval request
165
+ console.log(part.toolCall); // Contains toolName, input, etc.
166
+ }
167
+ }
168
+ ```
169
+
170
+ To respond, create a `tool-approval-response` and add it to your messages:
171
+
172
+ ```ts
173
+ import { type ToolApprovalResponse } from 'ai';
174
+
175
+ const approvals: ToolApprovalResponse[] = [];
176
+
177
+ for (const part of result.content) {
178
+ if (part.type === 'tool-approval-request') {
179
+ const response: ToolApprovalResponse = {
180
+ type: 'tool-approval-response',
181
+ approvalId: part.approvalId,
182
+ approved: true, // or false to deny
183
+ reason: 'User confirmed the command', // Optional context for the model
184
+ };
185
+ approvals.push(response);
186
+ }
187
+ }
188
+
189
+ // add approvals to messages
190
+ messages.push({ role: 'tool', content: approvals });
191
+ ```
192
+
193
+ Then call `generateText` again with the updated messages. If approved, the tool executes. If denied, the model receives the denial and can respond accordingly.
194
+
195
+ <Note>
196
+ When a tool execution is denied, consider adding a system instruction like
197
+ "When a tool execution is not approved, do not retry it" to prevent the model
198
+ from attempting the same call again.
199
+ </Note>
200
+
201
+ ### Dynamic Approval
202
+
203
+ You can make approval decisions based on tool input by providing an async function:
204
+
205
+ ```ts
206
+ const paymentTool = tool({
207
+ description: 'Process a payment',
208
+ inputSchema: z.object({
209
+ amount: z.number(),
210
+ recipient: z.string(),
211
+ }),
212
+ needsApproval: async ({ amount }) => amount > 1000,
213
+ execute: async ({ amount, recipient }) => {
214
+ return await processPayment(amount, recipient);
215
+ },
216
+ });
217
+ ```
218
+
219
+ In this example, only transactions over $1000 require approval. Smaller transactions execute automatically.
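The predicate can also combine multiple signals from the input. A rough standalone sketch, where the recipient allowlist and its entries are purely illustrative:

```typescript
// Illustrative only: require approval for large amounts OR unknown recipients.
const trustedRecipients = new Set(['payroll', 'acme-corp']);

const needsApproval = async ({
  amount,
  recipient,
}: {
  amount: number;
  recipient: string;
}) => amount > 1000 || !trustedRecipients.has(recipient);

const smallTrusted = await needsApproval({ amount: 50, recipient: 'payroll' });
const largePayment = await needsApproval({ amount: 5000, recipient: 'payroll' });

console.log(smallTrusted); // false — executes automatically
console.log(largePayment); // true — requires approval
```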
220
+
221
+ ### Tool Execution Approval with useChat
222
+
223
+ When using `useChat`, the approval flow is handled through UI state. See [Chatbot Tool Usage](/docs/ai-sdk-ui/chatbot-tool-usage#tool-execution-approval) for details on handling approvals in your UI with `addToolApprovalResponse`.
224
+
225
+ ## Multi-Step Calls (using stopWhen)
226
+
227
+ With the `stopWhen` setting, you can enable multi-step calls in `generateText` and `streamText`. When `stopWhen` is set and the model generates a tool call, the AI SDK will trigger a new generation passing in the tool result until there are no further tool calls or the stopping condition is met.
228
+
229
+ <Note>
230
+ The `stopWhen` conditions are only evaluated when the last step contains tool
231
+ results.
232
+ </Note>
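Conceptually, a stop condition is a predicate over the steps executed so far. This sketch mirrors the idea behind `stepCountIs`; it is a conceptual model, not the SDK's implementation:

```typescript
// Conceptual model only: a stop condition receives the steps so far
// and returns true when the loop should stop.
type Step = { toolCalls: unknown[] };
type StopCondition = (ctx: { steps: Step[] }) => boolean;

const stepCountIs =
  (count: number): StopCondition =>
  ({ steps }) =>
    steps.length >= count;

const stopAfterFive = stepCountIs(5);

console.log(stopAfterFive({ steps: Array(4).fill({ toolCalls: [] }) })); // false
console.log(stopAfterFive({ steps: Array(5).fill({ toolCalls: [] }) })); // true
```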
233
+
234
+ By default, when you use `generateText` or `streamText`, it triggers a single generation. This works well for many use cases where you can rely on the model's training data to generate a response. However, when you provide tools, the model can now choose to generate a normal text response or a tool call. If the model generates a tool call, its generation is complete and that step is finished.
235
+
236
+ You may want the model to generate text after the tool has been executed, for example to summarize the tool results in the context of the user's query. In many cases, you may also want the model to use multiple tools in a single response. This is where multi-step calls come in.
237
+
238
+ You can think of multi-step calls like a conversation with a human. When you ask a question, if the person does not have the requisite knowledge (analogous to a model's training data), they may need to look up information (use a tool) before they can answer. In the same way, the model may need to call a tool to get the information it needs to answer your question, where each generation (tool call or text generation) is a step.
239
+
240
+ ### Example
241
+
242
+ In the following example, there are two steps:
243
+
244
+ 1. **Step 1**
245
+ 1. The prompt `'What is the weather in San Francisco?'` is sent to the model.
246
+ 1. The model generates a tool call.
247
+ 1. The tool call is executed.
248
+ 1. **Step 2**
249
+ 1. The tool result is sent to the model.
250
+ 1. The model generates a response considering the tool result.
251
+
252
+ ```ts highlight="18-19"
253
+ import { z } from 'zod';
254
+ import { generateText, tool, stepCountIs } from 'ai';
255
+ __PROVIDER_IMPORT__;
256
+
257
+ const { text, steps } = await generateText({
258
+ model: __MODEL__,
259
+ tools: {
260
+ weather: tool({
261
+ description: 'Get the weather in a location',
262
+ inputSchema: z.object({
263
+ location: z.string().describe('The location to get the weather for'),
264
+ }),
265
+ execute: async ({ location }) => ({
266
+ location,
267
+ temperature: 72 + Math.floor(Math.random() * 21) - 10,
268
+ }),
269
+ }),
270
+ },
271
+ stopWhen: stepCountIs(5), // stop after a maximum of 5 steps if tools were called
272
+ prompt: 'What is the weather in San Francisco?',
273
+ });
274
+ ```
275
+
276
+ <Note>You can use `streamText` in a similar way.</Note>
277
+
278
+ ### Steps
279
+
280
+ To access intermediate tool calls and results, you can use the `steps` property in the result object
281
+ or the `streamText` `onFinish` callback.
282
+ It contains all the text, tool calls, tool results, and more from each step.
283
+
284
+ #### Example: Extract tool calls from all steps
285
+
286
+ ```ts highlight="3,9-10"
287
+ import { generateText, stepCountIs } from 'ai';
288
+ __PROVIDER_IMPORT__;
289
+
290
+ const { steps } = await generateText({
291
+ model: __MODEL__,
292
+ stopWhen: stepCountIs(10),
293
+ // ...
294
+ });
295
+
296
+ // extract all tool calls from the steps:
297
+ const allToolCalls = steps.flatMap(step => step.toolCalls);
298
+ ```
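The same pattern works for tool results. A standalone sketch over mock step data (shapes simplified relative to the SDK's step objects):

```typescript
// Mock steps (shapes simplified for illustration).
const steps = [
  {
    toolCalls: [{ toolName: 'weather', input: { location: 'San Francisco' } }],
    toolResults: [{ toolName: 'weather', output: { temperature: 72 } }],
  },
  { toolCalls: [], toolResults: [] },
];

// extract all tool results from the steps:
const allToolResults = steps.flatMap(step => step.toolResults);

console.log(allToolResults.length); // 1
console.log(allToolResults[0].toolName); // 'weather'
```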
299
+
300
+ ### `onStepFinish` callback
301
+
302
+ When using `generateText` or `streamText`, you can provide an `onStepFinish` callback that
303
+ is triggered when a step is finished,
304
+ i.e. all text deltas, tool calls, and tool results for the step are available.
305
+ When you have multiple steps, the callback is triggered for each step.
306
+
307
+ The callback receives a `stepNumber` (zero-based) to identify which step just completed:
308
+
309
+ ```tsx highlight="5-8"
310
+ import { generateText } from 'ai';
311
+
312
+ const result = await generateText({
313
+ // ...
314
+ onStepFinish({
315
+ stepNumber,
316
+ text,
317
+ toolCalls,
318
+ toolResults,
319
+ finishReason,
320
+ usage,
321
+ }) {
322
+ console.log(`Step ${stepNumber} finished (${finishReason})`);
323
+ // your own logic, e.g. for saving the chat history or recording usage
324
+ },
325
+ });
326
+ ```
327
+
328
+ ### Tool execution lifecycle callbacks
329
+
330
+ You can use `experimental_onToolCallStart` and `experimental_onToolCallFinish` to observe tool execution.
331
+ These callbacks are called right before and after each tool's `execute` function, giving you
332
+ visibility into tool execution timing, inputs, outputs, and errors:
333
+
334
+ ```tsx highlight="5-14"
335
+ import { generateText } from 'ai';
336
+
337
+ const result = await generateText({
338
+ // ... model, tools, prompt
339
+ experimental_onToolCallStart({ toolName, toolCallId, input }) {
340
+ console.log(`Calling tool: ${toolName}`, { toolCallId, input });
341
+ },
342
+ experimental_onToolCallFinish({
343
+ toolName,
344
+ toolCallId,
345
+ output,
346
+ error,
347
+ durationMs,
348
+ }) {
349
+ if (error) {
350
+ console.error(`Tool ${toolName} failed after ${durationMs}ms:`, error);
351
+ } else {
352
+ console.log(`Tool ${toolName} completed in ${durationMs}ms`, { output });
353
+ }
354
+ },
355
+ });
356
+ ```
357
+
358
+ Errors thrown inside these callbacks are silently caught and do not break the generation flow.
359
+
360
+ ### `prepareStep` callback
361
+
362
+ The `prepareStep` callback is called before a step is started.
363
+
364
+ It is called with the following parameters:
365
+
366
+ - `model`: The model that was passed into `generateText`.
367
+ - `stopWhen`: The stopping condition that was passed into `generateText`.
368
+ - `stepNumber`: The number of the step that is being executed.
369
+ - `steps`: The steps that have been executed so far.
370
+ - `messages`: The messages that will be sent to the model for the current step.
371
+ - `experimental_context`: The context passed via the `experimental_context` setting (experimental).
372
+
373
+ You can use it to provide different settings for a step, including modifying the input messages.
374
+
375
+ ```tsx highlight="5-7"
376
+ import { generateText } from 'ai';
377
+
378
+ const result = await generateText({
379
+ // ...
380
+ prepareStep: async ({ model, stepNumber, steps, messages }) => {
381
+ if (stepNumber === 0) {
382
+ return {
383
+ // use a different model for this step:
384
+ model: modelForThisParticularStep,
385
+ // force a tool choice for this step:
386
+ toolChoice: { type: 'tool', toolName: 'tool1' },
387
+ // limit the tools that are available for this step:
388
+ activeTools: ['tool1'],
389
+ };
390
+ }
391
+
392
+ // when nothing is returned, the default settings are used
393
+ },
394
+ });
395
+ ```
396
+
397
+ #### Message Modification for Longer Agentic Loops
398
+
399
+ In longer agentic loops, you can use the `messages` parameter to modify the input messages for each step. This is particularly useful for prompt compression:
400
+
401
+ ```tsx
402
+ prepareStep: async ({ stepNumber, steps, messages }) => {
403
+ // Compress conversation history for longer loops
404
+ if (messages.length > 20) {
405
+ return {
406
+ messages: messages.slice(-10),
407
+ };
408
+ }
409
+
410
+ return {};
411
+ },
412
+ ```
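As a standalone sketch, the sliding-window compression above applied to a mock history (message shapes simplified):

```typescript
// Simplified message shape for illustration.
type Message = { role: string; content: string };

// Keep only the last 10 messages once the history grows past 20.
const compress = (messages: Message[]): Message[] =>
  messages.length > 20 ? messages.slice(-10) : messages;

const history: Message[] = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? 'user' : 'assistant',
  content: `message ${i}`,
}));

const compressed = compress(history);
console.log(compressed.length); // 10
console.log(compressed[0].content); // 'message 15'
```

Note that naive truncation can separate a tool call from its tool result; real compression logic usually keeps such pairs together.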
413
+
414
+ #### Provider Options for Step Configuration
415
+
416
+ You can use `providerOptions` in `prepareStep` to pass provider-specific configuration for each step. This is useful for features like Anthropic's code execution container persistence:
417
+
418
+ ```tsx
419
+ import { forwardAnthropicContainerIdFromLastStep } from '@ai-sdk/anthropic';
420
+
421
+ // Propagate container ID from previous step for code execution continuity
422
+ prepareStep: forwardAnthropicContainerIdFromLastStep,
423
+ ```
424
+
425
+ ## Response Messages
426
+
427
+ Adding the generated assistant and tool messages to your conversation history is a common task,
428
+ especially if you are using multi-step tool calls.
429
+
430
+ Both `generateText` and `streamText` have a `response.messages` property that you can use to
431
+ add the assistant and tool messages to your conversation history.
432
+ It is also available in the `onFinish` callback of `streamText`.
433
+
434
+ The `response.messages` property contains an array of `ModelMessage` objects that you can add to your conversation history:
435
+
436
+ ```ts
437
+ import { generateText, ModelMessage } from 'ai';
438
+
439
+ const messages: ModelMessage[] = [
440
+ // ...
441
+ ];
442
+
443
+ const { response } = await generateText({
444
+ // ...
445
+ messages,
446
+ });
447
+
448
+ // add the response messages to your conversation history:
449
+ messages.push(...response.messages); // streamText: ...((await response).messages)
450
+ ```
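In a multi-step tool call, `response.messages` typically contains several entries: an assistant message with the tool call, a tool message with the result, and a final assistant answer. A mock sketch of how the history grows across one such turn (message shapes heavily simplified):

```typescript
// Simplified shapes; real ModelMessage objects carry structured content parts.
type MockMessage = { role: 'user' | 'assistant' | 'tool'; content: unknown };

const messages: MockMessage[] = [
  { role: 'user', content: 'What is the weather in San Francisco?' },
];

// Pretend these came back as response.messages from a multi-step call:
const responseMessages: MockMessage[] = [
  { role: 'assistant', content: [{ type: 'tool-call', toolName: 'weather' }] },
  { role: 'tool', content: [{ type: 'tool-result', toolName: 'weather' }] },
  { role: 'assistant', content: 'It is 72°F in San Francisco.' },
];

messages.push(...responseMessages);

console.log(messages.length); // 4
console.log(messages[3].role); // 'assistant'
```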
451
+
452
+ ## Dynamic Tools
453
+
454
+ AI SDK Core supports dynamic tools for scenarios where tool schemas are not known at compile time. This is useful for:
455
+
456
+ - MCP (Model Context Protocol) tools without schemas
457
+ - User-defined functions at runtime
458
+ - Tools loaded from external sources
459
+
460
+ ### Using dynamicTool
461
+
462
+ The `dynamicTool` helper creates tools with unknown input/output types:
463
+
464
+ ```ts
465
+ import { dynamicTool } from 'ai';
466
+ import { z } from 'zod';
467
+
468
+ const customTool = dynamicTool({
469
+ description: 'Execute a custom function',
470
+ inputSchema: z.object({}),
471
+ execute: async input => {
472
+ // input is typed as 'unknown'
473
+ // You need to validate/cast it at runtime
474
+ const { action, parameters } = input as any;
475
+
476
+ // Execute your dynamic logic
477
+ return { result: `Executed ${action}` };
478
+ },
479
+ });
480
+ ```
481
+
482
+ ### Type-Safe Handling
483
+
484
+ When using both static and dynamic tools, use the `dynamic` flag for type narrowing:
485
+
486
+ ```ts
487
+ const result = await generateText({
488
+ model: __MODEL__,
489
+ tools: {
490
+ // Static tool with known types
491
+ weather: weatherTool,
492
+ // Dynamic tool
493
+ custom: dynamicTool({
494
+ /* ... */
495
+ }),
496
+ },
497
+ onStepFinish: ({ toolCalls, toolResults }) => {
498
+ // Type-safe iteration
499
+ for (const toolCall of toolCalls) {
500
+ if (toolCall.dynamic) {
501
+ // Dynamic tool: input is 'unknown'
502
+ console.log('Dynamic:', toolCall.toolName, toolCall.input);
503
+ continue;
504
+ }
505
+
506
+ // Static tool: full type inference
507
+ switch (toolCall.toolName) {
508
+ case 'weather':
509
+ console.log(toolCall.input.location); // typed as string
510
+ break;
511
+ }
512
+ }
513
+ },
514
+ });
515
+ ```
516
+
517
+ ## Preliminary Tool Results
518
+
519
+ You can return an `AsyncIterable` over multiple results.
520
+ In this case, the last value from the iterable is the final tool result.
521
+
522
+ This can be used in combination with generator functions to, for example, stream status information
523
+ during tool execution:
524
+
525
+ ```ts
526
+ tool({
527
+ description: 'Get the current weather.',
528
+ inputSchema: z.object({
529
+ location: z.string(),
530
+ }),
531
+ async *execute({ location }) {
532
+ yield {
533
+ status: 'loading' as const,
534
+ text: `Getting weather for ${location}`,
535
+       temperature: undefined,
536
+ };
537
+
538
+ await new Promise(resolve => setTimeout(resolve, 3000));
539
+
540
+ const temperature = 72 + Math.floor(Math.random() * 21) - 10;
541
+
542
+ yield {
543
+ status: 'success' as const,
544
+ text: `The weather in ${location} is ${temperature}°F`,
545
+ temperature,
546
+ };
547
+ },
548
+ });
549
+ ```
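Per the rule above, intermediate yields are preliminary and the last value is the final tool result. A standalone sketch of draining such an iterable:

```typescript
// Async generator standing in for a tool's execute function.
async function* executeWeather({ location }: { location: string }) {
  yield { status: 'loading' as const, text: `Getting weather for ${location}` };
  yield {
    status: 'success' as const,
    text: `The weather in ${location} is 72°F`,
    temperature: 72,
  };
}

let finalResult: { status: string; text: string; temperature?: number } | undefined;
for await (const preliminary of executeWeather({ location: 'London' })) {
  finalResult = preliminary; // the last assignment wins
}

console.log(finalResult?.status); // 'success'
console.log(finalResult?.temperature); // 72
```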
550
+
551
+ ## Tool Choice
552
+
553
+ You can use the `toolChoice` setting to influence when a tool is selected.
554
+ It supports the following settings:
555
+
556
+ - `auto` (default): the model can choose whether and which tools to call.
557
+ - `required`: the model must call a tool. It can choose which tool to call.
558
+ - `none`: the model must not call tools.
559
+ - `{ type: 'tool', toolName: string (typed) }`: the model must call the specified tool.
560
+
561
+ ```ts highlight="18"
562
+ import { z } from 'zod';
563
+ import { generateText, tool } from 'ai';
564
+ __PROVIDER_IMPORT__;
565
+
566
+ const result = await generateText({
567
+ model: __MODEL__,
568
+ tools: {
569
+ weather: tool({
570
+ description: 'Get the weather in a location',
571
+ inputSchema: z.object({
572
+ location: z.string().describe('The location to get the weather for'),
573
+ }),
574
+ execute: async ({ location }) => ({
575
+ location,
576
+ temperature: 72 + Math.floor(Math.random() * 21) - 10,
577
+ }),
578
+ }),
579
+ },
580
+ toolChoice: 'required', // force the model to call a tool
581
+ prompt: 'What is the weather in San Francisco?',
582
+ });
583
+ ```
584
+
585
+ ## Tool Execution Options
586
+
587
+ When tools are called, they receive additional options as a second parameter.
588
+
589
+ ### Tool Call ID
590
+
591
+ The ID of the tool call is forwarded to the tool execution.
592
+ You can use it, for example, when sending tool-call-related information with stream data.
593
+
594
+ ```ts highlight="14-20"
595
+ import {
596
+ streamText,
597
+ tool,
598
+ createUIMessageStream,
599
+ createUIMessageStreamResponse,
600
+ } from 'ai';
601
+
602
+ export async function POST(req: Request) {
603
+ const { messages } = await req.json();
604
+
605
+ const stream = createUIMessageStream({
606
+ execute: ({ writer }) => {
607
+ const result = streamText({
608
+ // ...
609
+ messages,
610
+ tools: {
611
+ myTool: tool({
612
+ // ...
613
+ execute: async (args, { toolCallId }) => {
614
+ // return e.g. custom status for tool call
615
+ writer.write({
616
+ type: 'data-tool-status',
617
+ id: toolCallId,
618
+ data: {
619
+ name: 'myTool',
620
+ status: 'in-progress',
621
+ },
622
+ });
623
+ // ...
624
+ },
625
+ }),
626
+ },
627
+ });
628
+
629
+ writer.merge(result.toUIMessageStream());
630
+ },
631
+ });
632
+
633
+ return createUIMessageStreamResponse({ stream });
634
+ }
635
+ ```
636
+
637
+ ### Messages
638
+
639
+ The messages that were sent to the language model to initiate the response that contained the tool call are forwarded to the tool execution.
640
+ You can access them in the second parameter of the `execute` function.
641
+ In multi-step calls, the messages contain the text, tool calls, and tool results from all previous steps.
642
+
643
+ ```ts highlight="8-9"
644
+ import { generateText, tool } from 'ai';
645
+
646
+ const result = await generateText({
647
+ // ...
648
+ tools: {
649
+ myTool: tool({
650
+ // ...
651
+ execute: async (args, { messages }) => {
652
+ // use the message history in e.g. calls to other language models
653
+ return { ... };
654
+ },
655
+ }),
656
+ },
657
+ });
658
+ ```
659
+
660
+ ### Abort Signals
661
+
662
+ The abort signals from `generateText` and `streamText` are forwarded to the tool execution.
663
+ You can access them in the second parameter of the `execute` function and e.g. abort long-running computations or forward them to fetch calls inside tools.
664
+
665
+ ```ts highlight="6,11,14"
666
+ import { z } from 'zod';
667
+ import { generateText, tool } from 'ai';
668
+ __PROVIDER_IMPORT__;
669
+
670
+ const result = await generateText({
671
+ model: __MODEL__,
672
+ abortSignal: myAbortSignal, // signal that will be forwarded to tools
673
+ tools: {
674
+ weather: tool({
675
+ description: 'Get the weather in a location',
676
+ inputSchema: z.object({ location: z.string() }),
677
+ execute: async ({ location }, { abortSignal }) => {
678
+ return fetch(
679
+ `https://api.weatherapi.com/v1/current.json?q=${location}`,
680
+ { signal: abortSignal }, // forward the abort signal to fetch
681
+ );
682
+ },
683
+ }),
684
+ },
685
+ prompt: 'What is the weather in San Francisco?',
686
+ });
687
+ ```
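Besides forwarding the signal to `fetch`, long-running computations can check it directly. A minimal standalone sketch (no network involved):

```typescript
// Check the signal inside a long-running loop and bail out once aborted.
const controller = new AbortController();

function slowComputation(signal: AbortSignal): number {
  let total = 0;
  for (let i = 0; i < 1_000_000; i++) {
    if (signal.aborted) throw new Error('computation aborted');
    total += i;
  }
  return total;
}

controller.abort(); // e.g. the user cancelled the generation

let aborted = false;
try {
  slowComputation(controller.signal);
} catch {
  aborted = true;
}

console.log(aborted); // true
```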
688
+
689
+ ### Context (experimental)
690
+
691
+ You can pass in arbitrary context from `generateText` or `streamText` via the `experimental_context` setting.
692
+ This context is available in the `experimental_context` tool execution option.
693
+
694
+ ```ts
695
+ const result = await generateText({
696
+ // ...
697
+ tools: {
698
+ someTool: tool({
699
+ // ...
700
+ execute: async (input, { experimental_context: context }) => {
701
+ const typedContext = context as { example: string }; // or use type validation library
702
+ // ...
703
+ },
704
+ }),
705
+ },
706
+ experimental_context: { example: '123' },
707
+ });
708
+ ```
709
+
710
+ ## Tool Input Lifecycle Hooks
711
+
712
+ The following tool input lifecycle hooks are available:
713
+
714
+ - **`onInputStart`**: Called when the model starts generating the input (arguments) for the tool call
715
+ - **`onInputDelta`**: Called for each chunk of text as the input is streamed
716
+ - **`onInputAvailable`**: Called when the complete input is available and validated
717
+
718
+ `onInputStart` and `onInputDelta` are only called in streaming contexts (when using `streamText`). They are not called when using `generateText`.
719
+
720
+ ### Example
721
+
722
+ ```ts highlight="15-23"
723
+ import { streamText, tool } from 'ai';
724
+ __PROVIDER_IMPORT__;
725
+ import { z } from 'zod';
726
+
727
+ const result = streamText({
728
+ model: __MODEL__,
729
+ tools: {
730
+ getWeather: tool({
731
+ description: 'Get the weather in a location',
732
+ inputSchema: z.object({
733
+ location: z.string().describe('The location to get the weather for'),
734
+ }),
735
+ execute: async ({ location }) => ({
736
+ temperature: 72 + Math.floor(Math.random() * 21) - 10,
737
+ }),
738
+ onInputStart: () => {
739
+ console.log('Tool call starting');
740
+ },
741
+ onInputDelta: ({ inputTextDelta }) => {
742
+ console.log('Received input chunk:', inputTextDelta);
743
+ },
744
+ onInputAvailable: ({ input }) => {
745
+ console.log('Complete input:', input);
746
+ },
747
+ }),
748
+ },
749
+ prompt: 'What is the weather in San Francisco?',
750
+ });
751
+ ```
752
+
753
+ ## Types
754
+
755
+ Modularizing your code often requires defining types to ensure type safety and reusability.
756
+ To enable this, the AI SDK provides several helper types for tools, tool calls, and tool results.
757
+
758
+ You can use them to strongly type your variables, function parameters, and return types
759
+ in parts of the code that are not directly related to `streamText` or `generateText`.
760
+
761
+ Each tool call is typed with `ToolCall<NAME extends string, ARGS>`, depending
762
+ on the tool that has been invoked.
763
+ Similarly, the tool results are typed with `ToolResult<NAME extends string, ARGS, RESULT>`.
764
+
765
+ The tools in `streamText` and `generateText` are defined as a `ToolSet`.
766
+ The type inference helpers `TypedToolCall<TOOLS extends ToolSet>`
767
+ and `TypedToolResult<TOOLS extends ToolSet>` can be used to
768
+ extract the tool call and tool result types from the tools.
769
+
770
+ ```ts highlight="18-19,23-24"
771
+ import { TypedToolCall, TypedToolResult, generateText, tool } from 'ai';
772
+ __PROVIDER_IMPORT__;
773
+ import { z } from 'zod';
774
+
775
+ const myToolSet = {
776
+ firstTool: tool({
777
+ description: 'Greets the user',
778
+ inputSchema: z.object({ name: z.string() }),
779
+ execute: async ({ name }) => `Hello, ${name}!`,
780
+ }),
781
+ secondTool: tool({
782
+ description: 'Tells the user their age',
783
+ inputSchema: z.object({ age: z.number() }),
784
+ execute: async ({ age }) => `You are ${age} years old!`,
785
+ }),
786
+ };
787
+
788
+ type MyToolCall = TypedToolCall<typeof myToolSet>;
789
+ type MyToolResult = TypedToolResult<typeof myToolSet>;
790
+
791
+ async function generateSomething(prompt: string): Promise<{
792
+ text: string;
793
+ toolCalls: Array<MyToolCall>; // typed tool calls
794
+ toolResults: Array<MyToolResult>; // typed tool results
795
+ }> {
796
+ return generateText({
797
+ model: __MODEL__,
798
+ tools: myToolSet,
799
+ prompt,
800
+ });
801
+ }
802
+ ```
803
+
804
+ ## Handling Errors
+
+ The AI SDK has three tool-call-related errors:
+
+ - [`NoSuchToolError`](/docs/reference/ai-sdk-errors/ai-no-such-tool-error): the model tries to call a tool that is not defined in the tools object
+ - [`InvalidToolInputError`](/docs/reference/ai-sdk-errors/ai-invalid-tool-input-error): the model calls a tool with inputs that do not match the tool's input schema
+ - [`ToolCallRepairError`](/docs/reference/ai-sdk-errors/ai-tool-call-repair-error): an error that occurred during tool call repair
+
+ When tool execution fails (errors thrown by your tool's `execute` function), the AI SDK adds them as `tool-error` content parts to enable automated LLM roundtrips in multi-step scenarios.
+
+ ### `generateText`
+
+ `generateText` throws errors for tool schema validation issues and other failures; you can handle these with a `try`/`catch` block. Tool execution errors appear as `tool-error` parts in the result steps:
+
+ ```ts
+ try {
+   const result = await generateText({
+     //...
+   });
+ } catch (error) {
+   if (NoSuchToolError.isInstance(error)) {
+     // handle the no such tool error
+   } else if (InvalidToolInputError.isInstance(error)) {
+     // handle the invalid tool inputs error
+   } else {
+     // handle other errors
+   }
+ }
+ ```
+
+ Tool execution errors are available in the result steps:
+
+ ```ts
+ const { steps } = await generateText({
+   // ...
+ });
+
+ // check for tool errors in the steps
+ const toolErrors = steps.flatMap(step =>
+   step.content.filter(part => part.type === 'tool-error'),
+ );
+
+ toolErrors.forEach(toolError => {
+   console.log('Tool error:', toolError.error);
+   console.log('Tool name:', toolError.toolName);
+   console.log('Tool input:', toolError.input);
+ });
+ ```
+
+ ### `streamText`
+
+ `streamText` sends errors as part of the full stream. Tool execution errors appear as `tool-error` parts, while other errors appear as `error` parts.
+
+ When using `toUIMessageStreamResponse`, you can pass an `onError` function to extract the error message from the error part and forward it as part of the stream response:
+
+ ```ts
+ const result = streamText({
+   // ...
+ });
+
+ return result.toUIMessageStreamResponse({
+   onError: error => {
+     if (NoSuchToolError.isInstance(error)) {
+       return 'The model tried to call an unknown tool.';
+     } else if (InvalidToolInputError.isInstance(error)) {
+       return 'The model called a tool with invalid inputs.';
+     } else {
+       return 'An unknown error occurred.';
+     }
+   },
+ });
+ ```
+
+ ## Tool Call Repair
+
+ <Note type="warning">
+   The tool call repair feature is experimental and may change in the future.
+ </Note>
+
+ Language models sometimes fail to generate valid tool calls,
+ especially when the input schema is complex or the model is smaller.
+
+ If you use multiple steps, failed tool calls are sent back to the LLM
+ in the next step to give it an opportunity to fix them.
+ However, you may want to control how invalid tool calls are repaired without requiring
+ additional steps that pollute the message history.
+
+ You can use the `experimental_repairToolCall` option to attempt to repair an invalid
+ tool call with a custom function.
+
+ You can use different strategies to repair the tool call:
+
+ - Use a model with structured outputs to generate the inputs.
+ - Send the messages, system prompt, and tool schema to a stronger model to generate the inputs.
+ - Provide more specific repair instructions based on which tool was called.
+
+ ### Example: Use a model with structured outputs for repair
+
+ ```ts
+ import { generateText, NoSuchToolError, Output } from 'ai';
+ __PROVIDER_IMPORT__;
+
+ const result = await generateText({
+   model,
+   tools,
+   prompt,
+
+   experimental_repairToolCall: async ({
+     toolCall,
+     tools,
+     inputSchema,
+     error,
+   }) => {
+     if (NoSuchToolError.isInstance(error)) {
+       return null; // do not attempt to fix invalid tool names
+     }
+
+     const tool = tools[toolCall.toolName as keyof typeof tools];
+
+     const { output: repairedArgs } = await generateText({
+       model: __MODEL__,
+       output: Output.object({ schema: tool.inputSchema }),
+       prompt: [
+         `The model tried to call the tool "${toolCall.toolName}"` +
+           ` with the following inputs:`,
+         JSON.stringify(toolCall.input),
+         `The tool accepts the following schema:`,
+         JSON.stringify(inputSchema(toolCall)),
+         'Please fix the inputs.',
+       ].join('\n'),
+     });
+
+     return { ...toolCall, input: JSON.stringify(repairedArgs) };
+   },
+ });
+ ```
+
+ ### Example: Use the re-ask strategy for repair
+
+ ```ts
+ import { generateText, NoSuchToolError } from 'ai';
+ __PROVIDER_IMPORT__;
+
+ const result = await generateText({
+   model,
+   tools,
+   prompt,
+
+   experimental_repairToolCall: async ({
+     toolCall,
+     tools,
+     error,
+     messages,
+     system,
+   }) => {
+     const result = await generateText({
+       model,
+       system,
+       messages: [
+         ...messages,
+         {
+           role: 'assistant',
+           content: [
+             {
+               type: 'tool-call',
+               toolCallId: toolCall.toolCallId,
+               toolName: toolCall.toolName,
+               input: toolCall.input,
+             },
+           ],
+         },
+         {
+           role: 'tool' as const,
+           content: [
+             {
+               type: 'tool-result',
+               toolCallId: toolCall.toolCallId,
+               toolName: toolCall.toolName,
+               output: error.message,
+             },
+           ],
+         },
+       ],
+       tools,
+     });
+
+     const newToolCall = result.toolCalls.find(
+       newToolCall => newToolCall.toolName === toolCall.toolName,
+     );
+
+     return newToolCall != null
+       ? {
+           type: 'tool-call' as const,
+           toolCallId: toolCall.toolCallId,
+           toolName: toolCall.toolName,
+           input: JSON.stringify(newToolCall.input),
+         }
+       : null;
+   },
+ });
+ ```
+
+ ## Active Tools
+
+ Language models can only handle a limited number of tools at a time, depending on the model.
+ To retain static typing over a large number of tools while limiting the tools available to the model,
+ the AI SDK provides the `activeTools` property.
+
+ It is an array of tool names that are currently active.
+ By default, the value is `undefined` and all tools are active.
+
+ ```ts highlight="7"
+ import { generateText } from 'ai';
+ __PROVIDER_IMPORT__;
+
+ const { text } = await generateText({
+   model: __MODEL__,
+   tools: myToolSet,
+   activeTools: ['firstTool'],
+ });
+ ```
+
+ ## Multi-modal Tool Results
+
+ <Note type="warning">
+   Multi-modal tool results are experimental and only supported by Anthropic and
+   OpenAI.
+ </Note>
+
+ Multi-modal tool results, e.g. screenshots, need to be converted into a specific
+ format before they can be sent back to the model.
+
+ AI SDK Core tools have an optional `toModelOutput` function
+ that converts the tool result into a content part.
+
+ Here is an example for converting a screenshot into a content part:
+
+ ```ts highlight="22-27"
+ const result = await generateText({
+   model: __MODEL__,
+   tools: {
+     computer: anthropic.tools.computer_20241022({
+       // ...
+       async execute({ action, coordinate, text }) {
+         switch (action) {
+           case 'screenshot': {
+             return {
+               type: 'image',
+               data: fs
+                 .readFileSync('./data/screenshot-editor.png')
+                 .toString('base64'),
+             };
+           }
+           default: {
+             return `executed ${action}`;
+           }
+         }
+       },
+
+       // map to tool result content for LLM consumption:
+       toModelOutput({ output }) {
+         return {
+           type: 'content',
+           value:
+             typeof output === 'string'
+               ? [{ type: 'text', text: output }]
+               : [{ type: 'media', data: output.data, mediaType: 'image/png' }],
+         };
+       },
+     }),
+   },
+   // ...
+ });
+ ```
+
+ ## Extracting Tools
+
+ Once you start having many tools, you might want to extract them into separate files.
+ The `tool` helper function is crucial for this, because it ensures correct type inference.
+
+ Here is an example of an extracted tool:
+
+ ```ts filename="tools/weather-tool.ts" highlight="1,4-5"
+ import { tool } from 'ai';
+ import { z } from 'zod';
+
+ // the `tool` helper function ensures correct type inference:
+ export const weatherTool = tool({
+   description: 'Get the weather in a location',
+   inputSchema: z.object({
+     location: z.string().describe('The location to get the weather for'),
+   }),
+   execute: async ({ location }) => ({
+     location,
+     temperature: 72 + Math.floor(Math.random() * 21) - 10,
+   }),
+ });
+ ```
+
+ ## MCP Tools
+
+ The AI SDK supports connecting to Model Context Protocol (MCP) servers to access their tools.
+ MCP enables your AI applications to discover and use tools across various services through a standardized interface.
+
+ For detailed information about MCP tools, including initialization, transport options, and usage patterns, see the [MCP Tools documentation](/docs/ai-sdk-core/mcp-tools).
+
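+ As a minimal sketch of the wiring (the server URL is a placeholder, and `experimental_createMCPClient` is experimental and may change), you can connect to an MCP server, discover its tools at runtime, and pass them to `generateText`:
+
+ ```ts
+ import { experimental_createMCPClient as createMCPClient, generateText } from 'ai';
+
+ const mcpClient = await createMCPClient({
+   transport: {
+     type: 'sse',
+     url: 'https://example.com/mcp/sse', // placeholder MCP server endpoint
+   },
+ });
+
+ try {
+   // tool schemas are discovered dynamically from the server:
+   const tools = await mcpClient.tools();
+
+   const result = await generateText({
+     model: __MODEL__,
+     tools,
+     prompt: 'Use the available tools to answer my question.',
+   });
+
+   console.log(result.text);
+ } finally {
+   // close the client to release the server connection:
+   await mcpClient.close();
+ }
+ ```
+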
+ ### AI SDK Tools vs MCP Tools
+
+ In most cases, you should define your own AI SDK tools for production applications. They provide full control, type safety, and optimal performance. MCP tools are best suited for rapid development iteration and scenarios where users bring their own tools.
+
+ | Aspect                 | AI SDK Tools                                              | MCP Tools                                             |
+ | ---------------------- | --------------------------------------------------------- | ----------------------------------------------------- |
+ | **Type Safety**        | Full static typing end-to-end                             | Dynamic discovery at runtime                          |
+ | **Execution**          | Same process as your request (low latency)                | Separate server (network overhead)                    |
+ | **Prompt Control**     | Full control over descriptions and schemas                | Controlled by MCP server owner                        |
+ | **Schema Control**     | You define and optimize for your model                    | Controlled by MCP server owner                        |
+ | **Version Management** | Full visibility over updates                              | Can update independently (version skew risk)          |
+ | **Authentication**     | Same process, no additional auth required                 | Separate server introduces additional auth complexity |
+ | **Best For**           | Production applications requiring control and performance | Development iteration, user-provided tools            |
+
+ ## Examples
+
+ You can see tools in action using various frameworks in the following examples:
+
+ <ExampleLinks
+   examples={[
+     {
+       title: 'Learn to use tools in Node.js',
+       link: '/cookbook/node/call-tools',
+     },
+     {
+       title: 'Learn to use tools in Next.js with Route Handlers',
+       link: '/cookbook/next/call-tools',
+     },
+     {
+       title: 'Learn to use MCP tools in Node.js',
+       link: '/cookbook/node/mcp-tools',
+     },
+   ]}
+ />