@fairyhunter13/ai-sdk 6.0.116-fork.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +7582 -0
- package/README.md +238 -0
- package/dist/index.d.mts +6751 -0
- package/dist/index.d.ts +6751 -0
- package/dist/index.js +14155 -0
- package/dist/index.js.map +1 -0
- package/dist/index.mjs +14127 -0
- package/dist/index.mjs.map +1 -0
- package/dist/internal/index.d.mts +324 -0
- package/dist/internal/index.d.ts +324 -0
- package/dist/internal/index.js +1352 -0
- package/dist/internal/index.js.map +1 -0
- package/dist/internal/index.mjs +1336 -0
- package/dist/internal/index.mjs.map +1 -0
- package/dist/test/index.d.mts +265 -0
- package/dist/test/index.d.ts +265 -0
- package/dist/test/index.js +509 -0
- package/dist/test/index.js.map +1 -0
- package/dist/test/index.mjs +472 -0
- package/dist/test/index.mjs.map +1 -0
- package/docs/00-introduction/index.mdx +76 -0
- package/docs/02-foundations/01-overview.mdx +43 -0
- package/docs/02-foundations/02-providers-and-models.mdx +158 -0
- package/docs/02-foundations/03-prompts.mdx +616 -0
- package/docs/02-foundations/04-tools.mdx +251 -0
- package/docs/02-foundations/05-streaming.mdx +62 -0
- package/docs/02-foundations/06-provider-options.mdx +345 -0
- package/docs/02-foundations/index.mdx +49 -0
- package/docs/02-getting-started/00-choosing-a-provider.mdx +110 -0
- package/docs/02-getting-started/01-navigating-the-library.mdx +85 -0
- package/docs/02-getting-started/02-nextjs-app-router.mdx +559 -0
- package/docs/02-getting-started/03-nextjs-pages-router.mdx +542 -0
- package/docs/02-getting-started/04-svelte.mdx +627 -0
- package/docs/02-getting-started/05-nuxt.mdx +566 -0
- package/docs/02-getting-started/06-nodejs.mdx +512 -0
- package/docs/02-getting-started/07-expo.mdx +766 -0
- package/docs/02-getting-started/08-tanstack-start.mdx +583 -0
- package/docs/02-getting-started/09-coding-agents.mdx +179 -0
- package/docs/02-getting-started/index.mdx +44 -0
- package/docs/03-agents/01-overview.mdx +96 -0
- package/docs/03-agents/02-building-agents.mdx +449 -0
- package/docs/03-agents/03-workflows.mdx +386 -0
- package/docs/03-agents/04-loop-control.mdx +394 -0
- package/docs/03-agents/05-configuring-call-options.mdx +286 -0
- package/docs/03-agents/06-memory.mdx +222 -0
- package/docs/03-agents/06-subagents.mdx +362 -0
- package/docs/03-agents/index.mdx +46 -0
- package/docs/03-ai-sdk-core/01-overview.mdx +31 -0
- package/docs/03-ai-sdk-core/05-generating-text.mdx +707 -0
- package/docs/03-ai-sdk-core/10-generating-structured-data.mdx +498 -0
- package/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx +1144 -0
- package/docs/03-ai-sdk-core/16-mcp-tools.mdx +383 -0
- package/docs/03-ai-sdk-core/20-prompt-engineering.mdx +146 -0
- package/docs/03-ai-sdk-core/25-settings.mdx +216 -0
- package/docs/03-ai-sdk-core/26-reasoning.mdx +190 -0
- package/docs/03-ai-sdk-core/30-embeddings.mdx +246 -0
- package/docs/03-ai-sdk-core/31-reranking.mdx +218 -0
- package/docs/03-ai-sdk-core/35-image-generation.mdx +341 -0
- package/docs/03-ai-sdk-core/36-transcription.mdx +227 -0
- package/docs/03-ai-sdk-core/37-speech.mdx +169 -0
- package/docs/03-ai-sdk-core/38-video-generation.mdx +366 -0
- package/docs/03-ai-sdk-core/40-middleware.mdx +485 -0
- package/docs/03-ai-sdk-core/45-provider-management.mdx +349 -0
- package/docs/03-ai-sdk-core/50-error-handling.mdx +149 -0
- package/docs/03-ai-sdk-core/55-testing.mdx +219 -0
- package/docs/03-ai-sdk-core/60-telemetry.mdx +391 -0
- package/docs/03-ai-sdk-core/65-devtools.mdx +107 -0
- package/docs/03-ai-sdk-core/65-event-listeners.mdx +1118 -0
- package/docs/03-ai-sdk-core/index.mdx +99 -0
- package/docs/04-ai-sdk-ui/01-overview.mdx +44 -0
- package/docs/04-ai-sdk-ui/02-chatbot.mdx +1320 -0
- package/docs/04-ai-sdk-ui/03-chatbot-message-persistence.mdx +535 -0
- package/docs/04-ai-sdk-ui/03-chatbot-resume-streams.mdx +263 -0
- package/docs/04-ai-sdk-ui/03-chatbot-tool-usage.mdx +682 -0
- package/docs/04-ai-sdk-ui/04-generative-user-interfaces.mdx +389 -0
- package/docs/04-ai-sdk-ui/05-completion.mdx +181 -0
- package/docs/04-ai-sdk-ui/08-object-generation.mdx +344 -0
- package/docs/04-ai-sdk-ui/20-streaming-data.mdx +397 -0
- package/docs/04-ai-sdk-ui/21-error-handling.mdx +190 -0
- package/docs/04-ai-sdk-ui/21-transport.mdx +174 -0
- package/docs/04-ai-sdk-ui/24-reading-ui-message-streams.mdx +104 -0
- package/docs/04-ai-sdk-ui/25-message-metadata.mdx +152 -0
- package/docs/04-ai-sdk-ui/50-stream-protocol.mdx +503 -0
- package/docs/04-ai-sdk-ui/index.mdx +64 -0
- package/docs/05-ai-sdk-rsc/01-overview.mdx +45 -0
- package/docs/05-ai-sdk-rsc/02-streaming-react-components.mdx +209 -0
- package/docs/05-ai-sdk-rsc/03-generative-ui-state.mdx +279 -0
- package/docs/05-ai-sdk-rsc/03-saving-and-restoring-states.mdx +105 -0
- package/docs/05-ai-sdk-rsc/04-multistep-interfaces.mdx +282 -0
- package/docs/05-ai-sdk-rsc/05-streaming-values.mdx +157 -0
- package/docs/05-ai-sdk-rsc/06-loading-state.mdx +273 -0
- package/docs/05-ai-sdk-rsc/08-error-handling.mdx +94 -0
- package/docs/05-ai-sdk-rsc/09-authentication.mdx +42 -0
- package/docs/05-ai-sdk-rsc/10-migrating-to-ui.mdx +722 -0
- package/docs/05-ai-sdk-rsc/index.mdx +63 -0
- package/docs/06-advanced/01-prompt-engineering.mdx +96 -0
- package/docs/06-advanced/02-stopping-streams.mdx +184 -0
- package/docs/06-advanced/03-backpressure.mdx +173 -0
- package/docs/06-advanced/04-caching.mdx +169 -0
- package/docs/06-advanced/05-multiple-streamables.mdx +68 -0
- package/docs/06-advanced/06-rate-limiting.mdx +60 -0
- package/docs/06-advanced/07-rendering-ui-with-language-models.mdx +225 -0
- package/docs/06-advanced/08-model-as-router.mdx +120 -0
- package/docs/06-advanced/09-multistep-interfaces.mdx +115 -0
- package/docs/06-advanced/09-sequential-generations.mdx +55 -0
- package/docs/06-advanced/10-vercel-deployment-guide.mdx +117 -0
- package/docs/06-advanced/index.mdx +11 -0
- package/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx +2785 -0
- package/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx +3752 -0
- package/docs/07-reference/01-ai-sdk-core/05-embed.mdx +332 -0
- package/docs/07-reference/01-ai-sdk-core/06-embed-many.mdx +330 -0
- package/docs/07-reference/01-ai-sdk-core/06-rerank.mdx +309 -0
- package/docs/07-reference/01-ai-sdk-core/10-generate-image.mdx +251 -0
- package/docs/07-reference/01-ai-sdk-core/11-transcribe.mdx +152 -0
- package/docs/07-reference/01-ai-sdk-core/12-generate-speech.mdx +221 -0
- package/docs/07-reference/01-ai-sdk-core/13-generate-video.mdx +264 -0
- package/docs/07-reference/01-ai-sdk-core/15-agent.mdx +235 -0
- package/docs/07-reference/01-ai-sdk-core/16-tool-loop-agent.mdx +973 -0
- package/docs/07-reference/01-ai-sdk-core/17-create-agent-ui-stream.mdx +154 -0
- package/docs/07-reference/01-ai-sdk-core/18-create-agent-ui-stream-response.mdx +173 -0
- package/docs/07-reference/01-ai-sdk-core/18-pipe-agent-ui-stream-to-response.mdx +150 -0
- package/docs/07-reference/01-ai-sdk-core/20-tool.mdx +209 -0
- package/docs/07-reference/01-ai-sdk-core/22-dynamic-tool.mdx +223 -0
- package/docs/07-reference/01-ai-sdk-core/23-create-mcp-client.mdx +423 -0
- package/docs/07-reference/01-ai-sdk-core/24-mcp-stdio-transport.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/25-json-schema.mdx +94 -0
- package/docs/07-reference/01-ai-sdk-core/26-zod-schema.mdx +109 -0
- package/docs/07-reference/01-ai-sdk-core/27-valibot-schema.mdx +58 -0
- package/docs/07-reference/01-ai-sdk-core/28-output.mdx +342 -0
- package/docs/07-reference/01-ai-sdk-core/30-model-message.mdx +435 -0
- package/docs/07-reference/01-ai-sdk-core/31-ui-message.mdx +264 -0
- package/docs/07-reference/01-ai-sdk-core/32-validate-ui-messages.mdx +101 -0
- package/docs/07-reference/01-ai-sdk-core/33-safe-validate-ui-messages.mdx +113 -0
- package/docs/07-reference/01-ai-sdk-core/40-provider-registry.mdx +198 -0
- package/docs/07-reference/01-ai-sdk-core/42-custom-provider.mdx +157 -0
- package/docs/07-reference/01-ai-sdk-core/50-cosine-similarity.mdx +52 -0
- package/docs/07-reference/01-ai-sdk-core/60-wrap-language-model.mdx +59 -0
- package/docs/07-reference/01-ai-sdk-core/61-wrap-image-model.mdx +64 -0
- package/docs/07-reference/01-ai-sdk-core/65-language-model-v2-middleware.mdx +74 -0
- package/docs/07-reference/01-ai-sdk-core/66-extract-reasoning-middleware.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/67-simulate-streaming-middleware.mdx +71 -0
- package/docs/07-reference/01-ai-sdk-core/68-default-settings-middleware.mdx +80 -0
- package/docs/07-reference/01-ai-sdk-core/69-add-tool-input-examples-middleware.mdx +155 -0
- package/docs/07-reference/01-ai-sdk-core/70-extract-json-middleware.mdx +147 -0
- package/docs/07-reference/01-ai-sdk-core/70-step-count-is.mdx +84 -0
- package/docs/07-reference/01-ai-sdk-core/71-has-tool-call.mdx +120 -0
- package/docs/07-reference/01-ai-sdk-core/75-simulate-readable-stream.mdx +94 -0
- package/docs/07-reference/01-ai-sdk-core/80-smooth-stream.mdx +145 -0
- package/docs/07-reference/01-ai-sdk-core/90-generate-id.mdx +30 -0
- package/docs/07-reference/01-ai-sdk-core/91-create-id-generator.mdx +89 -0
- package/docs/07-reference/01-ai-sdk-core/92-default-generated-file.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/index.mdx +160 -0
- package/docs/07-reference/02-ai-sdk-ui/01-use-chat.mdx +493 -0
- package/docs/07-reference/02-ai-sdk-ui/02-use-completion.mdx +185 -0
- package/docs/07-reference/02-ai-sdk-ui/03-use-object.mdx +196 -0
- package/docs/07-reference/02-ai-sdk-ui/31-convert-to-model-messages.mdx +231 -0
- package/docs/07-reference/02-ai-sdk-ui/32-prune-messages.mdx +108 -0
- package/docs/07-reference/02-ai-sdk-ui/40-create-ui-message-stream.mdx +162 -0
- package/docs/07-reference/02-ai-sdk-ui/41-create-ui-message-stream-response.mdx +119 -0
- package/docs/07-reference/02-ai-sdk-ui/42-pipe-ui-message-stream-to-response.mdx +77 -0
- package/docs/07-reference/02-ai-sdk-ui/43-read-ui-message-stream.mdx +57 -0
- package/docs/07-reference/02-ai-sdk-ui/46-infer-ui-tools.mdx +99 -0
- package/docs/07-reference/02-ai-sdk-ui/47-infer-ui-tool.mdx +75 -0
- package/docs/07-reference/02-ai-sdk-ui/50-direct-chat-transport.mdx +333 -0
- package/docs/07-reference/02-ai-sdk-ui/index.mdx +89 -0
- package/docs/07-reference/03-ai-sdk-rsc/01-stream-ui.mdx +767 -0
- package/docs/07-reference/03-ai-sdk-rsc/02-create-ai.mdx +90 -0
- package/docs/07-reference/03-ai-sdk-rsc/03-create-streamable-ui.mdx +91 -0
- package/docs/07-reference/03-ai-sdk-rsc/04-create-streamable-value.mdx +78 -0
- package/docs/07-reference/03-ai-sdk-rsc/05-read-streamable-value.mdx +79 -0
- package/docs/07-reference/03-ai-sdk-rsc/06-get-ai-state.mdx +50 -0
- package/docs/07-reference/03-ai-sdk-rsc/07-get-mutable-ai-state.mdx +70 -0
- package/docs/07-reference/03-ai-sdk-rsc/08-use-ai-state.mdx +26 -0
- package/docs/07-reference/03-ai-sdk-rsc/09-use-actions.mdx +42 -0
- package/docs/07-reference/03-ai-sdk-rsc/10-use-ui-state.mdx +35 -0
- package/docs/07-reference/03-ai-sdk-rsc/11-use-streamable-value.mdx +46 -0
- package/docs/07-reference/03-ai-sdk-rsc/20-render.mdx +266 -0
- package/docs/07-reference/03-ai-sdk-rsc/index.mdx +67 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-api-call-error.mdx +31 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-download-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-empty-response-body-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-argument-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-data-content-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-message-role-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-prompt-error.mdx +47 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-response-data-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-approval-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-input-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-json-parse-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-load-api-key-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-load-setting-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-message-conversion-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-content-generated-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-image-generated-error.mdx +36 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-object-generated-error.mdx +43 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-output-generated-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-speech-generated-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-model-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-provider-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-tool-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-transcript-generated-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-video-generated-error.mdx +39 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-retry-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-too-many-embedding-values-for-call-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-not-found-for-approval-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-repair-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-type-validation-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-ui-message-stream-error.mdx +67 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-unsupported-functionality-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/index.mdx +39 -0
- package/docs/07-reference/index.mdx +28 -0
- package/docs/08-migration-guides/00-versioning.mdx +46 -0
- package/docs/08-migration-guides/23-migration-guide-7-0.mdx +95 -0
- package/docs/08-migration-guides/24-migration-guide-6-0.mdx +823 -0
- package/docs/08-migration-guides/25-migration-guide-5-0-data.mdx +882 -0
- package/docs/08-migration-guides/26-migration-guide-5-0.mdx +3427 -0
- package/docs/08-migration-guides/27-migration-guide-4-2.mdx +99 -0
- package/docs/08-migration-guides/28-migration-guide-4-1.mdx +14 -0
- package/docs/08-migration-guides/29-migration-guide-4-0.mdx +1157 -0
- package/docs/08-migration-guides/36-migration-guide-3-4.mdx +14 -0
- package/docs/08-migration-guides/37-migration-guide-3-3.mdx +64 -0
- package/docs/08-migration-guides/38-migration-guide-3-2.mdx +46 -0
- package/docs/08-migration-guides/39-migration-guide-3-1.mdx +168 -0
- package/docs/08-migration-guides/index.mdx +22 -0
- package/docs/09-troubleshooting/01-azure-stream-slow.mdx +33 -0
- package/docs/09-troubleshooting/03-server-actions-in-client-components.mdx +40 -0
- package/docs/09-troubleshooting/04-strange-stream-output.mdx +36 -0
- package/docs/09-troubleshooting/05-streamable-ui-errors.mdx +16 -0
- package/docs/09-troubleshooting/05-tool-invocation-missing-result.mdx +106 -0
- package/docs/09-troubleshooting/06-streaming-not-working-when-deployed.mdx +31 -0
- package/docs/09-troubleshooting/06-streaming-not-working-when-proxied.mdx +31 -0
- package/docs/09-troubleshooting/06-timeout-on-vercel.mdx +60 -0
- package/docs/09-troubleshooting/07-unclosed-streams.mdx +34 -0
- package/docs/09-troubleshooting/08-use-chat-failed-to-parse-stream.mdx +26 -0
- package/docs/09-troubleshooting/09-client-stream-error.mdx +25 -0
- package/docs/09-troubleshooting/10-use-chat-tools-no-response.mdx +32 -0
- package/docs/09-troubleshooting/11-use-chat-custom-request-options.mdx +149 -0
- package/docs/09-troubleshooting/12-typescript-performance-zod.mdx +46 -0
- package/docs/09-troubleshooting/12-use-chat-an-error-occurred.mdx +59 -0
- package/docs/09-troubleshooting/13-repeated-assistant-messages.mdx +73 -0
- package/docs/09-troubleshooting/14-stream-abort-handling.mdx +73 -0
- package/docs/09-troubleshooting/14-tool-calling-with-structured-outputs.mdx +48 -0
- package/docs/09-troubleshooting/15-abort-breaks-resumable-streams.mdx +55 -0
- package/docs/09-troubleshooting/15-stream-text-not-working.mdx +33 -0
- package/docs/09-troubleshooting/16-streaming-status-delay.mdx +63 -0
- package/docs/09-troubleshooting/17-use-chat-stale-body-data.mdx +141 -0
- package/docs/09-troubleshooting/18-ontoolcall-type-narrowing.mdx +66 -0
- package/docs/09-troubleshooting/19-unsupported-model-version.mdx +50 -0
- package/docs/09-troubleshooting/20-no-object-generated-content-filter.mdx +76 -0
- package/docs/09-troubleshooting/21-missing-tool-results-error.mdx +82 -0
- package/docs/09-troubleshooting/30-model-is-not-assignable-to-type.mdx +21 -0
- package/docs/09-troubleshooting/40-typescript-cannot-find-namespace-jsx.mdx +24 -0
- package/docs/09-troubleshooting/50-react-maximum-update-depth-exceeded.mdx +39 -0
- package/docs/09-troubleshooting/60-jest-cannot-find-module-ai-rsc.mdx +22 -0
- package/docs/09-troubleshooting/70-high-memory-usage-with-images.mdx +108 -0
- package/docs/09-troubleshooting/index.mdx +11 -0
- package/internal.d.ts +1 -0
- package/package.json +120 -0
- package/src/agent/agent.ts +156 -0
- package/src/agent/create-agent-ui-stream-response.ts +61 -0
- package/src/agent/create-agent-ui-stream.ts +84 -0
- package/src/agent/index.ts +37 -0
- package/src/agent/infer-agent-tools.ts +7 -0
- package/src/agent/infer-agent-ui-message.ts +11 -0
- package/src/agent/pipe-agent-ui-stream-to-response.ts +64 -0
- package/src/agent/tool-loop-agent-settings.ts +244 -0
- package/src/agent/tool-loop-agent.ts +205 -0
- package/src/embed/embed-events.ts +109 -0
- package/src/embed/embed-many-result.ts +53 -0
- package/src/embed/embed-many.ts +484 -0
- package/src/embed/embed-result.ts +50 -0
- package/src/embed/embed.ts +294 -0
- package/src/embed/index.ts +5 -0
- package/src/error/index.ts +37 -0
- package/src/error/invalid-argument-error.ts +34 -0
- package/src/error/invalid-stream-part-error.ts +28 -0
- package/src/error/invalid-tool-approval-error.ts +26 -0
- package/src/error/invalid-tool-input-error.ts +33 -0
- package/src/error/missing-tool-result-error.ts +28 -0
- package/src/error/no-image-generated-error.ts +39 -0
- package/src/error/no-object-generated-error.ts +70 -0
- package/src/error/no-output-generated-error.ts +26 -0
- package/src/error/no-speech-generated-error.ts +28 -0
- package/src/error/no-such-tool-error.ts +35 -0
- package/src/error/no-transcript-generated-error.ts +30 -0
- package/src/error/no-video-generated-error.ts +57 -0
- package/src/error/tool-call-not-found-for-approval-error.ts +32 -0
- package/src/error/tool-call-repair-error.ts +30 -0
- package/src/error/ui-message-stream-error.ts +48 -0
- package/src/error/unsupported-model-version-error.ts +23 -0
- package/src/error/verify-no-object-generated-error.ts +27 -0
- package/src/generate-image/generate-image-result.ts +42 -0
- package/src/generate-image/generate-image.ts +361 -0
- package/src/generate-image/index.ts +18 -0
- package/src/generate-object/generate-object-result.ts +67 -0
- package/src/generate-object/generate-object.ts +514 -0
- package/src/generate-object/index.ts +9 -0
- package/src/generate-object/inject-json-instruction.ts +30 -0
- package/src/generate-object/output-strategy.ts +415 -0
- package/src/generate-object/parse-and-validate-object-result.ts +111 -0
- package/src/generate-object/repair-text.ts +12 -0
- package/src/generate-object/stream-object-result.ts +120 -0
- package/src/generate-object/stream-object.ts +984 -0
- package/src/generate-object/validate-object-generation-input.ts +144 -0
- package/src/generate-speech/generate-speech-result.ts +30 -0
- package/src/generate-speech/generate-speech.ts +191 -0
- package/src/generate-speech/generated-audio-file.ts +65 -0
- package/src/generate-speech/index.ts +3 -0
- package/src/generate-text/collect-tool-approvals.ts +116 -0
- package/src/generate-text/content-part.ts +31 -0
- package/src/generate-text/core-events.ts +390 -0
- package/src/generate-text/create-execute-tools-transformation.ts +168 -0
- package/src/generate-text/create-stream-text-part-transform.ts +229 -0
- package/src/generate-text/execute-tool-call.ts +190 -0
- package/src/generate-text/extract-reasoning-content.ts +17 -0
- package/src/generate-text/extract-text-content.ts +15 -0
- package/src/generate-text/generate-text-result.ts +168 -0
- package/src/generate-text/generate-text.ts +1411 -0
- package/src/generate-text/generated-file.ts +70 -0
- package/src/generate-text/index.ts +74 -0
- package/src/generate-text/is-approval-needed.ts +29 -0
- package/src/generate-text/output-utils.ts +23 -0
- package/src/generate-text/output.ts +590 -0
- package/src/generate-text/parse-tool-call.ts +188 -0
- package/src/generate-text/prepare-step.ts +103 -0
- package/src/generate-text/prune-messages.ts +167 -0
- package/src/generate-text/reasoning-output.ts +99 -0
- package/src/generate-text/reasoning.ts +10 -0
- package/src/generate-text/response-message.ts +10 -0
- package/src/generate-text/smooth-stream.ts +162 -0
- package/src/generate-text/step-result.ts +310 -0
- package/src/generate-text/stop-condition.ts +29 -0
- package/src/generate-text/stream-text-result.ts +536 -0
- package/src/generate-text/stream-text.ts +2693 -0
- package/src/generate-text/to-response-messages.ts +178 -0
- package/src/generate-text/tool-approval-request-output.ts +21 -0
- package/src/generate-text/tool-call-repair-function.ts +27 -0
- package/src/generate-text/tool-call.ts +47 -0
- package/src/generate-text/tool-error.ts +34 -0
- package/src/generate-text/tool-output-denied.ts +21 -0
- package/src/generate-text/tool-output.ts +7 -0
- package/src/generate-text/tool-result.ts +36 -0
- package/src/generate-text/tool-set.ts +14 -0
- package/src/generate-video/generate-video-result.ts +36 -0
- package/src/generate-video/generate-video.ts +402 -0
- package/src/generate-video/index.ts +3 -0
- package/src/global.ts +36 -0
- package/src/index.ts +49 -0
- package/src/logger/index.ts +6 -0
- package/src/logger/log-warnings.ts +140 -0
- package/src/middleware/add-tool-input-examples-middleware.ts +90 -0
- package/src/middleware/default-embedding-settings-middleware.ts +22 -0
- package/src/middleware/default-settings-middleware.ts +33 -0
- package/src/middleware/extract-json-middleware.ts +197 -0
- package/src/middleware/extract-reasoning-middleware.ts +249 -0
- package/src/middleware/index.ts +10 -0
- package/src/middleware/simulate-streaming-middleware.ts +79 -0
- package/src/middleware/wrap-embedding-model.ts +89 -0
- package/src/middleware/wrap-image-model.ts +92 -0
- package/src/middleware/wrap-language-model.ts +108 -0
- package/src/middleware/wrap-provider.ts +51 -0
- package/src/model/as-embedding-model-v3.ts +24 -0
- package/src/model/as-embedding-model-v4.ts +25 -0
- package/src/model/as-image-model-v3.ts +24 -0
- package/src/model/as-image-model-v4.ts +21 -0
- package/src/model/as-language-model-v3.ts +103 -0
- package/src/model/as-language-model-v4.ts +25 -0
- package/src/model/as-provider-v3.ts +36 -0
- package/src/model/as-provider-v4.ts +47 -0
- package/src/model/as-reranking-model-v4.ts +16 -0
- package/src/model/as-speech-model-v3.ts +24 -0
- package/src/model/as-speech-model-v4.ts +21 -0
- package/src/model/as-transcription-model-v3.ts +24 -0
- package/src/model/as-transcription-model-v4.ts +25 -0
- package/src/model/as-video-model-v4.ts +19 -0
- package/src/model/resolve-model.ts +172 -0
- package/src/prompt/call-settings.ts +177 -0
- package/src/prompt/content-part.ts +236 -0
- package/src/prompt/convert-to-language-model-prompt.ts +548 -0
- package/src/prompt/create-tool-model-output.ts +34 -0
- package/src/prompt/data-content.ts +134 -0
- package/src/prompt/index.ts +27 -0
- package/src/prompt/invalid-data-content-error.ts +29 -0
- package/src/prompt/invalid-message-role-error.ts +27 -0
- package/src/prompt/message-conversion-error.ts +28 -0
- package/src/prompt/message.ts +72 -0
- package/src/prompt/prepare-call-settings.ts +110 -0
- package/src/prompt/prepare-tools-and-tool-choice.ts +86 -0
- package/src/prompt/prompt.ts +43 -0
- package/src/prompt/split-data-url.ts +17 -0
- package/src/prompt/standardize-prompt.ts +99 -0
- package/src/prompt/wrap-gateway-error.ts +29 -0
- package/src/registry/custom-provider.ts +210 -0
- package/src/registry/index.ts +7 -0
- package/src/registry/no-such-provider-error.ts +41 -0
- package/src/registry/provider-registry.ts +331 -0
- package/src/rerank/index.ts +2 -0
- package/src/rerank/rerank-result.ts +70 -0
- package/src/rerank/rerank.ts +239 -0
- package/src/telemetry/assemble-operation-name.ts +21 -0
- package/src/telemetry/get-base-telemetry-attributes.ts +55 -0
- package/src/telemetry/get-global-telemetry-integration.ts +110 -0
- package/src/telemetry/get-tracer.ts +20 -0
- package/src/telemetry/index.ts +4 -0
- package/src/telemetry/noop-tracer.ts +69 -0
- package/src/telemetry/open-telemetry-integration.ts +537 -0
- package/src/telemetry/record-span.ts +75 -0
- package/src/telemetry/select-telemetry-attributes.ts +78 -0
- package/src/telemetry/stringify-for-telemetry.ts +33 -0
- package/src/telemetry/telemetry-integration-registry.ts +22 -0
- package/src/telemetry/telemetry-integration.ts +100 -0
- package/src/telemetry/telemetry-settings.ts +55 -0
- package/src/test/mock-embedding-model-v2.ts +35 -0
- package/src/test/mock-embedding-model-v3.ts +48 -0
- package/src/test/mock-embedding-model-v4.ts +48 -0
- package/src/test/mock-image-model-v2.ts +28 -0
- package/src/test/mock-image-model-v3.ts +28 -0
- package/src/test/mock-image-model-v4.ts +28 -0
- package/src/test/mock-language-model-v2.ts +72 -0
- package/src/test/mock-language-model-v3.ts +77 -0
- package/src/test/mock-language-model-v4.ts +77 -0
- package/src/test/mock-provider-v2.ts +68 -0
- package/src/test/mock-provider-v3.ts +80 -0
- package/src/test/mock-provider-v4.ts +80 -0
- package/src/test/mock-reranking-model-v3.ts +25 -0
- package/src/test/mock-reranking-model-v4.ts +25 -0
- package/src/test/mock-server-response.ts +69 -0
- package/src/test/mock-speech-model-v2.ts +24 -0
- package/src/test/mock-speech-model-v3.ts +24 -0
- package/src/test/mock-speech-model-v4.ts +24 -0
- package/src/test/mock-tracer.ts +156 -0
- package/src/test/mock-transcription-model-v2.ts +24 -0
- package/src/test/mock-transcription-model-v3.ts +24 -0
- package/src/test/mock-transcription-model-v4.ts +24 -0
- package/src/test/mock-values.ts +4 -0
- package/src/test/mock-video-model-v3.ts +28 -0
- package/src/test/mock-video-model-v4.ts +28 -0
- package/src/test/not-implemented.ts +3 -0
- package/src/text-stream/create-text-stream-response.ts +30 -0
- package/src/text-stream/index.ts +2 -0
- package/src/text-stream/pipe-text-stream-to-response.ts +38 -0
- package/src/transcribe/index.ts +2 -0
- package/src/transcribe/transcribe-result.ts +60 -0
- package/src/transcribe/transcribe.ts +187 -0
- package/src/types/embedding-model-middleware.ts +15 -0
- package/src/types/embedding-model.ts +20 -0
- package/src/types/image-model-middleware.ts +15 -0
- package/src/types/image-model-response-metadata.ts +16 -0
- package/src/types/image-model.ts +19 -0
- package/src/types/index.ts +29 -0
- package/src/types/json-value.ts +15 -0
- package/src/types/language-model-middleware.ts +15 -0
- package/src/types/language-model-request-metadata.ts +6 -0
- package/src/types/language-model-response-metadata.ts +21 -0
- package/src/types/language-model.ts +106 -0
- package/src/types/provider-metadata.ts +16 -0
- package/src/types/provider.ts +55 -0
- package/src/types/reranking-model.ts +6 -0
- package/src/types/speech-model-response-metadata.ts +21 -0
- package/src/types/speech-model.ts +10 -0
- package/src/types/transcription-model-response-metadata.ts +16 -0
- package/src/types/transcription-model.ts +14 -0
- package/src/types/usage.ts +200 -0
- package/src/types/video-model-response-metadata.ts +28 -0
- package/src/types/video-model.ts +15 -0
- package/src/types/warning.ts +7 -0
- package/src/ui/call-completion-api.ts +157 -0
- package/src/ui/chat-transport.ts +83 -0
- package/src/ui/chat.ts +786 -0
- package/src/ui/convert-file-list-to-file-ui-parts.ts +36 -0
- package/src/ui/convert-to-model-messages.ts +403 -0
- package/src/ui/default-chat-transport.ts +36 -0
- package/src/ui/direct-chat-transport.ts +117 -0
- package/src/ui/http-chat-transport.ts +273 -0
- package/src/ui/index.ts +76 -0
- package/src/ui/last-assistant-message-is-complete-with-approval-responses.ts +44 -0
- package/src/ui/last-assistant-message-is-complete-with-tool-calls.ts +39 -0
- package/src/ui/process-text-stream.ts +16 -0
- package/src/ui/process-ui-message-stream.ts +858 -0
- package/src/ui/text-stream-chat-transport.ts +23 -0
- package/src/ui/transform-text-to-ui-message-stream.ts +27 -0
- package/src/ui/ui-messages.ts +602 -0
- package/src/ui/use-completion.ts +84 -0
- package/src/ui/validate-ui-messages.ts +521 -0
- package/src/ui-message-stream/create-ui-message-stream-response.ts +44 -0
- package/src/ui-message-stream/create-ui-message-stream.ts +145 -0
- package/src/ui-message-stream/get-response-ui-message-id.ts +35 -0
- package/src/ui-message-stream/handle-ui-message-stream-finish.ts +170 -0
- package/src/ui-message-stream/index.ts +14 -0
- package/src/ui-message-stream/json-to-sse-transform-stream.ts +17 -0
- package/src/ui-message-stream/pipe-ui-message-stream-to-response.ts +51 -0
- package/src/ui-message-stream/read-ui-message-stream.ts +87 -0
- package/src/ui-message-stream/ui-message-chunks.ts +372 -0
- package/src/ui-message-stream/ui-message-stream-headers.ts +7 -0
- package/src/ui-message-stream/ui-message-stream-on-finish-callback.ts +32 -0
- package/src/ui-message-stream/ui-message-stream-on-step-finish-callback.ts +25 -0
- package/src/ui-message-stream/ui-message-stream-response-init.ts +14 -0
- package/src/ui-message-stream/ui-message-stream-writer.ts +24 -0
- package/src/util/as-array.ts +3 -0
- package/src/util/async-iterable-stream.ts +94 -0
- package/src/util/consume-stream.ts +31 -0
- package/src/util/cosine-similarity.ts +46 -0
- package/src/util/create-resolvable-promise.ts +30 -0
- package/src/util/create-stitchable-stream.ts +112 -0
- package/src/util/data-url.ts +17 -0
- package/src/util/deep-partial.ts +84 -0
- package/src/util/detect-media-type.ts +226 -0
- package/src/util/download/create-download.ts +13 -0
- package/src/util/download/download-function.ts +45 -0
- package/src/util/download/download.ts +74 -0
- package/src/util/error-handler.ts +1 -0
- package/src/util/fix-json.ts +401 -0
- package/src/util/get-potential-start-index.ts +39 -0
- package/src/util/index.ts +12 -0
- package/src/util/is-deep-equal-data.ts +48 -0
- package/src/util/is-non-empty-object.ts +5 -0
- package/src/util/job.ts +1 -0
- package/src/util/log-v2-compatibility-warning.ts +21 -0
- package/src/util/merge-abort-signals.ts +43 -0
- package/src/util/merge-objects.ts +79 -0
- package/src/util/notify.ts +22 -0
- package/src/util/now.ts +4 -0
- package/src/util/parse-partial-json.ts +30 -0
- package/src/util/prepare-headers.ts +14 -0
- package/src/util/prepare-retries.ts +47 -0
- package/src/util/retry-error.ts +41 -0
- package/src/util/retry-with-exponential-backoff.ts +154 -0
- package/src/util/serial-job-executor.ts +36 -0
- package/src/util/simulate-readable-stream.ts +39 -0
- package/src/util/split-array.ts +20 -0
- package/src/util/value-of.ts +65 -0
- package/src/util/write-to-server-response.ts +49 -0
- package/src/version.ts +5 -0
- package/test.d.ts +1 -0
@@ -0,0 +1,222 @@
---
title: Memory
description: Add persistent memory to your agent using provider-defined tools, memory providers, or a custom tool.
---

# Memory

Memory lets your agent save information and recall it later. Without memory, every conversation starts fresh. With memory, your agent builds context over time, recalls previous interactions, and adapts to the user.

## Three Approaches

You can add memory to your agent with the AI SDK in three ways, each with different tradeoffs:

| Approach                                          | Effort | Flexibility | Provider Lock-in           |
| ------------------------------------------------- | ------ | ----------- | -------------------------- |
| [Provider-Defined Tools](#provider-defined-tools) | Low    | Medium      | Yes                        |
| [Memory Providers](#memory-providers)             | Low    | Low         | Depends on memory provider |
| [Custom Tool](#custom-tool)                       | High   | High        | No                         |

## Provider-Defined Tools

[Provider-defined tools](/docs/foundations/tools#types-of-tools) are tools where the provider specifies the tool's `inputSchema` and `description`, but you provide the `execute` function. The model has been trained to use these tools, which can result in better performance compared to custom tools.

### Anthropic Memory Tool

The [Anthropic Memory Tool](https://platform.claude.com/docs/en/agents-and-tools/tool-use/memory-tool) gives Claude a structured interface for managing a `/memories` directory. Claude reads its memory before starting tasks, creates and updates files as it works, and references them in future conversations.

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { ToolLoopAgent } from 'ai';

const memory = anthropic.tools.memory_20250818({
  execute: async action => {
    // `action` contains `command`, `path`, and other fields
    // depending on the command (view, create, str_replace,
    // insert, delete, rename).
    // Implement your storage backend here.
    // Return the result as a string.
  },
});

const agent = new ToolLoopAgent({
  model: 'anthropic/claude-haiku-4.5',
  tools: { memory },
});

const result = await agent.generate({
  prompt: 'Remember that my favorite editor is Neovim',
});
```

The tool receives structured commands (`view`, `create`, `str_replace`, `insert`, `delete`, `rename`), each with a `path` scoped to `/memories`. Your `execute` function maps these to your storage backend (the filesystem, a database, or any other persistence layer).
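
As a concrete illustration, here is a minimal in-memory backend for a subset of these commands. This is a sketch, not Anthropic's exact schema: the field names (`file_text`, `old_str`, `new_str`) are assumptions for illustration, and a production backend would persist to the filesystem or a database rather than a `Map`:

```ts
// Hypothetical action shapes for a subset of the memory commands.
// Verify field names against the Anthropic memory tool documentation.
type MemoryAction =
  | { command: 'view'; path: string }
  | { command: 'create'; path: string; file_text: string }
  | { command: 'str_replace'; path: string; old_str: string; new_str: string }
  | { command: 'delete'; path: string };

// In-memory stand-in for a real persistence layer.
const files = new Map<string, string>();

async function executeMemoryAction(action: MemoryAction): Promise<string> {
  switch (action.command) {
    case 'view':
      return files.get(action.path) ?? `Error: ${action.path} not found`;
    case 'create':
      files.set(action.path, action.file_text);
      return `Created ${action.path}`;
    case 'str_replace': {
      const current = files.get(action.path) ?? '';
      files.set(action.path, current.replace(action.old_str, action.new_str));
      return `Updated ${action.path}`;
    }
    case 'delete':
      files.delete(action.path);
      return `Deleted ${action.path}`;
  }
}
```

A function like this could be passed as the `execute` option shown above; the remaining commands (`insert`, `rename`) follow the same pattern.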

**When to use this**: you want memory with minimal implementation effort and are already using Anthropic models. The tradeoff is provider lock-in, since this tool only works with Claude.

## Memory Providers

Another approach is to use a provider that has memory built in. These providers wrap an external memory service and expose it through the AI SDK's standard interface. Memory storage, retrieval, and injection happen transparently, and you do not define any tools yourself.

### Letta

[Letta](https://letta.com) provides agents with persistent long-term memory. You create an agent on Letta's platform (cloud or self-hosted), configure its memory there, and use the AI SDK provider to interact with it. Letta's agent runtime handles memory management (core memory, archival memory, recall).

```bash
pnpm add @letta-ai/vercel-ai-sdk-provider
```

```ts
import { lettaCloud } from '@letta-ai/vercel-ai-sdk-provider';
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: lettaCloud(),
  providerOptions: {
    letta: {
      agent: { id: 'your-agent-id' },
    },
  },
});

const result = await agent.generate({
  prompt: 'Remember that my favorite editor is Neovim',
});
```

You can also use Letta's built-in memory tools alongside custom tools:

```ts
import { lettaCloud } from '@letta-ai/vercel-ai-sdk-provider';
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: lettaCloud(),
  tools: {
    core_memory_append: lettaCloud.tool('core_memory_append'),
    memory_insert: lettaCloud.tool('memory_insert'),
    memory_replace: lettaCloud.tool('memory_replace'),
  },
  providerOptions: {
    letta: {
      agent: { id: 'your-agent-id' },
    },
  },
});

const stream = agent.stream({
  prompt: 'What do you remember about me?',
});
```

See the [Letta provider documentation](/providers/community-providers/letta) for full setup and configuration.

### Mem0

[Mem0](https://mem0.ai) adds a memory layer on top of any supported LLM provider. It automatically extracts memories from conversations, stores them, and retrieves relevant ones for future prompts.

```bash
pnpm add @mem0/vercel-ai-provider
```

```ts
import { createMem0 } from '@mem0/vercel-ai-provider';
import { ToolLoopAgent } from 'ai';

const mem0 = createMem0({
  provider: 'openai',
  mem0ApiKey: process.env.MEM0_API_KEY,
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = new ToolLoopAgent({
  model: mem0('gpt-4.1', { user_id: 'user-123' }),
});

const { text } = await agent.generate({
  prompt: 'Remember that my favorite editor is Neovim',
});
```

Mem0 works across multiple LLM providers (OpenAI, Anthropic, Google, Groq, Cohere). You can also manage memories explicitly:

```ts
import { addMemories, retrieveMemories } from '@mem0/vercel-ai-provider';

await addMemories(messages, { user_id: 'user-123' });
const context = await retrieveMemories(prompt, { user_id: 'user-123' });
```

See the [Mem0 provider documentation](/providers/community-providers/mem0) for full setup and configuration.

### Supermemory

[Supermemory](https://supermemory.ai) is a long-term memory platform that adds persistent, self-growing memory to your AI applications. It provides tools that handle saving and retrieving memories automatically through semantic search.

```bash
pnpm add @supermemory/tools
```

```ts
__PROVIDER_IMPORT__;
import { supermemoryTools } from '@supermemory/tools/ai-sdk';
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: __MODEL__,
  tools: supermemoryTools(process.env.SUPERMEMORY_API_KEY!),
});

const result = await agent.generate({
  prompt: 'Remember that my favorite editor is Neovim',
});
```

Supermemory works with any AI SDK provider. The tools give the model `addMemory` and `searchMemories` operations that handle storage and retrieval.

See the [Supermemory provider documentation](/providers/community-providers/supermemory) for full setup and configuration.

### Hindsight

[Hindsight](/providers/community-providers/hindsight) provides agents with persistent memory through five tools: `retain`, `recall`, `reflect`, `getMentalModel`, and `getDocument`. It can be self-hosted with Docker or used as a cloud service.

```bash
pnpm add @vectorize-io/hindsight-ai-sdk @vectorize-io/hindsight-client
```

```ts
__PROVIDER_IMPORT__;
import { HindsightClient } from '@vectorize-io/hindsight-client';
import { createHindsightTools } from '@vectorize-io/hindsight-ai-sdk';
import { ToolLoopAgent, stepCountIs } from 'ai';

const client = new HindsightClient({ baseUrl: process.env.HINDSIGHT_API_URL });

const agent = new ToolLoopAgent({
  model: __MODEL__,
  tools: createHindsightTools({ client, bankId: 'user-123' }),
  stopWhen: stepCountIs(10),
  instructions: 'You are a helpful assistant with long-term memory.',
});

const result = await agent.generate({
  prompt: 'Remember that my favorite editor is Neovim',
});
```

The `bankId` identifies the memory store and is typically a user ID. In multi-user apps, call `createHindsightTools` inside your request handler so each request gets the right bank. Hindsight works with any AI SDK provider.

See the [Hindsight provider documentation](/providers/community-providers/hindsight) for full setup and configuration.

**When to use memory providers**: these providers are a good fit when you want memory without building any storage infrastructure. The tradeoff is that the provider controls memory behavior, so you have less visibility into what gets stored and how it is retrieved. You also take on a dependency on an external service.

## Custom Tool

Building your own memory tool from scratch is the most flexible approach. You control the storage format, the interface, and the retrieval logic. This requires the most upfront work but gives you full ownership of how memory works, with no provider lock-in and no external dependencies.

There are two common patterns:

- **Structured actions**: you define explicit operations (`view`, `create`, `update`, `search`) and handle structured input yourself. Safe by design since you control every operation.
- **Bash-backed**: you give the model a sandboxed bash environment to compose shell commands (`cat`, `grep`, `sed`, `echo`) for flexible memory access. More powerful but requires command validation for safety.

For a full walkthrough of implementing a custom memory tool with a bash-backed interface, AST-based command validation, and filesystem persistence, see the **[Build a Custom Memory Tool](/cookbook/guides/custom-memory-tool)** recipe.
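
As a minimal illustration of the structured-actions pattern, the sketch below keeps memories in an in-memory array and exposes explicit `create`, `search`, and `update` operations. The names and storage are hypothetical; a real tool would wrap these operations in `tool({ inputSchema, execute })` and persist to disk or a database:

```ts
// Illustrative structured-actions store (names are hypothetical).
type MemoryEntry = { id: number; text: string };

const entries: MemoryEntry[] = [];
let nextId = 1;

function createMemory(text: string): MemoryEntry {
  const entry = { id: nextId++, text };
  entries.push(entry);
  return entry;
}

function searchMemories(query: string): MemoryEntry[] {
  // Naive substring match; swap in full-text or embedding search as needed.
  const q = query.toLowerCase();
  return entries.filter(e => e.text.toLowerCase().includes(q));
}

function updateMemory(id: number, text: string): boolean {
  const entry = entries.find(e => e.id === id);
  if (!entry) return false;
  entry.text = text;
  return true;
}
```

Because every operation is an explicit function, the model can only perform actions you have chosen to expose.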

@@ -0,0 +1,362 @@
---
title: Subagents
description: Delegate context-heavy tasks to specialized subagents while keeping the main agent focused.
---

# Subagents

A subagent is an agent that a parent agent can invoke. The parent delegates work via a tool, and the subagent executes autonomously before returning a result.

## How It Works

1. **Define a subagent** with its own model, instructions, and tools
2. **Create a tool that calls it** for the main agent to use
3. **Subagent runs independently with its own context window**
4. **Return a result** (optionally streaming progress to the UI)
5. **Control what the model sees** using `toModelOutput` to summarize

## When to Use Subagents

Subagents add latency and complexity. Use them when the benefits outweigh the costs:

| Use Subagents When                            | Avoid Subagents When           |
| --------------------------------------------- | ------------------------------ |
| Tasks require exploring large amounts of data | Tasks are simple and focused   |
| You need to parallelize independent research  | Sequential processing suffices |
| Context would grow beyond model limits        | Context stays manageable       |
| You want to isolate tool access by capability | All tools can safely coexist   |

## Why Use Subagents?

### Offloading Context-Heavy Tasks

Some tasks require exploring large amounts of information—reading files, searching codebases, or researching topics. Running these in the main agent consumes context quickly, making the agent less coherent over time.

With subagents, you can:

- Spin up a dedicated agent that uses hundreds of thousands of tokens
- Have it return only a focused summary (perhaps 1,000 tokens)
- Keep your main agent's context clean and coherent

The subagent does the heavy lifting while the main agent stays focused on orchestration.

### Parallelizing Independent Work

For tasks like exploring a codebase, you can spawn multiple subagents to research different areas simultaneously. Each returns a summary, and the main agent synthesizes the findings—without paying the context cost of all that exploration.
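
The fan-out step can be sketched with plain promises. The subagents here are stubbed as async functions so the shape of the pattern is visible; in practice each call would be a `subagent.generate(...)` invocation:

```ts
// Stub standing in for a subagent call (hypothetical).
async function researchArea(area: string): Promise<string> {
  // A real implementation would call a ToolLoopAgent here.
  return `Summary of ${area}`;
}

async function parallelResearch(areas: string[]): Promise<string> {
  // Fan out: each subagent explores independently in parallel.
  const summaries = await Promise.all(areas.map(area => researchArea(area)));
  // Fan in: the main agent only ever sees the compact summaries.
  return summaries.join('\n');
}
```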

### Specialized Orchestration

A less common but valid pattern is using a main agent purely for orchestration, delegating to specialized subagents for different types of work. For example:

- An exploration subagent with read-only tools for researching codebases
- A coding subagent with file editing tools
- An integration subagent with tools for a specific platform or API

This creates a clear separation of concerns, though context offloading and parallelization are the more common motivations for subagents.

## Basic Subagent Without Streaming

The simplest subagent pattern requires no special machinery. Your main agent has a tool that calls another agent in its `execute` function:

```ts
import { ToolLoopAgent, tool } from 'ai';
__PROVIDER_IMPORT__;
import { z } from 'zod';

// Define a subagent for research tasks
const researchSubagent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: `You are a research agent.
Summarize your findings in your final response.`,
  tools: {
    read: readFileTool, // defined elsewhere
    search: searchTool, // defined elsewhere
  },
});

// Create a tool that delegates to the subagent
const researchTool = tool({
  description: 'Research a topic or question in depth.',
  inputSchema: z.object({
    task: z.string().describe('The research task to complete'),
  }),
  execute: async ({ task }, { abortSignal }) => {
    const result = await researchSubagent.generate({
      prompt: task,
      abortSignal,
    });
    return result.text;
  },
});

// Main agent uses the research tool
const mainAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: 'You are a helpful assistant that can delegate research tasks.',
  tools: {
    research: researchTool,
  },
});
```

This works well when you don't need to show the subagent's progress in the UI. The tool call blocks until the subagent completes, then returns the final text response.

### Handling Cancellation

When the user cancels a request, the `abortSignal` propagates to the subagent. Always pass it through to ensure cleanup:

```ts
execute: async ({ task }, { abortSignal }) => {
  const result = await researchSubagent.generate({
    prompt: task,
    abortSignal, // Cancels subagent if main request is aborted
  });
  return result.text;
},
```

If you abort the signal, the subagent stops executing and throws an `AbortError`. The main agent's tool execution fails, which stops the main loop.

To avoid errors about incomplete tool calls in subsequent messages, use `convertToModelMessages` with `ignoreIncompleteToolCalls`:

```ts
import { convertToModelMessages } from 'ai';

const modelMessages = await convertToModelMessages(messages, {
  ignoreIncompleteToolCalls: true,
});
```

This filters out tool calls that don't have corresponding results. Learn more in the [convertToModelMessages](/docs/reference/ai-sdk-ui/convert-to-model-messages) reference.
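
Conceptually, the option behaves like the simplified filter below. This is an illustration only: the real logic lives inside `convertToModelMessages` and operates on full message objects, and the reduced part shape used here is hypothetical:

```ts
// Reduced, hypothetical part shape for illustration.
type ToolPart = { type: 'tool-call' | 'tool-result'; toolCallId: string };

function dropIncompleteToolCalls(parts: ToolPart[]): ToolPart[] {
  // A tool call is complete only if a result with the same id exists.
  const completed = new Set(
    parts.filter(p => p.type === 'tool-result').map(p => p.toolCallId),
  );
  return parts.filter(
    p => p.type !== 'tool-call' || completed.has(p.toolCallId),
  );
}
```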

## Streaming Subagent Progress

When you want to show incremental progress as the subagent works, use [**preliminary tool results**](/docs/ai-sdk-core/tools-and-tool-calling#preliminary-tool-results). This pattern uses a generator function that yields partial updates to the UI.

### How Preliminary Tool Results Work

Change your `execute` function from a regular function to an async generator (`async function*`). Each `yield` sends a preliminary result to the frontend:

```ts
execute: async function* ({ /* input */ }) {
  // ... do work ...
  yield partialResult;
  // ... do more work ...
  yield updatedResult;
}
```

### Building the Complete Message

Each `yield` **replaces** the previous output entirely (it does not append). This means you need a way to accumulate the subagent's response into a complete message that grows over time.

The `readUIMessageStream` utility handles this. It reads each chunk from the stream and builds an ever-growing `UIMessage` containing all parts received so far:

```ts
import { readUIMessageStream, tool } from 'ai';
import { z } from 'zod';

const researchTool = tool({
  description: 'Research a topic or question in depth.',
  inputSchema: z.object({
    task: z.string().describe('The research task to complete'),
  }),
  execute: async function* ({ task }, { abortSignal }) {
    // Start the subagent with streaming
    const result = await researchSubagent.stream({
      prompt: task,
      abortSignal,
    });

    // Each iteration yields a complete, accumulated UIMessage
    for await (const message of readUIMessageStream({
      stream: result.toUIMessageStream(),
    })) {
      yield message;
    }
  },
});
```

Each yielded `message` is a complete `UIMessage` containing all the subagent's parts up to that point (text, tool calls, and tool results). The frontend simply replaces its display with each new message.

## Controlling What the Model Sees

Here's where subagents become powerful for context management. The full `UIMessage` with all the subagent's work is stored in the message history and displayed in the UI. But you can control what the main agent's model actually sees using `toModelOutput`.

### How It Works

The `toModelOutput` function maps the tool's output to the tokens sent to the model:

```ts
const researchTool = tool({
  description: 'Research a topic or question in depth.',
  inputSchema: z.object({
    task: z.string().describe('The research task to complete'),
  }),
  execute: async function* ({ task }, { abortSignal }) {
    const result = await researchSubagent.stream({
      prompt: task,
      abortSignal,
    });

    for await (const message of readUIMessageStream({
      stream: result.toUIMessageStream(),
    })) {
      yield message;
    }
  },
  toModelOutput: ({ output: message }) => {
    // Extract just the final text as a summary
    const lastTextPart = message?.parts.findLast(p => p.type === 'text');
    return {
      type: 'text',
      value: lastTextPart?.text ?? 'Task completed.',
    };
  },
});
```

With this setup:

- **Users see**: The full subagent execution—every tool call, every intermediate step
- **The model sees**: Just the final summary text

The subagent might use 100,000 tokens exploring and reasoning, but the main agent only consumes the summary. This keeps the main agent coherent and focused.

### Write Subagent Instructions for Summarization

For `toModelOutput` to extract a useful summary, your subagent must produce one. Add explicit instructions like this:

```ts
const researchSubagent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: `You are a research agent. Complete the task autonomously.

IMPORTANT: When you have finished, write a clear summary of your findings as your final response.
This summary will be returned to the main agent, so include all relevant information.`,
  tools: {
    read: readFileTool,
    search: searchTool,
  },
});
```

Without this instruction, the subagent might not produce a comprehensive summary. It could simply say "Done", leaving `toModelOutput` with nothing useful to extract.

## Rendering Subagents in the UI (with useChat)

To display streaming progress, check the tool part's `state` and `preliminary` flag.

### Tool Part States

| State              | Description                                |
| ------------------ | ------------------------------------------ |
| `input-streaming`  | Tool input being generated                 |
| `input-available`  | Tool ready to execute                      |
| `output-available` | Tool produced output (check `preliminary`) |
| `output-error`     | Tool execution failed                      |

### Detecting Streaming vs Complete

```tsx
const hasOutput = part.state === 'output-available';
const isStreaming = hasOutput && part.preliminary === true;
const isComplete = hasOutput && !part.preliminary;
```

### Type Safety for Subagent Output

Export types alongside your agents for use in UI components:

```ts filename="lib/agents.ts"
import { ToolLoopAgent, InferAgentUIMessage } from 'ai';

export const mainAgent = new ToolLoopAgent({
  // ... configuration with researchTool
});

// Export the main agent message type for the chat UI
export type MainAgentMessage = InferAgentUIMessage<typeof mainAgent>;
```

### Render Messages and Subagent Output

This example uses the types defined above to render both the main agent's messages and the subagent's streamed output:

```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import type { MainAgentMessage } from '@/lib/agents';

export function Chat() {
  const { messages } = useChat<MainAgentMessage>();

  return (
    <div>
      {messages.map(message =>
        message.parts.map((part, i) => {
          switch (part.type) {
            case 'text':
              return <p key={i}>{part.text}</p>;
            case 'tool-research':
              return (
                <div key={i}>
                  {part.state !== 'input-streaming' && (
                    <div>Research: {part.input.task}</div>
                  )}
                  {part.state === 'output-available' && (
                    <div>
                      {part.output.parts.map((nestedPart, j) => {
                        switch (nestedPart.type) {
                          case 'text':
                            return <p key={j}>{nestedPart.text}</p>;
                          default:
                            return null;
                        }
                      })}
                    </div>
                  )}
                </div>
              );
            default:
              return null;
          }
        }),
      )}
    </div>
  );
}
```

## Caveats

### No Tool Approvals in Subagents

Subagent tools cannot use `needsApproval`. All tools must execute automatically without user confirmation.

### Subagent Context is Isolated

Each subagent invocation starts with a fresh context window. This is one of the key benefits of subagents: they don't inherit the accumulated context from the main agent, which is exactly what allows them to do heavy exploration without bloating the main conversation.

If you need to give a subagent access to the conversation history, the `messages` are available in the tool's execute function alongside `abortSignal`:

```ts
execute: async ({ task }, { abortSignal, messages }) => {
  const result = await researchSubagent.generate({
    messages: [
      ...messages, // The main agent's conversation history
      { role: 'user', content: task }, // The specific task for this invocation
    ],
    abortSignal,
  });
  return result.text;
},
```

Use this sparingly since passing the full history defeats some of the context isolation benefits.

### Streaming Adds Complexity

The basic pattern (no streaming) is simpler to implement and debug. Only add streaming when you need to show real-time progress in the UI.

@@ -0,0 +1,46 @@
---
title: Agents
description: An overview of building agents with the AI SDK.
---

# Agents

The following section shows you how to build agents with the AI SDK - systems where large language models (LLMs) use tools in a loop to accomplish tasks.

<IndexCards
  cards={[
    {
      title: 'Overview',
      description: 'Learn what agents are and why to use the ToolLoopAgent.',
      href: '/docs/agents/overview',
    },
    {
      title: 'Building Agents',
      description: 'Complete guide to creating agents with the ToolLoopAgent.',
      href: '/docs/agents/building-agents',
    },
    {
      title: 'Workflow Patterns',
      description:
        'Structured patterns using core functions for complex workflows.',
      href: '/docs/agents/workflows',
    },
    {
      title: 'Loop Control',
      description: 'Advanced execution control with stopWhen and prepareStep.',
      href: '/docs/agents/loop-control',
    },
    {
      title: 'Configuring Call Options',
      description:
        'Pass type-safe runtime inputs to dynamically configure agent behavior.',
      href: '/docs/agents/configuring-call-options',
    },
    {
      title: 'Subagents',
      description:
        'Delegate context-heavy tasks to specialized subagents while keeping the main agent focused.',
      href: '/docs/agents/subagents',
    },
  ]}
/>

@@ -0,0 +1,31 @@
---
title: Overview
description: An overview of AI SDK Core.
---

# AI SDK Core

Large Language Models (LLMs) are advanced programs that can understand, create, and engage with human language on a large scale.
They are trained on vast amounts of written material to recognize patterns in language and predict what might come next in a given piece of text.

AI SDK Core **simplifies working with LLMs by offering a standardized way of integrating them into your app** - so you can focus on building great AI applications for your users, not waste time on technical details.

For example, here’s how you can generate text with various models using the AI SDK:

<PreviewSwitchProviders />

## AI SDK Core Functions

AI SDK Core has various functions designed for [text generation](./generating-text), [structured data generation](./generating-structured-data), and [tool usage](./tools-and-tool-calling).
These functions take a standardized approach to setting up [prompts](./prompts) and [settings](./settings), making it easier to work with different models.

- [`generateText`](/docs/ai-sdk-core/generating-text): Generates text and [tool calls](./tools-and-tool-calling).
  This function is ideal for non-interactive use cases such as automation tasks where you need to write text (e.g. drafting emails or summarizing web pages) and for agents that use tools.
- [`streamText`](/docs/ai-sdk-core/generating-text): Streams text and tool calls.
  You can use the `streamText` function for interactive use cases such as [chat bots](/docs/ai-sdk-ui/chatbot) and [content streaming](/docs/ai-sdk-ui/completion).

Both `generateText` and `streamText` support [structured output](/docs/ai-sdk-core/generating-structured-data) via the `output` property (e.g. `Output.object()`, `Output.array()`), allowing you to generate typed, schema-validated data for information extraction, synthetic data generation, classification tasks, and [streaming generated UIs](/docs/ai-sdk-ui/object-generation).

## API Reference

Please check out the [AI SDK Core API Reference](/docs/reference/ai-sdk-core) for more details on each function.