ai 6.0.30 → 6.0.32
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +13 -0
- package/dist/index.js +1 -1
- package/dist/index.mjs +1 -1
- package/dist/internal/index.js +1 -1
- package/dist/internal/index.mjs +1 -1
- package/docs/00-introduction/index.mdx +76 -0
- package/docs/02-foundations/01-overview.mdx +43 -0
- package/docs/02-foundations/02-providers-and-models.mdx +163 -0
- package/docs/02-foundations/03-prompts.mdx +620 -0
- package/docs/02-foundations/04-tools.mdx +160 -0
- package/docs/02-foundations/05-streaming.mdx +62 -0
- package/docs/02-foundations/index.mdx +43 -0
- package/docs/02-getting-started/00-choosing-a-provider.mdx +110 -0
- package/docs/02-getting-started/01-navigating-the-library.mdx +85 -0
- package/docs/02-getting-started/02-nextjs-app-router.mdx +556 -0
- package/docs/02-getting-started/03-nextjs-pages-router.mdx +542 -0
- package/docs/02-getting-started/04-svelte.mdx +627 -0
- package/docs/02-getting-started/05-nuxt.mdx +566 -0
- package/docs/02-getting-started/06-nodejs.mdx +512 -0
- package/docs/02-getting-started/07-expo.mdx +766 -0
- package/docs/02-getting-started/08-tanstack-start.mdx +583 -0
- package/docs/02-getting-started/index.mdx +44 -0
- package/docs/03-agents/01-overview.mdx +96 -0
- package/docs/03-agents/02-building-agents.mdx +367 -0
- package/docs/03-agents/03-workflows.mdx +370 -0
- package/docs/03-agents/04-loop-control.mdx +350 -0
- package/docs/03-agents/05-configuring-call-options.mdx +286 -0
- package/docs/03-agents/index.mdx +40 -0
- package/docs/03-ai-sdk-core/01-overview.mdx +33 -0
- package/docs/03-ai-sdk-core/05-generating-text.mdx +600 -0
- package/docs/03-ai-sdk-core/10-generating-structured-data.mdx +662 -0
- package/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx +1102 -0
- package/docs/03-ai-sdk-core/16-mcp-tools.mdx +375 -0
- package/docs/03-ai-sdk-core/20-prompt-engineering.mdx +144 -0
- package/docs/03-ai-sdk-core/25-settings.mdx +198 -0
- package/docs/03-ai-sdk-core/30-embeddings.mdx +247 -0
- package/docs/03-ai-sdk-core/31-reranking.mdx +218 -0
- package/docs/03-ai-sdk-core/35-image-generation.mdx +341 -0
- package/docs/03-ai-sdk-core/36-transcription.mdx +173 -0
- package/docs/03-ai-sdk-core/37-speech.mdx +167 -0
- package/docs/03-ai-sdk-core/40-middleware.mdx +480 -0
- package/docs/03-ai-sdk-core/45-provider-management.mdx +349 -0
- package/docs/03-ai-sdk-core/50-error-handling.mdx +149 -0
- package/docs/03-ai-sdk-core/55-testing.mdx +218 -0
- package/docs/03-ai-sdk-core/60-telemetry.mdx +313 -0
- package/docs/03-ai-sdk-core/65-devtools.mdx +107 -0
- package/docs/03-ai-sdk-core/index.mdx +88 -0
- package/docs/04-ai-sdk-ui/01-overview.mdx +44 -0
- package/docs/04-ai-sdk-ui/02-chatbot.mdx +1313 -0
- package/docs/04-ai-sdk-ui/03-chatbot-message-persistence.mdx +535 -0
- package/docs/04-ai-sdk-ui/03-chatbot-resume-streams.mdx +263 -0
- package/docs/04-ai-sdk-ui/03-chatbot-tool-usage.mdx +682 -0
- package/docs/04-ai-sdk-ui/04-generative-user-interfaces.mdx +389 -0
- package/docs/04-ai-sdk-ui/05-completion.mdx +186 -0
- package/docs/04-ai-sdk-ui/08-object-generation.mdx +344 -0
- package/docs/04-ai-sdk-ui/20-streaming-data.mdx +397 -0
- package/docs/04-ai-sdk-ui/21-error-handling.mdx +190 -0
- package/docs/04-ai-sdk-ui/21-transport.mdx +174 -0
- package/docs/04-ai-sdk-ui/24-reading-ui-message-streams.mdx +104 -0
- package/docs/04-ai-sdk-ui/25-message-metadata.mdx +152 -0
- package/docs/04-ai-sdk-ui/50-stream-protocol.mdx +477 -0
- package/docs/04-ai-sdk-ui/index.mdx +64 -0
- package/docs/05-ai-sdk-rsc/01-overview.mdx +45 -0
- package/docs/05-ai-sdk-rsc/02-streaming-react-components.mdx +209 -0
- package/docs/05-ai-sdk-rsc/03-generative-ui-state.mdx +279 -0
- package/docs/05-ai-sdk-rsc/03-saving-and-restoring-states.mdx +105 -0
- package/docs/05-ai-sdk-rsc/04-multistep-interfaces.mdx +282 -0
- package/docs/05-ai-sdk-rsc/05-streaming-values.mdx +158 -0
- package/docs/05-ai-sdk-rsc/06-loading-state.mdx +273 -0
- package/docs/05-ai-sdk-rsc/08-error-handling.mdx +96 -0
- package/docs/05-ai-sdk-rsc/09-authentication.mdx +42 -0
- package/docs/05-ai-sdk-rsc/10-migrating-to-ui.mdx +722 -0
- package/docs/05-ai-sdk-rsc/index.mdx +58 -0
- package/docs/06-advanced/01-prompt-engineering.mdx +96 -0
- package/docs/06-advanced/02-stopping-streams.mdx +184 -0
- package/docs/06-advanced/03-backpressure.mdx +173 -0
- package/docs/06-advanced/04-caching.mdx +169 -0
- package/docs/06-advanced/05-multiple-streamables.mdx +68 -0
- package/docs/06-advanced/06-rate-limiting.mdx +60 -0
- package/docs/06-advanced/07-rendering-ui-with-language-models.mdx +213 -0
- package/docs/06-advanced/08-model-as-router.mdx +120 -0
- package/docs/06-advanced/09-multistep-interfaces.mdx +115 -0
- package/docs/06-advanced/09-sequential-generations.mdx +55 -0
- package/docs/06-advanced/10-vercel-deployment-guide.mdx +117 -0
- package/docs/06-advanced/index.mdx +11 -0
- package/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx +2142 -0
- package/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx +3215 -0
- package/docs/07-reference/01-ai-sdk-core/03-generate-object.mdx +780 -0
- package/docs/07-reference/01-ai-sdk-core/04-stream-object.mdx +1140 -0
- package/docs/07-reference/01-ai-sdk-core/05-embed.mdx +190 -0
- package/docs/07-reference/01-ai-sdk-core/06-embed-many.mdx +171 -0
- package/docs/07-reference/01-ai-sdk-core/06-rerank.mdx +309 -0
- package/docs/07-reference/01-ai-sdk-core/10-generate-image.mdx +227 -0
- package/docs/07-reference/01-ai-sdk-core/11-transcribe.mdx +138 -0
- package/docs/07-reference/01-ai-sdk-core/12-generate-speech.mdx +214 -0
- package/docs/07-reference/01-ai-sdk-core/15-agent.mdx +203 -0
- package/docs/07-reference/01-ai-sdk-core/16-tool-loop-agent.mdx +449 -0
- package/docs/07-reference/01-ai-sdk-core/17-create-agent-ui-stream.mdx +148 -0
- package/docs/07-reference/01-ai-sdk-core/18-create-agent-ui-stream-response.mdx +168 -0
- package/docs/07-reference/01-ai-sdk-core/18-pipe-agent-ui-stream-to-response.mdx +144 -0
- package/docs/07-reference/01-ai-sdk-core/20-tool.mdx +196 -0
- package/docs/07-reference/01-ai-sdk-core/22-dynamic-tool.mdx +175 -0
- package/docs/07-reference/01-ai-sdk-core/23-create-mcp-client.mdx +410 -0
- package/docs/07-reference/01-ai-sdk-core/24-mcp-stdio-transport.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/25-json-schema.mdx +94 -0
- package/docs/07-reference/01-ai-sdk-core/26-zod-schema.mdx +109 -0
- package/docs/07-reference/01-ai-sdk-core/27-valibot-schema.mdx +55 -0
- package/docs/07-reference/01-ai-sdk-core/28-output.mdx +342 -0
- package/docs/07-reference/01-ai-sdk-core/30-model-message.mdx +415 -0
- package/docs/07-reference/01-ai-sdk-core/31-ui-message.mdx +246 -0
- package/docs/07-reference/01-ai-sdk-core/32-validate-ui-messages.mdx +101 -0
- package/docs/07-reference/01-ai-sdk-core/33-safe-validate-ui-messages.mdx +113 -0
- package/docs/07-reference/01-ai-sdk-core/40-provider-registry.mdx +182 -0
- package/docs/07-reference/01-ai-sdk-core/42-custom-provider.mdx +121 -0
- package/docs/07-reference/01-ai-sdk-core/50-cosine-similarity.mdx +52 -0
- package/docs/07-reference/01-ai-sdk-core/60-wrap-language-model.mdx +59 -0
- package/docs/07-reference/01-ai-sdk-core/61-wrap-image-model.mdx +64 -0
- package/docs/07-reference/01-ai-sdk-core/65-language-model-v2-middleware.mdx +46 -0
- package/docs/07-reference/01-ai-sdk-core/66-extract-reasoning-middleware.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/67-simulate-streaming-middleware.mdx +71 -0
- package/docs/07-reference/01-ai-sdk-core/68-default-settings-middleware.mdx +80 -0
- package/docs/07-reference/01-ai-sdk-core/69-add-tool-input-examples-middleware.mdx +155 -0
- package/docs/07-reference/01-ai-sdk-core/70-extract-json-middleware.mdx +147 -0
- package/docs/07-reference/01-ai-sdk-core/70-step-count-is.mdx +84 -0
- package/docs/07-reference/01-ai-sdk-core/71-has-tool-call.mdx +120 -0
- package/docs/07-reference/01-ai-sdk-core/75-simulate-readable-stream.mdx +94 -0
- package/docs/07-reference/01-ai-sdk-core/80-smooth-stream.mdx +145 -0
- package/docs/07-reference/01-ai-sdk-core/90-generate-id.mdx +43 -0
- package/docs/07-reference/01-ai-sdk-core/91-create-id-generator.mdx +89 -0
- package/docs/07-reference/01-ai-sdk-core/index.mdx +159 -0
- package/docs/07-reference/02-ai-sdk-ui/01-use-chat.mdx +446 -0
- package/docs/07-reference/02-ai-sdk-ui/02-use-completion.mdx +179 -0
- package/docs/07-reference/02-ai-sdk-ui/03-use-object.mdx +178 -0
- package/docs/07-reference/02-ai-sdk-ui/31-convert-to-model-messages.mdx +230 -0
- package/docs/07-reference/02-ai-sdk-ui/32-prune-messages.mdx +108 -0
- package/docs/07-reference/02-ai-sdk-ui/40-create-ui-message-stream.mdx +151 -0
- package/docs/07-reference/02-ai-sdk-ui/41-create-ui-message-stream-response.mdx +113 -0
- package/docs/07-reference/02-ai-sdk-ui/42-pipe-ui-message-stream-to-response.mdx +73 -0
- package/docs/07-reference/02-ai-sdk-ui/43-read-ui-message-stream.mdx +57 -0
- package/docs/07-reference/02-ai-sdk-ui/46-infer-ui-tools.mdx +99 -0
- package/docs/07-reference/02-ai-sdk-ui/47-infer-ui-tool.mdx +75 -0
- package/docs/07-reference/02-ai-sdk-ui/50-direct-chat-transport.mdx +333 -0
- package/docs/07-reference/02-ai-sdk-ui/index.mdx +89 -0
- package/docs/07-reference/03-ai-sdk-rsc/01-stream-ui.mdx +767 -0
- package/docs/07-reference/03-ai-sdk-rsc/02-create-ai.mdx +90 -0
- package/docs/07-reference/03-ai-sdk-rsc/03-create-streamable-ui.mdx +91 -0
- package/docs/07-reference/03-ai-sdk-rsc/04-create-streamable-value.mdx +48 -0
- package/docs/07-reference/03-ai-sdk-rsc/05-read-streamable-value.mdx +78 -0
- package/docs/07-reference/03-ai-sdk-rsc/06-get-ai-state.mdx +50 -0
- package/docs/07-reference/03-ai-sdk-rsc/07-get-mutable-ai-state.mdx +70 -0
- package/docs/07-reference/03-ai-sdk-rsc/08-use-ai-state.mdx +26 -0
- package/docs/07-reference/03-ai-sdk-rsc/09-use-actions.mdx +42 -0
- package/docs/07-reference/03-ai-sdk-rsc/10-use-ui-state.mdx +35 -0
- package/docs/07-reference/03-ai-sdk-rsc/11-use-streamable-value.mdx +46 -0
- package/docs/07-reference/03-ai-sdk-rsc/20-render.mdx +262 -0
- package/docs/07-reference/03-ai-sdk-rsc/index.mdx +67 -0
- package/docs/07-reference/04-stream-helpers/01-ai-stream.mdx +89 -0
- package/docs/07-reference/04-stream-helpers/02-streaming-text-response.mdx +79 -0
- package/docs/07-reference/04-stream-helpers/05-stream-to-response.mdx +108 -0
- package/docs/07-reference/04-stream-helpers/07-openai-stream.mdx +77 -0
- package/docs/07-reference/04-stream-helpers/08-anthropic-stream.mdx +79 -0
- package/docs/07-reference/04-stream-helpers/09-aws-bedrock-stream.mdx +91 -0
- package/docs/07-reference/04-stream-helpers/10-aws-bedrock-anthropic-stream.mdx +96 -0
- package/docs/07-reference/04-stream-helpers/10-aws-bedrock-messages-stream.mdx +96 -0
- package/docs/07-reference/04-stream-helpers/11-aws-bedrock-cohere-stream.mdx +93 -0
- package/docs/07-reference/04-stream-helpers/12-aws-bedrock-llama-2-stream.mdx +93 -0
- package/docs/07-reference/04-stream-helpers/13-cohere-stream.mdx +78 -0
- package/docs/07-reference/04-stream-helpers/14-google-generative-ai-stream.mdx +85 -0
- package/docs/07-reference/04-stream-helpers/15-hugging-face-stream.mdx +84 -0
- package/docs/07-reference/04-stream-helpers/16-langchain-adapter.mdx +98 -0
- package/docs/07-reference/04-stream-helpers/16-llamaindex-adapter.mdx +70 -0
- package/docs/07-reference/04-stream-helpers/17-mistral-stream.mdx +81 -0
- package/docs/07-reference/04-stream-helpers/18-replicate-stream.mdx +83 -0
- package/docs/07-reference/04-stream-helpers/19-inkeep-stream.mdx +80 -0
- package/docs/07-reference/04-stream-helpers/index.mdx +103 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-api-call-error.mdx +30 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-download-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-empty-response-body-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-argument-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-data-content-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-data-content.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-message-role-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-prompt-error.mdx +47 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-response-data-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-approval-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-input-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-json-parse-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-load-api-key-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-load-setting-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-message-conversion-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-content-generated-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-image-generated-error.mdx +36 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-object-generated-error.mdx +43 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-speech-generated-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-model-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-provider-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-tool-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-transcript-generated-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-retry-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-too-many-embedding-values-for-call-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-not-found-for-approval-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-repair-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-type-validation-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-unsupported-functionality-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/index.mdx +38 -0
- package/docs/07-reference/index.mdx +34 -0
- package/docs/08-migration-guides/00-versioning.mdx +46 -0
- package/docs/08-migration-guides/24-migration-guide-6-0.mdx +823 -0
- package/docs/08-migration-guides/25-migration-guide-5-0-data.mdx +882 -0
- package/docs/08-migration-guides/26-migration-guide-5-0.mdx +3427 -0
- package/docs/08-migration-guides/27-migration-guide-4-2.mdx +99 -0
- package/docs/08-migration-guides/28-migration-guide-4-1.mdx +14 -0
- package/docs/08-migration-guides/29-migration-guide-4-0.mdx +1157 -0
- package/docs/08-migration-guides/36-migration-guide-3-4.mdx +14 -0
- package/docs/08-migration-guides/37-migration-guide-3-3.mdx +64 -0
- package/docs/08-migration-guides/38-migration-guide-3-2.mdx +46 -0
- package/docs/08-migration-guides/39-migration-guide-3-1.mdx +168 -0
- package/docs/08-migration-guides/index.mdx +22 -0
- package/docs/09-troubleshooting/01-azure-stream-slow.mdx +33 -0
- package/docs/09-troubleshooting/02-client-side-function-calls-not-invoked.mdx +22 -0
- package/docs/09-troubleshooting/03-server-actions-in-client-components.mdx +40 -0
- package/docs/09-troubleshooting/04-strange-stream-output.mdx +36 -0
- package/docs/09-troubleshooting/05-streamable-ui-errors.mdx +16 -0
- package/docs/09-troubleshooting/05-tool-invocation-missing-result.mdx +106 -0
- package/docs/09-troubleshooting/06-streaming-not-working-when-deployed.mdx +31 -0
- package/docs/09-troubleshooting/06-streaming-not-working-when-proxied.mdx +31 -0
- package/docs/09-troubleshooting/06-timeout-on-vercel.mdx +60 -0
- package/docs/09-troubleshooting/07-unclosed-streams.mdx +34 -0
- package/docs/09-troubleshooting/08-use-chat-failed-to-parse-stream.mdx +26 -0
- package/docs/09-troubleshooting/09-client-stream-error.mdx +25 -0
- package/docs/09-troubleshooting/10-use-chat-tools-no-response.mdx +32 -0
- package/docs/09-troubleshooting/11-use-chat-custom-request-options.mdx +149 -0
- package/docs/09-troubleshooting/12-typescript-performance-zod.mdx +46 -0
- package/docs/09-troubleshooting/12-use-chat-an-error-occurred.mdx +59 -0
- package/docs/09-troubleshooting/13-repeated-assistant-messages.mdx +73 -0
- package/docs/09-troubleshooting/14-stream-abort-handling.mdx +73 -0
- package/docs/09-troubleshooting/14-tool-calling-with-structured-outputs.mdx +48 -0
- package/docs/09-troubleshooting/15-abort-breaks-resumable-streams.mdx +55 -0
- package/docs/09-troubleshooting/15-stream-text-not-working.mdx +33 -0
- package/docs/09-troubleshooting/16-streaming-status-delay.mdx +63 -0
- package/docs/09-troubleshooting/17-use-chat-stale-body-data.mdx +141 -0
- package/docs/09-troubleshooting/18-ontoolcall-type-narrowing.mdx +66 -0
- package/docs/09-troubleshooting/19-unsupported-model-version.mdx +50 -0
- package/docs/09-troubleshooting/20-no-object-generated-content-filter.mdx +72 -0
- package/docs/09-troubleshooting/30-model-is-not-assignable-to-type.mdx +21 -0
- package/docs/09-troubleshooting/40-typescript-cannot-find-namespace-jsx.mdx +24 -0
- package/docs/09-troubleshooting/50-react-maximum-update-depth-exceeded.mdx +39 -0
- package/docs/09-troubleshooting/60-jest-cannot-find-module-ai-rsc.mdx +22 -0
- package/docs/09-troubleshooting/index.mdx +11 -0
- package/package.json +7 -3
@@ -0,0 +1,60 @@
---
title: Getting Timeouts When Deploying on Vercel
description: Learn how to fix timeouts and cut-off responses when deploying to Vercel.
---

# Getting Timeouts When Deploying on Vercel

## Issue

Streaming with the AI SDK works in my local development environment.
However, when I deploy to Vercel, longer responses get cut off in the UI, and I see timeouts in the Vercel logs or the error: `Uncaught (in promise) Error: Connection closed`.

## Solution

With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute), the default function duration is now **5 minutes (300 seconds)** across all plans. This should be sufficient for most streaming applications.

If you need to extend the timeout for longer-running processes, you can increase the `maxDuration` setting:

### Next.js (App Router)

Add the following to your route file or the page you are calling your Server Action from:

```tsx
export const maxDuration = 600;
```

<Note>
  Setting `maxDuration` above 300 seconds requires a Pro or Enterprise plan.
</Note>
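For orientation, here is a minimal sketch of a full streaming route with an extended duration; the route path and the `__MODEL__` placeholder are assumptions, following the conventions of the other snippets in these docs:

```tsx file='app/api/chat/route.ts'
import { convertToModelMessages, streamText } from 'ai';

// Allow this function to run (and stream) for up to 10 minutes.
export const maxDuration = 600;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: __MODEL__,
    messages: await convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```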
### Other Frameworks

For other frameworks, you can set timeouts in your `vercel.json` file:

```json
{
  "functions": {
    "api/chat/route.ts": {
      "maxDuration": 600
    }
  }
}
```

<Note>
  Setting `maxDuration` above 300 seconds requires a Pro or Enterprise plan.
</Note>

### Maximum Duration Limits

The maximum duration you can set depends on your Vercel plan:

- **Hobby**: Up to 300 seconds (5 minutes)
- **Pro**: Up to 800 seconds (~13 minutes)
- **Enterprise**: Up to 800 seconds (~13 minutes)

## Learn more

- [Fluid Compute Default Settings](https://vercel.com/docs/fluid-compute#default-settings-by-plan)
- [Configuring Maximum Duration for Vercel Functions](https://vercel.com/docs/functions/configuring-functions/duration)
@@ -0,0 +1,34 @@
---
title: Unclosed Streams
description: Troubleshooting errors related to unclosed streams.
---

# Unclosed Streams

Sometimes streams are not closed properly, which can lead to unexpected behavior. The following is a common issue caused by an unclosed stream.

## Issue

The streamable UI has been slow to update.

## Solution

This happens when you create a streamable UI using [`createStreamableUI`](/docs/reference/ai-sdk-rsc/create-streamable-ui) and fail to close the stream.
To fix this, close the stream by calling the [`.done()`](/docs/reference/ai-sdk-rsc/create-streamable-ui#done) method.

```tsx file='app/actions.tsx'
import { createStreamableUI } from '@ai-sdk/rsc';

const submitMessage = async () => {
  'use server';

  const stream = createStreamableUI('1');

  stream.update('2');
  stream.append('3');
  stream.done('4'); // [!code ++]

  return stream.value;
};
```
@@ -0,0 +1,26 @@
---
title: useChat Failed to Parse Stream
description: Troubleshooting the `useChat` "Failed to Parse Stream" error.
---

# `useChat` "Failed to Parse Stream String" Error

## Issue

I am using [`useChat`](/docs/reference/ai-sdk-ui/use-chat) or [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion), and I am getting a `"Failed to parse stream string. Invalid code"` error. I am using version `3.0.20` or newer of the AI SDK.

## Background

The AI SDK switched to the stream data protocol in version `3.0.20`.
[`useChat`](/docs/reference/ai-sdk-ui/use-chat) and [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) expect stream parts that support data, tool calls, etc.
What you see is a failure to parse the stream.
This can be caused by using an older version of the AI SDK in the backend, by providing a text stream using a custom provider, or by using a raw LangChain stream result.

## Solution

You can switch [`useChat`](/docs/reference/ai-sdk-ui/use-chat) and [`useCompletion`](/docs/reference/ai-sdk-ui/use-completion) to raw text stream processing with the [`streamProtocol`](/docs/reference/ai-sdk-ui/use-completion#stream-protocol) parameter.
Set it to `text` as follows:

```tsx
const { messages, append } = useChat({ streamProtocol: 'text' });
```
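If your backend intentionally streams raw text, make sure it actually returns a plain text stream. As a sketch, assuming the backend uses the AI SDK's `streamText` (model placeholder assumed):

```tsx
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: __MODEL__,
    prompt,
  });

  // Plain text stream that matches streamProtocol: 'text' on the client.
  return result.toTextStreamResponse();
}
```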
@@ -0,0 +1,25 @@
---
title: Server Action Plain Objects Error
description: Troubleshooting errors related to using AI SDK Core functions with Server Actions.
---

# "Only plain objects can be passed from client components" Server Action Error

## Issue

I am using [`streamText`](/docs/reference/ai-sdk-core/stream-text) or [`streamObject`](/docs/reference/ai-sdk-core/stream-object) with Server Actions, and I am getting a `"only plain objects and a few built ins can be passed from client components"` error.

## Background

This error occurs when you try to return a non-serializable object from a Server Action to a Client Component. The `streamText` result is an object with methods and streams that can't be directly serialized and passed to the client.

## Solution

To fix this issue, you need to ensure that you're only returning serializable data from your Server Action:

1. Instead of returning the entire result object from `streamText`, extract only the necessary serializable data.
2. Use the [`createStreamableValue`](/docs/reference/ai-sdk-rsc/create-streamable-value) function to create a streamable value that can be safely passed to the client, as shown in the sketch below.

Here's an example that demonstrates how to implement this solution: [Streaming Text Generation](/examples/next-app/basics/streaming-text-generation).
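In outline, the pattern looks like the following sketch (model placeholder assumed; see the linked example for the full version):

```tsx file='app/actions.ts'
'use server';

import { streamText } from 'ai';
import { createStreamableValue } from '@ai-sdk/rsc';

export async function generate(input: string) {
  // A streamable value is serializable and safe to return to the client.
  const stream = createStreamableValue('');

  (async () => {
    const { textStream } = streamText({
      model: __MODEL__,
      prompt: input,
    });

    for await (const delta of textStream) {
      stream.update(delta);
    }

    stream.done();
  })();

  return { output: stream.value };
}
```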
This approach ensures that only serializable data (the text) is passed to the client, avoiding the "only plain objects" error.
@@ -0,0 +1,32 @@
---
title: useChat No Response
description: Troubleshooting cases where the model does not respond when using useChat.
---

# `useChat` No Response

## Issue

I am using [`useChat`](/docs/reference/ai-sdk-ui/use-chat).
When I log the incoming messages on the server, I can see the tool call and the tool result, but the model does not respond with anything.

## Solution

To resolve this issue, convert the incoming messages to the `ModelMessage` format using the [`convertToModelMessages`](/docs/reference/ai-sdk-ui/convert-to-model-messages) function.

```tsx highlight="9"
import { convertToModelMessages, streamText } from 'ai';
__PROVIDER_IMPORT__;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: __MODEL__,
    messages: await convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```
@@ -0,0 +1,149 @@
---
title: Custom headers, body, and credentials not working with useChat
description: Troubleshooting errors related to custom request configuration in the useChat hook
---

# Custom headers, body, and credentials not working with useChat

## Issue

When using the `useChat` hook, custom request options like headers, body fields, and credentials configured directly on the hook are not being sent with the request:

```tsx
// These options are not sent with the request
const { messages, sendMessage } = useChat({
  headers: {
    Authorization: 'Bearer token123',
  },
  body: {
    user_id: '123',
  },
  credentials: 'include',
});
```

## Background

The `useChat` hook has changed its API for configuring request options. Direct options like `headers`, `body`, and `credentials` on the hook itself are no longer supported. Instead, you need to use the `transport` configuration with `DefaultChatTransport` or pass options at the request level.

## Solution

There are three ways to properly configure request options with `useChat`:

### Option 1: Request-Level Configuration (Recommended for Dynamic Values)

For dynamic values that change over time, the recommended approach is to pass options when calling `sendMessage`:

```tsx
const { messages, sendMessage } = useChat();

// Send options with each message
sendMessage(
  { text: input },
  {
    headers: {
      Authorization: `Bearer ${getAuthToken()}`, // Dynamic auth token
      'X-Request-ID': generateRequestId(),
    },
    body: {
      temperature: 0.7,
      max_tokens: 100,
      user_id: getCurrentUserId(), // Dynamic user ID
      sessionId: getCurrentSessionId(), // Dynamic session
    },
  },
);
```

This approach ensures that the most up-to-date values are always sent with each request.

### Option 2: Hook-Level Configuration with Static Values

For static values that don't change during the component lifecycle, use the `DefaultChatTransport`:

```tsx
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    headers: {
      'X-API-Version': 'v1', // Static API version
      'X-App-ID': 'my-app', // Static app identifier
    },
    body: {
      model: 'gpt-5.1', // Default model
      stream: true, // Static configuration
    },
    credentials: 'include', // Static credentials policy
  }),
});
```

### Option 3: Hook-Level Configuration with Resolvable Functions

If you need dynamic values at the hook level, you can use functions that return configuration values. However, request-level configuration is generally preferred for better reliability:

```tsx
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    headers: () => ({
      Authorization: `Bearer ${getAuthToken()}`,
      'X-User-ID': getCurrentUserId(),
    }),
    body: () => ({
      sessionId: getCurrentSessionId(),
      preferences: getUserPreferences(),
    }),
    credentials: () => (isAuthenticated() ? 'include' : 'same-origin'),
  }),
});
```

<Note>
  For component state that changes over time, request-level configuration
  (Option 1) is recommended. If using hook-level functions, consider using
  `useRef` to store current values and reference `ref.current` in your
  configuration function.
</Note>
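As a sketch of that `useRef` pattern (the component and its `sessionId` prop are hypothetical):

```tsx
import { useEffect, useRef } from 'react';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

function Chat({ sessionId }: { sessionId: string }) {
  // Keep the latest sessionId in a ref so the transport reads a current value.
  const sessionIdRef = useRef(sessionId);

  useEffect(() => {
    sessionIdRef.current = sessionId;
  }, [sessionId]);

  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/chat',
      // Evaluated at request time, not at mount time.
      body: () => ({ sessionId: sessionIdRef.current }),
    }),
  });

  // ... render messages and an input that calls sendMessage
  return null;
}
```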
### Combining Hook and Request Level Options

Request-level options take precedence over hook-level options:

```tsx
// Hook-level default configuration
const { messages, sendMessage } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    headers: {
      'X-API-Version': 'v1',
    },
    body: {
      model: 'gpt-5.1',
    },
  }),
});

// Override or add options per request
sendMessage(
  { text: input },
  {
    headers: {
      'X-API-Version': 'v2', // This overrides the hook-level header
      'X-Request-ID': '123', // This is added
    },
    body: {
      model: 'gpt-5-mini', // This overrides the hook-level body field
      temperature: 0.5, // This is added
    },
  },
);
```

For more details on request configuration, see the [Request Configuration](/docs/ai-sdk-ui/chatbot#request-configuration) documentation.
@@ -0,0 +1,46 @@
---
title: TypeScript performance issues with Zod and AI SDK 5
description: Troubleshooting TypeScript server crashes and slow performance when using Zod with AI SDK 5
---

# TypeScript performance issues with Zod and AI SDK 5

## Issue

When using the AI SDK 5 with Zod, you may experience:

- TypeScript server crashes or hangs
- Extremely slow type checking in files that import AI SDK functions
- Error messages like "Type instantiation is excessively deep and possibly infinite"
- IDE becoming unresponsive when working with AI SDK code

## Background

The AI SDK 5 has specific compatibility requirements with Zod versions. When importing Zod using the standard import path (`import { z } from 'zod'`), TypeScript's type inference can become excessively complex, leading to performance degradation or crashes.

## Solution

### Upgrade Zod to 4.1.8 or Later

The primary solution is to upgrade to Zod version 4.1.8 or later, which includes a fix for this module resolution issue:

```bash
pnpm add zod@^4.1.8
```

This version resolves the underlying problem where different module resolution settings were causing TypeScript to load the same Zod declarations twice, leading to expensive structural comparisons.

### Alternative: Update TypeScript Configuration

If upgrading Zod isn't possible, you can update your `tsconfig.json` to use `moduleResolution: "nodenext"`:

```json
{
  "compilerOptions": {
    "moduleResolution": "nodenext"
    // ... other options
  }
}
```

This resolves the TypeScript performance issues while allowing you to continue using the standard Zod import.
@@ -0,0 +1,59 @@
---
title: useChat "An error occurred"
description: Troubleshooting errors related to the "An error occurred" error in useChat.
---

# `useChat` "An error occurred"

## Issue

I am using [`useChat`](/docs/reference/ai-sdk-ui/use-chat) and I get the error "An error occurred".

## Background

Error messages from `streamText` are masked by default when using `toUIMessageStreamResponse` for security reasons (secure-by-default).
This prevents leaking sensitive information to the client.

## Solution

To forward error details to the client or to log errors, use the `onError` function when calling `toUIMessageStreamResponse`.

```tsx
export function errorHandler(error: unknown) {
  if (error == null) {
    return 'unknown error';
  }

  if (typeof error === 'string') {
    return error;
  }

  if (error instanceof Error) {
    return error.message;
  }

  return JSON.stringify(error);
}
```

```tsx
const result = streamText({
  // ...
});

return result.toUIMessageStreamResponse({
  onError: errorHandler,
});
```

In case you are using `createUIMessageStream`, you can use the `onError` function there as well:

```tsx
const stream = createUIMessageStream({
  execute: async ({ writer }) => {
    // ...
  },
  onError: errorHandler,
});
```
@@ -0,0 +1,73 @@
---
title: Repeated assistant messages in useChat
description: Troubleshooting duplicate assistant messages when using useChat with streamText
---

# Repeated assistant messages in useChat

## Issue

When using `useChat` with `streamText` on the server, the assistant's messages appear duplicated in the UI, showing both the previous message and the new message, or showing the same message multiple times. This can occur when using tool calls or complex message flows.

```tsx
// Server-side code that may cause assistant message duplication on the client
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: 'openai/gpt-5-mini',
    messages: await convertToModelMessages(messages),
    tools: {
      weather: {
        description: 'Get the weather for a location',
        inputSchema: z.object({
          location: z.string(),
        }),
        execute: async ({ location }) => {
          return { temperature: 72, condition: 'sunny' };
        },
      },
    },
  });

  return result.toUIMessageStreamResponse();
}
```

## Background

The duplication occurs because `toUIMessageStreamResponse` generates new message IDs for each new message. Without the original IDs, the client treats each streamed response as a new message instead of an update to an existing one.

## Solution

Pass the original messages array to `toUIMessageStreamResponse` using the `originalMessages` option. By passing `originalMessages`, the method can reuse existing message IDs instead of generating new ones, ensuring the client properly updates existing messages rather than creating duplicates.

```tsx
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: 'openai/gpt-5-mini',
    messages: await convertToModelMessages(messages),
    tools: {
      weather: {
        description: 'Get the weather for a location',
        inputSchema: z.object({
          location: z.string(),
        }),
        execute: async ({ location }) => {
          return { temperature: 72, condition: 'sunny' };
        },
      },
    },
  });

  return result.toUIMessageStreamResponse({
    originalMessages: messages, // Pass the original messages here
    generateMessageId: generateId,
    onFinish: ({ messages }) => {
      saveChat({ id, messages });
    },
  });
}
```
@@ -0,0 +1,73 @@
---
title: onFinish not called when stream is aborted
description: Troubleshooting the onFinish callback not executing when streams are aborted with toUIMessageStreamResponse
---

# onFinish not called when stream is aborted

## Issue

When using `toUIMessageStreamResponse` with an `onFinish` callback, the callback may not execute when the stream is aborted. This happens because the abort handler immediately terminates the response, preventing the `onFinish` callback from being triggered.

```tsx
// Server-side code where onFinish isn't called on abort
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: __MODEL__,
    messages: await convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      // This isn't called when the stream is aborted!
      if (isAborted) {
        console.log('Stream was aborted');
        // Handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // Handle normal completion
      }
    },
  });
}
```

## Background

When a stream is aborted, the response is immediately terminated. Without proper handling, the `onFinish` callback has no chance to execute, preventing important cleanup operations like saving partial results or logging abort events.

## Solution

Pass `consumeStream` as the `consumeSseStream` option in the `toUIMessageStreamResponse` configuration. This ensures that abort events are properly captured and forwarded to the `onFinish` callback, allowing it to execute even when the stream is aborted.

```tsx
// other imports...
import { consumeStream } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: __MODEL__,
    messages: await convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      // Now this WILL be called even when aborted!
      if (isAborted) {
        console.log('Stream was aborted');
        // Handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // Handle normal completion
      }
    },
    consumeSseStream: consumeStream, // This enables onFinish to be called on abort
  });
}
```
@@ -0,0 +1,48 @@
---
title: Tool calling with generateObject and streamObject
description: Troubleshooting tool calling when combined with generateObject and streamObject
---

# Tool calling with generateObject and streamObject (structured outputs)

## Issue

You may want to combine tool calling with structured output generation. While `generateObject` and `streamObject` are designed specifically for structured outputs, they don't support tool calling.

## Background

To use tool calling with structured outputs, use `generateText` or `streamText` with the `output` option.

**Important**: When using `output` with tool calling, the structured output generation counts as an additional step in the execution flow.

## Solution

When using `output` with tool calling, adjust your `stopWhen` condition to account for the additional step required for structured output generation:

```tsx
const result = await generateText({
  model: __MODEL__,
  output: Output.object({
    schema: z.object({
      summary: z.string(),
      sentiment: z.enum(['positive', 'neutral', 'negative']),
    }),
  }),
  tools: {
    analyze: tool({
      description: 'Analyze data',
      inputSchema: z.object({
        data: z.string(),
      }),
      execute: async ({ data }) => {
        return { result: 'analyzed' };
      },
    }),
  },
  // Add at least 1 to your intended step count to account for structured output
  stopWhen: stepCountIs(3), // Now accounts for: tool call + tool result + structured output
  prompt: 'Analyze the data and provide a summary',
});
```

For more information about using structured outputs with `generateText` and `streamText`, see [Generating Structured Data](/docs/ai-sdk-core/generating-structured-data#structured-outputs-with-generatetext-and-streamtext).
@@ -0,0 +1,55 @@
---
title: Abort breaks resumable streams
description: Troubleshooting stream resumption failures when using abort functionality
---

# Abort breaks resumable streams

## Issue

When using `useChat` with `resume: true` for stream resumption, the abort functionality breaks. Closing a tab, refreshing the page, or calling the `stop()` function will trigger an abort signal that interferes with the resumption mechanism, preventing streams from being properly resumed.

```tsx
// This configuration will cause conflicts
const { messages, stop } = useChat({
  id: chatId,
  resume: true, // Stream resumption enabled
});

// Closing the tab will trigger abort and stop resumption
```

## Background

When a page is closed or refreshed, the browser automatically sends an abort signal, which breaks the resumption flow.

## Current limitations

We're aware of this incompatibility and are exploring solutions. **In the meantime, please choose either stream resumption or abort functionality based on your application's requirements**, but not both.

### Option 1: Use stream resumption without abort

If you need to support long-running generations that persist across page reloads:

```tsx
const { messages, sendMessage } = useChat({
  id: chatId,
  resume: true,
});
```

### Option 2: Use abort without stream resumption

If you need to allow users to stop streams manually:

```tsx
const { messages, sendMessage, stop } = useChat({
  id: chatId,
  resume: false, // Disable stream resumption (default behavior)
});
```

## Related

- [Chatbot Resume Streams](/docs/ai-sdk-ui/chatbot-resume-streams)
- [Stopping Streams](/docs/advanced/stopping-streams)