ai 6.0.30 → 6.0.32
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +13 -0
- package/dist/index.js +1 -1
- package/dist/index.mjs +1 -1
- package/dist/internal/index.js +1 -1
- package/dist/internal/index.mjs +1 -1
- package/docs/00-introduction/index.mdx +76 -0
- package/docs/02-foundations/01-overview.mdx +43 -0
- package/docs/02-foundations/02-providers-and-models.mdx +163 -0
- package/docs/02-foundations/03-prompts.mdx +620 -0
- package/docs/02-foundations/04-tools.mdx +160 -0
- package/docs/02-foundations/05-streaming.mdx +62 -0
- package/docs/02-foundations/index.mdx +43 -0
- package/docs/02-getting-started/00-choosing-a-provider.mdx +110 -0
- package/docs/02-getting-started/01-navigating-the-library.mdx +85 -0
- package/docs/02-getting-started/02-nextjs-app-router.mdx +556 -0
- package/docs/02-getting-started/03-nextjs-pages-router.mdx +542 -0
- package/docs/02-getting-started/04-svelte.mdx +627 -0
- package/docs/02-getting-started/05-nuxt.mdx +566 -0
- package/docs/02-getting-started/06-nodejs.mdx +512 -0
- package/docs/02-getting-started/07-expo.mdx +766 -0
- package/docs/02-getting-started/08-tanstack-start.mdx +583 -0
- package/docs/02-getting-started/index.mdx +44 -0
- package/docs/03-agents/01-overview.mdx +96 -0
- package/docs/03-agents/02-building-agents.mdx +367 -0
- package/docs/03-agents/03-workflows.mdx +370 -0
- package/docs/03-agents/04-loop-control.mdx +350 -0
- package/docs/03-agents/05-configuring-call-options.mdx +286 -0
- package/docs/03-agents/index.mdx +40 -0
- package/docs/03-ai-sdk-core/01-overview.mdx +33 -0
- package/docs/03-ai-sdk-core/05-generating-text.mdx +600 -0
- package/docs/03-ai-sdk-core/10-generating-structured-data.mdx +662 -0
- package/docs/03-ai-sdk-core/15-tools-and-tool-calling.mdx +1102 -0
- package/docs/03-ai-sdk-core/16-mcp-tools.mdx +375 -0
- package/docs/03-ai-sdk-core/20-prompt-engineering.mdx +144 -0
- package/docs/03-ai-sdk-core/25-settings.mdx +198 -0
- package/docs/03-ai-sdk-core/30-embeddings.mdx +247 -0
- package/docs/03-ai-sdk-core/31-reranking.mdx +218 -0
- package/docs/03-ai-sdk-core/35-image-generation.mdx +341 -0
- package/docs/03-ai-sdk-core/36-transcription.mdx +173 -0
- package/docs/03-ai-sdk-core/37-speech.mdx +167 -0
- package/docs/03-ai-sdk-core/40-middleware.mdx +480 -0
- package/docs/03-ai-sdk-core/45-provider-management.mdx +349 -0
- package/docs/03-ai-sdk-core/50-error-handling.mdx +149 -0
- package/docs/03-ai-sdk-core/55-testing.mdx +218 -0
- package/docs/03-ai-sdk-core/60-telemetry.mdx +313 -0
- package/docs/03-ai-sdk-core/65-devtools.mdx +107 -0
- package/docs/03-ai-sdk-core/index.mdx +88 -0
- package/docs/04-ai-sdk-ui/01-overview.mdx +44 -0
- package/docs/04-ai-sdk-ui/02-chatbot.mdx +1313 -0
- package/docs/04-ai-sdk-ui/03-chatbot-message-persistence.mdx +535 -0
- package/docs/04-ai-sdk-ui/03-chatbot-resume-streams.mdx +263 -0
- package/docs/04-ai-sdk-ui/03-chatbot-tool-usage.mdx +682 -0
- package/docs/04-ai-sdk-ui/04-generative-user-interfaces.mdx +389 -0
- package/docs/04-ai-sdk-ui/05-completion.mdx +186 -0
- package/docs/04-ai-sdk-ui/08-object-generation.mdx +344 -0
- package/docs/04-ai-sdk-ui/20-streaming-data.mdx +397 -0
- package/docs/04-ai-sdk-ui/21-error-handling.mdx +190 -0
- package/docs/04-ai-sdk-ui/21-transport.mdx +174 -0
- package/docs/04-ai-sdk-ui/24-reading-ui-message-streams.mdx +104 -0
- package/docs/04-ai-sdk-ui/25-message-metadata.mdx +152 -0
- package/docs/04-ai-sdk-ui/50-stream-protocol.mdx +477 -0
- package/docs/04-ai-sdk-ui/index.mdx +64 -0
- package/docs/05-ai-sdk-rsc/01-overview.mdx +45 -0
- package/docs/05-ai-sdk-rsc/02-streaming-react-components.mdx +209 -0
- package/docs/05-ai-sdk-rsc/03-generative-ui-state.mdx +279 -0
- package/docs/05-ai-sdk-rsc/03-saving-and-restoring-states.mdx +105 -0
- package/docs/05-ai-sdk-rsc/04-multistep-interfaces.mdx +282 -0
- package/docs/05-ai-sdk-rsc/05-streaming-values.mdx +158 -0
- package/docs/05-ai-sdk-rsc/06-loading-state.mdx +273 -0
- package/docs/05-ai-sdk-rsc/08-error-handling.mdx +96 -0
- package/docs/05-ai-sdk-rsc/09-authentication.mdx +42 -0
- package/docs/05-ai-sdk-rsc/10-migrating-to-ui.mdx +722 -0
- package/docs/05-ai-sdk-rsc/index.mdx +58 -0
- package/docs/06-advanced/01-prompt-engineering.mdx +96 -0
- package/docs/06-advanced/02-stopping-streams.mdx +184 -0
- package/docs/06-advanced/03-backpressure.mdx +173 -0
- package/docs/06-advanced/04-caching.mdx +169 -0
- package/docs/06-advanced/05-multiple-streamables.mdx +68 -0
- package/docs/06-advanced/06-rate-limiting.mdx +60 -0
- package/docs/06-advanced/07-rendering-ui-with-language-models.mdx +213 -0
- package/docs/06-advanced/08-model-as-router.mdx +120 -0
- package/docs/06-advanced/09-multistep-interfaces.mdx +115 -0
- package/docs/06-advanced/09-sequential-generations.mdx +55 -0
- package/docs/06-advanced/10-vercel-deployment-guide.mdx +117 -0
- package/docs/06-advanced/index.mdx +11 -0
- package/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx +2142 -0
- package/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx +3215 -0
- package/docs/07-reference/01-ai-sdk-core/03-generate-object.mdx +780 -0
- package/docs/07-reference/01-ai-sdk-core/04-stream-object.mdx +1140 -0
- package/docs/07-reference/01-ai-sdk-core/05-embed.mdx +190 -0
- package/docs/07-reference/01-ai-sdk-core/06-embed-many.mdx +171 -0
- package/docs/07-reference/01-ai-sdk-core/06-rerank.mdx +309 -0
- package/docs/07-reference/01-ai-sdk-core/10-generate-image.mdx +227 -0
- package/docs/07-reference/01-ai-sdk-core/11-transcribe.mdx +138 -0
- package/docs/07-reference/01-ai-sdk-core/12-generate-speech.mdx +214 -0
- package/docs/07-reference/01-ai-sdk-core/15-agent.mdx +203 -0
- package/docs/07-reference/01-ai-sdk-core/16-tool-loop-agent.mdx +449 -0
- package/docs/07-reference/01-ai-sdk-core/17-create-agent-ui-stream.mdx +148 -0
- package/docs/07-reference/01-ai-sdk-core/18-create-agent-ui-stream-response.mdx +168 -0
- package/docs/07-reference/01-ai-sdk-core/18-pipe-agent-ui-stream-to-response.mdx +144 -0
- package/docs/07-reference/01-ai-sdk-core/20-tool.mdx +196 -0
- package/docs/07-reference/01-ai-sdk-core/22-dynamic-tool.mdx +175 -0
- package/docs/07-reference/01-ai-sdk-core/23-create-mcp-client.mdx +410 -0
- package/docs/07-reference/01-ai-sdk-core/24-mcp-stdio-transport.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/25-json-schema.mdx +94 -0
- package/docs/07-reference/01-ai-sdk-core/26-zod-schema.mdx +109 -0
- package/docs/07-reference/01-ai-sdk-core/27-valibot-schema.mdx +55 -0
- package/docs/07-reference/01-ai-sdk-core/28-output.mdx +342 -0
- package/docs/07-reference/01-ai-sdk-core/30-model-message.mdx +415 -0
- package/docs/07-reference/01-ai-sdk-core/31-ui-message.mdx +246 -0
- package/docs/07-reference/01-ai-sdk-core/32-validate-ui-messages.mdx +101 -0
- package/docs/07-reference/01-ai-sdk-core/33-safe-validate-ui-messages.mdx +113 -0
- package/docs/07-reference/01-ai-sdk-core/40-provider-registry.mdx +182 -0
- package/docs/07-reference/01-ai-sdk-core/42-custom-provider.mdx +121 -0
- package/docs/07-reference/01-ai-sdk-core/50-cosine-similarity.mdx +52 -0
- package/docs/07-reference/01-ai-sdk-core/60-wrap-language-model.mdx +59 -0
- package/docs/07-reference/01-ai-sdk-core/61-wrap-image-model.mdx +64 -0
- package/docs/07-reference/01-ai-sdk-core/65-language-model-v2-middleware.mdx +46 -0
- package/docs/07-reference/01-ai-sdk-core/66-extract-reasoning-middleware.mdx +68 -0
- package/docs/07-reference/01-ai-sdk-core/67-simulate-streaming-middleware.mdx +71 -0
- package/docs/07-reference/01-ai-sdk-core/68-default-settings-middleware.mdx +80 -0
- package/docs/07-reference/01-ai-sdk-core/69-add-tool-input-examples-middleware.mdx +155 -0
- package/docs/07-reference/01-ai-sdk-core/70-extract-json-middleware.mdx +147 -0
- package/docs/07-reference/01-ai-sdk-core/70-step-count-is.mdx +84 -0
- package/docs/07-reference/01-ai-sdk-core/71-has-tool-call.mdx +120 -0
- package/docs/07-reference/01-ai-sdk-core/75-simulate-readable-stream.mdx +94 -0
- package/docs/07-reference/01-ai-sdk-core/80-smooth-stream.mdx +145 -0
- package/docs/07-reference/01-ai-sdk-core/90-generate-id.mdx +43 -0
- package/docs/07-reference/01-ai-sdk-core/91-create-id-generator.mdx +89 -0
- package/docs/07-reference/01-ai-sdk-core/index.mdx +159 -0
- package/docs/07-reference/02-ai-sdk-ui/01-use-chat.mdx +446 -0
- package/docs/07-reference/02-ai-sdk-ui/02-use-completion.mdx +179 -0
- package/docs/07-reference/02-ai-sdk-ui/03-use-object.mdx +178 -0
- package/docs/07-reference/02-ai-sdk-ui/31-convert-to-model-messages.mdx +230 -0
- package/docs/07-reference/02-ai-sdk-ui/32-prune-messages.mdx +108 -0
- package/docs/07-reference/02-ai-sdk-ui/40-create-ui-message-stream.mdx +151 -0
- package/docs/07-reference/02-ai-sdk-ui/41-create-ui-message-stream-response.mdx +113 -0
- package/docs/07-reference/02-ai-sdk-ui/42-pipe-ui-message-stream-to-response.mdx +73 -0
- package/docs/07-reference/02-ai-sdk-ui/43-read-ui-message-stream.mdx +57 -0
- package/docs/07-reference/02-ai-sdk-ui/46-infer-ui-tools.mdx +99 -0
- package/docs/07-reference/02-ai-sdk-ui/47-infer-ui-tool.mdx +75 -0
- package/docs/07-reference/02-ai-sdk-ui/50-direct-chat-transport.mdx +333 -0
- package/docs/07-reference/02-ai-sdk-ui/index.mdx +89 -0
- package/docs/07-reference/03-ai-sdk-rsc/01-stream-ui.mdx +767 -0
- package/docs/07-reference/03-ai-sdk-rsc/02-create-ai.mdx +90 -0
- package/docs/07-reference/03-ai-sdk-rsc/03-create-streamable-ui.mdx +91 -0
- package/docs/07-reference/03-ai-sdk-rsc/04-create-streamable-value.mdx +48 -0
- package/docs/07-reference/03-ai-sdk-rsc/05-read-streamable-value.mdx +78 -0
- package/docs/07-reference/03-ai-sdk-rsc/06-get-ai-state.mdx +50 -0
- package/docs/07-reference/03-ai-sdk-rsc/07-get-mutable-ai-state.mdx +70 -0
- package/docs/07-reference/03-ai-sdk-rsc/08-use-ai-state.mdx +26 -0
- package/docs/07-reference/03-ai-sdk-rsc/09-use-actions.mdx +42 -0
- package/docs/07-reference/03-ai-sdk-rsc/10-use-ui-state.mdx +35 -0
- package/docs/07-reference/03-ai-sdk-rsc/11-use-streamable-value.mdx +46 -0
- package/docs/07-reference/03-ai-sdk-rsc/20-render.mdx +262 -0
- package/docs/07-reference/03-ai-sdk-rsc/index.mdx +67 -0
- package/docs/07-reference/04-stream-helpers/01-ai-stream.mdx +89 -0
- package/docs/07-reference/04-stream-helpers/02-streaming-text-response.mdx +79 -0
- package/docs/07-reference/04-stream-helpers/05-stream-to-response.mdx +108 -0
- package/docs/07-reference/04-stream-helpers/07-openai-stream.mdx +77 -0
- package/docs/07-reference/04-stream-helpers/08-anthropic-stream.mdx +79 -0
- package/docs/07-reference/04-stream-helpers/09-aws-bedrock-stream.mdx +91 -0
- package/docs/07-reference/04-stream-helpers/10-aws-bedrock-anthropic-stream.mdx +96 -0
- package/docs/07-reference/04-stream-helpers/10-aws-bedrock-messages-stream.mdx +96 -0
- package/docs/07-reference/04-stream-helpers/11-aws-bedrock-cohere-stream.mdx +93 -0
- package/docs/07-reference/04-stream-helpers/12-aws-bedrock-llama-2-stream.mdx +93 -0
- package/docs/07-reference/04-stream-helpers/13-cohere-stream.mdx +78 -0
- package/docs/07-reference/04-stream-helpers/14-google-generative-ai-stream.mdx +85 -0
- package/docs/07-reference/04-stream-helpers/15-hugging-face-stream.mdx +84 -0
- package/docs/07-reference/04-stream-helpers/16-langchain-adapter.mdx +98 -0
- package/docs/07-reference/04-stream-helpers/16-llamaindex-adapter.mdx +70 -0
- package/docs/07-reference/04-stream-helpers/17-mistral-stream.mdx +81 -0
- package/docs/07-reference/04-stream-helpers/18-replicate-stream.mdx +83 -0
- package/docs/07-reference/04-stream-helpers/19-inkeep-stream.mdx +80 -0
- package/docs/07-reference/04-stream-helpers/index.mdx +103 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-api-call-error.mdx +30 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-download-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-empty-response-body-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-argument-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-data-content-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-data-content.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-message-role-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-prompt-error.mdx +47 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-response-data-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-approval-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-invalid-tool-input-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-json-parse-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-load-api-key-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-load-setting-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-message-conversion-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-content-generated-error.mdx +24 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-image-generated-error.mdx +36 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-object-generated-error.mdx +43 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-speech-generated-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-model-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-provider-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-such-tool-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-no-transcript-generated-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-retry-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-too-many-embedding-values-for-call-error.mdx +27 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-not-found-for-approval-error.mdx +26 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-tool-call-repair-error.mdx +28 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-type-validation-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/ai-unsupported-functionality-error.mdx +25 -0
- package/docs/07-reference/05-ai-sdk-errors/index.mdx +38 -0
- package/docs/07-reference/index.mdx +34 -0
- package/docs/08-migration-guides/00-versioning.mdx +46 -0
- package/docs/08-migration-guides/24-migration-guide-6-0.mdx +823 -0
- package/docs/08-migration-guides/25-migration-guide-5-0-data.mdx +882 -0
- package/docs/08-migration-guides/26-migration-guide-5-0.mdx +3427 -0
- package/docs/08-migration-guides/27-migration-guide-4-2.mdx +99 -0
- package/docs/08-migration-guides/28-migration-guide-4-1.mdx +14 -0
- package/docs/08-migration-guides/29-migration-guide-4-0.mdx +1157 -0
- package/docs/08-migration-guides/36-migration-guide-3-4.mdx +14 -0
- package/docs/08-migration-guides/37-migration-guide-3-3.mdx +64 -0
- package/docs/08-migration-guides/38-migration-guide-3-2.mdx +46 -0
- package/docs/08-migration-guides/39-migration-guide-3-1.mdx +168 -0
- package/docs/08-migration-guides/index.mdx +22 -0
- package/docs/09-troubleshooting/01-azure-stream-slow.mdx +33 -0
- package/docs/09-troubleshooting/02-client-side-function-calls-not-invoked.mdx +22 -0
- package/docs/09-troubleshooting/03-server-actions-in-client-components.mdx +40 -0
- package/docs/09-troubleshooting/04-strange-stream-output.mdx +36 -0
- package/docs/09-troubleshooting/05-streamable-ui-errors.mdx +16 -0
- package/docs/09-troubleshooting/05-tool-invocation-missing-result.mdx +106 -0
- package/docs/09-troubleshooting/06-streaming-not-working-when-deployed.mdx +31 -0
- package/docs/09-troubleshooting/06-streaming-not-working-when-proxied.mdx +31 -0
- package/docs/09-troubleshooting/06-timeout-on-vercel.mdx +60 -0
- package/docs/09-troubleshooting/07-unclosed-streams.mdx +34 -0
- package/docs/09-troubleshooting/08-use-chat-failed-to-parse-stream.mdx +26 -0
- package/docs/09-troubleshooting/09-client-stream-error.mdx +25 -0
- package/docs/09-troubleshooting/10-use-chat-tools-no-response.mdx +32 -0
- package/docs/09-troubleshooting/11-use-chat-custom-request-options.mdx +149 -0
- package/docs/09-troubleshooting/12-typescript-performance-zod.mdx +46 -0
- package/docs/09-troubleshooting/12-use-chat-an-error-occurred.mdx +59 -0
- package/docs/09-troubleshooting/13-repeated-assistant-messages.mdx +73 -0
- package/docs/09-troubleshooting/14-stream-abort-handling.mdx +73 -0
- package/docs/09-troubleshooting/14-tool-calling-with-structured-outputs.mdx +48 -0
- package/docs/09-troubleshooting/15-abort-breaks-resumable-streams.mdx +55 -0
- package/docs/09-troubleshooting/15-stream-text-not-working.mdx +33 -0
- package/docs/09-troubleshooting/16-streaming-status-delay.mdx +63 -0
- package/docs/09-troubleshooting/17-use-chat-stale-body-data.mdx +141 -0
- package/docs/09-troubleshooting/18-ontoolcall-type-narrowing.mdx +66 -0
- package/docs/09-troubleshooting/19-unsupported-model-version.mdx +50 -0
- package/docs/09-troubleshooting/20-no-object-generated-content-filter.mdx +72 -0
- package/docs/09-troubleshooting/30-model-is-not-assignable-to-type.mdx +21 -0
- package/docs/09-troubleshooting/40-typescript-cannot-find-namespace-jsx.mdx +24 -0
- package/docs/09-troubleshooting/50-react-maximum-update-depth-exceeded.mdx +39 -0
- package/docs/09-troubleshooting/60-jest-cannot-find-module-ai-rsc.mdx +22 -0
- package/docs/09-troubleshooting/index.mdx +11 -0
- package/package.json +7 -3
package/docs/03-ai-sdk-core/60-telemetry.mdx

@@ -0,0 +1,313 @@

---
title: Telemetry
description: Using OpenTelemetry with AI SDK Core
---

# Telemetry

<Note type="warning">
  AI SDK Telemetry is experimental and may change in the future.
</Note>

The AI SDK uses [OpenTelemetry](https://opentelemetry.io/) to collect telemetry data.
OpenTelemetry is an open-source observability framework designed to provide
standardized instrumentation for collecting telemetry data.

Check out the [AI SDK Observability Integrations](/providers/observability)
to see providers that offer monitoring and tracing for AI SDK applications.

## Enabling telemetry

For Next.js applications, follow the [Next.js OpenTelemetry guide](https://nextjs.org/docs/app/building-your-application/optimizing/open-telemetry) to enable telemetry first.

You can then use the `experimental_telemetry` option to enable telemetry on specific function calls while the feature is experimental:

```ts highlight="4"
const result = await generateText({
  model: __MODEL__,
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});
```

When telemetry is enabled, you can also control whether the input and output values of the function are recorded. Both are recorded by default; you can disable them by setting the `recordInputs` and `recordOutputs` options to `false`.

Disabling the recording of inputs and outputs can be useful for privacy, data-transfer, and performance reasons. For example, you might want to disable recording inputs if they contain sensitive information.

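As a concrete sketch of the options just described (a minimal configuration object assuming the option names shown in this section), tracing stays enabled while recorded inputs and outputs are dropped:

```typescript
// Sketch: telemetry settings that keep spans enabled but omit
// recorded inputs and outputs (both default to true when enabled).
const telemetry = {
  isEnabled: true,
  recordInputs: false, // e.g. prompts contain sensitive information
  recordOutputs: false, // e.g. generated text should not leave the app
};

// This object would be passed as the `experimental_telemetry` option.
console.log(JSON.stringify(telemetry));
```

Passing this object as `experimental_telemetry` follows the same call shape as the example above.
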
## Telemetry Metadata

You can provide a `functionId` to identify the function that the telemetry data is for,
and `metadata` to include additional information in the telemetry data.

```ts highlight="6-10"
const result = await generateText({
  model: __MODEL__,
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-awesome-function',
    metadata: {
      something: 'custom',
      someOtherThing: 'other-value',
    },
  },
});
```

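As an illustration of how such metadata might surface on a span, here is a sketch of flattening the metadata object into attribute-style keys. The `ai.telemetry.metadata.` key prefix is an assumption for illustration, not something stated on this page:

```typescript
// Sketch: flatten a metadata object into span-attribute-style keys.
// The "ai.telemetry.metadata." prefix is an assumption, used here
// only to illustrate that metadata becomes per-key span attributes.
const metadata: Record<string, string> = {
  something: 'custom',
  someOtherThing: 'other-value',
};

const attributes = Object.fromEntries(
  Object.entries(metadata).map(([key, value]) => [
    `ai.telemetry.metadata.${key}`,
    value,
  ]),
);

console.log(attributes['ai.telemetry.metadata.something']); // 'custom'
```
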
## Custom Tracer

You may provide a `tracer`, which must return an OpenTelemetry `Tracer`. This is useful when
you want your traces to use a `TracerProvider` other than the one provided by the `@opentelemetry/api` singleton.

```ts highlight="9"
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';

const tracerProvider = new NodeTracerProvider();
const result = await generateText({
  model: __MODEL__,
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracerProvider.getTracer('ai'),
  },
});
```

## Collected Data

### generateText function

`generateText` records 3 types of spans:

- `ai.generateText` (span): the full length of the generateText call. It contains 1 or more `ai.generateText.doGenerate` spans.
  It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:

  - `operation.name`: `ai.generateText` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.generateText"`
  - `ai.prompt`: the prompt that was used when calling `generateText`
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished
  - `ai.settings.maxOutputTokens`: the maximum number of output tokens that was set

- `ai.generateText.doGenerate` (span): a provider doGenerate call. It can contain `ai.toolCall` spans.
  It contains the [call LLM span information](#call-llm-span-information) and the following attributes:

  - `operation.name`: `ai.generateText.doGenerate` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.generateText.doGenerate"`
  - `ai.prompt.messages`: the messages that were passed into the provider
  - `ai.prompt.tools`: array of stringified tool definitions. The tools can be of type `function` or `provider-defined-client`.
    Function tools have a `name`, `description` (optional), and `inputSchema` (JSON schema).
    Provider-defined-client tools have a `name`, `id`, and `input` (Record).
  - `ai.prompt.toolChoice`: the stringified tool choice setting (JSON). It has a `type` property
    (`auto`, `none`, `required`, `tool`), and if the type is `tool`, a `toolName` property with the specific tool.
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished

- `ai.toolCall` (span): a tool call that is made as part of the generateText call. See [Tool call spans](#tool-call-spans) for more details.

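To make the attribute list above concrete, here is an illustrative sketch of what an `ai.generateText` span's attributes might look like. All values are invented, and the exact way `operation.name` concatenates the operation and the functionId is an assumption, not output captured from the SDK:

```typescript
// Illustrative only: example attribute payload for an ai.generateText
// span, following the attribute names listed above. Values are made up,
// and the space-separated operation.name format is an assumption.
const exampleGenerateTextSpanAttributes: Record<string, string | number> = {
  'operation.name': 'ai.generateText my-awesome-function',
  'ai.operationId': 'ai.generateText',
  'ai.prompt': '{"prompt":"Write a short story about a cat."}',
  'ai.response.text': 'Once upon a time, a cat...',
  'ai.response.finishReason': 'stop',
  'ai.settings.maxOutputTokens': 1024,
};

// Each key mirrors an attribute name documented above.
console.log(Object.keys(exampleGenerateTextSpanAttributes).join(', '));
```
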
### streamText function

`streamText` records 3 types of spans and 2 types of events:

- `ai.streamText` (span): the full length of the streamText call. It contains an `ai.streamText.doStream` span.
  It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:

  - `operation.name`: `ai.streamText` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamText"`
  - `ai.prompt`: the prompt that was used when calling `streamText`
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.finishReason`: the reason why the generation finished
  - `ai.settings.maxOutputTokens`: the maximum number of output tokens that was set

- `ai.streamText.doStream` (span): a provider doStream call.
  This span contains an `ai.stream.firstChunk` event and `ai.toolCall` spans.
  It contains the [call LLM span information](#call-llm-span-information) and the following attributes:

  - `operation.name`: `ai.streamText.doStream` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamText.doStream"`
  - `ai.prompt.messages`: the messages that were passed into the provider
  - `ai.prompt.tools`: array of stringified tool definitions. The tools can be of type `function` or `provider-defined-client`.
    Function tools have a `name`, `description` (optional), and `inputSchema` (JSON schema).
    Provider-defined-client tools have a `name`, `id`, and `input` (Record).
  - `ai.prompt.toolChoice`: the stringified tool choice setting (JSON). It has a `type` property
    (`auto`, `none`, `required`, `tool`), and if the type is `tool`, a `toolName` property with the specific tool.
  - `ai.response.text`: the text that was generated
  - `ai.response.toolCalls`: the tool calls that were made as part of the generation (stringified JSON)
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk in milliseconds
  - `ai.response.msToFinish`: the time it took to receive the finish part of the LLM stream in milliseconds
  - `ai.response.avgCompletionTokensPerSecond`: the average number of completion tokens per second
  - `ai.response.finishReason`: the reason why the generation finished

- `ai.toolCall` (span): a tool call that is made as part of the streamText call. See [Tool call spans](#tool-call-spans) for more details.

- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.

  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk

- `ai.stream.finish` (event): an event that is emitted when the finish part of the LLM stream is received.

It also records an `ai.stream.firstChunk` event when the first chunk of the stream is received.

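The stream timing attributes above are related by simple arithmetic. As a sketch, assuming `ai.response.avgCompletionTokensPerSecond` is the completion token count divided by the stream duration in seconds (this page does not spell out the exact formula):

```typescript
// Sketch: how the streaming timing metrics might relate, under the
// assumption that the average is completionTokens / (msToFinish / 1000).
function avgCompletionTokensPerSecond(
  completionTokens: number,
  msToFinish: number,
): number {
  return completionTokens / (msToFinish / 1000);
}

// 200 completion tokens over a 4-second stream → 50 tokens/second.
console.log(avgCompletionTokensPerSecond(200, 4000)); // 50
```
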
154
|
+
### generateObject function
|
|
155
|
+
|
|
156
|
+
`generateObject` records 2 types of spans:
|
|
157
|
+
|
|
158
|
+
- `ai.generateObject` (span): the full length of the generateObject call. It contains 1 or more `ai.generateObject.doGenerate` spans.
|
|
159
|
+
It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
|
|
160
|
+
|
|
161
|
+
- `operation.name`: `ai.generateObject` and the functionId that was set through `telemetry.functionId`
|
|
162
|
+
- `ai.operationId`: `"ai.generateObject"`
|
|
163
|
+
- `ai.prompt`: the prompt that was used when calling `generateObject`
|
|
164
|
+
- `ai.schema`: Stringified JSON schema version of the schema that was passed into the `generateObject` function
|
|
165
|
+
- `ai.schema.name`: the name of the schema that was passed into the `generateObject` function
|
|
166
|
+
- `ai.schema.description`: the description of the schema that was passed into the `generateObject` function
|
|
167
|
+
- `ai.response.object`: the object that was generated (stringified JSON)
|
|
168
|
+
- `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
|
|
169
|
+
|
|
170
|
+
- `ai.generateObject.doGenerate` (span): a provider doGenerate call.
|
|
171
|
+
It contains the [call LLM span information](#call-llm-span-information) and the following attributes:
|
|
172
|
+
|
|
173
|
+
- `operation.name`: `ai.generateObject.doGenerate` and the functionId that was set through `telemetry.functionId`
|
|
174
|
+
- `ai.operationId`: `"ai.generateObject.doGenerate"`
|
|
175
|
+
- `ai.prompt.messages`: the messages that were passed into the provider
|
|
176
|
+
- `ai.response.object`: the object that was generated (stringified JSON)
|
|
177
|
+
- `ai.response.finishReason`: the reason why the generation finished
|
|
178
|
+
|
|
179
|
+
### streamObject function
|
|
180
|
+
|
|
181
|
+
`streamObject` records 2 types of spans and 1 type of event:
|
|
182
|
+
|
|
183
|
+
- `ai.streamObject` (span): the full length of the streamObject call. It contains 1 or more `ai.streamObject.doStream` spans.
|
|
184
|
+
It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:
|
|
185
|
+
|
|
186
|
+
- `operation.name`: `ai.streamObject` and the functionId that was set through `telemetry.functionId`
|
|
187
|
+
- `ai.operationId`: `"ai.streamObject"`
|
|
188
|
+
- `ai.prompt`: the prompt that was used when calling `streamObject`
|
|
189
|
+
- `ai.schema`: Stringified JSON schema version of the schema that was passed into the `streamObject` function
|
|
190
|
+
- `ai.schema.name`: the name of the schema that was passed into the `streamObject` function
|
|
191
|
+
- `ai.schema.description`: the description of the schema that was passed into the `streamObject` function
|
|
192
|
+
- `ai.response.object`: the object that was generated (stringified JSON)
|
|
193
|
+
- `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`
|
|
194
|
+
|
|
195
|
+
- `ai.streamObject.doStream` (span): a provider doStream call.
|
|
196
|
+
This span contains an `ai.stream.firstChunk` event.
|
|
197
|
+
It contains the [call LLM span information](#call-llm-span-information) and the following attributes:
|
|
198
|
+
|
|
199
|
+
- `operation.name`: `ai.streamObject.doStream` and the functionId that was set through `telemetry.functionId`
|
|
200
|
+
- `ai.operationId`: `"ai.streamObject.doStream"`
|
|
201
|
+
- `ai.prompt.messages`: the messages that were passed into the provider
|
|
202
|
+
- `ai.response.object`: the object that was generated (stringified JSON)
|
|
203
|
+
- `ai.response.msToFirstChunk`: the time it took to receive the first chunk
|
|
204
|
+
- `ai.response.finishReason`: the reason why the generation finished
|
|
205
|
+
|
|
206
|
+
- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
|
|
207
|
+
- `ai.response.msToFirstChunk`: the time it took to receive the first chunk
|
|
208
|
+
|
|
209
|
+
### embed function

`embed` records 2 types of spans:

- `ai.embed` (span): the full length of the embed call. It contains 1 `ai.embed.doEmbed` span.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed"`
  - `ai.value`: the value that was passed into the `embed` function
  - `ai.embedding`: a JSON-stringified embedding

- `ai.embed.doEmbed` (span): a provider doEmbed call.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embed.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed.doEmbed"`
  - `ai.values`: the values that were passed into the provider (array)
  - `ai.embeddings`: an array of JSON-stringified embeddings

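As an illustration, a telemetry-enabled `embed` call along these lines would produce one `ai.embed` span wrapping one `ai.embed.doEmbed` span. This is a hedged sketch: the model and `functionId` are placeholders, and `experimental_telemetry` is the telemetry option used throughout this guide.

```ts
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

const { embedding } = await embed({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  value: 'sunny day at the beach', // recorded as ai.value
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'embed-user-query', // appended to operation.name
  },
});
```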
### embedMany function

`embedMany` records 2 types of spans:

- `ai.embedMany` (span): the full length of the embedMany call. It contains 1 or more `ai.embedMany.doEmbed` spans.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embedMany` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany"`
  - `ai.values`: the values that were passed into the `embedMany` function
  - `ai.embeddings`: an array of JSON-stringified embeddings

- `ai.embedMany.doEmbed` (span): a provider doEmbed call.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embedMany.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany.doEmbed"`
  - `ai.values`: the values that were sent to the provider
  - `ai.embeddings`: an array of JSON-stringified embeddings for each value

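A hedged `embedMany` sketch (model and `functionId` are placeholders). When a provider limits the batch size, the values are split across several provider calls, which is why a single `ai.embedMany` span can contain more than one `ai.embedMany.doEmbed` span:

```ts
import { embedMany } from 'ai';
import { openai } from '@ai-sdk/openai';

const { embeddings } = await embedMany({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values: ['sunny day at the beach', 'rainy night in the city'], // recorded as ai.values
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'embed-documents', // appended to operation.name
  },
});
```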
## Span Details

### Basic LLM span information

Many spans that use LLMs (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, `ai.streamText.doStream`, `ai.generateObject`, `ai.generateObject.doGenerate`, `ai.streamObject`, `ai.streamObject.doStream`) contain the following attributes:

- `resource.name`: the functionId that was set through `telemetry.functionId`
- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.response.providerMetadata`: provider-specific metadata returned with the generation response
- `ai.settings.maxRetries`: the maximum number of retries that was set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used

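The `functionId` and `metadata` attributes above come directly from the telemetry options passed to the call. A hedged sketch (model, ids, and metadata values are placeholders):

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a short greeting.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'greet-user', // -> resource.name and ai.telemetry.functionId
    metadata: { userId: 'user-123' }, // -> ai.telemetry.metadata.userId
  },
});
```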
### Call LLM span information

Spans that correspond to individual LLM calls (`ai.generateText.doGenerate`, `ai.streamText.doStream`, `ai.generateObject.doGenerate`, `ai.streamObject.doStream`) contain [basic LLM span information](#basic-llm-span-information) and the following attributes:

- `ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
- `ai.response.id`: the id of the response. Uses the ID from the provider when available.
- `ai.response.timestamp`: the timestamp of the response. Uses the timestamp from the provider when available.
- [Semantic Conventions for GenAI operations](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/)
  - `gen_ai.system`: the provider that was used
  - `gen_ai.request.model`: the model that was requested
  - `gen_ai.request.temperature`: the temperature that was set
  - `gen_ai.request.max_tokens`: the maximum number of tokens that was set
  - `gen_ai.request.frequency_penalty`: the frequency penalty that was set
  - `gen_ai.request.presence_penalty`: the presence penalty that was set
  - `gen_ai.request.top_k`: the topK parameter value that was set
  - `gen_ai.request.top_p`: the topP parameter value that was set
  - `gen_ai.request.stop_sequences`: the stop sequences
  - `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
  - `gen_ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
  - `gen_ai.response.id`: the id of the response. Uses the ID from the provider when available.
  - `gen_ai.usage.input_tokens`: the number of prompt tokens that were used
  - `gen_ai.usage.output_tokens`: the number of completion tokens that were used

### Basic embedding span information

Many spans that use embedding models (`ai.embed`, `ai.embed.doEmbed`, `ai.embedMany`, `ai.embedMany.doEmbed`) contain the following attributes:

- `ai.model.id`: the id of the model
- `ai.model.provider`: the provider of the model
- `ai.request.headers.*`: the request headers that were passed in through `headers`
- `ai.settings.maxRetries`: the maximum number of retries that was set
- `ai.telemetry.functionId`: the functionId that was set through `telemetry.functionId`
- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.tokens`: the number of tokens that were used
- `resource.name`: the functionId that was set through `telemetry.functionId`

### Tool call spans

Tool call spans (`ai.toolCall`) contain the following attributes:

- `operation.name`: `"ai.toolCall"`
- `ai.operationId`: `"ai.toolCall"`
- `ai.toolCall.name`: the name of the tool
- `ai.toolCall.id`: the id of the tool call
- `ai.toolCall.args`: the input parameters of the tool call
- `ai.toolCall.result`: the output result of the tool call. Only available if the tool call is successful and the result is serializable.
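
For example, each execution of a tool defined along these lines is recorded as an `ai.toolCall` span, with the tool name, the parsed input, and the serializable return value landing in the attributes above. This is a hedged sketch: the model, tool name, and return shape are placeholders.

```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in Berlin?',
  tools: {
    // recorded as ai.toolCall.name = "weather";
    // the input becomes ai.toolCall.args and the returned
    // object (serializable) becomes ai.toolCall.result
    weather: tool({
      description: 'Get the weather for a city',
      inputSchema: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperature: 20 }),
    }),
  },
  experimental_telemetry: { isEnabled: true },
});
```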
---
title: DevTools
description: Debug and inspect AI SDK applications with DevTools
---

# DevTools

<Note type="warning">
  AI SDK DevTools is experimental and intended for local development only. Do
  not use it in production environments.
</Note>

AI SDK DevTools gives you full visibility into your AI SDK calls with [`generateText`](/docs/reference/ai-sdk-core/generate-text), [`streamText`](/docs/reference/ai-sdk-core/stream-text), and [`ToolLoopAgent`](/docs/reference/ai-sdk-core/tool-loop-agent). It helps you debug and inspect LLM requests, responses, tool calls, and multi-step interactions through a web-based UI.

DevTools is composed of two parts:

1. **Middleware**: Captures runs and steps from your AI SDK calls
2. **Viewer**: A web UI to inspect the captured data

## Installation

Install the DevTools package:

```bash
pnpm add @ai-sdk/devtools
```

## Requirements

- AI SDK v6 beta (`ai@^6.0.0-beta.0`)
- Node.js compatible runtime

## Using DevTools

### Add the middleware

Wrap your language model with the DevTools middleware using [`wrapLanguageModel`](/docs/ai-sdk-core/middleware):

```ts
import { wrapLanguageModel, gateway } from 'ai';
import { devToolsMiddleware } from '@ai-sdk/devtools';

const model = wrapLanguageModel({
  model: gateway('anthropic/claude-sonnet-4.5'),
  middleware: devToolsMiddleware(),
});
```

The wrapped model can be used with any AI SDK Core function:

```ts highlight="4"
import { generateText } from 'ai';

const result = await generateText({
  model, // wrapped model with DevTools
  prompt: 'What cities are in the United States?',
});
```

### Launch the viewer

Start the DevTools viewer:

```bash
npx @ai-sdk/devtools
```

Open [http://localhost:4983](http://localhost:4983) to view your AI SDK interactions.

## Captured data

The DevTools middleware captures the following information from your AI SDK calls:

- **Input parameters and prompts**: View the complete input sent to your LLM
- **Output content and tool calls**: Inspect generated text and tool invocations
- **Token usage and timing**: Monitor resource consumption and performance
- **Raw provider data**: Access complete request and response payloads

### Runs and steps

DevTools organizes captured data into runs and steps:

- **Run**: A complete multi-step AI interaction, grouped by the initial prompt
- **Step**: A single LLM call within a run (e.g., one `generateText` or `streamText` call)

Multi-step interactions, such as those created by tool calling or agent loops, are grouped together as a single run with multiple steps.

## How it works

The DevTools middleware intercepts all `generateText` and `streamText` calls through the [language model middleware](/docs/ai-sdk-core/middleware) system. Captured data is stored locally in a JSON file (`.devtools/generations.json`) and served through a web UI built with Hono and React.

<Note type="warning">
  The middleware automatically adds `.devtools` to your `.gitignore` file.
  Verify that `.devtools` is in your `.gitignore` to ensure you don't commit
  sensitive AI interaction data to your repository.
</Note>

## Security considerations

DevTools stores all AI interactions locally in plain text files, including:

- User prompts and messages
- LLM responses
- Tool call arguments and results
- API request and response data

**Only use DevTools in local development environments.** Do not enable DevTools in production or when handling sensitive data.
---
title: AI SDK Core
description: Learn about AI SDK Core.
---

# AI SDK Core

<IndexCards
  cards={[
    {
      title: 'Overview',
      description:
        'Learn about AI SDK Core and how to work with Large Language Models (LLMs).',
      href: '/docs/ai-sdk-core/overview',
    },
    {
      title: 'Generating Text',
      description: 'Learn how to generate text.',
      href: '/docs/ai-sdk-core/generating-text',
    },
    {
      title: 'Generating Structured Data',
      description: 'Learn how to generate structured data.',
      href: '/docs/ai-sdk-core/generating-structured-data',
    },
    {
      title: 'Tool Calling',
      description: 'Learn how to do tool calling with AI SDK Core.',
      href: '/docs/ai-sdk-core/tools-and-tool-calling',
    },
    {
      title: 'Prompt Engineering',
      description: 'Learn how to write prompts with AI SDK Core.',
      href: '/docs/ai-sdk-core/prompt-engineering',
    },
    {
      title: 'Settings',
      description:
        'Learn how to configure settings for language model generations.',
      href: '/docs/ai-sdk-core/settings',
    },
    {
      title: 'Embeddings',
      description: 'Learn how to use embeddings with AI SDK Core.',
      href: '/docs/ai-sdk-core/embeddings',
    },
    {
      title: 'Image Generation',
      description: 'Learn how to generate images with AI SDK Core.',
      href: '/docs/ai-sdk-core/image-generation',
    },
    {
      title: 'Transcription',
      description: 'Learn how to transcribe audio with AI SDK Core.',
      href: '/docs/ai-sdk-core/transcription',
    },
    {
      title: 'Speech',
      description: 'Learn how to generate speech with AI SDK Core.',
      href: '/docs/ai-sdk-core/speech',
    },
    {
      title: 'Provider Management',
      description: 'Learn how to work with multiple providers.',
      href: '/docs/ai-sdk-core/provider-management',
    },
    {
      title: 'Middleware',
      description: 'Learn how to use middleware with AI SDK Core.',
      href: '/docs/ai-sdk-core/middleware',
    },
    {
      title: 'Error Handling',
      description: 'Learn how to handle errors with AI SDK Core.',
      href: '/docs/ai-sdk-core/error-handling',
    },
    {
      title: 'Testing',
      description: 'Learn how to test with AI SDK Core.',
      href: '/docs/ai-sdk-core/testing',
    },
    {
      title: 'Telemetry',
      description: 'Learn how to use telemetry with AI SDK Core.',
      href: '/docs/ai-sdk-core/telemetry',
    },
  ]}
/>
---
title: Overview
description: An overview of AI SDK UI.
---

# AI SDK UI

AI SDK UI is designed to help you build interactive chat, completion, and assistant applications with ease. It is a **framework-agnostic toolkit** that streamlines the integration of advanced AI functionality into your applications.

AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently. With three main hooks — **`useChat`**, **`useCompletion`**, and **`useObject`** — you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.

- **[`useChat`](/docs/ai-sdk-ui/chatbot)** offers real-time streaming of chat messages, abstracting state management for inputs, messages, loading, and errors, allowing for seamless integration into any UI design.
- **[`useCompletion`](/docs/ai-sdk-ui/completion)** enables you to handle text completions in your applications, managing the prompt input and automatically updating the UI as new completions are streamed.
- **[`useObject`](/docs/ai-sdk-ui/object-generation)** allows you to consume streamed JSON objects, providing a simple way to handle and display structured data in your application.

These hooks are designed to reduce the complexity and time required to implement AI interactions, letting you focus on creating exceptional user experiences.
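
As a minimal sketch of the hook-based approach in React, a chat component can look roughly like this (assuming `@ai-sdk/react` and a backing `/api/chat` route; the exact message rendering and send API may differ by version):

```ts
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  // useChat manages message state and streaming updates
  const { messages, sendMessage } = useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {message.role}: {/* render message parts here */}
        </div>
      ))}
      <button onClick={() => sendMessage({ text: 'Hello!' })}>Send</button>
    </div>
  );
}
```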

## UI Framework Support

AI SDK UI supports the following frameworks: [React](https://react.dev/), [Svelte](https://svelte.dev/), [Vue.js](https://vuejs.org/), [Angular](https://angular.dev/), and [SolidJS](https://www.solidjs.com/).

Here is a comparison of the supported functions across these frameworks:

|                                                                 | [useChat](/docs/reference/ai-sdk-ui/use-chat) | [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | [useObject](/docs/reference/ai-sdk-ui/use-object) |
| --------------------------------------------------------------- | --------------------------------------------- | --------------------------------------------------------- | ------------------------------------------------- |
| React `@ai-sdk/react`                                           | <Check size={18} />                           | <Check size={18} />                                        | <Check size={18} />                                |
| Vue.js `@ai-sdk/vue`                                            | <Check size={18} />                           | <Check size={18} />                                        | <Check size={18} />                                |
| Svelte `@ai-sdk/svelte`                                         | <Check size={18} /> Chat                      | <Check size={18} /> Completion                             | <Check size={18} /> StructuredObject               |
| Angular `@ai-sdk/angular`                                       | <Check size={18} /> Chat                      | <Check size={18} /> Completion                             | <Check size={18} /> StructuredObject               |
| [SolidJS](https://github.com/kodehort/ai-sdk-solid) (community) | <Check size={18} />                           | <Check size={18} />                                        | <Check size={18} />                                |

## Framework Examples

Explore these example implementations for different frameworks:

- [**Next.js**](https://github.com/vercel/ai/tree/main/examples/next-openai)
- [**Nuxt**](https://github.com/vercel/ai/tree/main/examples/nuxt-openai)
- [**SvelteKit**](https://github.com/vercel/ai/tree/main/examples/sveltekit-openai)
- [**Angular**](https://github.com/vercel/ai/tree/main/examples/angular)

## API Reference

Please check out the [AI SDK UI API Reference](/docs/reference/ai-sdk-ui) for more details on each function.