@mastra/mcp-docs-server 1.1.8 → 1.1.9-alpha.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/adding-voice.md +4 -4
- package/.docs/docs/agents/agent-approval.md +3 -3
- package/.docs/docs/agents/agent-memory.md +3 -3
- package/.docs/docs/agents/guardrails.md +3 -3
- package/.docs/docs/agents/network-approval.md +5 -2
- package/.docs/docs/agents/networks.md +2 -2
- package/.docs/docs/agents/overview.md +2 -2
- package/.docs/docs/agents/processors.md +42 -24
- package/.docs/docs/agents/structured-output.md +2 -2
- package/.docs/docs/agents/supervisor-agents.md +3 -3
- package/.docs/docs/agents/using-tools.md +3 -3
- package/.docs/docs/build-with-ai/mcp-docs-server.md +5 -5
- package/.docs/docs/build-with-ai/skills.md +2 -2
- package/.docs/docs/community/contributing-templates.md +1 -1
- package/.docs/docs/community/discord.md +3 -3
- package/.docs/docs/community/licensing.md +2 -2
- package/.docs/docs/deployment/cloud-providers.md +2 -2
- package/.docs/docs/deployment/mastra-server.md +2 -2
- package/.docs/docs/deployment/monorepo.md +1 -1
- package/.docs/docs/deployment/overview.md +3 -3
- package/.docs/docs/deployment/studio.md +3 -3
- package/.docs/docs/deployment/web-framework.md +2 -2
- package/.docs/docs/deployment/workflow-runners.md +1 -1
- package/.docs/docs/evals/built-in-scorers.md +1 -1
- package/.docs/docs/evals/custom-scorers.md +6 -6
- package/.docs/docs/evals/overview.md +2 -2
- package/.docs/docs/evals/running-in-ci.md +6 -6
- package/.docs/docs/getting-started/build-with-ai.md +3 -3
- package/.docs/docs/getting-started/manual-install.md +1 -1
- package/.docs/docs/getting-started/project-structure.md +2 -2
- package/.docs/docs/index.md +63 -17
- package/.docs/docs/mastra-cloud/deployment.md +1 -1
- package/.docs/docs/mastra-cloud/studio.md +1 -1
- package/.docs/docs/mcp/overview.md +1 -1
- package/.docs/docs/mcp/publishing-mcp-server.md +4 -4
- package/.docs/docs/memory/memory-processors.md +9 -9
- package/.docs/docs/memory/message-history.md +4 -4
- package/.docs/docs/memory/observational-memory.md +11 -7
- package/.docs/docs/memory/semantic-recall.md +9 -9
- package/.docs/docs/memory/storage.md +1 -1
- package/.docs/docs/memory/working-memory.md +20 -20
- package/.docs/docs/observability/datasets/overview.md +1 -1
- package/.docs/docs/observability/datasets/running-experiments.md +1 -1
- package/.docs/docs/observability/logging.md +1 -1
- package/.docs/docs/observability/overview.md +5 -5
- package/.docs/docs/observability/tracing/bridges/otel.md +9 -9
- package/.docs/docs/observability/tracing/exporters/arize.md +3 -3
- package/.docs/docs/observability/tracing/exporters/braintrust.md +1 -1
- package/.docs/docs/observability/tracing/exporters/cloud.md +2 -2
- package/.docs/docs/observability/tracing/exporters/datadog.md +3 -3
- package/.docs/docs/observability/tracing/exporters/default.md +8 -8
- package/.docs/docs/observability/tracing/exporters/laminar.md +1 -1
- package/.docs/docs/observability/tracing/exporters/langfuse.md +3 -3
- package/.docs/docs/observability/tracing/exporters/langsmith.md +4 -4
- package/.docs/docs/observability/tracing/exporters/otel.md +8 -8
- package/.docs/docs/observability/tracing/exporters/posthog.md +2 -2
- package/.docs/docs/observability/tracing/exporters/sentry.md +4 -4
- package/.docs/docs/observability/tracing/overview.md +24 -24
- package/.docs/docs/observability/tracing/processors/sensitive-data-filter.md +13 -13
- package/.docs/docs/rag/chunking-and-embedding.md +5 -5
- package/.docs/docs/rag/overview.md +2 -2
- package/.docs/docs/rag/retrieval.md +4 -4
- package/.docs/docs/rag/vector-databases.md +13 -13
- package/.docs/docs/server/auth/auth0.md +2 -2
- package/.docs/docs/server/auth/clerk.md +1 -1
- package/.docs/docs/server/auth/composite-auth.md +9 -9
- package/.docs/docs/server/auth/custom-auth-provider.md +12 -12
- package/.docs/docs/server/auth/firebase.md +3 -3
- package/.docs/docs/server/auth/jwt.md +1 -1
- package/.docs/docs/server/auth/simple-auth.md +9 -9
- package/.docs/docs/server/auth/supabase.md +1 -1
- package/.docs/docs/server/auth/workos.md +1 -1
- package/.docs/docs/server/auth.md +2 -2
- package/.docs/docs/server/custom-adapters.md +7 -7
- package/.docs/docs/server/custom-api-routes.md +2 -2
- package/.docs/docs/server/mastra-client.md +2 -2
- package/.docs/docs/server/mastra-server.md +2 -2
- package/.docs/docs/server/request-context.md +2 -2
- package/.docs/docs/server/server-adapters.md +3 -3
- package/.docs/docs/streaming/events.md +2 -2
- package/.docs/docs/streaming/overview.md +2 -2
- package/.docs/docs/streaming/tool-streaming.md +46 -32
- package/.docs/docs/streaming/workflow-streaming.md +1 -1
- package/.docs/docs/voice/overview.md +3 -3
- package/.docs/docs/voice/speech-to-speech.md +1 -1
- package/.docs/docs/voice/speech-to-text.md +2 -2
- package/.docs/docs/voice/text-to-speech.md +2 -2
- package/.docs/docs/workflows/agents-and-tools.md +1 -1
- package/.docs/docs/workflows/control-flow.md +45 -3
- package/.docs/docs/workflows/error-handling.md +4 -4
- package/.docs/docs/workflows/overview.md +3 -3
- package/.docs/docs/workflows/snapshots.md +1 -1
- package/.docs/docs/workflows/suspend-and-resume.md +1 -1
- package/.docs/docs/workflows/time-travel.md +3 -3
- package/.docs/docs/workflows/workflow-state.md +1 -1
- package/.docs/docs/workspace/filesystem.md +3 -3
- package/.docs/docs/workspace/overview.md +53 -8
- package/.docs/docs/workspace/sandbox.md +72 -13
- package/.docs/docs/workspace/search.md +1 -1
- package/.docs/docs/workspace/skills.md +4 -4
- package/.docs/guides/build-your-ui/ai-sdk-ui.md +2 -2
- package/.docs/guides/build-your-ui/assistant-ui.md +1 -1
- package/.docs/guides/build-your-ui/copilotkit.md +2 -2
- package/.docs/guides/deployment/digital-ocean.md +1 -1
- package/.docs/guides/deployment/inngest.md +4 -4
- package/.docs/guides/getting-started/astro.md +1 -1
- package/.docs/guides/getting-started/electron.md +1 -1
- package/.docs/guides/getting-started/next-js.md +1 -1
- package/.docs/guides/getting-started/vite-react.md +1 -1
- package/.docs/guides/guide/ai-recruiter.md +4 -4
- package/.docs/guides/guide/chef-michel.md +4 -4
- package/.docs/guides/guide/code-review-bot.md +3 -3
- package/.docs/guides/guide/dev-assistant.md +5 -5
- package/.docs/guides/guide/docs-manager.md +3 -3
- package/.docs/guides/guide/github-actions-pr-description.md +4 -4
- package/.docs/guides/guide/notes-mcp-server.md +4 -4
- package/.docs/guides/guide/research-assistant.md +4 -4
- package/.docs/guides/guide/research-coordinator.md +1 -1
- package/.docs/guides/guide/stock-agent.md +6 -6
- package/.docs/guides/guide/web-search.md +2 -2
- package/.docs/guides/guide/whatsapp-chat-bot.md +1 -1
- package/.docs/guides/migrations/agentnetwork.md +1 -1
- package/.docs/guides/migrations/ai-sdk-v4-to-v5.md +3 -3
- package/.docs/guides/migrations/network-to-supervisor.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/agent.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/client.md +2 -2
- package/.docs/guides/migrations/upgrade-to-v1/deployment.md +2 -2
- package/.docs/guides/migrations/upgrade-to-v1/evals.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/mastra.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/memory.md +2 -2
- package/.docs/guides/migrations/upgrade-to-v1/overview.md +3 -3
- package/.docs/guides/migrations/upgrade-to-v1/storage.md +4 -4
- package/.docs/guides/migrations/upgrade-to-v1/tools.md +2 -2
- package/.docs/guides/migrations/upgrade-to-v1/tracing.md +2 -2
- package/.docs/guides/migrations/upgrade-to-v1/vectors.md +3 -3
- package/.docs/guides/migrations/upgrade-to-v1/voice.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/workflows.md +6 -6
- package/.docs/guides/migrations/vnext-to-standard-apis.md +3 -3
- package/.docs/models/embeddings.md +4 -4
- package/.docs/models/gateways/custom-gateways.md +4 -4
- package/.docs/models/gateways/netlify.md +2 -3
- package/.docs/models/gateways/openrouter.md +9 -2
- package/.docs/models/gateways/vercel.md +11 -2
- package/.docs/models/gateways.md +2 -2
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/302ai.md +3 -3
- package/.docs/models/providers/abacus.md +24 -14
- package/.docs/models/providers/aihubmix.md +10 -5
- package/.docs/models/providers/alibaba-cn.md +83 -74
- package/.docs/models/providers/alibaba-coding-plan-cn.md +78 -0
- package/.docs/models/providers/alibaba-coding-plan.md +78 -0
- package/.docs/models/providers/alibaba.md +3 -3
- package/.docs/models/providers/anthropic.md +4 -4
- package/.docs/models/providers/bailing.md +3 -3
- package/.docs/models/providers/baseten.md +3 -3
- package/.docs/models/providers/berget.md +3 -3
- package/.docs/models/providers/cerebras.md +4 -4
- package/.docs/models/providers/chutes.md +7 -6
- package/.docs/models/providers/clarifai.md +81 -0
- package/.docs/models/providers/cloudferro-sherlock.md +8 -7
- package/.docs/models/providers/cloudflare-workers-ai.md +6 -5
- package/.docs/models/providers/cortecs.md +10 -8
- package/.docs/models/providers/deepinfra.md +11 -6
- package/.docs/models/providers/deepseek.md +4 -4
- package/.docs/models/providers/drun.md +73 -0
- package/.docs/models/providers/evroc.md +3 -3
- package/.docs/models/providers/fastrouter.md +3 -3
- package/.docs/models/providers/fireworks-ai.md +3 -3
- package/.docs/models/providers/firmware.md +31 -23
- package/.docs/models/providers/friendli.md +3 -3
- package/.docs/models/providers/github-models.md +3 -3
- package/.docs/models/providers/google.md +7 -5
- package/.docs/models/providers/groq.md +4 -4
- package/.docs/models/providers/helicone.md +3 -3
- package/.docs/models/providers/huggingface.md +3 -3
- package/.docs/models/providers/iflowcn.md +3 -3
- package/.docs/models/providers/inception.md +7 -5
- package/.docs/models/providers/inference.md +3 -3
- package/.docs/models/providers/io-net.md +3 -3
- package/.docs/models/providers/jiekou.md +3 -3
- package/.docs/models/providers/kilo.md +6 -4
- package/.docs/models/providers/kimi-for-coding.md +4 -4
- package/.docs/models/providers/kuae-cloud-coding-plan.md +3 -3
- package/.docs/models/providers/llama.md +3 -3
- package/.docs/models/providers/lmstudio.md +3 -3
- package/.docs/models/providers/lucidquery.md +3 -3
- package/.docs/models/providers/meganova.md +3 -3
- package/.docs/models/providers/minimax-cn-coding-plan.md +4 -4
- package/.docs/models/providers/minimax-cn.md +4 -4
- package/.docs/models/providers/minimax-coding-plan.md +4 -4
- package/.docs/models/providers/minimax.md +4 -4
- package/.docs/models/providers/mistral.md +4 -4
- package/.docs/models/providers/moark.md +3 -3
- package/.docs/models/providers/modelscope.md +3 -3
- package/.docs/models/providers/moonshotai-cn.md +3 -3
- package/.docs/models/providers/moonshotai.md +3 -3
- package/.docs/models/providers/morph.md +3 -3
- package/.docs/models/providers/nano-gpt.md +523 -42
- package/.docs/models/providers/nebius.md +37 -37
- package/.docs/models/providers/nova.md +3 -3
- package/.docs/models/providers/novita-ai.md +3 -3
- package/.docs/models/providers/nvidia.md +7 -5
- package/.docs/models/providers/ollama-cloud.md +4 -5
- package/.docs/models/providers/openai.md +7 -5
- package/.docs/models/providers/opencode-go.md +3 -3
- package/.docs/models/providers/opencode.md +39 -36
- package/.docs/models/providers/ovhcloud.md +3 -3
- package/.docs/models/providers/perplexity-agent.md +4 -4
- package/.docs/models/providers/perplexity.md +4 -4
- package/.docs/models/providers/poe.md +11 -5
- package/.docs/models/providers/privatemode-ai.md +3 -3
- package/.docs/models/providers/qihang-ai.md +3 -3
- package/.docs/models/providers/qiniu-ai.md +23 -8
- package/.docs/models/providers/requesty.md +20 -4
- package/.docs/models/providers/scaleway.md +3 -3
- package/.docs/models/providers/siliconflow-cn.md +10 -4
- package/.docs/models/providers/siliconflow.md +3 -3
- package/.docs/models/providers/stackit.md +3 -3
- package/.docs/models/providers/stepfun.md +3 -3
- package/.docs/models/providers/submodel.md +3 -3
- package/.docs/models/providers/synthetic.md +3 -3
- package/.docs/models/providers/togetherai.md +5 -7
- package/.docs/models/providers/upstage.md +3 -3
- package/.docs/models/providers/vivgrid.md +4 -4
- package/.docs/models/providers/vultr.md +3 -3
- package/.docs/models/providers/wandb.md +3 -3
- package/.docs/models/providers/xai.md +34 -31
- package/.docs/models/providers/xiaomi.md +4 -4
- package/.docs/models/providers/zai-coding-plan.md +3 -3
- package/.docs/models/providers/zai.md +3 -3
- package/.docs/models/providers/zenmux.md +7 -5
- package/.docs/models/providers/zhipuai-coding-plan.md +3 -3
- package/.docs/models/providers/zhipuai.md +3 -3
- package/.docs/models/providers.md +4 -0
- package/.docs/reference/agents/agent.md +3 -3
- package/.docs/reference/agents/generateLegacy.md +1 -1
- package/.docs/reference/agents/getDefaultGenerateOptions.md +1 -1
- package/.docs/reference/agents/getDefaultOptions.md +1 -1
- package/.docs/reference/agents/getDefaultStreamOptions.md +1 -1
- package/.docs/reference/agents/getDescription.md +1 -1
- package/.docs/reference/agents/network.md +5 -3
- package/.docs/reference/ai-sdk/handle-chat-stream.md +2 -0
- package/.docs/reference/ai-sdk/handle-network-stream.md +2 -0
- package/.docs/reference/ai-sdk/network-route.md +2 -0
- package/.docs/reference/ai-sdk/to-ai-sdk-stream.md +1 -1
- package/.docs/reference/ai-sdk/to-ai-sdk-v4-messages.md +1 -1
- package/.docs/reference/ai-sdk/to-ai-sdk-v5-messages.md +1 -1
- package/.docs/reference/auth/auth0.md +7 -7
- package/.docs/reference/auth/better-auth.md +2 -2
- package/.docs/reference/auth/clerk.md +1 -1
- package/.docs/reference/auth/firebase.md +5 -5
- package/.docs/reference/auth/jwt.md +1 -1
- package/.docs/reference/auth/supabase.md +1 -1
- package/.docs/reference/auth/workos.md +6 -6
- package/.docs/reference/cli/mastra.md +5 -5
- package/.docs/reference/client-js/agents.md +22 -22
- package/.docs/reference/client-js/error-handling.md +2 -2
- package/.docs/reference/client-js/logs.md +2 -2
- package/.docs/reference/client-js/mastra-client.md +2 -2
- package/.docs/reference/client-js/memory.md +6 -6
- package/.docs/reference/client-js/observability.md +4 -4
- package/.docs/reference/client-js/telemetry.md +1 -1
- package/.docs/reference/client-js/tools.md +3 -3
- package/.docs/reference/client-js/vectors.md +2 -2
- package/.docs/reference/client-js/workflows.md +12 -12
- package/.docs/reference/configuration.md +62 -6
- package/.docs/reference/core/getDeployer.md +1 -1
- package/.docs/reference/core/getGatewayById.md +1 -1
- package/.docs/reference/core/getLogger.md +1 -1
- package/.docs/reference/core/getMCPServer.md +2 -2
- package/.docs/reference/core/getMCPServerById.md +2 -2
- package/.docs/reference/core/getMemory.md +1 -1
- package/.docs/reference/core/getScorer.md +4 -4
- package/.docs/reference/core/getScorerById.md +2 -2
- package/.docs/reference/core/getServer.md +1 -1
- package/.docs/reference/core/getStorage.md +1 -1
- package/.docs/reference/core/getStoredAgentById.md +3 -3
- package/.docs/reference/core/getTelemetry.md +1 -1
- package/.docs/reference/core/getWorkflow.md +1 -1
- package/.docs/reference/core/listAgents.md +1 -1
- package/.docs/reference/core/listMCPServers.md +3 -3
- package/.docs/reference/core/listMemory.md +1 -1
- package/.docs/reference/core/listScorers.md +1 -1
- package/.docs/reference/core/listStoredAgents.md +3 -3
- package/.docs/reference/core/listVectors.md +1 -1
- package/.docs/reference/core/mastra-class.md +2 -2
- package/.docs/reference/core/mastra-model-gateway.md +11 -11
- package/.docs/reference/core/setLogger.md +1 -1
- package/.docs/reference/core/setStorage.md +1 -1
- package/.docs/reference/datasets/dataset.md +2 -2
- package/.docs/reference/datasets/datasets-manager.md +1 -1
- package/.docs/reference/datasets/get.md +2 -2
- package/.docs/reference/datasets/getDetails.md +1 -1
- package/.docs/reference/datasets/listItems.md +1 -1
- package/.docs/reference/deployer/vercel.md +1 -1
- package/.docs/reference/deployer.md +4 -4
- package/.docs/reference/evals/answer-relevancy.md +4 -4
- package/.docs/reference/evals/answer-similarity.md +3 -3
- package/.docs/reference/evals/bias.md +4 -4
- package/.docs/reference/evals/completeness.md +6 -6
- package/.docs/reference/evals/content-similarity.md +3 -3
- package/.docs/reference/evals/context-precision.md +9 -9
- package/.docs/reference/evals/context-relevance.md +7 -7
- package/.docs/reference/evals/create-scorer.md +7 -7
- package/.docs/reference/evals/faithfulness.md +3 -3
- package/.docs/reference/evals/hallucination.md +8 -14
- package/.docs/reference/evals/keyword-coverage.md +4 -4
- package/.docs/reference/evals/mastra-scorer.md +7 -7
- package/.docs/reference/evals/noise-sensitivity.md +11 -11
- package/.docs/reference/evals/prompt-alignment.md +5 -5
- package/.docs/reference/evals/run-evals.md +5 -5
- package/.docs/reference/evals/scorer-utils.md +17 -17
- package/.docs/reference/evals/textual-difference.md +4 -4
- package/.docs/reference/evals/tone-consistency.md +5 -5
- package/.docs/reference/evals/tool-call-accuracy.md +10 -10
- package/.docs/reference/evals/toxicity.md +3 -3
- package/.docs/reference/harness/harness-class.md +5 -3
- package/.docs/reference/index.md +2 -0
- package/.docs/reference/memory/clone-utilities.md +7 -7
- package/.docs/reference/memory/cloneThread.md +5 -5
- package/.docs/reference/memory/createThread.md +1 -1
- package/.docs/reference/memory/deleteMessages.md +1 -1
- package/.docs/reference/memory/getThreadById.md +1 -1
- package/.docs/reference/memory/listThreads.md +3 -3
- package/.docs/reference/memory/memory-class.md +1 -1
- package/.docs/reference/memory/observational-memory.md +8 -6
- package/.docs/reference/memory/recall.md +1 -1
- package/.docs/reference/observability/tracing/bridges/otel.md +6 -6
- package/.docs/reference/observability/tracing/configuration.md +17 -17
- package/.docs/reference/observability/tracing/exporters/arize.md +4 -4
- package/.docs/reference/observability/tracing/exporters/braintrust.md +3 -3
- package/.docs/reference/observability/tracing/exporters/cloud-exporter.md +6 -6
- package/.docs/reference/observability/tracing/exporters/console-exporter.md +4 -4
- package/.docs/reference/observability/tracing/exporters/datadog.md +4 -4
- package/.docs/reference/observability/tracing/exporters/default-exporter.md +6 -6
- package/.docs/reference/observability/tracing/exporters/laminar.md +2 -2
- package/.docs/reference/observability/tracing/exporters/langfuse.md +4 -4
- package/.docs/reference/observability/tracing/exporters/langsmith.md +6 -6
- package/.docs/reference/observability/tracing/exporters/otel.md +12 -12
- package/.docs/reference/observability/tracing/exporters/posthog.md +3 -3
- package/.docs/reference/observability/tracing/exporters/sentry.md +5 -5
- package/.docs/reference/observability/tracing/instances.md +9 -9
- package/.docs/reference/observability/tracing/interfaces.md +39 -39
- package/.docs/reference/observability/tracing/processors/sensitive-data-filter.md +6 -6
- package/.docs/reference/observability/tracing/spans.md +15 -13
- package/.docs/reference/processors/message-history-processor.md +1 -1
- package/.docs/reference/processors/processor-interface.md +21 -17
- package/.docs/reference/processors/token-limiter-processor.md +2 -2
- package/.docs/reference/rag/chunk.md +2 -2
- package/.docs/reference/rag/database-config.md +8 -8
- package/.docs/reference/rag/document.md +11 -11
- package/.docs/reference/rag/embeddings.md +5 -5
- package/.docs/reference/rag/extract-params.md +8 -8
- package/.docs/reference/rag/graph-rag.md +4 -4
- package/.docs/reference/rag/metadata-filters.md +15 -15
- package/.docs/reference/rag/rerank.md +2 -2
- package/.docs/reference/rag/rerankWithScorer.md +2 -2
- package/.docs/reference/server/create-route.md +2 -0
- package/.docs/reference/server/express-adapter.md +1 -1
- package/.docs/reference/server/fastify-adapter.md +1 -1
- package/.docs/reference/server/hono-adapter.md +1 -1
- package/.docs/reference/server/koa-adapter.md +2 -2
- package/.docs/reference/server/mastra-server.md +16 -16
- package/.docs/reference/server/register-api-route.md +7 -7
- package/.docs/reference/server/routes.md +1 -1
- package/.docs/reference/storage/cloudflare-d1.md +5 -5
- package/.docs/reference/storage/cloudflare.md +3 -3
- package/.docs/reference/storage/composite.md +1 -1
- package/.docs/reference/storage/convex.md +6 -6
- package/.docs/reference/storage/dynamodb.md +7 -7
- package/.docs/reference/storage/lance.md +5 -5
- package/.docs/reference/storage/libsql.md +1 -1
- package/.docs/reference/storage/mongodb.md +6 -6
- package/.docs/reference/storage/mssql.md +4 -4
- package/.docs/reference/storage/overview.md +2 -2
- package/.docs/reference/storage/postgresql.md +7 -7
- package/.docs/reference/storage/upstash.md +4 -4
- package/.docs/reference/streaming/ChunkType.md +13 -13
- package/.docs/reference/streaming/agents/MastraModelOutput.md +6 -6
- package/.docs/reference/streaming/agents/stream.md +2 -2
- package/.docs/reference/streaming/agents/streamLegacy.md +1 -1
- package/.docs/reference/streaming/workflows/observeStream.md +2 -2
- package/.docs/reference/streaming/workflows/resumeStream.md +1 -1
- package/.docs/reference/streaming/workflows/stream.md +1 -1
- package/.docs/reference/templates/overview.md +4 -4
- package/.docs/reference/tools/create-tool.md +10 -10
- package/.docs/reference/tools/document-chunker-tool.md +4 -4
- package/.docs/reference/tools/graph-rag-tool.md +7 -7
- package/.docs/reference/tools/mcp-client.md +13 -13
- package/.docs/reference/tools/mcp-server.md +27 -27
- package/.docs/reference/tools/vector-query-tool.md +12 -12
- package/.docs/reference/vectors/astra.md +13 -13
- package/.docs/reference/vectors/chroma.md +18 -18
- package/.docs/reference/vectors/convex.md +15 -15
- package/.docs/reference/vectors/couchbase.md +21 -21
- package/.docs/reference/vectors/duckdb.md +17 -17
- package/.docs/reference/vectors/elasticsearch.md +14 -14
- package/.docs/reference/vectors/lance.md +22 -22
- package/.docs/reference/vectors/libsql.md +15 -15
- package/.docs/reference/vectors/mongodb.md +18 -18
- package/.docs/reference/vectors/opensearch.md +11 -11
- package/.docs/reference/vectors/pg.md +23 -21
- package/.docs/reference/vectors/pinecone.md +15 -15
- package/.docs/reference/vectors/qdrant.md +15 -15
- package/.docs/reference/vectors/s3vectors.md +22 -22
- package/.docs/reference/vectors/turbopuffer.md +14 -14
- package/.docs/reference/vectors/upstash.md +15 -15
- package/.docs/reference/vectors/vectorize.md +16 -16
- package/.docs/reference/voice/azure.md +12 -10
- package/.docs/reference/voice/cloudflare.md +9 -7
- package/.docs/reference/voice/composite-voice.md +5 -5
- package/.docs/reference/voice/deepgram.md +5 -5
- package/.docs/reference/voice/elevenlabs.md +7 -7
- package/.docs/reference/voice/google-gemini-live.md +22 -22
- package/.docs/reference/voice/google.md +12 -12
- package/.docs/reference/voice/mastra-voice.md +18 -18
- package/.docs/reference/voice/murf.md +8 -8
- package/.docs/reference/voice/openai-realtime.md +19 -17
- package/.docs/reference/voice/openai.md +12 -8
- package/.docs/reference/voice/playai.md +9 -7
- package/.docs/reference/voice/sarvam.md +8 -6
- package/.docs/reference/voice/speechify.md +11 -9
- package/.docs/reference/voice/voice.addInstructions.md +4 -4
- package/.docs/reference/voice/voice.addTools.md +3 -3
- package/.docs/reference/voice/voice.answer.md +2 -2
- package/.docs/reference/voice/voice.close.md +4 -4
- package/.docs/reference/voice/voice.connect.md +9 -7
- package/.docs/reference/voice/voice.events.md +4 -4
- package/.docs/reference/voice/voice.getSpeakers.md +4 -4
- package/.docs/reference/voice/voice.listen.md +17 -11
- package/.docs/reference/voice/voice.off.md +4 -4
- package/.docs/reference/voice/voice.on.md +5 -5
- package/.docs/reference/voice/voice.send.md +2 -2
- package/.docs/reference/voice/voice.speak.md +19 -9
- package/.docs/reference/voice/voice.updateConfig.md +4 -4
- package/.docs/reference/workflows/run-methods/startAsync.md +1 -1
- package/.docs/reference/workflows/run-methods/timeTravel.md +1 -1
- package/.docs/reference/workflows/run.md +3 -3
- package/.docs/reference/workflows/step.md +2 -2
- package/.docs/reference/workflows/workflow-methods/create-run.md +1 -1
- package/.docs/reference/workflows/workflow.md +1 -1
- package/.docs/reference/workspace/blaxel-sandbox.md +164 -0
- package/.docs/reference/workspace/daytona-sandbox.md +50 -141
- package/.docs/reference/workspace/e2b-sandbox.md +41 -77
- package/.docs/reference/workspace/filesystem.md +25 -11
- package/.docs/reference/workspace/gcs-filesystem.md +21 -1
- package/.docs/reference/workspace/local-filesystem.md +24 -10
- package/.docs/reference/workspace/local-sandbox.md +27 -102
- package/.docs/reference/workspace/process-manager.md +296 -0
- package/.docs/reference/workspace/s3-filesystem.md +21 -1
- package/.docs/reference/workspace/sandbox.md +9 -1
- package/.docs/reference/workspace/workspace-class.md +95 -27
- package/CHANGELOG.md +15 -0
- package/dist/tools/course.d.ts +7 -27
- package/dist/tools/course.d.ts.map +1 -1
- package/dist/tools/docs.d.ts +6 -18
- package/dist/tools/docs.d.ts.map +1 -1
- package/dist/tools/embedded-docs.d.ts +12 -112
- package/dist/tools/embedded-docs.d.ts.map +1 -1
- package/dist/tools/migration.d.ts +6 -26
- package/dist/tools/migration.d.ts.map +1 -1
- package/package.json +6 -6
@@ -39,7 +39,7 @@ try {
 }
 ```

-## Working with
+## Working with audio streams

 The `speak()` and `listen()` methods work with Node.js streams. Here's how to save and load audio files:

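The voice-docs change above notes that `speak()` and `listen()` work with Node.js streams. A minimal stand-alone sketch of the save-and-reload pattern, using an in-memory `Readable` as a hypothetical stand-in for the audio stream a provider's `speak()` would return (the file name and buffer contents are illustrative):

```typescript
import { createReadStream, createWriteStream, readFileSync } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Hypothetical stand-in for the Readable audio stream speak() would return.
const audioStream = Readable.from([Buffer.from("fake-audio-bytes")]);

// Save the stream to disk.
await pipeline(audioStream, createWriteStream("speech.mp3"));

// Reopen the file as a stream, the shape a listen()-style API expects.
const savedAudio = createReadStream("speech.mp3");
savedAudio.resume(); // consume; a real app would hand this stream to listen()
```

This is only the generic Node.js streaming idiom, not Mastra-specific API.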
@@ -87,7 +87,7 @@ try {
 }
 ```

-## Speech-to-
+## Speech-to-speech voice interactions

 For more dynamic and interactive voice experiences, you can use real-time voice providers that support speech-to-speech capabilities:

@@ -323,7 +323,7 @@ For the complete list of supported AI SDK providers and their capabilities:
 - [Transcription](https://ai-sdk.dev/docs/providers/openai/transcription)
 - [Speech](https://ai-sdk.dev/docs/providers/elevenlabs/speech)

-## Supported
+## Supported voice providers

 Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-text (STT) capabilities:

@@ -341,7 +341,7 @@ Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-
 | Azure | `@mastra/voice-azure` | TTS, STT | [Documentation](https://mastra.ai/reference/voice/mastra-voice) |
 | Cloudflare | `@mastra/voice-cloudflare` | TTS | [Documentation](https://mastra.ai/reference/voice/mastra-voice) |

-## Next
+## Next steps

 - [Voice API Reference](https://mastra.ai/reference/voice/mastra-voice) - Detailed API documentation for voice capabilities
 - [Text to Speech Examples](https://github.com/mastra-ai/voice-examples/tree/main/text-to-speech) - Interactive story generator and other TTS implementations
@@ -92,7 +92,7 @@ const handleDecline = async () => {
 }
 ```

-## Tool approval with generate()
+## Tool approval with `generate()`

 Tool approval also works with the `generate()` method for non-streaming use cases. When a tool requires approval during a `generate()` call, the method returns immediately instead of executing the tool.

@@ -504,7 +504,7 @@ for await (const chunk of stream.fullStream) {
 }
 ```

-### Using suspend() in supervisor pattern
+### Using `suspend()` in supervisor pattern

 Tools can also use [`suspend()`](#approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does — the suspension surfaces at the supervisor level:

@@ -553,7 +553,7 @@ for await (const chunk of stream.fullStream) {
 }
 ```

-### Tool approval with generate()
+### Tool approval with `generate()`

 Tool approval propagation also works with `generate()` in supervisor pattern:

@@ -96,7 +96,7 @@ export const memoryAgent = new Agent({
 })
 ```

-> **Mastra Cloud Store limitation:** Agent-level storage
+> **Mastra Cloud Store limitation:** Agent-level storage isn't supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation doesn't apply if you bring your own database.

 ## Message history

@@ -127,11 +127,11 @@ const response = await memoryAgent.generate("What's my favorite color?", {
 })
 ```

-> **Warning:** Each thread has an owner (`resourceId`) that
+> **Warning:** Each thread has an owner (`resourceId`) that can't be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.

 To learn more about memory see the [Memory](https://mastra.ai/docs/memory/overview) documentation.

-## Observational
+## Observational memory

 For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.

@@ -40,7 +40,7 @@ export const moderatedAgent = new Agent({

 ## Input processors

-Input processors are applied before user messages reach the language model. They
+Input processors are applied before user messages reach the language model. They're useful for normalization, validation, content moderation, prompt injection detection, and security checks.

 ### Normalizing user messages

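The processors docs changed above describe normalization as a typical input-processing step. As a plain-function sketch of what such a step does (illustrative only, not the actual Mastra processor interface):

```typescript
// Collapse messy user input into a canonical form before it reaches the model.
// This is a stand-alone illustration, not Mastra's Processor API.
function normalizeUserMessage(text: string): string {
  return text
    .normalize("NFC")     // canonical Unicode composition
    .replace(/\s+/g, " ") // collapse runs of whitespace and newlines
    .trim();              // drop leading/trailing whitespace
}

console.log(normalizeUserMessage("  What's   the\nweather?  "));
```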
@@ -111,7 +111,7 @@ export const multilingualAgent = new Agent({

 ## Output processors

-Output processors are applied after the language model generates a response, but before it
+Output processors are applied after the language model generates a response, but before it's returned to the user. They're useful for response optimization, moderation, transformation, and applying safety controls.

 ### Batching streamed output

@@ -188,7 +188,7 @@ const scrubbedAgent = new Agent({

 ## Hybrid processors

-Hybrid processors can be applied either before messages are sent to the language model or before responses are returned to the user. They
+Hybrid processors can be applied either before messages are sent to the language model or before responses are returned to the user. They're useful for tasks like content moderation and PII redaction.

 ### Moderating input and output

@@ -1,4 +1,6 @@
-# Network
+# Network approval
+
+> **Deprecated:** Agent networks are deprecated and will be removed in a future release. Use the [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) instead. See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.

 Agent networks can require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in individual agents and workflows. When a tool, subagent, or workflow within a network requires approval or suspends execution, the network pauses and emits events that allow your application to collect user input before resuming.
@@ -269,7 +271,8 @@ Both approaches work with the same tool definitions. Automatic resumption trigge

 ## Related

-- [
+- [Supervisor Agents](https://mastra.ai/docs/agents/supervisor-agents)
+- [Migration: .network() to Supervisor Pattern](https://mastra.ai/guides/migrations/network-to-supervisor)
 - [Agent Approval](https://mastra.ai/docs/agents/agent-approval)
 - [Human-in-the-Loop](https://mastra.ai/docs/workflows/human-in-the-loop)
 - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
@@ -1,6 +1,6 @@
-# Agent
+# Agent networks

-> **Supervisor Pattern Recommended:** The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
+> **Agent Network Deprecated — Supervisor Pattern Recommended:** Agent networks are deprecated and will be removed in a future release. The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
 >
 > - **Better control**: Iteration hooks, delegation hooks, and task completion scoring give you fine-grained control over execution
 > - **Simpler API**: Uses familiar `stream()` and `generate()` methods instead of a separate `.network()` API
@@ -1,4 +1,4 @@
-# Using
+# Using agents

 Agents use LLMs and tools to solve open-ended tasks. They reason about goals, decide which tools to use, retain conversation memory, and iterate internally until the model emits a final answer or an optional stop condition is met. Agents produce structured responses you can render in your UI or process programmatically. Use agents directly or compose them into workflows or agent networks.
@@ -59,7 +59,7 @@ Agents use LLMs and tools to solve open-ended tasks. They reason about goals, de

 ### Instruction formats

-Instructions define the agent's behavior, personality, and capabilities. They
+Instructions define the agent's behavior, personality, and capabilities. They're system-level prompts that establish the agent's core identity and expertise.

 Instructions can be provided in multiple formats for greater flexibility. The examples below illustrate the supported shapes:
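As a rough sketch of what "multiple formats" can mean in practice, here is one way to normalize a few plausible instruction shapes into a single system string. The union type below is a hypothetical stand-in; the shapes Mastra actually accepts are the ones shown in the doc's own examples:

```typescript
// Hypothetical instruction shapes for illustration only; not Mastra's
// actual accepted formats.
type Instructions = string | (string | { text: string })[]

// Collapse any accepted shape into one newline-joined system prompt.
function normalizeInstructions(input: Instructions): string {
  if (typeof input === 'string') return input
  return input.map(i => (typeof i === 'string' ? i : i.text)).join('\n')
}
```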
@@ -160,7 +160,7 @@ This is useful for:
 - Filtering or modifying semantic recall content to prevent "prompt too long" errors
 - Dynamically adjusting system instructions based on the conversation

-### Per-step processing with processInputStep
+### Per-step processing with `processInputStep`

 While `processInput` runs once at the start of agent execution, `processInputStep` runs at **each step** of the agentic loop (including tool call continuations). This enables per-step configuration changes like dynamic model switching or tool choice modifications.
@@ -219,7 +219,7 @@ The method can return any combination of:
 - `modelSettings`: Modify model settings
 - `structuredOutput`: Modify structured output configuration

-#### Ensuring a final response with maxSteps
+#### Ensuring a final response with `maxSteps`

 When using `maxSteps` to limit agent execution, the agent may return an empty response if it attempts a tool call on the final step. Use `processInputStep` to force a text response on the last step:
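The control flow this hunk describes (force a text answer on the final allowed step) can be modeled in a few lines of plain TypeScript. This is a sketch of the decision only; `optionsForStep` is a made-up helper, not the Mastra API:

```typescript
// Model the maxSteps guard: on the last allowed step, disable tools so the
// model must produce text instead of attempting another tool call.
type StepOptions = { toolChoice: 'auto' | 'none' }

function optionsForStep(stepNumber: number, maxSteps: number): StepOptions {
  // Steps are 0-indexed, so the final allowed step is maxSteps - 1.
  const isLastStep = stepNumber >= maxSteps - 1
  return { toolChoice: isLastStep ? 'none' : 'auto' }
}
```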
@@ -283,7 +283,7 @@ const result = await agent.generate('Your prompt', { maxSteps: MAX_STEPS })

 This ensures that on the final allowed step (step 4 when `maxSteps` is 5, since steps are 0-indexed), the LLM generates a summary instead of attempting another tool call, and clearly indicates if the task is incomplete.

-#### Using prepareStep callback
+#### Using `prepareStep` callback

 For simpler per-step logic, you can use the `prepareStep` callback on `generate()` or `stream()` instead of creating a full processor:
@@ -303,31 +303,49 @@ await agent.generate('Complex task', {
 ### Custom output processor

 ```typescript
-import type { Processor, MastraDBMessage,
+import type { Processor, MastraDBMessage, ChunkType } from '@mastra/core'

 export class CustomOutputProcessor implements Processor {
   id = 'custom-output'

-  async processOutputResult({
-    messages,
-    context,
-  }: {
-    messages: MastraDBMessage[]
-    context: RequestContext
-  }): Promise<MastraDBMessage[]> {
+  async processOutputResult({ messages }): Promise<MastraDBMessage[]> {
     // Transform messages after the LLM generates them
     return messages.filter(msg => msg.role !== 'system')
   }

-  async processOutputStream({
-
-
-  }
-
-
-
-
-
+  async processOutputStream({ part }): Promise<ChunkType | null> {
+    // Transform or filter streaming chunks
+    return part
+  }
+}
+```
+
+The `processOutputStream` method receives all streaming chunks. To also receive custom `data-*` chunks emitted by tools via `writer.custom()`, set `processDataParts = true` on your processor. This lets you inspect, modify, or block tool-emitted data chunks before they reach the client.
+
+#### Accessing generation result data
+
+The `processOutputResult` method receives a `result` object containing the resolved generation data — the same information available in the `onFinish` callback. This lets you access token usage, generated text, finish reason, and step details.
+
+```typescript
+import type { Processor } from '@mastra/core'
+
+export class UsageTracker implements Processor {
+  id = 'usage-tracker'
+
+  async processOutputResult({ messages, result }) {
+    console.log(`Text: ${result.text}`)
+    console.log(`Tokens: ${result.usage.inputTokens} in, ${result.usage.outputTokens} out`)
+    console.log(`Finish reason: ${result.finishReason}`)
+    console.log(`Steps: ${result.steps.length}`)
+
+    // Each step contains toolCalls, toolResults, reasoning, sources, files, etc.
+    for (const step of result.steps) {
+      if (step.toolCalls?.length) {
+        console.log(`Step used ${step.toolCalls.length} tool calls`)
+      }
+    }
+
+    return messages
   }
 }
 ```
@@ -448,13 +466,13 @@ const response = await stream.response
 console.log(response.uiMessages)
 ```

-## Built-in
+## Built-in utility processors

 Mastra provides utility processors for common tasks:

 **For security and validation processors**, see the [Guardrails](https://mastra.ai/docs/agents/guardrails) page for input/output guardrails and moderation processors. **For memory-specific processors**, see the [Memory Processors](https://mastra.ai/docs/memory/memory-processors) page for processors that handle message history, semantic recall, and working memory.

-### TokenLimiter
+### `TokenLimiter`

 Prevents context window overflow by removing older messages when the total token count exceeds a specified limit.
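Conceptually, a token limiter like the one named above walks the history newest-first and keeps messages until a budget is exhausted. The sketch below assumes a crude 4-characters-per-token heuristic and is not the `TokenLimiter` implementation:

```typescript
// Conceptual sketch of what a token limiter does; not the actual
// TokenLimiter from @mastra/core, and the 4-chars-per-token estimate is a
// crude assumption rather than a real tokenizer.
type Message = { role: string; content: string }

const estimateTokens = (m: Message): number => Math.ceil(m.content.length / 4)

// Keep the newest messages that fit the budget; drop the oldest first.
function limitTokens(messages: Message[], maxTokens: number): Message[] {
  const kept: Message[] = []
  let total = 0
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i])
    if (total + cost > maxTokens) break
    kept.unshift(messages[i])
    total += cost
  }
  return kept
}
```

Walking newest-first means the most recent conversational context always survives the cut.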
@@ -488,7 +506,7 @@ const agent = new Agent({
 })
 ```

-### ToolCallFilter
+### `ToolCallFilter`

 Removes tool calls from messages sent to the LLM, saving tokens by excluding potentially verbose tool interactions.
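The filtering described above amounts to stripping tool-call and tool-result content out of the message list before it reaches the LLM. A minimal sketch, with a made-up message shape rather than Mastra's:

```typescript
// Conceptual sketch of tool-call filtering with an invented message shape;
// the real ToolCallFilter operates on Mastra's message format.
type Part = { type: 'text' | 'tool-call' | 'tool-result'; value: string }
type Msg = { role: string; parts: Part[] }

// Keep only text parts, then drop messages left with nothing to say.
function filterToolCalls(messages: Msg[]): Msg[] {
  return messages
    .map(m => ({ ...m, parts: m.parts.filter(p => p.type === 'text') }))
    .filter(m => m.parts.length > 0)
}
```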
@@ -514,7 +532,7 @@ const agent = new Agent({

 > **Note:** The example above filters tool calls and limits tokens for the LLM, but these filtered messages will still be saved to memory. To also filter messages before they're saved to memory, manually add memory processors before utility processors. See [Memory Processors](https://mastra.ai/docs/memory/memory-processors) for details.

-### ToolSearchProcessor
+### `ToolSearchProcessor`

 Enables dynamic tool discovery and loading for agents with large tool libraries. Instead of providing all tools upfront, the agent searches for tools by keyword and loads them on demand, reducing context token usage.
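The search-then-load idea can be sketched as a plain keyword match over tool descriptions. `searchTools` and the `Tool` shape here are illustrative assumptions, not the `ToolSearchProcessor` API:

```typescript
// Illustrative keyword search over a tool library; the shapes and names
// here are assumptions for the sketch, not Mastra's API.
type Tool = { name: string; description: string }

function searchTools(library: Tool[], query: string): Tool[] {
  const q = query.toLowerCase()
  return library.filter(
    t => t.name.toLowerCase().includes(q) || t.description.toLowerCase().includes(q),
  )
}
```

Only the matching tools then need to be loaded into the model's context, which is where the token savings come from.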
@@ -1,4 +1,4 @@
-# Structured
+# Structured output

 Structured output lets an agent return an object that matches the shape defined by a schema instead of returning text. The schema tells the model what fields to produce, and the model ensures the final result fits that shape.
@@ -188,7 +188,7 @@ const response = await testAgent.generate('Help me plan my day.', {
 console.log(response.object)
 ```

-> **Gemini 2.5 with tools:** Gemini 2.5 models
+> **Gemini 2.5 with tools:** Gemini 2.5 models don't support combining `response_format` (structured output) with function calling (tools) in the same API call. If your agent has tools and you're using `structuredOutput` with a Gemini 2.5 model, you must set `jsonPromptInjection: true` to avoid the error `Function calling with a response mime type: 'application/json' is unsupported`.
 >
 > ```typescript
 > const response = await agentWithTools.generate('Your prompt', {
@@ -1,4 +1,4 @@
-# Supervisor
+# Supervisor agents

 A supervisor agent coordinates multiple subagents using `agent.stream()` or `agent.generate()`. You configure subagents on the supervisor's `agents` property, and the supervisor uses its instructions and each subagent's `description` to decide when and how to delegate tasks.
@@ -57,7 +57,7 @@ for await (const chunk of stream.textStream) {

 Delegation hooks let you intercept, modify, or reject delegations as they happen. Configure them under the `delegation` option, either in the agent's `defaultOptions` or per-call.

-### onDelegationStart
+### `onDelegationStart`

 Called before the supervisor delegates to a subagent. Return an object to control the delegation:
@@ -104,7 +104,7 @@ The `context` object includes:
 | `prompt` | The prompt the supervisor is sending |
 | `iteration` | Current iteration number |

-### onDelegationComplete
+### `onDelegationComplete`

 Called after a delegation finishes. Use it to inspect results, provide feedback, or stop execution:
@@ -1,4 +1,4 @@
-# Using
+# Using tools

 Agents use tools to call APIs, query databases, or run custom functions from your codebase. Tools give agents capabilities beyond language generation by providing structured access to data and performing clearly defined operations. You can also load tools from remote [MCP servers](https://mastra.ai/docs/mcp/overview) to expand an agent's capabilities.
@@ -8,7 +8,7 @@ Use tools when an agent needs additional context or information from remote reso

 ## Creating a tool

-When creating tools, keep descriptions
+When creating tools, keep descriptions concise and focused on what the tool does, emphasizing its primary use case. Descriptive schema names can also help guide the agent on how to use the tool.

 This example shows how to create a tool that fetches weather data from an API. When the agent calls the tool, it provides the required input as defined by the tool's `inputSchema`. The tool accesses this data through its `inputData` parameter, which in this example includes the `location` used in the weather API query.
@@ -211,7 +211,7 @@ This lets you specify how tools are identified in the stream. If you want the `t

 ### Subagents and workflows as tools

-Subagents and workflows follow the same pattern. They
+Subagents and workflows follow the same pattern. They're converted to tools with a prefix followed by your object key:

 | Property | Prefix | Example key | `toolName` |
 | ----------- | ----------- | ----------- | ------------------- |
@@ -1,4 +1,4 @@
-# Mastra
+# Mastra docs server

 The `@mastra/mcp-docs-server` package provides direct access to Mastra’s full documentation via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro). It works with Cursor, Windsurf, Cline, Claude Code, VS Code, Codex or any tool that supports MCP.
@@ -56,7 +56,7 @@ This creates a project-scoped `.mcp.json` file if one doesn't already exist. You

 ### Cursor

-Install by
+Install by selecting the button below:

 [![Install MCP Server](https://cursor.com/deeplink/mcp-install-dark.svg)](cursor://anysphere.cursor-deeplink/mcp/install?name=mastra\&config=eyJjb21tYW5kIjoibnB4IC15IEBtYXN0cmEvbWNwLWRvY3Mtc2VydmVyIn0%3D)
@@ -77,7 +77,7 @@ Google Antigravity is an agent-first development platform that supports MCP serv

 ![Add a custom MCP server in the Google Antigravity MCP Store](https://mastra.ai/docs/build-with-ai/antigravity-mcp-store.jpg)

-2. To add a custom MCP server, select **Manage MCP Servers** at the top of the MCP Store and
+2. To add a custom MCP server, select **Manage MCP Servers** at the top of the MCP Store and select **View raw config** in the main tab.

 ![Open the raw MCP configuration in Google Antigravity](https://mastra.ai/docs/build-with-ai/antigravity-raw-config.jpg)
@@ -137,11 +137,11 @@ Once you installed the MCP server, you can use it like so:

 ![Use the mastra docs server in Copilot](https://mastra.ai/docs/build-with-ai/vscode-copilot.jpg)

-MCP only works in Agent mode in VSCode. Once you are in agent mode, open the `mcp.json` file and
+MCP only works in Agent mode in VSCode. Once you are in agent mode, open the `mcp.json` file and select the "start" button. Note that the "start" button will only appear if the `.vscode` folder containing `mcp.json` is in your workspace root, or the highest level of the in-editor file explorer.

 ![Start the MCP server in VSCode](https://mastra.ai/docs/build-with-ai/vscode-start-mcp.jpg)

-After starting the MCP server,
+After starting the MCP server, select the tools button in the Copilot pane to see available tools.

 ![Selecting the tools button in VSCode](https://mastra.ai/docs/build-with-ai/vscode-tools-button.jpg)
@@ -1,4 +1,4 @@
-# Mastra
+# Mastra skills

 Mastra Skills are folders of instructions, scripts, and resources that agents can discover and use to gain Mastra knowledge. They contain setup instructions, best practices, and methods to fetch up-to-date information from Mastra's documentation.
@@ -32,4 +32,4 @@ bun x skills add mastra-ai/skills

 Mastra skills work with any coding agent that supports the [Skills standard](https://agentskills.io/), including Claude Code, Cursor, Codex, OpenCode, and others.

-They
+They're also available on [GitHub](https://github.com/mastra-ai/skills).
@@ -1,3 +1,3 @@
-# Contributing
+# Contributing templates

 The Mastra community plays a vital role in creating templates that showcase innovative application patterns. We're currently reworking our template contribution process to ensure high-quality, valuable templates for the community. For the time being, we're not accepting new template contributions. Please keep an eye on this page for updates on when contributions will reopen and the new submission process.
@@ -1,9 +1,9 @@
-# Discord
+# Discord community

 The Discord server has over 1000 members and serves as the main discussion forum for Mastra. The Mastra team monitors Discord during North American and European business hours, with community members active across other time zones.

 [Join the Discord server](https://discord.gg/BTYqqHKUrf)

-## Discord MCP
+## Discord MCP bot

-In addition to community members, we
+In addition to community members, we have an (experimental!) Discord bot that can also help answer questions. It uses [Model Context Protocol (MCP)](https://mastra.ai/docs/mcp/overview). You can ask it a question with `/ask` (either in public channels or DMs) and clear history (in DMs only) with `/cleardm`.
@@ -1,10 +1,10 @@
 # License

-## Apache
+## Apache license 2.0

 Mastra is licensed under the Apache License 2.0, a permissive open-source license that provides users with broad rights to use, modify, and distribute the software.

-### What
+### What's Apache License 2.0?

 The Apache License 2.0 is a permissive open-source license that grants users extensive rights to use, modify, and distribute the software. It allows:
@@ -1,8 +1,8 @@
-# Deploy to
+# Deploy to cloud providers

 Mastra applications can be deployed to cloud providers and serverless platforms. Mastra includes optional built-in deployers for Vercel, Netlify, and Cloudflare to automate the deployment process.

-## Supported
+## Supported cloud providers

 The following guides show how to deploy Mastra to specific cloud providers:
@@ -1,4 +1,4 @@
-# Deploy a Mastra
+# Deploy a Mastra server

 Mastra compiles your application into a standalone Node.js server that can run on any platform supporting Node.js, Bun, or Deno.

@@ -100,7 +100,7 @@ The built server exposes endpoints for health checks, agents, workflows, and mor
 | `GET /openapi.json` | OpenAPI specification (if `server.build.openAPIDocs` is enabled) |
 | `GET /swagger-ui` | Interactive API documentation (if `server.build.swaggerUI` is enabled) |

-This list
+This list isn't exhaustive. To view all endpoints, run `mastra dev` and visit `http://localhost:4111/swagger-ui`.

 To add your own endpoints, see [Custom API Routes](https://mastra.ai/docs/server/custom-api-routes).
@@ -1,4 +1,4 @@
-# Deploy in a
+# Deploy in a monorepo

 Deploying Mastra in a monorepo follows the same process as a standalone application. This guide covers monorepo-specific considerations. For the core build and deployment steps, see [Deploy a Mastra Server](https://mastra.ai/docs/deployment/mastra-server).
@@ -1,4 +1,4 @@
-# Deployment
+# Deployment overview

 Mastra applications can be deployed to any Node.js-compatible environment. You can deploy a Mastra server, integrate with an existing web framework, deploy to cloud providers, or use Mastra Cloud for managed hosting.

@@ -11,7 +11,7 @@ Mastra can run against any of these runtime environments:
 - Deno
 - Cloudflare

-## Deployment
+## Deployment options

 ### Mastra Server

@@ -55,7 +55,7 @@ We're building Mastra Cloud to be the easiest place to deploy and observe your M

 Learn more in the [Mastra Cloud docs](https://mastra.ai/docs/mastra-cloud/overview).

-## Workflow
+## Workflow runners

 Mastra workflows run using the built-in execution engine by default. For production workloads requiring managed infrastructure, workflows can also be deployed to specialized platforms like [Inngest](https://www.inngest.com) that provide step memoization, automatic retries, and real-time monitoring.
@@ -1,8 +1,8 @@
-# Deploying
+# Deploying studio

 [Studio](https://mastra.ai/docs/getting-started/studio) provides an interactive UI for building and testing your agents. It's a React-based Single Page Application (SPA) that runs in the browser and connects to a running [Mastra server](https://mastra.ai/docs/deployment/mastra-server).

-
+You can deploy Studio in two primary ways:

 - [Mastra Cloud](https://mastra.ai/docs/mastra-cloud/overview) hosts Studio for you and allows you to share access with your team via link
 - You can self-host Studio on your own infrastructure, either alongside your Mastra server or separately as a standalone SPA

@@ -61,7 +61,7 @@ The command uses Node's built-in `http` module and [`serve-handler`](https://www

 ## Running a server

-Running `mastra studio` as a long-running process is no different from running any other Node.js service. All the best practices, tools, and options for deployment apply here as well. You can use process managers like PM2, use Docker, or cloud services that support Node.js applications. You'll need to ensure CORS is configured correctly and errors are monitored,
+Running `mastra studio` as a long-running process is no different from running any other Node.js service. All the best practices, tools, and options for deployment apply here as well. You can use process managers like PM2, use Docker, or cloud services that support Node.js applications. You'll need to ensure CORS is configured correctly and errors are monitored, as with any web service.

 > **Warning:** Once Studio is connected to your Mastra server, it has full access to your agents, workflows, and tools. Be sure to secure it properly in production (e.g. behind authentication, VPN, etc.) to prevent unauthorized access.
@@ -1,8 +1,8 @@
-# Deploy with a
+# Deploy with a web framework

 When Mastra is integrated with a web framework, it deploys alongside your application using the framework's standard deployment process. Follow the instructions below to ensure your Mastra integration deploys correctly.

-> **Warning:** If you're deploying to a cloud provider, remove any usage of [LibSQLStore](https://mastra.ai/reference/storage/libsql) from your Mastra configuration. LibSQLStore requires filesystem access and
+> **Warning:** If you're deploying to a cloud provider, remove any usage of [LibSQLStore](https://mastra.ai/reference/storage/libsql) from your Mastra configuration. LibSQLStore requires filesystem access and isn't compatible with serverless platforms.

 Integration guides:
@@ -2,7 +2,7 @@

 Mastra provides a unified `createScorer` factory that allows you to build custom evaluation logic using either JavaScript functions or LLM-based prompt objects for each step. This flexibility lets you choose the best approach for each part of your evaluation pipeline.

-## The
+## The four-step pipeline

 All scorers in Mastra follow a consistent four-step evaluation pipeline:
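The four-step pipeline reads naturally as function composition. Here is a minimal sketch with plain functions; the `Scorer` shape below is a simplification, since the real `createScorer` factory in `@mastra/core` has a richer signature and supports LLM-backed steps:

```typescript
// Simplified sketch of the four-step scorer pipeline:
// preprocess -> analyze -> generateScore -> generateReason.
type Scorer<I> = {
  preprocess?: (input: I) => unknown
  analyze?: (data: unknown) => unknown
  generateScore: (analysis: unknown) => number // the only required step
  generateReason?: (score: number) => string
}

function runScorer<I>(scorer: Scorer<I>, input: I) {
  const pre = scorer.preprocess ? scorer.preprocess(input) : input
  const analysis = scorer.analyze ? scorer.analyze(pre) : pre
  const score = scorer.generateScore(analysis)
  return { score, reason: scorer.generateReason?.(score) }
}
```

Each optional step falls through to the previous step's output, which mirrors how only `generateScore` is required in the documented pipeline.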
@@ -13,7 +13,7 @@ All scorers in Mastra follow a consistent four-step evaluation pipeline:

 Each step can use either **functions** or **prompt objects** (LLM-based evaluation), giving you the flexibility to combine deterministic algorithms with AI judgment as needed.

-## Functions vs
+## Functions vs prompt objects

 **Functions** use JavaScript for deterministic logic. They're ideal for:
@@ -33,7 +33,7 @@ Each step can use either **functions** or **prompt objects** (LLM-based evaluati

 You can mix and match approaches within a single scorer - for example, use a function for preprocessing data and an LLM for analyzing quality.

-## Initializing a
+## Initializing a scorer

 Every scorer starts with the `createScorer` factory function, which requires an id and description, and optionally accepts a type specification and judge configuration.
@@ -113,7 +113,7 @@ const myScorer = createScorer({
 })
 ```

-## Step-by-
+## Step-by-step breakdown

 ### preprocess Step (Optional)
@@ -207,7 +207,7 @@ const glutenCheckerScorer = createScorer({...})

 **Data Flow:** Results are available to subsequent steps as `results.analyzeStepResult`

-### generateScore
+### `generateScore` step (required)

 Converts analysis results into a numerical score. This is the only required step in the pipeline.
@@ -230,7 +230,7 @@ const glutenCheckerScorer = createScorer({...})

 **Data Flow:** The score is available to generateReason as the `score` parameter

-### generateReason
+### `generateReason` step (optional)

 Generates human-readable explanations for the score, useful for debugging, transparency, or user feedback.
@@ -6,9 +6,9 @@ Scorers are automated tests that evaluate Agents outputs using model-graded, rul

 Scorers can be run in the cloud, capturing real-time results. But scorers can also be part of your CI/CD pipeline, allowing you to test and monitor your agents over time.

-## Types of
+## Types of scorers

-
+Mastra provides different kinds of scorers, each serving a specific purpose. Here are some common types:

 1. **Textual Scorers**: Evaluate accuracy, reliability, and context understanding of agent responses
 2. **Classification Scorers**: Measure accuracy in categorizing data based on predefined categories