@mastra/mcp-docs-server 1.1.9-alpha.0 → 1.1.9-alpha.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/adding-voice.md +4 -4
- package/.docs/docs/agents/agent-approval.md +3 -3
- package/.docs/docs/agents/agent-memory.md +1 -1
- package/.docs/docs/agents/network-approval.md +1 -1
- package/.docs/docs/agents/networks.md +1 -1
- package/.docs/docs/agents/overview.md +1 -1
- package/.docs/docs/agents/processors.md +8 -8
- package/.docs/docs/agents/structured-output.md +1 -1
- package/.docs/docs/agents/supervisor-agents.md +3 -3
- package/.docs/docs/agents/using-tools.md +1 -1
- package/.docs/docs/build-with-ai/mcp-docs-server.md +1 -1
- package/.docs/docs/build-with-ai/skills.md +1 -1
- package/.docs/docs/community/contributing-templates.md +1 -1
- package/.docs/docs/community/discord.md +2 -2
- package/.docs/docs/community/licensing.md +1 -1
- package/.docs/docs/deployment/cloud-providers.md +2 -2
- package/.docs/docs/deployment/mastra-server.md +1 -1
- package/.docs/docs/deployment/monorepo.md +1 -1
- package/.docs/docs/deployment/overview.md +3 -3
- package/.docs/docs/deployment/studio.md +1 -1
- package/.docs/docs/deployment/web-framework.md +1 -1
- package/.docs/docs/deployment/workflow-runners.md +1 -1
- package/.docs/docs/evals/built-in-scorers.md +1 -1
- package/.docs/docs/evals/custom-scorers.md +6 -6
- package/.docs/docs/evals/overview.md +1 -1
- package/.docs/docs/evals/running-in-ci.md +6 -6
- package/.docs/docs/getting-started/build-with-ai.md +2 -2
- package/.docs/docs/getting-started/manual-install.md +1 -1
- package/.docs/docs/getting-started/project-structure.md +1 -1
- package/.docs/docs/index.md +1 -1
- package/.docs/docs/mcp/overview.md +1 -1
- package/.docs/docs/mcp/publishing-mcp-server.md +3 -3
- package/.docs/docs/memory/memory-processors.md +8 -8
- package/.docs/docs/memory/message-history.md +2 -2
- package/.docs/docs/memory/observational-memory.md +5 -5
- package/.docs/docs/memory/semantic-recall.md +7 -7
- package/.docs/docs/memory/working-memory.md +14 -14
- package/.docs/docs/observability/datasets/overview.md +1 -1
- package/.docs/docs/observability/datasets/running-experiments.md +1 -1
- package/.docs/docs/observability/logging.md +1 -1
- package/.docs/docs/observability/overview.md +5 -5
- package/.docs/docs/observability/tracing/bridges/otel.md +7 -7
- package/.docs/docs/observability/tracing/exporters/arize.md +3 -3
- package/.docs/docs/observability/tracing/exporters/braintrust.md +1 -1
- package/.docs/docs/observability/tracing/exporters/cloud.md +2 -2
- package/.docs/docs/observability/tracing/exporters/datadog.md +3 -3
- package/.docs/docs/observability/tracing/exporters/default.md +7 -7
- package/.docs/docs/observability/tracing/exporters/laminar.md +1 -1
- package/.docs/docs/observability/tracing/exporters/langfuse.md +3 -3
- package/.docs/docs/observability/tracing/exporters/langsmith.md +4 -4
- package/.docs/docs/observability/tracing/exporters/otel.md +8 -8
- package/.docs/docs/observability/tracing/exporters/posthog.md +2 -2
- package/.docs/docs/observability/tracing/exporters/sentry.md +4 -4
- package/.docs/docs/observability/tracing/overview.md +20 -20
- package/.docs/docs/observability/tracing/processors/sensitive-data-filter.md +11 -11
- package/.docs/docs/rag/chunking-and-embedding.md +4 -4
- package/.docs/docs/rag/overview.md +2 -2
- package/.docs/docs/rag/retrieval.md +4 -4
- package/.docs/docs/rag/vector-databases.md +11 -11
- package/.docs/docs/server/auth/auth0.md +1 -1
- package/.docs/docs/server/auth/clerk.md +1 -1
- package/.docs/docs/server/auth/composite-auth.md +9 -9
- package/.docs/docs/server/auth/custom-auth-provider.md +12 -12
- package/.docs/docs/server/auth/firebase.md +2 -2
- package/.docs/docs/server/auth/jwt.md +1 -1
- package/.docs/docs/server/auth/simple-auth.md +8 -8
- package/.docs/docs/server/auth/supabase.md +1 -1
- package/.docs/docs/server/auth/workos.md +1 -1
- package/.docs/docs/server/auth.md +1 -1
- package/.docs/docs/server/custom-adapters.md +7 -7
- package/.docs/docs/server/custom-api-routes.md +2 -2
- package/.docs/docs/server/mastra-client.md +1 -1
- package/.docs/docs/server/mastra-server.md +1 -1
- package/.docs/docs/server/request-context.md +2 -2
- package/.docs/docs/server/server-adapters.md +1 -1
- package/.docs/docs/streaming/events.md +1 -1
- package/.docs/docs/streaming/overview.md +1 -1
- package/.docs/docs/streaming/tool-streaming.md +2 -2
- package/.docs/docs/voice/overview.md +3 -3
- package/.docs/docs/voice/speech-to-speech.md +1 -1
- package/.docs/docs/voice/speech-to-text.md +2 -2
- package/.docs/docs/voice/text-to-speech.md +2 -2
- package/.docs/docs/workflows/agents-and-tools.md +1 -1
- package/.docs/docs/workflows/control-flow.md +1 -1
- package/.docs/docs/workflows/error-handling.md +3 -3
- package/.docs/docs/workflows/suspend-and-resume.md +1 -1
- package/.docs/docs/workflows/time-travel.md +1 -1
- package/.docs/docs/workflows/workflow-state.md +1 -1
- package/.docs/docs/workspace/filesystem.md +1 -1
- package/.docs/docs/workspace/overview.md +1 -1
- package/.docs/docs/workspace/search.md +1 -1
- package/.docs/docs/workspace/skills.md +2 -2
- package/.docs/guides/build-your-ui/ai-sdk-ui.md +2 -2
- package/.docs/guides/build-your-ui/assistant-ui.md +1 -1
- package/.docs/guides/build-your-ui/copilotkit.md +1 -1
- package/.docs/guides/deployment/digital-ocean.md +1 -1
- package/.docs/guides/getting-started/astro.md +1 -1
- package/.docs/guides/getting-started/electron.md +1 -1
- package/.docs/guides/getting-started/next-js.md +1 -1
- package/.docs/guides/getting-started/vite-react.md +1 -1
- package/.docs/guides/guide/ai-recruiter.md +3 -3
- package/.docs/guides/guide/chef-michel.md +4 -4
- package/.docs/guides/guide/code-review-bot.md +3 -3
- package/.docs/guides/guide/dev-assistant.md +5 -5
- package/.docs/guides/guide/docs-manager.md +3 -3
- package/.docs/guides/guide/github-actions-pr-description.md +2 -2
- package/.docs/guides/guide/notes-mcp-server.md +3 -3
- package/.docs/guides/guide/research-assistant.md +4 -4
- package/.docs/guides/guide/research-coordinator.md +1 -1
- package/.docs/guides/guide/stock-agent.md +4 -4
- package/.docs/guides/guide/web-search.md +2 -2
- package/.docs/guides/guide/whatsapp-chat-bot.md +1 -1
- package/.docs/guides/migrations/ai-sdk-v4-to-v5.md +3 -3
- package/.docs/guides/migrations/network-to-supervisor.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/agent.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/deployment.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/evals.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/mastra.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/overview.md +3 -3
- package/.docs/guides/migrations/upgrade-to-v1/storage.md +3 -3
- package/.docs/guides/migrations/upgrade-to-v1/tracing.md +2 -2
- package/.docs/guides/migrations/upgrade-to-v1/vectors.md +3 -3
- package/.docs/guides/migrations/upgrade-to-v1/voice.md +1 -1
- package/.docs/guides/migrations/upgrade-to-v1/workflows.md +1 -1
- package/.docs/guides/migrations/vnext-to-standard-apis.md +1 -1
- package/.docs/models/embeddings.md +4 -4
- package/.docs/models/gateways/custom-gateways.md +4 -4
- package/.docs/models/gateways/netlify.md +1 -1
- package/.docs/models/gateways/openrouter.md +1 -1
- package/.docs/models/gateways/vercel.md +9 -2
- package/.docs/models/gateways.md +2 -2
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/302ai.md +3 -3
- package/.docs/models/providers/abacus.md +3 -3
- package/.docs/models/providers/aihubmix.md +3 -3
- package/.docs/models/providers/alibaba-cn.md +3 -3
- package/.docs/models/providers/alibaba-coding-plan-cn.md +3 -3
- package/.docs/models/providers/alibaba-coding-plan.md +3 -3
- package/.docs/models/providers/alibaba.md +3 -3
- package/.docs/models/providers/anthropic.md +4 -4
- package/.docs/models/providers/bailing.md +3 -3
- package/.docs/models/providers/baseten.md +3 -3
- package/.docs/models/providers/berget.md +3 -3
- package/.docs/models/providers/cerebras.md +4 -4
- package/.docs/models/providers/chutes.md +6 -5
- package/.docs/models/providers/clarifai.md +3 -3
- package/.docs/models/providers/cloudferro-sherlock.md +3 -3
- package/.docs/models/providers/cloudflare-workers-ai.md +3 -3
- package/.docs/models/providers/cortecs.md +3 -3
- package/.docs/models/providers/deepinfra.md +4 -4
- package/.docs/models/providers/deepseek.md +3 -3
- package/.docs/models/providers/drun.md +3 -3
- package/.docs/models/providers/evroc.md +3 -3
- package/.docs/models/providers/fastrouter.md +3 -3
- package/.docs/models/providers/fireworks-ai.md +3 -3
- package/.docs/models/providers/firmware.md +3 -3
- package/.docs/models/providers/friendli.md +3 -3
- package/.docs/models/providers/github-models.md +3 -3
- package/.docs/models/providers/google.md +4 -4
- package/.docs/models/providers/groq.md +4 -4
- package/.docs/models/providers/helicone.md +3 -3
- package/.docs/models/providers/huggingface.md +3 -3
- package/.docs/models/providers/iflowcn.md +3 -3
- package/.docs/models/providers/inception.md +3 -3
- package/.docs/models/providers/inference.md +3 -3
- package/.docs/models/providers/io-net.md +3 -3
- package/.docs/models/providers/jiekou.md +3 -3
- package/.docs/models/providers/kilo.md +3 -3
- package/.docs/models/providers/kimi-for-coding.md +4 -4
- package/.docs/models/providers/kuae-cloud-coding-plan.md +3 -3
- package/.docs/models/providers/llama.md +3 -3
- package/.docs/models/providers/lmstudio.md +3 -3
- package/.docs/models/providers/lucidquery.md +3 -3
- package/.docs/models/providers/meganova.md +3 -3
- package/.docs/models/providers/minimax-cn-coding-plan.md +4 -4
- package/.docs/models/providers/minimax-cn.md +4 -4
- package/.docs/models/providers/minimax-coding-plan.md +4 -4
- package/.docs/models/providers/minimax.md +4 -4
- package/.docs/models/providers/mistral.md +4 -4
- package/.docs/models/providers/moark.md +3 -3
- package/.docs/models/providers/modelscope.md +3 -3
- package/.docs/models/providers/moonshotai-cn.md +3 -3
- package/.docs/models/providers/moonshotai.md +3 -3
- package/.docs/models/providers/morph.md +3 -3
- package/.docs/models/providers/nano-gpt.md +25 -23
- package/.docs/models/providers/nebius.md +3 -3
- package/.docs/models/providers/nova.md +3 -3
- package/.docs/models/providers/novita-ai.md +3 -3
- package/.docs/models/providers/nvidia.md +3 -3
- package/.docs/models/providers/ollama-cloud.md +3 -3
- package/.docs/models/providers/openai.md +4 -4
- package/.docs/models/providers/opencode-go.md +3 -3
- package/.docs/models/providers/opencode.md +3 -3
- package/.docs/models/providers/ovhcloud.md +3 -3
- package/.docs/models/providers/perplexity-agent.md +4 -4
- package/.docs/models/providers/perplexity.md +4 -4
- package/.docs/models/providers/poe.md +3 -3
- package/.docs/models/providers/privatemode-ai.md +3 -3
- package/.docs/models/providers/qihang-ai.md +3 -3
- package/.docs/models/providers/qiniu-ai.md +3 -3
- package/.docs/models/providers/requesty.md +3 -3
- package/.docs/models/providers/scaleway.md +3 -3
- package/.docs/models/providers/siliconflow-cn.md +3 -3
- package/.docs/models/providers/siliconflow.md +3 -3
- package/.docs/models/providers/stackit.md +3 -3
- package/.docs/models/providers/stepfun.md +3 -3
- package/.docs/models/providers/submodel.md +3 -3
- package/.docs/models/providers/synthetic.md +3 -3
- package/.docs/models/providers/togetherai.md +4 -4
- package/.docs/models/providers/upstage.md +3 -3
- package/.docs/models/providers/vivgrid.md +4 -4
- package/.docs/models/providers/vultr.md +3 -3
- package/.docs/models/providers/wandb.md +3 -3
- package/.docs/models/providers/xai.md +4 -4
- package/.docs/models/providers/xiaomi.md +3 -3
- package/.docs/models/providers/zai-coding-plan.md +3 -3
- package/.docs/models/providers/zai.md +3 -3
- package/.docs/models/providers/zenmux.md +4 -4
- package/.docs/models/providers/zhipuai-coding-plan.md +3 -3
- package/.docs/models/providers/zhipuai.md +3 -3
- package/.docs/reference/agents/agent.md +3 -3
- package/.docs/reference/agents/generateLegacy.md +1 -1
- package/.docs/reference/agents/network.md +2 -2
- package/.docs/reference/ai-sdk/to-ai-sdk-stream.md +1 -1
- package/.docs/reference/auth/auth0.md +4 -4
- package/.docs/reference/auth/better-auth.md +2 -2
- package/.docs/reference/auth/clerk.md +1 -1
- package/.docs/reference/auth/firebase.md +4 -4
- package/.docs/reference/auth/jwt.md +1 -1
- package/.docs/reference/auth/supabase.md +1 -1
- package/.docs/reference/auth/workos.md +4 -4
- package/.docs/reference/cli/mastra.md +1 -1
- package/.docs/reference/client-js/agents.md +22 -22
- package/.docs/reference/client-js/error-handling.md +2 -2
- package/.docs/reference/client-js/logs.md +2 -2
- package/.docs/reference/client-js/mastra-client.md +1 -1
- package/.docs/reference/client-js/memory.md +6 -6
- package/.docs/reference/client-js/observability.md +4 -4
- package/.docs/reference/client-js/telemetry.md +1 -1
- package/.docs/reference/client-js/tools.md +3 -3
- package/.docs/reference/client-js/vectors.md +2 -2
- package/.docs/reference/client-js/workflows.md +12 -12
- package/.docs/reference/core/getGatewayById.md +1 -1
- package/.docs/reference/core/getMCPServer.md +2 -2
- package/.docs/reference/core/getMCPServerById.md +2 -2
- package/.docs/reference/core/getMemory.md +1 -1
- package/.docs/reference/core/getScorer.md +2 -2
- package/.docs/reference/core/getScorerById.md +2 -2
- package/.docs/reference/core/getStoredAgentById.md +2 -2
- package/.docs/reference/core/listMCPServers.md +2 -2
- package/.docs/reference/core/listMemory.md +1 -1
- package/.docs/reference/core/listScorers.md +1 -1
- package/.docs/reference/core/listStoredAgents.md +2 -2
- package/.docs/reference/core/mastra-class.md +1 -1
- package/.docs/reference/core/mastra-model-gateway.md +11 -11
- package/.docs/reference/datasets/dataset.md +1 -1
- package/.docs/reference/deployer.md +4 -4
- package/.docs/reference/evals/answer-relevancy.md +3 -3
- package/.docs/reference/evals/answer-similarity.md +3 -3
- package/.docs/reference/evals/bias.md +4 -4
- package/.docs/reference/evals/completeness.md +5 -5
- package/.docs/reference/evals/content-similarity.md +3 -3
- package/.docs/reference/evals/context-precision.md +6 -6
- package/.docs/reference/evals/context-relevance.md +6 -6
- package/.docs/reference/evals/create-scorer.md +7 -7
- package/.docs/reference/evals/faithfulness.md +3 -3
- package/.docs/reference/evals/hallucination.md +5 -5
- package/.docs/reference/evals/keyword-coverage.md +3 -3
- package/.docs/reference/evals/mastra-scorer.md +6 -6
- package/.docs/reference/evals/noise-sensitivity.md +9 -9
- package/.docs/reference/evals/prompt-alignment.md +5 -5
- package/.docs/reference/evals/run-evals.md +5 -5
- package/.docs/reference/evals/scorer-utils.md +17 -17
- package/.docs/reference/evals/textual-difference.md +3 -3
- package/.docs/reference/evals/tone-consistency.md +4 -4
- package/.docs/reference/evals/tool-call-accuracy.md +9 -9
- package/.docs/reference/evals/toxicity.md +3 -3
- package/.docs/reference/harness/harness-class.md +1 -1
- package/.docs/reference/memory/clone-utilities.md +7 -7
- package/.docs/reference/memory/cloneThread.md +4 -4
- package/.docs/reference/memory/createThread.md +1 -1
- package/.docs/reference/memory/deleteMessages.md +1 -1
- package/.docs/reference/memory/getThreadById.md +1 -1
- package/.docs/reference/memory/listThreads.md +3 -3
- package/.docs/reference/memory/memory-class.md +1 -1
- package/.docs/reference/memory/observational-memory.md +1 -1
- package/.docs/reference/memory/recall.md +1 -1
- package/.docs/reference/observability/tracing/bridges/otel.md +5 -5
- package/.docs/reference/observability/tracing/configuration.md +17 -17
- package/.docs/reference/observability/tracing/exporters/arize.md +4 -4
- package/.docs/reference/observability/tracing/exporters/braintrust.md +3 -3
- package/.docs/reference/observability/tracing/exporters/cloud-exporter.md +6 -6
- package/.docs/reference/observability/tracing/exporters/console-exporter.md +4 -4
- package/.docs/reference/observability/tracing/exporters/datadog.md +4 -4
- package/.docs/reference/observability/tracing/exporters/default-exporter.md +6 -6
- package/.docs/reference/observability/tracing/exporters/laminar.md +2 -2
- package/.docs/reference/observability/tracing/exporters/langfuse.md +4 -4
- package/.docs/reference/observability/tracing/exporters/langsmith.md +6 -6
- package/.docs/reference/observability/tracing/exporters/otel.md +12 -12
- package/.docs/reference/observability/tracing/exporters/posthog.md +3 -3
- package/.docs/reference/observability/tracing/exporters/sentry.md +5 -5
- package/.docs/reference/observability/tracing/instances.md +9 -9
- package/.docs/reference/observability/tracing/interfaces.md +39 -39
- package/.docs/reference/observability/tracing/processors/sensitive-data-filter.md +5 -5
- package/.docs/reference/observability/tracing/spans.md +13 -13
- package/.docs/reference/processors/processor-interface.md +15 -15
- package/.docs/reference/rag/chunk.md +2 -2
- package/.docs/reference/rag/database-config.md +8 -8
- package/.docs/reference/rag/document.md +11 -11
- package/.docs/reference/rag/embeddings.md +5 -5
- package/.docs/reference/rag/extract-params.md +8 -8
- package/.docs/reference/rag/graph-rag.md +4 -4
- package/.docs/reference/rag/metadata-filters.md +5 -5
- package/.docs/reference/rag/rerank.md +2 -2
- package/.docs/reference/rag/rerankWithScorer.md +2 -2
- package/.docs/reference/server/express-adapter.md +1 -1
- package/.docs/reference/server/fastify-adapter.md +1 -1
- package/.docs/reference/server/hono-adapter.md +1 -1
- package/.docs/reference/server/koa-adapter.md +1 -1
- package/.docs/reference/server/mastra-server.md +16 -16
- package/.docs/reference/server/register-api-route.md +5 -5
- package/.docs/reference/server/routes.md +1 -1
- package/.docs/reference/storage/cloudflare-d1.md +2 -2
- package/.docs/reference/storage/cloudflare.md +2 -2
- package/.docs/reference/storage/composite.md +1 -1
- package/.docs/reference/storage/convex.md +5 -5
- package/.docs/reference/storage/dynamodb.md +5 -5
- package/.docs/reference/storage/lance.md +3 -3
- package/.docs/reference/storage/libsql.md +1 -1
- package/.docs/reference/storage/mongodb.md +5 -5
- package/.docs/reference/storage/mssql.md +3 -3
- package/.docs/reference/storage/overview.md +2 -2
- package/.docs/reference/storage/postgresql.md +5 -5
- package/.docs/reference/storage/upstash.md +3 -3
- package/.docs/reference/streaming/ChunkType.md +13 -13
- package/.docs/reference/streaming/agents/MastraModelOutput.md +6 -6
- package/.docs/reference/streaming/agents/stream.md +2 -2
- package/.docs/reference/streaming/agents/streamLegacy.md +1 -1
- package/.docs/reference/streaming/workflows/observeStream.md +1 -1
- package/.docs/reference/streaming/workflows/resumeStream.md +1 -1
- package/.docs/reference/streaming/workflows/stream.md +1 -1
- package/.docs/reference/templates/overview.md +3 -3
- package/.docs/reference/tools/create-tool.md +9 -9
- package/.docs/reference/tools/document-chunker-tool.md +4 -4
- package/.docs/reference/tools/graph-rag-tool.md +7 -7
- package/.docs/reference/tools/mcp-client.md +13 -13
- package/.docs/reference/tools/mcp-server.md +23 -23
- package/.docs/reference/tools/vector-query-tool.md +12 -12
- package/.docs/reference/vectors/astra.md +13 -13
- package/.docs/reference/vectors/chroma.md +16 -16
- package/.docs/reference/vectors/convex.md +15 -15
- package/.docs/reference/vectors/couchbase.md +15 -15
- package/.docs/reference/vectors/duckdb.md +17 -17
- package/.docs/reference/vectors/elasticsearch.md +14 -14
- package/.docs/reference/vectors/lance.md +22 -22
- package/.docs/reference/vectors/libsql.md +15 -15
- package/.docs/reference/vectors/mongodb.md +18 -18
- package/.docs/reference/vectors/opensearch.md +11 -11
- package/.docs/reference/vectors/pg.md +21 -21
- package/.docs/reference/vectors/pinecone.md +15 -15
- package/.docs/reference/vectors/qdrant.md +15 -15
- package/.docs/reference/vectors/s3vectors.md +17 -17
- package/.docs/reference/vectors/turbopuffer.md +14 -14
- package/.docs/reference/vectors/upstash.md +15 -15
- package/.docs/reference/vectors/vectorize.md +16 -16
- package/.docs/reference/voice/azure.md +8 -8
- package/.docs/reference/voice/cloudflare.md +5 -5
- package/.docs/reference/voice/composite-voice.md +5 -5
- package/.docs/reference/voice/deepgram.md +5 -5
- package/.docs/reference/voice/elevenlabs.md +6 -6
- package/.docs/reference/voice/google-gemini-live.md +20 -20
- package/.docs/reference/voice/google.md +9 -9
- package/.docs/reference/voice/mastra-voice.md +17 -17
- package/.docs/reference/voice/murf.md +6 -6
- package/.docs/reference/voice/openai-realtime.md +16 -16
- package/.docs/reference/voice/openai.md +5 -5
- package/.docs/reference/voice/playai.md +5 -5
- package/.docs/reference/voice/sarvam.md +5 -5
- package/.docs/reference/voice/speechify.md +5 -5
- package/.docs/reference/voice/voice.addInstructions.md +2 -2
- package/.docs/reference/voice/voice.addTools.md +2 -2
- package/.docs/reference/voice/voice.answer.md +2 -2
- package/.docs/reference/voice/voice.close.md +2 -2
- package/.docs/reference/voice/voice.connect.md +5 -5
- package/.docs/reference/voice/voice.events.md +2 -2
- package/.docs/reference/voice/voice.getSpeakers.md +3 -3
- package/.docs/reference/voice/voice.listen.md +6 -6
- package/.docs/reference/voice/voice.off.md +2 -2
- package/.docs/reference/voice/voice.on.md +3 -3
- package/.docs/reference/voice/voice.send.md +2 -2
- package/.docs/reference/voice/voice.speak.md +5 -5
- package/.docs/reference/voice/voice.updateConfig.md +3 -3
- package/.docs/reference/workflows/run-methods/startAsync.md +1 -1
- package/.docs/reference/workflows/run.md +3 -3
- package/.docs/reference/workflows/step.md +2 -2
- package/.docs/reference/workflows/workflow-methods/create-run.md +1 -1
- package/.docs/reference/workflows/workflow.md +1 -1
- package/.docs/reference/workspace/daytona-sandbox.md +2 -2
- package/.docs/reference/workspace/e2b-sandbox.md +2 -2
- package/.docs/reference/workspace/filesystem.md +1 -1
- package/.docs/reference/workspace/gcs-filesystem.md +1 -1
- package/.docs/reference/workspace/local-filesystem.md +1 -1
- package/.docs/reference/workspace/local-sandbox.md +4 -4
- package/.docs/reference/workspace/process-manager.md +2 -2
- package/.docs/reference/workspace/s3-filesystem.md +1 -1
- package/.docs/reference/workspace/workspace-class.md +2 -2
- package/CHANGELOG.md +14 -0
- package/package.json +4 -4
@@ -39,7 +39,7 @@ try {
 }
 ```
 
-## Working with
+## Working with audio streams
 
 The `speak()` and `listen()` methods work with Node.js streams. Here's how to save and load audio files:
 
@@ -87,7 +87,7 @@ try {
 }
 ```
 
-## Speech-to-
+## Speech-to-speech voice interactions
 
 For more dynamic and interactive voice experiences, you can use real-time voice providers that support speech-to-speech capabilities:
 
@@ -323,7 +323,7 @@ For the complete list of supported AI SDK providers and their capabilities:
 - [Transcription](https://ai-sdk.dev/docs/providers/openai/transcription)
 - [Speech](https://ai-sdk.dev/docs/providers/elevenlabs/speech)
 
-## Supported
+## Supported voice providers
 
 Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-text (STT) capabilities:
 
@@ -341,7 +341,7 @@ Mastra supports multiple voice providers for text-to-speech (TTS) and speech-to-
 | Azure | `@mastra/voice-azure` | TTS, STT | [Documentation](https://mastra.ai/reference/voice/mastra-voice) |
 | Cloudflare | `@mastra/voice-cloudflare` | TTS | [Documentation](https://mastra.ai/reference/voice/mastra-voice) |
 
-## Next
+## Next steps
 
 - [Voice API Reference](https://mastra.ai/reference/voice/mastra-voice) - Detailed API documentation for voice capabilities
 - [Text to Speech Examples](https://github.com/mastra-ai/voice-examples/tree/main/text-to-speech) - Interactive story generator and other TTS implementations
@@ -92,7 +92,7 @@ const handleDecline = async () => {
 }
 ```
 
-## Tool approval with generate()
+## Tool approval with `generate()`
 
 Tool approval also works with the `generate()` method for non-streaming use cases. When a tool requires approval during a `generate()` call, the method returns immediately instead of executing the tool.
 
@@ -504,7 +504,7 @@ for await (const chunk of stream.fullStream) {
 }
 ```
 
-### Using suspend() in supervisor pattern
+### Using `suspend()` in supervisor pattern
 
 Tools can also use [`suspend()`](#approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does — the suspension surfaces at the supervisor level:
 
@@ -553,7 +553,7 @@ for await (const chunk of stream.fullStream) {
 }
 ```
 
-### Tool approval with generate()
+### Tool approval with `generate()`
 
 Tool approval propagation also works with `generate()` in supervisor pattern:
 
@@ -131,7 +131,7 @@ const response = await memoryAgent.generate("What's my favorite color?", {
 
 To learn more about memory see the [Memory](https://mastra.ai/docs/memory/overview) documentation.
 
-## Observational
+## Observational memory
 
 For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.
 
@@ -1,4 +1,4 @@
-# Network
+# Network approval
 
 > **Deprecated:** Agent networks are deprecated and will be removed in a future release. Use the [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) instead. See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.
 
@@ -1,4 +1,4 @@
-# Agent
+# Agent networks
 
 > **Agent Network Deprecated — Supervisor Pattern Recommended:** Agent networks are deprecated and will be removed in a future release. The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
 >
@@ -1,4 +1,4 @@
-# Using
+# Using agents
 
 Agents use LLMs and tools to solve open-ended tasks. They reason about goals, decide which tools to use, retain conversation memory, and iterate internally until the model emits a final answer or an optional stop condition is met. Agents produce structured responses you can render in your UI or process programmatically. Use agents directly or compose them into workflows or agent networks.
 
@@ -160,7 +160,7 @@ This is useful for:
 - Filtering or modifying semantic recall content to prevent "prompt too long" errors
 - Dynamically adjusting system instructions based on the conversation
 
-### Per-step processing with processInputStep
+### Per-step processing with `processInputStep`
 
 While `processInput` runs once at the start of agent execution, `processInputStep` runs at **each step** of the agentic loop (including tool call continuations). This enables per-step configuration changes like dynamic model switching or tool choice modifications.
 
@@ -219,7 +219,7 @@ The method can return any combination of:
 - `modelSettings`: Modify model settings
 - `structuredOutput`: Modify structured output configuration
 
-#### Ensuring a final response with maxSteps
+#### Ensuring a final response with `maxSteps`
 
 When using `maxSteps` to limit agent execution, the agent may return an empty response if it attempts a tool call on the final step. Use `processInputStep` to force a text response on the last step:
 
@@ -283,7 +283,7 @@ const result = await agent.generate('Your prompt', { maxSteps: MAX_STEPS })
 
 This ensures that on the final allowed step (step 4 when `maxSteps` is 5, since steps are 0-indexed), the LLM generates a summary instead of attempting another tool call, and clearly indicates if the task is incomplete.
 
-#### Using prepareStep callback
+#### Using `prepareStep` callback
 
 For simpler per-step logic, you can use the `prepareStep` callback on `generate()` or `stream()` instead of creating a full processor:
 
|
|
|
324
324
|
|
|
325
325
|
#### Accessing generation result data
|
|
326
326
|
|
|
327
|
-
The `processOutputResult` method receives a `result` object containing the resolved generation data — the same information available in the `onFinish` callback. This
|
|
327
|
+
The `processOutputResult` method receives a `result` object containing the resolved generation data — the same information available in the `onFinish` callback. This lets you access token usage, generated text, finish reason, and step details.
|
|
328
328
|
|
|
329
329
|
```typescript
|
|
330
330
|
import type { Processor } from '@mastra/core'
|
|
@@ -466,13 +466,13 @@ const response = await stream.response
|
|
|
466
466
|
console.log(response.uiMessages)
|
|
467
467
|
```
|
|
468
468
|
|
|
469
|
-
## Built-in
|
|
469
|
+
## Built-in utility processors
|
|
470
470
|
|
|
471
471
|
Mastra provides utility processors for common tasks:
|
|
472
472
|
|
|
473
473
|
**For security and validation processors**, see the [Guardrails](https://mastra.ai/docs/agents/guardrails) page for input/output guardrails and moderation processors. **For memory-specific processors**, see the [Memory Processors](https://mastra.ai/docs/memory/memory-processors) page for processors that handle message history, semantic recall, and working memory.
|
|
474
474
|
|
|
475
|
-
### TokenLimiter
|
|
475
|
+
### `TokenLimiter`
|
|
476
476
|
|
|
477
477
|
Prevents context window overflow by removing older messages when the total token count exceeds a specified limit.
|
|
478
478
|
|
|
@@ -506,7 +506,7 @@ const agent = new Agent({
 })
 ```
 
-### ToolCallFilter
+### `ToolCallFilter`
 
 Removes tool calls from messages sent to the LLM, saving tokens by excluding potentially verbose tool interactions.
 
@@ -532,7 +532,7 @@ const agent = new Agent({
 
 > **Note:** The example above filters tool calls and limits tokens for the LLM, but these filtered messages will still be saved to memory. To also filter messages before they're saved to memory, manually add memory processors before utility processors. See [Memory Processors](https://mastra.ai/docs/memory/memory-processors) for details.
 
-### ToolSearchProcessor
+### `ToolSearchProcessor`
 
 Enables dynamic tool discovery and loading for agents with large tool libraries. Instead of providing all tools upfront, the agent searches for tools by keyword and loads them on demand, reducing context token usage.
 
@@ -1,4 +1,4 @@
-# Structured
+# Structured output
 
 Structured output lets an agent return an object that matches the shape defined by a schema instead of returning text. The schema tells the model what fields to produce, and the model ensures the final result fits that shape.
 
@@ -1,4 +1,4 @@
-# Supervisor
+# Supervisor agents
 
 A supervisor agent coordinates multiple subagents using `agent.stream()` or `agent.generate()`. You configure subagents on the supervisor's `agents` property, and the supervisor uses its instructions and each subagent's `description` to decide when and how to delegate tasks.
 
@@ -57,7 +57,7 @@ for await (const chunk of stream.textStream) {
 
 Delegation hooks let you intercept, modify, or reject delegations as they happen. Configure them under the `delegation` option, either in the agent's `defaultOptions` or per-call.
 
-### onDelegationStart
+### `onDelegationStart`
 
 Called before the supervisor delegates to a subagent. Return an object to control the delegation:
 
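To make the hook idea concrete, here is a sketch of what an `onDelegationStart` handler might look like. Only the `prompt` and `iteration` context fields come from the source's context table; the `DelegationContext` interface and the `proceed`/`reason` return fields are hypothetical names for illustration, not the documented Mastra signature:

```typescript
// Hypothetical hook shape — field names are illustrative, not the Mastra API.
interface DelegationContext {
  prompt: string    // the prompt the supervisor is sending (from the source's context table)
  iteration: number // current iteration number (from the source's context table)
}

function onDelegationStart(context: DelegationContext) {
  // Reject runaway delegation loops after a fixed number of rounds.
  if (context.iteration > 5) {
    return { proceed: false, reason: 'Too many delegation rounds' }
  }
  // Otherwise let the delegation continue unchanged.
  return { proceed: true }
}
```

Consult the actual delegation reference for the real return contract before relying on any of these names.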
@@ -104,7 +104,7 @@ The `context` object includes:
 | `prompt` | The prompt the supervisor is sending |
 | `iteration` | Current iteration number |
 
-### onDelegationComplete
+### `onDelegationComplete`
 
 Called after a delegation finishes. Use it to inspect results, provide feedback, or stop execution:
 
@@ -1,4 +1,4 @@
-# Using
+# Using tools
 
 Agents use tools to call APIs, query databases, or run custom functions from your codebase. Tools give agents capabilities beyond language generation by providing structured access to data and performing clearly defined operations. You can also load tools from remote [MCP servers](https://mastra.ai/docs/mcp/overview) to expand an agent's capabilities.
 
@@ -1,4 +1,4 @@
-# Mastra
+# Mastra docs server
 
 The `@mastra/mcp-docs-server` package provides direct access to Mastra’s full documentation via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro). It works with Cursor, Windsurf, Cline, Claude Code, VS Code, Codex or any tool that supports MCP.
 
@@ -1,4 +1,4 @@
-# Mastra
+# Mastra skills
 
 Mastra Skills are folders of instructions, scripts, and resources that agents can discover and use to gain Mastra knowledge. They contain setup instructions, best practices, and methods to fetch up-to-date information from Mastra's documentation.
 
@@ -1,3 +1,3 @@
-# Contributing
+# Contributing templates
 
 The Mastra community plays a vital role in creating templates that showcase innovative application patterns. We're currently reworking our template contribution process to ensure high-quality, valuable templates for the community. For the time being, we're not accepting new template contributions. Please keep an eye on this page for updates on when contributions will reopen and the new submission process.
@@ -1,9 +1,9 @@
-# Discord
+# Discord community
 
 The Discord server has over 1000 members and serves as the main discussion forum for Mastra. The Mastra team monitors Discord during North American and European business hours, with community members active across other time zones.
 
 [Join the Discord server](https://discord.gg/BTYqqHKUrf)
 
-## Discord MCP
+## Discord MCP bot
 
 In addition to community members, we've an (experimental!) Discord bot that can also help answer questions. It uses [Model Context Protocol (MCP)](https://mastra.ai/docs/mcp/overview). You can ask it a question with `/ask` (either in public channels or DMs) and clear history (in DMs only) with `/cleardm`.
@@ -1,8 +1,8 @@
-# Deploy to
+# Deploy to cloud providers
 
 Mastra applications can be deployed to cloud providers and serverless platforms. Mastra includes optional built-in deployers for Vercel, Netlify, and Cloudflare to automate the deployment process.
 
-## Supported
+## Supported cloud providers
 
 The following guides show how to deploy Mastra to specific cloud providers:
 
@@ -1,4 +1,4 @@
-# Deploy in a
+# Deploy in a monorepo
 
 Deploying Mastra in a monorepo follows the same process as a standalone application. This guide covers monorepo-specific considerations. For the core build and deployment steps, see [Deploy a Mastra Server](https://mastra.ai/docs/deployment/mastra-server).
 
@@ -1,4 +1,4 @@
-# Deployment
+# Deployment overview
 
 Mastra applications can be deployed to any Node.js-compatible environment. You can deploy a Mastra server, integrate with an existing web framework, deploy to cloud providers, or use Mastra Cloud for managed hosting.
 
@@ -11,7 +11,7 @@ Mastra can run against any of these runtime environments:
 - Deno
 - Cloudflare
 
-## Deployment
+## Deployment options
 
 ### Mastra Server
 
@@ -55,7 +55,7 @@ We're building Mastra Cloud to be the easiest place to deploy and observe your M
 
 Learn more in the [Mastra Cloud docs](https://mastra.ai/docs/mastra-cloud/overview).
 
-## Workflow
+## Workflow runners
 
 Mastra workflows run using the built-in execution engine by default. For production workloads requiring managed infrastructure, workflows can also be deployed to specialized platforms like [Inngest](https://www.inngest.com) that provide step memoization, automatic retries, and real-time monitoring.
 
@@ -1,4 +1,4 @@
-# Deploying
+# Deploying studio
 
 [Studio](https://mastra.ai/docs/getting-started/studio) provides an interactive UI for building and testing your agents. It's a React-based Single Page Application (SPA) that runs in the browser and connects to a running [Mastra server](https://mastra.ai/docs/deployment/mastra-server).
 
@@ -1,4 +1,4 @@
-# Deploy with a
+# Deploy with a web framework
 
 When Mastra is integrated with a web framework, it deploys alongside your application using the framework's standard deployment process. Follow the instructions below to ensure your Mastra integration deploys correctly.
 
@@ -2,7 +2,7 @@
 
 Mastra provides a unified `createScorer` factory that allows you to build custom evaluation logic using either JavaScript functions or LLM-based prompt objects for each step. This flexibility lets you choose the best approach for each part of your evaluation pipeline.
 
-## The
+## The four-step pipeline
 
 All scorers in Mastra follow a consistent four-step evaluation pipeline:
 
@@ -13,7 +13,7 @@ All scorers in Mastra follow a consistent four-step evaluation pipeline:
 
 Each step can use either **functions** or **prompt objects** (LLM-based evaluation), giving you the flexibility to combine deterministic algorithms with AI judgment as needed.
 
-## Functions vs
+## Functions vs prompt objects
 
 **Functions** use JavaScript for deterministic logic. They're ideal for:
 
@@ -33,7 +33,7 @@ Each step can use either **functions** or **prompt objects** (LLM-based evaluati
 
 You can mix and match approaches within a single scorer - for example, use a function for preprocessing data and an LLM for analyzing quality.
 
-## Initializing a
+## Initializing a scorer
 
 Every scorer starts with the `createScorer` factory function, which requires an id and description, and optionally accepts a type specification and judge configuration.
 
@@ -113,7 +113,7 @@ const myScorer = createScorer({
 })
 ```
 
-## Step-by-
+## Step-by-step breakdown
 
 ### preprocess Step (Optional)
 
@@ -207,7 +207,7 @@ const glutenCheckerScorer = createScorer({...})
 
 **Data Flow:** Results are available to subsequent steps as `results.analyzeStepResult`
 
-### generateScore
+### `generateScore` step (required)
 
 Converts analysis results into a numerical score. This is the only required step in the pipeline.
 
@@ -230,7 +230,7 @@ const glutenCheckerScorer = createScorer({...})
 
 **Data Flow:** The score is available to generateReason as the `score` parameter
 
-### generateReason
+### `generateReason` step (optional)
 
 Generates human-readable explanations for the score, useful for debugging, transparency, or user feedback.
 
@@ -6,7 +6,7 @@ Scorers are automated tests that evaluate Agents outputs using model-graded, rul
 
 Scorers can be run in the cloud, capturing real-time results. But scorers can also be part of your CI/CD pipeline, allowing you to test and monitor your agents over time.
 
-## Types of
+## Types of scorers
 
 Mastra provides different kinds of scorers, each serving a specific purpose. Here are some common types:
 
@@ -1,12 +1,12 @@
-# Running
+# Running scorers in CI
 
 Running scorers in your CI pipeline provides quantifiable metrics for measuring agent quality over time. The `runEvals` function processes multiple test cases through your agent or workflow and returns aggregate scores.
 
-## Basic
+## Basic setup
 
 You can use any testing framework that supports ESM modules, such as [Vitest](https://vitest.dev/), [Jest](https://jestjs.io/), or [Mocha](https://mochajs.org/).
 
-## Creating
+## Creating test cases
 
 Use `runEvals` to evaluate your agent against multiple test cases. The function accepts an array of data items, each containing an `input` and optional `groundTruth` for scorer validation.
 
@@ -44,7 +44,7 @@ describe('Weather Agent Tests', () => {
 })
 ```
 
-## Understanding
+## Understanding results
 
 The `runEvals` function returns an object with:
 
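The exact result object is elided between these hunks, so here is a concept sketch only of the kind of aggregation described — per-case scores reduced to a summary a CI check can gate on. The `summarize` helper and `averageScore` field are hypothetical names for illustration, not necessarily what `runEvals` returns:

```typescript
// Concept sketch: reduce per-test-case scorer results to an aggregate,
// the way a CI pipeline might gate on an average score threshold.
interface ScoredCase {
  input: string
  score: number
}

function summarize(cases: ScoredCase[]) {
  const total = cases.reduce((sum, c) => sum + c.score, 0)
  return {
    scores: cases,
    averageScore: cases.length ? total / cases.length : 0,
  }
}
```

In a test file, an assertion like `expect(summary.averageScore).toBeGreaterThan(0.7)` (in whatever framework the pipeline uses) would then fail the build when agent quality regresses.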
@@ -63,7 +63,7 @@ The `runEvals` function returns an object with:
 }
 ```
 
-## Multiple
+## Multiple test scenarios
 
 Create separate test cases for different evaluation scenarios:
 
@@ -117,7 +117,7 @@ describe('Weather Agent Tests', () => {
 })
 ```
 
-## Next
+## Next steps
 
 - Learn about [creating custom scorers](https://mastra.ai/docs/evals/custom-scorers)
 - Explore [built-in scorers](https://mastra.ai/docs/evals/built-in-scorers)
@@ -2,7 +2,7 @@
 
 AI agents may not have up-to-date knowledge about Mastra's APIs, patterns, and best practices. These resources give your AI tools direct access to current Mastra documentation, enabling them to generate accurate code and help you build faster.
 
-## Mastra
+## Mastra skills
 
 Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently. [Mastra Skills](https://mastra.ai/docs/build-with-ai/skills) contain setup instructions, best practices, and instructions on how to fetch up-to-date information from Mastra's documentation.
 
@@ -34,7 +34,7 @@ Read the dedicated [Mastra Skills](https://mastra.ai/docs/build-with-ai/skills)
 
 > **Tip:** If you're interested in giving your agent access to Mastra's documentation, we recommend using **Skills**. While the MCP Docs Server also provides this information, Skills will perform better. Use the MCP Docs Server when you need its tools, e.g. the migration tool.
 
-## MCP
+## MCP docs server
 
 In addition to documentation access, the [MCP Docs Server](https://mastra.ai/docs/build-with-ai/mcp-docs-server) also provides tools to help you migrate to newer versions of Mastra or follow the [Mastra 101 course](https://mastra.ai/course).
 
@@ -1,4 +1,4 @@
-# Manual
+# Manual install
 
 > **Info:** Use this guide to manually build a standalone Mastra server step by step. In most cases, it's quicker to follow the [quickstart guide](https://mastra.ai/guides/getting-started/quickstart), which achieves the same result using the [`mastra create`](https://mastra.ai/reference/cli/create-mastra) command. For existing projects, you can also use [`mastra init`](https://mastra.ai/reference/cli/mastra).
 
package/.docs/docs/index.md
CHANGED
@@ -1,4 +1,4 @@
-# MCP
+# MCP overview
 
 Mastra supports the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction), an open standard for connecting AI agents to external tools and resources. It serves as a universal plugin system, enabling agents to call tools regardless of language or hosting environment.
 
@@ -1,4 +1,4 @@
-# Publishing an MCP
+# Publishing an MCP server
 
 This example guides you through setting up a basic Mastra MCPServer using the stdio transport, building it, and preparing it for publishing to NPM.
 
@@ -10,7 +10,7 @@ Install the necessary packages:
 pnpm add @mastra/mcp @mastra/core tsup
 ```
 
-## Setting up an MCP
+## Setting up an MCP server
 
 1. Create a file for your stdio server, for example, `/src/mastra/stdio.ts`.
 
@@ -70,7 +70,7 @@ To make your MCP server available for others (or yourself) to use via `npx` or a
 
 For more details on publishing packages, refer to the [NPM documentation](https://docs.npmjs.com/creating-and-publishing-scoped-public-packages).
 
-## Using a published MCP
+## Using a published MCP server
 
 Once published, your MCP server can be used by an `MCPClient` by specifying the command to run your package. You can also use any other MCP client like Claude desktop, Cursor, or Windsurf.
 
@@ -1,4 +1,4 @@
-# Memory
+# Memory processors
 
 Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.
 
@@ -6,11 +6,11 @@ When memory is enabled on an agent, Mastra adds memory processors to the agent's
 
 Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.
 
-## Built-in
+## Built-in memory processors
 
 Mastra automatically adds these processors when memory is enabled:
 
-### MessageHistory
+### `MessageHistory`
 
 Retrieves message history and persists new messages.
 
@@ -56,7 +56,7 @@ const agent = new Agent({
 })
 ```
 
-### SemanticRecall
+### `SemanticRecall`
 
 Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.
 
@@ -114,7 +114,7 @@ const agent = new Agent({
 })
 ```
 
-### WorkingMemory
+### `WorkingMemory`
 
 Manages working memory state across conversations.
 
@@ -159,7 +159,7 @@ const agent = new Agent({
 })
 ```
 
-## Manual
+## Manual control and deduplication
 
 If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra **won't** automatically add it. This gives you full control over processor ordering:
 
@@ -192,7 +192,7 @@ const agent = new Agent({
 })
 ```
 
-## Processor
+## Processor execution order
 
 Understanding the execution order is important when combining guardrails with memory:
 
@@ -218,7 +218,7 @@ This means memory loads message history before your processors can validate or f
 
 This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.
 
-## Guardrails and
+## Guardrails and memory
 
 The default execution order provides safe guardrail behavior:
 
|
|
|
1
|
-
# Message
|
|
1
|
+
# Message history
|
|
2
2
|
|
|
3
3
|
Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.
|
|
4
4
|
|
|
@@ -103,7 +103,7 @@ You can use this history in two ways:
 - **Automatic inclusion** - Mastra automatically fetches and includes recent messages in the context window. By default, it includes the last 10 messages, keeping agents grounded in the conversation. You can adjust this number with `lastMessages`, but in most cases you don't need to think about it.
 - [**Manual querying**](#querying) - For more control, use the `recall()` function to query threads and messages directly. This lets you choose exactly which memories are included in the context window, or fetch messages to render conversation history in your UI.
 
-## Accessing
+## Accessing memory
 
 To access memory functions for querying, cloning, or deleting threads and messages, call `getMemory()` on an agent:
 
@@ -1,10 +1,10 @@
-# Observational
+# Observational memory
 
 **Added in:** `@mastra/memory@1.1.0`
 
 Observational Memory (OM) is Mastra's memory system for long-context agentic memory. Two background agents — an **Observer** and a **Reflector** — watch your agent's conversations and maintain a dense observation log that replaces raw message history as it grows.
 
-## Quick
+## Quick start
 
 Enable `observationalMemory` in the memory options when creating your agent:
 
@@ -46,7 +46,7 @@ See [configuration options](https://mastra.ai/reference/memory/observational-mem
 - **Compression**: Raw message history and tool results get compressed into a dense observation log. Smaller context means faster responses and longer coherent conversations.
 - **Zero context rot**: The agent sees relevant information instead of noisy tool calls and irrelevant tokens, so the agent stays on task over long sessions.
 
-## How
+## How it works
 
 You don't remember every word of every conversation you've ever had. You observe what happened subconsciously, then your brain reflects — reorganizing, combining, and condensing into long-term memory. OM works the same way.
 
@@ -149,7 +149,7 @@ For your use-case this may not be a problem, so your mileage may vary.
 
 > **Warning:** In resource scope, unobserved messages across _all_ threads are processed together. For users with many existing threads, this can be slow. Use thread scope for existing apps.
 
-## Token
+## Token budgets
 
 OM uses token thresholds to decide when to observe and reflect. See [token budget configuration](https://mastra.ai/reference/memory/observational-memory) for details.
 
@@ -183,7 +183,7 @@ OM caches tiktoken part estimates in message metadata to reduce repeat counting
 - Message and conversation overhead are still recalculated on every pass. The cache only stores payload estimates, so counting semantics stay the same.
 - `data-*` and `reasoning` parts are still skipped and aren't cached.
 
-## Async
+## Async buffering
 
 Without async buffering, the Observer runs synchronously when the message threshold is reached — the agent pauses mid-conversation while the Observer LLM call completes. With async buffering (enabled by default), observations are pre-computed in the background as the conversation grows. When the threshold is hit, buffered observations activate instantly with no pause.
 
|