@mastra/core 1.7.0 → 1.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +218 -0
- package/dist/agent/agent-legacy.d.ts +15 -0
- package/dist/agent/agent-legacy.d.ts.map +1 -1
- package/dist/agent/agent.d.ts +7 -0
- package/dist/agent/agent.d.ts.map +1 -1
- package/dist/agent/agent.types.d.ts +311 -2
- package/dist/agent/agent.types.d.ts.map +1 -1
- package/dist/agent/index.cjs +13 -13
- package/dist/agent/index.d.ts +3 -1
- package/dist/agent/index.d.ts.map +1 -1
- package/dist/agent/index.js +2 -2
- package/dist/agent/message-list/index.cjs +18 -18
- package/dist/agent/message-list/index.js +1 -1
- package/dist/agent/message-list/merge/MessageMerger.d.ts.map +1 -1
- package/dist/agent/message-list/message-list.d.ts.map +1 -1
- package/dist/agent/workflows/prepare-stream/map-results-step.d.ts.map +1 -1
- package/dist/agent/workflows/prepare-stream/prepare-tools-step.d.ts.map +1 -1
- package/dist/{chunk-A72NTLFT.cjs → chunk-2IO5Q7OZ.cjs} +7 -7
- package/dist/{chunk-A72NTLFT.cjs.map → chunk-2IO5Q7OZ.cjs.map} +1 -1
- package/dist/{chunk-DFCRXDVK.js → chunk-2KHPZJNU.js} +10 -8
- package/dist/chunk-2KHPZJNU.js.map +1 -0
- package/dist/{chunk-R4N65TLG.js → chunk-2R5MQMSA.js} +35 -16
- package/dist/chunk-2R5MQMSA.js.map +1 -0
- package/dist/{chunk-ZSBM2SVU.js → chunk-4H5F6AFP.js} +1064 -226
- package/dist/chunk-4H5F6AFP.js.map +1 -0
- package/dist/{chunk-BQHWJLXU.js → chunk-63G75DJE.js} +9 -3
- package/dist/chunk-63G75DJE.js.map +1 -0
- package/dist/{chunk-SBOHDNIZ.cjs → chunk-6GSWC5ZA.cjs} +2 -2
- package/dist/{chunk-SBOHDNIZ.cjs.map → chunk-6GSWC5ZA.cjs.map} +1 -1
- package/dist/{chunk-QTAS3HND.cjs → chunk-6Q2UD3XF.cjs} +21 -14
- package/dist/chunk-6Q2UD3XF.cjs.map +1 -0
- package/dist/{chunk-GPJGPARM.js → chunk-DTPR3JAM.js} +2 -2
- package/dist/{chunk-GPJGPARM.js.map → chunk-DTPR3JAM.js.map} +1 -1
- package/dist/{chunk-NN26FSKL.js → chunk-FHJ2KIU5.js} +3 -3
- package/dist/{chunk-NN26FSKL.js.map → chunk-FHJ2KIU5.js.map} +1 -1
- package/dist/{chunk-RABITNTG.cjs → chunk-HWG7NPJA.cjs} +55 -55
- package/dist/{chunk-RABITNTG.cjs.map → chunk-HWG7NPJA.cjs.map} +1 -1
- package/dist/{chunk-HB6T4554.cjs → chunk-KH3G65IS.cjs} +10 -8
- package/dist/chunk-KH3G65IS.cjs.map +1 -0
- package/dist/{chunk-YQG7NBPR.cjs → chunk-KZ4IKNPN.cjs} +25 -23
- package/dist/chunk-KZ4IKNPN.cjs.map +1 -0
- package/dist/{chunk-6DUTLERJ.js → chunk-MRV5NCPC.js} +3 -3
- package/dist/{chunk-6DUTLERJ.js.map → chunk-MRV5NCPC.js.map} +1 -1
- package/dist/{chunk-O7PZ4VOO.cjs → chunk-N3ROEJG4.cjs} +12 -10
- package/dist/chunk-N3ROEJG4.cjs.map +1 -0
- package/dist/{chunk-7EXW4AAG.js → chunk-NXKI2L4X.js} +6 -4
- package/dist/chunk-NXKI2L4X.js.map +1 -0
- package/dist/{chunk-QWTB53GS.js → chunk-OSEPGSLN.js} +6 -6
- package/dist/{chunk-QWTB53GS.js.map → chunk-OSEPGSLN.js.map} +1 -1
- package/dist/{chunk-6OXW5E2O.js → chunk-PI7ONENO.js} +4 -4
- package/dist/{chunk-6OXW5E2O.js.map → chunk-PI7ONENO.js.map} +1 -1
- package/dist/{chunk-KUXNBWN7.js → chunk-Q4MV4XKX.js} +8 -6
- package/dist/chunk-Q4MV4XKX.js.map +1 -0
- package/dist/{chunk-7UAJ6LMR.cjs → chunk-QKQGKEN7.cjs} +1078 -241
- package/dist/chunk-QKQGKEN7.cjs.map +1 -0
- package/dist/{chunk-IC5OUWKJ.js → chunk-SP7P6Z4L.js} +19 -2
- package/dist/chunk-SP7P6Z4L.js.map +1 -0
- package/dist/{chunk-QDH6MVJ7.cjs → chunk-TGUDI64A.cjs} +14 -14
- package/dist/{chunk-QDH6MVJ7.cjs.map → chunk-TGUDI64A.cjs.map} +1 -1
- package/dist/{chunk-EAZ6YDCQ.cjs → chunk-U3HBG2GU.cjs} +9 -2
- package/dist/chunk-U3HBG2GU.cjs.map +1 -0
- package/dist/{chunk-6QBN6MZY.cjs → chunk-VAKB5EXJ.cjs} +42 -23
- package/dist/chunk-VAKB5EXJ.cjs.map +1 -0
- package/dist/{chunk-QSHV7GPT.js → chunk-VBPU6CLZ.js} +3808 -3026
- package/dist/chunk-VBPU6CLZ.js.map +1 -0
- package/dist/{chunk-2X66GWF5.cjs → chunk-VTVCMIAI.cjs} +3905 -3121
- package/dist/chunk-VTVCMIAI.cjs.map +1 -0
- package/dist/{chunk-PHHJLGZU.cjs → chunk-XNWF6CYR.cjs} +6 -6
- package/dist/{chunk-PHHJLGZU.cjs.map → chunk-XNWF6CYR.cjs.map} +1 -1
- package/dist/{chunk-T6GAM3SQ.js → chunk-ZRPTWYWJ.js} +18 -11
- package/dist/chunk-ZRPTWYWJ.js.map +1 -0
- package/dist/{chunk-DB7U2C5B.cjs → chunk-ZXOWG32X.cjs} +19 -2
- package/dist/chunk-ZXOWG32X.cjs.map +1 -0
- package/dist/datasets/experiment/index.d.ts.map +1 -1
- package/dist/datasets/experiment/scorer.d.ts +1 -1
- package/dist/datasets/experiment/scorer.d.ts.map +1 -1
- package/dist/datasets/index.cjs +17 -17
- package/dist/datasets/index.js +2 -2
- package/dist/docs/SKILL.md +300 -0
- package/dist/docs/assets/SOURCE_MAP.json +1423 -0
- package/dist/docs/references/docs-agents-adding-voice.md +349 -0
- package/dist/docs/references/docs-agents-agent-approval.md +558 -0
- package/dist/docs/references/docs-agents-agent-memory.md +209 -0
- package/dist/docs/references/docs-agents-guardrails.md +374 -0
- package/dist/docs/references/docs-agents-network-approval.md +275 -0
- package/dist/docs/references/docs-agents-networks.md +299 -0
- package/dist/docs/references/docs-agents-overview.md +304 -0
- package/dist/docs/references/docs-agents-processors.md +622 -0
- package/dist/docs/references/docs-agents-structured-output.md +273 -0
- package/dist/docs/references/docs-agents-supervisor-agents.md +304 -0
- package/dist/docs/references/docs-agents-using-tools.md +214 -0
- package/dist/docs/references/docs-evals-custom-scorers.md +519 -0
- package/dist/docs/references/docs-evals-overview.md +141 -0
- package/dist/docs/references/docs-evals-running-in-ci.md +124 -0
- package/dist/docs/references/docs-memory-memory-processors.md +314 -0
- package/dist/docs/references/docs-memory-observational-memory.md +248 -0
- package/dist/docs/references/docs-memory-overview.md +45 -0
- package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
- package/dist/docs/references/docs-memory-storage.md +261 -0
- package/dist/docs/references/docs-memory-working-memory.md +400 -0
- package/dist/docs/references/docs-observability-datasets-overview.md +198 -0
- package/dist/docs/references/docs-observability-datasets-running-experiments.md +274 -0
- package/dist/docs/references/docs-observability-logging.md +99 -0
- package/dist/docs/references/docs-observability-overview.md +70 -0
- package/dist/docs/references/docs-observability-tracing-bridges-otel.md +209 -0
- package/dist/docs/references/docs-observability-tracing-exporters-arize.md +272 -0
- package/dist/docs/references/docs-observability-tracing-exporters-braintrust.md +111 -0
- package/dist/docs/references/docs-observability-tracing-exporters-cloud.md +127 -0
- package/dist/docs/references/docs-observability-tracing-exporters-datadog.md +187 -0
- package/dist/docs/references/docs-observability-tracing-exporters-default.md +209 -0
- package/dist/docs/references/docs-observability-tracing-exporters-laminar.md +100 -0
- package/dist/docs/references/docs-observability-tracing-exporters-langfuse.md +213 -0
- package/dist/docs/references/docs-observability-tracing-exporters-langsmith.md +198 -0
- package/dist/docs/references/docs-observability-tracing-exporters-otel.md +476 -0
- package/dist/docs/references/docs-observability-tracing-exporters-posthog.md +148 -0
- package/dist/docs/references/docs-observability-tracing-overview.md +1112 -0
- package/dist/docs/references/docs-rag-chunking-and-embedding.md +183 -0
- package/dist/docs/references/docs-rag-graph-rag.md +215 -0
- package/dist/docs/references/docs-rag-overview.md +72 -0
- package/dist/docs/references/docs-rag-retrieval.md +515 -0
- package/dist/docs/references/docs-rag-vector-databases.md +645 -0
- package/dist/docs/references/docs-server-auth-auth0.md +220 -0
- package/dist/docs/references/docs-server-auth-clerk.md +132 -0
- package/dist/docs/references/docs-server-auth-composite-auth.md +234 -0
- package/dist/docs/references/docs-server-auth-custom-auth-provider.md +513 -0
- package/dist/docs/references/docs-server-auth-firebase.md +272 -0
- package/dist/docs/references/docs-server-auth-jwt.md +110 -0
- package/dist/docs/references/docs-server-auth-simple-auth.md +180 -0
- package/dist/docs/references/docs-server-auth-supabase.md +117 -0
- package/dist/docs/references/docs-server-auth-workos.md +186 -0
- package/dist/docs/references/docs-server-custom-adapters.md +378 -0
- package/dist/docs/references/docs-server-custom-api-routes.md +267 -0
- package/dist/docs/references/docs-server-mastra-client.md +243 -0
- package/dist/docs/references/docs-server-mastra-server.md +71 -0
- package/dist/docs/references/docs-server-middleware.md +225 -0
- package/dist/docs/references/docs-server-request-context.md +471 -0
- package/dist/docs/references/docs-streaming-events.md +237 -0
- package/dist/docs/references/docs-streaming-tool-streaming.md +175 -0
- package/dist/docs/references/docs-streaming-workflow-streaming.md +109 -0
- package/dist/docs/references/docs-voice-overview.md +959 -0
- package/dist/docs/references/docs-voice-speech-to-speech.md +102 -0
- package/dist/docs/references/docs-voice-speech-to-text.md +79 -0
- package/dist/docs/references/docs-voice-text-to-speech.md +83 -0
- package/dist/docs/references/docs-workflows-agents-and-tools.md +166 -0
- package/dist/docs/references/docs-workflows-control-flow.md +822 -0
- package/dist/docs/references/docs-workflows-error-handling.md +360 -0
- package/dist/docs/references/docs-workflows-human-in-the-loop.md +215 -0
- package/dist/docs/references/docs-workflows-overview.md +370 -0
- package/dist/docs/references/docs-workflows-snapshots.md +238 -0
- package/dist/docs/references/docs-workflows-suspend-and-resume.md +205 -0
- package/dist/docs/references/docs-workflows-time-travel.md +309 -0
- package/dist/docs/references/docs-workflows-workflow-state.md +181 -0
- package/dist/docs/references/docs-workspace-filesystem.md +164 -0
- package/dist/docs/references/docs-workspace-overview.md +239 -0
- package/dist/docs/references/docs-workspace-sandbox.md +63 -0
- package/dist/docs/references/docs-workspace-search.md +243 -0
- package/dist/docs/references/docs-workspace-skills.md +169 -0
- package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +140 -0
- package/dist/docs/references/reference-agents-agent.md +141 -0
- package/dist/docs/references/reference-agents-generate.md +186 -0
- package/dist/docs/references/reference-agents-generateLegacy.md +173 -0
- package/dist/docs/references/reference-agents-getDefaultGenerateOptions.md +36 -0
- package/dist/docs/references/reference-agents-getDefaultOptions.md +34 -0
- package/dist/docs/references/reference-agents-getDefaultStreamOptions.md +36 -0
- package/dist/docs/references/reference-agents-getDescription.md +21 -0
- package/dist/docs/references/reference-agents-getInstructions.md +34 -0
- package/dist/docs/references/reference-agents-getLLM.md +37 -0
- package/dist/docs/references/reference-agents-getMemory.md +34 -0
- package/dist/docs/references/reference-agents-getModel.md +34 -0
- package/dist/docs/references/reference-agents-getTools.md +29 -0
- package/dist/docs/references/reference-agents-getVoice.md +34 -0
- package/dist/docs/references/reference-agents-listAgents.md +35 -0
- package/dist/docs/references/reference-agents-listScorers.md +34 -0
- package/dist/docs/references/reference-agents-listTools.md +34 -0
- package/dist/docs/references/reference-agents-listWorkflows.md +34 -0
- package/dist/docs/references/reference-agents-network.md +133 -0
- package/dist/docs/references/reference-ai-sdk-chat-route.md +82 -0
- package/dist/docs/references/reference-ai-sdk-network-route.md +74 -0
- package/dist/docs/references/reference-ai-sdk-to-ai-sdk-stream.md +231 -0
- package/dist/docs/references/reference-ai-sdk-with-mastra.md +59 -0
- package/dist/docs/references/reference-ai-sdk-workflow-route.md +79 -0
- package/dist/docs/references/reference-auth-auth0.md +73 -0
- package/dist/docs/references/reference-auth-clerk.md +36 -0
- package/dist/docs/references/reference-auth-firebase.md +80 -0
- package/dist/docs/references/reference-auth-jwt.md +26 -0
- package/dist/docs/references/reference-auth-supabase.md +33 -0
- package/dist/docs/references/reference-auth-workos.md +84 -0
- package/dist/docs/references/reference-client-js-agents.md +437 -0
- package/dist/docs/references/reference-configuration.md +752 -0
- package/dist/docs/references/reference-core-addGateway.md +42 -0
- package/dist/docs/references/reference-core-getAgent.md +21 -0
- package/dist/docs/references/reference-core-getAgentById.md +21 -0
- package/dist/docs/references/reference-core-getDeployer.md +22 -0
- package/dist/docs/references/reference-core-getGateway.md +38 -0
- package/dist/docs/references/reference-core-getGatewayById.md +41 -0
- package/dist/docs/references/reference-core-getLogger.md +22 -0
- package/dist/docs/references/reference-core-getMCPServer.md +47 -0
- package/dist/docs/references/reference-core-getMCPServerById.md +55 -0
- package/dist/docs/references/reference-core-getMemory.md +50 -0
- package/dist/docs/references/reference-core-getScorer.md +54 -0
- package/dist/docs/references/reference-core-getScorerById.md +54 -0
- package/dist/docs/references/reference-core-getServer.md +22 -0
- package/dist/docs/references/reference-core-getStorage.md +22 -0
- package/dist/docs/references/reference-core-getStoredAgentById.md +89 -0
- package/dist/docs/references/reference-core-getTelemetry.md +22 -0
- package/dist/docs/references/reference-core-getVector.md +22 -0
- package/dist/docs/references/reference-core-getWorkflow.md +42 -0
- package/dist/docs/references/reference-core-listAgents.md +21 -0
- package/dist/docs/references/reference-core-listGateways.md +40 -0
- package/dist/docs/references/reference-core-listLogs.md +38 -0
- package/dist/docs/references/reference-core-listLogsByRunId.md +36 -0
- package/dist/docs/references/reference-core-listMCPServers.md +55 -0
- package/dist/docs/references/reference-core-listMemory.md +56 -0
- package/dist/docs/references/reference-core-listScorers.md +29 -0
- package/dist/docs/references/reference-core-listStoredAgents.md +93 -0
- package/dist/docs/references/reference-core-listVectors.md +22 -0
- package/dist/docs/references/reference-core-listWorkflows.md +21 -0
- package/dist/docs/references/reference-core-mastra-class.md +66 -0
- package/dist/docs/references/reference-core-mastra-model-gateway.md +153 -0
- package/dist/docs/references/reference-core-setLogger.md +26 -0
- package/dist/docs/references/reference-core-setStorage.md +27 -0
- package/dist/docs/references/reference-datasets-addItem.md +37 -0
- package/dist/docs/references/reference-datasets-addItems.md +35 -0
- package/dist/docs/references/reference-datasets-compareExperiments.md +52 -0
- package/dist/docs/references/reference-datasets-create.md +51 -0
- package/dist/docs/references/reference-datasets-dataset.md +82 -0
- package/dist/docs/references/reference-datasets-datasets-manager.md +94 -0
- package/dist/docs/references/reference-datasets-delete.md +25 -0
- package/dist/docs/references/reference-datasets-deleteExperiment.md +27 -0
- package/dist/docs/references/reference-datasets-deleteItem.md +27 -0
- package/dist/docs/references/reference-datasets-deleteItems.md +29 -0
- package/dist/docs/references/reference-datasets-get.md +31 -0
- package/dist/docs/references/reference-datasets-getDetails.md +47 -0
- package/dist/docs/references/reference-datasets-getExperiment.md +30 -0
- package/dist/docs/references/reference-datasets-getItem.md +33 -0
- package/dist/docs/references/reference-datasets-getItemHistory.md +31 -0
- package/dist/docs/references/reference-datasets-list.md +31 -0
- package/dist/docs/references/reference-datasets-listExperimentResults.md +39 -0
- package/dist/docs/references/reference-datasets-listExperiments.md +33 -0
- package/dist/docs/references/reference-datasets-listItems.md +46 -0
- package/dist/docs/references/reference-datasets-listVersions.md +33 -0
- package/dist/docs/references/reference-datasets-startExperiment.md +62 -0
- package/dist/docs/references/reference-datasets-startExperimentAsync.md +43 -0
- package/dist/docs/references/reference-datasets-update.md +48 -0
- package/dist/docs/references/reference-datasets-updateItem.md +38 -0
- package/dist/docs/references/reference-evals-answer-relevancy.md +105 -0
- package/dist/docs/references/reference-evals-answer-similarity.md +99 -0
- package/dist/docs/references/reference-evals-bias.md +120 -0
- package/dist/docs/references/reference-evals-completeness.md +136 -0
- package/dist/docs/references/reference-evals-content-similarity.md +101 -0
- package/dist/docs/references/reference-evals-context-precision.md +196 -0
- package/dist/docs/references/reference-evals-create-scorer.md +270 -0
- package/dist/docs/references/reference-evals-faithfulness.md +114 -0
- package/dist/docs/references/reference-evals-hallucination.md +213 -0
- package/dist/docs/references/reference-evals-keyword-coverage.md +128 -0
- package/dist/docs/references/reference-evals-mastra-scorer.md +123 -0
- package/dist/docs/references/reference-evals-run-evals.md +179 -0
- package/dist/docs/references/reference-evals-scorer-utils.md +326 -0
- package/dist/docs/references/reference-evals-textual-difference.md +113 -0
- package/dist/docs/references/reference-evals-tone-consistency.md +119 -0
- package/dist/docs/references/reference-evals-toxicity.md +123 -0
- package/dist/docs/references/reference-harness-harness-class.md +708 -0
- package/dist/docs/references/reference-logging-pino-logger.md +117 -0
- package/dist/docs/references/reference-memory-deleteMessages.md +38 -0
- package/dist/docs/references/reference-memory-memory-class.md +147 -0
- package/dist/docs/references/reference-memory-observational-memory.md +565 -0
- package/dist/docs/references/reference-observability-tracing-bridges-otel.md +131 -0
- package/dist/docs/references/reference-observability-tracing-configuration.md +178 -0
- package/dist/docs/references/reference-observability-tracing-exporters-console-exporter.md +138 -0
- package/dist/docs/references/reference-observability-tracing-exporters-datadog.md +116 -0
- package/dist/docs/references/reference-observability-tracing-instances.md +107 -0
- package/dist/docs/references/reference-observability-tracing-interfaces.md +743 -0
- package/dist/docs/references/reference-observability-tracing-processors-sensitive-data-filter.md +144 -0
- package/dist/docs/references/reference-observability-tracing-spans.md +224 -0
- package/dist/docs/references/reference-processors-batch-parts-processor.md +61 -0
- package/dist/docs/references/reference-processors-language-detector.md +82 -0
- package/dist/docs/references/reference-processors-message-history-processor.md +85 -0
- package/dist/docs/references/reference-processors-moderation-processor.md +104 -0
- package/dist/docs/references/reference-processors-pii-detector.md +108 -0
- package/dist/docs/references/reference-processors-processor-interface.md +521 -0
- package/dist/docs/references/reference-processors-prompt-injection-detector.md +72 -0
- package/dist/docs/references/reference-processors-semantic-recall-processor.md +117 -0
- package/dist/docs/references/reference-processors-system-prompt-scrubber.md +80 -0
- package/dist/docs/references/reference-processors-token-limiter-processor.md +115 -0
- package/dist/docs/references/reference-processors-tool-call-filter.md +85 -0
- package/dist/docs/references/reference-processors-tool-search-processor.md +111 -0
- package/dist/docs/references/reference-processors-unicode-normalizer.md +62 -0
- package/dist/docs/references/reference-processors-working-memory-processor.md +152 -0
- package/dist/docs/references/reference-rag-database-config.md +261 -0
- package/dist/docs/references/reference-rag-embeddings.md +92 -0
- package/dist/docs/references/reference-server-mastra-server.md +298 -0
- package/dist/docs/references/reference-server-register-api-route.md +249 -0
- package/dist/docs/references/reference-storage-cloudflare-d1.md +218 -0
- package/dist/docs/references/reference-storage-composite.md +235 -0
- package/dist/docs/references/reference-storage-lance.md +131 -0
- package/dist/docs/references/reference-storage-libsql.md +135 -0
- package/dist/docs/references/reference-storage-mongodb.md +262 -0
- package/dist/docs/references/reference-storage-mssql.md +157 -0
- package/dist/docs/references/reference-storage-overview.md +121 -0
- package/dist/docs/references/reference-storage-postgresql.md +526 -0
- package/dist/docs/references/reference-storage-upstash.md +160 -0
- package/dist/docs/references/reference-streaming-ChunkType.md +292 -0
- package/dist/docs/references/reference-streaming-agents-MastraModelOutput.md +182 -0
- package/dist/docs/references/reference-streaming-agents-streamLegacy.md +142 -0
- package/dist/docs/references/reference-streaming-workflows-observeStream.md +42 -0
- package/dist/docs/references/reference-streaming-workflows-resumeStream.md +61 -0
- package/dist/docs/references/reference-streaming-workflows-stream.md +88 -0
- package/dist/docs/references/reference-streaming-workflows-timeTravelStream.md +142 -0
- package/dist/docs/references/reference-templates-overview.md +194 -0
- package/dist/docs/references/reference-tools-create-tool.md +237 -0
- package/dist/docs/references/reference-tools-graph-rag-tool.md +182 -0
- package/dist/docs/references/reference-tools-mcp-client.md +954 -0
- package/dist/docs/references/reference-tools-mcp-server.md +1271 -0
- package/dist/docs/references/reference-tools-vector-query-tool.md +459 -0
- package/dist/docs/references/reference-vectors-libsql.md +305 -0
- package/dist/docs/references/reference-vectors-mongodb.md +295 -0
- package/dist/docs/references/reference-vectors-pg.md +408 -0
- package/dist/docs/references/reference-vectors-upstash.md +294 -0
- package/dist/docs/references/reference-voice-composite-voice.md +121 -0
- package/dist/docs/references/reference-voice-mastra-voice.md +311 -0
- package/dist/docs/references/reference-voice-voice.addInstructions.md +55 -0
- package/dist/docs/references/reference-voice-voice.addTools.md +67 -0
- package/dist/docs/references/reference-voice-voice.connect.md +94 -0
- package/dist/docs/references/reference-voice-voice.events.md +37 -0
- package/dist/docs/references/reference-voice-voice.listen.md +164 -0
- package/dist/docs/references/reference-voice-voice.on.md +111 -0
- package/dist/docs/references/reference-voice-voice.speak.md +157 -0
- package/dist/docs/references/reference-workflows-run-methods-cancel.md +86 -0
- package/dist/docs/references/reference-workflows-run-methods-restart.md +33 -0
- package/dist/docs/references/reference-workflows-run-methods-resume.md +59 -0
- package/dist/docs/references/reference-workflows-run-methods-start.md +58 -0
- package/dist/docs/references/reference-workflows-run-methods-startAsync.md +67 -0
- package/dist/docs/references/reference-workflows-run-methods-timeTravel.md +142 -0
- package/dist/docs/references/reference-workflows-run.md +59 -0
- package/dist/docs/references/reference-workflows-step.md +119 -0
- package/dist/docs/references/reference-workflows-workflow-methods-branch.md +25 -0
- package/dist/docs/references/reference-workflows-workflow-methods-commit.md +17 -0
- package/dist/docs/references/reference-workflows-workflow-methods-create-run.md +63 -0
- package/dist/docs/references/reference-workflows-workflow-methods-dountil.md +25 -0
- package/dist/docs/references/reference-workflows-workflow-methods-dowhile.md +25 -0
- package/dist/docs/references/reference-workflows-workflow-methods-foreach.md +118 -0
- package/dist/docs/references/reference-workflows-workflow-methods-map.md +93 -0
- package/dist/docs/references/reference-workflows-workflow-methods-parallel.md +21 -0
- package/dist/docs/references/reference-workflows-workflow-methods-sleep.md +35 -0
- package/dist/docs/references/reference-workflows-workflow-methods-sleepUntil.md +35 -0
- package/dist/docs/references/reference-workflows-workflow-methods-then.md +21 -0
- package/dist/docs/references/reference-workflows-workflow.md +157 -0
- package/dist/docs/references/reference-workspace-filesystem.md +255 -0
- package/dist/docs/references/reference-workspace-local-filesystem.md +343 -0
- package/dist/docs/references/reference-workspace-local-sandbox.md +301 -0
- package/dist/docs/references/reference-workspace-sandbox.md +87 -0
- package/dist/docs/references/reference-workspace-workspace-class.md +244 -0
- package/dist/docs/references/reference.md +277 -0
- package/dist/evals/index.cjs +20 -20
- package/dist/evals/index.js +3 -3
- package/dist/evals/run/index.d.ts +9 -2
- package/dist/evals/run/index.d.ts.map +1 -1
- package/dist/evals/scoreTraces/index.cjs +5 -5
- package/dist/evals/scoreTraces/index.js +2 -2
- package/dist/harness/harness.d.ts +6 -0
- package/dist/harness/harness.d.ts.map +1 -1
- package/dist/harness/index.cjs +28 -13
- package/dist/harness/index.cjs.map +1 -1
- package/dist/harness/index.js +20 -5
- package/dist/harness/index.js.map +1 -1
- package/dist/index.cjs +2 -2
- package/dist/index.js +1 -1
- package/dist/integration/index.cjs +2 -2
- package/dist/integration/index.js +1 -1
- package/dist/llm/index.cjs +6 -6
- package/dist/llm/index.js +1 -1
- package/dist/llm/model/embedding-router.d.ts.map +1 -1
- package/dist/llm/model/model.loop.d.ts +1 -1
- package/dist/llm/model/model.loop.d.ts.map +1 -1
- package/dist/loop/index.cjs +20 -12
- package/dist/loop/index.js +1 -1
- package/dist/loop/network/index.d.ts.map +1 -1
- package/dist/loop/network/validation.d.ts +51 -0
- package/dist/loop/network/validation.d.ts.map +1 -1
- package/dist/loop/test-utils/generateText.d.ts.map +1 -1
- package/dist/loop/test-utils/options.d.ts.map +1 -1
- package/dist/loop/test-utils/streamObject.d.ts.map +1 -1
- package/dist/loop/types.d.ts +15 -0
- package/dist/loop/types.d.ts.map +1 -1
- package/dist/loop/workflows/agentic-execution/index.d.ts +3 -0
- package/dist/loop/workflows/agentic-execution/index.d.ts.map +1 -1
- package/dist/loop/workflows/agentic-execution/is-task-complete-step.d.ts +126 -0
- package/dist/loop/workflows/agentic-execution/is-task-complete-step.d.ts.map +1 -0
- package/dist/loop/workflows/agentic-execution/llm-execution-step.d.ts +3 -1
- package/dist/loop/workflows/agentic-execution/llm-execution-step.d.ts.map +1 -1
- package/dist/loop/workflows/agentic-execution/llm-mapping-step.d.ts +1 -0
- package/dist/loop/workflows/agentic-execution/llm-mapping-step.d.ts.map +1 -1
- package/dist/loop/workflows/agentic-execution/tool-call-step.d.ts.map +1 -1
- package/dist/loop/workflows/agentic-loop/index.d.ts +3 -0
- package/dist/loop/workflows/agentic-loop/index.d.ts.map +1 -1
- package/dist/loop/workflows/schema.d.ts +3 -0
- package/dist/loop/workflows/schema.d.ts.map +1 -1
- package/dist/mastra/index.cjs +2 -2
- package/dist/mastra/index.d.ts +9 -5
- package/dist/mastra/index.d.ts.map +1 -1
- package/dist/mastra/index.js +1 -1
- package/dist/memory/index.cjs +14 -14
- package/dist/memory/index.js +1 -1
- package/dist/processor-provider/index.cjs +10 -10
- package/dist/processor-provider/index.js +1 -1
- package/dist/processors/index.cjs +42 -42
- package/dist/processors/index.js +1 -1
- package/dist/processors/processors/skills.d.ts.map +1 -1
- package/dist/relevance/index.cjs +3 -3
- package/dist/relevance/index.js +1 -1
- package/dist/storage/constants.cjs +56 -56
- package/dist/storage/constants.js +1 -1
- package/dist/storage/domains/memory/inmemory.d.ts.map +1 -1
- package/dist/storage/index.cjs +160 -160
- package/dist/storage/index.js +2 -2
- package/dist/storage/types.d.ts +2 -3
- package/dist/storage/types.d.ts.map +1 -1
- package/dist/stream/aisdk/v5/compat/prepare-tools.d.ts.map +1 -1
- package/dist/stream/base/output.d.ts +1 -0
- package/dist/stream/base/output.d.ts.map +1 -1
- package/dist/stream/index.cjs +11 -11
- package/dist/stream/index.js +2 -2
- package/dist/stream/types.d.ts +27 -1
- package/dist/stream/types.d.ts.map +1 -1
- package/dist/test-utils/llm-mock.cjs +4 -4
- package/dist/test-utils/llm-mock.js +1 -1
- package/dist/tool-loop-agent/index.cjs +4 -4
- package/dist/tool-loop-agent/index.js +1 -1
- package/dist/tools/index.cjs +9 -5
- package/dist/tools/index.d.ts +1 -1
- package/dist/tools/index.d.ts.map +1 -1
- package/dist/tools/index.js +1 -1
- package/dist/tools/is-vercel-tool.cjs +2 -2
- package/dist/tools/is-vercel-tool.js +1 -1
- package/dist/tools/toolchecks.d.ts +10 -0
- package/dist/tools/toolchecks.d.ts.map +1 -1
- package/dist/utils.cjs +23 -23
- package/dist/utils.js +1 -1
- package/dist/vector/index.cjs +7 -7
- package/dist/vector/index.js +1 -1
- package/dist/vector/types.d.ts +9 -1
- package/dist/vector/types.d.ts.map +1 -1
- package/dist/workflows/evented/index.cjs +10 -10
- package/dist/workflows/evented/index.js +1 -1
- package/dist/workflows/index.cjs +25 -25
- package/dist/workflows/index.js +1 -1
- package/dist/workflows/types.d.ts +14 -1
- package/dist/workflows/types.d.ts.map +1 -1
- package/dist/workflows/workflow.d.ts +3 -17
- package/dist/workflows/workflow.d.ts.map +1 -1
- package/dist/workspace/filesystem/composite-filesystem.d.ts +5 -0
- package/dist/workspace/filesystem/composite-filesystem.d.ts.map +1 -1
- package/dist/workspace/filesystem/filesystem.d.ts +12 -0
- package/dist/workspace/filesystem/filesystem.d.ts.map +1 -1
- package/dist/workspace/filesystem/fs-utils.d.ts +12 -0
- package/dist/workspace/filesystem/fs-utils.d.ts.map +1 -1
- package/dist/workspace/filesystem/local-filesystem.d.ts +6 -0
- package/dist/workspace/filesystem/local-filesystem.d.ts.map +1 -1
- package/dist/workspace/index.cjs +66 -66
- package/dist/workspace/index.js +1 -1
- package/dist/workspace/lsp/client.d.ts +76 -0
- package/dist/workspace/lsp/client.d.ts.map +1 -0
- package/dist/workspace/lsp/index.d.ts +6 -0
- package/dist/workspace/lsp/index.d.ts.map +1 -0
- package/dist/workspace/lsp/language.d.ts +16 -0
- package/dist/workspace/lsp/language.d.ts.map +1 -0
- package/dist/workspace/lsp/manager.d.ts +72 -0
- package/dist/workspace/lsp/manager.d.ts.map +1 -0
- package/dist/workspace/lsp/servers.d.ts +43 -0
- package/dist/workspace/lsp/servers.d.ts.map +1 -0
- package/dist/workspace/lsp/types.d.ts +45 -0
- package/dist/workspace/lsp/types.d.ts.map +1 -0
- package/dist/workspace/tools/ast-edit.d.ts.map +1 -1
- package/dist/workspace/tools/edit-file.d.ts.map +1 -1
- package/dist/workspace/tools/helpers.d.ts +13 -0
- package/dist/workspace/tools/helpers.d.ts.map +1 -1
- package/dist/workspace/tools/write-file.d.ts.map +1 -1
- package/dist/workspace/workspace.d.ts +33 -0
- package/dist/workspace/workspace.d.ts.map +1 -1
- package/package.json +10 -8
- package/dist/chunk-2X66GWF5.cjs.map +0 -1
- package/dist/chunk-6QBN6MZY.cjs.map +0 -1
- package/dist/chunk-7EXW4AAG.js.map +0 -1
- package/dist/chunk-7UAJ6LMR.cjs.map +0 -1
- package/dist/chunk-BQHWJLXU.js.map +0 -1
- package/dist/chunk-DB7U2C5B.cjs.map +0 -1
- package/dist/chunk-DFCRXDVK.js.map +0 -1
- package/dist/chunk-EAZ6YDCQ.cjs.map +0 -1
- package/dist/chunk-HB6T4554.cjs.map +0 -1
- package/dist/chunk-IC5OUWKJ.js.map +0 -1
- package/dist/chunk-KUXNBWN7.js.map +0 -1
- package/dist/chunk-O7PZ4VOO.cjs.map +0 -1
- package/dist/chunk-QSHV7GPT.js.map +0 -1
- package/dist/chunk-QTAS3HND.cjs.map +0 -1
- package/dist/chunk-R4N65TLG.js.map +0 -1
- package/dist/chunk-T6GAM3SQ.js.map +0 -1
- package/dist/chunk-YQG7NBPR.cjs.map +0 -1
- package/dist/chunk-ZSBM2SVU.js.map +0 -1
@@ -0,0 +1,124 @@

# Running Scorers in CI

Running scorers in your CI pipeline provides quantifiable metrics for measuring agent quality over time. The `runEvals` function processes multiple test cases through your agent or workflow and returns aggregate scores.

## Basic Setup

You can use any testing framework that supports ESM modules, such as [Vitest](https://vitest.dev/), [Jest](https://jestjs.io/), or [Mocha](https://mochajs.org/).
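
For CI runs it usually helps to raise the per-test timeout, since each eval makes live model calls. A minimal Vitest sketch (the specific values here are assumptions; tune them for your agent and your provider's rate limits):

```typescript
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Evals call live models, so allow generous per-test time
    testTimeout: 120_000,
    // Run eval files one at a time to stay under provider rate limits
    fileParallelism: false,
  },
})
```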

## Creating Test Cases

Use `runEvals` to evaluate your agent against multiple test cases. The function accepts an array of data items, each containing an `input` and optional `groundTruth` for scorer validation.

```typescript
import { describe, it, expect } from 'vitest'
import { createScorer, runEvals } from '@mastra/core/evals'
import { weatherAgent } from './weather-agent'
import { locationScorer } from '../scorers/location-scorer'

describe('Weather Agent Tests', () => {
  it('should correctly extract locations from queries', async () => {
    const result = await runEvals({
      data: [
        {
          input: 'weather in Berlin',
          groundTruth: { expectedLocation: 'Berlin', expectedCountry: 'DE' },
        },
        {
          input: 'weather in Berlin, Maryland',
          groundTruth: { expectedLocation: 'Berlin', expectedCountry: 'US' },
        },
        {
          input: 'weather in Berlin, Russia',
          groundTruth: { expectedLocation: 'Berlin', expectedCountry: 'RU' },
        },
      ],
      target: weatherAgent,
      scorers: [locationScorer],
    })

    // Assert aggregate score meets threshold
    expect(result.scores['location-accuracy']).toBe(1)
    expect(result.summary.totalItems).toBe(3)
  })
})
```

## Understanding Results

The `runEvals` function returns an object with:

- `scores`: Average scores for each scorer across all test cases
- `summary.totalItems`: Total number of test cases processed

```typescript
{
  scores: {
    'location-accuracy': 1.0, // Average score across all items
    'another-scorer': 0.85
  },
  summary: {
    totalItems: 3
  }
}
```
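
When scorers are noisy, asserting exact equality on aggregate scores can make CI flaky. One option is a small helper that gates on per-scorer minimums (this helper is hypothetical, not part of `@mastra/core`; the threshold values are examples):

```typescript
// Hypothetical CI gate: check each scorer's aggregate score against a minimum
function meetsThresholds(
  scores: Record<string, number>,
  thresholds: Record<string, number>,
): boolean {
  return Object.entries(thresholds).every(
    ([scorer, min]) => (scores[scorer] ?? 0) >= min,
  )
}

// Using the result shape shown above
const passed = meetsThresholds(
  { 'location-accuracy': 1.0, 'another-scorer': 0.85 },
  { 'location-accuracy': 0.9, 'another-scorer': 0.8 },
)
// passed === true: both scorers clear their minimums
```

In a test you would then assert `expect(meetsThresholds(result.scores, thresholds)).toBe(true)` rather than pinning exact values.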

## Multiple Test Scenarios

Create separate test cases for different evaluation scenarios:

```typescript
describe('Weather Agent Tests', () => {
  const locationScorer = createScorer({
    /* ... */
  })

  it('should handle location disambiguation', async () => {
    const result = await runEvals({
      data: [
        {
          input: 'weather in Berlin',
          groundTruth: {
            /* ... */
          },
        },
        {
          input: 'weather in Berlin, Maryland',
          groundTruth: {
            /* ... */
          },
        },
      ],
      target: weatherAgent,
      scorers: [locationScorer],
    })

    expect(result.scores['location-accuracy']).toBe(1)
  })

  it('should handle typos and misspellings', async () => {
    const result = await runEvals({
      data: [
        {
          input: 'weather in Berln',
          groundTruth: { expectedLocation: 'Berlin', expectedCountry: 'DE' },
        },
        {
          input: 'weather in Parris',
          groundTruth: { expectedLocation: 'Paris', expectedCountry: 'FR' },
        },
      ],
      target: weatherAgent,
      scorers: [locationScorer],
    })

    expect(result.scores['location-accuracy']).toBe(1)
  })
})
```

## Next Steps

- Learn about [creating custom scorers](https://mastra.ai/docs/evals/custom-scorers)
- Explore [built-in scorers](https://mastra.ai/docs/evals/built-in-scorers)
- Read the [runEvals API reference](https://mastra.ai/reference/evals/run-evals)

@@ -0,0 +1,314 @@

# Memory Processors

Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.

Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.

## Built-in Memory Processors

Mastra automatically adds these processors when memory is enabled:

### MessageHistory

Retrieves message history and persists new messages.

**When you configure:**

```typescript
memory: new Memory({
  lastMessages: 10,
})
```

**Mastra internally:**

1. Creates a `MessageHistory` processor with `limit: 10`
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)

**What it does:**

- **Input**: Fetches the last 10 messages from storage and prepends them to the conversation
- **Output**: Persists new messages to storage after the model responds

**Example:**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'
import { openai } from '@ai-sdk/openai'

const agent = new Agent({
  id: 'test-agent',
  name: 'Test Agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'memory-store',
      url: 'file:memory.db',
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
})
```

### SemanticRecall

Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

**When you configure:**

```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
})
```

**Mastra internally:**

1. Creates a `SemanticRecall` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)
4. Requires both a vector store and embedder to be configured

**What it does:**

- **Input**: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- **Output**: Creates embeddings for new messages and stores them in the vector store for future retrieval

**Example:**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'
import { PineconeVector } from '@mastra/pinecone'
import { OpenAIEmbedder } from '@mastra/openai'
import { openai } from '@ai-sdk/openai'

const agent = new Agent({
  name: 'semantic-agent',
  instructions: 'You are a helpful assistant with semantic memory',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'memory-store',
      url: 'file:memory.db',
    }),
    vector: new PineconeVector({
      id: 'memory-vector',
      apiKey: process.env.PINECONE_API_KEY!,
    }),
    embedder: new OpenAIEmbedder({
      model: 'text-embedding-3-small',
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
})
```

### WorkingMemory

Manages working memory state across conversations.

**When you configure:**

```typescript
memory: new Memory({
  workingMemory: { enabled: true },
})
```

**Mastra internally:**

1. Creates a `WorkingMemory` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Requires a storage adapter to be configured

**What it does:**

- **Input**: Retrieves working memory state for the current thread and prepends it to the conversation
- **Output**: No output processing

**Example:**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'
import { openai } from '@ai-sdk/openai'

const agent = new Agent({
  name: 'working-memory-agent',
  instructions: 'You are an assistant with working memory',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'memory-store',
      url: 'file:memory.db',
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
})
```

## Manual Control and Deduplication

If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** add it again automatically. This gives you full control over processor ordering:

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { MessageHistory, TokenLimiter } from '@mastra/core/processors'
import { LibSQLStore } from '@mastra/libsql'
import { openai } from '@ai-sdk/openai'

// Custom MessageHistory with different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
  lastMessages: 20,
})

const agent = new Agent({
  name: 'custom-memory-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
})
```

## Processor Execution Order

Understanding the execution order is important when combining guardrails with memory:

### Input Processors

```text
[Memory Processors] → [Your inputProcessors]
```

1. **Memory processors run FIRST**: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
2. **Your input processors run AFTER**: guardrails, filters, validators

This means memory loads message history before your processors can validate or filter the input.

### Output Processors

```text
[Your outputProcessors] → [Memory Processors]
```

1. **Your output processors run FIRST**: guardrails, filters, validators
2. **Memory processors run AFTER**: `SemanticRecall` (embeddings), `MessageHistory` (persistence)

This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

## Guardrails and Memory

The default execution order provides safe guardrail behavior:

### Output guardrails (recommended)

Output guardrails run **before** memory processors save messages. If a guardrail aborts:

- The tripwire is triggered
- Memory processors are skipped
- **No messages are persisted to storage**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { openai } from '@ai-sdk/openai'

// Output guardrail that blocks inappropriate content
const contentBlocker = {
  id: 'content-blocker',
  processOutputResult: async ({ messages, abort }) => {
    // containsBadContent is a placeholder for your own moderation check
    const hasInappropriateContent = messages.some(msg => containsBadContent(msg))
    if (hasInappropriateContent) {
      abort('Content blocked by guardrail')
    }
    return messages
  },
}

const agent = new Agent({
  name: 'safe-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
})

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate('Hello')
if (result.tripwire) {
  console.log('Blocked:', result.tripwire.reason)
  // Memory is empty - no messages were persisted
}
```

### Input guardrails

Input guardrails run **after** memory processors load history. If a guardrail aborts:

- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- **No messages are persisted to storage**

```typescript
// Input guardrail that validates user input
const inputValidator = {
  id: 'input-validator',
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast(m => m.role === 'user')
    // isInvalidInput is a placeholder for your own validation check
    if (isInvalidInput(lastUserMessage)) {
      abort('Invalid input detected')
    }
    return messages
  },
}

const agent = new Agent({
  name: 'validated-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
})
```

### Summary

| Guardrail Type | When it runs               | If it aborts                  |
| -------------- | -------------------------- | ----------------------------- |
| Input          | After memory loads history | LLM not called, nothing saved |
| Output         | Before memory saves        | Nothing saved to storage      |

Both scenarios are safe: guardrails prevent inappropriate content from being persisted to memory.

## Related documentation

- [Processors](https://mastra.ai/docs/agents/processors) - General processor concepts and custom processor creation
- [Guardrails](https://mastra.ai/docs/agents/guardrails) - Security and validation processors
- [Memory Overview](https://mastra.ai/docs/memory/overview) - Memory types and configuration

When creating custom processors, avoid mutating the input `messages` array or its objects directly.
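
As a sketch of that guidance, a custom input processor can return new message objects instead of editing them in place (the simplified message shape and the redaction regex here are illustrative assumptions, not Mastra's real message type):

```typescript
// Illustrative message shape; real Mastra messages are richer
type SimpleMessage = { role: string; content: string }

const redactEmails = {
  id: 'redact-emails',
  processInput: async ({ messages }: { messages: SimpleMessage[] }) => {
    // Copy each message rather than mutating the input array or its objects
    return messages.map(msg => ({
      ...msg,
      content: msg.content.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[redacted]'),
    }))
  },
}
```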

@@ -0,0 +1,248 @@

# Observational Memory

**Added in:** `@mastra/memory@1.1.0`

Observational Memory (OM) is Mastra's memory system for long-context agentic memory. Two background agents — an **Observer** and a **Reflector** — watch your agent's conversations and maintain a dense observation log that replaces raw message history as it grows.

## Quick Start

Enable `observationalMemory` in the memory options when creating your agent:

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: true,
    },
  }),
})
```

That's it. The agent now has humanlike long-term memory that persists across conversations. Setting `observationalMemory: true` uses `google/gemini-2.5-flash` by default. To use a different model or customize thresholds, pass a config object instead:

```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'deepseek/deepseek-reasoner',
    },
  },
})
```

See [configuration options](https://mastra.ai/reference/memory/observational-memory) for full API details.

> **Note:** OM currently only supports `@mastra/pg`, `@mastra/libsql`, and `@mastra/mongodb` storage adapters. It uses background agents for managing memory. When using `observationalMemory: true`, the default model is `google/gemini-2.5-flash`. When passing a config object, a `model` must be explicitly set.

## Benefits

- **Prompt caching**: OM's context is stable — observations append over time rather than being dynamically retrieved each turn. This keeps the prompt prefix cacheable, which reduces costs.
- **Compression**: Raw message history and tool results get compressed into a dense observation log. Smaller context means faster responses and longer coherent conversations.
- **Zero context rot**: The agent sees relevant information instead of noisy tool calls and irrelevant tokens, so it stays on task over long sessions.

## How It Works

You don't remember every word of every conversation you've ever had. You observe what happened subconsciously, then your brain reflects — reorganizing, combining, and condensing into long-term memory. OM works the same way.

Every time an agent responds, it sees a context window containing its system prompt, recent message history, and any injected context. The context window is finite — even models with large token limits perform worse when the window is full. This causes two problems:

- **Context rot**: the more raw message history an agent carries, the worse it performs.
- **Context waste**: most of that history contains tokens no longer needed to keep the agent on task.

OM solves both problems by compressing old context into dense observations.

### Observations

When message history tokens exceed a threshold (default: 30,000), the Observer creates observations — concise notes about what happened:

```text
Date: 2026-01-15
- 🔴 12:10 User is building a Next.js app with Supabase auth, due in 1 week (meaning January 22nd 2026)
- 🔴 12:10 App uses server components with client-side hydration
- 🟡 12:12 User asked about middleware configuration for protected routes
- 🔴 12:15 User stated the app name is "Acme Dashboard"
```

The compression is typically 5–40×. The Observer also tracks a **current task** and **suggested response** so the agent picks up where it left off.

Example: an agent using Playwright MCP might see 50,000+ tokens per page snapshot. With OM, the Observer watches the interaction and creates a few hundred tokens of observations about what was on the page and what actions were taken. The agent stays on task without carrying every raw snapshot.

### Reflections

When observations exceed their threshold (default: 40,000 tokens), the Reflector condenses them — combining related items and reflecting on patterns.

The result is a three-tier system:

1. **Recent messages**: Exact conversation history for the current task
2. **Observations**: A log of what the Observer has seen
3. **Reflections**: Condensed observations when memory becomes too long

## Models

The Observer and Reflector run in the background. Any model that works with Mastra's model routing (e.g. `openai/...`, `google/...`, `deepseek/...`) can be used.

When using `observationalMemory: true`, the default model is `google/gemini-2.5-flash`. When passing a config object, a `model` must be explicitly set.

We recommend `google/gemini-2.5-flash` — it works well for both observation and reflection, and its 1M token context window gives the Reflector headroom.

We've also tested `deepseek`, `qwen3`, and `glm-4.7` for the Observer. For the Reflector, make sure the model's context window can fit all observations. Note that Claude 4.5 models currently don't work well as observer or reflector.

```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'deepseek/deepseek-reasoner',
    },
  },
})
```

See [model configuration](https://mastra.ai/reference/memory/observational-memory) for using different models per agent.

## Scopes

### Thread scope (default)

Each thread has its own observations. This scope is well tested and works well as a general purpose memory system, especially for long horizon agentic use-cases.

```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      scope: 'thread',
    },
  },
})
```

Thread scope requires a valid `threadId` to be provided when calling the agent. If `threadId` is missing, Observational Memory throws an error. This prevents multiple threads from silently sharing a single observation record, which can cause database deadlocks.

### Resource scope (experimental)

Observations are shared across all threads for a resource (typically a user). This enables cross-conversation memory.

```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      scope: 'resource',
    },
  },
})
```

Resource scope works; however, it's marked as experimental for now until we prove task adherence and continuity across multiple simultaneous ongoing threads. As of today, you may need to tweak your system prompt to prevent one thread from continuing work that another thread had already started (but hadn't finished).

This is because in resource scope, each thread is a perspective on _all_ threads for the resource.

For your use-case this may not be a problem, so your mileage may vary.

> **Warning:** In resource scope, unobserved messages across _all_ threads are processed together. For users with many existing threads, this can be slow. Use thread scope for existing apps.

## Token Budgets

OM uses token thresholds to decide when to observe and reflect. See [token budget configuration](https://mastra.ai/reference/memory/observational-memory) for details.

```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        // when to run the Observer (default: 30,000)
        messageTokens: 30_000,
      },
      reflection: {
        // when to run the Reflector (default: 40,000)
        observationTokens: 40_000,
      },
      // let message history borrow from observation budget
      // requires bufferTokens: false (temporary limitation)
      shareTokenBudget: false,
    },
  },
})
```

## Async Buffering

Without async buffering, the Observer runs synchronously when the message threshold is reached — the agent pauses mid-conversation while the Observer LLM call completes. With async buffering (enabled by default), observations are pre-computed in the background as the conversation grows. When the threshold is hit, buffered observations activate instantly with no pause.

### How it works

As the agent converses, message tokens accumulate. At regular intervals (`bufferTokens`), a background Observer call runs without blocking the agent. Each call produces a "chunk" of observations that's stored in a buffer.

When message tokens reach the `messageTokens` threshold, buffered chunks activate: their observations move into the active observation log, and the corresponding raw messages are removed from the context window. The agent never pauses.

Buffered observations also include continuation hints — a suggested next response and the current task — so the main agent maintains conversational continuity after activation shrinks the context window.

If the agent produces messages faster than the Observer can process them, a `blockAfter` safety threshold forces a synchronous observation as a last resort. Buffered activation still preserves a minimum remaining context (the smaller of ~1k tokens or the configured retention floor).

Reflection works similarly — the Reflector runs in the background when observations reach a fraction of the reflection threshold.

### Settings

| Setting                        | Default | What it controls |
| ------------------------------ | ------- | ---------------- |
| `observation.bufferTokens`     | `0.2`   | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
| `observation.bufferActivation` | `0.8`   | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
| `observation.blockAfter`       | `1.2`   | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
| `reflection.bufferActivation`  | `0.5`   | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
| `reflection.blockAfter`        | `1.2`   | Safety threshold for reflection, same logic as observation. |
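
To make the ratio settings concrete, here is the arithmetic those defaults imply (a sketch of the math described in the table, not the actual `@mastra/memory` internals):

```typescript
// Defaults from the table above
const messageTokens = 30_000   // observation threshold
const bufferTokens = 0.2       // buffer interval as a fraction of messageTokens
const blockAfter = 1.2         // safety multiplier of messageTokens

// Background Observer runs roughly every 6,000 tokens
const bufferInterval = bufferTokens * messageTokens

// Synchronous observation is forced if buffering falls behind, at 36,000 tokens
const forcedSyncAt = blockAfter * messageTokens
```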

### Disabling

To disable async buffering and use synchronous observation/reflection instead:

```typescript
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        bufferTokens: false,
      },
    },
  },
})
```

Setting `bufferTokens: false` disables both observation and reflection async buffering. See [async buffering configuration](https://mastra.ai/reference/memory/observational-memory) for the full API.

> **Note:** Async buffering is not supported with `scope: 'resource'`. It is automatically disabled in resource scope.

## Migrating existing threads

No manual migration needed. OM reads existing messages and observes them lazily when thresholds are exceeded.

- **Thread scope**: The first time a thread exceeds `observation.messageTokens`, the Observer processes the backlog.
- **Resource scope**: All unobserved messages across all threads for a resource are processed together. For users with many existing threads, this could take significant time.

## Viewing in Mastra Studio

Mastra Studio shows OM status in real time in the memory tab: token usage, which model is running, current observations, and reflection history.

## Comparing OM with other memory features

- **[Message history](https://mastra.ai/docs/memory/message-history)**: High-fidelity record of the current conversation
- **[Working memory](https://mastra.ai/docs/memory/working-memory)**: Small, structured state (JSON or markdown) for user preferences, names, goals
- **[Semantic Recall](https://mastra.ai/docs/memory/semantic-recall)**: RAG-based retrieval of relevant past messages

If you're using working memory to store conversation summaries or ongoing state that grows over time, OM is a better fit. Working memory is for small, structured data; OM is for long-running event logs. OM also manages message history automatically — the `messageTokens` setting controls how much raw history remains before observation runs.

In practical terms, OM replaces both working memory and message history, and has greater accuracy (and lower cost) than Semantic Recall.

## Related

- [Observational Memory Reference](https://mastra.ai/reference/memory/observational-memory)
- [Memory Overview](https://mastra.ai/docs/memory/overview)
- [Message History](https://mastra.ai/docs/memory/message-history)
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors)