@mastra/mcp-docs-server 1.1.8 → 1.1.9-alpha.0

This diff shows the content of publicly available package versions released to a supported registry. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their public registries.
Files changed (228)
  1. package/.docs/docs/agents/agent-memory.md +2 -2
  2. package/.docs/docs/agents/guardrails.md +3 -3
  3. package/.docs/docs/agents/network-approval.md +4 -1
  4. package/.docs/docs/agents/networks.md +1 -1
  5. package/.docs/docs/agents/overview.md +1 -1
  6. package/.docs/docs/agents/processors.md +35 -17
  7. package/.docs/docs/agents/structured-output.md +1 -1
  8. package/.docs/docs/agents/using-tools.md +2 -2
  9. package/.docs/docs/build-with-ai/mcp-docs-server.md +4 -4
  10. package/.docs/docs/build-with-ai/skills.md +1 -1
  11. package/.docs/docs/community/discord.md +1 -1
  12. package/.docs/docs/community/licensing.md +1 -1
  13. package/.docs/docs/deployment/mastra-server.md +1 -1
  14. package/.docs/docs/deployment/studio.md +2 -2
  15. package/.docs/docs/deployment/web-framework.md +1 -1
  16. package/.docs/docs/evals/overview.md +1 -1
  17. package/.docs/docs/getting-started/build-with-ai.md +1 -1
  18. package/.docs/docs/getting-started/project-structure.md +1 -1
  19. package/.docs/docs/index.md +62 -16
  20. package/.docs/docs/mastra-cloud/deployment.md +1 -1
  21. package/.docs/docs/mastra-cloud/studio.md +1 -1
  22. package/.docs/docs/mcp/publishing-mcp-server.md +1 -1
  23. package/.docs/docs/memory/memory-processors.md +1 -1
  24. package/.docs/docs/memory/message-history.md +2 -2
  25. package/.docs/docs/memory/observational-memory.md +6 -2
  26. package/.docs/docs/memory/semantic-recall.md +2 -2
  27. package/.docs/docs/memory/storage.md +1 -1
  28. package/.docs/docs/memory/working-memory.md +6 -6
  29. package/.docs/docs/observability/tracing/bridges/otel.md +2 -2
  30. package/.docs/docs/observability/tracing/exporters/default.md +1 -1
  31. package/.docs/docs/observability/tracing/overview.md +4 -4
  32. package/.docs/docs/observability/tracing/processors/sensitive-data-filter.md +2 -2
  33. package/.docs/docs/rag/chunking-and-embedding.md +1 -1
  34. package/.docs/docs/rag/vector-databases.md +2 -2
  35. package/.docs/docs/server/auth/auth0.md +1 -1
  36. package/.docs/docs/server/auth/firebase.md +1 -1
  37. package/.docs/docs/server/auth/simple-auth.md +1 -1
  38. package/.docs/docs/server/auth.md +1 -1
  39. package/.docs/docs/server/mastra-client.md +1 -1
  40. package/.docs/docs/server/mastra-server.md +1 -1
  41. package/.docs/docs/server/server-adapters.md +2 -2
  42. package/.docs/docs/streaming/events.md +1 -1
  43. package/.docs/docs/streaming/overview.md +1 -1
  44. package/.docs/docs/streaming/tool-streaming.md +44 -30
  45. package/.docs/docs/streaming/workflow-streaming.md +1 -1
  46. package/.docs/docs/workflows/control-flow.md +44 -2
  47. package/.docs/docs/workflows/error-handling.md +1 -1
  48. package/.docs/docs/workflows/overview.md +3 -3
  49. package/.docs/docs/workflows/snapshots.md +1 -1
  50. package/.docs/docs/workflows/time-travel.md +2 -2
  51. package/.docs/docs/workspace/filesystem.md +2 -2
  52. package/.docs/docs/workspace/overview.md +52 -7
  53. package/.docs/docs/workspace/sandbox.md +72 -13
  54. package/.docs/docs/workspace/skills.md +2 -2
  55. package/.docs/guides/build-your-ui/copilotkit.md +1 -1
  56. package/.docs/guides/deployment/inngest.md +4 -4
  57. package/.docs/guides/guide/ai-recruiter.md +1 -1
  58. package/.docs/guides/guide/github-actions-pr-description.md +2 -2
  59. package/.docs/guides/guide/notes-mcp-server.md +1 -1
  60. package/.docs/guides/guide/stock-agent.md +2 -2
  61. package/.docs/guides/migrations/agentnetwork.md +1 -1
  62. package/.docs/guides/migrations/upgrade-to-v1/client.md +2 -2
  63. package/.docs/guides/migrations/upgrade-to-v1/deployment.md +1 -1
  64. package/.docs/guides/migrations/upgrade-to-v1/memory.md +2 -2
  65. package/.docs/guides/migrations/upgrade-to-v1/storage.md +1 -1
  66. package/.docs/guides/migrations/upgrade-to-v1/tools.md +2 -2
  67. package/.docs/guides/migrations/upgrade-to-v1/workflows.md +5 -5
  68. package/.docs/guides/migrations/vnext-to-standard-apis.md +2 -2
  69. package/.docs/models/gateways/netlify.md +1 -2
  70. package/.docs/models/gateways/openrouter.md +8 -1
  71. package/.docs/models/gateways/vercel.md +3 -1
  72. package/.docs/models/index.md +1 -1
  73. package/.docs/models/providers/abacus.md +21 -11
  74. package/.docs/models/providers/aihubmix.md +7 -2
  75. package/.docs/models/providers/alibaba-cn.md +80 -71
  76. package/.docs/models/providers/alibaba-coding-plan-cn.md +78 -0
  77. package/.docs/models/providers/alibaba-coding-plan.md +78 -0
  78. package/.docs/models/providers/chutes.md +1 -1
  79. package/.docs/models/providers/clarifai.md +81 -0
  80. package/.docs/models/providers/cloudferro-sherlock.md +5 -4
  81. package/.docs/models/providers/cloudflare-workers-ai.md +3 -2
  82. package/.docs/models/providers/cortecs.md +7 -5
  83. package/.docs/models/providers/deepinfra.md +7 -2
  84. package/.docs/models/providers/deepseek.md +1 -1
  85. package/.docs/models/providers/drun.md +73 -0
  86. package/.docs/models/providers/firmware.md +28 -20
  87. package/.docs/models/providers/google.md +3 -1
  88. package/.docs/models/providers/inception.md +4 -2
  89. package/.docs/models/providers/kilo.md +3 -1
  90. package/.docs/models/providers/nano-gpt.md +519 -40
  91. package/.docs/models/providers/nebius.md +34 -34
  92. package/.docs/models/providers/nvidia.md +4 -2
  93. package/.docs/models/providers/ollama-cloud.md +1 -2
  94. package/.docs/models/providers/openai.md +3 -1
  95. package/.docs/models/providers/opencode.md +36 -33
  96. package/.docs/models/providers/poe.md +8 -2
  97. package/.docs/models/providers/qiniu-ai.md +20 -5
  98. package/.docs/models/providers/requesty.md +17 -1
  99. package/.docs/models/providers/siliconflow-cn.md +7 -1
  100. package/.docs/models/providers/togetherai.md +1 -3
  101. package/.docs/models/providers/xai.md +28 -25
  102. package/.docs/models/providers/xiaomi.md +1 -1
  103. package/.docs/models/providers/zenmux.md +3 -1
  104. package/.docs/models/providers.md +4 -0
  105. package/.docs/reference/agents/getDefaultGenerateOptions.md +1 -1
  106. package/.docs/reference/agents/getDefaultOptions.md +1 -1
  107. package/.docs/reference/agents/getDefaultStreamOptions.md +1 -1
  108. package/.docs/reference/agents/getDescription.md +1 -1
  109. package/.docs/reference/agents/network.md +3 -1
  110. package/.docs/reference/ai-sdk/handle-chat-stream.md +2 -0
  111. package/.docs/reference/ai-sdk/handle-network-stream.md +2 -0
  112. package/.docs/reference/ai-sdk/network-route.md +2 -0
  113. package/.docs/reference/ai-sdk/to-ai-sdk-v4-messages.md +1 -1
  114. package/.docs/reference/ai-sdk/to-ai-sdk-v5-messages.md +1 -1
  115. package/.docs/reference/auth/auth0.md +3 -3
  116. package/.docs/reference/auth/firebase.md +1 -1
  117. package/.docs/reference/auth/workos.md +2 -2
  118. package/.docs/reference/cli/mastra.md +4 -4
  119. package/.docs/reference/client-js/mastra-client.md +1 -1
  120. package/.docs/reference/configuration.md +62 -6
  121. package/.docs/reference/core/getDeployer.md +1 -1
  122. package/.docs/reference/core/getLogger.md +1 -1
  123. package/.docs/reference/core/getScorer.md +2 -2
  124. package/.docs/reference/core/getServer.md +1 -1
  125. package/.docs/reference/core/getStorage.md +1 -1
  126. package/.docs/reference/core/getStoredAgentById.md +1 -1
  127. package/.docs/reference/core/getTelemetry.md +1 -1
  128. package/.docs/reference/core/getWorkflow.md +1 -1
  129. package/.docs/reference/core/listAgents.md +1 -1
  130. package/.docs/reference/core/listMCPServers.md +1 -1
  131. package/.docs/reference/core/listStoredAgents.md +1 -1
  132. package/.docs/reference/core/listVectors.md +1 -1
  133. package/.docs/reference/core/mastra-class.md +1 -1
  134. package/.docs/reference/core/setLogger.md +1 -1
  135. package/.docs/reference/core/setStorage.md +1 -1
  136. package/.docs/reference/datasets/dataset.md +1 -1
  137. package/.docs/reference/datasets/datasets-manager.md +1 -1
  138. package/.docs/reference/datasets/get.md +2 -2
  139. package/.docs/reference/datasets/getDetails.md +1 -1
  140. package/.docs/reference/datasets/listItems.md +1 -1
  141. package/.docs/reference/deployer/vercel.md +1 -1
  142. package/.docs/reference/evals/answer-relevancy.md +1 -1
  143. package/.docs/reference/evals/completeness.md +1 -1
  144. package/.docs/reference/evals/context-precision.md +3 -3
  145. package/.docs/reference/evals/context-relevance.md +1 -1
  146. package/.docs/reference/evals/hallucination.md +3 -9
  147. package/.docs/reference/evals/keyword-coverage.md +1 -1
  148. package/.docs/reference/evals/mastra-scorer.md +1 -1
  149. package/.docs/reference/evals/noise-sensitivity.md +2 -2
  150. package/.docs/reference/evals/textual-difference.md +1 -1
  151. package/.docs/reference/evals/tone-consistency.md +1 -1
  152. package/.docs/reference/evals/tool-call-accuracy.md +1 -1
  153. package/.docs/reference/harness/harness-class.md +4 -2
  154. package/.docs/reference/index.md +2 -0
  155. package/.docs/reference/memory/cloneThread.md +1 -1
  156. package/.docs/reference/memory/observational-memory.md +7 -5
  157. package/.docs/reference/observability/tracing/bridges/otel.md +1 -1
  158. package/.docs/reference/observability/tracing/processors/sensitive-data-filter.md +1 -1
  159. package/.docs/reference/observability/tracing/spans.md +2 -0
  160. package/.docs/reference/processors/message-history-processor.md +1 -1
  161. package/.docs/reference/processors/processor-interface.md +6 -2
  162. package/.docs/reference/processors/token-limiter-processor.md +2 -2
  163. package/.docs/reference/rag/metadata-filters.md +10 -10
  164. package/.docs/reference/server/create-route.md +2 -0
  165. package/.docs/reference/server/koa-adapter.md +1 -1
  166. package/.docs/reference/server/register-api-route.md +2 -2
  167. package/.docs/reference/storage/cloudflare-d1.md +3 -3
  168. package/.docs/reference/storage/cloudflare.md +1 -1
  169. package/.docs/reference/storage/convex.md +1 -1
  170. package/.docs/reference/storage/dynamodb.md +2 -2
  171. package/.docs/reference/storage/lance.md +2 -2
  172. package/.docs/reference/storage/mongodb.md +1 -1
  173. package/.docs/reference/storage/mssql.md +1 -1
  174. package/.docs/reference/storage/postgresql.md +2 -2
  175. package/.docs/reference/storage/upstash.md +1 -1
  176. package/.docs/reference/streaming/workflows/observeStream.md +1 -1
  177. package/.docs/reference/templates/overview.md +1 -1
  178. package/.docs/reference/tools/create-tool.md +1 -1
  179. package/.docs/reference/tools/mcp-server.md +4 -4
  180. package/.docs/reference/vectors/chroma.md +2 -2
  181. package/.docs/reference/vectors/couchbase.md +6 -6
  182. package/.docs/reference/vectors/pg.md +2 -0
  183. package/.docs/reference/vectors/s3vectors.md +5 -5
  184. package/.docs/reference/voice/azure.md +4 -2
  185. package/.docs/reference/voice/cloudflare.md +4 -2
  186. package/.docs/reference/voice/elevenlabs.md +1 -1
  187. package/.docs/reference/voice/google-gemini-live.md +2 -2
  188. package/.docs/reference/voice/google.md +3 -3
  189. package/.docs/reference/voice/mastra-voice.md +1 -1
  190. package/.docs/reference/voice/murf.md +2 -2
  191. package/.docs/reference/voice/openai-realtime.md +3 -1
  192. package/.docs/reference/voice/openai.md +7 -3
  193. package/.docs/reference/voice/playai.md +4 -2
  194. package/.docs/reference/voice/sarvam.md +3 -1
  195. package/.docs/reference/voice/speechify.md +6 -4
  196. package/.docs/reference/voice/voice.addInstructions.md +2 -2
  197. package/.docs/reference/voice/voice.addTools.md +1 -1
  198. package/.docs/reference/voice/voice.close.md +2 -2
  199. package/.docs/reference/voice/voice.connect.md +4 -2
  200. package/.docs/reference/voice/voice.events.md +2 -2
  201. package/.docs/reference/voice/voice.getSpeakers.md +1 -1
  202. package/.docs/reference/voice/voice.listen.md +11 -5
  203. package/.docs/reference/voice/voice.off.md +2 -2
  204. package/.docs/reference/voice/voice.on.md +2 -2
  205. package/.docs/reference/voice/voice.speak.md +14 -4
  206. package/.docs/reference/voice/voice.updateConfig.md +1 -1
  207. package/.docs/reference/workflows/run-methods/timeTravel.md +1 -1
  208. package/.docs/reference/workspace/blaxel-sandbox.md +164 -0
  209. package/.docs/reference/workspace/daytona-sandbox.md +48 -139
  210. package/.docs/reference/workspace/e2b-sandbox.md +39 -75
  211. package/.docs/reference/workspace/filesystem.md +24 -10
  212. package/.docs/reference/workspace/gcs-filesystem.md +20 -0
  213. package/.docs/reference/workspace/local-filesystem.md +23 -9
  214. package/.docs/reference/workspace/local-sandbox.md +23 -98
  215. package/.docs/reference/workspace/process-manager.md +296 -0
  216. package/.docs/reference/workspace/s3-filesystem.md +20 -0
  217. package/.docs/reference/workspace/sandbox.md +9 -1
  218. package/.docs/reference/workspace/workspace-class.md +93 -25
  219. package/CHANGELOG.md +8 -0
  220. package/dist/tools/course.d.ts +7 -27
  221. package/dist/tools/course.d.ts.map +1 -1
  222. package/dist/tools/docs.d.ts +6 -18
  223. package/dist/tools/docs.d.ts.map +1 -1
  224. package/dist/tools/embedded-docs.d.ts +12 -112
  225. package/dist/tools/embedded-docs.d.ts.map +1 -1
  226. package/dist/tools/migration.d.ts +6 -26
  227. package/dist/tools/migration.d.ts.map +1 -1
  228. package/package.json +7 -7
@@ -96,7 +96,7 @@ export const memoryAgent = new Agent({
 })
 ```

-> **Mastra Cloud Store limitation:** Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
+> **Mastra Cloud Store limitation:** Agent-level storage isn't supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation doesn't apply if you bring your own database.

 ## Message history

@@ -127,7 +127,7 @@ const response = await memoryAgent.generate("What's my favorite color?", {
 })
 ```

-> **Warning:** Each thread has an owner (`resourceId`) that cannot be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.
+> **Warning:** Each thread has an owner (`resourceId`) that can't be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.

 To learn more about memory see the [Memory](https://mastra.ai/docs/memory/overview) documentation.

@@ -40,7 +40,7 @@ export const moderatedAgent = new Agent({

 ## Input processors

-Input processors are applied before user messages reach the language model. They are useful for normalization, validation, content moderation, prompt injection detection, and security checks.
+Input processors are applied before user messages reach the language model. They're useful for normalization, validation, content moderation, prompt injection detection, and security checks.

 ### Normalizing user messages

@@ -111,7 +111,7 @@ export const multilingualAgent = new Agent({

 ## Output processors

-Output processors are applied after the language model generates a response, but before it is returned to the user. They are useful for response optimization, moderation, transformation, and applying safety controls.
+Output processors are applied after the language model generates a response, but before it's returned to the user. They're useful for response optimization, moderation, transformation, and applying safety controls.

 ### Batching streamed output

@@ -188,7 +188,7 @@ const scrubbedAgent = new Agent({

 ## Hybrid processors

-Hybrid processors can be applied either before messages are sent to the language model or before responses are returned to the user. They are useful for tasks like content moderation and PII redaction.
+Hybrid processors can be applied either before messages are sent to the language model or before responses are returned to the user. They're useful for tasks like content moderation and PII redaction.

 ### Moderating input and output

@@ -1,5 +1,7 @@
 # Network Approval

+> **Deprecated:** Agent networks are deprecated and will be removed in a future release. Use the [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) instead. See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.
+
 Agent networks can require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in individual agents and workflows. When a tool, subagent, or workflow within a network requires approval or suspends execution, the network pauses and emits events that allow your application to collect user input before resuming.

 ## Storage
@@ -269,7 +271,8 @@ Both approaches work with the same tool definitions. Automatic resumption trigge

 ## Related

-- [Agent Networks](https://mastra.ai/docs/agents/networks)
+- [Supervisor Agents](https://mastra.ai/docs/agents/supervisor-agents)
+- [Migration: .network() to Supervisor Pattern](https://mastra.ai/guides/migrations/network-to-supervisor)
 - [Agent Approval](https://mastra.ai/docs/agents/agent-approval)
 - [Human-in-the-Loop](https://mastra.ai/docs/workflows/human-in-the-loop)
 - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
@@ -1,6 +1,6 @@
 # Agent Networks

-> **Supervisor Pattern Recommended:** The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
+> **Agent Network Deprecated — Supervisor Pattern Recommended:** Agent networks are deprecated and will be removed in a future release. The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
 >
 > - **Better control**: Iteration hooks, delegation hooks, and task completion scoring give you fine-grained control over execution
 > - **Simpler API**: Uses familiar `stream()` and `generate()` methods instead of a separate `.network()` API
@@ -59,7 +59,7 @@ Agents use LLMs and tools to solve open-ended tasks. They reason about goals, de

 ### Instruction formats

-Instructions define the agent's behavior, personality, and capabilities. They are system-level prompts that establish the agent's core identity and expertise.
+Instructions define the agent's behavior, personality, and capabilities. They're system-level prompts that establish the agent's core identity and expertise.

 Instructions can be provided in multiple formats for greater flexibility. The examples below illustrate the supported shapes:

@@ -303,31 +303,49 @@ await agent.generate('Complex task', {
 ### Custom output processor

 ```typescript
-import type { Processor, MastraDBMessage, RequestContext } from '@mastra/core'
+import type { Processor, MastraDBMessage, ChunkType } from '@mastra/core'

 export class CustomOutputProcessor implements Processor {
   id = 'custom-output'

-  async processOutputResult({
-    messages,
-    context,
-  }: {
-    messages: MastraDBMessage[]
-    context: RequestContext
-  }): Promise<MastraDBMessage[]> {
+  async processOutputResult({ messages }): Promise<MastraDBMessage[]> {
     // Transform messages after the LLM generates them
     return messages.filter(msg => msg.role !== 'system')
   }

-  async processOutputStream({
-    stream,
-    context,
-  }: {
-    stream: ReadableStream
-    context: RequestContext
-  }): Promise<ReadableStream> {
-    // Transform streaming responses
-    return stream
+  async processOutputStream({ part }): Promise<ChunkType | null> {
+    // Transform or filter streaming chunks
+    return part
+  }
+}
+```
+
+The `processOutputStream` method receives all streaming chunks. To also receive custom `data-*` chunks emitted by tools via `writer.custom()`, set `processDataParts = true` on your processor. This lets you inspect, modify, or block tool-emitted data chunks before they reach the client.
+
+#### Accessing generation result data
+
+The `processOutputResult` method receives a `result` object containing the resolved generation data — the same information available in the `onFinish` callback. This makes it easy to access token usage, generated text, finish reason, and step details.
+
+```typescript
+import type { Processor } from '@mastra/core'
+
+export class UsageTracker implements Processor {
+  id = 'usage-tracker'
+
+  async processOutputResult({ messages, result }) {
+    console.log(`Text: ${result.text}`)
+    console.log(`Tokens: ${result.usage.inputTokens} in, ${result.usage.outputTokens} out`)
+    console.log(`Finish reason: ${result.finishReason}`)
+    console.log(`Steps: ${result.steps.length}`)
+
+    // Each step contains toolCalls, toolResults, reasoning, sources, files, etc.
+    for (const step of result.steps) {
+      if (step.toolCalls?.length) {
+        console.log(`Step used ${step.toolCalls.length} tool calls`)
+      }
+    }
+
+    return messages
   }
 }
 ```
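The hunk above changes `processOutputStream` from a `ReadableStream` transformer to a per-chunk contract: return the chunk to forward it, or `null` to drop it. The sketch below illustrates that contract in isolation, using simplified local stand-ins for the `@mastra/core` types — the `ChunkType` shape, the interface name, and the `"reasoning"` chunk type here are illustrative assumptions, not the package's actual definitions.

```typescript
// Illustrative stand-ins for @mastra/core's ChunkType and Processor
// (simplified assumptions for this sketch, not the real exports).
type ChunkType = { type: string; payload?: unknown }

interface OutputStreamProcessor {
  id: string
  // Per-chunk contract: return the chunk to forward it, or null to drop it.
  processOutputStream(args: { part: ChunkType }): Promise<ChunkType | null>
}

// Drops hypothetical "reasoning" chunks and forwards everything else unchanged.
class ReasoningFilter implements OutputStreamProcessor {
  id = 'reasoning-filter'

  async processOutputStream({ part }: { part: ChunkType }): Promise<ChunkType | null> {
    return part.type === 'reasoning' ? null : part
  }
}

// Simulate a stream of chunks passing through the processor and
// collect the types of the chunks that survive filtering.
async function demo(): Promise<string[]> {
  const filter = new ReasoningFilter()
  const chunks: ChunkType[] = [
    { type: 'text-delta', payload: 'Hello' },
    { type: 'reasoning', payload: 'internal chain of thought' },
    { type: 'text-delta', payload: ' world' },
  ]
  const forwarded: string[] = []
  for (const part of chunks) {
    const out = await filter.processOutputStream({ part })
    if (out !== null) forwarded.push(out.type)
  }
  return forwarded
}
```

Because each chunk is handled independently, a filter like this composes naturally with other processors in a pipeline: each one sees only the chunks its predecessors chose to forward.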
@@ -188,7 +188,7 @@ const response = await testAgent.generate('Help me plan my day.', {
188
188
  console.log(response.object)
189
189
  ```
190
190
 
191
- > **Gemini 2.5 with tools:** Gemini 2.5 models do not support combining `response_format` (structured output) with function calling (tools) in the same API call. If your agent has tools and you're using `structuredOutput` with a Gemini 2.5 model, you must set `jsonPromptInjection: true` to avoid the error `Function calling with a response mime type: 'application/json' is unsupported`.
191
+ > **Gemini 2.5 with tools:** Gemini 2.5 models don't support combining `response_format` (structured output) with function calling (tools) in the same API call. If your agent has tools and you're using `structuredOutput` with a Gemini 2.5 model, you must set `jsonPromptInjection: true` to avoid the error `Function calling with a response mime type: 'application/json' is unsupported`.
192
192
  >
193
193
  > ```typescript
194
194
  > const response = await agentWithTools.generate('Your prompt', {
@@ -8,7 +8,7 @@ Use tools when an agent needs additional context or information from remote reso
8
8
 
9
9
  ## Creating a tool
10
10
 
11
- When creating tools, keep descriptions simple and focused on what the tool does, emphasizing its primary use case. Descriptive schema names can also help guide the agent on how to use the tool.
11
+ When creating tools, keep descriptions concise and focused on what the tool does, emphasizing its primary use case. Descriptive schema names can also help guide the agent on how to use the tool.
12
12
 
13
13
  This example shows how to create a tool that fetches weather data from an API. When the agent calls the tool, it provides the required input as defined by the tool's `inputSchema`. The tool accesses this data through its `inputData` parameter, which in this example includes the `location` used in the weather API query.
14
14
 
@@ -211,7 +211,7 @@ This lets you specify how tools are identified in the stream. If you want the `t
211
211
 
212
212
  ### Subagents and workflows as tools
213
213
 
214
- Subagents and workflows follow the same pattern. They are converted to tools with a prefix followed by your object key:
214
+ Subagents and workflows follow the same pattern. They're converted to tools with a prefix followed by your object key:
215
215
 
216
216
  | Property | Prefix | Example key | `toolName` |
217
217
  | ----------- | ----------- | ----------- | ------------------- |
@@ -56,7 +56,7 @@ This creates a project-scoped `.mcp.json` file if one doesn't already exist. You
56
56
 
57
57
  ### Cursor
58
58
 
59
- Install by clicking the button below:
59
+ Install by selecting the button below:
60
60
 
61
61
  [![Install MCP Server](https://cursor.com/deeplink/mcp-install-light.svg)](cursor://anysphere.cursor-deeplink/mcp/install?name=mastra\&config=eyJjb21tYW5kIjoibnB4IC15IEBtYXN0cmEvbWNwLWRvY3Mtc2VydmVyIn0%3D)
62
62
 
@@ -77,7 +77,7 @@ Google Antigravity is an agent-first development platform that supports MCP serv
77
77
 
78
78
  ![The Antigravity MCP store. At the top is a search bar and below a list of available MCP servers. On the very top right is a dropdown menu.](/assets/images/antigravity_mcp_server-689ea495d9c7139cc431f1f1b9827f9b.png)
79
79
 
80
- 2. To add a custom MCP server, select **Manage MCP Servers** at the top of the MCP Store and click **View raw config** in the main tab.
80
+ 2. To add a custom MCP server, select **Manage MCP Servers** at the top of the MCP Store and select **View raw config** in the main tab.
81
81
 
82
82
  ![The Antigravity MCP store showing the Manage MCP Servers option and the View raw config button.](/assets/images/antigravity_managed_mcp-b661e8c04b3219000f8d842e5eb26a1a.png)
83
83
 
@@ -137,11 +137,11 @@ Once you installed the MCP server, you can use it like so:
137
137
 
138
138
  ![Entry in VSCode\&#39;s settings page. The option is called \&quot;Chat \&gt; MCP: Enabled (Preview)\&quot;. The description says: \&quot;Enables integration with Model Context Protocol servers to provide additional tools and functionality.\&quot;](/assets/images/vscode-mcp-setting-8d1eb4f3df1e33606503f8c5e937e9e3.png)
139
139
 
140
- MCP only works in Agent mode in VSCode. Once you are in agent mode, open the `mcp.json` file and click the "start" button. Note that the "start" button will only appear if the `.vscode` folder containing `mcp.json` is in your workspace root, or the highest level of the in-editor file explorer.
140
+ MCP only works in Agent mode in VSCode. Once you are in agent mode, open the `mcp.json` file and select the "start" button. Note that the "start" button will only appear if the `.vscode` folder containing `mcp.json` is in your workspace root, or the highest level of the in-editor file explorer.
141
141
 
142
142
  ![A screenshot of the mcp.json file showing the start button in the editor](/assets/images/vscode-start-mcp-26480d86080c4907cb497a325de106a4.png)
143
143
 
144
- After starting the MCP server, click the tools button in the Copilot pane to see available tools.
144
+ After starting the MCP server, select the tools button in the Copilot pane to see available tools.
145
145
 
146
146
  ![Tools page of VSCode to see available tools](/assets/images/vscode-mcp-running-d92d6ed234d1148093dc804b0ead3515.png)
147
147
 
@@ -32,4 +32,4 @@ bun x skills add mastra-ai/skills
32
32
 
33
33
  Mastra skills work with any coding agent that supports the [Skills standard](https://agentskills.io/), including Claude Code, Cursor, Codex, OpenCode, and others.
34
34
 
35
- They are also available on [GitHub](https://github.com/mastra-ai/skills).
35
+ They're also available on [GitHub](https://github.com/mastra-ai/skills).
@@ -6,4 +6,4 @@ The Discord server has over 1000 members and serves as the main discussion forum
6
6
 
7
7
  ## Discord MCP Bot
8
8
 
9
- In addition to community members, we have an (experimental!) Discord bot that can also help answer questions. It uses [Model Context Protocol (MCP)](https://mastra.ai/docs/mcp/overview). You can ask it a question with `/ask` (either in public channels or DMs) and clear history (in DMs only) with `/cleardm`.
9
+ In addition to community members, we've an (experimental!) Discord bot that can also help answer questions. It uses [Model Context Protocol (MCP)](https://mastra.ai/docs/mcp/overview). You can ask it a question with `/ask` (either in public channels or DMs) and clear history (in DMs only) with `/cleardm`.
@@ -4,7 +4,7 @@
4
4
 
5
5
  Mastra is licensed under the Apache License 2.0, a permissive open-source license that provides users with broad rights to use, modify, and distribute the software.
6
6
 
7
- ### What is Apache License 2.0?
7
+ ### What's Apache License 2.0?
8
8
 
9
9
  The Apache License 2.0 is a permissive open-source license that grants users extensive rights to use, modify, and distribute the software. It allows:
10
10
 
@@ -100,7 +100,7 @@ The built server exposes endpoints for health checks, agents, workflows, and mor
100
100
  | `GET /openapi.json` | OpenAPI specification (if `server.build.openAPIDocs` is enabled) |
101
101
  | `GET /swagger-ui` | Interactive API documentation (if `server.build.swaggerUI` is enabled) |
102
102
 
103
- This list is not exhaustive. To view all endpoints, run `mastra dev` and visit `http://localhost:4111/swagger-ui`.
103
+ This list isn't exhaustive. To view all endpoints, run `mastra dev` and visit `http://localhost:4111/swagger-ui`.
104
104
 
105
105
  To add your own endpoints, see [Custom API Routes](https://mastra.ai/docs/server/custom-api-routes).
106
106
 
@@ -2,7 +2,7 @@
2
2
 
3
3
  [Studio](https://mastra.ai/docs/getting-started/studio) provides an interactive UI for building and testing your agents. It's a React-based Single Page Application (SPA) that runs in the browser and connects to a running [Mastra server](https://mastra.ai/docs/deployment/mastra-server).
4
4
 
5
- There are two primary ways of deploying Studio:
5
+ You can deploy Studio in two primary ways:
6
6
 
7
7
  - [Mastra Cloud](https://mastra.ai/docs/mastra-cloud/overview) hosts Studio for you and allows you to share access with your team via link
8
8
  - You can self-host Studio on your own infrastructure, either alongside your Mastra server or separately as a standalone SPA
@@ -61,7 +61,7 @@ The command uses Node's built-in `http` module and [`serve-handler`](https://www
61
61
 
62
62
  ## Running a server
63
63
 
64
- Running `mastra studio` as a long-running process is no different from running any other Node.js service. All the best practices, tools, and options for deployment apply here as well. You can use process managers like PM2, use Docker, or cloud services that support Node.js applications. You'll need to ensure CORS is configured correctly and errors are monitored, just like with any web service.
64
+ Running `mastra studio` as a long-running process is no different from running any other Node.js service. All the best practices, tools, and options for deployment apply here as well. You can use process managers like PM2, use Docker, or cloud services that support Node.js applications. You'll need to ensure CORS is configured correctly and errors are monitored, as with any web service.
65
65
 
66
66
  > **Warning:** Once Studio is connected to your Mastra server, it has full access to your agents, workflows, and tools. Be sure to secure it properly in production (e.g. behind authentication, VPN, etc.) to prevent unauthorized access.
 
@@ -2,7 +2,7 @@
 
  When Mastra is integrated with a web framework, it deploys alongside your application using the framework's standard deployment process. Follow the instructions below to ensure your Mastra integration deploys correctly.
 
- > **Warning:** If you're deploying to a cloud provider, remove any usage of [LibSQLStore](https://mastra.ai/reference/storage/libsql) from your Mastra configuration. LibSQLStore requires filesystem access and is not compatible with serverless platforms.
+ > **Warning:** If you're deploying to a cloud provider, remove any usage of [LibSQLStore](https://mastra.ai/reference/storage/libsql) from your Mastra configuration. LibSQLStore requires filesystem access and isn't compatible with serverless platforms.
 
  Integration guides:
 
@@ -8,7 +8,7 @@ Scorers can be run in the cloud, capturing real-time results. But scorers can al
 
  ## Types of Scorers
 
- There are different kinds of scorers, each serving a specific purpose. Here are some common types:
+ Mastra provides different kinds of scorers, each serving a specific purpose. Here are some common types:
 
  1. **Textual Scorers**: Evaluate accuracy, reliability, and context understanding of agent responses
  2. **Classification Scorers**: Measure accuracy in categorizing data based on predefined categories
@@ -32,7 +32,7 @@ bun x skills add mastra-ai/skills
 
  Read the dedicated [Mastra Skills](https://mastra.ai/docs/build-with-ai/skills) guide to learn more about installation options and available skills.
 
- > **Tip:** If you're just interested in giving your agent access to Mastra's documentation, we recommend using **Skills**. While the MCP Docs Server also provides this information, Skills will perform better. Use the MCP Docs Server when you need its tools, e.g. the migration tool.
+ > **Tip:** If you're interested in giving your agent access to Mastra's documentation, we recommend using **Skills**. While the MCP Docs Server also provides this information, Skills will perform better. Use the MCP Docs Server when you need its tools, e.g. the migration tool.
 
  ## MCP Docs Server
 
@@ -2,7 +2,7 @@
 
  Your new Mastra project, created with the `create mastra` command, comes with a predefined set of files and folders to help you get started.
 
- Mastra is a framework, but it's **unopinionated** about how you organize or colocate your files. The CLI provides a sensible default structure that works well for most projects, but you're free to adapt it to your workflow or team conventions. You could even build your entire project in a single file if you wanted! Whatever structure you choose, keep it consistent to ensure your code stays maintainable and easy to navigate.
+ Mastra is a framework, but it's **unopinionated** about how you organize or colocate your files. The CLI provides a sensible default structure that works well for most projects, but you're free to adapt it to your workflow or team conventions. You could even build your entire project in a single file if you wanted! Whatever structure you choose, keep it consistent to ensure your code stays maintainable and straightforward to navigate.
 
  ## Default project structure
 
@@ -1,10 +1,10 @@
  # Get Started
 
- Mastra is a TypeScript framework for building AI agents, workflows, and tools. Create your first project in seconds and start building.
+ Build AI agents your users actually depend on. Mastra is a TypeScript framework that gives you everything you need to prototype fast and ship with confidence. Create your first agent with a single command and start building.
 
  ## Quickstart
 
- Run the command below to create a new Mastra project with an example agent:
+ Run this command to create a new project you can test immediately in [Studio](https://mastra.ai/docs/getting-started/studio):
 
  **npm**:
 
@@ -30,11 +30,11 @@ yarn create mastra
  bunx create-mastra
  ```
 
- This sets up a project you can test immediately in [Studio](https://mastra.ai/docs/getting-started/studio). See the [quickstart guide](https://mastra.ai/guides/getting-started/quickstart) for a full walkthrough.
+ See the [quickstart guide](https://mastra.ai/guides/getting-started/quickstart) for a full walkthrough.
 
  ## Integrate with your framework
 
- Add Mastra to an existing project or create a new app with your preferred framework.
+ Add Mastra to an existing project, or create a new app with your preferred framework:
 
  - [Next.js](https://mastra.ai/guides/getting-started/next-js)
  - [React](https://mastra.ai/guides/getting-started/vite-react)
@@ -45,43 +45,89 @@ Add Mastra to an existing project or create a new app with your preferred framew
 
  For other frameworks, see the [framework integration guides](https://mastra.ai/guides/getting-started/next-js).
 
- ## What you can do
+ ## What you can build
+
+ Here are some of the ways you can use Mastra:
+
+ <details>
+ **Embed agents in your product**
+
+ Add AI capabilities to your platform so your users can build or interact with agents.
+
+ Used by [Replit](https://mastra.ai/blog/replitagent3), [Fireworks](https://mastra.ai/blog/fireworks-xml-prompting), [Medusa](https://mastra.ai/blog/medusa-ecommerce)
+
+ </details>
+
+ <details>
+ **Customer-facing assistants**
+
+ Build agents that handle inquiries, schedule appointments, send reminders, and answer questions via chat, WhatsApp, or voice.
+
+ Used by [Vetnio](https://mastra.ai/blog/vetnio), [Lua](https://mastra.ai/blog/lua-scaling)
+
+ Templates: [Docs Chatbot](https://mastra.ai/templates/docs-chatbot), [Slack Agent](https://mastra.ai/templates/slack-agent)
+
+ </details>
 
  <details>
- **Conversational agents**
+ **Internal copilots**
+
+ Help employees work faster with AI that understands your domain—HR queries, clinical documentation, sales prep, or document generation.
 
- Customer support, onboarding, or internal query bots. Agents maintain context across sessions with thread-based message history and observational memory — background agents that compress conversation history into dense observation logs, keeping the context window small while preserving long-term recall. Stream responses token-by-token for responsive chat UIs. Attach tools so agents can look up orders, create tickets, or call APIs mid-conversation.
+ Used by [Factorial](https://mastra.ai/blog/factorial-case-study), [Counsel Health](https://mastra.ai/blog/counsel-health), [Cedar](https://mastra.ai/blog/cedar-case-study), [SoftBank](https://mastra.ai/blog/softbank-productivity-mastra-2025-08-20)
+
+ Templates: [Chat with PDF](https://mastra.ai/templates/chat-with-pdf), [Google Sheet Analysis](https://mastra.ai/templates/google-sheets-analysis)
 
  </details>
 
  <details>
- **Domain-specific copilots**
+ **Data analysis agents**
+
+ Let users query databases and dashboards in natural language. Connect to your data sources and return answers, charts, or reports.
 
- Assistants for coding, legal, finance, research, or creative work. Ground agents in your own data with a full RAG pipeline — chunking, embedding, vector storage across 12+ providers, metadata filtering, and re-ranking. Customize behavior with dynamic instructions that adapt per user or request. Connect to external services through typed tools and MCP servers. Add voice interaction with 12+ speech providers, and measure output quality with built-in evaluation scorers.
+ Used by [Index](https://mastra.ai/blog/index-case-study), [PLAID Japan](https://mastra.ai/blog/plaid-jpn-gcp-agents)
+
+ Templates: [Chat with Database](https://mastra.ai/templates/text-to-sql), [CSV to Questions](https://mastra.ai/templates/csv-to-questions)
 
  </details>
 
  <details>
- **Workflow automations**
+ **Content automation**
+
+ Generate, transform, and manage structured content at scale—whether for a CMS, knowledge base, or documentation system.
+
+ Used by [Sanity](https://mastra.ai/blog/sanity)
 
- Multi-step processes that trigger, route, and complete tasks. Define type-safe steps with Zod-validated inputs and outputs, then compose them with sequential chaining, parallel fan-out, conditional branching, loops, and iteration with concurrency control. Suspend workflows mid-execution to wait for human approval or external events, then resume from where they left off. Nest workflows inside other workflows for reusable sub-pipelines.
+ Templates: [Chat with YouTube](https://mastra.ai/templates/chat-with-youtube), [Flash Cards from PDF](https://mastra.ai/templates/flash-cards-from-pdf)
 
  </details>
 
  <details>
- **Decision-support tools**
+ **DevOps & engineering automation**
 
- Systems that analyze data and provide actionable recommendations. Compose multiple tools so agents can query databases, call APIs, and run analysis functions in a single interaction. Use structured output with Zod schemas to return validated, typed results. Coordinate specialist agents through supervisor patterns, with task-completion scoring to verify recommendation quality before finalizing. Add human-in-the-loop gates for high-stakes decisions.
+ Automate deployments, debug production issues, manage infrastructure, and handle on-call workflows.
+
+ Used by [StarSling](https://mastra.ai/blog/starsling)
+
+ Templates: [GitHub PR Code Review](https://mastra.ai/templates/github-pr-code-review-agent), [Browser Agent](https://mastra.ai/templates/browsing-agent)
 
  </details>
 
  <details>
- **AI-powered applications**
+ **Sales & GTM workflows**
+
+ Turn customer conversations into structured tasks, generate investment memos, or automate outreach sequences.
 
- Products that combine language understanding, reasoning, and action. Orchestrate multiple agents with supervisor delegation and multi-agent networks. Deploy to any Node.js runtime, cloud provider, or framework — Vercel, Cloudflare, Next.js, Astro, and more. Monitor production behavior with AI-specific tracing that captures token usage, latency, and decision paths. Choose from 10+ storage providers and configure composite backends optimized per workload. Test everything interactively in Studio before shipping.
+ Used by [Kestral](https://mastra.ai/blog/kestral), [Orange Collective](https://mastra.ai/blog/orange-collective-vc-operating-system), [WorkOS](https://mastra.ai/blog/workos-teaching-mastra)
+
+ Templates: [Customer Feedback Summarization](https://mastra.ai/templates/customer-feedback-summarization)
 
  </details>
 
- Browse [templates](https://mastra.ai/templates) from Mastra and the community to see working examples of these use cases.
+ Browse [templates](https://mastra.ai/templates) for working examples.
+
+ ## Not ready to build yet?
+
+ Watch this quick introduction:
 
  [YouTube video player](https://www.youtube-nocookie.com/embed/1qnmnRICX50)
@@ -6,7 +6,7 @@ Deploy your Mastra application to production and expose your agents, tools, and
 
  ## Enable deployments
 
- After [setting up your project](https://mastra.ai/docs/mastra-cloud/setup), click **Deployment** in the sidebar and select **Enable Deployments**.
+ After [setting up your project](https://mastra.ai/docs/mastra-cloud/setup), select **Deployment** in the sidebar, then select **Enable Deployments**.
 
  Once enabled, your project automatically builds and deploys. Future pushes to your main branch trigger automatic redeployments.
 
@@ -12,7 +12,7 @@ See the [Studio documentation](https://mastra.ai/docs/getting-started/studio) fo
 
  ## Sharing access
 
- To invite team members, go to [Mastra Cloud](https://cloud.mastra.ai), click **Team Settings**, then **Members** to add them. Once invited, they can sign in and access your project's Studio.
+ To invite team members, go to [Mastra Cloud](https://cloud.mastra.ai), select **Team Settings**, then **Members** to add them. Once invited, they can sign in and access your project's Studio.
 
  ![Invite team members](/assets/images/mastra-cloud-invite-c91b5a0fff03de5ecba3e0162484c819.png)
 
@@ -92,4 +92,4 @@ const tools = await mcp.listTools()
  const toolsets = await mcp.listToolsets()
  ```
 
- Note: If you published without an organization scope, the `args` might just be `["-y", "your-package-name@latest"]`.
+ Note: If you published without an organization scope, the `args` might be `["-y", "your-package-name@latest"]`.
@@ -161,7 +161,7 @@ const agent = new Agent({
 
  ## Manual Control and Deduplication
 
- If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** automatically add it. This gives you full control over processor ordering:
+ If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra **won't** automatically add it. This gives you full control over processor ordering:
 
  ```typescript
  import { Agent } from '@mastra/core/agent'
@@ -98,7 +98,7 @@ await agent.stream('Hello', {
 
  > **Info:** Threads and messages are created automatically when you call `agent.generate()` or `agent.stream()`, but you can also create them manually with [`createThread()`](https://mastra.ai/reference/memory/createThread) and [`saveMessages()`](https://mastra.ai/reference/memory/memory-class).
 
- There are two ways to use this history:
+ You can use this history in two ways:
 
  - **Automatic inclusion** - Mastra automatically fetches and includes recent messages in the context window. By default, it includes the last 10 messages, keeping agents grounded in the conversation. You can adjust this number with `lastMessages`, but in most cases you don't need to think about it.
  - [**Manual querying**](#querying) - For more control, use the `recall()` function to query threads and messages directly. This lets you choose exactly which memories are included in the context window, or fetch messages to render conversation history in your UI.
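
As a sketch of the automatic path, widening the history window is a one-line config change. The `lastMessages` option is the one named above; treat the exact `options` shape as an assumption and verify it against the Memory reference for your version:

```typescript
import { Memory } from '@mastra/memory'

// Sketch: widen the automatic history window from the default 10
// messages to 20. The `options.lastMessages` shape is an assumption —
// check the Memory reference for your version.
const memory = new Memory({
  options: {
    lastMessages: 20,
  },
})
```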
@@ -118,7 +118,7 @@ The `Memory` instance gives you access to functions for listing threads, recalli
 
  Use these methods to fetch threads and messages for displaying conversation history in your UI or for custom memory retrieval logic.
 
- > **Warning:** The memory system does not enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.
+ > **Warning:** The memory system doesn't enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.
122
123
124
@@ -61,6 +61,10 @@ OM solves both problems by compressing old context into dense observations.
 
  When message history tokens exceed a threshold (default: 30,000), the Observer creates observations — concise notes about what happened:
 
+ Image parts contribute to this threshold with model-aware estimates, so multimodal conversations trigger observation at the right time. The same applies to image-like `file` parts when a transport normalizes an uploaded image as a file instead of an image part. For example, OpenAI image detail settings can materially change when OM decides to observe.
+
+ The Observer can also see attachments in the history it reviews. OM keeps readable placeholders like `[Image #1: reference-board.png]` or `[File #1: floorplan.pdf]` in the transcript for readability, and forwards the actual attachment parts alongside the text. Image-like `file` parts are upgraded to image inputs for the Observer when possible, while non-image attachments are forwarded as file parts with normalized token counting. This applies to both normal thread observation and batched resource-scope observation.
+
  ```text
  Date: 2026-01-15
  - 🔴 12:10 User is building a Next.js app with Supabase auth, due in 1 week (meaning January 22nd 2026)
@@ -177,7 +181,7 @@ OM caches tiktoken part estimates in message metadata to reduce repeat counting
  - Per-part estimates are stored on `part.providerMetadata.mastra` and reused on subsequent passes when the cache version/tokenizer source matches.
  - For string-only message content (without parts), OM uses a message-level metadata fallback cache.
  - Message and conversation overhead are still recalculated on every pass. The cache only stores payload estimates, so counting semantics stay the same.
- - `data-*` and `reasoning` parts are still skipped and are not cached.
+ - `data-*` and `reasoning` parts are still skipped and aren't cached.
 
  ## Async Buffering
 
@@ -224,7 +228,7 @@ const memory = new Memory({
 
  Setting `bufferTokens: false` disables both observation and reflection async buffering. See [async buffering configuration](https://mastra.ai/reference/memory/observational-memory) for the full API.
 
- > **Note:** Async buffering is not supported with `scope: 'resource'`. It is automatically disabled in resource scope.
+ > **Note:** Async buffering isn't supported with `scope: 'resource'`. It's automatically disabled in resource scope.
 
  ## Migrating existing threads
 
@@ -35,7 +35,7 @@ const agent = new Agent({
 
  ## Using the recall() Method
 
- While `listMessages` retrieves messages by thread ID with basic pagination, [`recall()`](https://mastra.ai/reference/memory/recall) adds support for **semantic search**. When you need to find messages by meaning rather than just recency, use `recall()` with a `vectorSearchString`:
+ While `listMessages` retrieves messages by thread ID with basic pagination, [`recall()`](https://mastra.ai/reference/memory/recall) adds support for **semantic search**. When you need to find messages by meaning rather than recency, use `recall()` with a `vectorSearchString`:
 
  ```typescript
  const memory = await agent.getMemory()
@@ -264,7 +264,7 @@ For detailed information about index configuration options and performance tunin
 
  ## Disabling
 
- There is a performance impact to using semantic recall. New messages are converted into embeddings and used to query a vector database before new messages are sent to the LLM.
+ Semantic recall has a performance impact. New messages are converted into embeddings and used to query a vector database before new messages are sent to the LLM.
 
  Semantic recall is enabled by default but can be disabled when not needed:
 
@@ -170,7 +170,7 @@ export const agent = new Agent({
  })
  ```
 
- Title generation runs asynchronously after the agent responds and does not affect response time.
+ Title generation runs asynchronously after the agent responds and doesn't affect response time.
 
  To optimize cost or behavior, provide a smaller [`model`](https://mastra.ai/models) and custom `instructions`:
 
@@ -170,11 +170,11 @@ const memory = new Memory({
 
  ## Designing Effective Templates
 
- A well-structured template keeps the information easy for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.
+ A well-structured template keeps the information straightforward for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.
 
- - **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example `## Personal Info` or `- Name:`) so updates are easy to read and less likely to be truncated.
+ - **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example `## Personal Info` or `- Name:`) so updates are readable and less likely to be truncated.
  - **Use consistent casing.** Inconsistent capitalization (`Timezone:` vs `timezone:`) can cause messy updates. Stick to Title Case or lower case for headings and bullet labels.
- - **Keep placeholder text simple.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM fill in the correct spots.
+ - **Keep placeholder text minimal.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM fill in the correct spots.
  - **Abbreviate very long values.** If you only need a short form, include guidance like `- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
  - **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of the template directly in the agent's `instructions` field.
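
Putting those guidelines together, a compact template might look like the following. The `workingMemory` config shape here is an assumption; verify it against the working memory reference for your version:

```typescript
import { Memory } from '@mastra/memory'

// Sketch: short labels, consistent casing, simple placeholders.
// Assumed config shape — confirm against the working memory reference.
const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `## Personal Info
- Name: [First name or nickname]
- Timezone: [e.g., CET]
- Preferred Tone: [e.g., Formal]
`,
    },
  },
})
```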
 
@@ -263,13 +263,13 @@ Schema-based working memory uses **merge semantics**, meaning the agent only nee
 
  - **Object fields are deep merged:** Only provided fields are updated; others remain unchanged
  - **Set a field to `null` to delete it:** This explicitly removes the field from memory
- - **Arrays are replaced entirely:** When an array field is provided, it replaces the existing array (arrays are not merged element-by-element)
+ - **Arrays are replaced entirely:** When an array field is provided, it replaces the existing array (arrays aren't merged element-by-element)
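
As an illustration only (not Mastra's actual implementation), the merge rules above can be sketched as a plain function:

```typescript
type Json = Record<string, unknown>

// Illustrative merge: deep-merges objects, deletes keys set to null,
// and replaces arrays wholesale (no element-by-element merging).
function mergeWorkingMemory(current: Json, update: Json): Json {
  const result: Json = { ...current }
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete result[key] // null explicitly removes the field
    } else if (
      typeof value === 'object' &&
      !Array.isArray(value) &&
      typeof result[key] === 'object' &&
      result[key] !== null &&
      !Array.isArray(result[key])
    ) {
      // both sides are plain objects: recurse, preserving other fields
      result[key] = mergeWorkingMemory(result[key] as Json, value as Json)
    } else {
      result[key] = value // primitives and arrays are replaced entirely
    }
  }
  return result
}

// mergeWorkingMemory(
//   { user: { name: 'Sam', city: 'Berlin' }, tags: ['a'] },
//   { user: { city: null }, tags: ['b', 'c'] },
// )
// → { user: { name: 'Sam' }, tags: ['b', 'c'] }
```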
 
  ## Choosing Between Template and Schema
 
  - Use a **template** (Markdown) if you want the agent to maintain memory as a free-form text block, such as a user profile or scratchpad. Templates use **replace semantics** — the agent must provide the complete memory content on each update.
  - Use a **schema** if you need structured, type-safe data that can be validated and programmatically accessed as JSON. Schemas use **merge semantics** — the agent only provides fields to update, and existing fields are preserved.
- - Only one mode can be active at a time: setting both `template` and `schema` is not supported.
+ - Only one mode can be active at a time: setting both `template` and `schema` isn't supported.
 
  ## Example: Multi-step Retention
 
@@ -301,7 +301,7 @@ Below is a simplified view of how the `User Profile` template updates across a s
 
  The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information again because it has been stored in working memory.
 
- If your agent is not properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
+ If your agent isn't properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
 
  ## Setting Initial Working Memory