@mastra/mcp-docs-server 0.0.6 → 0.0.7

This diff shows the changes between publicly released versions of this package, as they appear in their respective public registries, and is provided for informational purposes only.
Files changed (66)
  1. package/.docs/organized/changelogs/%40mastra%2Fastra.md +45 -45
  2. package/.docs/organized/changelogs/%40mastra%2Fchroma.md +45 -45
  3. package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +46 -0
  4. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +51 -51
  5. package/.docs/organized/changelogs/%40mastra%2Fcloudflare.md +44 -0
  6. package/.docs/organized/changelogs/%40mastra%2Fcore.md +45 -45
  7. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloudflare.md +58 -58
  8. package/.docs/organized/changelogs/%40mastra%2Fdeployer-netlify.md +57 -57
  9. package/.docs/organized/changelogs/%40mastra%2Fdeployer-vercel.md +57 -57
  10. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +69 -69
  11. package/.docs/organized/changelogs/%40mastra%2Fevals.md +45 -45
  12. package/.docs/organized/changelogs/%40mastra%2Ffirecrawl.md +47 -47
  13. package/.docs/organized/changelogs/%40mastra%2Fgithub.md +44 -44
  14. package/.docs/organized/changelogs/%40mastra%2Floggers.md +44 -44
  15. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +44 -0
  16. package/.docs/organized/changelogs/%40mastra%2Fmcp.md +44 -44
  17. package/.docs/organized/changelogs/%40mastra%2Fmem0.md +43 -0
  18. package/.docs/organized/changelogs/%40mastra%2Fmemory.md +50 -50
  19. package/.docs/organized/changelogs/%40mastra%2Fpg.md +52 -52
  20. package/.docs/organized/changelogs/%40mastra%2Fpinecone.md +45 -45
  21. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +75 -75
  22. package/.docs/organized/changelogs/%40mastra%2Fqdrant.md +45 -45
  23. package/.docs/organized/changelogs/%40mastra%2Frag.md +46 -46
  24. package/.docs/organized/changelogs/%40mastra%2Fragie.md +44 -44
  25. package/.docs/organized/changelogs/%40mastra%2Fserver.md +50 -50
  26. package/.docs/organized/changelogs/%40mastra%2Fspeech-azure.md +44 -44
  27. package/.docs/organized/changelogs/%40mastra%2Fspeech-deepgram.md +44 -44
  28. package/.docs/organized/changelogs/%40mastra%2Fspeech-elevenlabs.md +44 -44
  29. package/.docs/organized/changelogs/%40mastra%2Fspeech-google.md +44 -44
  30. package/.docs/organized/changelogs/%40mastra%2Fspeech-ibm.md +44 -44
  31. package/.docs/organized/changelogs/%40mastra%2Fspeech-murf.md +44 -44
  32. package/.docs/organized/changelogs/%40mastra%2Fspeech-openai.md +44 -44
  33. package/.docs/organized/changelogs/%40mastra%2Fspeech-playai.md +44 -44
  34. package/.docs/organized/changelogs/%40mastra%2Fspeech-replicate.md +44 -44
  35. package/.docs/organized/changelogs/%40mastra%2Fspeech-speechify.md +44 -44
  36. package/.docs/organized/changelogs/%40mastra%2Fturbopuffer.md +44 -16
  37. package/.docs/organized/changelogs/%40mastra%2Fupstash.md +48 -48
  38. package/.docs/organized/changelogs/%40mastra%2Fvectorize.md +46 -46
  39. package/.docs/organized/changelogs/%40mastra%2Fvoice-azure.md +44 -0
  40. package/.docs/organized/changelogs/%40mastra%2Fvoice-cloudflare.md +44 -0
  41. package/.docs/organized/changelogs/%40mastra%2Fvoice-deepgram.md +45 -45
  42. package/.docs/organized/changelogs/%40mastra%2Fvoice-elevenlabs.md +45 -45
  43. package/.docs/organized/changelogs/%40mastra%2Fvoice-google.md +44 -44
  44. package/.docs/organized/changelogs/%40mastra%2Fvoice-murf.md +45 -45
  45. package/.docs/organized/changelogs/%40mastra%2Fvoice-openai-realtime.md +44 -0
  46. package/.docs/organized/changelogs/%40mastra%2Fvoice-openai.md +45 -45
  47. package/.docs/organized/changelogs/%40mastra%2Fvoice-playai.md +45 -45
  48. package/.docs/organized/changelogs/%40mastra%2Fvoice-sarvam.md +44 -0
  49. package/.docs/organized/changelogs/%40mastra%2Fvoice-speechify.md +45 -45
  50. package/.docs/organized/changelogs/create-mastra.md +37 -37
  51. package/.docs/organized/changelogs/mastra.md +92 -92
  52. package/.docs/organized/code-examples/ai-sdk-useChat.md +1 -1
  53. package/.docs/organized/code-examples/memory-todo-agent.md +1 -1
  54. package/.docs/raw/agents/agent-memory.mdx +34 -585
  55. package/.docs/raw/deployment/server.mdx +19 -0
  56. package/.docs/raw/memory/memory-processors.mdx +131 -0
  57. package/.docs/raw/memory/overview.mdx +120 -0
  58. package/.docs/raw/memory/semantic-recall.mdx +123 -0
  59. package/.docs/raw/memory/working-memory.mdx +88 -0
  60. package/.docs/raw/reference/memory/Memory.mdx +52 -30
  61. package/.docs/raw/reference/memory/createThread.mdx +3 -1
  62. package/.docs/raw/reference/memory/getThreadById.mdx +4 -1
  63. package/.docs/raw/reference/memory/getThreadsByResourceId.mdx +4 -1
  64. package/.docs/raw/reference/memory/query.mdx +4 -1
  65. package/package.json +3 -3
  66. package/.docs/raw/reference/memory/memory-processors.mdx +0 -229
@@ -5,613 +5,62 @@ description: Documentation on how agents in Mastra use memory to store conversat
 
  # Agent Memory
 
- Agents in Mastra have a sophisticated memory system that stores conversation history and contextual information. This memory system supports both traditional message storage and vector-based semantic search, enabling agents to maintain state across interactions and retrieve relevant historical context.
+ Agents in Mastra can leverage a powerful memory system to store conversation history, recall relevant information, and maintain persistent context across interactions. This allows agents to have more natural, stateful conversations.
 
- ## Threads and Resources
+ ## Enabling Memory for an Agent
 
- In Mastra, you can organize conversations by a `thread_id`. This allows the system to maintain context and retrieve historical messages that belong to the same discussion.
+ To enable memory, simply instantiate the `Memory` class and pass it to your agent's configuration. You also need to install the memory package:
 
- Mastra also supports the concept of a `resource_id`, which typically represents the user involved in the conversation, ensuring that the agent's memory and context are correctly associated with the right entity.
-
- This separation allows you to manage multiple conversations (threads) for a single user or even share conversation context across users if needed.
-
- ```typescript copy showLineNumbers
- import { Agent } from "@mastra/core/agent";
- import { openai } from "@ai-sdk/openai";
-
- const agent = new Agent({
- name: "Project Manager",
- instructions:
- "You are a project manager. You are responsible for managing the project and the team.",
- model: openai("gpt-4o-mini"),
- });
-
- await agent.stream("When will the project be completed?", {
- threadId: "project_123",
- resourceId: "user_123",
- });
- ```
-
- ## Managing Conversation Context
-
- The key to getting good responses from LLMs is feeding them the right context.
-
- Mastra has a Memory API that stores and manages conversation history and contextual information. The Memory API uses a storage backend to persist conversation history and contextual information (more on this later).
-
- The Memory API uses two main mechanisms to maintain context in conversations, recent message history and semantic search.
-
- ### Recent Message History
-
- By default, Memory keeps track of the 40 most recent messages in a conversation. You can customize this with the `lastMessages` setting:
-
- ```typescript copy showLineNumbers
- const memory = new Memory({
- options: {
- lastMessages: 5, // Keep 5 most recent messages
- },
- });
-
- // When user asks this question, the agent will see the last 10 messages,
- await agent.stream("Can you summarize the search feature requirements?", {
- memoryOptions: {
- lastMessages: 10,
- },
- });
- ```
-
- ### Semantic Search
-
- Semantic search is enabled by default in Mastra. While FastEmbed (bge-small-en-v1.5) and LibSQL are included by default, you can use any embedder (like OpenAI or Cohere) and vector database (like PostgreSQL, Pinecone, or Chroma) that fits your needs.
-
- This allows your agent to find and recall relevant information from earlier in the conversation:
-
- ```typescript copy showLineNumbers
- const memory = new Memory({
- options: {
- semanticRecall: {
- topK: 10, // Include 10 most relevant past messages
- messageRange: 2, // Messages before and after each result
- },
- },
- });
-
- // Example: User asks about a past feature discussion
- await agent.stream("What did we decide about the search feature last week?", {
- memoryOptions: {
- lastMessages: 10,
- semanticRecall: {
- topK: 3,
- messageRange: 2,
- },
- },
- });
+ ```bash npm2yarn copy
+ npm install @mastra/memory
  ```
 
- When semantic search is used:
-
- 1. The message is converted to a vector embedding
- 2. Similar messages are found using vector similarity search
- 3. Surrounding context is included based on `messageRange`
- 4. All relevant context is provided to the agent
-
- You can also customize the vector database and embedder:
-
- ```typescript copy showLineNumbers
- import { openai } from "@ai-sdk/openai";
- import { PgVector } from "@mastra/pg";
-
- const memory = new Memory({
- // Use a different vector database (libsql is default)
- vector: new PgVector("postgresql://user:pass@localhost:5432/db"),
- // Or a different embedder (fastembed is default)
- embedder: openai.embedding("text-embedding-3-small"),
- });
- ```
-
- ## Memory Configuration
-
- The Mastra memory system is highly configurable and supports multiple storage backends. By default, it uses LibSQL for storage and vector search, and FastEmbed for embeddings.
-
- ### Basic Configuration
-
- For most use cases, you can use the default configuration:
-
- ```typescript copy showLineNumbers
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
+ import { openai } from "@ai-sdk/openai";
 
+ // Basic memory setup
  const memory = new Memory();
- ```
-
- ### Custom Configuration
-
- For more control, you can customize the storage backend, vector database, and memory options:
-
- ```typescript copy showLineNumbers
- import { Memory } from "@mastra/memory";
- import { PostgresStore, PgVector } from "@mastra/pg";
-
- const memory = new Memory({
- storage: new PostgresStore({
- host: "localhost",
- port: 5432,
- user: "postgres",
- database: "postgres",
- password: "postgres",
- }),
- vector: new PgVector("postgresql://user:pass@localhost:5432/db"),
- options: {
- // Number of recent messages to include (false to disable)
- lastMessages: 10,
- // Configure vector-based semantic search (false to disable)
- semanticRecall: {
- topK: 3, // Number of semantic search results
- messageRange: 2, // Messages before and after each result
- },
- },
- });
- ```
-
- ### Overriding Memory Settings
-
- When you initialize a Mastra instance with memory configuration, all agents will automatically use these memory settings when you call their `stream()` or `generate()` methods. You can override these default settings for individual calls:
-
- ```typescript copy showLineNumbers
- // Use default memory settings from Memory configuration
- const response1 = await agent.generate("What were we discussing earlier?", {
- resourceId: "user_123",
- threadId: "thread_456",
- });
-
- // Override memory settings for this specific call
- const response2 = await agent.generate("What were we discussing earlier?", {
- resourceId: "user_123",
- threadId: "thread_456",
- memoryOptions: {
- lastMessages: 5, // Only inject 5 recent messages
- semanticRecall: {
- topK: 2, // Only get 2 semantic search results
- messageRange: 1, // Context around each result
- },
- },
- });
- ```
-
- ### Configuring Memory for Different Use Cases
-
- You can adjust memory settings based on your agent's needs:
-
- ```typescript copy showLineNumbers
- // Customer support agent with minimal context
- await agent.stream("What are your store hours?", {
- threadId,
- resourceId,
- memoryOptions: {
- lastMessages: 5, // Quick responses need minimal conversation history
- semanticRecall: false, // no need to search through earlier messages
- },
- });
-
- // Project management agent with extensive context
- await agent.stream("Update me on the project status", {
- threadId,
- resourceId,
- memoryOptions: {
- lastMessages: 50, // Maintain longer conversation history across project discussions
- semanticRecall: {
- topK: 5, // Find more relevant project details
- messageRange: 3, // Number of messages before and after each result
- },
- },
- });
- ```
-
- ## Storage Options
-
- Mastra currently supports several storage backends:
-
- ### LibSQL Storage
-
- ```typescript copy showLineNumbers
- import { LibSQLStore } from "@mastra/core/storage/libsql";
-
- const storage = new LibSQLStore({
- config: {
- url: "file:example.db",
- },
- });
- ```
-
- ### PostgreSQL Storage
-
- ```typescript copy showLineNumbers
- import { PostgresStore } from "@mastra/pg";
-
- const storage = new PostgresStore({
- host: "localhost",
- port: 5432,
- user: "postgres",
- database: "postgres",
- password: "postgres",
- });
- ```
-
- ### Upstash KV Storage
-
- ```typescript copy showLineNumbers
- import { UpstashStore } from "@mastra/upstash";
-
- const storage = new UpstashStore({
- url: "http://localhost:8089",
- token: "your_token",
- });
- ```
-
- ## Vector Search
-
- Mastra supports semantic search through vector embeddings. When configured with a vector store, agents can find relevant historical messages based on semantic similarity. To enable vector search:
-
- 1. Configure a vector store (currently supports PostgreSQL):
-
- ```typescript copy showLineNumbers
- import { PgVector } from "@mastra/pg";
-
- const vector = new PgVector(connectionString);
-
- const memory = new Memory({ vector });
- ```
-
- 2. Configure embedding options:
-
- ```typescript copy showLineNumbers
- const memory = new Memory({
- vector,
- embedder: openai.embedding("text-embedding-3-small"),
- });
- ```
-
- 3. Enable vector search in memory configuration options:
-
- ```typescript copy showLineNumbers
- const memory = new Memory({
- vector,
- embedder,
-
- options: {
- semanticRecall: {
- topK: 3, // Number of similar messages to find
- messageRange: 2, // Context around each result
- },
- },
- });
- ```
-
- ## Using Memory in Agents
-
- Once configured, the memory system is automatically used by agents. Here's how to use it:
-
- ```typescript copy showLineNumbers
- // Initialize Agent with memory
- const myAgent = new Agent({
- memory,
- // other agent options
- });
- // Add agent to mastra
- const mastra = new Mastra({
- agents: { myAgent },
- });
-
- // Memory is automatically used in agent interactions when resourceId and threadId are added
- const response = await myAgent.generate(
- "What were we discussing earlier about performance?",
- {
- resourceId: "user_123",
- threadId: "thread_456",
- },
- );
- ```
-
- The memory system will automatically:
-
- 1. Store all messages in the configured storage backend
- 2. Create vector embeddings for semantic search (if configured)
- 3. Inject relevant historical context into new conversations
- 4. Maintain conversation threads and context
-
- ## useChat()
-
- When using `useChat` from the AI SDK, you must send only the latest message or you will encounter message ordering bugs.
-
- If the `useChat()` implementation for your framework supports `experimental_prepareRequestBody`, you can do the following:
-
- ```ts
- const { messages } = useChat({
- api: "api/chat",
- experimental_prepareRequestBody({ messages, id }) {
- return { message: messages.at(-1), id };
- },
- });
- ```
-
- This will only ever send the latest message to the server.
- In your chat server endpoint you can then pass a threadId and resourceId when calling stream or generate and the agent will have access to the memory thread messages:
-
- ```ts
- const { messages } = await request.json();
-
- const stream = await myAgent.stream(messages, {
- threadId,
- resourceId,
- });
-
- return stream.toDataStreamResponse();
- ```
-
- If the `useChat()` for your framework (svelte for example) doesn't support `experimental_prepareRequestBody`, you can pick and use the last message before calling stream or generate:
-
- ```ts
- const { messages } = await request.json();
-
- const stream = await myAgent.stream([messages.at(-1)], {
- threadId,
- resourceId,
- });
-
- return stream.toDataStreamResponse();
- ```
-
- See the [AI SDK documentation on message persistence](https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-message-persistence) for more information.
-
- ## Manually Managing Threads
-
- While threads are automatically managed when using agent methods, you can also manually manage threads using the memory API directly. This is useful for advanced use cases like:
-
- Creating threads before starting conversations
- Managing thread metadata
- Explicitly saving or retrieving messages
- Cleaning up old threads
-
- Here's how to manually work with threads:
-
- ```typescript copy showLineNumbers
- import { Memory } from "@mastra/memory";
- import { PostgresStore } from "@mastra/pg";
-
- // Initialize memory
- const memory = new Memory({
- storage: new PostgresStore({
- host: "localhost",
- port: 5432,
- user: "postgres",
- database: "postgres",
- password: "postgres",
- }),
- });
-
- // Create a new thread
- const thread = await memory.createThread({
- resourceId: "user_123",
- title: "Project Discussion",
- metadata: {
- project: "mastra",
- topic: "architecture",
- },
- });
-
- // Manually save messages to a thread
- await memory.saveMessages({
- messages: [
- {
- id: "msg_1",
- threadId: thread.id,
- role: "user",
- content: "What's the project status?",
- createdAt: new Date(),
- type: "text",
- },
- ],
- });
-
- // Get messages from a thread with various filters
- const messages = await memory.query({
- threadId: thread.id,
- selectBy: {
- last: 10, // Get last 10 messages
- vectorSearchString: "performance", // Find messages about performance
- },
- });
-
- // Get thread by ID
- const existingThread = await memory.getThreadById({
- threadId: "thread_123",
- });
-
- // Get all threads for a resource
- const threads = await memory.getThreadsByResourceId({
- resourceId: "user_123",
- });
-
- // Update thread metadata
- await memory.updateThread({
- id: thread.id,
- title: "Updated Project Discussion",
- metadata: {
- status: "completed",
- },
- });
-
- // Delete a thread and all its messages
- await memory.deleteThread(thread.id);
- ```
-
- Note that in most cases, you won't need to manage threads manually since the agent's `generate()` and `stream()` methods handle thread management automatically. Manual thread management is primarily useful for advanced use cases or when you need more fine-grained control over the conversation history.
-
- ## Working Memory
-
- Working memory is a powerful feature that allows agents to maintain persistent information across conversations, even with minimal context. This is particularly useful for remembering user preferences, personal details, or any other contextual information that should persist throughout interactions.
-
- Inspired by the working memory concept from the MemGPT whitepaper, our implementation improves upon it in several key ways:
-
- No extra roundtrips or tool calls required
- Full support for streaming messages
- Seamless integration with the agent's natural response flow
-
- #### How It Works
-
- Working memory operates through a system of organized data and automatic updates:
-
- 1. **Template Structure**: Define what information should be remembered using Markdown. The Memory class comes with a comprehensive default template for user information, or you can create your own template to match your specific needs.
-
- 2. **Automatic Updates**: The Memory class injects special instructions into the agent's system prompt that tell it to:
-
- Store relevant information by including `<working_memory>...</working_memory>` tags in its responses
- Update information proactively when anything changes
- Maintain the Markdown structure while updating values
- Keep this process invisible to users
-
- 3. **Memory Management**: The system:
- Extracts working memory blocks from agent responses
- Stores them for future use
- Injects working memory into the system prompt on the next agent call
-
- The agent is instructed to be proactive about storing information - if there's any doubt about whether something might be useful later, it should be stored. This helps maintain conversation context even when using very small context windows.
-
- #### Basic Usage
-
- ```typescript copy showLineNumbers
- import { openai } from "@ai-sdk/openai";
 
  const agent = new Agent({
- name: "Customer Service",
- instructions:
- "You are a helpful customer service agent. Remember customer preferences and past interactions.",
- model: openai("gpt-4o-mini"),
-
- memory: new Memory({
- options: {
- workingMemory: {
- enabled: true, // enables working memory
- },
- lastMessages: 5, // Only keep recent context
- },
- }),
- });
- ```
-
- Working memory becomes particularly powerful when combined with specialized system prompts. For example, you could create a TODO list manager that maintains state even though it only has access to the previous message:
-
- ```typescript copy showLineNumbers
- const todoAgent = new Agent({
- name: "TODO Manager",
- instructions:
- "You are a TODO list manager. Update the todo list in working memory whenever tasks are added, completed, or modified.",
- model: openai("gpt-4o-mini"),
- memory: new Memory({
- options: {
- workingMemory: {
- enabled: true,
-
- // optional Markdown template to encourage agent to store specific kinds of info.
- // if you leave this out a default template will be used
- template: `# Todo List
- ## In Progress
-
- ## Pending
-
- ## Completed
-
- `,
- },
- lastMessages: 1, // Only keep the last message in context
- },
- }),
+ name: "MyMemoryAgent",
+ instructions: "You are a helpful assistant with memory.",
+ model: openai("gpt-4o"),
+ memory: memory, // Attach the memory instance
  });
  ```
 
- ### Handling Memory Updates in Streaming
-
- When an agent responds, it includes working memory updates directly in its response stream. These updates appear as tagged blocks in the text:
-
- ```typescript copy showLineNumbers
- // Raw agent response stream:
- Let me help you with that! <working_memory># User Information
- - **First Name**: John
- - **Last Name**:
- - **Location**:
- ...</working_memory> Based on your question...
- ```
-
- To prevent these memory blocks from being visible to users while still allowing the system to process them, use the `maskStreamTags` utility:
+ This basic setup uses default settings, including LibSQL for storage and FastEmbed for embeddings. For detailed setup instructions, see the [Memory Getting Started guide](/docs/memory/getting-started).
 
- ```typescript copy showLineNumbers
- import { maskStreamTags } from "@mastra/core/utils";
+ ## Using Memory in Agent Calls
 
- // Basic usage - just mask the working_memory tags
- for await (const chunk of maskStreamTags(
- response.textStream,
- "working_memory",
- )) {
- process.stdout.write(chunk);
- }
+ To utilize memory during interactions, you **must** provide `resourceId` and `threadId` when calling the agent's `stream()` or `generate()` methods.
 
- // Without masking: "Let me help you! <working_memory>...</working_memory> Based on..."
- // With masking: "Let me help you! Based on..."
- ```
+ - `resourceId`: Typically identifies the user or entity (e.g., `user_123`).
+ - `threadId`: Identifies a specific conversation thread (e.g., `support_chat_456`).
 
- You can also hook into memory update events:
+ ```typescript
+ // Example agent call using memory
+ await agent.stream("Remember my favorite color is blue.", {
+ resourceId: "user_alice",
+ threadId: "preferences_thread",
+ });
 
- ```typescript copy showLineNumbers
- const maskedStream = maskStreamTags(response.textStream, "working_memory", {
- onStart: () => showLoadingSpinner(),
- onEnd: () => hideLoadingSpinner(),
- onMask: (chunk) => console.debug(chunk),
+ // Later in the same thread...
+ const response = await agent.stream("What's my favorite color?", {
+ resourceId: "user_alice",
+ threadId: "preferences_thread",
  });
+ // Agent will use memory to recall the favorite color.
  ```
 
- The `maskStreamTags` utility:
-
- Removes content between specified XML tags in a streaming response
- Optionally provides lifecycle callbacks for memory updates
- Handles tags that might be split across stream chunks
-
- ### Accessing Thread and Resource IDs in Tools
+ These IDs ensure that conversation history and context are correctly stored and retrieved for the appropriate user and conversation.
 
- When creating custom tools, you can access the `threadId` and `resourceId` directly in the tool's execute function. These parameters are automatically provided by the Mastra runtime:
-
- ```typescript copy showLineNumbers
- import { Memory } from "@mastra/memory";
- const memory = new Memory();
+ ## Next Steps
 
- const myTool = createTool({
- id: "Thread Info Tool",
- inputSchema: z.object({
- fetchMessages: z.boolean().optional(),
- }),
- description: "A tool that demonstrates accessing thread and resource IDs",
- execute: async ({ threadId, resourceId, context }) => {
- // threadId and resourceId are directly available in the execute parameters
- console.log(`Executing in thread ${threadId}`);
-
- if (!context.fetchMessages) {
- return { threadId, resourceId };
- }
-
- const recentMessages = await memory.query({
- threadId,
- selectBy: { last: 5 },
- });
-
- return {
- threadId,
- resourceId,
- messageCount: recentMessages.length,
- };
- },
- });
- ```
+ Explore Mastra's memory capabilities further:
 
- This allows tools to:
+ - **[Getting Started](/docs/memory/getting-started)**: Learn how to add basic memory to your agents.
+ - **[Memory Features](/docs/memory/overview)**: Understand core features like Threads, Conversation History, Semantic Recall, Working Memory, Tools & Memory interaction, and Frontend Integration.
 
- - Access the current conversation context
- - Store or retrieve thread-specific data
- - Associate tool actions with specific users/resources
- - Maintain state across multiple tool invocations
@@ -65,6 +65,25 @@ export const mastra = new Mastra({
  });
  ```
 
+ ## Custom CORS Config
+
+ Mastra allows you to configure CORS (Cross-Origin Resource Sharing) settings for your server.
+
+ ```typescript copy showLineNumbers
+ import { Mastra } from '@mastra/core';
+
+ export const mastra = new Mastra({
+ server: {
+ cors: {
+ origin: ['https://example.com'], // Allow specific origins or '*' for all
+ allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
+ allowHeaders: ['Content-Type', 'Authorization'],
+ credentials: false,
+ }
+ }
+ });
+ ```
+
  ## Middleware
 
  Mastra allows you to configure custom middleware functions that will be applied to API routes. This is useful for adding authentication, logging, CORS, or other HTTP-level functionality to your API endpoints.
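As a rough illustration of what such a middleware entry might look like, here is a minimal sketch. It assumes the same `server` config object used in the CORS example above, a `middleware` array option, and Hono-style `(c, next)` handlers; these details are assumptions for illustration, not confirmed by this diff.

```typescript
import { Mastra } from '@mastra/core';

export const mastra = new Mastra({
  server: {
    // Assumed option name and shape: an array of middleware entries,
    // each with a handler and an optional path filter.
    middleware: [
      {
        handler: async (c, next) => {
          // Example: log every API request before it reaches the route handler.
          console.log(`${c.req.method} ${c.req.url}`);
          await next();
        },
        path: '/api/*',
      },
    ],
  },
});
```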