@mastra/pg 1.0.0-beta.10 → 1.0.0-beta.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,386 @@
1
+ > Learn how to configure working memory in Mastra to store persistent user data and preferences.
2
+
3
+ # Working Memory
4
+
5
+ While [message history](https://mastra.ai/docs/v1/memory/message-history) and [semantic recall](./semantic-recall) help agents remember conversations, working memory allows them to maintain persistent information about users across interactions.
6
+
7
+ Think of it as the agent's active thoughts or scratchpad – the key information they keep available about the user or task. It's similar to how a person would naturally remember someone's name, preferences, or important details during a conversation.
8
+
9
+ This is useful for maintaining ongoing state that's always relevant and should always be available to the agent.
10
+
11
+ Working memory can persist at two different scopes:
12
+
13
+ - **Resource-scoped** (default): Memory persists across all conversation threads for the same user
14
+ - **Thread-scoped**: Memory is isolated per conversation thread
15
+
16
+ **Important:** Switching between scopes means the agent won't see memory from the other scope: thread-scoped memory is completely separate from resource-scoped memory.
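The practical difference between the two scopes comes down to which identifier keys the stored memory. Here's a minimal sketch of that idea using a hypothetical in-memory `WorkingMemoryStore` class — Mastra's real storage adapters persist records in a database, but the keying logic is the point:

```typescript
// Hypothetical in-memory store illustrating how scope changes the lookup key.
type Scope = "resource" | "thread";

class WorkingMemoryStore {
  private store = new Map<string, string>();

  private key(scope: Scope, resourceId: string, threadId: string): string {
    // Resource scope ignores the thread, so every thread for the same
    // user resolves to the same memory record.
    return scope === "resource" ? `resource:${resourceId}` : `thread:${threadId}`;
  }

  save(scope: Scope, resourceId: string, threadId: string, memory: string): void {
    this.store.set(this.key(scope, resourceId, threadId), memory);
  }

  load(scope: Scope, resourceId: string, threadId: string): string | undefined {
    return this.store.get(this.key(scope, resourceId, threadId));
  }
}

const store = new WorkingMemoryStore();
store.save("resource", "user-alice-456", "thread-1", "- Name: Alice");

// Resource scope: a new thread for the same user still sees the memory.
console.log(store.load("resource", "user-alice-456", "thread-2")); // "- Name: Alice"

// Thread scope: the same lookup finds nothing, because nothing was
// saved under thread-2's own key.
console.log(store.load("thread", "user-alice-456", "thread-2")); // undefined
```

This is why switching an existing agent from one scope to the other makes previously stored memory invisible: the records are still there, but they live under the other key.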
17
+
18
+ ## Quick Start
19
+
20
+ Here's a minimal example of setting up an agent with working memory:
21
+
22
+ ```typescript {11-15}
23
+ import { Agent } from "@mastra/core/agent";
24
+ import { Memory } from "@mastra/memory";
25
+
26
+ // Create agent with working memory enabled
27
+ const agent = new Agent({
28
+ id: "personal-assistant",
29
+ name: "PersonalAssistant",
30
+ instructions: "You are a helpful personal assistant.",
31
+ model: "openai/gpt-5.1",
32
+ memory: new Memory({
33
+ options: {
34
+ workingMemory: {
35
+ enabled: true,
36
+ },
37
+ },
38
+ }),
39
+ });
40
+ ```
41
+
42
+ ## How it Works
43
+
44
+ Working memory is a block of Markdown text that the agent can update over time to store continuously relevant information:
45
+
46
+ <YouTube id="UMy_JHLf1n8" />
47
+
48
+ ## Memory Persistence Scopes
49
+
50
+ Working memory can operate in two different scopes, allowing you to choose how memory persists across conversations:
51
+
52
+ ### Resource-Scoped Memory (Default)
53
+
54
+ By default, working memory persists across all conversation threads for the same user (resourceId), enabling persistent user memory:
55
+
56
+ ```typescript
57
+ const memory = new Memory({
58
+ storage,
59
+ options: {
60
+ workingMemory: {
61
+ enabled: true,
62
+ scope: "resource", // Memory persists across all user threads
63
+ template: `# User Profile
64
+ - **Name**:
65
+ - **Location**:
66
+ - **Interests**:
67
+ - **Preferences**:
68
+ - **Long-term Goals**:
69
+ `,
70
+ },
71
+ },
72
+ });
73
+ ```
74
+
75
+ **Use cases:**
76
+
77
+ - Personal assistants that remember user preferences
78
+ - Customer service bots that maintain customer context
79
+ - Educational applications that track student progress
80
+
81
+ ### Usage with Agents
82
+
83
+ When using resource-scoped memory, make sure to pass the `resourceId` parameter:
84
+
85
+ ```typescript
86
+ // Resource-scoped memory requires resourceId
87
+ const response = await agent.generate("Hello!", {
88
+ threadId: "conversation-123",
89
+ resourceId: "user-alice-456", // Same user across different threads
90
+ });
91
+ ```
92
+
93
+ ### Thread-Scoped Memory
94
+
95
+ Thread-scoped memory restricts working memory to a single conversation thread. Each thread maintains its own separate memory:
96
+
97
+ ```typescript
98
+ const memory = new Memory({
99
+ storage,
100
+ options: {
101
+ workingMemory: {
102
+ enabled: true,
103
+ scope: "thread", // Memory is isolated per thread
104
+ template: `# User Profile
105
+ - **Name**:
106
+ - **Interests**:
107
+ - **Current Goal**:
108
+ `,
109
+ },
110
+ },
111
+ });
112
+ ```
113
+
114
+ **Use cases:**
115
+
116
+ - Different conversations about separate topics
117
+ - Temporary or session-specific information
118
+ - Workflows where each thread needs working memory but threads are ephemeral and not related to each other
119
+
120
+ ## Storage Adapter Support
121
+
122
+ Resource-scoped working memory requires specific storage adapters that support the `mastra_resources` table:
123
+
124
+ ### Supported Storage Adapters
125
+
126
+ - **libSQL** (`@mastra/libsql`)
127
+ - **PostgreSQL** (`@mastra/pg`)
128
+ - **Upstash** (`@mastra/upstash`)
129
+ - **MongoDB** (`@mastra/mongodb`)
130
+
131
+ ## Custom Templates
132
+
133
+ Templates guide the agent on what information to track and update in working memory. While a default template is used if none is provided, you'll typically want to define a custom template tailored to your agent's specific use case to ensure it remembers the most relevant information.
134
+
135
+ Here's an example of a custom template. With this template, the agent will store the user's name, location, timezone, and similar details as soon as the user sends a message containing any of that information:
136
+
137
+ ```typescript {5-28}
138
+ const memory = new Memory({
139
+ options: {
140
+ workingMemory: {
141
+ enabled: true,
142
+ template: `
143
+ # User Profile
144
+
145
+ ## Personal Info
146
+
147
+ - Name:
148
+ - Location:
149
+ - Timezone:
150
+
151
+ ## Preferences
152
+
153
+ - Communication Style: [e.g., Formal, Casual]
154
+ - Project Goal:
155
+ - Key Deadlines:
156
+ - [Deadline 1]: [Date]
157
+ - [Deadline 2]: [Date]
158
+
159
+ ## Session State
160
+
161
+ - Last Task Discussed:
162
+ - Open Questions:
163
+ - [Question 1]
164
+ - [Question 2]
165
+ `,
166
+ },
167
+ },
168
+ });
169
+ ```
170
+
171
+ ## Designing Effective Templates
172
+
173
+ A well-structured template keeps the information easy for the agent to parse and update. Treat the
174
+ template as a short form that you want the assistant to keep up to date.
175
+
176
+ - **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example
177
+ `## Personal Info` or `- Name:`) so updates are easy to read and less likely to be truncated.
178
+ - **Use consistent casing.** Inconsistent capitalization (`Timezone:` vs `timezone:`) can cause messy
179
+ updates. Stick to Title Case or lower case for headings and bullet labels.
180
+ - **Keep placeholder text simple.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM
181
+ fill in the correct spots.
182
+ - **Abbreviate very long values.** If you only need a short form, include guidance like
183
+ `- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
184
+ - **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of
185
+ the template directly in the agent's `instructions` field.
186
+
187
+ ### Alternative Template Styles
188
+
189
+ Use a shorter single block if you only need a few items:
190
+
191
+ ```typescript
192
+ const basicMemory = new Memory({
193
+ options: {
194
+ workingMemory: {
195
+ enabled: true,
196
+ template: `User Facts:\n- Name:\n- Favorite Color:\n- Current Topic:`,
197
+ },
198
+ },
199
+ });
200
+ ```
201
+
202
+ You can also store the key facts in a short paragraph format if you prefer a more narrative style:
203
+
204
+ ```typescript
205
+ const paragraphMemory = new Memory({
206
+ options: {
207
+ workingMemory: {
208
+ enabled: true,
209
+ template: `Important Details:\n\nKeep a short paragraph capturing the user's important facts (name, main goal, current task).`,
210
+ },
211
+ },
212
+ });
213
+ ```
214
+
215
+ ## Structured Working Memory
216
+
217
+ Working memory can also be defined using a structured schema instead of a Markdown template. This allows you to specify the exact fields and types that should be tracked, using a [Zod](https://zod.dev/) schema. When using a schema, the agent will see and update working memory as a JSON object matching your schema.
218
+
219
+ **Important:** You must specify either `template` or `schema`, but not both.
220
+
221
+ ### Example: Schema-Based Working Memory
222
+
223
+ ```typescript
224
+ import { z } from "zod";
225
+ import { Memory } from "@mastra/memory";
226
+
227
+ const userProfileSchema = z.object({
228
+ name: z.string().optional(),
229
+ location: z.string().optional(),
230
+ timezone: z.string().optional(),
231
+ preferences: z
232
+ .object({
233
+ communicationStyle: z.string().optional(),
234
+ projectGoal: z.string().optional(),
235
+ deadlines: z.array(z.string()).optional(),
236
+ })
237
+ .optional(),
238
+ });
239
+
240
+ const memory = new Memory({
241
+ options: {
242
+ workingMemory: {
243
+ enabled: true,
244
+ schema: userProfileSchema,
245
+ // template: ... (do not set)
246
+ },
247
+ },
248
+ });
249
+ ```
250
+
251
+ When a schema is provided, the agent receives the working memory as a JSON object. For example:
252
+
253
+ ```json
254
+ {
255
+ "name": "Sam",
256
+ "location": "Berlin",
257
+ "timezone": "CET",
258
+ "preferences": {
259
+ "communicationStyle": "Formal",
260
+ "projectGoal": "Launch MVP",
261
+ "deadlines": ["2025-07-01"]
262
+ }
263
+ }
264
+ ```
265
+
266
+ ### Merge Semantics for Schema-Based Memory
267
+
268
+ Schema-based working memory uses **merge semantics**, meaning the agent only needs to include fields it wants to add or update. Existing fields are preserved automatically.
269
+
270
+ - **Object fields are deep merged:** Only provided fields are updated; others remain unchanged
271
+ - **Set a field to `null` to delete it:** This explicitly removes the field from memory
272
+ - **Arrays are replaced entirely:** When an array field is provided, it replaces the existing array (arrays are not merged element-by-element)
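These merge rules can be sketched with a small standalone helper. The `mergeWorkingMemory` function below is a simplified model of the described behavior, not Mastra's internal implementation:

```typescript
// Simplified model of schema-based working-memory updates:
// objects deep-merge, null deletes a field, arrays are replaced wholesale.
type Json = string | number | boolean | null | Json[] | { [key: string]: Json };

function mergeWorkingMemory(
  current: { [key: string]: Json },
  update: { [key: string]: Json },
): { [key: string]: Json } {
  const result: { [key: string]: Json } = { ...current };
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete result[key]; // explicit null removes the field
    } else if (
      typeof value === "object" && !Array.isArray(value) &&
      typeof result[key] === "object" && result[key] !== null && !Array.isArray(result[key])
    ) {
      // nested objects are deep merged
      result[key] = mergeWorkingMemory(result[key] as { [key: string]: Json }, value);
    } else {
      result[key] = value; // primitives and arrays are replaced entirely
    }
  }
  return result;
}

const current = {
  name: "Sam",
  preferences: { communicationStyle: "Formal", deadlines: ["2025-07-01"] },
};

const updated = mergeWorkingMemory(current, {
  location: "Berlin",                          // added
  preferences: { deadlines: ["2025-08-15"] },  // array replaced, style preserved
});

console.log(JSON.stringify(updated));
// → {"name":"Sam","preferences":{"communicationStyle":"Formal","deadlines":["2025-08-15"]},"location":"Berlin"}
```

Note how `communicationStyle` survives the update even though the agent only supplied `deadlines`, while the `deadlines` array is swapped out in full.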
273
+
274
+ ## Choosing Between Template and Schema
275
+
276
+ - Use a **template** (Markdown) if you want the agent to maintain memory as a free-form text block, such as a user profile or scratchpad. Templates use **replace semantics** — the agent must provide the complete memory content on each update.
277
+ - Use a **schema** if you need structured, type-safe data that can be validated and programmatically accessed as JSON. Schemas use **merge semantics** — the agent only provides fields to update, and existing fields are preserved.
278
+ - Only one mode can be active at a time: setting both `template` and `schema` is not supported.
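To see why the distinction matters, here's a tiny sketch of the template mode's replace semantics using a plain string variable (a simplified model, not Mastra's code):

```typescript
// Replace semantics (template mode): each update overwrites the whole block.
let workingMemory = "# User Profile\n- Name: Sam\n- Location:";

// The agent must re-emit everything it wants to keep...
workingMemory = "# User Profile\n- Name: Sam\n- Location: Berlin";

// ...because a partial update silently drops whatever it omits.
workingMemory = "# User Profile\n- Location: Berlin";
console.log(workingMemory.includes("Name")); // false
```

With a schema, the same partial update would have preserved the omitted fields, which is why schemas are the safer choice when updates touch one field at a time.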
279
+
280
+ ## Example: Multi-step Retention
281
+
282
+ Below is a simplified view of how the `User Profile` template updates across a short user
283
+ conversation:
284
+
285
+ ```nohighlight
286
+ # User Profile
287
+
288
+ ## Personal Info
289
+
290
+ - Name:
291
+ - Location:
292
+ - Timezone:
293
+
294
+ --- After user says "My name is **Sam** and I'm from **Berlin**" ---
295
+
296
+ # User Profile
297
+ - Name: Sam
298
+ - Location: Berlin
299
+ - Timezone:
300
+
301
+ --- After user adds "By the way I'm normally in **CET**" ---
302
+
303
+ # User Profile
304
+ - Name: Sam
305
+ - Location: Berlin
306
+ - Timezone: CET
307
+ ```
308
+
309
+ The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information
310
+ again because it has been stored in working memory.
311
+
312
+ If your agent is not properly updating working memory when you expect it to, you can add system
313
+ instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
314
+
315
+ ## Setting Initial Working Memory
316
+
317
+ While agents typically update working memory through the `updateWorkingMemory` tool, you can also set initial working memory programmatically when creating or updating threads. This is useful for injecting user data (like their name, preferences, or other details) that you want available to the agent without passing it in every request.
318
+
319
+ ### Setting Working Memory via Thread Metadata
320
+
321
+ When creating a thread, you can provide initial working memory through the metadata's `workingMemory` key:
322
+
323
+ ```typescript title="src/app/medical-consultation.ts"
324
+ // Create a thread with initial working memory
325
+ const thread = await memory.createThread({
326
+ threadId: "thread-123",
327
+ resourceId: "user-456",
328
+ title: "Medical Consultation",
329
+ metadata: {
330
+ workingMemory: `# Patient Profile
331
+ - Name: John Doe
332
+ - Blood Type: O+
333
+ - Allergies: Penicillin
334
+ - Current Medications: None
335
+ - Medical History: Hypertension (controlled)
336
+ `,
337
+ },
338
+ });
339
+
340
+ // The agent will now have access to this information in all messages
341
+ await agent.generate("What's my blood type?", {
342
+ threadId: thread.id,
343
+ resourceId: "user-456",
344
+ });
345
+ // Response: "Your blood type is O+."
346
+ ```
347
+
348
+ ### Updating Working Memory Programmatically
349
+
350
+ You can also update an existing thread's working memory:
351
+
352
+ ```typescript title="src/app/medical-consultation.ts"
353
+ // Update thread metadata to add/modify working memory
354
+ await memory.updateThread({
355
+ id: "thread-123",
356
+ title: thread.title,
357
+ metadata: {
358
+ ...thread.metadata,
359
+ workingMemory: `# Patient Profile
360
+ - Name: John Doe
361
+ - Blood Type: O+
362
+ - Allergies: Penicillin, Ibuprofen // Updated
363
+ - Current Medications: Lisinopril 10mg daily // Added
364
+ - Medical History: Hypertension (controlled)
365
+ `,
366
+ },
367
+ });
368
+ ```
369
+
370
+ ### Direct Memory Update
371
+
372
+ Alternatively, use the `updateWorkingMemory` method directly:
373
+
374
+ ```typescript title="src/app/medical-consultation.ts"
375
+ await memory.updateWorkingMemory({
376
+ threadId: "thread-123",
377
+ resourceId: "user-456", // Required for resource-scoped memory
378
+ workingMemory: "Updated memory content...",
379
+ });
380
+ ```
381
+
382
+ ## Examples
383
+
384
+ - [Working memory with template](https://github.com/mastra-ai/mastra/tree/main/examples/memory-with-template)
385
+ - [Working memory with schema](https://github.com/mastra-ai/mastra/tree/main/examples/memory-with-schema)
386
+ - [Per-resource working memory](https://github.com/mastra-ai/mastra/tree/main/examples/memory-per-resource-example) - Complete example showing resource-scoped memory persistence
@@ -0,0 +1,235 @@
1
+ > Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
2
+
3
+ # Semantic Recall
4
+
5
+ If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.
6
+
7
+ > **Watch 📹**
8
+
9
+ What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
10
+
11
+ ## How Semantic Recall Works
12
+
13
+ Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](./message-history).
14
+
15
+ It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.
16
+
17
+ ![Diagram showing Mastra Memory semantic recall](/img/semantic-recall.png)
18
+
19
+ When it's enabled, new messages are used to query a vector DB for semantically similar messages.
20
+
21
+ After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.
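Conceptually, the recall step is a nearest-neighbor search over stored message embeddings. The following self-contained sketch models it with hand-written vectors and a `recall` helper — real deployments use an embedding model and a vector store, so treat this purely as an illustration:

```typescript
// Toy model of the recall query: embed the new message, rank stored
// messages by cosine similarity, return the topK matches.
type StoredMessage = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function recall(query: number[], messages: StoredMessage[], topK: number): StoredMessage[] {
  return [...messages]
    .sort((m1, m2) => cosineSimilarity(query, m2.embedding) - cosineSimilarity(query, m1.embedding))
    .slice(0, topK);
}

// Embeddings are hand-written here; a real system would produce them
// with a model like text-embedding-3-small.
const history: StoredMessage[] = [
  { text: "My favorite city is Berlin", embedding: [0.9, 0.1, 0.0] },
  { text: "I prefer dark roast coffee", embedding: [0.1, 0.9, 0.1] },
  { text: "Berlin winters are cold", embedding: [0.8, 0.0, 0.2] },
];

// A query embedding close to the "Berlin" messages recalls them first.
const matches = recall([1, 0, 0], history, 2);
console.log(matches.map((m) => m.text));
// → ["My favorite city is Berlin", "Berlin winters are cold"]
```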
22
+
23
+ ## Quick Start
24
+
25
+ Semantic recall is enabled by default, so if you give your agent memory, it will be included:
26
+
27
+ ```typescript {9}
28
+ import { Agent } from "@mastra/core/agent";
29
+ import { Memory } from "@mastra/memory";
30
+
31
+ const agent = new Agent({
32
+ id: "support-agent",
33
+ name: "SupportAgent",
34
+ instructions: "You are a helpful support agent.",
35
+ model: "openai/gpt-5.1",
36
+ memory: new Memory(),
37
+ });
38
+ ```
39
+
40
+ ## Storage Configuration
41
+
42
+ Semantic recall relies on a [storage and vector db](https://mastra.ai/reference/v1/memory/memory-class) to store messages and their embeddings.
43
+
44
+ ```ts {8-16}
45
+ import { Memory } from "@mastra/memory";
46
+ import { Agent } from "@mastra/core/agent";
47
+ import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
48
+
49
+ const agent = new Agent({
50
+ memory: new Memory({
51
+ // this is the default storage db if omitted
52
+ storage: new LibSQLStore({
53
+ id: 'agent-storage',
54
+ url: "file:./local.db",
55
+ }),
56
+ // this is the default vector db if omitted
57
+ vector: new LibSQLVector({
58
+ id: 'agent-vector',
59
+ connectionUrl: "file:./local.db",
60
+ }),
61
+ }),
62
+ });
63
+ ```
64
+
65
+ Each vector store page below includes installation instructions, configuration parameters, and usage examples:
66
+
67
+ - [Astra](https://mastra.ai/reference/v1/vectors/astra)
68
+ - [Chroma](https://mastra.ai/reference/v1/vectors/chroma)
69
+ - [Cloudflare Vectorize](https://mastra.ai/reference/v1/vectors/vectorize)
70
+ - [Convex](https://mastra.ai/reference/v1/vectors/convex)
71
+ - [Couchbase](https://mastra.ai/reference/v1/vectors/couchbase)
72
+ - [DuckDB](https://mastra.ai/reference/v1/vectors/duckdb)
73
+ - [Elasticsearch](https://mastra.ai/reference/v1/vectors/elasticsearch)
74
+ - [LanceDB](https://mastra.ai/reference/v1/vectors/lance)
75
+ - [libSQL](https://mastra.ai/reference/v1/vectors/libsql)
76
+ - [MongoDB](https://mastra.ai/reference/v1/vectors/mongodb)
77
+ - [OpenSearch](https://mastra.ai/reference/v1/vectors/opensearch)
78
+ - [Pinecone](https://mastra.ai/reference/v1/vectors/pinecone)
79
+ - [PostgreSQL](https://mastra.ai/reference/v1/vectors/pg)
80
+ - [Qdrant](https://mastra.ai/reference/v1/vectors/qdrant)
81
+ - [S3 Vectors](https://mastra.ai/reference/v1/vectors/s3vectors)
82
+ - [Turbopuffer](https://mastra.ai/reference/v1/vectors/turbopuffer)
83
+ - [Upstash](https://mastra.ai/reference/v1/vectors/upstash)
84
+
85
+ ## Recall Configuration
86
+
87
+ The three main parameters that control semantic recall behavior are:
88
+
89
+ 1. **topK**: How many semantically similar messages to retrieve
90
+ 2. **messageRange**: How much surrounding context to include with each match
91
+ 3. **scope**: Whether to search within the current thread or across all threads owned by a resource (the default is resource scope).
92
+
93
+ ```typescript {5-7}
94
+ const agent = new Agent({
95
+ memory: new Memory({
96
+ options: {
97
+ semanticRecall: {
98
+ topK: 3, // Retrieve 3 most similar messages
99
+ messageRange: 2, // Include 2 messages before and after each match
100
+ scope: "resource", // Search across all threads for this user (default setting if omitted)
101
+ },
102
+ },
103
+ }),
104
+ });
105
+ ```
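The interplay of `topK` and `messageRange` can be illustrated with a small sketch: once the top matches are found, each match index is expanded into a window of neighboring messages. The `withContext` helper below is a hypothetical model of that expansion, not the library's implementation:

```typescript
// Given the indexes of matched messages, messageRange pulls in the
// neighbors on each side so matches arrive with conversational context.
function withContext(matchIndexes: number[], messageRange: number, total: number): number[] {
  const selected = new Set<number>();
  for (const idx of matchIndexes) {
    const start = Math.max(0, idx - messageRange);
    const end = Math.min(total - 1, idx + messageRange);
    for (let i = start; i <= end; i++) selected.add(i);
  }
  return [...selected].sort((a, b) => a - b);
}

// Two matches (messages 4 and 10) in a 20-message thread with messageRange: 2
console.log(withContext([4, 10], 2, 20));
// → [2, 3, 4, 5, 6, 8, 9, 10, 11, 12]
```

So with `topK: 3` and `messageRange: 2`, each retrieval can contribute up to five messages (the match plus two on either side), which is worth keeping in mind when budgeting context-window tokens.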
106
+
107
+ ## Embedder Configuration
108
+
109
+ Semantic recall relies on an [embedding model](https://mastra.ai/reference/v1/memory/memory-class) to convert messages into embeddings. Mastra supports embedding models through the model router using `provider/model` strings, or you can use any [embedding model](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings) compatible with the AI SDK.
110
+
111
+ #### Using the Model Router (Recommended)
112
+
113
+ The simplest way is to use a `provider/model` string with autocomplete support:
114
+
115
+ ```ts {7}
116
+ import { Memory } from "@mastra/memory";
117
+ import { Agent } from "@mastra/core/agent";
118
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
119
+
120
+ const agent = new Agent({
121
+ memory: new Memory({
122
+ embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
123
+ }),
124
+ });
125
+ ```
126
+
127
+ Supported embedding models:
128
+
129
+ - **OpenAI**: `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002`
130
+ - **Google**: `gemini-embedding-001`, `text-embedding-004`
131
+
132
+ The model router automatically handles API key detection from environment variables (`OPENAI_API_KEY`, `GOOGLE_GENERATIVE_AI_API_KEY`).
133
+
134
+ #### Using AI SDK Packages
135
+
136
+ You can also use AI SDK embedding models directly:
137
+
138
+ ```ts {3,7}
139
+ import { Memory } from "@mastra/memory";
140
+ import { Agent } from "@mastra/core/agent";
141
+ import { openai } from "@ai-sdk/openai";
142
+
143
+ const agent = new Agent({
144
+ memory: new Memory({
145
+ embedder: openai.embedding("text-embedding-3-small"),
146
+ }),
147
+ });
148
+ ```
149
+
150
+ #### Using FastEmbed (Local)
151
+
152
+ To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
153
+
154
+ ```bash npm2yarn
155
+ npm install @mastra/fastembed@beta
156
+ ```
157
+
158
+ Then configure it in your memory:
159
+
160
+ ```ts {3,7}
161
+ import { Memory } from "@mastra/memory";
162
+ import { Agent } from "@mastra/core/agent";
163
+ import { fastembed } from "@mastra/fastembed";
164
+
165
+ const agent = new Agent({
166
+ memory: new Memory({
167
+ embedder: fastembed,
168
+ }),
169
+ });
170
+ ```
171
+
172
+ ## PostgreSQL Index Optimization
173
+
174
+ When using PostgreSQL as your vector store, you can optimize semantic recall performance by configuring the vector index. This is particularly important for large-scale deployments with thousands of messages.
175
+
176
+ PostgreSQL supports both IVFFlat and HNSW indexes. By default, Mastra creates an IVFFlat index, but HNSW indexes typically provide better performance, especially with OpenAI embeddings, which use inner product distance.
177
+
178
+ ```typescript {18-23}
179
+ import { Memory } from "@mastra/memory";
180
+ import { PgStore, PgVector } from "@mastra/pg";
181
+
182
+ const agent = new Agent({
183
+ memory: new Memory({
184
+ storage: new PgStore({
185
+ id: 'agent-storage',
186
+ connectionString: process.env.DATABASE_URL,
187
+ }),
188
+ vector: new PgVector({
189
+ id: 'agent-vector',
190
+ connectionString: process.env.DATABASE_URL,
191
+ }),
192
+ options: {
193
+ semanticRecall: {
194
+ topK: 5,
195
+ messageRange: 2,
196
+ indexConfig: {
197
+ type: "hnsw", // Use HNSW for better performance
198
+ metric: "dotproduct", // Best for OpenAI embeddings
199
+ m: 16, // Number of bi-directional links (default: 16)
200
+ efConstruction: 64, // Size of candidate list during construction (default: 64)
201
+ },
202
+ },
203
+ },
204
+ }),
205
+ });
206
+ ```
207
+
208
+ For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/v1/vectors/pg#index-configuration-guide).
209
+
210
+ ## Disabling
211
+
212
+ There is a performance impact to using semantic recall. New messages are converted into embeddings and used to query a vector database before they are sent to the LLM.
213
+
214
+ Semantic recall is enabled by default but can be disabled when not needed:
215
+
216
+ ```typescript {4}
217
+ const agent = new Agent({
218
+ memory: new Memory({
219
+ options: {
220
+ semanticRecall: false,
221
+ },
222
+ }),
223
+ });
224
+ ```
225
+
226
+ You might want to disable semantic recall in scenarios like:
227
+
228
+ - When message history provides sufficient context for the current conversation.
229
+ - In performance-sensitive applications, like realtime two-way audio, where the added latency of creating embeddings and running vector queries is noticeable.
230
+
231
+ ## Viewing Recalled Messages
232
+
233
+ When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
234
+
235
+ For more info on viewing message traces, see [Viewing Retrieved Messages](./overview#viewing-retrieved-messages).