@mastra/memory 1.9.0-alpha.1 → 1.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (51)
  1. package/CHANGELOG.md +61 -0
  2. package/dist/{chunk-5SMKVGJP.js → chunk-JJBSFPC5.js} +315 -25
  3. package/dist/chunk-JJBSFPC5.js.map +1 -0
  4. package/dist/{chunk-AR52LM55.cjs → chunk-LVV2RT42.cjs} +327 -24
  5. package/dist/chunk-LVV2RT42.cjs.map +1 -0
  6. package/dist/docs/SKILL.md +5 -7
  7. package/dist/docs/assets/SOURCE_MAP.json +77 -27
  8. package/dist/docs/references/docs-agents-agent-approval.md +114 -193
  9. package/dist/docs/references/docs-agents-networks.md +88 -205
  10. package/dist/docs/references/docs-agents-supervisor-agents.md +24 -18
  11. package/dist/docs/references/docs-memory-observational-memory.md +30 -2
  12. package/dist/docs/references/docs-memory-overview.md +219 -24
  13. package/dist/docs/references/docs-memory-semantic-recall.md +1 -1
  14. package/dist/docs/references/docs-memory-storage.md +4 -4
  15. package/dist/docs/references/docs-memory-working-memory.md +1 -1
  16. package/dist/docs/references/reference-core-getMemory.md +1 -2
  17. package/dist/docs/references/reference-core-listMemory.md +1 -2
  18. package/dist/docs/references/reference-memory-cloneThread.md +1 -1
  19. package/dist/docs/references/reference-memory-observational-memory.md +39 -1
  20. package/dist/index.cjs +432 -11
  21. package/dist/index.cjs.map +1 -1
  22. package/dist/index.d.ts.map +1 -1
  23. package/dist/index.js +432 -10
  24. package/dist/index.js.map +1 -1
  25. package/dist/observational-memory-3XFCO6MX.js +3 -0
  26. package/dist/{observational-memory-5NFPG6M3.js.map → observational-memory-3XFCO6MX.js.map} +1 -1
  27. package/dist/observational-memory-MJJFU26W.cjs +108 -0
  28. package/dist/{observational-memory-NH7VDTXM.cjs.map → observational-memory-MJJFU26W.cjs.map} +1 -1
  29. package/dist/processors/index.cjs +56 -16
  30. package/dist/processors/index.js +1 -1
  31. package/dist/processors/observational-memory/anchor-ids.d.ts +4 -0
  32. package/dist/processors/observational-memory/anchor-ids.d.ts.map +1 -0
  33. package/dist/processors/observational-memory/index.d.ts +2 -0
  34. package/dist/processors/observational-memory/index.d.ts.map +1 -1
  35. package/dist/processors/observational-memory/observation-groups.d.ts +15 -0
  36. package/dist/processors/observational-memory/observation-groups.d.ts.map +1 -0
  37. package/dist/processors/observational-memory/observational-memory.d.ts +14 -0
  38. package/dist/processors/observational-memory/observational-memory.d.ts.map +1 -1
  39. package/dist/processors/observational-memory/observer-agent.d.ts.map +1 -1
  40. package/dist/processors/observational-memory/reflector-agent.d.ts +1 -1
  41. package/dist/processors/observational-memory/reflector-agent.d.ts.map +1 -1
  42. package/dist/processors/observational-memory/tool-result-helpers.d.ts.map +1 -1
  43. package/dist/tools/om-tools.d.ts +77 -0
  44. package/dist/tools/om-tools.d.ts.map +1 -0
  45. package/package.json +8 -8
  46. package/dist/chunk-5SMKVGJP.js.map +0 -1
  47. package/dist/chunk-AR52LM55.cjs.map +0 -1
  48. package/dist/docs/references/docs-agents-agent-memory.md +0 -209
  49. package/dist/docs/references/docs-agents-network-approval.md +0 -278
  50. package/dist/observational-memory-5NFPG6M3.js +0 -3
  51. package/dist/observational-memory-NH7VDTXM.cjs +0 -68
@@ -2,44 +2,239 @@
 
  Memory enables your agent to remember user messages, agent replies, and tool results across interactions, giving it the context it needs to stay consistent, maintain conversation flow, and produce better answers over time.
 
- Mastra supports four complementary memory types:
+ Mastra agents can be configured to store [message history](https://mastra.ai/docs/memory/message-history). Additionally, you can enable:
 
- - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
- - [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
- - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
- - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
+ - [Observational Memory](https://mastra.ai/docs/memory/observational-memory) (Recommended): Uses background agents to maintain a dense observation log that replaces raw message history as it grows. This keeps the context window small while preserving long-term memory.
+ - [Working memory](https://mastra.ai/docs/memory/working-memory): Stores persistent, structured user data such as names, preferences, and goals.
+ - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall): Retrieves relevant past messages based on semantic meaning rather than exact keywords.
 
  If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.
 
- ## Getting started
+ Memory results will be stored in one or more of your configured [storage providers](https://mastra.ai/docs/memory/storage).
 
- Choose a memory option to get started:
+ ## When to use memory
 
- - [Message history](https://mastra.ai/docs/memory/message-history)
- - [Observational memory](https://mastra.ai/docs/memory/observational-memory)
- - [Working memory](https://mastra.ai/docs/memory/working-memory)
- - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
+ Use memory when your agent needs to maintain multi-turn conversations that reference prior exchanges, recall user preferences or facts from earlier in a session, or build context over time within a conversation thread. Skip memory for single-turn requests where each interaction is independent.
 
- ## Storage
+ ## Quickstart
 
- Before enabling memory, you must first configure a storage adapter. Mastra supports several databases including PostgreSQL, MongoDB, libSQL, and [more](https://mastra.ai/docs/memory/storage).
+ 1. Install the `@mastra/memory` package.
 
- Storage can be configured at the [instance level](https://mastra.ai/docs/memory/storage) (shared across all agents) or at the [agent level](https://mastra.ai/docs/memory/storage) (dedicated per agent).
+ **npm**:
 
- For semantic recall, you can use a separate vector database like Pinecone alongside your primary storage.
+ ```bash
+ npm install @mastra/memory@latest
+ ```
 
- See the [Storage](https://mastra.ai/docs/memory/storage) documentation for configuration options, supported providers, and examples.
+ **pnpm**:
 
- ## Debugging memory
+ ```bash
+ pnpm add @mastra/memory@latest
+ ```
 
- When [tracing](https://mastra.ai/docs/observability/tracing/overview) is enabled, you can inspect exactly which messages the agent uses for context in each request. The trace output shows all memory included in the agent's context window - both recent message history and messages recalled via semantic recall.
+ **Yarn**:
 
- ![Trace output showing memory context included in an agent request](https://mastra.ai/_next/image?url=%2Ftracingafter.png\&w=1920\&q=75)
+ ```bash
+ yarn add @mastra/memory@latest
+ ```
 
- This visibility helps you understand why an agent made specific decisions and verify that memory retrieval is working as expected.
+ **Bun**:
 
- ## Next steps
+ ```bash
+ bun add @mastra/memory@latest
+ ```
 
- - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
- - Add [Message history](https://mastra.ai/docs/memory/message-history), [Observational memory](https://mastra.ai/docs/memory/observational-memory), [Working memory](https://mastra.ai/docs/memory/working-memory), or [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
- - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
+ 2. Memory **requires** a storage provider to persist message history, including user messages and agent responses.
+
+ For the purposes of this quickstart, use `@mastra/libsql`.
+
+ **npm**:
+
+ ```bash
+ npm install @mastra/libsql@latest
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/libsql@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/libsql@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/libsql@latest
+ ```
+
+ > **Note:** For more details on available providers and how storage works in Mastra, visit the [storage](https://mastra.ai/docs/memory/storage) documentation.
+
+ 3. Add the storage provider to your main Mastra instance to enable memory across all configured agents.
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { LibSQLStore } from '@mastra/libsql'
+
+ export const mastra = new Mastra({
+   storage: new LibSQLStore({
+     id: 'mastra-storage',
+     url: ':memory:',
+   }),
+ })
+ ```
+
+ 4. Create a `Memory` instance and pass it to the agent's `memory` option.
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: new Memory({
+     options: {
+       lastMessages: 20,
+     },
+   }),
+ })
+ ```
+
+ > **Note:** Visit [Memory Class](https://mastra.ai/reference/memory/memory-class) for a full list of configuration options.
+
+ 5. Call your agent, for example in [Mastra Studio](https://mastra.ai/docs/getting-started/studio). Inside Studio, start a new chat with your agent and take a look at the right sidebar. It'll now display various memory-related information.
+
+ ## Message history
+
+ Pass a `memory` object with `resource` and `thread` to track message history.
+
+ - `resource`: A stable identifier for the user or entity.
+ - `thread`: An ID that isolates a specific conversation or session.
+
+ ```typescript
+ const response = await memoryAgent.generate('Remember my favorite color is blue.', {
+   memory: {
+     resource: 'user-123',
+     thread: 'conversation-123',
+   },
+ })
+ ```
+
+ To recall information stored in memory, call the agent with the same `resource` and `thread` values used in the original conversation.
+
+ ```typescript
+ const response = await memoryAgent.generate("What's my favorite color?", {
+   memory: {
+     resource: 'user-123',
+     thread: 'conversation-123',
+   },
+ })
+
+ // Response: "Your favorite color is blue."
+ ```
+
+ > **Warning:** Each thread has an owner (`resourceId`) that can't be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.
+
+ To list all threads for a resource, or retrieve a specific thread, [use the memory API directly](https://mastra.ai/docs/memory/message-history).
+
+ ## Observational Memory
+
+ For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: new Memory({
+     options: {
+       observationalMemory: true,
+     },
+   }),
+ })
+ ```
+
+ > **Note:** See [Observational Memory](https://mastra.ai/docs/memory/observational-memory) for details on how observations and reflections work, and [the reference](https://mastra.ai/reference/memory/observational-memory) for all configuration options.
+
+ ## Memory in multi-agent systems
+
+ When a [supervisor agent](https://mastra.ai/docs/agents/supervisor-agents) delegates to a subagent, Mastra isolates subagent memory automatically. There is no flag to enable this; it happens on every delegation. Understanding how this scoping works lets you decide what stays private and what to share intentionally.
+
+ ### How delegation scopes memory
+
+ Each delegation creates a fresh `threadId` and a deterministic `resourceId` for the subagent:
+
+ - **Thread ID**: Unique per delegation. The subagent starts with a clean message history every time it's called.
+ - **Resource ID**: Derived as `{parentResourceId}-{agentName}`. Because the resource ID is stable across delegations, resource-scoped memory persists between calls. A subagent remembers facts from previous delegations by the same user.
+ - **Memory instance**: If a subagent has no memory configured, it inherits the supervisor's `Memory` instance. If the subagent defines its own, that takes precedence.
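The derivation rules above can be sketched as a small pure function. This is an illustrative model only: `subagentMemoryIds` is a hypothetical helper name, not part of the Mastra API.

```typescript
// Sketch of the per-delegation memory identifiers described above.
// `subagentMemoryIds` is a made-up helper, not a Mastra internal.
import { randomUUID } from 'node:crypto'

function subagentMemoryIds(parentResourceId: string, agentName: string) {
  return {
    // Fresh thread every delegation: the subagent starts with a clean history.
    threadId: randomUUID(),
    // Stable across delegations: resource-scoped memory persists per user+agent.
    resourceId: `${parentResourceId}-${agentName}`,
  }
}

const first = subagentMemoryIds('user-123', 'researcher')
const second = subagentMemoryIds('user-123', 'researcher')
// Same resourceId ('user-123-researcher') both times; threadId differs per call.
```

Because the resource ID embeds the agent name, a `researcher` and a `writer` subagent under the same supervisor always derive distinct resource IDs.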
+
+ The supervisor forwards its conversation context to the subagent so it has enough background to complete the task. Only the delegation prompt and the subagent's response are saved — the full parent conversation is not stored. You can control which messages reach the subagent with the [`messageFilter`](https://mastra.ai/docs/agents/supervisor-agents) callback.
+
+ > **Note:** Subagent resource IDs are always suffixed with the agent name (`{parentResourceId}-{agentName}`). Two different subagents under the same supervisor never share a resource ID through delegation.
+
+ To go beyond this default isolation, you can share memory between agents by passing matching identifiers when you call them directly.
+
+ ### Share memory between agents
+
+ When you call agents directly (outside the delegation flow), memory sharing is controlled by two identifiers: `resourceId` and `threadId`. Agents that use the same values read and write to the same data. This is useful when agents collaborate on a shared context — for example, a researcher that saves notes and a writer that reads them.
+
+ **Resource-scoped sharing** is the most common pattern. [Working memory](https://mastra.ai/docs/memory/working-memory) and [semantic recall](https://mastra.ai/docs/memory/semantic-recall) default to `scope: 'resource'`. If two agents share a `resourceId`, they share observations, working memory, and embeddings — even across different threads:
+
+ ```typescript
+ // Both agents share the same resource-scoped memory
+ await researcher.generate('Find information about quantum computing.', {
+   memory: { resource: 'project-42', thread: 'research-session' },
+ })
+
+ await writer.generate('Write a summary from the research notes.', {
+   memory: { resource: 'project-42', thread: 'writing-session' },
+ })
+ ```
+
+ Because both calls use `resource: 'project-42'`, the writer can access the researcher's observations, working memory, and semantic embeddings. Each agent still has its own thread, so message histories stay separate.
+
+ **Thread-scoped sharing** gives tighter coupling. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) uses `scope: 'thread'` by default. If two agents use the same `resource` _and_ `thread`, they share the full message history. Each agent sees every message the other has written. This is useful when agents need to build on each other's exact outputs.
+
+ ## Observability
+
+ Enable [Tracing](https://mastra.ai/docs/observability/tracing/overview) to monitor and debug memory in action. Traces show you exactly which messages and observations the agent included in its context for each request, helping you understand agent behavior and verify that memory retrieval is working as expected.
+
+ Open [Mastra Studio](https://mastra.ai/docs/getting-started/studio) and select the **Observability** tab in the sidebar. Open the trace of a recent agent request, then look for the spans of LLM calls.
+
+ ## Switch memory per request
+
+ Use [`RequestContext`](https://mastra.ai/docs/server/request-context) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.
+
+ ```typescript
+ export type UserTier = {
+   'user-tier': 'enterprise' | 'pro'
+ }
+
+ const premiumMemory = new Memory()
+ const standardMemory = new Memory()
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: ({ requestContext }) => {
+     const userTier = requestContext.get('user-tier') as UserTier['user-tier']
+
+     return userTier === 'enterprise' ? premiumMemory : standardMemory
+   },
+ })
+ ```
+
+ > **Note:** Visit [Request Context](https://mastra.ai/docs/server/request-context) for more information.
+
+ ## Related
+
+ - [`Memory` reference](https://mastra.ai/reference/memory/memory-class)
+ - [Tracing](https://mastra.ai/docs/observability/tracing/overview)
+ - [Request Context](https://mastra.ai/docs/server/request-context)
@@ -16,7 +16,7 @@ When it's enabled, new messages are used to query a vector DB for semantically s
 
  After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.
 
- ## Quick start
+ ## Quickstart
 
  Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
@@ -100,7 +100,7 @@ This is useful when different types of data have different performance or operat
 
  ### Agent-level storage
 
- Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
+ Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need to keep data separate or use different providers per agent.
 
  ```typescript
  import { Agent } from '@mastra/core/agent'
@@ -118,14 +118,14 @@ export const agent = new Agent({
  })
  ```
 
- > **Warning:** [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment) doesn't support agent-level storage.
+ > **Warning:** Agent-level storage isn't supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation doesn't apply if you bring your own database.
 
  ## Threads and resources
 
  Mastra organizes conversations using two identifiers:
 
- - **Thread** - a conversation session containing a sequence of messages.
- - **Resource** - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
+ - **Thread**: A conversation session containing a sequence of messages.
+ - **Resource**: The entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
 
  Both identifiers are required for agents to store information:
 
@@ -13,7 +13,7 @@ Working memory can persist at two different scopes:
 
  **Important:** Switching between scopes means the agent won't see memory from the other scope - thread-scoped memory is completely separate from resource-scoped memory.
 
- ## Quick start
+ ## Quickstart
 
  Here's a minimal example of setting up an agent with working memory:
 
@@ -46,5 +46,4 @@ const memory = mastra.getMemory('conversationMemory')
  ## Related
 
  - [Mastra.listMemory()](https://mastra.ai/reference/core/listMemory)
- - [Memory overview](https://mastra.ai/docs/memory/overview)
- - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
+ - [Memory overview](https://mastra.ai/docs/memory/overview)
@@ -52,5 +52,4 @@ console.log(Object.keys(allMemory)) // ["conversationMemory", "analyticsMemory"]
  ## Related
 
  - [Mastra.getMemory()](https://mastra.ai/reference/core/getMemory)
- - [Memory overview](https://mastra.ai/docs/memory/overview)
- - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
+ - [Memory overview](https://mastra.ai/docs/memory/overview)
@@ -127,7 +127,7 @@ const results = await memory.recall({
  })
  ```
 
- ## Observational memory
+ ## Observational Memory
 
  When [Observational Memory](https://mastra.ai/docs/memory/observational-memory) is enabled, `cloneThread()` automatically clones the OM records associated with the source thread. The behavior depends on the OM scope:
 
@@ -1,4 +1,4 @@
- # Observational memory
+ # Observational Memory
 
  **Added in:** `@mastra/memory@1.1.0`
 
@@ -38,6 +38,8 @@ OM performs thresholding with fast local token estimation. Text uses `tokenx`, a
 
  **shareTokenBudget** (`boolean`): Share the token budget between messages and observations. When enabled, the total budget is \`observation.messageTokens + reflection.observationTokens\`. Messages can use more space when observations are small, and vice versa. This maximizes context usage through flexible allocation. \`shareTokenBudget\` is not yet compatible with async buffering. You must set \`observation: { bufferTokens: false }\` when using this option (this is a temporary limitation). (Default: `false`)
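The pooled-budget arithmetic described above can be illustrated with a few numbers. The budget values below are made-up examples, not Mastra defaults.

```typescript
// With shareTokenBudget enabled, the two budgets pool into one total:
// observation.messageTokens + reflection.observationTokens.
const messageTokens = 30_000      // example observation.messageTokens
const observationTokens = 10_000  // example reflection.observationTokens

const totalBudget = messageTokens + observationTokens // pooled budget: 40_000

// If observations currently occupy only 4_000 tokens, messages can borrow
// the unused space instead of being capped at their own 30_000:
const observationsUsed = 4_000
const messageAllowance = totalBudget - observationsUsed // 36_000
```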
 
+ **retrieval** (`boolean`): \*\*Experimental.\*\* Enable retrieval-mode observation groups as durable pointers to raw message history. Retrieval mode is only active when \`scope\` is \`'thread'\`. If you set \`retrieval: true\` with \`scope: 'resource'\`, OM keeps resource-scoped memory behavior but skips retrieval-mode context and does not register the \`recall\` tool. (Default: `false`)
+
  **observation** (`ObservationalMemoryObservationConfig`): Configuration for the observation step. Controls when the Observer agent runs and how it behaves.
 
  **observation.model** (`string | LanguageModel | DynamicModel | ModelWithRetries[]`): Model for the Observer agent. Cannot be set if a top-level \`model\` is also provided. If neither this nor the top-level \`model\` is set, falls back to \`reflection.model\`.
@@ -574,6 +576,42 @@ The standalone `ObservationalMemory` class accepts all the same options as the `
 
  **obscureThreadIds** (`boolean`): When enabled, thread IDs are hashed before being included in observation context. This prevents the LLM from recognizing patterns in thread identifiers. Automatically enabled when using resource scope through the Memory class. (Default: `false`)
 
+ ## Recall tool
+
+ When `retrieval: true` is set with `scope: 'thread'`, OM registers a `recall` tool that the agent can call to page through the raw messages behind an observation group's `_range`. The tool is automatically added to the agent's tool list — no manual registration is needed.
+
+ ### Parameters
+
+ **cursor** (`string`): A message ID to anchor the recall query. Extract the start or end ID from an observation group range (e.g. from \`\_range: \\\`startId:endId\\\`\_\`, use either \`startId\` or \`endId\`). If a range string is passed directly, the tool returns a hint explaining how to extract the correct ID.
+
+ **page** (`number`): Pagination offset from the cursor. Positive values page forward (messages after the cursor), negative values page backward (messages before the cursor). \`0\` is treated as \`1\`. (Default: `1`)
+
+ **limit** (`number`): Maximum number of messages per page. (Default: `20`)
+
+ **detail** (`'low' | 'high'`): Controls how much content is shown per message part. \`'low'\` shows truncated text and tool names with positional indices (\`\[p0]\`, \`\[p1]\`). \`'high'\` shows full content including tool arguments and results, clamped to one part per call with continuation hints. (Default: `'low'`)
+
+ **partIndex** (`number`): Fetch a single message part at full detail by its positional index. Use this when a low-detail recall shows an interesting part at \`\[p1]\` — call again with \`partIndex: 1\` to see the full content without loading every part.
+
+ ### Returns
+
+ **messages** (`string`): Formatted message content. Format depends on the \`detail\` level.
+
+ **count** (`number`): Number of messages in this page.
+
+ **cursor** (`string`): The cursor message ID used for this query.
+
+ **page** (`number`): The page number returned.
+
+ **limit** (`number`): The limit used for this query.
+
+ **hasNextPage** (`boolean`): Whether more messages exist after this page.
+
+ **hasPrevPage** (`boolean`): Whether more messages exist before this page.
+
+ **truncated** (`boolean`): Present and \`true\` when the output was capped by the token budget. The agent can paginate or use \`partIndex\` to access remaining content.
+
+ **tokenOffset** (`number`): Approximate number of tokens that were trimmed when \`truncated\` is true.
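The cursor/page/limit semantics above can be modeled over an in-memory list of message IDs. This is a sketch for illustration only: the real tool queries storage, and `recallPage` is a hypothetical name, not the actual implementation.

```typescript
// Sketch of the recall tool's paging rules described above: positive pages
// move forward from the cursor, negative pages move backward, 0 acts as 1.
function recallPage(ids: string[], cursor: string, page = 1, limit = 20) {
  const at = ids.indexOf(cursor)
  if (at === -1) throw new Error(`unknown cursor: ${cursor}`)
  const p = page === 0 ? 1 : page // 0 is treated as 1
  let start: number
  let end: number
  if (p > 0) {
    // Page forward: messages after the cursor.
    start = at + 1 + (p - 1) * limit
    end = start + limit
  } else {
    // Page backward: messages before the cursor.
    end = at + (p + 1) * limit
    start = end - limit
  }
  const s = Math.max(0, start)
  const messages = ids.slice(s, Math.max(0, end))
  return {
    messages,
    count: messages.length,
    cursor,
    page: p,
    limit,
    hasNextPage: end < ids.length,
    hasPrevPage: s > 0,
  }
}
```

For example, with nine messages `m1`..`m9` and cursor `m3`, page `1` with limit `2` yields `m4, m5`, while page `-1` yields `m1, m2`.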
+
  ### Related
 
  - [Observational Memory](https://mastra.ai/docs/memory/observational-memory)