@mastra/memory 1.5.1 → 1.5.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (52)
  1. package/CHANGELOG.md +28 -0
  2. package/dist/{chunk-6PKWQ3GH.js → chunk-HNPAIFCZ.js} +59 -16
  3. package/dist/chunk-HNPAIFCZ.js.map +1 -0
  4. package/dist/{chunk-6XVTMLW4.cjs → chunk-PVFLHAZX.cjs} +59 -16
  5. package/dist/chunk-PVFLHAZX.cjs.map +1 -0
  6. package/dist/docs/SKILL.md +55 -0
  7. package/dist/docs/assets/SOURCE_MAP.json +103 -0
  8. package/dist/docs/references/docs-agents-agent-approval.md +558 -0
  9. package/dist/docs/references/docs-agents-agent-memory.md +209 -0
  10. package/dist/docs/references/docs-agents-network-approval.md +275 -0
  11. package/dist/docs/references/docs-agents-networks.md +299 -0
  12. package/dist/docs/references/docs-agents-supervisor-agents.md +304 -0
  13. package/dist/docs/references/docs-memory-memory-processors.md +314 -0
  14. package/dist/docs/references/docs-memory-message-history.md +260 -0
  15. package/dist/docs/references/docs-memory-observational-memory.md +248 -0
  16. package/dist/docs/references/docs-memory-overview.md +45 -0
  17. package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
  18. package/dist/docs/references/docs-memory-storage.md +261 -0
  19. package/dist/docs/references/docs-memory-working-memory.md +400 -0
  20. package/dist/docs/references/reference-core-getMemory.md +50 -0
  21. package/dist/docs/references/reference-core-listMemory.md +56 -0
  22. package/dist/docs/references/reference-memory-clone-utilities.md +199 -0
  23. package/dist/docs/references/reference-memory-cloneThread.md +130 -0
  24. package/dist/docs/references/reference-memory-createThread.md +68 -0
  25. package/dist/docs/references/reference-memory-getThreadById.md +24 -0
  26. package/dist/docs/references/reference-memory-listThreads.md +145 -0
  27. package/dist/docs/references/reference-memory-memory-class.md +147 -0
  28. package/dist/docs/references/reference-memory-observational-memory.md +565 -0
  29. package/dist/docs/references/reference-processors-token-limiter-processor.md +115 -0
  30. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  31. package/dist/docs/references/reference-storage-libsql.md +135 -0
  32. package/dist/docs/references/reference-storage-mongodb.md +262 -0
  33. package/dist/docs/references/reference-storage-postgresql.md +526 -0
  34. package/dist/docs/references/reference-storage-upstash.md +160 -0
  35. package/dist/docs/references/reference-vectors-libsql.md +305 -0
  36. package/dist/docs/references/reference-vectors-mongodb.md +295 -0
  37. package/dist/docs/references/reference-vectors-pg.md +408 -0
  38. package/dist/docs/references/reference-vectors-upstash.md +294 -0
  39. package/dist/index.cjs +1 -1
  40. package/dist/index.js +1 -1
  41. package/dist/{observational-memory-AJWSMZVP.js → observational-memory-KAFD4QZK.js} +3 -3
  42. package/dist/{observational-memory-AJWSMZVP.js.map → observational-memory-KAFD4QZK.js.map} +1 -1
  43. package/dist/{observational-memory-Q5TO525O.cjs → observational-memory-Q47HN5YL.cjs} +17 -17
  44. package/dist/{observational-memory-Q5TO525O.cjs.map → observational-memory-Q47HN5YL.cjs.map} +1 -1
  45. package/dist/processors/index.cjs +15 -15
  46. package/dist/processors/index.js +1 -1
  47. package/dist/processors/observational-memory/observational-memory.d.ts +2 -2
  48. package/dist/processors/observational-memory/observational-memory.d.ts.map +1 -1
  49. package/dist/processors/observational-memory/token-counter.d.ts.map +1 -1
  50. package/package.json +8 -8
  51. package/dist/chunk-6PKWQ3GH.js.map +0 -1
  52. package/dist/chunk-6XVTMLW4.cjs.map +0 -1
package/dist/docs/references/reference-memory-observational-memory.md
@@ -0,0 +1,565 @@
# Observational Memory

**Added in:** `@mastra/memory@1.1.0`

Observational Memory (OM) is Mastra's memory system for long-context agentic memory. Two background agents — an **Observer** that watches conversations and creates observations, and a **Reflector** that restructures observations by combining related items, reflecting on overarching patterns, and condensing where possible — maintain an observation log that replaces raw message history as it grows.

## Usage

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: true,
    },
  }),
})
```

## Configuration

The `observationalMemory` option accepts `true`, a configuration object, or `false`. Setting `true` enables OM with `google/gemini-2.5-flash` as the default model. When passing a config object, a `model` must be explicitly set — either at the top level, or on `observation.model` and/or `reflection.model`.

**enabled?:** (`boolean`): Enable or disable Observational Memory. When omitted from a config object, defaults to `true`. Only `enabled: false` explicitly disables it. (Default: `true`)

**model?:** (`string | LanguageModel | DynamicModel | ModelWithRetries[]`): Model for both the Observer and Reflector agents. Sets the model for both at once. Cannot be used together with `observation.model` or `reflection.model` — an error will be thrown if both are set. When using `observationalMemory: true`, defaults to `google/gemini-2.5-flash`. When passing a config object, this or `observation.model`/`reflection.model` must be set. Use `"default"` to explicitly use the default model (`google/gemini-2.5-flash`). (Default: `'google/gemini-2.5-flash'` when using `observationalMemory: true`)

**scope?:** (`'resource' | 'thread'`): Memory scope for observations. `'thread'` keeps observations per-thread. `'resource'` (experimental) shares observations across all threads for a resource, enabling cross-conversation memory. (Default: `'thread'`)

**shareTokenBudget?:** (`boolean`): Share the token budget between messages and observations. When enabled, the total budget is `observation.messageTokens + reflection.observationTokens`. Messages can use more space when observations are small, and vice versa. This maximizes context usage through flexible allocation. **Note:** `shareTokenBudget` is not yet compatible with async buffering. You must set `observation: { bufferTokens: false }` when using this option (this is a temporary limitation). (Default: `false`)

**observation?:** (`ObservationalMemoryObservationConfig`): Configuration for the observation step. Controls when the Observer agent runs and how it behaves.

**reflection?:** (`ObservationalMemoryReflectionConfig`): Configuration for the reflection step. Controls when the Reflector agent runs and how it behaves.

### Observation config

**model?:** (`string | LanguageModel | DynamicModel | ModelWithRetries[]`): Model for the Observer agent. Cannot be set if a top-level `model` is also provided. If neither this nor the top-level `model` is set, falls back to `reflection.model`.

**instruction?:** (`string`): Custom instruction appended to the Observer's system prompt. Use this to customize what the Observer focuses on, such as domain-specific preferences or priorities.

**messageTokens?:** (`number`): Token count of unobserved messages that triggers observation. When unobserved message tokens exceed this threshold, the Observer agent is called. (Default: `30000`)

**maxTokensPerBatch?:** (`number`): Maximum tokens per batch when observing multiple threads in resource scope. Threads are chunked into batches of this size and processed in parallel. Lower values mean more parallelism but more API calls. (Default: `10000`)

**modelSettings?:** (`ObservationalMemoryModelSettings`): Model settings for the Observer agent. (Default: `{ temperature: 0.3, maxOutputTokens: 100_000 }`)

**bufferTokens?:** (`number | false`): Token interval for async background observation buffering. Can be an absolute token count (e.g. `5000`) or a fraction of `messageTokens` (e.g. `0.25` = buffer every 25% of the threshold). When set, observations run in the background at this interval, storing results in a buffer. When the main `messageTokens` threshold is reached, buffered observations activate instantly without a blocking LLM call. Must resolve to less than `messageTokens`. Set to `false` to explicitly disable all async buffering (both observation and reflection). (Default: `0.2`)

**bufferActivation?:** (`number`): Controls how much of the message window to retain after activation. Accepts a ratio (0-1) or an absolute token count (≥ 1000). For example, `0.8` means: activate enough buffers to remove 80% of `messageTokens` and leave 20% as active message history. An absolute token count like `4000` targets keeping ~4k message tokens after activation. With a ratio, higher values remove more message history per activation; with an absolute token count, higher values keep more. (Default: `0.8`)

**blockAfter?:** (`number`): Token threshold above which synchronous (blocking) observation is forced. Between `messageTokens` and `blockAfter`, only async buffering/activation is used. Above `blockAfter`, a synchronous observation runs as a last resort, while buffered activation still preserves a minimum remaining context (min(1000, retention floor)). Accepts a multiplier (1 < value < 2, multiplied by `messageTokens`) or an absolute token count (≥ 2, must be greater than `messageTokens`). Only relevant when `bufferTokens` is set. (Default: `1.2` when async buffering is enabled)
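
The fraction, ratio, and multiplier rules above all resolve to plain token counts. The sketch below works through that arithmetic with invented helper names; it is illustrative only, not part of the `@mastra/memory` API:

```typescript
// Hypothetical helpers illustrating how observation thresholds resolve.
// The function names are invented for this sketch.

// bufferTokens: values below 1 are fractions of messageTokens; otherwise absolute.
function resolveBufferInterval(bufferTokens: number, messageTokens: number): number {
  return bufferTokens < 1 ? Math.round(bufferTokens * messageTokens) : bufferTokens
}

// bufferActivation: a ratio (0-1) removes that fraction of messageTokens on
// activation; an absolute count (>= 1000) targets keeping ~that many tokens.
function resolveRetainedTokens(bufferActivation: number, messageTokens: number): number {
  return bufferActivation < 1
    ? Math.round((1 - bufferActivation) * messageTokens)
    : bufferActivation
}

// blockAfter: values strictly between 1 and 2 multiply messageTokens; >= 2 is absolute.
function resolveBlockAfter(blockAfter: number, messageTokens: number): number {
  return blockAfter > 1 && blockAfter < 2
    ? Math.round(blockAfter * messageTokens)
    : blockAfter
}

console.log(resolveBufferInterval(0.2, 30_000)) // 6000 — default: buffer every ~6k tokens
console.log(resolveRetainedTokens(0.8, 30_000)) // 6000 — default: keep ~20% after activation
console.log(resolveBlockAfter(1.2, 30_000)) // 36000 — default blocking threshold
```

With the defaults (`messageTokens: 30_000`), buffering runs roughly every 6k tokens, activation trims the window back to about 6k tokens, and a blocking observation is forced past 36k tokens.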

### Reflection config

**model?:** (`string | LanguageModel | DynamicModel | ModelWithRetries[]`): Model for the Reflector agent. Cannot be set if a top-level `model` is also provided. If neither this nor the top-level `model` is set, falls back to `observation.model`.

**instruction?:** (`string`): Custom instruction appended to the Reflector's system prompt. Use this to customize how the Reflector consolidates observations, such as prioritizing certain types of information.

**observationTokens?:** (`number`): Token count of observations that triggers reflection. When observation tokens exceed this threshold, the Reflector agent is called to condense them. (Default: `40000`)

**modelSettings?:** (`ObservationalMemoryModelSettings`): Model settings for the Reflector agent. (Default: `{ temperature: 0, maxOutputTokens: 100_000 }`)

**bufferActivation?:** (`number`): Ratio (0-1) controlling when async reflection buffering starts. When observation tokens reach `observationTokens * bufferActivation`, reflection runs in the background. On activation at the full threshold, the buffered reflection replaces the observations it covers, preserving any new observations appended after that range. (Default: `0.5`)

**blockAfter?:** (`number`): Token threshold above which synchronous (blocking) reflection is forced. Between `observationTokens` and `blockAfter`, only async buffering/activation is used. Above `blockAfter`, a synchronous reflection runs as a last resort. Accepts a multiplier (1 < value < 2, multiplied by `observationTokens`) or an absolute token count (≥ 2, must be greater than `observationTokens`). Only relevant when `bufferActivation` is set. (Default: `1.2` when async reflection is enabled)
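
With the defaults, the reflection lifecycle works out to the following token counts (illustrative arithmetic only, not the library's internal accounting):

```typescript
// Default reflection thresholds from this reference, worked through as numbers.
const observationTokens = 40_000 // reflection trigger threshold
const bufferActivation = 0.5     // start background reflection at 50% of the threshold
const blockAfter = 1.2           // multiplier: force synchronous reflection past this point

const backgroundStart = observationTokens * bufferActivation        // 20_000 tokens
const synchronousFallback = Math.round(observationTokens * blockAfter) // 48_000 tokens

console.log(backgroundStart, synchronousFallback)
```

So background reflection begins once observations reach 20k tokens, the buffered result activates at 40k, and a blocking reflection is forced only past 48k.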

### Model settings

**temperature?:** (`number`): Temperature for generation. Lower values produce more consistent output. (Default: `0.3`)

**maxOutputTokens?:** (`number`): Maximum output tokens. Set high to prevent truncation of observations. (Default: `100000`)

## Examples

### Resource scope with custom thresholds (experimental)

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: {
        model: 'google/gemini-2.5-flash',
        scope: 'resource',
        observation: {
          messageTokens: 20_000,
        },
        reflection: {
          observationTokens: 60_000,
        },
      },
    },
  }),
})
```

### Shared token budget

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: {
        shareTokenBudget: true,
        observation: {
          messageTokens: 20_000,
          bufferTokens: false, // required when using shareTokenBudget (temporary limitation)
        },
        reflection: {
          observationTokens: 80_000,
        },
      },
    },
  }),
})
```

When `shareTokenBudget` is enabled, the total budget is `observation.messageTokens + reflection.observationTokens` (100k in this example). If observations only use 30k tokens, messages can expand to use up to 70k. If messages are short, observations have more room before triggering reflection.
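
The flexible allocation is simple arithmetic. A sketch of the example's numbers (not the library's internal accounting):

```typescript
// Shared token budget arithmetic for the example above. Illustrative only.
const messageBudget = 20_000     // observation.messageTokens
const observationBudget = 80_000 // reflection.observationTokens
const totalBudget = messageBudget + observationBudget // 100_000

// If observations currently hold 30k tokens, messages may expand into the rest:
const observationTokensUsed = 30_000
const messageCeiling = totalBudget - observationTokensUsed // 70_000

console.log(totalBudget, messageCeiling)
```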

### Custom model

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: {
        model: 'openai/gpt-4o-mini',
      },
    },
  }),
})
```

### Different models per agent

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: {
        observation: {
          model: 'google/gemini-2.5-flash',
        },
        reflection: {
          model: 'openai/gpt-4o-mini',
        },
      },
    },
  }),
})
```

### Custom instructions

Customize what the Observer and Reflector focus on by providing custom instructions:

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'health-assistant',
  instructions: 'You are a health and wellness assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: {
        model: 'google/gemini-2.5-flash',
        observation: {
          // Focus observations on health-related preferences and goals
          instruction:
            'Prioritize capturing user health goals, dietary restrictions, exercise preferences, and medical considerations. Avoid capturing general chit-chat.',
        },
        reflection: {
          // Guide reflection to consolidate health patterns
          instruction:
            'When consolidating, group related health information together. Preserve specific metrics, dates, and medical details.',
        },
      },
    },
  }),
})
```

### Async buffering

Async buffering is **enabled by default**. It pre-computes observations in the background as the conversation grows — when the `messageTokens` threshold is reached, buffered observations activate instantly with no blocking LLM call.

The lifecycle is: **buffer → activate → remove messages → repeat**. Background Observer calls run at `bufferTokens` intervals, each producing a chunk of observations. At threshold, chunks activate: observations move into the log, raw messages are removed from context. The `blockAfter` threshold forces a synchronous fallback if buffering can't keep up.

Default settings:

- `observation.bufferTokens: 0.2` — buffer every 20% of `messageTokens` (e.g. every ~6k tokens with a 30k threshold)
- `observation.bufferActivation: 0.8` — on activation, remove enough messages to keep only 20% of the threshold remaining
- Buffered observations include continuation hints (`suggestedResponse`, `currentTask`) that survive activation to maintain conversational continuity
- `reflection.bufferActivation: 0.5` — start background reflection at 50% of the observation threshold

To customize:

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  memory: new Memory({
    options: {
      observationalMemory: {
        model: 'google/gemini-2.5-flash',
        observation: {
          messageTokens: 30_000,
          // Buffer every 5k tokens (runs in background)
          bufferTokens: 5_000,
          // Activate to retain 30% of threshold
          bufferActivation: 0.7,
          // Force synchronous observation at 1.5x threshold
          blockAfter: 1.5,
        },
        reflection: {
          observationTokens: 60_000,
          // Start background reflection at 50% of threshold
          bufferActivation: 0.5,
          // Force synchronous reflection at 1.2x threshold
          blockAfter: 1.2,
        },
      },
    },
  }),
})
```

To disable async buffering entirely:

```typescript
observationalMemory: {
  model: "google/gemini-2.5-flash",
  observation: {
    bufferTokens: false,
  },
}
```

Setting `bufferTokens: false` disables both observation and reflection async buffering. Observations and reflections will run synchronously when their thresholds are reached.

> **Note:** Async buffering is not supported with `scope: 'resource'` and is automatically disabled in resource scope.

## Streaming data parts

Observational Memory emits typed data parts during agent execution that clients can use for real-time UI feedback. These are streamed alongside the agent's response.

### `data-om-status`

Emitted once per agent loop step, before model generation. Provides a snapshot of the current memory state, including token usage for both context windows and the state of any async buffered content.

```typescript
interface DataOmStatusPart {
  type: 'data-om-status'
  data: {
    windows: {
      active: {
        /** Unobserved message tokens and the threshold that triggers observation */
        messages: { tokens: number; threshold: number }
        /** Observation tokens and the threshold that triggers reflection */
        observations: { tokens: number; threshold: number }
      }
      buffered: {
        observations: {
          /** Number of buffered chunks staged for activation */
          chunks: number
          /** Total message tokens across all buffered chunks */
          messageTokens: number
          /** Projected message tokens that would be removed if activation happened now (based on bufferActivation ratio and chunk boundaries) */
          projectedMessageRemoval: number
          /** Observation tokens that will be added on activation */
          observationTokens: number
          /** idle: no buffering in progress. running: background observer is working. complete: chunks are ready for activation. */
          status: 'idle' | 'running' | 'complete'
        }
        reflection: {
          /** Observation tokens that were fed into the reflector (pre-compression size) */
          inputObservationTokens: number
          /** Observation tokens the reflection will produce on activation (post-compression size) */
          observationTokens: number
          /** idle: no reflection buffered. running: background reflector is working. complete: reflection is ready for activation. */
          status: 'idle' | 'running' | 'complete'
        }
      }
    }
    recordId: string
    threadId: string
    stepNumber: number
    /** Increments each time the Reflector creates a new generation */
    generationCount: number
  }
}
```

`buffered.reflection.inputObservationTokens` is the size of the observations that were sent to the Reflector. `buffered.reflection.observationTokens` is the compressed result — the size of what will replace those observations when the reflection activates. A client can use these two values to show a compression ratio.

Clients can derive percentages and post-activation estimates from the raw values:

```typescript
// Message window usage %
const msgPercent = status.windows.active.messages.tokens / status.windows.active.messages.threshold

// Observation window usage %
const obsPercent =
  status.windows.active.observations.tokens / status.windows.active.observations.threshold

// Projected message tokens after buffered observations activate
// Uses projectedMessageRemoval which accounts for bufferActivation ratio and chunk boundaries
const postActivation =
  status.windows.active.messages.tokens -
  status.windows.buffered.observations.projectedMessageRemoval

// Reflection compression ratio (when buffered reflection exists)
const { inputObservationTokens, observationTokens } = status.windows.buffered.reflection
if (inputObservationTokens > 0) {
  const compressionRatio = observationTokens / inputObservationTokens
}
```

### `data-om-observation-start`

Emitted when the Observer or Reflector agent begins processing.

**cycleId:** (`string`): Unique ID for this cycle — shared between start/end/failed markers.

**operationType:** (`'observation' | 'reflection'`): Whether this is an observation or reflection operation.

**startedAt:** (`string`): ISO timestamp when processing started.

**tokensToObserve:** (`number`): Message tokens (input) being processed in this batch.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.

**threadIds:** (`string[]`): All thread IDs in this batch (for resource-scoped).

**config:** (`ObservationMarkerConfig`): Snapshot of `messageTokens`, `observationTokens`, and `scope` at observation time.

### `data-om-observation-end`

Emitted when observation or reflection completes successfully.

**cycleId:** (`string`): Matches the corresponding `start` marker.

**operationType:** (`'observation' | 'reflection'`): Type of operation that completed.

**completedAt:** (`string`): ISO timestamp when processing completed.

**durationMs:** (`number`): Duration in milliseconds.

**tokensObserved:** (`number`): Message tokens (input) that were processed.

**observationTokens:** (`number`): Resulting observation tokens (output) after the Observer compressed them.

**observations?:** (`string`): The generated observations text.

**currentTask?:** (`string`): Current task extracted by the Observer.

**suggestedResponse?:** (`string`): Suggested response extracted by the Observer.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.

### `data-om-observation-failed`

Emitted when observation or reflection fails. The system falls back to synchronous processing.

**cycleId:** (`string`): Matches the corresponding `start` marker.

**operationType:** (`'observation' | 'reflection'`): Type of operation that failed.

**failedAt:** (`string`): ISO timestamp when the failure occurred.

**durationMs:** (`number`): Duration until failure in milliseconds.

**tokensAttempted:** (`number`): Message tokens (input) that were attempted.

**error:** (`string`): Error message.

**observations?:** (`string`): Any partial content available for display.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.
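
A client consuming the lifecycle markers above might branch on `type` and `operationType` to drive a status line. A minimal sketch; the part shapes include only the fields used here, and the handler itself is hypothetical, not a Mastra API:

```typescript
// Narrowed part shapes for this sketch; the full payloads carry more fields.
type OmLifecyclePart =
  | { type: 'data-om-observation-start'; data: { cycleId: string; operationType: 'observation' | 'reflection' } }
  | { type: 'data-om-observation-end'; data: { cycleId: string; operationType: 'observation' | 'reflection'; durationMs: number } }
  | { type: 'data-om-observation-failed'; data: { cycleId: string; operationType: 'observation' | 'reflection'; error: string } }

// Hypothetical helper turning lifecycle parts into status-line text for a UI.
function describeOmPart(part: OmLifecyclePart): string {
  switch (part.type) {
    case 'data-om-observation-start':
      return `${part.data.operationType} started (cycle ${part.data.cycleId})`
    case 'data-om-observation-end':
      return `${part.data.operationType} finished in ${part.data.durationMs}ms`
    case 'data-om-observation-failed':
      return `${part.data.operationType} failed: ${part.data.error}`
  }
}

console.log(
  describeOmPart({
    type: 'data-om-observation-end',
    data: { cycleId: 'c1', operationType: 'observation', durationMs: 120 },
  }),
) // "observation finished in 120ms"
```

Because `cycleId` is shared between start/end/failed markers, a client can also key in-flight operations by it and clear them when the matching end or failed part arrives.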

### `data-om-buffering-start`

Emitted when async buffering begins in the background. Buffering pre-computes observations or reflections before the main threshold is reached.

**cycleId:** (`string`): Unique ID for this buffering cycle.

**operationType:** (`'observation' | 'reflection'`): Type of operation being buffered.

**startedAt:** (`string`): ISO timestamp when buffering started.

**tokensToBuffer:** (`number`): Message tokens (input) being buffered in this cycle.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.

**threadIds:** (`string[]`): All thread IDs being buffered (for resource-scoped).

**config:** (`ObservationMarkerConfig`): Snapshot of config at buffering time.

### `data-om-buffering-end`

Emitted when async buffering completes. The content is stored but not yet activated in the main context.

**cycleId:** (`string`): Matches the corresponding `buffering-start` marker.

**operationType:** (`'observation' | 'reflection'`): Type of operation that was buffered.

**completedAt:** (`string`): ISO timestamp when buffering completed.

**durationMs:** (`number`): Duration in milliseconds.

**tokensBuffered:** (`number`): Message tokens (input) that were buffered.

**bufferedTokens:** (`number`): Observation tokens (output) after the Observer compressed them.

**observations?:** (`string`): The buffered content.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.

### `data-om-buffering-failed`

Emitted when async buffering fails. The system falls back to synchronous processing when the threshold is reached.

**cycleId:** (`string`): Matches the corresponding `buffering-start` marker.

**operationType:** (`'observation' | 'reflection'`): Type of operation that failed.

**failedAt:** (`string`): ISO timestamp when the failure occurred.

**durationMs:** (`number`): Duration until failure in milliseconds.

**tokensAttempted:** (`number`): Message tokens (input) that were attempted to buffer.

**error:** (`string`): Error message.

**observations?:** (`string`): Any partial content.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.

### `data-om-activation`

Emitted when buffered observations or reflections are activated (moved into the active context window). This is an instant operation — no LLM call is involved.

**cycleId:** (`string`): Unique ID for this activation event.

**operationType:** (`'observation' | 'reflection'`): Type of content activated.

**activatedAt:** (`string`): ISO timestamp when activation occurred.

**chunksActivated:** (`number`): Number of buffered chunks activated.

**tokensActivated:** (`number`): Message tokens (input) from activated chunks. For observation activation, these are removed from the message window. For reflection activation, this is the observation tokens that were compressed.

**observationTokens:** (`number`): Resulting observation tokens after activation.

**messagesActivated:** (`number`): Number of messages that were observed via activation.

**generationCount:** (`number`): Current reflection generation count.

**observations?:** (`string`): The activated observations text.

**recordId:** (`string`): The OM record ID.

**threadId:** (`string`): This thread's ID.

**config:** (`ObservationMarkerConfig`): Snapshot of config at activation time.

## Standalone usage

Most users should use the `Memory` class above. Using `ObservationalMemory` directly is mainly useful for benchmarking, experimentation, or when you need to control processor ordering with other processors (like [guardrails](https://mastra.ai/docs/agents/guardrails)).

```typescript
import { ObservationalMemory } from '@mastra/memory/processors'
import { Agent } from '@mastra/core/agent'
import { LibSQLStore } from '@mastra/libsql'

const storage = new LibSQLStore({
  id: 'my-storage',
  url: 'file:./memory.db',
})

const om = new ObservationalMemory({
  storage: storage.stores.memory,
  model: 'google/gemini-2.5-flash',
  scope: 'resource',
  observation: {
    messageTokens: 20_000,
  },
  reflection: {
    observationTokens: 60_000,
  },
})

export const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
  inputProcessors: [om],
  outputProcessors: [om],
})
```

### Standalone config

The standalone `ObservationalMemory` class accepts all the same options as the `observationalMemory` config object above, plus the following:

**storage:** (`MemoryStorage`): Storage adapter for persisting observations. Must be a MemoryStorage instance (from `MastraStorage.stores.memory`).

**onDebugEvent?:** (`(event: ObservationDebugEvent) => void`): Debug callback for observation events. Called whenever observation-related events occur. Useful for debugging and understanding the observation flow.

**obscureThreadIds?:** (`boolean`): When enabled, thread IDs are hashed before being included in observation context. This prevents the LLM from recognizing patterns in thread identifiers. Automatically enabled when using resource scope through the Memory class. (Default: `false`)

### Related

- [Observational Memory](https://mastra.ai/docs/memory/observational-memory)
- [Memory Overview](https://mastra.ai/docs/memory/overview)
- [Memory Class](https://mastra.ai/reference/memory/memory-class)
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors)
- [Processors](https://mastra.ai/docs/agents/processors)
package/dist/docs/references/reference-processors-token-limiter-processor.md
@@ -0,0 +1,115 @@
# TokenLimiterProcessor

The `TokenLimiterProcessor` limits the number of tokens in messages. It can be used as both an input and output processor:

- **Input processor**: Filters historical messages to fit within the context window, prioritizing recent messages
- **Output processor**: Limits generated response tokens via streaming or non-streaming with configurable strategies for handling exceeded limits

## Usage example

```typescript
import { TokenLimiterProcessor } from '@mastra/core/processors'

const processor = new TokenLimiterProcessor({
  limit: 1000,
  strategy: 'truncate',
  countMode: 'cumulative',
})
```

## Constructor parameters

**options:** (`number | Options`): Either a simple number for the token limit, or a configuration options object

### Options

**limit:** (`number`): Maximum number of tokens to allow in the response

**encoding?:** (`TiktokenBPE`): Optional encoding to use. Defaults to `o200k_base`, which is used by gpt-5.1

**strategy?:** (`'truncate' | 'abort'`): Strategy when the token limit is reached: `'truncate'` stops emitting chunks; `'abort'` calls `abort()` to stop the stream

**countMode?:** (`'cumulative' | 'part'`): Whether to count tokens from the beginning of the stream or just the current part: `'cumulative'` counts all tokens from the start; `'part'` only counts tokens in the current part
33
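+
+ Because the constructor accepts `number | Options`, a bare number can stand in for the options object when only a limit is needed. A short sketch (both forms below set a 1000-token limit; the options form additionally pins the strategy and count mode):
+
+ ```typescript
+ import { TokenLimiterProcessor } from '@mastra/core/processors'
+
+ // Shorthand: a bare number is interpreted as the token limit
+ const simple = new TokenLimiterProcessor(1000)
+
+ // Full options object with explicit behavior when the limit is exceeded
+ const configured = new TokenLimiterProcessor({
+   limit: 1000,
+   strategy: 'truncate', // stop emitting chunks once the limit is hit
+   countMode: 'cumulative', // count tokens from the start of the stream
+ })
+ ```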
+
+ ## Returns
+
+ **id:** (`string`): Processor identifier, set to `'token-limiter'`
+
+ **name?:** (`string`): Optional processor display name
+
+ **processInput:** (`(args: { messages: MastraDBMessage[]; abort: (reason?: string) => never }) => Promise<MastraDBMessage[]>`): Filters input messages to fit within the token limit, prioritizing recent messages while preserving system messages
+
+ **processOutputStream:** (`(args: { part: ChunkType; streamParts: ChunkType[]; state: Record<string, any>; abort: (reason?: string) => never }) => Promise<ChunkType | null>`): Processes streaming output parts to limit the token count during streaming
+
+ **processOutputResult:** (`(args: { messages: MastraDBMessage[]; abort: (reason?: string) => never }) => Promise<MastraDBMessage[]>`): Processes final output results to limit the token count in non-streaming scenarios
+
+ **getMaxTokens:** (`() => number`): Returns the maximum token limit
+
+ ## Error behavior
+
+ When used as an input processor, `TokenLimiterProcessor` throws a `TripWire` error in the following cases:
+
+ - **Empty messages**: If there are no messages to process, a `TripWire` is thrown because you cannot send an LLM request with no messages.
+ - **System messages exceed limit**: If the system messages alone exceed the token limit, a `TripWire` is thrown because you cannot send an LLM request containing only system messages and no user/assistant messages.
+
+ ```typescript
+ import { TripWire } from '@mastra/core/agent'
+
+ try {
+   await agent.generate('Hello')
+ } catch (error) {
+   if (error instanceof TripWire) {
+     console.log('Token limit error:', error.message)
+   }
+ }
+ ```
+
+ ## Extended usage example
+
+ ### As an input processor (limit context window)
+
+ Use `inputProcessors` to limit the historical messages sent to the model, which helps stay within context window limits:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { TokenLimiterProcessor } from '@mastra/core/processors'
+
+ export const agent = new Agent({
+   name: 'context-limited-agent',
+   instructions: 'You are a helpful assistant',
+   model: 'openai/gpt-4o',
+   memory: new Memory({
+     /* ... */
+   }),
+   inputProcessors: [
+     new TokenLimiterProcessor({ limit: 4000 }), // Limits historical messages to ~4000 tokens
+   ],
+ })
+ ```
+
+ ### As an output processor (limit response length)
+
+ Use `outputProcessors` to limit the length of generated responses:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { TokenLimiterProcessor } from '@mastra/core/processors'
+
+ export const agent = new Agent({
+   name: 'response-limited-agent',
+   instructions: 'You are a helpful assistant',
+   model: 'openai/gpt-4o',
+   outputProcessors: [
+     new TokenLimiterProcessor({
+       limit: 1000,
+       strategy: 'truncate',
+       countMode: 'cumulative',
+     }),
+   ],
+ })
+ ```
+
+ ## Related
+
+ - [Guardrails](https://mastra.ai/docs/agents/guardrails)