@mastra/memory 1.8.0-alpha.0 → 1.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,39 @@
  # @mastra/memory

+ ## 1.8.1
+
+ ### Patch Changes
+
+ - Updated dependencies [[`c4e600e`](https://github.com/mastra-ai/mastra/commit/c4e600e39a04309c3a7ff182bd806ab2b3c788ea), [`205e76c`](https://github.com/mastra-ai/mastra/commit/205e76c3ba652205dafb037f50a4a8eea73f6736)]:
+   - @mastra/schema-compat@1.2.3
+   - @mastra/core@1.13.1
+
+ ## 1.8.0
+
+ ### Minor Changes
+
+ - Added observer context optimization for Observational Memory. The `observation.previousObserverTokens` field reduces Observer input token costs for long-running conversations: ([#13568](https://github.com/mastra-ai/mastra/pull/13568))
+   - **previousObserverTokens** (default: `2000`): Truncates the 'Previous Observations' section to a token budget, keeping the most recent observations and automatically replacing already-reflected lines with the buffered reflection summary. Set to `0` to omit previous observations entirely, or `false` to disable truncation and keep the full observation history.
+
+   ```typescript
+   const memory = new Memory({
+     options: {
+       observationalMemory: {
+         model: 'google/gemini-2.5-flash',
+         observation: {
+           previousObserverTokens: 10_000,
+         },
+       },
+     },
+   });
+   ```
+
+ ### Patch Changes
+
+ - Updated dependencies [[`ea86967`](https://github.com/mastra-ai/mastra/commit/ea86967449426e0a3673253bd1c2c052a99d970d), [`db21c21`](https://github.com/mastra-ai/mastra/commit/db21c21a6ae5f33539262cc535342fa8757eb359), [`a1d6b9c`](https://github.com/mastra-ai/mastra/commit/a1d6b9c907c909f259632a7ea26e9e3c221fb691), [`11f5dbe`](https://github.com/mastra-ai/mastra/commit/11f5dbe9a1e7ad8ef3b1ea34fb4a9fa3631d1587), [`c562ec2`](https://github.com/mastra-ai/mastra/commit/c562ec228f1af63693e2984ffa9712aa6db8fea8), [`6751354`](https://github.com/mastra-ai/mastra/commit/67513544d1a64be891d9de7624d40aadc895d56e), [`c958cd3`](https://github.com/mastra-ai/mastra/commit/c958cd36627c1eea122ec241b2b15492977a263a), [`86f2426`](https://github.com/mastra-ai/mastra/commit/86f242631d252a172d2f9f9a2ea0feb8647a76b0), [`950eb07`](https://github.com/mastra-ai/mastra/commit/950eb07b7e7354629630e218d49550fdd299c452)]:
+   - @mastra/core@1.13.0
+   - @mastra/schema-compat@1.2.2
+
  ## 1.8.0-alpha.0

  ### Minor Changes
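The `previousObserverTokens` entry above also documents two special values (`0` and `false`) that its example does not show. A minimal sketch of both configurations, reusing the `Memory` options shape and model name from the changelog example (the `Memory` import path is assumed to be the package root):

```typescript
import { Memory } from '@mastra/memory';

// Budget of 0: the 'Previous Observations' section is omitted entirely,
// minimizing Observer input tokens at the cost of prior-observation context.
const withoutPreviousObservations = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        previousObserverTokens: 0,
      },
    },
  },
});

// false: truncation disabled, so the Observer always receives
// the full observation history regardless of its token size.
const withFullHistory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        previousObserverTokens: false,
      },
    },
  },
});
```

Per the changelog text, a numeric budget sits between these extremes: it keeps the most recent observations within the budget and substitutes already-reflected lines with the buffered reflection summary.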
@@ -3,7 +3,7 @@ name: mastra-memory
  description: Documentation for @mastra/memory. Use when working with @mastra/memory APIs, configuration, or implementation.
  metadata:
    package: "@mastra/memory"
-   version: "1.8.0-alpha.0"
+   version: "1.8.1"
  ---

  ## When to use
@@ -1,5 +1,5 @@
  {
-   "version": "1.8.0-alpha.0",
+   "version": "1.8.1",
    "package": "@mastra/memory",
    "exports": {
      "OBSERVATIONAL_MEMORY_DEFAULTS": {
@@ -5,9 +5,9 @@ Memory enables your agent to remember user messages, agent replies, and tool res
  Mastra supports four complementary memory types:

  - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
+ - [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
  - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
  - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
- - [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.

  If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.

@@ -16,9 +16,9 @@ If the combined memory exceeds the model's context limit, [memory processors](ht
  Choose a memory option to get started:

  - [Message history](https://mastra.ai/docs/memory/message-history)
+ - [Observational memory](https://mastra.ai/docs/memory/observational-memory)
  - [Working memory](https://mastra.ai/docs/memory/working-memory)
  - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
- - [Observational memory](https://mastra.ai/docs/memory/observational-memory)

  ## Storage

@@ -41,5 +41,5 @@ This visibility helps you understand why an agent made specific decisions and ve
  ## Next steps

  - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
- - Add [Message history](https://mastra.ai/docs/memory/message-history), [Working memory](https://mastra.ai/docs/memory/working-memory), [Semantic recall](https://mastra.ai/docs/memory/semantic-recall), or [Observational memory](https://mastra.ai/docs/memory/observational-memory)
+ - Add [Message history](https://mastra.ai/docs/memory/message-history), [Observational memory](https://mastra.ai/docs/memory/observational-memory), [Working memory](https://mastra.ai/docs/memory/working-memory), or [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
  - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/memory",
-   "version": "1.8.0-alpha.0",
+   "version": "1.8.1",
    "description": "",
    "type": "module",
    "main": "./dist/index.js",
@@ -43,7 +43,7 @@
      "probe-image-size": "^7.2.3",
      "tokenx": "^1.3.0",
      "xxhash-wasm": "^1.1.0",
-     "@mastra/schema-compat": "1.2.2-alpha.0"
+     "@mastra/schema-compat": "1.2.3"
    },
    "devDependencies": {
      "@ai-sdk/openai": "^1.3.24",
@@ -59,12 +59,12 @@
      "typescript-eslint": "^8.51.0",
      "vitest": "4.0.18",
      "zod": "^4.3.6",
-     "@internal/ai-sdk-v4": "0.0.15",
-     "@internal/ai-sdk-v5": "0.0.15",
-     "@internal/ai-v6": "0.0.15",
-     "@internal/lint": "0.0.68",
-     "@internal/types-builder": "0.0.43",
-     "@mastra/core": "1.13.0-alpha.0"
+     "@internal/ai-sdk-v5": "0.0.17",
+     "@internal/ai-sdk-v4": "0.0.17",
+     "@internal/ai-v6": "0.0.17",
+     "@internal/lint": "0.0.70",
+     "@internal/types-builder": "0.0.45",
+     "@mastra/core": "1.13.1"
    },
    "peerDependencies": {
      "@mastra/core": ">=1.4.1-0 <2.0.0-0",