@mastra/memory 1.8.0-alpha.0 → 1.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,31 @@
 # @mastra/memory
 
+## 1.8.0
+
+### Minor Changes
+
+- Added observer context optimization for Observational Memory. The `observation.previousObserverTokens` field reduces Observer input token costs for long-running conversations: ([#13568](https://github.com/mastra-ai/mastra/pull/13568))
+
+  - **previousObserverTokens** (default: `2000`): Truncates the 'Previous Observations' section to a token budget, keeping the most recent observations and automatically replacing already-reflected lines with the buffered reflection summary. Set to `0` to omit previous observations entirely, or `false` to disable truncation and keep the full observation history.
+
+  ```typescript
+  const memory = new Memory({
+    options: {
+      observationalMemory: {
+        model: 'google/gemini-2.5-flash',
+        observation: {
+          previousObserverTokens: 10_000,
+        },
+      },
+    },
+  });
+  ```
+
+### Patch Changes
+
+- Updated dependencies [[`ea86967`](https://github.com/mastra-ai/mastra/commit/ea86967449426e0a3673253bd1c2c052a99d970d), [`db21c21`](https://github.com/mastra-ai/mastra/commit/db21c21a6ae5f33539262cc535342fa8757eb359), [`a1d6b9c`](https://github.com/mastra-ai/mastra/commit/a1d6b9c907c909f259632a7ea26e9e3c221fb691), [`11f5dbe`](https://github.com/mastra-ai/mastra/commit/11f5dbe9a1e7ad8ef3b1ea34fb4a9fa3631d1587), [`c562ec2`](https://github.com/mastra-ai/mastra/commit/c562ec228f1af63693e2984ffa9712aa6db8fea8), [`6751354`](https://github.com/mastra-ai/mastra/commit/67513544d1a64be891d9de7624d40aadc895d56e), [`c958cd3`](https://github.com/mastra-ai/mastra/commit/c958cd36627c1eea122ec241b2b15492977a263a), [`86f2426`](https://github.com/mastra-ai/mastra/commit/86f242631d252a172d2f9f9a2ea0feb8647a76b0), [`950eb07`](https://github.com/mastra-ai/mastra/commit/950eb07b7e7354629630e218d49550fdd299c452)]:
+  - @mastra/core@1.13.0
+  - @mastra/schema-compat@1.2.2
+
 ## 1.8.0-alpha.0
 
 ### Minor Changes
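The three `previousObserverTokens` modes the changelog describes (numeric budget, `0`, and `false`) can be sketched in isolation. The snippet below is not Mastra's implementation: `truncateObservations`, `estimateTokens`, and the roughly-4-characters-per-token heuristic are illustrative assumptions, and it models only the keep-most-recent behavior, not the reflection-summary replacement.

```typescript
// Hypothetical sketch of the previousObserverTokens modes described above.
// Names and the token heuristic are assumptions, not the @mastra/memory code.

type TokenBudget = number | false;

// Rough token estimate: ~4 characters per token (assumption).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the most recent observation lines that fit within the budget.
// budget === false disables truncation; budget === 0 omits everything.
function truncateObservations(lines: string[], budget: TokenBudget): string[] {
  if (budget === false) return lines;
  if (budget === 0) return [];
  const kept: string[] = [];
  let used = 0;
  // Walk from newest to oldest so the most recent observations survive.
  for (let i = lines.length - 1; i >= 0; i--) {
    const cost = estimateTokens(lines[i]);
    if (used + cost > budget) break;
    kept.unshift(lines[i]);
    used += cost;
  }
  return kept;
}

const log = [
  "old observation about setup",
  "user prefers dark mode",
  "user lives in Berlin",
];
console.log(truncateObservations(log, 12));
// → ["user prefers dark mode", "user lives in Berlin"]
console.log(truncateObservations(log, false).length); // → 3 (no truncation)
console.log(truncateObservations(log, 0).length); // → 0 (omitted entirely)
```

Walking newest-to-oldest is what makes a small budget drop the oldest lines first, matching the "keeping the most recent observations" behavior the changelog entry describes.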
package/dist/docs/SKILL.md
CHANGED
@@ -5,9 +5,9 @@ Memory enables your agent to remember user messages, agent replies, and tool res
 Mastra supports four complementary memory types:
 
 - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
+- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
 - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
-- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 
 If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.
 
@@ -16,9 +16,9 @@ If the combined memory exceeds the model's context limit, [memory processors](ht
 Choose a memory option to get started:
 
 - [Message history](https://mastra.ai/docs/memory/message-history)
+- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 - [Working memory](https://mastra.ai/docs/memory/working-memory)
 - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
-- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 
 ## Storage
 
@@ -41,5 +41,5 @@ This visibility helps you understand why an agent made specific decisions and ve
 ## Next steps
 
 - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
-- Add [Message history](https://mastra.ai/docs/memory/message-history), [
+- Add [Message history](https://mastra.ai/docs/memory/message-history), [Observational memory](https://mastra.ai/docs/memory/observational-memory), [Working memory](https://mastra.ai/docs/memory/working-memory), or [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
 - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
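The four memory types listed in SKILL.md are configured through the `Memory` options object. A hedged sketch of a combined configuration follows: the `lastMessages`, `workingMemory`, and `semanticRecall` key names are assumptions drawn from the linked docs pages rather than verified against the published API, while the `observationalMemory` shape mirrors this package's changelog example.

```typescript
// Hypothetical combined Memory options covering the four memory types.
// lastMessages / workingMemory / semanticRecall key names are assumptions
// from the linked docs; observationalMemory follows the changelog example.
const memoryOptions = {
  lastMessages: 20,                 // message history: recent-message window
  workingMemory: { enabled: true }, // persistent structured user data
  semanticRecall: { topK: 3 },      // requires a vector DB + embedding model
  observationalMemory: {
    model: 'google/gemini-2.5-flash',
    observation: { previousObserverTokens: 2000 }, // default budget per changelog
  },
};

console.log(Object.keys(memoryOptions).length); // → 4 memory types configured
```

Consult the [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for the authoritative option names before using a shape like this.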
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@mastra/memory",
-  "version": "1.8.0-alpha.0",
+  "version": "1.8.0",
   "description": "",
   "type": "module",
   "main": "./dist/index.js",
@@ -43,7 +43,7 @@
     "probe-image-size": "^7.2.3",
     "tokenx": "^1.3.0",
     "xxhash-wasm": "^1.1.0",
-    "@mastra/schema-compat": "1.2.2
+    "@mastra/schema-compat": "1.2.2"
   },
   "devDependencies": {
     "@ai-sdk/openai": "^1.3.24",
@@ -59,12 +59,12 @@
     "typescript-eslint": "^8.51.0",
     "vitest": "4.0.18",
     "zod": "^4.3.6",
-    "@internal/ai-sdk-v4": "0.0.
-    "@internal/ai-sdk-v5": "0.0.
-    "@internal/ai-v6": "0.0.
-    "@internal/lint": "0.0.
-    "@internal/types-builder": "0.0.
-    "@mastra/core": "1.13.0
+    "@internal/ai-sdk-v4": "0.0.16",
+    "@internal/ai-sdk-v5": "0.0.16",
+    "@internal/ai-v6": "0.0.16",
+    "@internal/lint": "0.0.69",
+    "@internal/types-builder": "0.0.44",
+    "@mastra/core": "1.13.0"
   },
   "peerDependencies": {
     "@mastra/core": ">=1.4.1-0 <2.0.0-0",