@mastra/libsql 1.7.2 → 1.7.3-alpha.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +101 -0
- package/dist/docs/SKILL.md +1 -1
- package/dist/docs/assets/SOURCE_MAP.json +1 -1
- package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +3 -3
- package/dist/index.cjs +14 -2
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +14 -2
- package/dist/index.js.map +1 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/package.json +2 -2
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,106 @@

# @mastra/libsql

## 1.7.3-alpha.1

### Patch Changes

- The internal architecture of observational memory has been refactored. The public API and behavior remain unchanged. ([#14453](https://github.com/mastra-ai/mastra/pull/14453))

- Updated dependencies [[`dc514a8`](https://github.com/mastra-ai/mastra/commit/dc514a83dba5f719172dddfd2c7b858e4943d067), [`404fea1`](https://github.com/mastra-ai/mastra/commit/404fea13042181f0b0c73a101392ac87c79ceae2), [`ebf5047`](https://github.com/mastra-ai/mastra/commit/ebf5047e825c38a1a356f10b214c1d4260dfcd8d), [`675f15b`](https://github.com/mastra-ai/mastra/commit/675f15b7eaeea649158d228ea635be40480c584d), [`b174c63`](https://github.com/mastra-ai/mastra/commit/b174c63a093108d4e53b9bc89a078d9f66202b3f), [`eef7cb2`](https://github.com/mastra-ai/mastra/commit/eef7cb2abe7ef15951e2fdf792a5095c6c643333), [`e8a5b0b`](https://github.com/mastra-ai/mastra/commit/e8a5b0b9bc94d12dee4150095512ca27a288d778)]:
  - @mastra/core@1.18.0-alpha.0

## 1.7.3-alpha.0

### Patch Changes

- **Refactored Observational Memory into modular architecture** ([#14453](https://github.com/mastra-ai/mastra/pull/14453))

Restructured the Observational Memory (OM) engine from a single ~3,800-line monolithic class into a modular, strategy-based architecture. The public API and behavior are unchanged — this is a purely internal refactor that improves maintainability, testability, and separation of concerns.

**Why** — The original `ObservationalMemory` class handled everything: orchestration, LLM calling, observation logic for three different scopes, reflection, buffering coordination, turn lifecycle, and message processing. This made it difficult to reason about individual behaviors, test them in isolation, or extend the system. The refactor separates these responsibilities into focused modules.

**Observation strategies** — Extracted three duplicated observation code paths (~650 lines of conditionals) into pluggable strategy classes sharing a common `prepare → process → persist` lifecycle via an abstract base class. The correct strategy is selected automatically based on scope and buffering configuration.

```
observation-strategies/
  base.ts            — abstract ObservationStrategy + StrategyDeps interface
  sync.ts            — SyncObservationStrategy (thread-scoped synchronous)
  async-buffer.ts    — AsyncBufferObservationStrategy (background buffered)
  resource-scoped.ts — ResourceScopedObservationStrategy (multi-thread)
  index.ts           — static factory: ObservationStrategy.create(om, opts)
```

```ts
// Internal usage — strategies are selected and run automatically:
const strategy = ObservationStrategy.create(om, {
  record,
  threadId,
  resourceId,
  messages,
  cycleId,
  startedAt,
});
const result = await strategy.run();
```
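The shared lifecycle described above can be sketched as a template method: the base class drives `prepare → process → persist`, and each concrete strategy fills in the hooks. Only `ObservationStrategy`, `StrategyDeps`, and `run()` are named in the changelog; every class, method signature, and type below is an illustrative assumption, not the package's actual implementation.

```ts
// Illustrative sketch of the prepare → process → persist template method.
// All names here are hypothetical stand-ins for the real strategy classes.
interface StrategyResult {
  observations: string[];
  persisted: boolean;
}

abstract class StrategySketch {
  // Each concrete strategy supplies the three lifecycle hooks.
  protected abstract prepare(): Promise<string[]>;               // load/filter messages
  protected abstract process(msgs: string[]): Promise<string[]>; // produce observations
  protected abstract persist(obs: string[]): Promise<void>;      // write to storage

  // The shared driver: every strategy runs the same lifecycle.
  async run(): Promise<StrategyResult> {
    const msgs = await this.prepare();
    const observations = await this.process(msgs);
    await this.persist(observations);
    return { observations, persisted: true };
  }
}

class SyncSketch extends StrategySketch {
  constructor(private messages: string[]) {
    super();
  }
  protected async prepare(): Promise<string[]> {
    return this.messages.filter((m) => m.length > 0);
  }
  protected async process(msgs: string[]): Promise<string[]> {
    return msgs.map((m) => `observed: ${m}`);
  }
  protected async persist(_obs: string[]): Promise<void> {
    // the real engine writes to the storage adapter here
  }
}
```

The factory described in `index.ts` would then pick the concrete subclass based on scope and buffering configuration, so callers only ever see the base-class `run()`.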
**Turn/Step abstraction** — Introduced `ObservationTurn` and `StepContext` to model the lifecycle of a single agent interaction. A Turn manages message loading, system message injection, record caching, and cleanup. A Step handles per-generation observation, activation, and reflection decisions. This replaced ~580 lines of inline orchestration in the processor with ~170 lines of structured calls.

```ts
// Internal lifecycle managed by the processor:
const turn = new ObservationTurn(om, memory, { threadId, resourceId });
await turn.start(messageList, writer); // loads history, injects OM system message

const step = turn.step(0);
await step.prepare(messageList, writer); // activate buffered, maybe reflect
// ... agent generates response ...
await step.complete(messageList, writer); // observe new messages, buffer if needed

await turn.end(messageList, writer); // persist, cleanup
```

**Dedicated runners** — Moved observer and reflector LLM-calling logic into `ObserverRunner` (194 lines) and `ReflectorRunner` (710 lines), separating prompt construction, degenerate output detection, retry logic, and compression level escalation from orchestration. `BufferingCoordinator` (175 lines) extracts the static buffering state machine and async operation tracking.
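The retry-with-escalation behavior attributed to the reflector can be illustrated with a minimal loop: if the model's output looks degenerate, retry at a heavier compression level. Everything below, including the function name and the trivial degenerate-output check, is a hypothetical sketch rather than the package's actual code.

```ts
// Hypothetical sketch: escalate the compression level on degenerate output.
type ReflectFn = (level: number) => Promise<string>;

async function reflectWithEscalation(
  reflect: ReflectFn,
  startLevel: number,
  maxLevel: number,
): Promise<{ output: string; level: number }> {
  for (let level = startLevel; level <= maxLevel; level++) {
    const output = await reflect(level);
    // Stand-in degenerate-output check: treat blank output as a failure
    // and retry at the next (heavier) compression level.
    if (output.trim().length > 0) return { output, level };
  }
  throw new Error(`reflection failed up to compression level ${maxLevel}`);
}
```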
**Processor** — Added `ObservationalMemoryProcessor` implementing the `Processor` interface, bridging the OM engine with the AI SDK message pipeline. It owns the decision of _when_ to buffer, activate, observe, and reflect — while the OM engine owns _how_ to do each operation.

```ts
// The processor is created automatically by Memory when OM is enabled.
// It plugs into the AI SDK message pipeline:
const memory = new Memory({
  storage: new InMemoryStore(),
  options: {
    observationalMemory: {
      enabled: true,
      observation: { model, messageTokens: 500 },
      reflection: { model, observationTokens: 10_000 },
    },
  },
});

// For direct access to the OM engine (e.g. for manual observe/buffer/activate):
const om = await memory.omEngine;
```

**Unified OM engine instantiation** — Replaced the duplicated `getOMEngine()` singleton and per-call `createOMProcessor()` engine creation with a single lazy `omEngine` property on the `Memory` class. This eliminates config drift between the legacy `getContext()` API and the processor pipeline — both now share the same `ObservationalMemory` instance with the full configuration.

```ts
// Before (casting required, config could drift):
const om = (await (memory as any).getOMEngine()) as ObservationalMemory;

// After (typed, single shared engine):
const om = await memory.omEngine;
```

**Improved observation activation atomicity** — Added conditional WHERE clauses to `activateBufferedObservations` in all storage adapters (pg, libsql, mongodb) to prevent duplicate chunk swaps when concurrent processes attempt activation simultaneously. If chunks have already been cleared by another process, the operation returns early with zero counts instead of corrupting state.
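The guarded swap can be simulated in memory: the activation only proceeds while buffered chunks are still present, so a concurrent caller that finds them already cleared gets zero counts. The row shape and helper below are assumptions that mirror the described WHERE-clause guard, not the adapters' real code.

```ts
// In-memory sketch of the conditional activation guard.
interface OmRow {
  bufferedChunks: string[];
  activeObservations: string[];
}

function activateBuffered(row: OmRow): { chunksActivated: number } {
  // Mirrors the SQL guard: only update while buffered chunks still exist.
  if (row.bufferedChunks.length === 0) {
    return { chunksActivated: 0 }; // another process already swapped them
  }
  const chunks = row.bufferedChunks;
  row.activeObservations.push(...chunks);
  row.bufferedChunks = [];
  return { chunksActivated: chunks.length };
}
```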
**Compression start level from model context** — Integrated model-aware compression start levels into the `ReflectorRunner`. Models like `gemini-2.5-flash` that struggle with light compression now start at compression level 2 instead of 1, reducing wasted reflection retries.

**Pure function extraction** — Moved reusable helpers into `message-utils.ts`: `filterObservedMessages`, `getBufferedChunks`, `sortThreadsByOldestMessage`, `stripThreadTags`. Eliminated dead code including `isObserving` DB flag, `countMessageTokens`, `acquireObservingLock`/`releaseObservingLock`, and ~10 cascading dead private methods.
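As an example of the pure-function style, here is a plausible shape for one extracted helper. The name `sortThreadsByOldestMessage` comes from the changelog; its signature, the `ThreadLike` type, and the timestamp field are illustrative assumptions.

```ts
// Sketch of a pure helper: returns a new sorted array, oldest thread first,
// without mutating its input. The field names are assumptions.
interface ThreadLike {
  id: string;
  oldestMessageAt: number; // epoch millis of the thread's oldest message
}

function sortThreadsByOldestMessage<T extends ThreadLike>(threads: T[]): T[] {
  return [...threads].sort((a, b) => a.oldestMessageAt - b.oldestMessageAt);
}
```

Keeping helpers like this free of engine state is what makes them testable in isolation, which is the stated goal of the extraction.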
**Cleanup** — Dropped `threadIdCache` (pointless memoization), removed `as any` casts for private method access (made methods properly public with `@internal` tsdoc), replaced sealed-ID-based tracking with message-level `metadata.mastra.sealed` flag checks.

- Updated dependencies [[`7302e5c`](https://github.com/mastra-ai/mastra/commit/7302e5ce0f52d769d3d63fb0faa8a7d4089cda6d)]:
  - @mastra/core@1.16.1-alpha.1

## 1.7.2

### Patch Changes
package/dist/docs/SKILL.md
CHANGED
```diff
@@ -56,7 +56,7 @@ const loggingProcessor: Processor<'logger'> = {
   },
 }
 
-const model = withMastra(openai('gpt-
+const model = withMastra(openai('gpt-5.4'), {
   inputProcessors: [loggingProcessor],
   outputProcessors: [loggingProcessor],
 })
@@ -85,7 +85,7 @@ await storage.init()
 
 const memoryStorage = await storage.getStore('memory')
 
-const model = withMastra(openai('gpt-
+const model = withMastra(openai('gpt-5.4'), {
   memory: {
     storage: memoryStorage!,
     threadId: 'user-thread-123',
@@ -115,7 +115,7 @@ await storage.init()
 
 const memoryStorage = await storage.getStore('memory')
 
-const model = withMastra(openai('gpt-
+const model = withMastra(openai('gpt-5.4'), {
   inputProcessors: [myGuardProcessor],
   outputProcessors: [myLoggingProcessor],
   memory: {
```
package/dist/index.cjs
CHANGED
```diff
@@ -7156,7 +7156,7 @@ var MemoryLibSQL = class extends storage.MemoryStorage {
     const newTokenCount = existingTokenCount + activatedTokens;
     const existingPending = Number(row.pendingMessageTokens || 0);
     const newPending = Math.max(0, existingPending - activatedMessageTokens);
-    await this.#client.execute({
+    const updateResult = await this.#client.execute({
       sql: `UPDATE "${OM_TABLE}" SET
         "activeObservations" = ?,
         "observationTokenCount" = ?,
@@ -7164,7 +7164,9 @@ var MemoryLibSQL = class extends storage.MemoryStorage {
         "bufferedObservationChunks" = ?,
         "lastObservedAt" = ?,
         "updatedAt" = ?
-        WHERE id =
+        WHERE id = ?
+          AND "bufferedObservationChunks" IS NOT NULL
+          AND "bufferedObservationChunks" != '[]'`,
       args: [
         newActive,
         newTokenCount,
@@ -7175,6 +7177,16 @@ var MemoryLibSQL = class extends storage.MemoryStorage {
         input.id
       ]
     });
+    if (updateResult.rowsAffected === 0) {
+      return {
+        chunksActivated: 0,
+        messageTokensActivated: 0,
+        observationTokensActivated: 0,
+        messagesActivated: 0,
+        activatedCycleIds: [],
+        activatedMessageIds: []
+      };
+    }
     const latestChunkHints = activatedChunks[activatedChunks.length - 1];
     return {
       chunksActivated: activatedChunks.length,
```