@mastra/pg 1.2.0-alpha.0 → 1.3.0-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +180 -0
- package/dist/docs/SKILL.md +28 -25
- package/dist/docs/{SOURCE_MAP.json → assets/SOURCE_MAP.json} +1 -1
- package/dist/docs/{memory/03-semantic-recall.md → references/docs-memory-semantic-recall.md} +33 -17
- package/dist/docs/{memory/01-storage.md → references/docs-memory-storage.md} +29 -39
- package/dist/docs/{memory/02-working-memory.md → references/docs-memory-working-memory.md} +16 -27
- package/dist/docs/{rag/01-overview.md → references/docs-rag-overview.md} +2 -4
- package/dist/docs/{rag/03-retrieval.md → references/docs-rag-retrieval.md} +26 -53
- package/dist/docs/{rag/02-vector-databases.md → references/docs-rag-vector-databases.md} +198 -202
- package/dist/docs/{memory/04-reference.md → references/reference-memory-memory-class.md} +28 -14
- package/dist/docs/references/reference-processors-message-history-processor.md +85 -0
- package/dist/docs/references/reference-processors-semantic-recall-processor.md +123 -0
- package/dist/docs/references/reference-processors-working-memory-processor.md +154 -0
- package/dist/docs/{rag/04-reference.md → references/reference-rag-metadata-filters.md} +26 -179
- package/dist/docs/references/reference-storage-composite.md +235 -0
- package/dist/docs/references/reference-storage-dynamodb.md +282 -0
- package/dist/docs/references/reference-storage-postgresql.md +529 -0
- package/dist/docs/{tools/01-reference.md → references/reference-tools-vector-query-tool.md} +137 -118
- package/dist/docs/{vectors/01-reference.md → references/reference-vectors-pg.md} +115 -14
- package/dist/index.cjs +1998 -217
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +1998 -219
- package/dist/index.js.map +1 -1
- package/dist/storage/db/constraint-utils.d.ts +16 -0
- package/dist/storage/db/constraint-utils.d.ts.map +1 -0
- package/dist/storage/db/index.d.ts.map +1 -1
- package/dist/storage/domains/agents/index.d.ts +9 -12
- package/dist/storage/domains/agents/index.d.ts.map +1 -1
- package/dist/storage/domains/memory/index.d.ts +7 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/dist/storage/domains/prompt-blocks/index.d.ts +33 -0
- package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -0
- package/dist/storage/domains/scorer-definitions/index.d.ts +33 -0
- package/dist/storage/domains/scorer-definitions/index.d.ts.map +1 -0
- package/dist/storage/index.d.ts +3 -1
- package/dist/storage/index.d.ts.map +1 -1
- package/package.json +5 -6
- package/dist/docs/README.md +0 -36
- package/dist/docs/processors/01-reference.md +0 -296
- package/dist/docs/storage/01-reference.md +0 -905
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,185 @@
 # @mastra/pg
 
+## 1.3.0-alpha.0
+
+### Minor Changes
+
+- **Updated storage adapters for generic storage domain API** ([#12846](https://github.com/mastra-ai/mastra/pull/12846))
+
+  All storage adapters now implement the unified `VersionedStorageDomain` method names. Entity-specific methods (`createAgent`, `getAgentById`, `deleteAgent`, etc.) have been replaced with generic equivalents (`create`, `getById`, `delete`, etc.) across agents, prompt blocks, and scorer definitions domains.
+
+  Added `scorer-definitions` domain support to all adapters.
+
+  **Before:**
+
+  ```ts
+  const store = storage.getStore('agents');
+  await store.createAgent({ agent: input });
+  await store.getAgentById({ id: 'abc' });
+  await store.deleteAgent({ id: 'abc' });
+  ```
+
+  **After:**
+
+  ```ts
+  const store = storage.getStore('agents');
+  await store.create({ agent: input });
+  await store.getById('abc');
+  await store.delete('abc');
+  ```
+
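The before/after migration in this entry can be exercised against a minimal in-memory stand-in. Only the method names (`create`, `getById`, `delete`) come from the changelog; the class name and the agent row shape below are illustrative assumptions, not Mastra's actual types:

```typescript
// Minimal in-memory stand-in for the generic domain API described above.
// Illustrative only: the real adapters persist to a database.
type AgentRow = { id: string; name: string };

class InMemoryAgentsDomain {
  private rows = new Map<string, AgentRow>();

  async create({ agent }: { agent: AgentRow }): Promise<AgentRow> {
    this.rows.set(agent.id, agent);
    return agent;
  }

  async getById(id: string): Promise<AgentRow | null> {
    return this.rows.get(id) ?? null;
  }

  async delete(id: string): Promise<void> {
    this.rows.delete(id);
  }
}
```

The generic names mean call sites no longer encode the entity type, which is what lets one `VersionedStorageDomain` shape cover agents, prompt blocks, and scorer definitions.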
+### Patch Changes
+
+- Fixed issues with stored agents ([#12790](https://github.com/mastra-ai/mastra/pull/12790))
+
+- Fixed cross-schema constraint checks in multi-schema PostgreSQL setups so tables and indexes are created in the intended schema. Single-schema (default) setups are unaffected. ([#12868](https://github.com/mastra-ai/mastra/pull/12868))
+
+- **Async buffering for observational memory is now enabled by default.** Observations are pre-computed in the background as conversations grow — when the context window fills up, buffered observations activate instantly with no blocking LLM call. This keeps agents responsive during long conversations. ([#12891](https://github.com/mastra-ai/mastra/pull/12891))
+
+  **Default settings:**
+
+  - `observation.bufferTokens: 0.2` — buffer every 20% of `messageTokens` (~6k tokens with the default 30k threshold)
+  - `observation.bufferActivation: 0.8` — on activation, retain 20% of the message window
+  - `reflection.bufferActivation: 0.5` — start background reflection at 50% of the observation threshold
+
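A quick sanity check of how the default fractions above combine. The numbers come from the entry; the variable names are ours, and reading `bufferActivation: 0.8` as "80% of the window is cleared, 20% retained" is our interpretation of the entry's wording:

```typescript
// Defaults stated above: 30k-token observation threshold,
// bufferTokens 0.2, bufferActivation 0.8.
const messageTokens = 30_000;
const bufferTokens = 0.2;
const bufferActivation = 0.8;

// Observations are buffered every 20% of the threshold: ~6k tokens.
const bufferInterval = Math.round(messageTokens * bufferTokens);

// On activation, 20% of the message window is retained: ~6k tokens
// (assumes 0.8 is the fraction cleared).
const retainedTokens = Math.round(messageTokens * (1 - bufferActivation));
```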
+  **Disabling async buffering:**
+
+  Set `observation.bufferTokens: false` to disable async buffering for both observations and reflections:
+
+  ```ts
+  const memory = new Memory({
+    options: {
+      observationalMemory: {
+        model: 'google/gemini-2.5-flash',
+        observation: {
+          bufferTokens: false,
+        },
+      },
+    },
+  });
+  ```
+
+  **Model is now required** when passing an observational memory config object. Use `observationalMemory: true` for the default (google/gemini-2.5-flash), or set a model explicitly:
+
+  ```ts
+  // Uses default model (google/gemini-2.5-flash)
+  observationalMemory: true
+
+  // Explicit model
+  observationalMemory: {
+    model: "google/gemini-2.5-flash",
+  }
+  ```
+
+  **`shareTokenBudget` requires `bufferTokens: false`** (temporary limitation). If you use `shareTokenBudget: true`, you must explicitly disable async buffering:
+
+  ```ts
+  observationalMemory: {
+    model: "google/gemini-2.5-flash",
+    shareTokenBudget: true,
+    observation: { bufferTokens: false },
+  }
+  ```
+
+  **New streaming event:** `data-om-status` replaces `data-om-progress` with a structured status object containing active window usage, buffered observation/reflection state, and projected activation impact.
+
+  **Buffering markers:** New `data-om-buffering-start`, `data-om-buffering-end`, and `data-om-buffering-failed` streaming events for UI feedback during background operations.
+
+- Fixed PostgreSQL constraint names exceeding the 63-byte identifier limit. Schema-prefixed constraint names are now truncated to fit within PostgreSQL's identifier length limit, preventing "relation already exists" errors when restarting the dev server with schema names longer than 13 characters. Fixes #12679. ([#12687](https://github.com/mastra-ai/mastra/pull/12687))
+
+- Added prompt block storage implementations. Each store supports full CRUD for prompt blocks and their versions, including JSON serialization for rules and metadata. Also updated agent instructions serialization to support the new `AgentInstructionBlock` array format alongside plain strings. ([#12776](https://github.com/mastra-ai/mastra/pull/12776))
+
+- Updated dependencies [[`717ffab`](https://github.com/mastra-ai/mastra/commit/717ffab42cfd58ff723b5c19ada4939997773004), [`e4b6dab`](https://github.com/mastra-ai/mastra/commit/e4b6dab171c5960e340b3ea3ea6da8d64d2b8672), [`5719fa8`](https://github.com/mastra-ai/mastra/commit/5719fa8880e86e8affe698ec4b3807c7e0e0a06f), [`83cda45`](https://github.com/mastra-ai/mastra/commit/83cda4523e588558466892bff8f80f631a36945a), [`11804ad`](https://github.com/mastra-ai/mastra/commit/11804adf1d6be46ebe216be40a43b39bb8b397d7), [`aa95f95`](https://github.com/mastra-ai/mastra/commit/aa95f958b186ae5c9f4219c88e268f5565c277a2), [`f5501ae`](https://github.com/mastra-ai/mastra/commit/f5501aedb0a11106c7db7e480d6eaf3971b7bda8), [`44573af`](https://github.com/mastra-ai/mastra/commit/44573afad0a4bc86f627d6cbc0207961cdcb3bc3), [`00e3861`](https://github.com/mastra-ai/mastra/commit/00e3861863fbfee78faeb1ebbdc7c0223aae13ff), [`7bfbc52`](https://github.com/mastra-ai/mastra/commit/7bfbc52a8604feb0fff2c0a082c13c0c2a3df1a2), [`1445994`](https://github.com/mastra-ai/mastra/commit/1445994aee19c9334a6a101cf7bd80ca7ed4d186), [`61f44a2`](https://github.com/mastra-ai/mastra/commit/61f44a26861c89e364f367ff40825bdb7f19df55), [`37145d2`](https://github.com/mastra-ai/mastra/commit/37145d25f99dc31f1a9105576e5452609843ce32), [`fdad759`](https://github.com/mastra-ai/mastra/commit/fdad75939ff008b27625f5ec0ce9c6915d99d9ec), [`e4569c5`](https://github.com/mastra-ai/mastra/commit/e4569c589e00c4061a686c9eb85afe1b7050b0a8), [`7309a85`](https://github.com/mastra-ai/mastra/commit/7309a85427281a8be23f4fb80ca52e18eaffd596), [`99424f6`](https://github.com/mastra-ai/mastra/commit/99424f6862ffb679c4ec6765501486034754a4c2), [`44eb452`](https://github.com/mastra-ai/mastra/commit/44eb4529b10603c279688318bebf3048543a1d61), [`6c40593`](https://github.com/mastra-ai/mastra/commit/6c40593d6d2b1b68b0c45d1a3a4c6ac5ecac3937), [`8c1135d`](https://github.com/mastra-ai/mastra/commit/8c1135dfb91b057283eae7ee11f9ec28753cc64f), [`dd39e54`](https://github.com/mastra-ai/mastra/commit/dd39e54ea34532c995b33bee6e0e808bf41a7341), [`b6fad9a`](https://github.com/mastra-ai/mastra/commit/b6fad9a602182b1cc0df47cd8c55004fa829ad61), [`4129c07`](https://github.com/mastra-ai/mastra/commit/4129c073349b5a66643fd8136ebfe9d7097cf793), [`5b930ab`](https://github.com/mastra-ai/mastra/commit/5b930aba1834d9898e8460a49d15106f31ac7c8d), [`4be93d0`](https://github.com/mastra-ai/mastra/commit/4be93d09d68e20aaf0ea3f210749422719618b5f), [`047635c`](https://github.com/mastra-ai/mastra/commit/047635ccd7861d726c62d135560c0022a5490aec), [`8c90ff4`](https://github.com/mastra-ai/mastra/commit/8c90ff4d3414e7f2a2d216ea91274644f7b29133), [`ed232d1`](https://github.com/mastra-ai/mastra/commit/ed232d1583f403925dc5ae45f7bee948cf2a182b), [`3891795`](https://github.com/mastra-ai/mastra/commit/38917953518eb4154a984ee36e6ededdcfe80f72), [`4f955b2`](https://github.com/mastra-ai/mastra/commit/4f955b20c7f66ed282ee1fd8709696fa64c4f19d), [`55a4c90`](https://github.com/mastra-ai/mastra/commit/55a4c9044ac7454349b9f6aeba0bbab5ee65d10f)]:
+  - @mastra/core@1.3.0-alpha.1
+
+## 1.2.0
+
+### Minor Changes
+
+- Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations. ([#12599](https://github.com/mastra-ai/mastra/pull/12599))
+
+  **Why:** Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
+
+  **Usage:**
+
+  ```ts
+  import { Memory } from '@mastra/memory';
+  import { PostgresStore } from '@mastra/pg';
+
+  const memory = new Memory({
+    storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
+    options: {
+      observationalMemory: true,
+    },
+  });
+
+  const agent = new Agent({
+    name: 'my-agent',
+    model: openai('gpt-4o'),
+    memory,
+  });
+  ```
+
+  **What's new:**
+
+  - `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
+  - Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
+  - Manual `observe()` API for triggering observation outside the normal agent loop
+  - New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
+  - `Agent.findProcessor()` method for looking up processors by ID
+  - `processorStates` for persisting processor state across loop iterations
+  - Abort signal propagation to processors
+  - `ProcessorStreamWriter` for custom stream events from processors
+
+### Patch Changes
+
+- Created @mastra/editor package for managing and resolving stored agent configurations ([#12631](https://github.com/mastra-ai/mastra/pull/12631))
+
+  This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
+
+  **Key Features:**
+
+  - **Agent Storage & Retrieval**: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
+  - **Version Management**: Create and manage multiple versions of agents, with support for activating specific versions
+  - **Dependency Resolution**: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
+  - **Caching**: Built-in caching for improved performance when repeatedly accessing stored agents
+  - **Type Safety**: Full TypeScript support with proper typing for stored configurations
+
+  **Usage Example:**
+
+  ```typescript
+  import { MastraEditor } from '@mastra/editor';
+  import { Mastra } from '@mastra/core';
+
+  // Initialize editor with Mastra
+  const mastra = new Mastra({
+    /* config */
+    editor: new MastraEditor(),
+  });
+
+  // Store an agent configuration
+  const agentId = await mastra.storage.stores?.agents?.createAgent({
+    name: 'customer-support',
+    instructions: 'Help customers with inquiries',
+    model: { provider: 'openai', name: 'gpt-4' },
+    tools: ['search-kb', 'create-ticket'],
+    workflows: ['escalation-flow'],
+    memory: { vector: 'pinecone-db' },
+  });
+
+  // Retrieve and use the stored agent
+  const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
+  const response = await agent?.generate('How do I reset my password?');
+
+  // List all stored agents
+  const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
+  ```
+
+  **Storage Improvements:**
+
+  - Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
+  - Improved agent resolution queries to properly merge version data
+  - Enhanced type safety for serialized configurations
+
+- Updated dependencies [[`e6fc281`](https://github.com/mastra-ai/mastra/commit/e6fc281896a3584e9e06465b356a44fe7faade65), [`97be6c8`](https://github.com/mastra-ai/mastra/commit/97be6c8963130fca8a664fcf99d7b3a38e463595), [`2770921`](https://github.com/mastra-ai/mastra/commit/2770921eec4d55a36b278d15c3a83f694e462ee5), [`b1695db`](https://github.com/mastra-ai/mastra/commit/b1695db2d7be0c329d499619c7881899649188d0), [`5fe1fe0`](https://github.com/mastra-ai/mastra/commit/5fe1fe0109faf2c87db34b725d8a4571a594f80e), [`4133d48`](https://github.com/mastra-ai/mastra/commit/4133d48eaa354cdb45920dc6265732ffbc96788d), [`5dd01cc`](https://github.com/mastra-ai/mastra/commit/5dd01cce68d61874aa3ecbd91ee17884cfd5aca2), [`13e0a2a`](https://github.com/mastra-ai/mastra/commit/13e0a2a2bcec01ff4d701274b3727d5e907a6a01), [`f6673b8`](https://github.com/mastra-ai/mastra/commit/f6673b893b65b7d273ad25ead42e990704cc1e17), [`cd6be8a`](https://github.com/mastra-ai/mastra/commit/cd6be8ad32741cd41cabf508355bb31b71e8a5bd), [`9eb4e8e`](https://github.com/mastra-ai/mastra/commit/9eb4e8e39efbdcfff7a40ff2ce07ce2714c65fa8), [`c987384`](https://github.com/mastra-ai/mastra/commit/c987384d6c8ca844a9701d7778f09f5a88da7f9f), [`cb8cc12`](https://github.com/mastra-ai/mastra/commit/cb8cc12bfadd526aa95a01125076f1da44e4afa7), [`aa37c84`](https://github.com/mastra-ai/mastra/commit/aa37c84d29b7db68c72517337932ef486c316275), [`62f5d50`](https://github.com/mastra-ai/mastra/commit/62f5d5043debbba497dacb7ab008fe86b38b8de3), [`47eba72`](https://github.com/mastra-ai/mastra/commit/47eba72f0397d0d14fbe324b97940c3d55e5a525)]:
+  - @mastra/core@1.2.0
+
 ## 1.2.0-alpha.0
 
 ### Minor Changes
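The 63-byte identifier fix noted in the changelog above can be sketched roughly as follows. This is a hypothetical helper (the shipped logic lives in `storage/db/constraint-utils` and may differ): PostgreSQL silently truncates identifiers longer than 63 bytes, so two long schema-prefixed constraint names can collide after truncation, and appending a short hash of the full name keeps them unique while staying within the limit.

```typescript
import { createHash } from "node:crypto";

// PostgreSQL's NAMEDATALEN gives a 63-byte identifier limit.
const PG_MAX_IDENTIFIER_BYTES = 63;

// Illustrative helper: truncate a schema-prefixed constraint name and
// append an 8-character hash so distinct long names stay distinct.
// Assumes ASCII identifiers (slice works on characters, not bytes).
function safeConstraintName(schema: string, constraint: string): string {
  const full = `${schema}_${constraint}`;
  if (Buffer.byteLength(full, "utf8") <= PG_MAX_IDENTIFIER_BYTES) return full;
  const hash = createHash("sha256").update(full).digest("hex").slice(0, 8);
  // Reserve 9 bytes for "_" plus the 8-character hash suffix.
  return `${full.slice(0, PG_MAX_IDENTIFIER_BYTES - 9)}_${hash}`;
}
```

Without a scheme like this, restarting against a long schema name recreates a constraint whose truncated name already exists, which matches the "relation already exists" symptom the fix describes.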
package/dist/docs/SKILL.md
CHANGED
@@ -1,37 +1,40 @@
 ---
-name: mastra-pg
-description: Documentation for @mastra/pg.
+name: mastra-pg
+description: Documentation for @mastra/pg. Use when working with @mastra/pg APIs, configuration, or implementation.
+metadata:
+  package: "@mastra/pg"
+  version: "1.3.0-alpha.0"
 ---
 
-
+## When to use
 
-
-> **Package**: @mastra/pg
+Use this skill whenever you are working with @mastra/pg to obtain the domain-specific knowledge.
 
-##
+## How to use
 
-
+Read the individual reference documents for detailed explanations and code examples.
 
-
-cat docs/SOURCE_MAP.json
-```
+### Docs
 
-
--
--
--
+- [Semantic Recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+- [Storage](references/docs-memory-storage.md) - Configure storage for Mastra's memory system to persist conversations, workflows, and traces.
+- [Working Memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data, preferences.
+- [RAG (Retrieval-Augmented Generation) in Mastra](references/docs-rag-overview.md) - Overview of Retrieval-Augmented Generation (RAG) in Mastra, detailing its capabilities for enhancing LLM outputs with relevant context.
+- [Retrieval, Semantic Search, Reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
+- [Storing Embeddings in A Vector Database](references/docs-rag-vector-databases.md) - Guide on vector storage options in Mastra, including embedded and dedicated vector databases for similarity search.
 
-
+### Reference
 
+- [Reference: Memory Class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+- [Reference: Message History Processor](references/reference-processors-message-history-processor.md) - Documentation for the MessageHistory processor in Mastra, which handles retrieval and persistence of conversation history.
+- [Reference: Semantic Recall Processor](references/reference-processors-semantic-recall-processor.md) - Documentation for the SemanticRecall processor in Mastra, which enables semantic search over conversation history using vector embeddings.
+- [Reference: Working Memory Processor](references/reference-processors-working-memory-processor.md) - Documentation for the WorkingMemory processor in Mastra, which injects persistent user/context data as system instructions.
+- [Reference: Metadata Filters](references/reference-rag-metadata-filters.md) - Documentation for metadata filtering capabilities in Mastra, which allow for precise querying of vector search results across different vector stores.
+- [Reference: Composite Storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
+- [Reference: DynamoDB Storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
+- [Reference: PostgreSQL Storage](references/reference-storage-postgresql.md) - Documentation for the PostgreSQL storage implementation in Mastra.
+- [Reference: createVectorQueryTool()](references/reference-tools-vector-query-tool.md) - Documentation for the Vector Query Tool in Mastra, which facilitates semantic search over vector stores with filtering and reranking capabilities.
+- [Reference: PG Vector Store](references/reference-vectors-pg.md) - Documentation for the PgVector class in Mastra, which provides vector search using PostgreSQL with pgvector extension.
 
 
-
-
-## Available Topics
-
-- [Memory](memory/) - 4 file(s)
-- [Processors](processors/) - 3 file(s)
-- [Rag](rag/) - 4 file(s)
-- [Storage](storage/) - 3 file(s)
-- [Tools](tools/) - 1 file(s)
-- [Vectors](vectors/) - 1 file(s)
+Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
package/dist/docs/{memory/03-semantic-recall.md → references/docs-memory-semantic-recall.md}
RENAMED
@@ -1,20 +1,16 @@
-> Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
-
 # Semantic Recall
 
 If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.
 
-> **Watch
-
-What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
+> **Watch 📹:** What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
 
 ## How Semantic Recall Works
 
-Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](
+Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](https://mastra.ai/docs/memory/message-history).
 
 It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.
 
-
 
 When it's enabled, new messages are used to query a vector DB for semantically similar messages.
 
@@ -24,7 +20,7 @@ After getting a response from the LLM, all new messages (user, assistant, and to
 
 Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
-```typescript
+```typescript
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
 
@@ -64,7 +60,7 @@ const { messages: relevantMessages } = await memory!.recall({
 
 Semantic recall relies on a [storage and vector db](https://mastra.ai/reference/memory/memory-class) to store messages and their embeddings.
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
@@ -113,7 +109,7 @@ The three main parameters that control semantic recall behavior are:
 2. **messageRange**: How much surrounding context to include with each match
 3. **scope**: Whether to search within the current thread or across all threads owned by a resource (the default is resource scope).
 
-```typescript
+```typescript
 const agent = new Agent({
   memory: new Memory({
     options: {
@@ -135,7 +131,7 @@ Semantic recall relies on an [embedding model](https://mastra.ai/reference/memor
 
 The simplest way is to use a `provider/model` string with autocomplete support:
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -158,7 +154,7 @@ The model router automatically handles API key detection from environment variab
 
 You can also use AI SDK embedding models directly:
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -174,13 +170,33 @@ const agent = new Agent({
 
 To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
 
-
+**npm**:
+
+```bash
 npm install @mastra/fastembed@latest
 ```
 
+**pnpm**:
+
+```bash
+pnpm add @mastra/fastembed@latest
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/fastembed@latest
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/fastembed@latest
+```
+
 Then configure it in your memory:
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { fastembed } from "@mastra/fastembed";
@@ -198,7 +214,7 @@ When using PostgreSQL as your vector store, you can optimize semantic recall per
 
 PostgreSQL supports both IVFFlat and HNSW indexes. By default, Mastra creates an IVFFlat index, but HNSW indexes typically provide better performance, especially with OpenAI embeddings which use inner product distance.
 
-```typescript
+```typescript
 import { Memory } from "@mastra/memory";
 import { PgStore, PgVector } from "@mastra/pg";
 
@@ -228,7 +244,7 @@ const agent = new Agent({
 });
 ```
 
-For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg
+For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg).
 
 ## Disabling
 
@@ -236,7 +252,7 @@ There is a performance impact to using semantic recall. New messages are convert
 
 Semantic recall is enabled by default but can be disabled when not needed:
 
-```typescript
+```typescript
 const agent = new Agent({
   memory: new Memory({
     options: {
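For the HNSW recommendation in the semantic recall doc above, the underlying pgvector DDL can be sketched as a small SQL builder. The table and column names below are illustrative, not Mastra's actual schema, and Mastra normally creates indexes for you; `vector_ip_ops` is pgvector's operator class for inner-product distance, which the doc recommends for OpenAI embeddings:

```typescript
// Build the pgvector DDL for an HNSW index over an embedding column.
// Identifiers here are illustrative placeholders.
function hnswIndexSql(table: string, column: string): string {
  return (
    `CREATE INDEX IF NOT EXISTS ${table}_${column}_hnsw_idx ` +
    `ON ${table} USING hnsw (${column} vector_ip_ops)`
  );
}

const ddl = hnswIndexSql("memory_messages", "embedding");
```

Running a statement like this against an existing IVFFlat-indexed table would add the HNSW index alongside it; swapping the operator class (e.g. `vector_cosine_ops`) matches other distance metrics.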
package/dist/docs/{memory/01-storage.md → references/docs-memory-storage.md}
RENAMED
@@ -1,10 +1,8 @@
-> Configure storage for Mastra
-
 # Storage
 
-For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
+For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { LibSQLStore } from "@mastra/libsql";
 
@@ -16,18 +14,17 @@ export const mastra = new Mastra({
 });
 ```
 
-> **Sharing the database with Mastra Studio
-
-
-
-
-
-Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
+> **Sharing the database with Mastra Studio:** When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
+>
+> ```typescript
+> url: "file:/absolute/path/to/your/project/mastra.db"
+> ```
+>
+> Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
 
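The absolute-path tip above can be made concrete: a relative `file:` URL resolves against each process's working directory, so `mastra dev` and an application server started from different directories would open two different database files. A small illustration (the helper name and logic are ours, not a Mastra API):

```typescript
import path from "node:path";

// Resolve a libSQL-style file URL the way a given process would see it.
// Purely illustrative: shows why relative paths diverge between processes.
function resolveFileUrl(url: string, cwd: string): string {
  const filePath = url.replace(/^file:/, "");
  return `file:${path.isAbsolute(filePath) ? filePath : path.resolve(cwd, filePath)}`;
}
```

Two processes with different working directories resolve `file:./mastra.db` to two different absolute paths, while an absolute URL resolves identically everywhere.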
|
|
28
25
|
This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.
|
|
29
26
|
|
|
30
|
-
Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview
|
|
27
|
+
Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
|
|
31
28
|
|
|
32
29
|
## Supported providers
|
|
33
30
|
|
|
@@ -44,8 +41,7 @@ Each provider page includes installation instructions, configuration parameters,
|
|
|
44
41
|
- [LanceDB](https://mastra.ai/reference/storage/lance)
|
|
45
42
|
- [Microsoft SQL Server](https://mastra.ai/reference/storage/mssql)
|
|
46
43
|
|
|
47
|
-
> **
|
|
48
|
-
libSQL is the easiest way to get started because it doesn’t require running a separate database server.
|
|
44
|
+
> **Tip:** libSQL is the easiest way to get started because it doesn’t require running a separate database server.
|
|
49
45
|
|
|
50
46
|
## Configuration scope
|
|
51
47
|
|
|
@@ -55,7 +51,7 @@ Storage can be configured at the instance level (shared by all agents) or at the
|
|
|
55
51
|
|
|
56
52
|
Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
|
|
57
53
|
|
|
58
|
-
```typescript
|
|
54
|
+
```typescript
|
|
59
55
|
import { Mastra } from "@mastra/core";
|
|
60
56
|
import { PostgresStore } from "@mastra/pg";
|
|
61
57
|
|
|
@@ -75,9 +71,9 @@ This is useful when all primitives share the same storage backend and have simil
 
 #### Composite storage
 
-[Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite
+[Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite) you need) to different storage providers.
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { MastraCompositeStore } from "@mastra/core/storage";
 import { MemoryLibSQL } from "@mastra/libsql";
@@ -88,7 +84,6 @@ export const mastra = new Mastra({
   storage: new MastraCompositeStore({
     id: "composite",
     domains: {
-      // highlight-next-line
       memory: new MemoryLibSQL({ url: "file:./memory.db" }),
       workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
       observability: new ObservabilityStorageClickhouse({
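The composite hunk is cut off mid-object. A minimal complete configuration using only the two domains whose options appear in full might read as follows — a sketch, assuming `WorkflowsPG` is exported from `@mastra/pg` (its import line is not shown in the diff):

```typescript
import { Mastra } from "@mastra/core";
import { MastraCompositeStore } from "@mastra/core/storage";
import { MemoryLibSQL } from "@mastra/libsql";
// Assumed import path for WorkflowsPG; not shown in the diff.
import { WorkflowsPG } from "@mastra/pg";

// Each domain gets its own provider: memory on a local libSQL file,
// workflows on PostgreSQL.
export const mastra = new Mastra({
  storage: new MastraCompositeStore({
    id: "composite",
    domains: {
      memory: new MemoryLibSQL({ url: "file:./memory.db" }),
      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
    },
  }),
});
```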
@@ -107,7 +102,7 @@ This is useful when different types of data have different performance or operat
 
 Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
 
-```typescript
+```typescript
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
 import { PostgresStore } from "@mastra/pg";
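The agent-level hunk shows imports only; combined with the surrounding context lines (`export const agent = new Agent({ … })`), the example presumably continues roughly like this — a sketch with other required `Agent` options (model, instructions, etc.) omitted, and an illustrative `id` and connection string:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

// Agent-level storage overrides the instance-level provider for this agent only.
export const agent = new Agent({
  id: "agent", // illustrative
  memory: new Memory({
    storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
  }),
});
```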
@@ -123,19 +118,18 @@ export const agent = new Agent({
 });
 ```
 
-> **
-[Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment#using-mastra-cloud-store) doesn't support agent-level storage.
+> **Warning:** [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment) doesn't support agent-level storage.
 
 ## Threads and resources
 
-Mastra organizes conversations using two identifiers:
+Mastra organizes conversations using two identifiers:
 
 - **Thread** - a conversation session containing a sequence of messages.
 - **Resource** - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
 
 Both identifiers are required for agents to store information:
 
-
+**Generate**:
 
 ```typescript
 const response = await agent.generate("hello", {
@@ -146,8 +140,7 @@ const response = await agent.generate("hello", {
 });
 ```
 
-
-**stream:**
+**Stream**:
 
 ```typescript
 const stream = await agent.stream("hello", {
@@ -158,10 +151,7 @@ const stream = await agent.stream("hello", {
 });
 ```
 
-
-
-> **Note:**
-[Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
+> **Note:** [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
 
 ### Thread title generation
 
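The option objects in the `generate()` and `stream()` snippets are truncated in this diff. Based on Mastra's memory docs, the two identifiers are passed roughly as follows — a sketch whose `memory` option shape and ID values are assumptions that may differ by version:

```typescript
const response = await agent.generate("hello", {
  memory: {
    thread: "thread-123", // illustrative: the conversation session
    resource: "user-456", // illustrative: the owning user/org/project
  },
});
```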
@@ -169,7 +159,7 @@ Mastra can automatically generate descriptive thread titles based on the user's
 
 Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
 
-```typescript
+```typescript
 export const agent = new Agent({
   id: "agent",
   memory: new Memory({
@@ -182,9 +172,9 @@ export const agent = new Agent({
 
 Title generation runs asynchronously after the agent responds and does not affect response time.
 
-To optimize cost or behavior, provide a smaller [`model`](/models) and custom `instructions`:
+To optimize cost or behavior, provide a smaller [`model`](https://mastra.ai/models) and custom `instructions`:
 
-```typescript
+```typescript
 export const agent = new Agent({
   id: "agent",
   memory: new Memory({
@@ -206,17 +196,17 @@ Semantic recall has different storage requirements - it needs a vector database
 
 Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:
 
-| Provider
-|
-| [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
-| [Convex](https://mastra.ai/reference/storage/convex)
-| [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB
+| Provider                                                           | Record size limit |
+| ------------------------------------------------------------------ | ----------------- |
+| [DynamoDB](https://mastra.ai/reference/storage/dynamodb)           | 400 KB            |
+| [Convex](https://mastra.ai/reference/storage/convex)               | 1 MiB             |
+| [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB             |
 
 PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
 
 To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, [Convex file storage](https://docs.convex.dev/file-storage), etc.) and replace them with URL references before persistence.
 
-```typescript
+```typescript
 import type { Processor } from "@mastra/core/processors";
 import type { MastraDBMessage } from "@mastra/core/memory";
 
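The processor example is cut off at its imports in this diff. Independent of Mastra's actual `Processor` interface, the core transformation — detect base64 `data:` payloads in message parts and swap them for uploaded URLs — can be sketched as plain TypeScript. The part shape and `uploadFn` callback below are illustrative assumptions, not Mastra's real types:

```typescript
// Minimal stand-ins for a message-part shape; Mastra's real types differ.
type FilePart = { type: "file"; mimeType: string; data: string };
type TextPart = { type: "text"; text: string };
type Part = FilePart | TextPart;

// Replace inline base64 payloads with URL references returned by an uploader.
// `uploadFn` is a hypothetical callback (e.g. backed by S3, R2, GCS, or Convex).
async function offloadAttachments(
  parts: Part[],
  uploadFn: (mimeType: string, base64: string) => Promise<string>,
): Promise<Part[]> {
  return Promise.all(
    parts.map(async (part) => {
      if (part.type === "file" && part.data.startsWith("data:")) {
        // Strip the "data:<mime>;base64," prefix before uploading.
        const base64 = part.data.slice(part.data.indexOf(",") + 1);
        const url = await uploadFn(part.mimeType, base64);
        return { ...part, data: url }; // persist a short URL, not megabytes of base64
      }
      return part;
    }),
  );
}
```

Wired into a real input processor, this mapping would run over each incoming message's parts before the storage provider persists them, keeping every record well under the 400 KB/1 MiB limits listed above.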