@mastra/pg 1.2.0 → 1.3.0-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (40)
  1. package/CHANGELOG.md +100 -0
  2. package/dist/docs/SKILL.md +28 -25
  3. package/dist/docs/{SOURCE_MAP.json → assets/SOURCE_MAP.json} +1 -1
  4. package/dist/docs/{memory/03-semantic-recall.md → references/docs-memory-semantic-recall.md} +33 -17
  5. package/dist/docs/{memory/01-storage.md → references/docs-memory-storage.md} +29 -39
  6. package/dist/docs/{memory/02-working-memory.md → references/docs-memory-working-memory.md} +16 -27
  7. package/dist/docs/{rag/01-overview.md → references/docs-rag-overview.md} +2 -4
  8. package/dist/docs/{rag/03-retrieval.md → references/docs-rag-retrieval.md} +26 -53
  9. package/dist/docs/{rag/02-vector-databases.md → references/docs-rag-vector-databases.md} +198 -202
  10. package/dist/docs/{memory/04-reference.md → references/reference-memory-memory-class.md} +28 -14
  11. package/dist/docs/references/reference-processors-message-history-processor.md +85 -0
  12. package/dist/docs/references/reference-processors-semantic-recall-processor.md +123 -0
  13. package/dist/docs/references/reference-processors-working-memory-processor.md +154 -0
  14. package/dist/docs/{rag/04-reference.md → references/reference-rag-metadata-filters.md} +26 -179
  15. package/dist/docs/references/reference-storage-composite.md +235 -0
  16. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  17. package/dist/docs/references/reference-storage-postgresql.md +529 -0
  18. package/dist/docs/{tools/01-reference.md → references/reference-tools-vector-query-tool.md} +137 -118
  19. package/dist/docs/{vectors/01-reference.md → references/reference-vectors-pg.md} +115 -14
  20. package/dist/index.cjs +2002 -221
  21. package/dist/index.cjs.map +1 -1
  22. package/dist/index.js +2002 -223
  23. package/dist/index.js.map +1 -1
  24. package/dist/storage/db/constraint-utils.d.ts +16 -0
  25. package/dist/storage/db/constraint-utils.d.ts.map +1 -0
  26. package/dist/storage/db/index.d.ts.map +1 -1
  27. package/dist/storage/domains/agents/index.d.ts +9 -12
  28. package/dist/storage/domains/agents/index.d.ts.map +1 -1
  29. package/dist/storage/domains/memory/index.d.ts +8 -2
  30. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  31. package/dist/storage/domains/prompt-blocks/index.d.ts +33 -0
  32. package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -0
  33. package/dist/storage/domains/scorer-definitions/index.d.ts +33 -0
  34. package/dist/storage/domains/scorer-definitions/index.d.ts.map +1 -0
  35. package/dist/storage/index.d.ts +3 -1
  36. package/dist/storage/index.d.ts.map +1 -1
  37. package/package.json +3 -4
  38. package/dist/docs/README.md +0 -36
  39. package/dist/docs/processors/01-reference.md +0 -296
  40. package/dist/docs/storage/01-reference.md +0 -905
package/CHANGELOG.md CHANGED
@@ -1,5 +1,105 @@
  # @mastra/pg
 
+ ## 1.3.0-alpha.1
+
+ ### Patch Changes
+
+ - Fixed observational memory progress bars resetting to zero after agent responses finish. The messages and observations sidebar bars now retain their values on stream completion, cancellation, and page reload. Also added a buffer-status endpoint so buffering badges resolve with accurate token counts instead of spinning forever when buffering outlives the stream. ([#12934](https://github.com/mastra-ai/mastra/pull/12934))
+
+ - Updated dependencies [[`b31c922`](https://github.com/mastra-ai/mastra/commit/b31c922215b513791d98feaea1b98784aa00803a)]:
+   - @mastra/core@1.3.0-alpha.2
+
+ ## 1.3.0-alpha.0
+
+ ### Minor Changes
+
+ - **Updated storage adapters for generic storage domain API** ([#12846](https://github.com/mastra-ai/mastra/pull/12846))
+
+   All storage adapters now implement the unified `VersionedStorageDomain` method names. Entity-specific methods (`createAgent`, `getAgentById`, `deleteAgent`, etc.) have been replaced with generic equivalents (`create`, `getById`, `delete`, etc.) across agents, prompt blocks, and scorer definitions domains.
+
+   Added `scorer-definitions` domain support to all adapters.
+
+   **Before:**
+
+   ```ts
+   const store = storage.getStore('agents');
+   await store.createAgent({ agent: input });
+   await store.getAgentById({ id: 'abc' });
+   await store.deleteAgent({ id: 'abc' });
+   ```
+
+   **After:**
+
+   ```ts
+   const store = storage.getStore('agents');
+   await store.create({ agent: input });
+   await store.getById('abc');
+   await store.delete('abc');
+   ```
+
+ ### Patch Changes
+
+ - Fixed issues with stored agents ([#12790](https://github.com/mastra-ai/mastra/pull/12790))
+
+ - Fixed cross-schema constraint checks in multi-schema PostgreSQL setups so tables and indexes are created in the intended schema. Single-schema (default) setups are unaffected. ([#12868](https://github.com/mastra-ai/mastra/pull/12868))
+
+ - **Async buffering for observational memory is now enabled by default.** Observations are pre-computed in the background as conversations grow — when the context window fills up, buffered observations activate instantly with no blocking LLM call. This keeps agents responsive during long conversations. ([#12891](https://github.com/mastra-ai/mastra/pull/12891))
+
+   **Default settings:**
+   - `observation.bufferTokens: 0.2` — buffer every 20% of `messageTokens` (~6k tokens with the default 30k threshold)
+   - `observation.bufferActivation: 0.8` — on activation, retain 20% of the message window
+   - `reflection.bufferActivation: 0.5` — start background reflection at 50% of the observation threshold
+
+   **Disabling async buffering:**
+
+   Set `observation.bufferTokens: false` to disable async buffering for both observations and reflections:
+
+   ```ts
+   const memory = new Memory({
+     options: {
+       observationalMemory: {
+         model: 'google/gemini-2.5-flash',
+         observation: {
+           bufferTokens: false,
+         },
+       },
+     },
+   });
+   ```
+
+   **Model is now required** when passing an observational memory config object. Use `observationalMemory: true` for the default (google/gemini-2.5-flash), or set a model explicitly:
+
+   ```ts
+   // Uses default model (google/gemini-2.5-flash)
+   observationalMemory: true
+
+   // Explicit model
+   observationalMemory: {
+     model: "google/gemini-2.5-flash",
+   }
+   ```
+
+   **`shareTokenBudget` requires `bufferTokens: false`** (temporary limitation). If you use `shareTokenBudget: true`, you must explicitly disable async buffering:
+
+   ```ts
+   observationalMemory: {
+     model: "google/gemini-2.5-flash",
+     shareTokenBudget: true,
+     observation: { bufferTokens: false },
+   }
+   ```
+
+   **New streaming event:** `data-om-status` replaces `data-om-progress` with a structured status object containing active window usage, buffered observation/reflection state, and projected activation impact.
+
+   **Buffering markers:** New `data-om-buffering-start`, `data-om-buffering-end`, and `data-om-buffering-failed` streaming events for UI feedback during background operations.
+
+ - Fixed PostgreSQL constraint names exceeding 63-byte identifier limit. Schema-prefixed constraint names are now truncated to fit within PostgreSQL's identifier length limit, preventing "relation already exists" errors when restarting the dev server with schema names longer than 13 characters. Fixes #12679. ([#12687](https://github.com/mastra-ai/mastra/pull/12687))
+
+ - Added prompt block storage implementations. Each store supports full CRUD for prompt blocks and their versions, including JSON serialization for rules and metadata. Also updated agent instructions serialization to support the new `AgentInstructionBlock` array format alongside plain strings. ([#12776](https://github.com/mastra-ai/mastra/pull/12776))
+
+ - Updated dependencies [[`717ffab`](https://github.com/mastra-ai/mastra/commit/717ffab42cfd58ff723b5c19ada4939997773004), [`e4b6dab`](https://github.com/mastra-ai/mastra/commit/e4b6dab171c5960e340b3ea3ea6da8d64d2b8672), [`5719fa8`](https://github.com/mastra-ai/mastra/commit/5719fa8880e86e8affe698ec4b3807c7e0e0a06f), [`83cda45`](https://github.com/mastra-ai/mastra/commit/83cda4523e588558466892bff8f80f631a36945a), [`11804ad`](https://github.com/mastra-ai/mastra/commit/11804adf1d6be46ebe216be40a43b39bb8b397d7), [`aa95f95`](https://github.com/mastra-ai/mastra/commit/aa95f958b186ae5c9f4219c88e268f5565c277a2), [`f5501ae`](https://github.com/mastra-ai/mastra/commit/f5501aedb0a11106c7db7e480d6eaf3971b7bda8), [`44573af`](https://github.com/mastra-ai/mastra/commit/44573afad0a4bc86f627d6cbc0207961cdcb3bc3), [`00e3861`](https://github.com/mastra-ai/mastra/commit/00e3861863fbfee78faeb1ebbdc7c0223aae13ff), [`7bfbc52`](https://github.com/mastra-ai/mastra/commit/7bfbc52a8604feb0fff2c0a082c13c0c2a3df1a2), [`1445994`](https://github.com/mastra-ai/mastra/commit/1445994aee19c9334a6a101cf7bd80ca7ed4d186), [`61f44a2`](https://github.com/mastra-ai/mastra/commit/61f44a26861c89e364f367ff40825bdb7f19df55), [`37145d2`](https://github.com/mastra-ai/mastra/commit/37145d25f99dc31f1a9105576e5452609843ce32), [`fdad759`](https://github.com/mastra-ai/mastra/commit/fdad75939ff008b27625f5ec0ce9c6915d99d9ec), [`e4569c5`](https://github.com/mastra-ai/mastra/commit/e4569c589e00c4061a686c9eb85afe1b7050b0a8), [`7309a85`](https://github.com/mastra-ai/mastra/commit/7309a85427281a8be23f4fb80ca52e18eaffd596), [`99424f6`](https://github.com/mastra-ai/mastra/commit/99424f6862ffb679c4ec6765501486034754a4c2), [`44eb452`](https://github.com/mastra-ai/mastra/commit/44eb4529b10603c279688318bebf3048543a1d61), [`6c40593`](https://github.com/mastra-ai/mastra/commit/6c40593d6d2b1b68b0c45d1a3a4c6ac5ecac3937), [`8c1135d`](https://github.com/mastra-ai/mastra/commit/8c1135dfb91b057283eae7ee11f9ec28753cc64f), [`dd39e54`](https://github.com/mastra-ai/mastra/commit/dd39e54ea34532c995b33bee6e0e808bf41a7341), [`b6fad9a`](https://github.com/mastra-ai/mastra/commit/b6fad9a602182b1cc0df47cd8c55004fa829ad61), [`4129c07`](https://github.com/mastra-ai/mastra/commit/4129c073349b5a66643fd8136ebfe9d7097cf793), [`5b930ab`](https://github.com/mastra-ai/mastra/commit/5b930aba1834d9898e8460a49d15106f31ac7c8d), [`4be93d0`](https://github.com/mastra-ai/mastra/commit/4be93d09d68e20aaf0ea3f210749422719618b5f), [`047635c`](https://github.com/mastra-ai/mastra/commit/047635ccd7861d726c62d135560c0022a5490aec), [`8c90ff4`](https://github.com/mastra-ai/mastra/commit/8c90ff4d3414e7f2a2d216ea91274644f7b29133), [`ed232d1`](https://github.com/mastra-ai/mastra/commit/ed232d1583f403925dc5ae45f7bee948cf2a182b), [`3891795`](https://github.com/mastra-ai/mastra/commit/38917953518eb4154a984ee36e6ededdcfe80f72), [`4f955b2`](https://github.com/mastra-ai/mastra/commit/4f955b20c7f66ed282ee1fd8709696fa64c4f19d), [`55a4c90`](https://github.com/mastra-ai/mastra/commit/55a4c9044ac7454349b9f6aeba0bbab5ee65d10f)]:
+   - @mastra/core@1.3.0-alpha.1
+
  ## 1.2.0
 
  ### Minor Changes
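The 63-byte constraint-name fix above can be sketched in TypeScript. This is a hypothetical illustration only; the real helper lives in `dist/storage/db/constraint-utils` and its name and signature may differ. The idea: PostgreSQL silently truncates identifiers longer than 63 bytes, so two long schema-prefixed constraint names can collide. Shortening the name and appending a hash of the full name keeps it unique and deterministic.

```typescript
import { createHash } from "node:crypto";

// PostgreSQL truncates identifiers to NAMEDATALEN - 1 = 63 bytes.
const PG_IDENTIFIER_MAX_BYTES = 63;

// Hypothetical sketch: shorten a long identifier and append a short hash
// of the full name so distinct long names stay distinct and deterministic.
function truncateIdentifier(name: string): string {
  if (Buffer.byteLength(name, "utf8") <= PG_IDENTIFIER_MAX_BYTES) {
    return name;
  }
  const hash = createHash("sha256").update(name).digest("hex").slice(0, 8);
  // Assumes ASCII identifiers, so character count equals byte count.
  const prefix = name.slice(0, PG_IDENTIFIER_MAX_BYTES - hash.length - 1);
  return `${prefix}_${hash}`;
}
```

Because the suffix is derived from the full name, regenerating the schema on a dev-server restart produces the same truncated identifier, which is what avoids the "relation already exists" error described in the changelog.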
package/dist/docs/SKILL.md CHANGED
@@ -1,37 +1,40 @@
  ---
- name: mastra-pg-docs
- description: Documentation for @mastra/pg. Includes links to type definitions and readable implementation code in dist/.
+ name: mastra-pg
+ description: Documentation for @mastra/pg. Use when working with @mastra/pg APIs, configuration, or implementation.
+ metadata:
+   package: "@mastra/pg"
+   version: "1.3.0-alpha.1"
  ---
 
- # @mastra/pg Documentation
+ ## When to use
 
- > **Version**: 1.2.0
- > **Package**: @mastra/pg
+ Use this skill whenever you are working with @mastra/pg to obtain the domain-specific knowledge.
 
- ## Quick Navigation
+ ## How to use
 
- Use SOURCE_MAP.json to find any export:
+ Read the individual reference documents for detailed explanations and code examples.
 
- ```bash
- cat docs/SOURCE_MAP.json
- ```
+ ### Docs
 
- Each export maps to:
- - **types**: `.d.ts` file with JSDoc and API signatures
- - **implementation**: `.js` chunk file with readable source
- - **docs**: Conceptual documentation in `docs/`
+ - [Semantic Recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+ - [Storage](references/docs-memory-storage.md) - Configure storage for Mastra's memory system to persist conversations, workflows, and traces.
+ - [Working Memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data, preferences.
+ - [RAG (Retrieval-Augmented Generation) in Mastra](references/docs-rag-overview.md) - Overview of Retrieval-Augmented Generation (RAG) in Mastra, detailing its capabilities for enhancing LLM outputs with relevant context.
+ - [Retrieval, Semantic Search, Reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
+ - [Storing Embeddings in A Vector Database](references/docs-rag-vector-databases.md) - Guide on vector storage options in Mastra, including embedded and dedicated vector databases for similarity search.
 
- ## Top Exports
+ ### Reference
 
+ - [Reference: Memory Class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+ - [Reference: Message History Processor](references/reference-processors-message-history-processor.md) - Documentation for the MessageHistory processor in Mastra, which handles retrieval and persistence of conversation history.
+ - [Reference: Semantic Recall Processor](references/reference-processors-semantic-recall-processor.md) - Documentation for the SemanticRecall processor in Mastra, which enables semantic search over conversation history using vector embeddings.
+ - [Reference: Working Memory Processor](references/reference-processors-working-memory-processor.md) - Documentation for the WorkingMemory processor in Mastra, which injects persistent user/context data as system instructions.
+ - [Reference: Metadata Filters](references/reference-rag-metadata-filters.md) - Documentation for metadata filtering capabilities in Mastra, which allow for precise querying of vector search results across different vector stores.
+ - [Reference: Composite Storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
+ - [Reference: DynamoDB Storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
+ - [Reference: PostgreSQL Storage](references/reference-storage-postgresql.md) - Documentation for the PostgreSQL storage implementation in Mastra.
+ - [Reference: createVectorQueryTool()](references/reference-tools-vector-query-tool.md) - Documentation for the Vector Query Tool in Mastra, which facilitates semantic search over vector stores with filtering and reranking capabilities.
+ - [Reference: PG Vector Store](references/reference-vectors-pg.md) - Documentation for the PgVector class in Mastra, which provides vector search using PostgreSQL with pgvector extension.
 
 
- See SOURCE_MAP.json for the complete list.
-
- ## Available Topics
-
- - [Memory](memory/) - 4 file(s)
- - [Processors](processors/) - 3 file(s)
- - [Rag](rag/) - 4 file(s)
- - [Storage](storage/) - 3 file(s)
- - [Tools](tools/) - 1 file(s)
- - [Vectors](vectors/) - 1 file(s)
+ Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
package/dist/docs/{SOURCE_MAP.json → assets/SOURCE_MAP.json} CHANGED
@@ -1,5 +1,5 @@
  {
-   "version": "1.2.0",
+   "version": "1.3.0-alpha.1",
    "package": "@mastra/pg",
    "exports": {},
    "modules": {}
package/dist/docs/{memory/03-semantic-recall.md → references/docs-memory-semantic-recall.md} CHANGED
@@ -1,20 +1,16 @@
- > Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
-
  # Semantic Recall
 
  If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.
 
- > **Watch 📹**
-
- What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
+ > **Watch 📹:** What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
 
  ## How Semantic Recall Works
 
- Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](./message-history).
+ Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](https://mastra.ai/docs/memory/message-history).
 
  It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.
 
- ![Diagram showing Mastra Memory semantic recall](/img/semantic-recall.png)
+ ![Diagram showing Mastra Memory semantic recall](/assets/images/semantic-recall-fd7b9336a6d0d18019216cb6d3dbe710.png)
 
  When it's enabled, new messages are used to query a vector DB for semantically similar messages.
 
@@ -24,7 +20,7 @@ After getting a response from the LLM, all new messages (user, assistant, and to
 
  Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
- ```typescript {9}
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
 
@@ -64,7 +60,7 @@ const { messages: relevantMessages } = await memory!.recall({
 
  Semantic recall relies on a [storage and vector db](https://mastra.ai/reference/memory/memory-class) to store messages and their embeddings.
 
- ```ts {8-16}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
@@ -113,7 +109,7 @@ The three main parameters that control semantic recall behavior are:
  2. **messageRange**: How much surrounding context to include with each match
  3. **scope**: Whether to search within the current thread or across all threads owned by a resource (the default is resource scope).
 
- ```typescript {5-7}
+ ```typescript
  const agent = new Agent({
    memory: new Memory({
      options: {
@@ -135,7 +131,7 @@ Semantic recall relies on an [embedding model](https://mastra.ai/reference/memor
 
  The simplest way is to use a `provider/model` string with autocomplete support:
 
- ```ts {7}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -158,7 +154,7 @@ The model router automatically handles API key detection from environment variab
 
  You can also use AI SDK embedding models directly:
 
- ```ts {2,7}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -174,13 +170,33 @@ const agent = new Agent({
 
  To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
 
- ```bash npm2yarn
+ **npm**:
+
+ ```bash
  npm install @mastra/fastembed@latest
  ```
 
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/fastembed@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/fastembed@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/fastembed@latest
+ ```
+
  Then configure it in your memory:
 
- ```ts {3,7}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { fastembed } from "@mastra/fastembed";
@@ -198,7 +214,7 @@ When using PostgreSQL as your vector store, you can optimize semantic recall per
 
  PostgreSQL supports both IVFFlat and HNSW indexes. By default, Mastra creates an IVFFlat index, but HNSW indexes typically provide better performance, especially with OpenAI embeddings which use inner product distance.
 
- ```typescript {18-23}
+ ```typescript
  import { Memory } from "@mastra/memory";
  import { PgStore, PgVector } from "@mastra/pg";
 
@@ -228,7 +244,7 @@ const agent = new Agent({
  });
  ```
 
- For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg#index-configuration-guide).
+ For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg).
 
  ## Disabling
 
@@ -236,7 +252,7 @@ There is a performance impact to using semantic recall. New messages are convert
 
  Semantic recall is enabled by default but can be disabled when not needed:
 
- ```typescript {4}
+ ```typescript
  const agent = new Agent({
    memory: new Memory({
      options: {
package/dist/docs/{memory/01-storage.md → references/docs-memory-storage.md} CHANGED
@@ -1,10 +1,8 @@
- > Configure storage for Mastra
-
  # Storage
 
- For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
+ For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { LibSQLStore } from "@mastra/libsql";
 
@@ -16,18 +14,17 @@ export const mastra = new Mastra({
  });
  ```
 
- > **Sharing the database with Mastra Studio**
- When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
-
- ```typescript
- url: "file:/absolute/path/to/your/project/mastra.db"
- ```
-
- Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
+ > **Sharing the database with Mastra Studio:** When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
+ >
+ > ```typescript
+ > url: "file:/absolute/path/to/your/project/mastra.db"
+ > ```
+ >
+ > Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
 
  This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.
 
- Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview#core-schema) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
+ Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
 
  ## Supported providers
 
@@ -44,8 +41,7 @@ Each provider page includes installation instructions, configuration parameters,
  - [LanceDB](https://mastra.ai/reference/storage/lance)
  - [Microsoft SQL Server](https://mastra.ai/reference/storage/mssql)
 
- > **Note:**
- libSQL is the easiest way to get started because it doesn’t require running a separate database server.
+ > **Tip:** libSQL is the easiest way to get started because it doesn’t require running a separate database server.
 
  ## Configuration scope
 
@@ -55,7 +51,7 @@ Storage can be configured at the instance level (shared by all agents) or at the
 
  Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { PostgresStore } from "@mastra/pg";
 
@@ -75,9 +71,9 @@ This is useful when all primitives share the same storage backend and have simil
 
  #### Composite storage
 
- [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite#storage-domains) you need) to different storage providers.
+ [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite) you need) to different storage providers.
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { MastraCompositeStore } from "@mastra/core/storage";
  import { MemoryLibSQL } from "@mastra/libsql";
@@ -88,7 +84,6 @@ export const mastra = new Mastra({
    storage: new MastraCompositeStore({
      id: "composite",
      domains: {
-       // highlight-next-line
        memory: new MemoryLibSQL({ url: "file:./memory.db" }),
        workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
        observability: new ObservabilityStorageClickhouse({
@@ -107,7 +102,7 @@ This is useful when different types of data have different performance or operat
 
  Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
 
- ```typescript title="src/mastra/agents/your-agent.ts"
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { PostgresStore } from "@mastra/pg";
@@ -123,19 +118,18 @@ export const agent = new Agent({
  });
  ```
 
- > **Note:**
- [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment#using-mastra-cloud-store) doesn't support agent-level storage.
+ > **Warning:** [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment) doesn't support agent-level storage.
 
  ## Threads and resources
 
- Mastra organizes conversations using two identifiers:
+ Mastra organizes conversations using two identifiers:
 
  - **Thread** - a conversation session containing a sequence of messages.
  - **Resource** - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
 
  Both identifiers are required for agents to store information:
 
- **generate:**
+ **Generate**:
 
  ```typescript
  const response = await agent.generate("hello", {
@@ -146,8 +140,7 @@ const response = await agent.generate("hello", {
  });
  ```
 
-
- **stream:**
+ **Stream**:
 
  ```typescript
  const stream = await agent.stream("hello", {
@@ -158,10 +151,7 @@ const stream = await agent.stream("hello", {
  });
  ```
 
-
-
- > **Note:**
- [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
+ > **Note:** [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
 
  ### Thread title generation
 
@@ -169,7 +159,7 @@ Mastra can automatically generate descriptive thread titles based on the user's
 
  Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
 
- ```typescript title="src/mastra/agents/my-agent.ts"
+ ```typescript
  export const agent = new Agent({
    id: "agent",
    memory: new Memory({
@@ -182,9 +172,9 @@ export const agent = new Agent({
 
  Title generation runs asynchronously after the agent responds and does not affect response time.
 
- To optimize cost or behavior, provide a smaller [`model`](/models) and custom `instructions`:
+ To optimize cost or behavior, provide a smaller [`model`](https://mastra.ai/models) and custom `instructions`:
 
- ```typescript title="src/mastra/agents/my-agent.ts"
+ ```typescript
  export const agent = new Agent({
    id: "agent",
    memory: new Memory({
@@ -206,17 +196,17 @@ Semantic recall has different storage requirements - it needs a vector database
 
  Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:
 
- | Provider | Record size limit |
- | -------- | ----------------- |
- | [DynamoDB](https://mastra.ai/reference/storage/dynamodb) | 400 KB |
- | [Convex](https://mastra.ai/reference/storage/convex) | 1 MiB |
- | [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB |
+ | Provider                                                           | Record size limit |
+ | ------------------------------------------------------------------ | ----------------- |
+ | [DynamoDB](https://mastra.ai/reference/storage/dynamodb)           | 400 KB            |
+ | [Convex](https://mastra.ai/reference/storage/convex)               | 1 MiB             |
+ | [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB             |
 
  PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
 
  To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, [Convex file storage](https://docs.convex.dev/file-storage), etc.) and replace them with URL references before persistence.
 
- ```typescript title="src/mastra/processors/attachment-uploader.ts"
+ ```typescript
  import type { Processor } from "@mastra/core/processors";
  import type { MastraDBMessage } from "@mastra/core/memory";
 
@@ -1,8 +1,6 @@
- > Learn how to configure working memory in Mastra to store persistent user data, preferences.
-
  # Working Memory

- While [message history](https://mastra.ai/docs/memory/message-history) and [semantic recall](./semantic-recall) help agents remember conversations, working memory allows them to maintain persistent information about users across interactions.
+ While [message history](https://mastra.ai/docs/memory/message-history) and [semantic recall](https://mastra.ai/docs/memory/semantic-recall) help agents remember conversations, working memory allows them to maintain persistent information about users across interactions.

  Think of it as the agent's active thoughts or scratchpad – the key information they keep available about the user or task. It's similar to how a person would naturally remember someone's name, preferences, or important details during a conversation.

@@ -19,7 +17,7 @@ Working memory can persist at two different scopes:

  Here's a minimal example of setting up an agent with working memory:

- ```typescript {11-15}
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
 
@@ -43,7 +41,7 @@ const agent = new Agent({

  Working memory is a block of Markdown text that the agent is able to update over time to store continuously relevant information:

- <YouTube id="UMy_JHLf1n8" />
+ [YouTube video player](https://www.youtube-nocookie.com/embed/UMy_JHLf1n8)

  ## Memory Persistence Scopes

@@ -136,7 +134,7 @@ Templates guide the agent on what information to track and update in working mem

  Here's an example of a custom template. In this example the agent will store the user's name, location, timezone, etc. as soon as the user sends a message containing any of the info:

- ```typescript {5-28}
+ ```typescript
  const memory = new Memory({
  options: {
  workingMemory: {
@@ -172,19 +170,13 @@ const memory = new Memory({

  ## Designing Effective Templates

- A well-structured template keeps the information easy for the agent to parse and update. Treat the
- template as a short form that you want the assistant to keep up to date.
+ A well-structured template keeps the information easy for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.

- - **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example
- `## Personal Info` or `- Name:`) so updates are easy to read and less likely to be truncated.
- - **Use consistent casing.** Inconsistent capitalization (`Timezone:` vs `timezone:`) can cause messy
- updates. Stick to Title Case or lower case for headings and bullet labels.
- - **Keep placeholder text simple.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM
- fill in the correct spots.
- - **Abbreviate very long values.** If you only need a short form, include guidance like
- `- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
- - **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of
- the template directly in the agent's `instructions` field.
+ - **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example `## Personal Info` or `- Name:`) so updates are easy to read and less likely to be truncated.
+ - **Use consistent casing.** Inconsistent capitalization (`Timezone:` vs `timezone:`) can cause messy updates. Stick to Title Case or lower case for headings and bullet labels.
+ - **Keep placeholder text simple.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM fill in the correct spots.
+ - **Abbreviate very long values.** If you only need a short form, include guidance like `- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
+ - **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of the template directly in the agent's `instructions` field.
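Taken together, these guidelines suggest a compact template like the one below. This is purely illustrative, reusing the `User Profile` fields that appear in the doc's own examples:

```nohighlight
# User Profile

## Personal Info

- Name: [First name or nickname]
- Location: [City]
- Timezone: [e.g., CET]

## Preferences

- Communication Style: [e.g., Formal]
```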

  ### Alternative Template Styles

@@ -281,8 +273,7 @@ Schema-based working memory uses **merge semantics**, meaning the agent only nee

  ## Example: Multi-step Retention

- Below is a simplified view of how the `User Profile` template updates across a short user
- conversation:
+ Below is a simplified view of how the `User Profile` template updates across a short user conversation:

  ```nohighlight
  # User Profile
@@ -308,11 +299,9 @@ conversation:
  - Timezone: CET
  ```

- The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information
- again because it has been stored in working memory.
+ The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information again because it has been stored in working memory.

- If your agent is not properly updating working memory when you expect it to, you can add system
- instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
+ If your agent is not properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.

  ## Setting Initial Working Memory

@@ -322,7 +311,7 @@ While agents typically update working memory through the `updateWorkingMemory` t

  When creating a thread, you can provide initial working memory through the metadata's `workingMemory` key:

- ```typescript title="src/app/medical-consultation.ts"
+ ```typescript
  // Create a thread with initial working memory
  const thread = await memory.createThread({
  threadId: "thread-123",
@@ -353,7 +342,7 @@ await agent.generate("What's my blood type?", {

  You can also update an existing thread's working memory:

- ```typescript title="src/app/medical-consultation.ts"
+ ```typescript
  // Update thread metadata to add/modify working memory
  await memory.updateThread({
  id: "thread-123",
@@ -375,7 +364,7 @@ await memory.updateThread({

  Alternatively, use the `updateWorkingMemory` method directly:

- ```typescript title="src/app/medical-consultation.ts"
+ ```typescript
  await memory.updateWorkingMemory({
  threadId: "thread-123",
  resourceId: "user-456", // Required for resource-scoped memory
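A hunk header above notes that schema-based working memory uses merge semantics: the agent sends only the fields that changed, and they are merged into the stored state. A toy, stand-alone illustration of that merge behavior (not Mastra's actual implementation):

```typescript
// Minimal JSON-object type for the sketch; arrays are omitted for brevity.
type Json = { [key: string]: Json | string | number | boolean | null };

// Recursively merge a partial update into the stored working memory:
// nested objects are merged key by key, scalars are overwritten.
function mergeWorkingMemory(stored: Json, update: Json): Json {
  const result: Json = { ...stored };
  for (const [key, value] of Object.entries(update)) {
    const existing = result[key];
    if (
      value !== null && typeof value === "object" &&
      existing !== null && typeof existing === "object"
    ) {
      result[key] = mergeWorkingMemory(existing as Json, value as Json);
    } else {
      result[key] = value;
    }
  }
  return result;
}
```

Under these semantics, an update of `{ profile: { location: "Berlin" } }` changes only the location while preserving previously stored fields such as the user's name.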
@@ -1,5 +1,3 @@
- > Overview of Retrieval-Augmented Generation (RAG) in Mastra, detailing its capabilities for enhancing LLM outputs with relevant context.
-
  # RAG (Retrieval-Augmented Generation) in Mastra

  RAG in Mastra helps you enhance LLM outputs by incorporating relevant context from your own data sources, improving accuracy and grounding responses in real information.
@@ -63,11 +61,11 @@ This example shows the essentials: initialize a document, create chunks, generat

  ## Document Processing

- The basic building block of RAG is document processing. Documents can be chunked using various strategies (recursive, sliding window, etc.) and enriched with metadata. See the [chunking and embedding doc](./chunking-and-embedding).
+ The basic building block of RAG is document processing. Documents can be chunked using various strategies (recursive, sliding window, etc.) and enriched with metadata. See the [chunking and embedding doc](https://mastra.ai/docs/rag/chunking-and-embedding).

  ## Vector Storage

- Mastra supports multiple vector stores for embedding persistence and similarity search, including pgvector, Pinecone, Qdrant, and MongoDB. See the [vector database doc](./vector-databases).
+ Mastra supports multiple vector stores for embedding persistence and similarity search, including pgvector, Pinecone, Qdrant, and MongoDB. See the [vector database doc](https://mastra.ai/docs/rag/vector-databases).

  ## More resources