@mastra/libsql 1.0.0-beta.10 → 1.0.0-beta.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +76 -0
- package/dist/docs/README.md +39 -0
- package/dist/docs/SKILL.md +40 -0
- package/dist/docs/SOURCE_MAP.json +6 -0
- package/dist/docs/agents/01-agent-memory.md +160 -0
- package/dist/docs/agents/02-networks.md +236 -0
- package/dist/docs/agents/03-agent-approval.md +317 -0
- package/dist/docs/core/01-reference.md +152 -0
- package/dist/docs/guides/01-ai-sdk.md +141 -0
- package/dist/docs/memory/01-overview.md +76 -0
- package/dist/docs/memory/02-storage.md +181 -0
- package/dist/docs/memory/03-working-memory.md +386 -0
- package/dist/docs/memory/04-semantic-recall.md +235 -0
- package/dist/docs/memory/05-memory-processors.md +319 -0
- package/dist/docs/memory/06-reference.md +135 -0
- package/dist/docs/observability/01-overview.md +64 -0
- package/dist/docs/observability/02-default.md +177 -0
- package/dist/docs/rag/01-retrieval.md +549 -0
- package/dist/docs/storage/01-reference.md +451 -0
- package/dist/docs/vectors/01-reference.md +215 -0
- package/dist/docs/workflows/01-snapshots.md +240 -0
- package/dist/index.cjs +146 -2
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +146 -2
- package/dist/index.js.map +1 -1
- package/dist/storage/domains/memory/index.d.ts +2 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/package.json +7 -6
@@ -0,0 +1,317 @@
> Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.

# Agent Approval

Agents sometimes require the same [human-in-the-loop](https://mastra.ai/docs/v1/workflows/human-in-the-loop) oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or running long processes. With agent approval you can suspend a tool call and provide feedback to the user, or approve or decline a tool call based on conditions in your application.

## Tool call approval

Tool call approval can be enabled at the agent level, where it applies to every tool the agent uses, or at the tool level, which provides more granular control over individual tool calls.

### Storage

Agent approval uses a snapshot to capture the state of the request. Ensure you've enabled a storage provider in your main Mastra instance. If storage isn't enabled, you'll see an error indicating the snapshot could not be found.

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core/mastra";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: ":memory:"
  })
});
```

## Agent-level approval

When calling an agent using `.stream()`, set `requireToolApproval` to `true`, which prevents the agent from calling any of the tools defined in its configuration.

```typescript
const stream = await agent.stream("What's the weather in London?", {
  requireToolApproval: true
});
```

### Approving tool calls

To approve a tool call, call `approveToolCall` on the `agent`, passing in the `runId` of the stream. This lets the agent know it's now OK to call its tools.

```typescript
const handleApproval = async () => {
  const approvedStream = await agent.approveToolCall({ runId: stream.runId });

  for await (const chunk of approvedStream.textStream) {
    process.stdout.write(chunk);
  }
  process.stdout.write("\n");
};
```

### Declining tool calls

To decline a tool call, call `declineToolCall` on the `agent`. You will still see the streamed response from the agent, but it won't call its tools.

```typescript
const handleDecline = async () => {
  const declinedStream = await agent.declineToolCall({ runId: stream.runId });

  for await (const chunk of declinedStream.textStream) {
    process.stdout.write(chunk);
  }
  process.stdout.write("\n");
};
```

## Tool-level approval

There are two approaches to tool-level approval. The first uses `requireApproval`, a property on the tool definition (not to be confused with `requireToolApproval`, the parameter passed to `agent.stream()`). The second uses `suspend`, which lets the tool provide context or confirmation prompts so the user can decide whether the tool call should continue.

### Tool approval using `requireApproval`

In this approach, `requireApproval` is configured on the tool definition (shown below) rather than on the agent.

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const testTool = createTool({
  id: "test-tool",
  description: "Fetches weather for a location",
  inputSchema: z.object({
    location: z.string()
  }),
  outputSchema: z.object({
    weather: z.string()
  }),
  resumeSchema: z.object({
    approved: z.boolean()
  }),
  execute: async ({ location }) => {
    const response = await fetch(`https://wttr.in/${location}?format=3`);
    const weather = await response.text();

    return { weather };
  },
  requireApproval: true
});
```

When `requireApproval` is true for a tool, the stream will include chunks of type `tool-call-approval` to indicate that the call is paused. To continue the call, invoke `resumeStream` with data matching the tool's `resumeSchema`, along with the `runId`.

```typescript
const stream = await agent.stream("What's the weather in London?");

for await (const chunk of stream.fullStream) {
  if (chunk.type === "tool-call-approval") {
    console.log("Approval required.");
  }
}

const handleResume = async () => {
  const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId });

  for await (const chunk of resumedStream.textStream) {
    process.stdout.write(chunk);
  }
  process.stdout.write("\n");
};
```

### Tool approval using `suspend`

With this approach, neither the agent nor the tool uses `requireApproval`. Instead, the tool implementation calls `suspend` to pause execution and return context or confirmation prompts to the user.

```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const testToolB = createTool({
  id: "test-tool-b",
  description: "Fetches weather for a location",
  inputSchema: z.object({
    location: z.string()
  }),
  outputSchema: z.object({
    weather: z.string()
  }),
  resumeSchema: z.object({
    approved: z.boolean()
  }),
  suspendSchema: z.object({
    reason: z.string()
  }),
  execute: async ({ location }, { agent } = {}) => {
    const { resumeData: { approved } = {}, suspend } = agent ?? {};

    if (!approved) {
      return suspend?.({ reason: "Approval required." });
    }

    const response = await fetch(`https://wttr.in/${location}?format=3`);
    const weather = await response.text();

    return { weather };
  }
});
```

With this approach the stream will include a `tool-call-suspended` chunk, and the `suspendPayload` will contain the `reason` defined by the tool's `suspendSchema`. To continue the call, invoke `resumeStream` with data matching the tool's `resumeSchema`, along with the `runId`.

```typescript
const stream = await agent.stream("What's the weather in London?");

for await (const chunk of stream.fullStream) {
  if (chunk.type === "tool-call-suspended") {
    console.log(chunk.payload.suspendPayload);
  }
}

const handleResume = async () => {
  const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId });

  for await (const chunk of resumedStream.textStream) {
    process.stdout.write(chunk);
  }
  process.stdout.write("\n");
};
```

## Automatic tool resumption

When using tools that call `suspend()`, you can enable automatic resumption so the agent resumes suspended tools based on the user's next message. This creates a conversational flow where users provide the required information naturally, without your application needing to call `resumeStream()` explicitly.

### Enabling auto-resume

Set `autoResumeSuspendedTools` to `true` in the agent's default options or when calling `stream()`:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Option 1: In agent configuration
const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o-mini",
  tools: { weatherTool },
  memory: new Memory(),
  defaultOptions: {
    autoResumeSuspendedTools: true,
  },
});

// Option 2: Per-request
const stream = await agent.stream("What's the weather?", {
  autoResumeSuspendedTools: true,
});
```

### How it works

When `autoResumeSuspendedTools` is enabled:

1. A tool suspends execution by calling `suspend()` with a payload (e.g., requesting more information)
2. The suspension is persisted to memory along with the conversation
3. When the user sends their next message on the same thread, the agent:
   - Detects the suspended tool from message history
   - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
   - Automatically resumes the tool with the extracted data

### Example

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "weather-info",
  description: "Fetches weather information for a city",
  suspendSchema: z.object({
    message: z.string(),
  }),
  resumeSchema: z.object({
    city: z.string(),
  }),
  execute: async (_inputData, context) => {
    // Check if this is a resume with data
    if (!context?.agent?.resumeData) {
      // First call - suspend and ask for the city
      return context?.agent?.suspend({
        message: "What city do you want to know the weather for?",
      });
    }

    // Resume call - city was extracted from user's message
    const { city } = context.agent.resumeData;
    const response = await fetch(`https://wttr.in/${city}?format=3`);
    const weather = await response.text();

    return { city, weather };
  },
});

const agent = new Agent({
  id: "my-agent",
  name: "My Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o-mini",
  tools: { weatherTool },
  memory: new Memory(),
  defaultOptions: {
    autoResumeSuspendedTools: true,
  },
});

const stream = await agent.stream("What's the weather like?");

for await (const chunk of stream.fullStream) {
  if (chunk.type === "tool-call-suspended") {
    console.log(chunk.payload.suspendPayload);
  }
}

const handleResume = async () => {
  const resumedStream = await agent.stream("San Francisco");

  for await (const chunk of resumedStream.textStream) {
    process.stdout.write(chunk);
  }
  process.stdout.write("\n");
};
```

**Conversation flow:**

```
User: "What's the weather like?"
Agent: "What city do you want to know the weather for?"

User: "San Francisco"
Agent: "The weather in San Francisco is: San Francisco: ☀️ +72°F"
```

The second message automatically resumes the suspended tool: the agent extracts `{ city: "San Francisco" }` from the user's message and passes it as `resumeData`.
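The tool-side contract behind this flow can be seen in isolation with plain TypeScript: on the first call `resumeData` is absent and the tool suspends; on the resume call the extracted data is present and the tool completes. This is a hedged sketch with hand-rolled mock types; the real context object comes from `@mastra/core` and differs in shape.

```typescript
// Mock stand-ins for the agent context (illustrative only, not the real Mastra types).
type MockAgentContext = {
  resumeData?: { city: string };
  suspend: (payload: { message: string }) => Promise<any>;
};

async function executeWeather(
  _input: unknown,
  context?: { agent?: MockAgentContext },
): Promise<any> {
  if (!context?.agent?.resumeData) {
    // First call: no resumeData yet, so suspend and ask for the city.
    return context?.agent?.suspend({
      message: "What city do you want to know the weather for?",
    });
  }
  // Resume call: the city was extracted from the user's follow-up message.
  const { city } = context.agent.resumeData;
  return { city, weather: `(weather for ${city})` }; // stand-in for the real fetch
}

const mockSuspend = async (payload: { message: string }) => ({ suspended: true, ...payload });

// First call suspends and surfaces the question:
const first = await executeWeather(undefined, { agent: { suspend: mockSuspend } });

// Second call resumes with the extracted data and completes:
const second = await executeWeather(undefined, {
  agent: { resumeData: { city: "San Francisco" }, suspend: mockSuspend },
});
```

The two calls mirror the two user turns in the conversation above: the first returns the suspend payload, the second returns the tool's normal result.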

### Requirements

For automatic tool resumption to work:

- **Memory configured**: The agent needs memory to track suspended tools across messages
- **Same thread**: The follow-up message must use the same memory thread and resource identifiers
- **`resumeSchema` defined**: The tool must define a `resumeSchema` so the agent knows what data structure to extract from the user's message

### Manual vs automatic resumption

| Approach | Use case |
|----------|----------|
| Manual (`resumeStream()`) | Programmatic control, webhooks, button clicks, external triggers |
| Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |

Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.
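Whichever approach suspends the call, a consuming loop usually has to handle both pause signals. The branching can be sketched against a mocked `fullStream`; the chunk shapes below are simplified stand-ins for the real Mastra chunk types, not the actual API.

```typescript
// Simplified chunk shapes mirroring the examples above (assumed, not the full Mastra types).
type Chunk =
  | { type: "text-delta"; payload: { text: string } }
  | { type: "tool-call-approval"; payload: { toolName: string } }
  | { type: "tool-call-suspended"; payload: { suspendPayload: { reason?: string; message?: string } } };

async function* mockFullStream(chunks: Chunk[]): AsyncGenerator<Chunk> {
  for (const chunk of chunks) yield chunk;
}

// Walk the stream once, collecting text and recording any pause that needs handling.
async function drain(stream: AsyncIterable<Chunk>) {
  let text = "";
  let pending: { kind: "approval" | "suspended"; detail: string } | null = null;
  for await (const chunk of stream) {
    if (chunk.type === "text-delta") {
      text += chunk.payload.text;
    } else if (chunk.type === "tool-call-approval") {
      pending = { kind: "approval", detail: chunk.payload.toolName };
    } else if (chunk.type === "tool-call-suspended") {
      pending = {
        kind: "suspended",
        detail: chunk.payload.suspendPayload.reason ?? chunk.payload.suspendPayload.message ?? "",
      };
    }
  }
  return { text, pending };
}

const result = await drain(
  mockFullStream([
    { type: "text-delta", payload: { text: "Checking..." } },
    { type: "tool-call-suspended", payload: { suspendPayload: { reason: "Approval required." } } },
  ]),
);
```

In a real application, `pending` would drive either an approval prompt (`approveToolCall` / `declineToolCall`) or a `resumeStream` call, depending on its `kind`.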

## Related

- [Using Tools](./using-tools)
- [Agent Overview](./overview)
- [Tools Overview](../mcp/overview)
- [Agent Memory](./agent-memory)
- [Request Context](https://mastra.ai/docs/v1/server/request-context)
@@ -0,0 +1,152 @@
# Core API Reference

> API reference for core - 3 entries

---

## Reference: Mastra Class

> Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.

The `Mastra` class is the central orchestrator in any Mastra application, managing agents, workflows, storage, logging, telemetry, and more. Typically, you create a single instance of `Mastra` to coordinate your application.

Think of `Mastra` as a top-level registry:

- Registering **integrations** makes them accessible to **agents**, **workflows**, and **tools** alike.
- **Tools** aren't registered on `Mastra` directly but are associated with agents and discovered automatically.

## Usage example

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { PinoLogger } from "@mastra/loggers";
import { LibSQLStore } from "@mastra/libsql";
import { weatherWorkflow } from "./workflows/weather-workflow";
import { weatherAgent } from "./agents/weather-agent";

export const mastra = new Mastra({
  workflows: { weatherWorkflow },
  agents: { weatherAgent },
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: ":memory:",
  }),
  logger: new PinoLogger({
    name: "Mastra",
    level: "info",
  }),
});
```

## Constructor parameters

---

## Reference: Mastra.getMemory()

> Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves a registered memory instance by its registry key.

The `.getMemory()` method retrieves a memory instance from the Mastra registry by its key. Memory instances are registered in the Mastra constructor and can be referenced by registered agents.

## Usage example

```typescript
const memory = mastra.getMemory("conversationMemory");

// Use the memory instance
const thread = await memory.createThread({
  resourceId: "user-123",
  title: "New Conversation",
});
```

## Parameters

## Returns

## Example: Registering and Retrieving Memory

```typescript
import { Mastra } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const conversationMemory = new Memory({
  storage: new LibSQLStore({ url: ":memory:" }),
});

const mastra = new Mastra({
  memory: {
    conversationMemory,
  },
});

// Later, retrieve the memory instance
const memory = mastra.getMemory("conversationMemory");
```

## Related

- [Mastra.listMemory()](https://mastra.ai/reference/v1/core/listMemory)
- [Memory overview](https://mastra.ai/docs/v1/memory/overview)
- [Agent Memory](https://mastra.ai/docs/v1/agents/agent-memory)

---

## Reference: Mastra.listMemory()

> Documentation for the `Mastra.listMemory()` method in Mastra, which returns all registered memory instances.

The `.listMemory()` method returns all memory instances registered with the Mastra instance.

## Usage example

```typescript
const memoryInstances = mastra.listMemory();

for (const [key, memory] of Object.entries(memoryInstances)) {
  console.log(`Memory "${key}": ${memory.id}`);
}
```

## Parameters

This method takes no parameters.

## Returns

## Example: Checking Registered Memory

```typescript
import { Mastra } from "@mastra/core";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const conversationMemory = new Memory({
  id: "conversation-memory",
  storage: new LibSQLStore({ url: ":memory:" }),
});

const analyticsMemory = new Memory({
  id: "analytics-memory",
  storage: new LibSQLStore({ url: ":memory:" }),
});

const mastra = new Mastra({
  memory: {
    conversationMemory,
    analyticsMemory,
  },
});

// List all registered memory instances
const allMemory = mastra.listMemory();
console.log(Object.keys(allMemory)); // ["conversationMemory", "analyticsMemory"]
```

## Related

- [Mastra.getMemory()](https://mastra.ai/reference/v1/core/getMemory)
- [Memory overview](https://mastra.ai/docs/v1/memory/overview)
- [Agent Memory](https://mastra.ai/docs/v1/agents/agent-memory)
@@ -0,0 +1,141 @@
> Use Mastra processors and memory with the Vercel AI SDK

# AI SDK

If you're already using the [Vercel AI SDK](https://sdk.vercel.ai) directly and want to add Mastra capabilities like [processors](https://mastra.ai/docs/v1/agents/processors) or [memory](https://mastra.ai/docs/v1/memory/memory-processors) without switching to the full Mastra agent API, [`withMastra()`](https://mastra.ai/reference/v1/ai-sdk/with-mastra) lets you wrap any AI SDK model with these features. This is useful when you want to keep your existing AI SDK code but add input/output processing, conversation persistence, or content filtering.

> **Note:**
> If you want to use Mastra together with AI SDK UI (e.g. `useChat()`), visit the [AI SDK UI guide](https://mastra.ai/guides/v1/build-your-ui/ai-sdk-ui).

## Installation

Install `@mastra/ai-sdk` to begin using the `withMastra()` function.

**npm:**

```bash
npm install @mastra/ai-sdk@beta
```

**pnpm:**

```bash
pnpm add @mastra/ai-sdk@beta
```

**yarn:**

```bash
yarn add @mastra/ai-sdk@beta
```

**bun:**

```bash
bun add @mastra/ai-sdk@beta
```

## Examples

### With Processors

Processors let you transform messages before they're sent to the model (`processInput`) and after responses are received (`processOutputResult`). This example creates a logging processor that logs message counts at each stage, then wraps an OpenAI model with it.

```typescript title="src/example.ts"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import type { Processor } from '@mastra/core/processors';

const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages');
    return messages;
  },
  async processOutputResult({ messages }) {
    console.log('Output:', messages.length, 'messages');
    return messages;
  },
};

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
  outputProcessors: [loggingProcessor],
});

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
});
```

### With Memory

Memory automatically loads previous messages from storage before the LLM call and saves new messages after. This example configures a libSQL storage backend to persist conversation history, loading the last 10 messages for context.

```typescript title="src/memory-example.ts"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';

const storage = new LibSQLStore({
  id: 'my-app',
  url: 'file:./data.db',
});
await storage.init();

const model = withMastra(openai('gpt-4o'), {
  memory: {
    storage,
    threadId: 'user-thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'What did we talk about earlier?',
});
```

### With Processors & Memory

You can combine processors and memory. Input processors run after memory loads historical messages, and output processors run before memory saves the response.

```typescript title="src/combined-example.ts"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';
import { LibSQLStore } from '@mastra/libsql';

const storage = new LibSQLStore({ id: 'my-app', url: 'file:./data.db' });
await storage.init();

// myGuardProcessor and myLoggingProcessor are user-defined processors,
// created the same way as loggingProcessor in the example above.
const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [myGuardProcessor],
  outputProcessors: [myLoggingProcessor],
  memory: {
    storage,
    threadId: 'thread-123',
    resourceId: 'user-123',
    lastMessages: 10,
  },
});

const { text } = await generateText({
  model,
  prompt: 'Hello!',
});
```

## Related

- [`withMastra()`](https://mastra.ai/reference/v1/ai-sdk/with-mastra) - API reference for `withMastra()`
- [Processors](https://mastra.ai/docs/v1/agents/processors) - Learn about input and output processors
- [Memory](https://mastra.ai/docs/v1/memory/overview) - Overview of Mastra's memory system
- [AI SDK UI](https://mastra.ai/guides/v1/build-your-ui/ai-sdk-ui) - Using AI SDK UI hooks with Mastra agents, workflows, and networks
@@ -0,0 +1,76 @@
> Learn how memory works in Mastra.

# Memory

Memory gives your agent coherence across interactions and allows it to improve over time by retaining relevant information from past conversations.

Mastra requires a [storage provider](./storage) to persist memory and supports three types:

- [**Message history**](https://mastra.ai/docs/v1/memory/message-history) captures recent messages from the current conversation, providing short-term continuity and maintaining dialogue flow.
- [**Working memory**](https://mastra.ai/docs/v1/memory/working-memory) stores persistent user-specific details such as names, preferences, goals, and other structured data.
- [**Semantic recall**](https://mastra.ai/docs/v1/memory/semantic-recall) retrieves older messages from past conversations based on semantic relevance. Matches are retrieved using vector search and can include surrounding context for better comprehension.

You can enable any combination of these memory types. Mastra assembles the relevant memories into the model's context window. If the total exceeds the model's token limit, use [memory processors](https://mastra.ai/docs/v1/memory/memory-processors) to trim or filter messages before sending them to the model.
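The trimming idea can be illustrated without the real `Processor` interface. This is a hedged sketch assuming a simplified message shape and a crude 4-characters-per-token estimate; both are illustrative assumptions, not Mastra's actual token accounting.

```typescript
// Simplified message shape (assumption for illustration, not the Mastra message type).
type Msg = { role: "user" | "assistant"; content: string };

// Crude token estimate: roughly 4 characters per token (assumption).
const estimateTokens = (m: Msg) => Math.ceil(m.content.length / 4);

// Keep only the most recent messages that fit within the token budget.
function trimToBudget(messages: Msg[], maxTokens: number): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i]);
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}

const trimmed = trimToBudget(
  [
    { role: "user", content: "a".repeat(40) },      // ~10 tokens
    { role: "assistant", content: "b".repeat(40) }, // ~10 tokens
    { role: "user", content: "c".repeat(40) },      // ~10 tokens
  ],
  20,
);
// Only the two most recent messages fit the 20-token budget.
```

Mastra's memory processors apply the same principle (drop or filter messages before the model call), but operate on its real message types and limits.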

## Getting started

Install Mastra's memory module and the storage adapter for your preferred database (see the storage section below):

```bash
npm install @mastra/memory@beta @mastra/libsql@beta
```

Add the storage adapter to the main Mastra instance:

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: ":memory:",
  }),
});
```

Enable memory by passing a `Memory` instance to your agent:

```typescript title="src/mastra/agents/test-agent.ts"
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const testAgent = new Agent({
  id: "test-agent",
  memory: new Memory({
    options: {
      lastMessages: 20,
    },
  }),
});
```

When you send a new message, the model can now "see" the previous 20 messages, which gives it better context for the conversation and leads to more coherent, accurate replies.

This example configures basic [message history](https://mastra.ai/docs/v1/memory/message-history). You can also enable [working memory](https://mastra.ai/docs/v1/memory/working-memory) and [semantic recall](https://mastra.ai/docs/v1/memory/semantic-recall) by passing additional options to `Memory`.

## Storage

Before enabling memory, you must first configure a storage adapter. Mastra supports multiple database providers including PostgreSQL, MongoDB, libSQL, and more.

Storage can be configured at the instance level (shared across all agents) or at the agent level (dedicated per agent). You can also use different databases for storage and vector operations.

See the [Storage](https://mastra.ai/docs/v1/memory/storage) documentation for configuration options, supported providers, and examples.
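As a sketch of the agent-level option, a dedicated store can be passed to `Memory` directly, following the configuration style used elsewhere on this page. The agent and database names below are illustrative; the Storage docs remain the authoritative reference for options.

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// Illustrative agent-level storage: this agent keeps its memory in its own
// database, independent of any instance-level storage on `Mastra`.
export const supportAgent = new Agent({
  id: "support-agent",
  name: "Support Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o-mini",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "support-agent-storage",
      url: "file:./support-agent.db",
    }),
  }),
});
```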

## Debugging memory

When tracing is enabled, you can inspect exactly which messages the agent uses for context in each request. The trace output shows all memory included in the agent's context window: both recent message history and messages recalled via semantic recall.

This visibility helps you understand why an agent made specific decisions and verify that memory retrieval is working as expected.

For more details on enabling and configuring tracing, see [Tracing](https://mastra.ai/docs/v1/observability/tracing/overview).

## Next Steps

- Learn more about [Storage](https://mastra.ai/docs/v1/memory/storage) providers and configuration options
- Add [Message History](https://mastra.ai/docs/v1/memory/message-history), [Working Memory](https://mastra.ai/docs/v1/memory/working-memory), or [Semantic Recall](https://mastra.ai/docs/v1/memory/semantic-recall)
- Visit [Memory configuration reference](https://mastra.ai/reference/v1/memory/memory-class) for all available options