@mastra/libsql 1.6.0 → 1.6.1-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +11 -0
- package/dist/index.cjs +17 -8
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +17 -8
- package/dist/index.js.map +1 -1
- package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -1
- package/package.json +4 -4
- package/dist/docs/SKILL.md +0 -50
- package/dist/docs/assets/SOURCE_MAP.json +0 -6
- package/dist/docs/references/docs-agents-agent-approval.md +0 -377
- package/dist/docs/references/docs-agents-agent-memory.md +0 -212
- package/dist/docs/references/docs-agents-network-approval.md +0 -275
- package/dist/docs/references/docs-agents-networks.md +0 -290
- package/dist/docs/references/docs-memory-memory-processors.md +0 -316
- package/dist/docs/references/docs-memory-message-history.md +0 -260
- package/dist/docs/references/docs-memory-overview.md +0 -45
- package/dist/docs/references/docs-memory-semantic-recall.md +0 -272
- package/dist/docs/references/docs-memory-storage.md +0 -261
- package/dist/docs/references/docs-memory-working-memory.md +0 -400
- package/dist/docs/references/docs-observability-overview.md +0 -70
- package/dist/docs/references/docs-observability-tracing-exporters-default.md +0 -211
- package/dist/docs/references/docs-rag-retrieval.md +0 -521
- package/dist/docs/references/docs-workflows-snapshots.md +0 -238
- package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +0 -140
- package/dist/docs/references/reference-core-getMemory.md +0 -50
- package/dist/docs/references/reference-core-listMemory.md +0 -56
- package/dist/docs/references/reference-core-mastra-class.md +0 -66
- package/dist/docs/references/reference-memory-memory-class.md +0 -147
- package/dist/docs/references/reference-storage-composite.md +0 -235
- package/dist/docs/references/reference-storage-dynamodb.md +0 -282
- package/dist/docs/references/reference-storage-libsql.md +0 -135
- package/dist/docs/references/reference-vectors-libsql.md +0 -305
@@ -1,400 +0,0 @@
# Working Memory

While [message history](https://mastra.ai/docs/memory/message-history) and [semantic recall](https://mastra.ai/docs/memory/semantic-recall) help agents remember conversations, working memory allows them to maintain persistent information about users across interactions.

Think of it as the agent's active thoughts or scratchpad – the key information they keep available about the user or task. It's similar to how a person would naturally remember someone's name, preferences, or important details during a conversation.

This is useful for maintaining ongoing state that's always relevant and should always be available to the agent.

Working memory can persist at two different scopes:

- **Resource-scoped** (default): Memory persists across all conversation threads for the same user
- **Thread-scoped**: Memory is isolated per conversation thread

**Important:** Switching between scopes means the agent won't see memory from the other scope - thread-scoped memory is completely separate from resource-scoped memory.

## Quick Start

Here's a minimal example of setting up an agent with working memory:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Create agent with working memory enabled
const agent = new Agent({
  id: "personal-assistant",
  name: "PersonalAssistant",
  instructions: "You are a helpful personal assistant.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
```

## How it Works

Working memory is a block of Markdown text that the agent is able to update over time to store continuously relevant information:

[YouTube video player](https://www.youtube-nocookie.com/embed/UMy_JHLf1n8)

## Memory Persistence Scopes

Working memory can operate in two different scopes, allowing you to choose how memory persists across conversations:

### Resource-Scoped Memory (Default)

By default, working memory persists across all conversation threads for the same user (resourceId), enabling persistent user memory:

```typescript
const memory = new Memory({
  storage,
  options: {
    workingMemory: {
      enabled: true,
      scope: "resource", // Memory persists across all user threads
      template: `# User Profile
- **Name**:
- **Location**:
- **Interests**:
- **Preferences**:
- **Long-term Goals**:
`,
    },
  },
});
```

**Use cases:**

- Personal assistants that remember user preferences
- Customer service bots that maintain customer context
- Educational applications that track student progress

### Usage with Agents

When using resource-scoped memory, make sure to pass the `resource` parameter in the memory options:

```typescript
// Resource-scoped memory requires resource
const response = await agent.generate("Hello!", {
  memory: {
    thread: "conversation-123",
    resource: "user-alice-456", // Same user across different threads
  },
});
```

### Thread-Scoped Memory

Thread-scoped memory isolates working memory to individual conversation threads. Each thread maintains its own isolated memory:

```typescript
const memory = new Memory({
  storage,
  options: {
    workingMemory: {
      enabled: true,
      scope: "thread", // Memory is isolated per thread
      template: `# User Profile
- **Name**:
- **Interests**:
- **Current Goal**:
`,
    },
  },
});
```

**Use cases:**

- Different conversations about separate topics
- Temporary or session-specific information
- Workflows where each thread needs working memory but threads are ephemeral and not related to each other

## Storage Adapter Support

Resource-scoped working memory requires specific storage adapters that support the `mastra_resources` table:

### Supported Storage Adapters

- **libSQL** (`@mastra/libsql`)
- **PostgreSQL** (`@mastra/pg`)
- **Upstash** (`@mastra/upstash`)
- **MongoDB** (`@mastra/mongodb`)
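The resource-scoped examples above pass a `storage` instance into `Memory` without showing how it is created. As a sketch, here is one way to do that with the libSQL adapter; the `id` and `url` values are placeholders, and the constructor options mirror the `LibSQLStore` usage shown in the observability docs:

```typescript
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// libSQL supports the `mastra_resources` table required for
// resource-scoped working memory.
const storage = new LibSQLStore({
  id: "mastra-storage", // placeholder id
  url: "file:./mastra.db", // local SQLite file; use a remote URL in production
});

const memory = new Memory({
  storage,
  options: {
    workingMemory: {
      enabled: true,
      scope: "resource",
    },
  },
});
```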

## Custom Templates

Templates guide the agent on what information to track and update in working memory. While a default template is used if none is provided, you'll typically want to define a custom template tailored to your agent's specific use case to ensure it remembers the most relevant information.

Here's an example of a custom template. In this example the agent will store the user's name, location, timezone, etc. as soon as the user sends a message containing any of the info:

```typescript
const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `
# User Profile

## Personal Info

- Name:
- Location:
- Timezone:

## Preferences

- Communication Style: [e.g., Formal, Casual]
- Project Goal:
- Key Deadlines:
  - [Deadline 1]: [Date]
  - [Deadline 2]: [Date]

## Session State

- Last Task Discussed:
- Open Questions:
  - [Question 1]
  - [Question 2]
`,
    },
  },
});
```

## Designing Effective Templates

A well-structured template keeps the information easy for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.

- **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example `## Personal Info` or `- Name:`) so updates are easy to read and less likely to be truncated.
- **Use consistent casing.** Inconsistent capitalization (`Timezone:` vs `timezone:`) can cause messy updates. Stick to Title Case or lower case for headings and bullet labels.
- **Keep placeholder text simple.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM fill in the correct spots.
- **Abbreviate very long values.** If you only need a short form, include guidance like `- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
- **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of the template directly in the agent's `instructions` field.

### Alternative Template Styles

Use a shorter single block if you only need a few items:

```typescript
const basicMemory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `User Facts:\n- Name:\n- Favorite Color:\n- Current Topic:`,
    },
  },
});
```

You can also store the key facts in a short paragraph format if you prefer a more narrative style:

```typescript
const paragraphMemory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      template: `Important Details:\n\nKeep a short paragraph capturing the user's important facts (name, main goal, current task).`,
    },
  },
});
```

## Structured Working Memory

Working memory can also be defined using a structured schema instead of a Markdown template. This allows you to specify the exact fields and types that should be tracked, using a [Zod](https://zod.dev/) schema. When using a schema, the agent will see and update working memory as a JSON object matching your schema.

**Important:** You must specify either `template` or `schema`, but not both.

### Example: Schema-Based Working Memory

```typescript
import { z } from "zod";
import { Memory } from "@mastra/memory";

const userProfileSchema = z.object({
  name: z.string().optional(),
  location: z.string().optional(),
  timezone: z.string().optional(),
  preferences: z
    .object({
      communicationStyle: z.string().optional(),
      projectGoal: z.string().optional(),
      deadlines: z.array(z.string()).optional(),
    })
    .optional(),
});

const memory = new Memory({
  options: {
    workingMemory: {
      enabled: true,
      schema: userProfileSchema,
      // template: ... (do not set)
    },
  },
});
```

When a schema is provided, the agent receives the working memory as a JSON object. For example:

```json
{
  "name": "Sam",
  "location": "Berlin",
  "timezone": "CET",
  "preferences": {
    "communicationStyle": "Formal",
    "projectGoal": "Launch MVP",
    "deadlines": ["2025-07-01"]
  }
}
```

### Merge Semantics for Schema-Based Memory

Schema-based working memory uses **merge semantics**, meaning the agent only needs to include fields it wants to add or update. Existing fields are preserved automatically.

- **Object fields are deep merged:** Only provided fields are updated; others remain unchanged
- **Set a field to `null` to delete it:** This explicitly removes the field from memory
- **Arrays are replaced entirely:** When an array field is provided, it replaces the existing array (arrays are not merged element-by-element)
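The rules above can be sketched as a small merge function. This is an illustrative model of the described semantics, not Mastra's actual implementation:

```typescript
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

// Sketch of the documented merge semantics: objects deep-merge,
// `null` deletes a field, arrays are replaced whole.
function mergeWorkingMemory(
  existing: { [k: string]: Json },
  update: { [k: string]: Json },
): { [k: string]: Json } {
  const result: { [k: string]: Json } = { ...existing };
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete result[key]; // explicit deletion
    } else if (
      typeof value === "object" &&
      !Array.isArray(value) &&
      typeof result[key] === "object" &&
      result[key] !== null &&
      !Array.isArray(result[key])
    ) {
      // deep merge nested objects
      result[key] = mergeWorkingMemory(
        result[key] as { [k: string]: Json },
        value as { [k: string]: Json },
      );
    } else {
      result[key] = value; // scalars and arrays are replaced entirely
    }
  }
  return result;
}

const merged = mergeWorkingMemory(
  { name: "Sam", preferences: { projectGoal: "Launch MVP", deadlines: ["2025-07-01"] } },
  { timezone: "CET", preferences: { deadlines: ["2025-08-01"] } },
);
// `name` and `preferences.projectGoal` are preserved, `timezone` is added,
// and the `deadlines` array is replaced whole.
```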

## Choosing Between Template and Schema

- Use a **template** (Markdown) if you want the agent to maintain memory as a free-form text block, such as a user profile or scratchpad. Templates use **replace semantics** — the agent must provide the complete memory content on each update.
- Use a **schema** if you need structured, type-safe data that can be validated and programmatically accessed as JSON. Schemas use **merge semantics** — the agent only provides fields to update, and existing fields are preserved.
- Only one mode can be active at a time: setting both `template` and `schema` is not supported.

## Example: Multi-step Retention

Below is a simplified view of how the `User Profile` template updates across a short user conversation:

```nohighlight
# User Profile

## Personal Info

- Name:
- Location:
- Timezone:

--- After user says "My name is **Sam** and I'm from **Berlin**" ---

# User Profile
- Name: Sam
- Location: Berlin
- Timezone:

--- After user adds "By the way I'm normally in **CET**" ---

# User Profile
- Name: Sam
- Location: Berlin
- Timezone: CET
```

The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information again because it has been stored in working memory.

If your agent is not properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
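For example, the `instructions` field can spell out update rules explicitly. The wording below is only a hypothetical illustration; tailor it to your own template:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

const agent = new Agent({
  id: "personal-assistant",
  name: "PersonalAssistant",
  // Hypothetical update rules; adjust to match your template's fields.
  instructions: `You are a helpful personal assistant.
When the user shares personal details (name, location, timezone),
update working memory immediately. Do not overwrite a filled field
unless the user corrects it, and leave unknown fields blank.`,
  model: "openai/gpt-5.1",
  memory: new Memory({
    options: {
      workingMemory: { enabled: true },
    },
  }),
});
```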

## Setting Initial Working Memory

While agents typically update working memory through the `updateWorkingMemory` tool, you can also set initial working memory programmatically when creating or updating threads. This is useful for injecting user data (like their name, preferences, or other info) that you want available to the agent without passing it in every request.

### Setting Working Memory via Thread Metadata

When creating a thread, you can provide initial working memory through the metadata's `workingMemory` key:

```typescript
// Create a thread with initial working memory
const thread = await memory.createThread({
  threadId: "thread-123",
  resourceId: "user-456",
  title: "Medical Consultation",
  metadata: {
    workingMemory: `# Patient Profile
- Name: John Doe
- Blood Type: O+
- Allergies: Penicillin
- Current Medications: None
- Medical History: Hypertension (controlled)
`,
  },
});

// The agent will now have access to this information in all messages
await agent.generate("What's my blood type?", {
  memory: {
    thread: thread.id,
    resource: "user-456",
  },
});
// Response: "Your blood type is O+."
```

### Updating Working Memory Programmatically

You can also update an existing thread's working memory:

```typescript
// Update thread metadata to add/modify working memory
await memory.updateThread({
  id: "thread-123",
  title: thread.title,
  metadata: {
    ...thread.metadata,
    workingMemory: `# Patient Profile
- Name: John Doe
- Blood Type: O+
- Allergies: Penicillin, Ibuprofen // Updated
- Current Medications: Lisinopril 10mg daily // Added
- Medical History: Hypertension (controlled)
`,
  },
});
```

### Direct Memory Update

Alternatively, use the `updateWorkingMemory` method directly:

```typescript
await memory.updateWorkingMemory({
  threadId: "thread-123",
  resourceId: "user-456", // Required for resource-scoped memory
  workingMemory: "Updated memory content...",
});
```

## Read-Only Working Memory

In some scenarios, you may want an agent to have access to working memory data without the ability to modify it. This is useful for:

- **Routing agents** that need context but shouldn't update user profiles
- **Sub-agents** in a multi-agent system that should reference but not own the memory

To enable read-only mode, set `readOnly: true` in the memory options:

```typescript
const response = await agent.generate("What do you know about me?", {
  memory: {
    thread: "conversation-123",
    resource: "user-alice-456",
    options: {
      readOnly: true, // Working memory is provided but cannot be updated
    },
  },
});
```

## Examples

- [Working memory with template](https://github.com/mastra-ai/mastra/tree/main/examples/memory-with-template)
- [Working memory with schema](https://github.com/mastra-ai/mastra/tree/main/examples/memory-with-schema)
- [Per-resource working memory](https://github.com/mastra-ai/mastra/tree/main/examples/memory-per-resource-example) - Complete example showing resource-scoped memory persistence
@@ -1,70 +0,0 @@
# Observability Overview

Mastra provides observability features for AI applications. Monitor LLM operations, trace agent decisions, and debug complex workflows with tools that understand AI-specific patterns.

## Key Features

### Tracing

Specialized tracing for AI operations that captures:

- **Model interactions**: Token usage, latency, prompts, and completions
- **Agent execution**: Decision paths, tool calls, and memory operations
- **Workflow steps**: Branching logic, parallel execution, and step outputs
- **Automatic instrumentation**: Tracing with decorators

## Storage Requirements

The `DefaultExporter` persists traces to your configured storage backend. Not all storage providers support observability—for the full list, see [Storage Provider Support](https://mastra.ai/docs/observability/tracing/exporters/default).

For production environments with high traffic, we recommend using **ClickHouse** for the observability domain via [composite storage](https://mastra.ai/reference/storage/composite). See [Production Recommendations](https://mastra.ai/docs/observability/tracing/exporters/default) for details.

## Quick Start

Configure observability in your Mastra instance:

```typescript
import { Mastra } from "@mastra/core";
import { PinoLogger } from "@mastra/loggers";
import { LibSQLStore } from "@mastra/libsql";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";

export const mastra = new Mastra({
  logger: new PinoLogger(),
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage is required for tracing
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (if MASTRA_CLOUD_ACCESS_TOKEN is set)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(), // Redacts sensitive data like passwords, tokens, keys
        ],
      },
    },
  }),
});
```

> **Serverless environments:** The `file:./mastra.db` storage URL uses the local filesystem, which doesn't work in serverless environments like Vercel, AWS Lambda, or Cloudflare Workers. For serverless deployments, use external storage. See the [Vercel deployment guide](https://mastra.ai/guides/deployment/vercel) for a complete example.

With this basic setup, you will see traces and logs in both Studio and Mastra Cloud.

We also support various external tracing providers like MLflow, Langfuse, Braintrust, and any OpenTelemetry-compatible platform (Datadog, New Relic, SigNoz, etc.). See more about this in the [Tracing](https://mastra.ai/docs/observability/tracing/overview) documentation.

## What's Next?

- **[Set up Tracing](https://mastra.ai/docs/observability/tracing/overview)**: Configure tracing for your application
- **[Configure Logging](https://mastra.ai/docs/observability/logging)**: Add structured logging
- **[API Reference](https://mastra.ai/reference/observability/tracing/instances)**: Detailed configuration options
@@ -1,211 +0,0 @@
# Default Exporter

The `DefaultExporter` persists traces to your configured storage backend, making them accessible through Studio. It's automatically enabled when using the default observability configuration and requires no external services.

> **Production Observability:** Observability data can quickly overwhelm general-purpose databases in production. For high-traffic applications, we recommend using **ClickHouse** for the observability storage domain via [composite storage](https://mastra.ai/reference/storage/composite). See [Production Recommendations](#production-recommendations) for details.

## Configuration

### Prerequisites

1. **Storage Backend**: Configure a storage provider (libSQL, PostgreSQL, etc.)
2. **Studio**: Install for viewing traces locally

### Basic Setup

```typescript
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Required for trace persistence
  }),
  observability: new Observability({
    configs: {
      local: {
        serviceName: "my-service",
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

### Recommended Configuration

Include `DefaultExporter` in your observability configuration:

```typescript
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (requires MASTRA_CLOUD_ACCESS_TOKEN)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(),
        ],
      },
    },
  }),
});
```

## Viewing Traces

### Studio

Access your traces through Studio:

1. Start Studio
2. Navigate to Observability
3. Filter and search your local traces
4. Inspect detailed span information

## Tracing Strategies

DefaultExporter automatically selects the optimal tracing strategy based on your storage provider. You can also override this selection if needed.

### Available Strategies

| Strategy               | Description                                               | Use Case                            |
| ---------------------- | --------------------------------------------------------- | ----------------------------------- |
| **realtime**           | Process each event immediately                            | Development, debugging, low traffic |
| **batch-with-updates** | Buffer events and batch write with full lifecycle support | Low-volume production               |
| **insert-only**        | Only process completed spans, ignore updates              | High-volume production              |

### Strategy Configuration

```typescript
new DefaultExporter({
  strategy: "auto", // Default - let storage provider decide
  // or explicitly set:
  // strategy: 'realtime' | 'batch-with-updates' | 'insert-only'

  // Batching configuration (applies to both batch-with-updates and insert-only)
  maxBatchSize: 1000, // Max spans per batch
  maxBatchWaitMs: 5000, // Max wait before flushing
  maxBufferSize: 10000, // Max spans to buffer
});
```

## Storage Provider Support

Different storage providers support different tracing strategies. Some providers support observability for production workloads, while others are intended primarily for local development.

If you set the strategy to `'auto'`, the `DefaultExporter` automatically selects the optimal strategy for the storage provider. If you set a strategy that the storage provider doesn't support, you will get an error message.

### Providers with Observability Support

| Storage Provider                                                 | Preferred Strategy | Supported Strategies            | Recommended Use                       |
| ---------------------------------------------------------------- | ------------------ | ------------------------------- | ------------------------------------- |
| **ClickHouse** (`@mastra/clickhouse`)                            | insert-only        | insert-only                     | Production (high volume)              |
| **[PostgreSQL](https://mastra.ai/reference/storage/postgresql)** | batch-with-updates | batch-with-updates, insert-only | Production (low volume)               |
| **[MSSQL](https://mastra.ai/reference/storage/mssql)**           | batch-with-updates | batch-with-updates, insert-only | Production (low volume)               |
| **[MongoDB](https://mastra.ai/reference/storage/mongodb)**       | batch-with-updates | batch-with-updates, insert-only | Production (low volume)               |
| **[libSQL](https://mastra.ai/reference/storage/libsql)**         | batch-with-updates | batch-with-updates, insert-only | Default storage, good for development |

### Providers without Observability Support

The following storage providers **do not support** the observability domain. If you're using one of these providers and need observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider:

- [Convex](https://mastra.ai/reference/storage/convex)
- [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
- [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1)
- [Cloudflare Durable Objects](https://mastra.ai/reference/storage/cloudflare)
- [Upstash](https://mastra.ai/reference/storage/upstash)
- [LanceDB](https://mastra.ai/reference/storage/lance)

### Strategy Benefits

- **realtime**: Immediate visibility, best for debugging
- **batch-with-updates**: 10-100x throughput improvement, full span lifecycle
- **insert-only**: An additional 70% reduction in database operations, well suited for analytics

## Production Recommendations

Observability data grows quickly in production environments. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day. Most general-purpose databases aren't optimized for this write-heavy, append-only workload.

### Recommended: ClickHouse for High-Volume Production

[ClickHouse](https://mastra.ai/reference/storage/composite) is a columnar database designed for high-volume analytics workloads. It's the recommended choice for production observability because:

- **Optimized for writes**: Handles millions of inserts per second
- **Efficient compression**: Reduces storage costs for trace data
- **Fast queries**: Columnar storage enables quick trace lookups and aggregations
- **Time-series native**: Built-in support for time-based data retention and partitioning

### Using Composite Storage

If you're using a provider without observability support (like Convex or DynamoDB) or want to optimize performance, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to ClickHouse while keeping other data in your primary database.

## Batching Behavior

### Flush Triggers

For both batch strategies (`batch-with-updates` and `insert-only`), traces are flushed to storage when any of these conditions are met:

1. **Size trigger**: Buffer reaches `maxBatchSize` spans
2. **Time trigger**: `maxBatchWaitMs` elapsed since first event
3. **Emergency flush**: Buffer approaches `maxBufferSize` limit
4. **Shutdown**: Force flush all pending events
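The interplay of these triggers can be sketched with a toy buffer. This is a simplified model of the behavior described above, not the actual `DefaultExporter` internals (which also handle retries and async writes):

```typescript
// Toy model of the flush triggers above. `flush` stands in for a
// batched write to storage; `now` is passed explicitly for clarity.
class TraceBuffer<T> {
  private buffer: T[] = [];
  private firstEventAt: number | null = null;

  constructor(
    private maxBatchSize: number,
    private maxBatchWaitMs: number,
    private maxBufferSize: number,
    private flush: (batch: T[]) => void,
  ) {}

  add(span: T, now: number): void {
    this.buffer.push(span);
    this.firstEventAt ??= now;
    const sizeTrigger = this.buffer.length >= this.maxBatchSize;
    const timeTrigger = now - this.firstEventAt >= this.maxBatchWaitMs;
    const emergency = this.buffer.length >= this.maxBufferSize;
    if (sizeTrigger || timeTrigger || emergency) this.doFlush();
  }

  shutdown(): void {
    // Force flush all pending events
    if (this.buffer.length > 0) this.doFlush();
  }

  private doFlush(): void {
    this.flush(this.buffer);
    this.buffer = [];
    this.firstEventAt = null;
  }
}
```

With `maxBatchSize: 2`, two adds trigger a size flush, a long-waiting single span triggers a time flush, and `shutdown()` drains whatever remains.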

### Error Handling

The DefaultExporter includes robust error handling for production use:

- **Retry Logic**: Exponential backoff (500ms, 1s, 2s, 4s)
- **Transient Failures**: Automatic retry with backoff
- **Persistent Failures**: Drop batch after 4 failed attempts
- **Buffer Overflow**: Prevent memory issues during storage outages
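The retry schedule can be sketched as a small helper. This is an illustrative model of the documented backoff (one reading: an initial write plus one retry per listed delay, then the batch is dropped), not the exporter's actual code; `sleep` is injected so the schedule can be inspected without real waiting:

```typescript
// Illustrative retry loop with the documented schedule: 500ms, 1s, 2s, 4s.
// Returns true if the write eventually succeeded, false if the batch
// would be dropped after the schedule is exhausted.
async function writeWithRetry(
  write: () => Promise<void>,
  sleep: (ms: number) => Promise<void>,
): Promise<boolean> {
  const delays = [500, 1000, 2000, 4000]; // exponential backoff schedule
  for (let attempt = 0; ; attempt++) {
    try {
      await write();
      return true; // batch persisted
    } catch {
      if (attempt >= delays.length) return false; // give up, drop the batch
      await sleep(delays[attempt]); // back off before the next attempt
    }
  }
}
```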

### Configuration Examples

```typescript
// Zero config - recommended for most users
new DefaultExporter();

// Development override
new DefaultExporter({
  strategy: "realtime", // Immediate visibility for debugging
});

// High-throughput production
new DefaultExporter({
  maxBatchSize: 2000, // Larger batches
  maxBatchWaitMs: 10000, // Wait longer to fill batches
  maxBufferSize: 50000, // Handle longer outages
});

// Low-latency production
new DefaultExporter({
  maxBatchSize: 100, // Smaller batches
  maxBatchWaitMs: 1000, // Flush quickly
});
```

## Related

- [Tracing Overview](https://mastra.ai/docs/observability/tracing/overview)
- [CloudExporter](https://mastra.ai/docs/observability/tracing/exporters/cloud)
- [Composite Storage](https://mastra.ai/reference/storage/composite) - Combine multiple storage providers
- [Storage Configuration](https://mastra.ai/docs/memory/storage)