@mastra/libsql 1.0.0-beta.8 → 1.0.0

Files changed (49)
  1. package/CHANGELOG.md +1358 -0
  2. package/dist/docs/README.md +39 -0
  3. package/dist/docs/SKILL.md +40 -0
  4. package/dist/docs/SOURCE_MAP.json +6 -0
  5. package/dist/docs/agents/01-agent-memory.md +166 -0
  6. package/dist/docs/agents/02-networks.md +292 -0
  7. package/dist/docs/agents/03-agent-approval.md +377 -0
  8. package/dist/docs/agents/04-network-approval.md +274 -0
  9. package/dist/docs/core/01-reference.md +151 -0
  10. package/dist/docs/guides/01-ai-sdk.md +141 -0
  11. package/dist/docs/memory/01-overview.md +76 -0
  12. package/dist/docs/memory/02-storage.md +233 -0
  13. package/dist/docs/memory/03-working-memory.md +390 -0
  14. package/dist/docs/memory/04-semantic-recall.md +233 -0
  15. package/dist/docs/memory/05-memory-processors.md +318 -0
  16. package/dist/docs/memory/06-reference.md +133 -0
  17. package/dist/docs/observability/01-overview.md +64 -0
  18. package/dist/docs/observability/02-default.md +177 -0
  19. package/dist/docs/rag/01-retrieval.md +548 -0
  20. package/dist/docs/storage/01-reference.md +542 -0
  21. package/dist/docs/vectors/01-reference.md +213 -0
  22. package/dist/docs/workflows/01-snapshots.md +240 -0
  23. package/dist/index.cjs +2394 -1824
  24. package/dist/index.cjs.map +1 -1
  25. package/dist/index.js +2392 -1827
  26. package/dist/index.js.map +1 -1
  27. package/dist/storage/db/index.d.ts +305 -0
  28. package/dist/storage/db/index.d.ts.map +1 -0
  29. package/dist/storage/{domains → db}/utils.d.ts +21 -13
  30. package/dist/storage/db/utils.d.ts.map +1 -0
  31. package/dist/storage/domains/agents/index.d.ts +5 -7
  32. package/dist/storage/domains/agents/index.d.ts.map +1 -1
  33. package/dist/storage/domains/memory/index.d.ts +8 -10
  34. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  35. package/dist/storage/domains/observability/index.d.ts +42 -27
  36. package/dist/storage/domains/observability/index.d.ts.map +1 -1
  37. package/dist/storage/domains/scores/index.d.ts +11 -27
  38. package/dist/storage/domains/scores/index.d.ts.map +1 -1
  39. package/dist/storage/domains/workflows/index.d.ts +10 -14
  40. package/dist/storage/domains/workflows/index.d.ts.map +1 -1
  41. package/dist/storage/index.d.ts +28 -189
  42. package/dist/storage/index.d.ts.map +1 -1
  43. package/dist/vector/index.d.ts +6 -2
  44. package/dist/vector/index.d.ts.map +1 -1
  45. package/dist/vector/sql-builder.d.ts.map +1 -1
  46. package/package.json +9 -8
  47. package/dist/storage/domains/operations/index.d.ts +0 -110
  48. package/dist/storage/domains/operations/index.d.ts.map +0 -1
  49. package/dist/storage/domains/utils.d.ts.map +0 -1
+++ package/dist/docs/memory/05-memory-processors.md
@@ -0,0 +1,318 @@
> Learn how to use memory processors in Mastra to filter, trim, and transform messages before they reach the language model.

# Memory Processors

Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.

Memory processors are [processors](https://mastra.ai/docs/v1/agents/processors) that operate specifically on memory-related messages and state.

## Built-in Memory Processors

Mastra automatically adds these processors when memory is enabled:

### MessageHistory

Retrieves message history and persists new messages.

**When you configure:**

```typescript
memory: new Memory({
  lastMessages: 10,
});
```

**Mastra internally:**

1. Creates a `MessageHistory` processor with `limit: 10`
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)

**What it does:**

- **Input**: Fetches the last 10 messages from storage and prepends them to the conversation
- **Output**: Persists new messages to storage after the model responds

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  id: "test-agent",
  name: "Test Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
});
```

### SemanticRecall

Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

**When you configure:**

```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
});
```

**Mastra internally:**

1. Creates a `SemanticRecall` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)
4. Requires both a vector store and embedder to be configured

**What it does:**

- **Input**: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- **Output**: Creates embeddings for new messages and stores them in the vector store for future retrieval

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { PineconeVector } from "@mastra/pinecone";
import { OpenAIEmbedder } from "@mastra/openai";

const agent = new Agent({
  name: "semantic-agent",
  instructions: "You are a helpful assistant with semantic memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    vector: new PineconeVector({
      id: "memory-vector",
      apiKey: process.env.PINECONE_API_KEY!,
    }),
    embedder: new OpenAIEmbedder({
      model: "text-embedding-3-small",
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
});
```

### WorkingMemory

Manages working memory state across conversations.

**When you configure:**

```typescript
memory: new Memory({
  workingMemory: { enabled: true },
});
```

**Mastra internally:**

1. Creates a `WorkingMemory` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Requires a storage adapter to be configured

**What it does:**

- **Input**: Retrieves working memory state for the current thread and prepends it to the conversation
- **Output**: No output processing

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  name: "working-memory-agent",
  instructions: "You are an assistant with working memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
});
```

## Manual Control and Deduplication

If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** automatically add it. This gives you full control over processor ordering:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { MessageHistory, TokenLimiter } from "@mastra/core/processors";
import { LibSQLStore } from "@mastra/libsql";

// Custom MessageHistory with different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  lastMessages: 20,
});

const agent = new Agent({
  name: "custom-memory-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
});
```

## Processor Execution Order

Understanding the execution order is important when combining guardrails with memory:

### Input Processors

```
[Memory Processors] → [Your inputProcessors]
```

1. **Memory processors run FIRST**: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
2. **Your input processors run AFTER**: guardrails, filters, validators

This means memory loads message history before your processors can validate or filter the input.

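The input ordering can be sketched as a toy pipeline. This is a simplified simulation for illustration only (the `Processor` type and both processors here are hypothetical stand-ins), not Mastra's internals:

```typescript
// Toy model: each processor takes the message list and returns a new one.
type Msg = { role: string; content: string };
type Processor = (messages: Msg[]) => Msg[];

// Stands in for MessageHistory: prepends history loaded from storage.
const memoryHistory: Processor = (messages) => [
  { role: "user", content: "earlier message from storage" },
  ...messages,
];

// Stands in for your input guardrail: it runs AFTER memory, so it sees
// the loaded history as well as the new input.
const inputValidator: Processor = (messages) => {
  if (messages.some((m) => m.content.includes("forbidden"))) {
    throw new Error("Invalid input detected");
  }
  return messages;
};

// Memory processors first, then your inputProcessors.
const runInputPipeline = (messages: Msg[]): Msg[] =>
  [memoryHistory, inputValidator].reduce((msgs, p) => p(msgs), messages);
```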
### Output Processors

```
[Your outputProcessors] → [Memory Processors]
```

1. **Your output processors run FIRST**: guardrails, filters, validators
2. **Memory processors run AFTER**: `SemanticRecall` (embeddings), `MessageHistory` (persistence)

This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

## Guardrails and Memory

The default execution order provides safe guardrail behavior:

### Output guardrails (recommended)

Output guardrails run **before** memory processors save messages. If a guardrail aborts:

- The tripwire is triggered
- Memory processors are skipped
- **No messages are persisted to storage**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Output guardrail that blocks inappropriate content
// (containsBadContent is a placeholder for your own moderation check)
const contentBlocker = {
  id: "content-blocker",
  processOutputResult: async ({ messages, abort }) => {
    const hasInappropriateContent = messages.some((msg) =>
      containsBadContent(msg),
    );
    if (hasInappropriateContent) {
      abort("Content blocked by guardrail");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "safe-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
});

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate("Hello");
if (result.tripwire) {
  console.log("Blocked:", result.tripwire.reason);
  // Memory is empty - no messages were persisted
}
```

### Input guardrails

Input guardrails run **after** memory processors load history. If a guardrail aborts:

- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- **No messages are persisted to storage**

```typescript
// Input guardrail that validates user input
// (isInvalidInput is a placeholder for your own validation logic)
const inputValidator = {
  id: "input-validator",
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast((m) => m.role === "user");
    if (isInvalidInput(lastUserMessage)) {
      abort("Invalid input detected");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "validated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
});
```

303
+ ### Summary
304
+
305
+ | Guardrail Type | When it runs | If it aborts |
306
+ | -------------- | ------------ | ------------ |
307
+ | Input | After memory loads history | LLM not called, nothing saved |
308
+ | Output | Before memory saves | Nothing saved to storage |
309
+
310
+ Both scenarios are safe - guardrails prevent inappropriate content from being persisted to memory
311
+
312
+ ## Related documentation
313
+
314
+ - [Processors](https://mastra.ai/docs/v1/agents/processors) - General processor concepts and custom processor creation
315
+ - [Guardrails](https://mastra.ai/docs/v1/agents/guardrails) - Security and validation processors
316
+ - [Memory Overview](https://mastra.ai/docs/v1/memory/overview) - Memory types and configuration
317
+
318
+ When creating custom processors avoid mutating the input `messages` array or its objects directly.
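For example, a processor that rewrites content can return new message objects instead of writing into the ones it receives (a sketch; the message shape is simplified and `emailRedactor` is a hypothetical processor):

```typescript
type Message = { role: string; content: string };

// Redacts email addresses by building new objects, leaving the inputs untouched.
const emailRedactor = {
  id: "email-redactor",
  processInput: async ({ messages }: { messages: Message[] }): Promise<Message[]> =>
    messages.map((msg) => ({
      ...msg,
      content: msg.content.replace(/\S+@\S+\.\S+/g, "[redacted]"),
    })),
};
```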
+++ package/dist/docs/memory/06-reference.md
@@ -0,0 +1,133 @@
# Memory API Reference

> API reference for memory - 1 entry

---

## Reference: Memory Class

> Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.

The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.

## Usage example

```typescript title="src/mastra/agents/test-agent.ts"
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
```

> To enable `workingMemory` on an agent, you'll need a storage provider configured on your main Mastra instance. See [Mastra class](../core/mastra-class) for more information.

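The note above means your project needs something like the following on the Mastra instance (a minimal configuration sketch following the storage examples elsewhere in these docs; the file path and `id` are placeholders):

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage backing workingMemory
  }),
});
```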
## Constructor parameters

### Options parameters

## Returns

## Extended usage example

```typescript title="src/mastra/agents/test-agent.ts"
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

export const agent = new Agent({
  name: "test-agent",
  instructions: "You are an agent with memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "test-agent-storage",
      url: "file:./working-memory.db",
    }),
    vector: new LibSQLVector({
      id: "test-agent-vector",
      url: "file:./vector-memory.db",
    }),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: "resource",
      },
      workingMemory: {
        enabled: true,
      },
      generateTitle: true,
    },
  }),
});
```

## PostgreSQL with index configuration

```typescript title="src/mastra/agents/pg-agent.ts"
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";
import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
import { PgStore, PgVector } from "@mastra/pg";

export const agent = new Agent({
  name: "pg-agent",
  instructions: "You are an agent with optimized PostgreSQL memory.",
  model: "openai/gpt-5.1",
  memory: new Memory({
    storage: new PgStore({
      id: "pg-agent-storage",
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      id: "pg-agent-vector",
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: "resource",
        indexConfig: {
          type: "hnsw", // Use HNSW for better performance
          metric: "dotproduct", // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
});
```

### Related

- [Getting Started with Memory](https://mastra.ai/docs/v1/memory/overview)
- [Semantic Recall](https://mastra.ai/docs/v1/memory/semantic-recall)
- [Working Memory](https://mastra.ai/docs/v1/memory/working-memory)
- [Memory Processors](https://mastra.ai/docs/v1/memory/memory-processors)
- [createThread](https://mastra.ai/reference/v1/memory/createThread)
- [recall](https://mastra.ai/reference/v1/memory/recall)
- [getThreadById](https://mastra.ai/reference/v1/memory/getThreadById)
- [listThreads](https://mastra.ai/reference/v1/memory/listThreads)
- [deleteMessages](https://mastra.ai/reference/v1/memory/deleteMessages)
- [cloneThread](https://mastra.ai/reference/v1/memory/cloneThread)
- [Clone Utility Methods](https://mastra.ai/reference/v1/memory/clone-utilities)
+++ package/dist/docs/observability/01-overview.md
@@ -0,0 +1,64 @@
> Monitor and debug applications with Mastra

# Observability Overview

Mastra provides observability features for AI applications. Monitor LLM operations, trace agent decisions, and debug complex workflows with tools that understand AI-specific patterns.

## Key Features

### Tracing

Specialized tracing for AI operations that captures:

- **Model interactions**: Token usage, latency, prompts, and completions
- **Agent execution**: Decision paths, tool calls, and memory operations
- **Workflow steps**: Branching logic, parallel execution, and step outputs
- **Automatic instrumentation**: Tracing with decorators

## Quick Start

Configure `Observability` in your Mastra instance:

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { PinoLogger } from "@mastra/loggers";
import { LibSQLStore } from "@mastra/libsql";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";

export const mastra = new Mastra({
  logger: new PinoLogger(),
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Storage is required for tracing
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (if MASTRA_CLOUD_ACCESS_TOKEN is set)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(), // Redacts sensitive data like passwords, tokens, keys
        ],
      },
    },
  }),
});
```

With this basic setup, you will see traces and logs in both Studio and Mastra Cloud.

We also support various external tracing providers like MLflow, Langfuse, Braintrust, and any OpenTelemetry-compatible platform (Datadog, New Relic, SigNoz, etc.). See the [Tracing](https://mastra.ai/docs/v1/observability/tracing/overview) documentation for details.

## What's Next?

- **[Set up Tracing](https://mastra.ai/docs/v1/observability/tracing/overview)**: Configure tracing for your application
- **[Configure Logging](https://mastra.ai/docs/v1/observability/logging)**: Add structured logging
- **[API Reference](https://mastra.ai/reference/v1/observability/tracing/instances)**: Detailed configuration options
+++ package/dist/docs/observability/02-default.md
@@ -0,0 +1,177 @@
> Store traces locally for development and debugging

# Default Exporter

The `DefaultExporter` persists traces to your configured storage backend, making them accessible through Studio. It's automatically enabled when using the default observability configuration and requires no external services.

## Configuration

### Prerequisites

1. **Storage Backend**: Configure a storage provider (libSQL, PostgreSQL, etc.)
2. **Studio**: Install for viewing traces locally

### Basic Setup

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";
import { Observability, DefaultExporter } from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db", // Required for trace persistence
  }),
  observability: new Observability({
    configs: {
      local: {
        serviceName: "my-service",
        exporters: [new DefaultExporter()],
      },
    },
  }),
});
```

### Recommended Configuration

Include `DefaultExporter` in your observability configuration:

```typescript
import { Mastra } from "@mastra/core";
import {
  Observability,
  DefaultExporter,
  CloudExporter,
  SensitiveDataFilter,
} from "@mastra/observability";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
  observability: new Observability({
    configs: {
      default: {
        serviceName: "mastra",
        exporters: [
          new DefaultExporter(), // Persists traces to storage for Mastra Studio
          new CloudExporter(), // Sends traces to Mastra Cloud (requires MASTRA_CLOUD_ACCESS_TOKEN)
        ],
        spanOutputProcessors: [
          new SensitiveDataFilter(),
        ],
      },
    },
  }),
});
```

## Viewing Traces

### Studio

Access your traces through Studio:

1. Start Studio
2. Navigate to Observability
3. Filter and search your local traces
4. Inspect detailed span information

## Tracing Strategies

`DefaultExporter` automatically selects the optimal tracing strategy based on your storage provider. You can also override this selection if needed.

### Available Strategies

| Strategy               | Description                                               | Use Case                            |
| ---------------------- | --------------------------------------------------------- | ----------------------------------- |
| **realtime**           | Process each event immediately                            | Development, debugging, low traffic |
| **batch-with-updates** | Buffer events and batch-write with full lifecycle support | Low-volume production               |
| **insert-only**        | Only process completed spans, ignore updates              | High-volume production              |

### Strategy Configuration

```typescript
new DefaultExporter({
  strategy: "auto", // Default - let the storage provider decide
  // or explicitly set:
  // strategy: 'realtime' | 'batch-with-updates' | 'insert-only'

  // Batching configuration (applies to both batch-with-updates and insert-only)
  maxBatchSize: 1000, // Max spans per batch
  maxBatchWaitMs: 5000, // Max wait before flushing
  maxBufferSize: 10000, // Max spans to buffer
});
```

## Storage Provider Support

Different storage providers support different tracing strategies.

If you set the strategy to `'auto'`, the `DefaultExporter` automatically selects the optimal strategy for the storage provider. If you set a strategy the storage provider doesn't support, you will get an error.

| Storage Provider                                                    | Preferred Strategy | Supported Strategies                      | Notes                                 |
| ------------------------------------------------------------------- | ------------------ | ----------------------------------------- | ------------------------------------- |
| **[libSQL](https://mastra.ai/reference/v1/storage/libsql)**          | batch-with-updates | realtime, batch-with-updates, insert-only | Default storage, good for development |
| **[PostgreSQL](https://mastra.ai/reference/v1/storage/postgresql)**  | batch-with-updates | batch-with-updates, insert-only           | Recommended for production            |

### Strategy Benefits

- **realtime**: Immediate visibility, best for debugging
- **batch-with-updates**: 10-100x throughput improvement, full span lifecycle
- **insert-only**: Additional 70% reduction in database operations, perfect for analytics

## Batching Behavior

### Flush Triggers

For both batch strategies (`batch-with-updates` and `insert-only`), traces are flushed to storage when any of these conditions is met:

1. **Size trigger**: Buffer reaches `maxBatchSize` spans
2. **Time trigger**: `maxBatchWaitMs` has elapsed since the first event
3. **Emergency flush**: Buffer approaches the `maxBufferSize` limit
4. **Shutdown**: All pending events are force-flushed

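The triggers above can be sketched as a small buffer class. This is a simplified illustration of the described behavior (the `BatchBuffer` class is hypothetical), not the actual `DefaultExporter` implementation:

```typescript
type Span = { id: string };

class BatchBuffer {
  private buffer: Span[] = [];
  private firstEventAt: number | null = null;

  constructor(
    private maxBatchSize = 1000,
    private maxBatchWaitMs = 5000,
    private maxBufferSize = 10000,
  ) {}

  // Returns true when the buffer should be flushed after adding the span.
  add(span: Span, now: number): boolean {
    this.buffer.push(span);
    if (this.firstEventAt === null) this.firstEventAt = now;
    return this.shouldFlush(now);
  }

  shouldFlush(now: number): boolean {
    if (this.buffer.length >= this.maxBufferSize) return true; // emergency flush
    if (this.buffer.length >= this.maxBatchSize) return true; // size trigger
    if (this.firstEventAt !== null && now - this.firstEventAt >= this.maxBatchWaitMs) {
      return true; // time trigger
    }
    return false;
  }

  // On flush (or shutdown), drain the buffer and reset the timer.
  flush(): Span[] {
    const out = this.buffer;
    this.buffer = [];
    this.firstEventAt = null;
    return out;
  }
}
```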
### Error Handling

The `DefaultExporter` includes robust error handling for production use:

- **Retry Logic**: Exponential backoff (500ms, 1s, 2s, 4s)
- **Transient Failures**: Automatic retry with backoff
- **Persistent Failures**: Drop the batch after 4 failed attempts
- **Buffer Overflow**: Prevents memory issues during storage outages

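The retry schedule described above can be sketched as follows (a simplified illustration under the stated assumptions of a 500ms base delay doubling each attempt; `writeWithRetry` is a hypothetical helper, not the exporter's real code):

```typescript
// Delays for each failed attempt: 500ms, 1s, 2s, 4s.
function backoffDelaysMs(attempts = 4, baseMs = 500): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}

// Retry a storage write with backoff; drop the batch after all attempts fail.
async function writeWithRetry<T>(
  write: () => Promise<T>,
  sleep: (ms: number) => Promise<void>,
): Promise<T | undefined> {
  for (const delay of backoffDelaysMs()) {
    try {
      return await write();
    } catch {
      await sleep(delay); // transient failure: wait, then retry
    }
  }
  return undefined; // persistent failure: caller drops the batch
}
```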
### Configuration Examples

```typescript
// Zero config - recommended for most users
new DefaultExporter();

// Development override
new DefaultExporter({
  strategy: "realtime", // Immediate visibility for debugging
});

// High-throughput production
new DefaultExporter({
  maxBatchSize: 2000, // Larger batches
  maxBatchWaitMs: 10000, // Wait longer to fill batches
  maxBufferSize: 50000, // Handle longer outages
});

// Low-latency production
new DefaultExporter({
  maxBatchSize: 100, // Smaller batches
  maxBatchWaitMs: 1000, // Flush quickly
});
```

## Related

- [Tracing Overview](https://mastra.ai/docs/v1/observability/tracing/overview)
- [CloudExporter](https://mastra.ai/docs/v1/observability/tracing/exporters/cloud)
- [Storage Configuration](https://mastra.ai/docs/v1/memory/storage)