@mastra/libsql 1.6.1 → 1.6.2-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/CHANGELOG.md +11 -0
  2. package/dist/docs/SKILL.md +50 -0
  3. package/dist/docs/assets/SOURCE_MAP.json +6 -0
  4. package/dist/docs/references/docs-agents-agent-approval.md +558 -0
  5. package/dist/docs/references/docs-agents-agent-memory.md +209 -0
  6. package/dist/docs/references/docs-agents-network-approval.md +275 -0
  7. package/dist/docs/references/docs-agents-networks.md +299 -0
  8. package/dist/docs/references/docs-memory-memory-processors.md +314 -0
  9. package/dist/docs/references/docs-memory-message-history.md +260 -0
  10. package/dist/docs/references/docs-memory-overview.md +45 -0
  11. package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
  12. package/dist/docs/references/docs-memory-storage.md +261 -0
  13. package/dist/docs/references/docs-memory-working-memory.md +400 -0
  14. package/dist/docs/references/docs-observability-overview.md +70 -0
  15. package/dist/docs/references/docs-observability-tracing-exporters-default.md +209 -0
  16. package/dist/docs/references/docs-rag-retrieval.md +515 -0
  17. package/dist/docs/references/docs-workflows-snapshots.md +238 -0
  18. package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +140 -0
  19. package/dist/docs/references/reference-core-getMemory.md +50 -0
  20. package/dist/docs/references/reference-core-listMemory.md +56 -0
  21. package/dist/docs/references/reference-core-mastra-class.md +66 -0
  22. package/dist/docs/references/reference-memory-memory-class.md +147 -0
  23. package/dist/docs/references/reference-storage-composite.md +235 -0
  24. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  25. package/dist/docs/references/reference-storage-libsql.md +135 -0
  26. package/dist/docs/references/reference-vectors-libsql.md +305 -0
  27. package/dist/index.cjs +14 -3
  28. package/dist/index.cjs.map +1 -1
  29. package/dist/index.js +14 -3
  30. package/dist/index.js.map +1 -1
  31. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  32. package/dist/vector/index.d.ts.map +1 -1
  33. package/package.json +3 -3
package/dist/docs/references/docs-workflows-snapshots.md
@@ -0,0 +1,238 @@
# Snapshots

In Mastra, a snapshot is a serializable representation of a workflow's complete execution state at a specific point in time. Snapshots capture all the information needed to resume a workflow from exactly where it left off, including:

- The current state of each step in the workflow
- The outputs of completed steps
- The execution path taken through the workflow
- Any suspended steps and their metadata
- The remaining retry attempts for each step
- Additional contextual data needed to resume execution

Snapshots are automatically created and managed by Mastra whenever a workflow is suspended, and are persisted to the configured storage system.

## The role of snapshots in suspend and resume

Snapshots are the key mechanism enabling Mastra's suspend and resume capabilities. When a workflow step calls `await suspend()`:

1. The workflow execution is paused at that exact point
2. The current state of the workflow is captured as a snapshot
3. The snapshot is persisted to storage
4. The workflow step is marked with a status of `'suspended'`
5. Later, when `resume()` is called on the suspended step, the snapshot is retrieved
6. The workflow execution resumes from exactly where it left off

This mechanism provides a powerful way to implement human-in-the-loop workflows, handle rate limiting, wait for external resources, and implement complex branching workflows that may need to pause for extended periods.

## Snapshot anatomy

Each snapshot includes the `runId`, input, step status (`success`, `suspended`, etc.), any suspend and resume payloads, and the final output. This ensures full context is available when resuming execution.

```json
{
  "runId": "34904c14-e79e-4a12-9804-9655d4616c50",
  "status": "success",
  "value": {},
  "context": {
    "input": {
      "value": 100,
      "user": "Michael",
      "requiredApprovers": ["manager", "finance"]
    },
    "approval-step": {
      "payload": {
        "value": 100,
        "user": "Michael",
        "requiredApprovers": ["manager", "finance"]
      },
      "startedAt": 1758027577955,
      "status": "success",
      "suspendPayload": {
        "message": "Workflow suspended",
        "requestedBy": "Michael",
        "approvers": ["manager", "finance"]
      },
      "suspendedAt": 1758027578065,
      "resumePayload": { "confirm": true, "approver": "manager" },
      "resumedAt": 1758027578517,
      "output": { "value": 100, "approved": true },
      "endedAt": 1758027578634
    }
  },
  "activePaths": [],
  "serializedStepGraph": [
    {
      "type": "step",
      "step": {
        "id": "approval-step",
        "description": "Accepts a value, waits for confirmation"
      }
    }
  ],
  "suspendedPaths": {},
  "waitingPaths": {},
  "result": { "value": 100, "approved": true },
  "requestContext": {},
  "timestamp": 1758027578740
}
```

## How snapshots are saved and retrieved

Snapshots are saved to the configured storage system. By default, Mastra uses libSQL, but you can configure Upstash or PostgreSQL instead. Each snapshot is saved in the `workflow_snapshots` table and identified by the workflow's `runId`.

Read more about:

- [libSQL Storage](https://mastra.ai/reference/storage/libsql)
- [Upstash Storage](https://mastra.ai/reference/storage/upstash)
- [PostgreSQL Storage](https://mastra.ai/reference/storage/postgresql)

### Saving snapshots

When a workflow is suspended, Mastra automatically persists the workflow snapshot with these steps:

1. The `suspend()` function in a step execution triggers the snapshot process
2. The `WorkflowInstance.suspend()` method records the suspended state machine
3. `persistWorkflowSnapshot()` is called to save the current state
4. The snapshot is serialized and stored in the `workflow_snapshots` table of the configured database
5. The storage record includes the workflow name, run ID, and the serialized snapshot
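The steps above produce a storage record keyed by workflow name and run ID. A minimal sketch of what such a record might look like — the column names and the `SnapshotRecord` type are illustrative assumptions; only the three fields named in step 5 are guaranteed by the doc:

```typescript
// Hypothetical shape of a workflow_snapshots record; field names are
// illustrative. Only workflow name, run ID, and the serialized snapshot
// are guaranteed by the steps above.
interface SnapshotRecord {
  workflow_name: string
  run_id: string
  snapshot: string // the snapshot object, serialized to JSON
}

const record: SnapshotRecord = {
  workflow_name: 'approvalWorkflow',
  run_id: '34904c14-e79e-4a12-9804-9655d4616c50',
  snapshot: JSON.stringify({
    runId: '34904c14-e79e-4a12-9804-9655d4616c50',
    status: 'suspended', // truncated; a real snapshot carries full context
  }),
}

// Deserializing the stored string restores the execution state
const restored = JSON.parse(record.snapshot)
console.log(restored.status) // "suspended"
```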

### Retrieving snapshots

When a workflow is resumed, Mastra retrieves the persisted snapshot with these steps:

1. The `resume()` method is called with a specific step ID
2. The snapshot is loaded from storage using `loadWorkflowSnapshot()`
3. The snapshot is parsed and prepared for resumption
4. The workflow execution is recreated with the snapshot state
5. The suspended step is resumed, and execution continues

```typescript
const storage = mastra.getStorage()
const workflowStore = await storage?.getStore('workflows')

const snapshot = await workflowStore?.loadWorkflowSnapshot({
  runId: '<run-id>',
  workflowName: '<workflow-id>',
})

console.log(snapshot)
```

## Storage options for snapshots

Snapshots are persisted using a `storage` instance configured on the `Mastra` class. This storage layer is shared across all workflows registered to that instance. Mastra supports multiple storage options for flexibility in different environments.

```typescript
import { Mastra } from '@mastra/core'
import { LibSQLStore } from '@mastra/libsql'
import { approvalWorkflow } from './workflows'

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: ':memory:',
  }),
  workflows: { approvalWorkflow },
})
```

- [libSQL Storage](https://mastra.ai/reference/storage/libsql)
- [PostgreSQL Storage](https://mastra.ai/reference/storage/postgresql)
- [MongoDB Storage](https://mastra.ai/reference/storage/mongodb)
- [Upstash Storage](https://mastra.ai/reference/storage/upstash)
- [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1)
- [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
- [More storage providers](https://mastra.ai/docs/memory/storage)

## Best practices

1. **Ensure Serializability**: Any data that needs to be included in the snapshot must be serializable (convertible to JSON).
2. **Minimize Snapshot Size**: Avoid storing large data objects directly in the workflow context. Instead, store references to them (like IDs) and retrieve the data when needed.
3. **Handle Resume Context Carefully**: When resuming a workflow, carefully consider what context to provide. This will be merged with the existing snapshot data.
4. **Set Up Proper Monitoring**: Implement monitoring for suspended workflows, especially long-running ones, to ensure they are properly resumed.
5. **Consider Storage Scaling**: For applications with many suspended workflows, ensure your storage solution is appropriately scaled.
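The serializability requirement in best practice #1 can be checked mechanically before a value goes into workflow context. A self-contained sketch — the helper name `isJsonSafe` is illustrative, not a Mastra API — that treats a value as snapshot-safe only if it survives a JSON round-trip unchanged:

```typescript
// Illustrative helper, not part of Mastra: a value is snapshot-safe
// only if JSON.parse(JSON.stringify(value)) reproduces it exactly.
function deepEqual(a: any, b: any): boolean {
  if (a === b) return true
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) return false
  const keysA = Object.keys(a)
  const keysB = Object.keys(b)
  if (keysA.length !== keysB.length) return false
  return keysA.every(k => deepEqual(a[k], b[k]))
}

function isJsonSafe(value: unknown): boolean {
  try {
    const encoded = JSON.stringify(value)
    if (encoded === undefined) return false // undefined, functions, symbols
    return deepEqual(JSON.parse(encoded), value)
  } catch {
    return false // circular references, BigInt, etc.
  }
}

console.log(isJsonSafe({ value: 100, user: 'Michael' })) // true
console.log(isJsonSafe({ onApprove: () => true })) // false — functions are silently dropped
```

Note that a plain `try { JSON.stringify(value) }` is not enough: functions and `undefined` fields are dropped silently rather than throwing, which is why the round-trip comparison matters.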

## Custom snapshot metadata

You can attach custom metadata when suspending a workflow by defining a `suspendSchema`. This metadata is stored in the snapshot and made available when the workflow is resumed.

```typescript
import { createWorkflow, createStep } from '@mastra/core/workflows'
import { z } from 'zod'

const approvalStep = createStep({
  id: 'approval-step',
  description: 'Accepts a value, waits for confirmation',
  inputSchema: z.object({
    value: z.number(),
    user: z.string(),
    requiredApprovers: z.array(z.string()),
  }),
  suspendSchema: z.object({
    message: z.string(),
    requestedBy: z.string(),
    approvers: z.array(z.string()),
  }),
  resumeSchema: z.object({
    confirm: z.boolean(),
    approver: z.string(),
  }),
  outputSchema: z.object({
    value: z.number(),
    approved: z.boolean(),
  }),
  execute: async ({ inputData, resumeData, suspend }) => {
    const { value, user, requiredApprovers } = inputData
    const { confirm } = resumeData ?? {}

    if (!confirm) {
      return await suspend({
        message: 'Workflow suspended',
        requestedBy: user,
        approvers: [...requiredApprovers],
      })
    }

    return {
      value,
      approved: confirm,
    }
  },
})
```

### Providing resume data

Use `resumeData` to pass structured input when resuming a suspended step. It must match the step's `resumeSchema`.

```typescript
const workflow = mastra.getWorkflow('approvalWorkflow')

const run = await workflow.createRun()

const result = await run.start({
  inputData: {
    value: 100,
    user: 'Michael',
    requiredApprovers: ['manager', 'finance'],
  },
})

if (result.status === 'suspended') {
  const resumedResult = await run.resume({
    step: 'approval-step',
    resumeData: {
      confirm: true,
      approver: 'manager',
    },
  })
}
```

## Related

- [Control Flow](https://mastra.ai/docs/workflows/control-flow)
- [Suspend & Resume](https://mastra.ai/docs/workflows/suspend-and-resume)
- [Time Travel](https://mastra.ai/docs/workflows/time-travel)
- [Human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop)
package/dist/docs/references/guides-agent-frameworks-ai-sdk.md
@@ -0,0 +1,140 @@
# AI SDK

If you're already using the [Vercel AI SDK](https://sdk.vercel.ai) directly and want to add Mastra capabilities like [processors](https://mastra.ai/docs/agents/processors) or [memory](https://mastra.ai/docs/memory/memory-processors) without switching to the full Mastra agent API, [`withMastra()`](https://mastra.ai/reference/ai-sdk/with-mastra) lets you wrap any AI SDK model with these features. This is useful when you want to keep your existing AI SDK code but add input/output processing, conversation persistence, or content filtering.

> **Tip:** If you want to use Mastra together with AI SDK UI (e.g. `useChat()`), visit the [AI SDK UI guide](https://mastra.ai/guides/build-your-ui/ai-sdk-ui).

## Installation

Install `@mastra/ai-sdk` to begin using the `withMastra()` function.

**npm**:

```bash
npm install @mastra/ai-sdk@latest
```

**pnpm**:

```bash
pnpm add @mastra/ai-sdk@latest
```

**Yarn**:

```bash
yarn add @mastra/ai-sdk@latest
```

**Bun**:

```bash
bun add @mastra/ai-sdk@latest
```

## Examples

### With Processors

Processors let you transform messages before they're sent to the model (`processInput`) and after responses are received (`processOutputResult`). This example creates a logging processor that logs message counts at each stage, then wraps an OpenAI model with it.

```typescript
import { openai } from '@ai-sdk/openai'
import { generateText } from 'ai'
import { withMastra } from '@mastra/ai-sdk'
import type { Processor } from '@mastra/core/processors'

const loggingProcessor: Processor<'logger'> = {
  id: 'logger',
  async processInput({ messages }) {
    console.log('Input:', messages.length, 'messages')
    return messages
  },
  async processOutputResult({ messages }) {
    console.log('Output:', messages.length, 'messages')
    return messages
  },
}

const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [loggingProcessor],
  outputProcessors: [loggingProcessor],
})

const { text } = await generateText({
  model,
  prompt: 'What is 2 + 2?',
})
```
+ ```
69
+
70
+ ### With Memory
71
+
72
+ Memory automatically loads previous messages from storage before the LLM call and saves new messages after. This example configures a libSQL storage backend to persist conversation history, loading the last 10 messages for context.
73
+
74
+ ```typescript
75
+ import { openai } from '@ai-sdk/openai'
76
+ import { generateText } from 'ai'
77
+ import { withMastra } from '@mastra/ai-sdk'
78
+ import { LibSQLStore } from '@mastra/libsql'
79
+
80
+ const storage = new LibSQLStore({
81
+ id: 'my-app',
82
+ url: 'file:./data.db',
83
+ })
84
+ await storage.init()
85
+
86
+ const memoryStorage = await storage.getStore('memory')
87
+
88
+ const model = withMastra(openai('gpt-4o'), {
89
+ memory: {
90
+ storage: memoryStorage!,
91
+ threadId: 'user-thread-123',
92
+ resourceId: 'user-123',
93
+ lastMessages: 10,
94
+ },
95
+ })
96
+
97
+ const { text } = await generateText({
98
+ model,
99
+ prompt: 'What did we talk about earlier?',
100
+ })
101
+ ```
102
+
103
+ ### With Processors & Memory
104
+
105
+ You can combine processors and memory together. Input processors run after memory loads historical messages, and output processors run before memory saves the response.
106
+
107
+ ```typescript
108
+ import { openai } from '@ai-sdk/openai'
109
+ import { generateText } from 'ai'
110
+ import { withMastra } from '@mastra/ai-sdk'
111
+ import { LibSQLStore } from '@mastra/libsql'
112
+
113
+ const storage = new LibSQLStore({ id: 'my-app', url: 'file:./data.db' })
114
+ await storage.init()
115
+
116
+ const memoryStorage = await storage.getStore('memory')
117
+
118
+ const model = withMastra(openai('gpt-4o'), {
119
+ inputProcessors: [myGuardProcessor],
120
+ outputProcessors: [myLoggingProcessor],
121
+ memory: {
122
+ storage: memoryStorage!,
123
+ threadId: 'thread-123',
124
+ resourceId: 'user-123',
125
+ lastMessages: 10,
126
+ },
127
+ })
128
+
129
+ const { text } = await generateText({
130
+ model,
131
+ prompt: 'Hello!',
132
+ })
133
+ ```
134
+
135
+ ## Related
136
+
137
+ - [`withMastra()`](https://mastra.ai/reference/ai-sdk/with-mastra) - API reference for `withMastra()`
138
+ - [Processors](https://mastra.ai/docs/agents/processors) - Learn about input and output processors
139
+ - [Memory](https://mastra.ai/docs/memory/overview) - Overview of Mastra's memory system
140
+ - [AI SDK UI](https://mastra.ai/guides/build-your-ui/ai-sdk-ui) - Using AI SDK UI hooks with Mastra agents, workflows, and networks
package/dist/docs/references/reference-core-getMemory.md
@@ -0,0 +1,50 @@
# Mastra.getMemory()

The `.getMemory()` method retrieves a memory instance from the Mastra registry by its key. Memory instances are registered in the Mastra constructor and can be referenced by stored agents.

## Usage example

```typescript
const memory = mastra.getMemory('conversationMemory')

// Use the memory instance
const thread = await memory.createThread({
  resourceId: 'user-123',
  title: 'New Conversation',
})
```

## Parameters

**key:** (`TMemoryKey extends keyof TMemory`): The registry key of the memory instance to retrieve. Must match a key used when registering memory in the Mastra constructor.

## Returns

**memory:** (`TMemory[TMemoryKey]`): The memory instance with the specified key. Throws an error if the memory is not found.

## Example: Registering and Retrieving Memory

```typescript
import { Mastra } from '@mastra/core'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

const conversationMemory = new Memory({
  storage: new LibSQLStore({ id: 'conversation-store', url: ':memory:' }),
})

const mastra = new Mastra({
  memory: {
    conversationMemory,
  },
})

// Later, retrieve the memory instance
const memory = mastra.getMemory('conversationMemory')
```

## Related

- [Mastra.listMemory()](https://mastra.ai/reference/core/listMemory)
- [Memory overview](https://mastra.ai/docs/memory/overview)
- [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
package/dist/docs/references/reference-core-listMemory.md
@@ -0,0 +1,56 @@
# Mastra.listMemory()

The `.listMemory()` method returns all memory instances registered with the Mastra instance.

## Usage example

```typescript
const memoryInstances = mastra.listMemory()

for (const [key, memory] of Object.entries(memoryInstances)) {
  console.log(`Memory "${key}": ${memory.id}`)
}
```

## Parameters

This method takes no parameters.

## Returns

**memory:** (`Record<string, MastraMemory>`): An object containing all registered memory instances, keyed by their registry keys.

## Example: Checking Registered Memory

```typescript
import { Mastra } from '@mastra/core'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

const conversationMemory = new Memory({
  id: 'conversation-memory',
  storage: new LibSQLStore({ id: 'conversation-store', url: ':memory:' }),
})

const analyticsMemory = new Memory({
  id: 'analytics-memory',
  storage: new LibSQLStore({ id: 'analytics-store', url: ':memory:' }),
})

const mastra = new Mastra({
  memory: {
    conversationMemory,
    analyticsMemory,
  },
})

// List all registered memory instances
const allMemory = mastra.listMemory()
console.log(Object.keys(allMemory)) // ["conversationMemory", "analyticsMemory"]
```

## Related

- [Mastra.getMemory()](https://mastra.ai/reference/core/getMemory)
- [Memory overview](https://mastra.ai/docs/memory/overview)
- [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
package/dist/docs/references/reference-core-mastra-class.md
@@ -0,0 +1,66 @@
# Mastra Class

The `Mastra` class is the central orchestrator in any Mastra application, managing agents, workflows, storage, logging, observability, and more. Typically, you create a single instance of `Mastra` to coordinate your application.

Think of `Mastra` as a top-level registry where you register agents, workflows, tools, and other components that need to be accessible throughout your application.

## Usage example

```typescript
import { Mastra } from '@mastra/core'
import { PinoLogger } from '@mastra/loggers'
import { LibSQLStore } from '@mastra/libsql'
import { weatherWorkflow } from './workflows/weather-workflow'
import { weatherAgent } from './agents/weather-agent'

export const mastra = new Mastra({
  workflows: { weatherWorkflow },
  agents: { weatherAgent },
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: ':memory:',
  }),
  logger: new PinoLogger({
    name: 'Mastra',
    level: 'info',
  }),
})
```

## Constructor parameters

Visit the [Configuration reference](https://mastra.ai/reference/configuration) for detailed documentation on all available configuration options.

**agents?:** (`Record<string, Agent>`): Agent instances to register, keyed by name (Default: `{}`)

**tools?:** (`Record<string, ToolApi>`): Custom tools to register. Structured as a key-value pair, with keys being the tool name and values being the tool function. (Default: `{}`)

**storage?:** (`MastraCompositeStore`): Storage engine instance for persisting data

**vectors?:** (`Record<string, MastraVector>`): Vector store instances, used for semantic search and vector-based tools (e.g. Pinecone, PgVector, or Qdrant)

**logger?:** (`Logger`): Logger instance created with `new PinoLogger()` (Default: console logger with `INFO` level)

**idGenerator?:** (`() => string`): Custom ID generator function. Used by agents, workflows, memory, and other components to generate unique identifiers.

**workflows?:** (`Record<string, Workflow>`): Workflows to register. Structured as a key-value pair, with keys being the workflow name and values being the workflow instance. (Default: `{}`)

**tts?:** (`Record<string, MastraVoice>`): Text-to-speech providers for voice synthesis

**observability?:** (`ObservabilityEntrypoint`): Observability configuration for tracing and monitoring

**deployer?:** (`MastraDeployer`): An instance of a `MastraDeployer` for managing deployments.

**server?:** (`ServerConfig`): Server configuration including port, host, timeout, API routes, middleware, CORS settings, and build options for Swagger UI, API request logging, and OpenAPI docs.

**mcpServers?:** (`Record<string, MCPServerBase>`): An object where keys are registry keys (used for `getMCPServer()`) and values are instances of `MCPServer` or classes extending `MCPServerBase`. Each `MCPServer` must have an `id` property. Servers can be retrieved by registry key using `getMCPServer()` or by their intrinsic `id` using `getMCPServerById()`.

**bundler?:** (`BundlerConfig`): Configuration for the asset bundler with options for externals, sourcemap, and transpilePackages.

**scorers?:** (`Record<string, Scorer>`): Scorers for evaluating agent responses and workflow outputs (Default: `{}`)

**processors?:** (`Record<string, Processor>`): Input/output processors for transforming agent inputs and outputs (Default: `{}`)

**gateways?:** (`Record<string, MastraModelGateway>`): Custom model gateways to register for accessing AI models through alternative providers or private deployments. Structured as a key-value pair, with keys being the registry key (used for `getGateway()`) and values being gateway instances. (Default: `{}`)

**memory?:** (`Record<string, MastraMemory>`): Memory instances to register. These can be referenced by stored agents and resolved at runtime. Structured as a key-value pair, with keys being the registry key and values being memory instances. (Default: `{}`)
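As an illustration of the `idGenerator` option above: any function returning unique strings qualifies. A sketch of one possible generator — the `m_` prefix and time-sortable layout are assumptions for readability, not a Mastra convention:

```typescript
import { randomUUID } from 'node:crypto'

// Hypothetical custom generator: a timestamp component keeps IDs roughly
// time-sortable; a UUID fragment guards against same-millisecond collisions.
const idGenerator = (): string =>
  `m_${Date.now().toString(36)}_${randomUUID().slice(0, 8)}`

console.log(idGenerator()) // prints a fresh ID each call
```

The resulting function can then be passed as `new Mastra({ idGenerator, ... })` so agents, workflows, and memory share one ID scheme.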