@mastra/libsql 1.6.3 → 1.6.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,23 @@
  # @mastra/libsql

+ ## 1.6.4
+
+ ### Patch Changes
+
+ - - Fixed experiment pending count showing negative values when experiments are triggered from the Studio ([#13831](https://github.com/mastra-ai/mastra/pull/13831))
+   - Fixed scorer prompt metadata (analysis context, generated prompts) being lost when saving experiment scores
+ - Updated dependencies [[`41e48c1`](https://github.com/mastra-ai/mastra/commit/41e48c198eee846478e60c02ec432c19d322a517), [`82469d3`](https://github.com/mastra-ai/mastra/commit/82469d3135d5a49dd8dc8feec0ff398b4e0225a0), [`33e2fd5`](https://github.com/mastra-ai/mastra/commit/33e2fd5088f83666df17401e2da68c943dbc0448), [`7ef6e2c`](https://github.com/mastra-ai/mastra/commit/7ef6e2c61be5a42e26f55d15b5902866fc76634f), [`b12d2a5`](https://github.com/mastra-ai/mastra/commit/b12d2a59a48be0477cabae66eb6cf0fc94a7d40d), [`fa37d39`](https://github.com/mastra-ai/mastra/commit/fa37d39910421feaf8847716292e3d65dd4f30c2), [`b12d2a5`](https://github.com/mastra-ai/mastra/commit/b12d2a59a48be0477cabae66eb6cf0fc94a7d40d), [`71c38bf`](https://github.com/mastra-ai/mastra/commit/71c38bf905905148ecd0e75c07c1f9825d299b76), [`f993c38`](https://github.com/mastra-ai/mastra/commit/f993c3848c97479b813231be872443bedeced6ab), [`f51849a`](https://github.com/mastra-ai/mastra/commit/f51849a568935122b5100b7ee69704e6d680cf7b), [`9bf3a0d`](https://github.com/mastra-ai/mastra/commit/9bf3a0dac602787925f1762f1f0387d7b4a59620), [`cafa045`](https://github.com/mastra-ai/mastra/commit/cafa0453c9de141ad50c09a13894622dffdd9978), [`1fd9ddb`](https://github.com/mastra-ai/mastra/commit/1fd9ddbb3fe83b281b12bd2e27e426ae86288266), [`6135ef4`](https://github.com/mastra-ai/mastra/commit/6135ef4f5288652bf45f616ec590607e4c95f443), [`d9d228c`](https://github.com/mastra-ai/mastra/commit/d9d228c0c6ae82ae6ce3b540a3a56b2b1c2b8d98), [`5576507`](https://github.com/mastra-ai/mastra/commit/55765071e360fb97e443aa0a91ccf7e1cd8d92aa), [`79d69c9`](https://github.com/mastra-ai/mastra/commit/79d69c9d5f842ff1c31352fb6026f04c1f6190f3), [`94f44b8`](https://github.com/mastra-ai/mastra/commit/94f44b827ce57b179e50f4916a84c0fa6e7f3b8c), [`13187db`](https://github.com/mastra-ai/mastra/commit/13187dbac880174232dedc5a501ff6c5d0fe59bc), [`2ae5311`](https://github.com/mastra-ai/mastra/commit/2ae531185fff66a80fa165c0999e3d801900e89d), [`6135ef4`](https://github.com/mastra-ai/mastra/commit/6135ef4f5288652bf45f616ec590607e4c95f443)]:
+   - @mastra/core@1.10.0
+
+ ## 1.6.4-alpha.0
+
+ ### Patch Changes
+
+ - - Fixed experiment pending count showing negative values when experiments are triggered from the Studio ([#13831](https://github.com/mastra-ai/mastra/pull/13831))
+   - Fixed scorer prompt metadata (analysis context, generated prompts) being lost when saving experiment scores
+ - Updated dependencies [[`41e48c1`](https://github.com/mastra-ai/mastra/commit/41e48c198eee846478e60c02ec432c19d322a517), [`82469d3`](https://github.com/mastra-ai/mastra/commit/82469d3135d5a49dd8dc8feec0ff398b4e0225a0), [`33e2fd5`](https://github.com/mastra-ai/mastra/commit/33e2fd5088f83666df17401e2da68c943dbc0448), [`7ef6e2c`](https://github.com/mastra-ai/mastra/commit/7ef6e2c61be5a42e26f55d15b5902866fc76634f), [`b12d2a5`](https://github.com/mastra-ai/mastra/commit/b12d2a59a48be0477cabae66eb6cf0fc94a7d40d), [`fa37d39`](https://github.com/mastra-ai/mastra/commit/fa37d39910421feaf8847716292e3d65dd4f30c2), [`b12d2a5`](https://github.com/mastra-ai/mastra/commit/b12d2a59a48be0477cabae66eb6cf0fc94a7d40d), [`71c38bf`](https://github.com/mastra-ai/mastra/commit/71c38bf905905148ecd0e75c07c1f9825d299b76), [`f993c38`](https://github.com/mastra-ai/mastra/commit/f993c3848c97479b813231be872443bedeced6ab), [`f51849a`](https://github.com/mastra-ai/mastra/commit/f51849a568935122b5100b7ee69704e6d680cf7b), [`9bf3a0d`](https://github.com/mastra-ai/mastra/commit/9bf3a0dac602787925f1762f1f0387d7b4a59620), [`cafa045`](https://github.com/mastra-ai/mastra/commit/cafa0453c9de141ad50c09a13894622dffdd9978), [`1fd9ddb`](https://github.com/mastra-ai/mastra/commit/1fd9ddbb3fe83b281b12bd2e27e426ae86288266), [`6135ef4`](https://github.com/mastra-ai/mastra/commit/6135ef4f5288652bf45f616ec590607e4c95f443), [`d9d228c`](https://github.com/mastra-ai/mastra/commit/d9d228c0c6ae82ae6ce3b540a3a56b2b1c2b8d98), [`5576507`](https://github.com/mastra-ai/mastra/commit/55765071e360fb97e443aa0a91ccf7e1cd8d92aa), [`79d69c9`](https://github.com/mastra-ai/mastra/commit/79d69c9d5f842ff1c31352fb6026f04c1f6190f3), [`94f44b8`](https://github.com/mastra-ai/mastra/commit/94f44b827ce57b179e50f4916a84c0fa6e7f3b8c), [`13187db`](https://github.com/mastra-ai/mastra/commit/13187dbac880174232dedc5a501ff6c5d0fe59bc), [`2ae5311`](https://github.com/mastra-ai/mastra/commit/2ae531185fff66a80fa165c0999e3d801900e89d), [`6135ef4`](https://github.com/mastra-ai/mastra/commit/6135ef4f5288652bf45f616ec590607e4c95f443)]:
+   - @mastra/core@1.10.0-alpha.0
+
  ## 1.6.3

  ### Patch Changes
@@ -3,7 +3,7 @@ name: mastra-libsql
  description: Documentation for @mastra/libsql. Use when working with @mastra/libsql APIs, configuration, or implementation.
  metadata:
  package: "@mastra/libsql"
- version: "1.6.3"
+ version: "1.6.4"
  ---

  ## When to use
@@ -1,5 +1,5 @@
  {
- "version": "1.6.3",
+ "version": "1.6.4",
  "package": "@mastra/libsql",
  "exports": {},
  "modules": {}
@@ -1,10 +1,29 @@
- # Agent Approval
+ # Agent approval

- Agents sometimes require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or performing running long processes. With agent approval you can suspend a tool call and provide feedback to the user, or approve or decline a tool call based on targeted application conditions.
+ Agents sometimes require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or running long processes. With agent approval you can suspend a tool call before it executes so a human can approve or decline it, or let tools suspend themselves to request additional context from the user.

- ## Tool call approval
+ ## How approval works

- Tool call approval can be enabled at the agent level and apply to every tool the agent uses, or at the tool level providing more granular control over individual tool calls.
+ Mastra offers two distinct mechanisms for pausing tool calls: **pre-execution approval** and **runtime suspension**.
+
+ ### Pre-execution approval
+
+ Pre-execution approval pauses a tool call _before_ its `execute` function runs. The LLM still decides which tool to call and provides arguments, but `execute` doesn't run until you explicitly approve.
+
+ Two flags control this, combined with OR logic. If _either_ is `true`, the call pauses:
+
+ | Flag | Where to set it | Scope |
+ | --------------------------- | --------------------------------- | ------------------------------------------- |
+ | `requireToolApproval: true` | `stream()` / `generate()` options | Pauses **every** tool call for that request |
+ | `requireApproval: true` | `createTool()` definition | Pauses calls to **that specific tool** |
+
+ The stream emits a `tool-call-approval` chunk when a call is paused this way. You then call `approveToolCall()` or `declineToolCall()` to continue.
+
+ ### Runtime suspension with `suspend()`
+
+ A tool can also pause _during_ its `execute` function by calling `suspend()`. This is useful when the tool starts running and then discovers it needs additional user input or confirmation before it can finish.
+
+ The stream emits a `tool-call-suspended` chunk with a custom payload defined by the tool's `suspendSchema`. You resume by calling `resumeStream()` with data matching the tool's `resumeSchema`.

  ### Storage

@@ -24,7 +43,7 @@ export const mastra = new Mastra({

  ## Agent-level approval

- When calling an agent using `.stream()` set `requireToolApproval` to `true` which will prevent the agent from calling any of the tools defined in its configuration.
+ Pass `requireToolApproval: true` to `stream()` or `generate()` to pause every tool call before execution. The LLM still decides which tools to call and with what arguments, but no tool runs until you approve or decline.

  ```typescript
  const stream = await agent.stream("What's the weather in London?", {
@@ -32,9 +51,20 @@ const stream = await agent.stream("What's the weather in London?", {
  })
  ```

+ When a tool call is paused, the stream emits a `tool-call-approval` chunk containing the `toolCallId`, `toolName`, and `args`. Use this to inspect the pending call and decide whether to approve or decline:
+
+ ```typescript
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-approval') {
+     console.log('Tool:', chunk.payload.toolName)
+     console.log('Args:', chunk.payload.args)
+   }
+ }
+ ```
+
  ### Approving tool calls

- To approve a tool call, access `approveToolCall` from the `agent`, passing in the `runId` of the stream. This will let the agent know its now OK to call its tools.
+ Call `approveToolCall()` on the agent with the `runId` of the stream to resume the suspended tool call and let it execute:

  ```typescript
  const handleApproval = async () => {
@@ -49,7 +79,7 @@ const handleApproval = async () => {

  ### Declining tool calls

- To decline a tool call, access the `declineToolCall` from the `agent`. You will see the streamed response from the agent, but it won't call its tools.
+ Call `declineToolCall()` on the agent to skip the tool call. The agent continues without executing the tool and responds accordingly:

  ```typescript
  const handleDecline = async () => {
@@ -64,19 +94,19 @@ const handleDecline = async () => {

  ## Tool approval with generate()

- Tool approval also works with the `generate()` method for non-streaming use cases. When using `generate()` with `requireToolApproval: true`, the method returns immediately when a tool requires approval instead of executing it.
+ Tool approval also works with the `generate()` method for non-streaming use cases. When a tool requires approval during a `generate()` call, the method returns immediately instead of executing the tool.

  ### How it works

  When a tool requires approval during a `generate()` call, the response includes:

- - `finishReason: 'suspended'` - indicates the agent is waiting for approval
- - `suspendPayload` - contains tool call details (`toolCallId`, `toolName`, `args`)
- - `runId` - needed to approve or decline the tool call
+ - `finishReason: 'suspended'`: Indicates the agent is waiting for approval
+ - `suspendPayload`: Contains tool call details (`toolCallId`, `toolName`, `args`)
+ - `runId`: Needed to approve or decline the tool call

  ### Approving tool calls

- To approve a tool call with `generate()`, use the `approveToolCallGenerate` method:
+ Use `approveToolCallGenerate()` to approve the tool call and get the final result:

  ```typescript
  const output = await agent.generate('Find user John', {
@@ -99,7 +129,7 @@ if (output.finishReason === 'suspended') {

  ### Declining tool calls

- To decline a tool call, use the `declineToolCallGenerate` method:
+ Use `declineToolCallGenerate()` to skip the tool call:

  ```typescript
  if (output.finishReason === 'suspended') {
@@ -108,12 +138,12 @@ if (output.finishReason === 'suspended') {
  toolCallId: output.suspendPayload.toolCallId,
  })

- // Agent will respond acknowledging the declined tool
+ // Agent responds acknowledging the declined tool
  console.log(result.text)
  }
  ```

- ### Stream vs Generate comparison
+ ### Stream vs generate comparison

  | Aspect | `stream()` | `generate()` |
  | ------------------ | ---------------------------- | ------------------------------------------------ |
@@ -125,11 +155,11 @@ if (output.finishReason === 'suspended') {

  ## Tool-level approval

- There are two types of tool call approval. The first uses `requireApproval`, which is a property on the tool definition, while `requireToolApproval` is a parameter passed to `agent.stream()`. The second uses `suspend` and lets the agent provide context or confirmation prompts so the user can decide whether the tool call should continue.
+ Instead of pausing every tool call at the agent level, you can mark individual tools as requiring approval. This gives you granular control: only specific tools pause, while others execute immediately.

- ### Tool approval using `requireToolApproval`
+ ### Approval using `requireApproval`

- In this approach, `requireApproval` is configured on the tool definition (shown below) rather than on the agent.
+ Set `requireApproval: true` on a tool definition. The tool pauses before execution regardless of whether `requireToolApproval` is set on the agent:

  ```typescript
  export const testTool = createTool({
@@ -154,30 +184,30 @@ export const testTool = createTool({
  })
  ```

- When `requireApproval` is true for a tool, the stream will include chunks of type `tool-call-approval` to indicate that the call is paused. To continue the call, invoke `resumeStream` with the required `resumeSchema` and the `runId`.
+ When `requireApproval` is `true`, the stream emits `tool-call-approval` chunks the same way agent-level approval does. Use `approveToolCall()` or `declineToolCall()` to continue:

  ```typescript
  const stream = await agent.stream("What's the weather in London?")

  for await (const chunk of stream.fullStream) {
    if (chunk.type === 'tool-call-approval') {
-     console.log('Approval required.')
+     console.log('Approval required for:', chunk.payload.toolName)
    }
  }

- const handleResume = async () => {
-   const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId })
+ const handleApproval = async () => {
+   const approvedStream = await agent.approveToolCall({ runId: stream.runId })

-   for await (const chunk of resumedStream.textStream) {
+   for await (const chunk of approvedStream.textStream) {
      process.stdout.write(chunk)
    }
    process.stdout.write('\n')
  }
  ```

- ### Tool approval using `suspend`
+ ### Approval using `suspend`

- With this approach, neither the agent nor the tool uses `requireApproval`. Instead, the tool implementation calls `suspend` to pause execution and return context or confirmation prompts to the user.
+ With this approach, neither the agent nor the tool uses `requireApproval`. Instead, the tool's `execute` function calls `suspend` to pause at a specific point and return context or confirmation prompts to the user. This is useful when approval depends on runtime conditions rather than being unconditional.

  ```typescript
  export const testToolB = createTool({
@@ -210,7 +240,7 @@ export const testToolB = createTool({
  })
  ```

- With this approach the stream will include a `tool-call-suspended` chunk, and the `suspendPayload` will contain the `reason` defined by the tool's `suspendSchema`. To continue the call, invoke `resumeStream` with the required `resumeSchema` and the `runId`.
+ With this approach the stream includes a `tool-call-suspended` chunk, and the `suspendPayload` contains the `reason` defined by the tool's `suspendSchema`. Call `resumeStream` with the `resumeSchema` data and `runId` to continue:

  ```typescript
  const stream = await agent.stream("What's the weather in London?")
@@ -349,7 +379,7 @@ User: "San Francisco"
  Agent: "The weather in San Francisco is: San Francisco: ☀️ +72°F"
  ```

- The second message automatically resumes the suspended tool - the agent extracts `{ city: "San Francisco" }` from the user's message and passes it as `resumeData`.
+ The second message automatically resumes the suspended tool: the agent extracts `{ city: "San Francisco" }` from the user's message and passes it as `resumeData`.

  ### Requirements

@@ -370,7 +400,7 @@ Both approaches work with the same tool definitions. Automatic resumption trigge

  ## Tool approval: Supervisor pattern

- The [supervisor pattern](https://mastra.ai/docs/agents/networks) lets a supervisor agent coordinate multiple subagents using `.stream()` or `.generate()`. The supervisor delegates tasks to subagents, which may use tools that require approval. When this happens, tool approvals properly propagate through the delegation chain -- the approval request surfaces at the supervisor level where you can handle it, regardless of which subagent triggered it.
+ The [supervisor pattern](https://mastra.ai/docs/agents/networks) lets a supervisor agent coordinate multiple subagents using `.stream()` or `.generate()`. The supervisor delegates tasks to subagents, which may use tools that require approval. When this happens, tool approvals properly propagate through the delegation chain: the approval request surfaces at the supervisor level where you can handle it, regardless of which subagent triggered it.

  ### How it works

@@ -453,7 +483,7 @@ for await (const chunk of stream.fullStream) {

  ### Declining tool calls in supervisor pattern

- You can also decline tool calls at the supervisor level by calling `declineToolCall`. The supervisor will respond acknowledging the declined tool without executing it:
+ Decline tool calls at the supervisor level by calling `declineToolCall`. The supervisor responds acknowledging the declined tool without executing it:

  ```typescript
  for await (const chunk of stream.fullStream) {
@@ -466,7 +496,7 @@ for await (const chunk of stream.fullStream) {
  toolCallId: chunk.payload.toolCallId,
  })

- // The supervisor will respond acknowledging the declined tool
+ // The supervisor responds acknowledging the declined tool
  for await (const declineChunk of declineStream.textStream) {
  process.stdout.write(declineChunk)
  }
@@ -476,7 +506,7 @@ for await (const chunk of stream.fullStream) {

  ### Using suspend() in supervisor pattern

- Tools can also use [`suspend()`](#tool-approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does -- the suspension surfaces at the supervisor level:
+ Tools can also use [`suspend()`](#approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does; the suspension surfaces at the supervisor level:

  ```typescript
  const conditionalTool = createTool({
@@ -147,8 +147,24 @@ Supported embedding models:

  - **OpenAI**: `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002`
  - **Google**: `gemini-embedding-001`
+ - **OpenRouter**: Access embedding models from various providers

- The model router automatically handles API key detection from environment variables (`OPENAI_API_KEY`, `GOOGLE_GENERATIVE_AI_API_KEY`).
+ ```ts
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
+
+ const agent = new Agent({
+   memory: new Memory({
+     embedder: new ModelRouterEmbeddingModel({
+       providerId: 'openrouter',
+       modelId: 'openai/text-embedding-3-small',
+     }),
+   }),
+ })
+ ```
+
+ The model router automatically handles API key detection from environment variables (`OPENAI_API_KEY`, `GOOGLE_GENERATIVE_AI_API_KEY`, `OPENROUTER_API_KEY`).

  ### Using AI SDK Packages

@@ -16,11 +16,11 @@ const thread = await memory.createThread({

  ## Parameters

- **key:** (`TMemoryKey extends keyof TMemory`): The registry key of the memory instance to retrieve. Must match a key used when registering memory in the Mastra constructor.
+ **key** (`TMemoryKey extends keyof TMemory`): The registry key of the memory instance to retrieve. Must match a key used when registering memory in the Mastra constructor.

  ## Returns

- **memory:** (`TMemory[TMemoryKey]`): The memory instance with the specified key. Throws an error if the memory is not found.
+ **memory** (`TMemory[TMemoryKey]`): The memory instance with the specified key. Throws an error if the memory is not found.

  ## Example: Registering and Retrieving Memory

@@ -18,7 +18,7 @@ This method takes no parameters.

  ## Returns

- **memory:** (`Record<string, MastraMemory>`): An object containing all registered memory instances, keyed by their registry keys.
+ **memory** (`Record<string, MastraMemory>`): An object containing all registered memory instances, keyed by their registry keys.

  ## Example: Checking Registered Memory

@@ -31,36 +31,36 @@ export const mastra = new Mastra({

  Visit the [Configuration reference](https://mastra.ai/reference/configuration) for detailed documentation on all available configuration options.

- **agents?:** (`Record<string, Agent>`): Agent instances to register, keyed by name (Default: `{}`)
+ **agents** (`Record<string, Agent>`): Agent instances to register, keyed by name (Default: `{}`)

- **tools?:** (`Record<string, ToolApi>`): Custom tools to register. Structured as a key-value pair, with keys being the tool name and values being the tool function. (Default: `{}`)
+ **tools** (`Record<string, ToolApi>`): Custom tools to register. Structured as a key-value pair, with keys being the tool name and values being the tool function. (Default: `{}`)

- **storage?:** (`MastraCompositeStore`): Storage engine instance for persisting data
+ **storage** (`MastraCompositeStore`): Storage engine instance for persisting data

- **vectors?:** (`Record<string, MastraVector>`): Vector store instance, used for semantic search and vector-based tools (eg Pinecone, PgVector or Qdrant)
+ **vectors** (`Record<string, MastraVector>`): Vector store instances, used for semantic search and vector-based tools (e.g. Pinecone, PgVector, or Qdrant)

- **logger?:** (`Logger`): Logger instance created with new PinoLogger() (Default: `Console logger with INFO level`)
+ **logger** (`Logger`): Logger instance created with `new PinoLogger()` (Default: `Console logger with INFO level`)

- **idGenerator?:** (`() => string`): Custom ID generator function. Used by agents, workflows, memory, and other components to generate unique identifiers.
+ **idGenerator** (`() => string`): Custom ID generator function. Used by agents, workflows, memory, and other components to generate unique identifiers.

- **workflows?:** (`Record<string, Workflow>`): Workflows to register. Structured as a key-value pair, with keys being the workflow name and values being the workflow instance. (Default: `{}`)
+ **workflows** (`Record<string, Workflow>`): Workflows to register. Structured as a key-value pair, with keys being the workflow name and values being the workflow instance. (Default: `{}`)

- **tts?:** (`Record<string, MastraVoice>`): Text-to-speech providers for voice synthesis
+ **tts** (`Record<string, MastraVoice>`): Text-to-speech providers for voice synthesis

- **observability?:** (`ObservabilityEntrypoint`): Observability configuration for tracing and monitoring
+ **observability** (`ObservabilityEntrypoint`): Observability configuration for tracing and monitoring

- **deployer?:** (`MastraDeployer`): An instance of a MastraDeployer for managing deployments.
+ **deployer** (`MastraDeployer`): An instance of a MastraDeployer for managing deployments.

- **server?:** (`ServerConfig`): Server configuration including port, host, timeout, API routes, middleware, CORS settings, and build options for Swagger UI, API request logging, and OpenAPI docs.
+ **server** (`ServerConfig`): Server configuration including port, host, timeout, API routes, middleware, CORS settings, and build options for Swagger UI, API request logging, and OpenAPI docs.

- **mcpServers?:** (`Record<string, MCPServerBase>`): An object where keys are registry keys (used for getMCPServer()) and values are instances of MCPServer or classes extending MCPServerBase. Each MCPServer must have an id property. Servers can be retrieved by registry key using getMCPServer() or by their intrinsic id using getMCPServerById().
+ **mcpServers** (`Record<string, MCPServerBase>`): An object where keys are registry keys (used for `getMCPServer()`) and values are instances of MCPServer or classes extending MCPServerBase. Each MCPServer must have an `id` property. Servers can be retrieved by registry key using `getMCPServer()` or by their intrinsic `id` using `getMCPServerById()`.

- **bundler?:** (`BundlerConfig`): Configuration for the asset bundler with options for externals, sourcemap, and transpilePackages.
+ **bundler** (`BundlerConfig`): Configuration for the asset bundler with options for externals, sourcemap, and transpilePackages.

- **scorers?:** (`Record<string, Scorer>`): Scorers for evaluating agent responses and workflow outputs (Default: `{}`)
+ **scorers** (`Record<string, Scorer>`): Scorers for evaluating agent responses and workflow outputs (Default: `{}`)

- **processors?:** (`Record<string, Processor>`): Input/output processors for transforming agent inputs and outputs (Default: `{}`)
+ **processors** (`Record<string, Processor>`): Input/output processors for transforming agent inputs and outputs (Default: `{}`)

- **gateways?:** (`Record<string, MastraModelGateway>`): Custom model gateways to register for accessing AI models through alternative providers or private deployments. Structured as a key-value pair, with keys being the registry key (used for getGateway()) and values being gateway instances. (Default: `{}`)
+ **gateways** (`Record<string, MastraModelGateway>`): Custom model gateways to register for accessing AI models through alternative providers or private deployments. Structured as a key-value pair, with keys being the registry key (used for `getGateway()`) and values being gateway instances. (Default: `{}`)

- **memory?:** (`Record<string, MastraMemory>`): Memory instances to register. These can be referenced by stored agents and resolved at runtime. Structured as a key-value pair, with keys being the registry key and values being memory instances. (Default: `{}`)
+ **memory** (`Record<string, MastraMemory>`): Memory instances to register. These can be referenced by stored agents and resolved at runtime. Structured as a key-value pair, with keys being the registry key and values being memory instances. (Default: `{}`)
@@ -22,35 +22,33 @@ export const agent = new Agent({
  })
  ```

- > To enable `workingMemory` on an agent, you’ll need a storage provider configured on your main Mastra instance. See [Mastra class](https://mastra.ai/reference/core/mastra-class) for more information.
+ > **Note:** To enable `workingMemory` on an agent, you’ll need a storage provider configured on your main Mastra instance. See [Mastra class](https://mastra.ai/reference/core/mastra-class) for more information.

  ## Constructor parameters

- **storage?:** (`MastraCompositeStore`): Storage implementation for persisting memory data. Defaults to \`new DefaultStorage({ config: { url: "file:memory.db" } })\` if not provided.
+ **storage** (`MastraCompositeStore`): Storage implementation for persisting memory data. Defaults to \`new DefaultStorage({ config: { url: "file:memory.db" } })\` if not provided.

- **vector?:** (`MastraVector | false`): Vector store for semantic search capabilities. Set to \`false\` to disable vector operations.
+ **vector** (`MastraVector | false`): Vector store for semantic search capabilities. Set to \`false\` to disable vector operations.

- **embedder?:** (`EmbeddingModel<string> | EmbeddingModelV2<string>`): Embedder instance for vector embeddings. Required when semantic recall is enabled.
+ **embedder** (`EmbeddingModel<string> | EmbeddingModelV2<string>`): Embedder instance for vector embeddings. Required when semantic recall is enabled.

- **options?:** (`MemoryConfig`): Memory configuration options.
+ **options** (`MemoryConfig`): Memory configuration options.

- ### Options parameters
+ **options.lastMessages** (`number | false`): Number of most recent messages to include in context. Set to \`false\` to disable loading conversation history into context. Use \`Number.MAX\_SAFE\_INTEGER\` to retrieve all messages with no limit. To prevent saving new messages, use the \`readOnly\` option instead.

- **lastMessages?:** (`number | false`): Number of most recent messages to include in context. Set to \`false\` to disable loading conversation history into context. Use \`Number.MAX\_SAFE\_INTEGER\` to retrieve all messages with no limit. To prevent saving new messages, use the \`readOnly\` option instead. (Default: `10`)
+ **options.readOnly** (`boolean`): When true, prevents memory from saving new messages and provides working memory as read-only context (without the updateWorkingMemory tool). Useful for read-only operations like previews, internal routing agents, or sub agents that should reference but not modify memory.

- **readOnly?:** (`boolean`): When true, prevents memory from saving new messages and provides working memory as read-only context (without the updateWorkingMemory tool). Useful for read-only operations like previews, internal routing agents, or sub agents that should reference but not modify memory. (Default: `false`)
+ **options.semanticRecall** (`boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }`): Enable semantic search in message history. Can be a boolean or an object with configuration options. When enabled, requires both vector store and embedder to be configured. Default topK is 4, default messageRange is {before: 1, after: 1}.

- **semanticRecall?:** (`boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }`): Enable semantic search in message history. Can be a boolean or an object with configuration options. When enabled, requires both vector store and embedder to be configured. Default topK is 4, default messageRange is {before: 1, after: 1}. (Default: `false`)
+ **options.workingMemory** (`WorkingMemory`): Configuration for working memory feature. Can be \`{ enabled: boolean; template?: string; schema?: ZodObject\<any> | JSONSchema7; scope?: 'thread' | 'resource' }\` or \`{ enabled: boolean }\` to disable.

- **workingMemory?:** (`WorkingMemory`): Configuration for working memory feature. Can be \`{ enabled: boolean; template?: string; schema?: ZodObject\<any> | JSONSchema7; scope?: 'thread' | 'resource' }\` or \`{ enabled: boolean }\` to disable. (Default: `{ enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }`)
+ **options.observationalMemory** (`boolean | ObservationalMemoryOptions`): Enable Observational Memory for long-context agentic memory. Set to \`true\` for defaults, or pass a config object to customize token budgets, models, and scope. See \[Observational Memory reference]\(/reference/memory/observational-memory) for configuration details.

- **observationalMemory?:** (`boolean | ObservationalMemoryOptions`): Enable Observational Memory for long-context agentic memory. Set to \`true\` for defaults, or pass a config object to customize token budgets, models, and scope. See \[Observational Memory reference]\(/reference/memory/observational-memory) for configuration details. (Default: `false`)
-
- **generateTitle?:** (`boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> }`): Controls automatic thread title generation from the user's first message. Can be a boolean or an object with custom model and instructions. (Default: `false`)
+ **options.generateTitle** (`boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> }`): Controls automatic thread title generation from the user's first message. Can be a boolean or an object with custom model and instructions.
50
48
 
51
49
  ## Returns
52
50
 
53
- **memory:** (`Memory`): A new Memory instance with the specified configuration.
51
+ **memory** (`Memory`): A new Memory instance with the specified configuration.
54
52
 
55
53
  ## Extended usage example
56
54
 
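As an illustrative sketch (not part of the published diff): the renamed keys above indicate these settings now nest under an `options` object. Assuming the `Memory` constructor from `@mastra/memory` accepts this shape, a configuration spelling out the documented defaults might look like:

```ts
import { Memory } from "@mastra/memory"; // import path assumed

// Hypothetical configuration; option names and values are taken from the
// descriptions above (lastMessages: 10, semanticRecall topK: 4, etc.).
const memory = new Memory({
  options: {
    lastMessages: 10,                  // include the 10 most recent messages
    readOnly: false,                   // allow saving new messages
    semanticRecall: {
      topK: 4,                         // documented default
      messageRange: { before: 1, after: 1 },
      scope: "thread",
    },
    workingMemory: { enabled: false },
    observationalMemory: false,
    generateTitle: false,
  },
});
```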
@@ -120,23 +120,23 @@ export const mastra = new Mastra({

  ## Options

- **id:** (`string`): Unique identifier for this storage instance.
+ **id** (`string`): Unique identifier for this storage instance.

- **default?:** (`MastraCompositeStore`): Default storage adapter. Domains not explicitly specified in \`domains\` will use this storage's domains as fallbacks.
+ **default** (`MastraCompositeStore`): Default storage adapter. Domains not explicitly specified in \`domains\` will use this storage's domains as fallbacks.

- **domains?:** (`object`): Individual domain overrides. Each domain can come from a different storage adapter. These take precedence over the default storage.
+ **domains** (`object`): Individual domain overrides. Each domain can come from a different storage adapter. These take precedence over the default storage.

- **domains.memory?:** (`MemoryStorage`): Storage for threads, messages, and resources.
+ **domains.memory** (`MemoryStorage`): Storage for threads, messages, and resources.

- **domains.workflows?:** (`WorkflowsStorage`): Storage for workflow snapshots.
+ **domains.workflows** (`WorkflowsStorage`): Storage for workflow snapshots.

- **domains.scores?:** (`ScoresStorage`): Storage for evaluation scores.
+ **domains.scores** (`ScoresStorage`): Storage for evaluation scores.

- **domains.observability?:** (`ObservabilityStorage`): Storage for traces and spans.
+ **domains.observability** (`ObservabilityStorage`): Storage for traces and spans.

- **domains.agents?:** (`AgentsStorage`): Storage for stored agent configurations.
+ **domains.agents** (`AgentsStorage`): Storage for stored agent configurations.

- **disableInit?:** (`boolean`): When true, automatic initialization is disabled. You must call init() explicitly.
+ **disableInit** (`boolean`): When true, automatic initialization is disabled. You must call init() explicitly.

  ## Initialization

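As an illustrative sketch (not part of the published diff): assuming `MastraCompositeStore` accepts the options listed above, per-domain routing could be wired like this. The import path and the adapter variables (`defaultStore`, `memoryStore`, `tracesStore`) are hypothetical stand-ins.

```ts
import { MastraCompositeStore } from "@mastra/core/storage"; // path assumed

// `defaultStore`, `memoryStore`, and `tracesStore` are placeholders for
// real storage adapters (e.g., LibSQL, DynamoDB).
const storage = new MastraCompositeStore({
  id: "composite-storage",
  default: defaultStore,        // fallback for any domain not listed below
  domains: {
    memory: memoryStore,        // threads, messages, and resources
    observability: tracesStore, // traces and spans
  },
  disableInit: false,           // set to true to call init() yourself
});
```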
@@ -108,17 +108,17 @@ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/

  ## Parameters

- **id:** (`string`): Unique identifier for this storage instance.
+ **id** (`string`): Unique identifier for this storage instance.

- **config.tableName:** (`string`): The name of your DynamoDB table.
+ **config.tableName** (`string`): The name of your DynamoDB table.

- **config.region?:** (`string`): AWS region. Defaults to 'us-east-1'. For local development, can be set to 'localhost' or similar.
+ **config.region** (`string`): AWS region. Defaults to 'us-east-1'. For local development, can be set to 'localhost' or similar.

- **config.endpoint?:** (`string`): Custom endpoint for DynamoDB (e.g., 'http\://localhost:8000' for local development).
+ **config.endpoint** (`string`): Custom endpoint for DynamoDB (e.g., 'http\://localhost:8000' for local development).

- **config.credentials?:** (`object`): AWS credentials object with \`accessKeyId\` and \`secretAccessKey\`. If not provided, the AWS SDK will attempt to source credentials from environment variables, IAM roles (e.g., for EC2/Lambda), or the shared AWS credentials file.
+ **config.credentials** (`object`): AWS credentials object with \`accessKeyId\` and \`secretAccessKey\`. If not provided, the AWS SDK will attempt to source credentials from environment variables, IAM roles (e.g., for EC2/Lambda), or the shared AWS credentials file.

- **config.ttl?:** (`object`): TTL (Time To Live) configuration for automatic data expiration. Configure per entity type: thread, message, trace, eval, workflow\_snapshot, resource, score. Each entity config includes: enabled (boolean), attributeName (string, default: 'ttl'), defaultTtlSeconds (number).
+ **config.ttl** (`object`): TTL (Time To Live) configuration for automatic data expiration. Configure per entity type: thread, message, trace, eval, workflow\_snapshot, resource, score. Each entity config includes: enabled (boolean), attributeName (string, default: 'ttl'), defaultTtlSeconds (number).

  ## TTL (Time To Live) Configuration

@@ -188,11 +188,11 @@ TTL can be configured for these entity types:

  Each entity type accepts the following configuration:

- **enabled:** (`boolean`): Whether TTL is enabled for this entity type.
+ **enabled** (`boolean`): Whether TTL is enabled for this entity type.

- **attributeName?:** (`string`): The DynamoDB attribute name to use for TTL. Must match the TTL attribute configured on your DynamoDB table. Defaults to 'ttl'.
+ **attributeName** (`string`): The DynamoDB attribute name to use for TTL. Must match the TTL attribute configured on your DynamoDB table. Defaults to 'ttl'.

- **defaultTtlSeconds?:** (`number`): Default TTL in seconds from item creation time. Items will be automatically deleted by DynamoDB after this duration.
+ **defaultTtlSeconds** (`number`): Default TTL in seconds from item creation time. Items will be automatically deleted by DynamoDB after this duration.

  ### Enabling TTL on Your DynamoDB Table

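One detail worth illustrating (not part of the diff itself): DynamoDB's TTL feature expects the configured attribute to hold a Unix epoch timestamp in seconds, after which the item becomes eligible for deletion. A minimal sketch of how `defaultTtlSeconds` maps to that value; the helper name is hypothetical:

```typescript
// DynamoDB TTL attributes hold a Unix epoch time in seconds. This
// hypothetical helper derives the attribute value from an item's creation
// time plus the configured `defaultTtlSeconds`.
function ttlAttributeValue(defaultTtlSeconds: number, createdAt: Date): number {
  return Math.floor(createdAt.getTime() / 1000) + defaultTtlSeconds;
}

// An item created at 2024-01-01T00:00:00Z with a one-hour TTL becomes
// eligible for deletion at epoch 1704070800 (2024-01-01T01:00:00Z).
const expiresAt = ttlAttributeValue(3600, new Date("2024-01-01T00:00:00Z"));
```

Note that DynamoDB deletes expired items in the background, typically within a few days of expiry, so readers should still filter on the TTL attribute if strict expiry matters.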
@@ -89,9 +89,9 @@ storage: new LibSQLStore({

  ## Options

- **url:** (`string`): Database URL. Use \`:memory:\` for in-memory database, \`file:filename.db\` for a file database, or a libSQL connection string (e.g., \`libsql://your-database.turso.io\`) for remote storage.
+ **url** (`string`): Database URL. Use \`:memory:\` for in-memory database, \`file:filename.db\` for a file database, or a libSQL connection string (e.g., \`libsql://your-database.turso.io\`) for remote storage.

- **authToken?:** (`string`): Authentication token for remote libSQL databases.
+ **authToken** (`string`): Authentication token for remote libSQL databases.

  ## Initialization
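For illustration (not part of the published diff), the two options above cover the common deployment modes; the import path follows the package name but is assumed here.

```ts
import { LibSQLStore } from "@mastra/libsql"; // import path assumed

// Local file-backed database for development.
const local = new LibSQLStore({ url: "file:mastra.db" });

// Remote libSQL database (e.g., Turso); authToken authenticates the client.
const remote = new LibSQLStore({
  url: "libsql://your-database.turso.io",
  authToken: process.env.TURSO_AUTH_TOKEN,
});
```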