@mastra/core 0.24.6-alpha.0 → 0.24.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/CHANGELOG.md +306 -0
  2. package/package.json +5 -5
package/CHANGELOG.md CHANGED
@@ -1,5 +1,311 @@
  # @mastra/core
 
+ ## 0.24.6
+
+ ### Patch Changes
+
+ - Fix base64 encoded images with threads - issue #10480 ([#10566](https://github.com/mastra-ai/mastra/pull/10566))
+
+ Fixed "Invalid URL" error when using base64 encoded images (without `data:` prefix) in agent calls with threads and resources. Raw base64 strings are now automatically converted to proper data URIs before being processed.
+
+ **Changes:**
+ - Updated `attachments-to-parts.ts` to detect and convert raw base64 strings to data URIs
+ - Fixed `MessageList` image processing to handle raw base64 in two locations:
+   - Image part conversion in `aiV4CoreMessageToV1PromptMessage`
+   - File part to experimental_attachments conversion in `mastraDBMessageToAIV4UIMessage`
+ - Added comprehensive tests for base64 images, data URIs, and HTTP URLs with threads
+
+ **Breaking Change:** None - this is a bug fix that maintains backward compatibility while adding support for raw base64 strings.
+
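The conversion described above can be sketched as a small helper; the function name, regex, and default MIME type are illustrative stand-ins, not the actual `attachments-to-parts.ts` implementation:

```typescript
// Illustrative sketch: wrap raw base64 strings in a data URI, leaving
// existing data URIs and HTTP(S) URLs untouched. Names are hypothetical.
const BASE64_RE = /^[A-Za-z0-9+/]+={0,2}$/;

function toDataUri(image: string, mimeType = 'image/png'): string {
  // Already a data URI or a regular URL: pass through unchanged.
  if (image.startsWith('data:') || /^https?:\/\//.test(image)) return image;
  // Raw base64 (no `data:` prefix): convert to a proper data URI.
  if (BASE64_RE.test(image)) return `data:${mimeType};base64,${image}`;
  return image;
}

console.log(toDataUri('iVBORw0KGgo=')); // → data:image/png;base64,iVBORw0KGgo=
console.log(toDataUri('https://example.com/cat.png')); // unchanged
```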
+ - SimpleAuth and improved CloudAuth ([#10569](https://github.com/mastra-ai/mastra/pull/10569))
+
+ - Fixed OpenAI schema compatibility when using `agent.generate()` or `agent.stream()` with `structuredOutput`. ([#10454](https://github.com/mastra-ai/mastra/pull/10454))
+
+ ## Changes
+ - **Automatic transformation**: Zod schemas are now automatically transformed for OpenAI strict mode compatibility when using OpenAI models (including reasoning models like o1, o3, o4)
+ - **Optional field handling**: `.optional()` fields are converted to `.nullable()` with a transform that converts `null` → `undefined`, preserving optional semantics while satisfying OpenAI's strict mode requirements
+ - **Preserves nullable fields**: Intentionally `.nullable()` fields remain unchanged
+ - **Deep transformation**: Handles `.optional()` fields at any nesting level (objects, arrays, unions, etc.)
+ - **JSON Schema objects**: Not transformed, only Zod schemas
+
+ ## Example
+
+ ```typescript
+ const agent = new Agent({
+   name: 'data-extractor',
+   model: { provider: 'openai', modelId: 'gpt-4o' },
+   instructions: 'Extract user information',
+ });
+
+ const schema = z.object({
+   name: z.string(),
+   age: z.number().optional(),
+   deletedAt: z.date().nullable(),
+ });
+
+ // Schema is automatically transformed for OpenAI compatibility
+ const result = await agent.generate('Extract: John, deleted yesterday', {
+   structuredOutput: { schema },
+ });
+
+ // Result: { name: 'John', age: undefined, deletedAt: null }
+ ```
+
+ - Add `deleteVectors`, `deleteFilter` when upserting, and `updateVector` filter (#10244) ([#10526](https://github.com/mastra-ai/mastra/pull/10526))
+
+ - Fix generateTitle model type to accept AI SDK LanguageModelV2 ([#10567](https://github.com/mastra-ai/mastra/pull/10567))
+
+ Updated the `generateTitle.model` config option to accept `MastraModelConfig` instead of `MastraLanguageModel`. This allows users to pass raw AI SDK `LanguageModelV2` models (e.g., `anthropic.languageModel('claude-3-5-haiku-20241022')`) directly without type errors.
+
+ Previously, passing a standard `LanguageModelV2` would fail because `MastraLanguageModelV2` has different `doGenerate`/`doStream` return types. Now `MastraModelConfig` is used consistently across:
+ - `memory/types.ts` - `generateTitle.model` config
+ - `agent.ts` - `genTitle`, `generateTitleFromUserMessage`, `resolveTitleGenerationConfig`
+ - `agent-legacy.ts` - `AgentLegacyCapabilities` interface
+
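For instance, a thread-title config can now pass a raw AI SDK model directly; the surrounding `Memory` options shape below is assumed from Mastra's memory API, with the model ID taken from the entry above:

```typescript
import { Memory } from '@mastra/memory';
import { anthropic } from '@ai-sdk/anthropic';

// generateTitle.model accepts MastraModelConfig, so a raw AI SDK
// LanguageModelV2 now type-checks without wrapping.
const memory = new Memory({
  options: {
    threads: {
      generateTitle: {
        model: anthropic.languageModel('claude-3-5-haiku-20241022'),
      },
    },
  },
});
```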
+ - Fix message metadata not persisting when using simple message format. Previously, custom metadata passed in messages (e.g., `{role: 'user', content: 'text', metadata: {userId: '123'}}`) was not being saved to the database. This occurred because the CoreMessage conversion path didn't preserve metadata fields. ([#10571](https://github.com/mastra-ai/mastra/pull/10571))
+
+ Now metadata is properly preserved for all message input formats:
+ - Simple CoreMessage format: `{role, content, metadata}`
+ - Full UIMessage format: `{role, content, parts, metadata}`
+ - AI SDK v5 ModelMessage format with metadata
+
+ Fixes #8556
+
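The gist of the fix can be illustrated with a simplified conversion step; the types and function below are hypothetical stand-ins for the real CoreMessage conversion path:

```typescript
// Hypothetical sketch of the fix: carry `metadata` through the CoreMessage
// conversion path instead of dropping it. Types are simplified.
type CoreMessage = { role: string; content: string; metadata?: Record<string, unknown> };
type DBMessage = { role: string; parts: { type: 'text'; text: string }[]; metadata?: Record<string, unknown> };

function coreToDBMessage(msg: CoreMessage): DBMessage {
  return {
    role: msg.role,
    parts: [{ type: 'text', text: msg.content }],
    // The bug: this copy was missing, so custom metadata was lost on save.
    ...(msg.metadata !== undefined ? { metadata: msg.metadata } : {}),
  };
}

const saved = coreToDBMessage({ role: 'user', content: 'hi', metadata: { userId: '123' } });
console.log(saved.metadata); // → { userId: '123' }
```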
+ - feat: Composite auth implementation ([#10486](https://github.com/mastra-ai/mastra/pull/10486))
+
+ - Fix requireApproval property being ignored for tools passed via toolsets, clientTools, and memoryTools parameters. The requireApproval flag now correctly propagates through all tool conversion paths, ensuring tools requiring approval will properly request user approval before execution. ([#10562](https://github.com/mastra-ai/mastra/pull/10562))
+
+ - Fix Azure Foundry rate limit handling for -1 values ([#10411](https://github.com/mastra-ai/mastra/pull/10411))
+
+ - Fix model headers not being passed through gateway system ([#10564](https://github.com/mastra-ai/mastra/pull/10564))
+
+ Previously, custom headers specified in `MastraModelConfig` were not being passed through the gateway system to model providers. This affected:
+ - OpenRouter (preventing activity tracking with `HTTP-Referer` and `X-Title`)
+ - Custom providers using custom URLs (headers not passed to `createOpenAICompatible`)
+ - Custom gateway implementations (headers not available in `resolveLanguageModel`)
+
+ Now headers are correctly passed through the entire gateway system:
+ - Base `MastraModelGateway` interface updated to accept headers
+ - `ModelRouterLanguageModel` passes headers from config to all gateways
+ - OpenRouter receives headers for activity tracking
+ - Custom URL providers receive headers via `createOpenAICompatible`
+ - Custom gateways can access headers in their `resolveLanguageModel` implementation
+
+ Example usage:
+
+ ```typescript
+ // Works with OpenRouter
+ const agent = new Agent({
+   name: 'my-agent',
+   instructions: 'You are a helpful assistant.',
+   model: {
+     id: 'openrouter/anthropic/claude-3-5-sonnet',
+     headers: {
+       'HTTP-Referer': 'https://myapp.com',
+       'X-Title': 'My Application',
+     },
+   },
+ });
+
+ // Also works with custom providers
+ const customAgent = new Agent({
+   name: 'custom-agent',
+   instructions: 'You are a helpful assistant.',
+   model: {
+     id: 'custom-provider/model',
+     url: 'https://api.custom.com/v1',
+     apiKey: 'key',
+     headers: {
+       'X-Custom-Header': 'custom-value',
+     },
+   },
+ });
+ ```
+
+ Fixes https://github.com/mastra-ai/mastra/issues/9760
+
+ - fix(agent): persist messages before tool suspension ([#10542](https://github.com/mastra-ai/mastra/pull/10542))
+
+ Fixes issues where thread and messages were not saved before suspension when tools require approval or call suspend() during execution. This caused conversation history to be lost if users refreshed during tool approval or suspension.
+
+ **Backend changes (@mastra/core):**
+ - Add assistant messages to messageList immediately after LLM execution
+ - Flush messages synchronously before suspension to persist state
+ - Create thread if it doesn't exist before flushing
+ - Add metadata helpers to persist and remove tool approval state
+ - Pass saveQueueManager and memory context through workflow for immediate persistence
+
+ **Frontend changes (@mastra/react):**
+ - Extract runId from pending approvals to enable resumption after refresh
+ - Convert `pendingToolApprovals` (DB format) to `requireApprovalMetadata` (runtime format)
+ - Handle both `dynamic-tool` and `tool-{NAME}` part types for approval state
+ - Change runId from hardcoded `agentId` to unique `uuid()`
+
+ **UI changes (@mastra/playground-ui):**
+ - Handle tool calls awaiting approval in message initialization
+ - Convert approval metadata format when loading initial messages
+
+ Fixes #9745, #9906
+
+ - Fix race condition in parallel tool stream writes ([#10481](https://github.com/mastra-ai/mastra/pull/10481))
+
+ Introduces a write queue to `ToolStream` to serialize access to the underlying stream, preventing writer-locked errors.
+
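The general technique is a promise-chained write queue; this is a minimal self-contained sketch of the idea, not the actual `ToolStream` code:

```typescript
// Minimal sketch: each write is chained onto the previous one, so the
// underlying sink is never entered concurrently, even when callers don't await.
class SerializedWriter {
  private queue: Promise<void> = Promise.resolve();
  constructor(private sink: (chunk: string) => Promise<void>) {}

  write(chunk: string): Promise<void> {
    this.queue = this.queue.then(() => this.sink(chunk));
    return this.queue;
  }
}

const out: string[] = [];
const writer = new SerializedWriter(async chunk => {
  await new Promise(r => setTimeout(r, Math.random() * 5)); // simulate a slow stream
  out.push(chunk);
});

// Parallel writes from two "tools" still arrive serialized, in write order.
await Promise.all([writer.write('a'), writer.write('b'), writer.write('c')]);
console.log(out); // write order preserved: ['a', 'b', 'c']
```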
+ - Remove unneeded console warning when flushing messages and no threadId or saveQueueManager is found. ([#10542](https://github.com/mastra-ai/mastra/pull/10542))
+
+ - Fixes GPT-5 reasoning, which was failing on subsequent tool calls with the error: ([#10489](https://github.com/mastra-ai/mastra/pull/10489))
+
+ ```
+ Item 'fc_xxx' of type 'function_call' was provided without its required 'reasoning' item: 'rs_xxx'
+ ```
+
+ - Add optional `includeRawChunks` parameter to agent execution options, allowing users to include raw chunks in stream output where supported by the model provider. ([#10459](https://github.com/mastra-ai/mastra/pull/10459))
+
+ - When `mastra dev` runs, multiple processes can write to `provider-registry.json` concurrently (auto-refresh, syncGateways, syncGlobalCacheToLocal). This causes file corruption where the end of the JSON appears twice, making it unparseable. ([#10529](https://github.com/mastra-ai/mastra/pull/10529))
+
+ The fix uses atomic writes via the write-to-temp-then-rename pattern. Instead of:
+
+ ```ts
+ fs.writeFileSync(filePath, content, 'utf-8');
+ ```
+
+ We now do:
+
+ ```ts
+ const tempPath = `${filePath}.${process.pid}.${Date.now()}.${randomSuffix}.tmp`;
+ fs.writeFileSync(tempPath, content, 'utf-8');
+ fs.renameSync(tempPath, filePath); // atomic on POSIX
+ ```
+
+ `fs.rename()` is atomic on POSIX systems when both paths are on the same filesystem, so concurrent writes will each complete fully rather than interleaving.
+
+ - Ensures that data chunks written via `writer.custom()` always bubble up directly to the top-level stream, even when nested in sub-agents. This allows tools to emit custom progress updates, metrics, and other data that can be consumed at any level of the agent hierarchy. ([#10523](https://github.com/mastra-ai/mastra/pull/10523))
+   - **Added bubbling logic in sub-agent execution**: When sub-agents execute, data chunks (chunks with type starting with `data-`) are detected and written via `writer.custom()` instead of `writer.write()`, ensuring they bubble up directly without being wrapped in `tool-output` chunks.
+   - **Added comprehensive tests**:
+     - Test for `writer.custom()` with direct tool execution
+     - Test for `writer.custom()` with sub-agent tools (nested execution)
+     - Test for mixed usage of `writer.write()` and `writer.custom()` in the same tool
+
+ When a sub-agent's tool uses `writer.custom()` to write data chunks, those chunks appear in the sub-agent's stream. The parent agent's execution logic now detects these chunks and uses `writer.custom()` to bubble them up directly, preserving their structure and making them accessible at the top level.
+
+ This ensures that:
+ - Data chunks from tools always appear directly in the stream (not wrapped)
+ - Data chunks bubble up correctly through nested agent hierarchies
+ - Regular chunks continue to be wrapped in `tool-output` as expected
+
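The routing rule described above can be sketched as a pure function; the chunk shapes and names here are simplified stand-ins, not the actual sub-agent execution code:

```typescript
// Simplified sketch: `data-` chunks bubble up via custom(), everything else
// is wrapped in a `tool-output` chunk as before. Names are illustrative.
type Chunk = { type: string; [key: string]: unknown };

interface ChunkWriter {
  write: (chunk: Chunk) => void;
  custom: (chunk: Chunk) => void;
}

function routeSubAgentChunk(chunk: Chunk, writer: ChunkWriter): void {
  if (chunk.type.startsWith('data-')) {
    writer.custom(chunk); // bubbles up directly, structure preserved
  } else {
    writer.write({ type: 'tool-output', output: chunk }); // wrapped as before
  }
}

const seen: Chunk[] = [];
const writer: ChunkWriter = {
  write: chunk => { seen.push(chunk); },
  custom: chunk => { seen.push(chunk); },
};

routeSubAgentChunk({ type: 'data-progress', data: { pct: 50 } }, writer);
routeSubAgentChunk({ type: 'text-delta', text: 'hi' }, writer);
console.log(seen.map(c => c.type)); // → ['data-progress', 'tool-output']
```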
+ - Adds the ability to create custom `MastraModelGateway`s that can be added to the `Mastra` class instance under the `gateways` property, giving you TypeScript autocompletion in any model picker string. ([#10535](https://github.com/mastra-ai/mastra/pull/10535))
+
+ ```typescript
+ import { Mastra } from '@mastra/core';
+ import { MastraModelGateway, type ProviderConfig } from '@mastra/core/llm';
+ import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
+ import type { LanguageModelV2 } from '@ai-sdk/provider';
+
+ class MyCustomGateway extends MastraModelGateway {
+   readonly id = 'custom';
+   readonly name = 'My Custom Gateway';
+
+   async fetchProviders(): Promise<Record<string, ProviderConfig>> {
+     return {
+       'my-provider': {
+         name: 'My Provider',
+         models: ['model-1', 'model-2'],
+         apiKeyEnvVar: 'MY_API_KEY',
+         gateway: this.id,
+       },
+     };
+   }
+
+   buildUrl(modelId: string, envVars?: Record<string, string>): string {
+     return 'https://api.my-provider.com/v1';
+   }
+
+   async getApiKey(modelId: string): Promise<string> {
+     const apiKey = process.env.MY_API_KEY;
+     if (!apiKey) throw new Error('MY_API_KEY not set');
+     return apiKey;
+   }
+
+   async resolveLanguageModel({
+     modelId,
+     providerId,
+     apiKey,
+   }: {
+     modelId: string;
+     providerId: string;
+     apiKey: string;
+   }): Promise<LanguageModelV2> {
+     const baseURL = this.buildUrl(`${providerId}/${modelId}`);
+     return createOpenAICompatible({
+       name: providerId,
+       apiKey,
+       baseURL,
+     }).chatModel(modelId);
+   }
+ }
+
+ new Mastra({
+   gateways: {
+     myGateway: new MyCustomGateway(),
+   },
+ });
+ ```
+
+ - Support AI SDK voice models ([#10558](https://github.com/mastra-ai/mastra/pull/10558))
+
+ Mastra now supports AI SDK's transcription and speech models directly in `CompositeVoice`, enabling seamless integration with a wide range of voice providers through the AI SDK ecosystem. This allows you to use models from OpenAI, ElevenLabs, Groq, Deepgram, LMNT, Hume, and many more for both speech-to-text (transcription) and text-to-speech capabilities.
+
+ AI SDK models are automatically wrapped when passed to `CompositeVoice`, so you can mix and match AI SDK models with existing Mastra voice providers for maximum flexibility.
+
+ ## Usage Example
+
+ ```typescript
+ import { CompositeVoice } from '@mastra/core/voice';
+ import { openai } from '@ai-sdk/openai';
+ import { elevenlabs } from '@ai-sdk/elevenlabs';
+
+ // Use AI SDK models directly with CompositeVoice
+ const voice = new CompositeVoice({
+   input: openai.transcription('whisper-1'), // AI SDK transcription model
+   output: elevenlabs.speech('eleven_turbo_v2'), // AI SDK speech model
+ });
+
+ // Convert text to speech
+ const audioStream = await voice.speak('Hello from AI SDK!');
+
+ // Convert speech to text
+ const transcript = await voice.listen(audioStream);
+ console.log(transcript);
+ ```
+
+ Fixes #9947
+
+ - Fix network data step formatting in AI SDK stream transformation ([#10525](https://github.com/mastra-ai/mastra/pull/10525))
+
+ Previously, network execution steps were not being tracked correctly in the AI SDK stream transformation. Steps were being duplicated rather than updated, and critical metadata like step IDs, iterations, and task information was missing or incorrectly structured.
+
+ **Changes:**
+ - Enhanced step tracking in `AgentNetworkToAISDKTransformer` to properly maintain step state throughout execution lifecycle
+ - Steps are now identified by unique IDs and updated in place rather than creating duplicates
+ - Added proper iteration and task metadata to each step in the network execution flow
+ - Fixed agent, workflow, and tool execution events to correctly populate step data
+ - Updated network stream event types to include `networkId`, `workflowId`, and consistent `runId` tracking
+ - Added test coverage for network custom data chunks with comprehensive validation
+
+ This ensures the AI SDK correctly represents the full execution flow of agent networks with accurate step sequencing and metadata.
+
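Update-in-place step tracking keyed by step ID, as described above, amounts to something like this sketch (names and fields are illustrative, not the actual `AgentNetworkToAISDKTransformer`):

```typescript
// Sketch: steps keyed by ID are merged into the existing entry instead of
// being appended as duplicates. Field names are hypothetical.
type NetworkStep = { id: string; iteration: number; task?: string; status: string };

class StepTracker {
  private steps = new Map<string, NetworkStep>();

  upsert(step: NetworkStep): void {
    const existing = this.steps.get(step.id);
    // Same ID: update in place rather than creating a duplicate.
    this.steps.set(step.id, existing ? { ...existing, ...step } : step);
  }

  list(): NetworkStep[] {
    return [...this.steps.values()];
  }
}

const tracker = new StepTracker();
tracker.upsert({ id: 'step-1', iteration: 0, status: 'running' });
tracker.upsert({ id: 'step-1', iteration: 0, status: 'done', task: 'route' });
console.log(tracker.list().length); // → 1
```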
+ - Fix generating provider-registry.json ([#10535](https://github.com/mastra-ai/mastra/pull/10535))
+
+ - Fix message-list conversion issues when persisting messages before tool suspension: filter internal metadata fields (`__originalContent`) from UI messages, keep reasoning field empty for consistent cache keys during message deduplication, and only include providerMetadata on parts when defined. ([#10552](https://github.com/mastra-ai/mastra/pull/10552))
+
+ - Fix agent.generate() to use model's doGenerate method instead of doStream ([#10572](https://github.com/mastra-ai/mastra/pull/10572))
+
+ When calling `agent.generate()`, the model's `doGenerate` method is now correctly invoked instead of always using `doStream`. This aligns the non-streaming generation path with the intended behavior where providers can implement optimized non-streaming responses.
+
+ - Updated dependencies [[`33a607a`](https://github.com/mastra-ai/mastra/commit/33a607a1f716c2029d4a1ff1603dd756129a33b3)]:
+   - @mastra/schema-compat@0.11.8
+
  ## 0.24.6-alpha.0
 
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mastra/core",
- "version": "0.24.6-alpha.0",
+ "version": "0.24.6",
  "license": "Apache-2.0",
  "type": "module",
  "main": "dist/index.js",
@@ -220,7 +220,7 @@
  "sift": "^17.1.3",
  "xstate": "^5.20.1",
  "zod-to-json-schema": "^3.24.6",
- "@mastra/schema-compat": "0.11.8-alpha.0"
+ "@mastra/schema-compat": "0.11.8"
  },
  "peerDependencies": {
  "zod": "^3.25.0 || ^4.0.0"
@@ -247,9 +247,9 @@
  "vitest": "^3.2.4",
  "zod": "^3.25.76",
  "zod-v4": "npm:zod@4.1.12",
- "@internal/lint": "0.0.63",
- "@internal/external-types": "0.0.13",
- "@internal/types-builder": "0.0.38"
+ "@internal/external-types": "0.0.14",
+ "@internal/lint": "0.0.64",
+ "@internal/types-builder": "0.0.39"
  },
  "engines": {
  "node": ">=20"