@mastra/core 1.2.0-alpha.1 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,196 @@
  # @mastra/core
 
+ ## 1.2.0
+
+ ### Minor Changes
+
+ - Added Observational Memory — a new memory system that keeps your agent's context window small while preserving long-term memory across conversations. ([#12599](https://github.com/mastra-ai/mastra/pull/12599))
+
+ **Why:** Long conversations cause context rot and waste tokens. Observational Memory compresses conversation history into observations (5–40x compression) and periodically condenses those into reflections. Your agent stays fast and focused, even after thousands of messages.
+
+ **Usage:**
+
+ ```ts
+ import { Agent } from '@mastra/core';
+ import { openai } from '@ai-sdk/openai';
+ import { Memory } from '@mastra/memory';
+ import { PostgresStore } from '@mastra/pg';
+
+ const memory = new Memory({
+   storage: new PostgresStore({ connectionString: process.env.DATABASE_URL }),
+   options: {
+     observationalMemory: true,
+   },
+ });
+
+ const agent = new Agent({
+   name: 'my-agent',
+   model: openai('gpt-4o'),
+   memory,
+ });
+ ```
+
+ **What's new:**
+ - `observationalMemory: true` enables the three-tier memory system (recent messages → observations → reflections)
+ - Thread-scoped (per-conversation) and resource-scoped (shared across all threads for a user) modes
+ - Manual `observe()` API for triggering observation outside the normal agent loop
+ - New OM storage methods for pg, libsql, and mongodb adapters (conditionally enabled)
+ - `Agent.findProcessor()` method for looking up processors by ID
+ - `processorStates` for persisting processor state across loop iterations
+ - Abort signal propagation to processors
+ - `ProcessorStreamWriter` for custom stream events from processors
+
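The three-tier flow described above (recent messages → observations → reflections) can be sketched as a toy buffer. This is purely illustrative: the class, thresholds, and compression strings below are hypothetical stand-ins, not Mastra's actual Observational Memory implementation, which lives in `@mastra/memory`.

```typescript
// Illustrative sketch only: a toy three-tier buffer showing the shape of the
// recent-messages -> observations -> reflections flow. All names and
// thresholds here are hypothetical, not Mastra internals.
type Message = { role: string; content: string };

class ThreeTierMemory {
  recent: Message[] = [];
  observations: string[] = [];
  reflections: string[] = [];

  constructor(
    private maxRecent = 4,       // hypothetical tier-1 threshold
    private maxObservations = 3, // hypothetical tier-2 threshold
  ) {}

  add(message: Message): void {
    this.recent.push(message);
    // Tier 1 -> 2: compress the oldest recent messages into one observation.
    if (this.recent.length > this.maxRecent) {
      const batch = this.recent.splice(0, this.maxRecent);
      this.observations.push(`observed ${batch.length} messages`);
    }
    // Tier 2 -> 3: periodically condense observations into one reflection.
    if (this.observations.length > this.maxObservations) {
      const batch = this.observations.splice(0, this.maxObservations);
      this.reflections.push(`reflection over ${batch.length} observations`);
    }
  }

  // The context window only ever holds the three small tiers, never the
  // full message history.
  contextSize(): number {
    return this.recent.length + this.observations.length + this.reflections.length;
  }
}
```

The point of the sketch is the invariant: however many messages arrive, the context footprint stays bounded by the tier sizes, which is what keeps the agent's prompt small over long conversations.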
+ - Created @mastra/editor package for managing and resolving stored agent configurations ([#12631](https://github.com/mastra-ai/mastra/pull/12631))
+
+ This major addition introduces the editor package, which provides a complete solution for storing, versioning, and instantiating agent configurations from a database. The editor seamlessly integrates with Mastra's storage layer to enable dynamic agent management.
+
+ **Key Features:**
+ - **Agent Storage & Retrieval**: Store complete agent configurations including instructions, model settings, tools, workflows, nested agents, scorers, processors, and memory configuration
+ - **Version Management**: Create and manage multiple versions of agents, with support for activating specific versions
+ - **Dependency Resolution**: Automatically resolves and instantiates all agent dependencies (tools, workflows, sub-agents, etc.) from the Mastra registry
+ - **Caching**: Built-in caching for improved performance when repeatedly accessing stored agents
+ - **Type Safety**: Full TypeScript support with proper typing for stored configurations
+
+ **Usage Example:**
+
+ ```typescript
+ import { MastraEditor } from '@mastra/editor';
+ import { Mastra } from '@mastra/core';
+
+ // Initialize editor with Mastra
+ const mastra = new Mastra({
+   /* config */
+   editor: new MastraEditor(),
+ });
+
+ // Store an agent configuration
+ const agentId = await mastra.storage.stores?.agents?.createAgent({
+   name: 'customer-support',
+   instructions: 'Help customers with inquiries',
+   model: { provider: 'openai', name: 'gpt-4' },
+   tools: ['search-kb', 'create-ticket'],
+   workflows: ['escalation-flow'],
+   memory: { vector: 'pinecone-db' },
+ });
+
+ // Retrieve and use the stored agent
+ const agent = await mastra.getEditor()?.getStoredAgentById(agentId);
+ const response = await agent?.generate('How do I reset my password?');
+
+ // List all stored agents
+ const agents = await mastra.getEditor()?.listStoredAgents({ pageSize: 10 });
+ ```
+
+ **Storage Improvements:**
+ - Fixed JSONB handling in LibSQL, PostgreSQL, and MongoDB adapters
+ - Improved agent resolution queries to properly merge version data
+ - Enhanced type safety for serialized configurations
+
+ - Added logger support to Workspace filesystem and sandbox providers. Providers extending MastraFilesystem or MastraSandbox now automatically receive the Mastra logger for consistent logging of file operations and command executions. ([#12606](https://github.com/mastra-ai/mastra/pull/12606))
+
+ - Added ToolSearchProcessor for dynamic tool discovery. ([#12290](https://github.com/mastra-ai/mastra/pull/12290))
+
+ Agents can now discover and load tools on demand instead of having all tools available upfront. This reduces context token usage by ~94% when working with large tool libraries.
+
+ **New API:**
+
+ ```typescript
+ import { ToolSearchProcessor } from '@mastra/core/processors';
+ import { Agent } from '@mastra/core';
+
+ // Create a processor with searchable tools
+ const toolSearch = new ToolSearchProcessor({
+   tools: {
+     createIssue: githubTools.createIssue,
+     sendEmail: emailTools.send,
+     // ... hundreds of tools
+   },
+   search: {
+     topK: 5, // Return top 5 results (default: 5)
+     minScore: 0.1, // Filter results below this score (default: 0)
+   },
+ });
+
+ // Attach processor to agent
+ const agent = new Agent({
+   name: 'my-agent',
+   inputProcessors: [toolSearch],
+   tools: {
+     /* always-available tools */
+   },
+ });
+ ```
+
+ **How it works:**
+
+ The processor automatically provides two meta-tools to the agent:
+ - `search_tools` - Search for available tools by keyword relevance
+ - `load_tool` - Load a specific tool into the conversation
+
+ The agent discovers what it needs via search and loads tools on demand. Loaded tools are available immediately and persist within the conversation thread.
+
+ **Why:**
+
+ When agents have access to 100+ tools (from MCP servers or integrations), including all tool definitions in the context can consume significant tokens (~1,500 tokens per tool). This pattern reduces context usage by giving agents only the tools they need, when they need them.
+
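The savings figure above can be sanity-checked with rough arithmetic. The ~1,500 tokens-per-tool cost comes from the entry itself; the assumption that the two meta-tools cost about as much as ordinary tool definitions is ours, which is why this back-of-the-envelope lands slightly under the quoted ~94%.

```typescript
// Rough sanity check of the claimed context savings. TOKENS_PER_TOOL is the
// figure quoted in the changelog entry; treating the two meta-tools as
// full-cost tool definitions is a simplifying assumption.
const TOKENS_PER_TOOL = 1_500;
const TOTAL_TOOLS = 100;
const LOADED_TOOLS = 5; // topK results actually loaded on demand
const META_TOOLS = 2;   // search_tools + load_tool

const upfrontCost = TOTAL_TOOLS * TOKENS_PER_TOOL;                  // 150,000 tokens
const onDemandCost = (LOADED_TOOLS + META_TOOLS) * TOKENS_PER_TOOL; // 10,500 tokens
const savings = 1 - onDemandCost / upfrontCost;                     // 0.93

console.log(`savings: ${(savings * 100).toFixed(0)}%`);
```

With lighter meta-tool schemas (they carry no domain-specific parameters), the real ratio plausibly reaches the ~94% the entry reports.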
+ ### Patch Changes
+
+ - Update provider registry and model documentation with latest models and providers ([`e6fc281`](https://github.com/mastra-ai/mastra/commit/e6fc281896a3584e9e06465b356a44fe7faade65))
+
+ - Fixed processors returning `{ tools: {}, toolChoice: 'none' }` being ignored. Previously, when a processor returned empty tools with an explicit `toolChoice: 'none'` to prevent tool calls, the toolChoice was discarded and defaulted to 'auto'. This fix preserves the explicit 'none' value, enabling patterns like ensuring a final text response when `maxSteps` is reached. ([#12601](https://github.com/mastra-ai/mastra/pull/12601))
+
+ - Fixed moonshotai/kimi-k2.5 multi-step tool calling failing with "reasoning_content is missing in assistant tool call message" ([#12530](https://github.com/mastra-ai/mastra/pull/12530))
+ - Changed the moonshotai and moonshotai-cn (China version) providers to use Anthropic-compatible API endpoints instead of OpenAI-compatible ones
+ - moonshotai: `https://api.moonshot.ai/anthropic/v1`
+ - moonshotai-cn: `https://api.moonshot.cn/anthropic/v1`
+ - This properly handles reasoning_content for the kimi-k2.5 model
+
+ - Fixed custom input processors disabling workspace skill tools in generate() and stream(). Custom processors now replace only the processors you configured, while memory and skills remain available. Fixes #12612. ([#12676](https://github.com/mastra-ai/mastra/pull/12676))
+
+ - **Fixed** ([#12673](https://github.com/mastra-ai/mastra/pull/12673))
+ Workspace search index names now use underscores so they work with SQL-based vector stores (PgVector, LibSQL).
+
+ **Added**
+ You can now set a custom index name with `searchIndexName`.
+
+ **Why**
+ Some SQL vector stores reject hyphens in index names.
+
+ **Example**
+
+ ```ts
+ // Before this release: the index name derived from the hyphenated id
+ // contained hyphens and failed with PgVector
+ new Workspace({ id: 'my-workspace', vectorStore, embedder });
+
+ // After: the same code works with all vector stores, because the derived
+ // index name now uses underscores
+ new Workspace({ id: 'my-workspace', vectorStore, embedder });
+
+ // Or set a custom index name explicitly
+ new Workspace({ vectorStore, embedder, searchIndexName: 'my_workspace_vectors' });
+ ```
+
+ Fixes #12656
+
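The fix described above amounts to sanitizing the derived index name. The helper below is a hypothetical sketch of that kind of derivation; the entry does not specify how `Workspace` actually builds the name, so the `workspace_` prefix and the replacement scheme are assumptions for illustration only.

```typescript
// Hypothetical sketch: derive a SQL-safe vector index name from a workspace
// id. The real derivation inside Workspace is not specified in the entry.
function toSearchIndexName(workspaceId: string): string {
  // SQL identifiers generally disallow hyphens; collapse any run of
  // non-alphanumeric characters into a single underscore.
  return `workspace_${workspaceId.replace(/[^a-zA-Z0-9]+/g, '_')}`;
}
```

For example, `toSearchIndexName('my-workspace')` yields `workspace_my_workspace`, which both PgVector and LibSQL accept as an index name.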
+ - Catch up evented workflows to parity with the default execution engine ([#12555](https://github.com/mastra-ai/mastra/pull/12555))
+
+ - Expose token usage from embedding operations ([#12556](https://github.com/mastra-ai/mastra/pull/12556))
+ - `saveMessages` now returns `usage: { tokens: number }` with the aggregated token count from all embeddings
+ - `recall` now returns `usage: { tokens: number }` from the vector search query embedding
+ - Updated abstract method signatures in `MastraMemory` to include optional `usage` in return types
+
+ This allows users to track embedding token usage when using the Memory class.
+
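The aggregation the entry describes is a simple sum over per-embedding token counts into the `usage: { tokens: number }` shape. The result type below is a hypothetical stand-in, not Mastra's internal embedding type.

```typescript
// Illustrative only: fold per-embedding token counts into the
// `usage: { tokens: number }` shape returned by saveMessages. The
// EmbeddingResult type is a hypothetical stand-in.
type EmbeddingResult = { embedding: number[]; tokens: number };

function aggregateUsage(results: EmbeddingResult[]): { tokens: number } {
  return { tokens: results.reduce((sum, r) => sum + r.tokens, 0) };
}
```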
+ - Fixed a security issue where sensitive observability credentials (such as Langfuse API keys) could be exposed in tool execution error logs. The tracingContext is now properly excluded from logged data. ([#12669](https://github.com/mastra-ai/mastra/pull/12669))
+
+ - Fixed an issue where some models incorrectly called skill names directly as tools instead of using skill-activate. Added clearer system instructions that explicitly state skills are NOT tools and must be activated via skill-activate with the skill name as the "name" parameter. Fixes #12654. ([#12677](https://github.com/mastra-ai/mastra/pull/12677))
+
+ - Improved workspace filesystem error handling: return 404 for not-found errors instead of 500, show user-friendly error messages in the UI, and add a MastraClientError class with status/body properties for better error handling ([#12533](https://github.com/mastra-ai/mastra/pull/12533))
+
+ - Improved workspace tool descriptions with clearer usage guidance for the read_file, edit_file, and execute_command tools. ([#12640](https://github.com/mastra-ai/mastra/pull/12640))
+
+ - Fixed JSON parsing in agent network to handle malformed LLM output. Uses parsePartialJson from the AI SDK to recover truncated JSON, missing braces, and unescaped control characters instead of failing immediately. This reduces unnecessary retry round-trips when the routing agent generates slightly malformed JSON for tool/workflow prompts. Fixes #12519. ([#12526](https://github.com/mastra-ai/mastra/pull/12526))
+
+ - Updated dependencies [[`abae238`](https://github.com/mastra-ai/mastra/commit/abae238c755ebaf867bbfa1a3a219ef003a1021a)]:
+   - @mastra/schema-compat@1.1.0
+
  ## 1.2.0-alpha.1
 
  ### Minor Changes
@@ -54,4 +54,4 @@ docs/
  ## Version
 
  Package: @mastra/core
- Version: 1.2.0-alpha.1
+ Version: 1.2.0
@@ -5,7 +5,7 @@ description: Documentation for @mastra/core. Includes links to type definitions
 
  # @mastra/core Documentation
 
- > **Version**: 1.2.0-alpha.1
+ > **Version**: 1.2.0
  > **Package**: @mastra/core
 
  ## Quick Navigation
@@ -1,5 +1,5 @@
  {
- "version": "1.2.0-alpha.1",
+ "version": "1.2.0",
  "package": "@mastra/core",
  "exports": {
  "Agent": {
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mastra/core",
- "version": "1.2.0-alpha.1",
+ "version": "1.2.0",
  "license": "Apache-2.0",
  "type": "module",
  "main": "dist/index.js",
@@ -216,7 +216,7 @@
  "p-retry": "^7.1.0",
  "radash": "^12.1.1",
  "xxhash-wasm": "^1.1.0",
- "@mastra/schema-compat": "1.1.0-alpha.0"
+ "@mastra/schema-compat": "1.1.0"
  },
  "peerDependencies": {
  "zod": "^3.25.0 || ^4.0.0"
@@ -255,12 +255,12 @@
  "typescript": "^5.9.3",
  "vitest": "4.0.16",
  "zod": "^3.25.76",
- "@internal/ai-sdk-v4": "0.0.3",
- "@internal/ai-sdk-v5": "0.0.3",
- "@internal/ai-v6": "0.0.3",
- "@internal/external-types": "0.0.6",
- "@internal/lint": "0.0.56",
- "@internal/types-builder": "0.0.31"
+ "@internal/ai-sdk-v4": "0.0.4",
+ "@internal/ai-v6": "0.0.4",
+ "@internal/external-types": "0.0.7",
+ "@internal/lint": "0.0.57",
+ "@internal/ai-sdk-v5": "0.0.4",
+ "@internal/types-builder": "0.0.32"
  },
  "engines": {
  "node": ">=22.13.0"