@mastra/mcp-docs-server 1.1.29-alpha.4 → 1.1.29-alpha.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -255,8 +255,10 @@ const { messages, sendMessage } = useChat({
     return {
      body: {
        messages: [messages[messages.length - 1]],
-        threadId: 'user-thread-123',
-        resourceId: 'user-123',
+        memory: {
+          thread: 'user-thread-123',
+          resource: 'user-123',
+        },
      },
    }
  },
@@ -264,7 +266,7 @@ const { messages, sendMessage } = useChat({
 })
 ```
 
-Set `threadId` and `resourceId` from your app's own state, such as URL params, auth context, or your database.
+Set `memory.thread` and `memory.resource` from your app's own state, such as URL params, auth context, or your database.
 
 See [Message history](https://mastra.ai/docs/memory/message-history) for more on how Mastra memory loads and stores messages.
 
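The hunks above migrate the chat request body from top-level `threadId`/`resourceId` to a nested `memory` object. As a minimal sketch of the resulting body shape (the `UIMessage` type and `buildChatBody` helper here are simplified stand-ins for illustration, not part of the real `useChat` API):

```typescript
// Simplified stand-in for the useChat message type.
interface UIMessage {
  role: string;
  content: string;
}

// Build the request body: only the latest message is sent (the server
// reloads history from memory), and the nested `memory` object replaces
// the old top-level threadId/resourceId fields.
function buildChatBody(messages: UIMessage[], thread: string, resource: string) {
  return {
    messages: [messages[messages.length - 1]],
    memory: { thread, resource },
  };
}
```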
@@ -4,6 +4,30 @@ Azure OpenAI provides enterprise-grade access to OpenAI models through dedicated
 
 Unlike other providers that have fixed model names, Azure uses **deployment names** that you configure in the Azure Portal.
 
+## Usage
+
+```typescript
+import { Agent } from "@mastra/core/agent";
+
+const agent = new Agent({
+  id: "my-agent",
+  name: "My Agent",
+  instructions: "You are a helpful assistant",
+  model: "azure-openai/my-gpt4-deployment" // Use your Azure deployment name (autocompleted in dev mode)
+});
+
+// Generate a response
+const response = await agent.generate("Hello!");
+
+// Stream a response
+const stream = await agent.stream("Tell me a story");
+for await (const chunk of stream) {
+  console.log(chunk);
+}
+```
+
+Check [Azure OpenAI model availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) for region-specific options.
+
 ## How Azure Deployments Work
 
 Azure model IDs follow this pattern: `azure-openai/your-deployment-name`
@@ -101,28 +125,4 @@ export const mastra = new Mastra({
 | `management.subscriptionId` | `string` | Yes\* | Azure subscription ID |
 | `management.resourceGroup` | `string` | Yes\* | Resource group name |
 
-\* Required if `management` is provided
-
-## Usage
-
-```typescript
-import { Agent } from "@mastra/core/agent";
-
-const agent = new Agent({
-  id: "my-agent",
-  name: "My Agent",
-  instructions: "You are a helpful assistant",
-  model: "azure-openai/my-gpt4-deployment" // Use your Azure deployment name (autocompleted in dev mode)
-});
-
-// Generate a response
-const response = await agent.generate("Hello!");
-
-// Stream a response
-const stream = await agent.stream("Tell me a story");
-for await (const chunk of stream) {
-  console.log(chunk);
-}
-```
-
-Check [Azure OpenAI model availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) for region-specific options.
+\* Required if `management` is provided
@@ -0,0 +1,64 @@
+# ![Mastra logo](https://mastra.ai/brand/logo.svg)Mastra
+
+The Mastra Memory Gateway is an OpenAI-compatible API proxy with built-in [Observational Memory](https://gateway.mastra.ai/docs/features#observational-memory). Point any HTTP client, SDK, or framework at the gateway and every conversation is automatically remembered without any memory management code.
+
+Learn more in the [Memory Gateway documentation](https://gateway.mastra.ai/docs).
+
+## Get an API key
+
+Go to [gateway.mastra.ai](https://gateway.mastra.ai) and sign up for a Mastra account. During the onboarding you'll get your personal API key to authenticate requests.
+
+## Usage
+
+Define your API key as an environment variable:
+
+```bash
+MASTRA_GATEWAY_API_KEY=your-gateway-key
+```
+
+Set your gateway model ID:
+
+```typescript
+import { Agent } from "@mastra/core/agent";
+
+const agent = new Agent({
+  id: "my-agent",
+  name: "My Agent",
+  instructions: "You are a helpful assistant",
+  model: "mastra/openai/gpt-5-mini"
+});
+```
+
+Pass `memory.thread` and `memory.resource` when you generate/stream responses to enable Observational Memory:
+
+```typescript
+import { weatherAgent } from "./agents/weather-agent";
+
+const memory = {
+  thread: "assistant-thread-1",
+  resource: "user-42",
+};
+
+const result = await weatherAgent.stream("My name is Alex and I prefer concise answers.", {
+  memory,
+});
+
+for await (const chunk of result.textStream) {
+  process.stdout.write(chunk);
+}
+```
+
+## Configuration
+
+```bash
+# Use gateway API key
+MASTRA_GATEWAY_API_KEY=your-gateway-key
+```
+
+## Learn more
+
+- [Features](https://gateway.mastra.ai/docs/features)
+- [Models](https://gateway.mastra.ai/docs/models)
+- [Limits](https://gateway.mastra.ai/docs/limits)
+- [API Reference](https://gateway.mastra.ai/docs/api/overview)
+- [Examples](https://gateway.mastra.ai/docs/examples/)
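Because the new README above describes the gateway as an OpenAI-compatible proxy, any plain HTTP client can target it. A hedged sketch of the request wire format (the endpoint path is an assumption, not taken from this diff — verify it against the gateway API reference; the model id comes from the README):

```typescript
// Hypothetical endpoint path -- confirm against the gateway API reference.
const GATEWAY_URL = "https://gateway.mastra.ai/v1/chat/completions";

// Build an OpenAI-compatible chat-completions request for the gateway.
// Returns the url plus a fetch-style init object; no network call is made here.
function buildGatewayRequest(apiKey: string, userText: string) {
  return {
    url: GATEWAY_URL,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "mastra/openai/gpt-5-mini",
        messages: [{ role: "user", content: userText }],
      }),
    },
  };
}
```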
@@ -1,6 +1,6 @@
 # ![OpenRouter logo](https://models.dev/logos/openrouter.svg)OpenRouter
 
-OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 177 models through Mastra's model router.
+OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 178 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -163,6 +163,7 @@ ANTHROPIC_API_KEY=ant-...
 | `openai/o4-mini` |
 | `openrouter/elephant-alpha` |
 | `openrouter/free` |
+| `openrouter/pareto-code` |
 | `prime-intellect/intellect-3` |
 | `qwen/qwen-2.5-coder-32b-instruct` |
 | `qwen/qwen2.5-vl-72b-instruct` |
@@ -9,6 +9,7 @@ Create custom gateways for private LLM deployments or specialized provider integ
 ## Built-in gateways
 
 - [Azure OpenAI](https://mastra.ai/models/gateways/azure-openai)
+- [Mastra](https://mastra.ai/models/gateways/mastra)
 - [Netlify](https://mastra.ai/models/gateways/netlify)
 - [OpenRouter](https://mastra.ai/models/gateways/openrouter)
 - [Vercel](https://mastra.ai/models/gateways/vercel)
@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3739 models from 104 providers through a single API.
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3743 models from 105 providers through a single API.
 
 ## Features
 
@@ -27,7 +27,7 @@ const agent = new Agent({
   id: "my-agent",
   name: "My Agent",
   instructions: "You are a helpful assistant",
-  model: "openai/gpt-5"
+  model: "openai/gpt-5.5"
 })
 ```
 
@@ -40,7 +40,7 @@ const agent = new Agent({
   id: "my-agent",
   name: "My Agent",
   instructions: "You are a helpful assistant",
-  model: "anthropic/claude-4-5-sonnet"
+  model: "anthropic/claude-sonnet-4-6"
 })
 ```
 
@@ -79,7 +79,7 @@ const agent = new Agent({
   id: "my-agent",
   name: "My Agent",
   instructions: "You are a helpful assistant",
-  model: "openrouter/anthropic/claude-haiku-4-5"
+  model: "openrouter/anthropic/claude-haiku-4.5"
 })
 ```
 
@@ -1,6 +1,6 @@
 # ![NovitaAI logo](https://models.dev/logos/novita-ai.svg)NovitaAI
 
-Access 96 NovitaAI models through Mastra's model router. Authentication is handled automatically using the `NOVITA_API_KEY` environment variable.
+Access 99 NovitaAI models through Mastra's model router. Authentication is handled automatically using the `NOVITA_API_KEY` environment variable.
 
 Learn more in the [NovitaAI documentation](https://novita.ai/docs/guides/introduction).
 
@@ -56,6 +56,8 @@ for await (const chunk of stream) {
 | `novita-ai/deepseek/deepseek-v3.1-terminus` | 131K | | | | | | $0.27 | $1 |
 | `novita-ai/deepseek/deepseek-v3.2` | 164K | | | | | | $0.27 | $0.40 |
 | `novita-ai/deepseek/deepseek-v3.2-exp` | 164K | | | | | | $0.27 | $0.41 |
+| `novita-ai/deepseek/deepseek-v4-flash` | 1.0M | | | | | | $0.14 | $0.28 |
+| `novita-ai/deepseek/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
 | `novita-ai/google/gemma-3-12b-it` | 131K | | | | | | $0.05 | $0.10 |
 | `novita-ai/google/gemma-3-27b-it` | 98K | | | | | | $0.12 | $0.20 |
 | `novita-ai/google/gemma-4-26b-a4b-it` | 262K | | | | | | $0.13 | $0.40 |
@@ -115,6 +117,7 @@ for await (const chunk of stream) {
 | `novita-ai/qwen/qwen3.5-27b` | 262K | | | | | | $0.30 | $2 |
 | `novita-ai/qwen/qwen3.5-35b-a3b` | 262K | | | | | | $0.25 | $2 |
 | `novita-ai/qwen/qwen3.5-397b-a17b` | 262K | | | | | | $0.60 | $4 |
+| `novita-ai/qwen/qwen3.6-27b` | 262K | | | | | | $0.60 | $4 |
 | `novita-ai/sao10k/l3-70b-euryale-v2.1` | 8K | | | | | | $1 | $1 |
 | `novita-ai/sao10k/l3-8b-lunaris` | 8K | | | | | | $0.05 | $0.05 |
 | `novita-ai/sao10k/L3-8B-Stheno-v3.2` | 8K | | | | | | $0.05 | $0.05 |
@@ -153,8 +153,10 @@ async function manageClones() {
 
   // Have a conversation...
   await agent.generate("Hello! Let's discuss project options.", {
-    threadId: originalThread.id,
-    resourceId: 'user-123',
+    memory: {
+      thread: originalThread.id,
+      resource: 'user-123',
+    },
   })
 
   // Create multiple branches (clones) to explore different paths
@@ -93,8 +93,10 @@ const { thread: dateFilteredClone } = await memory.cloneThread({
 
 // Continue conversation on the cloned thread
 const response = await agent.generate("Let's try a different approach", {
-  threadId: fullClone.id,
-  resourceId: fullClone.resourceId,
+  memory: {
+    thread: fullClone.id,
+    resource: fullClone.resourceId,
+  },
 })
 ```
 
package/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
 # @mastra/mcp-docs-server
 
+## 1.1.29-alpha.5
+
+### Patch Changes
+
+- Updated dependencies [[`9e973b0`](https://github.com/mastra-ai/mastra/commit/9e973b010dacfa15ac82b0072897319f5234b90a), [`dd934a0`](https://github.com/mastra-ai/mastra/commit/dd934a0982ce0f78712fbd559e4f2410bf594b39), [`73f2809`](https://github.com/mastra-ai/mastra/commit/73f2809721db24e98cdf122539652a455211b450), [`aedeea4`](https://github.com/mastra-ai/mastra/commit/aedeea48a94f728323f040478775076b9574be50), [`8126d86`](https://github.com/mastra-ai/mastra/commit/8126d8638411eacfafdc29036ac998e8757ea66f), [`ae97520`](https://github.com/mastra-ai/mastra/commit/ae975206fdb0f6ef03c4d5bf94f7dc7c3f706c02), [`441670a`](https://github.com/mastra-ai/mastra/commit/441670a02c9dc7731c52674f55481e7848a84523)]:
+  - @mastra/core@1.29.0-alpha.2
+
 ## 1.1.29-alpha.2
 
 ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@mastra/mcp-docs-server",
-  "version": "1.1.29-alpha.4",
+  "version": "1.1.29-alpha.6",
   "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
   "type": "module",
   "main": "dist/index.js",
@@ -29,8 +29,8 @@
     "jsdom": "^26.1.0",
     "local-pkg": "^1.1.2",
     "zod": "^4.3.6",
-    "@mastra/mcp": "^1.5.2",
-    "@mastra/core": "1.29.0-alpha.1"
+    "@mastra/core": "1.29.0-alpha.2",
+    "@mastra/mcp": "^1.5.2"
   },
   "devDependencies": {
     "@hono/node-server": "^1.19.11",
@@ -46,9 +46,9 @@
     "tsx": "^4.21.0",
     "typescript": "^5.9.3",
     "vitest": "4.1.5",
-    "@mastra/core": "1.29.0-alpha.1",
-    "@internal/types-builder": "0.0.61",
-    "@internal/lint": "0.0.86"
+    "@internal/lint": "0.0.86",
+    "@mastra/core": "1.29.0-alpha.2",
+    "@internal/types-builder": "0.0.61"
   },
   "homepage": "https://mastra.ai",
   "repository": {