@mastra/mcp-docs-server 1.1.27-alpha.1 → 1.1.27-alpha.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -46,6 +46,37 @@ See [configuration options](https://mastra.ai/reference/memory/observational-mem
 
  > **Note:** OM currently only supports `@mastra/pg`, `@mastra/libsql`, and `@mastra/mongodb` storage adapters. It uses background agents for managing memory. When using `observationalMemory: true`, the default model is `google/gemini-2.5-flash`. When passing a config object, a `model` must be explicitly set.
 
+ ## Temporal gap markers
+
+ Temporal gap markers insert a short reminder before a new user message when enough time has passed since the previous message in the thread. They help the agent and the UI see that the conversation resumed after a meaningful pause.
+
+ Temporal gap markers are off by default. Enable them with `temporalMarkers: true` in the `observationalMemory` config:
+
+ ```typescript
+ import { Memory } from '@mastra/memory'
+ import { Agent } from '@mastra/core/agent'
+
+ export const agent = new Agent({
+   name: 'my-agent',
+   instructions: 'You are a helpful assistant.',
+   model: 'openai/gpt-5-mini',
+   memory: new Memory({
+     options: {
+       observationalMemory: {
+         model: 'google/gemini-2.5-flash',
+         temporalMarkers: true,
+       },
+     },
+   }),
+ })
+ ```
+
+ Mastra inserts a temporal gap marker when the gap is at least 10 minutes. The marker is stored in memory and also emitted as a transient reminder event, so clients can render it as a lightweight timeline hint.
+
+ The observer also sees these markers when it processes the thread, so the observations it writes can anchor memories to when they happened (for example, "User asked about deployment after a 2-day gap").
+
+ See [the API reference](https://mastra.ai/reference/memory/observational-memory) for the full configuration shape.
+
  ## Benefits
 
  - **Prompt caching**: OM's context is stable and observations append over time rather than being dynamically retrieved each turn. This keeps the prompt prefix cacheable, which reduces costs.
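As the note above states, the simplest setup is `observationalMemory: true`, which falls back to the default `google/gemini-2.5-flash` model. A minimal sketch of that setup, assuming the `LibSQLStore` adapter from `@mastra/libsql` and its `url` option (one of the supported storage backends):

```typescript
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

// Assumed storage setup: OM requires one of the supported adapters
// (@mastra/pg, @mastra/libsql, or @mastra/mongodb).
const memory = new Memory({
  storage: new LibSQLStore({ url: 'file:./memory.db' }),
  options: {
    // `true` uses the default google/gemini-2.5-flash observer model.
    observationalMemory: true,
  },
})
```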
@@ -1,6 +1,6 @@
  # Model Providers
 
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3691 models from 103 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3706 models from 104 providers through a single API.
 
  ## Features
 
@@ -1,6 +1,6 @@
  # ![OpenCode Go logo](https://models.dev/logos/opencode-go.svg)OpenCode Go
 
- Access 10 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
+ Access 12 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
 
  Learn more in the [OpenCode Go documentation](https://opencode.ai/docs/zen).
 
@@ -32,18 +32,20 @@ for await (const chunk of stream) {
 
  ## Models
 
- | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
- | -------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
- | `opencode-go/glm-5` | 205K | | | | | | $1 | $3 |
- | `opencode-go/glm-5.1` | 205K | | | | | | $1 | $4 |
- | `opencode-go/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
- | `opencode-go/kimi-k2.6` | 262K | | | | | | $0.32 | $1 |
- | `opencode-go/mimo-v2-omni` | 262K | | | | | | $0.40 | $2 |
- | `opencode-go/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
- | `opencode-go/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
- | `opencode-go/minimax-m2.7` | 205K | | | | | | $0.30 | $1 |
- | `opencode-go/qwen3.5-plus` | 262K | | | | | | $0.20 | $1 |
- | `opencode-go/qwen3.6-plus` | 262K | | | | | | $0.50 | $3 |
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | --------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+ | `opencode-go/glm-5` | 205K | | | | | | $1 | $3 |
+ | `opencode-go/glm-5.1` | 205K | | | | | | $1 | $4 |
+ | `opencode-go/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
+ | `opencode-go/kimi-k2.6` | 262K | | | | | | $0.32 | $1 |
+ | `opencode-go/mimo-v2-omni` | 262K | | | | | | $0.40 | $2 |
+ | `opencode-go/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
+ | `opencode-go/mimo-v2.5` | 262K | | | | | | $0.40 | $2 |
+ | `opencode-go/mimo-v2.5-pro` | 1.0M | | | | | | $1 | $3 |
+ | `opencode-go/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
+ | `opencode-go/minimax-m2.7` | 205K | | | | | | $0.30 | $1 |
+ | `opencode-go/qwen3.5-plus` | 262K | | | | | | $0.20 | $1 |
+ | `opencode-go/qwen3.6-plus` | 262K | | | | | | $0.50 | $3 |
 
  ## Advanced configuration
 
@@ -0,0 +1,83 @@
+ # ![Regolo AI logo](https://models.dev/logos/regolo-ai.svg)Regolo AI
+
+ Access 13 Regolo AI models through Mastra's model router. Authentication is handled automatically using the `REGOLO_API_KEY` environment variable.
+
+ Learn more in the [Regolo AI documentation](https://docs.regolo.ai/).
+
+ ```bash
+ REGOLO_API_KEY=your-api-key
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "regolo-ai/gpt-oss-120b"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [Regolo AI documentation](https://docs.regolo.ai/) for details.
+
+ ## Models
+
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | ---------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+ | `regolo-ai/gpt-oss-120b` | 128K | | | | | | $1 | $4 |
+ | `regolo-ai/gpt-oss-20b` | 128K | | | | | | $0.40 | $2 |
+ | `regolo-ai/llama-3.1-8b-instruct` | 120K | | | | | | $0.05 | $0.25 |
+ | `regolo-ai/llama-3.3-70b-instruct` | 128K | | | | | | $0.60 | $3 |
+ | `regolo-ai/minimax-m2.5` | 190K | | | | | | $0.80 | $4 |
+ | `regolo-ai/mistral-small-4-119b` | 256K | | | | | | $0.75 | $3 |
+ | `regolo-ai/mistral-small3.2` | 120K | | | | | | $0.50 | $2 |
+ | `regolo-ai/qwen-image` | 8K | | | | | | $0.50 | $2 |
+ | `regolo-ai/qwen3-coder-next` | 262K | | | | | | $0.30 | $1 |
+ | `regolo-ai/qwen3-embedding-8b` | 33K | | | | | | $0.10 | $0.10 |
+ | `regolo-ai/qwen3-reranker-4b` | 33K | | | | | | $0.12 | $0.12 |
+ | `regolo-ai/qwen3.5-122b` | 262K | | | | | | $0.90 | $4 |
+ | `regolo-ai/qwen3.5-9b` | 262K | | | | | | $0.15 | $0.60 |
+
+ ## Advanced configuration
+
+ ### Custom headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://api.regolo.ai/v1",
+     id: "regolo-ai/gpt-oss-120b",
+     apiKey: process.env.REGOLO_API_KEY,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic model selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "regolo-ai/qwen3.5-9b"
+       : "regolo-ai/gpt-oss-120b";
+   }
+ });
+ ```
@@ -75,6 +75,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
  - [Privatemode AI](https://mastra.ai/models/providers/privatemode-ai)
  - [QiHang](https://mastra.ai/models/providers/qihang-ai)
  - [Qiniu](https://mastra.ai/models/providers/qiniu-ai)
+ - [Regolo AI](https://mastra.ai/models/providers/regolo-ai)
  - [Requesty](https://mastra.ai/models/providers/requesty)
  - [Scaleway](https://mastra.ai/models/providers/scaleway)
  - [SiliconFlow](https://mastra.ai/models/providers/siliconflow)
@@ -266,7 +266,7 @@ const response = await agent.generate('Help me organize my day', {
 
  **options.clientTools** (`ToolsInput`): Client-side tools available during execution.
 
- **options.savePerStep** (`boolean`): Save messages incrementally after each generation step completes (default: false).
+ **options.savePerStep** (`boolean`): Save messages incrementally after each generation step completes (default: false). Disabled internally when observational memory is enabled.
 
  **options.providerOptions** (`Record<string, Record<string, JSONValue>>`): Provider-specific options passed to the language model.
 
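A minimal sketch of opting into `savePerStep` for a single call, reusing the agent shape from the examples above (the agent definition here is illustrative, not taken from this diff):

```typescript
import { Agent } from '@mastra/core/agent'

const agent = new Agent({
  name: 'my-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5-mini',
})

// savePerStep defaults to false and is disabled internally when
// observational memory is enabled.
const response = await agent.generate('Help me organize my day', {
  savePerStep: true,
})
```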
@@ -84,7 +84,7 @@ await agent.generateLegacy('message for agent')
 
  **options.clientTools** (`ToolsInput`): Tools that are executed on the 'client' side of the request. These tools do not have execute functions in the definition.
 
- **options.savePerStep** (`boolean`): Save messages incrementally after each stream step completes (default: false).
+ **options.savePerStep** (`boolean`): Save messages incrementally after each generation step completes (default: false). Disabled internally when observational memory is enabled.
 
  **options.providerOptions** (`Record<string, Record<string, JSONValue>>`): Additional provider-specific options that are passed through to the underlying LLM provider. The structure is `{ providerName: { optionKey: value } }`. Since Mastra extends AI SDK, see the [AI SDK documentation](https://sdk.vercel.ai/docs/providers/ai-sdk-providers) for complete provider options.
 
@@ -40,6 +40,8 @@ OM performs thresholding with fast local token estimation. Text uses `tokenx`, a
 
  **shareTokenBudget** (`boolean`): Share the token budget between messages and observations. When enabled, the total budget is `observation.messageTokens + reflection.observationTokens`. Messages can use more space when observations are small, and vice versa. This maximizes context usage through flexible allocation. `shareTokenBudget` is not yet compatible with async buffering. You must set `observation: { bufferTokens: false }` when using this option (this is a temporary limitation). (Default: `false`)
 
+ **temporalMarkers** (`boolean`): Insert temporal-gap reminder markers before new user messages when the previous message in the thread is at least 10 minutes older. The marker is persisted in memory, emitted as an inline reminder event so clients can render it specially, and shown to the observer so it can anchor observations to when events occurred. (Default: `false`)
+
  **retrieval** (`boolean | { vector?: boolean; scope?: 'thread' | 'resource' }`): **Experimental.** Enable retrieval-mode observation groups as durable pointers to raw message history. `true` enables cross-thread browsing by default. `{ vector: true }` also enables semantic search using Memory's vector store and embedder. `{ scope: 'thread' }` restricts the recall tool to the current thread only. Default scope is `'resource'`. (Default: `false`)
 
  **observation** (`ObservationalMemoryObservationConfig`): Configuration for the observation step. Controls when the Observer agent runs and how it behaves.
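A rough sketch of combining these options in the `observationalMemory` config, assuming the shape shown in the temporal gap markers example above; note that `shareTokenBudget` currently requires `observation: { bufferTokens: false }`:

```typescript
import { Memory } from '@mastra/memory'

const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      shareTokenBudget: true,
      // shareTokenBudget is not yet compatible with async observation buffering.
      observation: { bufferTokens: false },
      temporalMarkers: true,
    },
  },
})
```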
package/CHANGELOG.md CHANGED
@@ -1,5 +1,19 @@
  # @mastra/mcp-docs-server
 
+ ## 1.1.27-alpha.3
+
+ ### Patch Changes
+
+ - Updated dependencies [[`ed07df3`](https://github.com/mastra-ai/mastra/commit/ed07df32a9d539c8261e892fc1bade783f5b41a6)]:
+   - @mastra/core@1.27.0-alpha.2
+
+ ## 1.1.27-alpha.2
+
+ ### Patch Changes
+
+ - Updated dependencies [[`0a0aa94`](https://github.com/mastra-ai/mastra/commit/0a0aa94729592e99885af2efb90c56aaada62247), [`01a7d51`](https://github.com/mastra-ai/mastra/commit/01a7d513493d21562f677f98550f7ceb165ba78c)]:
+   - @mastra/core@1.27.0-alpha.1
+
  ## 1.1.27-alpha.1
 
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.27-alpha.1",
+   "version": "1.1.27-alpha.3",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",
@@ -29,7 +29,7 @@
      "jsdom": "^26.1.0",
      "local-pkg": "^1.1.2",
      "zod": "^4.3.6",
-     "@mastra/core": "1.26.1-alpha.0",
+     "@mastra/core": "1.27.0-alpha.2",
      "@mastra/mcp": "^1.5.1"
    },
    "devDependencies": {
@@ -47,8 +47,8 @@
      "typescript": "^5.9.3",
      "vitest": "4.1.4",
      "@internal/lint": "0.0.84",
-     "@internal/types-builder": "0.0.59",
-     "@mastra/core": "1.26.1-alpha.0"
+     "@mastra/core": "1.27.0-alpha.2",
+     "@internal/types-builder": "0.0.59"
    },
    "homepage": "https://mastra.ai",
    "repository": {