@mastra/mcp-docs-server 1.1.29-alpha.11 → 1.1.29-alpha.12

@@ -19,7 +19,7 @@ A filesystem provider handles all file operations for a workspace:
  Available providers:
  
  - [`LocalFilesystem`](https://mastra.ai/reference/workspace/local-filesystem): Stores files in a directory on disk
- - [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO)
+ - [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO, Tigris)
  - [`GCSFilesystem`](https://mastra.ai/reference/workspace/gcs-filesystem): Stores files in Google Cloud Storage
  - [`AzureBlobFilesystem`](https://mastra.ai/reference/workspace/azure-blob-filesystem): Stores files in Azure Blob Storage
  - [`AgentFSFilesystem`](https://mastra.ai/reference/workspace/agentfs-filesystem): Stores files in a Turso/SQLite database via AgentFS
@@ -1,6 +1,6 @@
  # Model Providers
  
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3757 models from 106 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3761 models from 106 providers through a single API.
  
  ## Features
  
@@ -1,6 +1,6 @@
  # ![Deep Infra logo](https://models.dev/logos/deepinfra.svg)Deep Infra
  
- Access 32 Deep Infra models through Mastra's model router. Authentication is handled automatically using the `DEEPINFRA_API_KEY` environment variable.
+ Access 33 Deep Infra models through Mastra's model router. Authentication is handled automatically using the `DEEPINFRA_API_KEY` environment variable.
  
  Learn more in the [Deep Infra documentation](https://deepinfra.com/models).
  
@@ -36,6 +36,7 @@ for await (const chunk of stream) {
  | `deepinfra/anthropic/claude-4-opus` | 200K | | | | | | $17 | $83 |
  | `deepinfra/deepseek-ai/DeepSeek-R1-0528` | 164K | | | | | | $0.50 | $2 |
  | `deepinfra/deepseek-ai/DeepSeek-V3.2` | 164K | | | | | | $0.26 | $0.38 |
+ | `deepinfra/deepseek-ai/DeepSeek-V4-Pro` | 66K | | | | | | $2 | $3 |
  | `deepinfra/meta-llama/Llama-3.1-70B-Instruct` | 131K | | | | | | $0.40 | $0.40 |
  | `deepinfra/meta-llama/Llama-3.1-70B-Instruct-Turbo` | 131K | | | | | | $0.40 | $0.40 |
  | `deepinfra/meta-llama/Llama-3.1-8B-Instruct` | 131K | | | | | | $0.02 | $0.05 |
@@ -1,6 +1,6 @@
  # ![Fireworks AI logo](https://models.dev/logos/fireworks-ai.svg)Fireworks AI
  
- Access 18 Fireworks AI models through Mastra's model router. Authentication is handled automatically using the `FIREWORKS_API_KEY` environment variable.
+ Access 19 Fireworks AI models through Mastra's model router. Authentication is handled automatically using the `FIREWORKS_API_KEY` environment variable.
  
  Learn more in the [Fireworks AI documentation](https://fireworks.ai/docs/).
  
@@ -36,6 +36,7 @@ for await (const chunk of stream) {
  | --------------------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
  | `fireworks-ai/accounts/fireworks/models/deepseek-v3p1` | 164K | | | | | | $0.56 | $2 |
  | `fireworks-ai/accounts/fireworks/models/deepseek-v3p2` | 160K | | | | | | $0.56 | $2 |
+ | `fireworks-ai/accounts/fireworks/models/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
  | `fireworks-ai/accounts/fireworks/models/glm-4p5` | 131K | | | | | | $0.55 | $2 |
  | `fireworks-ai/accounts/fireworks/models/glm-4p5-air` | 131K | | | | | | $0.22 | $0.88 |
  | `fireworks-ai/accounts/fireworks/models/glm-4p7` | 198K | | | | | | $0.60 | $2 |
@@ -1,6 +1,6 @@
  # ![Kilo Gateway logo](https://models.dev/logos/kilo.svg)Kilo Gateway
  
- Access 335 Kilo Gateway models through Mastra's model router. Authentication is handled automatically using the `KILO_API_KEY` environment variable.
+ Access 337 Kilo Gateway models through Mastra's model router. Authentication is handled automatically using the `KILO_API_KEY` environment variable.
  
  Learn more in the [Kilo Gateway documentation](https://kilo.ai).
  
@@ -357,6 +357,8 @@ for await (const chunk of stream) {
  | `kilo/xiaomi/mimo-v2-flash` | 262K | | | | | | $0.09 | $0.29 |
  | `kilo/xiaomi/mimo-v2-omni` | 262K | | | | | | $0.40 | $2 |
  | `kilo/xiaomi/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
+ | `kilo/xiaomi/mimo-v2.5` | 1.0M | | | | | | $0.40 | $2 |
+ | `kilo/xiaomi/mimo-v2.5-pro` | 1.0M | | | | | | $1 | $3 |
  | `kilo/z-ai/glm-4-32b` | 128K | | | | | | $0.10 | $0.10 |
  | `kilo/z-ai/glm-4.5` | 131K | | | | | | $0.60 | $2 |
  | `kilo/z-ai/glm-4.5-air` | 131K | | | | | | $0.13 | $0.85 |
@@ -90,6 +90,8 @@ await harness.sendMessage({ content: 'Hello!' })
  
  **subagents.stopWhen** (`LoopOptions['stopWhen']`): Optional stop condition for the spawned subagent.
  
+ **subagents.forked** (`boolean`): When `true`, calls to this subagent default to forked mode: the subagent runs on a clone of the parent thread, reusing the parent agent’s instructions, tools, and model so the prompt-cache prefix stays intact. Requires `memory` to be configured. The subagent definition’s own `instructions`, `tools`, `allowedHarnessTools`, `allowedWorkspaceTools`, `defaultModelId`, `maxSteps`, and `stopWhen` are ignored in forked mode. Callers can still override per-invocation via `forked: false` in the `subagent` tool input. See the [Forked subagents](#forked-subagents) section below for full semantics.
+ 
  **resolveModel** (`(modelId: string) => MastraLanguageModel`): Converts a model ID string (e.g., `"anthropic/claude-sonnet-4"`) to a language model instance. Used by subagents and observational memory model resolution.
  
  **omConfig** (`HarnessOMConfig`): Default configuration for observational memory (observer/reflector model IDs and thresholds).
@@ -286,16 +288,21 @@ await harness.switchThread({ threadId: 'thread-abc123' })
  
  #### `listThreads(options?)`
  
- List threads from storage. By default, only threads for the current resource are returned.
+ List threads from storage. By default, only threads for the current resource are returned, and transient [forked subagent](#forked-subagents) threads are hidden so they don’t appear in user-facing thread pickers / startup flows.
  
  ```typescript
- // List threads for current resource
+ // List threads for current resource (forks hidden)
  const threads = await harness.listThreads()
  
- // List all threads across resources
+ // List all threads across resources (forks still hidden)
  const allThreads = await harness.listThreads({ allResources: true })
+ 
+ // Include forked subagent fork threads (debug / admin tooling only)
+ const everything = await harness.listThreads({ includeForkedSubagents: true })
  ```
  
+ Fork threads are tagged with `metadata.forkedSubagent === true` (and `metadata.parentThreadId`) by the harness. Set `includeForkedSubagents: true` to opt back into seeing them — e.g. for a debug panel.
+ 
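Given those metadata tags, client code can separate forks from user-visible threads without any harness support. A minimal sketch (the thread shape here is illustrative; only the two metadata fields described above are assumed):

```typescript
// Illustrative thread shape; only the metadata fields documented above matter.
interface ThreadLike {
  id: string
  metadata?: { forkedSubagent?: boolean; parentThreadId?: string }
}

// True when a thread is a transient fork created for a forked subagent run.
function isForkThread(thread: ThreadLike): boolean {
  return thread.metadata?.forkedSubagent === true
}

const threads: ThreadLike[] = [
  { id: 'thread-abc123' },
  { id: 'fork-1', metadata: { forkedSubagent: true, parentThreadId: 'thread-abc123' } },
]

// Mimic the default listThreads behavior: hide forks from pickers.
const visible = threads.filter((t) => !isForkThread(t))
```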
  #### `renameThread({ title })`
  
  Update the title of the current thread.
@@ -677,6 +684,42 @@ await harness.setSubagentModelId({ modelId: 'anthropic/claude-sonnet-4-6' })
  await harness.setSubagentModelId({ modelId: 'anthropic/claude-haiku-3.5', agentType: 'explore' })
  ```
  
+ ### Forked subagents
+ 
+ By default, a subagent runs with a fresh context — it doesn't see the parent conversation. **Forked subagents** opt into a different mode: the subagent runs on a clone of the parent thread and reuses the parent agent's full configuration. This is useful when the subagent needs the full context of the conversation so far (e.g., recalling earlier user-supplied facts), and when prompt-cache hit rates matter.
+ 
+ #### Enabling forked mode
+ 
+ Set `forked: true` either on the [`HarnessSubagent` definition](#configuration) (per-type default) or on each `subagent` tool call (per-invocation override):
+ 
+ ```typescript
+ // Per-type default — every call to this subagent forks unless overridden.
+ const subagents: HarnessSubagent[] = [
+   {
+     id: 'collaborator',
+     name: 'Collaborator',
+     description: 'Continues the conversation in a fork to try a different angle.',
+     instructions: '...',
+     forked: true,
+   },
+ ]
+ ```
+ 
+ The model can also pass `forked: true` (or `forked: false`) per-invocation in the `subagent` tool input; the per-invocation value wins.
+ 
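As a sketch, such a per-invocation override in the `subagent` tool input could look like the literal below. Only `task` and `forked` are documented above; the field naming the target subagent is an illustrative assumption:

```typescript
// Hypothetical tool-input literal; `task` and `forked` come from the docs,
// `agentType` is an assumed field name for selecting the subagent.
const subagentCall = {
  agentType: 'collaborator',
  task: 'Summarize the constraints the user stated earlier.',
  forked: false, // overrides a per-type `forked: true` default for this call
}
```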
+ #### Semantics and constraints
+ 
+ - **Memory required.** Forked mode calls `memory.cloneThread` to create the fork, so the harness must have `memory` configured and an active parent thread. Calls without those return a structured error rather than throwing.
+ - **Parent agent reused.** The fork runs through the parent agent's `stream(...)` call. The parent's instructions, tools, model, `maxSteps`, and `stopWhen` apply. The subagent definition's `instructions`, `tools`, `allowedHarnessTools`, `allowedWorkspaceTools`, `defaultModelId`, `maxSteps`, and `stopWhen` are ignored in forked mode — this is what preserves the prompt-cache prefix.
+ - **Toolsets inherited, recursive forks blocked at runtime.** Forks inherit the parent's toolsets verbatim (`ask_user`, `submit_plan`, user-configured harness tools, _including the `subagent` tool itself_) so the LLM request prefix — system prompt + tool list + tool schemas + tool descriptions — stays byte-identical to the parent's. This is what preserves the prompt cache. The `subagent` entry is kept on the model side but its `execute` is replaced inside the fork with a stub that returns a non-error "tool unavailable inside a forked subagent" message: nested forks are blocked at the runtime layer without perturbing the cached prefix.
+ - **Fork threads are tagged.** Each fork thread is created with `metadata.forkedSubagent === true` and `metadata.parentThreadId === <parent>`. By default, [`listThreads`](#listthreadsoptions) hides these so they don't show up in user-facing thread pickers / startup flows. Pass `includeForkedSubagents: true` to see them in admin / debug tooling.
+ - **Save-queue flushed before clone.** The agent stream batches message saves through a debounced `SaveQueueManager`, so the parent's latest user / assistant turn may not be on disk yet when the subagent tool call fires. The fork tool flushes pending saves first via the `flushMessages` callback on `AgentToolExecutionContext` before cloning, so the fork actually carries the latest turn. Flush failures are non-fatal — the clone still runs.
+ - **Parent thread untouched.** All subagent activity (messages, OM writes) lands on the fork. The parent thread is never appended to during a forked subagent run.
+ 
+ #### When to prefer non-forked mode
+ 
+ Forked mode trades isolation for context inheritance. If the subagent should run with a strictly smaller toolset, a different system prompt, or a cheaper model, use the default (non-forked) mode and pass any required context explicitly in the `task` description.
+ 
  ### Events
  
  #### `subscribe(listener)`
@@ -753,13 +796,13 @@ The harness emits events through registered listeners. The following table lists
  
  The harness provides built-in tools to agents in every mode:
  
- | Tool | Description |
- | ------------- | ------------------------------------------------------------------------------------------------------------------------- |
- | `ask_user` | Ask the user a question and wait for their response. Supports free text, single-select choices, and multi-select choices. |
- | `submit_plan` | Submit a plan for user review and approval. |
- | `task_write` | Create or update a structured task list for tracking progress. |
- | `task_check` | Check the completion status of the current task list. |
- | `subagent` | Spawn a focused subagent with constrained tools (only available when `subagents` is configured). |
+ | Tool | Description |
+ | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+ | `ask_user` | Ask the user a question and wait for their response. Supports free text, single-select choices, and multi-select choices. |
+ | `submit_plan` | Submit a plan for user review and approval. |
+ | `task_write` | Create or update a structured task list for tracking progress. |
+ | `task_check` | Check the completion status of the current task list. |
+ | `subagent` | Spawn a focused subagent with constrained tools (only available when `subagents` is configured). Pass `forked: true` to inherit the parent conversation — see [Forked subagents](#forked-subagents). |
  
  ### `ask_user` selections
@@ -169,6 +169,7 @@ The Reference section provides documentation of Mastra's API, including paramete
  - [PromptInjectionDetector](https://mastra.ai/reference/processors/prompt-injection-detector)
  - [SemanticRecall](https://mastra.ai/reference/processors/semantic-recall-processor)
  - [SkillSearchProcessor](https://mastra.ai/reference/processors/skill-search-processor)
+ - [StreamErrorRetryProcessor](https://mastra.ai/reference/processors/stream-error-retry-processor)
  - [SystemPromptScrubber](https://mastra.ai/reference/processors/system-prompt-scrubber)
  - [TokenLimiterProcessor](https://mastra.ai/reference/processors/token-limiter-processor)
  - [ToolCallFilter](https://mastra.ai/reference/processors/tool-call-filter)
@@ -126,6 +126,21 @@ interface ObservabilityExporter {
    /** Initialize exporter with tracing configuration and/or access to Mastra */
    init?(options: InitExporterOptions): void
  
+   /** Handle tracing events */
+   onTracingEvent?(event: TracingEvent): void | Promise<void>
+ 
+   /** Handle log events */
+   onLogEvent?(event: LogEvent): void | Promise<void>
+ 
+   /** Handle metric events */
+   onMetricEvent?(event: MetricEvent): void | Promise<void>
+ 
+   /** Handle score events */
+   onScoreEvent?(event: ScoreEvent): void | Promise<void>
+ 
+   /** Handle feedback events */
+   onFeedbackEvent?(event: FeedbackEvent): void | Promise<void>
+ 
    /** Export tracing events */
    exportTracingEvent(event: TracingEvent): Promise<void>
  
@@ -154,6 +169,8 @@ interface ObservabilityExporter {
  }
  ```
  
+ Event callback payloads use observability event bus envelopes: `TracingEvent` carries span lifecycle events with `exportedSpan`, `LogEvent` wraps `ExportedLog` in `log`, `MetricEvent` wraps `ExportedMetric` in `metric`, `ScoreEvent` wraps `ExportedScore` in `score`, and `FeedbackEvent` wraps `ExportedFeedback` in `feedback`. For Cloud exporter behavior for these callbacks, see [CloudExporter](https://mastra.ai/reference/observability/tracing/exporters/cloud-exporter).
+ 
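As a sketch of consuming those envelopes, the exporter below counts the callbacks it receives. The envelope field names (`log`, `metric`) follow the description above, but the payload types are loose stand-ins, not the real `Exported*` types:

```typescript
// Minimal sketch: an exporter that counts event-bus callbacks.
// Payload shapes are stand-ins; only the envelope field names are assumed.
class CountingExporter {
  counts: Record<string, number> = {}

  private bump(kind: string): void {
    this.counts[kind] = (this.counts[kind] ?? 0) + 1
  }

  onLogEvent(event: { log: Record<string, unknown> }): void {
    // The envelope wraps the exported log in `log`.
    this.bump('log')
  }

  onMetricEvent(event: { metric: Record<string, unknown> }): void {
    // The envelope wraps the exported metric in `metric`.
    this.bump('metric')
  }
}

const exporter = new CountingExporter()
exporter.onLogEvent({ log: { message: 'hello' } })
exporter.onMetricEvent({ metric: { name: 'latency_ms', value: 12 } })
```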
  ### `SpanOutputProcessor`
  
  Interface for span output processors.
@@ -0,0 +1,54 @@
+ # StreamErrorRetryProcessor
+ 
+ `StreamErrorRetryProcessor` is an **error processor** that retries transient errors emitted after an LLM stream starts. It includes built-in matching for OpenAI Responses stream errors and supports additional matchers for other provider-specific stream error shapes.
+ 
+ The processor isn't enabled by default in core. Add it to `errorProcessors` for agents that need stream-error retry handling.
+ 
+ ## Usage example
+ 
+ Add `StreamErrorRetryProcessor` to `errorProcessors`:
+ 
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { StreamErrorRetryProcessor } from '@mastra/core/processors'
+ 
+ export const agent = new Agent({
+   name: 'openai-agent',
+   instructions: 'You are a helpful assistant.',
+   model: 'openai/gpt-5',
+   errorProcessors: [new StreamErrorRetryProcessor()],
+ })
+ ```
+ 
+ ## How it works
+ 
+ The processor checks the error and its cause chain for:
+ 
+ - Provider retry metadata: `isRetryable === true`
+ - Built-in OpenAI Responses stream error matching
+ - Matcher results: Any configured matcher that returns `true`
+ 
+ When the error is retryable, the processor returns `{ retry: true }`. It doesn't mutate messages.
+ 
+ ## Default OpenAI Responses matcher
+ 
+ `isRetryableOpenAIResponsesStreamError` matches OpenAI Responses stream error chunks with `type: 'error'` or `type: 'response.failed'`. It retries known transient OpenAI error codes and, as a fallback, errors with explicit retry guidance such as `You can retry your request`.
+ 
+ `StreamErrorRetryProcessor` includes this matcher by default. You can also import it and reuse it in custom retry logic.
+ 
+ ## Constructor parameters
+ 
+ **options** (`StreamErrorRetryProcessorOptions`): Configuration for retry handling.
+ 
+ ## Properties
+ 
+ **id** (`'stream-error-retry-processor'`): Processor identifier.
+ 
+ **name** (`'Stream Error Retry Processor'`): Processor display name.
+ 
+ **processAPIError** (`(args: ProcessAPIErrorArgs) => ProcessAPIErrorResult | void`): Retries stream errors up to the configured retry limit.
+ 
+ ## Related
+ 
+ - [Processor interface](https://mastra.ai/reference/processors/processor-interface)
+ - [Processors](https://mastra.ai/docs/agents/processors)
@@ -1,6 +1,6 @@
  # S3Filesystem
  
- Stores files in Amazon S3 or S3-compatible storage services like Cloudflare R2, MinIO, and DigitalOcean Spaces.
+ Stores files in Amazon S3 or S3-compatible storage services like Cloudflare R2, MinIO, DigitalOcean Spaces, and Tigris.
  
  > **Info:** For interface details, see [WorkspaceFilesystem Interface](https://mastra.ai/reference/workspace/filesystem).
  
@@ -55,6 +55,61 @@ const agent = new Agent({
  })
  ```
  
+ ### AWS credential provider chain
+ 
+ When no credentials are provided, `S3Filesystem` uses the AWS SDK default credential provider chain. This discovers credentials automatically from environment variables, `~/.aws` config files, ECS container credentials, EC2 instance profiles, and other standard sources.
+ 
+ ```typescript
+ import { S3Filesystem } from '@mastra/s3'
+ 
+ // SDK discovers credentials from the environment automatically
+ const filesystem = new S3Filesystem({
+   bucket: 'my-bucket',
+   region: 'us-east-1',
+ })
+ ```
+ 
+ Pass a credential provider function for auto-refreshing credentials. This is useful for deployments on ECS, Lambda, or when using SSO/AssumeRole where temporary credentials expire and need to be refreshed.
+ 
+ Install the AWS SDK credential providers package when calling `fromNodeProviderChain()` directly:
+ 
+ **npm**:
+ 
+ ```bash
+ npm install @aws-sdk/credential-providers
+ ```
+ 
+ **pnpm**:
+ 
+ ```bash
+ pnpm add @aws-sdk/credential-providers
+ ```
+ 
+ **Yarn**:
+ 
+ ```bash
+ yarn add @aws-sdk/credential-providers
+ ```
+ 
+ **Bun**:
+ 
+ ```bash
+ bun add @aws-sdk/credential-providers
+ ```
+ 
+ ```typescript
+ import { S3Filesystem } from '@mastra/s3'
+ import { fromNodeProviderChain } from '@aws-sdk/credential-providers'
+ 
+ const filesystem = new S3Filesystem({
+   bucket: 'my-bucket',
+   region: 'us-east-1',
+   credentials: fromNodeProviderChain(),
+ })
+ ```
+ 
+ Provider functions only apply to `S3Filesystem` API calls. When mounting the filesystem into an E2B sandbox, mount configuration only supports static `accessKeyId`, `secretAccessKey`, and `sessionToken` values, so credential refresh must be handled outside the mount.
  ### Cloudflare R2
  
  ```typescript
@@ -83,19 +138,38 @@ const filesystem = new S3Filesystem({
  })
  ```
  
+ ### Tigris
+ 
+ ```typescript
+ import { S3Filesystem } from '@mastra/s3'
+ 
+ const filesystem = new S3Filesystem({
+   bucket: 'my-bucket',
+   region: 'auto',
+   endpoint: 'https://t3.storage.dev',
+   accessKeyId: process.env.TIGRIS_ACCESS_KEY_ID,
+   secretAccessKey: process.env.TIGRIS_SECRET_ACCESS_KEY,
+   forcePathStyle: false,
+ })
+ ```
+ 
+ Tigris uses virtual-hosted-style addressing, so `forcePathStyle` must be set to `false` (the default is `true` when a custom endpoint is provided). Create credentials from the [Tigris Dashboard](https://console.tigris.dev/) — access keys are prefixed `tid_` and secrets `tsec_`.
+ 
  ## Constructor parameters
  
  **bucket** (`string`): S3 bucket name
  
  **region** (`string`): AWS region (use 'auto' for R2)
  
- **accessKeyId** (`string`): AWS access key ID. Optional for public buckets (read-only access).
+ **credentials** (`AwsCredentialIdentity | AwsCredentialIdentityProvider`): AWS credentials or credential provider function. Accepts static credentials or a provider that auto-refreshes (e.g. `fromNodeProviderChain()` from `@aws-sdk/credential-providers`). Takes precedence over `accessKeyId`/`secretAccessKey`/`sessionToken`. When all credential options are omitted, the SDK default credential provider chain is used.
+ 
+ **accessKeyId** (`string`): AWS access key ID. When omitted along with `secretAccessKey` and `credentials`, the SDK default credential provider chain is used.
  
- **secretAccessKey** (`string`): AWS secret access key. Optional for public buckets (read-only access).
+ **secretAccessKey** (`string`): AWS secret access key. When omitted along with `accessKeyId` and `credentials`, the SDK default credential provider chain is used.
  
- **sessionToken** (`string`): AWS session token for temporary credentials. Required when using SSO, AssumeRole, container credentials, or any other temporary credential provider.
+ **sessionToken** (`string`): AWS session token for static temporary credentials. Use with `accessKeyId`/`secretAccessKey` only when passing a complete temporary credential set manually. For auto-refreshing SSO, AssumeRole, or container credentials, use the `credentials` provider parameter or the SDK default credential provider chain.
  
- **endpoint** (`string`): Custom endpoint URL for S3-compatible storage (R2, MinIO, etc.)
+ **endpoint** (`string`): Custom endpoint URL for S3-compatible storage (R2, MinIO, Tigris, etc.)
  
  **forcePathStyle** (`boolean`): Force path-style URLs instead of virtual-hosted-style. Required for some S3-compatible services like MinIO. Defaults to true when a custom endpoint is provided. (Default: `true (when endpoint is set)`)
  
package/CHANGELOG.md CHANGED
@@ -1,5 +1,13 @@
  # @mastra/mcp-docs-server
  
+ ## 1.1.29-alpha.12
+ 
+ ### Patch Changes
+ 
+ - Updated dependencies [[`c1ae974`](https://github.com/mastra-ai/mastra/commit/c1ae97491f6e57378ce880c3a397778c42adcdf1), [`10e1c9a`](https://github.com/mastra-ai/mastra/commit/10e1c9a6a99c14eb055d0f409b603e07af827e68), [`13b4d7c`](https://github.com/mastra-ai/mastra/commit/13b4d7c16de34dff9095d1cd80f22f544b6cfe75), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`6c8c6c7`](https://github.com/mastra-ai/mastra/commit/6c8c6c71518394321a4692614aa4b11f3bb0a343), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`ec4cb26`](https://github.com/mastra-ai/mastra/commit/ec4cb26919972eb2031fea510f8f013e1d5b7ee2)]:
+   - @mastra/core@1.29.0-alpha.6
+   - @mastra/mcp@1.6.0-alpha.0
+ 
  ## 1.1.29-alpha.10
  
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mastra/mcp-docs-server",
- "version": "1.1.29-alpha.11",
+ "version": "1.1.29-alpha.12",
  "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
  "type": "module",
  "main": "dist/index.js",
@@ -29,8 +29,8 @@
  "jsdom": "^26.1.0",
  "local-pkg": "^1.1.2",
  "zod": "^4.3.6",
- "@mastra/core": "1.29.0-alpha.5",
- "@mastra/mcp": "^1.5.2"
+ "@mastra/core": "1.29.0-alpha.6",
+ "@mastra/mcp": "^1.6.0-alpha.0"
  },
  "devDependencies": {
  "@hono/node-server": "^1.19.11",
@@ -46,9 +46,9 @@
  "tsx": "^4.21.0",
  "typescript": "^5.9.3",
  "vitest": "4.1.5",
- "@internal/lint": "0.0.86",
  "@internal/types-builder": "0.0.61",
- "@mastra/core": "1.29.0-alpha.5"
+ "@internal/lint": "0.0.86",
+ "@mastra/core": "1.29.0-alpha.6"
  },
  "homepage": "https://mastra.ai",
  "repository": {