@mastra/mcp-docs-server 1.1.32 → 1.1.33-alpha.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/observability/tracing/bridges/datadog.md +217 -0
- package/.docs/docs/observability/tracing/exporters/datadog.md +3 -1
- package/.docs/docs/workflows/scheduled-workflows.md +181 -0
- package/.docs/models/gateways/openrouter.md +2 -1
- package/.docs/models/gateways/vercel.md +7 -1
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/cortecs.md +2 -1
- package/.docs/models/providers/digitalocean.md +2 -1
- package/.docs/models/providers/kilo.md +359 -341
- package/.docs/models/providers/ollama-cloud.md +1 -1
- package/.docs/models/providers/vivgrid.md +4 -3
- package/.docs/models/providers/xai.md +2 -1
- package/.docs/reference/client-js/workflows.md +50 -1
- package/.docs/reference/observability/tracing/bridges/datadog.md +211 -0
- package/.docs/reference/observability/tracing/exporters/datadog.md +3 -1
- package/.docs/reference/workflows/workflow.md +16 -0
- package/CHANGELOG.md +15 -0
- package/package.json +5 -5
package/.docs/docs/observability/tracing/bridges/datadog.md
@@ -0,0 +1,217 @@
+# Datadog bridge
+
+> **Warning:** The Datadog Bridge is currently **experimental**. APIs and configuration options may change in future releases.
+
+The Datadog Bridge enables bidirectional integration between Mastra's tracing system and Datadog. Unlike exporters that send trace data after execution completes, the bridge creates native dd-trace spans in real time so that auto-instrumented APM operations (HTTP calls, database queries, etc.) inside your tools and processors are correctly nested under their parent Mastra spans.
+
+> **Not using dd-trace APM?** If you only need to send LLM Observability data and don't use `dd-trace` APM auto-instrumentation, the [Datadog Exporter](https://mastra.ai/docs/observability/tracing/exporters/datadog) is simpler — it supports agentless mode and sends spans directly to Datadog without a local agent.
+
+## When to use the bridge
+
+Use the DatadogBridge when you:
+
+- Use `dd-trace` auto-instrumentation in your application (HTTP servers, database clients, etc.)
+- Want APM service calls made by tools, MCP tools, or output processors to appear under their parent Mastra span instead of the request handler
+- Need both APM traces and LLM Observability data to share a consistent trace topology
+- Are building a distributed system where Datadog trace context must propagate across services
+
+## How it works
+
+The DatadogBridge participates in two parts of the dd-trace pipeline:
+
+**APM context propagation (real time):**
+
+- Creates a dd-trace APM span via `tracer.startSpan()` when each Mastra span is created
+- Activates the APM span in dd-trace's scope via `tracer.scope().activate()` during execution
+- Auto-instrumented operations inside the active scope are parented to the correct Mastra span
+- Inherits the active dd-trace context (e.g., an incoming request span) when no explicit Mastra parent exists
+
+**LLM Observability emission (on span end):**
+
+- Emits annotations (model info, token usage, input/output, errors) through `dd-trace`'s LLM Observability pipeline
+- Maintains parent-child relationships in Datadog LLM Observability using nested `llmobs.trace()` calls
+- Reuses the same data shape and span-kind mapping as the [Datadog Exporter](https://mastra.ai/docs/observability/tracing/exporters/datadog)
+
+## Why this matters
+
+Without the bridge, the Datadog Exporter only creates LLM Observability spans after a trace completes. During execution, no `dd-trace` span is active in scope, so any HTTP or database call made by a tool falls back to whatever `dd-trace` span is active at the time — typically the incoming request handler. The result is that service calls from MCP tools or output processors appear as children of the request span instead of the agent or processor span that actually made them.
+
+The bridge fixes this by creating real dd-trace spans up front, so the scope is correct when auto-instrumentation runs.
+
+## Installation
+
+**npm**:
+
+```bash
+npm install @mastra/datadog dd-trace
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mastra/datadog dd-trace
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/datadog dd-trace
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/datadog dd-trace
+```
+
+The bridge requires `dd-trace` to be installed and a local Datadog Agent (or compatible OTLP receiver) to receive APM data. See the [APM prerequisites](https://mastra.ai/docs/observability/tracing/exporters/datadog) on the exporter page for agent setup details.
+
+## Configuration
+
+Using the DatadogBridge requires two steps:
+
+1. Initialize `dd-trace` so its auto-instrumentation patches HTTP, database, and framework libraries
+2. Add the DatadogBridge to your Mastra observability config
+
+### Step 1: Initialize dd-trace
+
+`dd-trace` must be initialized before any other imports so its auto-instrumentation can patch libraries at load time. The bridge will detect an already-initialized tracer and reuse it.
+
+```typescript
+import tracer from 'dd-trace'
+
+tracer.init({
+  service: process.env.DD_SERVICE || 'my-mastra-app',
+  env: process.env.DD_ENV || 'production',
+  version: process.env.DD_VERSION,
+})
+
+import { Mastra } from '@mastra/core'
+import { Observability } from '@mastra/observability'
+import { DatadogBridge } from '@mastra/datadog'
+
+// ...
+```
+
+> **Note:** Import and initialize `dd-trace` at the very top of your application's entry file, before any other imports.
+
+### Step 2: Mastra configuration
+
+Add the DatadogBridge to your Mastra observability config:
+
+```typescript
+export const mastra = new Mastra({
+  observability: new Observability({
+    configs: {
+      default: {
+        serviceName: 'my-mastra-app',
+        bridge: new DatadogBridge({
+          mlApp: process.env.DD_LLMOBS_ML_APP!,
+        }),
+      },
+    },
+  }),
+  bundler: {
+    externals: [
+      'dd-trace',
+      '@datadog/native-metrics',
+      '@datadog/native-appsec',
+      '@datadog/native-iast-taint-tracking',
+      '@datadog/pprof',
+    ],
+  },
+})
+```
+
+```bash
+DD_SERVICE=my-mastra-app
+DD_ENV=production
+DD_VERSION=1.0.0
+DD_LLMOBS_ML_APP=my-llm-app
+```
+
+When `dd-trace` is initialized, it routes APM data to your local Datadog Agent on `localhost:8126`. The bridge enables LLM Observability on top of the same tracer, so both sets of data appear under the same service in Datadog.
+
+No Mastra exporters are required when using the bridge — both APM and LLM Observability data flow through `dd-trace`. You can still add Mastra exporters if you want to send traces to additional destinations.
+
+## Agent vs. agentless mode
+
+The bridge defaults to **agent mode** (`agentless: false`). This assumes a local Datadog Agent is running on `localhost:8126` to receive both APM and LLM Observability data. This is the typical setup when using `dd-trace` auto-instrumentation, since APM data always routes through the agent.
+
+If you don't have a local Datadog Agent and only need LLM Observability data (no APM auto-instrumentation), you can enable agentless mode to send data directly to Datadog. In this case, you must provide an API key.
+
+```typescript
+new DatadogBridge({
+  mlApp: process.env.DD_LLMOBS_ML_APP!,
+  apiKey: process.env.DD_API_KEY!,
+  agentless: true,
+})
+```
+
+> **Note:** For most bridge users, agent mode is the right choice. APM data cannot be sent in agentless mode, so enabling agentless splits LLM Observability traffic away from APM traffic. If you want LLM Observability only without an agent, use the [Datadog Exporter](https://mastra.ai/docs/observability/tracing/exporters/datadog) instead.
+
+## Trace hierarchy
+
+With the DatadogBridge, your traces maintain proper hierarchy across dd-trace and Mastra boundaries. Service calls made by tools and processors appear under the correct Mastra span:
+
+```text
+HTTP POST /api/chat (from web framework instrumentation)
+└── agent.orchestrator (from Mastra via DatadogBridge)
+    ├── chat gpt-5.4 (LLM call)
+    ├── tool.execute search (tool execution)
+    │   └── HTTP GET api.example.com (auto-instrumented from inside the tool)
+    └── processor.guardrail (output processor)
+        └── HTTP POST guardrail-service/check (auto-instrumented from inside the processor)
+```
+
+In Datadog, the APM trace shows this full topology, and the LLM Observability product shows the agent and LLM-specific spans with their inputs, outputs, and token metrics.
+
+## Span type mapping
+
+The bridge uses the same span-kind mapping as the Datadog Exporter for LLM Observability. See [span type mapping](https://mastra.ai/docs/observability/tracing/exporters/datadog) on the exporter page.
+
+## Using tags
+
+Tags help you categorize and filter traces in Datadog. Add tags when executing agents or workflows:
+
+```typescript
+const result = await agent.generate('Hello', {
+  tracingOptions: {
+    tags: ['production', 'experiment-v2', 'user-request'],
+  },
+})
+```
+
+Tags formatted as `key:value` (e.g., `instance_name:career-scout-api`) are split into structured tag entries; tags without a colon are set with a `true` value.
+
+## Promoting context keys to flat tags
+
+Use `requestContextKeys` to promote specific keys from the request context or span attributes into flat, indexable LLM Observability tags. This makes them filterable in the Datadog UI:
+
+```typescript
+new DatadogBridge({
+  mlApp: process.env.DD_LLMOBS_ML_APP!,
+  requestContextKeys: ['tenantId', 'agentId'],
+})
+```
+
+Promoted keys are removed from `annotations.metadata` and added as flat tags on each LLM Observability span.
+
+## Troubleshooting
+
+If APM spans aren't connecting to Mastra spans as expected:
+
+- Verify `dd-trace` is initialized **before** any other imports (it patches libraries at load time)
+- Verify a local Datadog Agent is running and reachable at `localhost:8126`
+- Ensure the DatadogBridge is set as `bridge` (not as an entry in `exporters`) in your observability config
+- Confirm you haven't also added the `DatadogExporter` to `exporters` — using both will double-emit LLM Observability data
+
+For native-module compatibility issues with `dd-trace` and bundler externals, see the [Datadog exporter troubleshooting](https://mastra.ai/docs/observability/tracing/exporters/datadog) section.
+
+## Related
+
+- [Tracing Overview](https://mastra.ai/docs/observability/tracing/overview)
+- [Datadog Exporter](https://mastra.ai/docs/observability/tracing/exporters/datadog) - LLM Observability only, no `dd-trace` APM
+- [DatadogBridge Reference](https://mastra.ai/reference/observability/tracing/bridges/datadog) - API documentation
+- [Datadog APM documentation](https://docs.datadoghq.com/tracing/)
+- [Datadog LLM Observability documentation](https://docs.datadoghq.com/llm_observability/)
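The `key:value` tag rule described in the bridge doc above can be sketched as a small pure function. This is an illustrative sketch only: `splitTags` is a hypothetical helper, not part of the `@mastra/datadog` API, and the real implementation may differ.

```typescript
// Hypothetical sketch of the documented tag rule (not the actual
// @mastra/datadog implementation): "key:value" tags become structured
// entries, and tags without a colon are stored with the value `true`.
function splitTags(tags: string[]): Record<string, string | boolean> {
  const entries: Record<string, string | boolean> = {}
  for (const tag of tags) {
    const sep = tag.indexOf(':')
    if (sep > 0) {
      // Split on the first colon only, so values may themselves contain colons.
      entries[tag.slice(0, sep)] = tag.slice(sep + 1)
    } else {
      entries[tag] = true
    }
  }
  return entries
}

// splitTags(['instance_name:career-scout-api', 'production'])
// → { instance_name: 'career-scout-api', production: true }
```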
package/.docs/docs/observability/tracing/exporters/datadog.md
@@ -2,6 +2,8 @@
 
 [Datadog](https://datadoghq.com/) is a comprehensive monitoring platform with dedicated LLM Observability features. The Datadog exporter sends your traces to Datadog's LLM Observability product, providing insights into model performance, token usage, and conversation flows.
 
+> **Also using dd-trace APM?** If you also use `dd-trace` APM auto-instrumentation, consider the [Datadog Bridge](https://mastra.ai/docs/observability/tracing/bridges/datadog) instead. The bridge creates `dd-trace` spans in real time so HTTP and database calls inside tools and processors are correctly nested under their parent Mastra span. The exporter on its own sends LLM Observability data after execution completes, which means auto-instrumented APM spans fall back to the request handler.
+
 ## Installation
 
 **npm**:
@@ -130,7 +132,7 @@ Note: When using agent mode, the API key is read from the local agent's configur
 
 ## Span type mapping
 
-Mastra span types are automatically mapped to Datadog
+Mastra span types are automatically mapped to Datadog LLM Observability span kinds:
 
 | Mastra SpanType | Datadog Kind |
 | -------------------- | ------------ |
package/.docs/docs/workflows/scheduled-workflows.md
@@ -0,0 +1,181 @@
+# Scheduled workflows
+
+Declare a `schedule` field on a workflow and Mastra will fire it on the cron you specify. The same workflow remains callable directly with `workflow.start()` — scheduled fires and manual runs share a single execution path.
+
+## Quickstart
+
+The following workflow runs every day at 9am New York time. Register it on `Mastra` as you would any other workflow — the scheduler picks it up automatically.
+
+```typescript
+import { createWorkflow, createStep } from '@mastra/core/workflows'
+import { z } from 'zod'
+
+const sendReport = createStep({
+  id: 'send-report',
+  inputSchema: z.object({ userId: z.string() }),
+  outputSchema: z.object({ ok: z.boolean() }),
+  execute: async ({ inputData }) => {
+    // ...send the report for inputData.userId
+    return { ok: true }
+  },
+})
+
+export const dailyReport = createWorkflow({
+  id: 'daily-report',
+  inputSchema: z.object({ userId: z.string() }),
+  outputSchema: z.object({ ok: z.boolean() }),
+  schedule: {
+    cron: '0 9 * * *',
+    timezone: 'America/New_York',
+    inputData: { userId: 'system' },
+  },
+})
+  .then(sendReport)
+  .commit()
+```
+
+There is no separate "register schedule" call. The scheduler reads `schedule` straight off the workflow when `Mastra` boots.
+
+## What `schedule` changes
+
+A workflow that declares `schedule` is auto-promoted to the **evented execution engine**. The public API (`workflow.start()`, `workflow.startAsync()`, `streamLegacy()`, `resume()`) is unchanged — `EventedWorkflow extends Workflow` and overrides each method with matching signatures. From your code, scheduled fires and manual runs are indistinguishable.
+
+The promotion has one practical implication: evented runs require a storage adapter that supports concurrent updates, for example `@mastra/libsql`. If your adapter does not, `createRun()` throws a clear error pointing at the `schedule` field. Switch adapters or remove the schedule.
+
+## Single schedule
+
+Pass an object to `schedule` for a workflow that fires on one cadence:
+
+```typescript
+const dailyReport = createWorkflow({
+  id: 'daily-report',
+  schedule: {
+    cron: '0 9 * * *',
+    timezone: 'America/New_York',
+    inputData: { userId: 'system' },
+  },
+  // ...
+})
+```
+
+Fields:
+
+- `cron` (required): a 5-, 6-, or 7-part cron expression. Validated at workflow construction time.
+- `timezone` (optional): IANA timezone, for example `America/New_York`. Defaults to the host's local timezone. Set this explicitly in production so fire times do not depend on server locale.
+- `inputData` (optional): payload passed as the workflow's input on every fire.
+- `initialState` (optional): initial state for the run.
+- `requestContext` (optional): request context attached to the run.
+- `metadata` (optional): arbitrary metadata persisted alongside the schedule row.
+
+## Multiple schedules
+
+Pass an array to fire the same workflow on multiple cadences. Each entry needs a unique stable `id`:
+
+```typescript
+const heartbeat = createWorkflow({
+  id: 'heartbeat',
+  schedule: [
+    { id: 'morning', cron: '0 9 * * *', inputData: { window: 'morning' } },
+    { id: 'evening', cron: '0 18 * * *', inputData: { window: 'evening' } },
+  ],
+  // ...
+})
+```
+
+Each entry creates an independent schedule row, fires on its own cron, and shows up separately in the Studio Schedules view.
+
+## Viewing schedules in Studio
+
+Studio surfaces schedules as a top-level area, not as a tab inside a workflow:
+
+- **All schedules**: open `/workflows/schedules` for a cross-workflow list. Each row shows the workflow id, cron, next fire, and the most recent run's status, so the list answers "is anything broken?" at a glance.
+- **Filtered by workflow**: append `?workflowId=<id>` to scope the list to a single workflow, for example `/workflows/schedules?workflowId=daily-report`.
+- **Schedule detail**: select any row to open `/workflows/schedules/:scheduleId`, which shows the schedule's metadata, **Pause** / **Resume** controls, and the full trigger history.
+
+A workflow's header includes a **Schedules** action when the workflow has at least one schedule:
+
+- One schedule: the action links straight to that schedule's detail page.
+- Multiple schedules: the action links to the workflow-filtered list at `/workflows/schedules?workflowId=<id>`.
+- No schedules: the action is hidden.
+
+### Trigger history
+
+Every fire records a trigger row with the run id, scheduled time, actual fire time, and publish status. The schedule detail page joins each trigger to the corresponding workflow run and shows:
+
+- The run's status (`running`, `success`, `failed`, `suspended`, `canceled`) as a badge.
+- The run's start time and duration.
+- A link to the run's full graph view at `/workflows/:workflowId/graph/:runId`.
+- A `pending` badge for triggers whose run record has not been written yet, due to a race between trigger publish and run snapshot.
+- A `publish failed` badge with the publish error when the scheduler could not enqueue the run at all.
+
+Triggers in a non-terminal state cause the panel to poll every five seconds until they reach a terminal state. The list paginates, so long-running schedules do not load thousands of rows up front.
+
+## Pausing a schedule at runtime
+
+When a scheduled workflow misfires in production, you do not have to redeploy or hand-edit the database. Pause it from the SDK:
+
+```typescript
+import { MastraClient } from '@mastra/client-js'
+
+const client = new MastraClient({ baseUrl: 'http://localhost:4111' })
+
+// Schedule ids are derived from the workflow id: `wf_<workflowId>` for a
+// single declarative schedule, or `wf_<workflowId>__<scheduleId>` when you
+// declare multiple schedules per workflow as an array.
+await client.pauseSchedule('wf_daily-report')
+// ...investigate, ship a fix, then:
+await client.resumeSchedule('wf_daily-report')
+```
+
+In Studio, open the schedule detail page and select **Pause** or **Resume** in the header.
+
+A few rules worth knowing:
+
+- Pause is durable. The status is written to the schedules table and survives process restarts and redeploys. The declarative-config upsert never overwrites a user-set status, even when you change `cron`, `timezone`, or other fields.
+- Resume recomputes `nextFireAt` from now. A schedule paused for a week does not fire seven backlogged runs the moment you resume it. It fires on the next regular cron tick.
+- The only way to unpause is `resumeSchedule` (or the **Resume** button in Studio). Editing the workflow's `schedule` config does not unpause a paused row.
+- Pause and resume are idempotent. Calling pause on an already-paused schedule is a no-op.
+- This is an operational override, not a way to author schedules. Creating, deleting, and editing schedules still happens in code via the `schedule` field on `createWorkflow`.
+
+The underlying HTTP routes are `POST /api/schedules/:scheduleId/pause` and `POST /api/schedules/:scheduleId/resume`. Both require the `schedules:write` permission.
+
+## Redeploying with changes
+
+When you change the `schedule` config and redeploy, Mastra diffs the existing schedule row against the new config:
+
+- If `cron` or `timezone` changed, `nextFireAt` is recomputed.
+- If only `inputData`, `initialState`, or `metadata` changed, the row is patched in place and the next fire time is preserved.
+- User-set status (for example, paused via `client.pauseSchedule`) and fire history are never overwritten.
+
+Removing a schedule entry from a workflow's `schedule` array deletes its row on the next boot.
+
+## Deployment topology
+
+The built-in scheduler is a `setInterval` tick loop that polls the schedules table, claims due rows, and dispatches workflow runs through the in-process pubsub. It assumes a long-lived host process.
+
+### Long-lived host (recommended)
+
+Deploy targets such as Fly Machines, Railway, Render, AWS ECS, GKE, or your own server keep the Mastra process alive between cron ticks. Schedules work without extra setup.
+
+### Serverless platforms
+
+Functions-as-a-service platforms such as Vercel, Netlify, AWS Lambda, and Cloudflare Workers shut the process down after each request. The tick loop never gets a second tick. Schedules declared in code do not fire on these platforms with the built-in scheduler today.
+
+On these platforms, use [`@mastra/inngest`](#inngest-workflows) instead. Inngest is serverless-native and holds the cron state for you.
+
+## Inngest workflows
+
+The `schedule` field documented on this page drives Mastra's built-in scheduler. If you use `@mastra/inngest`, scheduled workflows are configured through Inngest's own `cron` field on `createFunction` and fire on Inngest's scheduler instead.
+
+Practical implications:
+
+- Inngest schedules do not appear in Studio's `/workflows/schedules` view.
+- The workflow header **Schedules** action does not show for Inngest workflows.
+- `client.pauseSchedule` and `client.resumeSchedule` do not control Inngest schedules.
+
+Manage Inngest schedules from the [Inngest dashboard](https://www.inngest.com/docs/guides/scheduled-functions). Use Mastra schedules when you want Mastra to own scheduling end to end.
+
+## Related
+
+- [Workflow overview](https://mastra.ai/docs/workflows/overview)
+- [Suspend and resume](https://mastra.ai/docs/workflows/suspend-and-resume)
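The schedule-id convention noted in the pause/resume snippet above can be expressed as a one-line rule. The helper below is purely illustrative (it is not part of Mastra's API); it simply restates the documented derivation of `wf_<workflowId>` and `wf_<workflowId>__<scheduleId>`.

```typescript
// Illustrative helper (not Mastra API): derives the schedule row id used with
// pauseSchedule/resumeSchedule from the workflow id and optional schedule id.
function scheduleRowId(workflowId: string, scheduleId?: string): string {
  // A single declarative schedule gets `wf_<workflowId>`; each entry of a
  // schedule array gets `wf_<workflowId>__<scheduleId>`.
  return scheduleId ? `wf_${workflowId}__${scheduleId}` : `wf_${workflowId}`
}

// scheduleRowId('daily-report')         → 'wf_daily-report'
// scheduleRowId('heartbeat', 'morning') → 'wf_heartbeat__morning'
```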
package/.docs/models/gateways/openrouter.md
@@ -1,6 +1,6 @@
 # OpenRouter
 
-OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 182 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -166,6 +166,7 @@ ANTHROPIC_API_KEY=ant-...
 | `openai/o4-mini` |
 | `openrouter/elephant-alpha` |
 | `openrouter/free` |
+| `openrouter/owl-alpha` |
 | `openrouter/pareto-code` |
 | `prime-intellect/intellect-3` |
 | `qwen/qwen-2.5-coder-32b-instruct` |
package/.docs/models/gateways/vercel.md
@@ -1,6 +1,6 @@
 # Vercel
 
-Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 246 models through Mastra's model router.
 
 Learn more in the [Vercel documentation](https://ai-sdk.dev/providers/ai-sdk-providers).
 
@@ -52,10 +52,12 @@ ANTHROPIC_API_KEY=ant-...
 | `alibaba/qwen3-max-thinking` |
 | `alibaba/qwen3-next-80b-a3b-instruct` |
 | `alibaba/qwen3-next-80b-a3b-thinking` |
+| `alibaba/qwen3-vl-235b-a22b-instruct` |
 | `alibaba/qwen3-vl-instruct` |
 | `alibaba/qwen3-vl-thinking` |
 | `alibaba/qwen3.5-flash` |
 | `alibaba/qwen3.5-plus` |
+| `alibaba/qwen3.6-27b` |
 | `alibaba/qwen3.6-plus` |
 | `amazon/nova-2-lite` |
 | `amazon/nova-lite` |
@@ -123,6 +125,7 @@ ANTHROPIC_API_KEY=ant-...
 | `google/text-embedding-005` |
 | `google/text-multilingual-embedding-002` |
 | `inception/mercury-2` |
+| `inception/mercury-coder-small` |
 | `inception/mercury-edit-2` |
 | `interfaze/interfaze-beta` |
 | `kwaipilot/kat-coder-pro-v1` |
@@ -256,11 +259,14 @@ ANTHROPIC_API_KEY=ant-...
 | `xai/grok-4.20-non-reasoning-beta` |
 | `xai/grok-4.20-reasoning` |
 | `xai/grok-4.20-reasoning-beta` |
+| `xai/grok-4.3` |
 | `xai/grok-code-fast-1` |
 | `xai/grok-imagine-image` |
 | `xai/grok-imagine-image-pro` |
 | `xiaomi/mimo-v2-flash` |
 | `xiaomi/mimo-v2-pro` |
+| `xiaomi/mimo-v2.5` |
+| `xiaomi/mimo-v2.5-pro` |
 | `zai/glm-4.5` |
 | `zai/glm-4.5-air` |
 | `zai/glm-4.5v` |
package/.docs/models/index.md
CHANGED
@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3832 models from 106 providers through a single API.
 
 ## Features
 
package/.docs/models/providers/cortecs.md
@@ -1,6 +1,6 @@
 # Cortecs
 
-Access
+Access 35 Cortecs models through Mastra's model router. Authentication is handled automatically using the `CORTECS_API_KEY` environment variable.
 
 Learn more in the [Cortecs documentation](https://cortecs.ai).
 
@@ -64,6 +64,7 @@ for await (const chunk of stream) {
 | `cortecs/minimax-m2.5` | 197K | | | | | | $0.32 | $1 |
 | `cortecs/minimax-m2.7` | 203K | | | | | | $0.47 | $1 |
 | `cortecs/nova-pro-v1` | 300K | | | | | | $1 | $4 |
+| `cortecs/qwen-2.5-72b-instruct` | 33K | | | | | | $0.06 | $0.23 |
 | `cortecs/qwen3-32b` | 16K | | | | | | $0.10 | $0.33 |
 | `cortecs/qwen3-coder-480b-a35b-instruct` | 262K | | | | | | $0.44 | $2 |
 | `cortecs/qwen3-coder-next` | 256K | | | | | | $0.16 | $0.84 |
package/.docs/models/providers/digitalocean.md
@@ -1,6 +1,6 @@
 # DigitalOcean
 
-Access
+Access 63 DigitalOcean models through Mastra's model router. Authentication is handled automatically using the `DIGITALOCEAN_ACCESS_TOKEN` environment variable.
 
 Learn more in the [DigitalOcean documentation](https://docs.digitalocean.com/products/gradient-ai-platform/details/models/).
 
@@ -50,6 +50,7 @@ for await (const chunk of stream) {
 | `digitalocean/bge-reranker-v2-m3` | 8K | | | | | | $0.01 | — |
 | `digitalocean/deepseek-3.2` | 128K | | | | | | $0.50 | $2 |
 | `digitalocean/deepseek-r1-distill-llama-70b` | 131K | | | | | | $0.99 | $0.99 |
+| `digitalocean/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
 | `digitalocean/e5-large-v2` | 512 | | | | | | $0.02 | — |
 | `digitalocean/fal-ai/elevenlabs/tts/multilingual-v2` | — | | | | | | — | — |
 | `digitalocean/fal-ai/fast-sdxl` | — | | | | | | — | — |