@mastra/mcp-docs-server 1.1.29-alpha.8 → 1.1.29

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,242 @@
1
+ # Background tasks
2
+
3
+ **Added in:** `@mastra/core@1.28.0`
4
+
5
+ Background tasks let an agent dispatch a long-running tool call without blocking the agentic loop. The tool returns an immediate acknowledgement, the LLM continues responding, and the task runs to completion in the background. When it finishes, its result is written to memory, and if you use [`streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle), the agent is re-invoked automatically so the result is processed in the same call.
6
+
7
+ ## When to use background tasks
8
+
9
+ Use background tasks when a tool call may take long enough that the user shouldn't wait for it before seeing a response. Common cases:
10
+
11
+ - Subagent delegations that themselves run multi-step research or writing.
12
+ - Tool calls that hit slow external services, queues, or large data jobs.
13
+ - Workflows triggered from a tool call that may take minutes to complete.
14
+
15
+ For tool calls that return quickly, foreground execution using `agent.stream()` and `agent.generate()` is simpler.
16
+
17
+ > **Note:** Background tasks require a configured [storage](https://mastra.ai/docs/memory/storage) backend on the Mastra instance. Tasks are persisted so they survive process restarts.
18
+
19
+ ## Quickstart
20
+
21
+ Background tasks are off by default. Enable them by setting `backgroundTasks.enabled` on the Mastra instance:
22
+
23
+ ```typescript
24
+ import { Mastra } from '@mastra/core'
25
+ import { LibSQLStore } from '@mastra/libsql'
26
+
27
+ export const mastra = new Mastra({
28
+ storage: new LibSQLStore({ id: 'storage', url: 'file:mastra.db' }),
29
+ backgroundTasks: {
30
+ enabled: true,
31
+ globalConcurrency: 10,
32
+ perAgentConcurrency: 5,
33
+ backpressure: 'queue',
34
+ defaultTimeoutMs: 300_000,
35
+ },
36
+ })
37
+ ```
38
+
39
+ The full set of options is listed in the [backgroundTasks configuration reference](https://mastra.ai/reference/configuration).
40
+
41
+ ## Run a tool in the background
42
+
43
+ Enabling the manager doesn't run anything in the background by itself; every tool defaults to foreground execution. You can run a tool in the background at one of three layers, listed in priority order:
44
+
45
+ 1. **LLM per-call override**: the model decides the call should run in the background and includes a `_background` field in the tool arguments.
46
+ 2. **Agent-level config**: the agent declares which of its tools are background-eligible.
47
+ 3. **Tool-level config**: the tool declares itself background-eligible.
48
+
49
+ ### Tool-level
50
+
51
+ Set `backgroundTasks.enabled: true` on the tool definition. Tools opted in at this layer run in the background whenever called by an agent that has the manager enabled.
52
+
53
+ ```typescript
54
+ import { createTool } from '@mastra/core/tools'
55
+ import { z } from 'zod'
56
+
57
+ export const researchTool = createTool({
58
+ id: 'research',
59
+ description: 'Run a long research job',
60
+ inputSchema: z.object({ topic: z.string() }),
61
+ backgroundTasks: {
62
+ enabled: true,
63
+ timeoutMs: 600_000,
64
+ maxRetries: 1,
65
+ },
66
+ execute: async ({ context }) => {
67
+ // ...
68
+ },
69
+ })
70
+ ```
71
+
72
+ ### Agent-level
73
+
74
+ Use `backgroundTasks.tools` on the agent to opt in specific tools, override timeouts for individual tools, or run all background-eligible tools in the background. Use `disabled: true` to short-circuit background dispatch for the agent entirely.
75
+
76
+ ```typescript
77
+ import { Agent } from '@mastra/core/agent'
78
+
79
+ export const researcher = new Agent({
80
+ id: 'researcher',
81
+ instructions: 'You research topics and answer questions.',
82
+ model: 'openai/gpt-5.4',
83
+ tools: { researchTool, summarizeTool },
84
+ backgroundTasks: {
85
+ tools: {
86
+ researchTool: { enabled: true, timeoutMs: 600_000 },
87
+ summarizeTool: false,
88
+ },
89
+ },
90
+ })
91
+ ```
92
+
93
+ Set `tools: 'all'` to opt in every tool the agent has.
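+
+ For example, a minimal sketch of the shorthand (reusing the agent above; tool names are illustrative):
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+
+ export const researcher = new Agent({
+   id: 'researcher',
+   instructions: 'You research topics and answer questions.',
+   model: 'openai/gpt-5.4',
+   tools: { researchTool, summarizeTool },
+   // Every registered tool becomes background-eligible.
+   backgroundTasks: { tools: 'all' },
+ })
+ ```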
94
+
95
+ ### LLM per-call override
96
+
97
+ When a tool is registered on an agent that has background tasks enabled, the model can include a `_background` field in the tool arguments to override the resolved configuration for that specific call. The model only includes what it wants to override; all fields in `_background` are optional. The override is stripped from the arguments before the tool runs.
98
+
99
+ ```json
100
+ {
101
+ "topic": "solana",
102
+ "_background": { "enabled": true, "timeoutMs": 900_000 }
103
+ }
104
+ ```
105
+
106
+ ### Resolution order
107
+
108
+ When a tool call is dispatched, the resolved background config is computed in this priority order:
109
+
110
+ 1. LLM `_background` override (if present in the call's arguments).
111
+ 2. Agent-level `backgroundTasks.tools` entry for the tool.
112
+ 3. Tool-level `backgroundTasks` config.
113
+ 4. Manager defaults (`defaultTimeoutMs`, `defaultRetries`).
114
+
115
+ If the agent has `backgroundTasks.disabled: true`, every tool call runs synchronously regardless of the layers above.
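+
+ As a minimal sketch (agent details are illustrative), forcing foreground execution for one agent looks like this:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+
+ export const syncAgent = new Agent({
+   id: 'sync-agent',
+   instructions: 'Answer directly without long-running work.',
+   model: 'openai/gpt-5.4',
+   tools: { researchTool },
+   // Short-circuit background dispatch: every tool call runs synchronously,
+   // regardless of tool-level config or LLM `_background` overrides.
+   backgroundTasks: { disabled: true },
+ })
+ ```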
116
+
117
+ ## Background task stream chunks
118
+
119
+ When a tool call is dispatched as a background task, two streams may surface lifecycle events for it: the agent's own stream and the [`backgroundTaskManager.stream()`](https://mastra.ai/docs/streaming/background-task-streaming) SSE stream. Each stream covers a different set of chunk types:
120
+
121
+ | Chunk type | When it fires | Emitted by |
122
+ | --------------------------- | -------------------------------------------------------------------------------------- | -------------- |
123
+ | `background-task-started` | The task has been enqueued and assigned a `taskId`. | Agent stream |
124
+ | `background-task-running` | The task picked up a worker and started executing. | Manager stream |
125
+ | `background-task-progress`  | Reports the number of currently running background tasks.                               | Agent stream   |
126
+ | `background-task-output` | A streamed output chunk from the task's `execute`. | Manager stream |
127
+ | `background-task-completed` | The task finished successfully. The `payload.result` matches the eventual tool result. | Manager stream |
128
+ | `background-task-failed` | The task threw or timed out. | Manager stream |
129
+ | `background-task-cancelled` | The task was cancelled before completing. | Manager stream |
130
+
131
+ `agent.stream().fullStream` only emits the agent-loop chunks (`background-task-started`, `background-task-progress`) on its own. `agent.streamUntilIdle()` emits the same two chunks and additionally subscribes to the manager pubsub for the run's memory scope, piping the five manager chunks (`background-task-running`, `background-task-output`, `background-task-completed`, `background-task-failed`, `background-task-cancelled`) into the same `fullStream`; consumers of `streamUntilIdle().fullStream` therefore see all seven types.
132
+
133
+ `backgroundTaskManager.stream()` only emits the five manager chunks.
134
+
135
+ The full payload shapes are documented in the [background task chunks reference](https://mastra.ai/reference/streaming/ChunkType).
136
+
137
+ ## Keep the agent stream open with `streamUntilIdle()`
138
+
139
+ `agent.stream()` returns once the LLM emits a final response, even if a background task is still running. Use `agent.streamUntilIdle()` when you want the stream to stay open until every dispatched background task has completed and the LLM has had a chance to respond to the result:
140
+
141
+ ```typescript
142
+ const stream = await agent.streamUntilIdle('Research solana for me', {
143
+ memory: { thread: 't1', resource: 'u1' },
144
+ maxIdleMs: 5 * 60_000,
145
+ })
146
+
147
+ for await (const chunk of stream.fullStream) {
148
+ // chunks from the initial turn AND any continuation turns triggered by
149
+ // background task completions flow through here
150
+ }
151
+ ```
152
+
153
+ When a background task completes, the result is injected into the agent's memory and `streamUntilIdle()` re-enters the agentic loop so the LLM can react to it. The stream closes when no tasks are running and no completions are queued.
154
+
155
+ `maxIdleMs` caps how long the stream waits between turns. The timer only runs while the wrapper is between turns, so a slow first token won't close the stream. The default is 5 minutes.
156
+
157
+ > **Note:** Visit [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) for the full API.
158
+
159
+ ### Aggregate properties
160
+
161
+ `streamUntilIdle()` returns a `MastraModelOutput` that looks like the one from `stream()`, but only `fullStream` spans the initial turn **and** any auto-continuations. Aggregate properties (`text`, `toolCalls`, `toolResults`, `finishReason`, `messageList`, `getFullOutput()`) still resolve against the **first turn's** internal buffer. If you need an aggregate view across continuations, consume `fullStream` yourself and accumulate.
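+
+ For example, a minimal sketch that collects every background task result across the initial turn and all continuations (payload field names follow the chunk table above):
+
+ ```typescript
+ const stream = await agent.streamUntilIdle('Research solana for me', {
+   memory: { thread: 't1', resource: 'u1' },
+ })
+
+ // Accumulate manually, since the aggregate properties only cover the first turn.
+ const taskResults: unknown[] = []
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'background-task-completed') {
+     taskResults.push(chunk.payload.result)
+   }
+ }
+ ```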
162
+
163
+ ## Subagents in the background
164
+
165
+ Subagent invocations are dispatched as tool calls under the hood, so the same background configuration applies. The recommended pattern is to opt each subagent in on the supervisor, since it's clearer and lets you tune `timeoutMs` per subagent in one place:
166
+
167
+ ```typescript
168
+ import { Agent } from '@mastra/core/agent'
169
+
170
+ const supervisor = new Agent({
171
+ id: 'supervisor',
172
+ instructions: 'Coordinate research and writing using the available agents.',
173
+ model: 'openai/gpt-5.4',
174
+ agents: { researchAgent, writingAgent },
175
+ backgroundTasks: {
176
+ tools: {
177
+ researchAgent: { enabled: true, timeoutMs: 900_000 },
178
+ writingAgent: { enabled: true, timeoutMs: 900_000 },
179
+ },
180
+ },
181
+ })
182
+
183
+ const stream = await supervisor.streamUntilIdle('Research AI in education and write an article', {
184
+ memory: { thread: 't1', resource: 'u1' },
185
+ })
186
+ ```
187
+
188
+ ### Inheriting from the subagent
189
+
190
+ If a subagent isn't listed under the supervisor's `backgroundTasks.tools` but has background-eligible tools of its own (either via tool-level `backgroundTasks.enabled: true` or its own `backgroundTasks.tools` entry), the framework still dispatches the entire subagent invocation as a background task. The supervisor inherits the subagent's intent: the subagent itself becomes the background task, and its inner tools run in the foreground inside the subagent's loop.
191
+
192
+ The background config used for the inherited dispatch (for example `waitTimeoutMs`) is derived from the subagent's own `backgroundTasks` config.
193
+
194
+ ```typescript
195
+ const researchAgent = new Agent({
196
+ id: 'research-agent',
197
+ description: 'Gathers factual information.',
198
+ model: 'openai/gpt-5-mini',
199
+ tools: { deepResearchTool },
200
+ backgroundTasks: {
201
+ tools: {
202
+ deepResearchTool: { enabled: true, timeoutMs: 600_000 },
203
+ },
204
+ waitTimeoutMs: 900_000,
205
+ },
206
+ })
207
+ ```
208
+
209
+ When a supervisor with no `backgroundTasks` configuration for `researchAgent` delegates to it, the supervisor still dispatches the whole `researchAgent` invocation as a background task, and `deepResearchTool` runs in the foreground inside that invocation instead of dispatching its own nested background task.
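+
+ A minimal sketch of the supervisor side of this pattern (names are illustrative):
+
+ ```typescript
+ const supervisor = new Agent({
+   id: 'supervisor',
+   instructions: 'Coordinate research using the available agents.',
+   model: 'openai/gpt-5.4',
+   agents: { researchAgent },
+   // No backgroundTasks config here: the researchAgent delegation still runs
+   // in the background because the subagent opts itself in (see above).
+ })
+ ```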
210
+
211
+ Use this pattern when you want a subagent to behave consistently in the background regardless of which supervisor invokes it. Use the supervisor-side opt-in (above) when you want to tune background behavior centrally per supervisor.
212
+
213
+ ## Lifecycle callbacks
214
+
215
+ Each layer can register terminal-state callbacks. Callbacks at different layers don't replace one another; the success and failure hooks at each layer fire for their respective outcomes:
216
+
217
+ - Tool-level `backgroundTasks.onComplete` / `onFailed`: scoped to one tool.
218
+ - Agent-level `backgroundTasks.onTaskComplete` / `onTaskFailed`: scoped to all tasks dispatched by this agent.
219
+ - Manager-level `onTaskComplete` / `onTaskFailed`: scoped globally.
220
+
221
+ ```typescript
222
+ export const mastra = new Mastra({
223
+ storage,
224
+ backgroundTasks: {
225
+ enabled: true,
226
+ onTaskComplete: task => {
227
+ logger.info('Background task complete', { taskId: task.id, toolName: task.toolName })
228
+ },
229
+ onTaskFailed: task => {
230
+ logger.error('Background task failed', { taskId: task.id, error: task.error })
231
+ },
232
+ },
233
+ })
234
+ ```
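+
+ The agent- and tool-level hooks follow the same shape. As a sketch (assuming the same `task` payload as the manager hooks; `logger` is illustrative):
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+
+ export const researcher = new Agent({
+   id: 'researcher',
+   instructions: 'You research topics and answer questions.',
+   model: 'openai/gpt-5.4',
+   tools: { researchTool },
+   backgroundTasks: {
+     tools: { researchTool: { enabled: true } },
+     // Agent-level hooks: fire only for tasks dispatched by this agent.
+     onTaskComplete: task => {
+       logger.info('Researcher task complete', { taskId: task.id, toolName: task.toolName })
+     },
+     onTaskFailed: task => {
+       logger.error('Researcher task failed', { taskId: task.id, error: task.error })
+     },
+   },
+ })
+ ```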
235
+
236
+ ## Related
237
+
238
+ - [`Agent.streamUntilIdle()` reference](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
239
+ - [backgroundTasks configuration reference](https://mastra.ai/reference/configuration)
240
+ - [Supervisor agents](https://mastra.ai/docs/agents/supervisor-agents)
241
+ - [Stream chunk types](https://mastra.ai/reference/streaming/ChunkType)
242
+ - [Storage](https://mastra.ai/docs/memory/storage)
@@ -72,7 +72,7 @@ Platform webhooks need a public URL to reach your local server. Use a tunnel to
72
72
  ngrok http 4111
73
73
 
74
74
  # cloudflared
75
- cloudflared tunnel --url http://localhost:4111
75
+ npx cloudflared tunnel --url http://localhost:4111
76
76
  ```
77
77
 
78
78
  Copy the generated URL and use it as the base for your webhook paths (e.g. `https://abc123.ngrok.io/api/agents/support-agent/channels/slack/webhook`).
@@ -164,6 +164,7 @@ By default, only images are sent inline (`inlineMedia: ['image/*']`). Unsupporte
164
164
 
165
165
  ## Related
166
166
 
167
+ - [Guide: Building a Slack assistant](https://mastra.ai/guides/guide/slack-assistant)
167
168
  - [Agent overview](https://mastra.ai/docs/agents/overview)
168
169
  - [Tool approval](https://mastra.ai/docs/agents/agent-approval)
169
170
  - [Channels reference](https://mastra.ai/reference/agents/channels)
@@ -300,9 +300,38 @@ Success criteria:
300
300
  })
301
301
  ```
302
302
 
303
- ## Sub-agent versioning
303
+ ## Running subagents in the background
304
304
 
305
- When using the [editor](https://mastra.ai/docs/editor/overview), you can control which stored version of each sub-agent the supervisor uses at runtime. Set version overrides on the Mastra instance or per invocation:
305
+ Subagent invocations are dispatched as tool calls, so they can run as [background tasks](https://mastra.ai/docs/agents/background-tasks). This is useful when one or more delegations are long-running and you don't want them to block the supervisor's response.
306
+
307
+ Enable the [backgroundTasks manager](https://mastra.ai/reference/configuration) on the Mastra instance, then opt subagents in on the supervisor:
308
+
309
+ ```typescript
310
+ const supervisor = new Agent({
311
+ id: 'supervisor',
312
+ instructions: 'Coordinate research and writing using the available agents.',
313
+ model: 'openai/gpt-5.4',
314
+ agents: { researchAgent, writingAgent },
315
+ backgroundTasks: {
316
+ tools: {
317
+ researchAgent: { enabled: true, timeoutMs: 900_000 },
318
+ writingAgent: { enabled: true, timeoutMs: 900_000 },
319
+ },
320
+ },
321
+ })
322
+
323
+ const stream = await supervisor.streamUntilIdle('Research AI in education and write an article', {
324
+ memory: { thread: 't1', resource: 'u1' },
325
+ })
326
+ ```
327
+
328
+ Use [`streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) instead of `stream()` so the stream stays open until the subagents complete and the supervisor has had a chance to respond to their results.
329
+
330
+ If a subagent isn't listed on the supervisor but has its own background-eligible tools, the supervisor still dispatches the subagent as a background task and inherits its config. See [Inheriting from the subagent](https://mastra.ai/docs/agents/background-tasks) for details.
331
+
332
+ ## Subagent versioning
333
+
334
+ When using the [editor](https://mastra.ai/docs/editor/overview), you can control which stored version of each subagent the supervisor uses at runtime. Set version overrides on the Mastra instance or per invocation:
306
335
 
307
336
  ```typescript
308
337
  const result = await supervisor.generate('Research and write about AI safety', {
@@ -315,13 +344,15 @@ const result = await supervisor.generate('Research and write about AI safety', {
315
344
  })
316
345
  ```
317
346
 
318
- Version overrides propagate automatically through delegation. See [Sub-agent versioning](https://mastra.ai/docs/editor/overview) for details on resolution order and server API usage.
347
+ Version overrides propagate automatically through delegation. See [Subagent versioning](https://mastra.ai/docs/editor/overview) for details on resolution order and server API usage.
319
348
 
320
349
  ## Related
321
350
 
322
- - [Sub-agent versioning](https://mastra.ai/docs/editor/overview)
351
+ - [Background tasks](https://mastra.ai/docs/agents/background-tasks)
352
+ - [Subagent versioning](https://mastra.ai/docs/editor/overview)
323
353
  - [Guide: Research coordinator](https://mastra.ai/guides/guide/research-coordinator)
324
354
  - [Agent.stream() reference](https://mastra.ai/reference/streaming/agents/stream)
355
+ - [Agent.streamUntilIdle() reference](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
325
356
  - [Agent.generate() reference](https://mastra.ai/reference/agents/generate)
326
357
  - [Agent approval](https://mastra.ai/docs/agents/agent-approval)
327
358
  - [Memory in multi-agent systems](https://mastra.ai/docs/memory/overview)
@@ -290,6 +290,7 @@ Note that for subagents, you'll see two different identifiers in stream response
290
290
 
291
291
  - [`createTool` reference](https://mastra.ai/reference/tools/create-tool)
292
292
  - [`Agent.generate()` reference](https://mastra.ai/reference/agents/generate): Runtime options for tool selection, steps, and callbacks
293
+ - [Background tasks](https://mastra.ai/docs/agents/background-tasks): Run long-running tools without blocking the agent loop
293
294
  - [MCP overview](https://mastra.ai/docs/mcp/overview)
294
295
  - [Dynamic tool search](https://mastra.ai/reference/processors/tool-search-processor): Load tools on demand for agents with large tool libraries
295
296
  - [Tools with structured output](https://mastra.ai/docs/agents/structured-output): Model compatibility when combining tools and structured output
@@ -0,0 +1,80 @@
1
+ # Background task streaming
2
+
3
+ **Added in:** `@mastra/core@1.28.0`
4
+
5
+ `mastra.backgroundTaskManager.stream()` returns a `ReadableStream` of [background task](https://mastra.ai/docs/agents/background-tasks) lifecycle events. Use it to monitor running tasks across the system, for example to drive a status dashboard, surface progress in your own UI, or pipe events into an SSE response.
6
+
7
+ The stream emits the same chunk types that appear inside `Agent.streamUntilIdle()` (`background-task-running`, `background-task-output`, `background-task-completed`, `background-task-failed`, `background-task-cancelled`). See [background task chunks](https://mastra.ai/reference/streaming/ChunkType) for the full payload shapes.
8
+
9
+ > **Note:** Background tasks must be [enabled on the Mastra instance](https://mastra.ai/reference/configuration) before `backgroundTaskManager` is available. When disabled, `mastra.backgroundTaskManager` is `undefined`.
10
+
11
+ ## Subscribe to all task events
12
+
13
+ Calling `stream()` with no filter returns a stream of every task event in the system. On connection, the stream emits a snapshot of all currently running tasks, then forwards live events as they happen.
14
+
15
+ ```typescript
16
+ const bgManager = mastra.backgroundTaskManager
17
+ if (!bgManager) throw new Error('Background tasks are not enabled')
18
+
19
+ const controller = new AbortController()
20
+ const stream = bgManager.stream({ abortSignal: controller.signal })
21
+
22
+ for await (const chunk of stream) {
23
+ switch (chunk.type) {
24
+ case 'background-task-running':
25
+ console.log('started', chunk.payload.taskId, chunk.payload.toolName)
26
+ break
27
+ case 'background-task-completed':
28
+ console.log('done', chunk.payload.taskId, chunk.payload.result)
29
+ break
30
+ case 'background-task-failed':
31
+ console.error('failed', chunk.payload.taskId, chunk.payload.error)
32
+ break
33
+ }
34
+ }
35
+ ```
36
+
37
+ The stream stays open until the caller's `AbortSignal` fires. Always pass an `abortSignal` so you can disconnect cleanly.
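+
+ As a sketch, you can pipe the stream into a server-sent events response using Web-standard `Request`/`Response` (the route handler shape is illustrative; adapt it to your server framework, and assume `mastra` is imported from your Mastra entry file):
+
+ ```typescript
+ export async function GET(request: Request) {
+   const bgManager = mastra.backgroundTaskManager
+   if (!bgManager) return new Response('Background tasks are not enabled', { status: 503 })
+
+   // Tie the task stream's lifetime to the client connection.
+   const stream = bgManager.stream({ abortSignal: request.signal })
+
+   const encoder = new TextEncoder()
+   const body = new ReadableStream({
+     async start(controller) {
+       for await (const chunk of stream) {
+         // One SSE `data:` line per task event.
+         controller.enqueue(encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`))
+       }
+       controller.close()
+     },
+   })
+
+   return new Response(body, {
+     headers: {
+       'Content-Type': 'text/event-stream',
+       'Cache-Control': 'no-cache',
+     },
+   })
+ }
+ ```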
38
+
39
+ ## Filter the stream
40
+
41
+ Pass any combination of filter options to narrow the events you receive. Filters apply to both the initial snapshot and the live event subscription.
42
+
43
+ ```typescript
44
+ const stream = bgManager.stream({
45
+ agentId: 'researcher',
46
+ threadId: 't1',
47
+ resourceId: 'u1',
48
+ abortSignal: controller.signal,
49
+ })
50
+ ```
51
+
52
+ | Filter | Description |
53
+ | ------------- | --------------------------------------------------- |
54
+ | `agentId` | Only events from tasks dispatched by this agent |
55
+ | `runId` | Only events from this specific agent run |
56
+ | `threadId` | Only events from tasks scoped to this memory thread |
57
+ | `resourceId` | Only events from tasks scoped to this resource |
58
+ | `taskId` | Only events for a single task |
59
+ | `abortSignal` | Closes the stream when the signal aborts |
60
+
61
+ ## Look up task state directly
62
+
63
+ For one-off lookups instead of a live stream, use `getTask` and `listTasks`:
64
+
65
+ ```typescript
66
+ const bgManager = mastra.backgroundTaskManager
+ if (!bgManager) throw new Error('Background tasks are not enabled')
+
+ const task = await bgManager.getTask(taskId)
+ const { tasks, total } = await bgManager.listTasks({
+   status: 'running',
+   agentId: 'researcher',
+ })
71
+ ```
72
+
73
+ These read from storage rather than the pubsub stream, so they're suitable for paginated lists and detail views.
74
+
75
+ ## Related
76
+
77
+ - [Background tasks](https://mastra.ai/docs/agents/background-tasks)
78
+ - [`Agent.streamUntilIdle()` reference](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
79
+ - [Background task chunks](https://mastra.ai/reference/streaming/ChunkType)
80
+ - [backgroundTasks configuration reference](https://mastra.ai/reference/configuration)
@@ -29,6 +29,8 @@ for await (const chunk of stream.textStream) {
29
29
 
30
30
  > **Info:** Visit [Agent.stream()](https://mastra.ai/reference/streaming/agents/stream) for more information.
31
31
 
32
+ > **Tip:** For agents that dispatch [background tasks](https://mastra.ai/docs/agents/background-tasks), use [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) to keep the stream open until those tasks complete and the agent has had a chance to respond to their results.
33
+
32
34
  ### Output from `Agent.stream()`
33
35
 
34
36
  The output streams the generated response from the agent.
@@ -129,5 +131,6 @@ A workflow stream provides access to various response properties:
129
131
  ## Related
130
132
 
131
133
  - [Streaming events](https://mastra.ai/docs/streaming/events)
134
+ - [Background tasks](https://mastra.ai/docs/agents/background-tasks)
132
135
  - [Using Agents](https://mastra.ai/docs/agents/overview)
133
136
  - [Workflows overview](https://mastra.ai/docs/workflows/overview)
@@ -19,7 +19,7 @@ A filesystem provider handles all file operations for a workspace:
19
19
  Available providers:
20
20
 
21
21
  - [`LocalFilesystem`](https://mastra.ai/reference/workspace/local-filesystem): Stores files in a directory on disk
22
- - [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO)
22
+ - [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO, Tigris)
23
23
  - [`GCSFilesystem`](https://mastra.ai/reference/workspace/gcs-filesystem): Stores files in Google Cloud Storage
24
24
  - [`AzureBlobFilesystem`](https://mastra.ai/reference/workspace/azure-blob-filesystem): Stores files in Azure Blob Storage
25
25
  - [`AgentFSFilesystem`](https://mastra.ai/reference/workspace/agentfs-filesystem): Stores files in a Turso/SQLite database via AgentFS
@@ -0,0 +1,191 @@
1
+ # Building a Slack assistant
2
+
3
+ In this guide, you'll build a Mastra agent that responds to messages and mentions on Slack. You'll learn how to configure a channel adapter, set up a Slack app with the right permissions, connect it to your agent via a webhook, and test the interaction.
4
+
5
+ ## Prerequisites
6
+
7
+ - Node.js `v22.13.0` or later installed
8
+ - An API key from a supported [Model Provider](https://mastra.ai/models)
9
+ - An existing Mastra project. Follow the [installation guide](https://mastra.ai/guides/getting-started/quickstart) if needed.
10
+ - A [Slack workspace](https://slack.com/) where you can create apps
11
+
12
+ ## Create the agent
13
+
14
+ Install the Slack adapter:
15
+
16
+ **npm**:
17
+
18
+ ```bash
19
+ npm install @chat-adapter/slack
20
+ ```
21
+
22
+ **pnpm**:
23
+
24
+ ```bash
25
+ pnpm add @chat-adapter/slack
26
+ ```
27
+
28
+ **Yarn**:
29
+
30
+ ```bash
31
+ yarn add @chat-adapter/slack
32
+ ```
33
+
34
+ **Bun**:
35
+
36
+ ```bash
37
+ bun add @chat-adapter/slack
38
+ ```
39
+
40
+ Create a new file `src/mastra/agents/slack-agent.ts` and define your agent:
41
+
42
+ ```ts
43
+ import { Agent } from '@mastra/core/agent'
44
+ import { createSlackAdapter } from '@chat-adapter/slack'
45
+
46
+ export const slackAgent = new Agent({
47
+ id: 'slack-agent',
48
+ name: 'Slack Agent',
49
+ instructions:
50
+ 'You are a helpful assistant. Answer questions, help with tasks, and have natural conversations.',
51
+ model: 'anthropic/claude-opus-4-6',
52
+ channels: {
53
+ adapters: {
54
+ slack: createSlackAdapter(),
55
+ },
56
+ },
57
+ })
58
+ ```
59
+
60
+ The `channels` property tells Mastra to generate a webhook endpoint for each adapter. In this case, the Slack adapter handles event verification, signature validation, and message formatting automatically.
61
+
62
+ Register the agent in your `src/mastra/index.ts` file:
63
+
64
+ ```ts
65
+ import { Mastra } from '@mastra/core'
66
+ import { slackAgent } from './agents/slack-agent'
67
+
68
+ export const mastra = new Mastra({
69
+ agents: { slackAgent },
70
+ })
71
+ ```
72
+
73
+ ## Create a Slack app
74
+
75
+ You need a Slack app to connect your agent to a workspace.
76
+
77
+ Go to <https://api.slack.com/apps> and select **Create New App** > **From scratch**. Give it a name and select your workspace.
78
+
79
+ Navigate to **OAuth & Permissions** and scroll to **Bot Token Scopes**. Add the following scopes:
80
+
81
+ - `app_mentions:read`
82
+ - `channels:history`
83
+ - `channels:read`
84
+ - `chat:write`
85
+ - `users:read`
86
+
87
+ At the top of **OAuth & Permissions**, select **Install to Workspace**. Copy the **Bot User OAuth Token** (`xoxb-...`).
88
+
89
+ Go to **Basic Information** > **App Credentials** and copy the **Signing Secret** (e.g. `c3a4...`).
90
+
91
+ Ensure **Socket Mode** is turned **off** under **Settings** > **Socket Mode**.
92
+
93
+ Add the credentials to your `.env` file:
94
+
95
+ ```bash
96
+ SLACK_SIGNING_SECRET=your-signing-secret
97
+ SLACK_BOT_TOKEN=xoxb-your-bot-token
98
+ ```
99
+
100
+ The adapter reads these environment variables by default. See the [Chat SDK Slack adapter docs](https://chat-sdk.dev/adapters/slack) for more details.
101
+
102
+ ## Connect the webhook
103
+
104
+ Slack delivers events to your agent via a webhook. Mastra generates this endpoint automatically for each channel adapter.
105
+
106
+ Start the dev server:
107
+
108
+ **npm**:
109
+
110
+ ```bash
111
+ npm run dev
112
+ ```
113
+
114
+ **pnpm**:
115
+
116
+ ```bash
117
+ pnpm run dev
118
+ ```
119
+
120
+ **Yarn**:
121
+
122
+ ```bash
123
+ yarn dev
124
+ ```
125
+
126
+ **Bun**:
127
+
128
+ ```bash
129
+ bun run dev
130
+ ```
131
+
132
+ During local development, Slack needs a public URL to reach your local server. Open a new terminal and start a tunnel:
133
+
134
+ ```bash
135
+ npx cloudflared tunnel --url http://localhost:4111
136
+ ```
137
+
138
+ This outputs a temporary public URL like `https://triple-arms-solutions-kit.trycloudflare.com`. The URL changes each time you restart the tunnel.
139
+
140
+ In your Slack app settings, go to **Event Subscriptions** and toggle **Enable Events** to **On**.
141
+
142
+ Set the **Request URL** to your tunnel URL with the agent webhook path:
143
+
144
+ ```text
145
+ https://<your-tunnel-url>/api/agents/slack-agent/channels/slack/webhook
146
+ ```
147
+
148
+ Slack sends a verification request. If your dev server is running, it responds with a green checkmark.
149
+
150
+ Under **Subscribe to bot events**, add:
151
+
152
+ - `app_mention`
153
+ - `message.channels`
154
+
155
+ Select **Save Changes**. Slack requires you to reinstall the app after changing event subscriptions. Go to **OAuth & Permissions** and select **Reinstall to Workspace** to apply the updated permissions.
156
+
157
+ > **Note:** The tunnel URL is for local development only. When you [deploy your application](https://mastra.ai/docs/deployment/overview), update the **Request URL** in your Slack app's **Event Subscriptions** to your production URL (e.g. `https://your-app.example.com/api/agents/slack-agent/channels/slack/webhook`).
158
+
159
+ ## Test the agent
160
+
161
+ Before testing with Slack, you can refine your agent's behavior in [Studio](https://mastra.ai/docs/studio/overview). Open <http://localhost:4111/> to chat with your agent directly, adjust its instructions, and inspect traces to see how responses are generated.
162
+
163
+ Once you're happy with the agent's responses, test the Slack integration. Invite the bot to a channel in Slack:
164
+
165
+ ```text
166
+ /invite @your-bot-name
167
+ ```
168
+
169
+ Mention the bot in the channel:
170
+
171
+ ```text
172
+ @your-bot-name What can you help me with?
173
+ ```
174
+
175
+ The agent responds in the thread. Output may vary depending on the model and instructions.
176
+
177
+ ## Next steps
178
+
179
+ You can extend this agent to:
180
+
181
+ - [Deploy your application](https://mastra.ai/docs/deployment/overview) and update the Slack **Request URL** to your production endpoint
182
+ - Add more adapters (Discord, Telegram) to the same agent so it responds on multiple platforms
183
+ - Configure [multimodal content](https://mastra.ai/docs/agents/channels) to let your agent process images, video, and audio shared in chat
184
+ - Add [tools](https://mastra.ai/docs/agents/using-tools) to give the agent access to external APIs and data
185
+
186
+ Learn more:
187
+
188
+ - [Channels overview](https://mastra.ai/docs/agents/channels)
189
+ - [Studio](https://mastra.ai/docs/studio/overview)
190
+ - [Deployment overview](https://mastra.ai/docs/deployment/overview)
191
+ - [Chat SDK adapter docs](https://chat-sdk.dev)
@@ -1,6 +1,6 @@
1
1
  # Model Providers
2
2
 
3
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3757 models from 106 providers through a single API.
3
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3761 models from 106 providers through a single API.
4
4
 
5
5
  ## Features
6
6
 
@@ -1,6 +1,6 @@
1
1
  # ![Deep Infra logo](https://models.dev/logos/deepinfra.svg)Deep Infra
2
2
 
3
- Access 32 Deep Infra models through Mastra's model router. Authentication is handled automatically using the `DEEPINFRA_API_KEY` environment variable.
3
+ Access 33 Deep Infra models through Mastra's model router. Authentication is handled automatically using the `DEEPINFRA_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [Deep Infra documentation](https://deepinfra.com/models).
6
6
 
@@ -36,6 +36,7 @@ for await (const chunk of stream) {
36
36
  | `deepinfra/anthropic/claude-4-opus` | 200K | | | | | | $17 | $83 |
37
37
  | `deepinfra/deepseek-ai/DeepSeek-R1-0528` | 164K | | | | | | $0.50 | $2 |
38
38
  | `deepinfra/deepseek-ai/DeepSeek-V3.2` | 164K | | | | | | $0.26 | $0.38 |
39
+ | `deepinfra/deepseek-ai/DeepSeek-V4-Pro` | 66K | | | | | | $2 | $3 |
39
40
  | `deepinfra/meta-llama/Llama-3.1-70B-Instruct` | 131K | | | | | | $0.40 | $0.40 |
40
41
  | `deepinfra/meta-llama/Llama-3.1-70B-Instruct-Turbo` | 131K | | | | | | $0.40 | $0.40 |
41
42
  | `deepinfra/meta-llama/Llama-3.1-8B-Instruct` | 131K | | | | | | $0.02 | $0.05 |