@assistant-ui/mcp-docs-server 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (109)
  1. package/.docs/organized/code-examples/local-ollama.md +1135 -0
  2. package/.docs/organized/code-examples/search-agent-for-e-commerce.md +1721 -0
  3. package/.docs/organized/code-examples/with-ai-sdk.md +1081 -0
  4. package/.docs/organized/code-examples/with-cloud.md +1164 -0
  5. package/.docs/organized/code-examples/with-external-store.md +1064 -0
  6. package/.docs/organized/code-examples/with-ffmpeg.md +1305 -0
  7. package/.docs/organized/code-examples/with-langgraph.md +1819 -0
  8. package/.docs/organized/code-examples/with-openai-assistants.md +1175 -0
  9. package/.docs/organized/code-examples/with-react-hook-form.md +1727 -0
  10. package/.docs/organized/code-examples/with-vercel-ai-rsc.md +1157 -0
  11. package/.docs/raw/blog/2024-07-29-hello/index.mdx +65 -0
  12. package/.docs/raw/blog/2024-09-11/index.mdx +10 -0
  13. package/.docs/raw/blog/2024-12-15/index.mdx +10 -0
  14. package/.docs/raw/blog/2025-01-31-changelog/index.mdx +129 -0
  15. package/.docs/raw/docs/about-assistantui.mdx +44 -0
  16. package/.docs/raw/docs/api-reference/context-providers/AssistantRuntimeProvider.mdx +30 -0
  17. package/.docs/raw/docs/api-reference/context-providers/TextContentPartProvider.mdx +26 -0
  18. package/.docs/raw/docs/api-reference/integrations/react-hook-form.mdx +103 -0
  19. package/.docs/raw/docs/api-reference/integrations/vercel-ai-sdk.mdx +145 -0
  20. package/.docs/raw/docs/api-reference/overview.mdx +583 -0
  21. package/.docs/raw/docs/api-reference/primitives/ActionBar.mdx +264 -0
  22. package/.docs/raw/docs/api-reference/primitives/AssistantModal.mdx +129 -0
  23. package/.docs/raw/docs/api-reference/primitives/Attachment.mdx +96 -0
  24. package/.docs/raw/docs/api-reference/primitives/BranchPicker.mdx +87 -0
  25. package/.docs/raw/docs/api-reference/primitives/Composer.mdx +204 -0
  26. package/.docs/raw/docs/api-reference/primitives/ContentPart.mdx +173 -0
  27. package/.docs/raw/docs/api-reference/primitives/Error.mdx +70 -0
  28. package/.docs/raw/docs/api-reference/primitives/Message.mdx +181 -0
  29. package/.docs/raw/docs/api-reference/primitives/Thread.mdx +197 -0
  30. package/.docs/raw/docs/api-reference/primitives/composition.mdx +21 -0
  31. package/.docs/raw/docs/api-reference/runtimes/AssistantRuntime.mdx +33 -0
  32. package/.docs/raw/docs/api-reference/runtimes/AttachmentRuntime.mdx +46 -0
  33. package/.docs/raw/docs/api-reference/runtimes/ComposerRuntime.mdx +69 -0
  34. package/.docs/raw/docs/api-reference/runtimes/ContentPartRuntime.mdx +22 -0
  35. package/.docs/raw/docs/api-reference/runtimes/MessageRuntime.mdx +49 -0
  36. package/.docs/raw/docs/api-reference/runtimes/ThreadListItemRuntime.mdx +32 -0
  37. package/.docs/raw/docs/api-reference/runtimes/ThreadListRuntime.mdx +31 -0
  38. package/.docs/raw/docs/api-reference/runtimes/ThreadRuntime.mdx +48 -0
  39. package/.docs/raw/docs/architecture.mdx +92 -0
  40. package/.docs/raw/docs/cloud/authorization.mdx +152 -0
  41. package/.docs/raw/docs/cloud/overview.mdx +55 -0
  42. package/.docs/raw/docs/cloud/persistence/ai-sdk.mdx +54 -0
  43. package/.docs/raw/docs/cloud/persistence/langgraph.mdx +123 -0
  44. package/.docs/raw/docs/concepts/architecture.mdx +19 -0
  45. package/.docs/raw/docs/concepts/runtime-layer.mdx +163 -0
  46. package/.docs/raw/docs/concepts/why.mdx +9 -0
  47. package/.docs/raw/docs/copilots/make-assistant-readable.mdx +71 -0
  48. package/.docs/raw/docs/copilots/make-assistant-tool-ui.mdx +76 -0
  49. package/.docs/raw/docs/copilots/make-assistant-tool.mdx +117 -0
  50. package/.docs/raw/docs/copilots/model-context.mdx +135 -0
  51. package/.docs/raw/docs/copilots/motivation.mdx +191 -0
  52. package/.docs/raw/docs/copilots/use-assistant-instructions.mdx +62 -0
  53. package/.docs/raw/docs/getting-started.mdx +1133 -0
  54. package/.docs/raw/docs/guides/Attachments.mdx +640 -0
  55. package/.docs/raw/docs/guides/Branching.mdx +59 -0
  56. package/.docs/raw/docs/guides/Editing.mdx +56 -0
  57. package/.docs/raw/docs/guides/Speech.mdx +43 -0
  58. package/.docs/raw/docs/guides/ToolUI.mdx +663 -0
  59. package/.docs/raw/docs/guides/Tools.mdx +496 -0
  60. package/.docs/raw/docs/index.mdx +7 -0
  61. package/.docs/raw/docs/legacy/styled/AssistantModal.mdx +85 -0
  62. package/.docs/raw/docs/legacy/styled/Decomposition.mdx +633 -0
  63. package/.docs/raw/docs/legacy/styled/Markdown.mdx +86 -0
  64. package/.docs/raw/docs/legacy/styled/Scrollbar.mdx +71 -0
  65. package/.docs/raw/docs/legacy/styled/Thread.mdx +84 -0
  66. package/.docs/raw/docs/legacy/styled/ThreadWidth.mdx +21 -0
  67. package/.docs/raw/docs/mcp-docs-server.mdx +324 -0
  68. package/.docs/raw/docs/migrations/deprecation-policy.mdx +41 -0
  69. package/.docs/raw/docs/migrations/v0-7.mdx +188 -0
  70. package/.docs/raw/docs/migrations/v0-8.mdx +160 -0
  71. package/.docs/raw/docs/migrations/v0-9.mdx +75 -0
  72. package/.docs/raw/docs/react-compatibility.mdx +208 -0
  73. package/.docs/raw/docs/runtimes/ai-sdk/rsc.mdx +226 -0
  74. package/.docs/raw/docs/runtimes/ai-sdk/use-assistant-hook.mdx +195 -0
  75. package/.docs/raw/docs/runtimes/ai-sdk/use-chat-hook.mdx +138 -0
  76. package/.docs/raw/docs/runtimes/ai-sdk/use-chat.mdx +136 -0
  77. package/.docs/raw/docs/runtimes/custom/external-store.mdx +1624 -0
  78. package/.docs/raw/docs/runtimes/custom/local.mdx +1185 -0
  79. package/.docs/raw/docs/runtimes/helicone.mdx +60 -0
  80. package/.docs/raw/docs/runtimes/langgraph/index.mdx +320 -0
  81. package/.docs/raw/docs/runtimes/langgraph/tutorial/index.mdx +11 -0
  82. package/.docs/raw/docs/runtimes/langgraph/tutorial/introduction.mdx +28 -0
  83. package/.docs/raw/docs/runtimes/langgraph/tutorial/part-1.mdx +120 -0
  84. package/.docs/raw/docs/runtimes/langgraph/tutorial/part-2.mdx +336 -0
  85. package/.docs/raw/docs/runtimes/langgraph/tutorial/part-3.mdx +385 -0
  86. package/.docs/raw/docs/runtimes/langserve.mdx +126 -0
  87. package/.docs/raw/docs/runtimes/mastra/full-stack-integration.mdx +218 -0
  88. package/.docs/raw/docs/runtimes/mastra/overview.mdx +17 -0
  89. package/.docs/raw/docs/runtimes/mastra/separate-server-integration.mdx +196 -0
  90. package/.docs/raw/docs/runtimes/pick-a-runtime.mdx +222 -0
  91. package/.docs/raw/docs/ui/AssistantModal.mdx +46 -0
  92. package/.docs/raw/docs/ui/AssistantSidebar.mdx +42 -0
  93. package/.docs/raw/docs/ui/Attachment.mdx +82 -0
  94. package/.docs/raw/docs/ui/Markdown.mdx +72 -0
  95. package/.docs/raw/docs/ui/Mermaid.mdx +79 -0
  96. package/.docs/raw/docs/ui/Scrollbar.mdx +59 -0
  97. package/.docs/raw/docs/ui/SyntaxHighlighting.mdx +253 -0
  98. package/.docs/raw/docs/ui/Thread.mdx +47 -0
  99. package/.docs/raw/docs/ui/ThreadList.mdx +49 -0
  100. package/.docs/raw/docs/ui/ToolFallback.mdx +64 -0
  101. package/.docs/raw/docs/ui/primitives/Thread.mdx +197 -0
  102. package/LICENSE +21 -0
  103. package/README.md +128 -0
  104. package/dist/chunk-C7O7EFKU.js +38 -0
  105. package/dist/chunk-CZCDQ3YH.js +420 -0
  106. package/dist/index.js +1 -0
  107. package/dist/prepare-docs/prepare.js +199 -0
  108. package/dist/stdio.js +8 -0
  109. package/package.json +43 -0
@@ -0,0 +1,218 @@ package/.docs/raw/docs/runtimes/mastra/full-stack-integration.mdx
---
title: Full-Stack Integration
---

import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";

Integrate Mastra directly into your Next.js application's API routes. This approach keeps your backend and frontend code within the same project.

<Steps>
<Step>

### Initialize Assistant UI

Start by setting up Assistant UI in your project. Run one of the following commands:

```sh title="New Project"
npx assistant-ui@latest create
```

```sh title="Existing Project"
npx assistant-ui@latest init
```

This command installs the necessary dependencies and creates basic configuration files, including a default chat API route.

<Callout title="Need Help?">
  For detailed setup instructions, including adding API keys, basic
  configuration, and manual setup steps, please refer to the main [Getting
  Started guide](/docs/getting-started).
</Callout>

</Step>
<Step>

### Review the Initial API Route

The initialization command creates a basic API route at `app/api/chat/route.ts` (or `src/app/api/chat/route.ts`). It typically looks like this:

```typescript title="app/api/chat/route.ts"
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o-mini"),
    messages,
  });

  return result.toDataStreamResponse();
}
```

This default route uses the Vercel AI SDK directly with OpenAI. In the following steps, we will modify this route to integrate Mastra.

</Step>
<Step>

### Install Mastra Packages

Add the Mastra core, Mastra memory, and AI SDK OpenAI provider packages to your project:

```bash npm2yarn
npm install @mastra/core@latest @mastra/memory@latest @ai-sdk/openai
```

</Step>
<Step>

### Configure Next.js

To ensure Next.js correctly bundles your application when using Mastra directly in API routes, you need to configure `serverExternalPackages`.

Update your `next.config.mjs` (or `next.config.js`) file to include `@mastra/*`:

```js title="next.config.mjs"
/** @type {import('next').NextConfig} */
const nextConfig = {
  serverExternalPackages: ["@mastra/*"],
  // ... other configurations
};

export default nextConfig;
```

This tells Next.js to treat Mastra packages as external dependencies on the server side.

</Step>
<Step>

### Create Mastra Files

Set up the basic folder structure for your Mastra configuration. Create a `mastra` folder (e.g., in your `src` or root directory) with the following structure:

```txt title="Project Structure"
/
├── mastra/
│   ├── agents/
│   │   └── chefAgent.ts
│   └── index.ts
└── ... (rest of your project)
```

You can create these files and folders manually or use the following commands in your terminal:

```bash
mkdir -p mastra/agents
touch mastra/index.ts mastra/agents/chefAgent.ts
```

These files will be used in the next steps to define your Mastra agent and configuration.

</Step>
<Step>

### Define the Agent

Now, let's define the behavior of our AI agent. Open the `mastra/agents/chefAgent.ts` file and add the following code:

```typescript title="mastra/agents/chefAgent.ts"
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

export const chefAgent = new Agent({
  name: "chef-agent",
  instructions:
    "You are Michel, a practical and experienced home chef. " +
    "You help people cook with whatever ingredients they have available.",
  model: openai("gpt-4o-mini"),
});
```

This code creates a new Mastra `Agent` named `chef-agent`.

- `instructions`: Defines the agent's persona and primary goal.
- `model`: Specifies the language model the agent will use (in this case, OpenAI's GPT-4o Mini via the AI SDK).

Make sure you have set up your OpenAI API key as described in the [Getting Started guide](/docs/getting-started).
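
If you haven't added one yet, the key typically lives in an environment file at the project root. A minimal sketch (the filename and placeholder value are illustrative; Next.js loads `.env.local` automatically):

```bash title=".env.local"
# Read by the @ai-sdk/openai provider on the server
OPENAI_API_KEY=sk-your-key-here
```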

</Step>
<Step>

### Register the Agent

Next, register the agent with your Mastra instance. Open the `mastra/index.ts` file and add the following code:

```typescript title="mastra/index.ts"
import { Mastra } from "@mastra/core";

import { chefAgent } from "./agents/chefAgent";

export const mastra = new Mastra({
  agents: { chefAgent },
});
```

This code initializes Mastra and makes the `chefAgent` available for use in your application's API routes.

</Step>
<Step>

### Modify the API Route

Now, update your API route (`app/api/chat/route.ts`) to use the Mastra agent you just configured. Replace the existing content with the following:

```typescript title="app/api/chat/route.ts"
import { mastra } from "@/mastra"; // Adjust the import path if necessary

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  // Extract the messages from the request body
  const { messages } = await req.json();

  // Get the chefAgent instance from Mastra
  const agent = mastra.getAgent("chefAgent");

  // Stream the response using the agent
  const result = await agent.stream(messages);

  // Return the result as a data stream response
  return result.toDataStreamResponse();
}
```

Key changes:
- We import the `mastra` instance created in `mastra/index.ts`. Make sure the import path (`@/mastra`) is correct for your project setup (you might need `~/mastra`, `../../../mastra`, etc., depending on your path aliases and project structure).
- We retrieve the `chefAgent` using `mastra.getAgent("chefAgent")`.
- Instead of calling the AI SDK's `streamText` directly, we call `agent.stream(messages)` to process the chat messages using the agent's configuration and model.
- The result is still returned in a format compatible with Assistant UI using `toDataStreamResponse()`.

Your API route is now powered by Mastra!

</Step>
<Step>

### Run the Application

You're all set! Start your Next.js development server:

```bash npm2yarn
npm run dev
```

Open your browser to `http://localhost:3000` (or the port specified in your terminal). You should now be able to interact with your `chefAgent` through the Assistant UI chat interface. Ask it for cooking advice based on ingredients you have!

</Step>
</Steps>

Congratulations! You have successfully integrated Mastra into your Next.js application using the full-stack approach. Your Assistant UI frontend now communicates with a Mastra agent running in your Next.js backend API route.

To explore more advanced Mastra features like memory, tools, workflows, and more, please refer to the [official Mastra documentation](https://mastra.ai/docs).
@@ -0,0 +1,17 @@ package/.docs/raw/docs/runtimes/mastra/overview.mdx
---
title: Overview
---

Mastra is an open-source TypeScript agent framework designed to provide the essential primitives for building AI applications. It enables developers to create AI agents with memory and tool-calling capabilities, implement deterministic LLM workflows, and leverage RAG for knowledge integration. With features like model routing, workflow graphs, and automated evals, Mastra provides a complete toolkit for developing, testing, and deploying AI applications.

## Integrating with Next.js and Assistant UI

There are two primary ways to integrate Mastra into your Next.js project when using Assistant UI:

1. **Full-Stack Integration**: Integrate Mastra directly into your Next.js application's API routes. This approach keeps your backend and frontend code within the same project.
   [Learn how to set up Full-Stack Integration](./full-stack-integration)

2. **Separate Server Integration**: Run Mastra as a standalone server and connect your Next.js frontend to its API endpoints. This approach separates concerns and allows for independent scaling.
   [Learn how to set up Separate Server Integration](./separate-server-integration)

Choose the guide that best fits your project architecture. Both methods allow seamless integration with the Assistant UI components.
@@ -0,0 +1,196 @@ package/.docs/raw/docs/runtimes/mastra/separate-server-integration.mdx
---
title: Separate Server Integration
---

import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";

Run Mastra as a standalone server and connect your Next.js frontend (using Assistant UI) to its API endpoints. This approach separates your AI backend from your frontend application, allowing for independent development and scaling.

<Steps>

<Step>

### Create Mastra Server Project

First, create a dedicated project for your Mastra server. Choose a directory separate from your Next.js/Assistant UI frontend project.

Navigate to your chosen parent directory in the terminal and run the Mastra create command:

```bash
npx create-mastra@latest
```

This command will launch an interactive wizard to help you scaffold a new Mastra project, including prompting you for a project name and setting up basic configurations. Follow the prompts to create your server project. For more detailed setup instructions, refer to the [official Mastra installation guide](https://mastra.ai/docs/getting-started/installation).

Once the setup is complete, navigate into your new Mastra project directory (the name you provided during the setup):

```bash
cd your-mastra-server-directory # Replace with the actual directory name
```

You now have a basic Mastra server project ready.

<Callout title="API Keys">
  Ensure you have configured your environment variables (e.g., `OPENAI_API_KEY`)
  within this Mastra server project, typically in a `.env.development` file, as
  required by the models you use. The `create-mastra` wizard might prompt you
  for some keys, but ensure all necessary keys for your chosen models are
  present.
</Callout>

</Step>

<Step>

### Define the Agent

Next, let's define an agent within your Mastra server project. We'll create a `chefAgent` similar to the one used in the full-stack guide.

Open or create the agent file (e.g., `src/agents/chefAgent.ts` within your Mastra project) and add the following code:

```typescript title="src/agents/chefAgent.ts"
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

export const chefAgent = new Agent({
  name: "chef-agent",
  instructions:
    "You are Michel, a practical and experienced home chef. " +
    "You help people cook with whatever ingredients they have available.",
  model: openai("gpt-4o-mini"),
});
```

This defines the agent's behavior, but it's not yet active in the Mastra server.

</Step>

<Step>

### Register the Agent

Now, you need to register the `chefAgent` with your Mastra instance so the server knows about it. Open your main Mastra configuration file (this is often `src/index.ts` in projects created with `create-mastra`).

Import the `chefAgent` and add it to the `agents` object when initializing Mastra:

```typescript title="src/index.ts"
import { Mastra } from "@mastra/core";
import { chefAgent } from "./agents/chefAgent"; // Adjust path if necessary

export const mastra = new Mastra({
  agents: { chefAgent },
});
```

Make sure you adapt this code to fit the existing structure of your `src/index.ts` file generated by `create-mastra`. The key is to import your agent and include it in the `agents` configuration object.

</Step>

<Step>

### Run the Mastra Server

With the agent defined and registered, start the Mastra development server:

```bash npm2yarn
npm run dev
```

By default, the Mastra server will run on `http://localhost:4111`. Your `chefAgent` should now be accessible via a POST request endpoint, typically `http://localhost:4111/api/agents/chefAgent/stream`. Keep this server running for the next steps where we'll set up the Assistant UI frontend to connect to it.

</Step>

<Step>

### Initialize Assistant UI Frontend

Now, set up your frontend application using Assistant UI. Navigate to a **different directory** from your Mastra server project. You can either create a new Next.js project or use an existing one.

Inside your frontend project directory, run one of the following commands:

```sh title="New Project"
npx assistant-ui@latest create
```

```sh title="Existing Project"
npx assistant-ui@latest init
```

This command installs the necessary Assistant UI dependencies and sets up basic configuration files, including a default chat page and an API route (`app/api/chat/route.ts`).

<Callout title="Need Help?">
  For detailed setup instructions for Assistant UI, including manual setup
  steps, please refer to the main [Getting Started
  guide](/docs/getting-started).
</Callout>

In the next step, we will configure this frontend to communicate with the separate Mastra server instead of using the default API route.

</Step>

<Step>

### Configure Frontend API Endpoint

The default Assistant UI setup configures the chat runtime to use a local API route (`/api/chat`) within the Next.js project. Since our Mastra agent is running on a separate server, we need to update the frontend to point to that server's endpoint.

Open the main page file in your Assistant UI frontend project (usually `app/page.tsx` or `src/app/page.tsx`). Find the `useChatRuntime` hook and change the `api` property to the full URL of your Mastra agent's stream endpoint:

```tsx {10} title="app/page.tsx"
"use client";
import { Thread } from "@/components/assistant-ui/thread";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { ThreadList } from "@/components/assistant-ui/thread-list";

export default function Home() {
  // Point the runtime to the Mastra server endpoint
  const runtime = useChatRuntime({
    api: "http://localhost:4111/api/agents/chefAgent/stream",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <main className="grid h-dvh grid-cols-[200px_1fr] gap-x-2 px-4 py-4">
        <ThreadList />
        <Thread />
      </main>
    </AssistantRuntimeProvider>
  );
}
```

Replace `"http://localhost:4111/api/agents/chefAgent/stream"` with the actual URL if your Mastra server runs on a different port or host, or if your agent has a different name.

Now, the Assistant UI frontend will send chat requests directly to your running Mastra server.

<Callout title="Delete Default API Route">
  Since the frontend no longer uses the local `/api/chat` route created by the
  `init` command, you can safely delete the `app/api/chat/route.ts` (or
  `src/app/api/chat/route.ts`) file from your frontend project.
</Callout>

</Step>

<Step>

### Run the Frontend Application

You're ready to connect the pieces! Make sure your separate Mastra server is still running (from Step 4).

In your Assistant UI frontend project directory, start the Next.js development server:

```bash npm2yarn
npm run dev
```

Open your browser to `http://localhost:3000` (or the port specified in your terminal for the frontend app). You should now be able to interact with your `chefAgent` through the Assistant UI chat interface. The frontend will make requests to your Mastra server running on `http://localhost:4111`.

</Step>

</Steps>

Congratulations! You have successfully integrated Mastra with Assistant UI using a separate server approach. Your Assistant UI frontend now communicates with a standalone Mastra agent server.

This setup provides a clear separation between your frontend and AI backend. To explore more advanced Mastra features like memory, tools, workflows, and deployment options, please refer to the [official Mastra documentation](https://mastra.ai/docs).
@@ -0,0 +1,222 @@ package/.docs/raw/docs/runtimes/pick-a-runtime.mdx
---
title: Picking a Runtime
---

import { Card, Cards } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";

Choosing the right runtime is crucial for your assistant-ui implementation. This guide helps you navigate the options based on your specific needs.

## Quick Decision Tree

```mermaid
graph TD
    A[What's your starting point?] --> B{Existing Framework?}
    B -->|Vercel AI SDK| C[Use AI SDK Integration]
    B -->|LangGraph| D[Use LangGraph Runtime]
    B -->|LangServe| E[Use LangServe Runtime]
    B -->|Mastra| F[Use Mastra Runtime]
    B -->|Custom Backend| G{State Management?}
    G -->|Let assistant-ui handle it| H[Use LocalRuntime]
    G -->|I'll manage it myself| I[Use ExternalStoreRuntime]
```

## Core Runtimes

These are the foundational runtimes that power assistant-ui:

<Cards>
  <Card
    title="`LocalRuntime`"
    description="assistant-ui manages chat state internally. Simple adapter pattern for any backend."
    href="/docs/runtimes/custom/local"
  />
  <Card
    title="`ExternalStoreRuntime`"
    description="You control the state. Perfect for Redux, Zustand, or existing state management."
    href="/docs/runtimes/custom/external-store"
  />
</Cards>

## Pre-Built Integrations

For popular frameworks, we provide ready-to-use integrations built on top of our core runtimes:

<Cards>
  <Card
    title="Vercel AI SDK"
    description="For useChat and useAssistant hooks - streaming with all major providers"
    href="/docs/runtimes/ai-sdk/use-chat"
  />
  <Card
    title="LangGraph"
    description="For complex agent workflows with LangChain's graph framework"
    href="/docs/runtimes/langgraph"
  />
  <Card
    title="LangServe"
    description="For LangChain applications deployed with LangServe"
    href="/docs/runtimes/langserve"
  />
  <Card
    title="Mastra"
    description="For workflow orchestration with Mastra's ecosystem"
    href="/docs/runtimes/mastra/overview"
  />
</Cards>

## Understanding Runtime Architecture

### How Pre-Built Integrations Work

The pre-built integrations (AI SDK, LangGraph, etc.) are **not separate runtime types**. They're convenient wrappers built on top of our core runtimes:

- **AI SDK Integration** → Built on `LocalRuntime` with streaming adapter
- **LangGraph Runtime** → Built on `LocalRuntime` with graph execution adapter
- **LangServe Runtime** → Built on `LocalRuntime` with LangServe client adapter
- **Mastra Runtime** → Built on `LocalRuntime` with workflow adapter

This means you get all the benefits of `LocalRuntime` (automatic state management, built-in features) with zero configuration for your specific framework.

### When to Use Pre-Built vs Core Runtimes

**Use a pre-built integration when:**
- You're already using that framework
- You want the fastest possible setup
- The integration covers your needs

**Use a core runtime when:**
- You have a custom backend
- You need features not exposed by the integration
- You want full control over the implementation

<Callout>
  Pre-built integrations can always be replaced with a custom `LocalRuntime` or `ExternalStoreRuntime` implementation if you need more control later.
</Callout>

## Feature Comparison

### Core Runtime Capabilities

| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
| ------- | -------------- | ---------------------- |
| **State Management** | Automatic | You control |
| **Setup Complexity** | Simple | Moderate |
| **Message Editing** | Built-in | Implement `onEdit` |
| **Branch Switching** | Built-in | Implement `setMessages` |
| **Regeneration** | Built-in | Implement `onReload` |
| **Cancellation** | Built-in | Implement `onCancel` |
| **Multi-thread** | Via adapters | Via adapters |

### Available Adapters

| Adapter | `LocalRuntime` | `ExternalStoreRuntime` |
| ------- | -------------- | ---------------------- |
| ChatModel | ✅ Required | ❌ N/A |
| Attachments | ✅ | ✅ |
| Speech | ✅ | ✅ |
| Feedback | ✅ | ✅ |
| History | ✅ | ❌ Use your state |
| Suggestions | ✅ | ❌ Use your state |

## Common Implementation Patterns

### Vercel AI SDK with Streaming

```tsx
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { Thread } from "@/components/assistant-ui/thread";

export function MyAssistant() {
  const runtime = useChatRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```

### Custom Backend with `LocalRuntime`

```tsx
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal,
    });
    // The endpoint is expected to return the adapter result
    // (e.g. an object with a `content` array)
    return response.json();
  },
});
```

### Redux Integration with `ExternalStoreRuntime`

```tsx
import { useExternalStoreRuntime } from "@assistant-ui/react";
import { useDispatch, useSelector } from "react-redux";

// selectMessages and the dispatched action creators come from your own store
const messages = useSelector(selectMessages);
const dispatch = useDispatch();

const runtime = useExternalStoreRuntime({
  messages,
  onNew: async (message) => {
    dispatch(addUserMessage(message));
    const response = await api.chat(message);
    dispatch(addAssistantMessage(response));
  },
  setMessages: (messages) => dispatch(setMessages(messages)),
  onEdit: async (message) => dispatch(editMessage(message)),
  onReload: async (parentId) => dispatch(reloadMessage(parentId)),
});
```

## Examples

Explore our implementation examples:

- **[AI SDK Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-ai-sdk)** - Vercel AI SDK with `useChatRuntime`
- **[External Store Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-external-store)** - `ExternalStoreRuntime` with custom state
- **[Assistant Cloud Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud)** - Multi-thread with cloud persistence
- **[LangGraph Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph)** - Agent workflows
- **[OpenAI Assistants Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-openai-assistants)** - OpenAI Assistants API

## Common Pitfalls to Avoid

### LocalRuntime Pitfalls
- **Forgetting the adapter**: `LocalRuntime` requires a `ChatModelAdapter` - it won't work without one
- **Not handling errors**: Always handle API errors in your adapter's `run` function
- **Missing abort signal**: Pass `abortSignal` to your fetch calls for proper cancellation
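
The last two pitfalls can be folded into one defensive `run` sketch. The endpoint URL, error message, and simplified types here are illustrative assumptions, not the library's required shape:

```typescript
// Sketch of a ChatModelAdapter-style run function: it forwards the abort
// signal to fetch and turns HTTP failures into thrown errors the UI can show.
type RunArgs = { messages: unknown[]; abortSignal: AbortSignal };

const run = async ({ messages, abortSignal }: RunArgs) => {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
    signal: abortSignal, // cancelling the run aborts the in-flight request
  });
  if (!response.ok) {
    // Surface failures instead of treating an error page as a "result"
    throw new Error(`Chat request failed with status ${response.status}`);
  }
  return response.json();
};
```

A function like this would be passed as the `run` of the adapter given to `useLocalRuntime`.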

### ExternalStoreRuntime Pitfalls
- **Mutating state**: Always create new arrays/objects when updating messages
- **Missing handlers**: Each UI feature requires its corresponding handler (e.g., no edit button without `onEdit`)
- **Forgetting optimistic updates**: Set `isRunning` to `true` for loading states
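
The mutation pitfall is the easiest to trip over. A minimal sketch of non-mutating updates, with a simplified `Message` shape assumed for illustration:

```typescript
type Message = { id: string; role: "user" | "assistant"; content: string };

// Append by building a new array; the original stays untouched
const appendMessage = (messages: Message[], message: Message): Message[] => [
  ...messages,
  message,
];

// Update by id, creating a new object for the changed entry
const updateMessage = (messages: Message[], updated: Message): Message[] =>
  messages.map((m) => (m.id === updated.id ? { ...m, ...updated } : m));

const before: Message[] = [{ id: "1", role: "user", content: "Hi" }];
const after = appendMessage(before, { id: "2", role: "assistant", content: "Hello!" });
// before still has one message; after has two
```

Because each update produces new references, state libraries that compare by reference detect the change reliably.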

### General Pitfalls
- **Wrong integration level**: Don't use `LocalRuntime` if you already have the Vercel AI SDK - use the AI SDK integration instead
- **Over-engineering**: Start with pre-built integrations before building custom solutions
- **Ignoring TypeScript**: The types will guide you to the correct implementation

## Next Steps

1. **Choose your runtime** based on the decision tree above
2. **Follow the specific guide**:
   - [AI SDK Integration](/docs/runtimes/ai-sdk/use-chat)
   - [`LocalRuntime` Guide](/docs/runtimes/custom/local)
   - [`ExternalStoreRuntime` Guide](/docs/runtimes/custom/external-store)
   - [LangGraph Integration](/docs/runtimes/langgraph)
3. **Start with an example** from our [examples repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples)
4. **Add features progressively** using adapters
5. **Consider Assistant Cloud** for production persistence

<Callout type="info">
  Need help? Join our [Discord community](https://discord.gg/assistant-ui) or check the [GitHub repository](https://github.com/assistant-ui/assistant-ui).
</Callout>