@assistant-ui/mcp-docs-server 0.1.7 → 0.1.9
This diff compares the contents of publicly released package versions as they appear in their respective public registries. It is provided for informational purposes only.
- package/.docs/organized/code-examples/with-ai-sdk-v5.md +24 -15
- package/.docs/organized/code-examples/with-assistant-transport.md +1599 -0
- package/.docs/organized/code-examples/with-cloud.md +12 -10
- package/.docs/organized/code-examples/with-external-store.md +10 -8
- package/.docs/organized/code-examples/with-ffmpeg.md +17 -14
- package/.docs/organized/code-examples/with-langgraph.md +83 -47
- package/.docs/organized/code-examples/with-parent-id-grouping.md +10 -8
- package/.docs/organized/code-examples/with-react-hook-form.md +17 -14
- package/.docs/raw/docs/api-reference/integrations/react-data-stream.mdx +194 -0
- package/.docs/raw/docs/api-reference/overview.mdx +6 -0
- package/.docs/raw/docs/api-reference/primitives/Composer.mdx +31 -0
- package/.docs/raw/docs/api-reference/primitives/Message.mdx +108 -3
- package/.docs/raw/docs/api-reference/primitives/Thread.mdx +59 -0
- package/.docs/raw/docs/api-reference/primitives/ThreadList.mdx +128 -0
- package/.docs/raw/docs/api-reference/primitives/ThreadListItem.mdx +160 -0
- package/.docs/raw/docs/api-reference/runtimes/AssistantRuntime.mdx +0 -11
- package/.docs/raw/docs/api-reference/runtimes/ComposerRuntime.mdx +3 -3
- package/.docs/raw/docs/copilots/assistant-frame.mdx +399 -0
- package/.docs/raw/docs/devtools.mdx +51 -0
- package/.docs/raw/docs/getting-started.mdx +20 -19
- package/.docs/raw/docs/guides/Attachments.mdx +6 -13
- package/.docs/raw/docs/guides/Tools.mdx +56 -13
- package/.docs/raw/docs/guides/context-api.mdx +574 -0
- package/.docs/raw/docs/migrations/v0-12.mdx +125 -0
- package/.docs/raw/docs/runtimes/ai-sdk/use-chat.mdx +2 -2
- package/.docs/raw/docs/runtimes/custom/local.mdx +17 -4
- package/.docs/raw/docs/runtimes/data-stream.mdx +287 -0
- package/.docs/raw/docs/runtimes/mastra/full-stack-integration.mdx +6 -5
- package/.docs/raw/docs/runtimes/mastra/overview.mdx +3 -3
- package/.docs/raw/docs/runtimes/mastra/separate-server-integration.mdx +13 -13
- package/.docs/raw/docs/runtimes/pick-a-runtime.mdx +5 -0
- package/.docs/raw/docs/ui/ThreadList.mdx +54 -16
- package/dist/{chunk-L4K23SWI.js → chunk-NVNFQ5ZO.js} +4 -1
- package/dist/index.js +1 -1
- package/dist/prepare-docs/prepare.js +1 -1
- package/dist/stdio.js +1 -1
- package/package.json +7 -7
- package/.docs/raw/docs/concepts/architecture.mdx +0 -19
- package/.docs/raw/docs/concepts/runtime-layer.mdx +0 -163
- package/.docs/raw/docs/concepts/why.mdx +0 -9
@@ -0,0 +1,125 @@
+---
+title: Migration to v0.12
+---
+
+## Major Architecture Change: Unified State API
+
+Version 0.12 introduces a complete rewrite of the state management system with a more consistent API.
+
+## Breaking Changes
+
+### 1. Context Hooks Replaced with Unified State API
+
+All individual context hooks have been replaced with a single `useAssistantState` hook and `useAssistantApi` for actions.
+
+#### What changed
+
+The following hooks have been removed:
+
+**Removed Hooks:**
+
+- `useMessageUtils` → Use `useAssistantState(({ message }) => message.isHovering)` / `useAssistantState(({ message }) => message.isCopied)`
+- `useMessageUtilsStore` → Use `useAssistantApi()` with `api.message().setIsHovering()` / `api.message().setIsCopied()`
+- `useToolUIs` → Use `useAssistantState(({ toolUIs }) => toolUIs)` and `useAssistantApi()` with `api.toolUIs()`
+- `useToolUIsStore` → Use `useAssistantApi()` with `api.toolUIs()`
+
+**Deprecated Hooks:**
+
+- `useAssistantRuntime` → Use `useAssistantApi()`
+- `useThread` → Use `useAssistantState(({ thread }) => thread)`
+- `useThreadRuntime` → Use `useAssistantApi()` with `api.thread()`
+- `useMessage` → Use `useAssistantState(({ message }) => message)`
+- `useMessageRuntime` → Use `useAssistantApi()` with `api.message()`
+- `useComposer` → Use `useAssistantState(({ composer }) => composer)`
+- `useComposerRuntime` → Use `useAssistantApi()` with `api.composer()`
+- `useEditComposer` → Use `useAssistantState(({ message }) => message.composer)`
+- `useThreadListItem` → Use `useAssistantState(({ threadListItem }) => threadListItem)`
+- `useThreadListItemRuntime` → Use `useAssistantApi()` with `api.threadListItem()`
+- `useMessagePart` → Use `useAssistantState(({ part }) => part)`
+- `useMessagePartRuntime` → Use `useAssistantApi()` with `api.part()`
+- `useAttachment` → Use `useAssistantState(({ attachment }) => attachment)`
+- `useAttachmentRuntime` → Use `useAssistantApi()` with `api.attachment()`
+- `useThreadModelContext` / `useThreadModelConfig` → Use `useAssistantState(({ thread }) => thread.modelContext)`
+- `useThreadComposer` → Use `useAssistantState(({ thread }) => thread.composer)`
+- `useThreadList` → Use `useAssistantState(({ threads }) => threads)`
+
+#### Migration Examples
+
+**Before:**
+
+```tsx
+import {
+  useThread,
+  useThreadRuntime,
+  useComposer,
+  useComposerRuntime,
+  useMessage,
+  useMessageRuntime,
+} from "@assistant-ui/react";
+
+function MyComponent() {
+  // Reading state
+  const messages = useThread((t) => t.messages);
+  const isRunning = useThread((t) => t.isRunning);
+  const composerText = useComposer((c) => c.text);
+  const messageRole = useMessage((m) => m.role);
+
+  // Using runtime for actions
+  const threadRuntime = useThreadRuntime();
+  const composerRuntime = useComposerRuntime();
+  const messageRuntime = useMessageRuntime();
+
+  const handleSend = () => {
+    composerRuntime.send();
+  };
+
+  const handleReload = () => {
+    messageRuntime.reload();
+  };
+
+  const handleCancel = () => {
+    threadRuntime.cancelRun();
+  };
+
+  return null;
+}
+```
+
+**After:**
+
+```tsx
+import { useAssistantState, useAssistantApi } from "@assistant-ui/react";
+
+function MyComponent() {
+  // Reading state - all through single hook
+  const messages = useAssistantState(({ thread }) => thread.messages);
+  const isRunning = useAssistantState(({ thread }) => thread.isRunning);
+  const composerText = useAssistantState(({ composer }) => composer.text);
+  const messageRole = useAssistantState(({ message }) => message.role);
+
+  // Using API for actions
+  const api = useAssistantApi();
+
+  const handleSend = () => {
+    api.composer().send();
+  };
+
+  const handleReload = () => {
+    api.message().reload();
+  };
+
+  const handleCancel = () => {
+    api.thread().cancelRun();
+  };
+
+  return null;
+}
+```
+
+## Getting Help
+
+If you encounter issues during migration:
+
+1. Check the updated API documentation for detailed examples
+2. Review the example applications in the repository
+3. Report issues at https://github.com/assistant-ui/assistant-ui/issues
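The deprecated-hook list above is mechanical for the simple one-argument selector cases, so it can be scripted. Below is a hypothetical find-and-replace helper, not an official codemod: the regex only handles the exact `hook((x) => x…)` shape, and the name mapping is distilled from the list above.

```typescript
// Hypothetical mapping from deprecated assistant-ui hooks to the state
// slice they select in v0.12 (derived from the migration table above).
const HOOK_TO_SELECTOR: Record<string, string> = {
  useThreadListItem: "threadListItem",
  useMessagePart: "part",
  useThread: "thread",
  useMessage: "message",
  useComposer: "composer",
  useAttachment: "attachment",
};

// Rewrites e.g. `useThread((t) => t.messages)` into
// `useAssistantState(({ thread }) => thread.messages)`.
// Only the simple `(x) => x.path` selector shape is recognized.
function migrateHookCall(source: string): string {
  return source.replace(
    /\b(useThreadListItem|useMessagePart|useThread|useMessage|useComposer|useAttachment)\(\((\w+)\) => \2([^)]*)\)/g,
    (_match, hook: string, _param: string, rest: string) => {
      const selector = HOOK_TO_SELECTOR[hook];
      return `useAssistantState(({ ${selector} }) => ${selector}${rest})`;
    },
  );
}
```

For anything beyond these simple selectors (store hooks, runtime actions), migrate by hand using the table above.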
@@ -130,7 +130,7 @@ const runtime = useChatRuntime({
 <Callout type="info">
 By default, `useChatRuntime` uses `AssistantChatTransport` which automatically
 forwards system messages and frontend tools to your backend API. This enables
-your backend to receive the full context from the
+your backend to receive the full context from the assistant-ui.
 </Callout>

 ### Custom Transport Configuration
@@ -165,7 +165,7 @@ const runtime = useChatRuntime({

 #### Transport Options

-- **`AssistantChatTransport`** (default): Automatically forwards system messages and frontend tools from the
+- **`AssistantChatTransport`** (default): Automatically forwards system messages and frontend tools from the assistant-ui context to your backend
 - **`DefaultChatTransport`**: Standard AI SDK transport without automatic forwarding

 ### Using Frontend Tools with `frontendTools`
@@ -414,6 +414,8 @@ import {
   type RemoteThreadListAdapter,
   type ThreadHistoryAdapter,
 } from "@assistant-ui/react";
+import { createAssistantStream } from "assistant-stream";
+import { useMemo } from "react";

 // Implement your custom adapter with proper message persistence
 const myDatabaseAdapter: RemoteThreadListAdapter = {
@@ -453,9 +455,16 @@ const myDatabaseAdapter: RemoteThreadListAdapter = {

   async generateTitle(remoteId, messages) {
     // Generate title from messages using your AI
-    const
-
-
+    const newTitle = await generateTitle(messages);
+
+    // Persist the title in your DB
+    await db.threads.update(remoteId, { title: newTitle });
+
+    // IMPORTANT: Return an AssistantStream so the UI updates
+    return createAssistantStream((controller) => {
+      controller.appendText(newTitle);
+      controller.close();
+    });
   },
 };

@@ -528,6 +537,10 @@ export function MyRuntimeProvider({ children }) {
 }
 ```

+<Callout type="info" title="Returning a title from generateTitle">
+  The `generateTitle` method must return an <code>AssistantStream</code> containing the title text. The easiest, type-safe way is to use <code>createAssistantStream</code> and call <code>controller.appendText(newTitle)</code> followed by <code>controller.close()</code>. Returning a raw <code>ReadableStream</code> won't update the thread list UI.
+</Callout>
+
 #### Understanding the Architecture

 <Callout type="info">
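The append-then-close contract that the callout above describes can be illustrated with a toy stand-in in plain TypeScript. This is not the real `assistant-stream` API — just the shape of the pattern: append chunks, then close to signal completion.

```typescript
// Toy stand-in for createAssistantStream: collects appended text chunks
// and resolves with the full text once the controller is closed.
type ToyController = {
  appendText(text: string): void;
  close(): void;
};

function createToyStream(
  run: (controller: ToyController) => void,
): Promise<string> {
  return new Promise((resolve) => {
    const chunks: string[] = [];
    run({
      appendText: (text) => {
        chunks.push(text);
      },
      close: () => {
        resolve(chunks.join(""));
      },
    });
  });
}

// Same shape as the generateTitle example above: append, then close.
const title = createToyStream((controller) => {
  controller.appendText("Dinner ideas");
  controller.close();
});
```

The real `createAssistantStream` produces an `AssistantStream` the UI can consume incrementally; the toy version only shows why `close()` must follow the final `appendText()`.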
@@ -719,7 +732,7 @@ Provide follow-up suggestions:

 ```tsx
 const suggestionAdapter: SuggestionAdapter = {
-  async *
+  async *generate({ messages }) {
     // Analyze conversation context
     const lastMessage = messages[messages.length - 1];

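The corrected `async *generate` signature makes the adapter an async generator that yields suggestions one at a time. A self-contained sketch of that shape, with toy data (the `ToyMessage` type and the yielded objects are illustrative, not the actual `SuggestionAdapter` types):

```typescript
// Toy message type standing in for the adapter's real message type.
type ToyMessage = { role: "user" | "assistant"; content: string };

// Async generator with the same shape as the adapter's generate method:
// inspect the last message, then yield follow-up suggestions lazily.
async function* generateSuggestions(messages: ToyMessage[]) {
  const lastMessage = messages[messages.length - 1];
  // Only suggest follow-ups after an assistant reply
  if (lastMessage?.role === "assistant") {
    yield { prompt: "Tell me more" };
    yield { prompt: "Give an example" };
  }
}
```

Because it is a generator, the UI can render each suggestion as soon as it is yielded instead of waiting for the whole list.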
@@ -0,0 +1,287 @@
+---
+title: Data Stream Protocol
+---
+
+import { Step, Steps } from "fumadocs-ui/components/steps";
+import { Callout } from "fumadocs-ui/components/callout";
+
+The `@assistant-ui/react-data-stream` package provides integration with data stream protocol endpoints, enabling streaming AI responses with tool support and state management.
+
+## Overview
+
+The data stream protocol is a standardized format for streaming AI responses that supports:
+
+- **Streaming text responses** with real-time updates
+- **Tool calling** with structured parameters and results
+- **State management** for conversation context
+- **Error handling** and cancellation support
+- **Attachment support** for multimodal interactions
+
+## Installation
+
+```bash npm2yarn
+npm install @assistant-ui/react-data-stream
+```
+
+## Basic Usage
+
+<Steps>
+
+<Step>
+
+### Set up the Runtime
+
+Use `useDataStreamRuntime` to connect to your data stream endpoint:
+
+```tsx title="app/page.tsx"
+"use client";
+import { useDataStreamRuntime } from "@assistant-ui/react-data-stream";
+import { AssistantRuntimeProvider } from "@assistant-ui/react";
+import { Thread } from "@/components/assistant-ui/thread";
+
+export default function ChatPage() {
+  const runtime = useDataStreamRuntime({
+    api: "/api/chat",
+  });
+
+  return (
+    <AssistantRuntimeProvider runtime={runtime}>
+      <Thread />
+    </AssistantRuntimeProvider>
+  );
+}
+```
+
+</Step>
+
+<Step>
+
+### Create Backend Endpoint
+
+Your backend endpoint should accept POST requests and return data stream responses:
+
+```typescript title="app/api/chat/route.ts"
+import { DataStreamResponse } from "assistant-stream";
+
+export async function POST(request: Request) {
+  const { messages, tools, system } = await request.json();
+
+  // Process the request with your AI provider
+  const stream = await processWithAI({
+    messages,
+    tools,
+    system,
+  });
+
+  return new DataStreamResponse(stream);
+}
+```
+
+</Step>
+
+</Steps>
+
+## Advanced Configuration
+
+### Custom Headers and Authentication
+
+```tsx
+const runtime = useDataStreamRuntime({
+  api: "/api/chat",
+  headers: {
+    "Authorization": "Bearer " + token,
+    "X-Custom-Header": "value",
+  },
+  credentials: "include",
+});
+```
+
+### Dynamic Headers
+
+```tsx
+const runtime = useDataStreamRuntime({
+  api: "/api/chat",
+  headers: async () => {
+    const token = await getAuthToken();
+    return {
+      "Authorization": "Bearer " + token,
+    };
+  },
+});
+```
+
+### Event Callbacks
+
+```tsx
+const runtime = useDataStreamRuntime({
+  api: "/api/chat",
+  onResponse: (response) => {
+    console.log("Response received:", response.status);
+  },
+  onFinish: (message) => {
+    console.log("Message completed:", message);
+  },
+  onError: (error) => {
+    console.error("Error occurred:", error);
+  },
+  onCancel: () => {
+    console.log("Request cancelled");
+  },
+});
+```
+
+## Tool Integration
+
+### Frontend Tools
+
+Use the `frontendTools` helper to serialize client-side tools:
+
+```tsx
+import { frontendTools } from "@assistant-ui/react-data-stream";
+import { makeAssistantTool } from "@assistant-ui/react";
+
+const weatherTool = makeAssistantTool({
+  toolName: "get_weather",
+  description: "Get current weather",
+  parameters: z.object({
+    location: z.string(),
+  }),
+  execute: async ({ location }) => {
+    const weather = await fetchWeather(location);
+    return `Weather in ${location}: ${weather}`;
+  },
+});
+
+const runtime = useDataStreamRuntime({
+  api: "/api/chat",
+  body: {
+    tools: frontendTools({
+      get_weather: weatherTool,
+    }),
+  },
+});
+```
+
+### Backend Tool Processing
+
+Your backend should handle tool calls and return results:
+
+```typescript title="Backend tool handling"
+// Tools are automatically forwarded to your endpoint
+const { tools } = await request.json();
+
+// Process tools with your AI provider
+const response = await ai.generateText({
+  messages,
+  tools,
+  // Tool results are streamed back automatically
+});
+```
+
+## Assistant Cloud Integration
+
+For Assistant Cloud deployments, use `useCloudRuntime`:
+
+```tsx
+import { useCloudRuntime } from "@assistant-ui/react-data-stream";
+
+const runtime = useCloudRuntime({
+  cloud: assistantCloud,
+  assistantId: "my-assistant-id",
+});
+```
+
+<Callout type="info">
+  The `useCloudRuntime` hook is currently under active development and not yet ready for production use.
+</Callout>
+
+## Message Conversion
+
+The package includes utilities for converting between message formats:
+
+```tsx
+import { toLanguageModelMessages } from "@assistant-ui/react-data-stream";
+
+// Convert assistant-ui messages to language model format
+const languageModelMessages = toLanguageModelMessages(messages, {
+  unstable_includeId: true, // Include message IDs
+});
+```
+
+## Error Handling
+
+The runtime automatically handles common error scenarios:
+
+- **Network errors**: Automatically retried with exponential backoff
+- **Stream interruptions**: Gracefully handled with partial content preservation
+- **Tool execution errors**: Displayed in the UI with error states
+- **Cancellation**: Clean abort signal handling
+
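The retry-with-exponential-backoff behavior listed above can be sketched generically. The retry count and base delay below are assumed illustrative defaults, not the runtime's actual values:

```typescript
// Generic exponential-backoff retry: the delay doubles after each
// failed attempt (500ms, 1000ms, 2000ms, ...), then the last error
// is rethrown once the retry budget is exhausted.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === maxRetries) break;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Doubling the delay keeps transient network blips cheap to recover from while avoiding a retry storm against a struggling backend.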
+## Best Practices
+
+### Performance Optimization
+
+```tsx
+// Use React.memo for expensive components
+const OptimizedThread = React.memo(Thread);
+
+// Memoize runtime configuration
+const runtimeConfig = useMemo(() => ({
+  api: "/api/chat",
+  headers: { "Authorization": `Bearer ${token}` },
+}), [token]);
+
+const runtime = useDataStreamRuntime(runtimeConfig);
+```
+
+### Error Boundaries
+
+```tsx
+import { ErrorBoundary } from "react-error-boundary";
+
+function ChatErrorFallback({ error, resetErrorBoundary }) {
+  return (
+    <div role="alert">
+      <h2>Something went wrong:</h2>
+      <pre>{error.message}</pre>
+      <button onClick={resetErrorBoundary}>Try again</button>
+    </div>
+  );
+}
+
+export default function App() {
+  return (
+    <ErrorBoundary FallbackComponent={ChatErrorFallback}>
+      <AssistantRuntimeProvider runtime={runtime}>
+        <Thread />
+      </AssistantRuntimeProvider>
+    </ErrorBoundary>
+  );
+}
+```
+
+### State Persistence
+
+```tsx
+const runtime = useDataStreamRuntime({
+  api: "/api/chat",
+  body: {
+    // Include conversation state
+    state: conversationState,
+  },
+  onFinish: (message) => {
+    // Save state after each message
+    saveConversationState(message.metadata.unstable_state);
+  },
+});
+```
+
+## Examples
+
+- **[Basic Data Stream Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-data-stream)** - Simple streaming chat
+- **[Tool Integration Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-data-stream-tools)** - Frontend and backend tools
+- **[Authentication Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-data-stream-auth)** - Secure endpoints
+
+## API Reference
+
+For detailed API documentation, see the [`@assistant-ui/react-data-stream` API Reference](/docs/api-reference/integrations/react-data-stream).
@@ -10,9 +10,9 @@ Integrate Mastra directly into your Next.js application's API routes. This appro
 <Steps>
 <Step>

-### Initialize
+### Initialize assistant-ui

-Start by setting up
+Start by setting up assistant-ui in your project. Run one of the following commands:

 ```sh title="New Project"
 npx assistant-ui@latest create
@@ -190,10 +190,11 @@ export async function POST(req: Request) {
 ```

 Key changes:
+
 - We import the `mastra` instance created in `mastra/index.ts`. Make sure the import path (`@/mastra`) is correct for your project setup (you might need `~/mastra`, `../../../mastra`, etc., depending on your path aliases and project structure).
 - We retrieve the `chefAgent` using `mastra.getAgent("chefAgent")`.
 - Instead of calling the AI SDK's `streamText` directly, we call `agent.stream(messages)` to process the chat messages using the agent's configuration and model.
-- The result is still returned in a format compatible with
+- The result is still returned in a format compatible with assistant-ui using `toDataStreamResponse()`.

 Your API route is now powered by Mastra!

@@ -208,11 +209,11 @@ You're all set! Start your Next.js development server:
 npm run dev
 ```

-Open your browser to `http://localhost:3000` (or the port specified in your terminal). You should now be able to interact with your `chefAgent` through the
+Open your browser to `http://localhost:3000` (or the port specified in your terminal). You should now be able to interact with your `chefAgent` through the assistant-ui chat interface. Ask it for cooking advice based on ingredients you have!

 </Step>
 </Steps>

-Congratulations! You have successfully integrated Mastra into your Next.js application using the full-stack approach. Your
+Congratulations! You have successfully integrated Mastra into your Next.js application using the full-stack approach. Your assistant-ui frontend now communicates with a Mastra agent running in your Next.js backend API route.

 To explore more advanced Mastra features like memory, tools, workflows, and more, please refer to the [official Mastra documentation](https://mastra.ai/docs).
@@ -4,9 +4,9 @@ title: Overview

 Mastra is an open-source TypeScript agent framework designed to provide the essential primitives for building AI applications. It enables developers to create AI agents with memory and tool-calling capabilities, implement deterministic LLM workflows, and leverage RAG for knowledge integration. With features like model routing, workflow graphs, and automated evals, Mastra provides a complete toolkit for developing, testing, and deploying AI applications.

-## Integrating with Next.js and
+## Integrating with Next.js and assistant-ui

-There are two primary ways to integrate Mastra into your Next.js project when using
+There are two primary ways to integrate Mastra into your Next.js project when using assistant-ui:

 1. **Full-Stack Integration**: Integrate Mastra directly into your Next.js application's API routes. This approach keeps your backend and frontend code within the same project.
 [Learn how to set up Full-Stack Integration](./full-stack-integration)
@@ -14,4 +14,4 @@ There are two primary ways to integrate Mastra into your Next.js project when us
 2. **Separate Server Integration**: Run Mastra as a standalone server and connect your Next.js frontend to its API endpoints. This approach separates concerns and allows for independent scaling.
 [Learn how to set up Separate Server Integration](./separate-server-integration)

-Choose the guide that best fits your project architecture. Both methods allow seamless integration with the
+Choose the guide that best fits your project architecture. Both methods allow seamless integration with the assistant-ui components.
@@ -5,7 +5,7 @@ title: Separate Server Integration
 import { Step, Steps } from "fumadocs-ui/components/steps";
 import { Callout } from "fumadocs-ui/components/callout";

-Run Mastra as a standalone server and connect your Next.js frontend (using
+Run Mastra as a standalone server and connect your Next.js frontend (using assistant-ui) to its API endpoints. This approach separates your AI backend from your frontend application, allowing for independent development and scaling.

 <Steps>

@@ -13,7 +13,7 @@ Run Mastra as a standalone server and connect your Next.js frontend (using Assis

 ### Create Mastra Server Project

-First, create a dedicated project for your Mastra server. Choose a directory separate from your Next.js/
+First, create a dedicated project for your Mastra server. Choose a directory separate from your Next.js/assistant-ui frontend project.

 Navigate to your chosen parent directory in the terminal and run the Mastra create command:

@@ -97,15 +97,15 @@ With the agent defined and registered, start the Mastra development server:
 npm run dev
 ```

-By default, the Mastra server will run on `http://localhost:4111`. Your `chefAgent` should now be accessible via a POST request endpoint, typically `http://localhost:4111/api/agents/chefAgent/stream`. Keep this server running for the next steps where we'll set up the
+By default, the Mastra server will run on `http://localhost:4111`. Your `chefAgent` should now be accessible via a POST request endpoint, typically `http://localhost:4111/api/agents/chefAgent/stream`. Keep this server running for the next steps where we'll set up the assistant-ui frontend to connect to it.

 </Step>

 <Step>

-### Initialize
+### Initialize assistant-ui Frontend

-Now, set up your frontend application using
+Now, set up your frontend application using assistant-ui. Navigate to a **different directory** from your Mastra server project. You can either create a new Next.js project or use an existing one.

 Inside your frontend project directory, run one of the following commands:

@@ -117,10 +117,10 @@ npx assistant-ui@latest create
 npx assistant-ui@latest init
 ```

-This command installs the necessary
+This command installs the necessary assistant-ui dependencies and sets up basic configuration files, including a default chat page and an API route (`app/api/chat/route.ts`).

 <Callout title="Need Help?">
-  For detailed setup instructions for
+  For detailed setup instructions for assistant-ui, including manual setup
   steps, please refer to the main [Getting Started
   guide](/docs/getting-started).
 </Callout>
@@ -133,9 +133,9 @@ In the next step, we will configure this frontend to communicate with the separa

 ### Configure Frontend API Endpoint

-The default
+The default assistant-ui setup configures the chat runtime to use a local API route (`/api/chat`) within the Next.js project. Since our Mastra agent is running on a separate server, we need to update the frontend to point to that server's endpoint.

-Open the main page file in your
+Open the main page file in your assistant-ui frontend project (usually `app/page.tsx` or `src/app/page.tsx`). Find the `useChatRuntime` hook and change the `api` property to the full URL of your Mastra agent's stream endpoint:

 ```tsx {10} title="app/page.tsx"
 "use client";
@@ -163,7 +163,7 @@ export default function Home() {

 Replace `"http://localhost:4111/api/agents/chefAgent/stream"` with the actual URL if your Mastra server runs on a different port or host, or if your agent has a different name.

-Now, the
+Now, the assistant-ui frontend will send chat requests directly to your running Mastra server.

 <Callout title="Delete Default API Route">
   Since the frontend no longer uses the local `/api/chat` route created by the
@@ -179,18 +179,18 @@ Now, the Assistant UI frontend will send chat requests directly to your running

 You're ready to connect the pieces! Make sure your separate Mastra server is still running (from Step 4).

-In your
+In your assistant-ui frontend project directory, start the Next.js development server:

 ```bash npm2yarn
 npm run dev
 ```

-Open your browser to `http://localhost:3000` (or the port specified in your terminal for the frontend app). You should now be able to interact with your `chefAgent` through the
+Open your browser to `http://localhost:3000` (or the port specified in your terminal for the frontend app). You should now be able to interact with your `chefAgent` through the assistant-ui chat interface. The frontend will make requests to your Mastra server running on `http://localhost:4111`.

 </Step>

 </Steps>

-Congratulations! You have successfully integrated Mastra with
+Congratulations! You have successfully integrated Mastra with assistant-ui using a separate server approach. Your assistant-ui frontend now communicates with a standalone Mastra agent server.

 This setup provides a clear separation between your frontend and AI backend. To explore more advanced Mastra features like memory, tools, workflows, and deployment options, please refer to the [official Mastra documentation](https://mastra.ai/docs).
@@ -48,6 +48,11 @@ For popular frameworks, we provide ready-to-use integrations built on top of our
   description="For useChat and useAssistant hooks - streaming with all major providers"
   href="/docs/runtimes/ai-sdk/use-chat"
 />
+<Card
+  title="Data Stream Protocol"
+  description="For custom backends using the data stream protocol standard"
+  href="/docs/runtimes/data-stream"
+/>
 <Card
   title="LangGraph"
   description="For complex agent workflows with LangChain's graph framework"