@assistant-ui/mcp-docs-server 0.1.25 → 0.1.27
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/organized/code-examples/waterfall.md +4 -4
- package/.docs/organized/code-examples/with-a2a.md +5 -5
- package/.docs/organized/code-examples/with-ag-ui.md +6 -6
- package/.docs/organized/code-examples/with-ai-sdk-v6.md +7 -7
- package/.docs/organized/code-examples/with-artifacts.md +7 -7
- package/.docs/organized/code-examples/with-assistant-transport.md +5 -5
- package/.docs/organized/code-examples/with-chain-of-thought.md +7 -7
- package/.docs/organized/code-examples/with-cloud-standalone.md +8 -8
- package/.docs/organized/code-examples/with-cloud.md +7 -7
- package/.docs/organized/code-examples/with-custom-thread-list.md +7 -7
- package/.docs/organized/code-examples/with-elevenlabs-conversational.md +511 -0
- package/.docs/organized/code-examples/with-elevenlabs-scribe.md +10 -10
- package/.docs/organized/code-examples/with-expo.md +18 -18
- package/.docs/organized/code-examples/with-external-store.md +5 -5
- package/.docs/organized/code-examples/with-ffmpeg.md +220 -66
- package/.docs/organized/code-examples/with-google-adk.md +6 -6
- package/.docs/organized/code-examples/with-heat-graph.md +4 -4
- package/.docs/organized/code-examples/with-interactables.md +836 -0
- package/.docs/organized/code-examples/with-langgraph.md +6 -6
- package/.docs/organized/code-examples/with-livekit.md +591 -0
- package/.docs/organized/code-examples/with-parent-id-grouping.md +6 -6
- package/.docs/organized/code-examples/with-react-hook-form.md +8 -8
- package/.docs/organized/code-examples/with-react-ink.md +3 -3
- package/.docs/organized/code-examples/with-react-router.md +11 -11
- package/.docs/organized/code-examples/with-store.md +11 -6
- package/.docs/organized/code-examples/with-tanstack.md +8 -8
- package/.docs/organized/code-examples/with-tap-runtime.md +8 -8
- package/.docs/raw/blog/2026-03-launch-week/index.mdx +31 -0
- package/.docs/raw/docs/(docs)/cli.mdx +60 -0
- package/.docs/raw/docs/(docs)/copilots/model-context.mdx +9 -1
- package/.docs/raw/docs/(docs)/guides/attachments.mdx +65 -4
- package/.docs/raw/docs/(docs)/guides/interactables.mdx +354 -0
- package/.docs/raw/docs/(docs)/guides/message-timing.mdx +3 -3
- package/.docs/raw/docs/(docs)/guides/multi-agent.mdx +1 -0
- package/.docs/raw/docs/(docs)/guides/tool-ui.mdx +29 -0
- package/.docs/raw/docs/(docs)/guides/voice.mdx +333 -0
- package/.docs/raw/docs/(reference)/api-reference/primitives/composer.mdx +128 -0
- package/.docs/raw/docs/(reference)/api-reference/primitives/message-part.mdx +23 -0
- package/.docs/raw/docs/cloud/ai-sdk-assistant-ui.mdx +6 -0
- package/.docs/raw/docs/cloud/ai-sdk.mdx +81 -1
- package/.docs/raw/docs/ink/primitives.mdx +141 -0
- package/.docs/raw/docs/primitives/action-bar.mdx +351 -0
- package/.docs/raw/docs/primitives/assistant-modal.mdx +215 -0
- package/.docs/raw/docs/primitives/attachment.mdx +216 -0
- package/.docs/raw/docs/primitives/branch-picker.mdx +221 -0
- package/.docs/raw/docs/primitives/chain-of-thought.mdx +311 -0
- package/.docs/raw/docs/primitives/composer.mdx +526 -0
- package/.docs/raw/docs/primitives/error.mdx +141 -0
- package/.docs/raw/docs/primitives/index.mdx +98 -0
- package/.docs/raw/docs/primitives/message.mdx +524 -0
- package/.docs/raw/docs/primitives/selection-toolbar.mdx +165 -0
- package/.docs/raw/docs/primitives/suggestion.mdx +242 -0
- package/.docs/raw/docs/primitives/thread-list.mdx +404 -0
- package/.docs/raw/docs/primitives/thread.mdx +482 -0
- package/.docs/raw/docs/runtimes/a2a/index.mdx +4 -0
- package/.docs/raw/docs/runtimes/ai-sdk/v6.mdx +2 -2
- package/.docs/raw/docs/runtimes/assistant-transport.mdx +6 -2
- package/.docs/raw/docs/ui/context-display.mdx +2 -2
- package/.docs/raw/docs/ui/mention.mdx +168 -0
- package/.docs/raw/docs/ui/model-selector.mdx +1 -1
- package/.docs/raw/docs/ui/voice.mdx +172 -0
- package/package.json +3 -4
package/.docs/raw/docs/(docs)/guides/voice.mdx

@@ -0,0 +1,333 @@
---
title: Realtime Voice
description: Bidirectional realtime voice conversations with AI agents.
---

import { VoiceSample } from "@/components/docs/samples/voice";

assistant-ui supports realtime bidirectional voice via the `RealtimeVoiceAdapter` interface. This enables live voice conversations where the user speaks into their microphone and the AI agent responds with audio, with transcripts appearing in the thread in real time.

<VoiceSample />

## How It Works

Unlike [Speech Synthesis](/docs/guides/speech) (text-to-speech) and [Dictation](/docs/guides/dictation) (speech-to-text), the voice adapter handles **both directions simultaneously** — the user's microphone audio is streamed to the agent, and the agent's audio response is played back, all while transcripts are appended to the message thread.

| Feature | Adapter | Direction |
|---------|---------|-----------|
| [Speech Synthesis](/docs/guides/speech) | `SpeechSynthesisAdapter` | Text → Audio (one message at a time) |
| [Dictation](/docs/guides/dictation) | `DictationAdapter` | Audio → Text (into composer) |
| **Realtime Voice** | `RealtimeVoiceAdapter` | Audio ↔ Audio (bidirectional, live) |

## Configuration

Pass a `RealtimeVoiceAdapter` implementation to the runtime via `adapters.voice`:

```tsx
const runtime = useChatRuntime({
  adapters: {
    voice: new MyVoiceAdapter({ /* ... */ }),
  },
});
```

When a voice adapter is provided, `capabilities.voice` is automatically set to `true`.

## Hooks

### useVoiceState

Returns the current voice session state, or `undefined` when no session is active.

```tsx
import { useVoiceState, useVoiceVolume } from "@assistant-ui/react";

const voiceState = useVoiceState();
// voiceState?.status.type — "starting" | "running" | "ended"
// voiceState?.isMuted — boolean
// voiceState?.mode — "listening" | "speaking"

const volume = useVoiceVolume();
// volume — number (0–1, real-time audio level via separate subscription)
```

### useVoiceControls

Returns methods to control the voice session.

```tsx
import { useVoiceControls } from "@assistant-ui/react";

const { connect, disconnect, mute, unmute } = useVoiceControls();
```

## UI Example

```tsx
import { useVoiceState, useVoiceControls } from "@assistant-ui/react";
import { PhoneIcon, PhoneOffIcon, MicIcon, MicOffIcon } from "lucide-react";

function VoiceControls() {
  const voiceState = useVoiceState();
  const { connect, disconnect, mute, unmute } = useVoiceControls();

  const isRunning = voiceState?.status.type === "running";
  const isStarting = voiceState?.status.type === "starting";
  const isMuted = voiceState?.isMuted ?? false;

  if (!isRunning && !isStarting) {
    return (
      <button onClick={() => connect()}>
        <PhoneIcon /> Connect
      </button>
    );
  }

  return (
    <>
      <button onClick={() => (isMuted ? unmute() : mute())} disabled={!isRunning}>
        {isMuted ? <MicOffIcon /> : <MicIcon />}
        {isMuted ? "Unmute" : "Mute"}
      </button>
      <button onClick={() => disconnect()}>
        <PhoneOffIcon /> Disconnect
      </button>
    </>
  );
}
```

## Custom Adapters

Implement the `RealtimeVoiceAdapter` interface to integrate with any voice provider.

### RealtimeVoiceAdapter Interface

```tsx
import type { RealtimeVoiceAdapter } from "@assistant-ui/react";

class MyVoiceAdapter implements RealtimeVoiceAdapter {
  connect(options: {
    abortSignal?: AbortSignal;
  }): RealtimeVoiceAdapter.Session {
    // Establish connection to your voice service
    return {
      get status() { /* ... */ },
      get isMuted() { /* ... */ },

      disconnect: () => { /* ... */ },
      mute: () => { /* ... */ },
      unmute: () => { /* ... */ },

      onStatusChange: (callback) => {
        // Status: { type: "starting" } → { type: "running" } → { type: "ended", reason }
        return () => {}; // Return unsubscribe
      },

      onTranscript: (callback) => {
        // callback({ role: "user" | "assistant", text: "...", isFinal: true })
        // Transcripts are automatically appended as messages in the thread.
        return () => {};
      },

      // Report who is speaking (drives the VoiceOrb speaking animation)
      onModeChange: (callback) => {
        // callback("listening") — user's turn
        // callback("speaking") — agent's turn
        return () => {};
      },

      // Report real-time audio level (0–1) for visual feedback
      onVolumeChange: (callback) => {
        // callback(0.72) — drives VoiceOrb amplitude and waveform bar heights
        return () => {};
      },
    };
  }
}
```

### Session Lifecycle

The session status follows the same pattern as other adapters:

```
starting → running → ended
```

The `ended` status includes a `reason`:

- `"finished"` — session ended normally
- `"cancelled"` — session was cancelled by the user
- `"error"` — session ended due to an error (includes `error` field)
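
The lifecycle above can be modeled as a discriminated union. This is a sketch inferred from the statuses and reasons listed in this guide — the library's actual `RealtimeVoiceAdapter.Status` type may differ in detail:

```ts
// Inferred shape — not copied from the library's type definitions.
type VoiceStatus =
  | { type: "starting" }
  | { type: "running" }
  | { type: "ended"; reason: "finished" | "cancelled" }
  | { type: "ended"; reason: "error"; error: unknown };

// Example helper: produce a human-readable label for UI display.
function describeStatus(status: VoiceStatus): string {
  switch (status.type) {
    case "starting":
      return "Connecting…";
    case "running":
      return "Live";
    case "ended":
      return status.reason === "error" ? "Ended with error" : `Ended (${status.reason})`;
  }
}
```

Modeling the status as a union like this lets an exhaustive `switch` catch unhandled states at compile time.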

### Mode and Volume

All adapters must implement `onModeChange` and `onVolumeChange`. If your provider doesn't support these, return a no-op unsubscribe:

- **`onModeChange`** — Reports `"listening"` (user's turn) or `"speaking"` (agent's turn). The `VoiceOrb` switches to the active speaking animation.
- **`onVolumeChange`** — Reports a real-time audio level (`0`–`1`). The `VoiceOrb` modulates its amplitude and glow, and waveform bars scale to match.

When using `createVoiceSession`, these are handled automatically — call `session.emitMode()` and `session.emitVolume()` when your provider delivers data.

### Transcript Handling

Transcripts emitted via `onTranscript` are automatically appended to the message thread:

- **User transcripts** (`role: "user"`, `isFinal: true`) are appended as user messages.
- **Assistant transcripts** (`role: "assistant"`) are streamed into an assistant message. The message shows a "running" status until `isFinal: true` is received.

## Example: ElevenLabs Conversational AI

[ElevenLabs Conversational AI](https://elevenlabs.io/docs/agents-platform/overview) provides realtime voice agents via WebRTC.

### Install Dependencies

```bash
npm install @elevenlabs/client
```

### Adapter

```tsx title="lib/elevenlabs-voice-adapter.ts"
import type { RealtimeVoiceAdapter, Unsubscribe } from "@assistant-ui/react";
import { VoiceConversation } from "@elevenlabs/client";

export class ElevenLabsVoiceAdapter implements RealtimeVoiceAdapter {
  private _agentId: string;

  constructor(options: { agentId: string }) {
    this._agentId = options.agentId;
  }

  connect(options: {
    abortSignal?: AbortSignal;
  }): RealtimeVoiceAdapter.Session {
    const statusCallbacks = new Set<(s: RealtimeVoiceAdapter.Status) => void>();
    const transcriptCallbacks = new Set<(t: RealtimeVoiceAdapter.TranscriptItem) => void>();
    const modeCallbacks = new Set<(m: RealtimeVoiceAdapter.Mode) => void>();
    const volumeCallbacks = new Set<(v: number) => void>();

    let currentStatus: RealtimeVoiceAdapter.Status = { type: "starting" };
    let isMuted = false;
    let conversation: VoiceConversation | null = null;
    let disposed = false;

    const updateStatus = (status: RealtimeVoiceAdapter.Status) => {
      if (disposed) return;
      currentStatus = status;
      for (const cb of statusCallbacks) cb(status);
    };

    const cleanup = () => {
      disposed = true;
      conversation = null;
      statusCallbacks.clear();
      transcriptCallbacks.clear();
      modeCallbacks.clear();
      volumeCallbacks.clear();
    };

    const session: RealtimeVoiceAdapter.Session = {
      get status() { return currentStatus; },
      get isMuted() { return isMuted; },
      disconnect: () => { conversation?.endSession(); cleanup(); },
      mute: () => { conversation?.setMicMuted(true); isMuted = true; },
      unmute: () => { conversation?.setMicMuted(false); isMuted = false; },
      onStatusChange: (cb): Unsubscribe => {
        statusCallbacks.add(cb);
        return () => statusCallbacks.delete(cb);
      },
      onTranscript: (cb): Unsubscribe => {
        transcriptCallbacks.add(cb);
        return () => transcriptCallbacks.delete(cb);
      },
      onModeChange: (cb): Unsubscribe => {
        modeCallbacks.add(cb);
        return () => modeCallbacks.delete(cb);
      },
      onVolumeChange: (cb): Unsubscribe => {
        volumeCallbacks.add(cb);
        return () => volumeCallbacks.delete(cb);
      },
    };

    if (options.abortSignal) {
      options.abortSignal.addEventListener("abort", () => {
        conversation?.endSession();
        cleanup();
      }, { once: true });
    }

    const doConnect = async () => {
      if (disposed) return;
      try {
        conversation = await VoiceConversation.startSession({
          agentId: this._agentId,
          onConnect: () => updateStatus({ type: "running" }),
          onDisconnect: () => {
            updateStatus({ type: "ended", reason: "finished" });
            cleanup();
          },
          onError: (msg) => {
            updateStatus({ type: "ended", reason: "error", error: new Error(msg) });
            cleanup();
          },
          onModeChange: ({ mode }) => {
            if (disposed) return;
            for (const cb of modeCallbacks) cb(mode === "speaking" ? "speaking" : "listening");
          },
          onMessage: (msg) => {
            if (disposed) return;
            for (const cb of transcriptCallbacks) {
              cb({ role: msg.role === "user" ? "user" : "assistant", text: msg.message, isFinal: true });
            }
          },
        });
      } catch (error) {
        updateStatus({ type: "ended", reason: "error", error });
        cleanup();
      }
    };

    doConnect();
    return session;
  }
}
```

### Usage

```tsx
import { ElevenLabsVoiceAdapter } from "@/lib/elevenlabs-voice-adapter";

const runtime = useChatRuntime({
  adapters: {
    voice: new ElevenLabsVoiceAdapter({
      agentId: process.env.NEXT_PUBLIC_ELEVENLABS_AGENT_ID!,
    }),
  },
});
```

## Example: LiveKit

[LiveKit](https://livekit.io/) provides realtime voice via WebRTC rooms with transcription support.

### Install Dependencies

```bash
npm install livekit-client
```

### Usage

```tsx
import { LiveKitVoiceAdapter } from "@/lib/livekit-voice-adapter";

const runtime = useChatRuntime({
  adapters: {
    voice: new LiveKitVoiceAdapter({
      url: process.env.NEXT_PUBLIC_LIVEKIT_URL!,
      token: async () => {
        const res = await fetch("/api/livekit-token", { method: "POST" });
        const { token } = await res.json();
        return token;
      },
    }),
  },
});
```

See the `examples/with-livekit` directory in the repository for a complete implementation including the adapter and token endpoint.
package/.docs/raw/docs/(reference)/api-reference/primitives/composer.mdx

@@ -415,3 +415,131 @@ import { AuiIf } from "@assistant-ui/react";
  {/* rendered if dictation is active */}
</AuiIf>
```

## Mention Primitives (Unstable)

<Callout type="warn">
These primitives are under the `Unstable_` prefix and may change without notice.
</Callout>

Primitives for an @-mention picker in the composer. See the [Mention component guide](/docs/ui/mention) for a pre-built implementation.

### Anatomy

```tsx
import { ComposerPrimitive } from "@assistant-ui/react";

const Composer = () => (
  <ComposerPrimitive.Unstable_MentionRoot adapter={mentionAdapter}>
    <ComposerPrimitive.Root>
      <ComposerPrimitive.Input />
      <ComposerPrimitive.Unstable_MentionPopover>
        <ComposerPrimitive.Unstable_MentionCategories>
          {(categories) =>
            categories.map((cat) => (
              <ComposerPrimitive.Unstable_MentionCategoryItem
                key={cat.id}
                categoryId={cat.id}
              >
                {cat.label}
              </ComposerPrimitive.Unstable_MentionCategoryItem>
            ))
          }
        </ComposerPrimitive.Unstable_MentionCategories>
        <ComposerPrimitive.Unstable_MentionItems>
          {(items) =>
            items.map((item) => (
              <ComposerPrimitive.Unstable_MentionItem
                key={item.id}
                item={item}
              >
                {item.label}
              </ComposerPrimitive.Unstable_MentionItem>
            ))
          }
        </ComposerPrimitive.Unstable_MentionItems>
        <ComposerPrimitive.Unstable_MentionBack>
          Back
        </ComposerPrimitive.Unstable_MentionBack>
      </ComposerPrimitive.Unstable_MentionPopover>
    </ComposerPrimitive.Root>
  </ComposerPrimitive.Unstable_MentionRoot>
);
```

### Unstable_MentionRoot

Provider that wraps the composer with mention trigger detection, keyboard navigation, and popover state.

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `adapter` | `Unstable_MentionAdapter` | — | Provides categories, items, and search |
| `trigger` | `string` | `"@"` | Character(s) that activate the popover |
| `formatter` | `Unstable_DirectiveFormatter` | Default | Serializer/parser for mention directives |

### Unstable_MentionPopover

Container that only renders when a mention trigger is active. Renders a `<div>` with `role="listbox"`.

### Unstable_MentionCategories

Renders the top-level category list. Accepts a render function `(categories) => ReactNode`. Hidden when a category is selected or when in search mode.

### Unstable_MentionCategoryItem

A button that drills into a category. Renders `role="option"` with automatic `data-highlighted` and `aria-selected` when keyboard-navigated.

| Prop | Type | Description |
| --- | --- | --- |
| `categoryId` | `string` | The category to select on click |

### Unstable_MentionItems

Renders the item list for the active category or search results. Accepts a render function `(items) => ReactNode`. Hidden when no category is selected and not in search mode.

### Unstable_MentionItem

A button that inserts a mention into the composer. Renders `role="option"` with automatic `data-highlighted` and `aria-selected` when keyboard-navigated.

| Prop | Type | Description |
| --- | --- | --- |
| `item` | `Unstable_MentionItem` | The item to insert on click |

### Unstable_MentionBack

A button that navigates back from items to the category list. Only renders when a category is active.

### unstable_useMentionContext

Hook to access the mention popover state and actions from within `Unstable_MentionRoot`.

```tsx
const {
  open,          // boolean — whether popover is visible
  query,         // string — text after the trigger character
  isSearchMode,  // boolean — whether showing search results
  highlightedIndex,
  categories,
  items,
  activeCategoryId,
  selectCategory,
  selectItem,
  goBack,
  close,
  handleKeyDown,
  formatter,
} = unstable_useMentionContext();
```

### unstable_useToolMentionAdapter

Hook that creates an `Unstable_MentionAdapter` from registered tools (via `useAssistantTool`).

```tsx
import { unstable_useToolMentionAdapter } from "@assistant-ui/react";

const adapter = unstable_useToolMentionAdapter({
  formatLabel: (name) => name.replaceAll("_", " "),
  categoryLabel: "Tools",
});
```
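
For a custom data source, an adapter can be sketched as a plain object exposing categories, per-category items, and a search function. The exact `Unstable_MentionAdapter` contract is not spelled out in this reference, so the field names below are a hypothetical illustration only:

```ts
interface MentionCategory { id: string; label: string }
interface MentionItem { id: string; label: string; categoryId: string }

const allItems: MentionItem[] = [
  { id: "readme", label: "README.md", categoryId: "files" },
  { id: "main", label: "main.ts", categoryId: "files" },
  { id: "alice", label: "Alice", categoryId: "people" },
];

// Hypothetical adapter object — illustrative shape, not the real interface.
const mentionAdapter = {
  categories: [
    { id: "files", label: "Files" },
    { id: "people", label: "People" },
  ] as MentionCategory[],
  // Items listed after the user drills into a category.
  items: (categoryId: string): MentionItem[] =>
    allItems.filter((i) => i.categoryId === categoryId),
  // Items listed in search mode (typing after the trigger character).
  search: (query: string): MentionItem[] =>
    allItems.filter((i) => i.label.toLowerCase().includes(query.toLowerCase())),
};
```

An object like this would be passed as the `adapter` prop on `Unstable_MentionRoot`.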
package/.docs/raw/docs/(reference)/api-reference/primitives/message-part.mdx

@@ -37,6 +37,29 @@ Custom data events that can be rendered as UI at their position in the message s

You can use either the explicit format `{ type: "data", name: "workflow", data: {...} }` or the shorthand `data-*` prefixed format `{ type: "data-workflow", data: {...} }`. The prefixed format is automatically converted to a `DataMessagePart` (the `data-` prefix is stripped and the remainder becomes the `name`). Unknown message part types that don't match any built-in type are skipped with a console warning.
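
The prefix conversion described above can be sketched as a small normalization step. This is an illustration of the documented behavior — the library's actual internal helper and type names may differ:

```ts
type DataMessagePart = { type: "data"; name: string; data: unknown };

// Normalize the shorthand `data-*` format into the explicit form.
// Returns undefined for unknown part types so callers can skip them.
function normalizeDataPart(
  part: { type: string; [key: string]: unknown },
): DataMessagePart | undefined {
  if (part.type === "data" && typeof part.name === "string") {
    return { type: "data", name: part.name, data: part.data };
  }
  if (part.type.startsWith("data-")) {
    // "data-workflow" → name "workflow"
    return { type: "data", name: part.type.slice("data-".length), data: part.data };
  }
  console.warn(`Unknown message part type: ${part.type}`);
  return undefined;
}
```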

#### Streaming Data Parts

Data parts can be sent from the server using `appendData()` on the stream controller:

```ts
controller.appendData({
  type: "data",
  name: "chart",
  data: { labels: ["Q1", "Q2"], values: [10, 20] },
});
```

Register a renderer with `makeAssistantDataUI` to display data parts:

```tsx
import { makeAssistantDataUI } from "@assistant-ui/react";

const ChartUI = makeAssistantDataUI({
  name: "chart",
  render: ({ data }) => <MyChart data={data} />,
});
```

## Anatomy

```tsx
package/.docs/raw/docs/cloud/ai-sdk-assistant-ui.mdx

@@ -256,6 +256,12 @@ const cloud = new AssistantCloud({

Return `null` from `beforeReport` to skip reporting a specific run. To disable telemetry entirely, pass `telemetry: false`.

### Sub-Agent Model Tracking

When tool calls delegate to a different model (e.g., the main run uses GPT but a tool invokes Gemini), you can track the delegated model's usage. Pass sampling call data through `messageMetadata.samplingCalls` in your API route, and the telemetry reporter will automatically include it in the report.

See the [AI SDK Telemetry guide](/docs/cloud/ai-sdk#sub-agent-model-tracking) for the full setup with `createSamplingCollector` and `wrapSamplingHandler`.

## Authentication

The example above uses anonymous mode (browser session-based user ID) via the env var. For production apps with user accounts, pass an explicit cloud instance:
package/.docs/raw/docs/cloud/ai-sdk.mdx

@@ -252,7 +252,7 @@ export async function POST(req: Request) {
 ```

 <Callout type="info">
-The standalone hook does not capture `duration_ms`, per-step breakdowns (`steps`),
+The standalone hook captures message metadata when it is JSON-serializable, but it does not capture `duration_ms`, per-step breakdowns (`steps`), or `"error"` status. Those require the full runtime integration available via [`useChatRuntime`](/docs/cloud/ai-sdk-assistant-ui).
 </Callout>

 ### Customizing Reports
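
The JSON-serializability condition mentioned in the callout can be approximated with a small guard like the following (an illustrative sketch; the hook's internal check may differ):

```ts
// Returns true when a value can be serialized as JSON metadata.
// Values that make JSON.stringify throw (e.g. cyclic structures)
// are rejected; note that functions silently serialize to undefined
// rather than throwing, so they pass this particular check.
function isJsonSerializable(value: unknown): boolean {
  try {
    JSON.stringify(value);
    return true;
  } catch {
    return false;
  }
}
```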
package/.docs/raw/docs/cloud/ai-sdk.mdx

@@ -274,6 +274,86 @@ const cloud = new AssistantCloud({

Return `null` from `beforeReport` to skip reporting a specific run. To disable telemetry entirely, pass `telemetry: false`.

### Sub-Agent Model Tracking

In multi-agent setups where tool calls delegate to a different model (e.g., the main run uses GPT but a tool invokes Gemini), you can track the delegated model's usage by passing sampling call data through `messageMetadata`.

**Step 1: Collect sampling data on the server**

Use `createSamplingCollector` and `wrapSamplingHandler` from `assistant-cloud` to capture LLM calls made during tool execution:

```ts title="app/api/chat/route.ts"
import { streamText, tool } from "ai";
import { z } from "zod";
import { openai } from "@ai-sdk/openai";
import {
  createSamplingCollector,
  wrapSamplingHandler,
  type SamplingCallData,
} from "assistant-cloud";

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Collect sub-agent sampling calls per tool call
  const samplingCalls: Record<string, SamplingCallData[]> = {};

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: {
      delegate_to_gemini: tool({
        parameters: z.object({ task: z.string() }),
        execute: async ({ task }, { toolCallId }) => {
          const collector = createSamplingCollector();
          // Your sub-agent logic that calls another model
          const result = await runSubAgent(task, {
            onSamplingCall: collector.collect,
          });
          samplingCalls[toolCallId] = collector.getCalls();
          return result;
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse({
    messageMetadata: ({ part }) => {
      if (part.type === "finish") {
        return {
          usage: part.totalUsage,
          samplingCalls, // attach collected sampling data
        };
      }
      if (part.type === "finish-step") {
        return { modelId: part.response.modelId };
      }
      return undefined;
    },
  });
}
```

**Step 2: That's it.** The telemetry reporter automatically reads `samplingCalls` from message metadata and attaches the data to matching tool calls in the report. The Cloud dashboard will show each delegated model in the model distribution chart with its own token and cost breakdown.

<Callout type="info">
For MCP tools that use the sampling protocol, `wrapSamplingHandler` can wrap the MCP client's sampling handler directly to capture all nested LLM calls transparently.
</Callout>

<Callout type="tip">
**On older versions** that don't yet read `samplingCalls` from metadata, use `beforeReport` to inject the data manually:

```ts
telemetry: {
  beforeReport: (report) => ({
    ...report,
    tool_calls: report.tool_calls?.map((tc) => ({
      ...tc,
      sampling_calls: samplingCalls[tc.tool_call_id],
    })),
  }),
}
```
</Callout>

## Authentication

The example above uses anonymous mode (browser session-based user ID) via the env var. For production apps with user accounts, pass an explicit cloud instance: