@assistant-ui/mcp-docs-server 0.1.26 → 0.1.28
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/organized/code-examples/waterfall.md +2 -2
- package/.docs/organized/code-examples/with-a2a.md +2 -2
- package/.docs/organized/code-examples/with-ag-ui.md +3 -3
- package/.docs/organized/code-examples/with-ai-sdk-v6.md +4 -4
- package/.docs/organized/code-examples/with-artifacts.md +4 -4
- package/.docs/organized/code-examples/with-assistant-transport.md +2 -2
- package/.docs/organized/code-examples/with-chain-of-thought.md +4 -4
- package/.docs/organized/code-examples/with-cloud-standalone.md +4 -4
- package/.docs/organized/code-examples/with-cloud.md +4 -4
- package/.docs/organized/code-examples/with-custom-thread-list.md +4 -4
- package/.docs/organized/code-examples/with-elevenlabs-conversational.md +511 -0
- package/.docs/organized/code-examples/with-elevenlabs-scribe.md +6 -6
- package/.docs/organized/code-examples/with-expo.md +17 -17
- package/.docs/organized/code-examples/with-external-store.md +2 -2
- package/.docs/organized/code-examples/with-ffmpeg.md +217 -63
- package/.docs/organized/code-examples/with-generative-ui.md +841 -0
- package/.docs/organized/code-examples/with-google-adk.md +3 -3
- package/.docs/organized/code-examples/with-heat-graph.md +2 -2
- package/.docs/organized/code-examples/with-interactables.md +67 -9
- package/.docs/organized/code-examples/with-langgraph.md +3 -3
- package/.docs/organized/code-examples/with-livekit.md +591 -0
- package/.docs/organized/code-examples/with-parent-id-grouping.md +3 -3
- package/.docs/organized/code-examples/with-react-hook-form.md +5 -5
- package/.docs/organized/code-examples/with-react-ink.md +1 -1
- package/.docs/organized/code-examples/with-react-router.md +7 -7
- package/.docs/organized/code-examples/with-store.md +8 -3
- package/.docs/organized/code-examples/with-tanstack.md +4 -4
- package/.docs/organized/code-examples/with-tap-runtime.md +2 -2
- package/.docs/raw/docs/(docs)/copilots/model-context.mdx +9 -1
- package/.docs/raw/docs/(docs)/guides/interactables.mdx +99 -37
- package/.docs/raw/docs/(docs)/guides/mentions.mdx +406 -0
- package/.docs/raw/docs/(docs)/guides/slash-commands.mdx +275 -0
- package/.docs/raw/docs/(docs)/guides/tool-ui.mdx +29 -0
- package/.docs/raw/docs/(docs)/guides/voice.mdx +333 -0
- package/.docs/raw/docs/(reference)/api-reference/primitives/message-part.mdx +23 -0
- package/.docs/raw/docs/primitives/composer.mdx +27 -4
- package/.docs/raw/docs/runtimes/a2a/index.mdx +4 -0
- package/.docs/raw/docs/runtimes/ai-sdk/v6.mdx +2 -2
- package/.docs/raw/docs/runtimes/assistant-transport.mdx +6 -2
- package/.docs/raw/docs/ui/context-display.mdx +2 -2
- package/.docs/raw/docs/ui/model-selector.mdx +1 -1
- package/.docs/raw/docs/ui/voice.mdx +172 -0
- package/package.json +5 -6
@@ -748,6 +748,35 @@ useAssistantToolUI({
 server.
 </Callout>
 
+## Per-Property Streaming Status
+
+When rendering a tool UI, you can track which arguments have finished streaming using `useToolArgsStatus`. This must be used inside a tool-call message part context.
+
+```tsx
+import { useToolArgsStatus } from "@assistant-ui/react";
+
+const WeatherUI = makeAssistantToolUI({
+  toolName: "weather",
+  render: ({ args }) => {
+    const { status, propStatus } = useToolArgsStatus<{
+      location: string;
+      unit: string;
+    }>();
+
+    return (
+      <div>
+        <span className={propStatus.location === "streaming" ? "animate-pulse" : ""}>
+          {args.location ?? "..."}
+        </span>
+        {status === "complete" && <WeatherChart data={args} />}
+      </div>
+    );
+  },
+});
+```
+
+`propStatus` maps each key to `"streaming"` | `"complete"` once the key appears in the partial JSON. Keys not yet present in the stream are absent from `propStatus`.
+
 ## Related Guides
 
 - [Tools Guide](/docs/guides/tools) - Learn how to create and use tools with AI models
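The per-property semantics documented above can be sketched in plain TypeScript. This is a hypothetical illustration, not the library's internals: `derivePropStatus` and its "last key seen is still streaming" rule are assumptions made for the example.

```typescript
// Hypothetical sketch of the documented behavior (not library internals):
// while the stream is open, the most recently seen key is still
// "streaming"; every earlier key is "complete". Keys not yet present in
// the partial JSON simply do not appear in the result.
type PropStatus = Record<string, "streaming" | "complete">;

function derivePropStatus(
  partialArgs: Record<string, unknown>,
  streamOpen: boolean,
): PropStatus {
  const keys = Object.keys(partialArgs);
  const result: PropStatus = {};
  keys.forEach((key, i) => {
    const isLast = i === keys.length - 1;
    result[key] = streamOpen && isLast ? "streaming" : "complete";
  });
  return result;
}
```

For example, while `{"location": "San Fr` is the partial JSON, only `location` appears in the map, with status `"streaming"`; once the stream closes, every present key reports `"complete"`.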
@@ -0,0 +1,333 @@
+---
+title: Realtime Voice
+description: Bidirectional realtime voice conversations with AI agents.
+---
+
+import { VoiceSample } from "@/components/docs/samples/voice";
+
+assistant-ui supports realtime bidirectional voice via the `RealtimeVoiceAdapter` interface. This enables live voice conversations where the user speaks into their microphone and the AI agent responds with audio, with transcripts appearing in the thread in real time.
+
+<VoiceSample />
+
+## How It Works
+
+Unlike [Speech Synthesis](/docs/guides/speech) (text-to-speech) and [Dictation](/docs/guides/dictation) (speech-to-text), the voice adapter handles **both directions simultaneously** — the user's microphone audio is streamed to the agent, and the agent's audio response is played back, all while transcripts are appended to the message thread.
+
+| Feature | Adapter | Direction |
+|---------|---------|-----------|
+| [Speech Synthesis](/docs/guides/speech) | `SpeechSynthesisAdapter` | Text → Audio (one message at a time) |
+| [Dictation](/docs/guides/dictation) | `DictationAdapter` | Audio → Text (into composer) |
+| **Realtime Voice** | `RealtimeVoiceAdapter` | Audio ↔ Audio (bidirectional, live) |
+
+## Configuration
+
+Pass a `RealtimeVoiceAdapter` implementation to the runtime via `adapters.voice`:
+
+```tsx
+const runtime = useChatRuntime({
+  adapters: {
+    voice: new MyVoiceAdapter({ /* ... */ }),
+  },
+});
+```
+
+When a voice adapter is provided, `capabilities.voice` is automatically set to `true`.
+
+## Hooks
+
+### useVoiceState
+
+Returns the current voice session state, or `undefined` when no session is active.
+
+```tsx
+import { useVoiceState, useVoiceVolume } from "@assistant-ui/react";
+
+const voiceState = useVoiceState();
+// voiceState?.status.type — "starting" | "running" | "ended"
+// voiceState?.isMuted — boolean
+// voiceState?.mode — "listening" | "speaking"
+
+const volume = useVoiceVolume();
+// volume — number (0–1, real-time audio level via separate subscription)
+```
+
+### useVoiceControls
+
+Returns methods to control the voice session.
+
+```tsx
+import { useVoiceControls } from "@assistant-ui/react";
+
+const { connect, disconnect, mute, unmute } = useVoiceControls();
+```
+
+## UI Example
+
+```tsx
+import { useVoiceState, useVoiceControls } from "@assistant-ui/react";
+import { PhoneIcon, PhoneOffIcon, MicIcon, MicOffIcon } from "lucide-react";
+
+function VoiceControls() {
+  const voiceState = useVoiceState();
+  const { connect, disconnect, mute, unmute } = useVoiceControls();
+
+  const isRunning = voiceState?.status.type === "running";
+  const isStarting = voiceState?.status.type === "starting";
+  const isMuted = voiceState?.isMuted ?? false;
+
+  if (!isRunning && !isStarting) {
+    return (
+      <button onClick={() => connect()}>
+        <PhoneIcon /> Connect
+      </button>
+    );
+  }
+
+  return (
+    <>
+      <button onClick={() => (isMuted ? unmute() : mute())} disabled={!isRunning}>
+        {isMuted ? <MicOffIcon /> : <MicIcon />}
+        {isMuted ? "Unmute" : "Mute"}
+      </button>
+      <button onClick={() => disconnect()}>
+        <PhoneOffIcon /> Disconnect
+      </button>
+    </>
+  );
+}
+```
+
+## Custom Adapters
+
+Implement the `RealtimeVoiceAdapter` interface to integrate with any voice provider.
+
+### RealtimeVoiceAdapter Interface
+
+```tsx
+import type { RealtimeVoiceAdapter } from "@assistant-ui/react";
+
+class MyVoiceAdapter implements RealtimeVoiceAdapter {
+  connect(options: {
+    abortSignal?: AbortSignal;
+  }): RealtimeVoiceAdapter.Session {
+    // Establish connection to your voice service
+    return {
+      get status() { /* ... */ },
+      get isMuted() { /* ... */ },
+
+      disconnect: () => { /* ... */ },
+      mute: () => { /* ... */ },
+      unmute: () => { /* ... */ },
+
+      onStatusChange: (callback) => {
+        // Status: { type: "starting" } → { type: "running" } → { type: "ended", reason }
+        return () => {}; // Return unsubscribe
+      },
+
+      onTranscript: (callback) => {
+        // callback({ role: "user" | "assistant", text: "...", isFinal: true })
+        // Transcripts are automatically appended as messages in the thread.
+        return () => {};
+      },
+
+      // Report who is speaking (drives the VoiceOrb speaking animation)
+      onModeChange: (callback) => {
+        // callback("listening") — user's turn
+        // callback("speaking") — agent's turn
+        return () => {};
+      },
+
+      // Report real-time audio level (0–1) for visual feedback
+      onVolumeChange: (callback) => {
+        // callback(0.72) — drives VoiceOrb amplitude and waveform bar heights
+        return () => {};
+      },
+    };
+  }
+}
+```
+
+### Session Lifecycle
+
+The session status follows the same pattern as other adapters:
+
+```
+starting → running → ended
+```
+
+The `ended` status includes a `reason`:
+- `"finished"` — session ended normally
+- `"cancelled"` — session was cancelled by the user
+- `"error"` — session ended due to an error (includes `error` field)
+
+### Mode and Volume
+
+All adapters must implement `onModeChange` and `onVolumeChange`. If your provider doesn't support these, return a no-op unsubscribe:
+
+- **`onModeChange`** — Reports `"listening"` (user's turn) or `"speaking"` (agent's turn). The `VoiceOrb` switches to the active speaking animation.
+- **`onVolumeChange`** — Reports a real-time audio level (`0`–`1`). The `VoiceOrb` modulates its amplitude and glow, and waveform bars scale to match.
+
+When using `createVoiceSession`, these are handled automatically — call `session.emitMode()` and `session.emitVolume()` when your provider delivers data.
+
+### Transcript Handling
+
+Transcripts emitted via `onTranscript` are automatically appended to the message thread:
+
+- **User transcripts** (`role: "user"`, `isFinal: true`) are appended as user messages.
+- **Assistant transcripts** (`role: "assistant"`) are streamed into an assistant message. The message shows a "running" status until `isFinal: true` is received.
+
+## Example: ElevenLabs Conversational AI
+
+[ElevenLabs Conversational AI](https://elevenlabs.io/docs/agents-platform/overview) provides realtime voice agents via WebRTC.
+
+### Install Dependencies
+
+```bash
+npm install @elevenlabs/client
+```
+
+### Adapter
+
+```tsx title="lib/elevenlabs-voice-adapter.ts"
+import type { RealtimeVoiceAdapter, Unsubscribe } from "@assistant-ui/react";
+import { VoiceConversation } from "@elevenlabs/client";
+
+export class ElevenLabsVoiceAdapter implements RealtimeVoiceAdapter {
+  private _agentId: string;
+
+  constructor(options: { agentId: string }) {
+    this._agentId = options.agentId;
+  }
+
+  connect(options: {
+    abortSignal?: AbortSignal;
+  }): RealtimeVoiceAdapter.Session {
+    const statusCallbacks = new Set<(s: RealtimeVoiceAdapter.Status) => void>();
+    const transcriptCallbacks = new Set<(t: RealtimeVoiceAdapter.TranscriptItem) => void>();
+    const modeCallbacks = new Set<(m: RealtimeVoiceAdapter.Mode) => void>();
+    const volumeCallbacks = new Set<(v: number) => void>();
+
+    let currentStatus: RealtimeVoiceAdapter.Status = { type: "starting" };
+    let isMuted = false;
+    let conversation: VoiceConversation | null = null;
+    let disposed = false;
+
+    const updateStatus = (status: RealtimeVoiceAdapter.Status) => {
+      if (disposed) return;
+      currentStatus = status;
+      for (const cb of statusCallbacks) cb(status);
+    };
+
+    const cleanup = () => {
+      disposed = true;
+      conversation = null;
+      statusCallbacks.clear();
+      transcriptCallbacks.clear();
+      modeCallbacks.clear();
+      volumeCallbacks.clear();
+    };
+
+    const session: RealtimeVoiceAdapter.Session = {
+      get status() { return currentStatus; },
+      get isMuted() { return isMuted; },
+      disconnect: () => { conversation?.endSession(); cleanup(); },
+      mute: () => { conversation?.setMicMuted(true); isMuted = true; },
+      unmute: () => { conversation?.setMicMuted(false); isMuted = false; },
+      onStatusChange: (cb): Unsubscribe => {
+        statusCallbacks.add(cb);
+        return () => statusCallbacks.delete(cb);
+      },
+      onTranscript: (cb): Unsubscribe => {
+        transcriptCallbacks.add(cb);
+        return () => transcriptCallbacks.delete(cb);
+      },
+      onModeChange: (cb): Unsubscribe => {
+        modeCallbacks.add(cb);
+        return () => modeCallbacks.delete(cb);
+      },
+      onVolumeChange: (cb): Unsubscribe => {
+        volumeCallbacks.add(cb);
+        return () => volumeCallbacks.delete(cb);
+      },
+    };
+
+    if (options.abortSignal) {
+      options.abortSignal.addEventListener("abort", () => {
+        conversation?.endSession(); cleanup();
+      }, { once: true });
+    }
+
+    const doConnect = async () => {
+      if (disposed) return;
+      try {
+        conversation = await VoiceConversation.startSession({
+          agentId: this._agentId,
+          onConnect: () => updateStatus({ type: "running" }),
+          onDisconnect: () => { updateStatus({ type: "ended", reason: "finished" }); cleanup(); },
+          onError: (msg) => { updateStatus({ type: "ended", reason: "error", error: new Error(msg) }); cleanup(); },
+          onModeChange: ({ mode }) => {
+            if (disposed) return;
+            for (const cb of modeCallbacks) cb(mode === "speaking" ? "speaking" : "listening");
+          },
+          onMessage: (msg) => {
+            if (disposed) return;
+            for (const cb of transcriptCallbacks) {
+              cb({ role: msg.role === "user" ? "user" : "assistant", text: msg.message, isFinal: true });
+            }
+          },
+        });
+      } catch (error) {
+        updateStatus({ type: "ended", reason: "error", error }); cleanup();
+      }
+    };
+
+    doConnect();
+    return session;
+  }
+}
+```
+
+### Usage
+
+```tsx
+import { ElevenLabsVoiceAdapter } from "@/lib/elevenlabs-voice-adapter";
+
+const runtime = useChatRuntime({
+  adapters: {
+    voice: new ElevenLabsVoiceAdapter({
+      agentId: process.env.NEXT_PUBLIC_ELEVENLABS_AGENT_ID!,
+    }),
+  },
+});
+```
+
+## Example: LiveKit
+
+[LiveKit](https://livekit.io/) provides realtime voice via WebRTC rooms with transcription support.
+
+### Install Dependencies
+
+```bash
+npm install livekit-client
+```
+
+### Usage
+
+```tsx
+import { LiveKitVoiceAdapter } from "@/lib/livekit-voice-adapter";
+
+const runtime = useChatRuntime({
+  adapters: {
+    voice: new LiveKitVoiceAdapter({
+      url: process.env.NEXT_PUBLIC_LIVEKIT_URL!,
+      token: async () => {
+        const res = await fetch("/api/livekit-token", { method: "POST" });
+        const { token } = await res.json();
+        return token;
+      },
+    }),
+  },
+});
+```
+
+See the `examples/with-livekit` directory in the repository for a complete implementation including the adapter and token endpoint.
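The `starting → running → ended` lifecycle described in the guide above can be modeled as a small forward-only state check. This is a sketch under stated assumptions: `VoiceStatus` mirrors the documented status shape, and `canTransition` is a hypothetical helper, not part of `@assistant-ui/react`.

```typescript
// Sketch of the documented lifecycle (assumed helper, not a library API):
// a session status may only move forward through starting -> running -> ended.
type VoiceStatus =
  | { type: "starting" }
  | { type: "running" }
  | { type: "ended"; reason: "finished" | "cancelled" | "error"; error?: unknown };

// Numeric phase order used to reject backwards or repeated transitions.
const phaseOrder = { starting: 0, running: 1, ended: 2 } as const;

function canTransition(from: VoiceStatus, to: VoiceStatus): boolean {
  return phaseOrder[to.type] > phaseOrder[from.type];
}
```

Note that a direct `starting → ended` jump is permitted, which matches the ElevenLabs adapter above reporting `{ type: "ended", reason: "error" }` when `startSession` throws before the session ever runs.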
@@ -37,6 +37,29 @@ Custom data events that can be rendered as UI at their position in the message s
 
 You can use either the explicit format `{ type: "data", name: "workflow", data: {...} }` or the shorthand `data-*` prefixed format `{ type: "data-workflow", data: {...} }`. The prefixed format is automatically converted to a `DataMessagePart` (stripping the `data-` prefix as the `name`). Unknown message part types that don't match any built-in type are skipped with a console warning.
 
+#### Streaming Data Parts
+
+Data parts can be sent from the server using `appendData()` on the stream controller:
+
+```ts
+controller.appendData({
+  type: "data",
+  name: "chart",
+  data: { labels: ["Q1", "Q2"], values: [10, 20] },
+});
+```
+
+Register a renderer with `makeAssistantDataUI` to display data parts:
+
+```tsx
+import { makeAssistantDataUI } from "@assistant-ui/react";
+
+const ChartUI = makeAssistantDataUI({
+  name: "chart",
+  render: ({ data }) => <MyChart data={data} />,
+});
+```
+
 ## Anatomy
 
 ```tsx
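The `data-*` shorthand conversion described in this hunk can be illustrated with a small normalizer. This is a hypothetical sketch of the documented behavior, not the package's actual implementation, and `normalizeDataPart` is an invented name.

```typescript
// Hypothetical normalizer for the documented shorthand: "data-*" types are
// converted to the explicit form, stripping the "data-" prefix as the name.
type DataMessagePart = { type: "data"; name: string; data: unknown };

function normalizeDataPart(part: {
  type: string;
  name?: string;
  data: unknown;
}): DataMessagePart | null {
  // Explicit format passes through unchanged.
  if (part.type === "data" && part.name) {
    return { type: "data", name: part.name, data: part.data };
  }
  // Shorthand format: "data-workflow" becomes name "workflow".
  if (part.type.startsWith("data-")) {
    return { type: "data", name: part.type.slice("data-".length), data: part.data };
  }
  return null; // unknown part types are skipped
}
```

Under this sketch, `{ type: "data-workflow", data: {...} }` and `{ type: "data", name: "workflow", data: {...} }` normalize to the same part, which is why a single `makeAssistantDataUI` renderer handles both.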
@@ -102,19 +102,42 @@ import { ComposerPrimitive } from "@assistant-ui/react";
 
 The primitive's behavior (keyboard handling, disabled state, form submission) is merged onto your element. Your styles, your component, primitive wiring.
 
-### Unstable
+### Unstable Trigger Popovers
 
-Composer
+Composer includes an unstable **trigger popover** system for character-triggered popovers (e.g. `@` for mentions, `/` for slash commands). Multiple triggers can coexist on the same input.
+
+**Mentions** (`@` trigger) — insert directive text into the message:
 
 ```tsx
-<ComposerPrimitive.Unstable_MentionRoot>
+<ComposerPrimitive.Unstable_MentionRoot adapter={mentionAdapter}>
   <ComposerPrimitive.Root>
-    <
+    <ComposerPrimitive.Input placeholder="Type @ to mention..." />
     <ComposerPrimitive.Unstable_MentionPopover />
   </ComposerPrimitive.Root>
 </ComposerPrimitive.Unstable_MentionRoot>
 ```
 
+**Slash commands** (`/` trigger) — execute an action and clear the command text:
+
+```tsx
+<ComposerPrimitive.Unstable_SlashCommandRoot adapter={slashAdapter}>
+  <ComposerPrimitive.Root>
+    <ComposerPrimitive.Input placeholder="Type / for commands..." />
+    <ComposerPrimitive.Unstable_TriggerPopoverPopover>
+      <ComposerPrimitive.Unstable_TriggerPopoverItems>
+        {(items) => items.map(item => (
+          <ComposerPrimitive.Unstable_TriggerPopoverItem key={item.id} item={item}>
+            {item.label}
+          </ComposerPrimitive.Unstable_TriggerPopoverItem>
+        ))}
+      </ComposerPrimitive.Unstable_TriggerPopoverItems>
+    </ComposerPrimitive.Unstable_TriggerPopoverPopover>
+  </ComposerPrimitive.Root>
+</ComposerPrimitive.Unstable_SlashCommandRoot>
+```
+
+See the [Mentions guide](/docs/guides/mentions) and [Slash Commands guide](/docs/guides/slash-commands) for full documentation.
+
 ## Parts
 
 ### Root
@@ -97,6 +97,7 @@ const runtime = useA2ARuntime({ client });
 | Option | Type | Description |
 | --- | --- | --- |
 | `baseUrl` | `string` | Base URL of the A2A server |
+| `basePath` | `string` | Optional path prefix for API endpoints (e.g. `"/v1"`). Does not affect agent card discovery |
 | `headers` | `Record<string, string>` or `() => Record<string, string>` | Static or dynamic headers (e.g. for auth tokens) |
 | `tenant` | `string` | Tenant ID for multi-tenant servers (prepended to URL paths) |
 | `extensions` | `string[]` | Extension URIs to negotiate via `A2A-Extensions` header |
@@ -124,7 +125,10 @@ const runtime = useA2ARuntime({ client });
 | --- | --- | --- |
 | `client` | `A2AClient` | Pre-built A2A client instance (provide this OR `baseUrl`) |
 | `baseUrl` | `string` | A2A server URL (creates a client automatically) |
+| `basePath` | `string` | Path prefix for API endpoints (e.g. `"/v1"`). Only used with `baseUrl` |
+| `tenant` | `string` | Tenant ID for multi-tenant servers. Only used with `baseUrl` |
 | `headers` | see above | Headers for the auto-created client |
+| `extensions` | `string[]` | Extension URIs to negotiate. Only used with `baseUrl` |
 | `contextId` | `string` | Initial context ID for the conversation |
 | `configuration` | `A2ASendMessageConfiguration` | Default send message configuration |
 | `onError` | `(error: Error) => void` | Error callback |
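The new `basePath` and `tenant` options imply a URL-building step before requests are sent. The sketch below shows one plausible composition, assuming (as the option tables state) that `tenant` is prepended to URL paths; the tenant-before-basePath ordering and `buildEndpointUrl` itself are assumptions for illustration, not the client's actual code.

```typescript
// Assumed URL composition (not the A2A client's actual code): tenant is
// prepended to the path, followed by the optional basePath prefix.
function buildEndpointUrl(opts: {
  baseUrl: string;
  tenant?: string;
  basePath?: string;
}): string {
  const url = new URL(opts.baseUrl);
  const segments = [opts.tenant, opts.basePath]
    .filter((s): s is string => !!s)
    .map((s) => s.replace(/^\/|\/$/g, "")); // trim surrounding slashes
  url.pathname = [url.pathname.replace(/\/$/, ""), ...segments].join("/") || "/";
  return url.toString();
}
```

For example, `baseUrl: "https://a2a.example.com"`, `tenant: "acme"`, `basePath: "/v1"` would yield `https://a2a.example.com/acme/v1` under these assumptions.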
@@ -111,9 +111,9 @@ Use `messageMetadata` in your Next.js route to attach `usage` from `finish` and
 import { streamText, convertToModelMessages } from "ai";
 import { frontendTools } from "@assistant-ui/react-ai-sdk";
 export async function POST(req: Request) {
-  const { messages, tools,
+  const { messages, tools, config } = await req.json();
   const result = streamText({
-    model: getModel(modelName),
+    model: getModel(config?.modelName),
     messages: await convertToModelMessages(messages),
     tools: frontendTools(tools),
   });
@@ -72,11 +72,15 @@ The backend endpoint receives POST requests with the following payload:
   tools?: Record<string, ToolJSONSchema>, // Tool definitions keyed by tool name
   threadId: string | null, // The current thread/conversation identifier (null for new threads)
   parentId?: string | null, // The parent message ID (included when editing or branching)
-
-
+  callSettings?: { maxTokens, temperature, topP, presencePenalty, frequencyPenalty, seed },
+  config?: { apiKey, baseUrl, modelName },
 }
 ```
 
+<Callout type="warn">
+**Migrating from top-level fields:** `callSettings` and `config` fields were previously spread at the top level of the request body (e.g. `body.modelName` instead of `body.config.modelName`). Both formats are currently sent for backward compatibility, but the top-level fields are deprecated and will be removed in a future version. Update your backend to read from the nested objects.
+</Callout>
+
 The backend endpoint returns a stream of state snapshots using the `assistant-stream` library ([npm](https://www.npmjs.com/package/assistant-stream) / [PyPI](https://pypi.org/project/assistant-stream/)).
 
 ### Handling Commands
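The migration callout above implies that backends should prefer the nested objects while still tolerating the deprecated top-level spread during the transition. A minimal sketch of that fallback logic, using hypothetical helper names (the field names come from the documented payload):

```typescript
// Hypothetical backend helpers for the migration: prefer the nested
// objects, fall back to the deprecated top-level fields.
type TransportBody = {
  config?: { apiKey?: string; baseUrl?: string; modelName?: string };
  callSettings?: { maxTokens?: number; temperature?: number };
  // deprecated top-level spread of the same fields:
  modelName?: string;
  temperature?: number;
};

function readModelName(body: TransportBody): string | undefined {
  return body.config?.modelName ?? body.modelName;
}

function readTemperature(body: TransportBody): number | undefined {
  return body.callSettings?.temperature ?? body.temperature;
}
```

Once the deprecated top-level fields are removed from the payload, the fallback branch simply never fires, so this pattern needs no further changes on the backend.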
@@ -33,9 +33,9 @@ Use `messageMetadata` in your Next.js route to attach `usage` from `finish` and
 import { streamText, convertToModelMessages } from "ai";
 
 export async function POST(req: Request) {
-  const { messages,
+  const { messages, config } = await req.json();
   const result = streamText({
-    model: getModel(modelName),
+    model: getModel(config?.modelName),
     messages: await convertToModelMessages(messages),
   });
   return result.toUIMessageStreamResponse({
@@ -50,7 +50,7 @@ const ComposerAction: FC = () => {
 
 ### Read the model in your API route
 
-The selected model
+The selected model's `id` is sent as `config.modelName` in the request body:
 
 ```tsx title="app/api/chat/route.ts" {2,5}
 export async function POST(req: Request) {
@@ -0,0 +1,172 @@
+---
+title: Voice
+description: Realtime voice session controls with connect, mute, and status indicator.
+---
+
+import { VoiceSample, VoiceVariantsSample, VoiceStatesSample } from "@/components/docs/samples/voice";
+
+A control bar for realtime bidirectional voice sessions with an animated orb indicator. Works with any `RealtimeVoiceAdapter` (LiveKit, ElevenLabs, etc.).
+
+<VoiceSample />
+
+## Getting Started
+
+<Steps>
+<Step>
+
+### Add the component
+
+<InstallCommand shadcn={["voice"]} />
+
+This adds `/components/assistant-ui/voice.tsx` to your project, which you can adjust as needed.
+
+</Step>
+<Step>
+
+### Configure a voice adapter
+
+Pass a `RealtimeVoiceAdapter` to your runtime. See the [Realtime Voice guide](/docs/guides/voice) for details.
+
+```tsx
+const runtime = useChatRuntime({
+  adapters: {
+    voice: myVoiceAdapter,
+  },
+});
+```
+
+</Step>
+<Step>
+
+### Use in your application
+
+```tsx title="app/page.tsx"
+import { Thread } from "@/components/assistant-ui/thread";
+import { VoiceControl } from "@/components/assistant-ui/voice";
+import { AuiIf } from "@assistant-ui/react";
+
+export default function Chat() {
+  return (
+    <div className="flex h-full flex-col">
+      <AuiIf condition={(s) => s.thread.capabilities.voice}>
+        <VoiceControl />
+      </AuiIf>
+      <div className="min-h-0 flex-1">
+        <Thread />
+      </div>
+    </div>
+  );
+}
+```
+
+</Step>
+</Steps>
+
+## Anatomy
+
+The `VoiceControl` component is built with the following hooks and conditionals:
+
+```tsx
+import { AuiIf, useVoiceState, useVoiceControls } from "@assistant-ui/react";
+
+<div className="aui-voice-control">
+  <VoiceIndicator />
+
+  <AuiIf condition={(s) => s.thread.voice == null}>
+    <VoiceConnectButton />
+  </AuiIf>
+
+  <AuiIf condition={(s) => s.thread.voice?.status.type === "running"}>
+    <VoiceMuteButton />
+    <VoiceDisconnectButton />
+  </AuiIf>
+</div>
+```
+
+## Examples
+
+### Conditionally show voice controls
+
+Only render when a voice adapter is configured:
+
+```tsx
+<AuiIf condition={(s) => s.thread.capabilities.voice}>
+  <VoiceControl />
+</AuiIf>
+```
+
+### Voice toggle in composer
+
+Add a compact voice toggle button inside the composer action area:
+
+```tsx
+function ComposerVoiceToggle() {
+  const voiceState = useVoiceState();
+  const { connect, disconnect } = useVoiceControls();
+  const isActive =
+    voiceState?.status.type === "running" ||
+    voiceState?.status.type === "starting";
+
+  return (
+    <AuiIf condition={(s) => s.thread.capabilities.voice}>
+      <button
+        type="button"
+        onClick={() => (isActive ? disconnect() : connect())}
+        aria-label={isActive ? "End voice" : "Start voice"}
+      >
+        {isActive ? <PhoneOffIcon /> : <PhoneIcon />}
+      </button>
+    </AuiIf>
+  );
+}
+```
+
+### Custom indicator colors
+
+Override the indicator styles by targeting the `aui-voice-indicator` class:
+
+```css
+.aui-voice-indicator {
+  /* Override active color */
+  &.bg-green-500 {
+    background: theme("colors.blue.500");
+  }
+}
+```
+
+## States
+
+The `VoiceOrb` responds to five voice session states with distinct animations:
+
+<VoiceStatesSample />
+
+## Variants
+
+Four built-in color palettes. Size is controlled via `className`.
+
+<VoiceVariantsSample />
+
+## Sub-components
+
+| Component | Description |
+|-----------|-------------|
+| `VoiceOrb` | Animated orb visual with gradient, glow, and ripple effects. Accepts `state` and `variant` props. |
+| `VoiceControl` | Control bar with status dot, connect/disconnect, and mute/unmute buttons. |
+| `VoiceConnectButton` | Calls `connect()`. Shown when no session is active. |
+| `VoiceMuteButton` | Toggles `mute()`/`unmute()`. Shown when session is running. |
+| `VoiceDisconnectButton` | Calls `disconnect()`. Shown when session is active. |
+
+All sub-components are exported and can be used independently for custom layouts.
+
+## State Selectors
+
+Use these with `AuiIf` or `useAuiState` to build custom voice UI:
+
+| Selector | Type | Description |
+|----------|------|-------------|
+| `s.thread.capabilities.voice` | `boolean` | Whether a voice adapter is configured |
+| `s.thread.voice` | `VoiceSessionState \| undefined` | `undefined` when no session |
+| `s.thread.voice?.status.type` | `"starting" \| "running" \| "ended"` | Session phase |
+| `s.thread.voice?.isMuted` | `boolean` | Microphone muted state |
+| `s.thread.voice?.mode` | `"listening" \| "speaking"` | Who is currently active (user or agent) |
+| `useVoiceVolume()` | `number` | Real-time audio level (0–1), separate from main state to avoid 20Hz re-renders |