@convai/web-sdk 1.0.0 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +783 -670
- package/dist/core/BlendshapeQueue.d.ts +43 -14
- package/dist/core/BlendshapeQueue.d.ts.map +1 -1
- package/dist/core/BlendshapeQueue.js +69 -26
- package/dist/core/BlendshapeQueue.js.map +1 -1
- package/dist/core/ConvaiClient.d.ts +6 -0
- package/dist/core/ConvaiClient.d.ts.map +1 -1
- package/dist/core/ConvaiClient.js +18 -4
- package/dist/core/ConvaiClient.js.map +1 -1
- package/dist/core/MessageHandler.d.ts.map +1 -1
- package/dist/core/MessageHandler.js +5 -6
- package/dist/core/MessageHandler.js.map +1 -1
- package/dist/core/types.d.ts +5 -0
- package/dist/core/types.d.ts.map +1 -1
- package/dist/types/index.d.ts +5 -0
- package/dist/types/index.d.ts.map +1 -1
- package/dist/vanilla/ConvaiWidget.d.ts.map +1 -1
- package/dist/vanilla/ConvaiWidget.js +19 -18
- package/dist/vanilla/ConvaiWidget.js.map +1 -1
- package/package.json +6 -4
package/README.md
CHANGED
# @convai/web-sdk

`@convai/web-sdk` is a TypeScript-first SDK for building real-time conversational AI experiences with Convai characters on the web. It supports:

- React applications with ready-to-use hooks and widget components
- Vanilla TypeScript/JavaScript applications with a framework-agnostic widget
- Direct core client usage for custom UIs and advanced integrations
- Optional lipsync data pipelines for ARKit and MetaHuman rigs

This document is written as a complete implementation reference, from first setup to production hardening.

## Table of Contents

- [1. Package Entry Points](#1-package-entry-points)
- [2. Installation and Requirements](#2-installation-and-requirements)
- [3. Credentials and Environment Setup](#3-credentials-and-environment-setup)
- [4. Quick Start](#4-quick-start)
- [5. Build a Chatbot from Scratch](#5-build-a-chatbot-from-scratch)
- [6. Core Concepts and Lifecycle](#6-core-concepts-and-lifecycle)
- [7. Configuration Reference (`ConvaiConfig`)](#7-configuration-reference-convaiconfig)
- [8. Core API Reference (`ConvaiClient`)](#8-core-api-reference-convaiclient)
- [9. Message Semantics and Turn Completion](#9-message-semantics-and-turn-completion)
- [10. React API Reference](#10-react-api-reference)
- [11. Vanilla API Reference](#11-vanilla-api-reference)
- [12. Audio Integration Best Practices (Vanilla TypeScript)](#12-audio-integration-best-practices-vanilla-typescript)
- [13. Lipsync Helpers Reference](#13-lipsync-helpers-reference)
- [14. Error Handling and Reliability Patterns](#14-error-handling-and-reliability-patterns)
- [15. Troubleshooting](#15-troubleshooting)
- [16. Production Readiness Checklist](#16-production-readiness-checklist)
- [17. Examples](#17-examples)
- [18. License](#18-license)

## 1. Package Entry Points

The SDK is published with multiple entry points for different integration styles.

### `@convai/web-sdk` (default)

Primary exports:

- `useConvaiClient`
- `ConvaiWidget`
- `useCharacterInfo`
- `useLocalCameraTrack`
- `ConvaiClient`
- `AudioRenderer` (re-export of LiveKit `RoomAudioRenderer` for React usage)
- `AudioContext` (re-export of LiveKit `RoomContext`)
- Core types re-exported from `core/types`:
  - `AudioSettings`
  - `ConvaiConfig`
  - `ChatMessage`
  - `ConvaiClientState`
  - `AudioControls`
  - `VideoControls`
  - `ScreenShareControls`
  - `IConvaiClient`
- All exports from `@convai/web-sdk/lipsync-helpers`
- Type exports for latency models:
  - `LatencyMonitor` (type)
  - `LatencyMeasurement`
  - `LatencyStats`

### `@convai/web-sdk/react`

React-focused entry point, equivalent to the default React API surface.

### `@convai/web-sdk/vanilla`

Vanilla/browser-focused exports:

- `ConvaiClient`
- `AudioRenderer` (vanilla audio playback manager)
- `createConvaiWidget`
- `destroyConvaiWidget`
- Types:
  - `VanillaWidget`
  - `VanillaWidgetOptions`
  - `IConvaiClient`
  - `ConvaiConfig`
  - `ConvaiClientState`
  - `ChatMessage`

### `@convai/web-sdk/core`

Framework-agnostic low-level API:

- `ConvaiClient`
- `AudioManager`
- `VideoManager`
- `ScreenShareManager`
- `MessageHandler`
- `BlendshapeQueue`
- `EventEmitter`
- Type alias: `ConvaiClientType`
- All core types from `core/types`
- `TurnStats` type

### `@convai/web-sdk/lipsync-helpers`

Dedicated helpers for blendshape formats and queue creation. Full function list is in [Section 13](#13-lipsync-helpers-reference).

## 2. Installation and Requirements

### Install

```bash
npm install @convai/web-sdk
```

or

```bash
pnpm add @convai/web-sdk
```

or

```bash
yarn add @convai/web-sdk
```

### Runtime requirements

- Modern browser with WebRTC support
- Secure context (`https://` or `http://localhost`) for microphone/camera/screen access

### Peer dependencies

If you are using React APIs:

- `react` `^18 || ^19`
- `react-dom` `^18 || ^19`

## 3. Credentials and Environment Setup

### Obtain credentials

1. Create/login to your Convai account.
2. Create or select a character.
3. Copy:
   - API key
   - Character ID

### Store credentials in environment variables

Do not hardcode credentials in source files.

```bash
# .env.local (example)
VITE_CONVAI_API_KEY=<YOUR_CONVAI_API_KEY>
VITE_CONVAI_CHARACTER_ID=<YOUR_CONVAI_CHARACTER_ID>
VITE_CONVAI_API_URL=<OPTIONAL_CONVAI_BASE_URL>
```

Use these values through your build system (`import.meta.env`, process env injection, or server-provided config).

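Since a missing key typically surfaces only later as an opaque connection failure, it can help to fail fast at startup. A minimal sketch, assuming Vite-style env injection; the `requireEnv` helper is illustrative, not an SDK export:

```typescript
// Hypothetical helper: read a required key from an env-like record and
// throw immediately instead of passing `undefined` into the SDK config.
function requireEnv(env: Record<string, string | undefined>, key: string): string {
  const value = env[key];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${key}`);
  }
  return value;
}

// With Vite you would pass `import.meta.env` here; a plain object is shown
// so the sketch is self-contained.
const apiKey = requireEnv({ VITE_CONVAI_API_KEY: "abc123" }, "VITE_CONVAI_API_KEY");
```

The same guard works for `VITE_CONVAI_CHARACTER_ID` and any optional base URL you choose to require in a given environment.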
## 4. Quick Start

### React

```tsx
import { ConvaiWidget, useConvaiClient } from "@convai/web-sdk";

export function App() {
  const convaiClient = useConvaiClient({
    apiKey: import.meta.env.VITE_CONVAI_API_KEY,
    characterId: import.meta.env.VITE_CONVAI_CHARACTER_ID,
    enableVideo: false,
    startWithAudioOn: false,
  });

  return <ConvaiWidget convaiClient={convaiClient} />;
}
```

### Vanilla TypeScript

```ts
import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";

const client = new ConvaiClient({
  apiKey: import.meta.env.VITE_CONVAI_API_KEY,
  characterId: import.meta.env.VITE_CONVAI_CHARACTER_ID,
  enableVideo: false,
});

const widget = createConvaiWidget(document.body, {
  convaiClient: client,
  defaultVoiceMode: true,
  onConnect: () => console.log("Connected"),
  onDisconnect: () => console.log("Disconnected"),
});

window.addEventListener("beforeunload", () => {
  widget.destroy();
  void client.disconnect().catch(() => undefined);
});
```

## 5. Build a Chatbot from Scratch

This section shows an end-to-end approach you can use in production.

### A) React from scratch (custom connection flow)

#### Step 1: Create the client

```tsx
import { useConvaiClient } from "@convai/web-sdk";

const convaiClient = useConvaiClient({
  apiKey: import.meta.env.VITE_CONVAI_API_KEY,
  characterId: import.meta.env.VITE_CONVAI_CHARACTER_ID,
  endUserId: "<UNIQUE_END_USER_ID>",
  enableVideo: true,
  startWithVideoOn: false,
  startWithAudioOn: false,
  ttsEnabled: true,
  enableLipsync: true,
  blendshapeConfig: {
    format: "arkit",
    frames_buffer_duration: 0.5,
  },
});
```

#### Step 2: Connect from a user gesture with error handling

```tsx
async function handleConnect() {
  try {
    await convaiClient.connect();
  } catch (error) {
    console.error("Connection failed:", error);
  }
}
```

#### Step 3: Wait for readiness before sending text

```tsx
function sendMessage(text: string) {
  if (!convaiClient.state.isConnected || !convaiClient.isBotReady) return;
  convaiClient.sendUserTextMessage(text);
}
```

#### Step 4: Render the widget or your own UI

```tsx
import { ConvaiWidget } from "@convai/web-sdk";

<ConvaiWidget
  convaiClient={convaiClient}
  showVideo={true}
  showScreenShare={true}
  defaultVoiceMode={true}
/>;
```

#### Step 5: Subscribe to lifecycle events

```tsx
useEffect(() => {
  const unsubError = convaiClient.on("error", (error) => {
    console.error("Convai error:", error);
  });

  const unsubState = convaiClient.on("stateChange", (state) => {
    console.log("State:", state.agentState);
  });

  const unsubMessages = convaiClient.on("messagesChange", (messages) => {
    console.log("Messages:", messages.length);
  });

  return () => {
    unsubError();
    unsubState();
    unsubMessages();
  };
}, [convaiClient]);
```

#### Step 6: Clean up on unmount

```tsx
useEffect(() => {
  return () => {
    void convaiClient.disconnect().catch(() => undefined);
  };
}, [convaiClient]);
```

### B) Vanilla TypeScript from scratch (widget + custom hooks)

#### Step 1: Initialize client and widget

```ts
import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";

const client = new ConvaiClient({
  apiKey: "<YOUR_CONVAI_API_KEY>",
  characterId: "<YOUR_CHARACTER_ID>",
  endUserId: "<UNIQUE_END_USER_ID>",
  enableVideo: true,
  startWithVideoOn: false,
});

const widget = createConvaiWidget(document.body, {
  convaiClient: client,
  showVideo: true,
  showScreenShare: true,
  defaultVoiceMode: true,
  onConnect: () => console.log("Connected"),
  onDisconnect: () => console.log("Disconnected"),
  onMessage: (message) => console.log("Message:", message),
});
```

#### Step 2: Add explicit error listeners

```ts
const unsubError = client.on("error", (error) => {
  console.error("SDK error:", error);
});
```

#### Step 3: Add guarded send utility

```ts
function safeSend(text: string) {
  if (!text.trim()) return;
  if (!client.state.isConnected) return;
  if (!client.isBotReady) return;
  client.sendUserTextMessage(text);
}
```

#### Step 4: Cleanup

```ts
function destroy() {
  unsubError();
  widget.destroy();
  void client.disconnect().catch(() => undefined);
}
```

### C) Custom UI (framework-agnostic)

If you are not using the built-in widget:

- Use `ConvaiClient` from `@convai/web-sdk/core`
- Use `AudioRenderer` from `@convai/web-sdk/vanilla` for remote audio playback
- Render your own UI based on `stateChange`, `messagesChange`, and control manager events

```ts
import { ConvaiClient } from "@convai/web-sdk/core";
import { AudioRenderer } from "@convai/web-sdk/vanilla";

const client = new ConvaiClient({
  apiKey: "<YOUR_CONVAI_API_KEY>",
  characterId: "<YOUR_CHARACTER_ID>",
});

await client.connect();
const audioRenderer = new AudioRenderer(client.room);

// ... your custom UI logic

audioRenderer.destroy();
await client.disconnect();
```

## 6. Core Concepts and Lifecycle

### Connection lifecycle

1. `connect()` starts room and transport setup.
2. `state.isConnected` becomes true when the room connection is established.
3. The `botReady` event indicates the character is ready for interaction.
4. Messages stream through data events into `chatMessages`.
5. Audio/video/screen-share are managed through dedicated control managers.
6. `disconnect()` tears down the session.

### Activity lifecycle

- `state.isThinking`: the model is generating a response
- `state.isSpeaking`: the model's audio is currently playing
- `state.agentState`: combined high-level state (`disconnected | connected | listening | thinking | speaking`)

### Widget lifecycle

Both React and vanilla widgets:

- auto-connect on first user interaction
- expose optional callbacks/events
- need explicit cleanup on app teardown

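Because `connect()` resolving and `botReady` firing are distinct steps, text sent immediately after `connect()` can race readiness. One way to serialize them is a small promise helper over the documented `on()` unsubscribe pattern. A sketch: `waitForBotReady` and the minimal `EmitterLike` interface are illustrative, not SDK exports.

```typescript
// Minimal shape of the subscribe API this helper relies on:
// `on` returns an unsubscribe function, as the client's `on` does.
interface EmitterLike {
  on(event: string, callback: (...args: any[]) => void): () => void;
}

// Resolve once "botReady" fires, or reject after a timeout.
function waitForBotReady(client: EmitterLike, timeoutMs = 10_000): Promise<void> {
  return new Promise((resolve, reject) => {
    let unsubscribe: () => void = () => {};
    const timer = setTimeout(() => {
      unsubscribe();
      reject(new Error("Timed out waiting for botReady"));
    }, timeoutMs);
    unsubscribe = client.on("botReady", () => {
      clearTimeout(timer);
      unsubscribe();
      resolve();
    });
  });
}

// Usage sketch:
// await client.connect();
// await waitForBotReady(client);
// client.sendUserTextMessage("Hello");
```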
## 7. Configuration Reference (`ConvaiConfig`)

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `apiKey` | `string` | Yes | - | Convai API key. |
| `characterId` | `string` | Yes | - | Target character identifier. |
| `endUserId` | `string` | No | `undefined` | Stable end-user identity for memory/analytics continuity. |
| `url` | `string` | No | SDK internal default | Convai base URL. Set explicitly if your deployment requires a specific environment. |
| `enableVideo` | `boolean` | No | `false` | Enables video-capable connection type. |
| `startWithVideoOn` | `boolean` | No | `false` | Auto-enable camera after connect. |
| `startWithAudioOn` | `boolean` | No | `false` | Auto-enable microphone after connect. |
| `ttsEnabled` | `boolean` | No | `true` | Enables model text-to-speech output. |
| `enableLipsync` | `boolean` | No | `false` | Requests blendshape payloads for facial animation. |
| `blendshapeConfig.format` | `"arkit" \| "mha"` | No | `"mha"` | Blendshape output format. |
| `blendshapeConfig.frames_buffer_duration` | `number` | No | server-defined | Buffering hint for audio/blendshape synchronization. |
| `actionConfig` | object | No | `undefined` | Action and scene-context metadata (actions, characters, objects, attention object). |

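If downstream code needs concrete booleans rather than optionals, the documented defaults from the table can be applied up front. A sketch; `withConfigDefaults` and the trimmed `BasicConvaiConfig` interface are illustrative, not SDK exports:

```typescript
// Partial mirror of ConvaiConfig, limited to the boolean flags in the table.
interface BasicConvaiConfig {
  apiKey: string;
  characterId: string;
  enableVideo?: boolean;
  startWithVideoOn?: boolean;
  startWithAudioOn?: boolean;
  ttsEnabled?: boolean;
  enableLipsync?: boolean;
}

// Fill in the documented defaults (enableVideo/startWith*/enableLipsync: false,
// ttsEnabled: true) so the rest of the app never branches on `undefined`.
function withConfigDefaults(config: BasicConvaiConfig): Required<BasicConvaiConfig> {
  return {
    enableVideo: false,
    startWithVideoOn: false,
    startWithAudioOn: false,
    ttsEnabled: true,
    enableLipsync: false,
    ...config,
  } as Required<BasicConvaiConfig>;
}

const resolved = withConfigDefaults({ apiKey: "<KEY>", characterId: "<ID>" });
```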
## 8. Core API Reference (`ConvaiClient`)

Import:

```ts
import { ConvaiClient } from "@convai/web-sdk/core";
```

### Constructor

```ts
new ConvaiClient(config?: ConvaiConfig)
```

### Properties

| Property | Type | Description |
| --- | --- | --- |
| `state` | `ConvaiClientState` | Real-time connection/activity state. |
| `connectionType` | `"audio" \| "video" \| null` | Active transport mode. |
| `apiKey` | `string \| null` | Active API key. |
| `characterId` | `string \| null` | Active character ID. |
| `speakerId` | `string \| null` | Resolved speaker identity. |
| `room` | `Room` | Internal LiveKit room instance. |
| `chatMessages` | `ChatMessage[]` | Conversation message store. |
| `userTranscription` | `string` | Current non-final voice transcription text. |
| `characterSessionId` | `string \| null` | Server conversation session identifier. |
| `isBotReady` | `boolean` | Character readiness flag. |
| `audioControls` | `AudioControls` | Microphone controls. |
| `videoControls` | `VideoControls` | Camera controls. |
| `screenShareControls` | `ScreenShareControls` | Screen sharing controls. |
| `latencyMonitor` | `LatencyMonitor` | Measurement manager used by the client for turn latency. |
| `blendshapeQueue` | `BlendshapeQueue` | Buffer queue for lipsync frames. |
| `conversationSessionId` | `number` | Incremental turn session ID used by conversation events. |

### Methods

| Method | Signature | Description |
| --- | --- | --- |
| `connect` | `(config?: ConvaiConfig) => Promise<void>` | Connect using passed config or stored config. |
| `disconnect` | `() => Promise<void>` | Disconnect and release session resources. |
| `reconnect` | `() => Promise<void>` | Disconnect then connect with stored config. |
| `resetSession` | `() => void` | Reset character session and clear conversation history. |
| `sendUserTextMessage` | `(text: string) => void` | Send text message to character. |
| `sendTriggerMessage` | `(triggerName?: string, triggerMessage?: string) => void` | Send trigger/action message. |
| `sendInterruptMessage` | `() => void` | Interrupt current bot response. |
| `updateTemplateKeys` | `(templateKeys: Record<string, string>) => void` | Update runtime template variables. |
| `updateDynamicInfo` | `(dynamicInfo: { text: string }) => void` | Update dynamic context text. |
| `toggleTts` | `(enabled: boolean) => void` | Enable/disable TTS for subsequent responses. |
| `on` | `(event: string, callback: (...args: any[]) => void) => () => void` | Subscribe to an event and receive an unsubscribe function. |
| `off` | `(event: string, callback: (...args: any[]) => void) => void` | Remove a specific listener. |

### Common event names and payloads

| Event | Payload | Notes |
| --- | --- | --- |
| `stateChange` | `ConvaiClientState` | Any state transition. |
| `message` | `ChatMessage` | Last message whenever `messagesChange` updates. |
| `messagesChange` | `ChatMessage[]` | Full message array update. |
| `userTranscriptionChange` | `string` | Live user speech text updates. |
| `speakingChange` | `boolean` | Bot speaking started/stopped. |
| `botReady` | `void` | Bot can now receive interaction. |
| `connect` | `void` | Client connected. |
| `disconnect` | `void` | Client disconnected. |
| `error` | `unknown` | Error surfaced by client. |
| `conversationStart` | `{ sessionId, userMessage, timestamp }` | Conversation turn started. |
| `turnEnd` | `{ sessionId, duration, timestamp }` | Server signaled end of turn (bot stopped speaking). Same semantics as `BlendshapeQueue.hasReceivedEndSignal()`. |
| `blendshapes` | `unknown` | Incoming blendshape chunk payload. |
| `blendshapeStatsReceived` | `unknown` | End-of-turn blendshape stats marker. |
| `latencyMeasurement` | `LatencyMeasurement` | Latency sample from monitor. |

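Since every `on()` call returns its own unsubscribe function, wiring several events from the table quickly accumulates teardown state. One pattern is to collect the unsubscribers and dispose them together. A sketch; `collectUnsubs` is an illustrative helper, not an SDK export:

```typescript
type Unsubscribe = () => void;

// Combine several unsubscribe functions into a single disposer.
function collectUnsubs(...unsubs: Unsubscribe[]): Unsubscribe {
  return () => {
    for (const unsub of unsubs) unsub();
  };
}

// Usage against any emitter whose `on` returns an unsubscribe function.
// A fake `on` is used here so the sketch is self-contained; with the SDK
// you would pass e.g. client.on("stateChange", handler) instead.
const calls: string[] = [];
const fakeOn = (event: string): Unsubscribe => {
  return () => calls.push(`unsub:${event}`);
};

const dispose = collectUnsubs(
  fakeOn("stateChange"),
  fakeOn("messagesChange"),
  fakeOn("error"),
);
dispose(); // every collected listener is removed in one call
```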
### Control manager APIs

#### `audioControls`

Properties:

- `isAudioEnabled`
- `isAudioMuted`
- `audioLevel`

Methods:

- `enableAudio()`
- `disableAudio()`
- `muteAudio()`
- `unmuteAudio()`
- `toggleAudio()`
- `setAudioDevice(deviceId)`
- `getAudioDevices()`
- `startAudioLevelMonitoring()`
- `stopAudioLevelMonitoring()`
- `on("audioStateChange", callback)`
- `off("audioStateChange", callback)`

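`audioLevel` together with `startAudioLevelMonitoring()` is enough to drive a simple mic meter. A sketch of the UI-side math only; the helper is illustrative and assumes `audioLevel` is normalized to the 0 to 1 range, which you should verify against your SDK version:

```typescript
// Map a 0-1 audio level to a whole-number meter percentage, clamped so
// out-of-range samples never break the UI.
function levelToPercent(level: number): number {
  const clamped = Math.min(1, Math.max(0, level));
  return Math.round(clamped * 100);
}

levelToPercent(0.25); // 25
levelToPercent(1.7);  // clamped to 100
```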
#### `videoControls`

Properties:

- `isVideoEnabled`
- `isVideoHidden`

Methods:

- `enableVideo()`
- `disableVideo()`
- `hideVideo()`
- `showVideo()`
- `toggleVideo()`
- `setVideoDevice(deviceId)`
- `getVideoDevices()`
- `setVideoQuality("low" | "medium" | "high")`
- `on("videoStateChange", callback)`
- `off("videoStateChange", callback)`

#### `screenShareControls`

Properties:

- `isScreenShareEnabled`
- `isScreenShareActive`

Methods:

- `enableScreenShare()`
- `disableScreenShare()`
- `toggleScreenShare()`
- `enableScreenShareWithAudio()`
- `getScreenShareTracks()`
- `on("screenShareStateChange", callback)`
- `off("screenShareStateChange", callback)`

### `latencyMonitor` API (via `client.latencyMonitor`)

`latencyMonitor` is available on every client instance for instrumentation and diagnostics.

Methods:

- `enable()`
- `disable()`
- `startMeasurement(type, userMessage?)`
- `endMeasurement()`
- `cancelMeasurement()`
- `getMeasurements()`
- `getLatestMeasurement()`
- `getStats()`
- `clear()`
- `getPendingMeasurement()`
- `on("measurement", callback)`
- `on("measurementsChange", callback)`
- `on("enabledChange", callback)`

Properties:

- `enabled`
- `hasPendingMeasurement`
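The list returned by `getMeasurements()` can be reduced to simple summary stats client-side. A hedged sketch, assuming each measurement carries a numeric `durationMs` field (the field name is an assumption; `getStats()` may already provide equivalent aggregates):

```typescript
// Illustrative helper (not an SDK export): summarize latency measurements.
// The `durationMs` field name is an assumption about the measurement shape.
function summarizeLatency(measurements: { durationMs: number }[]) {
  if (measurements.length === 0) return { count: 0, avgMs: 0, maxMs: 0 };
  const total = measurements.reduce((sum, m) => sum + m.durationMs, 0);
  const maxMs = Math.max(...measurements.map((m) => m.durationMs));
  return { count: measurements.length, avgMs: total / measurements.length, maxMs };
}
```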
### Advanced core classes (`@convai/web-sdk/core`)

These are exported for advanced and custom pipeline use-cases.

#### `BlendshapeQueue`

Buffer for lipsync frames. Use `isConversationEnded()` for definitive end-of-conversation: it returns true only when the server has sent `blendshape-turn-stats` and either all expected frames have been consumed or the queue is empty (handles dropped frames). Use `hasReceivedEndSignal()` when you only need to know that the server signaled end (e.g. to keep playing remaining frames).

Methods:

- `addChunk(blendshapes)`
- `getFrames()`
- `getFrame(index)`
- `getFrameWithAlpha(index)`
- `consumeFrames(count)`
- `hasFrames()`
- `isConversationActive()`
- `isConversationEnded()` — true when server signaled end and playback is complete (all frames consumed or queue empty)
- `hasReceivedEndSignal()` — true when server sent `blendshape-turn-stats` (does not check frame consumption)
- `startConversation()`
- `startBotSpeaking()`
- `stopBotSpeaking()`
- `isBotSpeaking()`
- `endConversation(stats?)`
- `interrupt()`
- `getTurnStats()`
- `getFramesConsumed()`
- `getTimeLeftMs()`
- `isAllFramesConsumed()`
- `reset()`
- `getFrameAtTime(elapsedTime)`
- `getDebugInfo()`

Properties:

- `length`
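A hedged sketch of the consume-style API above: drain whatever frames are buffered and hand each one to an apply callback. The queue parameter is typed structurally and mocked in spirit; in the SDK you would pass the real `BlendshapeQueue`, and the frame payload shape is an assumption:

```typescript
// Illustrative helper (not an SDK export): drain buffered frames one at a
// time via hasFrames()/consumeFrames() and apply each to the rig.
function drainFrames(
  queue: {
    hasFrames: () => boolean;
    consumeFrames: (count: number) => number[][];
  },
  applyFrame: (frame: number[]) => void,
): number {
  let applied = 0;
  while (queue.hasFrames()) {
    const [frame] = queue.consumeFrames(1);
    if (!frame) break; // defensive: queue drained between checks
    applyFrame(frame);
    applied++;
  }
  return applied;
}
```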
#### `MessageHandler`

Methods:

- `getBlendshapeQueue()`
- `setLatencyMonitor(monitor)`
- `getChatMessages()`
- `getUserTranscription()`
- `getIsBotResponding()`
- `getIsSpeaking()`
- `setRoom(room)`
- `reset()`
- inherited event APIs from `EventEmitter`:
  - `on(event, callback)`
  - `off(event, callback)`

#### `EventEmitter`

Methods:

- `on(event, callback)`
- `off(event, callback)`
- `emit(event, ...args)`
- `removeAllListeners()`
- `listenerCount(event)`
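To illustrate the contract behind the method list above, here is a minimal emitter with the same method names. This is a sketch for understanding the expected behavior, not the SDK implementation:

```typescript
// Illustrative mini-emitter (not the SDK class): same method names as the
// `EventEmitter` documented above.
type Listener = (...args: unknown[]) => void;

class MiniEmitter {
  private listeners = new Map<string, Set<Listener>>();

  on(event: string, callback: Listener): void {
    if (!this.listeners.has(event)) this.listeners.set(event, new Set());
    this.listeners.get(event)!.add(callback);
  }

  off(event: string, callback: Listener): void {
    this.listeners.get(event)?.delete(callback);
  }

  emit(event: string, ...args: unknown[]): void {
    this.listeners.get(event)?.forEach((cb) => cb(...args));
  }

  removeAllListeners(): void {
    this.listeners.clear();
  }

  listenerCount(event: string): number {
    return this.listeners.get(event)?.size ?? 0;
  }
}
```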
## 9. Message Semantics and Turn Completion

### `ChatMessage` model

`ChatMessage` includes:

- `id`
- `type`
- `content`
- `timestamp`
- `isFinal?`

Supported message `type` values include:

- `user`
- `convai`
- `emotion`
- `behavior-tree`
- `action`
- `user-transcription`
- `bot-llm-text`
- `bot-emotion`
- `user-llm-text`
- `interrupt-bot`

### Important: current `isFinal` behavior

In the current implementation, `isFinal` is used as an accumulation flag:

- `isFinal: true` means the message is still in a mutable/streaming state
- `isFinal: false` means the message has been finalized

This naming is counterintuitive. Treat `isFinal` as an internal streaming marker rather than a turn-completion signal.
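Given the inverted semantics above, a hedged sketch of selecting only finalized character messages from a chat history (`finalizedBotMessages` and the trimmed `Msg` shape are illustrative, not SDK exports):

```typescript
// Illustrative helper: isFinal === false means "finalized" in the current
// implementation, so that is the value we filter on.
type Msg = { type: string; content: string; isFinal?: boolean };

function finalizedBotMessages(messages: Msg[]): Msg[] {
  return messages.filter((m) => m.type === "convai" && m.isFinal === false);
}
```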
### Recommended way to detect response completion

Use events instead of `isFinal`:

- `turnEnd` for the server turn-end signal (bot stopped speaking; same as `hasReceivedEndSignal()`)
- `blendshapeStatsReceived` as additional completion marker when lipsync/animation output is enabled

When driving lipsync from `BlendshapeQueue`, use `blendshapeQueue.isConversationEnded()` for definitive end-of-conversation. It returns true only when the server has signaled end and playback is complete (all expected frames consumed or queue empty). Call `blendshapeQueue.reset()` and your `onConversationEnded` when it becomes true. Use `hasReceivedEndSignal()` only when you need the raw server signal (e.g. to decide whether to keep playing remaining frames).

Example:

```ts
type TurnCompletionOptions = {
  expectBlendshapes: boolean;
  onComplete: () => void;
};

function subscribeTurnCompletion(client: any, options: TurnCompletionOptions) {
  let spokenDone = false;
  let animationDone = !options.expectBlendshapes;

  const invokeOnCompleteIfReady = () => {
    if (spokenDone && animationDone) {
      options.onComplete();
    }
  };

  const unsubTurnEnd = client.on("turnEnd", () => {
    spokenDone = true;
    invokeOnCompleteIfReady();
  });

  const unsubBlendshapeStats = client.on("blendshapeStatsReceived", () => {
    animationDone = true;
    invokeOnCompleteIfReady();
  });

  return () => {
    unsubTurnEnd();
    unsubBlendshapeStats();
  };
}
```

When to use both signals: you only need to wait for both `turnEnd` and `blendshapeStatsReceived` when you use lipsync. Set `expectBlendshapes: false` when you do not use facial animation; then `animationDone` is effectively always true and completion runs as soon as `turnEnd` fires. Set `expectBlendshapes: true` when you drive lipsync from the queue; speech and blendshape data are separate pipelines and can finish in either order, so waiting for both ensures "turn complete" means both speech and animation are done before you run `onComplete`.
## 10. React API Reference

### `useConvaiClient(config?)`

Import:

```tsx
import { useConvaiClient } from "@convai/web-sdk";
```

Returns full `IConvaiClient` plus React-friendly reactive fields:

- `activity`
- `chatMessages`
- `isAudioMuted`
- `isVideoEnabled`
- `isScreenShareActive`
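As a small illustration of consuming these reactive fields outside JSX, the helper below collapses them into a single status label for rendering. `statusLabel` is a hypothetical helper, not an SDK export:

```typescript
// Illustrative helper: derive one UI status string from the reactive fields
// returned by useConvaiClient (field names follow the list above).
function statusLabel(state: {
  activity?: string;
  isAudioMuted: boolean;
  isScreenShareActive: boolean;
}): string {
  if (state.isScreenShareActive) return "sharing screen";
  if (state.isAudioMuted) return "muted";
  return state.activity ?? "idle";
}
```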
### `ConvaiWidget`

Import:

```tsx
import { ConvaiWidget } from "@convai/web-sdk";
```

Props:

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `convaiClient` | `IConvaiClient & { activity?: string; isAudioMuted: boolean; isVideoEnabled: boolean; isScreenShareActive: boolean }` | required | Client instance returned by `useConvaiClient`. |
| `showVideo` | `boolean` | `true` | Shows video toggle in settings if connection type is video. |
| `showScreenShare` | `boolean` | `true` | Shows screen-share toggle in settings if connection type is video. |
| `defaultVoiceMode` | `boolean` | `true` | Opens in voice mode on first widget session. |
### `useCharacterInfo(characterId?, apiKey?)`

Returns:

- `name`
- `image`
- `isLoading`
- `error`

### `useLocalCameraTrack()`

Returns a LiveKit `TrackReferenceOrPlaceholder` for local camera rendering in custom React video UIs.

### React audio utility exports

- `AudioRenderer` from LiveKit React components
- `AudioContext` from LiveKit React components
## 11. Vanilla API Reference

### `createConvaiWidget(container, options)`

```ts
import { createConvaiWidget } from "@convai/web-sdk/vanilla";
```

Creates and mounts a complete floating chat widget.

#### `VanillaWidgetOptions`

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `convaiClient` | `IConvaiClient` | No\* | - | Existing client instance. |
| `apiKey` | `string` | No\* | - | Used only when `convaiClient` is not provided. |
| `characterId` | `string` | No\* | - | Used only when `convaiClient` is not provided. |
| `enableVideo` | `boolean` | No | `false` | Used for auto-created client only. |
| `startWithVideoOn` | `boolean` | No | `false` | Used for auto-created client only. |
| `enableLipsync` | `boolean` | No | `false` | Used for auto-created client only. |
| `blendshapeConfig` | object | No | `undefined` | Used for auto-created client only. |
| `showVideo` | `boolean` | No | `true` | Show video toggle in settings. |
| `showScreenShare` | `boolean` | No | `true` | Show screen-share toggle in settings. |
| `defaultVoiceMode` | `boolean` | No | `true` | Start in voice mode when opened. |
| `onConnect` | `() => void` | No | `undefined` | Called when widget client connects. |
| `onDisconnect` | `() => void` | No | `undefined` | Called when widget client disconnects. |
| `onMessage` | `(message: ChatMessage) => void` | No | `undefined` | Called on each message change with latest message. |

\* You must provide either `convaiClient` OR both `apiKey` and `characterId`.

#### Return type: `VanillaWidget`

- `element`: root widget element
- `client`: resolved client instance
- `destroy()`: unmount and cleanup
- `update?`: optional future extension field
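Because the widget owns DOM and media resources, mounting it twice or destroying it twice is a common bug. A hedged sketch of an idempotent mount/unmount wrapper follows; `makeWidgetManager` is illustrative, and `create` stands in for a closure over `createConvaiWidget(container, options)`:

```typescript
// Illustrative helper (not an SDK export): guarantee at most one widget
// instance and make destroy idempotent.
function makeWidgetManager<W extends { destroy: () => void }>(create: () => W) {
  let current: W | null = null;
  return {
    mount(): W {
      if (!current) current = create(); // reuse existing instance
      return current;
    },
    unmount(): void {
      if (current) {
        current.destroy();
        current = null;
      }
    },
    get active(): boolean {
      return current !== null;
    },
  };
}
```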
### `destroyConvaiWidget(widget)`

Convenience wrapper that calls `widget.destroy()`.

### `AudioRenderer` (vanilla)

`AudioRenderer` listens to LiveKit room track subscriptions and auto-attaches remote audio tracks to hidden `audio` elements for playback. Use one renderer instance per active room session and destroy it during cleanup.
## 12. Audio Integration Best Practices (Vanilla TypeScript)

This section provides the recommended integration for stable audio playback.

### Recommended reference implementation

```ts
import { ConvaiClient } from "@convai/web-sdk/core";
import { AudioRenderer } from "@convai/web-sdk/vanilla";

class ConvaiAudioSession {
  private client: ConvaiClient;
  private audioRenderer: AudioRenderer | null = null;
  private audioContext: AudioContext | null = null;

  constructor() {
    this.client = new ConvaiClient({
      apiKey: "<YOUR_CONVAI_API_KEY>",
      characterId: "<YOUR_CHARACTER_ID>",
      ttsEnabled: true,
    });
  }

  async connectFromUserGesture(): Promise<void> {
    await this.client.connect();

    // Required for remote audio playback wiring.
    this.audioRenderer = new AudioRenderer(this.client.room);

    // Optional: if your app performs WebAudio analysis/effects.
    if (!this.audioContext) {
      this.audioContext = new AudioContext();
    }
    if (this.audioContext.state === "suspended") {
      await this.audioContext.resume();
    }
  }

  async disconnect(): Promise<void> {
    if (this.audioRenderer) {
      this.audioRenderer.destroy();
      this.audioRenderer = null;
    }

    await this.client.disconnect();

    if (this.audioContext && this.audioContext.state !== "closed") {
      await this.audioContext.close();
      this.audioContext = null;
    }
  }
}
```
### AudioContext guidance

- Create/resume `AudioContext` only after user interaction in browsers that enforce autoplay policy.
- If you are not processing audio with WebAudio, you do not need a custom `AudioContext`; `AudioRenderer` is enough for playback.
- Always close your custom `AudioContext` in teardown.

### Lifecycle and cleanup order

Recommended shutdown order:

1. Stop UI input loops/listeners
2. Destroy `AudioRenderer`
3. Disconnect `ConvaiClient`
4. Close custom `AudioContext` (if created)
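The shutdown order above can be sketched as a sequential runner that keeps going past individual failures, so one failing step does not leak the remaining resources (`runTeardown` is an illustrative helper, not an SDK export):

```typescript
// Illustrative helper: run teardown steps in order, collecting errors instead
// of aborting, so later cleanup steps still execute.
async function runTeardown(
  steps: Array<() => Promise<void> | void>,
): Promise<string[]> {
  const errors: string[] = [];
  for (const step of steps) {
    try {
      await step();
    } catch (err) {
      errors.push(String(err));
    }
  }
  return errors;
}
```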
### Common failure modes and fixes

| Symptom | Likely cause | Recommended action |
| --- | --- | --- |
| No AI audio output | `AudioRenderer` not created | Instantiate `new AudioRenderer(client.room)` immediately after successful connect. |
| No AI audio output | Browser autoplay restriction | Trigger connect/playback from a user click, and resume `AudioContext` if suspended. |
| No AI audio output | TTS disabled | Ensure `ttsEnabled` is true for sessions that need speech output. |
| Intermittent playback | Multiple renderers or stale room instance | Use one renderer per session and always destroy old renderer before reconnecting. |
| Works once, then silent | Incomplete cleanup on previous session | Destroy renderer and disconnect client on teardown; avoid reusing invalid room state. |
| Random muted behavior | App-side muting of remote tracks | Verify no custom code is muting remote publications or media elements. |
## 13. Error Handling and Reliability Patterns

### Pattern 1: Centralized SDK error handling

```ts
const unsubError = client.on("error", (error) => {
  console.error("Convai SDK error:", error);
  // Optional: route to telemetry/monitoring
});
```

### Pattern 2: Retry connect with exponential backoff

```ts
async function connectWithRetry(
  client: any,
  attempts = 3,
  initialDelayMs = 500,
): Promise<void> {
  let delay = initialDelayMs;

  for (let i = 1; i <= attempts; i++) {
    try {
      await client.connect();
      return;
    } catch (error) {
      if (i === attempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}
```

### Pattern 3: Safe send guard

```ts
function safeSendText(client: any, text: string) {
  if (!text.trim()) return;
  if (!client.state.isConnected) return;
  if (!client.isBotReady) return;
  client.sendUserTextMessage(text);
}
```

### Pattern 4: Protect media control calls

```ts
async function safeToggleMic(client: any) {
  try {
    await client.audioControls.toggleAudio();
  } catch (error) {
    console.error("Failed to toggle microphone:", error);
  }
}
```

### Pattern 5: Always unsubscribe listeners

```ts
const unsubscribers = [
  client.on("stateChange", () => {}),
  client.on("messagesChange", () => {}),
];

function cleanupListeners() {
  for (const unsub of unsubscribers) unsub();
}
```
## 14. Troubleshooting

### Connection issues

- Verify API key and character ID are valid.
- Ensure requests are allowed from your browser origin.
- Set `url` explicitly if your environment does not use the SDK default endpoint.
- Listen to `error` and inspect failed network calls in browser devtools.

### `connect()` succeeds but bot never responds

- Wait for `botReady` before sending messages.
- Confirm `ttsEnabled` and message flow are configured as expected.
- Verify `messagesChange` receives content.
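Waiting for `botReady` can be wrapped in a promise with a timeout so a stalled session fails loudly instead of hanging. A hedged sketch, assuming the `on(event, cb)` returns-an-unsubscriber style used elsewhere in this SDK:

```typescript
// Illustrative helper (not an SDK export): resolve on `botReady`, reject on
// timeout, and unsubscribe in both paths.
function waitForBotReady(
  client: { on: (event: string, cb: () => void) => () => void },
  timeoutMs = 10_000,
): Promise<void> {
  return new Promise((resolve, reject) => {
    let unsub: () => void = () => {};
    const timer = setTimeout(() => {
      unsub();
      reject(new Error("botReady timed out"));
    }, timeoutMs);
    unsub = client.on("botReady", () => {
      clearTimeout(timer);
      unsub();
      resolve();
    });
  });
}
```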
### Audio does not play

- Ensure an `AudioRenderer` is active for the connected room (vanilla custom UI).
- Ensure playback starts from a user-gesture path to satisfy autoplay policies.
- Confirm no custom muting code is muting remote tracks.

### Microphone does not capture user voice

- Ensure the app is served over a secure context (HTTPS or localhost).
- Verify browser microphone permission.
- Handle permission errors from `audioControls.enableAudio()`/`unmuteAudio()`.
### Video or screen share controls fail

- Use `enableVideo: true` in config when you need video capabilities.
- Screen share can be blocked by browser policy or user denial.
- Wrap calls in `try/catch` and provide fallback UX.

### Lipsync appears out of sync or distorted

- Validate that the blendshape format (`arkit` vs `mha`) matches your rig's expectations.
- Tune `frames_buffer_duration` so at least some duration of blendshape frames is buffered before the audio starts playing.
- Align lipsync start and stop with the queue: start playback when the bot starts speaking (`isBotSpeaking()` is true) and treat the turn as finished when `blendshapeQueue.isConversationEnded()` is true, before resetting.
- Drive blendshape application from a single loop (e.g. `requestAnimationFrame`) and advance the frame index at 60fps so mouth movement stays in sync with audio.
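The 60fps advice above amounts to mapping elapsed time to a frame index, rather than incrementing a counter per `requestAnimationFrame` callback (callbacks can fire unevenly). A minimal sketch:

```typescript
// Illustrative helper: map elapsed wall-clock milliseconds to a fixed-rate
// frame index so animation stays aligned with audio regardless of rAF jitter.
function frameIndexAt(elapsedMs: number, fps = 60): number {
  return Math.floor((elapsedMs / 1000) * fps);
}
```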
## 16. Examples

Repository examples:

- `examples/react-three-fiber`
- `examples/three-vanilla`
- `examples/README.md` for example-level setup notes