@convai/web-sdk 1.0.0-beta.1 → 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,230 +1,308 @@
1
1
  # @convai/web-sdk
2
2
 
3
- JavaScript/TypeScript SDK for Convai AI voice assistants. Build voice-powered AI interactions for web applications with real-time audio/video streaming. Supports both React and Vanilla JavaScript/TypeScript.
4
-
5
- ## Installation
3
+ `@convai/web-sdk` is a TypeScript-first SDK for building real-time conversational AI experiences with Convai characters on the web. It supports:
4
+
5
+ - React applications with ready-to-use hooks and widget components
6
+ - Vanilla TypeScript/JavaScript applications with a framework-agnostic widget
7
+ - Direct core client usage for custom UIs and advanced integrations
8
+ - Optional lipsync data pipelines for ARKit and MetaHuman rigs
9
+
10
+ This document is written as a complete implementation reference, from first setup to production hardening.
11
+
12
+ ## Table of Contents
13
+
14
+ - [1. Package Entry Points](#1-package-entry-points)
15
+ - [2. Installation and Requirements](#2-installation-and-requirements)
16
+ - [3. Credentials and Environment Setup](#3-credentials-and-environment-setup)
17
+ - [4. Quick Start](#4-quick-start)
18
+ - [5. Build a Chatbot from Scratch](#5-build-a-chatbot-from-scratch)
19
+ - [6. Core Concepts and Lifecycle](#6-core-concepts-and-lifecycle)
20
+ - [7. Configuration Reference (`ConvaiConfig`)](#7-configuration-reference-convaiconfig)
21
+ - [8. Core API Reference (`ConvaiClient`)](#8-core-api-reference-convaiclient)
22
+ - [9. Message Semantics and Turn Completion](#9-message-semantics-and-turn-completion)
23
+ - [10. React API Reference](#10-react-api-reference)
24
+ - [11. Vanilla API Reference](#11-vanilla-api-reference)
25
+ - [12. Audio Integration Best Practices (Vanilla TypeScript)](#12-audio-integration-best-practices-vanilla-typescript)
26
+ - [13. Lipsync Helpers Reference](#13-lipsync-helpers-reference)
27
+ - [14. Error Handling and Reliability Patterns](#14-error-handling-and-reliability-patterns)
28
+ - [15. Troubleshooting](#15-troubleshooting)
29
+ - [16. Production Readiness Checklist](#16-production-readiness-checklist)
30
+ - [17. Examples](#17-examples)
31
+ - [18. License](#18-license)
32
+
33
+ ## 1. Package Entry Points
34
+
35
+ The SDK is published with multiple entry points for different integration styles.
36
+
37
+ ### `@convai/web-sdk` (default)
38
+
39
+ Primary exports:
40
+
41
+ - `useConvaiClient`
42
+ - `ConvaiWidget`
43
+ - `useCharacterInfo`
44
+ - `useLocalCameraTrack`
45
+ - `ConvaiClient`
46
+ - `AudioRenderer` (re-export of LiveKit `RoomAudioRenderer` for React usage)
47
+ - `AudioContext` (re-export of LiveKit `RoomContext`)
48
+ - Core types re-exported from `core/types`:
49
+ - `AudioSettings`
50
+ - `ConvaiConfig`
51
+ - `ChatMessage`
52
+ - `ConvaiClientState`
53
+ - `AudioControls`
54
+ - `VideoControls`
55
+ - `ScreenShareControls`
56
+ - `IConvaiClient`
57
+ - All exports from `@convai/web-sdk/lipsync-helpers`
58
+ - Type exports for latency models:
59
+ - `LatencyMonitor` (type)
60
+ - `LatencyMeasurement`
61
+ - `LatencyStats`
62
+
63
+ ### `@convai/web-sdk/react`
64
+
65
+ React-focused entry point, equivalent to the default React API surface.
66
+
67
+ ### `@convai/web-sdk/vanilla`
68
+
69
+ Vanilla/browser-focused exports:
70
+
71
+ - `ConvaiClient`
72
+ - `AudioRenderer` (vanilla audio playback manager)
73
+ - `createConvaiWidget`
74
+ - `destroyConvaiWidget`
75
+ - Types:
76
+ - `VanillaWidget`
77
+ - `VanillaWidgetOptions`
78
+ - `IConvaiClient`
79
+ - `ConvaiConfig`
80
+ - `ConvaiClientState`
81
+ - `ChatMessage`
82
+
83
+ ### `@convai/web-sdk/core`
84
+
85
+ Framework-agnostic low-level API:
86
+
87
+ - `ConvaiClient`
88
+ - `AudioManager`
89
+ - `VideoManager`
90
+ - `ScreenShareManager`
91
+ - `MessageHandler`
92
+ - `BlendshapeQueue`
93
+ - `EventEmitter`
94
+ - Type alias: `ConvaiClientType`
95
+ - All core types from `core/types`
96
+ - `TurnStats` type
97
+
98
+ ### `@convai/web-sdk/lipsync-helpers`
99
+
100
+ Dedicated helpers for blendshape formats and queue creation. Full function list is in [Section 13](#13-lipsync-helpers-reference).
101
+
102
+ ## 2. Installation and Requirements
103
+
104
+ ### Install
6
105
 
7
106
  ```bash
8
107
  npm install @convai/web-sdk
9
108
  ```
10
109
 
11
- ## Basic Setup
12
-
13
- ### React
14
-
15
- ```tsx
16
- import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk";
17
-
18
- function App() {
19
- const convaiClient = useConvaiClient({
20
- apiKey: "your-api-key",
21
- characterId: "your-character-id",
22
- });
110
+ or
23
111
 
24
- return <ConvaiWidget convaiClient={convaiClient} />;
25
- }
112
+ ```bash
113
+ pnpm add @convai/web-sdk
26
114
  ```
27
115
 
28
- ### Vanilla TypeScript
29
-
30
- ```typescript
31
- import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
32
-
33
- // Create client with configuration
34
- const client = new ConvaiClient({
35
- apiKey: "your-api-key",
36
- characterId: "your-character-id",
37
- });
38
-
39
- // Create widget - auto-connects on first user click
40
- const widget = createConvaiWidget(document.body, {
41
- convaiClient: client,
42
- });
116
+ or
43
117
 
44
- // Cleanup when done
45
- widget.destroy();
118
+ ```bash
119
+ yarn add @convai/web-sdk
46
120
  ```
47
121
 
48
- ## Exports
122
+ ### Runtime requirements
49
123
 
50
- ### React Exports (`@convai/web-sdk` or `@convai/web-sdk/react`)
124
+ - Modern browser with WebRTC support
125
+ - Secure context (`https://` or `http://localhost`) for microphone/camera/screen access
51
126
 
52
- **Components:**
127
+ ### Peer dependencies
53
128
 
54
- - `ConvaiWidget` - Main chat widget component
129
+ If you are using React APIs:
55
130
 
56
- **Hooks:**
131
+ - `react` `^18 || ^19`
132
+ - `react-dom` `^18 || ^19`
57
133
 
58
- - `useConvaiClient(config?)` - Main client hook
59
- - `useCharacterInfo(characterId, apiKey)` - Fetch character metadata
60
- - `useLocalCameraTrack()` - Get local camera track
134
+ ## 3. Credentials and Environment Setup
61
135
 
62
- **Core Client:**
136
+ ### Obtain credentials
63
137
 
64
- - `ConvaiClient` - Core client class
138
+ 1. Create or log in to your Convai account.
139
+ 2. Create or select a character.
140
+ 3. Copy:
141
+ - API key
142
+ - Character ID
65
143
 
66
- **Types:**
144
+ ### Store credentials in environment variables
67
145
 
68
- - `ConvaiConfig` - Configuration interface
69
- - `ConvaiClientState` - Client state interface
70
- - `ChatMessage` - Message interface
71
- - `IConvaiClient` - Client interface
72
- - `AudioControls` - Audio control interface
73
- - `VideoControls` - Video control interface
74
- - `ScreenShareControls` - Screen share control interface
146
+ Do not hardcode credentials in source files.
75
147
 
76
- **Components:**
148
+ ```bash
149
+ # .env.local (example)
150
+ VITE_CONVAI_API_KEY=<YOUR_CONVAI_API_KEY>
151
+ VITE_CONVAI_CHARACTER_ID=<YOUR_CONVAI_CHARACTER_ID>
152
+ VITE_CONVAI_API_URL=<OPTIONAL_CONVAI_BASE_URL>
153
+ ```
77
154
 
78
- - `AudioRenderer` - Audio playback component
79
- - `AudioContext` - Audio context provider
155
+ Use these values through your build system (`import.meta.env`, process env injection, or server-provided config).
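A small guard helps fail fast when a required variable is missing at startup. `requireEnv` below is a hypothetical helper, not part of the SDK; only the variable names come from the `.env.local` example above.

```typescript
// Hypothetical startup guard: resolve a required config value or fail loudly.
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Vite-style usage (sketch):
// const apiKey = requireEnv(import.meta.env, "VITE_CONVAI_API_KEY");
// const characterId = requireEnv(import.meta.env, "VITE_CONVAI_CHARACTER_ID");
```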
80
156
 
81
- ### Vanilla Exports (`@convai/web-sdk/vanilla`)
157
+ ## 4. Quick Start
82
158
 
83
- **Functions:**
159
+ ### React
84
160
 
85
- - `createConvaiWidget(container, options)` - Create widget instance
86
- - `destroyConvaiWidget(widget)` - Destroy widget instance
161
+ ```tsx
162
+ import { ConvaiWidget, useConvaiClient } from "@convai/web-sdk";
87
163
 
88
- **Classes:**
164
+ export function App() {
165
+ const convaiClient = useConvaiClient({
166
+ apiKey: import.meta.env.VITE_CONVAI_API_KEY,
167
+ characterId: import.meta.env.VITE_CONVAI_CHARACTER_ID,
168
+ enableVideo: false,
169
+ startWithAudioOn: false,
170
+ });
89
171
 
90
- - `ConvaiClient` - Core client class
91
- - `AudioRenderer` - Audio playback handler
172
+ return <ConvaiWidget convaiClient={convaiClient} />;
173
+ }
174
+ ```
92
175
 
93
- **Types:**
176
+ ### Vanilla TypeScript
94
177
 
95
- - `VanillaWidget` - Widget instance interface
96
- - `VanillaWidgetOptions` - Widget options interface
97
- - `IConvaiClient` - Client interface
98
- - `ConvaiConfig` - Configuration interface
99
- - `ConvaiClientState` - Client state interface
100
- - `ChatMessage` - Message interface
178
+ ```ts
179
+ import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
101
180
 
102
- ### Core Exports (`@convai/web-sdk/core`)
181
+ const client = new ConvaiClient({
182
+ apiKey: import.meta.env.VITE_CONVAI_API_KEY,
183
+ characterId: import.meta.env.VITE_CONVAI_CHARACTER_ID,
184
+ enableVideo: false,
185
+ });
103
186
 
104
- **Classes:**
187
+ const widget = createConvaiWidget(document.body, {
188
+ convaiClient: client,
189
+ defaultVoiceMode: true,
190
+ onConnect: () => console.log("Connected"),
191
+ onDisconnect: () => console.log("Disconnected"),
192
+ });
105
193
 
106
- - `ConvaiClient` - Main client class
107
- - `AudioManager` - Audio management
108
- - `VideoManager` - Video management
109
- - `ScreenShareManager` - Screen share management
110
- - `MessageHandler` - Message handling
111
- - `EventEmitter` - Event emitter base class
194
+ window.addEventListener("beforeunload", () => {
195
+ widget.destroy();
196
+ void client.disconnect().catch(() => undefined);
197
+ });
198
+ ```
112
199
 
113
- **Types:**
200
+ ## 5. Build a Chatbot from Scratch
114
201
 
115
- - All types from React/Vanilla exports
116
- - `ConvaiClientType` - Type alias for ConvaiClient
202
+ This section shows an end-to-end approach you can use in production.
117
203
 
118
- ## Props and Configuration
204
+ ### A) React from scratch (custom connection flow)
119
205
 
120
- ### ConvaiWidget Props (React)
206
+ #### Step 1: Create the client
121
207
 
122
208
  ```tsx
123
- interface ConvaiWidgetProps {
124
- /** Convai client instance (required) */
125
- convaiClient: IConvaiClient & {
126
- activity?: string;
127
- isAudioMuted: boolean;
128
- isVideoEnabled: boolean;
129
- isScreenShareActive: boolean;
130
- };
131
- /** Show video toggle button in settings (default: true) */
132
- showVideo?: boolean;
133
- /** Show screen share toggle button in settings (default: true) */
134
- showScreenShare?: boolean;
135
- }
209
+ import { useConvaiClient } from "@convai/web-sdk";
210
+
211
+ const convaiClient = useConvaiClient({
212
+ apiKey: import.meta.env.VITE_CONVAI_API_KEY,
213
+ characterId: import.meta.env.VITE_CONVAI_CHARACTER_ID,
214
+ endUserId: "<UNIQUE_END_USER_ID>",
215
+ enableVideo: true,
216
+ startWithVideoOn: false,
217
+ startWithAudioOn: false,
218
+ ttsEnabled: true,
219
+ enableLipsync: true,
220
+ blendshapeConfig: {
221
+ format: "arkit",
222
+ frames_buffer_duration: 0.5,
223
+ },
224
+ });
136
225
  ```
137
226
 
138
- ### createConvaiWidget Options (Vanilla)
227
+ #### Step 2: Connect from a user gesture with error handling
139
228
 
140
- ```typescript
141
- interface VanillaWidgetOptions {
142
- /** Convai client instance (required) */
143
- convaiClient: IConvaiClient & {
144
- activity?: string;
145
- chatMessages: ChatMessage[];
146
- };
147
- /** Show video toggle button in settings (default: true) */
148
- showVideo?: boolean;
149
- /** Show screen share toggle button in settings (default: true) */
150
- showScreenShare?: boolean;
229
+ ```tsx
230
+ async function handleConnect() {
231
+ try {
232
+ await convaiClient.connect();
233
+ } catch (error) {
234
+ console.error("Connection failed:", error);
235
+ }
151
236
  }
152
237
  ```
153
238
 
154
- ### ConvaiConfig
155
-
156
- ```typescript
157
- interface ConvaiConfig {
158
- /** Your Convai API key from convai.com dashboard (required) */
159
- apiKey: string;
160
- /** The Character ID to connect to (required) */
161
- characterId: string;
162
- /**
163
- * End user identifier for speaker management (optional).
164
- * When provided: enables long-term memory and analytics
165
- * When not provided: anonymous mode, no persistent memory
166
- */
167
- endUserId?: string;
168
- /** Custom Convai API URL (optional, defaults to production endpoint) */
169
- url?: string;
170
- /**
171
- * Enable video capability (default: false).
172
- * If true, connection_type will be "video" (supports audio, video, and screenshare).
173
- * If false, connection_type will be "audio" (audio only).
174
- */
175
- enableVideo?: boolean;
176
- /**
177
- * Start with video camera on when connecting (default: false).
178
- * Only works if enableVideo is true.
179
- */
180
- startWithVideoOn?: boolean;
181
- /**
182
- * Start with microphone on when connecting (default: false).
183
- * If false, microphone stays off until user enables it.
184
- */
185
- startWithAudioOn?: boolean;
186
- /** Enable text-to-speech audio generation (default: true) */
187
- ttsEnabled?: boolean;
239
+ #### Step 3: Wait for readiness before sending text
240
+
241
+ ```tsx
242
+ function sendMessage(text: string) {
243
+ if (!convaiClient.state.isConnected || !convaiClient.isBotReady) return;
244
+ convaiClient.sendUserTextMessage(text);
188
245
  }
189
246
  ```
190
247
 
191
- ## Features
248
+ #### Step 4: Render the widget or your own UI
192
249
 
193
- ### Video Enabled Chat
250
+ ```tsx
251
+ import { ConvaiWidget } from "@convai/web-sdk";
194
252
 
195
- To enable video capabilities, set `enableVideo: true` in your configuration. This enables audio, video, and screen sharing.
253
+ <ConvaiWidget
254
+ convaiClient={convaiClient}
255
+ showVideo={true}
256
+ showScreenShare={true}
257
+ defaultVoiceMode={true}
258
+ />;
259
+ ```
196
260
 
197
- **React:**
261
+ #### Step 5: Subscribe to lifecycle events
198
262
 
199
263
  ```tsx
200
- import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk";
264
+ useEffect(() => {
265
+ const unsubError = convaiClient.on("error", (error) => {
266
+ console.error("Convai error:", error);
267
+ });
201
268
 
202
- function App() {
203
- const convaiClient = useConvaiClient({
204
- apiKey: "your-api-key",
205
- characterId: "your-character-id",
206
- enableVideo: true,
207
- startWithVideoOn: false, // Camera off by default
269
+ const unsubState = convaiClient.on("stateChange", (state) => {
270
+ console.log("State:", state.agentState);
208
271
  });
209
272
 
210
- return (
211
- <ConvaiWidget
212
- convaiClient={convaiClient}
213
- showVideo={true}
214
- showScreenShare={true}
215
- />
216
- );
217
- }
273
+ const unsubMessages = convaiClient.on("messagesChange", (messages) => {
274
+ console.log("Messages:", messages.length);
275
+ });
276
+
277
+ return () => {
278
+ unsubError();
279
+ unsubState();
280
+ unsubMessages();
281
+ };
282
+ }, [convaiClient]);
283
+ ```
284
+
285
+ #### Step 6: Clean up on unmount
286
+
287
+ ```tsx
288
+ useEffect(() => {
289
+ return () => {
290
+ void convaiClient.disconnect().catch(() => undefined);
291
+ };
292
+ }, [convaiClient]);
218
293
  ```
219
294
 
220
- **Vanilla:**
295
+ ### B) Vanilla TypeScript from scratch (widget + custom hooks)
221
296
 
222
- ```typescript
297
+ #### Step 1: Initialize client and widget
298
+
299
+ ```ts
223
300
  import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
224
301
 
225
302
  const client = new ConvaiClient({
226
- apiKey: "your-api-key",
227
- characterId: "your-character-id",
303
+ apiKey: "<YOUR_CONVAI_API_KEY>",
304
+ characterId: "<YOUR_CHARACTER_ID>",
305
+ endUserId: "<UNIQUE_END_USER_ID>",
228
306
  enableVideo: true,
229
307
  startWithVideoOn: false,
230
308
  });
@@ -233,624 +311,702 @@ const widget = createConvaiWidget(document.body, {
233
311
  convaiClient: client,
234
312
  showVideo: true,
235
313
  showScreenShare: true,
314
+ defaultVoiceMode: true,
315
+ onConnect: () => console.log("Connected"),
316
+ onDisconnect: () => console.log("Disconnected"),
317
+ onMessage: (message) => console.log("Message:", message),
236
318
  });
237
319
  ```
238
320
 
239
- **Manual Video Controls:**
321
+ #### Step 2: Add explicit error listeners
240
322
 
241
- ```typescript
242
- // Enable video camera
243
- await convaiClient.videoControls.enableVideo();
323
+ ```ts
324
+ const unsubError = client.on("error", (error) => {
325
+ console.error("SDK error:", error);
326
+ });
327
+ ```
244
328
 
245
- // Disable video camera
246
- await convaiClient.videoControls.disableVideo();
329
+ #### Step 3: Add guarded send utility
247
330
 
248
- // Toggle video
249
- await convaiClient.videoControls.toggleVideo();
331
+ ```ts
332
+ function safeSend(text: string) {
333
+ if (!text.trim()) return;
334
+ if (!client.state.isConnected) return;
335
+ if (!client.isBotReady) return;
336
+ client.sendUserTextMessage(text);
337
+ }
338
+ ```
250
339
 
251
- // Check video state
252
- const isVideoEnabled = convaiClient.videoControls.isVideoEnabled;
340
+ #### Step 4: Cleanup
253
341
 
254
- // Set video quality
255
- await convaiClient.videoControls.setVideoQuality("high"); // 'low' | 'medium' | 'high'
342
+ ```ts
343
+ function destroy() {
344
+ unsubError();
345
+ widget.destroy();
346
+ void client.disconnect().catch(() => undefined);
347
+ }
348
+ ```
256
349
 
257
- // Get available video devices
258
- const devices = await convaiClient.videoControls.getVideoDevices();
350
+ ### C) Custom UI (framework-agnostic)
259
351
 
260
- // Set specific video device
261
- await convaiClient.videoControls.setVideoDevice(deviceId);
262
- ```
352
+ If you are not using the built-in widget:
263
353
 
264
- **Screen Sharing:**
354
+ - Use `ConvaiClient` from `@convai/web-sdk/core`
355
+ - Use `AudioRenderer` from `@convai/web-sdk/vanilla` for remote audio playback
356
+ - Render your own UI based on `stateChange`, `messagesChange`, and control manager events
265
357
 
266
- ```typescript
267
- // Enable screen share
268
- await convaiClient.screenShareControls.enableScreenShare();
358
+ ```ts
359
+ import { ConvaiClient } from "@convai/web-sdk/core";
360
+ import { AudioRenderer } from "@convai/web-sdk/vanilla";
269
361
 
270
- // Enable screen share with audio
271
- await convaiClient.screenShareControls.enableScreenShareWithAudio();
362
+ const client = new ConvaiClient({
363
+ apiKey: "<YOUR_CONVAI_API_KEY>",
364
+ characterId: "<YOUR_CHARACTER_ID>",
365
+ });
272
366
 
273
- // Disable screen share
274
- await convaiClient.screenShareControls.disableScreenShare();
367
+ await client.connect();
368
+ const audioRenderer = new AudioRenderer(client.room);
275
369
 
276
- // Toggle screen share
277
- await convaiClient.screenShareControls.toggleScreenShare();
370
+ // ... your custom UI logic
278
371
 
279
- // Check screen share state
280
- const isActive = convaiClient.screenShareControls.isScreenShareActive;
372
+ audioRenderer.destroy();
373
+ await client.disconnect();
281
374
  ```
282
375
 
283
- **Video State Monitoring:**
376
+ ## 6. Core Concepts and Lifecycle
284
377
 
285
- ```typescript
286
- // React
287
- const { isVideoEnabled } = convaiClient;
378
+ ### Connection lifecycle
288
379
 
289
- // Core API (event-based)
290
- convaiClient.videoControls.on("videoStateChange", (state) => {
291
- console.log("Video enabled:", state.isVideoEnabled);
292
- console.log("Video hidden:", state.isVideoHidden);
293
- });
294
- ```
380
+ 1. `connect()` starts room and transport setup.
381
+ 2. `state.isConnected` becomes true when room connection is established.
382
+ 3. `botReady` event indicates the character is ready for interaction.
383
+ 4. Messages stream through data events into `chatMessages`.
384
+ 5. Audio/video/screen-share are managed through dedicated control managers.
385
+ 6. `disconnect()` tears down the session.
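The ordering above can be modeled as a small reducer. This is an illustrative sketch of the documented sequence, not SDK internals; `applyLifecycleEvent` and `canSendText` are hypothetical names.

```typescript
// Sketch: the documented lifecycle as a pure state transition function.
type LifecycleEvent = "connect" | "botReady" | "disconnect";

interface SessionState {
  isConnected: boolean;
  isBotReady: boolean;
}

function applyLifecycleEvent(state: SessionState, event: LifecycleEvent): SessionState {
  switch (event) {
    case "connect":
      // Room connection established; the character is not ready yet.
      return { isConnected: true, isBotReady: false };
    case "botReady":
      // botReady is only meaningful once connected.
      return state.isConnected ? { ...state, isBotReady: true } : state;
    case "disconnect":
      return { isConnected: false, isBotReady: false };
  }
}

// Mirrors the guard used before sendUserTextMessage in the steps above.
function canSendText(state: SessionState): boolean {
  return state.isConnected && state.isBotReady;
}
```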
295
386
 
296
- ### Interruption
387
+ ### Activity lifecycle
297
388
 
298
- Interrupt the character's current response to allow the user to speak immediately.
389
+ - `state.isThinking`: the character is generating a response
390
+ - `state.isSpeaking`: the character's audio response is currently playing
391
+ - `state.agentState`: combined high-level state (`disconnected | connected | listening | thinking | speaking`)
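As an illustration, `agentState` can be thought of as a prioritized projection of the boolean state flags. The precedence below is an assumption made for this sketch; the SDK computes the real value internally.

```typescript
// Sketch: deriving the combined agentState from individual flags.
type AgentState = "disconnected" | "connected" | "listening" | "thinking" | "speaking";

interface ActivityFlags {
  isConnected: boolean;
  isListening: boolean;
  isThinking: boolean;
  isSpeaking: boolean;
}

function deriveAgentState(f: ActivityFlags): AgentState {
  if (!f.isConnected) return "disconnected";
  if (f.isSpeaking) return "speaking"; // assumed precedence: speaking wins
  if (f.isThinking) return "thinking";
  if (f.isListening) return "listening";
  return "connected";
}
```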
299
392
 
300
- **React:**
393
+ ### Widget lifecycle
301
394
 
302
- ```tsx
303
- function ChatInterface() {
304
- const convaiClient = useConvaiClient({
305
- /* config */
306
- });
395
+ Both React and vanilla widgets:
307
396
 
308
- const handleInterrupt = () => {
309
- // Interrupt the bot's current response
310
- convaiClient.sendInterruptMessage();
311
- };
397
+ - auto-connect on first user interaction
398
+ - expose optional callbacks/events
399
+ - need explicit cleanup on app teardown
312
400
 
313
- return <button onClick={handleInterrupt}>Interrupt</button>;
314
- }
315
- ```
401
+ ## 7. Configuration Reference (`ConvaiConfig`)
316
402
 
317
- **Vanilla:**
403
+ | Field | Type | Required | Default | Description |
404
+ | ----------------------------------------- | ------------------ | -------- | -------------------- | ----------------------------------------------------------------------------------- |
405
+ | `apiKey` | `string` | Yes | - | Convai API key. |
406
+ | `characterId` | `string` | Yes | - | Target character identifier. |
407
+ | `endUserId` | `string` | No | `undefined` | Stable end-user identity for memory/analytics continuity. |
408
+ | `url` | `string` | No | SDK internal default | Convai base URL. Set explicitly if your deployment requires a specific environment. |
409
+ | `enableVideo` | `boolean` | No | `false` | Enables video-capable connection type. |
410
+ | `startWithVideoOn` | `boolean` | No | `false` | Auto-enable camera after connect. |
411
+ | `startWithAudioOn` | `boolean` | No | `false` | Auto-enable microphone after connect. |
412
+ | `ttsEnabled` | `boolean` | No | `true` | Enables model text-to-speech output. |
413
+ | `enableLipsync` | `boolean` | No | `false` | Requests blendshape payloads for facial animation. |
414
+ | `blendshapeConfig.format` | `"arkit" \| "mha"` | No | `"mha"` | Blendshape output format. |
415
+ | `blendshapeConfig.frames_buffer_duration` | `number` | No | server-defined | Buffering hint for audio/blendshape synchronization. |
416
+ | `actionConfig` | object | No | `undefined` | Action and scene-context metadata (actions, characters, objects, attention object). |
318
417
 
319
- ```typescript
320
- const interruptButton = document.getElementById("interrupt-btn");
418
+ ## 8. Core API Reference (`ConvaiClient`)
321
419
 
322
- interruptButton.addEventListener("click", () => {
323
- client.sendInterruptMessage();
324
- });
420
+ Import:
421
+
422
+ ```ts
423
+ import { ConvaiClient } from "@convai/web-sdk/core";
325
424
  ```
326
425
 
327
- **Voice Mode Interruption Pattern:**
426
+ ### Constructor
427
+
428
+ ```ts
429
+ new ConvaiClient(config?: ConvaiConfig)
430
+ ```
431
+
432
+ ### Properties
433
+
434
+ | Property | Type | Description |
435
+ | ----------------------- | ---------------------------- | -------------------------------------------------------- |
436
+ | `state` | `ConvaiClientState` | Real-time connection/activity state. |
437
+ | `connectionType` | `"audio" \| "video" \| null` | Active transport mode. |
438
+ | `apiKey` | `string \| null` | Active API key. |
439
+ | `characterId` | `string \| null` | Active character ID. |
440
+ | `speakerId` | `string \| null` | Resolved speaker identity. |
441
+ | `room` | `Room` | Internal LiveKit room instance. |
442
+ | `chatMessages` | `ChatMessage[]` | Conversation message store. |
443
+ | `userTranscription` | `string` | Current non-final voice transcription text. |
444
+ | `characterSessionId` | `string \| null` | Server conversation session identifier. |
445
+ | `isBotReady` | `boolean` | Character readiness flag. |
446
+ | `audioControls` | `AudioControls` | Microphone controls. |
447
+ | `videoControls` | `VideoControls` | Camera controls. |
448
+ | `screenShareControls` | `ScreenShareControls` | Screen sharing controls. |
449
+ | `latencyMonitor` | `LatencyMonitor` | Measurement manager used by the client for turn latency. |
450
+ | `blendshapeQueue` | `BlendshapeQueue` | Buffer queue for lipsync frames. |
451
+ | `conversationSessionId` | `number` | Incremental turn session ID used by conversation events. |
452
+
453
+ ### Methods
454
+
455
+ | Method | Signature | Description |
456
+ | ---------------------- | ------------------------------------------------------------------- | ---------------------------------------------------------- |
457
+ | `connect` | `(config?: ConvaiConfig) => Promise<void>` | Connect using passed config or stored config. |
458
+ | `disconnect` | `() => Promise<void>` | Disconnect and release session resources. |
459
+ | `reconnect` | `() => Promise<void>` | Disconnect then connect with stored config. |
460
+ | `resetSession` | `() => void` | Reset character session and clear conversation history. |
461
+ | `sendUserTextMessage` | `(text: string) => void` | Send text message to character. |
462
+ | `sendTriggerMessage` | `(triggerName?: string, triggerMessage?: string) => void` | Send trigger/action message. |
463
+ | `sendInterruptMessage` | `() => void` | Interrupt current bot response. |
464
+ | `updateTemplateKeys` | `(templateKeys: Record<string, string>) => void` | Update runtime template variables. |
465
+ | `updateDynamicInfo` | `(dynamicInfo: { text: string }) => void` | Update dynamic context text. |
466
+ | `toggleTts` | `(enabled: boolean) => void` | Enable/disable TTS for subsequent responses. |
467
+ | `on` | `(event: string, callback: (...args: any[]) => void) => () => void` | Subscribe to an event and receive an unsubscribe function. |
468
+ | `off` | `(event: string, callback: (...args: any[]) => void) => void` | Remove a specific listener. |
469
+
470
+ ### Common event names and payloads
471
+
472
+ | Event | Payload | Notes |
473
+ | ------------------------- | --------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
474
+ | `stateChange` | `ConvaiClientState` | Any state transition. |
475
+ | `message` | `ChatMessage` | Last message whenever `messagesChange` updates. |
476
+ | `messagesChange` | `ChatMessage[]` | Full message array update. |
477
+ | `userTranscriptionChange` | `string` | Live user speech text updates. |
478
+ | `speakingChange` | `boolean` | Bot speaking started/stopped. |
479
+ | `botReady` | `void` | Bot can now receive interaction. |
480
+ | `connect` | `void` | Client connected. |
481
+ | `disconnect` | `void` | Client disconnected. |
482
+ | `error` | `unknown` | Error surfaced by client. |
483
+ | `conversationStart` | `{ sessionId, userMessage, timestamp }` | Conversation turn started. |
484
+ | `turnEnd` | `{ sessionId, duration, timestamp }` | Server signaled end of turn (bot stopped speaking). Same semantics as `BlendshapeQueue.hasReceivedEndSignal()`. |
485
+ | `blendshapes` | `unknown` | Incoming blendshape chunk payload. |
486
+ | `blendshapeStatsReceived` | `unknown` | End-of-turn blendshape stats marker. |
487
+ | `latencyMeasurement` | `LatencyMeasurement` | Latency sample from monitor. |
488
+
489
+ ### Control manager APIs
490
+
491
+ #### `audioControls`
492
+
493
+ Properties:
494
+
495
+ - `isAudioEnabled`
496
+ - `isAudioMuted`
497
+ - `audioLevel`
498
+
499
+ Methods:
500
+
501
+ - `enableAudio()`
502
+ - `disableAudio()`
503
+ - `muteAudio()`
504
+ - `unmuteAudio()`
505
+ - `toggleAudio()`
506
+ - `setAudioDevice(deviceId)`
507
+ - `getAudioDevices()`
508
+ - `startAudioLevelMonitoring()`
509
+ - `stopAudioLevelMonitoring()`
510
+ - `on("audioStateChange", callback)`
511
+ - `off("audioStateChange", callback)`
512
+
513
+ #### `videoControls`
514
+
515
+ Properties:
516
+
517
+ - `isVideoEnabled`
518
+ - `isVideoHidden`
519
+
520
+ Methods:
521
+
522
+ - `enableVideo()`
523
+ - `disableVideo()`
524
+ - `hideVideo()`
525
+ - `showVideo()`
526
+ - `toggleVideo()`
527
+ - `setVideoDevice(deviceId)`
528
+ - `getVideoDevices()`
529
+ - `setVideoQuality("low" | "medium" | "high")`
530
+ - `on("videoStateChange", callback)`
531
+ - `off("videoStateChange", callback)`
532
+
533
+ #### `screenShareControls`
534
+
535
+ Properties:
536
+
537
+ - `isScreenShareEnabled`
538
+ - `isScreenShareActive`
539
+
540
+ Methods:
541
+
542
+ - `enableScreenShare()`
543
+ - `disableScreenShare()`
544
+ - `toggleScreenShare()`
545
+ - `enableScreenShareWithAudio()`
546
+ - `getScreenShareTracks()`
547
+ - `on("screenShareStateChange", callback)`
548
+ - `off("screenShareStateChange", callback)`
549
+
550
+ ### `latencyMonitor` API (via `client.latencyMonitor`)
551
+
552
+ `latencyMonitor` is available on every client instance for instrumentation and diagnostics.
553
+
554
+ Methods:
555
+
556
+ - `enable()`
557
+ - `disable()`
558
+ - `startMeasurement(type, userMessage?)`
559
+ - `endMeasurement()`
560
+ - `cancelMeasurement()`
561
+ - `getMeasurements()`
562
+ - `getLatestMeasurement()`
563
+ - `getStats()`
564
+ - `clear()`
565
+ - `getPendingMeasurement()`
566
+ - `on("measurement", callback)`
567
+ - `on("measurementsChange", callback)`
568
+ - `on("enabledChange", callback)`
569
+
570
+ Properties:
571
+
572
+ - `enabled`
573
+ - `hasPendingMeasurement`
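If you want to aggregate samples yourself (the monitor also exposes `getStats()`), a sketch along these lines works. The `durationMs` field name is an assumption about the measurement shape; the SDK exports the real `LatencyMeasurement` and `LatencyStats` types.

```typescript
// Sketch: summarizing latency samples collected from "measurement" events.
interface LatencySample {
  type: string;
  durationMs: number; // assumed field name
}

interface LatencySummary {
  count: number;
  minMs: number;
  maxMs: number;
  avgMs: number;
}

function summarize(samples: LatencySample[]): LatencySummary | null {
  if (samples.length === 0) return null;
  const durations = samples.map((s) => s.durationMs);
  const total = durations.reduce((a, b) => a + b, 0);
  return {
    count: samples.length,
    minMs: Math.min(...durations),
    maxMs: Math.max(...durations),
    avgMs: total / samples.length,
  };
}
```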
574
+
575
+ ### Advanced core classes (`@convai/web-sdk/core`)
576
+
577
+ These are exported for advanced and custom pipeline use cases.
328
578
 
329
- When implementing voice mode, interrupt the bot when the user starts speaking:
579
+ #### `BlendshapeQueue`
330
580
 
331
- ```typescript
332
- // When user enters voice mode
333
- const enterVoiceMode = async () => {
334
- // Interrupt any ongoing bot response
335
- convaiClient.sendInterruptMessage();
581
+ Buffer for lipsync frames. Use `isConversationEnded()` for definitive end-of-conversation: it returns true only when the server has sent `blendshape-turn-stats` and either all expected frames have been consumed or the queue is empty (handles dropped frames). Use `hasReceivedEndSignal()` when you only need to know that the server signaled end (e.g. to keep playing remaining frames).
336
582
 
337
- // Unmute microphone
338
- await convaiClient.audioControls.unmuteAudio();
339
- };
583
+ Methods:
340
584
 
341
- // When user exits voice mode
342
- const exitVoiceMode = async () => {
343
- // Interrupt any ongoing bot response
344
- convaiClient.sendInterruptMessage();
585
+ - `addChunk(blendshapes)`
586
+ - `getFrames()`
587
+ - `getFrame(index)`
588
+ - `getFrameWithAlpha(index)`
589
+ - `consumeFrames(count)`
590
+ - `hasFrames()`
591
+ - `isConversationActive()`
592
+ - `isConversationEnded()` — true when server signaled end and playback is complete (all frames consumed or queue empty)
593
+ - `hasReceivedEndSignal()` — true when server sent `blendshape-turn-stats` (does not check frame consumption)
594
+ - `startConversation()`
595
+ - `startBotSpeaking()`
596
+ - `stopBotSpeaking()`
597
+ - `isBotSpeaking()`
598
+ - `endConversation(stats?)`
599
+ - `interrupt()`
600
+ - `getTurnStats()`
601
+ - `getFramesConsumed()`
602
+ - `getTimeLeftMs()`
603
+ - `isAllFramesConsumed()`
604
+ - `reset()`
605
+ - `getFrameAtTime(elapsedTime)`
606
+ - `getDebugInfo()`
345
607
 
346
- // Mute microphone
347
- await convaiClient.audioControls.muteAudio();
348
- };
349
- ```
608
+ Properties:
350
609
 
351
- ### User Microphone Mute/Unmute
610
+ - `length`
352
611
 
353
- Control the user's microphone input.
612
+ #### `MessageHandler`
354
613
 
355
- **React:**
614
+ Methods:
356
615
 
357
- ```tsx
358
- function AudioControls() {
359
- const convaiClient = useConvaiClient({
360
- /* config */
361
- });
616
+ - `getBlendshapeQueue()`
617
+ - `setLatencyMonitor(monitor)`
618
+ - `getChatMessages()`
619
+ - `getUserTranscription()`
620
+ - `getIsBotResponding()`
621
+ - `getIsSpeaking()`
622
+ - `setRoom(room)`
623
+ - `reset()`
624
+ - inherited event APIs from `EventEmitter`:
625
+ - `on(event, callback)`
626
+ - `off(event, callback)`
362
627
 
363
- const handleMute = async () => {
364
- await convaiClient.audioControls.muteAudio();
365
- };
628
+ #### `EventEmitter`
366
629
 
367
- const handleUnmute = async () => {
368
- await convaiClient.audioControls.unmuteAudio();
369
- };
370
-
371
- const handleToggle = async () => {
372
- await convaiClient.audioControls.toggleAudio();
373
- };
630
+ Methods:
374
631
 
375
- return (
376
- <div>
377
- <button onClick={handleMute}>Mute</button>
378
- <button onClick={handleUnmute}>Unmute</button>
379
- <button onClick={handleToggle}>Toggle</button>
380
- <p>Muted: {convaiClient.audioControls.isAudioMuted ? "Yes" : "No"}</p>
381
- </div>
382
- );
383
- }
384
- ```
632
+ - `on(event, callback)`
633
+ - `off(event, callback)`
634
+ - `emit(event, ...args)`
635
+ - `removeAllListeners()`
636
+ - `listenerCount(event)`
385
637
 
386
- **Vanilla:**
638
+ ## 9. Message Semantics and Turn Completion
387
639
 
388
- ```typescript
389
- // Mute microphone
390
- await client.audioControls.muteAudio();
640
+ ### `ChatMessage` model
391
641
 
392
- // Unmute microphone
393
- await client.audioControls.unmuteAudio();
642
+ `ChatMessage` includes:
394
643
 
395
- // Toggle mute state
396
- await client.audioControls.toggleAudio();
644
+ - `id`
645
+ - `type`
646
+ - `content`
647
+ - `timestamp`
648
+ - `isFinal?`
397
649
 
398
- // Check mute state
399
- const isMuted = client.audioControls.isAudioMuted;
650
+ Supported message `type` values include:
400
651
 
401
- // Enable audio (request permissions if needed)
402
- await client.audioControls.enableAudio();
652
+ - `user`
653
+ - `convai`
654
+ - `emotion`
655
+ - `behavior-tree`
656
+ - `action`
657
+ - `user-transcription`
658
+ - `bot-llm-text`
659
+ - `bot-emotion`
660
+ - `user-llm-text`
661
+ - `interrupt-bot`
403
662
 
404
- // Disable audio
405
- await client.audioControls.disableAudio();
406
- ```
663
+ ### Important: current `isFinal` behavior
407
664
 
408
- **Audio Device Management:**
665
+ In the current implementation, `isFinal` is used as an accumulation flag:
409
666
 
410
- ```typescript
411
- // Get available audio devices
412
- const devices = await convaiClient.audioControls.getAudioDevices();
667
+ - `isFinal: true` means the message is still in a mutable/streaming state
668
+ - `isFinal: false` means the message has been finalized
413
669
 
414
- // Set specific audio device
415
- await convaiClient.audioControls.setAudioDevice(deviceId);
670
+ This naming is counterintuitive. Treat `isFinal` as an internal streaming marker rather than a turn-completion signal.
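
Given the inverted meaning, a transcript view should treat `isFinal !== true` as settled. A small sketch (the `ChatMessage` shape is abbreviated from the field list above; the `timestamp` type is an assumption):

```ts
type ChatMessage = {
  id: string;
  type: string;
  content: string;
  timestamp: number;
  isFinal?: boolean; // true = still streaming; false/undefined = finalized
};

// Returns only messages that have stopped streaming, per the current
// (inverted) `isFinal` semantics described above.
function finalizedMessages(messages: ChatMessage[]): ChatMessage[] {
  return messages.filter((message) => message.isFinal !== true);
}
```
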
416
671
 
417
- // Monitor audio level
418
- convaiClient.audioControls.startAudioLevelMonitoring();
672
+ ### Recommended way to detect response completion
419
673
 
420
- convaiClient.audioControls.on("audioLevelChange", (level) => {
421
- console.log("Audio level:", level);
422
- // level is a number between 0 and 1
423
- });
674
+ Use events instead of `isFinal`:
424
675
 
425
- convaiClient.audioControls.stopAudioLevelMonitoring();
426
- ```
676
+ - `turnEnd` for the server turn-end signal (bot stopped speaking; same as `hasReceivedEndSignal()`)
677
+ - `blendshapeStatsReceived` as additional completion marker when lipsync/animation output is enabled
427
678
 
428
- **Audio State Monitoring:**
679
+ When driving lipsync from `BlendshapeQueue`, use `blendshapeQueue.isConversationEnded()` for definitive end-of-conversation. It returns true only when the server has signaled end and playback is complete (all expected frames consumed or queue empty). Call `blendshapeQueue.reset()` and your `onConversationEnded` when it becomes true. Use `hasReceivedEndSignal()` only when you need the raw server signal (e.g. to decide whether to keep playing remaining frames).
429
680
 
430
- ```typescript
431
- // React
432
- const { isAudioMuted } = convaiClient;
681
+ Example:
433
682
 
434
- // Core API (event-based)
435
- convaiClient.audioControls.on("audioStateChange", (state) => {
436
- console.log("Audio enabled:", state.isAudioEnabled);
437
- console.log("Audio muted:", state.isAudioMuted);
438
- console.log("Audio level:", state.audioLevel);
439
- });
440
- ```
683
+ ```ts
684
+ type TurnCompletionOptions = {
685
+ expectBlendshapes: boolean;
686
+ onComplete: () => void;
687
+ };
441
688
 
442
- ### Character TTS Mute/Unmute
689
+ function subscribeTurnCompletion(client: any, options: TurnCompletionOptions) {
690
+ let spokenDone = false;
691
+ let animationDone = !options.expectBlendshapes;
443
692
 
444
- Control whether the character's responses are spoken aloud (text-to-speech).
693
+ const invokeOnCompleteIfReady = () => {
694
+ if (spokenDone && animationDone) {
695
+ options.onComplete();
696
+ }
697
+ };
445
698
 
446
- **React:**
699
+ const unsubTurnEnd = client.on("turnEnd", () => {
700
+ spokenDone = true;
701
+ invokeOnCompleteIfReady();
702
+ });
447
703
 
448
- ```tsx
449
- function TTSControls() {
450
- const convaiClient = useConvaiClient({
451
- /* config */
704
+ const unsubBlendshapeStats = client.on("blendshapeStatsReceived", () => {
705
+ animationDone = true;
706
+ invokeOnCompleteIfReady();
452
707
  });
453
708
 
454
- const handleToggleTTS = (enabled: boolean) => {
455
- convaiClient.toggleTts(enabled);
709
+ return () => {
710
+ unsubTurnEnd();
711
+ unsubBlendshapeStats();
456
712
  };
457
-
458
- return (
459
- <div>
460
- <button onClick={() => handleToggleTTS(true)}>Enable TTS</button>
461
- <button onClick={() => handleToggleTTS(false)}>Disable TTS</button>
462
- </div>
463
- );
464
713
  }
465
714
  ```
466
715
 
467
- **Vanilla:**
468
-
469
- ```typescript
470
- // Enable text-to-speech (character will speak responses)
471
- client.toggleTts(true);
716
+ When to use both signals: wait for both `turnEnd` and `blendshapeStatsReceived` only when lipsync is enabled. With `expectBlendshapes: false` (no facial animation), `animationDone` starts out true and completion fires as soon as `turnEnd` does. With `expectBlendshapes: true` (lipsync driven from the queue), speech and blendshape data travel through separate pipelines and can finish in either order, so waiting for both guarantees that "turn complete" means speech and animation are both done before `onComplete` runs.
472
717
 
473
- // Disable text-to-speech (character will only send text, no audio)
474
- client.toggleTts(false);
475
- ```
718
+ ## 10. React API Reference
476
719
 
477
- **Initial TTS Configuration:**
720
+ ### `useConvaiClient(config?)`
478
721
 
479
- ```typescript
480
- // Set TTS state during connection
481
- const client = new ConvaiClient({
482
- apiKey: "your-api-key",
483
- characterId: "your-character-id",
484
- ttsEnabled: true, // Enable TTS by default
485
- });
486
-
487
- // Or disable initially
488
- const client = new ConvaiClient({
489
- apiKey: "your-api-key",
490
- characterId: "your-character-id",
491
- ttsEnabled: false, // Disable TTS
492
- });
493
- ```
494
-
495
- ### Voice Mode Implementation
496
-
497
- Voice mode allows users to speak instead of typing. The widget automatically handles voice mode, but you can implement it manually.
498
-
499
- **React - Manual Voice Mode:**
722
+ Import:
500
723
 
501
724
  ```tsx
502
725
  import { useConvaiClient } from "@convai/web-sdk";
503
- import { useState, useEffect } from "react";
504
-
505
- function CustomChatInterface() {
506
- const convaiClient = useConvaiClient({
507
- /* config */
508
- });
509
- const [isVoiceMode, setIsVoiceMode] = useState(false);
510
-
511
- const enterVoiceMode = async () => {
512
- // Interrupt any ongoing bot response
513
- convaiClient.sendInterruptMessage();
514
-
515
- // Unmute microphone
516
- await convaiClient.audioControls.unmuteAudio();
726
+ ```
517
727
 
518
- setIsVoiceMode(true);
519
- };
728
+ Returns full `IConvaiClient` plus React-friendly reactive fields:
520
729
 
521
- const exitVoiceMode = async () => {
522
- // Interrupt any ongoing bot response
523
- convaiClient.sendInterruptMessage();
730
+ - `activity`
731
+ - `chatMessages`
732
+ - `isAudioMuted`
733
+ - `isVideoEnabled`
734
+ - `isScreenShareActive`
524
735
 
525
- // Mute microphone
526
- await convaiClient.audioControls.muteAudio();
736
+ ### `ConvaiWidget`
527
737
 
528
- setIsVoiceMode(false);
529
- };
738
+ Import:
530
739
 
531
- // Monitor user transcription for voice input
532
- useEffect(() => {
533
- const transcription = convaiClient.userTranscription;
534
- if (transcription && isVoiceMode) {
535
- // Display real-time transcription
536
- console.log("User is saying:", transcription);
537
- }
538
- }, [convaiClient.userTranscription, isVoiceMode]);
539
-
540
- return (
541
- <div>
542
- {isVoiceMode ? (
543
- <div>
544
- <p>Listening: {convaiClient.userTranscription}</p>
545
- <button onClick={exitVoiceMode}>Stop Voice Mode</button>
546
- </div>
547
- ) : (
548
- <button onClick={enterVoiceMode}>Start Voice Mode</button>
549
- )}
550
- </div>
551
- );
552
- }
740
+ ```tsx
741
+ import { ConvaiWidget } from "@convai/web-sdk";
553
742
  ```
554
743
 
555
- **Vanilla - Manual Voice Mode:**
744
+ Props:
556
745
 
557
- ```typescript
558
- let isVoiceMode = false;
746
+ | Prop | Type | Default | Description |
747
+ | ------------------ | --------------------------------------------------------------------------------------------------------------------- | -------- | ------------------------------------------------------------------ |
748
+ | `convaiClient` | `IConvaiClient & { activity?: string; isAudioMuted: boolean; isVideoEnabled: boolean; isScreenShareActive: boolean }` | required | Client instance returned by `useConvaiClient`. |
749
+ | `showVideo` | `boolean` | `true` | Shows video toggle in settings if connection type is video. |
750
+ | `showScreenShare` | `boolean` | `true` | Shows screen-share toggle in settings if connection type is video. |
751
+ | `defaultVoiceMode` | `boolean` | `true` | Opens in voice mode on first widget session. |
559
752
 
560
- const enterVoiceMode = async () => {
561
- // Interrupt any ongoing bot response
562
- client.sendInterruptMessage();
753
+ ### `useCharacterInfo(characterId?, apiKey?)`
563
754
 
564
- // Unmute microphone
565
- await client.audioControls.unmuteAudio();
755
+ Returns:
566
756
 
567
- isVoiceMode = true;
568
- updateUI();
569
- };
757
+ - `name`
758
+ - `image`
759
+ - `isLoading`
760
+ - `error`
570
761
 
571
- const exitVoiceMode = async () => {
572
- // Interrupt any ongoing bot response
573
- client.sendInterruptMessage();
762
+ ### `useLocalCameraTrack()`
574
763
 
575
- // Mute microphone
576
- await client.audioControls.muteAudio();
764
+ Returns a LiveKit `TrackReferenceOrPlaceholder` for local camera rendering in custom React video UIs.
577
765
 
578
- isVoiceMode = false;
579
- updateUI();
580
- };
766
+ ### React audio utility exports
581
767
 
582
- // Monitor user transcription
583
- client.on("userTranscriptionChange", (transcription) => {
584
- if (isVoiceMode && transcription) {
585
- // Display real-time transcription
586
- document.getElementById("transcription").textContent = transcription;
587
- }
588
- });
768
+ - `AudioRenderer` from LiveKit React components
769
+ - `AudioContext` from LiveKit React components
589
770
 
590
- function updateUI() {
591
- const voiceButton = document.getElementById("voice-btn");
592
- const transcriptionDiv = document.getElementById("transcription");
771
+ ## 11. Vanilla API Reference
593
772
 
594
- if (isVoiceMode) {
595
- voiceButton.textContent = "Stop Voice Mode";
596
- transcriptionDiv.style.display = "block";
597
- } else {
598
- voiceButton.textContent = "Start Voice Mode";
599
- transcriptionDiv.style.display = "none";
600
- }
601
- }
602
- ```
773
+ ### `createConvaiWidget(container, options)`
603
774
 
604
- **Voice Mode with State Monitoring:**
605
-
606
- ```typescript
607
- // Monitor agent state to handle voice mode transitions
608
- convaiClient.on("stateChange", (state) => {
609
- if (isVoiceMode) {
610
- switch (state.agentState) {
611
- case "listening":
612
- // User can speak
613
- console.log("Bot is listening");
614
- break;
615
- case "thinking":
616
- // Bot is processing
617
- console.log("Bot is thinking");
618
- break;
619
- case "speaking":
620
- // Bot is responding
621
- console.log("Bot is speaking");
622
- // Optionally interrupt if user wants to speak
623
- break;
624
- }
625
- }
626
- });
775
+ ```ts
776
+ import { createConvaiWidget } from "@convai/web-sdk/vanilla";
627
777
  ```
628
778
 
629
- ### Connection Management
779
+ Creates and mounts a complete floating chat widget.
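
A minimal mount/teardown sketch (credentials are placeholders and `#convai-root` is a hypothetical container element in your page):

```ts
import { createConvaiWidget } from "@convai/web-sdk/vanilla";

const container = document.getElementById("convai-root");
if (!container) throw new Error("Missing #convai-root container");

const widget = createConvaiWidget(container, {
  apiKey: "<YOUR_CONVAI_API_KEY>", // placeholder
  characterId: "<YOUR_CHARACTER_ID>", // placeholder
  defaultVoiceMode: true,
  onMessage: (message) => console.log("latest:", message.content),
});

// On page/view teardown:
// widget.destroy();
```
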
630
780
 
631
- **Connect:**
781
+ #### `VanillaWidgetOptions`
632
782
 
633
- ```typescript
634
- // React - config passed to hook
635
- const convaiClient = useConvaiClient({
636
- apiKey: "your-api-key",
637
- characterId: "your-character-id",
638
- });
783
+ | Field | Type | Required | Default | Description |
784
+ | ------------------ | -------------------------------- | -------- | ----------- | -------------------------------------------------- |
785
+ | `convaiClient` | `IConvaiClient` | No\* | - | Existing client instance. |
786
+ | `apiKey` | `string` | No\* | - | Used only when `convaiClient` is not provided. |
787
+ | `characterId` | `string` | No\* | - | Used only when `convaiClient` is not provided. |
788
+ | `enableVideo` | `boolean` | No | `false` | Used for auto-created client only. |
789
+ | `startWithVideoOn` | `boolean` | No | `false` | Used for auto-created client only. |
790
+ | `enableLipsync` | `boolean` | No | `false` | Used for auto-created client only. |
791
+ | `blendshapeConfig` | object | No | `undefined` | Used for auto-created client only. |
792
+ | `showVideo` | `boolean` | No | `true` | Show video toggle in settings. |
793
+ | `showScreenShare` | `boolean` | No | `true` | Show screen-share toggle in settings. |
794
+ | `defaultVoiceMode` | `boolean` | No | `true` | Start in voice mode when opened. |
795
+ | `onConnect` | `() => void` | No | `undefined` | Called when widget client connects. |
796
+ | `onDisconnect` | `() => void` | No | `undefined` | Called when widget client disconnects. |
797
+ | `onMessage` | `(message: ChatMessage) => void` | No | `undefined` | Called on each message change with latest message. |
639
798
 
640
- // Or connect manually
641
- await convaiClient.connect({
642
- apiKey: "your-api-key",
643
- characterId: "your-character-id",
644
- });
799
+ \* You must provide either `convaiClient` OR both `apiKey` and `characterId`.
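
The either/or rule can be checked up front. A sketch; the option shape is assumed from the table above:

```ts
// Mirrors the documented constraint: pass an existing `convaiClient`,
// or both `apiKey` and `characterId` so the widget can create one.
function hasValidWidgetCredentials(options: {
  convaiClient?: unknown;
  apiKey?: string;
  characterId?: string;
}): boolean {
  if (options.convaiClient) return true;
  return Boolean(options.apiKey && options.characterId);
}
```
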
645
800
 
646
- // Vanilla
647
- const client = new ConvaiClient();
648
- await client.connect({
649
- apiKey: "your-api-key",
650
- characterId: "your-character-id",
651
- });
652
- ```
801
+ #### Return type: `VanillaWidget`
653
802
 
654
- **Disconnect:**
803
+ - `element`: root widget element
804
+ - `client`: resolved client instance
805
+ - `destroy()`: unmount and cleanup
806
+ - `update?`: optional future extension field
655
807
 
656
- ```typescript
657
- await convaiClient.disconnect();
658
- ```
808
+ ### `destroyConvaiWidget(widget)`
659
809
 
660
- **Reconnect:**
810
+ Convenience wrapper that calls `widget.destroy()`.
661
811
 
662
- ```typescript
663
- await convaiClient.reconnect();
664
- ```
812
+ ### `AudioRenderer` (vanilla)
665
813
 
666
- **Reset Session:**
814
+ `AudioRenderer` listens to LiveKit room track subscriptions and auto-attaches remote audio tracks to hidden `audio` elements for playback. Use one renderer instance per active room session and destroy it during cleanup.
667
815
 
668
- ```typescript
669
- // Clear conversation history and start new session
670
- convaiClient.resetSession();
671
- ```
816
+ ## 12. Audio Integration Best Practices (Vanilla TypeScript)
672
817
 
673
- **Connection State:**
818
+ This section provides the recommended integration for stable audio playback.
674
819
 
675
- ```typescript
676
- // React
677
- const { state } = convaiClient;
678
- console.log("Connected:", state.isConnected);
679
- console.log("Connecting:", state.isConnecting);
680
- console.log("Agent state:", state.agentState); // 'disconnected' | 'connected' | 'listening' | 'thinking' | 'speaking'
820
+ ### Recommended reference implementation
681
821
 
682
- // Core API (event-based)
683
- convaiClient.on("stateChange", (state) => {
684
- console.log("State changed:", state);
685
- });
822
+ ```ts
823
+ import { ConvaiClient } from "@convai/web-sdk/core";
824
+ import { AudioRenderer } from "@convai/web-sdk/vanilla";
825
+
826
+ class ConvaiAudioSession {
827
+ private client: ConvaiClient;
828
+ private audioRenderer: AudioRenderer | null = null;
829
+ private audioContext: AudioContext | null = null;
830
+
831
+ constructor() {
832
+ this.client = new ConvaiClient({
833
+ apiKey: "<YOUR_CONVAI_API_KEY>",
834
+ characterId: "<YOUR_CHARACTER_ID>",
835
+ ttsEnabled: true,
836
+ });
837
+ }
686
838
 
687
- convaiClient.on("connect", () => {
688
- console.log("Connected");
689
- });
839
+ async connectFromUserGesture(): Promise<void> {
840
+ await this.client.connect();
690
841
 
691
- convaiClient.on("disconnect", () => {
692
- console.log("Disconnected");
693
- });
694
- ```
842
+ // Required for remote audio playback wiring.
843
+ this.audioRenderer = new AudioRenderer(this.client.room);
844
+
845
+ // Optional: if your app performs WebAudio analysis/effects.
846
+ if (!this.audioContext) {
847
+ this.audioContext = new AudioContext();
848
+ }
849
+ if (this.audioContext.state === "suspended") {
850
+ await this.audioContext.resume();
851
+ }
852
+ }
695
853
 
696
- ### Messaging
854
+ async disconnect(): Promise<void> {
855
+ if (this.audioRenderer) {
856
+ this.audioRenderer.destroy();
857
+ this.audioRenderer = null;
858
+ }
697
859
 
698
- **Send Text Message:**
860
+ await this.client.disconnect();
699
861
 
700
- ```typescript
701
- convaiClient.sendUserTextMessage("Hello, how are you?");
862
+ if (this.audioContext && this.audioContext.state !== "closed") {
863
+ await this.audioContext.close();
864
+ this.audioContext = null;
865
+ }
866
+ }
867
+ }
702
868
  ```
703
869
 
704
- **Send Trigger Message:**
870
+ ### AudioContext guidance
705
871
 
706
- ```typescript
707
- // Trigger specific character action
708
- convaiClient.sendTriggerMessage("greet", "User entered the room");
872
+ - Create/resume `AudioContext` only after user interaction in browsers that enforce autoplay policy.
873
+ - If you are not processing audio with WebAudio, you do not need a custom `AudioContext`; `AudioRenderer` is enough for playback.
874
+ - Always close your custom `AudioContext` in teardown.
709
875
 
710
- // Trigger without message
711
- convaiClient.sendTriggerMessage("wave");
712
- ```
876
+ ### Lifecycle and cleanup order
713
877
 
714
- **Update Context:**
878
+ Recommended shutdown order:
715
879
 
716
- ```typescript
717
- // Update template keys (e.g., user name, location)
718
- convaiClient.updateTemplateKeys({
719
- user_name: "John",
720
- location: "New York",
721
- });
880
+ 1. Stop UI input loops/listeners
881
+ 2. Destroy `AudioRenderer`
882
+ 3. Disconnect `ConvaiClient`
883
+ 4. Close custom `AudioContext` (if created)
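
The order above, as a single teardown helper. Parameter shapes are structural assumptions for the sketch, not SDK types:

```ts
async function teardownSession(session: {
  stopInputListeners: () => void;
  audioRenderer: { destroy(): void } | null;
  client: { disconnect(): Promise<void> };
  audioContext: { state: string; close(): Promise<void> } | null;
}): Promise<void> {
  // 1. Stop UI input loops/listeners
  session.stopInputListeners();

  // 2. Destroy the AudioRenderer
  session.audioRenderer?.destroy();

  // 3. Disconnect the ConvaiClient
  await session.client.disconnect();

  // 4. Close a custom AudioContext, if one was created
  if (session.audioContext && session.audioContext.state !== "closed") {
    await session.audioContext.close();
  }
}
```
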
722
884
 
723
- // Update dynamic information
724
- convaiClient.updateDynamicInfo({
725
- text: "User is currently browsing the products page",
726
- });
727
- ```
885
+ ### Common failure modes and fixes
728
886
 
729
- **Message History:**
887
+ | Symptom | Likely cause | Recommended action |
888
+ | ----------------------- | ----------------------------------------- | ------------------------------------------------------------------------------------- |
889
+ | No AI audio output | `AudioRenderer` not created | Instantiate `new AudioRenderer(client.room)` immediately after successful connect. |
890
+ | No AI audio output | Browser autoplay restriction | Trigger connect/playback from a user click, and resume `AudioContext` if suspended. |
891
+ | No AI audio output | TTS disabled | Ensure `ttsEnabled` is true for sessions that need speech output. |
892
+ | Intermittent playback | Multiple renderers or stale room instance | Use one renderer per session and always destroy old renderer before reconnecting. |
893
+ | Works once, then silent | Incomplete cleanup on previous session | Destroy renderer and disconnect client on teardown; avoid reusing invalid room state. |
894
+ | Random muted behavior | App-side muting of remote tracks | Verify no custom code is muting remote publications or media elements. |
730
895
 
731
- ```typescript
732
- // React
733
- const { chatMessages } = convaiClient;
896
+ ## 13. Error Handling and Reliability Patterns
734
897
 
735
- // Core API (event-based)
736
- convaiClient.on("message", (message: ChatMessage) => {
737
- console.log("New message:", message.content);
738
- console.log("Message type:", message.type);
739
- });
898
+ ### Pattern 1: Centralized SDK error handling
740
899
 
741
- convaiClient.on("messagesChange", (messages: ChatMessage[]) => {
742
- console.log("All messages:", messages);
900
+ ```ts
901
+ const unsubError = client.on("error", (error) => {
902
+ console.error("Convai SDK error:", error);
903
+ // Optional: route to telemetry/monitoring
743
904
  });
744
905
  ```
745
906
 
746
- **Message Types:**
747
-
748
- ```typescript
749
- type ChatMessageType =
750
- | "user" // User's sent message
751
- | "convai" // Character's response
752
- | "user-transcription" // Real-time speech-to-text from user
753
- | "bot-llm-text" // Character's LLM-generated text
754
- | "emotion" // Character's emotional state
755
- | "behavior-tree" // Behavior tree response
756
- | "action" // Action execution
757
- | "bot-emotion" // Bot emotional response
758
- | "user-llm-text" // User text processed by LLM
759
- | "interrupt-bot"; // Interrupt message
907
+ ### Pattern 2: Retry connect with exponential backoff
908
+
909
+ ```ts
910
+ async function connectWithRetry(
911
+ client: any,
912
+ attempts = 3,
913
+ initialDelayMs = 500,
914
+ ): Promise<void> {
915
+ let delay = initialDelayMs;
916
+
917
+ for (let i = 1; i <= attempts; i++) {
918
+ try {
919
+ await client.connect();
920
+ return;
921
+ } catch (error) {
922
+ if (i === attempts) throw error;
923
+ await new Promise((resolve) => setTimeout(resolve, delay));
924
+ delay *= 2;
925
+ }
926
+ }
927
+ }
760
928
  ```
761
929
 
762
- ### State Monitoring
930
+ ### Pattern 3: Safe send guard
763
931
 
764
- **Agent State:**
932
+ ```ts
933
+ function safeSendText(client: any, text: string) {
934
+ if (!text.trim()) return;
935
+ if (!client.state.isConnected) return;
936
+ if (!client.isBotReady) return;
937
+ client.sendUserTextMessage(text);
938
+ }
939
+ ```
765
940
 
766
- ```typescript
767
- // React
768
- const { state } = convaiClient;
941
+ ### Pattern 4: Protect media control calls
769
942
 
770
- // Check specific states
771
- if (state.isListening) {
772
- console.log("Bot is listening");
943
+ ```ts
944
+ async function safeToggleMic(client: any) {
945
+ try {
946
+ await client.audioControls.toggleAudio();
947
+ } catch (error) {
948
+ console.error("Failed to toggle microphone:", error);
949
+ }
773
950
  }
951
+ ```
774
952
 
775
- if (state.isThinking) {
776
- console.log("Bot is thinking");
777
- }
953
+ ### Pattern 5: Always unsubscribe listeners
778
954
 
779
- if (state.isSpeaking) {
780
- console.log("Bot is speaking");
781
- }
955
+ ```ts
956
+ const unsubscribers = [
957
+ client.on("stateChange", () => {}),
958
+ client.on("messagesChange", () => {}),
959
+ ];
782
960
 
783
- // Combined state
784
- console.log(state.agentState); // 'disconnected' | 'connected' | 'listening' | 'thinking' | 'speaking'
961
+ function cleanupListeners() {
962
+ for (const unsub of unsubscribers) unsub();
963
+ }
785
964
  ```
786
965
 
787
- **User Transcription:**
966
+ ## 14. Troubleshooting
788
967
 
789
- ```typescript
790
- // React
791
- const { userTranscription } = convaiClient;
968
+ ### Connection issues
792
969
 
793
- // Core API (event-based)
794
- convaiClient.on("userTranscriptionChange", (transcription: string) => {
795
- console.log("User is saying:", transcription);
796
- });
797
- ```
970
+ - Verify API key and character ID are valid.
971
+ - Ensure requests are allowed from your browser origin.
972
+ - Set `url` explicitly if your environment does not use the SDK default endpoint.
973
+ - Listen to `error` and inspect failed network calls in browser devtools.
798
974
 
799
- **Bot Ready State:**
975
+ ### `connect()` succeeds but bot never responds
800
976
 
801
- ```typescript
802
- // React
803
- const { isBotReady } = convaiClient;
977
+ - Wait for `botReady` before sending messages.
978
+ - Confirm `ttsEnabled` and message flow are configured as expected.
979
+ - Verify `messagesChange` receives content.
804
980
 
805
- // Core API (event-based)
806
- convaiClient.on("botReady", () => {
807
- console.log("Bot is ready to receive messages");
808
- });
809
- ```
981
+ ### Audio does not play
810
982
 
811
- ## Getting Convai Credentials
983
+ - Ensure an `AudioRenderer` is active for the connected room (vanilla custom UI).
984
+ - Ensure playback starts from a user gesture path to satisfy autoplay policies.
985
+ - Confirm no custom muting code is muting remote tracks.
812
986
 
813
- 1. Visit [convai.com](https://convai.com) and create an account
814
- 2. Navigate to your dashboard
815
- 3. Create a new character or use an existing one
816
- 4. Copy your **API Key** from the dashboard
817
- 5. Copy your **Character ID** from the character details
987
+ ### Microphone does not capture user voice
818
988
 
819
- ## Import Paths
989
+ - Ensure the app is served over a secure context (HTTPS or `localhost`); browsers block microphone access otherwise.
990
+ - Verify browser microphone permission.
991
+ - Handle permission errors from `audioControls.enableAudio()/unmuteAudio()`.
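
A quick capability probe covering all three bullets, using only standard browser APIs (not SDK-specific); run it before calling the SDK's audio controls:

```ts
// Returns a coarse diagnosis string for microphone availability.
async function probeMicrophone(): Promise<
  "ok" | "insecure-context" | "unsupported" | "denied-or-unavailable"
> {
  if (!window.isSecureContext) return "insecure-context"; // needs HTTPS/localhost
  if (!navigator.mediaDevices?.getUserMedia) return "unsupported";
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((track) => track.stop()); // release the mic
    return "ok";
  } catch {
    return "denied-or-unavailable"; // permission denied or no input device
  }
}
```
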
820
992
 
821
- ```typescript
822
- // Default: React version (backward compatible)
823
- import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk";
993
+ ### Video or screen share controls fail
824
994
 
825
- // Explicit React import
826
- import { useConvaiClient, ConvaiWidget } from "@convai/web-sdk/react";
995
+ - Use `enableVideo: true` in config when you need video capabilities.
996
+ - Screen share can be blocked by browser policy or user denial.
997
+ - Wrap calls in `try/catch` and provide fallback UX.
827
998
 
828
- // Vanilla JS/TS
829
- import { ConvaiClient, createConvaiWidget } from "@convai/web-sdk/vanilla";
999
+ ### Lipsync appears out of sync or distorted
830
1000
 
831
- // Core only (no UI, framework agnostic)
832
- import { ConvaiClient } from "@convai/web-sdk/core";
833
- ```
1001
+ - Validate that the blendshape format (`arkit` vs `mha`) matches your rig's expectations.
1002
+ - Tune `frames_buffer_duration` so that at least a short run of blendshape frames is buffered before audio playback starts.
1003
+ - Align lipsync start and stop with the queue: start playback when the bot starts speaking (`isBotSpeaking()` returns true), and reset only after `blendshapeQueue.isConversationEnded()` returns true.
1004
+ - Drive blendshape application from a single loop (e.g. `requestAnimationFrame`) and advance the frame index at 60fps so mouth movement stays in sync with the audio.
834
1005
 
835
- ## TypeScript Support
836
-
837
- All exports are fully typed:
838
-
839
- ```typescript
840
- import type {
841
- ConvaiClient,
842
- ConvaiConfig,
843
- ConvaiClientState,
844
- ChatMessage,
845
- AudioControls,
846
- VideoControls,
847
- ScreenShareControls,
848
- IConvaiClient,
849
- } from "@convai/web-sdk";
850
- ```
1006
+ ## 15. Examples
851
1007
 
852
- ## Support
1008
+ Repository examples:
853
1009
 
854
- - [Convai Forum](https://forum.convai.com)
855
- - [API Reference](./API_REFERENCE.md)
856
- - [Convai Website](https://convai.com)
1010
+ - `examples/react-three-fiber`
1011
+ - `examples/three-vanilla`
1012
+ - `examples/README.md` for example-level setup notes