@spatialwalk/avatarkit 1.0.0-beta.6 → 1.0.0-beta.61

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (98)
  1. package/CHANGELOG.md +499 -4
  2. package/README.md +267 -289
  3. package/dist/StreamingAudioPlayer-DIcPerS7.js +525 -0
  4. package/dist/animation/AnimationWebSocketClient.d.ts +9 -24
  5. package/dist/animation/utils/eventEmitter.d.ts +0 -4
  6. package/dist/animation/utils/flameConverter.d.ts +3 -11
  7. package/dist/audio/AnimationPlayer.d.ts +4 -32
  8. package/dist/audio/StreamingAudioPlayer.d.ts +14 -75
  9. package/dist/avatar_core_wasm-i0Ocpx6q.js +2693 -0
  10. package/dist/avatar_core_wasm.wasm +0 -0
  11. package/dist/config/app-config.d.ts +1 -6
  12. package/dist/config/constants.d.ts +11 -25
  13. package/dist/config/sdk-config-loader.d.ts +4 -9
  14. package/dist/core/Avatar.d.ts +0 -14
  15. package/dist/core/AvatarController.d.ts +44 -116
  16. package/dist/core/AvatarDownloader.d.ts +0 -95
  17. package/dist/core/AvatarManager.d.ts +10 -18
  18. package/dist/core/AvatarSDK.d.ts +21 -0
  19. package/dist/core/AvatarView.d.ts +34 -110
  20. package/dist/core/NetworkLayer.d.ts +1 -59
  21. package/dist/generated/common/v1/models.d.ts +29 -0
  22. package/dist/generated/driveningress/v1/driveningress.d.ts +1 -12
  23. package/dist/generated/driveningress/v2/driveningress.d.ts +81 -3
  24. package/dist/generated/google/protobuf/struct.d.ts +5 -39
  25. package/dist/generated/google/protobuf/timestamp.d.ts +1 -103
  26. package/dist/index-jWgogoMs.js +14758 -0
  27. package/dist/index.d.ts +1 -6
  28. package/dist/index.js +17 -18
  29. package/dist/renderer/RenderSystem.d.ts +1 -79
  30. package/dist/renderer/covariance.d.ts +0 -12
  31. package/dist/renderer/renderer.d.ts +6 -2
  32. package/dist/renderer/sortSplats.d.ts +0 -11
  33. package/dist/renderer/webgl/reorderData.d.ts +0 -13
  34. package/dist/renderer/webgl/webglRenderer.d.ts +19 -42
  35. package/dist/renderer/webgpu/webgpuRenderer.d.ts +18 -31
  36. package/dist/types/character-settings.d.ts +1 -5
  37. package/dist/types/character.d.ts +3 -21
  38. package/dist/types/index.d.ts +91 -36
  39. package/dist/utils/animation-interpolation.d.ts +3 -13
  40. package/dist/utils/client-id.d.ts +1 -0
  41. package/dist/utils/conversationId.d.ts +1 -0
  42. package/dist/utils/error-utils.d.ts +1 -25
  43. package/dist/utils/id-manager.d.ts +38 -0
  44. package/dist/utils/logger.d.ts +5 -11
  45. package/dist/utils/posthog-tracker.d.ts +11 -0
  46. package/dist/utils/pwa-cache-manager.d.ts +16 -0
  47. package/dist/utils/usage-tracker.d.ts +5 -0
  48. package/dist/vanilla/vite.config.d.ts +2 -0
  49. package/dist/wasm/avatarCoreAdapter.d.ts +14 -99
  50. package/dist/wasm/avatarCoreMemory.d.ts +5 -54
  51. package/package.json +15 -13
  52. package/dist/StreamingAudioPlayer-BKTD97fl.js +0 -319
  53. package/dist/StreamingAudioPlayer-BKTD97fl.js.map +0 -1
  54. package/dist/animation/AnimationWebSocketClient.d.ts.map +0 -1
  55. package/dist/animation/utils/eventEmitter.d.ts.map +0 -1
  56. package/dist/animation/utils/flameConverter.d.ts.map +0 -1
  57. package/dist/audio/AnimationPlayer.d.ts.map +0 -1
  58. package/dist/audio/StreamingAudioPlayer.d.ts.map +0 -1
  59. package/dist/avatar_core_wasm-D4eEi7Eh.js +0 -1666
  60. package/dist/avatar_core_wasm-D4eEi7Eh.js.map +0 -1
  61. package/dist/config/app-config.d.ts.map +0 -1
  62. package/dist/config/constants.d.ts.map +0 -1
  63. package/dist/config/sdk-config-loader.d.ts.map +0 -1
  64. package/dist/core/Avatar.d.ts.map +0 -1
  65. package/dist/core/AvatarController.d.ts.map +0 -1
  66. package/dist/core/AvatarDownloader.d.ts.map +0 -1
  67. package/dist/core/AvatarKit.d.ts +0 -66
  68. package/dist/core/AvatarKit.d.ts.map +0 -1
  69. package/dist/core/AvatarManager.d.ts.map +0 -1
  70. package/dist/core/AvatarView.d.ts.map +0 -1
  71. package/dist/core/NetworkLayer.d.ts.map +0 -1
  72. package/dist/generated/driveningress/v1/driveningress.d.ts.map +0 -1
  73. package/dist/generated/driveningress/v2/driveningress.d.ts.map +0 -1
  74. package/dist/generated/google/protobuf/struct.d.ts.map +0 -1
  75. package/dist/generated/google/protobuf/timestamp.d.ts.map +0 -1
  76. package/dist/index-CX8f1bzw.js +0 -5946
  77. package/dist/index-CX8f1bzw.js.map +0 -1
  78. package/dist/index.d.ts.map +0 -1
  79. package/dist/index.js.map +0 -1
  80. package/dist/renderer/RenderSystem.d.ts.map +0 -1
  81. package/dist/renderer/covariance.d.ts.map +0 -1
  82. package/dist/renderer/renderer.d.ts.map +0 -1
  83. package/dist/renderer/sortSplats.d.ts.map +0 -1
  84. package/dist/renderer/webgl/reorderData.d.ts.map +0 -1
  85. package/dist/renderer/webgl/webglRenderer.d.ts.map +0 -1
  86. package/dist/renderer/webgpu/webgpuRenderer.d.ts.map +0 -1
  87. package/dist/types/character-settings.d.ts.map +0 -1
  88. package/dist/types/character.d.ts.map +0 -1
  89. package/dist/types/index.d.ts.map +0 -1
  90. package/dist/utils/animation-interpolation.d.ts.map +0 -1
  91. package/dist/utils/cls-tracker.d.ts +0 -17
  92. package/dist/utils/cls-tracker.d.ts.map +0 -1
  93. package/dist/utils/error-utils.d.ts.map +0 -1
  94. package/dist/utils/logger.d.ts.map +0 -1
  95. package/dist/utils/reqId.d.ts +0 -20
  96. package/dist/utils/reqId.d.ts.map +0 -1
  97. package/dist/wasm/avatarCoreAdapter.d.ts.map +0 -1
  98. package/dist/wasm/avatarCoreMemory.d.ts.map +0 -1
package/README.md CHANGED
@@ -1,4 +1,4 @@
- # SPAvatarKit SDK
+ # AvatarKit SDK
 
  Real-time virtual avatar rendering SDK based on 3D Gaussian Splatting, supporting audio-driven animation rendering and high-quality 3D rendering.
 
@@ -6,6 +6,7 @@ Real-time virtual avatar rendering SDK based on 3D Gaussian Splatting, supportin
 
  - **3D Gaussian Splatting Rendering** - Based on the latest point cloud rendering technology, providing high-quality 3D virtual avatars
  - **Audio-Driven Real-Time Animation Rendering** - Users provide audio data, SDK handles receiving animation data and rendering
+ - **Multi-Character Support** - Support multiple avatar instances simultaneously, each with independent state and rendering
  - **WebGPU/WebGL Dual Rendering Backend** - Automatically selects the best rendering backend for compatibility
  - **WASM High-Performance Computing** - Uses C++ compiled WebAssembly modules for geometric calculations
  - **TypeScript Support** - Complete type definitions and IntelliSense
@@ -23,84 +24,82 @@ npm install @spatialwalk/avatarkit
 
  ```typescript
  import {
- AvatarKit,
+ AvatarSDK,
  AvatarManager,
  AvatarView,
  Configuration,
- Environment
+ Environment,
+ DrivingServiceMode,
+ LogLevel
  } from '@spatialwalk/avatarkit'
 
  // 1. Initialize SDK
+
  const configuration: Configuration = {
- environment: Environment.test,
+ environment: Environment.cn,
+ drivingServiceMode: DrivingServiceMode.sdk, // Optional, 'sdk' is default
+ // - DrivingServiceMode.sdk: SDK mode - SDK handles WebSocket communication
+ // - DrivingServiceMode.host: Host mode - Host app provides audio and animation data
+ logLevel: LogLevel.off, // Optional, 'off' is default
+ // - LogLevel.off: Disable all logs
+ // - LogLevel.error: Only error logs
+ // - LogLevel.warning: Warning and error logs
+ // - LogLevel.all: All logs (info, warning, error)
+ audioFormat: { // Optional, default is { channelCount: 1, sampleRate: 16000 }
+ channelCount: 1, // Fixed to 1 (mono)
+ sampleRate: 16000 // Supported: 8000, 16000, 22050, 24000, 32000, 44100, 48000 Hz
+ }
+ // characterApiBaseUrl: 'https://custom-api.example.com' // Optional, internal debug config, can be ignored
  }
 
- await AvatarKit.initialize('your-app-id', configuration)
+ await AvatarSDK.initialize('your-app-id', configuration)
 
  // Set sessionToken (if needed, call separately)
- // AvatarKit.setSessionToken('your-session-token')
+ // AvatarSDK.setSessionToken('your-session-token')
 
  // 2. Load character
- const avatarManager = new AvatarManager()
+ const avatarManager = AvatarManager.shared
  const avatar = await avatarManager.load('character-id', (progress) => {
  console.log(`Loading progress: ${progress.progress}%`)
  })
 
  // 3. Create view (automatically creates Canvas and AvatarController)
- // Network mode (default)
+ // The playback mode is determined by drivingServiceMode in AvatarSDK configuration
+ // - DrivingServiceMode.sdk: SDK mode - SDK handles WebSocket communication
+ // - DrivingServiceMode.host: Host mode - Host app provides audio and animation data
  const container = document.getElementById('avatar-container')
- const avatarView = new AvatarView(avatar, {
- container: container,
- playbackMode: 'network' // Optional, 'network' is default
- })
+ const avatarView = new AvatarView(avatar, container)
 
- // 4. Start real-time communication (network mode only)
+ // 4. Start real-time communication (SDK mode only)
  await avatarView.avatarController.start()
 
- // 5. Send audio data (network mode)
- // ⚠️ Important: Audio must be 16kHz mono PCM16 format
- // If audio is Uint8Array, you can use slice().buffer to convert to ArrayBuffer
- const audioUint8 = new Uint8Array(1024) // Example: 16kHz PCM16 audio data (512 samples = 1024 bytes)
- const audioData = audioUint8.slice().buffer // Simplified conversion, works for ArrayBuffer and SharedArrayBuffer
- avatarView.avatarController.send(audioData, false) // Send audio data, will automatically start playing after accumulating enough data
- avatarView.avatarController.send(audioData, true) // end=true means immediately return animation data, no longer accumulating
+ // 5. Send audio data (SDK mode, must be mono PCM16 format matching configured sample rate)
+ const audioData = new ArrayBuffer(1024) // Example: PCM16 audio data at configured sample rate
+ avatarView.avatarController.send(audioData, false) // Send audio data
+ avatarView.avatarController.send(audioData, true) // end=true marks the end of current conversation round
  ```
 
- ### External Data Mode Example
+ ### Host Mode Example
 
  ```typescript
- import { AvatarPlaybackMode } from '@spatialwalk/avatarkit'
 
- // 1-3. Same as network mode (initialize SDK, load character)
+ // 1-3. Same as SDK mode (initialize SDK, load character)
 
- // 3. Create view with external data mode
+ // 3. Create view with Host mode
  const container = document.getElementById('avatar-container')
- const avatarView = new AvatarView(avatar, {
- container: container,
- playbackMode: AvatarPlaybackMode.external
- })
-
- // 4. Start playback with initial data (obtained from your service)
- // Note: Audio and animation data should be obtained from your backend service
- const initialAudioChunks = [{ data: audioData1, isLast: false }, { data: audioData2, isLast: false }]
- const initialKeyframes = animationData1 // Animation keyframes from your service
-
- await avatarView.avatarController.play(initialAudioChunks, initialKeyframes)
+ const avatarView = new AvatarView(avatar, container)
 
- // 5. Stream additional data as needed
- avatarView.avatarController.sendAudioChunk(audioData3, false)
- avatarView.avatarController.sendKeyframes(animationData2)
+ // 4. Host Mode Workflow:
+ // Send audio data first to get conversationId, then use it to send animation data
+ const conversationId = avatarView.avatarController.yieldAudioData(audioData, false)
+ avatarView.avatarController.yieldFramesData(animationDataArray, conversationId) // animationDataArray: (Uint8Array | ArrayBuffer)[]
  ```
 
  ### Complete Examples
 
- Check the example code in the GitHub repository for complete usage flows for both modes.
-
- **Example Project:** [Avatarkit-web-demo](https://github.com/spatialwalk/Avatarkit-web-demo)
-
- This repository contains complete examples for Vanilla JS, Vue 3, and React, demonstrating:
- - Network mode: Real-time audio input with automatic animation data reception
- - External data mode: Custom data sources with manual audio/animation data management
+ This SDK supports two usage modes:
+ - SDK mode: Real-time audio input with automatic animation data reception
+ - Host mode: Custom data sources with manual audio/animation data management
 
  ## 🏗️ Architecture Overview
 
@@ -110,47 +109,60 @@ The SDK uses a three-layer architecture for clear separation of concerns:
 
  1. **Rendering Layer (AvatarView)** - Responsible for 3D rendering only
  2. **Playback Layer (AvatarController)** - Manages audio/animation synchronization and playback
- 3. **Network Layer (NetworkLayer)** - Handles WebSocket communication (only in network mode)
+ 3. **Network Layer** - Handles WebSocket communication (only in SDK mode, internal implementation)
 
  ### Core Components
 
- - **AvatarKit** - SDK initialization and management
+ - **AvatarSDK** - SDK initialization and management
  - **AvatarManager** - Character resource loading and management
  - **AvatarView** - 3D rendering view (rendering layer)
  - **AvatarController** - Audio/animation playback controller (playback layer)
- - **NetworkLayer** - WebSocket communication (network layer, automatically composed in network mode)
- - **AvatarCoreAdapter** - WASM module adapter
 
  ### Playback Modes
 
- The SDK supports two playback modes, configured when creating `AvatarView`:
+ The SDK supports two playback modes, configured in `AvatarSDK.initialize()`:
 
- #### 1. Network Mode (Default)
+ #### 1. SDK Mode (Default)
+ - Configured via `drivingServiceMode: DrivingServiceMode.sdk` in `AvatarSDK.initialize()`
  - SDK handles WebSocket communication automatically
  - Send audio data via `AvatarController.send()`
  - SDK receives animation data from backend and synchronizes playback
  - Best for: Real-time audio input scenarios
 
- #### 2. External Data Mode
- - External components manage their own network/data fetching
- - External components provide both audio and animation data
+ #### 2. Host Mode
+ - Configured via `drivingServiceMode: DrivingServiceMode.host` in `AvatarSDK.initialize()`
+ - Host application manages its own network/data fetching
+ - Host application provides both audio and animation data
  - SDK only handles synchronized playback
  - Best for: Custom data sources, pre-recorded content, or custom network implementations
 
+ **Note:** The playback mode is determined by `drivingServiceMode` in `AvatarSDK.initialize()` configuration.
+
+ ### Fallback Mechanism
+
+ The SDK includes a fallback mechanism to ensure audio playback continues even when animation data is unavailable:
+
+ - **SDK Mode Connection Failure**: If WebSocket connection fails to establish within 15 seconds, the SDK automatically enters fallback mode. In this mode, audio data can still be sent and will play normally, even though no animation data will be received from the server. This ensures that audio playback is not interrupted even when the service connection fails.
+ - **SDK Mode Server Error**: If the server returns an error after connection is established, the SDK automatically enters audio-only mode for that session and continues playing audio independently.
+ - **Host Mode**: If empty animation data is provided (empty array or undefined), the SDK automatically enters audio-only mode.
+ - Once in audio-only mode, any subsequent animation data for that session will be ignored, and only audio will continue playing.
+ - The fallback mode is interruptible, just like normal playback mode.
+ - Connection state callbacks (`onConnectionState`) will notify you when connection fails or times out, allowing you to handle the fallback state appropriately.
+
  ### Data Flow
 
- #### Network Mode Flow
+ #### SDK Mode Flow
 
  ```
  User audio input (16kHz mono PCM16)
 
  AvatarController.send()
 
- NetworkLayer → WebSocket → Backend processing
+ WebSocket → Backend processing
 
  Backend returns animation data (FLAME keyframes)
 
- NetworkLayer → AvatarController → AnimationPlayer
+ AvatarController → AnimationPlayer
 
  FLAME parameters → AvatarCore.computeFrameFlatFromParams() → Splat data
 
@@ -159,15 +171,14 @@ AvatarController (playback loop) → AvatarView.renderRealtimeFrame()
  RenderSystem → WebGPU/WebGL → Canvas rendering
  ```
 
- #### External Data Mode Flow
+ #### Host Mode Flow
 
  ```
  External data source (audio + animation)
 
- AvatarController.play(initialAudio, initialKeyframes) // Start playback
+ AvatarController.yieldAudioData(audioChunk) // Returns conversationId
 
- AvatarController.sendAudioChunk() // Stream additional audio
- AvatarController.sendKeyframes() // Stream additional animation
+ AvatarController.yieldFramesData(keyframesDataArray, conversationId) // keyframesDataArray: (Uint8Array | ArrayBuffer)[] - each element is a protobuf encoded Message
 
  AvatarController → AnimationPlayer (synchronized playback)
 
@@ -178,52 +189,84 @@ AvatarController (playback loop) → AvatarView.renderRealtimeFrame()
  RenderSystem → WebGPU/WebGL → Canvas rendering
  ```
 
- **Note:**
- - In network mode, users provide audio data, SDK handles network communication and animation data reception
- - In external data mode, users provide both audio and animation data, SDK handles synchronized playback only
-
  ### Audio Format Requirements
 
- **⚠️ Important:** The SDK requires audio data to be in **16kHz mono PCM16** format:
+ **⚠️ Important:** The SDK requires audio data to be in **mono PCM16** format:
 
- - **Sample Rate**: 16kHz (16000 Hz) - This is a backend requirement
- - **Channels**: Mono (single channel)
+ - **Sample Rate**: Configurable via `audioFormat.sampleRate` in SDK initialization (default: 16000 Hz)
+ - Supported sample rates: 8000, 16000, 22050, 24000, 32000, 44100, 48000 Hz
+ - The configured sample rate will be used for both audio recording and playback
+ - **Channels**: Mono (single channel) - Fixed to 1 channel
  - **Format**: PCM16 (16-bit signed integer, little-endian)
  - **Byte Order**: Little-endian
 
  **Audio Data Format:**
  - Each sample is 2 bytes (16-bit)
  - Audio data should be provided as `ArrayBuffer` or `Uint8Array`
- - For example: 1 second of audio = 16000 samples × 2 bytes = 32000 bytes
+ - For example, with 16kHz sample rate: 1 second of audio = 16000 samples × 2 bytes = 32000 bytes
+ - For 48kHz sample rate: 1 second of audio = 48000 samples × 2 bytes = 96000 bytes
 
  **Resampling:**
- - If your audio source is at a different sample rate (e.g., 24kHz, 48kHz), you must resample it to 16kHz before sending to the SDK
+ - If your audio source is at a different sample rate, you must resample it to match the configured sample rate before sending to the SDK
  - For high-quality resampling, we recommend using Web Audio API's `OfflineAudioContext` with anti-aliasing filtering
  - See example projects for resampling implementation
 
+ **Configuration Example:**
+ ```typescript
+ const configuration: Configuration = {
+ environment: Environment.cn,
+ audioFormat: {
+ channelCount: 1, // Fixed to 1 (mono)
+ sampleRate: 48000 // Choose from: 8000, 16000, 22050, 24000, 32000, 44100, 48000
+ }
+ }
+ ```
+
  ## 📚 API Reference
 
- ### AvatarKit
+ ### AvatarSDK
 
  The core management class of the SDK, responsible for initialization and global configuration.
 
  ```typescript
  // Initialize SDK
- await AvatarKit.initialize(appId: string, configuration: Configuration)
+ await AvatarSDK.initialize(appId: string, configuration: Configuration)
 
  // Check initialization status
- const isInitialized = AvatarKit.isInitialized
+ const isInitialized = AvatarSDK.isInitialized
+
+ // Get initialized app ID
+ const appId = AvatarSDK.appId
+
+ // Get configuration
+ const config = AvatarSDK.configuration
+
+ // Set sessionToken (if needed, call separately)
+ AvatarSDK.setSessionToken('your-session-token')
+
+ // Set userId (optional, for telemetry)
+ AvatarSDK.setUserId('user-id')
+
+ // Get sessionToken
+ const sessionToken = AvatarSDK.sessionToken
+
+ // Get userId
+ const userId = AvatarSDK.userId
+
+ // Get SDK version
+ const version = AvatarSDK.version
 
  // Cleanup resources (must be called when no longer in use)
- AvatarKit.cleanup()
+ AvatarSDK.cleanup()
  ```
 
  ### AvatarManager
 
- Character resource manager, responsible for downloading, caching, and loading character data.
+ Character resource manager, responsible for downloading, caching, and loading character data. Use the singleton instance via `AvatarManager.shared`.
 
  ```typescript
- const manager = new AvatarManager()
+ // Get singleton instance
+ const manager = AvatarManager.shared
 
  // Load character
  const avatar = await manager.load(
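The sample-size arithmetic and resampling requirement above can be sketched with a dependency-free linear-interpolation resampler. This is an illustrative sketch, not part of `@spatialwalk/avatarkit`; the README itself recommends `OfflineAudioContext` with anti-aliasing filtering for production-quality resampling, which this simple interpolator does not provide.

```typescript
// Illustrative mono PCM16 resampler using linear interpolation.
// Not an SDK API — a sketch of the resampling step required before
// handing audio to send()/yieldAudioData().
function resamplePCM16(input: Int16Array, fromRate: number, toRate: number): Int16Array {
  if (fromRate === toRate) return input.slice()
  const outLength = Math.round((input.length * toRate) / fromRate)
  const out = new Int16Array(outLength)
  const step = fromRate / toRate
  for (let i = 0; i < outLength; i++) {
    const pos = i * step
    const i0 = Math.min(Math.floor(pos), input.length - 1)
    const i1 = Math.min(i0 + 1, input.length - 1)
    const frac = pos - Math.floor(pos)
    // Interpolate between neighboring samples
    out[i] = Math.round(input[i0] * (1 - frac) + input[i1] * frac)
  }
  return out
}

// One second of 48 kHz mono PCM16 is 48000 samples (96000 bytes);
// resampled to 16 kHz it becomes 16000 samples (32000 bytes).
const oneSecond48k = new Int16Array(48000)
const oneSecond16k = resamplePCM16(oneSecond48k, 48000, 16000)
```

On little-endian platforms the resulting `Int16Array` can be passed along as `oneSecond16k.buffer` (an `ArrayBuffer`) or `new Uint8Array(oneSecond16k.buffer)`, matching the formats the README lists.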
@@ -239,37 +282,42 @@ manager.clearCache()
 
  3D rendering view (rendering layer), responsible for 3D rendering only. Internally automatically creates and manages `AvatarController`.
 
- **⚠️ Important Limitation:** Currently, the SDK only supports one AvatarView instance at a time. If you need to switch characters, you must first call the `dispose()` method to clean up the current AvatarView, then create a new instance.
+ ```typescript
+ constructor(avatar: Avatar, container: HTMLElement)
+ ```
+
+ **Parameters:**
+ - `avatar`: Avatar instance
+ - `container`: Canvas container element (required)
+ - Canvas automatically uses the full size of the container (width and height)
+ - Canvas aspect ratio adapts to container size - set container size to control aspect ratio
+ - Canvas will be automatically added to the container
+ - SDK automatically handles resize events via ResizeObserver
 
- **Playback Mode Configuration:**
+ **Playback Mode:**
+ - The playback mode is determined by `drivingServiceMode` in `AvatarSDK.initialize()` configuration
  - The playback mode is fixed when creating `AvatarView` and persists throughout its lifecycle
  - Cannot be changed after creation
 
  ```typescript
- import { AvatarPlaybackMode } from '@spatialwalk/avatarkit'
-
  // Create view (Canvas is automatically added to container)
- // Network mode (default)
  const container = document.getElementById('avatar-container')
- const avatarView = new AvatarView(avatar: Avatar, {
- container: container,
- playbackMode: AvatarPlaybackMode.network // Optional, default is 'network'
- })
-
- // External data mode
- const avatarView = new AvatarView(avatar: Avatar, {
- container: container,
- playbackMode: AvatarPlaybackMode.external
- })
+ const avatarView = new AvatarView(avatar, container)
 
- // Get Canvas element
- const canvas = avatarView.getCanvas()
+ // Wait for first frame to render
+ avatarView.onFirstRendering = () => {
+ // First frame rendered
+ }
 
- // Get playback mode
- const mode = avatarView.playbackMode // 'network' | 'external'
+ // Get or set avatar transform (position and scale)
+ // Get current transform
+ const currentTransform = avatarView.transform // { x: number, y: number, scale: number }
 
- // Update camera configuration
- avatarView.updateCameraConfig(cameraConfig: CameraConfig)
+ // Set transform
+ avatarView.transform = { x, y, scale }
+ // - x: Horizontal offset in normalized coordinates (-1 to 1, where -1 = left edge, 0 = center, 1 = right edge)
+ // - y: Vertical offset in normalized coordinates (-1 to 1, where -1 = bottom edge, 0 = center, 1 = top edge)
+ // - scale: Scale factor (1.0 = original size, 2.0 = double size, 0.5 = half size)
 
  // Cleanup resources (must be called before switching characters)
  avatarView.dispose()
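Given the documented transform ranges above (x and y in -1 to 1, positive scale), a host app might guard assignments with a small clamp helper. The clamp-to-range policy below is this sketch's own choice, not documented SDK behavior, and `AvatarTransform` is a local name rather than necessarily the SDK's exported type.

```typescript
// Hypothetical helper that clamps a transform to the documented ranges
// before assigning it to avatarView.transform. Purely illustrative.
interface AvatarTransform { x: number; y: number; scale: number }

function clampTransform(t: AvatarTransform): AvatarTransform {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v))
  return {
    x: clamp(t.x, -1, 1),             // -1 = left edge, 0 = center, 1 = right edge
    y: clamp(t.y, -1, 1),             // -1 = bottom edge, 0 = center, 1 = top edge
    scale: t.scale > 0 ? t.scale : 1, // scale must be positive; fall back to original size
  }
}

const safe = clampTransform({ x: 1.5, y: -2, scale: 0 })
```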
@@ -278,105 +326,117 @@ avatarView.dispose()
278
326
  **Character Switching Example:**
279
327
 
280
328
  ```typescript
281
- // Before switching characters, must clean up old AvatarView first
329
+ // To switch characters, simply dispose the old view and create a new one
282
330
  if (currentAvatarView) {
283
331
  currentAvatarView.dispose()
284
- currentAvatarView = null
285
332
  }
286
333
 
287
334
  // Load new character
288
335
  const newAvatar = await avatarManager.load('new-character-id')
289
336
 
290
- // Create new AvatarView (with same or different playback mode)
291
- currentAvatarView = new AvatarView(newAvatar, {
292
- container: container,
293
- playbackMode: AvatarPlaybackMode.network
294
- })
337
+ // Create new AvatarView
338
+ currentAvatarView = new AvatarView(newAvatar, container)
295
339
 
296
- // Network mode: start connection
297
- if (currentAvatarView.playbackMode === AvatarPlaybackMode.network) {
298
- await currentAvatarView.avatarController.start()
299
- }
340
+ // SDK mode: start connection (will throw error if not in SDK mode)
341
+ await currentAvatarView.controller.start()
300
342
  ```
301
343
 
302
344
  ### AvatarController
303
345
 
304
- Audio/animation playback controller (playback layer), manages synchronized playback of audio and animation. Automatically composes `NetworkLayer` in network mode.
346
+ Audio/animation playback controller (playback layer), manages synchronized playback of audio and animation. Automatically handles WebSocket communication in SDK mode.
305
347
 
306
348
  **Two Usage Patterns:**
307
349
 
308
- #### Network Mode Methods
350
+ #### SDK Mode Methods
309
351
 
310
352
  ```typescript
311
353
  // Start WebSocket service
312
354
  await avatarView.avatarController.start()
313
355
 
314
- // Send audio data (SDK handles receiving animation data automatically)
315
- avatarView.avatarController.send(audioData: ArrayBuffer, end: boolean)
316
- // audioData: Audio data (ArrayBuffer format, must be 16kHz mono PCM16)
317
- // - Sample rate: 16kHz (16000 Hz) - backend requirement
318
- // - Format: PCM16 (16-bit signed integer, little-endian)
319
- // - Channels: Mono (single channel)
320
- // - Example: 1 second = 16000 samples × 2 bytes = 32000 bytes
321
- // end: false (default) - Normal audio data sending, server will accumulate audio data, automatically returns animation data and starts synchronized playback of animation and audio after accumulating enough data
322
- // end: true - Immediately return animation data, no longer accumulating, used for ending current conversation or scenarios requiring immediate response
356
+ // Send audio data (must be 16kHz mono PCM16 format)
357
+ const conversationId = avatarView.avatarController.send(audioData: ArrayBuffer, end: boolean)
358
+ // Returns: conversationId - Conversation ID for this conversation session
359
+ // end: false (default) - Continue sending audio data for current conversation
360
+ // end: true - Mark the end of current conversation round. After end=true, sending new audio data will interrupt any ongoing playback from the previous conversation round
323
361
 
324
362
  // Close WebSocket service
325
363
  avatarView.avatarController.close()
326
364
  ```
327
365
 
328
- #### External Data Mode Methods
366
+ #### Host Mode Methods
329
367
 
330
368
  ```typescript
331
- // Start playback with initial audio and animation data
332
- await avatarView.avatarController.play(
333
- initialAudioChunks?: Array<{ data: Uint8Array, isLast: boolean }>, // Initial audio chunks (16kHz mono PCM16)
334
- initialKeyframes?: any[] // Initial animation keyframes (obtained from your service)
335
- )
336
-
337
- // Stream additional audio chunks (after play() is called)
338
- avatarView.avatarController.sendAudioChunk(
369
+ // Stream audio chunks (must be 16kHz mono PCM16 format)
370
+ const conversationId = avatarView.avatarController.yieldAudioData(
339
371
  data: Uint8Array, // Audio chunk data
340
372
  isLast: boolean = false // Whether this is the last chunk
341
373
  )
374
+ // Returns: conversationId - Conversation ID for this audio session
342
375
 
343
- // Stream additional animation keyframes (after play() is called)
344
- avatarView.avatarController.sendKeyframes(
345
- keyframes: any[] // Additional animation keyframes (obtained from your service)
376
+ // Stream animation keyframes (requires conversationId from audio data)
377
+ avatarView.avatarController.yieldFramesData(
378
+ keyframesDataArray: (Uint8Array | ArrayBuffer)[], // Animation keyframes binary data array (each element is a protobuf encoded Message)
379
+ conversationId: string // Conversation ID (required)
346
380
  )
347
381
  ```
348
382
 
383
+ **⚠️ Important: Conversation ID (conversationId) Management**
384
+
385
+ **SDK Mode:**
386
+ - `send()` returns a conversationId to distinguish each conversation round
387
+ - `end=true` marks the end of a conversation round
388
+
389
+ **Host Mode:**
390
+ - `yieldAudioData()` returns a conversationId (automatically generates if starting new session)
391
+ - `yieldFramesData()` requires a valid conversationId parameter
392
+ - Animation data with mismatched conversationId will be **discarded**
393
+ - Use `getCurrentConversationId()` to retrieve the current active conversationId
394
+
349
395
  #### Common Methods (Both Modes)
 
  ```typescript
+
  // Interrupt current playback (stops and clears data)
  avatarView.avatarController.interrupt()
 
  // Clear all data and resources
  avatarView.avatarController.clear()
 
- // Get connection state (network mode only)
- const isConnected = avatarView.avatarController.connected
-
- // Start service (network mode only)
- await avatarView.avatarController.start()
-
- // Close service (network mode only)
- avatarView.avatarController.close()
+ // Get current conversation ID (for Host mode)
+ const conversationId = avatarView.avatarController.getCurrentConversationId()
+ // Returns: Current conversationId for the active audio session, or null if no active session
 
- // Get current avatar state
- const state = avatarView.avatarController.state
+ // Volume control (affects only avatar audio player, not system volume)
+ avatarView.avatarController.setVolume(0.5) // Set volume to 50% (0.0 to 1.0)
+ const currentVolume = avatarView.avatarController.getVolume() // Get current volume (0.0 to 1.0)
 
  // Set event callbacks
- avatarView.avatarController.onConnectionState = (state: ConnectionState) => {} // Network mode only
- avatarView.avatarController.onAvatarState = (state: AvatarState) => {}
+ avatarView.avatarController.onConnectionState = (state: ConnectionState) => {} // SDK mode only
+ avatarView.avatarController.onConversationState = (state: ConversationState) => {}
  avatarView.avatarController.onError = (error: Error) => {}
  ```
 
+ #### Avatar Transform Methods
+
+ ```typescript
+ // Get or set avatar transform (position and scale in canvas)
+ // Get current transform
+ const currentTransform = avatarView.transform // { x: number, y: number, scale: number }
+
+ // Set transform
+ avatarView.transform = { x, y, scale }
+ // - x: Horizontal offset in normalized coordinates (-1 to 1, where -1 = left edge, 0 = center, 1 = right edge)
+ // - y: Vertical offset in normalized coordinates (-1 to 1, where -1 = bottom edge, 0 = center, 1 = top edge)
+ // - scale: Scale factor (1.0 = original size, 2.0 = double size, 0.5 = half size)
+ // Example:
+ avatarView.transform = { x: 0, y: 0, scale: 1.0 } // Center, original size
+ avatarView.transform = { x: 0.5, y: 0, scale: 2.0 } // Halfway to the right edge, double size
+ ```
+
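Since the transform uses normalized coordinates, a conversion helper can map pixel offsets to transform values. `pixelsToTransform` below is a hypothetical helper, not an SDK API; it assumes transform y grows upward as documented above:

```typescript
// Hypothetical helper (not part of the SDK): converts a pixel offset from the
// canvas center into the normalized [-1, 1] transform coordinates above.
function pixelsToTransform(
  px: number,           // pixels right of canvas center
  py: number,           // pixels below canvas center (screen y grows downward)
  canvasWidth: number,
  canvasHeight: number,
  scale = 1.0,
): { x: number; y: number; scale: number } {
  return {
    x: (2 * px) / canvasWidth,   // -1 = left edge, 1 = right edge
    y: (-2 * py) / canvasHeight, // flip sign: transform y grows upward
    scale,
  }
}

// 200px right of center on an 800x600 canvas, double size:
const t = pixelsToTransform(200, 0, 800, 600, 2.0)
// t.x === 0.5, t.scale === 2
```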
  **Important Notes:**
- - `start()` and `close()` are only available in network mode
- - `play()`, `sendAudioChunk()`, and `sendKeyframes()` are only available in external data mode
- - `interrupt()` and `clear()` are available in both modes
+ - `start()` and `close()` are only available in SDK mode
+ - `yieldAudioData()` and `yieldFramesData()` are only available in Host mode
+ - `pause()`, `resume()`, `interrupt()`, `clear()`, `getCurrentConversationId()`, `setVolume()`, and `getVolume()` are available in both modes
  - The playback mode is determined when creating `AvatarView` and cannot be changed
 
  ## 🔧 Configuration
@@ -386,40 +446,55 @@ avatarView.avatarController.onError = (error: Error) => {}
  ```typescript
  interface Configuration {
    environment: Environment
+   drivingServiceMode?: DrivingServiceMode // Optional, default is 'sdk' (SDK mode)
+   logLevel?: LogLevel // Optional, default is 'off' (no logs)
+   audioFormat?: AudioFormat // Optional, default is { channelCount: 1, sampleRate: 16000 }
+   characterApiBaseUrl?: string // Optional, internal debug config, can be ignored
  }
- ```
-
- **Description:**
- - `environment`: Specifies the environment (cn/us/test), SDK will automatically use the corresponding API address and WebSocket address based on the environment
- - `sessionToken`: Set separately via `AvatarKit.setSessionToken()`, not in Configuration
 
- ```typescript
- enum Environment {
-   cn = 'cn', // China region
-   us = 'us', // US region
-   test = 'test' // Test environment
+ interface AudioFormat {
+   readonly channelCount: 1 // Fixed to 1 (mono)
+   readonly sampleRate: number // Supported: 8000, 16000, 22050, 24000, 32000, 44100, 48000 Hz; default: 16000
  }
  ```
 
- ### AvatarViewOptions
+ ### LogLevel
+
+ Control the verbosity of SDK logs:
 
  ```typescript
- interface AvatarViewOptions {
-   playbackMode?: AvatarPlaybackMode // Playback mode, default is 'network'
-   container?: HTMLElement // Canvas container element
+ enum LogLevel {
+   off = 'off', // Disable all logs
+   error = 'error', // Only error logs
+   warning = 'warning', // Warning and error logs
+   all = 'all' // All logs (info, warning, error)
  }
  ```
 
+ **Note:** `LogLevel.off` completely disables all logging, including error logs. Use with caution in production environments.
+
  **Description:**
- - `playbackMode`: Specifies the playback mode (`'network'` or `'external'`), default is `'network'`
-   - `'network'`: SDK handles WebSocket communication, send audio via `send()`
-   - `'external'`: External components provide audio and animation data, SDK handles synchronized playback
- - `container`: Optional container element for Canvas, if not provided, Canvas will be created but not added to DOM
+ - `environment`: Specifies the environment (cn/intl), SDK will automatically use the corresponding API address and WebSocket address based on the environment
+ - `drivingServiceMode`: Specifies the driving service mode
+   - `DrivingServiceMode.sdk` (default): SDK mode - SDK handles WebSocket communication automatically
+   - `DrivingServiceMode.host`: Host mode - Host application provides audio and animation data
+ - `logLevel`: Controls the verbosity of SDK logs
+   - `LogLevel.off` (default): Disable all logs
+   - `LogLevel.error`: Only error logs
+   - `LogLevel.warning`: Warning and error logs
+   - `LogLevel.all`: All logs (info, warning, error)
+ - `audioFormat`: Configures audio sample rate and channel count
+   - `channelCount`: Fixed to 1 (mono channel)
+   - `sampleRate`: Audio sample rate in Hz (default: 16000)
+     - Supported values: 8000, 16000, 22050, 24000, 32000, 44100, 48000
+     - The configured sample rate will be used for both audio recording and playback
+ - `characterApiBaseUrl`: Internal debug config, can be ignored
+ - `sessionToken`: Set separately via `AvatarSDK.setSessionToken()`, not in Configuration
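Audio captured via the Web Audio API arrives as Float32 samples, while the driving service has historically expected 16-bit little-endian PCM. Assuming that payload format still applies, a conversion helper might look like this (`float32ToPcm16` is illustrative, not an SDK export):

```typescript
// Sketch (assumes the audio payload is 16-bit little-endian PCM, as in earlier
// SDK versions): converts Web Audio Float32 samples in [-1, 1] into a PCM16
// buffer matching the configured mono format.
function float32ToPcm16(samples: Float32Array): ArrayBuffer {
  const buffer = new ArrayBuffer(samples.length * 2) // 2 bytes per sample
  const view = new DataView(buffer)
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])) // clamp to avoid overflow
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true) // little-endian
  }
  return buffer
}

// At the default 16000 Hz mono format, 1 second of audio is
// 16000 samples x 2 bytes = 32000 bytes:
const oneSecond = float32ToPcm16(new Float32Array(16000))
console.log(oneSecond.byteLength) // 32000
```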
 
  ```typescript
- enum AvatarPlaybackMode {
-   network = 'network', // Network mode: SDK handles WebSocket communication
-   external = 'external' // External data mode: External provides data, SDK handles playback
+ enum Environment {
+   cn = 'cn', // China region
+   intl = 'intl', // International region
  }
  ```
 
@@ -450,16 +525,25 @@ enum ConnectionState {
  }
  ```
 
- ### AvatarState
+ ### ConversationState
 
  ```typescript
- enum AvatarState {
-   idle = 'idle', // Idle state, showing breathing animation
-   active = 'active', // Active, waiting for playable content
-   playing = 'playing' // Playing
+ enum ConversationState {
+   idle = 'idle', // Idle state (breathing animation)
+   playing = 'playing', // Playing state (active conversation)
+   pausing = 'pausing' // Pausing state (paused during playback)
  }
  ```
 
+ **State Description:**
+ - `idle`: Avatar is in idle state (breathing animation), waiting for conversation to start
+ - `playing`: Avatar is playing conversation content (including during transition animations)
+ - `pausing`: Avatar playback is paused (e.g., when `end=false` and waiting for more audio data)
+
+ **Note:** During transition animations, the target state is notified immediately:
+ - When transitioning from `idle` to `playing`, the `playing` state is notified immediately
+ - When transitioning from `playing` to `idle`, the `idle` state is notified immediately
+
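A host application typically branches on these states in the `onConversationState` callback. The sketch below mirrors the enum values as string literals so it stands alone; wiring it to the SDK callback is shown only as a comment:

```typescript
// Stand-alone sketch: the string values mirror the ConversationState enum above.
type ConversationStateValue = 'idle' | 'playing' | 'pausing'

function describeState(state: ConversationStateValue): string {
  switch (state) {
    case 'idle':
      return 'Breathing animation; ready for the next conversation round'
    case 'playing':
      return 'Conversation content (or a transition animation) is playing'
    case 'pausing':
      return 'Paused, e.g. waiting for more audio data (end=false)'
  }
}

// Wired to the SDK, `playing` would be reported as soon as the
// idle-to-playing transition animation begins:
// avatarView.avatarController.onConversationState = (s) => console.log(describeState(s))
console.log(describeState('playing'))
```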
  ## 🎨 Rendering System
 
  The SDK supports two rendering backends:
@@ -469,57 +553,6 @@ The SDK supports two rendering backends:
 
  The rendering system automatically selects the best backend; no manual configuration is needed.
 
- ## 🔍 Debugging and Monitoring
-
- ### Logging System
-
- The SDK has a built-in complete logging system, supporting different levels of log output:
-
- ```typescript
- import { logger } from '@spatialwalk/avatarkit'
-
- // Set log level
- logger.setLevel('verbose') // 'basic' | 'verbose'
-
- // Manual log output
- logger.log('Info message')
- logger.warn('Warning message')
- logger.error('Error message')
- ```
-
- ### Performance Monitoring
-
- The SDK provides performance monitoring interfaces to monitor rendering performance:
-
- ```typescript
- // Get rendering performance statistics
- const stats = avatarView.getPerformanceStats()
-
- if (stats) {
-   console.log(`Render time: ${stats.renderTime.toFixed(2)}ms`)
-   console.log(`Sort time: ${stats.sortTime.toFixed(2)}ms`)
-   console.log(`Rendering backend: ${stats.backend}`)
-
-   // Calculate frame rate
-   const fps = 1000 / stats.renderTime
-   console.log(`Frame rate: ${fps.toFixed(2)} FPS`)
- }
-
- // Regular performance monitoring
- setInterval(() => {
-   const stats = avatarView.getPerformanceStats()
-   if (stats) {
-     // Send to monitoring service or display on UI
-     console.log('Performance:', stats)
-   }
- }, 1000)
- ```
-
- **Performance Statistics Description:**
- - `renderTime`: Total rendering time (milliseconds), includes sorting and GPU rendering
- - `sortTime`: Sorting time (milliseconds), uses Radix Sort algorithm to depth-sort point cloud
- - `backend`: Currently used rendering backend (`'webgpu'` | `'webgl'` | `null`)
-
  ## 🚨 Error Handling
 
  ### SPAvatarError
@@ -553,15 +586,12 @@ avatarView.avatarController.onError = (error: Error) => {
 
  ### Lifecycle Management
 
- #### Network Mode Lifecycle
+ #### SDK Mode Lifecycle
 
  ```typescript
  // Initialize
  const container = document.getElementById('avatar-container')
- const avatarView = new AvatarView(avatar, {
-   container: container,
-   playbackMode: AvatarPlaybackMode.network
- })
+ const avatarView = new AvatarView(avatar, container)
  await avatarView.avatarController.start()
 
  // Use
@@ -572,21 +602,16 @@ avatarView.avatarController.close()
  avatarView.dispose() // Automatically cleans up all resources
  ```
 
- #### External Data Mode Lifecycle
+ #### Host Mode Lifecycle
 
  ```typescript
  // Initialize
  const container = document.getElementById('avatar-container')
- const avatarView = new AvatarView(avatar, {
-   container: container,
-   playbackMode: AvatarPlaybackMode.external
- })
+ const avatarView = new AvatarView(avatar, container)
 
  // Use
- const initialAudioChunks = [{ data: audioData1, isLast: false }]
- await avatarView.avatarController.play(initialAudioChunks, initialKeyframes)
- avatarView.avatarController.sendAudioChunk(audioChunk, false)
- avatarView.avatarController.sendKeyframes(keyframes)
+ const conversationId = avatarView.avatarController.yieldAudioData(audioChunk, false)
+ avatarView.avatarController.yieldFramesData(keyframesDataArray, conversationId) // keyframesDataArray: (Uint8Array | ArrayBuffer)[]
 
  // Cleanup
  avatarView.avatarController.clear() // Clear all data and resources
@@ -594,11 +619,10 @@ avatarView.dispose() // Automatically cleans up all resources
  ```
 
  **⚠️ Important Notes:**
- - SDK currently only supports one AvatarView instance at a time
- - When switching characters, must first call `dispose()` to clean up old AvatarView, then create new instance
+ - When an AvatarView instance is no longer needed, call `dispose()` to properly clean up its resources
  - Not properly cleaning up may cause resource leaks and rendering errors
- - In network mode, call `close()` before `dispose()` to properly close WebSocket connections
- - In external data mode, call `clear()` before `dispose()` to clear all playback data
+ - In SDK mode, call `close()` before `dispose()` to properly close WebSocket connections
+ - In Host mode, call `clear()` before `dispose()` to clear all playback data
 
  ### Memory Optimization
 
@@ -606,51 +630,6 @@ avatarView.dispose() // Automatically cleans up all resources
  - Supports dynamic loading/unloading of character and animation resources
  - Provides memory usage monitoring interface
 
- ### Audio Data Sending
-
- #### Network Mode
-
- The `send()` method receives audio data in `ArrayBuffer` format:
-
- **Audio Format Requirements:**
- - **Sample Rate**: 16kHz (16000 Hz) - **Backend requirement, must be exactly 16kHz**
- - **Format**: PCM16 (16-bit signed integer, little-endian)
- - **Channels**: Mono (single channel)
- - **Data Size**: Each sample is 2 bytes, so 1 second of audio = 16000 samples × 2 bytes = 32000 bytes
-
- **Usage:**
- - `audioData`: Audio data (ArrayBuffer format, must be 16kHz mono PCM16)
- - `end=false` (default) - Normal audio data sending, server will accumulate audio data, automatically returns animation data and starts synchronized playback of animation and audio after accumulating enough data
- - `end=true` - Immediately return animation data, no longer accumulating, used for ending current conversation or scenarios requiring immediate response
- - **Important**: No need to wait for `end=true` to start playing, it will automatically start playing after accumulating enough audio data
-
- #### External Data Mode
-
- The `play()` method starts playback with initial data, then use `sendAudioChunk()` to stream additional audio:
-
- **Audio Format Requirements:**
- - Same as network mode: 16kHz mono PCM16 format
- - Audio data should be provided as `Uint8Array` in chunks with `isLast` flag
-
- **Usage:**
- ```typescript
- // Start playback with initial audio and animation data
- // Note: Audio and animation data should be obtained from your backend service
- const initialAudioChunks = [
-   { data: audioData1, isLast: false },
-   { data: audioData2, isLast: false }
- ]
- await avatarController.play(initialAudioChunks, initialKeyframes)
-
- // Stream additional audio chunks
- avatarController.sendAudioChunk(audioChunk, isLast)
- ```
-
- **Resampling (Both Modes):**
- - If your audio source is at a different sample rate (e.g., 24kHz, 48kHz), you **must** resample it to 16kHz before sending
- - For high-quality resampling, use Web Audio API's `OfflineAudioContext` with anti-aliasing filtering
- - See example projects (`vanilla`, `react`, `vue`) for complete resampling implementation
-
  ## 🌐 Browser Compatibility
 
  - **Chrome/Edge** 90+ (WebGPU recommended)
@@ -669,6 +648,5 @@ Issues and Pull Requests are welcome!
  ## 📞 Support
 
  For questions, please contact:
- - Email: support@spavatar.com
- - Documentation: https://docs.spavatar.com
- - GitHub: https://github.com/spavatar/sdk
+ - Email: code@spatialwalk.net
+ - Documentation: https://docs.spatialreal.ai