@vox-ai/react 0.5.0 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,282 +1,252 @@
- # Vox.ai React Library
+ # @vox-ai/react
 
- An SDK library for integrating voice AI capabilities into React applications.
+ A hook library for using vox.ai voice agents in React apps.
 
  ## Installation
 
- Install the library in your project with your package manager.
-
  ```bash
  npm install @vox-ai/react
- # or
+ # or
  yarn add @vox-ai/react
- # or
- pnpm install @vox-ai/react
+ # or
+ pnpm add @vox-ai/react
  ```
 
- ## Usage
-
- ### useVoxAI
-
- A React hook that manages voice AI interactions with the Vox.ai platform.
-
- #### Initialization
-
- First, initialize the useVoxAI hook.
+ ## Quick Start
 
  ```tsx
- import { useVoxAI } from "@vox-ai/react";
-
- function VoiceComponent() {
-   const {
-     connect,
-     disconnect,
-     state,
-     messages,
-     send,
-     audioWaveform,
-     toggleMic,
-     setVolume,
-   } = useVoxAI({
-     onConnect: () => console.log("Connected to Vox.ai"),
-     onDisconnect: () => console.log("Disconnected from Vox.ai"),
-     onError: (error) => console.error("Error:", error),
-     onMessage: (message) => console.log("New message:", message),
+ import { useConversation } from "@vox-ai/react";
+
+ export function VoiceWidget() {
+   const conversation = useConversation({
+     onConnect: () => console.log("Connected"),
+     onDisconnect: () => console.log("Disconnected"),
+     onStatusChange: (status) => console.log("status:", status),
+     onModeChange: (mode) => console.log("mode:", mode),
+     onMessage: (message) => console.log(`${message.source}: ${message.text}`),
+     onError: (error) => console.error("error:", error.message),
    });
 
-   // rest of the component
+   const start = async () => {
+     // Request microphone permission (best announced in the UI beforehand)
+     await navigator.mediaDevices.getUserMedia({ audio: true });
+
+     const conversationId = await conversation.startSession({
+       agentId: "YOUR_AGENT_ID",
+       apiKey: "YOUR_API_KEY",
+     });
+     console.log("session started:", conversationId);
+   };
+
+   return (
+     <div>
+       <button onClick={start} disabled={conversation.status !== "disconnected"}>
+         Start
+       </button>
+       <button onClick={conversation.endSession}>End</button>
+       <p>Status: {conversation.status}</p>
+       <p>Speaking: {conversation.isSpeaking ? "Yes" : "No"}</p>
+     </div>
+   );
  }
  ```
 
- #### Options
+ ## `useConversation(options?)`
 
- - **onConnect** - handler called when the voice AI connection is established
- - **onDisconnect** - handler called when the voice AI connection is closed
- - **onMessage** - handler called when a new message is received
- - **onError** - handler called when an error occurs
+ ### Callbacks (passed when initializing the hook)
 
- #### Methods
+ | Callback | Signature | Description |
+ |----------|-----------|-------------|
+ | `onConnect` | `() => void` | Connection established |
+ | `onDisconnect` | `() => void` | Connection closed |
+ | `onStatusChange` | `(status: ConversationStatus) => void` | Status changed (`"disconnected"` → `"connecting"` → `"connected"`) |
+ | `onModeChange` | `(mode: ConversationMode) => void` | Mode changed (`"listening"` ⇄ `"speaking"`) |
+ | `onMessage` | `(message: ConversationMessage) => void` | Message received (user transcription, agent response) |
+ | `onError` | `(error: Error) => void` | An error occurred |
 
- ##### connect
+ ### Hook Options
 
- Establishes a connection to the Vox.ai service. Requires authentication parameters.
-
- ```tsx
- // Connect to Vox.ai with an agent ID and API key
- connect({
-   agentId: "your-agent-id",
-   apiKey: "your-api-key",
-   dynamicVariables: {
-     // Dynamic variables for customizing the conversation
-     userName: "홍길동",
-     context: "customer-support",
-   },
-   metadata: {
-     // Call metadata passed along from the frontend
-     callerId: "customer-123",
-     departmentId: "support",
-   },
- });
- ```
+ | Option | Type | Description |
+ |--------|------|-------------|
+ | `textOnly` | `boolean` | Default for text-only sessions. When `true`, connects in chat mode without microphone/audio |
 
- ##### disconnect
+ ### React State
 
- A method that manually ends the voice AI session.
+ | State | Type | Description |
+ |-------|------|-------------|
+ | `status` | `ConversationStatus` | `"disconnected"` \| `"connecting"` \| `"connected"` |
+ | `isSpeaking` | `boolean` | Whether the agent is currently speaking |
+ | `micMuted` | `boolean` | Microphone mute state |
+ | `messages` | `ConversationMessage[]` | Messages exchanged in the current session |
 
- ```tsx
- disconnect();
- ```
+ > Corresponds to the JS SDK's `getStatus()`, `getMode()`, and `getMicMuted()`; in React they are exposed as state, so components re-render automatically.
 
- ##### send
+ ### Methods
 
- A method that sends text messages or DTMF tones to the agent.
+ #### Session Control
 
  ```tsx
- // Send a text message
- send({ message: "Hello, I need some help." });
+ // Start a session; returns the conversationId
+ const conversationId = await conversation.startSession({
+   agentId: "YOUR_AGENT_ID",
+   apiKey: "YOUR_API_KEY",
+ });
+
+ // End the session
+ await conversation.endSession();
 
- // Send DTMF (0-9, *, #)
- send({ digit: 1 });
+ // Get the session ID
+ const id = conversation.getId();
+
+ // Get the current session's messages
+ const messages = conversation.getMessages();
  ```
 
- ##### audioWaveform
+ #### `startSession` Options
 
- A method that returns audio waveform data for the agent or the user.
+ | Option | Type | Required | Description |
+ |--------|------|----------|-------------|
+ | `agentId` | `string` | Yes | Agent ID |
+ | `apiKey` | `string` | Yes | API key |
+ | `agentVersion` | `string` | | Agent version (`"current"`, `"production"`, `"v1"`, etc.; default: `"current"`) |
+ | `textOnly` | `boolean` | | Per-session text-only setting that overrides the hook default |
+ | `dynamicVariables` | `Record<string, string \| number \| boolean>` | | Dynamic variables injected into the agent prompt |
+ | `metadata` | `Record<string, unknown>` | | Call metadata (included in webhooks and call logs) |
 
- ```tsx
- // Get agent audio waveform data (the default)
- const agentWaveform = audioWaveform({
-   speaker: "agent", // "agent" or "user"
-   barCount: 20, // number of waveform bars to return
-   updateInterval: 50, // update interval (ms)
- });
+ #### Sending Messages
 
- // Get user audio waveform data
- const userWaveform = audioWaveform({ speaker: "user" });
+ ```tsx
+ // Send a text message (typed input instead of voice)
+ await conversation.sendUserMessage("Hello");
  ```
 
- ##### toggleMic
-
- A method that enables or disables the user's microphone.
+ #### Message History
 
  ```tsx
- // Enable the microphone
- toggleMic(true);
+ conversation.messages.forEach((message) => {
+   console.log(message.source, message.text, message.isFinal);
+ });
 
- // Disable the microphone
- toggleMic(false);
+ const snapshot = conversation.getMessages();
  ```
 
- ##### setVolume
+ - `messages` is React state, so updates trigger a re-render automatically
+ - `getMessages()` returns a snapshot of the message array at the current point in time
 
- A method that sets the agent's volume. Values range from 0 (muted) to 1 (maximum volume).
+ #### Microphone Control
 
  ```tsx
- // Set the volume to 50%
- setVolume(0.5);
- ```
+ // Mute
+ await conversation.setMicMuted(true);
 
- #### State and Data
+ // Unmute
+ await conversation.setMicMuted(false);
 
- ##### state
+ // Check the current state via conversation.micMuted
+ ```
 
- React state containing the current state of the voice AI interaction.
+ #### Volume Control
 
  ```tsx
- const { state } = useVoxAI();
- console.log(state); // one of "disconnected", "connecting", "initializing", "listening", "thinking", "speaking"
+ // Set the agent voice volume (0.0–1.0)
+ conversation.setVolume({ volume: 0.5 });
  ```
 
- You can use the state to show an appropriate UI indicator to the user.
-
- ##### messages
-
- React state containing the conversation history.
+ #### Audio Monitoring
 
  ```tsx
- const { messages } = useVoxAI();
- console.log(messages); // an array of message objects
+ // Input/output volume (0.0–1.0)
+ const inputVol = conversation.getInputVolume();
+ const outputVol = conversation.getOutputVolume();
+
+ // Frequency data (Uint8Array, for visualization)
+ const inputFreq = conversation.getInputByteFrequencyData();
+ const outputFreq = conversation.getOutputByteFrequencyData();
  ```
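As an illustrative aside (not part of the published README), one frame of the `Uint8Array` returned by `getOutputByteFrequencyData()` can be collapsed into a single 0–1 level for a simple volume meter. `averageLevel` below is a hypothetical helper, not an SDK export; only the commented lines touch the hook API.

```typescript
// Hypothetical helper: average byte-frequency bins (0–255 each)
// down to a normalized 0–1 level for a volume meter.
function averageLevel(freq: Uint8Array): number {
  if (freq.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < freq.length; i++) sum += freq[i];
  return sum / freq.length / 255;
}

// Sketch of use in a render loop (assumes a connected `conversation`
// from useConversation; browser-only, so shown as comments):
//
//   const tick = () => {
//     const freq = conversation.getOutputByteFrequencyData();
//     meterRef.current!.style.transform = `scaleY(${freq ? averageLevel(freq) : 0})`;
//     requestAnimationFrame(tick);
//   };
```

A plain average is crude but cheap; a real meter might weight bins or apply decay, which the SDK leaves to the caller.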
 
- Messages have the following structure:
+ #### Switching Devices
 
  ```tsx
- type VoxMessage = {
-   id?: string;
-   name: "agent" | "user" | "tool";
-   message?: string;
-   timestamp: number;
-   isFinal?: boolean;
-   tool?: FunctionToolsExecuted; // function tools executed by the agent
- };
+ // Change the input device
+ await conversation.changeInputDevice({ inputDeviceId: "device-id" });
+
+ // Change the output device
+ await conversation.changeOutputDevice({ outputDeviceId: "device-id" });
  ```
 
- ## Examples
+ Enumerate available devices with [`navigator.mediaDevices.enumerateDevices()`](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices).
 
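As an illustrative aside (not part of the published README), a device picker typically pairs `enumerateDevices()` with the `changeInputDevice`/`changeOutputDevice` calls above. `splitAudioDevices` below is a hypothetical pure helper; the browser wiring is sketched in comments.

```typescript
// Hypothetical helper: split enumerateDevices() results into audio
// input and output candidates for a device picker UI.
type DeviceLike = { kind: string; deviceId: string; label: string };

function splitAudioDevices(devices: DeviceLike[]) {
  return {
    inputs: devices.filter((d) => d.kind === "audioinput"),
    outputs: devices.filter((d) => d.kind === "audiooutput"),
  };
}

// In the browser (assumes a connected `conversation`):
//
//   const all = await navigator.mediaDevices.enumerateDevices();
//   const { inputs } = splitAudioDevices(all);
//   if (inputs[0]) {
//     await conversation.changeInputDevice({ inputDeviceId: inputs[0].deviceId });
//   }
```

Note that device labels are empty until the user grants microphone permission, so pickers are usually populated after `getUserMedia` succeeds.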
- ### Basic Usage
+ ## Dynamic Variables / Metadata
 
  ```tsx
- import React, { useState } from "react";
- import { useVoxAI } from "@vox-ai/react";
-
- function VoiceAssistant() {
-   const [isConnected, setIsConnected] = useState(false);
-   const { connect, disconnect, state, messages, send } = useVoxAI({
-     onConnect: () => setIsConnected(true),
-     onDisconnect: () => setIsConnected(false),
-     onError: (error) => console.error("Error:", error),
-   });
-
-   const handleConnect = () => {
-     connect({
-       agentId: "your-agent-id",
-       apiKey: "your-api-key",
-     });
-   };
-
-   const handleSendMessage = () => {
-     send({ message: "Hello, I need some help." });
-   };
-
-   return (
-     <div>
-       <h1>Vox.ai Voice Assistant</h1>
-
-       <div>
-         <button onClick={handleConnect} disabled={isConnected}>
-           Connect
-         </button>
-         <button onClick={disconnect} disabled={!isConnected}>
-           Disconnect
-         </button>
-         <button onClick={handleSendMessage} disabled={!isConnected}>
-           Send Message
-         </button>
-       </div>
-
-       <div>
-         <p>Current state: {state}</p>
-       </div>
-
-       <div>
-         <h2>Conversation</h2>
-         <ul>
-           {messages.map((msg, index) => (
-             <li key={msg.id || index}>
-               <strong>{msg.name}:</strong> {msg.message}
-             </li>
-           ))}
-         </ul>
-       </div>
-     </div>
-   );
- }
+ const conversationId = await conversation.startSession({
+   agentId: "YOUR_AGENT_ID",
+   apiKey: "YOUR_API_KEY",
+   agentVersion: "production",
+   dynamicVariables: {
+     userName: "홍길동",
+     userType: "premium",
+     accountBalance: 50000,
+   },
+   metadata: {
+     sessionId: "sess_abc123",
+     source: "mobile-app",
+   },
+ });
  ```
 
- ### Audio Waveform Visualization Example
-
- ```tsx
- import React, { useState, useEffect } from "react";
- import { useVoxAI } from "@vox-ai/react";
+ - `dynamicVariables` — referenced in the agent prompt as `{{userName}}`
+ - `metadata` — included in outbound webhooks and call logs
 
- function WaveformVisualizer() {
-   const { audioWaveform, state } = useVoxAI();
-   const [waveformData, setWaveformData] = useState([]);
+ ## Text Only
 
-   // Update the waveform data periodically
-   useEffect(() => {
-     if (state === "disconnected") return;
+ ```tsx
+ const conversation = useConversation({
+   textOnly: true,
+ });
 
-     const intervalId = setInterval(() => {
-       // Get agent audio waveform data
-       const data = audioWaveform({ speaker: "agent", barCount: 30 });
-       setWaveformData(data);
-     }, 50);
+ await conversation.startSession({
+   agentId: "YOUR_AGENT_ID",
+   apiKey: "YOUR_API_KEY",
+ });
 
-     return () => clearInterval(intervalId);
-   }, [audioWaveform, state]);
+ await conversation.sendUserMessage("Testing with text only");
+ ```
 
-   return (
-     <div className="waveform-container">
-       {waveformData.map((value, index) => (
-         <div
-           key={index}
-           className="waveform-bar"
-           style={{
-             height: `${value * 100}%`,
-             width: "10px",
-             backgroundColor: "#3498db",
-             margin: "0 2px",
-           }}
-         />
-       ))}
-     </div>
-   );
- }
+ - A text-only session does not request microphone permission
+ - Agent responses arrive over a LiveKit text stream
+ - Audio-only APIs become safe no-ops or return zero values
+
+ ## Exported Types
+
+ ```ts
+ import type {
+   ConversationMessage,
+   ConversationMode,
+   ConversationSource,
+   ConversationStatus,
+   InputDeviceConfig,
+   OutputDeviceConfig,
+   SetVolumeParams,
+   StartConversationOptions,
+   UseConversationOptions,
+ } from "@vox-ai/react";
  ```
 
- ## Contributing
+ ## Relationship to the JS SDK
+
+ | JS SDK (`@vox-ai/client`) | React SDK (`@vox-ai/react`) |
+ |---------------------------|----------------------------|
+ | `Conversation.startSession(opts)` | `conversation.startSession(opts)` |
+ | `getMessages()` | `messages` / `getMessages()` |
+ | `getStatus()` | `status` (React state) |
+ | `getMode()` | `isSpeaking` (React state) |
+ | `getMicMuted()` | `micMuted` (React state) |
+ | remaining methods/callbacks | identical |
 
- Please open an issue before proposing changes. All contributions are welcome!
+ ## Notes
 
- By submitting a pull request, you agree to your code being incorporated into this library under the MIT license.
+ - `useVoxAI` is deprecated; use `useConversation` instead
+ - Authentication works by passing the `apiKey` directly
+ - The underlying connection is based on LiveKit WebRTC
+ - Audio device support may vary by browser
@@ -1 +1 @@
- export * from "./useVoxAI";
+ export * from "./useConversation";
@@ -0,0 +1,28 @@
+ import { type ConversationMessage, type ConversationMode, type ConversationSource, type ConversationStatus, type InputDeviceConfig, type OutputDeviceConfig, type SetVolumeParams, type StartSessionOptions } from "@vox-ai/client";
+ type HookCallbacks = Pick<StartSessionOptions, "onConnect" | "onDisconnect" | "onError" | "onMessage" | "onStatusChange" | "onModeChange">;
+ export type UseConversationOptions = HookCallbacks & {
+     textOnly?: boolean;
+ };
+ export type StartConversationOptions = Omit<StartSessionOptions, keyof HookCallbacks>;
+ export declare function useConversation(options?: UseConversationOptions): {
+     startSession: (params: StartConversationOptions) => Promise<string>;
+     endSession: () => Promise<void>;
+     getId: () => string | undefined;
+     getMessages: () => ConversationMessage[];
+     setVolume: (volume: {
+         volume: number;
+     }) => void;
+     setMicMuted: (isMuted: boolean) => Promise<void>;
+     sendUserMessage: (text: string) => Promise<void>;
+     changeInputDevice: (config: InputDeviceConfig) => Promise<boolean>;
+     changeOutputDevice: (config: OutputDeviceConfig) => Promise<boolean>;
+     getInputVolume: () => number;
+     getOutputVolume: () => number;
+     getInputByteFrequencyData: () => Uint8Array<ArrayBufferLike> | undefined;
+     getOutputByteFrequencyData: () => Uint8Array<ArrayBufferLike> | undefined;
+     messages: ConversationMessage[];
+     status: ConversationStatus;
+     isSpeaking: boolean;
+     micMuted: boolean;
+ };
+ export type { ConversationMessage, ConversationMode, ConversationSource, ConversationStatus, InputDeviceConfig, OutputDeviceConfig, SetVolumeParams, };
package/dist/index.d.ts CHANGED
@@ -1,6 +1,2 @@
- /**
-  * @voxai/react
-  * React UI component library for vox.ai
-  */
- export { useVoxAI } from "./hooks";
- export type { VoxAgentState, VoxMessage, VoxAIOptions, ConnectParams, FunctionToolsExecuted, FunctionCallInfo, FunctionCallResult, } from "./hooks";
+ export { useConversation } from "./hooks";
+ export type { ConversationMessage, ConversationMode, ConversationSource, ConversationStatus, InputDeviceConfig, OutputDeviceConfig, SetVolumeParams, StartConversationOptions, UseConversationOptions, } from "./hooks";
package/dist/lib.cjs CHANGED
@@ -1,2 +1,2 @@
- var e=require("react/jsx-runtime"),t=require("@livekit/components-react"),n=require("livekit-client"),r=require("react"),o=require("react-dom/client");function a(e){let{port:o,initialConfig:a}=e;const{agent:s,state:c}=t.useVoiceAssistant(),{send:i}=t.useChat(),[u,l]=r.useState({speaker:a.speaker||"agent",barCount:a.barCount,updateInterval:a.updateInterval}),p=t.useParticipantTracks([n.Track.Source.Microphone],null==s?void 0:s.identity)[0],d=t.useTrackTranscription(p),m=t.useAudioWaveform(p,{barCount:"agent"===u.speaker?u.barCount:120,updateInterval:"agent"===u.speaker?u.updateInterval:20}),f=t.useLocalParticipant(),g=t.useTrackTranscription({publication:f.microphoneTrack,source:n.Track.Source.Microphone,participant:f.localParticipant}),v=t.useParticipantTracks([n.Track.Source.Microphone],f.localParticipant.identity)[0],y=t.useAudioWaveform(v,{barCount:"user"===u.speaker?u.barCount:120,updateInterval:"user"===u.speaker?u.updateInterval:20});return r.useEffect(()=>{o&&m&&m.bars&&o.postMessage({type:"waveform_update",waveformData:m.bars,speaker:"agent"})},[o,m]),r.useEffect(()=>{o&&y&&y.bars&&o.postMessage({type:"waveform_update",waveformData:y.bars,speaker:"user"})},[o,y]),r.useEffect(()=>{if(!o)return;const e=e=>{const t=e.data;if("waveform_config"===t.type&&t.config)"number"==typeof t.config.barCount&&"number"==typeof t.config.updateInterval&&l(t.config);else if("send_text"===t.type)i?i(t.text):console.error("sendChat function is not available");else if("send_dtmf"===t.type)f.localParticipant?f.localParticipant.publishDtmf(101,t.digit):console.error("Local participant is not available for DTMF");else if("toggle_mic"===t.type&&"boolean"==typeof t.enabled)f.localParticipant?f.localParticipant.setMicrophoneEnabled(t.enabled).catch(e=>{console.error("Failed to toggle microphone:",e)}):console.error("Local participant is not available for mic toggle");else if("set_volume"===t.type&&"number"==typeof t.volume)if(s)try{s.setVolume(t.volume),console.log("Set agent volume 
to "+t.volume)}catch(e){console.error("Failed to set agent volume:",e)}else console.error("Agent is not available for volume control")};return o.start(),o.addEventListener("message",e),()=>{o.removeEventListener("message",e)}},[o,i,f,s]),r.useEffect(()=>{o&&o.postMessage({type:"state_update",state:c})},[c,o]),r.useEffect(()=>{if(o&&d.segments.length>0){const e=d.segments.map(e=>({id:e.id,text:e.text,isFinal:e.final,timestamp:Date.now(),speaker:"agent"}));o.postMessage({type:"transcription_update",transcriptions:e})}},[d.segments,o]),r.useEffect(()=>{if(o&&g.segments.length>0){const e=g.segments.map(e=>({id:e.id,text:e.text,isFinal:e.final,timestamp:Date.now(),speaker:"user"}));o.postMessage({type:"transcription_update",transcriptions:e})}},[g.segments,o]),t.useDataChannel("function_tools_executed",e=>{if(!o)return;const t=new TextDecoder,n=e.payload instanceof Uint8Array?t.decode(e.payload):String(e.payload);let r;try{r=JSON.parse(n),o.postMessage({type:"function_tools_executed",tool:r})}catch(e){console.error("Failed to parse function call log:",e)}}),null}exports.useVoxAI=function(n){void 0===n&&(n={});const[s,c]=r.useState(null),[i,u]=r.useState("disconnected"),l=r.useRef(Date.now()),[p,d]=r.useState(new Map),[m,f]=r.useState([]),g=r.useRef(""),v=r.useRef(new Set),y=r.useRef(null),b=r.useRef(null),h=r.useRef(null),k=r.useRef(null),[w,C]=r.useState({agent:[],user:[]}),M=r.useRef(null),[E,_]=r.useState(!0);r.useEffect(()=>{const e=Array.from(p.values()).sort((e,t)=>e.timestamp-t.timestamp),t=JSON.stringify(e);t!==g.current&&(g.current=t,f(e),n.onMessage&&e.filter(e=>e.isFinal&&e.id&&!v.current.has(e.id)).forEach(e=>{e.id&&(v.current.add(e.id),null==n.onMessage||n.onMessage(e))}))},[p,n.onMessage]),r.useEffect(()=>{const e=new MessageChannel;return e.port1.onmessage=e=>{const t=e.data;if("state_update"===t.type)u(t.state);else if("transcription_update"===t.type)x(t.transcriptions);else 
if("waveform_update"===t.type&&t.speaker)C(e=>({...e,[t.speaker]:t.waveformData}));else if("function_tools_executed"===t.type&&t.tool){const e="function-calls-"+Date.now();d(n=>{const r=new Map(n);return r.set(e,{id:e,name:"tool",tool:t.tool,timestamp:Date.now(),isFinal:!0}),r})}},e.port1.start(),h.current=e,()=>{var e,t;null==(e=h.current)||e.port1.close(),null==(t=h.current)||t.port2.close(),h.current=null}},[]);const x=r.useCallback(e=>{d(t=>{const n=new Map(t);return e.forEach(e=>{var r;if(e.timestamp<l.current)return;const o="agent"===e.speaker?"agent":"user",a=(null==(r=t.get(e.id))?void 0:r.timestamp)||e.timestamp;n.set(e.id,{id:e.id,name:o,message:e.text,timestamp:a,isFinal:e.isFinal})}),n})},[]);r.useEffect(()=>{const e=document.createElement("div");return e.style.display="none",document.body.appendChild(e),y.current=e,b.current=o.createRoot(e),()=>{b.current&&b.current.unmount(),y.current&&document.body.removeChild(y.current)}},[]);const D=r.useCallback(function(e){let{agentId:t,agentVersion:r,apiKey:o,dynamicVariables:a,metadata:s}=e;try{return Promise.resolve(function(e,p){try{var d=function(){if("disconnected"!==i){const e="Connection attempt rejected: Already in a connection state ("+i+")";return console.warn(e),n.onError&&n.onError(new Error(e)),Promise.reject(new Error(e))}return l.current=Date.now(),u("connecting"),Promise.resolve(fetch("https://www.tryvox.co/api/agent/sdk",{method:"POST",headers:{Authorization:"Bearer "+o,"Content-Type":"application/json"},body:JSON.stringify({agent_id:t,agent_version:r||"current",metadata:{runtime_context:{source:{type:"react-sdk",version:"0.5.0"}},call_web:{dynamic_variables:a||{},metadata:s||{}}}})})).then(function(e){function t(t){return Promise.resolve(e.json()).then(function(e){c(e),n.onConnect&&n.onConnect()})}const r=function(){if(!e.ok)return Promise.resolve(e.text()).then(function(t){throw new Error("Connection failed ("+e.status+"): "+t)})}();return r&&r.then?r.then(t):t()})}()}catch(e){return 
p(e)}return d&&d.then?d.then(void 0,p):d}(0,function(e){c(null),d(new Map),f([]),u("disconnected");const t=e instanceof Error?e:new Error(String(e));n.onError&&n.onError(t)}))}catch(e){return Promise.reject(e)}},[n,i]),S=r.useCallback(()=>{l.current=Date.now(),c(null),d(new Map),f([]),u("disconnected"),n.onDisconnect&&n.onDisconnect()},[n]),P=r.useCallback(e=>{let{message:t,digit:n}=e;if("disconnected"!==i){if(t){const e="user-text-"+Date.now();d(n=>{const r=new Map(n);return r.set(e,{id:e,name:"user",message:t,timestamp:Date.now(),isFinal:!0}),r}),h.current?h.current.port1.postMessage({type:"send_text",text:t}):console.error("No message channel available to send message")}void 0!==n&&(h.current?h.current.port1.postMessage({type:"send_dtmf",digit:n}):console.error("No message channel available to send DTMF"))}else console.warn("Cannot send message: Not connected to a conversation")},[i]),T=r.useCallback(e=>{let{speaker:t="agent",barCount:n=10,updateInterval:r=20}=e;M.current={speaker:t,barCount:n,updateInterval:r},h.current&&h.current.port1.postMessage({type:"waveform_config",config:{speaker:t,barCount:n,updateInterval:r}});const o=w[t]||[];return o.length>0?o.slice(0,n):Array(n).fill(0)},[w]),F=r.useCallback(e=>{_(e),h.current?h.current.port1.postMessage({type:"toggle_mic",enabled:e}):console.error("No message channel available to toggle microphone")},[]),I=r.useCallback(e=>{const t=Math.min(Math.max(e,0),1);h.current?h.current.port1.postMessage({type:"set_volume",volume:t}):console.error("No message channel available to set volume")},[]);return r.useEffect(()=>{b.current&&(s?(k.current||(h.current&&h.current.port2.start(),k.current=e.jsxs(t.LiveKitRoom,{serverUrl:s.serverUrl,token:s.participantToken,audio:!0,video:!1,connect:!0,onDisconnected:S,onError:e=>{console.error("LiveKit connection error:",e),S(),n.onError&&n.onError(new Error("LiveKit connection error: 
"+e.message))},children:[e.jsx(t.RoomAudioRenderer,{}),h.current&&e.jsx(a,{port:h.current.port2,initialConfig:M.current||{barCount:10,updateInterval:20}})]})),b.current.render(k.current)):(k.current=null,b.current.render(e.jsx(e.Fragment,{}))))},[s,S,n.onError]),{connect:D,disconnect:S,state:i,messages:m,send:P,audioWaveform:T,toggleMic:F,setVolume:I}};
+ var e=require("@vox-ai/client"),t=require("react");exports.useConversation=function(n){void 0===n&&(n={});const r=t.useRef(null),u=t.useRef(new Map),[s,c]=t.useState("disconnected"),[o,a]=t.useState(!1),[l,i]=t.useState(!1),[g,v]=t.useState([]),d=t.useCallback(function(t){try{function s(){var s;return u.current=new Map,v([]),Promise.resolve(e.Conversation.startSession({...t,textOnly:null!=(s=t.textOnly)?s:n.textOnly,onConnect:()=>{c("connected"),null==n.onConnect||n.onConnect()},onDisconnect:()=>{c("disconnected"),a(!1),null==n.onDisconnect||n.onDisconnect()},onError:e=>{null==n.onError||n.onError(e)},onMessage:e=>{u.current.set(e.id,e),v(Array.from(u.current.values()).sort((e,t)=>e.timestamp-t.timestamp)),null==n.onMessage||n.onMessage(e)},onStatusChange:e=>{c(e),null==n.onStatusChange||n.onStatusChange(e)},onModeChange:e=>{a("speaking"===e),null==n.onModeChange||n.onModeChange(e)}})).then(function(e){var t;return r.current=e,c(e.getStatus()),i(e.getMicMuted()),a("speaking"===e.getMode()),null!=(t=e.getId())?t:""})}const o=function(){if(r.current)return Promise.resolve(r.current.endSession()).then(function(){r.current=null})}();return Promise.resolve(o&&o.then?o.then(s):s())}catch(l){return Promise.reject(l)}},[n]),m=t.useCallback(function(){try{return r.current?Promise.resolve(r.current.endSession()).then(function(){r.current=null,c("disconnected"),a(!1)}):Promise.resolve()}catch(e){return Promise.reject(e)}},[]),C=t.useCallback(()=>{var e;return null==(e=r.current)?void 0:e.getId()},[]),h=t.useCallback(()=>{var e,t;return null!=(e=null==(t=r.current)?void 0:t.getMessages())?e:g},[g]),M=t.useCallback(e=>{var t;null==(t=r.current)||t.setVolume(e)},[]),p=t.useCallback(function(e){try{return r.current?Promise.resolve(r.current.setMicMuted(e)).then(function(){i(r.current.getMicMuted())}):Promise.resolve()}catch(e){return Promise.reject(e)}},[]),f=t.useCallback(function(e){try{return 
r.current?Promise.resolve(r.current.sendUserMessage(e)).then(function(){}):Promise.resolve()}catch(e){return Promise.reject(e)}},[]),y=t.useCallback(function(e){try{return Promise.resolve(!!r.current&&r.current.changeInputDevice(e))}catch(e){return Promise.reject(e)}},[]),P=t.useCallback(function(e){try{return Promise.resolve(!!r.current&&r.current.changeOutputDevice(e))}catch(e){return Promise.reject(e)}},[]),k=t.useCallback(()=>{var e,t;return null!=(e=null==(t=r.current)?void 0:t.getInputVolume())?e:0},[]),S=t.useCallback(()=>{var e,t;return null!=(e=null==(t=r.current)?void 0:t.getOutputVolume())?e:0},[]),b=t.useCallback(()=>{var e;return null==(e=r.current)?void 0:e.getInputByteFrequencyData()},[]),D=t.useCallback(()=>{var e;return null==(e=r.current)?void 0:e.getOutputByteFrequencyData()},[]);return t.useMemo(()=>({startSession:d,endSession:m,getId:C,getMessages:h,setVolume:M,setMicMuted:p,sendUserMessage:f,changeInputDevice:y,changeOutputDevice:P,getInputVolume:k,getOutputVolume:S,getInputByteFrequencyData:b,getOutputByteFrequencyData:D,messages:g,status:s,isSpeaking:o,micMuted:l}),[d,m,C,h,M,p,f,y,P,k,S,b,D,g,s,o,l])};
  //# sourceMappingURL=lib.cjs.map