@vox-ai/react 0.5.0 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +180 -210
- package/dist/hooks/index.d.ts +1 -1
- package/dist/hooks/useConversation.d.ts +28 -0
- package/dist/index.d.ts +2 -6
- package/dist/lib.cjs +1 -1
- package/dist/lib.cjs.map +1 -1
- package/dist/lib.modern.js +1 -1
- package/dist/lib.modern.js.map +1 -1
- package/dist/lib.module.js +1 -1
- package/dist/lib.module.js.map +1 -1
- package/dist/lib.umd.js +1 -1
- package/dist/lib.umd.js.map +1 -1
- package/package.json +2 -5
- package/dist/hooks/useVoxAI.d.ts +0 -291
- package/dist/utils/constants.d.ts +0 -2
package/README.md
CHANGED
@@ -1,282 +1,252 @@
-#
+# @vox-ai/react

-
+A hook library for using the vox.ai voice agent in React apps.

## Installation

-Install the library into your project with your package manager.
-
```bash
npm install @vox-ai/react
-#
+# or
yarn add @vox-ai/react
-#
-pnpm
+# or
+pnpm add @vox-ai/react
```

-##
-
-### useVoxAI
-
-A React hook that manages voice AI interactions with the Vox.ai platform.
-
-#### Initialization
-
-First, initialize the useVoxAI hook.
+## Quick Start

```tsx
-import {
-
-function
-const {
-
-
-
-
-
-
-  toggleMic,
-  setVolume,
-} = useVoxAI({
-  onConnect: () => console.log("Connected to Vox.ai"),
-  onDisconnect: () => console.log("Disconnected from Vox.ai"),
-  onError: (error) => console.error("Error:", error),
-  onMessage: (message) => console.log("New message:", message),
+import { useConversation } from "@vox-ai/react";
+
+export function VoiceWidget() {
+  const conversation = useConversation({
+    onConnect: () => console.log("connected"),
+    onDisconnect: () => console.log("disconnected"),
+    onStatusChange: (status) => console.log("status:", status),
+    onModeChange: (mode) => console.log("mode:", mode),
+    onMessage: (message) => console.log(`${message.source}: ${message.text}`),
+    onError: (error) => console.error("error:", error.message),
  });

-
+  const start = async () => {
+    // Request microphone permission (recommended: explain this in the UI first)
+    await navigator.mediaDevices.getUserMedia({ audio: true });
+
+    const conversationId = await conversation.startSession({
+      agentId: "YOUR_AGENT_ID",
+      apiKey: "YOUR_API_KEY",
+    });
+    console.log("session started:", conversationId);
+  };
+
+  return (
+    <div>
+      <button onClick={start} disabled={conversation.status !== "disconnected"}>
+        Start
+      </button>
+      <button onClick={conversation.endSession}>End</button>
+      <p>Status: {conversation.status}</p>
+      <p>Speaking: {conversation.isSpeaking ? "Yes" : "No"}</p>
+    </div>
+  );
}
```

-
+## `useConversation(options?)`

-
-- **onDisconnect**: handler called when the voice AI connection is closed
-- **onMessage**: handler called when a new message is received
-- **onError**: handler called when an error occurs
+### Callbacks (passed when initializing the hook)

-
+| Callback | Signature | Description |
+|----------|-----------|-------------|
+| `onConnect` | `() => void` | Connection established |
+| `onDisconnect` | `() => void` | Connection closed |
+| `onStatusChange` | `(status: ConversationStatus) => void` | Status change (`"disconnected"` → `"connecting"` → `"connected"`) |
+| `onModeChange` | `(mode: ConversationMode) => void` | Mode change (`"listening"` ⇄ `"speaking"`) |
+| `onMessage` | `(message: ConversationMessage) => void` | Message received (user transcription, agent response) |
+| `onError` | `(error: Error) => void` | Error occurred |

-
+### Hook options

-
-
-
-// Connect to Vox.ai with an agent ID and API key
-connect({
-  agentId: "your-agent-id",
-  apiKey: "your-api-key",
-  dynamicVariables: {
-    // Dynamic variables for customizing the conversation
-    userName: "홍길동",
-    context: "customer-support",
-  },
-  metadata: {
-    // Metadata about the call, passed from the frontend
-    callerId: "customer-123",
-    departmentId: "support",
-  },
-});
-```
+| Option | Type | Description |
+|--------|------|-------------|
+| `textOnly` | `boolean` | Default for text-only sessions. If `true`, connects in chat mode without microphone/audio |

-
+### React State

-
+| State | Type | Description |
+|-------|------|-------------|
+| `status` | `ConversationStatus` | `"disconnected"` \| `"connecting"` \| `"connected"` |
+| `isSpeaking` | `boolean` | Whether the agent is currently speaking |
+| `micMuted` | `boolean` | Microphone mute state |
+| `messages` | `ConversationMessage[]` | Array of messages exchanged in the current session |

-
-disconnect();
-```
+> Corresponds to the JS SDK's `getStatus()`, `getMode()`, and `getMicMuted()`. In React these are exposed as state, so components re-render automatically.

-
+### Methods

-
+#### Session control

```tsx
-//
-
+// Start a session (returns the conversationId)
+const conversationId = await conversation.startSession({
+  agentId: "YOUR_AGENT_ID",
+  apiKey: "YOUR_API_KEY",
+});
+
+// End the session
+await conversation.endSession();

-//
-
+// Get the session ID
+const id = conversation.getId();
+
+// Get the current session's messages
+const messages = conversation.getMessages();
```

-
+#### `startSession` options

-
+| Option | Type | Required | Description |
+|--------|------|----------|-------------|
+| `agentId` | `string` | O | Agent ID |
+| `apiKey` | `string` | O | API key |
+| `agentVersion` | `string` | | Agent version (`"current"`, `"production"`, `"v1"`, etc.; default: `"current"`) |
+| `textOnly` | `boolean` | | Per-session text-only setting that overrides the hook default |
+| `dynamicVariables` | `Record<string, string \| number \| boolean>` | | Dynamic variables injected into the agent prompt |
+| `metadata` | `Record<string, unknown>` | | Call metadata (included in webhooks and call logs) |

-
-// Get agent audio waveform data (default)
-const agentWaveform = audioWaveform({
-  speaker: "agent", // "agent" or "user"
-  barCount: 20, // Number of waveform bars to return
-  updateInterval: 50, // Update interval (ms)
-});
+#### Sending messages

-
-
+```tsx
+// Send a text message (typed input instead of voice)
+await conversation.sendUserMessage("Hello");
```

-
-
-A method that enables/disables the user's microphone.
+#### Message history

```tsx
-
-
+conversation.messages.forEach((message) => {
+  console.log(message.source, message.text, message.isFinal);
+});

-
-toggleMic(false);
+const snapshot = conversation.getMessages();
```

-
+- `messages` is React state, so the component re-renders automatically when messages update
+- `getMessages()` returns a snapshot of the message array at the current point in time

-
+#### Microphone control

```tsx
-//
-
-```
+// Mute
+await conversation.setMicMuted(true);

-
+// Unmute
+await conversation.setMicMuted(false);

-
+// Check the current state via conversation.micMuted
+```

-
+#### Volume control

```tsx
-
-
+// Set the agent voice volume (0.0 ~ 1.0)
+conversation.setVolume({ volume: 0.5 });
```

-
-
-##### messages
-
-React state containing the conversation history.
+#### Audio monitoring

```tsx
-
-
+// Input/output volume (0.0 ~ 1.0)
+const inputVol = conversation.getInputVolume();
+const outputVol = conversation.getOutputVolume();
+
+// Frequency data (Uint8Array, for visualization)
+const inputFreq = conversation.getInputByteFrequencyData();
+const outputFreq = conversation.getOutputByteFrequencyData();
```

-
+#### Device switching

```tsx
-
-
-
-
-
-  isFinal?: boolean;
-  tool?: FunctionToolsExecuted; // Function tools executed by the agent
-};
+// Change the input device
+await conversation.changeInputDevice({ inputDeviceId: "device-id" });
+
+// Change the output device
+await conversation.changeOutputDevice({ outputDeviceId: "device-id" });
```

-
+Enumerate available devices with [`navigator.mediaDevices.enumerateDevices()`](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices).

-
+## Dynamic Variables / Metadata

```tsx
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-    apiKey: "your-api-key",
-  });
-};
-
-const handleSendMessage = () => {
-  send({ message: "Hello, I need help." });
-};
-
-return (
-  <div>
-    <h1>Vox.ai Voice Assistant</h1>
-
-    <div>
-      <button onClick={handleConnect} disabled={isConnected}>
-        Connect
-      </button>
-      <button onClick={disconnect} disabled={!isConnected}>
-        Disconnect
-      </button>
-      <button onClick={handleSendMessage} disabled={!isConnected}>
-        Send Message
-      </button>
-    </div>
-
-    <div>
-      <p>Current status: {state}</p>
-    </div>
-
-    <div>
-      <h2>Conversation</h2>
-      <ul>
-        {messages.map((msg, index) => (
-          <li key={msg.id || index}>
-            <strong>{msg.name}:</strong> {msg.message}
-          </li>
-        ))}
-      </ul>
-    </div>
-  </div>
-);
-}
+const conversationId = await conversation.startSession({
+  agentId: "YOUR_AGENT_ID",
+  apiKey: "YOUR_API_KEY",
+  agentVersion: "production",
+  dynamicVariables: {
+    userName: "홍길동",
+    userType: "premium",
+    accountBalance: 50000,
+  },
+  metadata: {
+    sessionId: "sess_abc123",
+    source: "mobile-app",
+  },
+});
```

-
-
-```tsx
-import React, { useState, useEffect } from "react";
-import { useVoxAI } from "@vox-ai/react";
+- `dynamicVariables`: referenced in the agent prompt using the `{{userName}}` syntax
+- `metadata`: included in outbound webhooks and call logs

-
-const { audioWaveform, state } = useVoxAI();
-const [waveformData, setWaveformData] = useState([]);
+## Text Only

-
-
-
+```tsx
+const conversation = useConversation({
+  textOnly: true,
+});

-
-
-
-
-  }, 50);
+await conversation.startSession({
+  agentId: "YOUR_AGENT_ID",
+  apiKey: "YOUR_API_KEY",
+});

-
-
+await conversation.sendUserMessage("Testing with text only");
+```

-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+- A text-only session does not request microphone permission
+- Agent responses are received over a LiveKit text stream
+- Audio-only APIs become safe no-ops or return zero values
+
+## Exported types
+
+```ts
+import type {
+  ConversationMessage,
+  ConversationMode,
+  ConversationSource,
+  ConversationStatus,
+  InputDeviceConfig,
+  OutputDeviceConfig,
+  SetVolumeParams,
+  StartConversationOptions,
+  UseConversationOptions,
+} from "@vox-ai/react";
```

-##
+## Relationship to the JS SDK
+
+| JS SDK (`@vox-ai/client`) | React SDK (`@vox-ai/react`) |
+|---------------------------|-----------------------------|
+| `Conversation.startSession(opts)` | `conversation.startSession(opts)` |
+| `getMessages()` | `messages` / `getMessages()` |
+| `getStatus()` | `status` (React state) |
+| `getMode()` | `isSpeaking` (React state) |
+| `getMicMuted()` | `micMuted` (React state) |
+| Remaining methods/callbacks | Identical |

-
+## Notes

-
+- `useVoxAI` is deprecated; use `useConversation` instead
+- Authentication passes the `apiKey` directly
+- The underlying connection is LiveKit WebRTC
+- Audio device support may vary by browser
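The README's audio-monitoring section exposes raw `Uint8Array` frequency data "for visualization" but leaves the rendering to the caller. As a minimal sketch of one way to consume it (the helper name `toBarLevels`, the bar count, and the 0..1 scaling are illustrative assumptions, not part of the SDK), the bytes can be bucketed into per-bar levels:

```typescript
// Reduce a byte-frequency snapshot (as returned by getInputByteFrequencyData /
// getOutputByteFrequencyData) to `barCount` levels in the 0..1 range, suitable
// for a simple level-meter or waveform widget.
function toBarLevels(freq: Uint8Array, barCount: number): number[] {
  const bars: number[] = [];
  const bucket = Math.max(1, Math.floor(freq.length / barCount));
  for (let i = 0; i < barCount; i++) {
    const slice = freq.subarray(i * bucket, i * bucket + bucket);
    if (slice.length === 0) {
      bars.push(0);
      continue;
    }
    let sum = 0;
    for (const v of slice) sum += v;
    // Frequency bins are bytes (0..255); normalize to 0..1 like the SDK's
    // getInputVolume / getOutputVolume getters.
    bars.push(sum / slice.length / 255);
  }
  return bars;
}
```

A component would call this on an interval (or `requestAnimationFrame`) while `conversation.status === "connected"`, since the hook exposes the frequency data as a pull-based getter rather than React state.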
package/dist/hooks/index.d.ts
CHANGED
@@ -1 +1 @@
-export * from "./
+export * from "./useConversation";
package/dist/hooks/useConversation.d.ts
ADDED
@@ -0,0 +1,28 @@
+import { type ConversationMessage, type ConversationMode, type ConversationSource, type ConversationStatus, type InputDeviceConfig, type OutputDeviceConfig, type SetVolumeParams, type StartSessionOptions } from "@vox-ai/client";
+type HookCallbacks = Pick<StartSessionOptions, "onConnect" | "onDisconnect" | "onError" | "onMessage" | "onStatusChange" | "onModeChange">;
+export type UseConversationOptions = HookCallbacks & {
+    textOnly?: boolean;
+};
+export type StartConversationOptions = Omit<StartSessionOptions, keyof HookCallbacks>;
+export declare function useConversation(options?: UseConversationOptions): {
+    startSession: (params: StartConversationOptions) => Promise<string>;
+    endSession: () => Promise<void>;
+    getId: () => string | undefined;
+    getMessages: () => ConversationMessage[];
+    setVolume: (volume: {
+        volume: number;
+    }) => void;
+    setMicMuted: (isMuted: boolean) => Promise<void>;
+    sendUserMessage: (text: string) => Promise<void>;
+    changeInputDevice: (config: InputDeviceConfig) => Promise<boolean>;
+    changeOutputDevice: (config: OutputDeviceConfig) => Promise<boolean>;
+    getInputVolume: () => number;
+    getOutputVolume: () => number;
+    getInputByteFrequencyData: () => Uint8Array<ArrayBufferLike> | undefined;
+    getOutputByteFrequencyData: () => Uint8Array<ArrayBufferLike> | undefined;
+    messages: ConversationMessage[];
+    status: ConversationStatus;
+    isSpeaking: boolean;
+    micMuted: boolean;
+};
+export type { ConversationMessage, ConversationMode, ConversationSource, ConversationStatus, InputDeviceConfig, OutputDeviceConfig, SetVolumeParams, };
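The `messages` array in the declarations above arrives already deduplicated and ordered: the bundled implementation keys incoming messages by `id` in a `Map` (so streaming updates to the same message replace earlier partials) and emits them sorted by `timestamp`. A minimal sketch of that accumulation strategy, with a local `Msg` type standing in for `ConversationMessage` reduced to the fields this logic needs:

```typescript
// Stand-in for ConversationMessage; the real type carries more fields
// (source, isFinal, ...), but only id/text/timestamp matter here.
type Msg = { id: string; text: string; timestamp: number };

// Insert-or-replace by id, then return the timestamp-ordered array,
// mirroring the onMessage handler in dist/lib.cjs.
function accumulate(store: Map<string, Msg>, incoming: Msg): Msg[] {
  store.set(incoming.id, incoming); // later partials overwrite earlier ones
  return Array.from(store.values()).sort((a, b) => a.timestamp - b.timestamp);
}
```

This explains why consumers can render `conversation.messages` directly without filtering out interim transcription fragments: each `id` appears once, holding its latest text.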
package/dist/index.d.ts
CHANGED
@@ -1,6 +1,2 @@
-
-
- * React UI component library for vox.ai
- */
-export { useVoxAI } from "./hooks";
-export type { VoxAgentState, VoxMessage, VoxAIOptions, ConnectParams, FunctionToolsExecuted, FunctionCallInfo, FunctionCallResult, } from "./hooks";
+export { useConversation } from "./hooks";
+export type { ConversationMessage, ConversationMode, ConversationSource, ConversationStatus, InputDeviceConfig, OutputDeviceConfig, SetVolumeParams, StartConversationOptions, UseConversationOptions, } from "./hooks";
package/dist/lib.cjs
CHANGED
@@ -1,2 +1,2 @@
-var e=require("
+var e=require("@vox-ai/client"),t=require("react");exports.useConversation=function(n){void 0===n&&(n={});const r=t.useRef(null),u=t.useRef(new Map),[s,c]=t.useState("disconnected"),[o,a]=t.useState(!1),[l,i]=t.useState(!1),[g,v]=t.useState([]),d=t.useCallback(function(t){try{function s(){var s;return u.current=new Map,v([]),Promise.resolve(e.Conversation.startSession({...t,textOnly:null!=(s=t.textOnly)?s:n.textOnly,onConnect:()=>{c("connected"),null==n.onConnect||n.onConnect()},onDisconnect:()=>{c("disconnected"),a(!1),null==n.onDisconnect||n.onDisconnect()},onError:e=>{null==n.onError||n.onError(e)},onMessage:e=>{u.current.set(e.id,e),v(Array.from(u.current.values()).sort((e,t)=>e.timestamp-t.timestamp)),null==n.onMessage||n.onMessage(e)},onStatusChange:e=>{c(e),null==n.onStatusChange||n.onStatusChange(e)},onModeChange:e=>{a("speaking"===e),null==n.onModeChange||n.onModeChange(e)}})).then(function(e){var t;return r.current=e,c(e.getStatus()),i(e.getMicMuted()),a("speaking"===e.getMode()),null!=(t=e.getId())?t:""})}const o=function(){if(r.current)return Promise.resolve(r.current.endSession()).then(function(){r.current=null})}();return Promise.resolve(o&&o.then?o.then(s):s())}catch(l){return Promise.reject(l)}},[n]),m=t.useCallback(function(){try{return r.current?Promise.resolve(r.current.endSession()).then(function(){r.current=null,c("disconnected"),a(!1)}):Promise.resolve()}catch(e){return Promise.reject(e)}},[]),C=t.useCallback(()=>{var e;return null==(e=r.current)?void 0:e.getId()},[]),h=t.useCallback(()=>{var e,t;return null!=(e=null==(t=r.current)?void 0:t.getMessages())?e:g},[g]),M=t.useCallback(e=>{var t;null==(t=r.current)||t.setVolume(e)},[]),p=t.useCallback(function(e){try{return r.current?Promise.resolve(r.current.setMicMuted(e)).then(function(){i(r.current.getMicMuted())}):Promise.resolve()}catch(e){return Promise.reject(e)}},[]),f=t.useCallback(function(e){try{return r.current?Promise.resolve(r.current.sendUserMessage(e)).then(function(){}):Promise.resolve()}catch(e){return Promise.reject(e)}},[]),y=t.useCallback(function(e){try{return Promise.resolve(!!r.current&&r.current.changeInputDevice(e))}catch(e){return Promise.reject(e)}},[]),P=t.useCallback(function(e){try{return Promise.resolve(!!r.current&&r.current.changeOutputDevice(e))}catch(e){return Promise.reject(e)}},[]),k=t.useCallback(()=>{var e,t;return null!=(e=null==(t=r.current)?void 0:t.getInputVolume())?e:0},[]),S=t.useCallback(()=>{var e,t;return null!=(e=null==(t=r.current)?void 0:t.getOutputVolume())?e:0},[]),b=t.useCallback(()=>{var e;return null==(e=r.current)?void 0:e.getInputByteFrequencyData()},[]),D=t.useCallback(()=>{var e;return null==(e=r.current)?void 0:e.getOutputByteFrequencyData()},[]);return t.useMemo(()=>({startSession:d,endSession:m,getId:C,getMessages:h,setVolume:M,setMicMuted:p,sendUserMessage:f,changeInputDevice:y,changeOutputDevice:P,getInputVolume:k,getOutputVolume:S,getInputByteFrequencyData:b,getOutputByteFrequencyData:D,messages:g,status:s,isSpeaking:o,micMuted:l}),[d,m,C,h,M,p,f,y,P,k,S,b,D,g,s,o,l])};
//# sourceMappingURL=lib.cjs.map
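One detail worth noting in the bundled hook: `textOnly` is resolved per session as `null!=(s=t.textOnly)?s:n.textOnly`, i.e. the `startSession` value wins whenever it is non-nullish (including an explicit `false`), otherwise the hook-level default applies. A minimal sketch of that precedence rule, with the option shapes reduced to the one field involved:

```typescript
// Session-level textOnly overrides the hook default when present; `false`
// counts as present, so a voice session can opt out of a text-only default.
function resolveTextOnly(
  session: { textOnly?: boolean },
  hook: { textOnly?: boolean },
): boolean | undefined {
  return session.textOnly ?? hook.textOnly;
}
```

This is why the README can describe the hook's `textOnly` as a "default" and the `startSession` option as a per-session override.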