voxflow 1.5.2 → 1.5.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +184 -83
  2. package/dist/index.js +1 -1
  3. package/package.json +2 -2
package/README.md CHANGED
@@ -1,4 +1,4 @@
- # ai-tts
+ # voxflow

  AI audio content creation CLI — stories, podcasts, narration, dubbing, transcription, translation, and TTS synthesis.

@@ -6,28 +6,31 @@ AI audio content creation CLI — stories, podcasts, narration, dubbing, transcr
  ```bash
  # Synthesize a single sentence
- npx ai-tts say "你好世界"
+ npx voxflow say "你好世界"

  # Output as MP3 (smaller file size)
- npx ai-tts say "你好世界" --format mp3
+ npx voxflow say "你好世界" --format mp3

  # Generate a story with TTS narration
- npx ai-tts story --topic "三只小猪"
+ npx voxflow story --topic "三只小猪"

  # Dub a video from SRT subtitles
- npx ai-tts dub --srt subtitles.srt --video input.mp4 --output dubbed.mp4
+ npx voxflow dub --srt subtitles.srt --video input.mp4 --output dubbed.mp4

  # Transcribe audio to subtitles (SRT)
- npx ai-tts asr --input recording.mp3
+ npx voxflow asr --input recording.mp3

  # Translate SRT subtitles to another language
- npx ai-tts translate --srt subtitles.srt --to en
+ npx voxflow translate --srt subtitles.srt --to en

  # End-to-end video translation (ASR → translate → dub → merge)
- npx ai-tts video-translate --input video.mp4 --to en
+ npx voxflow video-translate --input video.mp4 --to en
+
+ # One-command build + local delivery (for Skill/agent orchestration)
+ npx voxflow publish --video input.mp4 --audio narration.wav --publish local

  # Browse available voices
- npx ai-tts voices --search "温柔"
+ npx voxflow voices --search "温柔"
  ```

  A browser window will open for login on first use. After that, your token is cached automatically.
@@ -35,20 +38,20 @@ A browser window will open for login on first use. After that, your token is cac
  ## Install

  ```bash
- npm install -g ai-tts
+ npm install -g voxflow
  ```

  ## Commands

- ### `ai-tts say <text>` / `ai-tts synthesize <text>`
+ ### `voxflow say <text>` / `voxflow synthesize <text>`

  Synthesize a single text snippet to audio.

  ```bash
- ai-tts say "你好世界"
- ai-tts say "你好世界" --format mp3
- ai-tts synthesize "Welcome" --voice v-male-Bk7vD3xP --format mp3
- ai-tts say "快速测试" --speed 1.5 --volume 0.8 --pitch 2
+ voxflow say "你好世界"
+ voxflow say "你好世界" --format mp3
+ voxflow synthesize "Welcome" --voice v-male-Bk7vD3xP --format mp3
+ voxflow say "快速测试" --speed 1.5 --volume 0.8 --pitch 2
  ```

  | Flag | Default | Description |
@@ -61,17 +64,17 @@ ai-tts say "快速测试" --speed 1.5 --volume 0.8 --pitch 2
  | `--pitch <n>` | `0` | Pitch -12 to 12 |
  | `--output <path>` | `./tts-<timestamp>.wav` | Output file path |

- ### `ai-tts narrate [options]`
+ ### `voxflow narrate [options]`

  Narrate a document, text, or script to multi-segment audio.

  ```bash
- ai-tts narrate --input article.txt
- ai-tts narrate --input article.txt --format mp3
- ai-tts narrate --input readme.md --voice v-male-Bk7vD3xP
- ai-tts narrate --text "第一段。第二段。第三段。"
- ai-tts narrate --script narration-script.json
- echo "Hello world" | ai-tts narrate
+ voxflow narrate --input article.txt
+ voxflow narrate --input article.txt --format mp3
+ voxflow narrate --input readme.md --voice v-male-Bk7vD3xP
+ voxflow narrate --text "第一段。第二段。第三段。"
+ voxflow narrate --script narration-script.json
+ echo "Hello world" | voxflow narrate
  ```

  | Flag | Default | Description |
@@ -98,15 +101,15 @@ echo "Hello world" | ai-tts narrate
  }
  ```

- ### `ai-tts voices [options]`
+ ### `voxflow voices [options]`

  Browse and filter available TTS voices (no login required).

  ```bash
- ai-tts voices
- ai-tts voices --search "温柔" --gender female
- ai-tts voices --language en --extended
- ai-tts voices --json
+ voxflow voices
+ voxflow voices --search "温柔" --gender female
+ voxflow voices --language en --extended
+ voxflow voices --json
  ```

  | Flag | Default | Description |
@@ -117,13 +120,13 @@ ai-tts voices --json
  | `--extended` | `false` | Include extended voice library (380+) |
  | `--json` | `false` | Output raw JSON |

- ### `ai-tts story [options]`
+ ### `voxflow story [options]`

  Generate a story with AI and synthesize TTS audio.

  ```bash
- ai-tts story --topic "小红帽的故事"
- ai-tts story --topic "太空探险" --paragraphs 8 --speed 0.8
+ voxflow story --topic "小红帽的故事"
+ voxflow story --topic "太空探险" --paragraphs 8 --speed 0.8
  ```

  | Flag | Default | Description |
@@ -135,44 +138,75 @@ ai-tts story --topic "太空探险" --paragraphs 8 --speed 0.8
  | `--speed <n>` | `1.0` | Speed (0.5-2.0) |
  | `--silence <sec>` | `0.8` | Silence between paragraphs (0-5.0) |

- ### `ai-tts podcast [options]`
+ ### `voxflow podcast [options]`

- Generate a multi-speaker podcast dialogue.
+ Generate a multi-speaker podcast dialogue with AI script generation and multi-voice TTS.

  ```bash
- ai-tts podcast --topic "AI趋势" --exchanges 10
- ai-tts podcast --topic "科技新闻" --style casual --length long
+ # Quick start AI generates script + synthesizes audio
+ voxflow podcast --topic "AI in healthcare"
+
+ # Use a template with colloquial control
+ voxflow podcast --topic "tech news" --template news --colloquial high --speakers 3
+
+ # English podcast
+ voxflow podcast --topic "AI ethics debate" --language en --template discussion
+
+ # Generate script only (no TTS), export as JSON
+ voxflow podcast --topic "量子计算入门" --format json --no-tts
+
+ # Synthesize from a previously exported .podcast.json
+ voxflow podcast --input my-podcast.podcast.json --output final.wav
+
+ # Legacy engine (lower quota cost)
+ voxflow podcast --topic "AI趋势" --engine legacy --exchanges 10
  ```

  | Flag | Default | Description |
  |------|---------|-------------|
- | `--topic <text>` | Tech trends | Podcast topic |
- | `--style <style>` | `professional` | Dialogue style |
- | `--length <len>` | `medium` | short / medium / long |
- | `--exchanges <n>` | `8` | Number of exchanges (2-30) |
- | `--output <path>` | `./podcast-<timestamp>.wav` | Output WAV file |
- | `--speed <n>` | `1.0` | Speed (0.5-2.0) |
+ | `--topic <text>` | tech trends | Podcast topic or prompt |
+ | `--engine <type>` | `auto` (→ ai-sdk) | `auto`, `legacy`, or `ai-sdk` |
+ | `--template <name>` | `interview` | `interview`, `discussion`, `news`, `story`, `tutorial` |
+ | `--colloquial <lvl>` | `medium` | Conversational tone: `low`, `medium`, `high` |
+ | `--speakers <n>` | `2` | Speaker count: 1, 2, or 3 |
+ | `--language <code>` | `zh-CN` | `zh-CN`, `en`, `ja` |
+ | `--format json` | — | Also output `.podcast.json` alongside audio |
+ | `--input <file>` | — | Load `.podcast.json` for synthesis (skip LLM) |
+ | `--no-tts` | `false` | Generate script only, skip TTS synthesis |
+ | `--length <len>` | `medium` | `short`, `medium`, `long` |
+ | `--exchanges <n>` | `8` | Number of exchanges, 2-30 (legacy engine) |
+ | `--style <style>` | — | Legacy: dialogue style (maps to `--template`) |
+ | `--voice <id>` | — | Override TTS voice for all speakers |
+ | `--bgm <file>` | — | Background music file to mix in |
+ | `--ducking <n>` | `0.2` | BGM volume ducking (0-1.0) |
+ | `--output <path>` | `./podcast-<ts>.wav` | Output file path |
+ | `--speed <n>` | `1.0` | TTS speed (0.5-2.0) |
  | `--silence <sec>` | `0.5` | Silence between segments (0-5.0) |

- ### `ai-tts dub [options]`
+ **Two-step workflow** (recommended for editing):
+ 1. `voxflow podcast --topic "..." --format json --no-tts` → generates `.podcast.json`
+ 2. Edit the JSON (speakers, dialogue, voice mapping)
+ 3. `voxflow podcast --input edited.podcast.json` → synthesizes audio
+
+ ### `voxflow dub [options]`

  Dub audio from SRT subtitles with timeline-precise TTS synthesis. Supports multi-speaker voice mapping, dynamic speed compensation, video merge, and background music mixing.

  ```bash
  # Basic: generate dubbed audio from SRT
- ai-tts dub --srt subtitles.srt
+ voxflow dub --srt subtitles.srt

  # Dub and merge into video
- ai-tts dub --srt subtitles.srt --video input.mp4 --output dubbed.mp4
+ voxflow dub --srt subtitles.srt --video input.mp4 --output dubbed.mp4

  # Multi-speaker with voice mapping
- ai-tts dub --srt subtitles.srt --voices speakers.json --speed-auto
+ voxflow dub --srt subtitles.srt --voices speakers.json --speed-auto

  # Add background music with ducking
- ai-tts dub --srt subtitles.srt --bgm music.mp3 --ducking 0.3
+ voxflow dub --srt subtitles.srt --bgm music.mp3 --ducking 0.3

  # Patch a single caption without full rebuild
- ai-tts dub --srt subtitles.srt --patch 5 --output dub-existing.wav
+ voxflow dub --srt subtitles.srt --patch 5 --output dub-existing.wav
  ```

  | Flag | Default | Description |
@@ -184,7 +218,7 @@ ai-tts dub --srt subtitles.srt --patch 5 --output dub-existing.wav
  | `--speed <n>` | `1.0` | TTS speed 0.5-2.0 |
  | `--speed-auto` | `false` | Auto-adjust speed when audio overflows timeslot |
  | `--bgm <file>` | | Background music file to mix in |
- | `--ducking <n>` | `0.5` | BGM volume ducking 0-1.0 (lower = quieter BGM) |
+ | `--ducking <n>` | `0.2` | BGM volume ducking 0-1.0 (lower = quieter BGM) |
  | `--patch <id>` | | Re-synthesize a single caption by ID (patch mode) |
  | `--output <path>` | `./dub-<timestamp>.wav` | Output file path (.wav or .mp4 with --video) |

@@ -213,31 +247,31 @@ Thanks for having me.

  > Requires `ffmpeg` in PATH for `--video`, `--bgm`, and `--speed-auto` features.

- ### `ai-tts asr [options]` / `ai-tts transcribe [options]`
+ ### `voxflow asr [options]` / `voxflow transcribe [options]`

  Transcribe audio or video files to text. Supports cloud ASR (Tencent Cloud, 3 modes) and local Whisper (offline, no quota).

  ```bash
  # Transcribe with auto engine detection (local Whisper if available, else cloud)
- ai-tts asr --input recording.mp3
+ voxflow asr --input recording.mp3

  # Force local Whisper (no login needed, no quota used)
- ai-tts asr --input recording.mp3 --engine local
+ voxflow asr --input recording.mp3 --engine local

  # Use a larger Whisper model for better accuracy
- ai-tts asr --input meeting.wav --engine local --model small
+ voxflow asr --input meeting.wav --engine local --model small

  # Cloud ASR with speaker diarization
- ai-tts asr --input meeting.wav --engine cloud --speakers --speaker-number 3
+ voxflow asr --input meeting.wav --engine cloud --speakers --speaker-number 3

  # Transcribe video file, output plain text
- ai-tts asr --input video.mp4 --format txt
+ voxflow asr --input video.mp4 --format txt

  # Remote URL (cloud only)
- ai-tts asr --url https://example.com/audio.wav --mode flash
+ voxflow asr --url https://example.com/audio.wav --mode flash

  # Record from microphone (cloud only)
- ai-tts asr --mic --format txt
+ voxflow asr --mic --format txt
  ```

  | Flag | Default | Description |
@@ -269,25 +303,25 @@ npm install -g nodejs-whisper

  > Requires `ffmpeg` in PATH for audio extraction from video files.

- ### `ai-tts translate [options]`
+ ### `voxflow translate [options]`

  Translate SRT subtitles, plain text, or text files using LLM-powered batch translation.

  ```bash
  # Translate SRT file (Chinese → English)
- ai-tts translate --srt subtitles.srt --to en
+ voxflow translate --srt subtitles.srt --to en

  # Translate with timing realignment for target language
- ai-tts translate --srt subtitles.srt --to en --realign
+ voxflow translate --srt subtitles.srt --to en --realign

  # Translate a text file
- ai-tts translate --input article.txt --to ja --output article-ja.txt
+ voxflow translate --input article.txt --to ja --output article-ja.txt

  # Translate inline text
- ai-tts translate --text "你好世界" --to en
+ voxflow translate --text "你好世界" --to en

  # Auto-detect source language
- ai-tts translate --srt movie.srt --to ko
+ voxflow translate --srt movie.srt --to ko
  ```

  | Flag | Default | Description |
@@ -305,22 +339,22 @@ ai-tts translate --srt movie.srt --to ko

  **Cost**: 1 quota per batch (~10 captions). A 100-caption SRT costs ~10 quota.

- ### `ai-tts video-translate [options]`
+ ### `voxflow video-translate [options]`

  End-to-end video translation: extracts audio, transcribes, translates subtitles, dubs with TTS, and merges back into video.

  ```bash
  # Translate Chinese video to English
- ai-tts video-translate --input video.mp4 --to en
+ voxflow video-translate --input video.mp4 --to en

  # Specify source language
- ai-tts video-translate --input video.mp4 --from zh --to ja
+ voxflow video-translate --input video.mp4 --from zh --to ja

  # Keep intermediate files (SRT, audio) for debugging
- ai-tts video-translate --input video.mp4 --to en --keep-intermediates
+ voxflow video-translate --input video.mp4 --to en --keep-intermediates

  # Custom voice and speed
- ai-tts video-translate --input video.mp4 --to en --voice v-male-Bk7vD3xP --speed 0.9
+ voxflow video-translate --input video.mp4 --to en --voice v-male-Bk7vD3xP --speed 0.9
  ```

  | Flag | Default | Description |
@@ -344,39 +378,84 @@ ai-tts video-translate --input video.mp4 --to en --voice v-male-Bk7vD3xP --speed

  > Requires `ffmpeg` in PATH.

- ### `ai-tts login` / `logout` / `status` / `dashboard`
+ ### `voxflow publish [options]`
+
+ Single command for final deliverables. Designed for **agent skills and automation orchestration**:
+ - Build final MP4 (translate+dub / dub / merge)
+ - Deliver to local directory or via webhook
+ - Return structured JSON output for downstream processing
+
+ > **Note**: `--platform` is a metadata tag only — it does NOT upload to any platform. Use `--publish webhook` to integrate with your own distribution service.
+
+ ```bash
+ # Mode A: video-translate + local delivery
+ voxflow publish --input video.mp4 --to en --publish local
+
+ # Mode B: dub existing subtitles into video
+ voxflow publish --srt subtitles.srt --video input.mp4 --publish local
+
+ # Mode C: merge existing audio into video
+ voxflow publish --video input.mp4 --audio narration.mp3 --publish local
+
+ # Deliver via webhook (e.g. custom distribution service)
+ voxflow publish --input video.mp4 --to ja \
+ --publish webhook \
+ --publish-webhook https://publisher.example.com/hook \
+ --json
+ ```
+
+ | Flag | Default | Description |
+ |------|---------|-------------|
+ | `--input <video>` | | Mode A: source video for translate+dub (requires `--to`) |
+ | `--to <lang>` | | Target language for Mode A |
+ | `--from <lang>` | auto | Source language for Mode A |
+ | `--srt <file>` | | Mode B: SRT subtitle file (requires `--video`) |
+ | `--video <file>` | | Mode B/Mode C video file |
+ | `--audio <file>` | | Mode C: external narration audio |
+ | `--voice <id>` | `v-female-R2s4N9qJ` | TTS voice for Mode A/B |
+ | `--voices <file>` | | Multi-speaker voice mapping JSON |
+ | `--output <path>` | auto | Final MP4 output path |
+ | `--publish <target>` | `local` | `local` \| `webhook` \| `none` |
+ | `--publish-dir <dir>` | `./published` | Local publish directory |
+ | `--publish-webhook <url>` | | Webhook URL for distribution service |
+ | `--platform <name>` | `generic` | Platform metadata tag (not an actual upload target) |
+ | `--title <text>` | filename | Title metadata |
+ | `--json` | `false` | Print machine-readable JSON result |
+
+ ### `voxflow login` / `logout` / `status` / `dashboard`

  ```bash
- ai-tts login # Open browser to login via email OTP
- ai-tts logout # Clear cached token
- ai-tts status # Show login status and token info
- ai-tts dashboard # Open Web dashboard in browser
+ voxflow login # Open browser to login via email OTP
+ voxflow logout # Clear cached token
+ voxflow status # Show login status and token info
+ voxflow dashboard # Open Web dashboard in browser
  ```

  ## Authentication

- AI-TTS uses browser-based email OTP login (Supabase):
+ voxflow uses browser-based email OTP login (Supabase):

  1. CLI starts a temporary local HTTP server
  2. Opens your browser to the login page
  3. You enter your email and verification code
  4. Browser redirects back to the CLI with your token
- 5. Token is cached at `~/.config/ai-tts/token.json`
+ 5. Token is cached at `~/.config/voxflow/token.json`

  ## Quota

- - Free tier: 100 quota per day
- - `say`/`synthesize`: 1 quota per call
- - `narrate`: 1 quota per segment
- - `story`: ~6-8 quota (1 LLM + N TTS)
- - `podcast`: ~10-20 quota
- - `dub`: 1 quota per SRT caption
- - `asr` (cloud): 1 quota per recognition
+ - Free tier: 10,000 quota per month (1 basic TTS = 100 quota)
+ - `say`/`synthesize`: 100 quota per call
+ - `narrate`: 100 quota per segment
+ - `story`: ~600-800 quota (1 LLM + N TTS)
+ - `podcast` (ai-sdk): ~5,000-10,000 quota (script) + 100/segment (TTS)
+ - `podcast` (legacy): ~200 quota (script) + 100/segment (TTS)
+ - `dub`: 100 quota per SRT caption
+ - `asr` (cloud): 100 quota per recognition
  - `asr` (local): free (no quota)
- - `translate`: 1 quota per batch (~10 captions)
- - `video-translate`: ~3-N quota (ASR + translate + TTS)
+ - `translate`: 100 quota per batch (~10 captions)
+ - `video-translate`: ~300-N quota (ASR + translate + TTS)
  - `voices`: free (no quota)
- - Quota resets daily
+ - Quota resets monthly

  ## Requirements

@@ -412,6 +491,28 @@ Optional dependencies:
  - `nodejs-whisper` — for local Whisper ASR without cloud API (`npm install -g nodejs-whisper`)
  - `sox` — for microphone recording (`asr --mic`)

+ ## Claude Code / AI Agent Integration
+
+ The `voxflow` CLI is designed to be called by AI agents (Claude Code, Cursor, etc.) as the unified execution layer. No API keys or Python scripts needed — all auth goes through `voxflow login` (JWT).
+
+ **Skill documentation**: See [`cli/skills/podcast/SKILL.md`](skills/podcast/SKILL.md) for the full podcast skill reference.
+
+ **Typical agent workflow**:
+ ```bash
+ # 1. Login (one-time)
+ voxflow login
+
+ # 2. Generate script only
+ voxflow podcast --topic "Your topic" --format json --no-tts
+
+ # 3. Agent edits the .podcast.json as needed
+
+ # 4. Synthesize from edited script
+ voxflow podcast --input edited.podcast.json --output final.wav
+ ```
+
+ **CI/non-interactive environments**: Set `VOXFLOW_TOKEN` env var to skip browser login.
+
  ## License

  UNLICENSED - All rights reserved.
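The quota overhaul in this release (per-unit costs scaled by 100×, reset moved from daily to monthly) changes cost planning for batch jobs. A minimal sketch of the arithmetic, assuming the per-unit figures listed in the README's Quota section; the helper names are illustrative, not part of the voxflow CLI:

```javascript
// Estimate voxflow 1.5.3 quota costs for an SRT file.
// Rates come from the README's Quota list: dub = 100 quota per caption,
// translate = 100 quota per batch of ~10 captions.
function countSrtCaptions(srt) {
  // Each SRT caption block contains one "HH:MM:SS,mmm --> HH:MM:SS,mmm" timing line.
  return srt.split(/\r?\n/).filter((line) => line.includes("-->")).length;
}

function estimateQuota(srt) {
  const captions = countSrtCaptions(srt);
  return {
    dub: 100 * captions,                       // 100 quota per SRT caption
    translate: 100 * Math.ceil(captions / 10), // 100 quota per ~10-caption batch
  };
}

const sample = [
  "1",
  "00:00:00,000 --> 00:00:02,000",
  "Hello",
  "",
  "2",
  "00:00:02,000 --> 00:00:04,000",
  "World",
  "",
].join("\n");

console.log(estimateQuota(sample)); // { dub: 200, translate: 100 }
```

Under the same assumptions, a 100-caption SRT now costs 10,000 quota to dub — the entire monthly free tier — where the 1.5.2 README priced it at 100 quota against a daily allowance.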
package/dist/index.js CHANGED
@@ -1,2 +1,2 @@
  #!/usr/bin/env node
- (()=>{var e={6:(e,t,o)=>{const{getToken:n,clearToken:s,getTokenInfo:r}=o(986);const{story:a,ApiError:i}=o(214);const{podcast:c,ApiError:l}=o(35);const{synthesize:u}=o(383);const{narrate:p}=o(80);const{voices:d}=o(784);const{dub:f}=o(944);const{asr:g,ASR_DEFAULTS:m}=o(929);const{translate:h}=o(585);const{videoTranslate:w}=o(863);const{warnIfMissingFfmpeg:x}=o(297);const{API_BASE:v,WEB_BASE:S,STORY_DEFAULTS:$,PODCAST_DEFAULTS:y,SYNTHESIZE_DEFAULTS:b,NARRATE_DEFAULTS:T,DUB_DEFAULTS:k,ASR_DEFAULTS:F,TRANSLATE_DEFAULTS:E,VIDEO_TRANSLATE_DEFAULTS:_,getConfigDir:A}=o(782);const I=o(330);const M=i;async function run(e){const t=e||process.argv.slice(2);const o=t[0];if(!o||o==="--help"||o==="-h"){printHelp();return}if(o==="--version"||o==="-v"){console.log(I.version);return}if(t.includes("--help")||t.includes("-h")){printHelp();return}switch(o){case"login":return handleLogin(t.slice(1));case"logout":return handleLogout();case"status":return handleStatus();case"story":case"generate":return handleStory(t.slice(1));case"podcast":return handlePodcast(t.slice(1));case"synthesize":case"say":return handleSynthesize(t.slice(1));case"narrate":return handleNarrate(t.slice(1));case"voices":return handleVoices(t.slice(1));case"dub":return handleDub(t.slice(1));case"asr":case"transcribe":return handleAsr(t.slice(1));case"translate":return handleTranslate(t.slice(1));case"video-translate":return handleVideoTranslate(t.slice(1));case"dashboard":return handleDashboard();default:console.error(`Unknown command: ${o}\nRun voxflow --help for usage.`);process.exit(1)}}async function handleLogin(e){const t=parseFlag(e,"--api")||v;console.log("Logging in...");const o=await n({api:t,force:true});const s=r();if(s){console.log(`\nLogged in as ${s.email}`);console.log(`Token expires: ${s.expiresAt}`);console.log(`API: ${s.api}`)}}function handleLogout(){s();console.log("Logged out. Token cache cleared.")}function handleStatus(){const e=r();if(!e){console.log("Not logged in. 
Run: voxflow login");return}console.log(`Email: ${e.email}`);console.log(`API: ${e.api}`);console.log(`Expires: ${e.expiresAt}`);console.log(`Valid: ${e.valid?"yes":"expired"}`);if(!e.valid){console.log("\nToken expired. Run: voxflow login")}console.log(`\nDashboard: ${S}/app/`);console.log("Run voxflow dashboard to open in browser.")}async function handleStory(e){const t=parseFlag(e,"--api")||v;const o=parseFlag(e,"--token");let s;if(o){s=o}else{s=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseIntFlag(e,"--paragraphs");const c=parseFloatFlag(e,"--speed");const l=parseFloatFlag(e,"--silence");const u=parseFlag(e,"--output");if(i!==undefined){if(isNaN(i)||i<1||i>20){console.error(`Error: --paragraphs must be an integer between 1 and 20 (got: "${parseFlag(e,"--paragraphs")}")`);process.exit(1)}}validateSpeed(e,c);validateSilence(e,l);validateOutput(u);const p={token:s,api:t,topic:parseFlag(e,"--topic"),voice:parseFlag(e,"--voice"),output:u,paragraphs:i,speed:c,silence:l};await runWithRetry(a,p,t,o)}async function handlePodcast(e){const t=parseFlag(e,"--api")||v;const o=parseFlag(e,"--token");let s;if(o){s=o}else{s=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const a=parseIntFlag(e,"--exchanges");const i=parseFloatFlag(e,"--speed");const l=parseFloatFlag(e,"--silence");const u=parseFlag(e,"--output");if(a!==undefined){if(isNaN(a)||a<2||a>30){console.error(`Error: --exchanges must be an integer between 2 and 30 (got: "${parseFlag(e,"--exchanges")}")`);process.exit(1)}}validateSpeed(e,i);validateSilence(e,l);validateOutput(u);const p=parseFlag(e,"--length");if(p&&!["short","medium","long"].includes(p)){console.error(`Error: --length must be one of: short, medium, long (got: "${p}")`);process.exit(1)}const d={token:s,api:t,topic:parseFlag(e,"--topic"),style:parseFlag(e,"--style"),length:p,exchanges:a,output:u,speed:i,silence:l};await runWithRetry(c,d,t,o)}async function handleSynthesize(e){const 
t=parseFlag(e,"--api")||v;const o=parseFlag(e,"--token");let s;if(o){s=o}else{s=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}let a=parseFlag(e,"--text");if(!a){const t=new Set(["--text","--voice","--speed","--volume","--pitch","--output","--token","--api","--format"]);for(let o=0;o<e.length;o++){if(e[o].startsWith("--")){if(t.has(e[o]))o++;continue}a=e[o];break}}if(!a){console.error('Error: No text provided. Usage: voxflow synthesize "your text here"');process.exit(1)}const i=parseFloatFlag(e,"--speed");const c=parseFloatFlag(e,"--volume");const l=parseFloatFlag(e,"--pitch");const p=parseFlag(e,"--output");const d=parseFlag(e,"--format");validateSpeed(e,i);validateOutput(p,d);validateFormat(d);if(c!==undefined){if(isNaN(c)||c<.1||c>2){console.error(`Error: --volume must be between 0.1 and 2.0 (got: "${parseFlag(e,"--volume")}")`);process.exit(1)}}if(l!==undefined){if(isNaN(l)||l<-12||l>12){console.error(`Error: --pitch must be between -12 and 12 (got: "${parseFlag(e,"--pitch")}")`);process.exit(1)}}const f={token:s,api:t,text:a,voice:parseFlag(e,"--voice"),output:p,speed:i,volume:c,pitch:l,format:d||undefined};await runWithRetry(u,f,t,o)}async function handleNarrate(e){const t=parseFlag(e,"--api")||v;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--input");const c=parseFlag(e,"--text");const l=parseFlag(e,"--script");const u=parseFloatFlag(e,"--speed");const d=parseFloatFlag(e,"--silence");const f=parseFlag(e,"--output");const g=parseFlag(e,"--format");validateSpeed(e,u);validateSilence(e,d);validateOutput(f,g);validateFormat(g);if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}if(l){const e=o(896);const t=o(928);const n=t.resolve(l);if(!e.existsSync(n)){console.error(`Error: Script file not found: ${n}`);process.exit(1)}}const 
m={token:a,api:t,input:i,text:c,script:l,voice:parseFlag(e,"--voice"),output:f,speed:u,silence:d,format:g||undefined};await runWithRetry(p,m,t,s)}async function handleVoices(e){const t=parseFlag(e,"--api")||v;const o={api:t,search:parseFlag(e,"--search"),gender:parseFlag(e,"--gender"),language:parseFlag(e,"--language"),json:parseBoolFlag(e,"--json"),extended:parseBoolFlag(e,"--extended")};await d(o)}async function handleDub(e){await x(A(),"dub");const t=parseFlag(e,"--api")||v;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--srt");const c=parseFlag(e,"--video");const l=parseFlag(e,"--output");const u=parseFloatFlag(e,"--speed");const p=parseFloatFlag(e,"--ducking");const d=parseIntFlag(e,"--patch");if(!i&&!parseBoolFlag(e,"--help")){console.error("Error: --srt <file> is required. Usage: voxflow dub --srt <file.srt>");process.exit(1)}if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: SRT file not found: ${n}`);process.exit(1)}}if(c){const e=o(896);const t=o(928);const n=t.resolve(c);if(!e.existsSync(n)){console.error(`Error: Video file not found: ${n}`);process.exit(1)}}const g=parseFlag(e,"--voices");if(g){const e=o(896);const t=o(928);const n=t.resolve(g);if(!e.existsSync(n)){console.error(`Error: Voices map file not found: ${n}`);process.exit(1)}}const m=parseFlag(e,"--bgm");if(m){const e=o(896);const t=o(928);const n=t.resolve(m);if(!e.existsSync(n)){console.error(`Error: BGM file not found: ${n}`);process.exit(1)}}validateSpeed(e,u);if(l){const e=c?[".mp4",".mkv",".mov"]:[".wav",".mp3"];const t=e.some((e=>l.toLowerCase().endsWith(e)));if(!t){const t=e.join(", ");console.error(`Error: --output path must end with ${t}`);process.exit(1)}}if(p!==undefined){if(isNaN(p)||p<0||p>1){console.error(`Error: --ducking must be between 0 and 1.0 (got: "${parseFlag(e,"--ducking")}")`);process.exit(1)}}const 
h={token:a,api:t,srt:i,video:c,output:l,speed:u,patch:d,voice:parseFlag(e,"--voice"),voicesMap:g,speedAuto:parseBoolFlag(e,"--speed-auto"),bgm:m,ducking:p};await runWithRetry(f,h,t,s)}async function handleAsr(e){await x(A(),"asr");const t=parseFlag(e,"--api")||v;const s=parseFlag(e,"--token");const a=parseFlag(e,"--engine")||F.engine;const i=parseFlag(e,"--model")||F.model;if(a&&!["auto","local","cloud","whisper","tencent"].includes(a)){console.error(`Error: --engine must be one of: auto, local, cloud (got: "${a}")`);process.exit(1)}if(i&&!["tiny","base","small","medium","large"].includes(i)){console.error(`Error: --model must be one of: tiny, base, small, medium, large (got: "${i}")`);process.exit(1)}const c=a==="local"||a==="whisper";let l;if(c){l=null}else if(s){l=s}else{l=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const u=parseFlag(e,"--input");const p=parseFlag(e,"--url");const d=parseBoolFlag(e,"--mic");const f=parseFlag(e,"--mode")||m.mode;const h=parseFlag(e,"--lang")||parseFlag(e,"--language")||m.lang;const w=parseFlag(e,"--format")||m.format;const S=parseFlag(e,"--output");const $=parseBoolFlag(e,"--speakers");const y=parseIntFlag(e,"--speaker-number");const b=parseIntFlag(e,"--task-id");if(f&&!["auto","sentence","flash","file"].includes(f)){console.error(`Error: --mode must be one of: auto, sentence, flash, file (got: "${f}")`);process.exit(1)}if(w&&!["srt","txt","json"].includes(w)){console.error(`Error: --format must be one of: srt, txt, json (got: "${w}")`);process.exit(1)}if(u){const e=o(896);const t=o(928);const n=t.resolve(u);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}const T={token:l,api:t,input:u,url:p,mic:d,mode:f,lang:h,format:w,output:S,speakers:$,speakerNumber:y,taskId:b,engine:a,model:i};if(c){await g(T)}else{await runWithRetry(g,T,t,s)}}async function handleTranslate(e){const t=parseFlag(e,"--api")||v;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await 
n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--srt");const c=parseFlag(e,"--text");const l=parseFlag(e,"--input");const u=parseFlag(e,"--from");const p=parseFlag(e,"--to");const d=parseFlag(e,"--output");const f=parseBoolFlag(e,"--realign");const g=parseIntFlag(e,"--batch-size");if(!p&&!parseBoolFlag(e,"--help")){console.error("Error: --to <lang> is required. Example: voxflow translate --srt file.srt --to en");process.exit(1)}const m=["zh","en","ja","ko","fr","de","es","pt","ru","ar","th","vi","it"];if(p&&!m.includes(p)){console.error(`Error: --to must be one of: ${m.join(", ")} (got: "${p}")`);process.exit(1)}if(u&&!m.includes(u)&&u!=="auto"){console.error(`Error: --from must be one of: auto, ${m.join(", ")} (got: "${u}")`);process.exit(1)}const w=[i,c,l].filter(Boolean).length;if(w===0&&!parseBoolFlag(e,"--help")){console.error("Error: Provide one of: --srt <file>, --text <text>, --input <file>");process.exit(1)}if(w>1){console.error("Error: Specify only one input: --srt, --text, or --input");process.exit(1)}if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: SRT file not found: ${n}`);process.exit(1)}}if(l){const e=o(896);const t=o(928);const n=t.resolve(l);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}if(g!==undefined){if(isNaN(g)||g<1||g>20){console.error(`Error: --batch-size must be between 1 and 20 (got: "${parseFlag(e,"--batch-size")}")`);process.exit(1)}}const x={token:a,api:t,srt:i,text:c,input:l,from:u,to:p,output:d,realign:f,batchSize:g};await runWithRetry(h,x,t,s)}async function handleVideoTranslate(e){if(parseBoolFlag(e,"--help")||parseBoolFlag(e,"-h")){printHelp();return}const t=parseFlag(e,"--api")||v;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--input");const c=parseFlag(e,"--from");const 
l=parseFlag(e,"--to");const u=parseFlag(e,"--voice");const p=parseFlag(e,"--voices");const d=parseFlag(e,"--output");const f=parseBoolFlag(e,"--realign");const g=parseBoolFlag(e,"--keep-intermediates");const m=parseIntFlag(e,"--batch-size");const h=parseFloatFlag(e,"--speed");const x=parseFlag(e,"--asr-mode");const S=parseFlag(e,"--asr-lang");if(!i){console.error("Error: --input <video-file> is required. Example: voxflow video-translate --input video.mp4 --to en");process.exit(1)}if(!l){console.error("Error: --to <lang> is required. Example: voxflow video-translate --input video.mp4 --to en");process.exit(1)}const $=["zh","en","ja","ko","fr","de","es","pt","ru","ar","th","vi","it"];if(l&&!$.includes(l)){console.error(`Error: --to must be one of: ${$.join(", ")} (got: "${l}")`);process.exit(1)}if(c&&!$.includes(c)&&c!=="auto"){console.error(`Error: --from must be one of: auto, ${$.join(", ")} (got: "${c}")`);process.exit(1)}if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: Video file not found: ${n}`);process.exit(1)}}if(h!==undefined&&(isNaN(h)||h<.5||h>2)){console.error(`Error: --speed must be between 0.5 and 2.0 (got: "${parseFlag(e,"--speed")}")`);process.exit(1)}if(m!==undefined&&(isNaN(m)||m<1||m>20)){console.error(`Error: --batch-size must be between 1 and 20 (got: "${parseFlag(e,"--batch-size")}")`);process.exit(1)}const y=["auto","sentence","flash","file"];if(x&&!y.includes(x)){console.error(`Error: --asr-mode must be one of: ${y.join(", ")} (got: "${x}")`);process.exit(1)}if(p){const e=o(896);const t=o(928);const n=t.resolve(p);if(!e.existsSync(n)){console.error(`Error: Voices map file not found: ${n}`);process.exit(1)}}const b={token:a,api:t,input:i,from:c,to:l,voice:u,voicesMap:p,output:d,realign:f,keepIntermediates:g,batchSize:m,speed:h,asrMode:x,asrLang:S};await runWithRetry(w,b,t,s)}async function handleDashboard(){const e=`${S}/app/`;console.log(`\nOpening dashboard: ${e}`);try{const t=(await 
o.e(935).then(o.bind(o,935))).default;const n=await t(e);if(n&&typeof n.on==="function"){n.on("error",(()=>{console.log("Failed to open browser. Visit manually:");console.log(` ${e}`)}))}}catch{console.log("Failed to open browser. Visit manually:");console.log(` ${e}`)}}async function runWithRetry(e,t,o,s){try{await e(t)}catch(r){if(r instanceof M&&r.code==="token_expired"&&!s){console.log("\nToken expired, re-authenticating...");t.token=await n({api:o,force:true});await e(t)}else{throw r}}}function validateSpeed(e,t){if(t!==undefined){if(isNaN(t)||t<.5||t>2){console.error(`Error: --speed must be between 0.5 and 2.0 (got: "${parseFlag(e,"--speed")}")`);process.exit(1)}}}function validateSilence(e,t){if(t!==undefined){if(isNaN(t)||t<0||t>5){console.error(`Error: --silence must be between 0 and 5.0 (got: "${parseFlag(e,"--silence")}")`);process.exit(1)}}}function validateOutput(e,t){if(e){const t=[".wav",".mp3"];const o=t.some((t=>e.toLowerCase().endsWith(t)));if(!o){console.error("Error: --output path must end with .wav or .mp3");process.exit(1)}}}function validateFormat(e){if(e&&!["pcm","wav","mp3"].includes(e)){console.error(`Error: --format must be one of: pcm, wav, mp3 (got: "${e}")`);process.exit(1)}}function parseFlag(e,t){const o=e.indexOf(t);if(o===-1||o+1>=e.length)return null;return e[o+1]}function parseIntFlag(e,t){const o=parseFlag(e,t);return o!=null?parseInt(o,10):undefined}function parseFloatFlag(e,t){const o=parseFlag(e,t);return o!=null?parseFloat(o):undefined}function parseBoolFlag(e,t){return e.includes(t)}function printHelp(){console.log(`\nvoxflow v${I.version} — AI audio content creation CLI\n\nUsage:\n voxflow <command> [options]\n\nCommands:\n login Open browser to login and cache token\n logout Clear cached token\n status Show login status and token info\n dashboard Open Web dashboard in browser\n story [opts] Generate a story with TTS narration\n podcast [opts] Generate a multi-speaker podcast/dialogue\n synthesize <text> Synthesize a 
single text snippet to audio (alias: say)\n narrate [opts] Narrate a file, text, or script to audio\n voices [opts] Browse and search available TTS voices\n dub [opts] Dub video/audio from SRT subtitles (timeline-aligned TTS)\n asr [opts] Transcribe audio/video to text (alias: transcribe)\n translate [opts] Translate SRT subtitles, text, or files\n video-translate Translate entire video: ASR → translate → dub → merge\n\nStory options:\n --topic <text> Story topic (default: children's story)\n --voice <id> TTS voice ID (default: ${$.voice})\n --output <path> Output WAV path (default: ./story-<timestamp>.wav)\n --paragraphs <n> Paragraph count, 1-20 (default: ${$.paragraphs})\n --speed <n> TTS speed 0.5-2.0 (default: ${$.speed})\n --silence <sec> Silence between paragraphs, 0-5.0 (default: ${$.silence})\n\nPodcast options:\n --topic <text> Podcast topic (default: tech trends)\n --style <style> Dialogue style (default: ${y.style})\n --length <len> short | medium | long (default: ${y.length})\n --exchanges <n> Number of exchanges, 2-30 (default: ${y.exchanges})\n --output <path> Output WAV path (default: ./podcast-<timestamp>.wav)\n --speed <n> TTS speed 0.5-2.0 (default: ${y.speed})\n --silence <sec> Silence between segments, 0-5.0 (default: ${y.silence})\n\nSynthesize options (alias: say):\n <text> Text to synthesize (positional arg or --text)\n --text <text> Text to synthesize (alternative to positional)\n --voice <id> TTS voice ID (default: ${b.voice})\n --format <fmt> Output format: pcm, wav, mp3 (default: pcm → WAV)\n --speed <n> TTS speed 0.5-2.0 (default: ${b.speed})\n --volume <n> TTS volume 0.1-2.0 (default: ${b.volume})\n --pitch <n> TTS pitch -12 to 12 (default: ${b.pitch})\n --output <path> Output file path (default: ./tts-<timestamp>.wav)\n\nNarrate options:\n --input <file> Input .txt or .md file\n --text <text> Inline text to narrate\n --script <file> JSON script with per-segment voice/speed control\n --voice <id> Default voice ID (default: 
${T.voice})\n --format <fmt> Output format: pcm, wav, mp3 (default: pcm → WAV)\n --speed <n> TTS speed 0.5-2.0 (default: ${T.speed})\n --silence <sec> Silence between segments, 0-5.0 (default: ${T.silence})\n --output <path> Output file path (default: ./narration-<timestamp>.wav)\n\n Also supports stdin: echo "text" | voxflow narrate\n\nDub options:\n --srt <file> SRT subtitle file (required)\n --video <file> Video file — merge dubbed audio into video\n --voice <id> Default TTS voice ID (default: ${k.voice})\n --voices <file> JSON speaker→voiceId map for multi-speaker dubbing\n --speed <n> TTS speed 0.5-2.0 (default: ${k.speed})\n --speed-auto Auto-adjust speed when audio overflows time slot\n --bgm <file> Background music file to mix in\n --ducking <n> BGM volume ducking 0-1.0 (default: ${k.ducking})\n --patch <id> Re-synthesize a single caption by ID (patch mode)\n --output <path> Output file path (default: ./dub-<timestamp>.wav)\n\nASR options (alias: transcribe):\n --input <file> Local audio or video file to transcribe\n --url <url> Remote audio URL to transcribe (cloud only)\n --mic Record from microphone (cloud only, requires sox)\n --engine <type> auto (default) | local | cloud\n --model <name> Whisper model: tiny, base (default), small, medium, large\n --mode <type> auto (default) | sentence | flash | file (cloud only)\n --lang <model> Language: 16k_zh (default), 16k_en, 16k_zh_en, 16k_ja, 16k_ko\n --format <fmt> Output format: srt (default), txt, json\n --output <path> Output file path (default: <input>.<format>)\n --speakers Enable speaker diarization (cloud flash/file mode)\n --speaker-number <n> Expected number of speakers (with --speakers)\n --task-id <id> Resume polling an existing async task (cloud only)\n\nTranslate options:\n --srt <file> SRT subtitle file to translate\n --text <text> Inline text to translate\n --input <file> Text file (.txt, .md) to translate\n --from <lang> Source language code (default: auto-detect)\n --to <lang> Target language 
code (required)\n --output <path> Output file path (default: <input>-<lang>.<ext>)\n --realign Adjust subtitle timing for target language length\n --batch-size <n> Captions per LLM call, 1-20 (default: ${E.batchSize})\n\n Supported languages: zh, en, ja, ko, fr, de, es, pt, ru, ar, th, vi, it\n\nVideo-translate options:\n --input <file> Input video file (required)\n --to <lang> Target language code (required)\n --from <lang> Source language code (default: auto-detect)\n --voice <id> TTS voice ID for dubbed audio\n --voices <file> JSON speaker→voiceId map for multi-speaker dubbing\n --realign Adjust subtitle timing for target language length\n --speed <n> TTS speed 0.5-2.0 (default: ${_.speed})\n --batch-size <n> Translation batch size, 1-20 (default: ${_.batchSize})\n --keep-intermediates Keep intermediate files (SRT, audio) for debugging\n --output <path> Output MP4 path (default: <input>-<lang>.mp4)\n --asr-mode <mode> Override ASR mode: auto, sentence, flash, file\n --asr-lang <engine> Override ASR engine: 16k_zh, 16k_en, 16k_ja, 16k_ko, etc.\n\nVoices options:\n --search <query> Search by name, tone, style, description\n --gender <m|f> Filter by gender: male/m or female/f\n --language <code> Filter by language: zh, en, etc.\n --extended Include extended voice library (380+ voices)\n --json Output raw JSON instead of table\n\nCommon options:\n --help, -h Show this help\n --version, -v Show version\n\nAdvanced options:\n --api <url> Override API endpoint (for self-hosted servers)\n --token <jwt> Use explicit token (CI/CD, skip browser login)\n\nExamples:\n voxflow say "你好世界"\n voxflow say "你好世界" --format mp3\n voxflow synthesize "Welcome" --voice v-male-Bk7vD3xP --format mp3\n voxflow narrate --input article.txt --voice v-female-R2s4N9qJ\n voxflow narrate --input article.txt --format mp3\n voxflow narrate --script narration-script.json\n echo "Hello" | voxflow narrate --output hello.wav\n voxflow voices --search "温柔" --gender female\n voxflow voices --extended 
--json\n voxflow dub --srt subtitles.srt\n voxflow dub --srt subtitles.srt --video input.mp4 --output dubbed.mp4\n voxflow dub --srt subtitles.srt --voices speakers.json --speed-auto\n voxflow dub --srt subtitles.srt --bgm music.mp3 --ducking 0.3\n voxflow dub --srt subtitles.srt --patch 5 --output dub-existing.wav\n voxflow asr --input recording.mp3\n voxflow asr --input recording.mp3 --engine local\n voxflow asr --input meeting.wav --engine local --model small\n voxflow asr --input video.mp4 --format srt --lang 16k_zh\n voxflow asr --url https://example.com/audio.wav --mode flash\n voxflow asr --mic --format txt\n voxflow transcribe --input meeting.wav --speakers --speaker-number 3\n voxflow asr --task-id 12345678 --format srt\n voxflow translate --srt subtitles.srt --to en\n voxflow translate --srt subtitles.srt --from zh --to en --realign\n voxflow translate --srt subtitles.srt --to ja --output subtitles-ja.srt\n voxflow translate --text "你好世界" --to en\n voxflow translate --input article.txt --to en --output article-en.txt\n voxflow video-translate --input video.mp4 --to en\n voxflow video-translate --input video.mp4 --from zh --to en --realign\n voxflow video-translate --input video.mp4 --to ja --voice v-male-Bk7vD3xP\n`)}e.exports={run:run}},929:(e,t,o)=>{const n=o(896);const s=o(928);const{API_BASE:r}=o(782);const{ApiError:a}=o(852);const{getMediaInfo:i,extractAudioForAsr:c}=o(388);const{uploadFileToCos:l}=o(567);const{recognize:u,detectMode:p,SENTENCE_MAX_MS:d,FLASH_MAX_MS:f,BASE64_MAX_BYTES:g,TASK_STATUS:m}=o(514);const{formatSrt:h,formatPlainText:w,formatJson:x,buildCaptionsFromFlash:v,buildCaptionsFromSentence:S,buildCaptionsFromFile:$}=o(813);const{checkWhisperAvailable:y,transcribeLocal:b}=o(126);const T={lang:"16k_zh",mode:"auto",format:"srt"};const k={"16k_zh":"中文 (16kHz)","16k_en":"English (16kHz)","16k_zh_en":"中英混合 (16kHz)","16k_ja":"日本語 (16kHz)","16k_ko":"한국어 (16kHz)","16k_zh_dialect":"中文方言 (16kHz)","8k_zh":"中文 (8kHz 电话)","8k_en":"English (8kHz 
phone)"};const F={srt:".srt",txt:".txt",json:".json"};async function asr(e){const sigintHandler=()=>{console.log("\n\nASR cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _asr(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _asr(e){const{token:t,api:a=r,input:d,url:f,mic:y=false,mode:b=T.mode,lang:E=T.lang,format:_=T.format,output:A,speakers:I=false,speakerNumber:M=0,taskId:L,engine:C="auto",model:N="base"}=e;if(L){return await resumePoll({apiBase:a,token:t,taskId:L,format:_,output:A,lang:E})}const P=resolveEngine(C);if(P==="local"){return await _asrLocal({input:d,format:_,output:A,model:N,lang:E})}const O=[d,f,y].filter(Boolean).length;if(O===0){throw new Error("No input specified. Provide one of:\n"+" --input <file> Local audio/video file\n"+" --url <url> Remote audio URL\n"+" --mic Record from microphone")}if(O>1){throw new Error("Specify only one input source: --input, --url, or --mic")}console.log("\n=== VoxFlow ASR ===");let D=d?s.resolve(d):null;let q=[];if(y){D=await handleMicInput();q.push(D)}if(D&&!n.existsSync(D)){throw new Error(`Input file not found: ${D}`)}let z=0;let R=0;let B=f||null;if(D){console.log(`Input: ${s.basename(D)}`);const e=await i(D);z=e.durationMs;R=n.statSync(D).size;const t=formatDuration(z);const o=formatSize(R);console.log(`Duration: ${t}`);console.log(`Size: ${o}`);if(!e.hasAudio){throw new Error("Input file has no audio track.")}console.log(`\n[1/3] 提取音频 (16kHz mono WAV)...`);const r=await c(D);q.push(r.wavPath);z=r.durationMs;R=n.statSync(r.wavPath).size;D=r.wavPath;console.log(` OK (${formatSize(R)}, ${formatDuration(z)})`)}else{console.log(`Input: ${f}`);console.log(`(Remote URL — duration will be detected by ASR API)`)}const U=!!B;const j=b==="auto"?p(z,U||!!D,R):b;console.log(`Mode: ${j}`);console.log(`Language: ${k[E]||E}`);console.log(`Format: ${_}`);if(D&&!B){const e=j==="flash"||j==="file"&&R>g||j==="sentence"&&R>g;if(e){console.log(`\n[2/3] 上传至 COS...`);const 
e=await l(D,a,t);B=e.cosUrl;console.log(` OK (${e.key})`)}else{console.log(`\n[2/3] 上传至 COS... (跳过,使用 base64)`)}}else if(!D&&B){console.log(`\n[2/3] 上传至 COS... (跳过,使用远程 URL)`)}console.log(`\n[3/3] ASR 语音识别 (${j})...`);const W=Date.now();const V=await u({apiBase:a,token:t,mode:j,url:B,filePath:j==="sentence"&&!B?D:undefined,durationMs:z,fileSize:R,lang:E,speakerDiarization:I,speakerNumber:M,wordInfo:_==="srt",onProgress:(e,t)=>{const o=e===m.WAITING?"排队中":e===m.PROCESSING?"识别中":"未知";process.stdout.write(`\r ${o}... (${Math.round(t/1e3)}s)`)}});const G=((Date.now()-W)/1e3).toFixed(1);console.log(`\n OK (${G}s)`);const K=V.audioTime||z/1e3||0;let J;switch(V.mode){case"flash":J=v(V.flashResult||[]);break;case"sentence":J=S(V.result,K,V.wordList);break;case"file":J=$(V.result,K);break;default:J=[{id:1,startMs:0,endMs:0,text:V.result||""}]}let H;switch(_){case"srt":H=h(J);break;case"txt":H=w(J,{includeSpeakers:I});break;case"json":H=x(J);break;default:throw new Error(`Unknown format: ${_}. Use: srt, txt, json`)}const Y=F[_]||".txt";let Q;if(A){Q=s.resolve(A)}else if(d){const e=s.basename(d,s.extname(d));Q=o.ab+"cli/"+s.dirname(d)+"/"+e+""+Y}else if(y){Q=s.resolve(`mic-${Date.now()}${Y}`)}else{try{const e=new URL(f);const t=s.basename(e.pathname,s.extname(e.pathname))||"asr";Q=o.ab+"cli/"+t+""+Y}catch{Q=s.resolve(`asr-${Date.now()}${Y}`)}}n.writeFileSync(Q,H,"utf8");for(const e of q){try{n.unlinkSync(e)}catch{}}const X=V.quota||{};const Z=1;const ee=X.remaining??"?";console.log(`\n=== Done ===`);console.log(`Output: ${Q}`);console.log(`Captions: ${J.length}`);console.log(`Duration: ${formatDuration(z||(V.audioTime||0)*1e3)}`);console.log(`Mode: ${V.mode}`);console.log(`Quota: ${Z} used, ${ee} remaining`);if(J.length>0&&_!=="json"){console.log(`\n--- Preview ---`);const e=J.slice(0,3);for(const t of e){const e=formatDuration(t.startMs);const o=t.speakerId?`[${t.speakerId}] `:"";const n=t.text.length>60?t.text.slice(0,57)+"...":t.text;console.log(` ${e} 
${o}${n}`)}if(J.length>3){console.log(` ... (${J.length-3} more)`)}}return{outputPath:Q,mode:V.mode,duration:z/1e3,captionCount:J.length,quotaUsed:Z}}async function _asrLocal(e){const{input:t,format:r=T.format,output:a,model:l="base",lang:u=T.lang}=e;console.log("\n=== VoxFlow ASR (Local Whisper) ===");if(!t){throw new Error("Local whisper engine requires --input <file>.\n"+"URL and microphone input are cloud-only features.\n"+"Use: voxflow asr --input <file> --engine local")}const p=s.resolve(t);if(!n.existsSync(p)){throw new Error(`Input file not found: ${p}`)}const d=await i(p);console.log(`Input: ${s.basename(p)}`);console.log(`Duration: ${formatDuration(d.durationMs)}`);console.log(`Engine: whisper (local)`);console.log(`Model: ${l}`);console.log(`\n[1/2] 提取音频 (16kHz mono WAV)...`);const f=await c(p);console.log(` OK (${formatSize(n.statSync(f.wavPath).size)})`);console.log(`[2/2] Whisper 本地识别...`);const g=Date.now();const m=await b(f.wavPath,{model:l,lang:u});const v=((Date.now()-g)/1e3).toFixed(1);console.log(` OK (${v}s, ${m.length} segments)`);try{n.unlinkSync(f.wavPath)}catch{}let S;switch(r){case"srt":S=h(m);break;case"txt":S=w(m);break;case"json":S=x(m);break;default:throw new Error(`Unknown format: ${r}. Use: srt, txt, json`)}const $=F[r]||".txt";const y=a?s.resolve(a):o.ab+"cli/"+s.dirname(t)+"/"+s.basename(t,s.extname(t))+""+$;n.writeFileSync(y,S,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${y}`);console.log(`Captions: ${m.length}`);console.log(`Duration: ${formatDuration(d.durationMs)}`);console.log(`Engine: whisper (local, no quota used)`);if(m.length>0&&r!=="json"){console.log(`\n--- Preview ---`);const e=m.slice(0,3);for(const t of e){const e=formatDuration(t.startMs);const o=t.text.length>60?t.text.slice(0,57)+"...":t.text;console.log(` ${e} ${o}`)}if(m.length>3){console.log(` ... 
(${m.length-3} more)`)}}return{outputPath:y,mode:"local",duration:d.durationMs/1e3,captionCount:m.length,quotaUsed:0}}function resolveEngine(e){if(e==="local"||e==="whisper"){const e=y();if(!e.available){throw new Error("Local whisper engine requires nodejs-whisper.\n"+"Install: npm install -g nodejs-whisper\n"+"Download a model: npx nodejs-whisper download\n"+"Or use: --engine cloud")}return"local"}if(e==="cloud"||e==="tencent")return"cloud";if(e==="auto"){const{available:e}=y();return e?"local":"cloud"}return"cloud"}async function resumePoll({apiBase:e,token:t,taskId:r,format:a,output:i,lang:c}){console.log(`\n=== VoxFlow ASR — Resume Task ===`);console.log(`Task ID: ${r}`);const{pollTaskResult:l,TASK_STATUS:u}=o(514);console.log(`Polling...`);const p=await l({apiBase:e,token:t,taskId:r,onProgress:(e,t)=>{const o=e===u.WAITING?"排队中":e===u.PROCESSING?"识别中":"?";process.stdout.write(`\r ${o}... (${Math.round(t/1e3)}s)`)}});console.log(`\n OK`);const d=$(p.result,p.audioTime);let f;switch(a){case"srt":f=h(d);break;case"txt":f=w(d);break;case"json":f=x(d);break;default:f=w(d)}const g=F[a]||".txt";const m=i?s.resolve(i):s.resolve(`task-${r}${g}`);n.writeFileSync(m,f,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${m}`);console.log(`Captions: ${d.length}`);console.log(`Duration: ${formatDuration((p.audioTime||0)*1e3)}`);return{outputPath:m,mode:"file",duration:p.audioTime,captionCount:d.length}}async function handleMicInput(){const{recordMic:e,checkRecAvailable:t}=o(384);const n=await t();if(!n.available){throw new Error(n.error)}console.log(`\nRecording from microphone...`);console.log(` Press Enter or Q to stop recording.`);console.log(` Max duration: 5 minutes.\n`);const{wavPath:s,durationMs:r,stopped:a}=await e({maxSeconds:300});console.log(`\n Recording ${a==="user"?"stopped":"finished"}: ${formatDuration(r)}`);return s}function formatDuration(e){if(!e||e<=0)return"0s";const t=Math.round(e/1e3);if(t<60)return`${t}s`;const o=Math.floor(t/60);const 
n=t%60;if(o<60)return`${o}m${n>0?n+"s":""}`;const s=Math.floor(o/60);const r=o%60;return`${s}h${r>0?r+"m":""}`}function formatSize(e){if(e<1024)return`${e} B`;if(e<1024*1024)return`${(e/1024).toFixed(1)} KB`;return`${(e/1024/1024).toFixed(1)} MB`}e.exports={asr:asr,ASR_DEFAULTS:T,ApiError:a}},944:(e,t,o)=>{const n=o(896);const s=o(928);const{DUB_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{buildWav:u,getFileExtension:p}=o(56);const{parseSrt:d,formatSrt:f}=o(813);const{buildTimelinePcm:g,buildTimelineAudio:m,msToBytes:h,BYTES_PER_MS:w}=o(907);const{startSpinner:x}=o(339);function parseVoicesMap(e){if(!n.existsSync(e)){throw new Error(`Voices map file not found: ${e}`)}let t;try{t=JSON.parse(n.readFileSync(e,"utf8"))}catch(e){throw new Error(`Invalid JSON in voices map: ${e.message}`)}if(typeof t!=="object"||t===null||Array.isArray(t)){throw new Error('Voices map must be a JSON object: { "SpeakerName": "voiceId", ... }')}for(const[e,o]of Object.entries(t)){if(typeof o!=="string"||o.trim().length===0){throw new Error(`Invalid voice ID for speaker "${e}": must be a non-empty string`)}}return t}async function synthesizeCaption(e,t,o,n,s,r,l,u){process.stdout.write(` TTS [${l+1}/${u}]...`);const p={text:o,voiceId:n,speed:s??1,format:r||"pcm"};let d,f;try{({status:d,data:f}=await a(`${e}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},p))}catch(t){console.log(" FAIL");c(t,e)}if(d!==200||f.code!=="success"){console.log(" FAIL");i(d,f,`TTS caption ${l+1}`)}const g=Buffer.from(f.audio,"base64");const m=g.length/w;console.log(` OK (${(g.length/1024).toFixed(0)} KB, ${(m/1e3).toFixed(1)}s)`);return{audio:g,quota:f.quota,durationMs:m}}async function dub(e){const sigintHandler=()=>{console.log("\n\nDubbing cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _dub(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function 
_dub(e){const t=e.srt;if(!t){throw new Error("No SRT file provided. Usage: voxflow dub --srt <file.srt>")}const a=s.resolve(t);if(!n.existsSync(a)){throw new Error(`SRT file not found: ${a}`)}const i=n.readFileSync(a,"utf8");const c=d(i);if(c.length===0){throw new Error("SRT file contains no valid captions")}const l=e.voice||r.voice;const u=e.speed??r.speed;const p=e.speedAuto||false;const f=r.toleranceMs;const g=e.api;const h=e.token;const w=e.patch;const x=[];let v=null;if(e.voicesMap){v=parseVoicesMap(s.resolve(e.voicesMap))}let S=e.output;const $=!!e.video;const y=$?".mp4":".wav";if(!S){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);S=s.resolve(`dub-${e}${y}`)}console.log("\n=== VoxFlow Dub ===");console.log(`SRT: ${t} (${c.length} captions)`);console.log(`Voice: ${l}${v?` + voices map (${Object.keys(v).length} speakers)`:""}`);console.log(`Speed: ${u}${p?" (auto-compensate)":""}`);if($)console.log(`Video: ${e.video}`);if(e.bgm)console.log(`BGM: ${e.bgm} (ducking: ${e.ducking??r.ducking})`);if(w!=null)console.log(`Patch: caption #${w}`);console.log(`Output: ${S}`);if(w!=null){return _dubPatch(e,c,S,x)}console.log(`\n[1/2] 合成 TTS 音频 (${c.length} 条字幕)...`);const b=[];let T=null;let k=0;for(let e=0;e<c.length;e++){const t=c[e];const o=t.endMs-t.startMs;let n=l;if(v&&t.speakerId&&v[t.speakerId]){n=v[t.speakerId]}let s=await synthesizeCaption(g,h,t.text,n,u,"pcm",e,c.length);k++;T=s.quota;if(p&&s.durationMs>o+f){const r=s.durationMs/o;if(r<=2){const a=Math.min(u*r,2);process.stdout.write(` ↳ Re-synth #${t.id} (${(s.durationMs/1e3).toFixed(1)}s > ${(o/1e3).toFixed(1)}s, speed: ${a.toFixed(2)})...`);s=await synthesizeCaption(g,h,t.text,n,a,"pcm",e,c.length);k++;T=s.quota}else{const a=`Caption #${t.id}: audio too long (${(s.durationMs/1e3).toFixed(1)}s for ${(o/1e3).toFixed(1)}s slot, alpha=${r.toFixed(1)}). 
Consider shortening text.`;x.push(a);console.log(` ⚠ OVERFLOW: ${a}`);const i=await synthesizeCaption(g,h,t.text,n,2,"pcm",e,c.length);k++;T=i.quota;s=i}}b.push({startMs:t.startMs,endMs:t.endMs,audioBuffer:s.audio})}console.log("\n[2/2] 构建时间轴音频...");const{wav:F,duration:E}=m(b);const _=$?S.replace(/\.[^.]+$/,".wav"):S;const A=s.dirname(_);n.mkdirSync(A,{recursive:true});n.writeFileSync(_,F);const I=_.replace(/\.(wav|mp3|mp4)$/i,".txt");const M=c.map((e=>{const t=e.speakerId?`|${e.speakerId}`:"";const o=v&&e.speakerId&&v[e.speakerId]?`|${v[e.speakerId]}`:"";return`[${e.id}${t}${o}] ${e.text}`})).join("\n\n");n.writeFileSync(I,M,"utf8");const L=$||e.bgm;if(L){const{checkFfmpeg:t,mergeAudioVideo:s,mixWithBgm:a}=o(297);const i=await t();if(!i.available){throw new Error("ffmpeg is required for BGM mixing / video merging. Install it:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}let c=_;if(e.bgm){const t=_.replace(".wav","-mixed.wav");console.log(` Mixing BGM (ducking: ${e.ducking??r.ducking})...`);await a(_,e.bgm,t,{ducking:e.ducking??r.ducking});c=t;if(!$){n.copyFileSync(c,_);try{n.unlinkSync(c)}catch{}c=_}}if($){console.log(" Merging with video...");await s(e.video,c,S);try{if(_!==S)n.unlinkSync(_);if(e.bgm){const e=_.replace(".wav","-mixed.wav");if(n.existsSync(e))n.unlinkSync(e)}}catch{}}}console.log(`\n=== Done ===`);console.log(`Output: ${S} (${(n.statSync(S).size/1024).toFixed(1)} KB)`);console.log(`Duration: ${E.toFixed(1)}s`);console.log(`Transcript: ${I}`);console.log(`Captions: ${c.length}`);console.log(`Quota: ${k} used, ${T?.remaining??"?"} remaining`);if(x.length>0){console.log(`\nWarnings (${x.length}):`);for(const e of x){console.log(` ⚠ ${e}`)}}return{outputPath:S,textPath:I,duration:E,quotaUsed:k,segmentCount:c.length,warnings:x}}async function _dubPatch(e,t,o,a){const i=e.patch;const c=e.api;const l=e.token;const p=e.voice||r.voice;const d=e.speed??r.speed;let 
f=null;if(e.voicesMap){f=parseVoicesMap(s.resolve(e.voicesMap))}const g=t.findIndex((e=>e.id===i));if(g===-1){throw new Error(`Caption #${i} not found in SRT. Available IDs: ${t.map((e=>e.id)).join(", ")}`)}const m=t[g];const w=o.replace(/\.[^.]+$/,".wav");if(!n.existsSync(w)){throw new Error(`Patch mode requires an existing output file. `+`Run a full dub first, then use --patch to update individual captions.`)}let x=p;if(f&&m.speakerId&&f[m.speakerId]){x=f[m.speakerId]}console.log(`\n[Patch] Re-synthesizing caption #${i}: "${m.text.slice(0,40)}..."`);const v=await synthesizeCaption(c,l,m.text,x,d,"pcm",0,1);const S=n.readFileSync(w);const $=S.subarray(44);const y=h(m.startMs);const b=h(m.endMs);$.fill(0,y,Math.min(b,$.length));const T=Math.min(v.audio.length,b-y,$.length-y);if(T>0){v.audio.copy($,y,0,T)}const{wav:k}=u([$],0);n.writeFileSync(w,k);console.log(`\n=== Patch Done ===`);console.log(`Updated: caption #${i} in ${w}`);console.log(`Quota: 1 used, ${v.quota?.remaining??"?"} remaining`);return{outputPath:w,textPath:w.replace(/\.wav$/i,".txt"),duration:$.length/(24e3*2),quotaUsed:1,segmentCount:1,warnings:a}}e.exports={dub:dub,ApiError:l,_test:{parseVoicesMap:parseVoicesMap}}},80:(e,t,o)=>{const n=o(896);const s=o(928);const{NARRATE_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{parseParagraphs:u,buildWav:p,concatAudioBuffers:d,getFileExtension:f}=o(56);const{startSpinner:g}=o(339);function parseScript(e){if(!n.existsSync(e)){throw new Error(`Script file not found: ${e}`)}let t;try{t=JSON.parse(n.readFileSync(e,"utf8"))}catch(e){throw new Error(`Invalid JSON in script file: ${e.message}`)}if(!t.segments||!Array.isArray(t.segments)||t.segments.length===0){throw new Error('Script must have a non-empty "segments" array')}for(let e=0;e<t.segments.length;e++){const o=t.segments[e];if(!o.text||typeof o.text!=="string"||o.text.trim().length===0){throw new Error(`Segment ${e+1} must have a non-empty "text" 
field`)}}return{segments:t.segments.map((e=>({text:e.text.trim(),voiceId:e.voiceId||undefined,speed:e.speed!=null?Number(e.speed):undefined,volume:e.volume!=null?Number(e.volume):undefined,pitch:e.pitch!=null?Number(e.pitch):undefined}))),silence:t.silence!=null?Number(t.silence):r.silence,output:t.output||undefined}}function stripMarkdown(e){return e.replace(/```[\s\S]*?```/g,"").replace(/`([^`]+)`/g,"$1").replace(/!\[[^\]]*\]\([^)]*\)/g,"").replace(/\[([^\]]+)\]\([^)]*\)/g,"$1").replace(/^#{1,6}\s+/gm,"").replace(/\*{1,3}([^*]+)\*{1,3}/g,"$1").replace(/_{1,3}([^_]+)_{1,3}/g,"$1").replace(/^[-*_]{3,}\s*$/gm,"").replace(/^>\s?/gm,"").replace(/\n{3,}/g,"\n\n").trim()}async function readStdin(){const e=[];for await(const t of process.stdin){e.push(t)}return Buffer.concat(e).toString("utf8")}async function synthesizeSegment(e,t,o,n,s,r,l,u,p,d){process.stdout.write(` TTS [${p+1}/${d}]...`);const f={text:o,voiceId:n,speed:s??1,volume:r??1,format:u||"pcm"};if(l!=null&&l!==0)f.pitch=l;let g,m;try{({status:g,data:m}=await a(`${e}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},f))}catch(t){console.log(" FAIL");c(t,e)}if(g!==200||m.code!=="success"){console.log(" FAIL");i(g,m,`TTS segment ${p+1}`)}const h=Buffer.from(m.audio,"base64");const w=u==="mp3"?"MP3":u==="wav"?"WAV":"PCM";console.log(` OK (${(h.length/1024).toFixed(0)} KB ${w})`);return{audio:h,quota:m.quota}}async function narrate(e){const sigintHandler=()=>{console.log("\n\nNarration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _narrate(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _narrate(e){const t=e.voice||r.voice;const o=e.speed??r.speed;const a=e.format||"pcm";const i=e.api;const c=e.token;let l;let g;let m;let h;if(e.script){const t=parseScript(e.script);l=t.segments;g=e.silence??t.silence;m=e.output||t.output;h=`script: ${e.script} (${l.length} segments)`}else if(e.input){const 
t=s.resolve(e.input);if(!n.existsSync(t)){throw new Error(`Input file not found: ${t}`)}let o=n.readFileSync(t,"utf8");const a=s.extname(t).toLowerCase();if(a===".md"||a===".markdown"){o=stripMarkdown(o)}const i=u(o);if(i.length===0){throw new Error("No text content found in input file")}l=i.map((e=>({text:e})));g=e.silence??r.silence;m=e.output;h=`file: ${e.input} (${l.length} paragraphs)`}else if(e.text){const t=u(e.text);if(t.length===0){throw new Error("No text content provided")}l=t.map((e=>({text:e})));g=e.silence??r.silence;m=e.output;h=`text: ${e.text.length} chars (${l.length} paragraphs)`}else if(!process.stdin.isTTY){const t=await readStdin();if(!t||t.trim().length===0){throw new Error("No input provided via stdin")}const o=u(t);if(o.length===0){throw new Error("No text content found in stdin input")}l=o.map((e=>({text:e})));g=e.silence??r.silence;h=`stdin (${l.length} paragraphs)`}else{throw new Error("No input provided. Use one of:\n"+" --input <file.txt> Read a text or markdown file\n"+' --text "text" Provide inline text\n'+" --script <file.json> Use a script with per-segment control\n"+' echo "text" | voxflow narrate Pipe from stdin')}const w=f(a);if(!m){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);m=s.resolve(`narration-${e}${w}`)}if(!m.endsWith(w)){m=m.replace(/\.(wav|mp3|pcm)$/i,"")+w}console.log("\n=== VoxFlow Narrate ===");console.log(`Input: ${h}`);console.log(`Voice: ${t}${e.script?" 
(may be overridden per segment)":""}`);console.log(`Format: ${a==="pcm"?"wav (pcm)":a}`);console.log(`Speed: ${o}`);if(a==="mp3"){console.log(`Output: ${m}`);console.log(` (MP3 模式不插入段间静音)`)}else{console.log(`Silence: ${g}s`);console.log(`Output: ${m}`)}console.log(`\n[1/2] 合成 TTS 音频 (${l.length} 段)...`);const x=[];let v=null;for(let e=0;e<l.length;e++){const n=l[e];const s=await synthesizeSegment(i,c,n.text,n.voiceId||t,n.speed??o,n.volume,n.pitch,a,e,l.length);x.push(s.audio);v=s.quota}console.log("\n[2/2] 拼接音频...");const{audio:S,wav:$,duration:y}=a==="mp3"||a==="wav"?d(x,a,g):p(x,g);const b=S||$;const T=s.dirname(m);n.mkdirSync(T,{recursive:true});n.writeFileSync(m,b);const k=m.replace(/\.(wav|mp3)$/i,".txt");const F=l.map(((e,t)=>{const o=e.voiceId?`[${t+1}|${e.voiceId}]`:`[${t+1}]`;return`${o} ${e.text}`})).join("\n\n");n.writeFileSync(k,F,"utf8");const E=l.length;console.log(`\n=== Done ===`);console.log(`Output: ${m} (${(b.length/1024).toFixed(1)} KB, ${y.toFixed(1)}s)`);console.log(`Transcript: ${k}`);console.log(`Segments: ${l.length}`);console.log(`Quota: ${E} used, ${v?.remaining??"?"} remaining`);return{outputPath:m,textPath:k,duration:y,quotaUsed:E,segmentCount:l.length,format:a}}e.exports={narrate:narrate,ApiError:l,_test:{parseScript:parseScript,stripMarkdown:stripMarkdown}}},35:(e,t,o)=>{const n=o(896);const s=o(928);const{PODCAST_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{buildWav:u}=o(56);const{startSpinner:p}=o(339);function parseDialogueText(e){const t=e.split("\n").filter((e=>e.trim()));const o=[];const n=/^([^::]+)[::]\s*(.+)$/;for(const e of t){const t=e.trim();if(!t)continue;const s=t.match(n);if(s){const e=s[1].trim();const t=s[2].trim();if(t){o.push({speaker:e,text:t})}}else if(t.length>0){o.push({speaker:"旁白",text:t})}}return o}async function generateDialogue(e,t,o){const n=p("\n[1/3] 生成对话文本...");let s,r;try{({status:s,data:r}=await 
a(`${e}/api/llm/generate-dialogue`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{prompt:o.topic,style:o.style,length:o.length,dialogueMode:true,autoSpeakerNames:true,exchanges:o.exchanges}))}catch(t){n.stop("FAIL");c(t,e)}if(s!==200||r.code!=="success"){n.stop("FAIL");i(s,r,"Dialogue generation")}const l=r.text;const u=r.voiceMapping||{};const d=r.quota;n.stop("OK");const f=parseDialogueText(l);const g=[...new Set(f.map((e=>e.speaker)))];console.log(` ${l.length} 字, ${f.length} 段, ${g.length} 位说话者`);console.log(` 说话者: ${g.join(", ")}`);console.log(` 配额剩余: ${d?.remaining??"?"}`);return{text:l,segments:f,voiceMapping:u,speakers:g,quota:d}}async function synthesizeSegment(e,t,o,n,s,r,l,u){process.stdout.write(` TTS [${r+1}/${l}] ${u}...`);let p,d;try{({status:p,data:d}=await a(`${e}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{text:o,voiceId:n,speed:s,volume:1}))}catch(t){console.log(" FAIL");c(t,e)}if(p!==200||d.code!=="success"){console.log(" FAIL");i(p,d,`TTS segment ${r+1}`)}const f=Buffer.from(d.audio,"base64");console.log(` OK (${(f.length/1024).toFixed(0)} KB)`);return{pcm:f,quota:d.quota}}async function synthesizeAll(e,t,o,n,s){console.log(`\n[2/3] 合成 TTS 音频 (${o.length} 段, 多角色)...`);const a=[];let i=null;for(let c=0;c<o.length;c++){const l=o[c];const u=n[l.speaker]||{};const p=u.voiceId||r.defaultVoice||"v-female-R2s4N9qJ";const d=u.speed||s;const f=await synthesizeSegment(e,t,l.text,p,d,c,o.length,l.speaker);a.push(f.pcm);i=f.quota}return{pcmBuffers:a,quota:i}}async function podcast(e){const sigintHandler=()=>{console.log("\n\nGeneration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _podcast(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _podcast(e){const t=e.style||r.style;const o=e.length||r.length;const a=e.exchanges||r.exchanges;const i=e.speed??r.speed;const c=e.silence??r.silence;const 
l=e.api;const p=e.token;const d=e.topic||"科技领域的最新趋势";let f=e.output;if(!f){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);f=s.resolve(`podcast-${e}.wav`)}console.log("\n=== VoxFlow Podcast Generator ===");console.log(`Topic: ${d}`);console.log(`Style: ${t}`);console.log(`Length: ${o}`);console.log(`Exchanges: ${a}`);console.log(`Speed: ${i}`);console.log(`API: ${l}`);console.log(`Output: ${f}`);const{text:g,segments:m,voiceMapping:h,speakers:w}=await generateDialogue(l,p,{topic:d,style:t,length:o,exchanges:a});if(m.length===0){throw new Error("No dialogue segments found in generated text")}console.log("\n Voice assignments:");for(const e of w){const t=h[e];if(t){console.log(` ${e} → ${t.voiceId}`)}else{console.log(` ${e} → (default)`)}}const{pcmBuffers:x,quota:v}=await synthesizeAll(l,p,m,h,i);console.log("\n[3/3] 拼接音频...");const{wav:S,duration:$}=u(x,c);const y=s.dirname(f);n.mkdirSync(y,{recursive:true});n.writeFileSync(f,S);const b=f.replace(/\.wav$/,".txt");const T=m.map(((e,t)=>`[${t+1}] ${e.speaker}:${e.text}`)).join("\n\n");n.writeFileSync(b,T,"utf8");const k=2+m.length;console.log(`\n=== Done ===`);console.log(`Output: ${f} (${(S.length/1024).toFixed(1)} KB, ${$.toFixed(1)}s)`);console.log(`Script: ${b}`);console.log(`Quota: ${k} used, ${v?.remaining??"?"} remaining`);return{outputPath:f,textPath:b,duration:$,quotaUsed:k}}e.exports={podcast:podcast,ApiError:l,_test:{parseDialogueText:parseDialogueText}}},214:(e,t,o)=>{const n=o(896);const s=o(928);const{STORY_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{parseParagraphs:u,buildWav:p,createSilence:d}=o(56);const{startSpinner:f}=o(339);async function generateStory(e,t,o){const n=f("\n[1/3] 生成故事文本...");let s,r;try{({status:s,data:r}=await a(`${e}/api/llm/chat`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer 
${t}`}},{messages:[{role:"user",content:o}],model:"gpt-4o-mini",stream:false,temperature:.8}))}catch(t){n.stop("FAIL");c(t,e)}if(s!==200||r.code!=="success"){n.stop("FAIL");i(s,r,"LLM")}const l=r.content;const u=r.quota;n.stop("OK");console.log(` ${l.length} 字. 配额剩余: ${u?.remaining??"?"}`);return{story:l,quota:u}}async function synthesizeParagraph(e,t,o,n,s,r,l){process.stdout.write(` TTS [${r+1}/${l}]...`);let u,p;try{({status:u,data:p}=await a(`${e}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{text:o,voiceId:n,speed:s,volume:1}))}catch(t){console.log(" FAIL");c(t,e)}if(u!==200||p.code!=="success"){console.log(" FAIL");i(u,p,`TTS paragraph ${r+1}`)}const d=Buffer.from(p.audio,"base64");console.log(` OK (${(d.length/1024).toFixed(0)} KB)`);return{pcm:d,quota:p.quota}}async function synthesizeAll(e,t,o,n,s){console.log(`\n[2/3] 合成 TTS 音频 (${o.length} 段)...`);const r=[];let a=null;for(let i=0;i<o.length;i++){const c=await synthesizeParagraph(e,t,o[i],n,s,i,o.length);r.push(c.pcm);a=c.quota}return{pcmBuffers:r,quota:a}}async function story(e){const sigintHandler=()=>{console.log("\n\nGeneration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _story(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _story(e){const t=e.voice||r.voice;const o=e.paragraphs||r.paragraphs;const a=e.speed??r.speed;const i=e.silence??r.silence;const c=e.api;const l=e.token;const d=e.topic||`请写一个适合5岁儿童的短故事,要求:\n1. 分${o}段,每段2-3句话\n2. 每段描述一个清晰的画面场景\n3. 语言简单易懂,充满童趣\n4. 段落之间用空行分隔\n5. 
不要添加段落编号,直接输出故事内容`;let f=e.output;if(!f){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);f=s.resolve(`story-${e}.wav`)}console.log("\n=== VoxFlow Story Generator ===");console.log(`Voice: ${t}`);console.log(`API: ${c}`);console.log(`Paragraphs: ${o}`);console.log(`Speed: ${a}`);console.log(`Output: ${f}`);const{story:g}=await generateStory(c,l,d);const m=u(g);if(m.length===0){throw new Error("No paragraphs found in generated story")}console.log(` ${m.length} 段`);const{pcmBuffers:h,quota:w}=await synthesizeAll(c,l,m,t,a);console.log("\n[3/3] 拼接音频...");const{wav:x,duration:v}=p(h,i);const S=s.dirname(f);n.mkdirSync(S,{recursive:true});n.writeFileSync(f,x);const $=f.replace(/\.wav$/,".txt");const y=m.map(((e,t)=>`[${t+1}] ${e}`)).join("\n\n");n.writeFileSync($,y,"utf8");const b=1+m.length;console.log(`\n=== Done ===`);console.log(`Output: ${f} (${(x.length/1024).toFixed(1)} KB, ${v.toFixed(1)}s)`);console.log(`Story: ${$}`);console.log(`Quota: ${b} used, ${w?.remaining??"?"} remaining`);return{outputPath:f,textPath:$,duration:v,quotaUsed:b}}e.exports={story:story,ApiError:l,_test:{parseParagraphs:u,buildWav:p,createSilence:d}}},383:(e,t,o)=>{const n=o(896);const s=o(928);const{SYNTHESIZE_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{buildWav:u,getFileExtension:p}=o(56);const{startSpinner:d}=o(339);async function synthesize(e){const sigintHandler=()=>{console.log("\n\nSynthesis cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _synthesize(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _synthesize(e){const t=e.text;if(!t||t.trim().length===0){throw new Error('No text provided. 
Usage: voxflow synthesize "your text here"')}const o=e.voice||r.voice;const l=e.speed??r.speed;const f=e.volume??r.volume;const g=e.pitch??r.pitch;const m=e.format||"pcm";const h=e.api;const w=e.token;const x=p(m);let v=e.output;if(!v){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);v=s.resolve(`tts-${e}${x}`)}console.log("\n=== VoxFlow Synthesize ===");console.log(`Voice: ${o}`);console.log(`Format: ${m==="pcm"?"wav (pcm)":m}`);console.log(`Speed: ${l}`);if(f!==1)console.log(`Volume: ${f}`);if(g!==0)console.log(`Pitch: ${g}`);console.log(`Text: ${t.length>60?t.slice(0,57)+"...":t}`);console.log(`Output: ${v}`);const S=d("\n[1/1] 合成 TTS 音频...");let $,y;try{({status:$,data:y}=await a(`${h}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${w}`}},{text:t.trim(),voiceId:o,format:m,speed:l,volume:f,pitch:g}))}catch(e){S.stop("FAIL");c(e,h)}if($!==200||y.code!=="success"){S.stop("FAIL");i($,y,"TTS")}const b=Buffer.from(y.audio,"base64");S.stop("OK");let T,k;if(m==="mp3"){T=b;k=b.length/4e3;console.log(` ${(b.length/1024).toFixed(0)} KB MP3`)}else if(m==="wav"){T=b;const e=b.length>44?b.readUInt32LE(28):48e3;const t=b.length>44?b.readUInt32LE(40):b.length;k=t/e;console.log(` ${(b.length/1024).toFixed(0)} KB WAV`)}else{const e=u([b],0);T=e.wav;k=e.duration;console.log(` ${(b.length/1024).toFixed(0)} KB PCM → WAV`)}const F=s.dirname(v);n.mkdirSync(F,{recursive:true});n.writeFileSync(v,T);const E=1;console.log(`\n=== Done ===`);console.log(`Output: ${v} (${(T.length/1024).toFixed(1)} KB, ${k.toFixed(1)}s)`);console.log(`Quota: ${E} used, ${y.quota?.remaining??"?"} remaining`);return{outputPath:v,duration:k,quotaUsed:E,format:m}}e.exports={synthesize:synthesize,ApiError:l}},585:(e,t,o)=>{const n=o(896);const s=o(928);const{API_BASE:r,TRANSLATE_DEFAULTS:a}=o(782);const{chatCompletion:i,detectLanguage:c}=o(133);const{parseSrt:l,formatSrt:u}=o(813);const p={zh:"Chinese 
(Simplified)",en:"English",ja:"Japanese",ko:"Korean",fr:"French",de:"German",es:"Spanish",pt:"Portuguese",ru:"Russian",ar:"Arabic",th:"Thai",vi:"Vietnamese",it:"Italian"};function batchCaptions(e,t=10){const o=[];for(let n=0;n<e.length;n+=t){o.push(e.slice(n,n+t))}return o}function buildTranslationPrompt(e,t,o){const n=[`You are a professional subtitle translator. Translate each numbered line from ${t} to ${o}.`,"","Rules:","- Return ONLY the translated lines, one per number","- Keep the exact same numbering (1., 2., 3., ...)","- Preserve [Speaker: xxx] tags unchanged — do NOT translate speaker names","- Keep translations concise and natural for subtitles","- Do not add explanations, notes, or extra text"].join("\n");const s=e.map(((e,t)=>{const o=e.speakerId?`[Speaker: ${e.speakerId}] `:"";return`${t+1}. ${o}${e.text}`})).join("\n");return{system:n,user:s}}function parseTranslationResponse(e,t){const o=e.trim().split("\n").filter((e=>e.trim()));const n=[];for(let e=0;e<t.length;e++){const s=new RegExp(`^${e+1}\\.\\s*(.+)$`);const r=o.find((e=>s.test(e.trim())));if(r){const o=r.trim().replace(s,"$1").trim();let a=o;const i=o.match(/^\[Speaker:\s*[^\]]+\]\s*/i);if(i){a=o.slice(i[0].length)}n.push({...t[e],text:a||t[e].text})}else{if(e<o.length){const s=o[e].replace(/^\d+\.\s*/,"").trim();let r=s;const a=s.match(/^\[Speaker:\s*[^\]]+\]\s*/i);if(a){r=s.slice(a[0].length)}n.push({...t[e],text:r||t[e].text})}else{n.push({...t[e]})}}}return n}function realignTimings(e,t){const o=.3;const n=100;const s=t.map(((t,s)=>{const r=e[s];if(!r)return t;const a=r.text.length;const i=t.text.length;if(a===0)return t;const c=i/a;if(c<1+o&&c>1-o){return t}const l=r.endMs-r.startMs;let u=Math.round(l*c);const p=s<e.length-1?e[s+1].startMs:Infinity;const d=p-t.startMs-n;if(u>d&&d>0){u=d}u=Math.max(u,500);return{...t,endMs:t.startMs+u}}));return s}async function translate(e){const sigintHandler=()=>{console.log("\n\nTranslation 
cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _translate(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _translate(e){const{token:t,api:o=r,srt:n,text:s,input:i,from:c,to:l,output:u,realign:p=false,batchSize:d=a.batchSize}=e;if(n)return _translateSrt({token:t,api:o,srt:n,from:c,to:l,output:u,realign:p,batchSize:d});if(s)return _translateText({token:t,api:o,text:s,from:c,to:l});if(i)return _translateFile({token:t,api:o,input:i,from:c,to:l,output:u});throw new Error("No input specified. Use --srt, --text, or --input")}async function _translateSrt({token:e,api:t,srt:r,from:c,to:d,output:f,realign:g,batchSize:m}){console.log("\n=== VoxFlow Translate (SRT) ===");const h=s.resolve(r);const w=n.readFileSync(h,"utf8");const x=l(w);if(x.length===0){throw new Error(`SRT file is empty or invalid: ${h}`)}console.log(`Input: ${s.basename(h)}`);console.log(`Captions: ${x.length}`);const v=c||await autoDetectLanguage(t,x);const S=p[v]||v;const $=p[d]||d;console.log(`From: ${S} (${v})`);console.log(`To: ${$} (${d})`);console.log(`Realign: ${g?"yes":"no"}`);const y=batchCaptions(x,m);console.log(`Batches: ${y.length} (batch size: ${m})`);console.log("");let b=[];let T=0;for(let o=0;o<y.length;o++){const n=y[o];process.stdout.write(` [${o+1}/${y.length}] Translating ${n.length} captions...`);const{system:s,user:r}=buildTranslationPrompt(n,S,$);const c=await i({apiBase:t,token:e,messages:[{role:"system",content:s},{role:"user",content:r}],temperature:a.temperature,maxTokens:a.maxTokens});const l=parseTranslationResponse(c.content,n);b=b.concat(l);T++;if(c.quota){console.log(` OK (remaining: ${c.quota.remaining})`)}else{console.log(" OK")}}if(g){console.log(" Re-aligning subtitle timing...");b=realignTimings(x,b)}b=b.map(((e,t)=>({...e,id:t+1})));const k=u(b);let F;if(f){F=s.resolve(f)}else{const e=s.basename(h,s.extname(h));const 
t=s.dirname(h);F=o.ab+"cli/"+t+"/"+e+"-"+d+".srt"}n.writeFileSync(F,k,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${F}`);console.log(`Captions: ${b.length}`);console.log(`Quota: ${T} used`);if(b.length>0){console.log(`\n--- Preview ---`);const e=b.slice(0,3);for(const t of e){const e=t.speakerId?`[${t.speakerId}] `:"";const o=t.text.length>60?t.text.slice(0,57)+"...":t.text;console.log(` ${t.id}. ${e}${o}`)}if(b.length>3){console.log(` ... (${b.length-3} more)`)}}return{outputPath:F,captionCount:b.length,quotaUsed:T,from:v,to:d}}async function _translateText({token:e,api:t,text:o,from:n,to:s}){console.log("\n=== VoxFlow Translate (Text) ===");const r=n||await autoDetectLanguage(t,[{text:o}]);const c=p[r]||r;const l=p[s]||s;console.log(`From: ${c} → To: ${l}`);const u=await i({apiBase:t,token:e,messages:[{role:"system",content:`You are a professional translator. Translate the following text from ${c} to ${l}. Return ONLY the translation, no explanations.`},{role:"user",content:o}],temperature:a.temperature,maxTokens:a.maxTokens});const d=u.content.trim();console.log(`\n${d}`);const f=u.quota?u.quota.remaining:"?";console.log(`\n(Quota: 1 used, ${f} remaining)`);return{text:d,quotaUsed:1,from:r,to:s}}async function _translateFile({token:e,api:t,input:r,from:c,to:l,output:u}){console.log("\n=== VoxFlow Translate (File) ===");const d=s.resolve(r);const f=n.readFileSync(d,"utf8");if(f.trim().length===0){throw new Error(`Input file is empty: ${d}`)}console.log(`Input: ${s.basename(d)}`);console.log(`Length: ${f.length} chars`);const g=c||await autoDetectLanguage(t,[{text:f}]);const m=p[g]||g;const h=p[l]||l;console.log(`From: ${m} → To: ${h}`);const w=await i({apiBase:t,token:e,messages:[{role:"system",content:`You are a professional translator. Translate the following document from ${m} to ${h}. Preserve the original formatting (paragraphs, line breaks, markdown). 
Return ONLY the translation.`},{role:"user",content:f}],temperature:a.temperature,maxTokens:Math.max(a.maxTokens,4e3)});const x=w.content.trim();let v;if(u){v=s.resolve(u)}else{const e=s.extname(d);const t=s.basename(d,e);const n=s.dirname(d);v=o.ab+"cli/"+n+"/"+t+"-"+l+""+e}n.writeFileSync(v,x+"\n","utf8");const S=w.quota?w.quota.remaining:"?";console.log(`\n=== Done ===`);console.log(`Output: ${v}`);console.log(`Quota: 1 used, ${S} remaining`);return{outputPath:v,quotaUsed:1,from:g,to:l}}async function autoDetectLanguage(e,t){const o=t.slice(0,3).map((e=>e.text)).join(" ");const n=await c({apiBase:e,text:o});return n||"auto"}e.exports={translate:translate,LANG_MAP:p,_test:{buildTranslationPrompt:buildTranslationPrompt,parseTranslationResponse:parseTranslationResponse,realignTimings:realignTimings,batchCaptions:batchCaptions}}},863:(e,t,o)=>{const n=o(896);const s=o(928);const r=o(857);const{checkFfmpeg:a,extractAudio:i}=o(297);const{asr:c}=o(929);const{translate:l}=o(585);const{dub:u}=o(944);const{detectLanguage:p}=o(133);const{parseSrt:d}=o(813);const{API_BASE:f,VIDEO_TRANSLATE_DEFAULTS:g}=o(782);const m={zh:"16k_zh",en:"16k_en",ja:"16k_ja",ko:"16k_ko","zh-en":"16k_zh_en"};function resolveAsrLang(e,t){if(t)return t;if(e&&m[e])return m[e];return"16k_zh"}async function videoTranslate(e){const sigintHandler=()=>{console.log("\n\nVideo translation cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _videoTranslate(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _videoTranslate(e){const{token:t,api:m=f,input:h,from:w,to:x,voice:v,voicesMap:S,realign:$=false,output:y,keepIntermediates:b=false,batchSize:T=g.batchSize,speed:k=g.speed,asrMode:F,asrLang:E}=e;const _=s.resolve(h);const A=s.basename(_,s.extname(_));console.log("\n=== VoxFlow Video Translate ===");console.log(`Input: ${s.basename(_)}`);console.log(`Target: ${x}`);console.log("");const I=n.mkdtempSync(s.join(r.tmpdir(),"voxflow-vtranslate-"));let 
M=0;const L={};try{process.stdout.write("[1/4] Checking FFmpeg... ");const e=await a();if(!e.available){throw new Error("FFmpeg is required for video-translate. Install: https://ffmpeg.org/download.html")}console.log(`OK (${e.version})`);process.stdout.write("[2/4] Transcribing audio... ");const r=s.join(I,"extracted-audio.wav");await i(_,r);const f=s.join(I,"source.srt");const g=resolveAsrLang(w,E);const h={token:t,api:m,input:r,format:"srt",output:f,lang:g};if(F)h.mode=F;const C=await c(h);if(C.captionCount===0){throw new Error("ASR produced no captions. The video may have no audible speech.")}L.asr={mode:C.mode,duration:C.duration,captionCount:C.captionCount,quotaUsed:C.quotaUsed};M+=C.quotaUsed;console.log(`${C.captionCount} captions (${C.mode} mode)`);let N=w;if(!N){const e=n.readFileSync(f,"utf8");const t=d(e);const o=t.slice(0,3).map((e=>e.text)).join(" ");N=await p({apiBase:m,text:o})||"auto"}process.stdout.write(`[3/4] Translating (${N} → ${x})... `);const P=s.join(I,`translated-${x}.srt`);const O=await l({token:t,api:m,srt:f,from:N,to:x,output:P,realign:$,batchSize:T});L.translate={from:O.from,to:O.to,captionCount:O.captionCount,quotaUsed:O.quotaUsed};M+=O.quotaUsed;console.log(`${O.captionCount} captions translated`);process.stdout.write("[4/4] Dubbing and merging video... 
");const D=y?s.resolve(y):o.ab+"cli/"+s.dirname(_)+"/"+A+"-"+x+".mp4";const q=await u({token:t,api:m,srt:P,voice:v,voicesMap:S,speed:k,video:_,output:D});L.dub={segmentCount:q.segmentCount,duration:q.duration,quotaUsed:q.quotaUsed,warnings:q.warnings};M+=q.quotaUsed;console.log(`${q.segmentCount} segments dubbed`);if(b){const e=s.resolve(s.dirname(D),`${A}-${x}-intermediates`);n.mkdirSync(e,{recursive:true});const t=[["extracted-audio.wav",r],["source.srt",f],[`translated-${x}.srt`,P]];for(const[o,r]of t){if(n.existsSync(r)){n.copyFileSync(r,s.join(e,o))}}console.log(`\nIntermediates saved: ${e}`)}console.log("\n=== Done ===");console.log(`Output: ${D}`);console.log(`Language: ${N} → ${x}`);console.log(`Captions: ${O.captionCount}`);console.log(`Duration: ${q.duration.toFixed(1)}s`);console.log(`Quota: ${M} used`);if(L.dub.warnings&&L.dub.warnings.length>0){console.log(`\nWarnings:`);for(const e of L.dub.warnings){console.log(` - ${e}`)}}return{outputPath:D,from:N,to:x,captionCount:O.captionCount,quotaUsed:M,stages:L}}finally{if(!b){try{n.rmSync(I,{recursive:true,force:true})}catch{}}}}e.exports={videoTranslate:videoTranslate}},784:(e,t,o)=>{const{request:n,throwNetworkError:s}=o(852);async function voices(e){const t=e.api;const o=e.extended?"true":"false";let r,a;try{({status:r,data:a}=await n(`${t}/api/tts/voices?includeExtended=${o}`,{method:"GET"}))}catch(e){s(e,t)}if(r!==200){throw new Error(`Failed to fetch voices (${r}): ${a?.message||"unknown error"}`)}let i=a.voices||a.data?.voices||[];if(e.gender){const t=normalizeGender(e.gender);if(!t){console.error(`Error: --gender must be one of: male, m, female, f (got: "${e.gender}")`);process.exit(1)}i=i.filter((e=>{const o=(e.gender||"").toLowerCase();return o===t}))}if(e.language){const t=e.language.toLowerCase();i=i.filter((e=>(e.language||"").toLowerCase()===t))}if(e.search){const t=e.search.toLowerCase();i=i.filter((e=>{const o=[e.name,e.nameEn,e.tone,e.style,e.description,e.scenarios].filter(Boolean).join(" 
").toLowerCase();return o.includes(t)}))}if(i.length===0){console.log("No voices match your criteria.");return}if(e.json){console.log(JSON.stringify(i,null,2))}else{printTable(i)}console.log(`\nFound ${i.length} voice${i.length===1?"":"s"}.`)}function normalizeGender(e){const t=(e||"").toLowerCase().trim();if(t==="male"||t==="m")return"male";if(t==="female"||t==="f")return"female";return null}function printTable(e){const t=24;const o=14;const n=8;const s=22;const r=20;const a=["ID".padEnd(t),"Name".padEnd(o),"Gender".padEnd(n),"Tone".padEnd(s),"Style".padEnd(r)].join(" ");console.log(`\n${a}`);console.log("-".repeat(a.length));for(const a of e){const e=[truncate(a.id||"",t).padEnd(t),truncate(a.name||"",o).padEnd(o),truncate(a.gender||"",n).padEnd(n),truncate(a.tone||"",s).padEnd(s),truncate(a.style||"",r).padEnd(r)].join(" ");console.log(e)}}function truncate(e,t){if(e.length<=t)return e;return e.slice(0,t-1)+"…"}e.exports={voices:voices}},514:(e,t,o)=>{const n=o(896);const{request:s,throwApiError:r,throwNetworkError:a,ApiError:i}=o(852);const c=6e4;const l=72e5;const u=5*1024*1024;const p=3e3;const d=3e5;const f={WAITING:0,PROCESSING:1,SUCCESS:2,FAILED:3};function detectMode(e,t,o){if(e<=c&&o<=u){return"sentence"}if(e<=l&&t){return"flash"}return"file"}function authHeaders(e){return{"Content-Type":"application/json",Authorization:`Bearer ${e}`}}async function recognizeSentence(e){const{apiBase:t,token:o,url:c,filePath:l,lang:u="16k_zh",wordInfo:p=false}=e;const d={EngSerViceType:u,VoiceFormat:"wav",SubServiceType:2,WordInfo:p?1:0,ConvertNumMode:1};if(c){d.Url=c;d.SourceType=0}else if(l){const e=n.readFileSync(l);d.Data=e.toString("base64");d.DataLen=e.length;d.SourceType=1}else{throw new Error("Either url or filePath is required for sentence recognition")}try{const{status:e,data:n}=await s(`${t}/api/asr/sentence`,{method:"POST",headers:authHeaders(o)},d);if(e!==200||n.code!=="success"){r(e,n,"ASR 
sentence")}return{result:n.result,audioTime:n.audioTime,wordList:n.wordList||[],requestId:n.requestId,quota:n.quota}}catch(e){if(e instanceof i)throw e;a(e,t)}}async function recognizeFlash(e){const{apiBase:t,token:o,url:n,lang:c="16k_zh",speakerDiarization:l=false,speakerNumber:u=0}=e;if(!n){throw new Error("Flash recognition requires a URL (cannot use base64 data)")}const p={engine_type:c,voice_format:"wav",url:n,speaker_diarization:l?1:0,speaker_number:u,filter_dirty:0,filter_modal:0,filter_punc:0,convert_num_mode:1,word_info:1,first_channel_only:1};try{const{status:e,data:n}=await s(`${t}/api/asr/flash`,{method:"POST",headers:authHeaders(o)},p);if(e!==200||n.code!=="success"){r(e,n,"ASR flash")}return{flashResult:n.flash_result||[],audioDuration:n.audio_duration||0,requestId:n.request_id,quota:n.quota}}catch(e){if(e instanceof i)throw e;a(e,t)}}async function submitFileTask(e){const{apiBase:t,token:o,url:c,filePath:l,lang:p="16k_zh",speakerDiarization:d=false,speakerNumber:f=0}=e;const g={EngineModelType:p,ChannelNum:1,ResTextFormat:0,FilterDirty:0,FilterModal:0,FilterPunc:0,ConvertNumMode:1,SpeakerDiarization:d?1:0,SpeakerNumber:f};if(c){g.Url=c;g.SourceType=0}else if(l){const e=n.readFileSync(l);if(e.length>u){throw new Error(`File too large for base64 upload (${(e.length/1024/1024).toFixed(1)} MB). 
`+"Upload to COS first or use flash mode with a URL.")}g.Data=e.toString("base64");g.DataLen=e.length;g.SourceType=1}else{throw new Error("Either url or filePath is required for file recognition")}try{const{status:e,data:n}=await s(`${t}/api/asr/file`,{method:"POST",headers:authHeaders(o)},g);if(e!==200||n.code!=="success"){r(e,n,"ASR file submit")}return{taskId:n.taskId,requestId:n.requestId,quota:n.quota}}catch(e){if(e instanceof i)throw e;a(e,t)}}async function pollTaskResult(e){const{apiBase:t,token:o,taskId:n,pollIntervalMs:c=p,pollTimeoutMs:l=d,onProgress:u}=e;const g=Date.now();while(true){const e=Date.now()-g;if(e>l){throw new Error(`ASR task ${n} timed out after ${Math.round(e/1e3)}s. `+"The task may still complete — check later with: voxflow asr --task-id "+n)}try{const{status:a,data:i}=await s(`${t}/api/asr/result/${n}`,{method:"GET",headers:authHeaders(o)});if(a!==200||i.code!=="success"){r(a,i,"ASR poll")}const c=i.data;const l=c.Status;if(u)u(l,e);if(l===f.SUCCESS){return{result:c.Result,audioTime:c.AudioTime,status:l}}if(l===f.FAILED){throw new Error(`ASR task ${n} failed: ${c.Result||"Unknown error"}`)}}catch(o){if(o instanceof i)throw o;if(e+c<l){}else{a(o,t)}}await sleep(c)}}async function recognize(e){const{mode:t="auto",url:o,filePath:n,durationMs:s,fileSize:r=0}=e;const a=!!o;const i=t==="auto"?detectMode(s,a,r):t;switch(i){case"sentence":{const t=await recognizeSentence(e);return{mode:"sentence",result:t.result,audioTime:t.audioTime,wordList:t.wordList,quota:t.quota}}case"flash":{if(!o){throw new Error("Flash mode requires a URL. 
Upload the file to COS first, or use --mode auto.")}const t=await recognizeFlash(e);const n=(t.flashResult||[]).flatMap((e=>e.sentence_list?e.sentence_list.map((e=>e.text)):[e.text])).join("");return{mode:"flash",result:n,flashResult:t.flashResult,audioDuration:t.audioDuration,audioTime:(t.audioDuration||0)/1e3,quota:t.quota}}case"file":{const t=await submitFileTask(e);const o=await pollTaskResult({apiBase:e.apiBase,token:e.token,taskId:t.taskId,onProgress:e.onProgress});return{mode:"file",result:o.result,audioTime:o.audioTime,taskId:t.taskId,quota:t.quota}}default:throw new Error(`Unknown ASR mode: ${i}. Use: auto, sentence, flash, or file`)}}function sleep(e){return new Promise((t=>setTimeout(t,e)))}e.exports={recognize:recognize,recognizeSentence:recognizeSentence,recognizeFlash:recognizeFlash,submitFileTask:submitFileTask,pollTaskResult:pollTaskResult,detectMode:detectMode,SENTENCE_MAX_MS:c,FLASH_MAX_MS:l,BASE64_MAX_BYTES:u,TASK_STATUS:f}},388:(e,t,o)=>{const{execFile:n}=o(317);const s=o(928);const r=o(857);const a=o(896);function runCommand(e,t,o){return new Promise(((s,r)=>{n(e,t,{timeout:6e5,...o},((e,t,o)=>{if(e){e.stderr=o;e.stdout=t;r(e)}else{s({stdout:t,stderr:o})}}))}))}async function getMediaInfo(e){const t=s.resolve(e);if(!a.existsSync(t)){throw new Error(`File not found: ${t}`)}try{const{stdout:e}=await runCommand("ffprobe",["-v","error","-show_entries","format=duration","-show_entries","stream=codec_type,codec_name,sample_rate,channels","-of","json",t]);const o=JSON.parse(e);const n=o.streams||[];const s=o.format||{};const r=n.find((e=>e.codec_type==="audio"));const a=n.find((e=>e.codec_type==="video"));const i=parseFloat(s.duration);const c=isNaN(i)?0:Math.round(i*1e3);return{durationMs:c,hasVideo:!!a,hasAudio:!!r,audioCodec:r?r.codec_name:null,sampleRate:r?parseInt(r.sample_rate,10):null,channels:r?parseInt(r.channels,10):null}}catch(t){if(t.code==="ENOENT"){throw new Error("ffprobe not found. 
Please install ffmpeg:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}throw new Error(`Failed to probe media file ${e}: ${t.message}`)}}async function extractAudioForAsr(e,t={}){const o=s.resolve(e);if(!a.existsSync(o)){throw new Error(`File not found: ${o}`)}const n=t.outputDir||r.tmpdir();const i=s.basename(o,s.extname(o));const c=s.join(n,`asr-${i}-${Date.now()}.wav`);try{await runCommand("ffmpeg",["-i",o,"-vn","-acodec","pcm_s16le","-ar","16000","-ac","1","-y",c])}catch(t){if(t.code==="ENOENT"){throw new Error("ffmpeg not found. Please install ffmpeg:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}throw new Error(`Failed to extract audio from ${e}: ${t.stderr||t.message}`)}const l=a.statSync(c);const u=Math.round((l.size-44)/32);return{wavPath:c,durationMs:u,needsCleanup:true}}async function needsConversion(e){try{const t=await getMediaInfo(e);if(t.hasVideo)return true;if(t.audioCodec!=="pcm_s16le")return true;if(t.sampleRate!==16e3)return true;if(t.channels!==1)return true;return false}catch{return true}}e.exports={getMediaInfo:getMediaInfo,extractAudioForAsr:extractAudioForAsr,needsConversion:needsConversion}},56:e=>{function parseParagraphs(e){const t=e.split(/\n\s*\n/).map((e=>e.replace(/^\d+[.、)\]]\s*/,"").trim())).filter((e=>e.length>0));return t}function createSilence(e,t){const o=Math.floor(t*e);return Buffer.alloc(o*2,0)}function buildWav(e,t){const o=24e3;const n=16;const s=1;const r=n/8;const a=s*r;const i=o*a;const c=createSilence(t,o);let l=0;for(let t=0;t<e.length;t++){l+=e[t].length;if(t<e.length-1){l+=c.length}}const u=Buffer.alloc(44);u.write("RIFF",0);u.writeUInt32LE(36+l,4);u.write("WAVE",8);u.write("fmt 
",12);u.writeUInt32LE(16,16);u.writeUInt16LE(1,20);u.writeUInt16LE(s,22);u.writeUInt32LE(o,24);u.writeUInt32LE(i,28);u.writeUInt16LE(a,32);u.writeUInt16LE(n,34);u.write("data",36);u.writeUInt32LE(l,40);const p=[u];for(let t=0;t<e.length;t++){p.push(e[t]);if(t<e.length-1){p.push(c)}}return{wav:Buffer.concat(p),duration:l/i}}function getFileExtension(e){switch(e){case"mp3":return".mp3";case"wav":return".wav";case"pcm":default:return".wav"}}function concatAudioBuffers(e,t,o){if(t==="mp3"){const t=Buffer.concat(e);const o=t.length/4e3;return{audio:t,duration:o}}if(t==="wav"){const t=e.map(extractPcmFromWav);return buildWav(t,o)}return buildWav(e,o)}function extractPcmFromWav(e){const t=Buffer.from("data");let o=12;while(o<e.length-8){if(e.subarray(o,o+4).equals(t)){const t=e.readUInt32LE(o+4);return e.subarray(o+8,o+8+t)}const n=e.readUInt32LE(o+4);o+=8+n}return e.subarray(44)}e.exports={parseParagraphs:parseParagraphs,createSilence:createSilence,buildWav:buildWav,concatAudioBuffers:concatAudioBuffers,getFileExtension:getFileExtension}},986:(e,t,o)=>{const n=o(611);const s=o(896);const r=o(928);const a=o(982);const i=o(785);const{TOKEN_PATH:c,getConfigDir:l,LOGIN_PAGE:u,AUTH_TIMEOUT_MS:p,API_BASE:d}=o(782);function readCachedToken(){try{const e=s.readFileSync(c,"utf8");const t=JSON.parse(e);if(!t.access_token)return null;const o=decodeJwtPayload(t.access_token);if(!o||!o.exp)return null;const n=Math.floor(Date.now()/1e3);if(o.exp-n<300)return null;return t}catch{return null}}function writeCachedToken(e){const t=l();s.mkdirSync(t,{recursive:true,mode:448});s.writeFileSync(c,JSON.stringify(e,null,2),{encoding:"utf8",mode:384})}function clearToken(){try{s.unlinkSync(c)}catch{}}function decodeJwtPayload(e){try{const t=e.split(".");if(t.length!==3)return null;const o=t[1].replace(/-/g,"+").replace(/_/g,"/");return JSON.parse(Buffer.from(o,"base64").toString("utf8"))}catch{return null}}async function getToken({api:e,force:t}={}){if(!t){const t=readCachedToken();if(t){const 
o=!e||e===t.api;if(o)return t.access_token}}return browserLogin(e||d)}function getTokenInfo(){const e=readCachedToken();if(!e)return null;const t=decodeJwtPayload(e.access_token);if(!t)return null;const o=Math.floor(Date.now()/1e3);return{email:t.email||e.email||"(unknown)",expiresAt:new Date(t.exp*1e3).toISOString(),remaining:t.exp-o,valid:t.exp-o>300,api:e.api||d}}function browserLogin(e){return new Promise(((t,s)=>{const r=a.randomBytes(16).toString("hex");let c=false;let l=null;function settle(o){if(c)return;c=true;const n=decodeJwtPayload(o);writeCachedToken({access_token:o,expires_at:n?.exp||0,email:n?.email||"",api:e,cached_at:(new Date).toISOString()});if(l){l.close();l=null}d.close();t(o)}const d=n.createServer(((e,t)=>{const o=new URL(e.url,`http://127.0.0.1`);if(o.pathname!=="/callback"){t.writeHead(404,{"Content-Type":"text/plain"});t.end("Not Found");return}const n=o.searchParams.get("token");const s=o.searchParams.get("state");if(s!==r){t.writeHead(400,{"Content-Type":"text/html; charset=utf-8"});t.end("<h1>认证失败</h1><p>state 参数不匹配,请重试。</p>");return}if(!n){t.writeHead(400,{"Content-Type":"text/html; charset=utf-8"});t.end("<h1>认证失败</h1><p>未收到 token,请重试。</p>");return}t.writeHead(200,{"Content-Type":"text/html; charset=utf-8"});t.end(`<!DOCTYPE html>\n<html><head><meta charset="utf-8"><title>登录成功</title></head>\n<body style="font-family:system-ui;display:flex;align-items:center;justify-content:center;height:100vh;margin:0;background:#f0fdf4">\n<div style="text-align:center">\n<h1 style="color:#16a34a;font-size:2rem">登录成功</h1>\n<p style="color:#666;margin-top:0.5rem">已授权 voxflow CLI,可以关闭此窗口。</p>\n</div></body></html>`);settle(n)}));d.listen(0,"127.0.0.1",(async()=>{const e=d.address().port;const t=`${u}?state=${r}&callback_port=${e}`;console.log("\n🔐 需要登录。正在打开浏览器...");console.log(` 若未自动打开: ${t}\n`);let n=false;try{const e=(await o.e(935).then(o.bind(o,935))).default;const s=await e(t);if(s&&typeof s.on==="function"){s.on("error",(()=>{n=true;console.log(" 
浏览器打开失败,请手动复制上面的链接到浏览器。\n");startStdinListener()}))}}catch{n=true;console.log(" 浏览器打开失败,请手动复制上面的链接到浏览器。\n");startStdinListener()}function startStdinListener(){if(c||l||!process.stdin.isTTY)return;console.log(" 登录后网页会显示授权码,粘贴到此处回车即可");l=i.createInterface({input:process.stdin,output:process.stdout,terminal:false});process.stdout.write(" > Token: ");l.on("line",(e=>{const t=e.trim();if(!t)return;const o=decodeJwtPayload(t);if(!o){console.log(" 无效的 token,请重新粘贴完整的授权码。");process.stdout.write(" > Token: ");return}const n=Math.floor(Date.now()/1e3);if(o.exp&&o.exp<n){console.log(" token 已过期,请重新登录获取。");process.stdout.write(" > Token: ");return}console.log(`\n✓ 授权成功 (${o.email||"user"})`);settle(t)}))}}));const f=setTimeout((()=>{if(!c){c=true;if(l){l.close();l=null}d.close();s(new Error(`登录超时 (${p/1e3}s)。请重试: voxflow login`))}}),p);d.on("close",(()=>clearTimeout(f)));d.on("error",(e=>{if(!c){c=true;if(l){l.close();l=null}s(new Error(`本地服务器启动失败: ${e.message}`))}}))}))}e.exports={getToken:getToken,clearToken:clearToken,getTokenInfo:getTokenInfo}},782:(e,t,o)=>{const n=o(928);const s=o(857);const r="https://api.voxflow.studio";const a="https://iwkonytsjysszmafqchh.supabase.co";const i="sb_publishable_TEh6H4K9OWXUNfWSeBKXlQ_hg7Zzm6b";const c="voxflow";function getConfigDir(){if(process.platform==="win32"){return n.join(process.env.APPDATA||n.join(s.homedir(),"AppData","Roaming"),c)}const e=process.env.XDG_CONFIG_HOME||n.join(s.homedir(),".config");return n.join(e,c)}const l=n.join(getConfigDir(),"token.json");const u={voice:"v-female-R2s4N9qJ",paragraphs:5,speed:1,silence:.8};const p={template:"interview",exchanges:8,length:"medium",style:"professional",speakers:2,silence:.5,speed:1};const d={voice:"v-female-R2s4N9qJ",speed:1,volume:1,pitch:0};const f={voice:"v-female-R2s4N9qJ",speed:1,silence:.8};const g={voice:"v-female-R2s4N9qJ",speed:1,toleranceMs:50,ducking:.2};const 
m={lang:"16k_zh",mode:"auto",format:"srt",pollIntervalMs:3e3,pollTimeoutMs:3e5,engine:"auto",model:"base"};const h={batchSize:10,temperature:.3,maxTokens:2e3};const w={batchSize:10,speed:1};const x="https://voxflow.studio";const v=`${r}/cli-auth.html`;const S=18e4;e.exports={API_BASE:r,WEB_BASE:x,SUPABASE_URL:a,SUPABASE_ANON_KEY:i,TOKEN_PATH:l,getConfigDir:getConfigDir,DEFAULTS:u,STORY_DEFAULTS:u,PODCAST_DEFAULTS:p,SYNTHESIZE_DEFAULTS:d,NARRATE_DEFAULTS:f,DUB_DEFAULTS:g,ASR_DEFAULTS:m,TRANSLATE_DEFAULTS:h,VIDEO_TRANSLATE_DEFAULTS:w,LOGIN_PAGE:v,AUTH_TIMEOUT_MS:S}},567:(e,t,o)=>{const n=o(896);const s=o(928);const r=o(611);const a=o(692);const{request:i,throwApiError:c,throwNetworkError:l,ApiError:u}=o(852);const p={".wav":"audio/wav",".mp3":"audio/mpeg",".ogg":"audio/ogg",".m4a":"audio/x-m4a",".mp4":"video/mp4",".webm":"video/webm",".mov":"video/quicktime",".avi":"video/x-msvideo",".mkv":"video/x-matroska",".flac":"audio/flac"};function getMimeType(e){const t=s.extname(e).toLowerCase();return p[t]||"application/octet-stream"}async function uploadFileToCos(e,t,o){const r=s.resolve(e);if(!n.existsSync(r)){throw new Error(`File not found: ${r}`)}const a=n.statSync(r);const p=s.basename(r);const d=getMimeType(r);const f=a.size;let g;try{const{status:e,data:n}=await i(`${t}/api/file-upload/get-upload-url`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${o}`}},{filename:p,fileType:d,fileSize:f});if(e!==200||n.code!=="success"){c(e,n,"Get upload URL")}g=n.data}catch(e){if(e instanceof u)throw e;l(e,t)}const{uploadUrl:m,key:h,bucket:w,region:x}=g;await putFile(m,r,d);let v;try{v=await getSignedDownloadUrl(t,o,h)}catch{v=`https://${w}.cos.${x}.myqcloud.com/${h}`}return{cosUrl:v,key:h}}function putFile(e,t,o){return new Promise(((s,i)=>{const c=new URL(e);const l=c.protocol==="https:"?a:r;const u=n.statSync(t).size;const 
p={hostname:c.hostname,port:c.port||(c.protocol==="https:"?443:80),path:c.pathname+c.search,method:"PUT",headers:{"Content-Type":o,"Content-Length":u}};const d=l.request(p,(e=>{const t=[];e.on("data",(e=>t.push(e)));e.on("end",(()=>{if(e.statusCode>=200&&e.statusCode<300){s()}else{const o=Buffer.concat(t).toString("utf8");i(new Error(`COS upload failed (${e.statusCode}): ${o.slice(0,300)}`))}}))}));d.on("error",(e=>i(new Error(`COS upload network error: ${e.message}`))));d.setTimeout(3e5,(()=>{d.destroy();i(new Error("COS upload timeout (5 min)"))}));const f=n.createReadStream(t);f.pipe(d);f.on("error",(e=>{d.destroy();i(new Error(`Failed to read file for upload: ${e.message}`))}))}))}async function getSignedDownloadUrl(e,t,o){const{status:n,data:s}=await i(`${e}/api/file-upload/get-download-url`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{key:o});if(n!==200||s.code!=="success"){throw new Error(`Failed to get download URL: ${s.message||n}`)}return s.data.downloadUrl}e.exports={uploadFileToCos:uploadFileToCos,getSignedDownloadUrl:getSignedDownloadUrl,getMimeType:getMimeType}},297:(e,t,o)=>{const{execFile:n}=o(317);const s=o(928);const r=o(896);function runCommand(e,t,o){return new Promise(((s,r)=>{n(e,t,{timeout:3e5,...o},((e,t,o)=>{if(e){e.stderr=o;e.stdout=t;r(e)}else{s({stdout:t,stderr:o})}}))}))}async function checkFfmpeg(){try{const{stdout:e}=await runCommand("ffmpeg",["-version"]);const t=e.match(/ffmpeg version (\S+)/);const o=t?t[1]:"unknown";let n=false;try{await runCommand("ffprobe",["-version"]);n=true}catch{}return{available:true,version:o,ffprobeAvailable:n}}catch{return{available:false}}}async function getAudioDuration(e){const t=s.resolve(e);try{const{stdout:e}=await runCommand("ffprobe",["-v","error","-show_entries","format=duration","-of","default=noprint_wrappers=1:nokey=1",t]);const o=parseFloat(e.trim());if(isNaN(o)){throw new Error(`Could not parse duration from ffprobe output: "${e.trim()}"`)}return 
Math.round(o*1e3)}catch(t){if(t.code==="ENOENT"){throw new Error("ffprobe not found. Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to get duration of ${e}: ${t.message}`)}}async function extractAudio(e,t){const o=s.resolve(e);const n=s.resolve(t);try{await runCommand("ffmpeg",["-i",o,"-vn","-acodec","pcm_s16le","-ar","24000","-ac","1","-y",n]);return n}catch(t){if(t.code==="ENOENT"){throw new Error("ffmpeg not found. Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to extract audio from ${e}: ${t.stderr||t.message}`)}}async function mergeAudioVideo(e,t,o){const n=s.resolve(e);const r=s.resolve(t);const a=s.resolve(o);try{await runCommand("ffmpeg",["-i",n,"-i",r,"-c:v","copy","-map","0:v:0","-map","1:a:0","-shortest","-y",a]);return a}catch(e){if(e.code==="ENOENT"){throw new Error("ffmpeg not found. Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to merge audio/video: ${e.stderr||e.message}`)}}async function mixWithBgm(e,t,o,n={}){const r=n.ducking??.2;const a=s.resolve(e);const i=s.resolve(t);const c=s.resolve(o);try{await runCommand("ffmpeg",["-i",a,"-i",i,"-filter_complex",`[1:a]volume=${r}[bgm_low];`+`[0:a][bgm_low]amix=inputs=2:duration=first:dropout_transition=2[out]`,"-map","[out]","-acodec","pcm_s16le","-ar","24000","-ac","1","-y",c]);return c}catch(e){if(e.code==="ENOENT"){throw new Error("ffmpeg not found. 
Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to mix audio with BGM: ${e.stderr||e.message}`)}}async function warnIfMissingFfmpeg(e,t){const o=await checkFfmpeg();if(o.available)return o;const n=s.join(e,".ffmpeg-hint-shown");try{if(r.existsSync(n))return o}catch{}const a={dub:"video merging (--video), BGM mixing (--bgm), speed adjustment (--speed-auto)",asr:"audio format conversion, video audio extraction"};const i=a[t]||"audio/video processing";console.log("\n"+`[hint] ffmpeg not found — needed for ${i}.\n`+" Install: brew install ffmpeg (macOS) / sudo apt install ffmpeg (Linux)\n"+" Without ffmpeg, some features will be unavailable.\n");try{r.mkdirSync(e,{recursive:true});r.writeFileSync(n,(new Date).toISOString(),"utf8")}catch{}return o}e.exports={checkFfmpeg:checkFfmpeg,getAudioDuration:getAudioDuration,extractAudio:extractAudio,mergeAudioVideo:mergeAudioVideo,mixWithBgm:mixWithBgm,warnIfMissingFfmpeg:warnIfMissingFfmpeg}},852:(e,t,o)=>{const n=o(611);const s=o(692);class ApiError extends Error{constructor(e,t,o){super(e);this.name="ApiError";this.code=t;this.status=o}}function throwApiError(e,t,o){if(e===401){throw new ApiError(`Token expired or invalid. Run: voxflow login`,"token_expired",401)}if(e===429||t&&t.code==="quota_exceeded"){throw new ApiError(`Daily quota exceeded. Your quota resets tomorrow. Check: voxflow status`,"quota_exceeded",429)}if(e>=500){throw new ApiError(`Server error (${e}). Please try again later.`,"server_error",e)}const n=t?.message||t?.code||JSON.stringify(t);throw new ApiError(`${o} failed (${e}): ${n}`,"api_error",e)}function throwNetworkError(e,t){const o=e.code||"";if(o==="ECONNREFUSED"||o==="ENOTFOUND"||o==="ETIMEDOUT"){throw new ApiError(`Cannot reach API server at ${t}. 
Check your internet connection or try --api <url>`,"network_error",0)}throw e}function request(e,t,o){return new Promise(((r,a)=>{const i=new URL(e);const c=i.protocol==="https:"?s:n;const l=c.request(i,t,(e=>{const t=[];e.on("data",(e=>t.push(e)));e.on("end",(()=>{const o=Buffer.concat(t).toString("utf8");try{r({status:e.statusCode,data:JSON.parse(o)})}catch{a(new Error(`Non-JSON response (${e.statusCode}): ${o.slice(0,200)}`))}}))}));l.on("error",(e=>a(e)));l.setTimeout(6e4,(()=>{l.destroy();a(new Error("Request timeout (60s)"))}));if(o)l.write(JSON.stringify(o));l.end()}))}e.exports={request:request,ApiError:ApiError,throwApiError:throwApiError,throwNetworkError:throwNetworkError}},133:(e,t,o)=>{const{request:n,throwApiError:s,throwNetworkError:r}=o(852);async function chatCompletion({apiBase:e,token:t,messages:o,temperature:a=.3,maxTokens:i=2e3}){let c,l;try{({status:c,data:l}=await n(`${e}/api/llm/chat`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{messages:o,temperature:a,max_tokens:i}))}catch(t){r(t,e)}if(c!==200||l.code!=="success"){s(c,l,"LLM chat")}return{content:l.content,usage:l.usage,quota:l.quota}}async function detectLanguage({apiBase:e,text:t}){let o,s;try{({status:o,data:s}=await n(`${e}/api/lang-detect/detect`,{method:"POST",headers:{"Content-Type":"application/json"}},{text:t.slice(0,200)}))}catch{return"auto"}if(o===200&&s.code==="success"){return s.language}return"auto"}e.exports={chatCompletion:chatCompletion,detectLanguage:detectLanguage}},384:(e,t,o)=>{const{spawn:n}=o(317);const s=o(928);const r=o(857);const a=o(896);async function checkRecAvailable(){return new Promise((e=>{const t=n("rec",["--version"],{stdio:"pipe"});let o="";t.stdout.on("data",(e=>{o+=e}));t.stderr.on("data",(e=>{o+=e}));t.on("error",(()=>{e({available:false,error:"rec (sox) not found. 
Please install sox:\n"+" macOS: brew install sox\n"+" Ubuntu: sudo apt install sox\n"+" Windows: https://sourceforge.net/projects/sox/"})}));t.on("close",(t=>{e({available:t===0||o.length>0})}))}))}function recordMic(e={}){const{outputDir:t=r.tmpdir(),maxSeconds:o=300,silenceThreshold:i=0}=e;const c=s.join(t,`mic-${Date.now()}.wav`);return new Promise(((e,t)=>{const s=["-r","16000","-c","1","-b","16","-e","signed-integer",c,"trim","0",String(o)];if(i>0){s.push("silence","1","0.1","1%","1",String(i),"1%")}const r=n("rec",s,{stdio:["pipe","pipe","pipe"]});let l="";r.stderr.on("data",(e=>{l+=e.toString()}));r.on("error",(e=>{if(e.code==="ENOENT"){t(new Error("rec (sox) not found. Please install sox:\n"+" macOS: brew install sox\n"+" Ubuntu: sudo apt install sox\n"+" Windows: https://sourceforge.net/projects/sox/"))}else{t(new Error(`Microphone recording failed: ${e.message}`))}}));let u="timeout";r.on("close",(o=>{if(!a.existsSync(c)){return t(new Error(`Recording failed — no output file created.\n${l.slice(0,500)}`))}const n=a.statSync(c);if(n.size<100){a.unlinkSync(c);return t(new Error("Recording produced an empty file. 
Check that your microphone is connected and accessible."))}const s=Math.round((n.size-44)/32);e({wavPath:c,durationMs:s,stopped:u})}));const stopRecording=()=>{u="user";r.kill("SIGTERM")};if(process.stdin.isTTY){process.stdin.setRawMode(true);process.stdin.resume();const onKey=e=>{if(e[0]===13||e[0]===10||e[0]===113){u="user";process.stdin.setRawMode(false);process.stdin.removeListener("data",onKey);process.stdin.pause();r.kill("SIGTERM")}if(e[0]===3){u="user";process.stdin.setRawMode(false);process.stdin.removeListener("data",onKey);process.stdin.pause();r.kill("SIGTERM")}};process.stdin.on("data",onKey);r.on("close",(()=>{try{process.stdin.removeListener("data",onKey);if(process.stdin.isTTY){process.stdin.setRawMode(false)}process.stdin.pause()}catch{}}))}r._stopRecording=stopRecording}))}e.exports={recordMic:recordMic,checkRecAvailable:checkRecAvailable}},339:e=>{function startSpinner(e){const t=["|","/","-","\\"];let o=0;process.stdout.write(e+" "+t[0]);const n=setInterval((()=>{o=(o+1)%t.length;process.stdout.write("\b"+t[o])}),120);return{stop(e){clearInterval(n);process.stdout.write("\b"+e+"\n")}}}e.exports={startSpinner:startSpinner}},813:e=>{function parseTimestamp(e){const t=e.trim().match(/^(\d{1,2}):(\d{2}):(\d{2})[,.](\d{3})$/);if(!t){throw new Error(`Invalid SRT timestamp: "${e}"`)}const[,o,n,s,r]=t;return parseInt(o,10)*36e5+parseInt(n,10)*6e4+parseInt(s,10)*1e3+parseInt(r,10)}function formatTimestamp(e){if(e<0)e=0;const t=Math.floor(e/36e5);e%=36e5;const o=Math.floor(e/6e4);e%=6e4;const n=Math.floor(e/1e3);const s=e%1e3;return String(t).padStart(2,"0")+":"+String(o).padStart(2,"0")+":"+String(n).padStart(2,"0")+","+String(s).padStart(3,"0")}function parseSrt(e){if(!e||e.trim().length===0){return[]}const t=[];const o=e.replace(/\r\n/g,"\n").replace(/\r/g,"\n");const n=o.split(/\n\s*\n/).filter((e=>e.trim().length>0));for(const e of n){const o=e.trim().split("\n");if(o.length<2)continue;let n=0;let s;const 
r=o[0].trim();if(/^\d+$/.test(r)){s=parseInt(r,10);n=1}else{s=t.length+1}if(n>=o.length)continue;const a=o[n].trim();const i=a.match(/^(\d{1,2}:\d{2}:\d{2}[,.]\d{3})\s*-->\s*(\d{1,2}:\d{2}:\d{2}[,.]\d{3})/);if(!i)continue;const c=parseTimestamp(i[1]);const l=parseTimestamp(i[2]);n++;const u=o.slice(n).filter((e=>e.trim().length>0));if(u.length===0)continue;const p=u.join("\n");let d;let f=p;const g=p.match(/^\[Speaker:\s*([^\]]+)\]\s*/i);if(g){d=g[1].trim();f=p.slice(g[0].length)}if(f.trim().length===0)continue;t.push({id:s,startMs:c,endMs:l,text:f.trim(),...d?{speakerId:d}:{}})}t.sort(((e,t)=>e.startMs-t.startMs));return t}function formatSrt(e){return e.map(((e,t)=>{const o=e.id||t+1;const n=formatTimestamp(e.startMs);const s=formatTimestamp(e.endMs);const r=e.speakerId?`[Speaker: ${e.speakerId}] `:"";return`${o}\n${n} --\x3e ${s}\n${r}${e.text}`})).join("\n\n")+"\n"}function buildCaptionsFromFlash(e){const t=[];let o=1;for(const n of e){const e=n.sentence_list||[];for(const n of e){const e={id:o++,startMs:n.start_time||0,endMs:n.end_time||0,text:(n.text||"").trim()};if(n.speaker_id!==undefined&&n.speaker_id!==null){e.speakerId=`Speaker${n.speaker_id}`}if(e.text.length>0){t.push(e)}}}return t}function buildCaptionsFromSentence(e,t,o){if(!e||e.trim().length===0)return[];if(o&&o.length>0){return buildCaptionsFromWordList(o,e)}return[{id:1,startMs:0,endMs:Math.round(t*1e3),text:e.trim()}]}function buildCaptionsFromWordList(e,t){if(!e||e.length===0){return t?[{id:1,startMs:0,endMs:0,text:t}]:[]}const o=500;const n=5e3;const s=15e3;const r=/[.!?。!?…]+$/;const getWord=e=>e.word||e.Word||"";const getStart=e=>e.startTime??e.StartTime??0;const getEnd=e=>e.endTime??e.EndTime??0;const a=e.slice(0,10).map(getWord).join("");const i=(a.match(/[\u4e00-\u9fff\u3040-\u309f\u30a0-\u30ff\uac00-\ud7af]/g)||[]).length;const c=i<a.length*.3;const l=c?" 
":"";const u=[];let p=[];let d=getStart(e[0]);let f=d;function flushCaption(){if(p.length===0)return;const e=p.join(l).trim();if(e.length>0){u.push({id:u.length+1,startMs:d,endMs:f,text:e})}p=[]}for(let t=0;t<e.length;t++){const a=e[t];const i=getStart(a);const c=getEnd(a);const l=i-f;const u=i-d;if(l>o&&p.length>0){flushCaption();d=i}else if(p.length>0&&u>n&&r.test(p[p.length-1])){flushCaption();d=i}else if(p.length>0&&u>s){flushCaption();d=i}p.push(getWord(a));f=c||f}flushCaption();return u}function buildCaptionsFromFile(e,t){if(!e||e.trim().length===0)return[];if(/^\d+\s*\n\d{2}:\d{2}:\d{2}[,.]\d{3}\s*-->/.test(e.trim())){return parseSrt(e)}return[{id:1,startMs:0,endMs:Math.round(t*1e3),text:e.trim()}]}function formatPlainText(e,t={}){return e.map((e=>{const o=t.includeSpeakers&&e.speakerId?`[${e.speakerId}] `:"";return`${o}${e.text}`})).join("\n")+"\n"}function formatJson(e){return JSON.stringify(e,null,2)+"\n"}e.exports={parseSrt:parseSrt,formatSrt:formatSrt,parseTimestamp:parseTimestamp,formatTimestamp:formatTimestamp,buildCaptionsFromFlash:buildCaptionsFromFlash,buildCaptionsFromSentence:buildCaptionsFromSentence,buildCaptionsFromWordList:buildCaptionsFromWordList,buildCaptionsFromFile:buildCaptionsFromFile,formatPlainText:formatPlainText,formatJson:formatJson}},907:(e,t,o)=>{const{createSilence:n,buildWav:s}=o(56);const r=24e3;const a=2;const i=r*a/1e3;function msToBytes(e){const t=Math.round(e*i);return t-t%a}function buildTimelinePcm(e,t){if(!e||e.length===0){return{pcm:Buffer.alloc(0),durationMs:0}}const o=Math.max(...e.map((e=>e.endMs)));const n=t||o;const s=msToBytes(n);const r=Buffer.alloc(s,0);for(const t of e){const e=msToBytes(t.startMs);const o=msToBytes(t.endMs)-e;const n=Math.min(t.audioBuffer.length,o,s-e);if(n>0&&e<s){t.audioBuffer.copy(r,e,0,n)}}return{pcm:r,durationMs:n}}function 
buildTimelineAudio(e,t){const{pcm:o,durationMs:n}=buildTimelinePcm(e,t);if(o.length===0){return{wav:Buffer.alloc(0),duration:0}}const{wav:r}=s([o],0);return{wav:r,duration:n/1e3}}e.exports={buildTimelinePcm:buildTimelinePcm,buildTimelineAudio:buildTimelineAudio,msToBytes:msToBytes,SAMPLE_RATE:r,BYTES_PER_SAMPLE:a,BYTES_PER_MS:i}},126:(e,t,o)=>{const n=o(896);const s=o(928);const{execFileSync:r}=o(317);const a={"16k_zh":"Chinese","16k_en":"English","16k_zh_en":"Chinese","16k_ja":"Japanese","16k_ko":"Korean","16k_zh_dialect":"Chinese","8k_zh":"Chinese","8k_en":"English",zh:"Chinese",en:"English",ja:"Japanese",ko:"Korean",auto:"auto"};function checkWhisperAvailable(){try{const e=resolveWhisperModule();if(e){return{available:true}}return{available:false,error:"nodejs-whisper is not installed.\n"+"Install it with: npm install -g nodejs-whisper\n"+"Then download a model: npx nodejs-whisper download"}}catch{return{available:false,error:"nodejs-whisper is not installed.\n"+"Install it with: npm install -g nodejs-whisper\n"+"Then download a model: npx nodejs-whisper download"}}}function resolveWhisperModule(){try{return require.resolve("nodejs-whisper")}catch{}try{const e=r("npm",["root","-g"],{encoding:"utf8"}).trim();const t=s.join(e,"nodejs-whisper");if(n.existsSync(t)){return t}}catch{}return null}function loadWhisperModule(){const e=resolveWhisperModule();if(!e){throw new Error("nodejs-whisper is not installed.\n"+"Install: npm install -g nodejs-whisper\n"+"Then: npx nodejs-whisper download")}const t=require(e);return t.nodewhisper||t.default||t}async function transcribeLocal(e,t={}){const{model:o="base",lang:s="16k_zh"}=t;if(!n.existsSync(e)){throw new Error(`WAV file not found: ${e}`)}const r=loadWhisperModule();const i=a[s]||a["auto"]||"auto";const c=o;await 
r(e,{modelName:c,autoDownloadModelName:c,removeWavFileAfterTranscription:false,whisperOptions:{outputInJson:true,outputInSrt:false,outputInVtt:false,outputInTxt:false,wordTimestamps:true,splitOnWord:true,language:i==="auto"?undefined:i}});const l=e+".json";if(!n.existsSync(l)){const t=e.replace(/\.wav$/i,"");const o=[t+".json",e+".json"];const s=o.find((e=>n.existsSync(e)));if(!s){throw new Error("Whisper completed but no JSON output found.\n"+`Expected: ${l}\n`+"Ensure nodejs-whisper is correctly installed.")}}const u=n.readFileSync(l,"utf8");const p=JSON.parse(u);const d=p.transcription||p.segments||[];const f=parseWhisperOutput(d);cleanupWhisperFiles(e);return f}function parseWhisperOutput(e){if(!e||!Array.isArray(e))return[];let t=0;const o=[];for(const n of e){const e=(n.text||"").trim();if(!e)continue;t++;let s=0;let r=0;if(n.timestamps){s=parseTimestamp(n.timestamps.from);r=parseTimestamp(n.timestamps.to)}else if(n.offsets){s=n.offsets.from||0;r=n.offsets.to||0}else if(typeof n.start==="number"){s=Math.round(n.start*1e3);r=Math.round(n.end*1e3)}o.push({id:t,startMs:s,endMs:r,text:e})}return o}function parseTimestamp(e){if(!e||typeof e!=="string")return 0;const t=e.match(/^(\d+):(\d+):(\d+)\.(\d+)$/);if(!t)return 0;const o=parseInt(t[1],10);const n=parseInt(t[2],10);const s=parseInt(t[3],10);const r=parseInt(t[4].padEnd(3,"0").slice(0,3),10);return(o*3600+n*60+s)*1e3+r}function cleanupWhisperFiles(e){const t=[".json",".srt",".vtt",".txt",".lrc",".wts"];for(const o of t){const t=e+o;try{if(n.existsSync(t)){n.unlinkSync(t)}}catch{}}}e.exports={checkWhisperAvailable:checkWhisperAvailable,transcribeLocal:transcribeLocal,parseWhisperOutput:parseWhisperOutput,parseTimestamp:parseTimestamp,LANG_MAP:a}},317:e=>{"use strict";e.exports=require("child_process")},982:e=>{"use strict";e.exports=require("crypto")},896:e=>{"use strict";e.exports=require("fs")},611:e=>{"use strict";e.exports=require("http")},692:e=>{"use strict";e.exports=require("https")},573:e=>{"use 
strict";e.exports=require("node:buffer")},421:e=>{"use strict";e.exports=require("node:child_process")},24:e=>{"use strict";e.exports=require("node:fs")},455:e=>{"use strict";e.exports=require("node:fs/promises")},161:e=>{"use strict";e.exports=require("node:os")},760:e=>{"use strict";e.exports=require("node:path")},708:e=>{"use strict";e.exports=require("node:process")},136:e=>{"use strict";e.exports=require("node:url")},975:e=>{"use strict";e.exports=require("node:util")},857:e=>{"use strict";e.exports=require("os")},928:e=>{"use strict";e.exports=require("path")},785:e=>{"use strict";e.exports=require("readline")},330:e=>{"use strict";e.exports=JSON.parse('{"name":"voxflow","version":"1.5.2","description":"AI audio content creation CLI — stories, podcasts, narration, dubbing, transcription, translation, and video translation with TTS","bin":{"voxflow":"./dist/index.js"},"files":["dist/index.js","dist/935.index.js","README.md"],"engines":{"node":">=18.0.0"},"dependencies":{"open":"^10.0.0"},"keywords":["tts","story","podcast","ai","audio","text-to-speech","voice","narration","dubbing","synthesize","voices","document","translate","subtitle","srt","transcribe","asr","video-translate","video","voxflow"],"scripts":{"build":"ncc build bin/voxflow.js -o dist --minify","prepublishOnly":"npm run build","test":"node --test tests/*.test.js"},"author":"gonghaoran","license":"UNLICENSED","homepage":"https://voxflow.studio","repository":{"type":"git","url":"https://github.com/VoxFlowStudio/FlowStudio","directory":"cli"},"publishConfig":{"access":"public"},"devDependencies":{"@vercel/ncc":"^0.38.4"}}')}};var t={};function __nccwpck_require__(o){var n=t[o];if(n!==undefined){return n.exports}var s=t[o]={exports:{}};var r=true;try{e[o](s,s.exports,__nccwpck_require__);r=false}finally{if(r)delete t[o]}return s.exports}__nccwpck_require__.m=e;(()=>{__nccwpck_require__.d=(e,t)=>{for(var o in 
t){if(__nccwpck_require__.o(t,o)&&!__nccwpck_require__.o(e,o)){Object.defineProperty(e,o,{enumerable:true,get:t[o]})}}}})();(()=>{__nccwpck_require__.f={};__nccwpck_require__.e=e=>Promise.all(Object.keys(__nccwpck_require__.f).reduce(((t,o)=>{__nccwpck_require__.f[o](e,t);return t}),[]))})();(()=>{__nccwpck_require__.u=e=>""+e+".index.js"})();(()=>{__nccwpck_require__.o=(e,t)=>Object.prototype.hasOwnProperty.call(e,t)})();(()=>{__nccwpck_require__.r=e=>{if(typeof Symbol!=="undefined"&&Symbol.toStringTag){Object.defineProperty(e,Symbol.toStringTag,{value:"Module"})}Object.defineProperty(e,"__esModule",{value:true})}})();if(typeof __nccwpck_require__!=="undefined")__nccwpck_require__.ab=__dirname+"/";(()=>{var e={792:1};var installChunk=t=>{var o=t.modules,n=t.ids,s=t.runtime;for(var r in o){if(__nccwpck_require__.o(o,r)){__nccwpck_require__.m[r]=o[r]}}if(s)s(__nccwpck_require__);for(var a=0;a<n.length;a++)e[n[a]]=1};__nccwpck_require__.f.require=(t,o)=>{if(!e[t]){if(true){installChunk(require("./"+__nccwpck_require__.u(t)))}else e[t]=1}}})();var o={};const{run:n}=__nccwpck_require__(6);n().catch((e=>{console.error(`\nFatal error: ${e.message}`);process.exit(1)}));module.exports=o})();
+ (()=>{var e={839:e=>{const t={opening:{speed:1,volume:1},explain:{speed:.95,volume:1},question:{speed:1.05,volume:1.05},react:{speed:1,volume:.95},debate:{speed:1.1,volume:1.1},summary:{speed:.9,volume:1},summarize:{speed:.9,volume:1},transition:{speed:1.1,volume:.9},joke:{speed:1.05,volume:1.05},closing:{speed:.95,volume:.95}};function getIntentParams(e){if(!e||typeof e!=="string"){return{speed:1,volume:1}}return t[e]||{speed:1,volume:1}}e.exports={INTENT_TTS_PARAMS:t,getIntentParams:getIntentParams}},6:(e,t,o)=>{const{getToken:n,clearToken:s,getTokenInfo:r}=o(986);const{story:a,ApiError:i}=o(214);const{podcast:c,ApiError:l}=o(35);const{synthesize:u}=o(383);const{narrate:p}=o(80);const{voices:d}=o(784);const{dub:f}=o(944);const{asr:g,ASR_DEFAULTS:m}=o(929);const{translate:h}=o(585);const{videoTranslate:w}=o(863);const{publish:v}=o(360);const{explain:x}=o(484);const{present:y,VALID_SCHEMES:S}=o(712);const{warnIfMissingFfmpeg:$}=o(297);const{API_BASE:b,WEB_BASE:k,DASHBOARD_URL:T,STORY_DEFAULTS:F,PODCAST_DEFAULTS:E,SYNTHESIZE_DEFAULTS:_,NARRATE_DEFAULTS:A,DUB_DEFAULTS:I,ASR_DEFAULTS:M,TRANSLATE_DEFAULTS:P,VIDEO_TRANSLATE_DEFAULTS:L,EXPLAIN_DEFAULTS:O,PRESENT_DEFAULTS:N,getConfigDir:C}=o(782);const D=o(330);const R=i;async function run(e){const t=e||process.argv.slice(2);const o=t[0];if(!o||o==="--help"||o==="-h"){printHelp();return}if(o==="--version"||o==="-v"){console.log(D.version);return}if(t.includes("--help")||t.includes("-h")){printSubcommandHelp(o);return}switch(o){case"login":return handleLogin(t.slice(1));case"logout":return handleLogout();case"status":return handleStatus();case"story":case"generate":return handleStory(t.slice(1));case"podcast":return handlePodcast(t.slice(1));case"synthesize":case"say":return handleSynthesize(t.slice(1));case"narrate":return handleNarrate(t.slice(1));case"voices":return handleVoices(t.slice(1));case"dub":return handleDub(t.slice(1));case"asr":case"transcribe":return handleAsr(t.slice(1));case"translate":return 
handleTranslate(t.slice(1));case"video-translate":return handleVideoTranslate(t.slice(1));case"publish":return handlePublish(t.slice(1));case"explain":return handleExplain(t.slice(1));case"present":return handlePresent(t.slice(1));case"dashboard":return handleDashboard();default:console.error(`Unknown command: ${o}\nRun voxflow --help for usage.`);process.exit(1)}}async function handleLogin(e){const t=parseFlag(e,"--api")||b;console.log("Logging in...");const o=await n({api:t,force:true});const s=r();if(s){console.log(`\nLogged in as ${s.email}`);console.log(`Token expires: ${s.expiresAt}`);console.log(`API: ${s.api}`)}}function handleLogout(){s();console.log("Logged out. Token cache cleared.")}function handleStatus(){const e=r();if(!e){console.log("Not logged in. Run: voxflow login");return}console.log(`Email: ${e.email}`);console.log(`API: ${e.api}`);console.log(`Expires: ${e.expiresAt}`);console.log(`Valid: ${e.valid?"yes":"expired"}`);if(!e.valid){console.log("\nToken expired. Run: voxflow login")}console.log(`\nDashboard: ${T}`);console.log("Run voxflow dashboard to open in browser.")}async function handleStory(e){const t=parseFlag(e,"--api")||b;const o=parseFlag(e,"--token");let s;if(o){s=o}else{s=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseIntFlag(e,"--paragraphs");const c=parseFloatFlag(e,"--speed");const l=parseFloatFlag(e,"--silence");const u=parseFlag(e,"--output");if(i!==undefined){if(isNaN(i)||i<1||i>20){console.error(`Error: --paragraphs must be an integer between 1 and 20 (got: "${parseFlag(e,"--paragraphs")}")`);process.exit(1)}}validateSpeed(e,c);validateSilence(e,l);validateOutput(u);const p={token:s,api:t,topic:parseFlag(e,"--topic"),voice:parseFlag(e,"--voice"),output:u,paragraphs:i,speed:c,silence:l};await runWithRetry(a,p,t,o)}async function handlePodcast(e){const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");const a=parseBoolFlag(e,"--no-tts");const i=parseFlag(e,"--input");let 
l;if(s){l=s}else{l=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const u=parseIntFlag(e,"--exchanges");const p=parseFloatFlag(e,"--speed");const d=parseFloatFlag(e,"--silence");const f=parseFloatFlag(e,"--ducking");const g=parseFlag(e,"--output");const m=parseIntFlag(e,"--speakers");if(u!==undefined){if(isNaN(u)||u<2||u>30){console.error(`Error: --exchanges must be an integer between 2 and 30 (got: "${parseFlag(e,"--exchanges")}")`);process.exit(1)}}validateSpeed(e,p);validateSilence(e,d);if(g){const e=[".wav",".mp3",".txt",".json"];const t=e.some((e=>g.toLowerCase().endsWith(e)));if(!t){console.error(`Error: --output path must end with ${e.join(", ")}`);process.exit(1)}}const h=parseFlag(e,"--length");if(h&&!["short","medium","long"].includes(h)){console.error(`Error: --length must be one of: short, medium, long (got: "${h}")`);process.exit(1)}const w=parseFlag(e,"--engine");if(w&&!["auto","legacy","ai-sdk"].includes(w)){console.error(`Error: --engine must be one of: auto, legacy, ai-sdk (got: "${w}")`);process.exit(1)}const v=parseFlag(e,"--colloquial");if(v&&!["low","medium","high"].includes(v)){console.error(`Error: --colloquial must be one of: low, medium, high (got: "${v}")`);process.exit(1)}if(m!==undefined){if(isNaN(m)||m<1||m>3){console.error(`Error: --speakers must be 1, 2, or 3 (got: "${parseFlag(e,"--speakers")}")`);process.exit(1)}}const x=parseFlag(e,"--language");if(x&&!["zh-CN","en","ja"].includes(x)){console.error(`Error: --language must be one of: zh-CN, en, ja (got: "${x}")`);process.exit(1)}const y=parseFlag(e,"--template");if(y&&!["interview","discussion","news","story","tutorial"].includes(y)){console.error(`Error: --template must be one of: interview, discussion, news, story, tutorial (got: "${y}")`);process.exit(1)}const S=parseFlag(e,"--format");if(S&&!["json"].includes(S)){console.error(`Error: --format must be: json (got: "${S}")`);process.exit(1)}if(i){const e=o(896);const t=o(928);const 
n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}const $=parseFlag(e,"--script");if($){const e=o(896);const t=o(928);const n=t.resolve($);if(!e.existsSync(n)){console.error(`Error: Script file not found: ${n}`);process.exit(1)}}const k=parseFlag(e,"--bgm");if(k){const e=o(896);const t=o(928);const n=t.resolve(k);if(!e.existsSync(n)){console.error(`Error: BGM file not found: ${n}`);process.exit(1)}}if(f!==undefined){if(isNaN(f)||f<0||f>1){console.error(`Error: --ducking must be between 0 and 1.0 (got: "${parseFlag(e,"--ducking")}")`);process.exit(1)}}const T={token:l,api:t,topic:parseFlag(e,"--topic"),style:parseFlag(e,"--style")||y,template:y,length:h,exchanges:u,output:g,speed:p,silence:d,engine:w||undefined,colloquial:v||undefined,speakers:m||undefined,language:x||undefined,format:S||undefined,input:i||undefined,noTts:a,voice:parseFlag(e,"--voice"),script:$,bgm:k,ducking:f};await runWithRetry(c,T,t,s)}async function handleSynthesize(e){const t=parseFlag(e,"--api")||b;const o=parseFlag(e,"--token");let s;if(o){s=o}else{s=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}let a=parseFlag(e,"--text");if(!a){const t=new Set(["--text","--voice","--speed","--volume","--pitch","--output","--token","--api","--format"]);for(let o=0;o<e.length;o++){if(e[o].startsWith("--")){if(t.has(e[o]))o++;continue}a=e[o];break}}if(!a){console.error('Error: No text provided. 
Usage: voxflow synthesize "your text here"');process.exit(1)}const i=parseFloatFlag(e,"--speed");const c=parseFloatFlag(e,"--volume");const l=parseFloatFlag(e,"--pitch");const p=parseFlag(e,"--output");const d=parseFlag(e,"--format");validateSpeed(e,i);validateOutput(p,d);validateFormat(d);if(c!==undefined){if(isNaN(c)||c<.1||c>2){console.error(`Error: --volume must be between 0.1 and 2.0 (got: "${parseFlag(e,"--volume")}")`);process.exit(1)}}if(l!==undefined){if(isNaN(l)||l<-12||l>12){console.error(`Error: --pitch must be between -12 and 12 (got: "${parseFlag(e,"--pitch")}")`);process.exit(1)}}const f={token:s,api:t,text:a,voice:parseFlag(e,"--voice"),output:p,speed:i,volume:c,pitch:l,format:d||undefined};await runWithRetry(u,f,t,o)}async function handleNarrate(e){const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--input");const c=parseFlag(e,"--text");const l=parseFlag(e,"--script");const u=parseFloatFlag(e,"--speed");const d=parseFloatFlag(e,"--silence");const f=parseFlag(e,"--output");const g=parseFlag(e,"--format");validateSpeed(e,u);validateSilence(e,d);validateOutput(f,g);validateFormat(g);if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}if(l){const e=o(896);const t=o(928);const n=t.resolve(l);if(!e.existsSync(n)){console.error(`Error: Script file not found: ${n}`);process.exit(1)}}const m={token:a,api:t,input:i,text:c,script:l,voice:parseFlag(e,"--voice"),output:f,speed:u,silence:d,format:g||undefined};await runWithRetry(p,m,t,s)}async function handleVoices(e){const t=parseFlag(e,"--api")||b;const o={api:t,search:parseFlag(e,"--search"),gender:parseFlag(e,"--gender"),language:parseFlag(e,"--language"),json:parseBoolFlag(e,"--json"),extended:parseBoolFlag(e,"--extended")};await d(o)}async function handleDub(e){await 
$(C(),"dub");const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--srt");const c=parseFlag(e,"--video");const l=parseFlag(e,"--output");const u=parseFloatFlag(e,"--speed");const p=parseFloatFlag(e,"--ducking");const d=parseIntFlag(e,"--patch");if(!i&&!parseBoolFlag(e,"--help")){console.error("Error: --srt <file> is required. Usage: voxflow dub --srt <file.srt>");process.exit(1)}if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: SRT file not found: ${n}`);process.exit(1)}}if(c){const e=o(896);const t=o(928);const n=t.resolve(c);if(!e.existsSync(n)){console.error(`Error: Video file not found: ${n}`);process.exit(1)}}const g=parseFlag(e,"--voices");if(g){const e=o(896);const t=o(928);const n=t.resolve(g);if(!e.existsSync(n)){console.error(`Error: Voices map file not found: ${n}`);process.exit(1)}}const m=parseFlag(e,"--bgm");if(m){const e=o(896);const t=o(928);const n=t.resolve(m);if(!e.existsSync(n)){console.error(`Error: BGM file not found: ${n}`);process.exit(1)}}validateSpeed(e,u);if(l){const e=c?[".mp4",".mkv",".mov"]:[".wav",".mp3"];const t=e.some((e=>l.toLowerCase().endsWith(e)));if(!t){const t=e.join(", ");console.error(`Error: --output path must end with ${t}`);process.exit(1)}}if(p!==undefined){if(isNaN(p)||p<0||p>1){console.error(`Error: --ducking must be between 0 and 1.0 (got: "${parseFlag(e,"--ducking")}")`);process.exit(1)}}const h={token:a,api:t,srt:i,video:c,output:l,speed:u,patch:d,voice:parseFlag(e,"--voice"),voicesMap:g,speedAuto:parseBoolFlag(e,"--speed-auto"),bgm:m,ducking:p};await runWithRetry(f,h,t,s)}async function handleAsr(e){await $(C(),"asr");const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");const a=parseFlag(e,"--engine")||M.engine;const 
i=parseFlag(e,"--model")||M.model;if(a&&!["auto","local","cloud","whisper","tencent"].includes(a)){console.error(`Error: --engine must be one of: auto, local, cloud (got: "${a}")`);process.exit(1)}if(i&&!["tiny","base","small","medium","large"].includes(i)){console.error(`Error: --model must be one of: tiny, base, small, medium, large (got: "${i}")`);process.exit(1)}const c=a==="local"||a==="whisper";let l;if(c){l=null}else if(s){l=s}else{l=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const u=parseFlag(e,"--input");const p=parseFlag(e,"--url");const d=parseBoolFlag(e,"--mic");const f=parseFlag(e,"--mode")||m.mode;const h=parseFlag(e,"--lang")||parseFlag(e,"--language")||m.lang;const w=parseFlag(e,"--format")||m.format;const v=parseFlag(e,"--output");const x=parseBoolFlag(e,"--speakers");const y=parseIntFlag(e,"--speaker-number");const S=parseIntFlag(e,"--task-id");if(f&&!["auto","sentence","flash","file"].includes(f)){console.error(`Error: --mode must be one of: auto, sentence, flash, file (got: "${f}")`);process.exit(1)}if(w&&!["srt","txt","json"].includes(w)){console.error(`Error: --format must be one of: srt, txt, json (got: "${w}")`);process.exit(1)}if(u){const e=o(896);const t=o(928);const n=t.resolve(u);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}const k={token:l,api:t,input:u,url:p,mic:d,mode:f,lang:h,format:w,output:v,speakers:x,speakerNumber:y,taskId:S,engine:a,model:i};if(c){await g(k)}else{await runWithRetry(g,k,t,s)}}async function handleTranslate(e){const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--srt");const c=parseFlag(e,"--text");const l=parseFlag(e,"--input");const u=parseFlag(e,"--from");const p=parseFlag(e,"--to");const d=parseFlag(e,"--output");const f=parseBoolFlag(e,"--realign");const 
g=parseIntFlag(e,"--batch-size");if(!p&&!parseBoolFlag(e,"--help")){console.error("Error: --to <lang> is required. Example: voxflow translate --srt file.srt --to en");process.exit(1)}const m=["zh","en","ja","ko","fr","de","es","pt","ru","ar","th","vi","it"];if(p&&!m.includes(p)){console.error(`Error: --to must be one of: ${m.join(", ")} (got: "${p}")`);process.exit(1)}if(u&&!m.includes(u)&&u!=="auto"){console.error(`Error: --from must be one of: auto, ${m.join(", ")} (got: "${u}")`);process.exit(1)}const w=[i,c,l].filter(Boolean).length;if(w===0&&!parseBoolFlag(e,"--help")){console.error("Error: Provide one of: --srt <file>, --text <text>, --input <file>");process.exit(1)}if(w>1){console.error("Error: Specify only one input: --srt, --text, or --input");process.exit(1)}if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: SRT file not found: ${n}`);process.exit(1)}}if(l){const e=o(896);const t=o(928);const n=t.resolve(l);if(!e.existsSync(n)){console.error(`Error: Input file not found: ${n}`);process.exit(1)}}if(g!==undefined){if(isNaN(g)||g<1||g>20){console.error(`Error: --batch-size must be between 1 and 20 (got: "${parseFlag(e,"--batch-size")}")`);process.exit(1)}}const v={token:a,api:t,srt:i,text:c,input:l,from:u,to:p,output:d,realign:f,batchSize:g};await runWithRetry(h,v,t,s)}async function handleVideoTranslate(e){if(parseBoolFlag(e,"--help")||parseBoolFlag(e,"-h")){printHelp();return}const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");let a;if(s){a=s}else{a=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const i=parseFlag(e,"--input");const c=parseFlag(e,"--from");const l=parseFlag(e,"--to");const u=parseFlag(e,"--voice");const p=parseFlag(e,"--voices");const d=parseFlag(e,"--output");const f=parseBoolFlag(e,"--realign");const g=parseBoolFlag(e,"--keep-intermediates");const m=parseIntFlag(e,"--batch-size");const h=parseFloatFlag(e,"--speed");const 
v=parseFlag(e,"--asr-mode");const x=parseFlag(e,"--asr-lang");if(!i){console.error("Error: --input <video-file> is required. Example: voxflow video-translate --input video.mp4 --to en");process.exit(1)}if(!l){console.error("Error: --to <lang> is required. Example: voxflow video-translate --input video.mp4 --to en");process.exit(1)}const y=["zh","en","ja","ko","fr","de","es","pt","ru","ar","th","vi","it"];if(l&&!y.includes(l)){console.error(`Error: --to must be one of: ${y.join(", ")} (got: "${l}")`);process.exit(1)}if(c&&!y.includes(c)&&c!=="auto"){console.error(`Error: --from must be one of: auto, ${y.join(", ")} (got: "${c}")`);process.exit(1)}if(i){const e=o(896);const t=o(928);const n=t.resolve(i);if(!e.existsSync(n)){console.error(`Error: Video file not found: ${n}`);process.exit(1)}}if(h!==undefined&&(isNaN(h)||h<.5||h>2)){console.error(`Error: --speed must be between 0.5 and 2.0 (got: "${parseFlag(e,"--speed")}")`);process.exit(1)}if(m!==undefined&&(isNaN(m)||m<1||m>20)){console.error(`Error: --batch-size must be between 1 and 20 (got: "${parseFlag(e,"--batch-size")}")`);process.exit(1)}const S=["auto","sentence","flash","file"];if(v&&!S.includes(v)){console.error(`Error: --asr-mode must be one of: ${S.join(", ")} (got: "${v}")`);process.exit(1)}if(p){const e=o(896);const t=o(928);const n=t.resolve(p);if(!e.existsSync(n)){console.error(`Error: Voices map file not found: ${n}`);process.exit(1)}}const $={token:a,api:t,input:i,from:c,to:l,voice:u,voicesMap:p,output:d,realign:f,keepIntermediates:g,batchSize:m,speed:h,asrMode:v,asrLang:x};await runWithRetry(w,$,t,s)}async function handlePublish(e){await $(C(),"dub");const t=parseFlag(e,"--api")||b;const o=parseFlag(e,"--token");const s=parseFlag(e,"--input");const a=parseFlag(e,"--to");const i=parseFlag(e,"--video");const c=parseFlag(e,"--srt");const l=parseFlag(e,"--audio");const u=parseFlag(e,"--output");const p=parseFlag(e,"--publish")||"local";const 
d=parseFlag(e,"--publish-webhook");if(s)assertFileExists(s,"Input video");if(i)assertFileExists(i,"Video");if(c)assertFileExists(c,"SRT");if(l)assertFileExists(l,"Audio");if(p&&!["local","webhook","none"].includes(p)){console.error(`Error: --publish must be one of: local, webhook, none (got: "${p}")`);process.exit(1)}if(p==="webhook"&&!d){console.error("Error: --publish webhook requires --publish-webhook <url>");process.exit(1)}if(u&&!u.toLowerCase().endsWith(".mp4")){console.error("Error: --output path must end with .mp4");process.exit(1)}const f=!!(i&&l&&!c&&!s);let g=null;if(!f){if(o){g=o}else{g=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}}const m={token:g,api:t,input:s,from:parseFlag(e,"--from"),to:a,srt:c,video:i,audio:l,voice:parseFlag(e,"--voice"),voicesMap:parseFlag(e,"--voices"),output:u,speed:parseFloatFlag(e,"--speed"),realign:parseBoolFlag(e,"--realign"),keepIntermediates:parseBoolFlag(e,"--keep-intermediates"),batchSize:parseIntFlag(e,"--batch-size"),publishTarget:p,publishDir:parseFlag(e,"--publish-dir"),publishWebhook:d,platform:parseFlag(e,"--platform")||undefined,title:parseFlag(e,"--title")||undefined};try{const n=f?await v(m):await runWithRetry(v,m,t,o);if(parseBoolFlag(e,"--json")){console.log(JSON.stringify(n,null,2))}}catch(e){const t=e.message||e.code||String(e);console.error(`\nFatal error: ${t}`);process.exit(1)}}async function handleExplain(e){const t=parseFlag(e,"--api")||b;const o=parseFlag(e,"--token");const s=parseFloatFlag(e,"--speed");const a=parseFlag(e,"--output");const i=parseIntFlag(e,"--scenes");validateSpeed(e,s);if(a){const e=[".wav",".mp3",".mp4"];const t=e.some((e=>a.toLowerCase().endsWith(e)));if(!t){console.error("Error: --output path must end with .wav, .mp3, or .mp4");process.exit(1)}}const c=parseFlag(e,"--style");if(c&&!["modern","playful","corporate","chalkboard"].includes(c)){console.error(`Error: --style must be one of: modern, playful, corporate, chalkboard (got: 
"${c}")`);process.exit(1)}if(i!==undefined){if(isNaN(i)||i<3||i>12){console.error(`Error: --scenes must be between 3 and 12 (got: "${parseFlag(e,"--scenes")}")`);process.exit(1)}}let l;if(o){l=o}else{l=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const u={token:l,api:t,topic:parseFlag(e,"--topic")||undefined,voice:parseFlag(e,"--voice")||undefined,style:c||undefined,language:parseFlag(e,"--language")||undefined,output:a,speed:s,scenes:i,audioOnly:parseBoolFlag(e,"--audio-only"),cloud:parseBoolFlag(e,"--cloud")};await runWithRetry(x,u,t,o)}async function handlePresent(e){const t=parseFlag(e,"--api")||b;const s=parseFlag(e,"--token");const a=parseFloatFlag(e,"--speed");const i=parseFlag(e,"--output");validateSpeed(e,a);if(i){const e=[".mp4",".wav"].some((e=>i.toLowerCase().endsWith(e)));if(!e){console.error("Error: --output path must end with .mp4 or .wav");process.exit(1)}}const c=parseFlag(e,"--scheme");if(c&&!S.includes(c)){console.error(`Error: --scheme must be one of: ${S.join(", ")} (got: "${c}")`);process.exit(1)}const l=parseFlag(e,"--text");const u=parseFlag(e,"--url");const p=parseFlag(e,"--cards");if(!l&&!u&&!p){console.error("Error: provide one of --text, --url, or --cards");process.exit(1)}if(p&&!o(896).existsSync(p)){console.error(`Error: cards file not found: ${p}`);process.exit(1)}let d;if(s){d=s}else{d=await n({api:t});const e=r();if(e){console.log(`Logged in as ${e.email}`)}}const f={token:d,api:t,text:l,url:u,cards:p,scheme:c||undefined,voice:parseFlag(e,"--voice")||undefined,speed:a,output:i,noAudio:parseBoolFlag(e,"--no-audio")};await runWithRetry(y,f,t,s)}async function handleDashboard(){const e=T;console.log(`\nOpening dashboard: ${e}`);try{const t=(await o.e(935).then(o.bind(o,935))).default;const n=await t(e);if(n&&typeof n.on==="function"){n.on("error",(()=>{console.log("Failed to open browser. Visit manually:");console.log(` ${e}`)}))}}catch{console.log("Failed to open browser. 
Visit manually:");console.log(` ${e}`)}}async function runWithRetry(e,t,o,s){try{return await e(t)}catch(r){if(r instanceof R&&r.code==="token_expired"&&!s){console.log("\nToken expired, re-authenticating...");t.token=await n({api:o,force:true});return await e(t)}else{throw r}}}function validateSpeed(e,t){if(t!==undefined){if(isNaN(t)||t<.5||t>2){console.error(`Error: --speed must be between 0.5 and 2.0 (got: "${parseFlag(e,"--speed")}")`);process.exit(1)}}}function validateSilence(e,t){if(t!==undefined){if(isNaN(t)||t<0||t>5){console.error(`Error: --silence must be between 0 and 5.0 (got: "${parseFlag(e,"--silence")}")`);process.exit(1)}}}function validateOutput(e,t){if(e){const t=[".wav",".mp3"];const o=t.some((t=>e.toLowerCase().endsWith(t)));if(!o){console.error("Error: --output path must end with .wav or .mp3");process.exit(1)}}}function validateFormat(e){if(e&&!["pcm","wav","mp3"].includes(e)){console.error(`Error: --format must be one of: pcm, wav, mp3 (got: "${e}")`);process.exit(1)}}function assertFileExists(e,t){const n=o(928).resolve(e);if(!o(896).existsSync(n)){console.error(`Error: ${t} file not found: ${n}`);process.exit(1)}}function parseFlag(e,t){const o=e.indexOf(t);if(o===-1||o+1>=e.length)return null;return e[o+1]}function parseIntFlag(e,t){const o=parseFlag(e,t);return o!=null?parseInt(o,10):undefined}function parseFloatFlag(e,t){const o=parseFlag(e,t);return o!=null?parseFloat(o):undefined}function parseBoolFlag(e,t){return e.includes(t)}function printHelp(){const e=[`\nvoxflow v${D.version} — AI audio content creation CLI`,"","Usage:"," voxflow <command> [options]","","Commands:"];for(const[t,o]of Object.entries(q)){if(o.alias)continue;const n=(t+(o.usage?" 
"+o.usage:"")).padEnd(18);e.push(` ${n} ${o.description}`)}e.push("");for(const[t,o]of Object.entries(q)){if(o.alias||!o.options)continue;const n=o.aliasOf?`${t} options (alias: ${o.aliasOf}):`:`${t} options:`;e.push(n.charAt(0).toUpperCase()+n.slice(1));e.push(...o.options.map((e=>" "+e)));e.push("")}e.push("Common options:"," --help, -h Show help (use with a command for command-specific help)"," --version, -v Show version","","Advanced options:"," --api <url> Override API endpoint (for self-hosted servers)"," --token <jwt> Use explicit token (CI/CD, skip browser login)","","Examples:");for(const[,t]of Object.entries(q)){if(t.alias||!t.examples)continue;e.push(...t.examples.map((e=>" "+e)))}console.log(e.join("\n"))}function printSubcommandHelp(e){const t=q[e];if(!t){printHelp();return}const o=t.alias?q[t.alias]:t;if(!o||!o.options){printHelp();return}const n=t.alias||e;const s=[`\nvoxflow ${n} — ${o.description}`,"","Usage:",` voxflow ${n}${o.usage?" "+o.usage:""}`,"","Options:",...o.options.map((e=>" "+e))," --api <url> Override API endpoint"," --token <jwt> Use explicit token (skip browser login)"];if(o.examples&&o.examples.length>0){s.push("","Examples:",...o.examples.map((e=>" "+e)))}console.log(s.join("\n"))}const q={login:{usage:"",description:"Open browser to login and cache token"},logout:{usage:"",description:"Clear cached token"},status:{usage:"",description:"Show login status and token info"},dashboard:{usage:"",description:"Open Web dashboard in browser"},story:{usage:"[opts]",description:"Generate a story with TTS narration",options:[`--topic <text> Story topic (default: children's story)`,`--voice <id> TTS voice ID (default: ${F.voice})`,`--output <path> Output WAV path (default: ./story-<timestamp>.wav)`,`--paragraphs <n> Paragraph count, 1-20 (default: ${F.paragraphs})`,`--speed <n> TTS speed 0.5-2.0 (default: ${F.speed})`,`--silence <sec> Silence between paragraphs, 0-5.0 (default: ${F.silence})`],examples:['voxflow story --topic "三只小猪"','voxflow 
story --topic "space adventure" --voice v-male-Bk7vD3xP --paragraphs 10']},podcast:{usage:"[opts]",description:"Generate a multi-speaker podcast/dialogue",options:[`--topic <text> Podcast topic (default: tech trends)`,`--engine <type> auto | legacy | ai-sdk (default: auto → ai-sdk)`,`--template <name> interview | discussion | news | story | tutorial`,`--colloquial <lvl> low | medium | high (default: medium)`,`--speakers <n> Speaker count: 1, 2, or 3 (default: ${E.speakers})`,`--language <code> zh-CN | en | ja (default: zh-CN)`,`--style <style> Legacy: dialogue style (maps to --template)`,`--length <len> short | medium | long (default: ${E.length})`,`--exchanges <n> Number of exchanges, 2-30 (legacy, default: ${E.exchanges})`,`--format json Output .podcast.json alongside audio`,`--input <file> Load .podcast.json and synthesize from it`,`--no-tts Generate script only, skip TTS synthesis`,`--script <file> Pre-written script JSON (skips LLM generation)`,`--voice <id> Override TTS voice for all speakers`,`--bgm <file> Background music file to mix in`,`--ducking <n> BGM volume ducking 0-1.0 (default: ${E.ducking})`,`--output <path> Output WAV path (default: ./podcast-<timestamp>.wav)`,`--speed <n> TTS speed 0.5-2.0 (default: ${E.speed})`,`--silence <sec> Silence between segments, 0-5.0 (default: ${E.silence})`],examples:['voxflow podcast --topic "AI in healthcare"','voxflow podcast --topic "climate change" --colloquial high --speakers 3','voxflow podcast --topic "tech news" --template news --language en','voxflow podcast --topic "AI" --format json --no-tts',"voxflow podcast --input podcast.podcast.json",'voxflow podcast --topic "debate" --engine legacy --length long --exchanges 20',"voxflow podcast --script dialogue.json --voice v-male-Bk7vD3xP",'voxflow podcast --topic "music history" --bgm background.mp3 --ducking 0.15']},synthesize:{usage:"<text>",description:"Synthesize a single text snippet to audio (alias: say)",aliasOf:"say",options:[`<text> Text to synthesize 
(positional arg or --text)`,`--text <text> Text to synthesize (alternative to positional)`,`--voice <id> TTS voice ID (default: ${_.voice})`,`--format <fmt> Output format: pcm, wav, mp3 (default: pcm → WAV)`,`--speed <n> TTS speed 0.5-2.0 (default: ${_.speed})`,`--volume <n> TTS volume 0.1-2.0 (default: ${_.volume})`,`--pitch <n> TTS pitch -12 to 12 (default: ${_.pitch})`,`--output <path> Output file path (default: ./tts-<timestamp>.wav)`],examples:['voxflow say "你好世界"','voxflow say "你好世界" --format mp3','voxflow synthesize "Welcome" --voice v-male-Bk7vD3xP --format mp3']},narrate:{usage:"[opts]",description:"Narrate a file, text, or script to audio",options:[`--input <file> Input .txt or .md file`,`--text <text> Inline text to narrate`,`--script <file> JSON script with per-segment voice/speed control`,`--voice <id> Default voice ID (default: ${A.voice})`,`--format <fmt> Output format: pcm, wav, mp3 (default: pcm → WAV)`,`--speed <n> TTS speed 0.5-2.0 (default: ${A.speed})`,`--silence <sec> Silence between segments, 0-5.0 (default: ${A.silence})`,`--output <path> Output file path (default: ./narration-<timestamp>.wav)`],examples:["voxflow narrate --input article.txt --voice v-female-R2s4N9qJ","voxflow narrate --script narration-script.json",'echo "Hello" | voxflow narrate --output hello.wav']},voices:{usage:"[opts]",description:"Browse and search available TTS voices",options:[`--search <query> Search by name, tone, style, description`,`--gender <m|f> Filter by gender: male/m or female/f`,`--language <code> Filter by language: zh, en, etc.`,`--extended Include extended voice library (380+ voices)`,`--json Output raw JSON instead of table`],examples:['voxflow voices --search "温柔" --gender female',"voxflow voices --extended --json"]},dub:{usage:"[opts]",description:"Dub video/audio from SRT subtitles (timeline-aligned TTS)",options:[`--srt <file> SRT subtitle file (required)`,`--video <file> Video file — merge dubbed audio into video`,`--voice <id> Default TTS voice 
ID (default: ${I.voice})`,`--voices <file> JSON speaker→voiceId map for multi-speaker dubbing`,`--speed <n> TTS speed 0.5-2.0 (default: ${I.speed})`,`--speed-auto Auto-adjust speed when audio overflows time slot`,`--bgm <file> Background music file to mix in`,`--ducking <n> BGM volume ducking 0-1.0 (default: ${I.ducking})`,`--patch <id> Re-synthesize a single caption by ID (patch mode)`,`--output <path> Output file path (default: ./dub-<timestamp>.wav)`],examples:["voxflow dub --srt subtitles.srt","voxflow dub --srt subtitles.srt --video input.mp4 --output dubbed.mp4","voxflow dub --srt subtitles.srt --voices speakers.json --speed-auto","voxflow dub --srt subtitles.srt --bgm music.mp3 --ducking 0.3","voxflow dub --srt subtitles.srt --patch 5 --output dub-existing.wav"]},asr:{usage:"[opts]",description:"Transcribe audio/video to text (alias: transcribe)",aliasOf:"transcribe",options:[`--input <file> Local audio or video file to transcribe`,`--url <url> Remote audio URL to transcribe (cloud only)`,`--mic Record from microphone (cloud only, requires sox)`,`--engine <type> auto (default) | local | cloud`,`--model <name> Whisper model: tiny, base (default), small, medium, large`,`--mode <type> auto (default) | sentence | flash | file (cloud only)`,`--lang <model> Language: 16k_zh (default), 16k_en, 16k_zh_en, 16k_ja, 16k_ko`,`--format <fmt> Output format: srt (default), txt, json`,`--output <path> Output file path (default: <input>.<format>)`,`--speakers Enable speaker diarization (cloud flash/file mode)`,`--speaker-number <n> Expected number of speakers (with --speakers)`,`--task-id <id> Resume polling an existing async task (cloud only)`],examples:["voxflow asr --input recording.mp3","voxflow asr --input recording.mp3 --engine local --model small","voxflow asr --input video.mp4 --format srt --lang 16k_zh","voxflow transcribe --input meeting.wav --speakers --speaker-number 3"]},translate:{usage:"[opts]",description:"Translate SRT subtitles, text, or 
files",options:[`--srt <file> SRT subtitle file to translate`,`--text <text> Inline text to translate`,`--input <file> Text file (.txt, .md) to translate`,`--from <lang> Source language code (default: auto-detect)`,`--to <lang> Target language code (required)`,`--output <path> Output file path (default: <input>-<lang>.<ext>)`,`--realign Adjust subtitle timing for target language length`,`--batch-size <n> Captions per LLM call, 1-20 (default: ${P.batchSize})`],examples:["voxflow translate --srt subtitles.srt --to en",'voxflow translate --text "你好世界" --to en',"voxflow translate --input article.txt --to en --output article-en.txt"]},"video-translate":{usage:"[opts]",description:"Translate entire video: ASR → translate → dub → merge",options:[`--input <file> Input video file (required)`,`--to <lang> Target language code (required)`,`--from <lang> Source language code (default: auto-detect)`,`--voice <id> TTS voice ID for dubbed audio`,`--voices <file> JSON speaker→voiceId map for multi-speaker dubbing`,`--realign Adjust subtitle timing for target language length`,`--speed <n> TTS speed 0.5-2.0 (default: ${L.speed})`,`--batch-size <n> Translation batch size, 1-20 (default: ${L.batchSize})`,`--keep-intermediates Keep intermediate files (SRT, audio) for debugging`,`--output <path> Output MP4 path (default: <input>-<lang>.mp4)`,`--asr-mode <mode> Override ASR mode: auto, sentence, flash, file`,`--asr-lang <engine> Override ASR engine: 16k_zh, 16k_en, 16k_ja, 16k_ko, etc.`],examples:["voxflow video-translate --input video.mp4 --to en","voxflow video-translate --input video.mp4 --from zh --to en --realign","voxflow video-translate --input video.mp4 --to ja --voice v-male-Bk7vD3xP"]},publish:{usage:"[opts]",description:"One-command build+merge+publish for Skill/Web orchestration",options:["--input <video> Mode A: video-translate then publish (requires --to)","--to <lang> Target language for Mode A","--from <lang> Source language for Mode A (default: auto)","--srt <file> Mode 
B: dub existing subtitles into video (requires --video)","--video <file> Video file for Mode B/Mode C","--audio <file> Mode C: merge existing audio into video","--voice <id> TTS voice for Mode A/B","--voices <file> Multi-speaker voice map for Mode A/B","--output <path> Final MP4 output path","--publish <target> local (default) | webhook | none","--publish-dir <dir> Local publish directory (for --publish local)","--publish-webhook <url> Webhook URL (for --publish webhook)","--platform <name> Platform metadata tag (default: generic)","--title <text> Title metadata","--json Print structured JSON result (recommended for skills)"],examples:["voxflow publish --input video.mp4 --to en --publish local","voxflow publish --srt captions.srt --video input.mp4 --publish local","voxflow publish --video input.mp4 --audio narration.mp3 --publish local","voxflow publish --input video.mp4 --to ja --publish webhook --publish-webhook https://publisher.example.com/hook --json"]},explain:{usage:"[opts]",description:"Generate an AI explainer video from a topic",options:[`--topic <text> Topic to explain (use "demo" for built-in demo)`,`--style <style> Visual style: modern (default), playful, corporate, chalkboard`,`--language <code> Script language: en (default), zh, ja, ko, etc.`,`--voice <id> TTS voice ID (default: ${O.voice})`,`--speed <n> TTS speed 0.5-2.0 (default: ${O.speed})`,`--scenes <n> Number of scenes, 3-12 (default: ${O.sceneCount})`,`--audio-only Skip video render, output WAV narration only`,`--cloud Render on cloud instead of local Remotion`,`--output <path> Output file path (default: ./explain-<timestamp>.mp4)`],examples:['voxflow explain --topic "What is React?"',"voxflow explain --topic demo --output demo.mp4",'voxflow explain --topic "区块链入门" --style chalkboard --voice v-male-Bk7vD3xP','voxflow explain --topic "Machine Learning" --audio-only']},present:{usage:"<--text|--url|--cards> [opts]",description:"Generate a short video from text or URL content",options:[`--text 
<text> Input text content`,`--url <url> URL to fetch and convert`,`--cards <path> Pre-generated cards.json (skip LLM)`,`--scheme <name> Visual scheme: noir, neon, editorial, aurora (default), brutalist`,`--voice <id> TTS voice ID (default: ${N.voice})`,`--speed <n> TTS speed 0.5-2.0 (default: ${N.speed})`,`--no-audio Skip TTS, render silent video only`,`--output <path> Output file path (default: ./present-<timestamp>.mp4)`],examples:['voxflow present --text "Claude Code 是一个 AI 编程工具" --scheme aurora',"voxflow present --url https://example.com/article --scheme noir","voxflow present --cards my-cards.json --no-audio",'voxflow present --text "React 入门指南" --voice v-male-Bk7vD3xP --output react.mp4']},say:{alias:"synthesize",description:"Alias for synthesize"},generate:{alias:"story",description:"Alias for story"},transcribe:{alias:"asr",description:"Alias for asr"}};e.exports={run:run}},929:(e,t,o)=>{const n=o(896);const s=o(928);const{API_BASE:r}=o(782);const{ApiError:a}=o(852);const{getMediaInfo:i,extractAudioForAsr:c}=o(388);const{uploadFileToCos:l}=o(567);const{recognize:u,detectMode:p,SENTENCE_MAX_MS:d,FLASH_MAX_MS:f,BASE64_MAX_BYTES:g,TASK_STATUS:m}=o(514);const{formatSrt:h,formatPlainText:w,formatJson:v,buildCaptionsFromFlash:x,buildCaptionsFromSentence:y,buildCaptionsFromFile:S}=o(813);const{checkWhisperAvailable:$,transcribeLocal:b}=o(126);const k={lang:"16k_zh",mode:"auto",format:"srt"};const T={"16k_zh":"Chinese (16kHz)","16k_en":"English (16kHz)","16k_zh_en":"Chinese-English (16kHz)","16k_ja":"Japanese (16kHz)","16k_ko":"Korean (16kHz)","16k_zh_dialect":"Chinese dialect (16kHz)","8k_zh":"Chinese (8kHz phone)","8k_en":"English (8kHz phone)"};const F={srt:".srt",txt:".txt",json:".json"};async function asr(e){let t=false;let o=[];const sigintHandler=()=>{if(t)return;t=true;console.log("\n\nASR cancelled.");for(const e of o){try{n.unlinkSync(e)}catch{}}process.exit(130)};process.on("SIGINT",sigintHandler);try{return await 
_asr(e,o)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _asr(e,t){const{token:a,api:d=r,input:f,url:$,mic:b=false,mode:E=k.mode,lang:_=k.lang,format:A=k.format,output:I,speakers:M=false,speakerNumber:P=0,taskId:L,engine:O="auto",model:N="base"}=e;if(L){return await resumePoll({apiBase:d,token:a,taskId:L,format:A,output:I,lang:_})}const C=resolveEngine(O);if(C==="local"){return await _asrLocal({input:f,format:A,output:I,model:N,lang:_})}const D=[f,$,b].filter(Boolean).length;if(D===0){throw new Error("No input specified. Provide one of:\n"+" --input <file> Local audio/video file\n"+" --url <url> Remote audio URL\n"+" --mic Record from microphone")}if(D>1){throw new Error("Specify only one input source: --input, --url, or --mic")}console.log("\n=== VoxFlow ASR ===");let R=f?s.resolve(f):null;if(b){R=await handleMicInput();t.push(R)}try{if(R&&!n.existsSync(R)){throw new Error(`Input file not found: ${R}`)}let e=0;let r=0;let k=$||null;if(R){console.log(`Input: ${s.basename(R)}`);const o=await i(R);e=o.durationMs;r=n.statSync(R).size;const a=formatDuration(e);const l=formatSize(r);console.log(`Duration: ${a}`);console.log(`Size: ${l}`);if(!o.hasAudio){throw new Error("Input file has no audio track.")}console.log(`\n[1/3] Extracting audio (16kHz mono WAV)...`);const u=await c(R);t.push(u.wavPath);e=u.durationMs;r=n.statSync(u.wavPath).size;R=u.wavPath;console.log(` OK (${formatSize(r)}, ${formatDuration(e)})`)}else{console.log(`Input: ${$}`);console.log(`(Remote URL — duration will be detected by ASR API)`)}const L=!!k;const O=E==="auto"?p(e,L||!!R,r):E;console.log(`Mode: ${O}`);console.log(`Language: ${T[_]||_}`);console.log(`Format: ${A}`);if(R&&!k){const e=O==="flash"||O==="file"&&r>g||O==="sentence"&&r>g;if(e){console.log(`\n[2/3] Uploading to COS...`);const e=await l(R,d,a);k=e.cosUrl;console.log(` OK (${e.key})`)}else{console.log(`\n[2/3] Uploading to COS... (skipped, using base64)`)}}else if(!R&&k){console.log(`\n[2/3] Uploading to COS... 
(skipped, using remote URL)`)}console.log(`\n[3/3] ASR speech recognition (${O})...`);const N=Date.now();const C=await u({apiBase:d,token:a,mode:O,url:k,filePath:O==="sentence"&&!k?R:undefined,durationMs:e,fileSize:r,lang:_,speakerDiarization:M,speakerNumber:P,wordInfo:A==="srt",onProgress:(e,t)=>{const o=e===m.WAITING?"Queued":e===m.PROCESSING?"Recognizing":"Unknown";process.stdout.write(`\r ${o}... (${Math.round(t/1e3)}s)`)}});const D=((Date.now()-N)/1e3).toFixed(1);console.log(`\n OK (${D}s)`);const q=C.audioTime||e/1e3||0;let j;switch(C.mode){case"flash":j=x(C.flashResult||[]);break;case"sentence":j=y(C.result,q,C.wordList);break;case"file":j=S(C.result,q);break;default:j=[{id:1,startMs:0,endMs:0,text:C.result||""}]}let z;switch(A){case"srt":z=h(j);break;case"txt":z=w(j,{includeSpeakers:M});break;case"json":z=v(j);break;default:throw new Error(`Unknown format: ${A}. Use: srt, txt, json`)}const U=F[A]||".txt";let B;if(I){B=s.resolve(I)}else if(f){const e=s.basename(f,s.extname(f));B=o.ab+"cli/"+s.dirname(f)+"/"+e+""+U}else if(b){B=s.resolve(`mic-${Date.now()}${U}`)}else{try{const e=new URL($);const t=s.basename(e.pathname,s.extname(e.pathname))||"asr";B=o.ab+"cli/"+t+""+U}catch{B=s.resolve(`asr-${Date.now()}${U}`)}}n.writeFileSync(B,z,"utf8");const W=C.quota||{};const V=1;const G=W.remaining??"?";console.log(`\n=== Done ===`);console.log(`Output: ${B}`);console.log(`Captions: ${j.length}`);console.log(`Duration: ${formatDuration(e||(C.audioTime||0)*1e3)}`);console.log(`Mode: ${C.mode}`);console.log(`Quota: ${V} used, ${G} remaining`);if(j.length>0&&A!=="json"){console.log(`\n--- Preview ---`);const e=j.slice(0,3);for(const t of e){const e=formatDuration(t.startMs);const o=t.speakerId?`[${t.speakerId}] `:"";const n=t.text.length>60?t.text.slice(0,57)+"...":t.text;console.log(` ${e} ${o}${n}`)}if(j.length>3){console.log(` ... 
(${j.length-3} more)`)}}return{outputPath:B,mode:C.mode,duration:e/1e3,captionCount:j.length,quotaUsed:V}}finally{for(const e of t){try{n.unlinkSync(e)}catch{}}}}async function _asrLocal(e){const{input:t,format:r=k.format,output:a,model:l="base",lang:u=k.lang}=e;console.log("\n=== VoxFlow ASR (Local Whisper) ===");if(!t){throw new Error("Local whisper engine requires --input <file>.\n"+"URL and microphone input are cloud-only features.\n"+"Use: voxflow asr --input <file> --engine local")}const p=s.resolve(t);if(!n.existsSync(p)){throw new Error(`Input file not found: ${p}`)}const d=await i(p);console.log(`Input: ${s.basename(p)}`);console.log(`Duration: ${formatDuration(d.durationMs)}`);console.log(`Engine: whisper (local)`);console.log(`Model: ${l}`);console.log(`\n[1/2] Extracting audio (16kHz mono WAV)...`);const f=await c(p);console.log(` OK (${formatSize(n.statSync(f.wavPath).size)})`);console.log(`[2/2] Whisper local recognition...`);const g=Date.now();const m=await b(f.wavPath,{model:l,lang:u});const x=((Date.now()-g)/1e3).toFixed(1);console.log(` OK (${x}s, ${m.length} segments)`);try{n.unlinkSync(f.wavPath)}catch{}let y;switch(r){case"srt":y=h(m);break;case"txt":y=w(m);break;case"json":y=v(m);break;default:throw new Error(`Unknown format: ${r}. Use: srt, txt, json`)}const S=F[r]||".txt";const $=a?s.resolve(a):o.ab+"cli/"+s.dirname(t)+"/"+s.basename(t,s.extname(t))+""+S;n.writeFileSync($,y,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${$}`);console.log(`Captions: ${m.length}`);console.log(`Duration: ${formatDuration(d.durationMs)}`);console.log(`Engine: whisper (local, no quota used)`);if(m.length>0&&r!=="json"){console.log(`\n--- Preview ---`);const e=m.slice(0,3);for(const t of e){const e=formatDuration(t.startMs);const o=t.text.length>60?t.text.slice(0,57)+"...":t.text;console.log(` ${e} ${o}`)}if(m.length>3){console.log(` ... 
(${m.length-3} more)`)}}return{outputPath:$,mode:"local",duration:d.durationMs/1e3,captionCount:m.length,quotaUsed:0}}function resolveEngine(e){if(e==="local"||e==="whisper"){const e=$();if(!e.available){throw new Error("Local whisper engine requires nodejs-whisper.\n"+"Install: npm install -g nodejs-whisper\n"+"Download a model: npx nodejs-whisper download\n"+"Or use: --engine cloud")}return"local"}if(e==="cloud"||e==="tencent")return"cloud";if(e==="auto"){const{available:e}=$();return e?"local":"cloud"}return"cloud"}async function resumePoll({apiBase:e,token:t,taskId:r,format:a,output:i,lang:c}){console.log(`\n=== VoxFlow ASR — Resume Task ===`);console.log(`Task ID: ${r}`);const{pollTaskResult:l,TASK_STATUS:u}=o(514);console.log(`Polling...`);const p=await l({apiBase:e,token:t,taskId:r,onProgress:(e,t)=>{const o=e===u.WAITING?"Queued":e===u.PROCESSING?"Recognizing":"?";process.stdout.write(`\r ${o}... (${Math.round(t/1e3)}s)`)}});console.log(`\n OK`);const d=S(p.result,p.audioTime);let f;switch(a){case"srt":f=h(d);break;case"txt":f=w(d);break;case"json":f=v(d);break;default:f=w(d)}const g=F[a]||".txt";const m=i?s.resolve(i):s.resolve(`task-${r}${g}`);n.writeFileSync(m,f,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${m}`);console.log(`Captions: ${d.length}`);console.log(`Duration: ${formatDuration((p.audioTime||0)*1e3)}`);return{outputPath:m,mode:"file",duration:p.audioTime,captionCount:d.length}}async function handleMicInput(){const{recordMic:e,checkRecAvailable:t}=o(384);const n=await t();if(!n.available){throw new Error(n.error)}console.log(`\nRecording from microphone...`);console.log(` Press Enter or Q to stop recording.`);console.log(` Max duration: 5 minutes.\n`);const{wavPath:s,durationMs:r,stopped:a}=await e({maxSeconds:300});console.log(`\n Recording ${a==="user"?"stopped":"finished"}: ${formatDuration(r)}`);return s}function formatDuration(e){if(!e||e<=0)return"0s";const t=Math.round(e/1e3);if(t<60)return`${t}s`;const 
o=Math.floor(t/60);const n=t%60;if(o<60)return`${o}m${n>0?n+"s":""}`;const s=Math.floor(o/60);const r=o%60;return`${s}h${r>0?r+"m":""}`}function formatSize(e){if(e<1024)return`${e} B`;if(e<1024*1024)return`${(e/1024).toFixed(1)} KB`;return`${(e/1024/1024).toFixed(1)} MB`}e.exports={asr:asr,ASR_DEFAULTS:k,ApiError:a}},944:(e,t,o)=>{const n=o(896);const s=o(928);const{DUB_DEFAULTS:r}=o(782);const{ApiError:a}=o(852);const{buildWav:i,getFileExtension:c}=o(56);const{parseSrt:l,formatSrt:u}=o(813);const{buildTimelinePcm:p,buildTimelineAudio:d,msToBytes:f,BYTES_PER_MS:g}=o(907);const{startSpinner:m}=o(339);const{synthesizeTTS:h}=o(675);function parseVoicesMap(e){if(!n.existsSync(e)){throw new Error(`Voices map file not found: ${e}`)}let t;try{t=JSON.parse(n.readFileSync(e,"utf8"))}catch(e){throw new Error(`Invalid JSON in voices map: ${e.message}`)}if(typeof t!=="object"||t===null||Array.isArray(t)){throw new Error('Voices map must be a JSON object: { "SpeakerName": "voiceId", ... }')}for(const[e,o]of Object.entries(t)){if(typeof o!=="string"||o.trim().length===0){throw new Error(`Invalid voice ID for speaker "${e}": must be a non-empty string`)}}return t}async function synthesizeCaption(e,t,o,n,s,r,a,i){const c=await h({apiBase:e,token:t,text:o,voiceId:n,speed:s??1,format:r||"pcm",index:a,total:i});const l=c.audio.length/g;console.log(` OK (${(c.audio.length/1024).toFixed(0)} KB, ${(l/1e3).toFixed(1)}s)`);return{audio:c.audio,quota:c.quota,durationMs:l}}async function dub(e){let t=false;const sigintHandler=()=>{if(t)return;t=true;console.log("\n\nDubbing cancelled.");process.exit(130)};process.on("SIGINT",sigintHandler);try{return await _dub(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _dub(e){const t=e.srt;if(!t){throw new Error("No SRT file provided. 
Usage: voxflow dub --srt <file.srt>")}const a=s.resolve(t);if(!n.existsSync(a)){throw new Error(`SRT file not found: ${a}`)}const i=n.readFileSync(a,"utf8");const c=l(i);if(c.length===0){throw new Error("SRT file contains no valid captions")}const u=e.voice||r.voice;const p=e.speed??r.speed;const f=e.speedAuto||false;const g=r.toleranceMs;const m=e.api;const h=e.token;const w=e.patch;const v=[];let x=null;if(e.voicesMap){x=parseVoicesMap(s.resolve(e.voicesMap))}let y=e.output;const S=!!e.video;const $=S?".mp4":".wav";if(!y){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);y=s.resolve(`dub-${e}${$}`)}console.log("\n=== VoxFlow Dub ===");console.log(`SRT: ${t} (${c.length} captions)`);console.log(`Voice: ${u}${x?` + voices map (${Object.keys(x).length} speakers)`:""}`);console.log(`Speed: ${p}${f?" (auto-compensate)":""}`);if(S)console.log(`Video: ${e.video}`);if(e.bgm)console.log(`BGM: ${e.bgm} (ducking: ${e.ducking??r.ducking})`);if(w!=null)console.log(`Patch: caption #${w}`);console.log(`Output: ${y}`);if(w!=null){return _dubPatch(e,c,y,v)}console.log(`\n[1/2] Synthesizing TTS audio (${c.length} captions)...`);const b=[];let k=null;let T=0;for(let e=0;e<c.length;e++){const t=c[e];const o=t.endMs-t.startMs;let n=u;if(x&&t.speakerId&&x[t.speakerId]){n=x[t.speakerId]}let s=await synthesizeCaption(m,h,t.text,n,p,"pcm",e,c.length);T++;k=s.quota;if(f&&s.durationMs>o+g){const r=s.durationMs/o;if(r<=2){const a=Math.min(p*r,2);process.stdout.write(` ↳ Re-synth #${t.id} (${(s.durationMs/1e3).toFixed(1)}s > ${(o/1e3).toFixed(1)}s, speed: ${a.toFixed(2)})...`);s=await synthesizeCaption(m,h,t.text,n,a,"pcm",e,c.length);T++;k=s.quota}else{const a=`Caption #${t.id}: audio too long (${(s.durationMs/1e3).toFixed(1)}s for ${(o/1e3).toFixed(1)}s slot, alpha=${r.toFixed(1)}). 
Consider shortening text.`;v.push(a);console.log(` ⚠ OVERFLOW: ${a}`);const i=await synthesizeCaption(m,h,t.text,n,2,"pcm",e,c.length);T++;k=i.quota;s=i}}b.push({startMs:t.startMs,endMs:t.endMs,audioBuffer:s.audio})}console.log("\n[2/2] Building timeline audio...");const{wav:F,duration:E}=d(b);const _=S?y.replace(/\.[^.]+$/,".wav"):y;const A=s.dirname(_);n.mkdirSync(A,{recursive:true});n.writeFileSync(_,F);const I=_.replace(/\.(wav|mp3|mp4)$/i,".txt");const M=c.map((e=>{const t=e.speakerId?`|${e.speakerId}`:"";const o=x&&e.speakerId&&x[e.speakerId]?`|${x[e.speakerId]}`:"";return`[${e.id}${t}${o}] ${e.text}`})).join("\n\n");n.writeFileSync(I,M,"utf8");const P=S||e.bgm;if(P){const{checkFfmpeg:t,mergeAudioVideo:s,mixWithBgm:a}=o(297);const i=await t();if(!i.available){throw new Error("ffmpeg is required for BGM mixing / video merging. Install it:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}let c=_;if(e.bgm){const t=_.replace(".wav","-mixed.wav");console.log(` Mixing BGM (ducking: ${e.ducking??r.ducking})...`);await a(_,e.bgm,t,{ducking:e.ducking??r.ducking});c=t;if(!S){n.copyFileSync(c,_);try{n.unlinkSync(c)}catch{}c=_}}if(S){console.log(" Merging with video...");await s(e.video,c,y);try{if(_!==y)n.unlinkSync(_);if(e.bgm){const e=_.replace(".wav","-mixed.wav");if(n.existsSync(e))n.unlinkSync(e)}}catch{}}}console.log(`\n=== Done ===`);console.log(`Output: ${y} (${(n.statSync(y).size/1024).toFixed(1)} KB)`);console.log(`Duration: ${E.toFixed(1)}s`);console.log(`Transcript: ${I}`);console.log(`Captions: ${c.length}`);console.log(`Quota: ${T} used, ${k?.remaining??"?"} remaining`);if(v.length>0){console.log(`\nWarnings (${v.length}):`);for(const e of v){console.log(` ⚠ ${e}`)}}return{outputPath:y,textPath:I,duration:E,quotaUsed:T,segmentCount:c.length,warnings:v}}async function _dubPatch(e,t,o,a){const c=e.patch;const l=e.api;const u=e.token;const p=e.voice||r.voice;const d=e.speed??r.speed;let 
g=null;if(e.voicesMap){g=parseVoicesMap(s.resolve(e.voicesMap))}const m=t.findIndex((e=>e.id===c));if(m===-1){throw new Error(`Caption #${c} not found in SRT. Available IDs: ${t.map((e=>e.id)).join(", ")}`)}const h=t[m];const w=o.replace(/\.[^.]+$/,".wav");if(!n.existsSync(w)){throw new Error(`Patch mode requires an existing output file. `+`Run a full dub first, then use --patch to update individual captions.`)}let v=p;if(g&&h.speakerId&&g[h.speakerId]){v=g[h.speakerId]}console.log(`\n[Patch] Re-synthesizing caption #${c}: "${h.text.slice(0,40)}..."`);const x=await synthesizeCaption(l,u,h.text,v,d,"pcm",0,1);const y=n.readFileSync(w);const S=y.subarray(44);const $=f(h.startMs);const b=f(h.endMs);S.fill(0,$,Math.min(b,S.length));const k=Math.min(x.audio.length,b-$,S.length-$);if(k>0){x.audio.copy(S,$,0,k)}const{wav:T}=i([S],0);n.writeFileSync(w,T);console.log(`\n=== Patch Done ===`);console.log(`Updated: caption #${c} in ${w}`);console.log(`Quota: 1 used, ${x.quota?.remaining??"?"} remaining`);return{outputPath:w,textPath:w.replace(/\.wav$/i,".txt"),duration:S.length/(24e3*2),quotaUsed:1,segmentCount:1,warnings:a}}e.exports={dub:dub,ApiError:a,_test:{parseVoicesMap:parseVoicesMap}}},484:(e,t,o)=>{const n=o(896);const s=o(928);const{EXPLAIN_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c}=o(852);const{buildWav:l}=o(56);const{mergeAudioVideo:u}=o(297);const{startSpinner:p}=o(339);const d={title:"What is React?",language:"en",style:"modern",scenes:[{type:"title",title:"What is React?",subtitle:"A JavaScript library for building user interfaces",narration:"Welcome to this quick explainer on React, one of the most popular frontend libraries in the world."},{type:"bullets",heading:"Core Concepts",bullets:["Component-based architecture","Virtual DOM for efficient updates","Declarative UI with JSX","Unidirectional data flow"],narration:"React is built around several core concepts. First, everything is a component. 
Second, it uses a virtual DOM for efficient rendering. Third, you write declarative UI with JSX syntax. And fourth, data flows in one direction, from parent to child."},{type:"bullets",heading:"Why Use React?",bullets:["Massive ecosystem and community","Reusable component library","Excellent developer tools"],narration:"So why choose React? It has a massive ecosystem with thousands of libraries. You can build reusable component libraries. And the developer tools are some of the best in the industry."},{type:"summary",heading:"Key Takeaways",points:["React makes UI development predictable and efficient","Components are the building blocks of React apps","The virtual DOM optimizes rendering performance"],narration:"To summarize: React makes UI development predictable and efficient. Components are the fundamental building blocks. And the virtual DOM ensures your app stays fast, even as it grows. Thanks for watching!"}]};async function synthesizeScene(e,t,o,n,s,r,l){process.stdout.write(` TTS scene [${r+1}/${l}]...`);let u,p;try{({status:u,data:p}=await a(`${e}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{text:o,voiceId:n,speed:s,volume:1}))}catch(t){console.log(" FAIL");c(t,e)}if(u!==200||p.code!=="success"){console.log(" FAIL");i(u,p,`TTS scene ${r+1}`)}const d=Buffer.from(p.audio,"base64");const f=Math.round(d.length/48);console.log(` OK (${(d.length/1024).toFixed(0)} KB, ${(f/1e3).toFixed(1)}s)`);return{pcm:d,durationMs:f,quota:p.quota}}function buildNarrationWav(e,t){const o=l(e,t);return o.wav}async function generateScript(e,t,o,{language:n="en",sceneCount:s=5,style:r="modern"}={}){let i,c;try{({status:i,data:c}=await a(`${e}/api/llm/generate-explain-script`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{topic:o,language:n,sceneCount:s,style:r}))}catch(e){console.log(` ⚠ LLM request failed: ${e.message}`);console.log(" Falling back to demo script 
template.");return buildFallbackScript(o,r)}if(i!==200||c.code!=="success"||!c.script){const e=c?.message||`HTTP ${i}`;console.log(` ⚠ LLM generation failed: ${e}`);console.log(" Falling back to demo script template.");return buildFallbackScript(o,r)}console.log(` ✓ Script generated: "${c.script.title}" (${c.script.scenes.length} scenes)`);return c.script}function buildFallbackScript(e,t){const o={...d,title:e,style:t};o.scenes=[{...d.scenes[0],title:e,narration:`Welcome to this explainer on ${e}.`},...d.scenes.slice(1)];return o}function findRemotionDir(){let e=__dirname;for(let t=0;t<5;t++){const t=s.join(e,"remotion");if(n.existsSync(s.join(t,"package.json"))){return t}e=s.dirname(e)}return null}function isRemotionAvailable(){const e=findRemotionDir();if(!e)return false;return n.existsSync(s.join(e,"node_modules","remotion"))}async function renderVideo(e,t,r,a){const i=findRemotionDir();if(!i){throw new Error("Remotion directory not found. Ensure remotion/ exists at the repo root with dependencies installed.")}const c=n.mkdtempSync(s.join(o(857).tmpdir(),"voxflow-explain-"));const l=s.join(c,"props.json");const u={fps:30,script:e,scenes:t};n.writeFileSync(l,JSON.stringify(u,null,2));const p=s.join(i,"render.ts");const{execFile:d}=o(317);const f=s.join(i,"node_modules",".bin","ts-node");return new Promise(((e,t)=>{const o=d(f,[p,"--props",l,"--output",r],{cwd:i,maxBuffer:50*1024*1024,timeout:6e5},((o,s,a)=>{try{n.unlinkSync(l);n.rmdirSync(c)}catch{}if(o){t(new Error(`Remotion render failed: ${o.message}\n${a}`));return}try{const t=s.trim().split("\n");const o=t[t.length-1];const n=JSON.parse(o);e(n)}catch{e({output:r})}}));if(o.stderr&&a){let e="";o.stderr.on("data",(t=>{e+=t.toString();const o=e.split("\n");e=o.pop()||"";for(const e of o){try{const t=JSON.parse(e);if(t.type==="progress"&&a){a(t.percent)}}catch{}}}))}}))}async function 
explain(e){const{token:t,api:o,topic:s="demo",voice:a=r.voice,style:i=r.style,language:c=r.language,speed:l=r.speed,scenes:f=r.sceneCount,audioOnly:g=false,cloud:m=false}=e;if(e.output&&e.output.endsWith(".mp3")){console.error("Error: MP3 output is not supported for explain. Use .wav or .mp4");process.exit(1)}const sigintHandler=()=>{console.log("\n\nGeneration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{let h;const w=s==="demo"||s==="Demo";if(w){console.log("\n[1/4] Using demo script (hardcoded)...");h={...d,style:i}}else{console.log("\n[1/4] Generating script via LLM...");console.log(` Topic: "${s}" (${c}, ${f} scenes)`);h=await generateScript(o,t,s,{language:c,sceneCount:f,style:i})}console.log(` Script: ${h.scenes.length} scenes, style: ${h.style}`);console.log(`\n[2/4] Synthesizing narration (${h.scenes.length} scenes)...`);const v=[];const x=[];let y=null;let S=null;for(let e=0;e<h.scenes.length;e++){const n=h.scenes[e];const s=await synthesizeScene(o,t,n.narration,a,l,e,h.scenes.length);v.push(s.pcm);if(!y&&s.quota)y=s.quota;S=s.quota;x.push({scene:n,durationMs:s.durationMs,audioSrc:""})}const $=buildNarrationWav(v,r.silence);const b=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);let k;if(g||!isRemotionAvailable()){if(!g&&!isRemotionAvailable()){console.log("\n[3/4] Remotion not available. Falling back to audio-only output.");console.log(" To enable video rendering, install Remotion:");console.log(" cd remotion && npm install")}else{console.log("\n[3/4] Building audio-only output...")}k=e.output||`explain-${b}.wav`;n.writeFileSync(k,$)}else{if(m){console.log("\n[3/4] Cloud rendering coming in Phase 2. Using local render.")}k=e.output||`explain-${b}.mp4`;const t=p(m?" Rendering video...":"\n[3/4] Rendering video...");try{const e=k+".silent.mp4";await renderVideo(h,x,e,(e=>{t.update(` Rendering video... 
${e}%`)}));t.stop("OK");console.log(" Merging narration audio...");const o=k+".narration.wav";n.writeFileSync(o,$);await u(e,o,k);try{n.unlinkSync(e)}catch{}try{n.unlinkSync(o)}catch{}console.log(" Audio merged OK")}catch(e){t.stop("FAIL");console.log(` Video render failed: ${e.message}`);console.log(" Falling back to audio-only...");k=k.replace(/\.mp4$/,".wav");n.writeFileSync(k,$)}}const T=k.replace(/\.(mp4|wav)$/,".json");n.writeFileSync(T,JSON.stringify(h,null,2));const F=x.reduce(((e,t)=>e+t.durationMs),0);const E=n.statSync(k);const _=(E.size/(1024*1024)).toFixed(1);const A=formatDuration(F);console.log("\n[4/4] Done!");console.log(`\n=== Output ===`);console.log(` File: ${k} (${_} MB, ${A})`);console.log(` Script: ${T}`);console.log(` Scenes: ${h.scenes.length}`);console.log(` Style: ${h.style}`);if(S){const e=y&&S?y.remaining-S.remaining:h.scenes.length*100;console.log(` Quota: ${e} used, ${S.remaining??"?"} remaining`)}return{outputPath:k,scriptPath:T,duration:F}}finally{process.removeListener("SIGINT",sigintHandler)}}function formatDuration(e){const t=Math.round(e/1e3);const o=Math.floor(t/60);const n=t%60;if(o===0)return`${n}s`;return`${o}m${n.toString().padStart(2,"0")}s`}e.exports={explain:explain}},80:(e,t,o)=>{const n=o(896);const s=o(928);const{NARRATE_DEFAULTS:r}=o(782);const{ApiError:a}=o(852);const{parseParagraphs:i,buildWav:c,concatAudioBuffers:l,getFileExtension:u}=o(56);const{startSpinner:p}=o(339);const{synthesizeTTS:d}=o(675);function parseScript(e){if(!n.existsSync(e)){throw new Error(`Script file not found: ${e}`)}let t;try{t=JSON.parse(n.readFileSync(e,"utf8"))}catch(e){throw new Error(`Invalid JSON in script file: ${e.message}`)}if(!t.segments||!Array.isArray(t.segments)||t.segments.length===0){throw new Error('Script must have a non-empty "segments" array')}for(let e=0;e<t.segments.length;e++){const o=t.segments[e];if(!o.text||typeof o.text!=="string"||o.text.trim().length===0){throw new Error(`Segment ${e+1} must have a non-empty 
"text" field`)}}return{segments:t.segments.map((e=>({text:e.text.trim(),voiceId:e.voiceId||undefined,speed:e.speed!=null?Number(e.speed):undefined,volume:e.volume!=null?Number(e.volume):undefined,pitch:e.pitch!=null?Number(e.pitch):undefined}))),silence:t.silence!=null?Number(t.silence):r.silence,output:t.output||undefined}}function stripMarkdown(e){return e.replace(/```[\s\S]*?```/g,"").replace(/`([^`]+)`/g,"$1").replace(/!\[[^\]]*\]\([^)]*\)/g,"").replace(/\[([^\]]+)\]\([^)]*\)/g,"$1").replace(/^#{1,6}\s+/gm,"").replace(/\*{1,3}([^*]+)\*{1,3}/g,"$1").replace(/_{1,3}([^_]+)_{1,3}/g,"$1").replace(/^[-*_]{3,}\s*$/gm,"").replace(/^>\s?/gm,"").replace(/\n{3,}/g,"\n\n").trim()}async function readStdin(){const e=[];for await(const t of process.stdin){e.push(t)}return Buffer.concat(e).toString("utf8")}async function synthesizeSegment(e,t,o,n,s,r,a,i,c,l){const u=await d({apiBase:e,token:t,text:o,voiceId:n,speed:s??1,volume:r??1,pitch:a,format:i||"pcm",index:c,total:l});const p=i==="mp3"?"MP3":i==="wav"?"WAV":"PCM";console.log(` OK (${(u.audio.length/1024).toFixed(0)} KB ${p})`);return u}async function narrate(e){let t=false;const sigintHandler=()=>{if(t)return;t=true;console.log("\n\nNarration cancelled.");process.exit(130)};process.on("SIGINT",sigintHandler);try{return await _narrate(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _narrate(e){const t=e.voice||r.voice;const o=e.speed??r.speed;const a=e.format||"pcm";const p=e.api;const d=e.token;let f;let g;let m;let h;if(e.script){const t=parseScript(e.script);f=t.segments;g=e.silence??t.silence;m=e.output||t.output;h=`script: ${e.script} (${f.length} segments)`}else if(e.input){const t=s.resolve(e.input);if(!n.existsSync(t)){throw new Error(`Input file not found: ${t}`)}let o=n.readFileSync(t,"utf8");const a=s.extname(t).toLowerCase();if(a===".md"||a===".markdown"){o=stripMarkdown(o)}const c=i(o);if(c.length===0){throw new Error("No text content found in input 
file")}f=c.map((e=>({text:e})));g=e.silence??r.silence;m=e.output;h=`file: ${e.input} (${f.length} paragraphs)`}else if(e.text){const t=i(e.text);if(t.length===0){throw new Error("No text content provided")}f=t.map((e=>({text:e})));g=e.silence??r.silence;m=e.output;h=`text: ${e.text.length} chars (${f.length} paragraphs)`}else if(!process.stdin.isTTY){const t=await readStdin();if(!t||t.trim().length===0){throw new Error("No input provided via stdin")}const o=i(t);if(o.length===0){throw new Error("No text content found in stdin input")}f=o.map((e=>({text:e})));g=e.silence??r.silence;h=`stdin (${f.length} paragraphs)`}else{throw new Error("No input provided. Use one of:\n"+" --input <file.txt> Read a text or markdown file\n"+' --text "text" Provide inline text\n'+" --script <file.json> Use a script with per-segment control\n"+' echo "text" | voxflow narrate Pipe from stdin')}const w=u(a);if(!m){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);m=s.resolve(`narration-${e}${w}`)}if(!m.endsWith(w)){m=m.replace(/\.(wav|mp3|pcm)$/i,"")+w}console.log("\n=== VoxFlow Narrate ===");console.log(`Input: ${h}`);console.log(`Voice: ${t}${e.script?" 
(may be overridden per segment)":""}`);console.log(`Format: ${a==="pcm"?"wav (pcm)":a}`);console.log(`Speed: ${o}`);if(a==="mp3"){console.log(`Output: ${m}`);console.log(` (MP3 mode: no silence inserted between segments)`)}else{console.log(`Silence: ${g}s`);console.log(`Output: ${m}`)}console.log(`\n[1/2] Synthesizing TTS audio (${f.length} segments)...`);const v=[];let x=null;for(let e=0;e<f.length;e++){const n=f[e];const s=await synthesizeSegment(p,d,n.text,n.voiceId||t,n.speed??o,n.volume,n.pitch,a,e,f.length);v.push(s.audio);x=s.quota}console.log("\n[2/2] Merging audio...");const{audio:y,wav:S,duration:$}=a==="mp3"||a==="wav"?l(v,a,g):c(v,g);const b=y||S;const k=s.dirname(m);n.mkdirSync(k,{recursive:true});n.writeFileSync(m,b);const T=m.replace(/\.(wav|mp3)$/i,".txt");const F=f.map(((e,t)=>{const o=e.voiceId?`[${t+1}|${e.voiceId}]`:`[${t+1}]`;return`${o} ${e.text}`})).join("\n\n");n.writeFileSync(T,F,"utf8");const E=f.length;console.log(`\n=== Done ===`);console.log(`Output: ${m} (${(b.length/1024).toFixed(1)} KB, ${$.toFixed(1)}s)`);console.log(`Transcript: ${T}`);console.log(`Segments: ${f.length}`);console.log(`Quota: ${E} used, ${x?.remaining??"?"} remaining`);return{outputPath:m,textPath:T,duration:$,quotaUsed:E,segmentCount:f.length,format:a}}e.exports={narrate:narrate,ApiError:a,_test:{parseScript:parseScript,stripMarkdown:stripMarkdown}}},35:(e,t,o)=>{const n=o(896);const s=o(928);const{PODCAST_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{buildWav:u}=o(56);const{startSpinner:p}=o(339);const{getIntentParams:d}=o(425);const{synthesizeTTS:f}=o(675);function loadScript(e){const t=s.resolve(e);if(!n.existsSync(t)){throw new Error(`Script file not found: ${t}`)}let o;try{o=JSON.parse(n.readFileSync(t,"utf8"))}catch(e){throw new Error(`Invalid JSON in script file: ${e.message}`)}if(!Array.isArray(o.segments)||o.segments.length===0){throw new Error('Script must contain a non-empty "segments" 
array.\n'+'Expected: { "segments": [{ "speaker": "Host", "text": "Hello" }, ...] }')}for(let e=0;e<o.segments.length;e++){const t=o.segments[e];if(!t||typeof t.speaker!=="string"||!t.speaker.trim()){throw new Error(`Script segment [${e}] is missing a valid "speaker" field`)}if(!t||typeof t.text!=="string"||!t.text.trim()){throw new Error(`Script segment [${e}] is missing a valid "text" field`)}}return{segments:o.segments.map((e=>({speaker:e.speaker.trim(),text:e.text.trim()}))),voiceMapping:o.voiceMapping||{}}}function parseDialogueText(e){const t=e.split("\n").filter((e=>e.trim()));const o=[];const n=/^([^::]+)[::]\s*(.+)$/;for(const e of t){const t=e.trim();if(!t)continue;const s=t.match(n);if(s){const e=s[1].trim();const t=s[2].trim();if(t){o.push({speaker:e,text:t})}}else if(t.length>0){o.push({speaker:"旁白",text:t})}}return o}function parseStructuredScript(e){if(!e?.dialogue||!Array.isArray(e.dialogue))return[];return e.dialogue.map((t=>{const o=e.speakers?.[t.speaker];const n=o?.name||t.speaker;return{speaker:n,text:t.text,intent:t.intent||null}}))}async function generateDialogueLegacy(e,t,o){const n=p("\n[1/3] Generating dialogue text (legacy)...");let s,r;try{({status:s,data:r}=await a(`${e}/api/llm/generate-dialogue`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{prompt:o.topic,style:o.style||o.template,length:o.length,dialogueMode:true,autoSpeakerNames:true,exchanges:o.exchanges}))}catch(t){n.stop("FAIL");c(t,e)}if(s!==200||r.code!=="success"){n.stop("FAIL");i(s,r,"Dialogue generation")}const l=r.text;const u=r.voiceMapping||{};const d=r.quota;n.stop("OK");const f=parseDialogueText(l);const g=[...new Set(f.map((e=>e.speaker)))];console.log(` ${l.length} 字, ${f.length} 段, ${g.length} 位说话者`);console.log(` 说话者: ${g.join(", ")}`);console.log(` 配额剩余: ${d?.remaining??"?"}`);return{text:l,segments:f,voiceMapping:u,speakers:g,quota:d,script:null}}async function generateDialogueAiSdk(e,t,o){const n=p("\n[1/3] Generating 
dialogue text (ai-sdk)...");const s={short:"1-3",medium:"3-5",long:"5-10"};let l,u;try{({status:l,data:u}=await a(`${e}/api/podcast/generate-script`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{topic:o.topic,speakerCount:o.speakers||r.speakers,colloquialLevel:o.colloquial||"medium",language:o.language||"zh-CN",duration:s[o.length]||"3-5",autoMatchVoices:true}))}catch(t){n.stop("FAIL");c(t,e)}if(l!==200||u.code!=="success"){n.stop("FAIL");i(l,u,"Podcast script generation")}const d=u.script;const f=u.voiceMapping||{};const g=u.quota;n.stop("OK");const m=parseStructuredScript(d);const h=[...new Set(m.map((e=>e.speaker)))];const w=m.map((e=>`${e.speaker}: ${e.text}`)).join("\n");console.log(` ${w.length} chars, ${m.length} segments, ${h.length} speakers`);console.log(` Speakers: ${h.join(", ")}`);if(d?.quality_score?.overall){console.log(` Quality score: ${d.quality_score.overall}/10`)}console.log(` Quota remaining: ${g?.remaining??"?"}`);return{text:w,segments:m,voiceMapping:f,speakers:h,quota:g,script:d}}async function synthesizeSegment(e,t,o,n,s,r,a,i,c){const l=await f({apiBase:e,token:t,text:o,voiceId:n,speed:s,volume:r,index:a,total:i,label:c});console.log(` OK (${(l.audio.length/1024).toFixed(0)} KB)`);return{pcm:l.audio,quota:l.quota}}async function synthesizeAll(e,t,o,n,s,r){console.log(`\n[2/3] 合成 TTS 音频 (${o.length} 段, 多角色)...`);const a=[];let i=null;for(let c=0;c<o.length;c++){const l=o[c];const u=n[l.speaker]||{};const p=r||u.voiceId||"v-female-R2s4N9qJ";const f=u.speed||s;const g=d(l.intent);const m=f*g.speed;const h=g.volume;const w=await synthesizeSegment(e,t,l.text,p,m,h,c,o.length,l.speaker);a.push(w.pcm);i=w.quota}return{pcmBuffers:a,quota:i}}function resolveEngine(e){if(e==="legacy")return"legacy";if(e==="ai-sdk")return"ai-sdk";return"ai-sdk"}async function podcast(e){const sigintHandler=()=>{console.log("\n\nGeneration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await 
_podcast(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _podcast(e){const t=e.style||e.template||r.template;const a=e.length||r.length;const i=e.exchanges||r.exchanges;const c=e.speed??r.speed;const l=e.silence??r.silence;const p=e.api;const d=e.token;const f=resolveEngine(e.engine||"auto");const g=e.colloquial||"medium";const m=e.speakers||r.speakers;const h=e.language||"zh-CN";const w=e.format==="json";const v=e.noTts||false;const x=e.voice||null;const y=e.script||null;const S=e.topic||"Latest trends in technology";if(e.input){return _podcastFromFile(e)}let $=e.output;if(!$){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);const t=v?".txt":".wav";$=s.resolve(`podcast-${e}${t}`)}console.log("\n=== VoxFlow Podcast Generator ===");console.log(`Topic: ${S}`);if(y){console.log(`Script: ${y}`)}else{console.log(`Engine: ${f}`);console.log(`Template: ${t}`);console.log(`Length: ${a}`);console.log(`Colloquial: ${g}`);console.log(`Speakers: ${m}`);console.log(`Language: ${h}`)}console.log(`Speed: ${c}`);if(x)console.log(`Voice: ${x}`);if(e.bgm)console.log(`BGM: ${e.bgm} (ducking: ${e.ducking??r.ducking})`);console.log(`API: ${p}`);if(!v)console.log(`Output: ${$}`);let b,k,T,F,E;if(y){console.log("\n[1/3] 加载脚本文件...");const e=loadScript(y);k=e.segments;T=e.voiceMapping;F=[...new Set(k.map((e=>e.speaker)))];b=k.map((e=>`${e.speaker}:${e.text}`)).join("\n");E=null;console.log(` ${b.length} chars, ${k.length} segments, ${F.length} speakers`);console.log(` Speakers: ${F.join(", ")}`)}else if(f==="ai-sdk"){const e=await generateDialogueAiSdk(p,d,{topic:S,style:t,length:a,exchanges:i,colloquial:g,speakers:m,language:h});b=e.text;k=e.segments;T=e.voiceMapping;F=e.speakers;E=e.script}else{const e=await generateDialogueLegacy(p,d,{topic:S,style:t,length:a,exchanges:i,template:t});b=e.text;k=e.segments;T=e.voiceMapping;F=e.speakers;E=e.script}if(k.length===0){throw new Error("No dialogue segments found in generated text")}if(w){const 
e=$.replace(/\.\w+$/,".podcast.json");const o={version:1,engine:f,topic:S,script:E||{dialogue:k.map((e=>({speaker:e.speaker,text:e.text})))},voiceMapping:T,meta:{colloquial:g,speakers:m,language:h,length:a,style:t}};const r=s.dirname(e);n.mkdirSync(r,{recursive:true});n.writeFileSync(e,JSON.stringify(o,null,2),"utf8");console.log(`\n JSON exported: ${e}`)}if(v){const e=$.endsWith(".txt")?$:$.replace(/\.\w+$/,".txt");const t=k.map(((e,t)=>`[${t+1}] ${e.speaker}:${e.text}`)).join("\n\n");const o=s.dirname(e);n.mkdirSync(o,{recursive:true});n.writeFileSync(e,t,"utf8");console.log(`\n=== Done (script only) ===`);console.log(`Script: ${e}`);return{outputPath:e,textPath:e,duration:0,quotaUsed:1}}console.log("\n Voice assignments:");for(const e of F){if(x){console.log(` ${e} → ${x} (override)`)}else{const t=T[e];if(t){console.log(` ${e} → ${t.voiceId}`)}else{console.log(` ${e} → (default)`)}}}const{pcmBuffers:_,quota:A}=await synthesizeAll(p,d,k,T,c,x);const I=e.bgm?"[3/4]":"[3/3]";console.log(`\n${I} 拼接音频...`);const{wav:M,duration:P}=u(_,l);const L=s.dirname($);n.mkdirSync(L,{recursive:true});n.writeFileSync($,M);const O=$.replace(/\.wav$/,".txt");const N=k.map(((e,t)=>`[${t+1}] ${e.speaker}:${e.text}`)).join("\n\n");n.writeFileSync(O,N,"utf8");if(e.bgm){const{checkFfmpeg:t,mixWithBgm:s}=o(297);const a=await t();if(!a.available){throw new Error("ffmpeg is required for BGM mixing. 
Install it:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}console.log(`\n[4/4] 混合背景音乐 (ducking: ${e.ducking??r.ducking})...`);const i=$.replace(".wav","-mixed.wav");await s($,e.bgm,i,{ducking:e.ducking??r.ducking});n.copyFileSync(i,$);try{n.unlinkSync(i)}catch{}}const C=(y?0:2)+k.length;console.log(`\n=== Done ===`);console.log(`Output: ${$} (${(n.statSync($).size/1024).toFixed(1)} KB, ${P.toFixed(1)}s)`);console.log(`Script: ${O}`);console.log(`Quota: ${C} used, ${A?.remaining??"?"} remaining`);return{outputPath:$,textPath:O,duration:P,quotaUsed:C}}async function _podcastFromFile(e){if(!e.token){throw new Error("Authentication required. Run `voxflow login` first.")}const t=s.resolve(e.input);if(!n.existsSync(t)){throw new Error(`Input file not found: ${t}`)}console.log(`\n=== Loading podcast from ${t} ===`);const o=JSON.parse(n.readFileSync(t,"utf8"));const a=o.script;const i=o.voiceMapping||{};const c=parseStructuredScript(a)||(a?.dialogue||[]).map((e=>({speaker:e.speaker,text:e.text})));if(c.length===0){throw new Error("No dialogue segments found in input file")}const l=e.speed??r.speed;const p=e.silence??r.silence;let d=e.output;if(!d){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);d=s.resolve(`podcast-${e}.wav`)}const f=[...new Set(c.map((e=>e.speaker)))];console.log(` ${c.length} segments, ${f.length} speakers`);const{pcmBuffers:g,quota:m}=await synthesizeAll(e.api,e.token,c,i,l);console.log("\n[3/3] Building audio...");const{wav:h,duration:w}=u(g,p);const v=s.dirname(d);n.mkdirSync(v,{recursive:true});n.writeFileSync(d,h);const x=d.replace(/\.wav$/,".txt");const y=c.map(((e,t)=>`[${t+1}] ${e.speaker}:${e.text}`)).join("\n\n");n.writeFileSync(x,y,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${d} (${(h.length/1024).toFixed(1)} KB, ${w.toFixed(1)}s)`);console.log(`Script: ${x}`);console.log(`Quota: ${c.length} TTS calls, ${m?.remaining??"?"} 
remaining`);return{outputPath:d,textPath:x,duration:w,quotaUsed:c.length}}e.exports={podcast:podcast,ApiError:l,_test:{parseDialogueText:parseDialogueText,parseStructuredScript:parseStructuredScript,resolveEngine:resolveEngine,loadScript:loadScript}}},712:(e,t,o)=>{const n=o(896);const s=o(928);const{PRESENT_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c}=o(852);const{chatCompletion:l}=o(133);const{buildWav:u,createSilence:p}=o(56);const{startSpinner:d}=o(339);const f={noir:"Scheme1-CinematicNoir",neon:"Scheme2-NeonGlass",editorial:"Scheme3-EditorialLuxury",aurora:"Scheme4-GradientAurora",brutalist:"Scheme5-BoldBrutalist"};const g=Object.keys(f);const m=10;const h=2;function findVideoPresentDir(){let e=__dirname;for(let t=0;t<5;t++){const t=s.join(e,"video-present");if(n.existsSync(s.join(t,"package.json"))){return t}e=s.dirname(e)}return null}function isRemotionReady(){const e=findVideoPresentDir();if(!e)return false;return n.existsSync(s.join(e,"node_modules","remotion"))}const w=`你是一个短视频卡片内容策划专家。请将以下内容转化为 5-10 张短视频卡片的 JSON 数据。\n\n## 卡片类型与数据格式\n\n每张卡片的 type 可以是: title, content, comparison, quote, ending\n\n### title 卡片\n{ "type": "title", "headline": "标题", "bullets": [{"icon": "emoji", "text": "要点", "highlight": "高亮词"}], "footer_quote": "底部引言", "tags": ["#标签"], "narration": "旁白" }\n\n### content 卡片(3 种布局)\nlayout: "list" — items: [{"icon": "emoji", "text": "内容", "highlight": "高亮词", "accent": "#hex"}]\nlayout: "grid-2x2" — items: [{"icon": "emoji", "title": "标题", "body": "内容", "accent": "#hex"}]\nlayout: "grid-1x2" — items: [{"icon": "emoji", "title": "标题", "body": "内容", "accent": "#hex"}]\n\n### comparison 卡片\n{ "type": "comparison", "headline": "...", "left": {"label": "A", "accent": "#hex", "items": ["..."]}, "right": {"label": "B", "accent": "#hex", "items": ["..."]}, "narration": "..." }\n\n### quote 卡片\n{ "type": "quote", "quote": "引用文字", "attribution": "来源", "accent": "#hex", "narration": "..." 
}\n\n### ending 卡片\n{ "type": "ending", "headline": "结尾标题", "cta": "行动号召", "sub_text": "副文字", "narration": "..." }\n\n## 要求\n\n1. 第一张必须是 title,最后一张必须是 ending\n2. 每张卡片必须包含 narration 字段——用口语化的中文写,像朋友在聊天一样自然,不是干巴巴地念卡片上的文字\n3. 推荐的 accent 颜色: #f5c842 (黄), #4ade80 (绿), #a78bfa (紫), #22d3ee (青), #ff6b35 (橙), #ef4444 (红)\n4. 返回纯 JSON,不要 markdown 代码块\n\n## JSON 结构\n\n{\n "meta": {\n "series_name": "系列名称",\n "series_tag": "标签",\n "author": "作者"\n },\n "cards": [ ... ]\n}\n\n## 输入内容\n\n`;async function generateCards(e,t,o){const n=[{role:"user",content:w+o}];const s=await l({apiBase:e,token:t,messages:n,temperature:.7,maxTokens:4e3});let r=s.content.trim();const a=r.match(/```(?:json)?\s*([\s\S]*?)```/);if(a)r=a[1].trim();const i=JSON.parse(r);if(!i.cards||!Array.isArray(i.cards)||i.cards.length<2){throw new Error("LLM returned invalid card structure (need at least 2 cards)")}return{cards:i,quota:s.quota}}async function fetchUrlContent(e,t=0){if(t>5)throw new Error("Too many redirects");const n=new URL(e);if(!["http:","https:"].includes(n.protocol)){throw new Error(`Unsupported URL protocol: ${n.protocol}`)}const s=o(692);const r=o(611);const a=e.startsWith("https")?s:r;return new Promise(((o,n)=>{const s=a.get(e,{headers:{"User-Agent":"VoxFlow-CLI/1.0"}},(e=>{if(e.statusCode>=300&&e.statusCode<400&&e.headers.location){return fetchUrlContent(e.headers.location,t+1).then(o,n)}const s=[];e.on("data",(e=>s.push(e)));e.on("end",(()=>{const e=Buffer.concat(s).toString("utf8");const t=e.replace(/<script[\s\S]*?<\/script>/gi,"").replace(/<style[\s\S]*?<\/style>/gi,"").replace(/<[^>]+>/g," ").replace(/&nbsp;/g," ").replace(/&amp;/g,"&").replace(/&lt;/g,"<").replace(/&gt;/g,">").replace(/&quot;/g,'"').replace(/&#39;/g,"'").replace(/\s+/g," ").trim();o(t.slice(0,8e3))}))}));s.on("error",n);s.setTimeout(15e3,(()=>{s.destroy();n(new Error("URL fetch timeout"))}))}))}async function synthesizeNarration(e,t,o,n,s,r,l){process.stdout.write(` TTS card [${r+1}/${l}]...`);let 
u,p;try{({status:u,data:p}=await a(`${e}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{text:o,voiceId:n,speed:s,volume:1}))}catch(t){console.log(" FAIL");c(t,e)}if(u!==200||p.code!=="success"){console.log(" FAIL");i(u,p,`TTS card ${r+1}`)}const d=Buffer.from(p.audio,"base64");const f=Math.round(d.length/48);console.log(` OK (${(d.length/1024).toFixed(0)} KB, ${(f/1e3).toFixed(1)}s)`);return{pcm:d,durationMs:f,quota:p.quota}}function computeDurations(e,t){return e.map(((e,o)=>{if(t&&t[o]&&t[o].durationMs>0){return Math.max(5,Math.ceil(t[o].durationMs/1e3)+h)}return m}))}async function renderVideo(e,t,r){const a=findVideoPresentDir();if(!a){throw new Error("video-present/ directory not found.\n"+" Install: cd video-present && npm install")}if(!n.existsSync(s.join(a,"node_modules","remotion"))){throw new Error("Remotion not installed in video-present/.\n"+" Run: cd video-present && npm install")}const i=s.join(a,"src","data");n.mkdirSync(i,{recursive:true});n.writeFileSync(s.join(i,"cards.json"),JSON.stringify(e,null,2));const c=f[t]||"Scheme4-GradientAurora";const{execFileSync:l}=o(317);const u=process.platform==="win32"?"npx.cmd":"npx";const p=["remotion","render",c,r,"--codec","h264","--image-format","jpeg"];l(u,p,{cwd:a,stdio:"pipe",timeout:6e5,maxBuffer:50*1024*1024})}async function present(e){const{token:t,api:s,text:a,url:i,cards:c,scheme:l=r.scheme,voice:f=r.voice,speed:g=r.speed,noAudio:m=false}=e;const sigintHandler=()=>{console.log("\n\nGeneration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{const r=m?3:4;let h=1;let w;let v=null;if(c){console.log(`\n[${h}/${r}] Loading cards from ${c}...`);const e=n.readFileSync(c,"utf8");w=JSON.parse(e);if(!w.cards||!Array.isArray(w.cards)){throw new Error('Invalid cards.json: missing "cards" array')}console.log(` Loaded ${w.cards.length} cards`)}else{let e;if(i){console.log(`\n[${h}/${r}] Fetching URL and generating 
cards...`);console.log(` URL: ${i}`);const t=d(" Fetching content...");e=await fetchUrlContent(i);t.stop(`OK (${e.length} chars)`)}else if(a){console.log(`\n[${h}/${r}] Generating cards from text...`);e=a}else{throw new Error("No input provided. Use --text, --url, or --cards")}console.log(` Generating card structure via LLM...`);const o=d(" Calling LLM...");try{const n=await generateCards(s,t,e);w=n.cards;v=n.quota;o.stop(`OK (${w.cards.length} cards)`)}catch(e){o.stop("FAIL");throw new Error(`Card generation failed: ${e.message}`)}}const x=w.cards.length;const y=w.cards.map((e=>e.type)).join(", ");console.log(` Cards: ${x} (${y})`);h++;let S=null;let $=null;let b=v;let k=v;if(!m){const e=w.cards.filter((e=>e.narration));console.log(`\n[${h}/${r}] Synthesizing narration (${e.length} cards with narration)...`);const o=[];S=[];for(let e=0;e<w.cards.length;e++){const n=w.cards[e];if(!n.narration){o.push(Buffer.alloc(0));S.push({durationMs:0});continue}const r=await synthesizeNarration(s,t,n.narration,f,g,e,x);o.push(r.pcm);S.push({durationMs:r.durationMs});if(!b&&r.quota)b=r.quota;k=r.quota}const n=computeDurations(w.cards,S);const a=o.map(((e,t)=>e.length>0?e:p(n[t],24e3)));if(a.some((e=>e.length>0))){const e=u(a,.5);$=e.wav}h++}console.log(`\n[${h}/${r}] Rendering video (scheme: ${l})...`);if(!isRemotionReady()){console.log(" Remotion not available. 
Install with:");console.log(" cd video-present && npm install");if($){console.log(" Outputting audio-only WAV instead.");const t=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);const o=e.output||`present-${t}.wav`;n.writeFileSync(o,$);const s=o.replace(/\.\w+$/,".json");n.writeFileSync(s,JSON.stringify(w,null,2));const r=S.reduce(((e,t)=>e+t.durationMs),0);console.log(`\n Output: ${o}`);return{outputPath:o,cardsPath:s,duration:r}}throw new Error("Remotion not installed and no audio to fall back to")}const T=computeDurations(w.cards,S);const F=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);const E=e.output||`present-${F}.mp4`;const _=d(" Rendering...");try{const e=E+".silent.mp4";await renderVideo(w,l,e);_.stop("OK");if($){console.log(" Merging narration audio...");const t=E+".narration.wav";n.writeFileSync(t,$);try{const{mergeAudioVideo:n}=o(297);await n(e,t,E);console.log(" Audio merged OK")}catch(t){console.log(` Audio merge failed (ffmpeg required): ${t.message}`);console.log(" Output is silent video.");n.renameSync(e,E)}finally{try{n.unlinkSync(e)}catch{}try{n.unlinkSync(t)}catch{}}}else{n.renameSync(e,E)}}catch(e){_.stop("FAIL");try{n.unlinkSync(E+".silent.mp4")}catch{}try{n.unlinkSync(E+".narration.wav")}catch{}throw new Error(`Video render failed: ${e.message}`)}h++;const A=E.replace(/\.mp4$/,".json");n.writeFileSync(A,JSON.stringify(w,null,2));const I=T.reduce(((e,t)=>e+t*1e3),0);const M=n.statSync(E);const P=(M.size/(1024*1024)).toFixed(1);console.log(`\n[${h}/${r}] Done!`);console.log("\n=== Output ===");console.log(` Video: ${E} (${P} MB, ${formatDuration(I)})`);console.log(` Cards: ${A}`);console.log(` Scheme: ${l}`);console.log(` Cards: ${x} (${y})`);if(k){const e=b&&k?b.remaining-k.remaining:x*100;console.log(` Quota: ${e} used, ${k.remaining??"?"} remaining`)}return{outputPath:E,cardsPath:A,duration:I}}finally{process.removeListener("SIGINT",sigintHandler)}}function formatDuration(e){const t=Math.round(e/1e3);const 
o=Math.floor(t/60);const n=t%60;if(o===0)return`${n}s`;return`${o}m${n.toString().padStart(2,"0")}s`}e.exports={present:present,SCHEMES:f,VALID_SCHEMES:g}},360:(e,t,o)=>{const n=o(896);const s=o(928);const r=o(982);const{API_BASE:a}=o(782);const{request:i}=o(852);const{videoTranslate:c}=o(863);const{dub:l}=o(944);const{mergeAudioVideo:u,getAudioDuration:p}=o(297);function ensureDir(e){n.mkdirSync(e,{recursive:true})}function defaultOutputPath(e){const t=s.basename(e,s.extname(e));return s.resolve(s.dirname(e),`${t}-published.mp4`)}function defaultPublishDir(){return s.resolve(process.cwd(),"published")}function toFileUrl(e){const t=e.split(s.sep).join("/");return t.startsWith("/")?`file://${t}`:`file:///${t}`}function copyToPublishDir(e,t,o){ensureDir(t);const r=(new Date).toISOString().replace(/[:.]/g,"-");const a=s.basename(e,s.extname(e));const i=`${a}-${o}-${r}.mp4`;const c=s.join(t,i);n.copyFileSync(e,c);return{target:"local",platform:o,status:"ready",publishedPath:c,publishUrl:toFileUrl(c)}}async function publishToWebhook(e,t){const o=new URL(e);if(o.protocol!=="https:"&&o.protocol!=="http:"){throw new Error("Webhook URL must use HTTP or HTTPS")}const n=o.hostname;if(n==="localhost"||n==="127.0.0.1"||n==="0.0.0.0"||n==="::1"||n.startsWith("10.")||n.startsWith("192.168.")||n.match(/^172\.(1[6-9]|2\d|3[01])\./)||n==="169.254.169.254"||n.startsWith("169.254.")||n.startsWith("fd")||n.startsWith("fc")||n.startsWith("fe80")||n.endsWith(".internal")||n.endsWith(".local")){throw new Error("Webhook URL must not target private networks")}const{status:s,data:r}=await i(e,{method:"POST",headers:{"Content-Type":"application/json"}},t);if(s>=400){throw new Error(`Webhook publish failed (${s}): ${r?.message||JSON.stringify(r)}`)}return{target:"webhook",status:r.status||"submitted",platform:r.platform||t.platform,publishUrl:r.publishUrl||r.url||null,platformJobId:r.jobId||null,raw:r}}async function publish(e){const sigintHandler=()=>{console.log("\n\nPublish 
cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _publish(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _publish(e){const{token:t,api:o=a,input:i,from:d,to:f,srt:g,video:m,audio:h,voice:w,voicesMap:v,speed:x,realign:y=false,batchSize:S,keepIntermediates:$=false,output:b,publishTarget:k="local",publishDir:T=defaultPublishDir(),publishWebhook:F,platform:E="generic",title:_}=e;const A=!!(i&&f);const I=!!(g&&m);const M=!!(m&&h);const P=[A,I,M].filter(Boolean).length;if(P!==1){throw new Error("Invalid publish input. Use exactly one mode:\n"+" 1) --input <video> --to <lang> (video-translate + publish)\n"+" 2) --srt <file> --video <video> (dub + publish)\n"+" 3) --video <video> --audio <audio> (merge existing + publish)")}if(!["local","webhook","none"].includes(k)){throw new Error(`--publish must be one of: local, webhook, none (got: ${k})`)}if(k==="webhook"&&!F){throw new Error("--publish webhook requires --publish-webhook <url>")}console.log("\n=== VoxFlow Publish ===");console.log(`Publish target: ${k}`);console.log(`Platform: ${E}`);let L;let O;let N={};if(A){if(!t){throw new Error('Publish mode "video-translate" requires authentication token')}console.log("Build mode: video-translate");const e=await c({token:t,api:o,input:i,from:d,to:f,voice:w,voicesMap:v,output:b?s.resolve(b):undefined,realign:y,batchSize:S,keepIntermediates:$,speed:x});L=e.outputPath;O="video-translate";N=e}else if(I){if(!t){throw new Error('Publish mode "srt-dub" requires authentication token')}console.log("Build mode: srt-dub");const e=await l({token:t,api:o,srt:g,video:m,voice:w,voicesMap:v,speed:x,output:b?s.resolve(b):undefined});L=e.outputPath;O="srt-dub";N=e}else{console.log("Build mode: merge-existing");L=b?s.resolve(b):defaultOutputPath(s.resolve(m));await u(m,h,L);O="merge-existing";N={outputPath:L,quotaUsed:0}}const C=s.resolve(L);const D=n.statSync(C);const R=await p(C);const q=`pub_${r.randomUUID()}`;const 
j={jobId:q,title:_||s.basename(C,s.extname(C)),buildMode:O,platform:E,createdAt:(new Date).toISOString(),artifact:{localPath:C,sizeBytes:D.size,durationSec:Number((R/1e3).toFixed(2))}};let z={target:"none",status:"skipped",platform:E,publishUrl:null};if(k==="local"){z=copyToPublishDir(C,s.resolve(T),E)}else if(k==="webhook"){const{localPath:e,...t}=j.artifact;z=await publishToWebhook(F,{...j,artifact:{...t,filePath:s.basename(j.artifact.localPath)}})}const U={...j,publish:z,quotaUsed:N.quotaUsed||0,buildResult:N};console.log("\n=== Publish Done ===");console.log(`Final video: ${C}`);console.log(`Duration: ${U.artifact.durationSec}s`);console.log(`Size: ${(U.artifact.sizeBytes/1024/1024).toFixed(2)} MB`);console.log(`Publish status: ${z.status}`);if(z.publishUrl){console.log(`Publish URL: ${z.publishUrl}`)}else if(z.publishedPath){console.log(`Published file: ${z.publishedPath}`)}return U}e.exports={publish:publish}},214:(e,t,o)=>{const n=o(896);const s=o(928);const{STORY_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{parseParagraphs:u,buildWav:p,createSilence:d}=o(56);const{startSpinner:f}=o(339);const{synthesizeTTS:g}=o(675);async function generateStory(e,t,o){const n=f("\n[1/3] Generating story text...");let s,r;try{({status:s,data:r}=await a(`${e}/api/llm/chat`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{messages:[{role:"user",content:o}],stream:false,temperature:.8}))}catch(t){n.stop("FAIL");c(t,e)}if(s!==200||r.code!=="success"){n.stop("FAIL");i(s,r,"LLM")}const l=r.content;const u=r.quota;n.stop("OK");console.log(` ${l.length} chars. 
Quota remaining: ${u?.remaining??"?"}`);return{story:l,quota:u}}async function synthesizeParagraph(e,t,o,n,s,r,a){const i=await g({apiBase:e,token:t,text:o,voiceId:n,speed:s,index:r,total:a});console.log(` OK (${(i.audio.length/1024).toFixed(0)} KB)`);return{pcm:i.audio,quota:i.quota}}async function synthesizeAll(e,t,o,n,s){console.log(`\n[2/3] Synthesizing TTS audio (${o.length} segments)...`);const r=[];let a=null;for(let i=0;i<o.length;i++){const c=await synthesizeParagraph(e,t,o[i],n,s,i,o.length);r.push(c.pcm);a=c.quota}return{pcmBuffers:r,quota:a}}async function story(e){const sigintHandler=()=>{console.log("\n\nGeneration cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _story(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _story(e){const t=e.voice||r.voice;const o=e.paragraphs||r.paragraphs;const a=e.speed??r.speed;const i=e.silence??r.silence;const c=e.api;const l=e.token;const d=e.topic||`请写一个适合5岁儿童的短故事,要求:\n1. 分${o}段,每段2-3句话\n2. 每段描述一个清晰的画面场景\n3. 语言简单易懂,充满童趣\n4. 段落之间用空行分隔\n5. 
不要添加段落编号,直接输出故事内容`;let f=e.output;if(!f){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);f=s.resolve(`story-${e}.wav`)}console.log("\n=== VoxFlow Story Generator ===");console.log(`Voice: ${t}`);console.log(`API: ${c}`);console.log(`Paragraphs: ${o}`);console.log(`Speed: ${a}`);console.log(`Output: ${f}`);const{story:g}=await generateStory(c,l,d);const m=u(g);if(m.length===0){throw new Error("No paragraphs found in generated story")}console.log(` ${m.length} paragraphs`);const{pcmBuffers:h,quota:w}=await synthesizeAll(c,l,m,t,a);console.log("\n[3/3] Merging audio...");const{wav:v,duration:x}=p(h,i);const y=s.dirname(f);n.mkdirSync(y,{recursive:true});n.writeFileSync(f,v);const S=f.replace(/\.wav$/,".txt");const $=m.map(((e,t)=>`[${t+1}] ${e}`)).join("\n\n");n.writeFileSync(S,$,"utf8");const b=1+m.length;console.log(`\n=== Done ===`);console.log(`Output: ${f} (${(v.length/1024).toFixed(1)} KB, ${x.toFixed(1)}s)`);console.log(`Story: ${S}`);console.log(`Quota: ${b} used, ${w?.remaining??"?"} remaining`);return{outputPath:f,textPath:S,duration:x,quotaUsed:b}}e.exports={story:story,ApiError:l,_test:{parseParagraphs:u,buildWav:p,createSilence:d}}},383:(e,t,o)=>{const n=o(896);const s=o(928);const{SYNTHESIZE_DEFAULTS:r}=o(782);const{request:a,throwApiError:i,throwNetworkError:c,ApiError:l}=o(852);const{buildWav:u,getFileExtension:p}=o(56);const{startSpinner:d}=o(339);async function synthesize(e){let t=false;const sigintHandler=()=>{if(t)return;t=true;console.log("\n\nSynthesis cancelled.");process.exit(130)};process.on("SIGINT",sigintHandler);try{return await _synthesize(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _synthesize(e){const t=e.text;if(!t||t.trim().length===0){throw new Error('No text provided. 
Usage: voxflow synthesize "your text here"')}const o=e.voice||r.voice;const l=e.speed??r.speed;const f=e.volume??r.volume;const g=e.pitch??r.pitch;const m=e.format||"pcm";const h=e.api;const w=e.token;const v=p(m);let x=e.output;if(!x){const e=(new Date).toISOString().replace(/[:.]/g,"-").slice(0,19);x=s.resolve(`tts-${e}${v}`)}console.log("\n=== VoxFlow Synthesize ===");console.log(`Voice: ${o}`);console.log(`Format: ${m==="pcm"?"wav (pcm)":m}`);console.log(`Speed: ${l}`);if(f!==1)console.log(`Volume: ${f}`);if(g!==0)console.log(`Pitch: ${g}`);console.log(`Text: ${t.length>60?t.slice(0,57)+"...":t}`);console.log(`Output: ${x}`);const y=d("\n[1/1] Synthesizing TTS audio...");let S,$;try{({status:S,data:$}=await a(`${h}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${w}`}},{text:t.trim(),voiceId:o,format:m,speed:l,volume:f,pitch:g}))}catch(e){y.stop("FAIL");c(e,h)}if(S!==200||$.code!=="success"){y.stop("FAIL");i(S,$,"TTS")}const b=Buffer.from($.audio,"base64");y.stop("OK");let k,T;if(m==="mp3"){k=b;T=b.length/4e3;console.log(` ${(b.length/1024).toFixed(0)} KB MP3`)}else if(m==="wav"){k=b;const e=b.length>44?b.readUInt32LE(28):48e3;const t=b.length>44?b.readUInt32LE(40):b.length;T=t/e;console.log(` ${(b.length/1024).toFixed(0)} KB WAV`)}else{const e=u([b],0);k=e.wav;T=e.duration;console.log(` ${(b.length/1024).toFixed(0)} KB PCM → WAV`)}const F=s.dirname(x);n.mkdirSync(F,{recursive:true});n.writeFileSync(x,k);const E=1;console.log(`\n=== Done ===`);console.log(`Output: ${x} (${(k.length/1024).toFixed(1)} KB, ${T.toFixed(1)}s)`);console.log(`Quota: ${E} used, ${$.quota?.remaining??"?"} remaining`);return{outputPath:x,duration:T,quotaUsed:E,format:m}}e.exports={synthesize:synthesize,ApiError:l}},585:(e,t,o)=>{const n=o(896);const s=o(928);const{API_BASE:r,TRANSLATE_DEFAULTS:a}=o(782);const{chatCompletion:i,detectLanguage:c}=o(133);const{parseSrt:l,formatSrt:u}=o(813);const p={zh:"Chinese 
(Simplified)",en:"English",ja:"Japanese",ko:"Korean",fr:"French",de:"German",es:"Spanish",pt:"Portuguese",ru:"Russian",ar:"Arabic",th:"Thai",vi:"Vietnamese",it:"Italian"};function batchCaptions(e,t=10){const o=[];for(let n=0;n<e.length;n+=t){o.push(e.slice(n,n+t))}return o}function buildTranslationPrompt(e,t,o){const n=[`You are a professional subtitle translator. Translate each numbered line from ${t} to ${o}.`,"","Rules:","- Return ONLY the translated lines, one per number","- Keep the exact same numbering (1., 2., 3., ...)","- Preserve [Speaker: xxx] tags unchanged — do NOT translate speaker names","- Keep translations concise and natural for subtitles","- Do not add explanations, notes, or extra text"].join("\n");const s=e.map(((e,t)=>{const o=e.speakerId?`[Speaker: ${e.speakerId}] `:"";return`${t+1}. ${o}${e.text}`})).join("\n");return{system:n,user:s}}function parseTranslationResponse(e,t){const o=e.trim().split("\n").filter((e=>e.trim()));const n=[];for(let e=0;e<t.length;e++){const s=new RegExp(`^${e+1}\\.\\s*(.+)$`);const r=o.find((e=>s.test(e.trim())));if(r){const o=r.trim().replace(s,"$1").trim();let a=o;const i=o.match(/^\[Speaker:\s*[^\]]+\]\s*/i);if(i){a=o.slice(i[0].length)}n.push({...t[e],text:a||t[e].text})}else{if(e<o.length){const s=o[e].replace(/^\d+\.\s*/,"").trim();let r=s;const a=s.match(/^\[Speaker:\s*[^\]]+\]\s*/i);if(a){r=s.slice(a[0].length)}n.push({...t[e],text:r||t[e].text})}else{n.push({...t[e]})}}}return n}function realignTimings(e,t){const o=.3;const n=100;const s=t.map(((t,s)=>{const r=e[s];if(!r)return t;const a=r.text.length;const i=t.text.length;if(a===0)return t;const c=i/a;if(c<1+o&&c>1-o){return t}const l=r.endMs-r.startMs;let u=Math.round(l*c);const p=s<e.length-1?e[s+1].startMs:Infinity;const d=p-t.startMs-n;if(u>d&&d>0){u=d}u=Math.max(u,500);return{...t,endMs:t.startMs+u}}));return s}async function translate(e){const sigintHandler=()=>{console.log("\n\nTranslation 
cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _translate(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _translate(e){const{token:t,api:o=r,srt:n,text:s,input:i,from:c,to:l,output:u,realign:p=false,batchSize:d=a.batchSize}=e;if(n)return _translateSrt({token:t,api:o,srt:n,from:c,to:l,output:u,realign:p,batchSize:d});if(s)return _translateText({token:t,api:o,text:s,from:c,to:l});if(i)return _translateFile({token:t,api:o,input:i,from:c,to:l,output:u});throw new Error("No input specified. Use --srt, --text, or --input")}async function _translateSrt({token:e,api:t,srt:r,from:c,to:d,output:f,realign:g,batchSize:m}){console.log("\n=== VoxFlow Translate (SRT) ===");const h=s.resolve(r);const w=n.readFileSync(h,"utf8");const v=l(w);if(v.length===0){throw new Error(`SRT file is empty or invalid: ${h}`)}console.log(`Input: ${s.basename(h)}`);console.log(`Captions: ${v.length}`);const x=c||await autoDetectLanguage(t,v);const y=p[x]||x;const S=p[d]||d;console.log(`From: ${y} (${x})`);console.log(`To: ${S} (${d})`);console.log(`Realign: ${g?"yes":"no"}`);const $=batchCaptions(v,m);console.log(`Batches: ${$.length} (batch size: ${m})`);console.log("");let b=[];let k=0;for(let o=0;o<$.length;o++){const n=$[o];process.stdout.write(` [${o+1}/${$.length}] Translating ${n.length} captions...`);const{system:s,user:r}=buildTranslationPrompt(n,y,S);const c=await i({apiBase:t,token:e,messages:[{role:"system",content:s},{role:"user",content:r}],temperature:a.temperature,maxTokens:a.maxTokens});const l=parseTranslationResponse(c.content,n);b=b.concat(l);k++;if(c.quota){console.log(` OK (remaining: ${c.quota.remaining})`)}else{console.log(" OK")}}if(g){console.log(" Re-aligning subtitle timing...");b=realignTimings(v,b)}b=b.map(((e,t)=>({...e,id:t+1})));const T=u(b);let F;if(f){F=s.resolve(f)}else{const e=s.basename(h,s.extname(h));const 
t=s.dirname(h);F=o.ab+"cli/"+t+"/"+e+"-"+d+".srt"}n.writeFileSync(F,T,"utf8");console.log(`\n=== Done ===`);console.log(`Output: ${F}`);console.log(`Captions: ${b.length}`);console.log(`Quota: ${k} used`);if(b.length>0){console.log(`\n--- Preview ---`);const e=b.slice(0,3);for(const t of e){const e=t.speakerId?`[${t.speakerId}] `:"";const o=t.text.length>60?t.text.slice(0,57)+"...":t.text;console.log(` ${t.id}. ${e}${o}`)}if(b.length>3){console.log(` ... (${b.length-3} more)`)}}return{outputPath:F,captionCount:b.length,quotaUsed:k,from:x,to:d}}async function _translateText({token:e,api:t,text:o,from:n,to:s}){console.log("\n=== VoxFlow Translate (Text) ===");const r=n||await autoDetectLanguage(t,[{text:o}]);const c=p[r]||r;const l=p[s]||s;console.log(`From: ${c} → To: ${l}`);const u=await i({apiBase:t,token:e,messages:[{role:"system",content:`You are a professional translator. Translate the following text from ${c} to ${l}. Return ONLY the translation, no explanations.`},{role:"user",content:o}],temperature:a.temperature,maxTokens:a.maxTokens});const d=u.content.trim();console.log(`\n${d}`);const f=u.quota?u.quota.remaining:"?";console.log(`\n(Quota: 1 used, ${f} remaining)`);return{text:d,quotaUsed:1,from:r,to:s}}async function _translateFile({token:e,api:t,input:r,from:c,to:l,output:u}){console.log("\n=== VoxFlow Translate (File) ===");const d=s.resolve(r);const f=n.readFileSync(d,"utf8");if(f.trim().length===0){throw new Error(`Input file is empty: ${d}`)}console.log(`Input: ${s.basename(d)}`);console.log(`Length: ${f.length} chars`);const g=c||await autoDetectLanguage(t,[{text:f}]);const m=p[g]||g;const h=p[l]||l;console.log(`From: ${m} → To: ${h}`);const w=await i({apiBase:t,token:e,messages:[{role:"system",content:`You are a professional translator. Translate the following document from ${m} to ${h}. Preserve the original formatting (paragraphs, line breaks, markdown). 
Return ONLY the translation.`},{role:"user",content:f}],temperature:a.temperature,maxTokens:Math.max(a.maxTokens,4e3)});const v=w.content.trim();let x;if(u){x=s.resolve(u)}else{const e=s.extname(d);const t=s.basename(d,e);const n=s.dirname(d);x=o.ab+"cli/"+n+"/"+t+"-"+l+""+e}n.writeFileSync(x,v+"\n","utf8");const y=w.quota?w.quota.remaining:"?";console.log(`\n=== Done ===`);console.log(`Output: ${x}`);console.log(`Quota: 1 used, ${y} remaining`);return{outputPath:x,quotaUsed:1,from:g,to:l}}async function autoDetectLanguage(e,t){const o=t.slice(0,3).map((e=>e.text)).join(" ");const n=await c({apiBase:e,text:o});return n||"auto"}e.exports={translate:translate,LANG_MAP:p,_test:{buildTranslationPrompt:buildTranslationPrompt,parseTranslationResponse:parseTranslationResponse,realignTimings:realignTimings,batchCaptions:batchCaptions}}},863:(e,t,o)=>{const n=o(896);const s=o(928);const r=o(857);const{checkFfmpeg:a,extractAudio:i}=o(297);const{asr:c}=o(929);const{translate:l}=o(585);const{dub:u}=o(944);const{detectLanguage:p}=o(133);const{parseSrt:d}=o(813);const{API_BASE:f,VIDEO_TRANSLATE_DEFAULTS:g}=o(782);const m={zh:"16k_zh",en:"16k_en",ja:"16k_ja",ko:"16k_ko","zh-en":"16k_zh_en"};function resolveAsrLang(e,t){if(t)return t;if(e&&m[e])return m[e];return"16k_zh"}async function videoTranslate(e){const sigintHandler=()=>{console.log("\n\nVideo translation cancelled.");process.exit(0)};process.on("SIGINT",sigintHandler);try{return await _videoTranslate(e)}finally{process.removeListener("SIGINT",sigintHandler)}}async function _videoTranslate(e){const{token:t,api:m=f,input:h,from:w,to:v,voice:x,voicesMap:y,realign:S=false,output:$,keepIntermediates:b=false,batchSize:k=g.batchSize,speed:T=g.speed,asrMode:F,asrLang:E}=e;const _=s.resolve(h);const A=s.basename(_,s.extname(_));console.log("\n=== VoxFlow Video Translate ===");console.log(`Input: ${s.basename(_)}`);console.log(`Target: ${v}`);console.log("");const I=n.mkdtempSync(s.join(r.tmpdir(),"voxflow-vtranslate-"));let 
M=0;const P={};try{process.stdout.write("[1/4] Checking FFmpeg... ");const e=await a();if(!e.available){throw new Error("FFmpeg is required for video-translate. Install: https://ffmpeg.org/download.html")}console.log(`OK (${e.version})`);process.stdout.write("[2/4] Transcribing audio... ");const r=s.join(I,"extracted-audio.wav");await i(_,r);const f=s.join(I,"source.srt");const g=resolveAsrLang(w,E);const h={token:t,api:m,input:r,format:"srt",output:f,lang:g};if(F)h.mode=F;const L=await c(h);if(L.captionCount===0){throw new Error("ASR produced no captions. The video may have no audible speech.")}P.asr={mode:L.mode,duration:L.duration,captionCount:L.captionCount,quotaUsed:L.quotaUsed};M+=L.quotaUsed;console.log(`${L.captionCount} captions (${L.mode} mode)`);let O=w;if(!O){const e=n.readFileSync(f,"utf8");const t=d(e);const o=t.slice(0,3).map((e=>e.text)).join(" ");O=await p({apiBase:m,text:o})||"auto"}process.stdout.write(`[3/4] Translating (${O} → ${v})... `);const N=s.join(I,`translated-${v}.srt`);const C=await l({token:t,api:m,srt:f,from:O,to:v,output:N,realign:S,batchSize:k});P.translate={from:C.from,to:C.to,captionCount:C.captionCount,quotaUsed:C.quotaUsed};M+=C.quotaUsed;console.log(`${C.captionCount} captions translated`);process.stdout.write("[4/4] Dubbing and merging video... 
");const D=$?s.resolve($):o.ab+"cli/"+s.dirname(_)+"/"+A+"-"+v+".mp4";const R=await u({token:t,api:m,srt:N,voice:x,voicesMap:y,speed:T,video:_,output:D});P.dub={segmentCount:R.segmentCount,duration:R.duration,quotaUsed:R.quotaUsed,warnings:R.warnings};M+=R.quotaUsed;console.log(`${R.segmentCount} segments dubbed`);if(b){const e=s.resolve(s.dirname(D),`${A}-${v}-intermediates`);n.mkdirSync(e,{recursive:true});const t=[["extracted-audio.wav",r],["source.srt",f],[`translated-${v}.srt`,N]];for(const[o,r]of t){if(n.existsSync(r)){n.copyFileSync(r,s.join(e,o))}}console.log(`\nIntermediates saved: ${e}`)}console.log("\n=== Done ===");console.log(`Output: ${D}`);console.log(`Language: ${O} → ${v}`);console.log(`Captions: ${C.captionCount}`);console.log(`Duration: ${R.duration.toFixed(1)}s`);console.log(`Quota: ${M} used`);if(P.dub.warnings&&P.dub.warnings.length>0){console.log(`\nWarnings:`);for(const e of P.dub.warnings){console.log(` - ${e}`)}}return{outputPath:D,from:O,to:v,captionCount:C.captionCount,quotaUsed:M,stages:P}}finally{if(!b){try{n.rmSync(I,{recursive:true,force:true})}catch{}}}}e.exports={videoTranslate:videoTranslate}},784:(e,t,o)=>{const{request:n,throwNetworkError:s}=o(852);async function voices(e){const t=e.api;const o=e.extended?"true":"false";let r,a;try{({status:r,data:a}=await n(`${t}/api/tts/voices?includeExtended=${o}`,{method:"GET"}))}catch(e){s(e,t)}if(r!==200){throw new Error(`Failed to fetch voices (${r}): ${a?.message||"unknown error"}`)}let i=a.voices||a.data?.voices||[];if(e.gender){const t=normalizeGender(e.gender);if(!t){console.error(`Error: --gender must be one of: male, m, female, f (got: "${e.gender}")`);process.exit(1)}i=i.filter((e=>{const o=(e.gender||"").toLowerCase();return o===t}))}if(e.language){const t=e.language.toLowerCase();i=i.filter((e=>(e.language||"").toLowerCase()===t))}if(e.search){const t=e.search.toLowerCase();i=i.filter((e=>{const o=[e.name,e.nameEn,e.tone,e.style,e.description,e.scenarios].filter(Boolean).join(" 
").toLowerCase();return o.includes(t)}))}if(i.length===0){console.log("No voices match your criteria.");return}if(e.json){console.log(JSON.stringify(i,null,2))}else{printTable(i)}console.log(`\nFound ${i.length} voice${i.length===1?"":"s"}.`)}function normalizeGender(e){const t=(e||"").toLowerCase().trim();if(t==="male"||t==="m")return"male";if(t==="female"||t==="f")return"female";return null}function printTable(e){const t=24;const o=14;const n=8;const s=22;const r=20;const a=["ID".padEnd(t),"Name".padEnd(o),"Gender".padEnd(n),"Tone".padEnd(s),"Style".padEnd(r)].join(" ");console.log(`\n${a}`);console.log("-".repeat(a.length));for(const a of e){const e=[truncate(a.id||"",t).padEnd(t),truncate(a.name||"",o).padEnd(o),truncate(a.gender||"",n).padEnd(n),truncate(a.tone||"",s).padEnd(s),truncate(a.style||"",r).padEnd(r)].join(" ");console.log(e)}}function truncate(e,t){if(e.length<=t)return e;return e.slice(0,t-1)+"…"}e.exports={voices:voices}},514:(e,t,o)=>{const n=o(896);const{request:s,throwApiError:r,throwNetworkError:a,ApiError:i}=o(852);const c=6e4;const l=72e5;const u=5*1024*1024;const p=3e3;const d=3e5;const f={WAITING:0,PROCESSING:1,SUCCESS:2,FAILED:3};function detectMode(e,t,o){if(e<=c&&o<=u){return"sentence"}if(e<=l&&t){return"flash"}return"file"}function authHeaders(e){return{"Content-Type":"application/json",Authorization:`Bearer ${e}`}}async function recognizeSentence(e){const{apiBase:t,token:o,url:c,filePath:l,lang:u="16k_zh",wordInfo:p=false}=e;const d={EngSerViceType:u,VoiceFormat:"wav",SubServiceType:2,WordInfo:p?1:0,ConvertNumMode:1};if(c){d.Url=c;d.SourceType=0}else if(l){const e=n.readFileSync(l);d.Data=e.toString("base64");d.DataLen=e.length;d.SourceType=1}else{throw new Error("Either url or filePath is required for sentence recognition")}try{const{status:e,data:n}=await s(`${t}/api/asr/sentence`,{method:"POST",headers:authHeaders(o)},d);if(e!==200||n.code!=="success"){r(e,n,"ASR 
sentence")}return{result:n.result,audioTime:n.audioTime,wordList:n.wordList||[],requestId:n.requestId,quota:n.quota}}catch(e){if(e instanceof i)throw e;a(e,t)}}async function recognizeFlash(e){const{apiBase:t,token:o,url:n,lang:c="16k_zh",speakerDiarization:l=false,speakerNumber:u=0}=e;if(!n){throw new Error("Flash recognition requires a URL (cannot use base64 data)")}const p={engine_type:c,voice_format:"wav",url:n,speaker_diarization:l?1:0,speaker_number:u,filter_dirty:0,filter_modal:0,filter_punc:0,convert_num_mode:1,word_info:1,first_channel_only:1};try{const{status:e,data:n}=await s(`${t}/api/asr/flash`,{method:"POST",headers:authHeaders(o)},p);if(e!==200||n.code!=="success"){r(e,n,"ASR flash")}return{flashResult:n.flash_result||[],audioDuration:n.audio_duration||0,requestId:n.request_id,quota:n.quota}}catch(e){if(e instanceof i)throw e;a(e,t)}}async function submitFileTask(e){const{apiBase:t,token:o,url:c,filePath:l,lang:p="16k_zh",speakerDiarization:d=false,speakerNumber:f=0}=e;const g={EngineModelType:p,ChannelNum:1,ResTextFormat:0,FilterDirty:0,FilterModal:0,FilterPunc:0,ConvertNumMode:1,SpeakerDiarization:d?1:0,SpeakerNumber:f};if(c){g.Url=c;g.SourceType=0}else if(l){const e=n.readFileSync(l);if(e.length>u){throw new Error(`File too large for base64 upload (${(e.length/1024/1024).toFixed(1)} MB). 
`+"Upload to COS first or use flash mode with a URL.")}g.Data=e.toString("base64");g.DataLen=e.length;g.SourceType=1}else{throw new Error("Either url or filePath is required for file recognition")}try{const{status:e,data:n}=await s(`${t}/api/asr/file`,{method:"POST",headers:authHeaders(o)},g);if(e!==200||n.code!=="success"){r(e,n,"ASR file submit")}return{taskId:n.taskId,requestId:n.requestId,quota:n.quota}}catch(e){if(e instanceof i)throw e;a(e,t)}}async function pollTaskResult(e){const{apiBase:t,token:o,taskId:n,pollIntervalMs:c=p,pollTimeoutMs:l=d,onProgress:u}=e;const g=Date.now();while(true){const e=Date.now()-g;if(e>l){throw new Error(`ASR task ${n} timed out after ${Math.round(e/1e3)}s. `+"The task may still complete — check later with: voxflow asr --task-id "+n)}try{const{status:a,data:i}=await s(`${t}/api/asr/result/${n}`,{method:"GET",headers:authHeaders(o)});if(a!==200||i.code!=="success"){r(a,i,"ASR poll")}const c=i.data;const l=c.Status;if(u)u(l,e);if(l===f.SUCCESS){return{result:c.Result,audioTime:c.AudioTime,status:l}}if(l===f.FAILED){throw new Error(`ASR task ${n} failed: ${c.Result||"Unknown error"}`)}}catch(o){if(o instanceof i)throw o;if(e+c<l){}else{a(o,t)}}await sleep(c)}}async function recognize(e){const{mode:t="auto",url:o,filePath:n,durationMs:s,fileSize:r=0}=e;const a=!!o;const i=t==="auto"?detectMode(s,a,r):t;switch(i){case"sentence":{const t=await recognizeSentence(e);return{mode:"sentence",result:t.result,audioTime:t.audioTime,wordList:t.wordList,quota:t.quota}}case"flash":{if(!o){throw new Error("Flash mode requires a URL. 
Upload the file to COS first, or use --mode auto.")}const t=await recognizeFlash(e);const n=(t.flashResult||[]).flatMap((e=>e.sentence_list?e.sentence_list.map((e=>e.text)):[e.text])).join("");return{mode:"flash",result:n,flashResult:t.flashResult,audioDuration:t.audioDuration,audioTime:(t.audioDuration||0)/1e3,quota:t.quota}}case"file":{const t=await submitFileTask(e);const o=await pollTaskResult({apiBase:e.apiBase,token:e.token,taskId:t.taskId,onProgress:e.onProgress});return{mode:"file",result:o.result,audioTime:o.audioTime,taskId:t.taskId,quota:t.quota}}default:throw new Error(`Unknown ASR mode: ${i}. Use: auto, sentence, flash, or file`)}}function sleep(e){return new Promise((t=>setTimeout(t,e)))}e.exports={recognize:recognize,recognizeSentence:recognizeSentence,recognizeFlash:recognizeFlash,submitFileTask:submitFileTask,pollTaskResult:pollTaskResult,detectMode:detectMode,SENTENCE_MAX_MS:c,FLASH_MAX_MS:l,BASE64_MAX_BYTES:u,TASK_STATUS:f}},388:(e,t,o)=>{const{execFile:n}=o(317);const s=o(928);const r=o(857);const a=o(896);function runCommand(e,t,o){return new Promise(((s,r)=>{n(e,t,{timeout:6e5,...o},((e,t,o)=>{if(e){e.stderr=o;e.stdout=t;r(e)}else{s({stdout:t,stderr:o})}}))}))}async function getMediaInfo(e){const t=s.resolve(e);if(!a.existsSync(t)){throw new Error(`File not found: ${t}`)}try{const{stdout:e}=await runCommand("ffprobe",["-v","error","-show_entries","format=duration","-show_entries","stream=codec_type,codec_name,sample_rate,channels","-of","json",t]);const o=JSON.parse(e);const n=o.streams||[];const s=o.format||{};const r=n.find((e=>e.codec_type==="audio"));const a=n.find((e=>e.codec_type==="video"));const i=parseFloat(s.duration);const c=isNaN(i)?0:Math.round(i*1e3);return{durationMs:c,hasVideo:!!a,hasAudio:!!r,audioCodec:r?r.codec_name:null,sampleRate:r?parseInt(r.sample_rate,10):null,channels:r?parseInt(r.channels,10):null}}catch(t){if(t.code==="ENOENT"){throw new Error("ffprobe not found. 
Please install ffmpeg:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}throw new Error(`Failed to probe media file ${e}: ${t.message}`)}}async function extractAudioForAsr(e,t={}){const o=s.resolve(e);if(!a.existsSync(o)){throw new Error(`File not found: ${o}`)}const n=t.outputDir||r.tmpdir();const i=s.basename(o,s.extname(o));const c=s.join(n,`asr-${i}-${Date.now()}.wav`);try{await runCommand("ffmpeg",["-i",o,"-vn","-acodec","pcm_s16le","-ar","16000","-ac","1","-y",c])}catch(t){if(t.code==="ENOENT"){throw new Error("ffmpeg not found. Please install ffmpeg:\n"+" macOS: brew install ffmpeg\n"+" Ubuntu: sudo apt install ffmpeg\n"+" Windows: https://ffmpeg.org/download.html")}throw new Error(`Failed to extract audio from ${e}: ${t.stderr||t.message}`)}const l=a.statSync(c);const u=Math.round((l.size-44)/32);return{wavPath:c,durationMs:u,needsCleanup:true}}async function needsConversion(e){try{const t=await getMediaInfo(e);if(t.hasVideo)return true;if(t.audioCodec!=="pcm_s16le")return true;if(t.sampleRate!==16e3)return true;if(t.channels!==1)return true;return false}catch{return true}}e.exports={getMediaInfo:getMediaInfo,extractAudioForAsr:extractAudioForAsr,needsConversion:needsConversion}},56:e=>{function parseParagraphs(e){const t=e.split(/\n\s*\n/).map((e=>e.replace(/^\d+[.、)\]]\s*/,"").trim())).filter((e=>e.length>0));return t}function createSilence(e,t){const o=Math.floor(t*e);return Buffer.alloc(o*2,0)}function buildWav(e,t){const o=24e3;const n=16;const s=1;const r=n/8;const a=s*r;const i=o*a;const c=createSilence(t,o);let l=0;for(let t=0;t<e.length;t++){l+=e[t].length;if(t<e.length-1){l+=c.length}}const u=Buffer.alloc(44);u.write("RIFF",0);u.writeUInt32LE(36+l,4);u.write("WAVE",8);u.write("fmt 
",12);u.writeUInt32LE(16,16);u.writeUInt16LE(1,20);u.writeUInt16LE(s,22);u.writeUInt32LE(o,24);u.writeUInt32LE(i,28);u.writeUInt16LE(a,32);u.writeUInt16LE(n,34);u.write("data",36);u.writeUInt32LE(l,40);const p=[u];for(let t=0;t<e.length;t++){p.push(e[t]);if(t<e.length-1){p.push(c)}}return{wav:Buffer.concat(p),duration:l/i}}function getFileExtension(e){switch(e){case"mp3":return".mp3";case"wav":return".wav";case"pcm":default:return".wav"}}function concatAudioBuffers(e,t,o){if(t==="mp3"){const t=Buffer.concat(e);const o=t.length/4e3;return{audio:t,duration:o}}if(t==="wav"){const t=e.map(extractPcmFromWav);return buildWav(t,o)}return buildWav(e,o)}function extractPcmFromWav(e){const t=Buffer.from("data");let o=12;while(o<e.length-8){if(e.subarray(o,o+4).equals(t)){const t=e.readUInt32LE(o+4);return e.subarray(o+8,o+8+t)}const n=e.readUInt32LE(o+4);o+=8+n}return e.subarray(44)}e.exports={parseParagraphs:parseParagraphs,createSilence:createSilence,buildWav:buildWav,concatAudioBuffers:concatAudioBuffers,getFileExtension:getFileExtension}},986:(e,t,o)=>{const n=o(611);const s=o(896);const r=o(928);const a=o(982);const i=o(785);const{TOKEN_PATH:c,getConfigDir:l,LOGIN_PAGE:u,AUTH_TIMEOUT_MS:p,API_BASE:d}=o(782);const f=300;function readCachedToken(){try{const e=s.readFileSync(c,"utf8");const t=JSON.parse(e);if(!t.access_token)return null;const o=decodeJwtPayload(t.access_token);if(!o||!o.exp)return null;const n=Math.floor(Date.now()/1e3);if(o.exp-n<f)return null;return t}catch{return null}}function writeCachedToken(e){const t=l();s.mkdirSync(t,{recursive:true,mode:448});const o=c+".tmp";s.writeFileSync(o,JSON.stringify(e,null,2),{encoding:"utf8",mode:384});s.renameSync(o,c)}function clearToken(){try{s.unlinkSync(c)}catch{}}function decodeJwtPayload(e){try{const t=e.split(".");if(t.length!==3)return null;const o=t[1].replace(/-/g,"+").replace(/_/g,"/");return JSON.parse(Buffer.from(o,"base64").toString("utf8"))}catch{return null}}function readEnvToken(){const 
e=(process.env.VOXFLOW_TOKEN||process.env.VOXFLOW_JWT||"").trim();if(!e)return null;const t=decodeJwtPayload(e);if(!t)return null;if(!t.exp)return null;const o=Math.floor(Date.now()/1e3);if(t.exp-o<f)return null;return e}async function getToken({api:e,force:t}={}){if(!t){const e=readEnvToken();if(e)return e}if(!t){const t=readCachedToken();if(t){const o=!e||e===t.api;if(o)return t.access_token}}if(!process.stdin.isTTY){throw new Error("No valid token found in cache. Non-interactive environment detected. "+"Please provide VOXFLOW_TOKEN (or --token) and retry.")}return browserLogin(e||d)}function getTokenInfo(){const e=readCachedToken();if(!e)return null;const t=decodeJwtPayload(e.access_token);if(!t)return null;const o=Math.floor(Date.now()/1e3);return{email:t.email||e.email||"(unknown)",expiresAt:new Date(t.exp*1e3).toISOString(),remaining:t.exp-o,valid:t.exp-o>f,api:e.api||d}}function browserLogin(e){return new Promise(((t,s)=>{const r=a.randomBytes(16).toString("hex");let c=false;let l=null;function settle(o){if(c)return;c=true;const n=decodeJwtPayload(o);writeCachedToken({access_token:o,expires_at:n?.exp||0,email:n?.email||"",api:e,cached_at:(new Date).toISOString()});if(l){l.close();l=null}d.close();t(o)}const d=n.createServer(((e,t)=>{const o=new URL(e.url,`http://127.0.0.1`);if(o.pathname!=="/callback"){t.writeHead(404,{"Content-Type":"text/plain"});t.end("Not Found");return}const n=o.searchParams.get("token");const s=o.searchParams.get("state");if(s!==r){t.writeHead(400,{"Content-Type":"text/html; charset=utf-8"});t.end("<h1>认证失败</h1><p>state 参数不匹配,请重试。</p>");return}if(!n){t.writeHead(400,{"Content-Type":"text/html; charset=utf-8"});t.end("<h1>认证失败</h1><p>未收到 token,请重试。</p>");return}t.writeHead(200,{"Content-Type":"text/html; charset=utf-8"});t.end(`<!DOCTYPE html>\n<html><head><meta charset="utf-8"><title>登录成功</title></head>\n<body 
style="font-family:system-ui;display:flex;align-items:center;justify-content:center;height:100vh;margin:0;background:#f0fdf4">\n<div style="text-align:center">\n<h1 style="color:#16a34a;font-size:2rem">登录成功</h1>\n<p style="color:#666;margin-top:0.5rem">已授权 voxflow CLI,可以关闭此窗口。</p>\n</div></body></html>`);settle(n)}));d.listen(0,"127.0.0.1",(async()=>{const e=d.address().port;const t=`${u}?state=${r}&callback_port=${e}`;console.log("\n🔐 需要登录。正在打开浏览器...");console.log(` 若未自动打开: ${t}\n`);let n=false;try{const e=(await o.e(935).then(o.bind(o,935))).default;const s=await e(t);if(s&&typeof s.on==="function"){s.on("error",(()=>{n=true;console.log(" 浏览器打开失败,请手动复制上面的链接到浏览器。\n");startStdinListener()}))}}catch{n=true;console.log(" 浏览器打开失败,请手动复制上面的链接到浏览器。\n");startStdinListener()}function startStdinListener(){if(c||l||!process.stdin.isTTY)return;console.log(" 登录后网页会显示授权码,粘贴到此处回车即可");l=i.createInterface({input:process.stdin,output:process.stdout,terminal:false});process.stdout.write(" > Token: ");l.on("line",(e=>{const t=e.trim();if(!t)return;const o=decodeJwtPayload(t);if(!o){console.log(" 无效的 token,请重新粘贴完整的授权码。");process.stdout.write(" > Token: ");return}const n=Math.floor(Date.now()/1e3);if(o.exp&&o.exp<n){console.log(" token 已过期,请重新登录获取。");process.stdout.write(" > Token: ");return}console.log(`\n✓ 授权成功 (${o.email||"user"})`);settle(t)}))}}));const f=setTimeout((()=>{if(!c){c=true;if(l){l.close();l=null}d.close();s(new Error(`登录超时 (${p/1e3}s)。请重试: voxflow login`))}}),p);d.on("close",(()=>clearTimeout(f)));d.on("error",(e=>{if(!c){c=true;if(l){l.close();l=null}s(new Error(`本地服务器启动失败: ${e.message}`))}}))}))}e.exports={getToken:getToken,clearToken:clearToken,getTokenInfo:getTokenInfo}},782:(e,t,o)=>{const n=o(928);const s=o(857);const r="https://api.voxflow.studio";const a="https://iwkonytsjysszmafqchh.supabase.co";const i="sb_publishable_TEh6H4K9OWXUNfWSeBKXlQ_hg7Zzm6b";const c="voxflow";function getConfigDir(){if(process.platform==="win32"){return 
n.join(process.env.APPDATA||n.join(s.homedir(),"AppData","Roaming"),c)}const e=process.env.XDG_CONFIG_HOME||n.join(s.homedir(),".config");return n.join(e,c)}const l=n.join(getConfigDir(),"token.json");const u={voice:"v-female-R2s4N9qJ",paragraphs:5,speed:1,silence:.8};const p={template:"interview",exchanges:8,length:"medium",style:"professional",speakers:2,silence:.5,speed:1,ducking:.2};const d={voice:"v-female-R2s4N9qJ",speed:1,volume:1,pitch:0};const f={voice:"v-female-R2s4N9qJ",speed:1,silence:.8};const g={voice:"v-female-R2s4N9qJ",speed:1,toleranceMs:50,ducking:.2};const m={lang:"16k_zh",mode:"auto",format:"srt",pollIntervalMs:3e3,pollTimeoutMs:3e5,engine:"auto",model:"base"};const h={batchSize:10,temperature:.3,maxTokens:2e3};const w={batchSize:10,speed:1};const v={voice:"v-female-R2s4N9qJ",speed:1,style:"modern",sceneCount:5,silence:.8,language:"en"};const x={voice:"v-female-R2s4N9qJ",speed:1,scheme:"aurora"};const y="https://voxflow.studio";const S=`${y}/cli-auth.html`;const $=`${y}/app`;const b=18e4;e.exports={API_BASE:r,WEB_BASE:y,DASHBOARD_URL:$,SUPABASE_URL:a,SUPABASE_ANON_KEY:i,TOKEN_PATH:l,getConfigDir:getConfigDir,DEFAULTS:u,STORY_DEFAULTS:u,PODCAST_DEFAULTS:p,SYNTHESIZE_DEFAULTS:d,NARRATE_DEFAULTS:f,DUB_DEFAULTS:g,ASR_DEFAULTS:m,TRANSLATE_DEFAULTS:h,VIDEO_TRANSLATE_DEFAULTS:w,EXPLAIN_DEFAULTS:v,PRESENT_DEFAULTS:x,LOGIN_PAGE:S,AUTH_TIMEOUT_MS:b}},567:(e,t,o)=>{const n=o(896);const s=o(928);const r=o(611);const a=o(692);const{request:i,throwApiError:c,throwNetworkError:l,ApiError:u}=o(852);const p={".wav":"audio/wav",".mp3":"audio/mpeg",".ogg":"audio/ogg",".m4a":"audio/x-m4a",".mp4":"video/mp4",".webm":"video/webm",".mov":"video/quicktime",".avi":"video/x-msvideo",".mkv":"video/x-matroska",".flac":"audio/flac"};function getMimeType(e){const t=s.extname(e).toLowerCase();return p[t]||"application/octet-stream"}async function uploadFileToCos(e,t,o){const r=s.resolve(e);if(!n.existsSync(r)){throw new Error(`File not found: ${r}`)}const 
a=n.statSync(r);const p=s.basename(r);const d=getMimeType(r);const f=a.size;let g;try{const{status:e,data:n}=await i(`${t}/api/file-upload/get-upload-url`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${o}`}},{filename:p,fileType:d,fileSize:f});if(e!==200||n.code!=="success"){c(e,n,"Get upload URL")}g=n.data}catch(e){if(e instanceof u)throw e;l(e,t)}const{uploadUrl:m,key:h,bucket:w,region:v}=g;await putFile(m,r,d);let x;try{x=await getSignedDownloadUrl(t,o,h)}catch{x=`https://${w}.cos.${v}.myqcloud.com/${h}`}return{cosUrl:x,key:h}}function putFile(e,t,o){return new Promise(((s,i)=>{const c=new URL(e);const l=c.protocol==="https:"?a:r;const u=n.statSync(t).size;const p={hostname:c.hostname,port:c.port||(c.protocol==="https:"?443:80),path:c.pathname+c.search,method:"PUT",headers:{"Content-Type":o,"Content-Length":u}};const d=l.request(p,(e=>{const t=[];e.on("data",(e=>t.push(e)));e.on("end",(()=>{if(e.statusCode>=200&&e.statusCode<300){s()}else{const o=Buffer.concat(t).toString("utf8");i(new Error(`COS upload failed (${e.statusCode}): ${o.slice(0,300)}`))}}))}));d.on("error",(e=>i(new Error(`COS upload network error: ${e.message}`))));d.setTimeout(3e5,(()=>{d.destroy();i(new Error("COS upload timeout (5 min)"))}));const f=n.createReadStream(t);f.pipe(d);f.on("error",(e=>{f.destroy();d.destroy();i(new Error(`Failed to read file for upload: ${e.message}`))}))}))}async function getSignedDownloadUrl(e,t,o){const{status:n,data:s}=await i(`${e}/api/file-upload/get-download-url`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{key:o});if(n!==200||s.code!=="success"){throw new Error(`Failed to get download URL: ${s.message||n}`)}return s.data.downloadUrl}e.exports={uploadFileToCos:uploadFileToCos,getSignedDownloadUrl:getSignedDownloadUrl,getMimeType:getMimeType}},297:(e,t,o)=>{const{execFile:n}=o(317);const s=o(928);const r=o(896);function runCommand(e,t,o){return new 
Promise(((s,r)=>{n(e,t,{timeout:3e5,...o},((e,t,o)=>{if(e){e.stderr=o;e.stdout=t;r(e)}else{s({stdout:t,stderr:o})}}))}))}async function checkFfmpeg(){try{const{stdout:e}=await runCommand("ffmpeg",["-version"]);const t=e.match(/ffmpeg version (\S+)/);const o=t?t[1]:"unknown";let n=false;try{await runCommand("ffprobe",["-version"]);n=true}catch{}return{available:true,version:o,ffprobeAvailable:n}}catch{return{available:false}}}async function getAudioDuration(e){const t=s.resolve(e);try{const{stdout:e}=await runCommand("ffprobe",["-v","error","-show_entries","format=duration","-of","default=noprint_wrappers=1:nokey=1",t]);const o=parseFloat(e.trim());if(isNaN(o)){throw new Error(`Could not parse duration from ffprobe output: "${e.trim()}"`)}return Math.round(o*1e3)}catch(t){if(t.code==="ENOENT"){throw new Error("ffprobe not found. Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to get duration of ${e}: ${t.message}`)}}async function extractAudio(e,t){const o=s.resolve(e);const n=s.resolve(t);try{await runCommand("ffmpeg",["-i",o,"-vn","-acodec","pcm_s16le","-ar","24000","-ac","1","-y",n]);return n}catch(t){if(t.code==="ENOENT"){throw new Error("ffmpeg not found. Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to extract audio from ${e}: ${t.stderr||t.message}`)}}async function mergeAudioVideo(e,t,o){const n=s.resolve(e);const r=s.resolve(t);const a=s.resolve(o);try{await runCommand("ffmpeg",["-i",n,"-i",r,"-c:v","copy","-map","0:v:0","-map","1:a:0","-shortest","-y",a]);return a}catch(e){if(e.code==="ENOENT"){throw new Error("ffmpeg not found. 
Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to merge audio/video: ${e.stderr||e.message}`)}}async function mixWithBgm(e,t,o,n={}){const r=n.ducking??.2;const a=s.resolve(e);const i=s.resolve(t);const c=s.resolve(o);try{await runCommand("ffmpeg",["-i",a,"-i",i,"-filter_complex",`[1:a]volume=${r}[bgm_low];`+`[0:a][bgm_low]amix=inputs=2:duration=first:dropout_transition=2[out]`,"-map","[out]","-acodec","pcm_s16le","-ar","24000","-ac","1","-y",c]);return c}catch(e){if(e.code==="ENOENT"){throw new Error("ffmpeg not found. Please install ffmpeg: https://ffmpeg.org/download.html")}throw new Error(`Failed to mix audio with BGM: ${e.stderr||e.message}`)}}async function warnIfMissingFfmpeg(e,t){const o=await checkFfmpeg();if(o.available)return o;const n=s.join(e,".ffmpeg-hint-shown");try{if(r.existsSync(n))return o}catch{}const a={dub:"video merging (--video), BGM mixing (--bgm), speed adjustment (--speed-auto)",asr:"audio format conversion, video audio extraction"};const i=a[t]||"audio/video processing";console.log("\n"+`[hint] ffmpeg not found — needed for ${i}.\n`+" Install: brew install ffmpeg (macOS) / sudo apt install ffmpeg (Linux)\n"+" Without ffmpeg, some features will be unavailable.\n");try{r.mkdirSync(e,{recursive:true});r.writeFileSync(n,(new Date).toISOString(),"utf8")}catch{}return o}e.exports={checkFfmpeg:checkFfmpeg,getAudioDuration:getAudioDuration,extractAudio:extractAudio,mergeAudioVideo:mergeAudioVideo,mixWithBgm:mixWithBgm,warnIfMissingFfmpeg:warnIfMissingFfmpeg}},852:(e,t,o)=>{const n=o(611);const s=o(692);class ApiError extends Error{constructor(e,t,o){super(e);this.name="ApiError";this.code=t;this.status=o}}function throwApiError(e,t,o){if(e===401){throw new ApiError(`Token expired or invalid. Run: voxflow login`,"token_expired",401)}if(e===429||t&&t.code==="quota_exceeded"){throw new ApiError(`Monthly quota exceeded. Resets in ~30 days. 
Check: voxflow status`,"quota_exceeded",429)}if(e>=500){throw new ApiError(`Server error (${e}). Please try again later.`,"server_error",e)}const n=t?.message||t?.code||JSON.stringify(t);throw new ApiError(`${o} failed (${e}): ${n}`,"api_error",e)}function throwNetworkError(e,t){const o=e.code||"";if(o==="ECONNREFUSED"||o==="ENOTFOUND"||o==="ETIMEDOUT"){throw new ApiError(`Cannot reach API server at ${t}. Check your internet connection or try --api <url>`,"network_error",0)}throw e}function request(e,t,o){return new Promise(((r,a)=>{const i=new URL(e);const c=i.protocol==="https:"?s:n;if(t.headers){t.headers["X-Client-Source"]="cli"}else{t.headers={"X-Client-Source":"cli"}}const l=c.request(i,t,(e=>{const t=[];e.on("data",(e=>t.push(e)));e.on("end",(()=>{const o=Buffer.concat(t).toString("utf8");try{r({status:e.statusCode,data:JSON.parse(o)})}catch{a(new Error(`Non-JSON response (${e.statusCode}): ${o.slice(0,200)}`))}}))}));l.on("error",(e=>a(e)));l.setTimeout(6e4,(()=>{l.destroy();a(new Error("Request timeout (60s)"))}));if(o)l.write(JSON.stringify(o));l.end()}))}e.exports={request:request,ApiError:ApiError,throwApiError:throwApiError,throwNetworkError:throwNetworkError}},425:(e,t,o)=>{const{INTENT_TTS_PARAMS:n,getIntentParams:s}=o(839);e.exports={INTENT_TTS_PARAMS:n,getIntentParams:s}},133:(e,t,o)=>{const{request:n,throwApiError:s,throwNetworkError:r}=o(852);async function chatCompletion({apiBase:e,token:t,messages:o,temperature:a=.3,maxTokens:i=2e3}){let c,l;try{({status:c,data:l}=await n(`${e}/api/llm/chat`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${t}`}},{messages:o,temperature:a,max_tokens:i}))}catch(t){r(t,e)}if(c!==200||l.code!=="success"){s(c,l,"LLM chat")}return{content:l.content,usage:l.usage,quota:l.quota}}async function detectLanguage({apiBase:e,text:t}){let o,s;try{({status:o,data:s}=await 
n(`${e}/api/lang-detect/detect`,{method:"POST",headers:{"Content-Type":"application/json"}},{text:t.slice(0,200)}))}catch{return"auto"}if(o===200&&s.code==="success"){return s.language}return"auto"}e.exports={chatCompletion:chatCompletion,detectLanguage:detectLanguage}},384:(e,t,o)=>{const{spawn:n}=o(317);const s=o(928);const r=o(857);const a=o(896);async function checkRecAvailable(){return new Promise((e=>{const t=n("rec",["--version"],{stdio:"pipe"});let o="";t.stdout.on("data",(e=>{o+=e}));t.stderr.on("data",(e=>{o+=e}));t.on("error",(()=>{e({available:false,error:"rec (sox) not found. Please install sox:\n"+" macOS: brew install sox\n"+" Ubuntu: sudo apt install sox\n"+" Windows: https://sourceforge.net/projects/sox/"})}));t.on("close",(t=>{e({available:t===0||o.length>0})}))}))}function recordMic(e={}){const{outputDir:t=r.tmpdir(),maxSeconds:o=300,silenceThreshold:i=0}=e;const c=s.join(t,`mic-${Date.now()}.wav`);return new Promise(((e,t)=>{const s=["-r","16000","-c","1","-b","16","-e","signed-integer",c,"trim","0",String(o)];if(i>0){s.push("silence","1","0.1","1%","1",String(i),"1%")}const r=n("rec",s,{stdio:["pipe","pipe","pipe"]});let l="";r.stderr.on("data",(e=>{l+=e.toString()}));r.on("error",(e=>{if(e.code==="ENOENT"){t(new Error("rec (sox) not found. Please install sox:\n"+" macOS: brew install sox\n"+" Ubuntu: sudo apt install sox\n"+" Windows: https://sourceforge.net/projects/sox/"))}else{t(new Error(`Microphone recording failed: ${e.message}`))}}));let u="timeout";r.on("close",(o=>{if(!a.existsSync(c)){return t(new Error(`Recording failed — no output file created.\n${l.slice(0,500)}`))}const n=a.statSync(c);if(n.size<100){a.unlinkSync(c);return t(new Error("Recording produced an empty file. 
Check that your microphone is connected and accessible."))}const s=Math.round((n.size-44)/32);e({wavPath:c,durationMs:s,stopped:u})}));const stopRecording=()=>{u="user";r.kill("SIGTERM")};if(process.stdin.isTTY){process.stdin.setRawMode(true);process.stdin.resume();const onKey=e=>{if(e[0]===13||e[0]===10||e[0]===113){u="user";process.stdin.setRawMode(false);process.stdin.removeListener("data",onKey);process.stdin.pause();r.kill("SIGTERM")}if(e[0]===3){u="user";process.stdin.setRawMode(false);process.stdin.removeListener("data",onKey);process.stdin.pause();r.kill("SIGTERM")}};process.stdin.on("data",onKey);r.on("close",(()=>{try{process.stdin.removeListener("data",onKey);if(process.stdin.isTTY){process.stdin.setRawMode(false)}process.stdin.pause()}catch{}}))}r._stopRecording=stopRecording}))}e.exports={recordMic:recordMic,checkRecAvailable:checkRecAvailable}},339:e=>{function startSpinner(e){const t=["|","/","-","\\"];let o=0;let n=e;process.stdout.write(e+" "+t[0]);const s=setInterval((()=>{o=(o+1)%t.length;process.stdout.write("\b"+t[o])}),120);return{stop(e){clearInterval(s);process.stdout.write("\b"+e+"\n")},update(e){process.stdout.write("\r"+" ".repeat(n.length+4)+"\r");n=e;process.stdout.write(e+" "+t[o])}}}e.exports={startSpinner:startSpinner}},813:e=>{function parseTimestamp(e){const t=e.trim().match(/^(\d{1,2}):(\d{2}):(\d{2})[,.](\d{3})$/);if(!t){throw new Error(`Invalid SRT timestamp: "${e}"`)}const[,o,n,s,r]=t;return parseInt(o,10)*36e5+parseInt(n,10)*6e4+parseInt(s,10)*1e3+parseInt(r,10)}function formatTimestamp(e){if(e<0)e=0;const t=Math.floor(e/36e5);e%=36e5;const o=Math.floor(e/6e4);e%=6e4;const n=Math.floor(e/1e3);const s=e%1e3;return String(t).padStart(2,"0")+":"+String(o).padStart(2,"0")+":"+String(n).padStart(2,"0")+","+String(s).padStart(3,"0")}function parseSrt(e){if(!e||e.trim().length===0){return[]}const t=[];const o=e.replace(/\r\n/g,"\n").replace(/\r/g,"\n");const n=o.split(/\n\s*\n/).filter((e=>e.trim().length>0));for(const e of n){const 
o=e.trim().split("\n");if(o.length<2)continue;let n=0;let s;const r=o[0].trim();if(/^\d+$/.test(r)){s=parseInt(r,10);n=1}else{s=t.length+1}if(n>=o.length)continue;const a=o[n].trim();const i=a.match(/^(\d{1,2}:\d{2}:\d{2}[,.]\d{3})\s*-->\s*(\d{1,2}:\d{2}:\d{2}[,.]\d{3})/);if(!i)continue;const c=parseTimestamp(i[1]);const l=parseTimestamp(i[2]);n++;const u=o.slice(n).filter((e=>e.trim().length>0));if(u.length===0)continue;const p=u.join("\n");let d;let f=p;const g=p.match(/^\[Speaker:\s*([^\]]+)\]\s*/i);if(g){d=g[1].trim();f=p.slice(g[0].length)}if(f.trim().length===0)continue;t.push({id:s,startMs:c,endMs:l,text:f.trim(),...d?{speakerId:d}:{}})}t.sort(((e,t)=>e.startMs-t.startMs));return t}function formatSrt(e){return e.map(((e,t)=>{const o=e.id||t+1;const n=formatTimestamp(e.startMs);const s=formatTimestamp(e.endMs);const r=e.speakerId?`[Speaker: ${e.speakerId}] `:"";return`${o}\n${n} --\x3e ${s}\n${r}${e.text}`})).join("\n\n")+"\n"}function buildCaptionsFromFlash(e){const t=[];let o=1;for(const n of e){const e=n.sentence_list||[];for(const n of e){const e={id:o++,startMs:n.start_time||0,endMs:n.end_time||0,text:(n.text||"").trim()};if(n.speaker_id!==undefined&&n.speaker_id!==null){e.speakerId=`Speaker${n.speaker_id}`}if(e.text.length>0){t.push(e)}}}return t}function buildCaptionsFromSentence(e,t,o){if(!e||e.trim().length===0)return[];if(o&&o.length>0){return buildCaptionsFromWordList(o,e)}return[{id:1,startMs:0,endMs:Math.round(t*1e3),text:e.trim()}]}function buildCaptionsFromWordList(e,t){if(!e||e.length===0){return t?[{id:1,startMs:0,endMs:0,text:t}]:[]}const o=500;const n=5e3;const s=15e3;const r=/[.!?。!?…]+$/;const getWord=e=>e.word||e.Word||"";const getStart=e=>e.startTime??e.StartTime??0;const getEnd=e=>e.endTime??e.EndTime??0;const a=e.slice(0,10).map(getWord).join("");const i=(a.match(/[\u4e00-\u9fff\u3040-\u309f\u30a0-\u30ff\uac00-\ud7af]/g)||[]).length;const c=i<a.length*.3;const l=c?" 
":"";const u=[];let p=[];let d=getStart(e[0]);let f=d;function flushCaption(){if(p.length===0)return;const e=p.join(l).trim();if(e.length>0){u.push({id:u.length+1,startMs:d,endMs:f,text:e})}p=[]}for(let t=0;t<e.length;t++){const a=e[t];const i=getStart(a);const c=getEnd(a);const l=i-f;const u=i-d;if(l>o&&p.length>0){flushCaption();d=i}else if(p.length>0&&u>n&&r.test(p[p.length-1])){flushCaption();d=i}else if(p.length>0&&u>s){flushCaption();d=i}p.push(getWord(a));f=c||f}flushCaption();return u}function buildCaptionsFromFile(e,t){if(!e||e.trim().length===0)return[];if(/^\d+\s*\n\d{2}:\d{2}:\d{2}[,.]\d{3}\s*-->/.test(e.trim())){return parseSrt(e)}return[{id:1,startMs:0,endMs:Math.round(t*1e3),text:e.trim()}]}function formatPlainText(e,t={}){return e.map((e=>{const o=t.includeSpeakers&&e.speakerId?`[${e.speakerId}] `:"";return`${o}${e.text}`})).join("\n")+"\n"}function formatJson(e){return JSON.stringify(e,null,2)+"\n"}e.exports={parseSrt:parseSrt,formatSrt:formatSrt,parseTimestamp:parseTimestamp,formatTimestamp:formatTimestamp,buildCaptionsFromFlash:buildCaptionsFromFlash,buildCaptionsFromSentence:buildCaptionsFromSentence,buildCaptionsFromWordList:buildCaptionsFromWordList,buildCaptionsFromFile:buildCaptionsFromFile,formatPlainText:formatPlainText,formatJson:formatJson}},907:(e,t,o)=>{const{createSilence:n,buildWav:s}=o(56);const r=24e3;const a=2;const i=r*a/1e3;function msToBytes(e){const t=Math.round(e*i);return t-t%a}function buildTimelinePcm(e,t){if(!e||e.length===0){return{pcm:Buffer.alloc(0),durationMs:0}}const o=Math.max(...e.map((e=>e.endMs)));const n=t||o;const s=msToBytes(n);const r=Buffer.alloc(s,0);for(const t of e){const e=msToBytes(t.startMs);const o=msToBytes(t.endMs)-e;const n=Math.min(t.audioBuffer.length,o,s-e);if(n>0&&e<s){t.audioBuffer.copy(r,e,0,n)}}return{pcm:r,durationMs:n}}function 
buildTimelineAudio(e,t){const{pcm:o,durationMs:n}=buildTimelinePcm(e,t);if(o.length===0){return{wav:Buffer.alloc(0),duration:0}}const{wav:r}=s([o],0);return{wav:r,duration:n/1e3}}e.exports={buildTimelinePcm:buildTimelinePcm,buildTimelineAudio:buildTimelineAudio,msToBytes:msToBytes,SAMPLE_RATE:r,BYTES_PER_SAMPLE:a,BYTES_PER_MS:i}},675:(e,t,o)=>{const{request:n,throwApiError:s,throwNetworkError:r}=o(852);async function synthesizeTTS(e){const{apiBase:t,token:o,text:a,voiceId:i,speed:c=1,volume:l=1,pitch:u,format:p="pcm",index:d,total:f,label:g}=e;const m=g?` ${g}`:"";process.stdout.write(` TTS [${d+1}/${f}]${m}...`);const h={text:a,voiceId:i,speed:c,volume:l,format:p};if(u!=null&&u!==0)h.pitch=u;let w,v;try{({status:w,data:v}=await n(`${t}/api/tts/synthesize`,{method:"POST",headers:{"Content-Type":"application/json",Authorization:`Bearer ${o}`}},h))}catch(e){console.log(" FAIL");r(e,t)}if(w!==200||v.code!=="success"){console.log(" FAIL");s(w,v,`TTS segment ${d+1}`)}const x=Buffer.from(v.audio,"base64");return{audio:x,quota:v.quota}}e.exports={synthesizeTTS:synthesizeTTS}},126:(e,t,o)=>{const n=o(896);const s=o(928);const{execFileSync:r}=o(317);const a={"16k_zh":"Chinese","16k_en":"English","16k_zh_en":"Chinese","16k_ja":"Japanese","16k_ko":"Korean","16k_zh_dialect":"Chinese","8k_zh":"Chinese","8k_en":"English",zh:"Chinese",en:"English",ja:"Japanese",ko:"Korean",auto:"auto"};function checkWhisperAvailable(){try{const e=resolveWhisperModule();if(e){return{available:true}}return{available:false,error:"nodejs-whisper is not installed.\n"+"Install it with: npm install -g nodejs-whisper\n"+"Then download a model: npx nodejs-whisper download"}}catch{return{available:false,error:"nodejs-whisper is not installed.\n"+"Install it with: npm install -g nodejs-whisper\n"+"Then download a model: npx nodejs-whisper download"}}}function resolveWhisperModule(){try{return require.resolve("nodejs-whisper")}catch{}try{const e=r("npm",["root","-g"],{encoding:"utf8"}).trim();const 
t=s.join(e,"nodejs-whisper");if(n.existsSync(t)){return t}}catch{}return null}function loadWhisperModule(){const e=resolveWhisperModule();if(!e){throw new Error("nodejs-whisper is not installed.\n"+"Install: npm install -g nodejs-whisper\n"+"Then: npx nodejs-whisper download")}const t=require(e);return t.nodewhisper||t.default||t}async function transcribeLocal(e,t={}){const{model:o="base",lang:s="16k_zh"}=t;if(!n.existsSync(e)){throw new Error(`WAV file not found: ${e}`)}const r=loadWhisperModule();const i=a[s]||a["auto"]||"auto";const c=o;await r(e,{modelName:c,autoDownloadModelName:c,removeWavFileAfterTranscription:false,whisperOptions:{outputInJson:true,outputInSrt:false,outputInVtt:false,outputInTxt:false,wordTimestamps:true,splitOnWord:true,language:i==="auto"?undefined:i}});const l=e+".json";if(!n.existsSync(l)){const t=e.replace(/\.wav$/i,"");const o=[t+".json",e+".json"];const s=o.find((e=>n.existsSync(e)));if(!s){throw new Error("Whisper completed but no JSON output found.\n"+`Expected: ${l}\n`+"Ensure nodejs-whisper is correctly installed.")}}const u=n.readFileSync(l,"utf8");const p=JSON.parse(u);const d=p.transcription||p.segments||[];const f=parseWhisperOutput(d);cleanupWhisperFiles(e);return f}function parseWhisperOutput(e){if(!e||!Array.isArray(e))return[];let t=0;const o=[];for(const n of e){const e=(n.text||"").trim();if(!e)continue;t++;let s=0;let r=0;if(n.timestamps){s=parseTimestamp(n.timestamps.from);r=parseTimestamp(n.timestamps.to)}else if(n.offsets){s=n.offsets.from||0;r=n.offsets.to||0}else if(typeof n.start==="number"){s=Math.round(n.start*1e3);r=Math.round(n.end*1e3)}o.push({id:t,startMs:s,endMs:r,text:e})}return o}function parseTimestamp(e){if(!e||typeof e!=="string")return 0;const t=e.match(/^(\d+):(\d+):(\d+)\.(\d+)$/);if(!t)return 0;const o=parseInt(t[1],10);const n=parseInt(t[2],10);const s=parseInt(t[3],10);const r=parseInt(t[4].padEnd(3,"0").slice(0,3),10);return(o*3600+n*60+s)*1e3+r}function cleanupWhisperFiles(e){const 
t=[".json",".srt",".vtt",".txt",".lrc",".wts"];for(const o of t){const t=e+o;try{if(n.existsSync(t)){n.unlinkSync(t)}}catch{}}}e.exports={checkWhisperAvailable:checkWhisperAvailable,transcribeLocal:transcribeLocal,parseWhisperOutput:parseWhisperOutput,parseTimestamp:parseTimestamp,LANG_MAP:a}},317:e=>{"use strict";e.exports=require("child_process")},982:e=>{"use strict";e.exports=require("crypto")},896:e=>{"use strict";e.exports=require("fs")},611:e=>{"use strict";e.exports=require("http")},692:e=>{"use strict";e.exports=require("https")},573:e=>{"use strict";e.exports=require("node:buffer")},421:e=>{"use strict";e.exports=require("node:child_process")},24:e=>{"use strict";e.exports=require("node:fs")},455:e=>{"use strict";e.exports=require("node:fs/promises")},161:e=>{"use strict";e.exports=require("node:os")},760:e=>{"use strict";e.exports=require("node:path")},708:e=>{"use strict";e.exports=require("node:process")},136:e=>{"use strict";e.exports=require("node:url")},975:e=>{"use strict";e.exports=require("node:util")},857:e=>{"use strict";e.exports=require("os")},928:e=>{"use strict";e.exports=require("path")},785:e=>{"use strict";e.exports=require("readline")},330:e=>{"use strict";e.exports=JSON.parse('{"name":"voxflow","version":"1.5.3","description":"AI audio content creation CLI — stories, podcasts, narration, dubbing, transcription, translation, and video translation with TTS","bin":{"voxflow":"./dist/index.js"},"files":["dist/index.js","dist/935.index.js","README.md"],"engines":{"node":">=18.0.0"},"dependencies":{"open":"^10.0.0"},"keywords":["tts","story","podcast","ai","audio","text-to-speech","voice","narration","dubbing","synthesize","voices","document","translate","subtitle","srt","transcribe","asr","video-translate","video","voxflow"],"scripts":{"build":"ncc build bin/voxflow.js -o dist --minify && rm -rf dist/cli","prepublishOnly":"npm run build","test":"node --test 
tests/*.test.js"},"author":"gonghaoran","license":"UNLICENSED","homepage":"https://voxflow.studio","repository":{"type":"git","url":"https://github.com/VoxFlowStudio/FlowStudio","directory":"cli"},"publishConfig":{"access":"public"},"devDependencies":{"@vercel/ncc":"^0.38.4"}}')}};var t={};function __nccwpck_require__(o){var n=t[o];if(n!==undefined){return n.exports}var s=t[o]={exports:{}};var r=true;try{e[o](s,s.exports,__nccwpck_require__);r=false}finally{if(r)delete t[o]}return s.exports}__nccwpck_require__.m=e;(()=>{__nccwpck_require__.d=(e,t)=>{for(var o in t){if(__nccwpck_require__.o(t,o)&&!__nccwpck_require__.o(e,o)){Object.defineProperty(e,o,{enumerable:true,get:t[o]})}}}})();(()=>{__nccwpck_require__.f={};__nccwpck_require__.e=e=>Promise.all(Object.keys(__nccwpck_require__.f).reduce(((t,o)=>{__nccwpck_require__.f[o](e,t);return t}),[]))})();(()=>{__nccwpck_require__.u=e=>""+e+".index.js"})();(()=>{__nccwpck_require__.o=(e,t)=>Object.prototype.hasOwnProperty.call(e,t)})();(()=>{__nccwpck_require__.r=e=>{if(typeof Symbol!=="undefined"&&Symbol.toStringTag){Object.defineProperty(e,Symbol.toStringTag,{value:"Module"})}Object.defineProperty(e,"__esModule",{value:true})}})();if(typeof __nccwpck_require__!=="undefined")__nccwpck_require__.ab=__dirname+"/";(()=>{var e={792:1};var installChunk=t=>{var o=t.modules,n=t.ids,s=t.runtime;for(var r in o){if(__nccwpck_require__.o(o,r)){__nccwpck_require__.m[r]=o[r]}}if(s)s(__nccwpck_require__);for(var a=0;a<n.length;a++)e[n[a]]=1};__nccwpck_require__.f.require=(t,o)=>{if(!e[t]){if(true){installChunk(require("./"+__nccwpck_require__.u(t)))}else e[t]=1}}})();var o={};const{run:n}=__nccwpck_require__(6);n().catch((e=>{console.error(`\nFatal error: ${e.message}`);process.exit(1)}));module.exports=o})();
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "voxflow",
-  "version": "1.5.2",
+  "version": "1.5.3",
   "description": "AI audio content creation CLI — stories, podcasts, narration, dubbing, transcription, translation, and video translation with TTS",
   "bin": {
     "voxflow": "./dist/index.js"
@@ -39,7 +39,7 @@
     "voxflow"
   ],
   "scripts": {
-    "build": "ncc build bin/voxflow.js -o dist --minify",
+    "build": "ncc build bin/voxflow.js -o dist --minify && rm -rf dist/cli",
     "prepublishOnly": "npm run build",
     "test": "node --test tests/*.test.js"
   },