@djangocfg/ui-nextjs 2.1.81 → 2.1.83

Files changed (36)
  1. package/package.json +4 -4
  2. package/src/tools/AudioPlayer/@refactoring3/00-IMPLEMENTATION-ROADMAP.md +1146 -0
  3. package/src/tools/AudioPlayer/@refactoring3/01-WAVESURFER-STREAMING-ANALYSIS.md +611 -0
  4. package/src/tools/AudioPlayer/@refactoring3/02-MEDIA-VIEWER-ANALYSIS.md +560 -0
  5. package/src/tools/AudioPlayer/@refactoring3/03-HYBRID-ARCHITECTURE-PROPOSAL.md +769 -0
  6. package/src/tools/AudioPlayer/@refactoring3/04-CRACKLING-ISSUE-DIAGNOSIS.md +373 -0
  7. package/src/tools/AudioPlayer/README.md +177 -205
  8. package/src/tools/AudioPlayer/components/AudioPlayer.tsx +9 -4
  9. package/src/tools/AudioPlayer/components/HybridAudioPlayer.tsx +251 -0
  10. package/src/tools/AudioPlayer/components/HybridSimplePlayer.tsx +291 -0
  11. package/src/tools/AudioPlayer/components/HybridWaveform.tsx +279 -0
  12. package/src/tools/AudioPlayer/components/SimpleAudioPlayer.tsx +16 -26
  13. package/src/tools/AudioPlayer/components/index.ts +6 -1
  14. package/src/tools/AudioPlayer/context/AudioProvider.tsx +16 -8
  15. package/src/tools/AudioPlayer/context/HybridAudioProvider.tsx +121 -0
  16. package/src/tools/AudioPlayer/context/index.ts +14 -2
  17. package/src/tools/AudioPlayer/hooks/index.ts +11 -0
  18. package/src/tools/AudioPlayer/hooks/useHybridAudio.ts +387 -0
  19. package/src/tools/AudioPlayer/hooks/useHybridAudioAnalysis.ts +95 -0
  20. package/src/tools/AudioPlayer/hooks/useSharedWebAudio.ts +6 -3
  21. package/src/tools/AudioPlayer/index.ts +31 -0
  22. package/src/tools/AudioPlayer/progressive/ProgressiveAudioPlayer.tsx +8 -0
  23. package/src/tools/ImageViewer/hooks/useImageLoading.ts +33 -9
  24. package/src/tools/VideoPlayer/hooks/useVideoPositionCache.ts +13 -6
  25. package/src/tools/VideoPlayer/providers/StreamProvider.tsx +38 -22
  26. package/src/tools/index.ts +22 -0
  27. package/src/tools/AudioPlayer/@refactoring/00-PLAN.md +0 -148
  28. package/src/tools/AudioPlayer/@refactoring/01-TYPES.md +0 -301
  29. package/src/tools/AudioPlayer/@refactoring/02-HOOKS.md +0 -281
  30. package/src/tools/AudioPlayer/@refactoring/03-CONTEXT.md +0 -328
  31. package/src/tools/AudioPlayer/@refactoring/04-COMPONENTS.md +0 -251
  32. package/src/tools/AudioPlayer/@refactoring/05-EFFECTS.md +0 -427
  33. package/src/tools/AudioPlayer/@refactoring/06-UTILS-AND-INDEX.md +0 -193
  34. package/src/tools/AudioPlayer/@refactoring/07-EXECUTION-CHECKLIST.md +0 -146
  35. package/src/tools/AudioPlayer/@refactoring2/ISSUE_ANALYSIS.md +0 -187
  36. package/src/tools/AudioPlayer/@refactoring2/PLAN.md +0 -372
@@ -0,0 +1,611 @@
# WaveSurfer.js Streaming and Chunked Audio Analysis

> **Analysis Date:** 2025-12-30
> **WaveSurfer Version:** 7.x (based on source code analysis)
> **Source Location:** `@sources/wavesurfer.js-main/`

---

## Table of Contents

1. [Executive Summary](#executive-summary)
2. [How WaveSurfer Loads Audio](#how-wavesurfer-loads-audio)
3. [Streaming Support Analysis](#streaming-support-analysis)
4. [Fetch and Progress Tracking](#fetch-and-progress-tracking)
5. [Working with Partially Loaded Audio](#working-with-partially-loaded-audio)
6. [Pre-decoded Peaks Pattern](#pre-decoded-peaks-pattern)
7. [MediaStream Support (Recording)](#mediastream-support-recording)
8. [Limitations and Workarounds](#limitations-and-workarounds)
9. [Recommendations for Streaming Audio](#recommendations-for-streaming-audio)
10. [Code Snippets Reference](#code-snippets-reference)

---

## Executive Summary

**Key Finding:** WaveSurfer.js does NOT natively support true audio streaming (chunked/progressive loading with immediate playback). It requires the **entire audio file** to be downloaded and decoded before rendering the waveform.

### Critical Limitations:

| Feature | Support | Notes |
|---------|---------|-------|
| True streaming (MediaSource API) | **NO** | Not implemented |
| Chunked fetch with progressive rendering | **NO** | Full download required |
| Partial audio playback | **LIMITED** | Only with pre-decoded peaks |
| Pre-computed peaks + streaming URL | **YES** | Recommended approach |
| Real-time microphone input | **YES** | Via Record plugin |
| Large file support | **WORKAROUND** | Use pre-decoded peaks |

---

## How WaveSurfer Loads Audio

### Main Loading Flow

The loading process in WaveSurfer follows this sequence:

```
load(url) -> Fetcher.fetchBlob() -> Full Download -> audioContext.decodeAudioData() -> Renderer.render()
```

From `src/wavesurfer.ts` (lines 502-566):

```typescript
private async loadAudio(url: string, blob?: Blob, channelData?: WaveSurferOptions['peaks'], duration?: number) {
  this.emit('load', url)

  if (!this.options.media && this.isPlaying()) this.pause()

  this.decodedData = null
  this.stopAtPosition = null

  // Abort any ongoing fetch before starting a new one
  this.abortController?.abort()
  this.abortController = null

  // Fetch the entire audio as a blob if pre-decoded data is not provided
  if (!blob && !channelData) {
    const fetchParams = this.options.fetchParams || {}
    if (window.AbortController && !fetchParams.signal) {
      this.abortController = new AbortController()
      fetchParams.signal = this.abortController.signal
    }
    const onProgress = (percentage: number) => this.emit('loading', percentage)
    blob = await Fetcher.fetchBlob(url, onProgress, fetchParams) // <-- FULL DOWNLOAD
    const overridenMimeType = this.options.blobMimeType
    if (overridenMimeType) {
      blob = new Blob([blob], { type: overridenMimeType })
    }
  }

  // Set the mediaelement source
  this.setSrc(url, blob)

  // Wait for the audio duration
  const audioDuration = await new Promise<number>((resolve) => {
    const staticDuration = duration || this.getDuration()
    if (staticDuration) {
      resolve(staticDuration)
    } else {
      this.mediaSubscriptions.push(
        this.onMediaEvent('loadedmetadata', () => resolve(this.getDuration()), { once: true }),
      )
    }
  })

  // Decode the audio data or use user-provided peaks
  if (channelData) {
    this.decodedData = Decoder.createBuffer(channelData, audioDuration || 0)
  } else if (blob) {
    const arrayBuffer = await blob.arrayBuffer()
    this.decodedData = await Decoder.decode(arrayBuffer, this.options.sampleRate) // <-- FULL DECODE
  }

  if (this.decodedData) {
    this.emit('decode', this.getDuration())
    this.renderer.render(this.decodedData)
  }

  this.emit('ready', this.getDuration())
}
```

### Key Observations:

1. **Full Download Required:** `Fetcher.fetchBlob()` downloads the entire file
2. **Full Decode Required:** `Decoder.decode()` decodes the entire ArrayBuffer
3. **No Progressive Rendering:** Waveform only renders after complete decode
4. **Memory Intensive:** Large files may fail due to memory constraints
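Observation 4 can be made concrete: decoded audio is held as 32-bit float PCM, so memory use is roughly `duration × sampleRate × channels × 4` bytes. A standalone sketch of that back-of-envelope calculation (not WaveSurfer code; the helper name is ours):

```typescript
// Estimated in-memory size of decoded audio (32-bit float PCM),
// ignoring the original blob and any intermediate copies.
function decodedPcmBytes(durationSec: number, sampleRate: number, channels: number): number {
  return durationSec * sampleRate * channels * 4 // 4 bytes per Float32 sample
}

// A 1-hour stereo file at 44.1 kHz decodes to ~1.27 GB of raw samples.
console.log(`${(decodedPcmBytes(3600, 44100, 2) / 1e9).toFixed(2)} GB`) // → "1.27 GB"
```

Both the fetched blob and the decoded buffer must fit in memory at once, which is why the summary table lists large-file support as a workaround.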
---

## Streaming Support Analysis

### Official Statement from README.md (lines 118-121):

```markdown
<details>
<summary>What about streaming audio?</summary>
Streaming audio is supported only with <a href="https://wavesurfer.xyz/examples/?predecoded.js">pre-decoded peaks and duration</a>.
</details>
```

### What "Streaming Support" Actually Means:

WaveSurfer's "streaming support" is **NOT true streaming**. It means:

1. You provide pre-computed peaks data
2. You provide the known duration
3. The actual audio URL can be a streaming source
4. WaveSurfer renders the waveform immediately from peaks
5. Audio playback happens via HTMLMediaElement (which supports streaming natively)

**This is NOT:**

- Progressive waveform rendering as audio loads
- Chunked audio decoding
- MediaSource API integration
- Real-time waveform updates during playback of streaming content

---

## Fetch and Progress Tracking

### Fetcher Implementation (`src/fetcher.ts`):

```typescript
async function watchProgress(response: Response, progressCallback: (percentage: number) => void) {
  if (!response.body || !response.headers) return
  const reader = response.body.getReader()

  const contentLength = Number(response.headers.get('Content-Length')) || 0
  let receivedLength = 0

  // Process the data
  const processChunk = (value: Uint8Array | undefined) => {
    // Add to the received length
    receivedLength += value?.length || 0
    const percentage = Math.round((receivedLength / contentLength) * 100)
    progressCallback(percentage)
  }

  // Use iteration instead of recursion to avoid stack issues
  try {
    while (true) {
      const data = await reader.read()

      if (data.done) {
        break
      }

      processChunk(data.value)
    }
  } catch (err) {
    // Ignore errors because we can only handle the main response
    console.warn('Progress tracking error:', err)
  }
}

async function fetchBlob(
  url: string,
  progressCallback: (percentage: number) => void,
  requestInit?: RequestInit,
): Promise<Blob> {
  // Fetch the resource
  const response = await fetch(url, requestInit)

  if (response.status >= 400) {
    throw new Error(`Failed to fetch ${url}: ${response.status} (${response.statusText})`)
  }

  // Read the data to track progress
  watchProgress(response.clone(), progressCallback)

  return response.blob() // <-- Returns complete blob
}
```

### Key Limitations in Fetcher:

1. **Chunks are read but NOT processed** - only for progress percentage
2. **Returns complete blob** - `response.blob()` waits for full download
3. **No progressive decoding** - chunks are discarded after progress tracking
4. **Response cloning** - original response goes to blob(), clone is for progress
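For contrast, a progressive pipeline would hand each chunk to a consumer instead of discarding it. A minimal sketch of that reader loop (a hypothetical helper, not part of WaveSurfer; `ByteStream` is a structural stand-in for the reader API that `response.body` exposes):

```typescript
// Structural subset of ReadableStream that this sketch needs --
// the body of a fetch() Response satisfies it.
interface ByteStreamReader {
  read(): Promise<{ done: boolean; value?: Uint8Array }>
}
interface ByteStream {
  getReader(): ByteStreamReader
}

// Read a stream chunk by chunk, forwarding each chunk and the running
// byte count to the caller instead of throwing the data away.
async function readChunks(
  stream: ByteStream,
  onChunk: (chunk: Uint8Array, receivedBytes: number) => void,
): Promise<Uint8Array> {
  const reader = stream.getReader()
  const chunks: Uint8Array[] = []
  let received = 0

  while (true) {
    const { done, value } = await reader.read()
    if (done || !value) break
    chunks.push(value)
    received += value.length
    onChunk(value, received) // <-- an incremental decoder could consume this
  }

  // Concatenate into one buffer -- effectively what response.blob() does at the end
  const out = new Uint8Array(received)
  let offset = 0
  for (const chunk of chunks) {
    out.set(chunk, offset)
    offset += chunk.length
  }
  return out
}
```

Feeding each chunk into an incremental consumer (for example a MediaSource buffer) is precisely the step `fetchBlob` omits: the clone is read only to count bytes.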
### Progress Events:

The `loading` event provides download progress but NOT decode/render progress:

```typescript
// Usage
wavesurfer.on('loading', (percent) => {
  console.log(`Downloaded: ${percent}%`)
})

// This fires during download, NOT during decode
```

---

## Working with Partially Loaded Audio

### Short Answer: Not Directly Supported

WaveSurfer cannot render or play partially downloaded audio. However, there are workarounds:

### Pattern 1: Pre-decoded Peaks (Recommended)

```typescript
const wavesurfer = WaveSurfer.create({
  container: '#waveform',
  url: '/audio/long-file.mp3', // Can be a streaming URL
  peaks: preComputedPeaksArray, // Pre-generated peaks
  duration: 3600, // Known duration in seconds
})
```

From `examples/predecoded.js`:

```javascript
const wavesurfer = WaveSurfer.create({
  container: document.body,
  waveColor: 'rgb(200, 0, 200)',
  progressColor: 'rgb(100, 0, 100)',
  barWidth: 10,
  barRadius: 10,
  barGap: 2,
  url: '/examples/audio/demo.wav',
  peaks: [
    [
      0, 0.0023595101665705442, 0.012107174843549728, 0.005919494666159153,
      -0.31324470043182373, 0.1511787623167038, 0.2473851442337036,
      // ... more peak values
    ],
  ],
  duration: 22,
})
```

### Pattern 2: WebAudio Shim (For Precise Timing)

From `examples/webaudio-shim.js`:

```javascript
import WaveSurfer from 'wavesurfer.js'
import WebAudioPlayer from 'wavesurfer.js/dist/webaudio.js'

const webAudioPlayer = new WebAudioPlayer()
webAudioPlayer.src = '/examples/audio/audio.wav'

webAudioPlayer.addEventListener('loadedmetadata', () => {
  const wavesurfer = WaveSurfer.create({
    container: document.body,
    media: webAudioPlayer,
    peaks: webAudioPlayer.getChannelData(),
    duration: webAudioPlayer.duration,
  })
})
```

### Pattern 3: Dynamic Peaks Update

You can call `setOptions()` with new peaks as they become available:

```typescript
// Initial load with placeholder
const wavesurfer = WaveSurfer.create({
  container: '#waveform',
  url: streamingUrl,
  peaks: [[0, 0, 0]], // Minimal placeholder
  duration: estimatedDuration,
})

// Later, update with real peaks
wavesurfer.setOptions({
  peaks: actualPeaksData,
  duration: actualDuration,
})
```

---

## Pre-decoded Peaks Pattern

### How It Works:

1. **Server-side:** Generate peaks using tools like `audiowaveform`
2. **Client-side:** Pass peaks and duration to WaveSurfer
3. **Rendering:** Waveform renders immediately without downloading audio
4. **Playback:** HTMLMediaElement handles streaming playback

### Decoder.createBuffer (`src/decoder.ts`):

```typescript
/** Create an audio buffer from pre-decoded audio data */
function createBuffer(channelData: Array<Float32Array | number[]>, duration: number): AudioBuffer {
  // Validate inputs
  if (!channelData || channelData.length === 0) {
    throw new Error('channelData must be a non-empty array')
  }
  if (duration <= 0) {
    throw new Error('duration must be greater than 0')
  }

  // If a single array of numbers is passed, make it an array of arrays
  if (typeof channelData[0] === 'number') channelData = [channelData as unknown as number[]]

  // Normalize to -1..1
  normalize(channelData)

  // Convert to Float32Array for consistency
  const float32Channels = channelData.map((channel) =>
    channel instanceof Float32Array ? channel : Float32Array.from(channel),
  )

  return {
    duration,
    length: float32Channels[0].length,
    sampleRate: float32Channels[0].length / duration,
    numberOfChannels: float32Channels.length,
    getChannelData: (i: number) => {
      const channel = float32Channels[i]
      if (!channel) {
        throw new Error(`Channel ${i} not found`)
      }
      return channel
    },
    copyFromChannel: AudioBuffer.prototype.copyFromChannel,
    copyToChannel: AudioBuffer.prototype.copyToChannel,
  } as AudioBuffer
}
```

### Peak Generation Tools:

- **audiowaveform** (recommended): https://github.com/bbc/audiowaveform
- **ffmpeg** with an audio filter
- Custom server-side processing
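As a sketch of the last option, min/max peaks are just a downsampling pass over PCM samples. A hypothetical helper (not library code) that reduces one channel to interleaved min/max pairs in the -1..1 range WaveSurfer expects:

```typescript
// Downsample one channel of samples (-1..1 floats) into `buckets`
// interleaved [min, max] pairs -- a common layout for waveform peaks.
function computePeaks(samples: Float32Array, buckets: number): number[] {
  const peaks: number[] = []
  const bucketSize = Math.ceil(samples.length / buckets)

  for (let b = 0; b < buckets; b++) {
    let min = 0
    let max = 0
    const start = b * bucketSize
    const end = Math.min(start + bucketSize, samples.length)
    for (let i = start; i < end; i++) {
      const s = samples[i]
      if (s < min) min = s
      if (s > max) max = s
    }
    peaks.push(min, max)
  }
  return peaks
}
```

Run server-side over decoded PCM, the result can be served as a small `{ peaks, duration }` JSON payload and passed straight into the Pattern 1 shape above.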
---

## MediaStream Support (Recording)

### Real-time Waveform During Recording

The **Record Plugin** (`src/plugins/record.ts`) provides real-time waveform rendering from microphone input:

```typescript
public renderMicStream(stream: MediaStream): MicStream {
  const audioContext = new AudioContext()
  const source = audioContext.createMediaStreamSource(stream)
  const analyser = audioContext.createAnalyser()
  source.connect(analyser)

  // Use smaller FFT size for more responsive peak detection
  if (this.options.continuousWaveform || this.options.scrollingWaveform) {
    analyser.fftSize = 32
  }
  const bufferLength = analyser.frequencyBinCount
  const dataArray = new Float32Array(bufferLength)

  // ... drawing logic at 100 FPS ...

  const drawWaveform = () => {
    if (this.isWaveformPaused) return

    analyser.getFloatTimeDomainData(dataArray)

    if (this.options.scrollingWaveform) {
      // Scrolling waveform - use peak values
      // ... accumulate peaks ...
    } else if (this.options.continuousWaveform) {
      // Continuous waveform
      // ... accumulate all data ...
    }

    // Render using wavesurfer.load() with peaks
    if (this.wavesurfer) {
      this.wavesurfer
        .load('', [this.dataWindow], totalDuration)
        .then(() => { /* ... */ })
    }
  }

  const intervalId = setInterval(drawWaveform, 1000 / FPS) // 100 FPS
  // ...
}
```

### Key Features:

- **Scrolling Waveform:** Fixed window showing the last N seconds
- **Continuous Waveform:** Growing waveform as recording progresses
- **100 FPS update rate** for smooth visualization
- Uses the `load('', [peaks], duration)` pattern internally
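The per-tick reduction the plugin performs on the analyser frame boils down to picking the dominant sample. A standalone sketch of that idea (a hypothetical helper mirroring the approach, not the plugin's exact code):

```typescript
// Reduce one time-domain frame (floats in -1..1) to a single signed
// peak: the sample with the largest absolute amplitude.
function framePeak(frame: Float32Array): number {
  let peak = 0
  for (let i = 0; i < frame.length; i++) {
    if (Math.abs(frame[i]) > Math.abs(peak)) peak = frame[i]
  }
  return peak
}

// In a scrolling waveform, each tick would append framePeak(dataArray)
// to the window that gets passed to load('', [peaks], duration).
```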
---

## Limitations and Workarounds

### Memory Constraints for Large Files

From README.md (lines 114-116):

```markdown
<details>
<summary>Does wavesurfer support large files?</summary>
Since wavesurfer decodes audio entirely in the browser using Web Audio, large clips may fail to decode due to memory constraints. We recommend using pre-decoded peaks for large files.
</details>
```

### VBR Audio Mismatch

Variable bit rate files can cause waveform/audio sync issues:

```markdown
<details>
<summary>There is a mismatch between my audio and the waveform. How do I fix it?</summary>
If you're using a VBR (variable bit rate) audio file, there might be a mismatch between the audio and the waveform. This can be fixed by converting your file to CBR (constant bit rate).
</details>
```

### No MediaSource API Integration

WaveSurfer does not use MediaSource Extensions (MSE) for:

- Progressive audio loading
- Adaptive bitrate streaming
- HLS/DASH support

---

## Recommendations for Streaming Audio

### Recommended Architecture for Streaming:

```
                         +------------------+
                         |   Audio Server   |
                         |  (HLS/DASH/MP3)  |
                         +--------+---------+
                                  |
              +-------------------+--------------------+
              |                                        |
              v                                        v
+----------------------+                 +-----------------------+
|  Pre-computed Peaks  |                 |  Audio Streaming URL  |
|    (JSON/Binary)     |                 |  (direct or chunked)  |
+----------+-----------+                 +-----------+-----------+
           |                                         |
           v                                         v
+----------------------+                 +-----------------------+
|      WaveSurfer      |<--------------->|   HTMLMediaElement    |
|  (waveform render)   |                 |   (audio playback)    |
+----------------------+                 +-----------------------+
```

### Implementation Pattern:

```typescript
// 1. Fetch peaks separately (fast, small payload)
const peaksResponse = await fetch('/api/audio/peaks/123')
const { peaks, duration } = await peaksResponse.json()

// 2. Create WaveSurfer with streaming URL
const wavesurfer = WaveSurfer.create({
  container: '#waveform',
  url: 'https://streaming.example.com/audio/123.mp3', // Can be HLS, etc.
  peaks: peaks,
  duration: duration,
  backend: 'MediaElement', // Use HTMLMediaElement for streaming
})

// 3. Audio streams progressively via HTMLMediaElement
// Waveform is instantly rendered from pre-computed peaks
```

### For Progressive Loading Without Pre-computed Peaks:

If you cannot pre-compute peaks, consider:

1. **Chunked Peak Generation:** Generate peaks in chunks on the server, send them progressively
2. **Placeholder Waveform:** Show a loading indicator, then render when ready
3. **Estimated Waveform:** Analyze the first few seconds of audio, extrapolate
4. **Hybrid Approach:** Quick low-resolution peaks first, high-resolution after full load
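Option 4 needs only a cheap reduction step: derive a low-resolution preview from whatever peaks exist, then swap in the full-resolution peaks via `setOptions()` once the complete decode finishes. A hypothetical sketch of the reduction:

```typescript
// Collapse a peaks array to roughly `targetLength` values by keeping the
// sample with the largest magnitude in each bucket (sign preserved).
function toLowResPeaks(peaks: number[], targetLength: number): number[] {
  const bucketSize = Math.ceil(peaks.length / targetLength)
  const out: number[] = []

  for (let start = 0; start < peaks.length; start += bucketSize) {
    let dominant = 0
    const end = Math.min(start + bucketSize, peaks.length)
    for (let i = start; i < end; i++) {
      if (Math.abs(peaks[i]) > Math.abs(dominant)) dominant = peaks[i]
    }
    out.push(dominant)
  }
  return out
}
```

The coarse array renders immediately; the later `wavesurfer.setOptions({ peaks: [fullPeaks] })` swap follows the same shape as Pattern 3 above.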
---

## Code Snippets Reference

### WaveSurfer Options Interface (relevant streaming options):

```typescript
export type WaveSurferOptions = {
  // ... other options ...

  /** Audio URL */
  url?: string

  /** Pre-computed audio data, arrays of floats for each channel */
  peaks?: Array<Float32Array | number[]>

  /** Pre-computed audio duration in seconds */
  duration?: number

  /** Use an existing media element instead of creating one */
  media?: HTMLMediaElement

  /** Options to pass to the fetch method */
  fetchParams?: RequestInit

  /** Playback "backend" to use, defaults to MediaElement */
  backend?: 'WebAudio' | 'MediaElement'

  /** Override the Blob MIME type */
  blobMimeType?: string
}
```

### Events for Monitoring Load Progress:

```typescript
// During download
wavesurfer.on('load', (url) => {
  console.log('Started loading:', url)
})

wavesurfer.on('loading', (percent) => {
  console.log(`Download progress: ${percent}%`)
})

// After decode
wavesurfer.on('decode', (duration) => {
  console.log('Audio decoded, duration:', duration)
})

// Ready for playback
wavesurfer.on('ready', (duration) => {
  console.log('Ready to play, duration:', duration)
})
```

### WebAudioPlayer (Custom Backend):

```typescript
import WebAudioPlayer from 'wavesurfer.js/dist/webaudio.js'

// WebAudioPlayer decodes the entire file but provides precise timing
const player = new WebAudioPlayer()
player.src = '/audio.mp3'

player.addEventListener('loadedmetadata', () => {
  // Access decoded channel data
  const channelData = player.getChannelData() // Float32Array[]
  const duration = player.duration
})
```

---

## Conclusion

WaveSurfer.js is designed for **pre-loaded audio visualization**, not streaming. For streaming audio applications:

1. **Always use pre-decoded peaks** for large or streaming audio
2. **Let HTMLMediaElement handle streaming** playback
3. **Generate peaks server-side** using audiowaveform or similar
4. **Consider chunked peak delivery** for very long files

The library's architecture fundamentally requires full audio data for waveform rendering, making true progressive/streaming waveform visualization impossible without the pre-decoded peaks pattern.

---

## Related Files

- `/src/wavesurfer.ts` - Main WaveSurfer class
- `/src/fetcher.ts` - Fetch with progress tracking
- `/src/decoder.ts` - Audio decoding and buffer creation
- `/src/webaudio.ts` - WebAudio backend
- `/src/player.ts` - Base player class
- `/src/renderer.ts` - Waveform rendering
- `/src/plugins/record.ts` - Real-time microphone recording
- `/examples/predecoded.js` - Pre-decoded peaks example
- `/examples/webaudio-shim.js` - WebAudio backend example