audiomotion-analyzer 4.0.0-beta.5 → 4.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -27,19 +27,19 @@ What users are saying:

  ## Features

- + High-resolution real-time dual channel audio spectrum analyzer
- + Logarithmic, linear and perceptual (Bark/Mel) frequency scales, with customizable range
+ + Dual-channel high-resolution real-time audio spectrum analyzer
+ + Logarithmic, linear and perceptual (Bark and Mel) frequency scales, with customizable range
  + Visualization of discrete FFT frequencies or up to 240 frequency bands (supports ANSI and equal-tempered octave bands)
  + Decibel and linear amplitude scales, with customizable sensitivity
- + A, B, C, D and ITU-R 468 weighting filters
- + Optional effects: LED bars, luminance bars, mirroring and reflection, radial spectrum
- + Comes with 3 predefined color gradients - easily add your own!
+ + Optional A, B, C, D and ITU-R 468 weighting filters
+ + Additional effects: LED bars, luminance bars, mirroring and reflection, radial spectrum
+ + Choose from 5 built-in color gradients or easily add your own!
  + Fullscreen support, ready for retina / HiDPI displays
  + Zero-dependency native ES6+ module (ESM), \~25kB minified

  ## Online demos

- [![demo-animation](img/demo.gif)](https://audiomotion.dev/demo/)
+ [![demo-animation](img/demo.webp)](https://audiomotion.dev/demo/)

  ?> https://audiomotion.dev/demo/

@@ -247,12 +247,17 @@ Defaults to **false**.

  *Available since v4.0.0*

- When set to *true* uses ANSI/IEC preferred frequencies to generate the bands for [octave bands modes](#mode-number).
+ When set to *true*, ANSI/IEC preferred frequencies are used to generate the bands for **octave bands** modes (see [`mode`](#mode-number)).
  The preferred base-10 scale is used to compute the center and bandedge frequencies, as specified in the [ANSI S1.11-2004 standard](https://archive.org/details/gov.law.ansi.s1.11.2004).

- The default is to use the [equal temperament scale](http://hyperphysics.phy-astr.gsu.edu/hbase/Music/et.html), so that in 1/12 octave bands
+ When *false*, bands are based on the [equal-tempered scale](http://hyperphysics.phy-astr.gsu.edu/hbase/Music/et.html), so that in 1/12 octave bands
  the center of each band is perfectly tuned to a musical note.

+ ansiBands | bands standard | octaves' center frequencies
+ ----------|----------------|----------------------------
+ false | Equal temperament (A-440 Hz) | ![scale-log-equal-temperament](img/scale-log-equal-temperament.png)
+ true | ANSI S1.11-2004 | ![scale-log-ansi](img/scale-log-ansi.png)
+
  Defaults to **false**.

  ### `audioCtx` *AudioContext object* *(Read only)*
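
For illustration, a minimal sketch of the `ansiBands` option documented in the hunk above; the container and audio element IDs and the `audioMotion` variable are assumed here and are not part of this diff:

```js
import AudioMotionAnalyzer from 'audiomotion-analyzer';

// assumed elements - illustrative only
const audioMotion = new AudioMotionAnalyzer(
  document.getElementById('container'),
  {
    source: document.getElementById('audio'),
    mode: 2,         // 1/12 octave bands (see `mode`)
    ansiBands: false // default: bands centered on equal-tempered musical notes
  }
);

// switch to ANSI S1.11-2004 preferred frequencies at runtime
audioMotion.ansiBands = true;
```
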
@@ -326,8 +331,8 @@ Defines the number and layout of analyzer channels.
  channelLayout | description
  ----------------|------------
  'single' | Single channel analyzer, representing the combined output of both left and right channels.
- 'dual-vertical' | Dual channel analyzer, with left channel shown at the top and right channel at the bottom.
- 'dual-combined' | Left and right channel graphs are shown overlaid. Works best with semi-transparent **Graph** [`mode`](#mode-number) or [`outlineBars`](#outlinebars-boolean).
+ 'dual-combined' | Dual channel analyzer, with both channel graphs overlaid. Works best with semi-transparent **Graph** [`mode`](#mode-number) or [`outlineBars`](#outlinebars-boolean).
+ 'dual-vertical' | Left channel shown at the top half of the canvas and right channel at the bottom.

  !> When a *dual* layout is selected, any mono (single channel) audio source connected to the analyzer will output sound only from the left speaker,
  unless a stereo source is simultaneously connected to the analyzer, which will force the mono input to be upmixed to stereo.
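
For illustration, a short sketch of switching between the layouts listed above, on an assumed, already-created `audioMotion` instance (not part of this diff):

```js
// assumed existing AudioMotionAnalyzer instance - illustrative only
audioMotion.channelLayout = 'dual-vertical';  // left channel on top, right channel at the bottom

// overlaid channels read best with a translucent Graph or outlined bars
audioMotion.channelLayout = 'dual-combined';
audioMotion.mode = 10;        // Graph mode
audioMotion.fillAlpha = 0.3;  // semi-transparent fill
```
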
@@ -385,16 +390,16 @@ Current frame rate.

  Scale used to represent frequencies in the horizontal axis.

- frequencyScale | description
- ---------------|------------
- 'bark' | [Bark scale](https://en.wikipedia.org/wiki/Bark_scale)
- 'linear' | Linear scale
- 'log' | Logarithmic scale
- 'mel' | [Mel scale](https://en.wikipedia.org/wiki/Mel_scale)
+ frequencyScale | description | scale preview (20Hz - 22kHz range)
+ ---------------|-------------|-----------------------------------
+ 'bark' | [Bark scale](https://en.wikipedia.org/wiki/Bark_scale) | ![scale-bark](img/scale-bark.png)
+ 'linear' | Linear scale | ![scale-linear](img/scale-linear.png)
+ 'log' | Logarithmic scale | ![scale-log-ansi](img/scale-log-ansi.png)
+ 'mel' | [Mel scale](https://en.wikipedia.org/wiki/Mel_scale) | ![scale-mel](img/scale-mel.png)

- Logarithmic scale is required to visualize proper [octave bands](#mode-number) and it's also recommended when using [`noteLabels`](#notelabels-boolean).
+ Logarithmic scale allows visualization of proper **octave bands** (see [`mode`](#mode-number)) and it's also recommended when using [`noteLabels`](#notelabels-boolean).

- *Bark* and *Mel* are perceptual pitch scales which provide better visualization of midrange and high frequencies, especially in the [discrete frequencies mode](#mode-number).
+ *Bark* and *Mel* are perceptual pitch scales, which provide better visualization of mid-range to high frequencies.

  Defaults to **'log'**.
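
For illustration, a sketch of selecting the scales described above on an assumed `audioMotion` instance (not part of this diff):

```js
// assumed existing AudioMotionAnalyzer instance - illustrative only
audioMotion.frequencyScale = 'log';  // default; needed for proper octave bands
audioMotion.noteLabels = true;       // note labels line up with the logarithmic scale

// perceptual scales give more room to mid-range and high frequencies
audioMotion.frequencyScale = 'mel';
```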
 
@@ -419,7 +424,7 @@ It must be a built-in or registered gradient name (see [`registerGradient()`](#r

  `gradient` sets the gradient for both analyzer channels, but its read value represents only the gradient on the left (or single) channel.

- When using a dual [`channelLayout`](#channellayout-string), use [`gradientLeft`](#gradientleft-string) and [`gradientRight`](#gradientright-string) if you want to individually set/read the gradient for each channel.
+ When using a dual [`channelLayout`](#channellayout-string), use [`gradientLeft`](#gradientleft-string) and [`gradientRight`](#gradientright-string) to set/read the gradient on each channel individually.

  Built-in gradients are shown below:
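
For illustration, a sketch of the per-channel gradient properties mentioned above, on an assumed `audioMotion` instance using a dual `channelLayout` (not part of this diff):

```js
// assumed existing AudioMotionAnalyzer instance - illustrative only
audioMotion.gradient = 'prism';        // sets both channels; reading it returns the left/single channel value

audioMotion.gradientLeft  = 'classic'; // set each channel individually
audioMotion.gradientRight = 'rainbow';
```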
 
@@ -635,22 +640,24 @@ Visualization mode.

  mode | description | notes
  ----:|:-----------:|------
- 0 | Discrete frequencies |
- 1 | 1/24th octave bands or 240 bands |
- 2 | 1/12th octave bands or 120 bands |
- 3 | 1/8th octave bands or 80 bands |
- 4 | 1/6th octave bands or 60 bands |
- 5 | 1/4th octave bands or 40 bands |
- 6 | 1/3rd octave bands or 30 bands |
- 7 | Half octave bands or 20 bands |
- 8 | Full octave bands or 10 bands |
+ 0 | Discrete frequencies | *default*
+ 1 | 1/24th octave bands or 240 bands | *use 'log' `frequencyScale` for octave bands*
+ 2 | 1/12th octave bands or 120 bands | *use 'log' `frequencyScale` for octave bands*
+ 3 | 1/8th octave bands or 80 bands | *use 'log' `frequencyScale` for octave bands*
+ 4 | 1/6th octave bands or 60 bands | *use 'log' `frequencyScale` for octave bands*
+ 5 | 1/4th octave bands or 40 bands | *use 'log' `frequencyScale` for octave bands*
+ 6 | 1/3rd octave bands or 30 bands | *use 'log' `frequencyScale` for octave bands*
+ 7 | Half octave bands or 20 bands | *use 'log' `frequencyScale` for octave bands*
+ 8 | Full octave bands or 10 bands | *use 'log' `frequencyScale` for octave bands*
  9 | *(not valid)* | *reserved*
  10 | Graph | *added in v1.1.0*

  + **Mode 0** provides the highest resolution, allowing you to visualize individual frequencies as provided by the [FFT](https://en.wikipedia.org/wiki/Fast_Fourier_transform) computation;
- + **Modes 1 - 8** divide the frequency spectrum in bands; when using the default **logarithmic** [frequency scale](#frequencyscale-string), each band represents the *n*th part of an octave (see also [`ansiBands`](#ansibands-boolean)); otherwise, a fixed number of bands is used for each mode;
+ + **Modes 1 - 8** divide the frequency spectrum in bands; when using the default **logarithmic** [`frequencyScale`](#frequencyscale-string), each band represents the *n*th part of an octave; otherwise, a fixed number of bands is used for each mode;
  + **Mode 10** uses the discrete FFT data points to draw a continuous line and/or a filled area graph (see [`fillAlpha`](#fillalpha-number) and [`lineWidth`](#linewidth-number) properties).

+ See also [`ansiBands`](#ansibands-boolean).
+
  Defaults to **0**.

  ### `noteLabels` *boolean*
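
For illustration, a sketch combining the `mode` values above with `frequencyScale`, on an assumed `audioMotion` instance (not part of this diff):

```js
// assumed existing AudioMotionAnalyzer instance - illustrative only
audioMotion.mode = 0;                   // discrete FFT frequencies (default)

audioMotion.mode = 6;                   // 1/3 octave bands on the 'log' frequencyScale...
audioMotion.frequencyScale = 'linear';  // ...or a fixed 30 bands on any other scale

audioMotion.mode = 10;                  // Graph - see fillAlpha and lineWidth
audioMotion.lineWidth = 2;
audioMotion.fillAlpha = 0.5;
```
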
@@ -911,11 +918,15 @@ Since this is a static property, you should always access it as `AudioMotionAnal

  ### `onCanvasDraw` *function*

- If defined, this function will be called after rendering each frame.
+ If defined, this function will be called after **audioMotion-analyzer** finishes rendering each animation frame.

- The audioMotion object will be passed as an argument to the callback function.
+ The callback function is passed two arguments: an *AudioMotionAnalyzer* object, and an object with the following properties:
+ - `timestamp`, a [*DOMHighResTimeStamp*](https://developer.mozilla.org/en-US/docs/Web/API/DOMHighResTimeStamp)
+ which indicates the elapsed time in milliseconds since the analyzer started running;
+ - `canvasGradients`, an array of [*CanvasGradient*](https://developer.mozilla.org/en-US/docs/Web/API/CanvasGradient)
+ objects currently in use on the left (or single) and right analyzer channels.

- Canvas properties `fillStyle` and `strokeStyle` will be set to the current gradient when the function is called.
+ The canvas properties `fillStyle` and `strokeStyle` will be set to the left/single channel gradient before the function is called.

  Usage example:

@@ -928,16 +939,26 @@ const audioMotion = new AudioMotionAnalyzer(
    }
  );

- function drawCallback( instance ) {
-   const ctx = instance.canvasCtx,
-         baseSize = ( instance.isFullscreen ? 40 : 20 ) * instance.pixelRatio;
-
-   // use the 'energy' value to increase the font size and make the logo pulse to the beat
+ function drawCallback( instance, info ) {
+   const baseSize = ( instance.isFullscreen ? 40 : 20 ) * instance.pixelRatio,
+         canvas = instance.canvas,
+         centerX = canvas.width / 2,
+         centerY = canvas.height / 2,
+         ctx = instance.canvasCtx,
+         maxHeight = centerY / 2,
+         maxWidth = centerX - baseSize * 5,
+         time = info.timestamp / 1e4;
+
+   // the energy value is used here to increase the font size and make the logo pulsate to the beat
    ctx.font = `${ baseSize + instance.getEnergy() * 25 * instance.pixelRatio }px Orbitron, sans-serif`;

-   ctx.fillStyle = '#fff8';
+   // use the right-channel gradient to fill text
+   ctx.fillStyle = info.canvasGradients[1];
    ctx.textAlign = 'center';
-   ctx.fillText( 'audioMotion', instance.canvas.width - baseSize * 8, baseSize * 2 );
+   ctx.globalCompositeOperation = 'lighter';
+
+   // the timestamp can be used to create effects and animations based on the elapsed time
+   ctx.fillText( 'audioMotion', centerX + maxWidth * Math.cos( time % Math.PI * 2 ), centerY + maxHeight * Math.sin( time % Math.PI * 16 ) );
  }
  ```

@@ -947,7 +968,7 @@ For more examples, see the fluid demo [source code](https://github.com/hvianna/a

  If defined, this function will be called whenever the canvas is resized.

- Two arguments are passed: a string with the reason why the function was called (see below) and the audioMotion object.
+ The callback function is passed two arguments: a string which indicates the reason that triggered the call (see below) and the *AudioMotionAnalyzer* object.

  Reason | Description
  -------|------------
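
For illustration, a sketch of registering the callback described above; the element IDs are assumed, and the possible `reason` strings are the ones listed in the table that continues beyond this hunk:

```js
import AudioMotionAnalyzer from 'audiomotion-analyzer';

// assumed elements - illustrative only
const audioMotion = new AudioMotionAnalyzer(
  document.getElementById('container'),
  {
    source: document.getElementById('audio'),
    onCanvasResize: ( reason, instance ) => {
      // log the reason and the new canvas dimensions
      console.log( `Canvas resized (${ reason }): ${ instance.canvas.width } x ${ instance.canvas.height }` );
    }
  }
);
```
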
@@ -1259,11 +1280,11 @@ myAudio.crossOrigin = 'anonymous';
  Browser autoplay policy dictates that audio output can only be initiated by a user gesture, and this is enforced by WebAudio API
  by creating [*AudioContext*](#audioctx-audiocontext-object-read-only) objects in *suspended* mode.

- **audioMotion-analyzer** tries to automatically start its audio context on the first click on the page.
+ **audioMotion-analyzer** tries to automatically start its AudioContext on the first click on the page.
  However, if you're using an `audio` or `video` element with the `controls` property, clicks on those native media controls cannot be detected
  by JavaScript, so the audio will only be enabled if/when the user clicks somewhere else.

- Two possible solutions: **1)** make **sure** your users have to click somewhere else before using the media controls,
+ Two possible solutions are: **1)** ensure your users have to click somewhere else before using the media controls,
  like a "power on" button, or simply clicking to select a song from a list will do; or **2)** don't use the native
  controls at all, and create your own custom play and stop buttons. A very simple example:

@@ -1281,6 +1302,9 @@ document.getElementById('play').addEventListener( 'click', () => myAudio.play()
  document.getElementById('stop').addEventListener( 'click', () => myAudio.pause() );
  ```

+ You can also prevent the _"The AudioContext was not allowed to start"_ warning message from appearing in the browser console, by instantiating
+ your **audioMotion-analyzer** object within a function triggered by a user click. See the [minimal demo](/demo/minimal.html) code for an example.
+
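
For illustration, a minimal sketch of the tip added above: creating the analyzer inside a click handler so its AudioContext starts from a user gesture (element IDs and variable names are assumed, not part of this diff):

```js
import AudioMotionAnalyzer from 'audiomotion-analyzer';

// assumed elements - illustrative only
const container = document.getElementById('container'),
      myAudio   = document.getElementById('audio');

let audioMotion;

document.getElementById('play').addEventListener( 'click', () => {
  // instantiating here ties the AudioContext to a user gesture,
  // so the "was not allowed to start" warning never appears
  if ( ! audioMotion )
    audioMotion = new AudioMotionAnalyzer( container, { source: myAudio } );
  myAudio.play();
});
```
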


  ## References and acknowledgments

@@ -1293,7 +1317,9 @@ document.getElementById('stop').addEventListener( 'click', () => myAudio.pause()
  * [Equations for equal-tempered scale frequencies](http://pages.mtu.edu/~suits/NoteFreqCalcs.html)
  * [Making Audio Reactive Visuals](https://www.airtightinteractive.com/2013/10/making-audio-reactive-visuals/)
  * The font used in audioMotion's logo is [Orbitron](https://fonts.google.com/specimen/Orbitron) by Matt McInerney
- * This documentation website is powered by [GitHub Pages](https://pages.github.com/), [docsify](https://docsify.js.org/) and [docsify-themeable](https://jhildenbiddle.github.io/docsify-themeable).
+ * The _prism_ and _rainbow_ gradients use the [12-bit rainbow palette](https://iamkate.com/data/12-bit-rainbow/) by Kate Morley
+ * The cover page animation was recorded with [ScreenToGif](https://github.com/NickeManarin/ScreenToGif) by Nicke Manarin
+ * This documentation website is powered by [GitHub Pages](https://pages.github.com/), [docsify](https://docsify.js.org/) and [docsify-themeable](https://jhildenbiddle.github.io/docsify-themeable)


  ## Changelog
@@ -1303,18 +1329,20 @@ See [Changelog.md](Changelog.md)

  ## Contributing

- If you want to send feedback, ask a question, or need help with something, please use the [**Discussions**](https://github.com/hvianna/audioMotion-analyzer/discussions) area on GitHub.
+ I kindly request that you only [open an issue](https://github.com/hvianna/audioMotion-analyzer/issues) for submitting a **bug report**.

- I would love to see your cool projects using **audioMotion-analyzer** -- post them in the *Show and tell* section of [Discussions](https://github.com/hvianna/audioMotion-analyzer/discussions)!
+ If you need help integrating *audioMotion-analyzer* with your project, have ideas for **new features** or any other questions or feedback,
+ please use the [**Discussions**](https://github.com/hvianna/audioMotion-analyzer/discussions) section on GitHub.

- For **bug reports** and **feature requests**, feel free to [open an issue](https://github.com/hvianna/audioMotion-analyzer/issues).
+ Additionally, I would love it if you could showcase your project using *audioMotion-analyzer* in [**Show and Tell**](https://github.com/hvianna/audioMotion-analyzer/discussions/categories/show-and-tell),
+ and share your custom gradients with the community in [**Gradients**](https://github.com/hvianna/audioMotion-analyzer/discussions/categories/gradients)!

- If you want to submit a **Pull Request**, please branch it off the project's `develop` branch.
+ When submitting a **Pull Request**, please branch it off the project's `develop` branch.

  And if you're feeling generous, maybe:

  * [Buy me a coffee](https://ko-fi.com/Q5Q6157GZ) on Ko-fi ☕😊
- * Gift me something from my [Bandcamp wishlist](https://bandcamp.com/henriquevianna/wishlist) 🎁🥰
+ * Gift me something from my [Bandcamp wishlist](https://bandcamp.com/henriquevianna/wishlist) 🎁🎶🥰
  * Tip me via [Brave Rewards](https://brave.com/brave-rewards/) using Brave browser 🤓

package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
    "name": "audiomotion-analyzer",
    "description": "High-resolution real-time graphic audio spectrum analyzer JavaScript module with no dependencies.",
-   "version": "4.0.0-beta.5",
+   "version": "4.0.0",
    "main": "./src/audioMotion-analyzer.js",
    "module": "./src/audioMotion-analyzer.js",
    "types": "./src/index.d.ts",
package/src/audioMotion-analyzer.js CHANGED
@@ -2,12 +2,12 @@
  * audioMotion-analyzer
  * High-resolution real-time graphic audio spectrum analyzer JS module
  *
- * @version 4.0.0-beta.5
+ * @version 4.0.0
  * @author Henrique Avila Vianna <hvianna@gmail.com> <https://henriquevianna.com>
  * @license AGPL-3.0-or-later
  */

- const VERSION = '4.0.0-beta.5';
+ const VERSION = '4.0.0';

  // internal constants
  const TAU = 2 * Math.PI,
@@ -45,7 +45,8 @@ const CANVAS_BACKGROUND_COLOR = '#000',
        SCALE_MEL = 'mel';

  // built-in gradients
- const GRADIENTS = [
+ const PRISM = [ '#a35', '#c66', '#e94', '#ed0', '#9d5', '#4d8', '#2cb', '#0bc', '#09c', '#36b' ],
+       GRADIENTS = [
    [ 'classic', {
      colorStops: [
        'hsl( 0, 100%, 50% )',
@@ -54,25 +55,11 @@ const GRADIENTS = [
      ]
    }],
    [ 'prism', {
-     colorStops: [
-       'hsl( 0, 100%, 50% )',
-       'hsl( 60, 100%, 50% )',
-       'hsl( 120, 100%, 50% )',
-       'hsl( 180, 100%, 50% )',
-       'hsl( 240, 100%, 50% )'
-     ]
+     colorStops: PRISM
    }],
    [ 'rainbow', {
      dir: 'h',
-     colorStops: [
-       'hsl( 0, 100%, 50% )',
-       'hsl( 60, 100%, 50% )',
-       'hsl( 120, 100%, 50% )',
-       'hsl( 180, 100%, 47% )',
-       'hsl( 240, 100%, 58% )',
-       'hsl( 300, 100%, 50% )',
-       'hsl( 360, 100%, 50% )'
-     ]
+     colorStops: [ '#817', ...PRISM, '#639' ]
    }],
    [ 'orangered', {
      bgColor: '#3e2f29',
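
The refactor above reuses the new `PRISM` palette for both the _prism_ and _rainbow_ built-in gradients. For illustration, a user-side sketch of registering a similar custom gradient through the `registerGradient()` API documented in the README; the gradient name, background color and `audioMotion` instance are assumed:

```js
// assumed existing AudioMotionAnalyzer instance - illustrative only
// register a custom gradient using the same 12-bit rainbow hues as the new 'prism' palette
audioMotion.registerGradient( 'my-prism', {
  bgColor: '#111',
  colorStops: [ '#a35', '#c66', '#e94', '#ed0', '#9d5', '#4d8', '#2cb', '#0bc', '#09c', '#36b' ]
});

audioMotion.gradient = 'my-prism';
```
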
@@ -1137,7 +1124,7 @@ export default class AudioMotionAnalyzer {
  [ binLo, ratioLo ] = calcRatio( freqLo ),
  [ binHi, ratioHi ] = calcRatio( freqHi );

- barsPush( { posX, freq, freqLo, freqHi, binLo, binHi, ratioLo, ratioHi } );
+ barsPush( { posX: initialX + posX, freq, freqLo, freqHi, binLo, binHi, ratioLo, ratioHi } );
  }

  }
@@ -1370,6 +1357,7 @@ export default class AudioMotionAnalyzer {
  canvas = ctx.canvas,
  canvasX = this._scaleX.canvas,
  canvasR = this._scaleR.canvas,
+ canvasGradients = this._canvasGradients,
  energy = this._energy,
  fillAlpha = this.fillAlpha,
  mode = this._mode,
@@ -1571,7 +1559,7 @@ export default class AudioMotionAnalyzer {
  ctx.lineWidth = isOutline ? Math.min( lineWidth, width / 2 ) : lineWidth;

  // set selected gradient for fill and stroke
- ctx.fillStyle = ctx.strokeStyle = this._canvasGradients[ channel ];
+ ctx.fillStyle = ctx.strokeStyle = canvasGradients[ channel ];
  } // if ( useCanvas )

  // get a new array of data from the FFT
@@ -1891,8 +1879,8 @@ export default class AudioMotionAnalyzer {
  // call callback function, if defined
  if ( this.onCanvasDraw ) {
  ctx.save();
- ctx.fillStyle = ctx.strokeStyle = this._canvasGradients[0];
- this.onCanvasDraw( this );
+ ctx.fillStyle = ctx.strokeStyle = canvasGradients[0];
+ this.onCanvasDraw( this, { timestamp, canvasGradients } );
  ctx.restore();
  }

package/src/index.d.ts CHANGED
@@ -1,5 +1,14 @@
- type OnCanvasDrawFunction = (instance: AudioMotionAnalyzer) => unknown;
- type OnCanvasResizeFunction = (
+ export type OnCanvasDrawFunction = (
+   instance: AudioMotionAnalyzer,
+   info: CanvasDrawInfo
+ ) => unknown;
+
+ export type CanvasDrawInfo = {
+   timestamp: DOMHighResTimeStamp,
+   canvasGradients: CanvasGradient[]
+ }
+
+ export type OnCanvasResizeFunction = (
    reason: CanvasResizeReason,
    instance: AudioMotionAnalyzer
  ) => unknown;