node-web-audio-api 0.6.0 → 0.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,8 +1,29 @@
1
- ## v0.4.0
1
+ ## v0.8.0
2
+
3
+ - Implement MediaDevices enumerateDevices and getUserMedia
4
+ - Use JACK as default output if it exists on Linux
5
+
6
+ ## v0.7.0
7
+
8
+ - Improve README & docs
9
+ - Fix AudioParam method names
10
+
11
+ ## v0.6.0 - Feb 2023
12
+
13
+ - Basic support for mediaDevices & MediaStreamAudioSourceNode
14
+ - Add bindings to ConvolverNode, AnalyserNode & Panner nodes
15
+ - Update upstream crate to v0.26
16
+
17
+ ## v0.5.0 - Dec 2022
18
+
19
+ - Implement AudioParam#setValueCurveAtTime
20
+ - Offline context constructor
21
+
22
+ ## v0.4.0 - Nov 2022
2
23
 
3
24
  - Implement offline audio context
4
- - Update web-audio-api-rs to v0.24.0
5
- - Implement `audio_node.disconnect()`
25
+ - Update upstream crate to v0.24
26
+ - Implement AudioNode#disconnect
6
27
  - Properly support ESM
7
28
  - Limit number of online contexts to 1 on Linux
8
29
  - Force latencyHint to 'playback' if not manually set on RPi
package/README.md CHANGED
@@ -1,6 +1,13 @@
1
- # `node-web-audio-api`
1
+ # Node Web Audio API
2
2
 
3
- > Nodejs bindings for [`orottier/web-audio-api-rs`](https://github.com/orottier/web-audio-api-rs/) using [`napi-rs`](https://github.com/napi-rs/napi-rs/)
3
+ [![npm version](https://badge.fury.io/js/node-web-audio-api.svg)](https://badge.fury.io/js/node-web-audio-api)
4
+
5
+ Node.js bindings for the Rust implementation of the Web Audio API Specification
6
+
7
+ The goal of this library is to provide an implementation that is efficient and _exactly_ matches the browsers' API.
8
+
9
+ - see [`orottier/web-audio-api-rs`](https://github.com/orottier/web-audio-api-rs/) for the "real" audio guts
10
+ - see [`napi-rs`](https://github.com/napi-rs/napi-rs/) for the Node.js bindings
4
11
 
5
12
  ## Install
6
13
 
@@ -8,10 +15,12 @@
8
15
  npm install [--save] node-web-audio-api
9
16
  ```
10
17
 
11
- ## Example
18
+ ## Example Use
12
19
 
13
20
  ```js
14
21
  import { AudioContext, OscillatorNode, GainNode } from 'node-web-audio-api';
22
+ // or using old-fashioned CommonJS syntax:
23
+ // const { AudioContext, OscillatorNode, GainNode } = require('node-web-audio-api');
15
24
 
16
25
  const audioContext = new AudioContext();
17
26
 
@@ -33,54 +42,52 @@ setInterval(() => {
33
42
  }, 80);
34
43
  ```
35
44
 
36
- or using with old fashionned commonjs syntax
45
+ ### Running the Examples
37
46
 
38
- ```js
39
- const { AudioContext, OscillatorNode, GainNode } = require('node-web-audio-api');
47
+ To run the examples locally on your machine, you will need to:
40
48
 
41
- const audioContext = new AudioContext();
42
- //...
49
+ 1. Install the Rust toolchain
50
+ ```sh
51
+ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
43
52
  ```
44
53
 
45
- ## Caveats
46
-
47
- - Currently the library does not provide any way of chosing the output interface, system default interface will be used. As the spec and web-audio-api evolve evolve, thus should change in the future see [https://github.com/orottier/web-audio-api-rs/issues/216](https://github.com/orottier/web-audio-api-rs/issues/216)
48
- - On Linux systems, the audio backend is Alsa, which limits the number of online
49
- AudioContext to 1. This is subject to change in the future.
50
-
51
- ### Raspberry Pi
52
-
53
- On Raspberry Pi, the default render quantum size (128) is too small and underruns
54
- occurs frequently. To prevent that, if you do not explicitely provide a latency hint
55
- in the AudioContext options, the value is automatically set to 'playback' which uses
56
- a buffer of 1024 samples. While this is not per se spec compliant, it allow usage
57
- of the library in a more user friendly manner. In the future, this might change according
58
- to the support of other audio backend, which is now alsa.
54
+ 2. Clone the repo and build the binary on your machine
55
+ ```sh
56
+ git clone https://github.com/ircam-ismm/node-web-audio-api.git
57
+ cd node-web-audio-api
58
+ npm install
59
+ npm run build
60
+ ```
59
61
 
60
- ```js
61
- const audioContext = new AudioContext({ latencyHint: 'playback' });
62
+ 3. Run the examples from the project's root directory
63
+ ```sh
64
+ node examples/granular-scrub.mjs
62
65
  ```
63
66
 
64
- The 'playback' latency hint, 1024 samples / ~21ms at 48000Hz, has been found
65
- a good value.
67
+ ## Caveats
68
+
69
+ - The async methods are not truly async for now; they are only patched on the JS side. This will evolve once truly async versions of the methods are implemented in the upstream library.
70
+ - On Linux systems, the audio backend is currently Alsa, which limits the number of online `AudioContext` instances to 1. This is subject to change in the future.
71
+ - On Raspberry Pi, the default render quantum size (128) is too small and underruns occur frequently. To prevent that, if you do not explicitly provide a latency hint in the AudioContext options, the value is automatically set to 'playback', which uses a buffer of 1024 samples (~21ms at 48000Hz). While this is not per se spec compliant, it makes the library usable in a more user-friendly manner. This might change in the future as other audio backends are supported.
72
+ - On Raspberry Pi, the provided `Linux arm gnueabihf` binary only works on 32-bit OSes. A version for 64-bit OSes will be provided in the future.
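The 'playback' fallback described above can also be requested explicitly. A small sketch: the `AudioContext` line is commented out because it needs the native binding installed, while the buffer-latency arithmetic below it is plain Node and runs anywhere:

```javascript
// Setting the latency hint explicitly, as the RPi fallback does
// automatically (requires the node-web-audio-api package):
// const { AudioContext } = require('node-web-audio-api');
// const audioContext = new AudioContext({ latencyHint: 'playback' });

// 'playback' maps to a 1024-sample buffer; at 48000 Hz that amounts to:
const latencyMs = (1024 / 48000) * 1000;
console.log(latencyMs.toFixed(1)); // '21.3' (ms)
```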
66
73
 
67
74
  ## Supported Platforms
68
75
 
69
- | | binaries | tested |
70
- | ---------------------------| ------ | ------ |
71
- | Windows x64 | ✓ | |
72
- | Windows arm64 | ✓ | |
73
- | macOS x64 | ✓ | ✓ |
74
- | macOS aarch64 | ✓ | |
75
- | Linux x64 gnu | ✓ | |
76
- | Linux arm gnueabihf (RPi) | ✓ | ✓ |
76
+ | | binaries | tested |
77
+ | --------------------------- | ------ | ------ |
78
+ | Windows x64 | ✓ | |
79
+ | Windows arm64 | ✓ | |
80
+ | macOS x64 | ✓ | ✓ |
81
+ | macOS aarch64 | ✓ | |
82
+ | Linux x64 gnu | ✓ | |
83
+ | Linux arm gnueabihf (RPi) | ✓ | ✓ |
77
84
 
78
85
 
79
86
  ### Manual Build
80
87
 
81
88
  If prebuilt binaries are not shipped for your platform, you will need to:
82
89
 
83
- 1. Install rust toolchain
90
+ 1. Install the Rust toolchain
84
91
 
85
92
  ```sh
86
93
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
@@ -89,24 +96,17 @@ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
89
96
  2. Install and build from GitHub
90
97
 
91
98
  ```sh
92
- npm install --save git+https://github.com/b-ma/node-web-audio-api.git
99
+ npm install --save git+https://github.com/ircam-ismm/node-web-audio-api.git
93
100
  cd node_modules/node-web-audio-api
101
+ npm install
94
102
  npm run build
95
103
  ```
96
104
 
97
- The package will then be built on your machine, which might take some time
98
-
99
- ## Known limitation / caveats
100
-
101
- - async function are not trully async but only monkey patched on the JS side, this will
102
- be updated once `web-audio-api-rs` provide async version of the methods.
103
- - see `web-audio-api-rs`
105
+ The package will be built on your machine, which might take some time.
104
106
 
105
107
  ## Development notes
106
108
 
107
- The npm script rely on [`cargo-bump`](https://crates.io/crates/cargo-bump) to maintain version synced between
108
- the `package.json` and the `Cargo.toml` files. Therefore, you will need to install
109
- `cargo-bump` on your machine
109
+ The npm `postversion` script relies on [`cargo-bump`](https://crates.io/crates/cargo-bump) to keep the versions in the `package.json` and `Cargo.toml` files in sync. Therefore, you will need to install `cargo-bump` on your machine:
110
110
 
111
111
  ```
112
112
  cargo install cargo-bump
@@ -114,4 +114,4 @@ cargo install cargo-bump
114
114
 
115
115
  ## License
116
116
 
117
- This project is licensed under the [BSD-3-Clause license](./LICENSE).
117
+ [BSD-3-Clause](./LICENSE)
package/index.cjs CHANGED
@@ -83,33 +83,8 @@ if (!nativeBinding) {
83
83
  throw new Error(`Failed to load native binding for OS: ${platform}, architecture: ${arch}`);
84
84
  }
85
85
 
86
- const {
87
- patchAudioContext,
88
- patchOfflineAudioContext,
89
- load,
90
- } = require('./monkey-patch.js');
91
-
92
- nativeBinding.AudioContext = patchAudioContext(nativeBinding.AudioContext);
93
- nativeBinding.OfflineAudioContext = patchOfflineAudioContext(nativeBinding.OfflineAudioContext);
94
- nativeBinding.load = load;
95
-
96
- // ------------------------------------------------------------------
97
- // monkey patch proto media devices API
98
- // @todo - review
99
- // ------------------------------------------------------------------
100
- class MediaStream extends nativeBinding.Microphone {};
101
- // const Microphone = nativeBinding.Microphone;
102
- nativeBinding.Microphone = null;
103
-
104
- nativeBinding.mediaDevices = {}
105
- nativeBinding.mediaDevices.getUserMedia = function getUserMedia(options) {
106
- if (options && options.audio === true) {
107
- const mic = new MediaStream();
108
- return Promise.resolve(mic);
109
- } else {
110
- throw new NotSupportedError(`Only { audio: true } is currently supported`);
111
- }
112
- }
86
+ const monkeyPatch = require('./monkey-patch.js');
87
+ nativeBinding = monkeyPatch(nativeBinding);
113
88
 
114
89
  module.exports = nativeBinding;
115
90
 
package/monkey-patch.js CHANGED
@@ -20,32 +20,58 @@ class NotSupportedError extends Error {
20
20
  }
21
21
 
22
22
  const { platform, arch } = process;
23
- let contextId = 0;
24
23
 
25
- function patchAudioContext(NativeAudioContext) {
26
- class AudioContext extends NativeAudioContext {
27
- constructor(options = {}) {
24
+ let contextIds = {
25
+ audioinput: 0,
26
+ audiooutput: 0,
27
+ };
28
28
 
29
- // special handling of options on linux, these are not spec compliant but are
30
- // ment to be more user-friendly than what we have now (is subject to change)
31
- if (platform === 'linux') {
32
- // throw meaningfull error if several contexts are created on linux,
33
- // because of alsa backend we currently use
34
- if (contextId === 1) {
35
- throw new Error(`[node-web-audio-api] node-web-audio-api currently uses alsa as backend, therefore only one context can be safely created`);
36
- }
29
+ let enumerateDevicesSync = null;
30
+
31
+ function handleDefaultOptions(options, kind) {
32
+ if (platform === 'linux') {
33
+ const list = enumerateDevicesSync();
34
+ const jackDevice = list.find(device => device.kind === kind && device.label === 'jack');
37
35
 
38
- // fallback latencyHint to "playback" on RPi if not explicitely defined
39
- if (arch === 'arm') {
40
- if (!('latencyHint' in options)) {
41
- options.latencyHint = 'playback';
42
- }
36
+ if (jackDevice === undefined) {
37
+ // throw meaningfull error if several contexts are created on linux,
38
+ // because of alsa backend we currently use
39
+ if (contextIds[kind] === 1) {
40
+ throw new Error(`[node-web-audio-api] node-web-audio-api uses alsa as backend, therefore only one context or audio input stream can be safely created`);
41
+ }
42
+
43
+ // force latencyHint to "playback" on RPi if not explicitly defined
44
+ if (arch === 'arm') {
45
+ if (kind === 'audiooutput' && !('latencyHint' in options)) {
46
+ options.latencyHint = 'playback';
43
47
  }
44
48
  }
49
+ } else {
50
+ // default to jack if jack source or sink is found
51
+ const deviceKey = kind === 'audioinput' ? 'deviceId' : 'sinkId';
45
52
 
53
+ if (!(deviceKey in options)) {
54
+ console.log(`> JACK ${kind} device found, using it as default`);
55
+ options[deviceKey] = jackDevice.deviceId;
56
+ }
57
+ }
58
+ }
59
+
60
+ // increment contextIds as they are used to keep the process awake
61
+ contextIds[kind] += 1;
62
+
63
+ return options;
64
+ }
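The JACK-defaulting branch of `handleDefaultOptions` can be sketched in isolation. The `pickDefaultSink` helper and the hard-coded device list below are hypothetical stand-ins for illustration, not the real native device enumeration:

```javascript
// Standalone sketch of the option-defaulting idea: if a JACK output
// device is present in the (toy) device list and the caller did not
// specify a sinkId, the JACK device becomes the default output.
function pickDefaultSink(options, devices) {
  const jackDevice = devices.find(
    device => device.kind === 'audiooutput' && device.label === 'jack'
  );

  if (jackDevice !== undefined && !('sinkId' in options)) {
    options.sinkId = jackDevice.deviceId;
  }

  return options;
}

const devices = [
  { kind: 'audioinput', label: 'default', deviceId: 'in-0' },
  { kind: 'audiooutput', label: 'jack', deviceId: 'jack-0' },
];

console.log(pickDefaultSink({}, devices).sinkId); // 'jack-0'
```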
65
+
66
+ function patchAudioContext(nativeBinding) {
67
+ class AudioContext extends nativeBinding.AudioContext {
68
+ constructor(options = {}) {
69
+ // special handling of options on linux; these are not spec compliant but are
70
+ // meant to be more user-friendly than what we have now (subject to change)
71
+ options = handleDefaultOptions(options, 'audiooutput');
46
72
  super(options);
47
73
  // prevent garbage collection
48
- const processId = `__AudioContext_${contextId}`;
74
+ const processId = `__AudioContext_${contextIds['audiooutput']}`;
49
75
  process[processId] = this;
50
76
 
51
77
  Object.defineProperty(this, '__processId', {
@@ -55,7 +81,6 @@ function patchAudioContext(NativeAudioContext) {
55
81
  configurable: false,
56
82
  });
57
83
 
58
- contextId += 1;
59
84
  // keep process awake
60
85
  const keepAwakeId = setInterval(() => {}, 10000);
61
86
  Object.defineProperty(this, '__keepAwakeId', {
@@ -83,6 +108,15 @@ function patchAudioContext(NativeAudioContext) {
83
108
  return Promise.resolve(super.close());
84
109
  }
85
110
 
111
+ setSinkId(sinkId) {
112
+ try {
113
+ super.setSinkId(sinkId);
114
+ return Promise.resolve(undefined);
115
+ } catch (err) {
116
+ return Promise.reject(err);
117
+ }
118
+ }
119
+
86
120
  decodeAudioData(audioData) {
87
121
  if (!isPlainObject(audioData) || !('path' in audioData)) {
88
122
  throw new Error(`Invalid argument, please consider using the load helper`);
@@ -100,8 +134,8 @@ function patchAudioContext(NativeAudioContext) {
100
134
  return AudioContext;
101
135
  }
102
136
 
103
- function patchOfflineAudioContext(NativeOfflineAudioContext) {
104
- class OfflineAudioContext extends NativeOfflineAudioContext {
137
+ function patchOfflineAudioContext(nativeBinding) {
138
+ class OfflineAudioContext extends nativeBinding.OfflineAudioContext {
105
139
  constructor(...args) {
106
140
  // handle initialisation with either an options object or a sequence of parameters
107
141
  // https://webaudio.github.io/web-audio-api/#dom-offlineaudiocontext-constructor-contextoptions-contextoptions
@@ -121,15 +155,6 @@ function patchOfflineAudioContext(NativeOfflineAudioContext) {
121
155
  }
122
156
 
123
157
  super(...args);
124
-
125
- // not sure this is usefull, to be tested
126
- const keepAwakeId = setInterval(() => {}, 10000);
127
- Object.defineProperty(this, '__keepAwakeId', {
128
- value: keepAwakeId,
129
- enumerable: false,
130
- writable: true,
131
- configurable: false,
132
- });
133
158
  }
134
159
 
135
160
  // promisify sync APIs
@@ -161,15 +186,40 @@ function patchOfflineAudioContext(NativeOfflineAudioContext) {
161
186
  return OfflineAudioContext;
162
187
  }
163
188
 
164
- module.exports.patchAudioContext = patchAudioContext;
165
- module.exports.patchOfflineAudioContext = patchOfflineAudioContext;
166
-
167
189
  // dumb method provided to mock an xhr call and mimic the browser's API
168
190
  // see also `AudioContext.decodeAudioData`
169
- module.exports.load = function(path) {
191
+ function load(path) {
170
192
  if (!fs.existsSync(path)) {
171
193
  throw new Error(`File not found: "${path}"`);
172
194
  }
173
195
 
174
196
  return { path };
175
197
  };
198
+
199
+ module.exports = function monkeyPatch(nativeBinding) {
200
+ nativeBinding.AudioContext = patchAudioContext(nativeBinding);
201
+ nativeBinding.OfflineAudioContext = patchOfflineAudioContext(nativeBinding);
202
+
203
+ // Promisify MediaDevices API
204
+ enumerateDevicesSync = nativeBinding.mediaDevices.enumerateDevices;
205
+ nativeBinding.mediaDevices.enumerateDevices = async function enumerateDevices() {
206
+ const list = enumerateDevicesSync();
207
+ return list;
208
+ };
209
+
210
+ const getUserMediaSync = nativeBinding.mediaDevices.getUserMedia;
211
+ nativeBinding.mediaDevices.getUserMedia = async function getUserMedia(options) {
212
+ if (options === undefined) {
213
+ throw new TypeError("Failed to execute 'getUserMedia' on 'MediaDevices': audio must be requested")
214
+ }
215
+
216
+ options = handleDefaultOptions(options, 'audioinput');
217
+ const stream = getUserMediaSync(options);
218
+ return Promise.resolve(stream);
219
+ }
220
+
221
+ // utils
222
+ nativeBinding.load = load;
223
+
224
+ return nativeBinding;
225
+ }
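The wrapping above follows one pattern throughout: a synchronous native call is exposed as an async function so callers get the promise-based browser API. A minimal runnable sketch of that pattern, where `promisifySync` and the `enumerateDevicesSync` stub are toy stand-ins rather than the real native binding:

```javascript
// Promisify a synchronous function: inside an async function, a normal
// return becomes a resolved promise and a synchronous throw becomes a
// rejected promise.
function promisifySync(syncFn) {
  return async function(...args) {
    return syncFn(...args);
  };
}

// toy stand-in for the native binding's synchronous device enumeration
const enumerateDevicesSync = () => [
  { kind: 'audiooutput', label: 'jack', deviceId: 'jack-0' },
];

const enumerateDevices = promisifySync(enumerateDevicesSync);

enumerateDevices().then(list => console.log(list.length)); // prints 1
```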
Binary file
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "node-web-audio-api",
3
- "version": "0.6.0",
3
+ "version": "0.8.0",
4
4
  "author": "Benjamin Matuszewski",
5
5
  "description": "Node.js bindings for web-audio-api-rs using napi-rs",
6
6
  "exports": {
@@ -12,6 +12,10 @@
12
12
  "keywords": [
13
13
  "audio",
14
14
  "web audio api",
15
+ "webaudio",
16
+ "sound",
17
+ "music",
18
+ "dsp",
15
19
  "rust",
16
20
  "n-api"
17
21
  ],
@@ -28,20 +32,26 @@
28
32
  "scripts": {
29
33
  "artifacts": "napi artifacts",
30
34
  "build": "npm run generate && napi build --platform --release",
35
+ "build:jack": "npm run generate && napi build --platform --features 'web-audio-api/cpal-jack' --release",
31
36
  "build:debug": "npm run generate && napi build --platform",
32
37
  "check": "cargo fmt && cargo clippy",
33
38
  "generate": "node generator/index.mjs && cargo fmt",
34
39
  "lint": "eslint monkey-patch.js index.cjs index.mjs && eslint examples/*.mjs",
35
40
  "preversion": "yarn install && npm run generate",
36
- "postversion": "cargo bump $npm_package_version && git commit -am \"v$npm_package_version\""
41
+ "postversion": "cargo bump $npm_package_version && git commit -am \"v$npm_package_version\" && node bin/check-changelog.mjs",
42
+ "test": "mocha"
37
43
  },
38
44
  "devDependencies": {
39
45
  "@ircam/eslint-config": "^1.3.0",
46
+ "@ircam/sc-gettime": "^1.0.0",
40
47
  "@sindresorhus/slugify": "^2.1.1",
41
48
  "camelcase": "^7.0.1",
49
+ "chai": "^4.3.7",
42
50
  "chalk": "^5.2.0",
51
+ "cli-table": "^0.3.11",
43
52
  "dotenv": "^16.0.3",
44
53
  "eslint": "^8.32.0",
54
+ "mocha": "^10.2.0",
45
55
  "node-ssh": "^13.0.0",
46
56
  "octokit": "^2.0.11",
47
57
  "ping": "^0.4.2",
package/simple-test.cjs CHANGED
@@ -1,7 +1,6 @@
1
- const { AudioContext } = require('./index.js');
1
+ const { AudioContext, mediaDevices } = require('./index.cjs');
2
2
 
3
3
  const audioContext = new AudioContext();
4
- process.audioContext = audioContext;
5
4
 
6
5
  setInterval(() => {
7
6
  const now = audioContext.currentTime;
package/simple-test.mjs CHANGED
@@ -1,25 +1,20 @@
1
- // import { AudioContext } from './index.mjs';
1
+ import { AudioContext, mediaDevices } from './index.mjs';
2
2
 
3
- // const audioContext = new AudioContext();
3
+ const audioContext = new AudioContext();
4
4
 
5
- // setInterval(() => {
6
- // const now = audioContext.currentTime;
5
+ setInterval(() => {
6
+ const now = audioContext.currentTime;
7
7
 
8
- // const env = audioContext.createGain();
9
- // env.connect(audioContext.destination);
10
- // env.gain.value = 0;
11
- // env.gain.setValueAtTime(0, now);
12
- // env.gain.linearRampToValueAtTime(0.1, now + 0.02);
13
- // env.gain.exponentialRampToValueAtTime(0.0001, now + 1);
8
+ const env = audioContext.createGain();
9
+ env.connect(audioContext.destination);
10
+ env.gain.value = 0;
11
+ env.gain.setValueAtTime(0, now);
12
+ env.gain.linearRampToValueAtTime(0.1, now + 0.02);
13
+ env.gain.exponentialRampToValueAtTime(0.0001, now + 1);
14
14
 
15
- // const osc = audioContext.createOscillator();
16
- // osc.frequency.value = 200 + Math.random() * 2800;
17
- // osc.connect(env);
18
- // osc.start(now);
19
- // osc.stop(now + 1);
20
- // }, 100);
21
-
22
-
23
- import { mediaDevices } from './index.mjs';
24
-
25
- console.log(mediaDevices);
15
+ const osc = audioContext.createOscillator();
16
+ osc.frequency.value = 200 + Math.random() * 2800;
17
+ osc.connect(env);
18
+ osc.start(now);
19
+ osc.stop(now + 1);
20
+ }, 100);
@@ -0,0 +1,60 @@
1
+ import { assert } from 'chai';
2
+ import { AudioBuffer, AudioContext } from '../index.mjs';
3
+
4
+ describe('# AudioBuffer', () => {
5
+ let audioContext;
6
+
7
+ before(() => {
8
+ audioContext = new AudioContext();
9
+ });
10
+
11
+ after(() => {
12
+ audioContext.close();
13
+ });
14
+
15
+ describe(`## audioContext.createBuffer`, () => {
16
+ it('should properly create audio buffer', () => {
17
+ const audioBuffer = audioContext.createBuffer(1, 100, audioContext.sampleRate);
18
+
19
+ assert.equal(audioBuffer.numberOfChannels, 1);
20
+ assert.equal(audioBuffer.length, 100);
21
+ assert.equal(audioBuffer.sampleRate, audioContext.sampleRate);
22
+ });
23
+
24
+ it('should properly fail if missing argument', () => {
25
+ assert.throws(() => {
26
+ const audioBuffer = audioContext.createBuffer(1, 100);
27
+ });
28
+ });
29
+ });
30
+
31
+ describe(`## new AudioBuffer(options)`, () => {
32
+ it('should properly create audio buffer', () => {
33
+ const audioBuffer = new AudioBuffer({
34
+ length: 100,
35
+ sampleRate: audioContext.sampleRate,
36
+ });
37
+
38
+ assert.equal(audioBuffer.numberOfChannels, 1);
39
+ assert.equal(audioBuffer.length, 100);
40
+ assert.equal(audioBuffer.sampleRate, audioContext.sampleRate);
41
+ });
42
+
43
+ it('should properly fail if missing argument', () => {
44
+ assert.throws(() => {
45
+ const audioBuffer = new AudioBuffer({ length: 100 });
46
+ });
47
+ });
48
+
49
+ it(`should throw a TypeError on invalid arguments`, () => {
50
+ let thrown = null;
51
+ try {
52
+ new AudioBuffer(Date, 42);
53
+ } catch (err) {
54
+ thrown = err;
55
+ }
56
+ assert.isNotNull(thrown, 'constructor should have thrown');
57
+ assert.equal(thrown.name, 'TypeError');
58
+ });
59
+ });
60
+ });
@@ -0,0 +1,58 @@
1
+ import { assert } from 'chai';
2
+
3
+ import { mediaDevices } from '../index.mjs';
4
+
5
+ describe('# mediaDevices.getUserMedia(options)', () => {
6
+ it('should fail if no argument given', async () => {
7
+ let failed = false;
8
+ try {
9
+ await mediaDevices.getUserMedia();
10
+ } catch (err) {
11
+ console.log(err.message);
12
+ failed = true;
13
+ }
14
+
15
+ if (!failed) { assert.fail(); }
16
+ });
17
+
18
+ // @todo - clean error message
19
+ it('should fail if argument is not an object', async () => {
20
+ let failed = false;
21
+ try {
22
+ await mediaDevices.getUserMedia(true);
23
+ } catch (err) {
24
+ console.log(err.message);
25
+ failed = true;
26
+ }
27
+
28
+ if (!failed) { assert.fail(); }
29
+ });
30
+
31
+ it('should fail if options.video', async () => {
32
+ let failed = false;
33
+ try {
34
+ await mediaDevices.getUserMedia({ video: true });
35
+ } catch (err) {
36
+ console.log(err.message);
37
+ failed = true;
38
+ }
39
+
40
+ if (!failed) { assert.fail(); }
41
+ });
42
+
43
+ it.only('should not fail if options.audio = true', async () => {
44
+ let failed = false;
45
+
46
+ try {
47
+ const stream = await mediaDevices.getUserMedia({ audio: true });
48
+ // console.log(stream instanceof mediaDevices.MediaStream);
49
+ } catch (err) {
50
+ console.log(err);
51
+ failed = true;
52
+ }
53
+
54
+ console.log(failed);
55
+
56
+ if (failed) { assert.fail('should not have failed'); }
57
+ });
58
+ });