node-web-audio-api 0.21.0 → 0.21.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +7 -1
- package/README.md +39 -20
- package/js/AudioContext.js +9 -13
- package/js/AudioRenderCapacity.js +2 -2
- package/js/AudioScheduledSourceNode.js +7 -6
- package/js/AudioWorklet.js +8 -1
- package/js/AudioWorkletGlobalScope.js +86 -18
- package/js/AudioWorkletNode.js +8 -6
- package/js/OfflineAudioContext.js +5 -13
- package/js/ScriptProcessorNode.js +2 -6
- package/node-web-audio-api.darwin-arm64.node +0 -0
- package/node-web-audio-api.darwin-x64.node +0 -0
- package/node-web-audio-api.linux-arm-gnueabihf.node +0 -0
- package/node-web-audio-api.linux-arm64-gnu.node +0 -0
- package/node-web-audio-api.linux-x64-gnu.node +0 -0
- package/node-web-audio-api.win32-arm64-msvc.node +0 -0
- package/node-web-audio-api.win32-x64-msvc.node +0 -0
- package/package.json +2 -2
package/CHANGELOG.md
CHANGED

@@ -1,6 +1,12 @@
+## v0.21.1 (10/06/2024)
+
+- Feat: Buffer pool for AudioWorkletProcessor
+- Fix: Propagate `addModule` errors to main thread
+- Fix: Memory leak due to `onended` events
+
 ## v0.21.0 (17/05/2024)
 
-- Feat:
+- Feat: Implement AudioWorkletNode
 
 ## v0.20.0 (29/04/2024)
 
package/README.md
CHANGED

@@ -65,7 +65,7 @@ node examples/granular-scrub.mjs
 
 ## Caveats
 
-- `Streams`: only a
+- `Streams`: only a minimal audio input stream and the `MediaStreamSourceNode` are provided. All other `MediaStream` features are left on the side for now as they principally concern a different API specification, which is not a trivial problem.
 
 ## Supported Platforms
 
@@ -79,24 +79,7 @@ node examples/granular-scrub.mjs
 | Linux arm gnueabihf (RPi) | ✓ | ✓ |
 | Linux arm64 gnu (RPi) | ✓ | ✓ |
 
-
-## Notes for Linux users
-
-Using the library on Linux with the ALSA backend might lead to unexpected cranky sound with the default render size (i.e. 128 frames). In such cases, a simple workaround is to pass the `playback` latency hint when creating the audio context, which will increase the render size to 1024 frames:
-
-```js
-const audioContext = new AudioContext({ latencyHint: 'playback' });
-```
-
-For real-time and interactive applications where low latency is crucial, you should instead rely on the JACK backend provided by `cpal`. By default the audio context will use that backend if a running JACK server is found.
-
-If you don't have JACK installed, you can still pass the `WEB_AUDIO_LATENCY=playback` env variable to all examples to create the audio context with the playback latency hint, e.g.:
-
-```sh
-WEB_AUDIO_LATENCY=playback node examples/amplitude-modulation.mjs
-```
-
-### Manual Build
+## Manual Build
 
 If prebuilt binaries are not shipped for your platform, you will need to:
 
@@ -119,15 +102,51 @@ The package will be built on your machine, which might take some time.
 
 Be aware that the package won't be listed in your `package.json` file, and that it won't be re-installed when running `npm install` again. A possible workaround is to include the above in a postinstall script.
 
+## Notes for Linux users
+
+### Build
+
+To build the library, you will need to manually install the `libasound2-dev` package:
+
+```sh
+sudo apt install libasound2-dev
+```
+
+Optionally, if you use the JACK audio backend, the `libjack-jackd2-dev` package:
+
+```sh
+sudo apt install libjack-jackd2-dev
+```
+
+In that case, you can use the `npm run build:jack` script to enable the JACK feature.
+
+### Audio backend and latency
+
+Using the library on Linux with the ALSA backend might lead to unexpected cranky sound with the default render size (i.e. 128 frames). In such cases, a simple workaround is to pass the `playback` latency hint when creating the audio context, which will increase the render size to 1024 frames:
+
+```js
+const audioContext = new AudioContext({ latencyHint: 'playback' });
+```
+
+For real-time and interactive applications where low latency is crucial, you should instead rely on the JACK backend provided by `cpal`. By default, the audio context will use that backend if a running JACK server is found.
+
+If you don't have JACK installed, you can still pass the `WEB_AUDIO_LATENCY=playback` environment variable to all examples to create the audio context with the playback latency hint, e.g.:
+
+```sh
+WEB_AUDIO_LATENCY=playback node examples/amplitude-modulation.mjs
+```
+
 ## Development notes
 
+### Synchronize versioning
+
 The npm `postversion` script relies on [`cargo-bump`](https://crates.io/crates/cargo-bump) to keep the versions in the `package.json` and `Cargo.toml` files in sync. Therefore, you will need to install `cargo-bump` on your machine:
 
 ```
 cargo install cargo-bump
 ```
 
-
+### Running the web-platform-test suite
 
 Follow the steps for 'Manual Build' first. Then checkout the web-platform-tests submodule with:
 
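The README's `WEB_AUDIO_LATENCY` convention can be mirrored in application code. A minimal sketch, assuming a fallback to the default `'interactive'` hint (the helper name and fallback are illustrative, not part of the package):

```javascript
// Sketch: derive the latency hint from the WEB_AUDIO_LATENCY environment
// variable used by the examples; 'interactive' fallback is an assumption.
function resolveLatencyHint(env = process.env) {
  return env.WEB_AUDIO_LATENCY === 'playback' ? 'playback' : 'interactive';
}

// the context would then be created as:
//   const audioContext = new AudioContext({ latencyHint: resolveLatencyHint() });
console.log(resolveLatencyHint({ WEB_AUDIO_LATENCY: 'playback' })); // 'playback'
console.log(resolveLatencyHint({}));                                // 'interactive'
```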
package/js/AudioContext.js
CHANGED

@@ -89,23 +89,15 @@ module.exports = function(jsExport, nativeBinding) {
     });
 
     // Add function to Napi object to bridge from Rust events to JS EventTarget
-    this[kNapiObj][kOnStateChange] = (err, rawEvent) => {
-      if (typeof rawEvent !== 'object' && !('type' in rawEvent)) {
-        throw new TypeError('Invalid [kOnStateChange] Invocation: rawEvent should have a type property');
-      }
-
+    this[kNapiObj][kOnStateChange] = (function(err, rawEvent) {
       const event = new Event(rawEvent.type);
       propagateEvent(this, event);
-    };
-
-    this[kNapiObj][kOnSinkChange] = (err, rawEvent) => {
-      if (typeof rawEvent !== 'object' && !('type' in rawEvent)) {
-        throw new TypeError('Invalid [kOnSinkChange] Invocation: rawEvent should have a type property');
-      }
+    }).bind(this);
 
+    this[kNapiObj][kOnSinkChange] = (function(err, rawEvent) {
       const event = new Event(rawEvent.type);
       propagateEvent(this, event);
-    };
+    }).bind(this);
 
     // Workaround to bind the `sinkchange` and `statechange` events to EventTarget.
     // This must be called from the JS facade ctor as the JS handlers are added to the Napi
@@ -125,7 +117,6 @@ module.exports = function(jsExport, nativeBinding) {
     });
     // keep process awake until context is closed
     const keepAwakeId = setInterval(() => {}, 10 * 1000);
-
     // clear on close
     this.addEventListener('statechange', () => {
      if (this.state === 'closed') {
@@ -134,6 +125,11 @@ module.exports = function(jsExport, nativeBinding) {
         clearTimeout(keepAwakeId);
       }
     });
+
+    // for wpt tests, see ./.scripts/wpt_harness.mjs for more information
+    if (process.WPT_TEST_RUNNER) {
+      process.WPT_TEST_RUNNER.once('cleanup', () => this.close());
+    }
   }
 
   get baseLatency() {
package/js/AudioRenderCapacity.js
CHANGED

@@ -31,10 +31,10 @@ class AudioRenderCapacity extends EventTarget {
 
     this[kNapiObj] = options[kNapiObj];
 
-    this[kNapiObj][kOnUpdate] = (err, rawEvent) => {
+    this[kNapiObj][kOnUpdate] = (function(err, rawEvent) {
       const event = new AudioRenderCapacityEvent('update', rawEvent);
       propagateEvent(this, event);
-    };
+    }).bind(this);
 
     this[kNapiObj].listen_to_events();
   }
package/js/AudioScheduledSourceNode.js
CHANGED

@@ -33,14 +33,15 @@ class AudioScheduledSourceNode extends AudioNode {
 
     // Add function to Napi object to bridge from Rust events to JS EventTarget
     // It will be effectively registered on rust side when `start` is called
-    this[kNapiObj][kOnEnded] = (err, rawEvent) => {
-      if (typeof rawEvent !== 'object' && !('type' in rawEvent)) {
-        throw new TypeError('Invalid [kOnEnded] Invocation: rawEvent should have a type property');
-      }
-
+    //
+    // Note 2024-06-05 - We use bind instead of an arrow function because
+    // arrow functions prevent the node from being collected by the Scavenge
+    // step of the GC, which can lead to oversized graphs and performance issues.
+    // cf. https://github.com/ircam-ismm/node-web-audio-api/tree/fix/118
+    this[kNapiObj][kOnEnded] = (function(_err, rawEvent) {
       const event = new Event(rawEvent.type);
       propagateEvent(this, event);
-    };
+    }).bind(this);
   }
 
   get onended() {
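The bind-versus-arrow change above is behavior-preserving for the caller: both forms give the callback the same `this`. A standalone sketch of the equivalence (class and event names are illustrative, not the package's API):

```javascript
// Both callback styles observe the same `this`; the diff prefers bind
// because, per its comment, the arrow closure kept nodes alive through
// the GC's Scavenge step.
class FakeNode {
  constructor() {
    this.events = [];
    // arrow version: closes over `this` lexically
    this.onendedArrow = (rawEvent) => { this.events.push(rawEvent.type); };
    // bind version: same observable behavior, no lexical closure
    this.onendedBound = (function(rawEvent) {
      this.events.push(rawEvent.type);
    }).bind(this);
  }
}

const node = new FakeNode();
node.onendedArrow({ type: 'ended' });
node.onendedBound({ type: 'ended' });
console.log(node.events); // [ 'ended', 'ended' ]
```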
package/js/AudioWorklet.js
CHANGED

@@ -134,6 +134,14 @@ class AudioWorklet {
         resolve();
         break;
       }
+      case 'node-web-audio-api:worklet:add-module-failed': {
+        const { promiseId, ctor, name, message } = event;
+        const { reject } = this.#idPromiseMap.get(promiseId);
+        this.#idPromiseMap.delete(promiseId);
+        const err = new globalThis[ctor](message, name);
+        reject(err);
+        break;
+      }
       case 'node-web-audio-api:worlet:processor-registered': {
         const { name, parameterDescriptors } = event;
         this.#workletParamDescriptorsMap.set(name, parameterDescriptors);

@@ -188,7 +196,6 @@ class AudioWorklet {
   // For OfflineAudioContext only, check that all processors have been properly
   // created before actual `startRendering`
   async [kCheckProcessorsCreated]() {
-    // console.log(this.#pendingCreateProcessors);
     return new Promise(async resolve => {
       while (this.#pendingCreateProcessors.size !== 0) {
         // we need a microtask to ensure message can be received
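The new `add-module-failed` branch rehydrates an error from its serialized parts (`ctor`, `name`, `message`), since error objects cannot cross the thread boundary directly. A standalone sketch of that round trip, assuming only built-in error constructors (helper names are illustrative):

```javascript
// Serialize an error into a structured-clone-friendly payload,
// mirroring what the worklet thread posts on failure.
function serializeError(err) {
  return { ctor: err.constructor.name, name: err.name, message: err.message };
}

// Rebuild the error on the receiving side; falling back to Error when the
// constructor is not a global is a defensive assumption, not package code.
function rehydrateError({ ctor, name, message }) {
  const Ctor = globalThis[ctor] || Error;
  return new Ctor(message, name);
}

// simulate a processor module that throws during registration
let payload;
try {
  throw new TypeError('registerProcessor: name is empty');
} catch (err) {
  payload = serializeError(err);
}

const rebuilt = rehydrateError(payload);
console.log(rebuilt instanceof TypeError); // true
console.log(rebuilt.message);              // 'registerProcessor: name is empty'
```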
package/js/AudioWorkletGlobalScope.js
CHANGED

@@ -1,10 +1,12 @@
 const {
   parentPort,
   workerData,
+  markAsUntransferable,
 } = require('node:worker_threads');
 
 const conversions = require('webidl-conversions');
 
+// these are defined on the rust side
 const {
   exit_audio_worklet_global_scope,
   run_audio_worklet_global_scope,
@@ -21,14 +23,72 @@ const kWorkletInputs = Symbol.for('node-web-audio-api:worklet-inputs');
 const kWorkletOutputs = Symbol.for('node-web-audio-api:worklet-outputs');
 const kWorkletParams = Symbol.for('node-web-audio-api:worklet-params');
 const kWorkletParamsCache = Symbol.for('node-web-audio-api:worklet-params-cache');
+const kWorkletGetBuffer = Symbol.for('node-web-audio-api:worklet-get-buffer');
+const kWorkletRecycleBuffer = Symbol.for('node-web-audio-api:worklet-recycle-buffer');
+const kWorkletRecycleBuffer1 = Symbol.for('node-web-audio-api:worklet-recycle-buffer-1');
+const kWorkletMarkAsUntransferable = Symbol.for('node-web-audio-api:worklet-mark-as-untransferable');
 // const kWorkletOrderedParamNames = Symbol.for('node-web-audio-api:worklet-ordered-param-names');
 
 const nameProcessorCtorMap = new Map();
 const processors = {};
 let pendingProcessorConstructionData = null;
 let loopStarted = false;
 let runLoopImmediateId = null;
+
+class BufferPool {
+  #bufferSize;
+  #pool;
+
+  constructor(bufferSize, initialPoolSize) {
+    this.#bufferSize = bufferSize;
+    this.#pool = new Array(initialPoolSize);
+
+    for (let i = 0; i < this.#pool.length; i++) {
+      this.#pool[i] = this.#allocate();
+    }
+  }
+
+  #allocate() {
+    const float32 = new Float32Array(this.#bufferSize);
+    markAsUntransferable(float32);
+    // Mark the underlying buffer as untransferable too; this will fail one of
+    // the tasks in `audioworkletprocessor-process-frozen-array.https.html`
+    // but prevents segmentation faults
+    markAsUntransferable(float32.buffer);
+
+    return float32;
+  }
+
+  get() {
+    if (this.#pool.length === 0) {
+      return this.#allocate();
+    }
+
+    return this.#pool.pop();
+  }
+
+  recycle(buffer) {
+    // make sure we cannot pollute our pool
+    if (buffer.length === this.#bufferSize) {
+      this.#pool.push(buffer);
+    }
+  }
+}
+
+const renderQuantumSize = 128;
+
+const pool128 = new BufferPool(renderQuantumSize, 256);
+const pool1 = new BufferPool(1, 64);
+// allow rust to access some methods required when the io layout changes
+globalThis[kWorkletGetBuffer] = () => pool128.get();
+globalThis[kWorkletRecycleBuffer] = buffer => pool128.recycle(buffer);
+globalThis[kWorkletRecycleBuffer1] = buffer => pool1.recycle(buffer);
+globalThis[kWorkletMarkAsUntransferable] = obj => {
+  markAsUntransferable(obj);
+  return obj;
+};
 
 function isIterable(obj) {
   // checks for null and undefined
   if (obj === null || obj === undefined) {
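The `BufferPool` added above is small enough to exercise on its own. A trimmed sketch of its get/recycle contract, with the `markAsUntransferable` calls dropped so it runs outside a worker thread (pool sizes are illustrative):

```javascript
// Minimal pool: hands out fixed-size Float32Arrays, grows on demand,
// and only takes back buffers of the expected size.
class BufferPool {
  #bufferSize;
  #pool;

  constructor(bufferSize, initialPoolSize) {
    this.#bufferSize = bufferSize;
    this.#pool = Array.from({ length: initialPoolSize },
      () => new Float32Array(bufferSize));
  }

  get() {
    // allocate a fresh buffer when the pool is drained
    return this.#pool.pop() ?? new Float32Array(this.#bufferSize);
  }

  recycle(buffer) {
    if (buffer.length === this.#bufferSize) {
      this.#pool.push(buffer);
    }
  }
}

const pool = new BufferPool(128, 2);
const a = pool.get();
pool.recycle(a);                      // accepted: matches the pool's buffer size
pool.recycle(new Float32Array(64));   // silently ignored: wrong size
const again = pool.get();

console.log(a.length);    // 128
console.log(again === a); // true: the recycled buffer is handed out again
```

Recycling instead of reallocating avoids per-render-quantum garbage, which is the point of the 0.21.1 "buffer pool" changelog entry.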
@@ -54,12 +114,11 @@ function runLoop() {
   runLoopImmediateId = setImmediate(runLoop);
 }
 
-// s
 globalThis.currentTime = 0;
 globalThis.currentFrame = 0;
 globalThis.sampleRate = sampleRate;
 // @todo - implement in upstream crate
-
+globalThis.renderQuantumSize = renderQuantumSize;
 
 globalThis.AudioWorkletProcessor = class AudioWorkletProcessor {
   static get parameterDescriptors() {
@@ -76,13 +135,14 @@ globalThis.AudioWorkletProcessor = class AudioWorkletProcessor {
       parameterDescriptors,
     } = pendingProcessorConstructionData;
 
-    //
+    // Mark [[callable process]] as true; set to false in the render quantum
     // if "process" either does not exist or throws an error
     this[kWorkletCallableProcess] = true;
 
+    // Populate with dummy values which will be replaced in the first render call
     this[kWorkletInputs] = new Array(numberOfInputs).fill([]);
-    // @todo - use `outputChannelCount`
     this[kWorkletOutputs] = new Array(numberOfOutputs).fill([]);
+
     // Object to be reused as `process` parameters argument
     this[kWorkletParams] = {};
     // Cache of 2 Float32Array (of length 128 and 1) for each param, to be reused on
@@ -91,8 +151,8 @@ globalThis.AudioWorkletProcessor = class AudioWorkletProcessor {
 
     parameterDescriptors.forEach(desc => {
       this[kWorkletParamsCache][desc.name] = [
-        new Float32Array(128),
-        new Float32Array(1),
+        pool128.get(), // should be globalThis.renderQuantumSize
+        pool1.get(),
       ]
     });
 
@@ -234,13 +294,11 @@ globalThis.registerProcessor = function registerProcessor(name, processorCtor) {
 // NOTE: Authors that register an event listener on the "message" event of this
 // port should call close on either end of the MessageChannel (either in the
 // AudioWorklet or the AudioWorkletGlobalScope side) to allow for resources to be collected.
-parentPort.on('exit', () => {
-
-});
+// parentPort.on('exit', () => {
+//   process.stdout.write('closing worklet');
+// });
 
 parentPort.on('message', event => {
-  console.log(event.cmd + '\n');
-
   switch (event.cmd) {
     case 'node-web-audio-api:worklet:init': {
       const { workletId, processors, promiseId } = event;
@@ -257,13 +315,23 @@ parentPort.on('message', event => {
     case 'node-web-audio-api:worklet:add-module': {
      const { code, promiseId } = event;
      const func = new Function('AudioWorkletProcessor', 'registerProcessor', code);
-      func(AudioWorkletProcessor, registerProcessor);
 
-
-
-
-
+      try {
+        func(AudioWorkletProcessor, registerProcessor);
+        // send registered param descriptors to the main thread and resolve the Promise
+        parentPort.postMessage({
+          cmd: 'node-web-audio-api:worklet:module-added',
+          promiseId,
+        });
+      } catch (err) {
+        parentPort.postMessage({
+          cmd: 'node-web-audio-api:worklet:add-module-failed',
+          promiseId,
+          ctor: err.constructor.name,
+          name: err.name,
+          message: err.message,
+        });
+      }
       break;
     }
     case 'node-web-audio-api:worklet:create-processor': {
package/js/AudioWorkletNode.js
CHANGED

@@ -98,14 +98,16 @@ module.exports = (jsExport, nativeBinding) => {
       throw new DOMException(`Failed to construct 'AudioWorkletNode': Invalid 'outputChannelCount' property from AudioWorkletNodeOptions: 'outputChannelCount' length (${parsedOptions.outputChannelCount.length}) does not equal 'numberOfOutputs' (${parsedOptions.numberOfOutputs})`, 'IndexSizeError');
     }
   } else {
-    // If outputChannelCount does not exists,
     // - If both numberOfInputs and numberOfOutputs are 1, set the initial channel count of the node output to 1 and return.
     // NOTE: For this case, the output channel count will change to computedNumberOfChannels dynamically based on the input and the channelCountMode at runtime.
-
-
-
-
-
+    if (parsedOptions.numberOfInputs === 1 && parsedOptions.numberOfOutputs === 1) {
+      // rust waits for an empty Vec as the special case value
+      parsedOptions.outputChannelCount = new Uint32Array(0);
+    } else {
+      // - Otherwise set the channel count of each output of the node to 1 and return.
+      parsedOptions.outputChannelCount = new Uint32Array(parsedOptions.numberOfOutputs);
+      parsedOptions.outputChannelCount.fill(1);
+    }
   }
 
   // @todo
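The default `outputChannelCount` logic added above is easy to isolate. A sketch, with the function name invented for illustration (the empty-array convention signaling the dynamic single-input/single-output case to the rust side is taken from the diff's comments):

```javascript
// Default outputChannelCount when the option is absent:
// - 1 input and 1 output: empty array (rust treats an empty Vec as the
//   special case; the output adapts to the input at runtime)
// - otherwise: every output defaults to 1 channel
function defaultOutputChannelCount(numberOfInputs, numberOfOutputs) {
  if (numberOfInputs === 1 && numberOfOutputs === 1) {
    return new Uint32Array(0);
  }
  return new Uint32Array(numberOfOutputs).fill(1);
}

console.log(defaultOutputChannelCount(1, 1).length);      // 0
console.log(Array.from(defaultOutputChannelCount(2, 3))); // [ 1, 1, 1 ]
```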
package/js/OfflineAudioContext.js
CHANGED

@@ -82,22 +82,14 @@ module.exports = function patchOfflineAudioContext(jsExport, nativeBinding) {
 
     // Add function to Napi object to bridge from Rust events to JS EventTarget
     // They will be effectively registered on rust side when `startRendering` is called
-    this[kNapiObj][kOnStateChange] = (err, rawEvent) => {
-      if (typeof rawEvent !== 'object' && !('type' in rawEvent)) {
-        throw new TypeError('Invalid [kOnStateChange] Invocation: rawEvent should have a type property');
-      }
-
+    this[kNapiObj][kOnStateChange] = (function(err, rawEvent) {
       const event = new Event(rawEvent.type);
       propagateEvent(this, event);
-    };
+    }).bind(this);
 
     // This event is, per spec, the last triggered one
-    this[kNapiObj][kOnComplete] = (err, rawEvent) => {
-      if (typeof rawEvent !== 'object' && !('type' in rawEvent)) {
-        throw new TypeError('Invalid [kOnComplete] Invocation: rawEvent should have a type property');
-      }
-
-    // @fixme: workaround the fact that this event seems to be triggered before
+    this[kNapiObj][kOnComplete] = (function(err, rawEvent) {
+      // workaround the fact that this event seems to be triggered before
       // startRendering fulfills and that we want to return the exact same instance
       if (this.#renderedBuffer === null) {
         this.#renderedBuffer = new jsExport.AudioBuffer({ [kNapiObj]: rawEvent.renderedBuffer });

@@ -108,7 +100,7 @@ module.exports = function patchOfflineAudioContext(jsExport, nativeBinding) {
       });
 
       propagateEvent(this, event);
-    };
+    }).bind(this);
   }
 
   get length() {
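The `#renderedBuffer` null-check above caches the first wrapped buffer so the `complete` event and the `startRendering()` promise hand out the exact same instance, whichever fires first. A standalone sketch of that pattern (class and method names are simplified stand-ins, not the package's API):

```javascript
// Whichever path observes the raw buffer first wraps and caches it;
// the other path returns the cached wrapper.
class OfflineContextSketch {
  #renderedBuffer = null;

  #wrap(rawBuffer) {
    if (this.#renderedBuffer === null) {
      // stands in for `new jsExport.AudioBuffer({ [kNapiObj]: rawBuffer })`
      this.#renderedBuffer = { raw: rawBuffer };
    }
    return this.#renderedBuffer;
  }

  onComplete(rawBuffer) { return this.#wrap(rawBuffer); }
  onStartRenderingResolved(rawBuffer) { return this.#wrap(rawBuffer); }
}

const ctx = new OfflineContextSketch();
const fromEvent = ctx.onComplete(Symbol('raw'));
const fromPromise = ctx.onStartRenderingResolved(Symbol('raw'));
console.log(fromEvent === fromPromise); // true: a single shared instance
```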
package/js/ScriptProcessorNode.js
CHANGED

@@ -107,11 +107,7 @@ module.exports = (jsExport, nativeBinding) => {
       [kNapiObj]: napiObj,
     });
 
-    this[kNapiObj][kOnAudioProcess] = (err, rawEvent) => {
-      if (typeof rawEvent !== 'object' && !('type' in rawEvent)) {
-        throw new TypeError('Invalid [kOnStateChange] Invocation: rawEvent should have a type property');
-      }
-
+    this[kNapiObj][kOnAudioProcess] = (function(err, rawEvent) {
       const audioProcessingEventInit = {
         playbackTime: rawEvent.playbackTime,
         inputBuffer: new jsExport.AudioBuffer({ [kNapiObj]: rawEvent.inputBuffer }),

@@ -120,7 +116,7 @@ module.exports = (jsExport, nativeBinding) => {
 
       const event = new jsExport.AudioProcessingEvent('audioprocess', audioProcessingEventInit);
       propagateEvent(this, event);
-    };
+    }).bind(this);
 
     this[kNapiObj].listen_to_events();
   }
Binary files: the seven prebuilt `.node` binaries listed above changed; no textual diff is shown.
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "node-web-audio-api",
-  "version": "0.21.0",
+  "version": "0.21.1",
   "author": "Benjamin Matuszewski",
   "description": "Node.js bindings for web-audio-api-rs using napi-rs",
   "exports": {

@@ -51,6 +51,7 @@
   "devDependencies": {
     "@ircam/eslint-config": "^1.3.0",
     "@ircam/sc-gettime": "^1.0.0",
+    "@ircam/sc-scheduling": "^0.1.7",
     "@ircam/sc-utils": "^1.3.3",
     "@sindresorhus/slugify": "^2.1.1",
     "camelcase": "^7.0.1",

@@ -66,7 +67,6 @@
     "octokit": "^2.0.11",
     "ping": "^0.4.2",
     "template-literal": "^1.0.4",
-    "waves-masters": "^2.3.1",
     "webidl2": "^24.2.0",
     "wpt-runner": "^5.0.0"
   },