@newgameplusinc/odyssey-audio-video-sdk-dev 1.0.11 → 1.0.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,106 +1,143 @@
- # Odyssey Spatial Audio SDK
-
- A comprehensive SDK for real-time spatial audio and video communication using MediaSoup, designed for immersive multi-user experiences in the Odyssey platform.
-
- ## Purpose
-
- This package provides a complete WebRTC-based spatial audio and video solution that:
- - Manages MediaSoup connections for audio/video streaming
- - Implements spatial audio using Web Audio API with HRTF
- - Handles participant management with user profile data (bodyHeight, bodyShape, email, etc.)
- - Provides real-time position tracking for immersive spatial experiences
-
- ## Installation
-
- You can install this package from npm:
-
- ```bash
- npm install @newgameplusinc/odyssey-spatial-sdk-wrapper
- ```
+ # Odyssey Audio/Video SDK (MediaSoup + Web Audio)
+
+ This package exposes `OdysseySpatialComms`, a thin TypeScript client that glues together:
+
+ - **MediaSoup SFU** for ultra-low-latency audio/video routing
+ - **Web Audio API** for Apple-like spatial mixing via `SpatialAudioManager`
+ - **Socket telemetry** (position + direction) so every browser hears/sees everyone exactly where they are in the 3D world
+
+ It mirrors the production SDK used by Odyssey V2 and ships ready to drop into any web UI (Vue, React, plain JS).
+
+ ## Feature Highlights
+ - 🔌 **One class to rule them all** – `OdysseySpatialComms` wires transports, producers, consumers, and room state.
+ - 🧭 **Accurate pose propagation** – `updatePosition()` streams the listener pose to the SFU while `participant-position-updated` keeps the local store in sync.
+ - 🎧 **Studio-grade spatial audio** – each remote participant gets a dedicated Web Audio graph: denoiser → high-pass → low-pass → HRTF `PannerNode` → adaptive gain → master compressor.
+ - 🎥 **Camera-ready streams** – video tracks are exposed separately so UI layers can render muted `<video>` tags while audio stays inside Web Audio.
+ - 🎙️ **Clean microphone uplink** – the optional `enhanceOutgoingAudioTrack` helper runs mic input through a denoiser, EQ, and compressor before it hits the SFU.
+ - 🔁 **EventEmitter contract** – subscribe to `room-joined`, `consumer-created`, `participant-position-updated`, etc., without touching Socket.IO directly.
+ ## Quick Start
+
+ ```ts
+ import {
+   OdysseySpatialComms,
+   Direction,
+   Position,
+ } from "@newgameplusinc/odyssey-audio-video-sdk-dev";
+
+ const sdk = new OdysseySpatialComms("https://mediasoup-server.example.com");
+
+ // 1) Join a room
+ await sdk.joinRoom({
+   roomId: "demo-room",
+   userId: "user-123",
+   deviceId: "device-123",
+   position: { x: 0, y: 0, z: 0 },
+   direction: { x: 0, y: 1, z: 0 },
+ });
 
- Or install locally:
+ // 2) Produce local media
+ const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
+ for (const track of stream.getTracks()) {
+   await sdk.produceTrack(track);
+ }
+
+ // 3) Handle remote tracks
+ sdk.on("consumer-created", async ({ participant, track }) => {
+   if (track.kind === "video") {
+     attachVideo(track, participant.participantId);
+   }
+ });
 
- ```bash
- npm install ../mediasoup-sdk-test
+ // 4) Keep spatial audio honest
+ sdk.updatePosition(currentPos, currentDir);
+ sdk.setListenerFromLSD(listenerPos, cameraPos, lookAtPos);
  ```
 
- ## Usage
-
- ### 1. Initialize the SDK
+ ## Audio Flow (Server ↔ Browser)
 
- ```typescript
- import { OdysseySpatialComms } from "@newgameplusinc/odyssey-spatial-sdk-wrapper";
-
- // Initialize with your MediaSoup server URL
- const sdk = new OdysseySpatialComms("https://your-mediasoup-server.com");
  ```
-
- ### 2. Join a Room with User Profile Data
-
- ```typescript
- const participant = await sdk.joinRoom({
-   roomId: "my-room",
-   userId: "user-123",
-   deviceId: "device-456",
-   position: { x: 0, y: 0, z: 0 },
-   direction: { x: 0, y: 0, z: 1 },
-   bodyHeight: "0.5", // User's avatar height from Firebase
-   bodyShape: "4", // User's avatar body shape from Firebase
-   userName: "John Doe", // User's display name
-   userEmail: "john@example.com" // User's email
- });
+ ┌──────────────┐  update-position   ┌──────────────┐  pose + tracks  ┌──────────────────┐
+ │ Browser LSD  │ ─────────────────▶ │ MediaSoup SFU│ ──────────────▶ │ SDK Event Bus    │
+ │ (Unreal data)│                    │ + Socket.IO  │                 │ (EventManager)   │
+ └──────┬───────┘                    └──────┬───────┘                 └─────────┬────────┘
+        │                                   │                                   │ track + pose
+        │                                   │                                   ▼
+        │                          ┌────────▼────────┐                ┌──────────────────┐
+        │ audio RTP                │ consumer-created│                │ SpatialAudioMgr  │
+        └─────────────────────────▶│ setup per-user  │◀───────────────│ (Web Audio API)  │
+                                   └────────┬────────┘                │ - Denoiser       │
+                                            │                         │ - HP / LP        │
+                                            │                         │ - HRTF Panner    │
+                                            ▼                         │ - Gain + Comp    │
+                                     Web Audio Graph                  └─────────┬────────┘
+                                            │                                   │
+                                            ▼                                   ▼
+                               Listener ears (Left/Right)                System Output
  ```
 
- ### 3. Produce Audio/Video Tracks
+ ### Web Audio Algorithms
+ - **Coordinate normalization** – Unreal sends centimeters; `SpatialAudioManager` auto-detects large values and converts them to meters once.
+ - **Orientation math** – `setListenerFromLSD()` builds forward/right/up vectors from the camera/LookAt pair to keep the listener aligned with head movement.
+ - **Dynamic distance gain** – `updateSpatialAudio()` measures the listener-to-source distance and applies a smooth rolloff curve so distant avatars fade to silence (see the sketch below).
+ - **Noise handling** – an optional AudioWorklet denoiser plus high-/low-pass filters trim rumble and hiss before HRTF processing.
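+
+ As a rough illustration, the distance rolloff can be thought of as the sketch below – a minimal example, not the SDK's exact curve (the real math lives in the private `calculateDistanceGain`; `refDistance`/`maxDistance` are assumed names here):
+
+ ```ts
+ interface Vec3 { x: number; y: number; z: number; }
+
+ // Hypothetical rolloff: full volume inside refDistance, silence beyond maxDistance.
+ function distanceGain(listener: Vec3, source: Vec3, refDistance = 1, maxDistance = 40): number {
+   const dx = source.x - listener.x;
+   const dy = source.y - listener.y;
+   const dz = source.z - listener.z;
+   const d = Math.sqrt(dx * dx + dy * dy + dz * dz);
+   if (d <= refDistance) return 1;
+   if (d >= maxDistance) return 0;
+   const t = (d - refDistance) / (maxDistance - refDistance);
+   return 0.5 * (1 + Math.cos(Math.PI * t)); // smooth cosine fade between the two radii
+ }
+ ```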
 
- ```typescript
- // Get user media
- const stream = await navigator.mediaDevices.getUserMedia({
-   audio: true,
-   video: true
- });
-
- // Produce audio track
- const audioTrack = stream.getAudioTracks()[0];
- await sdk.produceTrack(audioTrack);
+ #### How Spatial Audio Is Built
+ 1. **Telemetry ingestion** – each LSD packet is passed through `setListenerFromLSD(listenerPos, cameraPos, lookAtPos)` so the Web Audio listener matches the player's real head/camera pose.
+ 2. **Per-participant node graph** – when `consumer-created` yields a remote audio track, `setupSpatialAudioForParticipant()` spins up an isolated graph (see the sketch after this list):
+    `MediaStreamSource → (optional) Denoiser Worklet → High-Pass → Low-Pass → Panner(HRTF) → Gain → Master Compressor`.
+ 3. **Position + direction updates** – every `participant-position-updated` event calls `updateSpatialAudio(participantId, position, direction)`. The position feeds the panner's XYZ, while the direction vector sets the source orientation so voices project forward relative to avatar facing.
+ 4. **Distance-aware gain** – the manager stores the latest listener pose and computes the Euclidean distance to each remote participant on every update. A custom rolloff curve adjusts gain before the compressor, giving the "someone on my left / far away" perception without blowing out master levels.
+ 5. **Left/right rendering** – because the panner uses `panningModel = "HRTF"`, the browser renders the processed signal with head-related transfer functions, producing natural interaural time and intensity differences.
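+
+ A condensed sketch of that per-participant graph (simplified from `setupSpatialAudioForParticipant()` – the denoiser worklet, analyser, and noise gate are omitted; the filter constants mirror the shipped chain):
+
+ ```ts
+ function buildParticipantGraph(ctx: AudioContext, track: MediaStreamTrack, masterGain: GainNode) {
+   const source = ctx.createMediaStreamSource(new MediaStream([track]));
+   const highpass = ctx.createBiquadFilter();
+   highpass.type = "highpass";
+   const lowpass = ctx.createBiquadFilter();
+   lowpass.type = "lowpass";
+   lowpass.frequency.value = 7500; // below 8 kHz, as in the shipped chain
+   const panner = ctx.createPanner();
+   panner.panningModel = "HRTF"; // interaural time/intensity cues come from here
+   const gain = ctx.createGain(); // driven by the distance rolloff
+   source.connect(highpass);
+   highpass.connect(lowpass);
+   lowpass.connect(panner);
+   panner.connect(gain);
+   gain.connect(masterGain); // masterGain feeds the master compressor → destination
+   return { panner, gain };
+ }
+ ```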
 
- // Produce video track
- const videoTrack = stream.getVideoTracks()[0];
- await sdk.produceTrack(videoTrack);
- ```
+ #### How Microphone Audio Is Tuned Before Sending
+ 1. **Hardware constraints first** – the SDK requests `noiseSuppression`, `echoCancellation`, and `autoGainControl` on the raw `MediaStreamTrack` (plus Chromium-specific `goog*` flags).
+ 2. **Web Audio pre-flight** – `enhanceOutgoingAudioTrack(track)` clones the mic into a dedicated `AudioContext` and chains: `Denoiser → 50/60 Hz notches → Low-shelf rumble cut → High-pass (95 Hz) → Low-pass (7.2 kHz) → High-shelf tame → Presence boost → Dynamics compressor → Adaptive gate`.
+ 3. **Adaptive gate** – a lightweight RMS monitor clamps the gate gain when only background hiss remains, but opens instantly when speech energy rises.
+ 4. **Clean stream to SFU** – the processed track is what you pass to `produceTrack`, so every participant receives the filtered audio, and your local store can reuse the same track for mute toggles (see the sketch after this list).
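+
+ A minimal uplink sketch using the Quick Start `sdk` instance (error handling omitted):
+
+ ```ts
+ // Capture the raw microphone, run it through the outbound chain, then produce it.
+ const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
+ const rawTrack = micStream.getAudioTracks()[0];
+
+ // Returns a new MediaStreamTrack backed by the processed Web Audio graph.
+ const cleanTrack = await sdk.enhanceOutgoingAudioTrack(rawTrack);
+ await sdk.produceTrack(cleanTrack);
+
+ // Reuse the same track for mute toggles – disabling it silences the uplink.
+ cleanTrack.enabled = false; // mute
+ cleanTrack.enabled = true;  // unmute
+ ```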
 
- ### 4. Update Position for Spatial Audio
+ ## Video Flow (Capture ↔ Rendering)
 
- ```typescript
- sdk.updatePosition(
-   { x: 10, y: 0, z: 5 }, // New position
-   { x: 0, y: 0, z: 1 } // New direction
- );
  ```
-
- ### 5. Listen to Events
-
- ```typescript
- // New participant joined
- sdk.on("new-participant", (participant) => {
-   console.log("New participant:", participant.userName, participant.bodyHeight);
- });
-
- // Participant left
- sdk.on("participant-left", (participantId) => {
-   console.log("Participant left:", participantId);
- });
-
- // Consumer created (receiving audio/video from remote participant)
- sdk.on("consumer-created", ({ participant, track, consumer }) => {
-   console.log("Receiving", track.kind, "from", participant.userName);
- });
+ ┌──────────────┐  produceTrack   ┌──────────────┐   RTP   ┌──────────────┐
+ │ getUserMedia │ ──────────────▶ │ MediaSoup SDK│ ──────▶ │ MediaSoup SFU│
+ └──────┬───────┘                 │  (Odyssey)   │         └──────┬───────┘
+        │                         └──────┬───────┘                │
+        │ consumer-created               │ track                  │
+        ▼                                ▼                        │
+ ┌──────────────┐                 ┌───────────────┐               │
+ │ Vue/React UI │ ◀────────────── │ SDK Event Bus │ ◀─────────────┘
+ │ (muted video │                 │ exposes media │
+ │  elements)   │                 │    tracks     │
+ └──────────────┘                 └───────────────┘
  ```
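+
+ The Quick Start calls an `attachVideo` helper. The SDK does not ship one; a minimal hand-rolled version might look like this (the element IDs and `document.body` container are assumptions):
+
+ ```ts
+ // Render a remote video track muted so audio playback stays inside
+ // SpatialAudioManager's Web Audio graph.
+ function attachVideo(track: MediaStreamTrack, participantId: string): void {
+   let el = document.getElementById(`video-${participantId}`) as HTMLVideoElement | null;
+   if (!el) {
+     el = document.createElement("video");
+     el.id = `video-${participantId}`;
+     el.autoplay = true;
+     el.playsInline = true;
+     el.muted = true; // never play remote audio through the DOM
+     document.body.appendChild(el);
+   }
+   el.srcObject = new MediaStream([track]);
+ }
+ ```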
 
- ## Build
-
- To build the package, run:
-
- ```bash
- npm run build
- ```
+ ## Core Classes
+ - `src/index.ts` – `OdysseySpatialComms` (socket lifecycle, producers/consumers, event surface).
+ - `src/MediasoupManager.ts` – transport helpers for produce/consume/resume.
+ - `src/SpatialAudioManager.ts` – Web Audio orchestration (listener transforms, per-participant chains, denoiser, distance math).
+ - `src/EventManager.ts` – lightweight EventEmitter used by the entire SDK.
+
+ ## Integration Checklist
+ 1. **Instantiate once** per page/tab and keep the instance in a store (Vuex, Redux, Zustand, etc.).
+ 2. **Pipe LSD/Lap data** from your rendering engine into `updatePosition()` + `setListenerFromLSD()` at ~10 Hz (see the sketch after this list).
+ 3. **Render videos muted** – never attach remote audio tracks straight to the DOM; let `SpatialAudioManager` own playback.
+ 4. **Push avatar telemetry back to Unreal** so `remoteSpatialData` can render minimaps/circles (see Odyssey V2 `sendMediaSoupParticipantsToUnreal`).
+ 5. **Monitor logs** – the browser console shows `🎧 SDK`, `📍 SDK`, and `🎚️ [Spatial Audio]` statements for every critical hop.
+
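+ A simple way to hit the ~10 Hz cadence from a per-frame engine callback (a sketch – `getPose()` stands in for however your engine exposes camera data):
+
+ ```ts
+ // Forward engine poses to the SDK at ~10 Hz, even if the engine ticks at 60+ fps.
+ let lastSent = 0;
+ function onEngineTick(now: number): void {
+   if (now - lastSent < 100) return; // 100 ms ≈ 10 Hz
+   lastSent = now;
+   const { position, direction, cameraPos, lookAtPos } = getPose(); // assumed engine accessor
+   sdk.updatePosition(position, direction);
+   sdk.setListenerFromLSD(position, cameraPos, lookAtPos);
+ }
+ ```
+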
+ ## Server Contract (Socket.IO events)
+ | Event | Direction | Payload |
+ |-------|-----------|---------|
+ | `join-room` | client → server | `{roomId, userId, deviceId, position, direction}` |
+ | `room-joined` | server → client | `RoomJoinedData` (router caps, participants snapshot) |
+ | `update-position` | client → server | `{participantId, conferenceId, position, direction}` |
+ | `participant-position-updated` | server → client | `{participantId, position, direction, mediaState}` |
+ | `consumer-created` | server → client | `{participantId, track(kind), position, direction}` |
+ | `participant-media-state-updated` | server → client | `{participantId, mediaState}` |
+
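+ Because the SDK re-emits these as EventEmitter events, application code rarely touches Socket.IO itself. A sketch (payload field names taken from the table above; `store` is a stand-in for your app state):
+
+ ```ts
+ // Mirror server-driven pose and media-state changes into local state.
+ sdk.on("participant-position-updated", ({ participantId, position, direction }) => {
+   store.updateParticipantPose(participantId, position, direction);
+ });
+
+ sdk.on("participant-media-state-updated", ({ participantId, mediaState }) => {
+   store.setMediaState(participantId, mediaState);
+ });
+ ```
+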
+ ## Development Tips
+ - Run `pnpm install && pnpm build` inside `mediasoup-sdk-test` to produce a fresh build.
+ - Use `pnpm watch` while iterating so the TypeScript output under `dist/` stays current.
+ - The SDK targets evergreen browsers; for Safari <16.4 you may need to polyfill AudioWorklet or disable the denoiser via `new SpatialAudioManager({ denoiser: { enabled: false } })`.
+
+ Have questions or want to extend the SDK? Start with `SpatialAudioManager` – that's where most of the "real-world" behavior (distance feel, stereo cues, denoiser) lives.
package/dist/SpatialAudioManager.d.ts CHANGED
@@ -23,11 +23,13 @@ export declare class SpatialAudioManager extends EventManager {
  private monitoringIntervals;
  private compressor;
  private options;
- private denoiseWorkletReady;
  private denoiseWorkletUrl?;
  private denoiserWasmBytes?;
+ private denoiseContextPromises;
  private listenerPosition;
  private listenerInitialized;
+ private stabilityState;
+ private outgoingProcessors;
  private listenerDirection;
  constructor(options?: SpatialAudioOptions);
  getAudioContext(): AudioContext;
@@ -47,7 +49,9 @@ export declare class SpatialAudioManager extends EventManager {
   * @param bypassSpatialization For testing - bypasses 3D positioning
   */
  setupSpatialAudioForParticipant(participantId: string, track: MediaStreamTrack, bypassSpatialization?: boolean): Promise<void>;
+ enhanceOutgoingAudioTrack(track: MediaStreamTrack): Promise<MediaStreamTrack>;
  private startMonitoring;
+ private handleTrackStability;
  /**
   * Update spatial audio position and orientation for a participant
   *
@@ -97,6 +101,9 @@ export declare class SpatialAudioManager extends EventManager {
  private calculateDistanceGain;
  private normalizePositionUnits;
  private isDenoiserEnabled;
+ private applyHardwareNoiseConstraints;
+ private startOutboundMonitor;
+ private cleanupOutboundProcessor;
  private ensureDenoiseWorklet;
  private resolveOptions;
  }
package/dist/SpatialAudioManager.js CHANGED
@@ -7,9 +7,11 @@ class SpatialAudioManager extends EventManager_1.EventManager {
  super();
  this.participantNodes = new Map();
  this.monitoringIntervals = new Map();
- this.denoiseWorkletReady = null;
+ this.denoiseContextPromises = new WeakMap();
  this.listenerPosition = { x: 0, y: 0, z: 0 };
  this.listenerInitialized = false;
+ this.stabilityState = new Map();
+ this.outgoingProcessors = new Map();
  this.listenerDirection = {
    forward: { x: 0, y: 1, z: 0 },
    up: { x: 0, y: 0, z: 1 },
@@ -69,6 +71,7 @@ class SpatialAudioManager extends EventManager_1.EventManager {
  const panner = this.audioContext.createPanner();
  const analyser = this.audioContext.createAnalyser();
  const gain = this.audioContext.createGain();
+ const noiseGate = this.audioContext.createGain();
  let denoiseNode;
  if (this.isDenoiserEnabled() && typeof this.audioContext.audioWorklet !== "undefined") {
    try {
@@ -102,6 +105,8 @@ class SpatialAudioManager extends EventManager_1.EventManager {
  lowpassFilter.type = "lowpass";
  lowpassFilter.frequency.value = 7500; // Below 8 kHz to avoid a flat/muffled sound
  lowpassFilter.Q.value = 1.0; // Quality factor
+ // Adaptive noise gate defaults
+ noiseGate.gain.value = 1.0;
  // Configure the panner for realistic 3D spatial audio
  const distanceConfig = this.getDistanceConfig();
  panner.panningModel = "HRTF"; // Head-Related Transfer Function for realistic 3D
@@ -121,15 +126,16 @@ class SpatialAudioManager extends EventManager_1.EventManager {
  }
  currentNode.connect(highpassFilter);
  highpassFilter.connect(lowpassFilter);
+ lowpassFilter.connect(noiseGate);
  if (bypassSpatialization) {
    console.log(`🔊 TESTING: Connecting audio directly to destination (bypassing spatial audio) for ${participantId}`);
-   lowpassFilter.connect(analyser);
+   noiseGate.connect(analyser);
    analyser.connect(this.masterGainNode);
  }
  else {
    // Standard spatialized path with full audio chain
-   // Audio Chain: source -> filters -> panner -> analyser -> gain -> masterGain -> compressor -> destination
-   lowpassFilter.connect(panner);
+   // Audio Chain: source -> filters -> noiseGate -> panner -> analyser -> gain -> masterGain -> compressor -> destination
+   noiseGate.connect(panner);
    panner.connect(analyser);
    analyser.connect(gain);
    gain.connect(this.masterGainNode);
@@ -139,11 +145,21 @@ class SpatialAudioManager extends EventManager_1.EventManager {
    panner,
    analyser,
    gain,
+   noiseGate,
    highpassFilter,
    lowpassFilter,
    denoiseNode,
    stream,
  });
+ this.stabilityState.set(participantId, {
+   smoothedLevel: 0,
+   targetGain: 1,
+   networkMuted: false,
+ });
+ if (typeof track.onmute !== "undefined") {
+   track.onmute = () => this.handleTrackStability(participantId, true);
+   track.onunmute = () => this.handleTrackStability(participantId, false);
+ }
  console.log(`🎧 Spatial audio setup complete for ${participantId}:`, {
    audioContextState: this.audioContext.state,
    sampleRate: this.audioContext.sampleRate,
@@ -163,12 +179,138 @@ class SpatialAudioManager extends EventManager_1.EventManager {
  // Start monitoring audio levels
  this.startMonitoring(participantId);
  }
+ async enhanceOutgoingAudioTrack(track) {
+   if (track.kind !== "audio") {
+     return track;
+   }
+   const existingProcessor = Array.from(this.outgoingProcessors.values()).find((processor) => processor.originalTrack === track);
+   if (existingProcessor) {
+     return existingProcessor.processedTrack;
+   }
+   await this.applyHardwareNoiseConstraints(track);
+   const context = new AudioContext({ sampleRate: 48000 });
+   await context.resume();
+   const sourceStream = new MediaStream([track]);
+   const source = context.createMediaStreamSource(sourceStream);
+   let current = source;
+   let denoiseNode;
+   if (this.isDenoiserEnabled() && typeof context.audioWorklet !== "undefined") {
+     try {
+       await this.ensureDenoiseWorklet(context);
+       denoiseNode = new AudioWorkletNode(context, "odyssey-denoise", {
+         numberOfInputs: 1,
+         numberOfOutputs: 1,
+         processorOptions: {
+           enabled: true,
+           threshold: this.options.denoiser?.threshold,
+           noiseFloor: this.options.denoiser?.noiseFloor,
+           release: this.options.denoiser?.release,
+           wasmBytes: this.denoiserWasmBytes
+             ? this.denoiserWasmBytes.slice(0)
+             : null,
+         },
+       });
+       current.connect(denoiseNode);
+       current = denoiseNode;
+     }
+     catch (error) {
+       console.warn("⚠️ Outgoing denoiser unavailable, continuing without it.", error);
+     }
+   }
+   const notch60 = context.createBiquadFilter();
+   notch60.type = "notch";
+   notch60.frequency.value = 60;
+   notch60.Q.value = 24;
+   current.connect(notch60);
+   current = notch60;
+   const notch50 = context.createBiquadFilter();
+   notch50.type = "notch";
+   notch50.frequency.value = 50;
+   notch50.Q.value = 24;
+   current.connect(notch50);
+   current = notch50;
+   const lowShelf = context.createBiquadFilter();
+   lowShelf.type = "lowshelf";
+   lowShelf.frequency.value = 120;
+   lowShelf.gain.value = -3;
+   current.connect(lowShelf);
+   current = lowShelf;
+   const highpassFilter = context.createBiquadFilter();
+   highpassFilter.type = "highpass";
+   highpassFilter.frequency.value = 95;
+   highpassFilter.Q.value = 0.8;
+   current.connect(highpassFilter);
+   current = highpassFilter;
+   const lowpassFilter = context.createBiquadFilter();
+   lowpassFilter.type = "lowpass";
+   lowpassFilter.frequency.value = 7200;
+   lowpassFilter.Q.value = 0.8;
+   current.connect(lowpassFilter);
+   current = lowpassFilter;
+   const hissShelf = context.createBiquadFilter();
+   hissShelf.type = "highshelf";
+   hissShelf.frequency.value = 6400;
+   hissShelf.gain.value = -4;
+   current.connect(hissShelf);
+   current = hissShelf;
+   const presenceBoost = context.createBiquadFilter();
+   presenceBoost.type = "peaking";
+   presenceBoost.frequency.value = 2400;
+   presenceBoost.Q.value = 1.1;
+   presenceBoost.gain.value = 2.4;
+   current.connect(presenceBoost);
+   current = presenceBoost;
+   const compressor = context.createDynamicsCompressor();
+   compressor.threshold.value = -18;
+   compressor.knee.value = 16;
+   compressor.ratio.value = 3.2;
+   compressor.attack.value = 0.002;
+   compressor.release.value = 0.22;
+   current.connect(compressor);
+   current = compressor;
+   const postCompressorTap = context.createGain();
+   postCompressorTap.gain.value = 1.05;
+   current.connect(postCompressorTap);
+   current = postCompressorTap;
+   const analyser = context.createAnalyser();
+   analyser.fftSize = 512;
+   current.connect(analyser);
+   const gate = context.createGain();
+   gate.gain.value = 1;
+   current.connect(gate);
+   const destination = context.createMediaStreamDestination();
+   gate.connect(destination);
+   const processedTrack = destination.stream.getAudioTracks()[0];
+   processedTrack.contentHint = "speech";
+   const processorId = processedTrack.id;
+   const monitor = this.startOutboundMonitor(processorId, analyser, gate);
+   const cleanup = () => this.cleanupOutboundProcessor(processorId);
+   processedTrack.addEventListener("ended", cleanup);
+   track.addEventListener("ended", cleanup);
+   this.outgoingProcessors.set(processorId, {
+     context,
+     sourceStream,
+     destinationStream: destination.stream,
+     analyser,
+     gate,
+     monitor,
+     originalTrack: track,
+     processedTrack,
+     cleanupListener: cleanup,
+   });
+   console.log("🎛️ [SDK] Outgoing audio tuned", {
+     originalTrackId: track.id,
+     processedTrackId: processedTrack.id,
+   });
+   return processedTrack;
+ }
  startMonitoring(participantId) {
    const nodes = this.participantNodes.get(participantId);
    if (!nodes)
      return;
-   const { analyser, stream } = nodes;
+   const { analyser, stream, noiseGate } = nodes;
    const dataArray = new Uint8Array(analyser.frequencyBinCount);
+   let lastTrackLog = 0;
    // Clear any existing interval for this participant
    if (this.monitoringIntervals.has(participantId)) {
      clearInterval(this.monitoringIntervals.get(participantId));
@@ -181,16 +323,47 @@ class SpatialAudioManager extends EventManager_1.EventManager {
      }
      const average = sum / dataArray.length;
      const audioLevel = (average / 128) * 255; // Scale to 0-255
-     console.log(`📊 Audio level for ${participantId}: ${audioLevel.toFixed(2)} (0-255 scale)`);
-     if (audioLevel < 1.0) {
+     const normalizedLevel = audioLevel / 255;
+     const stability = this.stabilityState.get(participantId);
+     if (stability) {
+       const smoothing = 0.2;
+       stability.smoothedLevel =
+         stability.smoothedLevel * (1 - smoothing) + normalizedLevel * smoothing;
+       const gateOpenThreshold = 0.035; // empirical speech/noise split
+       const gateCloseThreshold = 0.015;
+       let targetGain = stability.targetGain;
+       if (stability.networkMuted) {
+         targetGain = 0;
+       }
+       else if (stability.smoothedLevel < gateCloseThreshold) {
+         targetGain = 0;
+       }
+       else if (stability.smoothedLevel < gateOpenThreshold) {
+         targetGain = 0.35;
+       }
+       else {
+         targetGain = 1;
+       }
+       if (Math.abs(targetGain - stability.targetGain) > 0.05) {
+         const ramp = targetGain > stability.targetGain ? 0.03 : 0.12;
+         noiseGate.gain.setTargetAtTime(targetGain, this.audioContext.currentTime, ramp);
+         stability.targetGain = targetGain;
+       }
+       if (Math.random() < 0.05) {
+         console.log(`🎚️ [NoiseGate] ${participantId}`, {
+           level: stability.smoothedLevel.toFixed(3),
+           gain: stability.targetGain.toFixed(2),
+         });
+       }
+     }
+     if (audioLevel < 1.0 && Math.random() < 0.2) {
        console.warn(`⚠️ NO AUDIO DATA detected for ${participantId}! Track may be silent or not transmitting.`);
-       console.info(`💡 Check: 1) Is microphone unmuted? 2) Is correct mic selected? 3) Is mic working in system settings?`);
      }
-     // Check track status after 2 seconds
-     setTimeout(() => {
+     if (Date.now() - lastTrackLog > 2000) {
+       lastTrackLog = Date.now();
        const track = stream.getAudioTracks()[0];
        if (track) {
-         console.log(`🔊 Audio track status after 2s for ${participantId}:`, {
+         console.log(`🔊 Audio track status for ${participantId}:`, {
            trackEnabled: track.enabled,
            trackMuted: track.muted,
            trackReadyState: track.readyState,
@@ -202,10 +375,20 @@ class SpatialAudioManager extends EventManager_1.EventManager {
          },
        });
      }
-   }, 2000);
-   }, 2000); // Log every 2 seconds
+   }
+   }, 250); // Adaptive monitoring ~4x per second
    this.monitoringIntervals.set(participantId, interval);
  }
+ handleTrackStability(participantId, muted) {
+   const nodes = this.participantNodes.get(participantId);
+   if (!nodes)
+     return;
+   const stability = this.stabilityState.get(participantId);
+   if (stability) {
+     stability.networkMuted = muted;
+   }
+   nodes.noiseGate.gain.setTargetAtTime(muted ? 0 : 1, this.audioContext.currentTime, muted ? 0.05 : 0.2);
+ }
  /**
   * Update spatial audio position and orientation for a participant
   *
@@ -389,11 +572,18 @@ class SpatialAudioManager extends EventManager_1.EventManager {
    nodes.panner.disconnect();
    nodes.analyser.disconnect();
    nodes.gain.disconnect();
+   nodes.noiseGate.disconnect();
    if (nodes.denoiseNode) {
      nodes.denoiseNode.disconnect();
    }
+   const track = nodes.stream.getAudioTracks()[0];
+   if (track) {
+     track.onmute = null;
+     track.onunmute = null;
+   }
    nodes.stream.getTracks().forEach((track) => track.stop());
    this.participantNodes.delete(participantId);
+   this.stabilityState.delete(participantId);
    console.log(`🗑️ Removed participant ${participantId} from spatial audio.`);
  }
  }
@@ -476,11 +666,85 @@ class SpatialAudioManager extends EventManager_1.EventManager {
  isDenoiserEnabled() {
    return this.options.denoiser?.enabled !== false;
  }
- async ensureDenoiseWorklet() {
+ async applyHardwareNoiseConstraints(track) {
+   try {
+     await track.applyConstraints({
+       echoCancellation: true,
+       noiseSuppression: true,
+       autoGainControl: true,
+       advanced: [
+         {
+           echoCancellation: true,
+           noiseSuppression: true,
+           autoGainControl: true,
+           googEchoCancellation: true,
+           googNoiseSuppression: true,
+           googAutoGainControl: true,
+           googHighpassFilter: true,
+           googTypingNoiseDetection: true,
+         },
+       ],
+     });
+   }
+   catch (error) {
+     console.warn("⚠️ Unable to apply hardware audio constraints", error);
+   }
+   track.contentHint = "speech";
+ }
+ startOutboundMonitor(processorId, analyser, gate) {
+   const dataArray = new Uint8Array(analyser.fftSize);
+   let smoothedLevel = 0;
+   return setInterval(() => {
+     analyser.getByteTimeDomainData(dataArray);
+     let sum = 0;
+     for (const value of dataArray) {
+       sum += Math.abs(value - 128);
+     }
+     const level = (sum / dataArray.length) / 128;
+     smoothedLevel = smoothedLevel * 0.7 + level * 0.3;
+     let targetGain = 1;
+     if (smoothedLevel < 0.02) {
+       targetGain = 0;
+     }
+     else if (smoothedLevel < 0.05) {
+       targetGain = 0.45;
+     }
+     else {
+       targetGain = 1;
+     }
+     gate.gain.setTargetAtTime(targetGain, gate.context.currentTime, targetGain > gate.gain.value ? 0.02 : 0.08);
+     if (Math.random() < 0.03) {
+       console.log("🎚️ [SDK] Outgoing gate", {
+         processorId,
+         level: smoothedLevel.toFixed(3),
+         gain: targetGain.toFixed(2),
+       });
+     }
+   }, 200);
+ }
+ cleanupOutboundProcessor(processorId) {
+   const processor = this.outgoingProcessors.get(processorId);
+   if (!processor)
+     return;
+   clearInterval(processor.monitor);
+   processor.processedTrack.removeEventListener("ended", processor.cleanupListener);
+   processor.originalTrack.removeEventListener("ended", processor.cleanupListener);
+   try {
+     processor.originalTrack.stop();
+   }
+   catch (error) {
+     console.warn("⚠️ Unable to stop original track during cleanup", error);
+   }
+   processor.destinationStream.getTracks().forEach((t) => t.stop());
+   processor.sourceStream.getTracks().forEach((t) => t.stop());
+   processor.context.close();
+   this.outgoingProcessors.delete(processorId);
+ }
+ async ensureDenoiseWorklet(targetContext = this.audioContext) {
    if (!this.isDenoiserEnabled()) {
      return;
    }
-   if (!("audioWorklet" in this.audioContext)) {
+   if (!("audioWorklet" in targetContext)) {
      console.warn("⚠️ AudioWorklet not supported in this browser. Disabling denoiser.");
      this.options.denoiser = {
        ...(this.options.denoiser || {}),
@@ -488,8 +752,9 @@ class SpatialAudioManager extends EventManager_1.EventManager {
      };
      return;
    }
-   if (this.denoiseWorkletReady) {
-     return this.denoiseWorkletReady;
+   const existingPromise = this.denoiseContextPromises.get(targetContext);
+   if (existingPromise) {
+     return existingPromise;
    }
    const processorSource = `class OdysseyDenoiseProcessor extends AudioWorkletProcessor {
    constructor(options) {
@@ -546,11 +811,13 @@
 
  registerProcessor('odyssey-denoise', OdysseyDenoiseProcessor);
  `;
-   const blob = new Blob([processorSource], {
-     type: "application/javascript",
-   });
-   this.denoiseWorkletUrl = URL.createObjectURL(blob);
-   this.denoiseWorkletReady = this.audioContext.audioWorklet
+   if (!this.denoiseWorkletUrl) {
+     const blob = new Blob([processorSource], {
+       type: "application/javascript",
+     });
+     this.denoiseWorkletUrl = URL.createObjectURL(blob);
+   }
+   const promise = targetContext.audioWorklet
      .addModule(this.denoiseWorkletUrl)
      .catch((error) => {
        console.error("❌ Failed to register denoise worklet", error);
@@ -560,7 +827,8 @@ registerProcessor('odyssey-denoise', OdysseyDenoiseProcessor);
      };
      throw error;
    });
-   return this.denoiseWorkletReady;
+   this.denoiseContextPromises.set(targetContext, promise);
+   return promise;
  }
  resolveOptions(options) {
    const distanceDefaults = {
package/dist/index.d.ts CHANGED
@@ -26,6 +26,7 @@ export declare class OdysseySpatialComms extends EventManager {
  }): Promise<Participant>;
  leaveRoom(): void;
  resumeAudio(): Promise<void>;
+ enhanceOutgoingAudioTrack(track: MediaStreamTrack): Promise<MediaStreamTrack>;
  getAudioContextState(): AudioContextState;
  produceTrack(track: MediaStreamTrack): Promise<any>;
  updatePosition(position: Position, direction: Direction, spatialData?: {
package/dist/index.js CHANGED
@@ -121,6 +121,9 @@ class OdysseySpatialComms extends EventManager_1.EventManager {
  async resumeAudio() {
    await this.spatialAudioManager.resumeAudioContext();
  }
+ async enhanceOutgoingAudioTrack(track) {
+   return this.spatialAudioManager.enhanceOutgoingAudioTrack(track);
+ }
  getAudioContextState() {
    return this.spatialAudioManager.getAudioContextState();
  }
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@newgameplusinc/odyssey-audio-video-sdk-dev",
-   "version": "1.0.11",
+   "version": "1.0.12",
    "description": "Odyssey Spatial Audio & Video SDK using MediaSoup for real-time communication",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",