@webex/web-client-media-engine 1.41.1 → 1.42.0
- package/README.md +183 -0
- package/dist/cjs/index.js.map +1 -1
- package/dist/esm/index.js.map +1 -1
- package/dist/types/index.d.ts +28 -28
- package/package.json +1 -2
package/README.md
ADDED
# Web Client Media Engine (WCME)

Web Client Media Engine is common web code for interacting with the multistream media server.

## Setup

1. Run `yarn` to install dependencies.
1. Run `yarn watch` to build and watch for updates.
1. Run `yarn test` to build, run tests, lint, and run test coverage.

## Multistream Connection

The `MultistreamConnection` object is the primary interface for clients to interact with the media server. It maintains a list of transceivers and handles media requests to and from the media server. Each `MultistreamConnection` has a single RTCPeerConnection that is shared across all `MediaType`s. It also creates an RTCDataChannel to send and receive JMP messages.

### Example

Establish a multistream connection and start sending audio/video:

```typescript
const multistreamConnection = new MultistreamConnection();

const offer = await multistreamConnection.createOffer();
// after sending the offer to the server and receiving the answer
await multistreamConnection.setAnswer(answer);

const localAudioTrack = createMicrophoneTrack();
multistreamConnection.publishTrack(localAudioTrack);

const localVideoTrack = createCameraTrack();
multistreamConnection.publishTrack(localVideoTrack);
```

## SDP Management

Clients need to send audio and video (limited to main audio, main video, content audio, and content video) as well as receive any number of audio and video streams. Unified Plan in WebRTC imposes a limitation that only a single stream can be signaled per mline. The media server, however, only supports four mlines (main audio, main video, content audio, and content video), so in order to receive more streams we need to manipulate the SDP in a way that works for both the browser and the server.

To accomplish this, there are mlines that only the browser knows about -- these mlines are "filtered out" before sending an offer to the server, and corresponding mlines are "injected" into the answer from the server. This works for a few reasons:

- The browser is configured to use one ICE bundle group for each media type: this means that the additional mlines don't require extra ICE connections.
- The browser only needs packets tagged in a certain way to receive them correctly. The media server supports this tagging, so packets are routed to the correct RTCRtpReceiver.
- JMP signaling allows the client to tell the media server how to tag the packets, so media is received correctly.

An example (truncated) client SDP:

```
v=0
...
// An audio mline for sending main audio
m=audio 56200 UDP/TLS/RTP/SAVPF 111
a=mid:0
a=sendrecv
a=jmp
a=jmp-source:0 csi=677569024
// A video mline for sending main video
m=video 63722 UDP/TLS/RTP/SAVPF 127 125 108 124 123 35 114
a=mid:1
a=sendrecv
a=jmp
a=jmp-source:1 csi=677569025
// An audio mline for sending content audio
m=audio 56200 UDP/TLS/RTP/SAVPF 111
a=mid:2
a=sendrecv
a=content:slides
a=jmp
a=jmp-source:2 csi=777569024
// A video mline for sending content video
m=video 56200 UDP/TLS/RTP/SAVPF 111
a=mid:3
a=sendrecv
a=content:slides
a=jmp
a=jmp-source:3 csi=777569025
// 3 recvonly audio mlines for receiving audio
m=audio 56200 UDP/TLS/RTP/SAVPF 111
a=mid:4
a=recvonly
m=audio 56200 UDP/TLS/RTP/SAVPF 111
a=mid:5
a=recvonly
m=audio 56200 UDP/TLS/RTP/SAVPF 111
a=mid:6
a=recvonly
// 3 recvonly video mlines for receiving video
m=video 56200 UDP/TLS/RTP/SAVPF 111
a=mid:7
a=recvonly
m=video 56200 UDP/TLS/RTP/SAVPF 111
a=mid:8
a=recvonly
m=video 56200 UDP/TLS/RTP/SAVPF 111
a=mid:9
a=recvonly
// mline for datachannel
m=application 55527 UDP/DTLS/SCTP webrtc-datachannel
a=mid:10
```

In the above SDP, only mlines 0, 1, 2, 3, and 10 will be sent to Homer. The rest are only seen by the client and are used to generate RTCRtpTransceivers that act as handles for media received on those mlines. The answer from Homer will likewise contain only mlines 0, 1, 2, 3, and 10; the client then generates synthetic answer mlines to match the "hidden" ones (4, 5, 6, 7, 8, and 9) before setting the remote description.

Mlines 4, 5, 6, 7, 8, and 9 serve only as a means of generating tracks associated with a MID on the client. To request media on those tracks, the client sends a `MediaRequest` (described below) with the appropriate MID.

`MultistreamConnection` handles all the SDP manipulation required to accomplish the above for both the offer and the answer. It also performs additional preprocessing on the offer, such as injecting content types and JMP attributes.
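The filter/inject flow can be sketched in a simplified form. This is an illustration only, not WCME's implementation: `filterHiddenMlines` and `injectSyntheticAnswers` are invented names, and real SDP handling (bundle groups, MID bookkeeping, codec negotiation) is considerably more involved.

```typescript
// Split an SDP blob into the session section and one section per mline.
function splitSections(sdp: string): { session: string; mlines: string[] } {
  const parts = sdp.split(/(?=^m=)/m);
  return { session: parts[0], mlines: parts.slice(1) };
}

// Remove browser-only (recvonly) mlines before sending the offer to the server.
function filterHiddenMlines(offer: string): { filtered: string; hiddenIndices: number[] } {
  const { session, mlines } = splitSections(offer);
  const hiddenIndices: number[] = [];
  const kept = mlines.filter((m, i) => {
    const hidden = m.includes('a=recvonly');
    if (hidden) hiddenIndices.push(i);
    return !hidden;
  });
  return { filtered: session + kept.join(''), hiddenIndices };
}

// Re-insert synthetic answer mlines so the answer's mline order matches the
// original offer (a recvonly offer mline is answered with sendonly).
function injectSyntheticAnswers(answer: string, offer: string): string {
  const offerSections = splitSections(offer);
  const answerSections = splitSections(answer);
  const result: string[] = [];
  let a = 0;
  offerSections.mlines.forEach((m) => {
    if (m.includes('a=recvonly')) {
      result.push(m.replace('a=recvonly', 'a=sendonly')); // synthetic mline
    } else {
      result.push(answerSections.mlines[a++]); // real answer from the server
    }
  });
  return answerSections.session + result.join('');
}
```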

## Transceivers

In WebRTC, RTCRtpTransceivers represent a pairing of send and receive SRTP streams between the client and server. WCME defines two classes of transceivers: `SendOnlyTransceiver` and `RecvOnlyTransceiver`. Each `MediaType` (`VideoMain`, `AudioMain`, `VideoSlides`, or `AudioSlides`) can have only one `SendOnlyTransceiver` but may have multiple `RecvOnlyTransceiver`s. `MultistreamConnection` maintains a list of all transceivers in a connection per `MediaType`.

Although a `SendOnlyTransceiver` is only used for sending, its underlying RTCRtpTransceiver direction is set to "sendrecv" in order to make it compatible with Homer. Each `SendOnlyTransceiver` maintains the state of the sending track -- whether or not it is published, and whether or not it has been requested by a remote peer -- and handles replacing the existing track with a new one (e.g. when switching camera devices).

`RecvOnlyTransceiver`s maintain the state of receiving tracks. Each `RecvOnlyTransceiver` is associated with a single `ReceiveSlot` (described below).
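The one-sender/many-receivers constraint per `MediaType` can be sketched as follows. These are hypothetical types and names for illustration, not the actual WCME classes, which wrap real RTCRtpTransceivers and carry much more state.

```typescript
enum MediaType {
  AudioMain = 'AUDIO-MAIN',
  VideoMain = 'VIDEO-MAIN',
  AudioSlides = 'AUDIO-SLIDES',
  VideoSlides = 'VIDEO-SLIDES',
}

interface SendOnlyTransceiver { kind: 'send'; mediaType: MediaType }
interface RecvOnlyTransceiver { kind: 'recv'; mediaType: MediaType; mid: string }

class TransceiverRegistry {
  private send = new Map<MediaType, SendOnlyTransceiver>();
  private recv = new Map<MediaType, RecvOnlyTransceiver[]>();

  // At most one SendOnlyTransceiver per MediaType.
  addSend(t: SendOnlyTransceiver): void {
    if (this.send.has(t.mediaType)) {
      throw new Error(`send transceiver already exists for ${t.mediaType}`);
    }
    this.send.set(t.mediaType, t);
  }

  // Any number of RecvOnlyTransceivers per MediaType.
  addRecv(t: RecvOnlyTransceiver): void {
    const list = this.recv.get(t.mediaType) ?? [];
    list.push(t);
    this.recv.set(t.mediaType, list);
  }

  recvCount(mediaType: MediaType): number {
    return (this.recv.get(mediaType) ?? []).length;
  }
}
```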

## ReceiveSlot

WebRTC clients need tracks to be able to receive remote media, but creating a track for every remote participant's media would result in a huge SDP and would require updating the SDP every time a client joined or left the session. So instead of allocating an RTCRtpReceiver for every remote participant, the receivers are treated as "slots" on which a remote participant (CSI) can be requested. Each `ReceiveSlot` has an ID (the MID of the corresponding mline), and media is requested via JMP using these IDs. For example, if a client wants to receive the 3 most active speakers' audio, it needs to create only 3 main audio receive slots, even if there are 25 participants in the meeting.

The media received on a `ReceiveSlot` can change over time, depending on the policy requested on that slot and/or future media requests on that slot. `Source Indication` messages are sent by the server to notify the client which CSI is being received on a `ReceiveSlot` at a given time.
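As an illustration of the slot model (invented names; the real `ReceiveSlot` is tied to an RTCRtpTransceiver and its track), a slot that tracks source-indication updates might look like:

```typescript
// A source-indication-style message: which CSI now flows on which slot (MID).
type SourceIndication = { mid: string; csi: number };

class SimpleReceiveSlot {
  public currentCsi: number | null = null;

  constructor(
    public readonly mid: string,
    private onSourceChanged: (csi: number) => void = () => {},
  ) {}

  // Called when the server reports the source currently flowing on this slot.
  handleSourceIndication(msg: SourceIndication): void {
    if (msg.mid !== this.mid) return; // message is for a different slot
    if (msg.csi !== this.currentCsi) {
      this.currentCsi = msg.csi;
      this.onSourceChanged(msg.csi);
    }
  }
}
```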

## Requesting Media

The `requestMedia` API is used to request media from the media server. It takes a `MediaType` (`VideoMain`, `AudioMain`, `VideoSlides`, or `AudioSlides`) and an array of `MediaRequest` objects. A `MediaRequest` object consists of a policy (`ActiveSpeaker` or `ReceiverSelected`), some policy-specific information, and an array of `ReceiveSlot`s on which the requested media will be received.

```typescript
requestMedia(mediaType: MediaType, mediaRequests: MediaRequest[]): void
```

The `MediaRequest` object is an abstraction over the JMP SCR object, and allows crafting different combinations of policies and `ReceiveSlot`s to achieve different behaviors. A `MediaRequest` consists of:

- Policy
- PolicySpecificInfo
- ReceiveSlot[]

Details for these fields are the same as for the JMP SCR. Information can be found [here](https://confluence-eng-gpk2.cisco.com/conf/pages/viewpage.action?spaceKey=WMT&title=JMP+-+Json+Multistream+Protocol).
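To illustrate the abstraction only (the actual JMP SCR wire format is defined by the protocol documentation linked above, not by this sketch), a `MediaRequest`-like object bundles one policy with the MIDs of the slots it applies to:

```typescript
// Hypothetical shapes for illustration -- not the real SCR payload.
type Policy = 'active-speaker' | 'receiver-selected';

interface MediaRequestLike {
  policy: Policy;
  policyInfo: Record<string, unknown>;
  receiveSlotMids: string[];
}

// Flatten requests into per-slot entries, the kind of association a client
// would serialize when telling the server what to send on each slot.
function toSlotEntries(requests: MediaRequestLike[]): { mid: string; policy: Policy }[] {
  return requests.flatMap((r) => r.receiveSlotMids.map((mid) => ({ mid, policy: r.policy })));
}
```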

### Examples

Requesting the audio of the 3 active speakers:

```typescript
// Create the receive slots
const audioSlot1 = await createReceiveSlot(MediaType.AudioMain);
const audioSlot2 = await createReceiveSlot(MediaType.AudioMain);
const audioSlot3 = await createReceiveSlot(MediaType.AudioMain);
requestMedia(MediaType.AudioMain, [
  new MediaRequest(Policy.ActiveSpeaker, new ActiveSpeakerInfo(100, false, false, true), [
    audioSlot1,
    audioSlot2,
    audioSlot3,
  ]),
]);
```

Requesting the video of a specific CSI:

```typescript
// Create the receive slot
const videoSlot1 = await createReceiveSlot(MediaType.VideoMain);
requestMedia(MediaType.VideoMain, [
  new MediaRequest(Policy.ReceiverSelected, new ReceiverSelectedInfo(csiToSelect), [videoSlot1]),
]);
```

## Utilities

WCME also defines several utility files with exported functions that may be useful both in WCME and in other code:

- `sdp-utils`: Contains functions that munge and manipulate SDP offer/answer descriptions.
- `ua-utils`: Contains functions to help identify the current user agent (i.e. browser name and version).
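For example, a minimal ua-utils-style helper might look like this (a hypothetical sketch, not the library's actual export):

```typescript
// Extract a browser name and major version from a user-agent string.
function parseBrowser(ua: string): { name: string; major: number } | null {
  // Order matters: Chrome UAs also contain "Safari", and Edge UAs contain "Chrome".
  const patterns: [string, RegExp][] = [
    ['edge', /Edg\/(\d+)/],
    ['firefox', /Firefox\/(\d+)/],
    ['chrome', /Chrome\/(\d+)/],
    ['safari', /Version\/(\d+).*Safari/],
  ];
  for (const [name, re] of patterns) {
    const m = ua.match(re);
    if (m) return { name, major: Number(m[1]) };
  }
  return null; // unknown user agent
}
```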

## Logging

Logging is done through the `js-logger` library, and the `Logger` class is exported for use in other repositories. This can be done by importing `Logger` and setting a handler on it.

```ts
import { Logger } from '@webex/web-client-media-engine';

Logger.setHandler((msgs, context) => {
  // do something with logs
  console.log(context.name, msgs);
});
```