WebRTC Simulcast: What It Is and How It Works
How can you broadcast at sub-second latency to multiple viewers with varying computing resources? WebRTC simulcasting is the process by which WebRTC encodes and sends multiple versions of the same live stream, keeping playback smooth and reasonably high quality regardless of each viewer's bandwidth.
In this article, we dig into how this technology differs from others with similar names and overlapping goals. We ask what makes WebRTC uniquely beneficial and how simulcasting improves its workflow without losing what sets it apart: ultra-low latency streaming.
Table of Contents
- What Is WebRTC?
- What Is WebRTC Simulcasting?
- How Do You Simulcast in WebRTC?
- Why Use WebRTC Simulcast?
- WebRTC Simulcast and Wowza
What Is WebRTC?
Google built Web Real-Time Communication (WebRTC) with near real-time, ultra-low latency streaming in mind. However, that wasn't its only promise. WebRTC was also based, at least in part, on the ideas of bitrate adaptation and remote bandwidth control. These elements come together, along with its basis in the UDP protocol, to provide a streaming solution that is both lightning fast and moderately reliable.
How Is WebRTC So Fast?
This browser-based protocol provides real-time voice, text, and video communications. At its core, WebRTC is a peer-to-peer protocol. In other words, it doesn't funnel data through a central media server before reaching the viewer's device, essentially cutting out the middleman. Instead, each client device takes on server-like duties, requesting and processing data directly. Of course, when it comes to simulcasting with WebRTC, you will need a server to facilitate the stream, but more on that later.
Brief Intro to UDP Protocol
The biggest contributing factor to WebRTC's speed, however, is the UDP protocol that underlies it. User Datagram Protocol (UDP) is a communications protocol built on one central principle: speed is prioritized over reliability. UDP can start sending data without the client/server "handshake" that TCP requires. In short, data packets are delivered quickly, but whether they arrive in order, or at all, can be a bit of a wildcard. That said, UDP does use checksums, bite-sized data tests, to catch packets corrupted in transit. It's just that this method has its limitations.
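To make the "bite-sized data test" concrete, here is a minimal sketch of the one's-complement checksum scheme that UDP uses (the folding arithmetic follows the classic internet checksum; this is an illustration of the idea, not a full UDP implementation):

```javascript
// One's-complement internet checksum over a byte array.
// UDP attaches a value like this so the receiver can detect
// corruption -- but it cannot recover lost or reordered packets.
function internetChecksum(bytes) {
  let sum = 0;
  // Sum the data as 16-bit words (an odd trailing byte is padded with zero).
  for (let i = 0; i < bytes.length; i += 2) {
    sum += (bytes[i] << 8) + (bytes[i + 1] || 0);
    // Fold any carry above 16 bits back into the low bits.
    sum = (sum & 0xffff) + (sum >>> 16);
  }
  // The checksum is the one's complement of the folded sum.
  return ~sum & 0xffff;
}

// Receiver side: recompute and compare against the transmitted value.
function verify(bytes, checksum) {
  return internetChecksum(bytes) === checksum;
}
```

If a bit flips in transit, `verify` fails and the datagram can be discarded; nothing here retransmits it, which is exactly the speed-over-reliability trade-off described above.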
It’s worth noting here that how WebRTC transmits data over UDP is a bit more complicated. After all, WebRTC can’t establish a peer-to-peer connection all on its own. The peers must exchange Session Description Protocol (SDP) messages describing the session, as well as ICE candidates (potential network routes, such as a public IP address and port). Feel free to read more about WebRTC signaling and related workflows. For now, let’s focus on what happens during WebRTC simulcasting.
What Is WebRTC Simulcasting?
Thanks to the open-source nature of WebRTC, many streaming solutions have found ways to capitalize on its capabilities. After all, who doesn’t want lightning-fast streams for their wider broadcasts? But the more viewers you add to a stream, the more likely it is that someone won’t have the bandwidth to keep up. That’s where WebRTC simulcasting comes in.
WebRTC simulcasting prepares alternate streams to cater to a range of bandwidths and helps ensure that everyone gets a steady stream. It does this by anticipating bandwidth needs and encoding several alternate streams for distribution according to those needs. However, before we get too deep into the weeds regarding this process, let’s take a moment to better define what WebRTC simulcasting is by what it isn’t.
WebRTC Simulcasting vs. Multistreaming
Of course, if you’re in the business of streaming, you’ve likely heard the term simulcasting before. Simulcasting in a broader streaming sense, also called multistreaming, is an entirely different animal. Multistreaming refers to the act of streaming simultaneously to a wide range of platforms, including social media sites. It’s all about playback locations and not about encoding media for playback at a range of bandwidths. That said, the ultimate goal is similar: reach more people.
WebRTC Simulcasting vs. Adaptive Bitrate Streaming
We’ve been talking a lot about bitrates, and that might have your ears perking up. After all, adaptive bitrate (ABR) streaming is a hot topic in live streaming, and for good reason. However, it’s worth noting that while the term ABR might suggest a broader category of activity, it refers to a specific process involving encoding ladders and dynamic adjustment to available bandwidth. This is distinct from the somewhat more rudimentary bitrate adaptation you can expect from WebRTC simulcasting.
WebRTC simulcasting involves encoding a small handful of bitrate options at the client, which are then selected for distribution based on individual bandwidth availability. Only one bitrate is selected per viewer, and once selected, it is not inclined to change with bandwidth fluctuations as is the purview of ABR. ABR, on the other hand, prepares multiple bitrate options that the end-user device selects from based on its needs. The device then switches between them as needed mid-stream to maximize quality without sacrificing reliability.
In this way, ABR is better equipped to deal with an individual viewer’s fluctuating bandwidth, while WebRTC simulcasting is stuck with its initial estimate. Simulcasting typically puts less pressure on the server, however, because the publishing client generates the alternate streams itself and sends them to the server. All the server needs to do is forward them on to the end-user devices. In any case, the two technologies are heavily linked.
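The contrast can be sketched as two toy selection models. The layer bitrates here are illustrative assumptions, not values from any spec; the point is that simulcast selects once while ABR keeps re-selecting:

```javascript
// Hypothetical bitrate layers in kbps, highest first.
const LAYERS = [2500, 1200, 500];

// Pick the highest layer that fits a bandwidth estimate,
// falling back to the lowest layer if nothing fits.
function pickLayer(bandwidthKbps, layers = LAYERS) {
  const fit = layers.find((bitrate) => bitrate <= bandwidthKbps);
  return fit !== undefined ? fit : layers[layers.length - 1];
}

// Simulcast model: one selection up front, held for the session.
function simulcastSession(initialEstimateKbps, laterEstimatesKbps) {
  const chosen = pickLayer(initialEstimateKbps);
  return laterEstimatesKbps.map(() => chosen); // never re-selected
}

// ABR model: the player re-selects as its bandwidth fluctuates.
function abrSession(initialEstimateKbps, laterEstimatesKbps) {
  return laterEstimatesKbps.map((estimate) => pickLayer(estimate));
}
```

With a mid-stream dip from 3000 to 600 kbps, `simulcastSession` keeps sending the 2500 kbps layer (and risks stalling), while `abrSession` drops to 500 kbps and recovers afterward.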
How Do You Simulcast in WebRTC?
Understanding the big picture is one thing, but how do you actually make it happen? Simulcasting with WebRTC is a lot simpler than you might think. It all comes down to having a selective forwarding unit (SFU) media server.
What Is an SFU?
Also known as a selective forwarding middlebox (SFM), an SFU receives multiple data streams from the source and reroutes them to the appropriate playback locations. In other words, it doesn’t transcode or host the data; it simply triages it in a way that out-of-the-box WebRTC can’t.
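A minimal sketch of that triage, assuming made-up object shapes rather than any real SFU's API: the SFU receives every simulcast layer but transcodes nothing, so its core job reduces to deciding which already-encoded layer each subscriber gets. (The `rid` field is borrowed from real simulcast terminology, where each layer carries an RTP stream identifier.)

```javascript
// Decide which pre-encoded layer to forward to each subscriber.
// layers: [{ rid, kbps }, ...] sorted highest bitrate first.
// subscribers: [{ id, estimatedKbps }, ...]
function forwardingPlan(layers, subscribers) {
  return subscribers.map((sub) => {
    // Highest layer that fits this subscriber's estimate,
    // else the lowest layer as a last resort.
    const layer =
      layers.find((l) => l.kbps <= sub.estimatedKbps) ??
      layers[layers.length - 1];
    return { subscriber: sub.id, rid: layer.rid };
  });
}

// Example: one well-connected viewer, one constrained viewer.
const layers = [
  { rid: "h", kbps: 2500 },
  { rid: "m", kbps: 1200 },
  { rid: "l", kbps: 500 },
];
const plan = forwardingPlan(layers, [
  { id: "a", estimatedKbps: 3000 },
  { id: "b", estimatedKbps: 700 },
]);
```

Note that no re-encoding happens anywhere in the plan; that is what keeps an SFU cheap relative to a transcoding media server.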
WebRTC Simulcast Workflow
Let’s look at WebRTC simulcasting from beginning to end, then dig down into how this differs from peer-to-peer WebRTC functions.
- Publisher picks a target output resolution.
- Simulcast determines three different bitrate options based on a set algorithm using the target output resolution.
- Simulcast encodes three streams, one for each bitrate option.
- Each alternate stream is sent to the SFU server for triaging. If the bandwidth between the publisher and the SFU is too low to send all three streams, they will be re-encoded at lower bitrates.
- The SFU server sends a single alternate stream to each end-user device based on available bandwidth.
- Users experience the best possible combination of stream reliability and quality.
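In browsers, the publisher-side half of the workflow above is typically expressed through `RTCPeerConnection.addTransceiver` with a `sendEncodings` array. The helper below only builds that array; the `rid`, `scaleResolutionDownBy`, and `maxBitrate` fields match the real `RTCRtpEncodingParameters` shape, but the bitrate math is an illustrative assumption, not the algorithm any particular browser uses:

```javascript
// Build a three-layer simulcast ladder: full, half, and quarter resolution.
function buildSendEncodings(topBitrateBps) {
  return [1, 2, 4].map((scale, i) => ({
    rid: ["h", "m", "l"][i],
    scaleResolutionDownBy: scale,
    // Illustrative assumption: bitrate scales with pixel count (1/scale^2).
    maxBitrate: Math.round(topBitrateBps / (scale * scale)),
  }));
}

// In a browser, this array would be handed to the peer connection, e.g.:
//   pc.addTransceiver(videoTrack, {
//     direction: "sendonly",
//     sendEncodings: buildSendEncodings(2_500_000),
//   });
```

The encoder then produces one stream per entry, and the SFU forwards exactly one of the three `rid`s to each viewer.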
This, in a nutshell, is how simple a WebRTC simulcast can be. Let’s sharpen the picture by looking at how this differs from other WebRTC functions.
WebRTC Simulcast vs. WebRTC Bandwidth Estimation
It’s easy to confuse these attempts to encode multiple streams at different bitrates with what’s known as WebRTC bandwidth estimation. In a peer-to-peer connection, WebRTC employs bandwidth estimation before starting a stream. WebRTC determines how much bandwidth the end-user device has to work with and prepares the data accordingly. The ultimate goal is to avoid lag without sacrificing quality.
However, because WebRTC simulcasting uses an SFU to triage alternate streams to various end-user devices, the publisher can no longer “see” how much bandwidth the end-user device has to work with. Instead, it must base these estimations on the target output resolution and how much bandwidth is available between it and the SFU. The SFU takes it from there after already having received the alternate streams.
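That uplink constraint can be sketched as a toy budgeting function. Real browsers deactivate or throttle layers internally through their own congestion control; this hypothetical version simply disables the highest layers until the remaining ones fit the publisher-to-SFU estimate, which also mirrors the workflow step where a constrained uplink forces lower bitrates:

```javascript
// Fit a simulcast ladder to the publisher-to-SFU uplink estimate.
// encodings: [{ rid, maxBitrate }, ...] sorted highest bitrate first.
function fitLadderToUplink(encodings, uplinkBudget) {
  let budget = uplinkBudget;
  return encodings
    .slice()
    .reverse() // walk lowest layer first so it survives longest
    .map((enc) => {
      const active = enc.maxBitrate <= budget;
      if (active) budget -= enc.maxBitrate;
      return { ...enc, active };
    })
    .reverse();
}
```

With layers of 2400, 800, and 300 kbps and a 1200 kbps uplink, only the two lower layers stay active, so low-bandwidth viewers keep their stream while the top rendition is sacrificed.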
WebRTC Simulcast vs. WebRTC Bitrate Control
Another term you’re likely to have come across in your research is WebRTC bitrate control. The term is not well defined and could refer to anything from generating alternate streams for simulcasting to accommodating bandwidth estimates in a peer-to-peer connection.
In the latter, WebRTC can exert some control over available bitrate. WebRTC will pick what it perceives to be the best bitrate option based on its bandwidth estimations. In the case of WebRTC simulcasting, three bitrate options are prepared and sent to an SFU for triaging (see above). In both cases, the bitrate selected for a given end-user device is the bitrate for the duration of the stream.
And there’s the rub. WebRTC can’t perfectly predict how a viewer’s bandwidth will behave. Bandwidth can fluctuate substantially with changes in the viewer’s network and location. As WebRTC attempts to use the maximum bitrate a viewer’s bandwidth allows, it may find that its initial estimates only go so far.
Why Use WebRTC Simulcast?
Simple. You want the ultra-low latency of WebRTC and the ability to stream to a wider audience with varied bandwidth restrictions without sacrificing quality. WebRTC simulcast does this by providing bitrate options for a range of bandwidths, allowing WebRTC to accommodate those at the bottom of the bandwidth barrel without punishing those at the top.
Issues With WebRTC
For all that WebRTC has to offer, it has some significant limitations. The technology was historically intended for peer-to-peer, in-browser use cases like video conferencing rather than large-scale broadcasting. The methods it uses to achieve ultra-low latency are the same methods that make it less reliable and less scalable.
However, the demand for WebRTC at higher scales exists and many workflows and intermediary solutions have stepped up to bridge the gaps. Simulcasting is an essential part of how they help WebRTC adapt to wider audiences and the accompanying range of bandwidth restrictions.
Benefits of WebRTC Simulcasting
- Expand your audience by accommodating their bandwidth restrictions.
- Maintain video quality by having a handful of bitrate options that an SFU server can distribute to the appropriate end-user devices.
- Promote reliability by catering to varying bandwidth needs.
- Enjoy ultra-low latency while achieving all of the above.
WebRTC Simulcast and Wowza
Wowza’s Real Time Streaming at Scale solution takes WebRTC and expands it dramatically, allowing for ultra-low latency streaming to up to a million viewers across a wide range of platforms. Real Time Streaming at Scale works similarly to those steps described above, including having three different bitrate options and an SFU server to triage them.
In our workflow, data is sent from various devices to the Wowza Video API. We then deliver the video through a custom content delivery network (CDN) that also acts as the SFU for large-scale delivery. What’s more, we do all this while still delivering on the promise of WebRTC, sub-second latency for an essentially real-time experience.
About Sydney Roy (Whalen)
Sydney works for Wowza as a content writer and Marketing Communications Specialist, leveraging roughly a decade of experience in copywriting, technical writing, and content development. When observed in the wild, she can be found gaming, reading, hiking, parenting, overspending at the Renaissance Festival, and leaving coffee cups around the house.