Low-Latency HLS vs. WebRTC: What to Know Before Choosing a Protocol

October 26, 2020 by Anne Balistreri

You know you need low-latency streams. But not all low-latency protocols are created equal. So, how do you decide which one is right for your use case? The last thing you want is to build a solution, send it to QA, and then discover that it’s full of glitches and that it won’t stop buffering. There’s also the financial factor — some protocols will cost you more money when you use them in a certain way. This post will look at Low-Latency HLS and WebRTC, the top two low-latency protocols, and it will tell you what you need to know to choose the right one.

 

Latency

Is WebRTC vs. HLS truly a battle of the low-latency protocols? Before the Apple Worldwide Developer Conference (WWDC) in 2019, I would’ve given that a hard no. Apple’s HTTP Live Streaming (HLS) protocol has a lot of great attributes that have led to its mass popularity. Reduced latency, however, wasn’t one of them. In fact, the protocol had latency inherently built into it. Historically, one could expect 10- to 45-second latency with HLS — far too high to be used for interactive streaming. WebRTC, although able to stream in real time, had its own difficulties.

This all changed when Roger Pantos announced Low-Latency HLS (LL-HLS) during WWDC 2019, and again when he announced that LL-HLS and HLS were no longer two separate streaming protocols. Instead, the LL-HLS extension had been baked into the HLS specification as a feature set. These back-to-back announcements made it clear that Apple had thrown its hat in the ring, and the industry rushed to support the new low-latency feature.

The new-and-improved HLS has a latency of 3 seconds or less. This is still considered very fast, and it can work for most low-latency use cases. But if you truly need the fastest option and if your use case requires real-time streaming, WebRTC still reigns supreme.

Low latency is critical — whether it’s so that a conversation can flow naturally or when a few seconds of delay could mean the difference between life and death. With sub-500-millisecond, real-time latency, WebRTC is the fastest protocol on the market. WebRTC was built with bidirectional, real-time communication in mind. Unlike HLS, which is delivered over TCP, WebRTC transports media over UDP. Because UDP is connectionless, packets flow without TCP’s connection setup and retransmission overhead, which makes WebRTC faster but also more susceptible to network fluctuations.
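To make that concrete, here’s a minimal sketch of how a browser might open a WebRTC connection to publish a camera feed. The signaling endpoint (the `signalingUrl` POST exchange) is a placeholder you’d replace with your own, since WebRTC standardizes the media path but not the signaling; only the RTCPeerConnection calls are part of the standard browser API.

```typescript
// Minimal WebRTC publisher sketch (browser). The signaling transport is a
// placeholder -- WebRTC leaves signaling up to the application.
async function publishCamera(signalingUrl: string): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // public STUN server
  });

  // Capture camera/microphone and attach the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create an SDP offer and send it to the remote end over your own
  // signaling channel (WebSocket, HTTP POST, etc.).
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const response = await fetch(signalingUrl, {
    method: "POST",
    headers: { "Content-Type": "application/sdp" },
    body: offer.sdp ?? "",
  });

  // Apply the remote answer; media then flows over UDP (encrypted as SRTP).
  const answerSdp = await response.text();
  await pc.setRemoteDescription({ type: "answer", sdp: answerSdp });
  return pc;
}
```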

 

Flexibility and Compatibility

When it comes to which protocol has better flexibility, LL-HLS certainly muddies the waters. As the most widely used protocol for media streaming, HLS is supported by a wide range of devices and browsers, and it’s fully compatible with CDNs. Another perk of HLS is its support for closed captions and subtitles, metadata, digital rights management (DRM), and ad insertion. Unfortunately, LL-HLS isn’t quite there yet. These capabilities will take time for the industry to get working within the low-latency ecosystem. But no one can ignore Apple’s ecosystem for long, so expect that development work to follow, and the availability of capabilities like these won’t be too far behind.

Browser support for LL-HLS is par for the course and shouldn’t change from what you already do with HLS. Player compatibility is another issue, though. Outside of Apple’s native player, compatibility isn’t yet expected from open-source players such as Android’s ExoPlayer. Companies such as THEO and JW Player, however, have been developing their own proprietary players to help with playback on all major devices, platforms, and browsers.
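To give a sense of what that player-side work looks like, here’s a rough sketch using the open-source hls.js player, which added a low-latency mode in its 1.x releases. The stream URL is hypothetical, and Safari simply falls back to its native HLS support.

```typescript
import Hls from "hls.js";

// Hypothetical LL-HLS stream URL -- replace with your own.
const src = "https://example.com/live/stream.m3u8";
const video = document.querySelector("video") as HTMLVideoElement;

if (Hls.isSupported()) {
  // hls.js handles playback in browsers without native HLS (Chrome, Firefox, Edge).
  const hls = new Hls({ lowLatencyMode: true });
  hls.loadSource(src);
  hls.attachMedia(video);
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari plays HLS natively, so just point the element at the playlist.
  video.src = src;
}
```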

One strength of WebRTC is that no additional plug-ins or software is required for it to function within the browser. And although WebRTC’s adoption had a slow start, today all major desktop browsers support it. Not without bugs, though. Safari is the buggiest of the group, and it doesn’t support the VP9 codec. And even though VP9 support is making its way to mobile, not every mobile browser (Safari, for example) fully supports WebRTC. In fact, the experience is so poor that it’s recommended to redirect users to a native app installation instead. It’s unlikely that Apple will invest in anything of consequence related to WebRTC because the company is focused on LL-HLS.

I wish there were a straight answer when it comes to which protocol is the most flexible and compatible. The truth is, neither one is the winner right now. LL-HLS still needs a lot of work, and WebRTC feels as though it’s a perpetual work in progress. Until LL-HLS can catch up to HLS, you’ll need to look very closely at what you’re trying to build and what level of flexibility you’ll need in order to make the right choice.

 

Quality

You can’t promise high-quality video without having adaptive bitrate (ABR) capabilities. It’s the secret sauce to professional streaming. It provides the best video quality and viewer experience possible — no matter the connection, software, or device.

In terms of quality, LL-HLS takes the cake. HLS is the standard in ABR video, and that includes LL-HLS as well. Multiple renditions allow for playback on different bandwidths. The media server then sends the highest-quality stream possible for each viewer’s device and connection speed. A delivery method that automatically adjusts the quality of the video stream between multiple bitrates and/or resolutions is far better than one that only operates at a single bitrate.  
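Here’s a simplified sketch of the decision an ABR player makes: pick the highest rendition whose bitrate fits under the measured bandwidth. The ladder and the 20 percent headroom below are illustrative; real players layer buffer-based heuristics on top of this.

```typescript
// A hypothetical ABR ladder, similar to what an HLS multivariant playlist advertises.
interface Rendition {
  name: string;
  bitrateBps: number; // peak bitrate the rendition requires
  resolution: string;
}

const ladder: Rendition[] = [
  { name: "1080p", bitrateBps: 6_000_000, resolution: "1920x1080" },
  { name: "720p", bitrateBps: 3_000_000, resolution: "1280x720" },
  { name: "480p", bitrateBps: 1_500_000, resolution: "854x480" },
  { name: "360p", bitrateBps: 800_000, resolution: "640x360" },
];

// Pick the best rendition that fits the measured bandwidth, leaving ~20% headroom.
function selectRendition(measuredBandwidthBps: number): Rendition {
  const usable = measuredBandwidthBps * 0.8;
  const fit = ladder.find((r) => r.bitrateBps <= usable);
  return fit ?? ladder[ladder.length - 1]; // fall back to the lowest rung
}

// Example: a 4 Mbps connection gets the 720p rendition.
console.log(selectRendition(4_000_000).name); // "720p"
```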

WebRTC, on the other hand, wasn’t built with quality in mind. WebRTC’s No. 1 priority has always been real-time latency for peer-to-peer browser connections. Traditionally, quality has taken the back seat. You may find it surprising that WebRTC supports ABR — but with a caveat. As the saying goes, “A chain is only as strong as its weakest link,” and that holds true for WebRTC’s quality. WebRTC’s built-in ABR works on the subscriber side only, which creates an issue when you have multiple subscribers: if one subscriber is on a poor network, the publisher is forced to switch to a lower-quality stream, and everyone ends up watching in low quality.
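One common way to soften that weakest-link problem is simulcast: the publisher sends several encodings of the same track so a media server can forward a different quality to each subscriber. The sketch below uses the standard addTransceiver API; the rid names and bitrates are just examples, and you’d still need a media server that understands simulcast.

```typescript
// Simulcast sketch: publish three quality layers of the same camera track so a
// media server (SFU) can pick a layer per subscriber instead of downgrading everyone.
async function publishSimulcast(pc: RTCPeerConnection): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  pc.addTransceiver(track, {
    direction: "sendonly",
    sendEncodings: [
      { rid: "high", maxBitrate: 2_500_000 },
      { rid: "mid", maxBitrate: 800_000, scaleResolutionDownBy: 2 },
      { rid: "low", maxBitrate: 250_000, scaleResolutionDownBy: 4 },
    ],
  });
  // The offer/answer exchange with the media server proceeds as usual from here.
}
```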

 

Security

Long before the word Zoombombing was trending around the world, privacy was (and continues to be) a major concern in the industry. Content protection throughout the workflow can take many forms, including encryption of incoming and outgoing streams, token authentication, and digital rights management (DRM) for premium content delivery.

Because LL-HLS has been merged into the HLS spec, it supports everything that HLS does. But as mentioned above, it’ll take time for providers to catch up. Capabilities such as DRM, token authentication, and key rotation are all possible, but they won’t be available until providers get them working within their ecosystems. LL-HLS isn’t completely unprotected in the meantime, however: the spec recommends delivery over TLS 1.3, which encrypts data in transit and keeps it from being intercepted along the way.
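As an illustration, token authentication can be layered on at the player by attaching a signed token to every playlist and segment request. This sketch uses hls.js’s xhrSetup hook; the token value and the Authorization header are placeholders for whatever your origin or CDN actually expects.

```typescript
import Hls from "hls.js";

// Hypothetical short-lived token issued by your backend (e.g., a signed JWT).
const viewerToken = "…";

const hls = new Hls({
  lowLatencyMode: true,
  // Called for every playlist and segment request hls.js makes.
  xhrSetup: (xhr: XMLHttpRequest, url: string) => {
    // Open the request ourselves so the header can be attached before it is sent.
    xhr.open("GET", url, true);
    xhr.setRequestHeader("Authorization", `Bearer ${viewerToken}`);
  },
});
hls.loadSource("https://example.com/live/stream.m3u8");
hls.attachMedia(document.querySelector("video") as HTMLVideoElement);
```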

WebRTC also secures content in transit: traffic between the two clients is encrypted (via DTLS and SRTP), and that encryption is built into the protocol rather than optional. Although WebRTC lacks compatibility with DRM providers, if you’re looking for basic security measures, the encryption WebRTC offers will be enough. Authentication, authorization, and identity management, on the other hand, are left out of WebRTC’s scope. That doesn’t mean you can’t add them — you can and you should. It just means they’re not offered out of the box.
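What adding them might look like: authenticate the viewer with your own backend before signaling starts, then carry a short-lived token on the signaling connection. The endpoints and token format below are entirely hypothetical.

```typescript
// Hypothetical flow: get a short-lived token, then open the signaling channel with it.
// WebRTC itself encrypts the media; who is allowed to connect is up to your app.
async function connectAuthenticated(roomId: string): Promise<WebSocket> {
  // 1. Authenticate with your own backend and receive a session token.
  const res = await fetch("https://example.com/api/session", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ room: roomId }),
    credentials: "include", // reuse the user's existing login session
  });
  const { token } = (await res.json()) as { token: string };

  // 2. Present the token when opening the signaling WebSocket; the server
  //    rejects or closes the connection if the token is invalid.
  const ws = new WebSocket(`wss://example.com/signaling?room=${roomId}&token=${token}`);
  return new Promise((resolve, reject) => {
    ws.onopen = () => resolve(ws);
    ws.onerror = () => reject(new Error("signaling connection failed"));
  });
}
```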

 

Scalability

If your solution requires peer-to-peer, real-time streaming, WebRTC is most likely the protocol you’ll want to use. But if you need to scale your audience beyond about 50 viewers, you’ll need to think twice about how you’re going to do it. WebRTC wasn’t designed with scalability in mind. This can be solved, though: two different workflows are available, and the right one for you will depend on your use case and desired audience size.

 

WebRTC End-to-End

When milliseconds matter, an end-to-end WebRTC workflow can be used to ensure the lowest latency possible. If you don’t add a media server in the mix, however, your audience size will be limited. Out-of-the-box WebRTC works by each browser connecting directly to all the other browsers in the group — burning up bandwidth in the process. But by adding a media server, such as Wowza Streaming Engine, you can reduce the amount of bandwidth required without dramatically increasing latency. This allows for real-time streaming for an audience size up to about 300 viewers.
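The bandwidth math behind those limits is worth spelling out: in a pure browser-to-browser mesh, the publisher uploads one copy of the stream per viewer, while a media server lets the publisher upload once and handles the fan-out. The numbers below are only illustrative.

```typescript
// Rough upload-bandwidth arithmetic for one publisher, illustrative numbers only.
const streamBitrateMbps = 2.5; // a typical 720p WebRTC stream

function publisherUploadMesh(viewers: number): number {
  // Peer-to-peer mesh: the publisher sends a separate copy to every viewer.
  return viewers * streamBitrateMbps;
}

function publisherUploadWithServer(): number {
  // With a media server, the publisher sends one copy; the server handles fan-out.
  return streamBitrateMbps;
}

console.log(publisherUploadMesh(50)); // 125 Mbps -- far beyond a typical uplink
console.log(publisherUploadWithServer()); // 2.5 Mbps, regardless of audience size
```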

 

Transmuxing WebRTC to HLS or DASH

If you need to scale your audience beyond about 300 viewers, transmuxing WebRTC to HLS or DASH is required. Scaling your live stream to reach a massive audience can be challenging, but it doesn’t have to be. Using Wowza Streaming Engine or Wowza Streaming Cloud, you can scale automatically to accommodate global audiences of any size — but it’ll come at the expense of added latency.

 

Scaling LL-HLS

Before integration with CDNs, scaling LL-HLS was merely aspirational and extremely limited. Vendors have spent the past year developing for the ever-evolving spec, and they’ve recently begun announcing their support. Technology partners such as Fastly allow for global distribution and reduced latency because content is cached closer to the viewer. Although the delay is longer than with WebRTC, CDNs make it possible to stream LL-HLS to thousands of viewers in less than 3 seconds.

 

Cost

WebRTC is an open-source protocol, and it’s free. Great! But remember when we talked about scaling WebRTC? It’s totally doable, and it works well for many use cases — but it can be expensive. WebRTC is very cost-effective if you don’t need to stream to hundreds of people. If you need to reach a mass audience, however, you’ll need to spin up additional servers to take the load off the browsers, and that gets pricey.

If you do need to scale, LL-HLS is the cheaper option. HLS is probably the most cost-effective way of delivering video. It uses affordable HTTP infrastructures and existing TCP-based network technology, so scaling with a CDN can easily be justified from a cost perspective.

 

Conclusion

LL-HLS and WebRTC have come a long way in the last couple of years. Even though they’re both cutting-edge technologies and both driving the industry forward, each has its pros and cons. Neither option is perfect for everything, but one of the two can be the perfect solution for you. Ultimately, the best protocol will depend on the specifics of your project, the devices you plan to distribute to, and the size of your audience. Make sure to keep these things in mind when building your low-latency solution.

 


About Anne Balistreri

As product marketing manager at Wowza, Anne works at the intersection of product, marketing, and sales — putting the customer at the center of it all. She loves to understand the “why” behind our customers' challenges to create solutions and…