What Is Low Latency, and Who Needs It? (Video Series: Part 1)
May 14, 2018
Here's a dirty secret: When it comes to streaming media, it’s rare that “live” actually means live. Say you're at home watching a live-streamed concert, and you see an overly excited audience member jump onstage. The audience at the concert venue saw that happen at least 30 seconds before you did.
Why? Because it takes time to pass chunks of data from one place to another. That delay between when a camera captures video and when the video is displayed on a viewer’s screen is called latency.
What Is Low Latency?
So, if several seconds of latency is normal, what is low latency?
It's a subjective term. The popular Apple HLS streaming protocol defaults to 30 to 45 seconds of latency (more on this below), so when people talk about low latency, they often mean whittling that down to five to seven seconds, similar to what you’d expect from a traditional broadcast.
However, some applications need even faster delivery, so a new category of ultra low latency has emerged, coming in at under two seconds. This tier is usually reserved for interactive use cases such as user-generated video, two-way chat and gaming.
To explore concepts, trends and technologies around low-latency streaming, we created a four-part video series, hosted by Wowza’s Barry Owen, vice president of engineering, and Chris Michaels, director of communication. Watch part one of this series on low-latency streaming basics below:
When Is Low Latency Important?
No one wants notably high latency, of course—but in what contexts does low latency truly matter?
For most streaming scenarios, the typical 30- to 45-second delay isn't problematic. Returning to our concert example, it's irrelevant that the lead guitarist broke a string 36 seconds ago and you're just now finding out. But for some streaming use cases, latency is a business-critical consideration.
Let's take a look at a few streaming use cases where low latency is undeniably important:
Second-screen experiences: If you're watching a live event on a second-screen app (such as a sports league or official network app), you’re likely running several seconds behind live TV. While there's inherent latency for the television broadcast, your second-screen app needs to at least match that same level of latency to deliver a consistent viewing experience.
For example, if you’re watching your alma mater play in a rivalry game, you don’t want the moment spoiled by comments, notifications or even the neighbors next door celebrating the game-winning score before you see it. Spoilers like that make for unhappy fans and dissatisfied (often paying) customers.
Video chat: This is where ultra low latency "real-time" streaming comes into play. We've all seen televised interviews where the reporter is speaking to someone at a remote location, and the latency in their exchange results in long pauses and the two parties talking over each other. That's because the latency goes both ways—maybe it takes a full second for the reporter's question to make it to the interviewee, but then it takes another second for the interviewee's reply to get back to the reporter. That conversation can quickly turn painful.
When true immediacy matters, about 150 milliseconds (one-seventh of a second) of latency in each direction is the upper limit. That's short enough to allow for smooth conversation without awkward pauses.
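The arithmetic behind that ceiling is simple, since latency compounds in both directions. A one-line sketch (the helper name is our own illustration, not from any standard):

```javascript
// Latency compounds in both directions of a conversation: the pause a
// speaker hears before getting a reply is roughly twice the one-way delay.
function conversationalGapMs(oneWayLatencyMs) {
  return 2 * oneWayLatencyMs;
}

console.log(conversationalGapMs(150));  // 300 -- about the ceiling for natural back-and-forth
console.log(conversationalGapMs(1000)); // 2000 -- the painful satellite-interview pause
```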
Betting and bidding: Activities such as auctions and sports-track betting are exciting because of their fast pace. And that speed calls for real-time streaming with two-way communication.
For instance, horse-racing tracks have traditionally piped in satellite feeds from other tracks around the world and allowed their patrons to bet on them online. Ultra low latency streaming eliminates problematic delays, ensuring that everyone has the same opportunity to place their bets in a time-synchronized experience. Similarly, online auctions and trading platforms are big business, and any delay can mean bids or trades aren't recorded properly. Fractions of a second can mean billions of dollars.
Video game streaming and esports: Anyone who has yelled "this game cheats!" (or more colorful invectives) at a screen knows that timing is critical for gamers. Sub-100-millisecond latency is a must. No one wants to play a game via a streaming service and discover that they're firing at enemies that are no longer there. On platforms that offer direct viewer-to-broadcaster interaction, it’s also important that viewer suggestions and comments reach the streamer in time for them to beat the level.
How Does Low-Latency Streaming Work?
Now that you know what low latency is and when it's important, you're probably wondering: How can I deliver low-latency streams?
As with most things in life, low-latency streaming involves tradeoffs. You'll have to balance three factors to find the mix that's right for you:
- Encoding protocol and device/player compatibility.
- Audience size and geographic distribution.
- Video resolution and complexity.
The streaming protocol you choose makes a big difference, so let's dig into that:
Apple HLS is among the most widely used streaming protocols due to its reliability—but it's not suited to true low-latency streaming. As an HTTP-based protocol, HLS streams chunks of data, and video players need a certain number of chunks (typically three) before they start playing. If you’re using the default HLS chunk size (10 seconds), that means you’re already 30 to 45 seconds behind. Customization can cut that significantly, but your viewers will experience more buffering the smaller you make those chunks.
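As a rough back-of-the-envelope model (our own illustration, not an official HLS formula), startup latency is approximately the player's segment buffer plus the time to finish packaging the current segment:

```javascript
// Rough HLS startup-latency estimate (illustrative model only):
// players commonly buffer ~3 segments before starting playback, and a
// segment must be fully encoded and packaged before it can be listed.
function estimateHlsLatencySeconds(segmentDurationSec, bufferedSegments = 3) {
  const playerBuffer = segmentDurationSec * bufferedSegments; // what the player holds back
  const packagingDelay = segmentDurationSec;                  // last segment must complete first
  return playerBuffer + packagingDelay;
}

console.log(estimateHlsLatencySeconds(10)); // 40 -- near the 30-45 s range described above
console.log(estimateHlsLatencySeconds(2));  // 8  -- shorter segments, lower latency
```

Shrinking segments keeps cutting this number, but it also shrinks the buffer that protects viewers from rebuffering, which is exactly the tradeoff described above.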
RTMP has long been the standard for low-latency streaming, but more people are moving away from it and implementing alternatives such as WebRTC, SRT, CMAF, QUIC, WebSockets and others. Here’s a look at how various technologies compare:
- RTMP delivers quality low-latency streaming, but requires a Flash-based player or custom players, meaning it is not supported on iOS devices (and soon won't work in many web browsers).
- WebRTC is growing in popularity as an HTML5-based solution that’s well-suited to browser-based applications. It enables low-latency delivery in a Flash-free environment; however, it's difficult to scale beyond roughly 1,000 concurrent viewers.
- SRT (Secure Reliable Transport) is becoming popular for use cases involving unstable or unreliable networks. As a UDP-based protocol, SRT is great at delivering high-quality video over long distances, but player support is limited without a lot of customization.
- Two new protocols, CMAF (Common Media Application Format) and QUIC (Quick UDP Internet Connections), are emerging as alternative options, supported by Akamai and Google, respectively. While still in their infancy, both show promise of delivering super-fast video—but we have yet to see how well they scale.
- WebSockets, supported by Wowza Streaming Engine™ software, are another alternative. A WebSocket creates a persistent, direct connection between the server and client, letting the server continually push stream data with little back-and-forth chatter between machines, which cuts bandwidth overhead.
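To make the push model concrete, here is a minimal browser-side sketch with a small testable helper. Everything here is an assumption for illustration, not Wowza's actual API: the wss:// endpoint URL, the fMP4/H.264 codec string and the function names are all placeholders.

```javascript
// Pure helper: return the chunk to append now if the SourceBuffer is idle,
// or queue it and return null if the buffer is busy. (Name is illustrative.)
function nextChunkToAppend(queue, chunk, bufferUpdating) {
  if (bufferUpdating) {
    queue.push(chunk);
    return null;
  }
  return chunk;
}

// Browser-side wiring (illustrative sketch; endpoint and codec string are
// placeholder assumptions). The server pushes each fragment the moment it
// is packaged -- no playlist polling, so delivery delay stays small.
function playWebSocketStream(videoElement, url) {
  const socket = new WebSocket(url); // e.g. a hypothetical wss://example.com/live/stream
  socket.binaryType = "arraybuffer";

  const mediaSource = new MediaSource();
  videoElement.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener("sourceopen", () => {
    const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    const pending = [];

    // Drain queued chunks whenever the buffer finishes an append.
    buffer.addEventListener("updateend", () => {
      if (pending.length > 0 && !buffer.updating) {
        buffer.appendBuffer(pending.shift());
      }
    });

    socket.onmessage = (event) => {
      const chunk = nextChunkToAppend(pending, event.data, buffer.updating);
      if (chunk !== null) buffer.appendBuffer(chunk);
    };
  });
}
```

The key design point is the absence of request/response round-trips: unlike HLS, the client never asks for the next chunk, it simply receives whatever the server pushes.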
Another major consideration is your streaming server or cloud service. You'll want streaming technology that provides you with fine-grained control over latency and video quality, and offers the greatest flexibility.
The Wowza Ultra Low Latency Service (a premium offering of the Wowza Streaming Cloud™ service) was designed for multiple types of streaming: high scale as well as low latency. Why is this mix important? It gives you the best of both worlds, without having to sacrifice quality.
Stay tuned for the next installments in our four-part video series to learn about the options for protocols and streaming workflows, current trends and how Wowza is providing more options to solve low-latency streaming.