Ultra Low Latency: What It Means for Video Streaming

February 17, 2022
Latency is a widely used term these days. You'll see latency and low latency pop up in relation to computers, including everything from data center networking to retail graphics cards. The quick and dirty definition of latency is the delay between the moment data is sent and the moment it actually arrives. In some instances, you may have seen or heard latency's less technical synonym, lag.
Latency is just as prevalent in the video streaming world as it is in general computing. Frankly, it’s all over the place. Low latency mode is a setting you can turn on as an individual Twitch streamer, which is either a good or bad idea based on the capabilities of your and your viewers’ networks. Low latency and ultra low latency are also used to describe the most cutting-edge streaming software offerings. We’re focused on that last one in this article — ultra low latency video streaming.
Table of Contents
- How Low Is Ultra Low Latency?
- How to Achieve Ultra Low Latency Video Streaming
- Shortcomings of Each Approach
- The Solution: Real-Time Streaming at Scale
How Low Is Ultra Low Latency?
Ultra low latency is a pretty nebulous phrase. Plenty of companies talk about ultra low latency, but they use different definitions. Many hold that low latency means delays of five seconds or less. In a similar fashion, ultra low latency is frequently defined as less than one second.
That is not to say the definition is set in stone or even holds true for a majority of players in the live streaming game. Some say low latency streaming can have delays as long as seven seconds and ultra low latency is anything under three seconds. On the other hand, you'll also hear voices saying that if you're measuring latency in whole seconds, it can hardly be considered low latency at all; for them, milliseconds are the new unit of measurement. These standards have changed over time as technology allows us to achieve faster speeds. For our purposes, refer to the graphic below for Wowza's definitions.
At Wowza, instead of ultra low latency, we have started to use the term “near real-time” and consider that to be anything under one second.
How to Achieve Ultra Low Latency Video Streaming
One common method for decreasing latency is changing the default segment size of HTTP-based streaming protocols. Both HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) can be tuned to reduce latency. We have an entire blog with detailed instructions on how to tune HLS in both Wowza Video and Wowza Streaming Engine.
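For context, an HLS media playlist advertises its segment duration with the `#EXT-X-TARGETDURATION` tag, so tuning segment size shows up directly in the playlist. Here's an illustrative fragment of a playlist tuned for two-second segments (hypothetical filenames, not output from Wowza Video or Wowza Streaming Engine):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:2.000,
segment0.ts
#EXTINF:2.000,
segment1.ts
#EXTINF:2.000,
segment2.ts
```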
This method requires shortening the segments. Apple recommends six-second segments for standard HLS, but it's possible to go lower. Segment length directly affects latency: depending on the length you choose, you can end up with latencies anywhere from 45 seconds down to 2-4 seconds. Two seconds isn't exactly an exciting number for those looking for an ultra low latency streaming solution, but it's about as good as it gets with standard HLS or DASH.
As you saw in that graphic above, protocols come with different inherent latencies. WebRTC is probably looking pretty enticing right now for those who are after lightning fast speeds.
It's true, WebRTC is excellent at connecting small groups of people at near real-time speeds. You've probably used an application or two that utilizes WebRTC. WhatsApp, Discord, Snapchat, Google Hangouts, Facebook Messenger, and many more popular conferencing and messaging apps connect people using WebRTC. Users experience sub-second latency, which makes sense considering WebRTC stands for Web Real-Time Communication.
RTMP, SRT, Low-Latency HLS, and low-latency CMAF for DASH are also available for low-latency streaming, but none compare to WebRTC in terms of delivery speed.
Shortcomings of Each Approach
Besides the obvious issue that two seconds hardly allows for real interactivity, tuning HLS and DASH comes with extra problems. If you shorten your segments below the recommended six seconds (and you'd have to shorten them to around 0.5-1 second to achieve the lowest possible latency), you run the risk of interrupted viewing. You'll also end up with lower-quality playback, which is unacceptable for most use cases. Streaming Learning Center's Jan Ozer goes into more detail on that here.
Alternatively, if you stick with the recommended six seconds, you’ll end up with a latency of around 18-20 seconds. According to the various definitions we went over already, no one should be calling that low latency, let alone ultra low latency.
There's a reason all those apps listed above are for one-to-one or few-to-few interactions: WebRTC doesn't scale well. For full interactivity, you can really only go up to about 10 participants. If your use case is one-to-many, where interactivity isn't necessary for everyone involved, you can likely include somewhere between 200 and 300 participants.
Additionally, WebRTC can fall short on the quality side. It’s browser-based, which means quality depends on the capabilities of each user’s browser. You’d also need to develop solutions for each browser, and keep up with them as they update.
RTMP is a legacy protocol that’s no longer supported on the playback side, so you’d have to convert RTMP to another protocol, with the potential for added latency. SRT has the same issue as RTMP but for the opposite reason. It’s a newer technology that isn’t widely supported, especially for playback. Again, you’d have a more complicated workflow, which can lead to higher latencies.
Finally, new standards like Low-Latency HLS and low-latency CMAF for DASH have recently been developed for low-latency streaming. They show promise, but both are new technologies that lack wide support.
The Solution: Real-Time Streaming at Scale
Late in 2021, we released a new feature for Wowza Video designed specifically to eliminate these shortcomings. The feature boasts ultra low latency speeds of under 500ms and can scale up to a million participants to boot. It does utilize WebRTC to maximize speeds, but leaves behind all the issues that normally come with WebRTC. It’s scalable, flexible, and lightning-fast. Real-Time Streaming at Scale is the best option for use cases that need ultra low latency. Contact us to learn more about how to get started.