Low-Latency Claims and How to Decipher Them

June 19, 2019

From 'glass-to-glass latency' to 'near real time,' streaming professionals like to throw around a lot of terms describing video delay. How do the different latency claims stack up, and what should you be looking for? It all depends on what you're trying to achieve.
What Is Latency?

Latency describes the delay between the moment a video is captured and the moment it's displayed on a viewer's device. Passing chunks of data from one place to another takes time, so latency builds up at every step of the streaming workflow.
What Is Glass-to-Glass Latency?

Encoding. Transcoding. Delivery. Playback. Each stage can increase latency. For that reason, you'll want to look for claims of end-to-end, or glass-to-glass, latency, which account for the total delay between source and viewer. Other terms, like 'capture latency' or 'player latency,' cover only the lag introduced at a specific step of the streaming workflow.
How Is Glass-to-Glass Latency Measured?

Measuring glass-to-glass latency is straightforward. Open a stopwatch program on your computer screen and film it with a camera, sending the stream through your entire workflow (encoder, transcoder if you're using one, CDN, and player). Next, open your player on the same screen and take a screenshot capturing both windows: the stopwatch being filmed and the live stream of the stopwatch being played back. Subtract the time shown in the video from the time on the stopwatch itself to get a better-than-ballpark measurement. For a more accurate reading, repeat the experiment several times and take an average.
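The arithmetic is simple enough to script. A minimal sketch in Python, using hypothetical stopwatch readings (the sample values below are made up for illustration):

```python
# Hypothetical readings from repeated screenshots, in seconds.
# Each pair is (time on the real stopwatch, time shown in the player).
samples = [
    (30.00, 24.85),
    (45.10, 39.80),
    (60.25, 55.10),
]

# Glass-to-glass latency is the gap between the two clocks in each screenshot.
latencies = [actual - displayed for actual, displayed in samples]
average_latency = sum(latencies) / len(latencies)

print(f"Average glass-to-glass latency: {average_latency:.2f} s")
```

Averaging several screenshots smooths out the error introduced by screen refresh and camera shutter timing.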
What Is Considered Low Latency?

You'll want to differentiate between the multiple categories of latency and determine which is best suited for your streaming scenario. These categories include:
- Near real time for video conferencing and remote devices
- Low latency for interactive content
- Reduced latency for live premium content
- Typical HTTP latencies for linear programming and one-way streams
Who Needs Low Latency?

When streaming sporting events, we'd recommend a glass-to-glass latency of around five seconds, which falls in the reduced latency range. Any longer than that and the cable broadcasters you're competing against will be one step (or touchdown, goal, home run, you name it) ahead of you.

For developers integrating live video into interactive solutions such as trivia apps, live-commerce sites, and e-sports platforms, sub-five-second delivery is crucial. This puts you in the low latency range.

The fastest delivery speeds are reserved for two-way conferencing, military-grade bodycams, remote-controlled drones, and medical cameras. Any latency north of one second would make these scenarios awkward at best, disastrous at worst. Here, near real time reigns supreme.

If your streaming application falls outside the use cases described above, it's probably wise not to prioritize latency at all: configuring streams for speedy delivery adds complexity that isn't always necessary.
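As a rough sketch, the thresholds above can be expressed as a small classification helper. The exact cutoffs are illustrative, drawn from the use cases described here rather than any industry-standard definition:

```python
def latency_category(seconds: float) -> str:
    """Map a glass-to-glass latency to a category.

    Illustrative thresholds: under 1 s for near real time (conferencing,
    remote devices), under 5 s for interactive content, around 5 s for
    premium live content, and anything longer is typical HTTP delivery.
    """
    if seconds < 1:
        return "near real time"
    if seconds < 5:
        return "low latency"
    if seconds <= 7:
        return "reduced latency"
    return "typical HTTP latency"

print(latency_category(0.5))   # near real time
print(latency_category(3))     # low latency
print(latency_category(6))     # reduced latency
print(latency_category(30))    # typical HTTP latency
```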
Latency Across the Workflow

As we said, latency can be introduced at each stage of the live-streaming workflow. The primary bottlenecks include:
- Segment Length
- Player Buffering
- Encoding Efficiency
- Packaging and Protocols
- Delivery and Network Conditions
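Segment length is often the biggest of these contributors: segmented HTTP players typically buffer a few segments before starting playback, so latency scales with segment duration. A rough back-of-the-envelope sketch (the three-segment buffer is a common player default, not a universal rule):

```python
def estimated_player_latency(segment_seconds: float, segments_buffered: int = 3) -> float:
    """Rough lower bound on player-side latency for segmented HTTP streaming.

    Many HLS/DASH players buffer roughly three segments before playback
    begins, so buffering latency alone is segment duration times the
    number of segments held.
    """
    return segment_seconds * segments_buffered

# With 6-second segments (a common HLS default), buffering alone adds ~18 s.
print(estimated_player_latency(6.0))   # 18.0
# Shrinking segments to 2 s cuts that to ~6 s.
print(estimated_player_latency(2.0))   # 6.0
```

This is why shorter segments (and, further along, the chunked transfer used by Low-Latency CMAF and Low-Latency HLS) are the usual first lever for reducing delay.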
| Technology | Benefit | Drawback |
| --- | --- | --- |
| WebRTC | Real-time interactivity without a plugin. | Not well suited for scale or quality. |
| SRT | Smooth playback with minimal lag. | Playback support still isn't widespread. |
| Low-Latency CMAF for DASH | Streamlined workflows and decreased latency. | Still gaining momentum across the industry. |
| Apple Low-Latency HLS | Supported by Apple. | Spec was just announced and vendors are working to support it. |