Using WebRTC as an RTMP Alternative

July 16, 2020 by Traci Ruether

2020 marks the long-awaited death of Adobe Flash, and with that comes the phase-out of the Real-Time Messaging Protocol (RTMP). While RTMP is still a reliable mechanism for transporting video between encoders and servers, the same cannot be said for RTMP-based playback. Fewer devices and browsers support this protocol than ever before. And in Adobe’s own words, broadcasters are encouraged “to migrate any existing Flash content to… new open formats.”

In the January/February 2020 issue of Streaming Media magazine, Robert Reinhardt cautioned, “If you’re using Flash for low-latency real-time streaming, you’ve got about a year or less to try moving over to a WebRTC solution. And what does that mean exactly? Any code you’re using on your Flash-based media server (Adobe Media Server, Wowza Streaming Engine, and so on) needs to migrate to WebRTC instead of Real-Time Messaging Protocol (RTMP).”

So what are your alternatives for RTMP streaming? HTTP Live Streaming (HLS) and MPEG-DASH are both popular options. But Web Real-Time Communication (WebRTC) has attracted a lot of attention in recent years.

As an HTML5-based solution, WebRTC doesn’t require browser plug-ins for playback support and can leverage peering techniques for data transfer between connected sessions. What’s more, it offers the quickest method for transporting live video across the internet.

With all that said, though, WebRTC isn’t without its limitations. Sure, it’s a popular buzzword among developers — but is it the best solution for your use case?

 

RTMP vs. WebRTC

WebRTC delivers several advantages over RTMP. For one, the open-source framework is standardized by the IETF and W3C. All major browsers support WebRTC without requiring a plug-in, eliminating the interoperability challenges that come with proprietary streaming technologies.

Secondly, at sub-500-millisecond delivery speed, WebRTC is the lowest latency protocol out there. This makes it a go-to solution for creating interactive video experiences ranging from real-time auctions to live commerce.

[Figure: The Streaming Latency and Interactivity Continuum]

So, where exactly can RTMP be swapped out for WebRTC when it comes to live video streaming? Below, we look at a few ways to do so and detail some of the most common WebRTC streaming workflows.

 

Replacing RTMP With WebRTC for Egress

RTMP to WebRTC workflows help maintain low latency in today’s Flash-less world — without complicating traditional methods for video contribution. By converting RTMP streams to WebRTC, broadcasters benefit from flexible publishing using any standard encoder and simple browser-based playback. All that’s needed for the conversion process is a live streaming server like Wowza Streaming Engine.

That said, WebRTC isn’t well-suited for broadcasting at scale. In such cases, HLS is often a better — albeit higher latency — option.

 

Replacing RTMP With WebRTC for Ingest

RTMP (or more specifically RTMPS) remains the standard for first-mile video contribution on many social platforms. But even that could change with time. Open-source protocols like SRT and WebRTC now offer advantages of their own: SRT excels at resilient contribution over unpredictable networks, while WebRTC enables plug-in-free publishing straight from the browser.

Specifically, when used as the video source, WebRTC makes browser-based publishing easy and enables end-to-end real-time streaming. A media server or streaming service can also ingest WebRTC streams and convert them into a more scalable alternative like HLS.
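As a sketch of what browser-based publishing involves, the snippet below captures the camera and microphone, attaches the tracks to a peer connection, and performs the standard SDP offer/answer exchange. WebRTC does not standardize signaling, so `sendOfferToServer` is a hypothetical placeholder for however your media server or service exchanges session descriptions (often over a WebSocket).

```javascript
// Minimal sketch of browser-based WebRTC publishing.
// sendOfferToServer is a hypothetical callback that handles signaling
// with your media server (signaling is not standardized by WebRTC).
async function publishFromBrowser(sendOfferToServer) {
  // Capture camera and microphone via the standard getUserMedia API.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const pc = new RTCPeerConnection();
  // Attach each captured track so it is described in the offer.
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  // Create and apply the SDP offer describing the outgoing stream.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hand the offer to the server and apply its answer.
  const answer = await sendOfferToServer(pc.localDescription);
  await pc.setRemoteDescription(answer);
  return pc;
}
```

The capture and peer-connection calls above are the standardized browser APIs; only the signaling step varies from one server or service to the next.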

 

Considerations When Implementing WebRTC

If you’re considering using WebRTC as an RTMP replacement, take the following questions into account:

 

1. Do you require two-way video or real-time interactivity?

Interactive live streaming solutions and WebRTC go together like peanut butter and jelly. As long as you are using WebRTC for both publishing and playback, latency should stay under 500 milliseconds in good network conditions, facilitating real-time interactions. If you’d prefer to delay playback or synchronize it across multiple devices, you may want to capture with WebRTC and use HLS for playback, leveraging metadata and timecodes to control which point in the stream viewers see.
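For the HLS half of that hybrid workflow, browser playback is commonly handled either natively (Safari) or through a library such as hls.js. The sketch below assumes hls.js has been loaded as a global `Hls` via a script tag, and that the manifest URL is supplied by your streaming server; both are placeholders, not part of the original workflow description.

```javascript
// Minimal sketch of HLS playback in the browser.
// Assumes hls.js is loaded via a <script> tag (global Hls) and that
// playbackUrl points at the stream's HLS manifest (placeholder).
function playHls(videoElement, playbackUrl) {
  if (videoElement.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari and iOS can play HLS natively via the video element.
    videoElement.src = playbackUrl;
  } else if (typeof Hls !== 'undefined' && Hls.isSupported()) {
    // Other browsers need Media Source Extensions, provided by hls.js.
    const hls = new Hls();
    hls.loadSource(playbackUrl);
    hls.attachMedia(videoElement);
  } else {
    throw new Error('HLS playback is not supported in this browser');
  }
}
```

Because HLS segments the stream, playback trails the live edge by several seconds, which is exactly what makes delayed or synchronized viewing practical here.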

 

2. Do you expect to go viral?

All developers hope their streaming applications will become a huge success, with thousands or even millions of viewers watching. However, with that many users comes a big scalability question. Currently, WebRTC is very limited in its ability to scale past a few thousand concurrent viewers without a vast (and expensive) network of relay servers. Because WebRTC utilizes peering networks, there still has to be a nearby node to help distribute the stream to other local viewers, and peering across a global network can be incredibly difficult.

For large-scale broadcasts, a live streaming server or cloud-based service should be used to transcode the WebRTC stream into a protocol like HLS for distribution to thousands. That way, content distributors can combine simple, browser-based publishing with large-scale broadcasting — albeit at a slightly higher latency.

 

3. Do you need broadcast-quality streams?

A common misconception is that WebRTC lacks quality due to bitrate limitations. While browser-based contribution is inherently limited by the camera and network connection, high-bitrate encoding is still possible. That said, WebRTC omits B-frames from the GOP structure to enable real-time delivery, which can reduce compression efficiency and, in turn, quality at a given bitrate.

For broadcast-quality streaming, you’ll need to send the highest-quality source to your transcoder and create an adaptive bitrate ladder for seamless playback on a variety of devices.

 

About Traci Ruether

As a Colorado-based B2B tech writer, Traci Ruether serves as Wowza's content marketing manager. Her background is in streaming and content delivery. In addition to writing, Traci enjoys cooking, reading, gardening, and spending quality time with her fur babies.