Wowza Community

Low Latency WebRTC to mpegTS


We’re using Streaming Engine to build a system that needs to take video and audio from the browser using WebRTC and deliver it onward to a third-party receiver as MPEG-TS. We need it to be ultra low latency - think real-time communications.

Functionally, we’re looking good, but the latency is too high. To keep the MPEG-TS UDP stream consistent, we’ve found that we need to push everything through a Streaming Engine transcode, and that adds about 600ms (even if we just use passthrough). So we get about 1200-1300ms end-to-end latency - roughly double what we need.

I’ve just about come to the end of options that I have been able to find to improve the latency. So, I have two questions to ask here:

  • Is that extra 600ms from the transcoder about what I should be expecting?
  • Is there a trick that I’ve missed to reduce the latency?

For reference, we’re running in a dev environment on an AWS Ubuntu t3.large. I don’t believe we’re anywhere near worrisome CPU utilisation.




If you manually edit the XML file of your transcoding profile, look for Root/Transcode/PostProcess/SortBuffer and set Enable to false. There’s a risk with that, of course, but it’s worth a try. If disabling the buffer entirely gives you problems, you can alternatively try lowering the buffer size.

<?xml version="1.0" encoding="UTF-8" ?>
<!-- Example template for transrate, producing new streams at different bitrates. Trimmed here to the part that matters: the SortBuffer block (all values in milliseconds). -->
<Root version="1">
        <Description>Default transrate.xml file</Description>
        <!-- ... inside the PostProcess section: -->
        <SortBuffer>
                <Enable>true</Enable>
                <BufferSize>750</BufferSize>
                <FlushInterval>75</FlushInterval>
        </SortBuffer>
</Root>

Also, MPEG-TS over UDP is a relatively risky way of transporting the stream over the Internet. If possible, consider SRT to avoid packet drop.
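To illustrate the difference, here is a sketch using ffmpeg (built with libsrt) as a stand-in sender; the endpoint address and port are hypothetical, and the same MPEG-TS payload is carried either way - only the transport URL changes:

```
# MPEG-TS over plain UDP: no retransmission, lost packets are simply gone
ffmpeg -re -i input.mp4 -c copy -f mpegts udp://203.0.113.10:10000

# Same MPEG-TS payload over SRT in caller mode; the latency window
# (microseconds in ffmpeg, so 120000 = 120ms) buys time for retransmission
ffmpeg -re -i input.mp4 -c copy -f mpegts "srt://203.0.113.10:10000?mode=caller&latency=120000"
```

The SRT latency window adds a fixed, predictable delay in exchange for recovering dropped packets, so you would size it to your link’s round-trip time rather than leave it at a default.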

Karel is correct - the sort buffer will solve your issue. There are two settings: the buffer size, which holds frames to enable better encoding, and the flush interval, which allows frames to catch up if they are delayed. If you reduce them all the way, the transcoder keeps only about five frames in the buffer at a time.
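As a sketch of the low-latency direction (the values here are illustrative, not recommended defaults), you can keep the buffer enabled but shrink it rather than disabling it outright:

```
<PostProcess>
        <!-- Keep the sort buffer enabled so dependent targets still start,
             but shrink it to cut latency (values in milliseconds) -->
        <SortBuffer>
                <Enable>true</Enable>
                <BufferSize>100</BufferSize>
                <FlushInterval>25</FlushInterval>
        </SortBuffer>
</PostProcess>
```

A smaller BufferSize trades latency for resilience: frames that arrive out of order by more than the buffer window will be dropped rather than sorted.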

This article shows how to tune these parameters:

Thanks for the answer - and I apologise for the tardy acceptance. With your guidance I was able to get latency down to an acceptable level. A couple of notes:

  • I found the docs to be a bit misleading - it took me a while to work out where to make the edits.
  • If SortBuffer is set to false, the dependent target stream will not start.
  • If BufferSize and FlushInterval are set very high, the whole engine crashes out.
  • I agree that UDP over the internet can be a bad idea. Here though, we’re just going over the LAN.

Thank you for this feedback, @Simon Haywood - I will pass it on to the Engine team this morning.