Wowza Community

Reducing latency

I’m working on a project for a new client. This guy has a Wowza server running on an EC2 instance. His streams are audio-only, and he’s quite concerned about getting his latency (from microphone input on the encoder to headphone output on the player) as low as possible - ideally under 1/2 second.

I’ve followed the advice in https://www.wowza.com/docs/how-to-set-up-low-latency-applications-in-wowza-streaming-engine-for-rtmp-streaming, and when I get my hands on an Apple device to test on I’ll also implement https://www.wowza.com/docs/how-to-configure-adobe-hds-packetization-sanjosestreaming. So far, encoding a stream with FMLE and playing it with the OSMF player, I can get my latency down to around a second. I don’t notice a difference between including a video stream and sending just an audio stream.

Besides the two articles mentioned above and the discussions that go with them, is there anything more I should be looking at to get my stream latency down? I am using stream type of live-record-lowlatency.

One second is great for sanjose playback. If you are going to do any better, it will probably have to be RTMP.

Richard

You have already read the guide on low latency streaming. I don’t have other info.

Richard

Take a look at this article for ways to improve HLS latency:

https://www.wowza.com/docs/how-to-configure-apple-hls-packetization-cupertinostreaming

Richard

For HDS (sanjose), take a look at this article:

https://www.wowza.com/docs/how-to-configure-adobe-hds-packetization-sanjosestreaming

Richard

RTMP is a TCP-based protocol.

That means you will only get latency under 1/2 second on an ideal network, e.g. a local network or an optical link.

Zero buffers on the playback side may reduce latency a bit, but in general this problem is not solved for RTMP.

Wikipedia:

Difference between RTMP and RTMFP

The principal difference is how the protocols communicate over the network. RTMFP is based on User Datagram Protocol (UDP), whereas RTMP is based on Transmission Control Protocol (TCP). UDP‐based protocols have some specific advantages over TCP‐based protocols when delivering live streaming media, such as decreased latency and overhead, and greater tolerance for dropped/missing packets, at the cost of decreased reliability.

You can also read this article for an explanation of why a fixed latency of 1/2 second might not be achievable on an arbitrary network.

If you just want to reduce latency in RTMP, there are some settings, but they do not guarantee 1/2-second latency everywhere:

  1. NetStream.bufferTime = 0 on the client side

  2. Wowza/conf/Streams.xml on the server side:

<Stream>
    <Name>my-low-latency</Name>
    <Description>my-low-latency</Description>
    <ClassBase>com.wowza.wms.stream.live.MediaStreamLive</ClassBase>
    <ClassPlay>com.wowza.wms.stream.live.MediaStreamLivePlay</ClassPlay>
    <Properties>
        <Property>
            <Name>maxliveaudiolatency</Name>
            <Value>8000</Value>
        </Property>
        <Property>
            <Name>instantOn</Name>
            <Value>false</Value>
            <Type>Boolean</Type>
        </Property>
        <Property>
            <Name>flushInterval</Name>
            <Value>20</Value>
            <Type>Integer</Type>
        </Property>
        <Property>
            <Name>onFlushNotifyClients</Name>
            <Value>true</Value>
            <Type>Boolean</Type>
        </Property>
        <Property>
            <Name>disableLowBandwidthThrottling</Name>
            <Value>false</Value>
            <Type>Boolean</Type>
        </Property>
        <Property>
            <Name>behindDropDFrames</Name>
            <Value>3000</Value>
            <Type>Integer</Type>
        </Property>
        <Property>
            <Name>behindDropPFrames</Name>
            <Value>3000</Value>
            <Type>Integer</Type>
        </Property>
        <Property>
            <Name>behindDropKFrames</Name>
            <Value>3000</Value>
            <Type>Integer</Type>
        </Property>
        <Property>
            <Name>behindDropAudio</Name>
            <Value>3000</Value>
            <Type>Integer</Type>
        </Property>
    </Properties>
</Stream>

Pay attention to:

  1. maxliveaudiolatency = 8000

  2. flushInterval - a lower flushInterval gives lower latency, but increases CPU utilization.

  3. onFlushNotifyClients = true

  4. behindDropAudio = 3000

You can experiment with these parameters to get lower latency; it will cost you CPU overhead and some quality degradation.

randall,

In your example, you wrote:

“The round-trip ping latency between my computer and the server is 140ms, which is rather high. So, if your ping latency is lower, you should be able to achieve even better results. I think 100-200ms is possible.”

Yes, it is possible to have low latency when the RTT is 140ms, but the loss rate should be ~0 for such results. Try a ping command like:

$ ping -l 96 -n 1500 host

to simulate the stream’s packets. If you want low latency and good enough quality, your ping loss rate should be 0 across two to five thousand test packets.

Both RTT and loss rate affect the resulting latency and quality.
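On Linux the equivalent ping flags are -s (payload bytes) and -c (count), and the loss rate can be read off ping's summary line. A minimal parsing sketch (the summary line below is a canned hypothetical sample of iputils-ping output, not a real measurement):

```shell
# Hypothetical iputils-ping summary line, used here as canned sample input;
# a real run would be:  ping -s 96 -c 1500 host
summary='1500 packets transmitted, 1500 received, 0% packet loss, time 1499000ms'

# Pull out the packet-loss percentage from the summary line
loss=$(printf '%s\n' "$summary" | sed 's/.*, \([0-9.]*\)% packet loss.*/\1/')
echo "loss=${loss}%"
```

Anything other than 0 here over a few thousand packets suggests the link will not hold sub-second RTMP latency without glitches.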

Yes, you can set bufferTime to 0, but it resolves the latency problem by dropping the packets that have fallen behind, so you will see video freezes and lags.

For example, with a UDP stream at 50 packets per second and a 1% loss rate, you will lose 5 packets (1%) over a 10-second stream.

But if an RTMP stream has fallen 1 second behind (or even 4 seconds), you will need to drop 500 * 1/10 = 50 packets (10% of that 10-second stream) just to get back to zero added latency.
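The arithmetic above can be checked with a quick shell sketch (the rates are the assumed example values from the paragraph, not measurements):

```shell
# Assumed example rates from the argument above, not measurements.
rate=50        # packets per second
loss_pct=1     # UDP loss rate, percent
dur=10         # seconds of stream considered

total=$((rate * dur))                  # packets sent in the window
udp_lost=$((total * loss_pct / 100))   # what a 1% UDP loss costs

behind=1                               # seconds of accumulated RTMP latency
to_drop=$((rate * behind))             # packets to discard to catch up
drop_pct=$((100 * to_drop / total))    # as a share of the whole window

echo "${total} sent, ${udp_lost} lost (UDP), drop ${to_drop} (${drop_pct}%) to catch up"
```

So the catch-up drop (10%) is an order of magnitude larger than the raw UDP loss (1%), which is why zero-buffer RTMP playback stutters on lossy links.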

One second is great for sanjose playback. If you are going to do any better, it will probably have to be RTMP.

Richard

I don’t have a problem with that. I suspect the client will be dealing mostly with Flash Player users, so that should work. That’s how I’m doing my testing currently. Does it sounds reasonable to try for 1/2 second latency with RTMP streaming? If so, what additional things should I be looking at?

Ok, thanks.

If you are asking about low-level TCP settings, why not try the following setup:

echo 0 > /proc/sys/net/ipv4/tcp_sack

echo 0 > /proc/sys/net/ipv4/tcp_timestamps

echo 2621143 > /proc/sys/net/core/rmem_max

echo 262143 > /proc/sys/net/core/rmem_default
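Those echo lines do not survive a reboot; the same values can be applied with sysctl and persisted in /etc/sysctl.conf (a sketch, assuming a stock Linux sysctl and the same values as above):

```shell
# One-shot, equivalent to the echo lines above (run as root):
sysctl -w net.ipv4.tcp_sack=0
sysctl -w net.ipv4.tcp_timestamps=0
sysctl -w net.core.rmem_max=2621143
sysctl -w net.core.rmem_default=262143

# Or persist across reboots by adding to /etc/sysctl.conf:
#   net.ipv4.tcp_sack = 0
#   net.ipv4.tcp_timestamps = 0
#   net.core.rmem_max = 2621143
#   net.core.rmem_default = 262143
```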

Alex, your linked article is probably incorrect. If lost packets increased stream latency, then we would see latency grow over time on connections with packet loss, and after much testing on such a link I know this is not the case. Selective Acknowledgement (SACK), a TCP option that is enabled by default, mitigates the issue. Also, I’m sure most streaming players ignore lost packets rather than increasing playback latency.

I’m guessing the increased latency of TCP over UDP comes from the initial handshake TCP has which UDP doesn’t.

Of course, my uninformed speculation could be wrong…

FYI,

I get .5sec (or less) video latency using:

  1. Stock un-tuned Wowza server.

  2. Wowza VideoChat example with the video window reduced to 180p.

The round-trip ping latency between my computer and the server is 140ms, which is rather high. So, if your ping latency is lower, you should be able to achieve even better results. I think 100-200ms is possible.

Ok, I think I see your point now. For regular streaming, a single dropped packet on a high-latency connection will pause the stream for at least the round-trip latency while waiting for TCP retransmission, whereas UDP just loses one packet. You’re right, but if you have 1-4 seconds of latency you’re probably not doing live chat, and with a regular stream you can set buffers to mitigate this effect, so I’m not seeing the benefit of UDP there.

Now let’s imagine we’re doing a “real-time” live chat on an average connection with 140ms round-trip latency. Say our bitrate gives us a packet every 50ms, and one packet is dropped. Once again, with TCP the receiver must wait for retransmission. After the drop:

50ms later, the next packet is received, exposing the gap

70ms (one one-way trip) later, the sender gets the DUP-ACK

70ms (another one-way trip) later, the receiver gets the retransmission

So we have maybe a ~200ms glitch in our TCP stream, but with UDP we only have a 50ms glitch. Sound right?
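The timeline above can be added up explicitly (a sketch using the thread's assumed 140ms RTT split into two 70ms one-way trips; the ~200ms figure is this sum rounded up):

```shell
# Assumed numbers from the scenario above: 140 ms RTT, one packet every 50 ms.
rtt=140
one_way=$((rtt / 2))     # 70 ms each direction
pkt_interval=50          # ms until the next packet exposes the gap

# gap noticed + DUP-ACK reaches sender + retransmission reaches receiver
tcp_glitch=$((pkt_interval + one_way + one_way))
udp_glitch=$pkt_interval # UDP simply loses that one packet

echo "TCP ~${tcp_glitch}ms, UDP ~${udp_glitch}ms"
```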

That makes sense for audio, where packets are discrete, but with H.264 baseline-profile video we have P-frames, which are built from the previous frames. With UDP a dropped packet would cause corruption for a little more than half the keyint on average, whereas with TCP the stream can be rebuilt correctly, which ideally results in a frozen picture for ~200ms instead of green blocking/corruption for ~2 sec on a 4-sec keyint. So I think UDP might be better for low-latency voice applications, but TCP would be better for video, and especially better for recording.

If you are asking about low-level TCP settings, why not try the following setup:

echo 2621143 > /proc/sys/net/core/rmem_max

echo 262143 > /proc/sys/net/core/rmem_default

These settings just allow an application to request up to that amount when setting its SendBuffer and ReceiveBuffer sizes; they will have no effect on Wowza if you do not change the sizes of the send and receive buffers. Prior to 3.5.0, the ReceiveBufferSize should not be made too large, as a separate internal read buffer is also tied to that value. From 3.5.0, the read buffer is a separate value, allowing you to set the receive buffer on its own. Also from 3.5.0, TCP auto-tuning can be enabled by setting the send buffer and receive buffer sizes to 0. The ReadBufferSize should remain at the recommended setting.
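For reference, those sizes live in the SocketConfiguration section of Wowza's conf/VHost.xml. A sketch of the 3.5.0+ auto-tuning setup described above (element names and the 65000 read-buffer value are assumed from a stock VHost.xml; verify against your install):

```xml
<!-- conf/VHost.xml, inside the relevant <HostPort> (Wowza 3.5.0+).
     0 for send/receive enables TCP auto-tuning; ReadBufferSize stays
     at its recommended value. -->
<SocketConfiguration>
    <ReuseAddress>true</ReuseAddress>
    <ReceiveBufferSize>0</ReceiveBufferSize>
    <ReadBufferSize>65000</ReadBufferSize>
    <SendBufferSize>0</SendBufferSize>
</SocketConfiguration>
```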

Roger.

habtedech,

You might try increasing your frames per second to something like 30, and shoot for a key frame interval (GOP size) of 2 seconds (every 60 frames at 30fps).

Also, what are you using for playback?

Salvadore

Hello,

First, I would like to say how much I appreciate Wowza’s rich forum and support. I am trying to live stream using the VLC encoder, and I receive the stream with a delay of up to 7 seconds. I am using the native RTP protocol. I have tried to optimize the encoder, but the delay still persists. On the Wowza server I also changed the stream type from live to live-lowlatency. Below is the VLC command line I used.

vlc -vvv dshow:// --sout "#transcode{venc=x264,vcodec=x264,vb=150,scale=1,acodec=mp4a,width=320,height=240,ab=56,channels=2,fps=15,samplerate=8000}:rtp{dst=127.0.0.1,port=5544,mux=ts,ttl=12,sdp=file:///C:\Program Files (x86)\Wowza Media Systems\Wowza Media Server 3.5.2\content\www.sdp}"
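Two encoder-side knobs may be worth trying in that command (a sketch, not tested here: --sout-mux-caching and the venc=x264{...} sub-option syntax are assumptions based on VLC 2.x; keyint=30 gives a 2-second GOP at fps=15):

```shell
# Sketch: same pipeline with a shorter mux cache and an explicit short GOP,
# no B-frames. Flag names are assumed VLC 2.x syntax; verify with vlc --help.
vlc -vvv dshow:// --sout-mux-caching=200 --sout "#transcode{venc=x264{keyint=30,bframes=0},vcodec=x264,vb=150,scale=1,acodec=mp4a,width=320,height=240,ab=56,channels=2,fps=15,samplerate=8000}:rtp{dst=127.0.0.1,port=5544,mux=ts,ttl=12,sdp=file:///C:\Program Files (x86)\Wowza Media Systems\Wowza Media Server 3.5.2\content\www.sdp}"
```

Note that VLC's own input/output buffering usually dominates here, so tuning Wowza alone will not remove a multi-second delay.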

I read the article https://www.wowza.com/docs/how-to-set-up-low-latency-applications-in-wowza-streaming-engine-for-rtmp-streaming, but the problem remains.

How can I reduce the latency?

I have a latency problem in HTTP mode. I’m using Wirecast to connect to the Wowza server, but the stream has a delay of 20 seconds, and I need it to average around 5 seconds. Is that possible? The Flash RTMP player has a 5-second delay, but the HTTP player’s is 20 seconds. Or is there a way to add an RTMP player to my website? (Sorry, I’m Brazilian and I don’t speak English well.)

Hi Richard, the delay is in

file:///C:/Program%20Files%20(x86)/Wowza%20Media%20Systems/Wowza%20Media%20Server%203.6.1/examples/LiveVideoStreaming/FlashHTTPPlayer/player.html

I tried that article, but it didn’t work; the delay is a stable 20 seconds.

I also tried setting up a server on Linux Debian, but it has the same problem.

Do you have any ideas?
