Wowza Community

Live Stream Delay 30-60 seconds.

We have successfully set up two servers running origin, edge, load balancing, and redirect.

Adobe --> Server01 (origin) --> Server01 (edge) & Server02 (edge)

Streams run with no delays for weeks without reboots.

Now we have successfully set up load balancing/redirect.

Adobe --> Server01 (origin)(loadbalance) --> Server01 (edge) & Server02 (edge)(loadbalance)
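For reference, an origin/edge layout like the one above is wired together in each application's Application.xml. A minimal sketch of the edge side, assuming default Wowza conventions (the application names and origin hostname here are placeholders, not the poster's actual config):

```xml
<!-- conf/liveedge/Application.xml on each edge server (sketch; names are placeholders) -->
<Streams>
	<!-- Pull the live stream from the origin application -->
	<StreamType>liverepeater-edge</StreamType>
</Streams>
<Repeater>
	<!-- Replace with your actual origin server and origin application name -->
	<OriginURL>rtmp://origin-server/liveorigin</OriginURL>
</Repeater>
```

The origin application would use `<StreamType>liverepeater-origin</StreamType>` and receive the publish from the encoder.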

Our stream shows no delay when we first fire it up, but after several hours we notice a 30-60 second delay in the player versus the actual stream from Adobe. Opening the stream in a new player still shows the delay.

How can this be fixed?

The load balancer most likely has nothing to do with the delay. I would focus on the encoder. These delays are usually encoder issues where the encoder can't send the stream to the server fast enough, so it holds video/audio data. What encoder are you using? If FMLE, there are options to have it drop frames to try to keep up.

Charlie

Handling this on support

Richard

The solution was: for iOS, Smooth, and San Jose streaming from the edge, change LiveStreamPacketizers on the edge applications from this:

<LiveStreamPacketizers>cupertinostreamingpacketizer,smoothstreamingpacketizer,sanjosestreamingpacketizer</LiveStreamPacketizers>

To this:

<LiveStreamPacketizers>cupertinostreamingrepeater,smoothstreamingrepeater,sanjosestreamingrepeater</LiveStreamPacketizers>
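In context, this property lives in the `<Streams>` section of the edge application's Application.xml. A sketch of what the corrected edge section might look like (the surrounding structure is a standard Wowza layout, not copied from the poster's config):

```xml
<!-- Edge Application.xml, <Streams> section (sketch) -->
<Streams>
	<StreamType>liverepeater-edge</StreamType>
	<!-- Use the *repeater* packetizers on edges so HTTP chunks created
	     on the origin are relayed, rather than re-packetized locally -->
	<LiveStreamPacketizers>cupertinostreamingrepeater,smoothstreamingrepeater,sanjosestreamingrepeater</LiveStreamPacketizers>
</Streams>
```

For this to work, the origin application keeps the non-repeater packetizers (cupertinostreamingpacketizer, smoothstreamingpacketizer, sanjosestreamingpacketizer) so the chunks are created once at the origin.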

Richard

Try reducing the bitrate of the stream in the encoder by half. If that makes a significant difference, then you can adjust up some, or perhaps reduce further.

The problem could be the uplink at the encoder, or bandwidth of the client, or both. You can measure bandwidth from Wowza to a client with the BWCheck example:

https://www.wowza.com/docs/how-to-test-server-to-client-bandwidth-for-rtmp-clients

And you can measure bandwidth from a client (on the encoder side) to Wowza to see what you have for the uplink:

https://www.wowza.com/docs/how-to-check-bandwidth-from-client-to-server-to-test-uplink-to-be-used-by-a-live-stream-encoder-moduleclientbwcheck

Richard

Thank you for your response.

Correction: this all started after we had set up origin, edge, and live repeaters.

With a standalone app there were no streaming issues. We did not have this problem with 4,000 viewers on a single box, with no delays or jitter. Now we are running origin, edge, and live repeater apps across a total of two servers.

We can rule out the encoder as we did not have this issue to begin with.

We have opened a ticket with Wowza, #7957. Would you kindly check our configuration?

The load balancer most likely has nothing to do with the delay. I would focus on the encoder. These delays are usually encoder issues where the encoder can't send the stream to the server fast enough, so it holds video/audio data. What encoder are you using? If FMLE, there are options to have it drop frames to try to keep up.

Charlie

For FMLE, you cannot set up auto-adjust to drop frames when you are publishing multiple bitrate streams.

Also, we are getting these errors from FMLE:

Fri Feb 11 2011 06:43:57 : Primary - More than 50 % of the specified threshold values (Max buffer size 512000 KB or Max buffer duration 3600 sec) of RTMP buffer reached. Current network conditions are unfavorable to publish data at the specified bitrate. RTMP buffer will be flushed and streams will be re-published when the threshold value is reached.

Fri Feb 11 2011 06:58:27 : Primary - More than 70 % of the specified threshold values (Max buffer size 512000 KB or Max buffer duration 3600 sec) of RTMP buffer reached. Current network conditions are unfavorable to publish data at the specified bitrate. RTMP buffer will be flushed and streams will be re-published when the threshold value is reached.

Fri Feb 11 2011 07:17:57 : Primary - More than 90 % of the specified threshold values (Max buffer size 512000 KB or Max buffer duration 3600 sec) of RTMP buffer reached. Current network conditions are unfavorable to publish data at the specified bitrate. RTMP buffer will be flushed and streams will be re-published when the threshold value is reached.

We did another test from the same FMLE server to a standalone “live” app: no streaming issues, no jitter or video delay. Our FMLE server is 10-15 ms from our origin server. Our origin server has a 1 Gbps port that can handle 4,000-5,000 viewers with no problems using a standalone “live” app. This issue only exists when using live-origin/live-edge with a repeater to another live-edge.

Thanks Richard,

Problem resolved by support. There was a configuration setting on the edge application that resolved the issue.

Hi Richard

Is it possible to know what the solution to this problem was? I have the same issue, and in addition, the transmission delay keeps growing by many minutes.

Awaiting your comments.

Thanks for your answer, but I am already using that solution. The problem I have is:

Every two to four minutes the video stops, and after approximately one minute it resumes, but that time accumulates as delay, so the original signal ends up playing many minutes ahead.

I tried clearing the encoder buffer (setting it to zero); the delay no longer accumulates, but the video stops every minute.

Graphic:

Delayed signal (live)

====|-stop-|====|-stop-|====|-stop-|====>

Original Signal (live)

================================================>

I’m using an edge server.