Wowza Community

Live Wirecast stream to AWS plays fine for some viewers but others get dropped every 5 seconds

We had a very successful live stream to 150 unique viewers using the Wowza EC2 AMI instance on Amazon AWS. However, we had reports of the stream stopping every 5 seconds for some viewers, with the player error “livestream not found.”

We are confident that some people were able to watch the entire stream without any errors or disconnections; a couple of testers verified this. That most likely rules out upload bandwidth from our local Wirecast setup or any other local origin problem, so it seems the feed was getting out to Amazon fine.

Wowza is set up to stream RTMP to Flash and to repackage the stream for iOS devices. Both were tested and received the feed just fine.

So what happened to the people who got dropped every 5 seconds?

Below is the Wowza log from the show that night. Does anyone know where to look for the cause of these errors? The log shows that some people were disconnected and that their streams were stopped. Why? Can anyone tell me how to troubleshoot this and which logs will help me track down the error?


2011-05-13 23:26:16 EDT unpublish stream INFO 200 livestream - defaultVHost live definst 13.521 [any] 1935 rtmp:// 74.8$

2011-05-13 23:26:16 EDT destroy stream INFO 200 livestream - defaultVHost live definst 13.521 [any] 1935 rtmp://$

2011-05-13 23:26:17 EDT disconnect session INFO 200 847375956 - defaultVHost live definst 14.693 [any] 1935 rtmp:// 74.8$

2011-05-13 23:26:21 EDT connect-pending session INFO 100 - defaultVHost live definst 0.061 [any] 1935 rtmp:// 74.8$

2011-05-13 23:26:21 EDT connect session INFO 200 - defaultVHost live definst 0.062 [any] 1935 rtmp://$

2011-05-13 23:26:21 EDT create stream INFO 200 - - defaultVHost live definst 0.0 [any] 1935 rtmp:// rtmp$

2011-05-13 23:26:21 EDT publish stream INFO 200 livestream - defaultVHost live definst 0.026 [any] 1935 rtmp://$


A couple of questions:

  • What bitrate were you running?

  • Were you monitoring CPU or network I/O in AWS at the time?


Assuming this is a Small instance, you are overloading it. What you describe is probably more than it can handle.

You probably need at least an m1.xlarge, or one of the high-memory types: m2.2xlarge or m2.4xlarge.


The Small, Large, and XLarge instances are known to get about 150, 250, and 350 Mbps of throughput, respectively.

You also have a lot of incoming streams. I think that is just as likely to be the issue. A Small should handle 150 streams at 1 Mbps each, but only just barely. One way or the other, I think the issue was maxed-out capacity; a Small instance is not big.

We don’t have numbers on the other instance types at this time.
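The throughput figures above translate into a simple back-of-the-envelope capacity calculation. Here is a minimal sketch in Python, using only the approximate numbers quoted in this thread; the instance-type throughputs come from the post above, and the ~1 Mbps stream bitrate is an assumption you should replace with your own encoder settings (protocol overhead is ignored):

```python
# Rough capacity estimate: how many concurrent viewers an instance can carry.
# Throughput values are the approximate Mbps figures quoted in this thread
# for the Wowza EC2 instance types.

THROUGHPUT_MBPS = {
    "m1.small": 150,
    "m1.large": 250,
    "m1.xlarge": 350,
}

def max_concurrent_viewers(instance_type: str, stream_bitrate_mbps: float) -> int:
    """Upper bound on simultaneous viewers, ignoring protocol overhead."""
    return int(THROUGHPUT_MBPS[instance_type] // stream_bitrate_mbps)

# A ~1 Mbps stream (700 kbps video + ~300 kbps audio) on a Small instance:
print(max_concurrent_viewers("m1.small", 1.0))   # 150 viewers, at best
```

With 150 concurrent viewers of a 1 Mbps stream, a Small instance is sitting exactly at its quoted ceiling, which is why it would start shedding connections.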


I misread the upstream part. You said you were “pushing less than 1000k up,” which is reasonable as far as the EC2 instance is concerned, though it could be a lot for the encoder’s location. There is an UpBWCheck app here:

You also said “150 uniques,” which I took to mean 150 concurrent viewers, which is what really matters. If you were at or around 150 concurrent 1 Mbps streams, you were capping out, and in that case what you described makes sense; it is expected.


Players run in the user’s browser and connect to Wowza from there. The web server that stores and serves the player files is not involved in streaming.


Take a look at the access and error logs, which are easy to follow. You can see where connections start and stop, grouped by c-client-id. I open them in Excel, delete the top four rows, then use View > Freeze Top Row.
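If you prefer a script to Excel, the same grouping by c-client-id can be sketched in Python. This assumes the tab-delimited, W3C-style format with a `#Fields:` header line that Wowza access logs use; the field names and sample lines below are illustrative, so check your own log’s header line for the exact columns:

```python
# Group a Wowza-style access log by client id so that each viewer's
# connect/play/disconnect events read as one session.
from collections import defaultdict

# Made-up sample in the tab-delimited W3C style; a real log has many more columns.
SAMPLE_LOG = (
    "#Fields:\tdate\ttime\tx-event\tx-category\tc-client-id\n"
    "2011-05-13\t23:26:16\tconnect\tsession\t847375956\n"
    "2011-05-13\t23:26:16\tplay\tstream\t847375956\n"
    "2011-05-13\t23:26:17\tdisconnect\tsession\t847375956\n"
)

def sessions_by_client(log_text):
    fields = None
    sessions = defaultdict(list)
    for line in log_text.splitlines():
        if line.startswith("#Fields:"):
            # Column names follow the "#Fields:" token.
            fields = line.split("\t")[1:]
            continue
        if line.startswith("#") or not line.strip():
            continue  # skip other comment/blank lines
        row = dict(zip(fields, line.split("\t")))
        sessions[row["c-client-id"]].append((row["time"], row["x-event"]))
    return sessions

for client, events in sessions_by_client(SAMPLE_LOG).items():
    print(client, events)
```

A client that reconnects every few seconds would show up here as one c-client-id per short-lived session, with connect and disconnect timestamps only seconds apart, which is exactly the pattern in the log excerpt above.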


Were you using RTMPT (tunneling)? Flash drops RTMPT connections very easily if any network problem occurs. Avoid RTMPT, or build reconnect logic into your Flash player; JW Player has a reconnect plugin.


Hi Ian,

Sorry for the slow response. We were streaming via Wirecast and pushing less than 1000 kbps up. Our total upstream bandwidth was at least 3 or 4 Mbps. Video was running around 700 kbps, and the audio jumped around but was generally at 250 or 300 kbps. We were using Wirecast’s Flash settings with the Main profile, sent directly to the RTMP address of the EC2 Wowza instance.

I monitored the I/O with the AWS CloudWatch monitor. We saw spikes up and down, and also a good 15 minutes of plateau. Is the plateau a sign that our server is overwhelmed? CPU utilization peaked at only about 9.76%. We had a Medium EC2 Wowza AMI running at the time.

Thanks for your thoughts.

Ok, that might well be the case. Do you know how many concurrent viewers a Wowza m1.xlarge EC2 server can handle if the stream is near 1000 kbps? Is Wowza EC2 streaming limited only by server capacity, or does Wowza itself have limitations? Do you happen to know how many can be on an m2.4xlarge?

Could load balancing be a fix for this? Can the Wowza server relay to multiple EC2 instances using load balancing?


Ok, great. I’ll test a larger server on EC2 and enable cloud monitoring.

To be clear, what do you mean by “a lot of incoming streams”? The number of connections trying to view the feed, i.e., unique simultaneous viewers? We just have the one stream per Wirecast broadcast being sent to each EC2 instance at a time.

Also, does the Wowza server connect directly to the player at the client, or does the player relay through my origin web server to communicate with the EC2 instance? Could it also be an issue with the buffer on my dedicated IP at my website origin? The player is served from the CloudFront CDN but is called from a dedicated IP at Media Temple. It seems to me that once the player is loaded on the client, the client would talk directly to the Wowza streamer. Is that correct?

Thanks again!

Can you tell me whether the Wowza server connects directly to the client after the player loads, or whether there might be a problem with my web server’s buffer?

Our player is hosted on CloudFront, with the feed coming from the Wowza EC2 instance.


Ok, great. The 150 uniques I mentioned were indeed unique viewers over about a 3-hour period. I now know that some watched the whole thing and some got kicked off every 5 minutes. Based on the 30-minute average view time of the show, I believe we only had about 40 concurrent viewers at any one time during that 3-hour period.
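As a sanity check on that estimate, Little’s law (average concurrency ≈ arrival rate × average view time) gives a similar ballpark. A minimal sketch using only the numbers mentioned in this thread (150 uniques over ~3 hours, ~30-minute average view time):

```python
# Little's law estimate of average concurrent viewers:
# concurrency = arrival rate (viewers starting per minute) * avg view time.

def avg_concurrent(unique_viewers, show_minutes, avg_view_minutes):
    arrival_rate = unique_viewers / show_minutes
    return arrival_rate * avg_view_minutes

print(avg_concurrent(150, 180, 30))  # -> 25.0
```

That works out to roughly 25 concurrent viewers on average; peaks would be higher, so the 40-at-peak estimate is plausible, and either figure is well under a Small instance’s quoted 150 Mbps ceiling at ~1 Mbps per stream.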

That seems well below the server’s capacity. Is there anything else that could cause the “livestream not found” errors?

Here is an image from CloudWatch from a smaller stream (i.e., fewer concurrent viewers) where we still had a couple of “livestream not found” errors.

Regarding RTMPT tunneling: no. We are just using the default instance settings and only allow Flash and iOS-type connections.

I’m looking through the logs, but I’m not sure what to look for in there. I see the connects in the access log; nothing seems to be a red flag. The error log only has five rows.

I’ve posted the 5/13 logs here:

Good tip on the JW FLV reconnect plugin, by the way. Thank you!