We are experiencing performance issues and are wondering whether our setup is fundamentally flawed.
We have one Wowza origin in our data center (not EC2): Windows Server 2008 x64, 8 GB RAM, dual-core Xeon, RAID 5, 1 Gb network. We increased the Java heap to 6 GB. We are also recording the live streams using the live-origin-recording application settings. We also called our data center folks and had them increase our NIC connection buffers from the default 256 to 2048.
We have two identical edges in Amazon EC2: Medium instances with 4 GB RAM and 2 cores, Java heap sizes set to 3 GB, running the live-repeater applications.
At our peak time, we see approximately 30 streams coming into our origin from clients who publish via FMLE from their networks, at approximately 1000 Mb/s.
With only about 20 input streams, the origin plus a single edge successfully served roughly 80 client streams, with a few latency issues for some clients. Now that our load has almost doubled, all of our streams are experiencing latency and buffering issues.
Each stream coming in from our clients may have 1 to 20 outbound connections on the edges.
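To put rough numbers on this, here is our back-of-envelope math. The 2 Mbps per-stream bitrate is just an assumed figure for illustration, not a measured value; the stream and edge counts are the ones described above:

```python
# Rough capacity math for the origin/edge setup described above.
# ASSUMPTION: ~2 Mbps per FMLE stream (illustrative; actual bitrates vary).
PER_STREAM_MBPS = 2.0
INBOUND_STREAMS = 30   # peak publishers into the origin
EDGES = 2              # EC2 edge instances pulling from the origin
MAX_FANOUT = 20        # worst-case viewers per stream on the edges

def aggregate_mbps(streams, per_stream_mbps=PER_STREAM_MBPS):
    """Total bandwidth for a set of identical streams, in Mbps."""
    return streams * per_stream_mbps

inbound = aggregate_mbps(INBOUND_STREAMS)            # publishers -> origin
origin_out = inbound * EDGES                         # one copy of each stream per edge
worst_edge_out = aggregate_mbps(INBOUND_STREAMS * MAX_FANOUT)  # every stream at max fan-out

print(f"inbound to origin:          {inbound:.0f} Mbps")
print(f"origin -> edges (WAN hop):  {origin_out:.0f} Mbps")
print(f"edges -> viewers (worst):   {worst_edge_out:.0f} Mbps")
```

With these assumed numbers, the worst-case outbound from the edge tier (1200 Mbps) dwarfs the origin-to-edge hop, which roughly matches where we see the buffering.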
Over the past three weeks we've increased the memory and port speed of our origin, but the issues persist.
Our thoughts:
We suspect that streaming from our origin in one data center to the edges in EC2 may be where we've gone wrong; it may be better to have them on the same local network. For the way we use this, it may be better to run the origin and edge on the same box, or at least inside the same internal network, and cap the input streams at 50 per origin.
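As a quick sanity check on that 50-stream cap, here is the same kind of sketch. The 1 Gb/s figure is our actual origin link; the per-stream bitrate is again an assumed number:

```python
# Would a 50-publisher cap fit on a single 1 Gb/s origin NIC?
# ASSUMPTION: ~2 Mbps per stream (illustrative; actual bitrates vary).
PER_STREAM_MBPS = 2.0
NIC_MBPS = 1000.0  # our origin's 1 Gb/s link

def origin_nic_load(input_streams, edges, per_stream_mbps=PER_STREAM_MBPS):
    """Inbound publishers plus one outbound copy per edge, in Mbps."""
    inbound = input_streams * per_stream_mbps
    outbound = inbound * edges  # origin repeats each stream to every edge
    return inbound + outbound

load = origin_nic_load(input_streams=50, edges=2)
headroom = NIC_MBPS - load
print(f"origin NIC load: {load:.0f} Mbps, headroom: {headroom:.0f} Mbps")
```

Under those assumptions the origin itself has plenty of NIC headroom at 50 streams, which is partly why we think the origin-to-EC2 WAN hop, not the origin box, is the weak link.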
Is there a best-practice diagram of the ideal recommended setup for a live-streaming origin/edge deployment? By the way, we've reviewed the Performance Tuning doc several times to no avail.
Any insight would be appreciated, as we are learning that streaming is more finicky than we thought!