I'm looking into a potentially quite extensive VOD OTT project. The idea is to store the video content in S3, with each asset in multiple renditions (profiles) for SanJose (Adobe HDS) and Cupertino (Apple HLS) streaming, which means lots of small chunk files. The content would be delivered to viewers by a network of Wowza servers running on EC2. My worry is whether the data connection between S3 and the Wowza EC2 instances is fast enough to deliver these small chunks in time for smooth playback, especially when multiple Wowza servers are pulling the same files from S3.
I'm also worried about the scale of things. Let's assume the S3 content totals 3 TB and that 50 Wowza instances are running to support the expected peak number of concurrent viewers, all reading from the same S3 bucket. Would this work?
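For context, here is my rough back-of-envelope arithmetic for the load on S3. The viewer count per instance and the average bitrate below are assumptions for illustration, not figures from the actual plan:

```python
# Back-of-envelope estimate of the load the Wowza fleet would put on S3.
# VIEWERS_PER_INSTANCE and AVG_BITRATE_MBPS are assumed values.

WOWZA_INSTANCES = 50
VIEWERS_PER_INSTANCE = 200   # assumed peak concurrent viewers per server
CHUNK_DURATION_S = 10        # typical HLS/HDS segment length in seconds
AVG_BITRATE_MBPS = 2.5       # assumed average rendition bitrate

viewers = WOWZA_INSTANCES * VIEWERS_PER_INSTANCE

# Worst case (no caching on the Wowza side): every viewer triggers
# one chunk fetch from S3 per segment duration.
s3_gets_per_second = viewers / CHUNK_DURATION_S

# Aggregate S3-to-EC2 egress needed to keep all players fed.
aggregate_gbps = viewers * AVG_BITRATE_MBPS / 1000

print(f"Concurrent viewers: {viewers}")
print(f"Worst-case S3 GETs/sec (no caching): {s3_gets_per_second:.0f}")
print(f"Aggregate S3 egress (no caching): {aggregate_gbps:.1f} Gbit/s")
```

Even with these placeholder numbers the worst case is on the order of a thousand GET requests per second against one bucket, which is what makes me question whether this design holds up without caching in front of S3.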
Finally, is there an easy way to scale the number of Wowza servers (and reconfigure the load balancers) on the fly based on demand?