
Load Balancing WSE in AWS for VOD-HLS

We are looking to scale our WSE VOD deployments in Amazon. Currently we are using the m5.xlarge instance type running CentOS. We are not leveraging any external sources for MediaCache, like ElastiCache or the like. We are not YET using S3 to access our VOD libraries (this is on our to-do list, but we are concerned about speed with S3); we currently use EBS volumes. Our common practice for scaling out is to front-end a pool of multiple web servers with an ELB/ALB. We would like to incorporate an additional WSE server into each of our implementations (currently front-ended by an ELB), and perhaps enhance the MediaCache configuration for additional performance. We use a custom module that leverages secure tokens and a REST layer to ensure all streams are protected. We are trying to determine our best first step in scaling our deployments. I was looking for some suggestions as we continue to study the documentation. Thanks.

Hi; are you scaling only your web servers, with a fixed number of WSEs behind them, or are you looking to scale your WSEs too? We've built many (auto-)scaling streaming platforms with AWS, but also with Azure, Google Cloud, and Docker/K8s.

If you are distributing your VOD over a CDN and using your web servers as proxies, then whether you actually need a shared MediaCache implementation really depends on the use case. We built an OTT platform not long ago where we focused on cache HITs and just let the origin servers stand side by side without sharing anything but the SAN.

Whether S3 is fast enough depends on how much you're going to pull from it and what failure risk you can accept. If your servers are in the same AWS region as your S3 bucket, it's rarely a problem until you start pulling big numbers (100s? 1000s? of concurrent streams). Alternatively, you could share an EBS volume between your Wowza servers (e.g., with EBS Multi-Attach on io1/io2 volumes) and do some smart caching, or set up another cache/proxy with a decent EBS volume that sits between S3 and your WSEs.
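
To make the "smart caching" idea concrete, here's a minimal read-through sketch: serve from the local EBS volume on a hit, pull from S3 and keep the copy on a miss. This is just the pattern, not a Wowza API; the bucket name and cache path are made up, and it assumes the AWS SDK for Java v2 is on the classpath.

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Read-through cache: serve VOD assets from a local EBS path when present,
// otherwise pull them from S3 once and keep the copy on disk.
public class ReadThroughCache {
    private static final String BUCKET = "my-vod-bucket";               // hypothetical bucket name
    private static final Path CACHE_ROOT = Paths.get("/mnt/ebs-cache"); // hypothetical EBS mount

    private final S3Client s3 = S3Client.create(); // uses the default credentials chain

    public Path fetch(String key) throws IOException {
        Path local = CACHE_ROOT.resolve(key);
        if (Files.exists(local)) {
            return local; // cache hit: no S3 round trip
        }
        Files.createDirectories(local.getParent());
        // Cache miss: download from S3 and persist onto the EBS volume.
        s3.getObject(GetObjectRequest.builder().bucket(BUCKET).key(key).build(), local);
        return local;
    }
}
```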

If you are going to use secure tokens, remember that they only protect access to the stream, not the stream itself. For that you could implement AES-128 (a.k.a. clear-key) encryption, which you can do yourself with a WSE module and a key server, or even DRM.
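
To illustrate the key-server half of that suggestion, here's a minimal sketch using the JDK's built-in HTTP server. The endpoint path and single static key are hypothetical; a real deployment would serve per-stream keys over HTTPS and authenticate the caller, and the WSE module that writes the matching #EXT-X-KEY URI into the playlist is not shown.

```java
import com.sun.net.httpserver.HttpServer;

import java.net.InetSocketAddress;
import java.security.SecureRandom;

// Minimal HLS key server: hands out the 16-byte AES-128 key referenced by
// the #EXT-X-KEY URI in the playlist.
public class KeyServer {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16];         // one static demo key; real deployments
        new SecureRandom().nextBytes(key); // would look keys up per stream/session

        HttpServer server = HttpServer.create(new InetSocketAddress(8443), 0);
        server.createContext("/hls/key", exchange -> {
            // A real server would authenticate the caller (e.g. validate the
            // secure token) before releasing the key.
            exchange.getResponseHeaders().set("Content-Type", "application/octet-stream");
            exchange.sendResponseHeaders(200, key.length);
            exchange.getResponseBody().write(key);
            exchange.close();
        });
        server.start();
    }
}
```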

I did a webinar some time ago; although it's focused on scaling live streaming, maybe there's something useful in there for you: https://www.youtube.com/watch?v=aMylTSmOa2U

Thank you @Karel_Boek! At present we are only auto-scaling our web servers, not our WSEs (a single point of failure and of scale, we know!). We would like to fold an additional WSE into each of our 3 deployments. These deployments are NOT linked, and are on separate networks. At present we are limited to AWS as a platform. CDN and OTT really do not apply to us, as we are deploying on closed network environments that are not really geo-balanced.

We definitely want to migrate to S3 (if nothing else, for cost, AZ redundancy, and DR benefits), and our servers are in the same region as our buckets. We will commonly pull hundreds to a few thousand concurrent feeds depending on demand. We currently use EBS volumes as you suggested, but our caching is simply WSE MediaCache, pretty much out of the box in my opinion. We could DEFINITELY leverage S3 for smart caching like you suggested, but we would need to understand how that would be implemented. When WSE uses S3, is it smart enough to use the permissions the instance's IAM role grants to the bucket, or do we have to use access keys?

In regards to SecureToken, we are fine without hard-core encryption; we are on closed networks. It's really a matter of maintaining ABAC and DAC on our streams at all times. BUT we are concerned about SecureToken and clustering. Do you use an ALB or ELB, do we need sticky sessions, and how does it all play together? We've run into issues with delivering closed captions alongside SecureToken.

I am still confused/conflicted as to our best first step. If we add a second WSE to our existing pool behind the existing ELB, and then develop a shared cache somewhere (EBS, S3, or the like), is it that simple? (There is some sarcasm in that last sentence.) Who has documentation on the shared cache implementation? Thanks so much!!! Great conversation.

A walk in the park, Rich; what are you waiting for? :smile: I don't think there's a step-by-step guide for implementing any of these aspects. The ideas I gave you are either obvious or based on experience. As I mentioned, we've built a number of these platforms from scratch, and no two solutions are exactly the same.

When WSE uses S3 it presumably needs to have the access and secret keys. You could run a small script when you launch a WSE instance that puts the environment variables into place.
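
On the keys question: if your own module talks to S3 through the AWS SDK, static keys aren't strictly required, because the SDK's default credentials chain falls back to the EC2 instance profile when no keys are configured. Whether WSE's built-in MediaCache S3 source can do the same depends on your version, so check the MediaCache docs. A sketch, assuming AWS SDK for Java v2 and a made-up bucket name:

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;

// No keys in code or config: the default credentials chain tries env vars,
// the shared credentials file, and finally the EC2 instance profile, so an
// instance role with s3:GetObject/s3:ListBucket on the bucket is enough.
public class InstanceProfileS3Check {
    public static void main(String[] args) {
        S3Client s3 = S3Client.create();
        s3.listObjectsV2(ListObjectsV2Request.builder()
                        .bucket("my-vod-bucket") // hypothetical bucket name
                        .maxKeys(5)
                        .build())
          .contents()
          .forEach(obj -> System.out.println(obj.key()));
    }
}
```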

We don't use SecureToken often, so I don't recall whether it requires a session, but presumably so. Typically we implement a token solution that can be terminated at the proxy.
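
For what it's worth, a token that can be validated statelessly is exactly why sticky sessions can be avoided: any WSE (or the proxy in front) holding the shared secret can verify it, so the ELB/ALB can route freely. Here is a sketch of that idea; this is explicitly not Wowza's SecureToken algorithm, and the secret and token format are made up.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Stateless stream token: an HMAC over the stream path plus an expiry time.
// Any server holding the shared secret can validate it, so no server-side
// session and no sticky sessions are needed.
public class StreamToken {
    private static final byte[] SECRET =
            "shared-secret".getBytes(StandardCharsets.UTF_8); // hypothetical secret

    static String sign(String streamPath, long expiresEpochSec) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        byte[] sig = mac.doFinal((streamPath + "|" + expiresEpochSec)
                .getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    static boolean verify(String streamPath, long expiresEpochSec, String token) throws Exception {
        if (expiresEpochSec < System.currentTimeMillis() / 1000) return false; // expired
        byte[] expected = sign(streamPath, expiresEpochSec).getBytes(StandardCharsets.UTF_8);
        byte[] given = token.getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(expected, given); // constant-time compare
    }
}
```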

For a shared cache, the simplest approach is probably a proxy/cache instance within the VPC, with EBS for caching and S3 as the content source. It could be as simple as mounting the S3 bucket as a volume.

If you need any help with the implementation, you can post in the “Hire A Consultant” forums, where qualified Wowza experts are allowed to respond and contact you directly.