I have a peculiar situation that looks like it turns the Wowza LB configuration upside down. Instead of serving many, many clients, I need to serve many, many encoders. I have incoming RTMP connections from across the US and want to scale my Wowza farm horizontally when needed, dynamically bringing servers into and out of the farm. Right now, I switch by hand between large virtual instances and even larger virtual instances behind an AWS ELB, which is a pretty simple load balancer. This means that during switchovers I briefly drop some encoders.
What I want is a smart load balancer on the way in that can stop sending incoming connections to a server, so that I can retire that server once its last connection drops. During times of moderate load, I want to be able to run two smaller servers (at lower cost) instead of one bigger server. Best case would be to launch a new server from an AWS AMI based on the load seen by the load balancer. I have this infrastructure with the rest of my server farms and it works beautifully, scaling up for load and back down when the load is no longer there. However, these connections are being recorded, so ending up with different recorded files on different servers for different segments of a single live broadcast is less than optimal.
I have begun investigating HAProxy as a possible solution to this need. Has anyone here had experience with HAProxy and Wowza for incoming RTMP connections?
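For anyone in a similar spot, here is roughly what I have been sketching out. This is a hypothetical HAProxy config, not something I have in production: server names, IPs, and timeouts are placeholders. The idea is plain TCP passthrough for RTMP on port 1935, `leastconn` balancing since encoder connections are long-lived, and an admin socket so a server can be put into drain mode (no new connections, existing publishes keep running until they end).

```
global
    # Runtime admin socket, used below to drain a server
    stats socket /var/run/haproxy.sock mode 600 level admin

defaults
    mode tcp
    timeout connect 5s
    timeout client  1h    # RTMP publishes are long-lived
    timeout server  1h

frontend rtmp_in
    bind *:1935
    default_backend wowza

backend wowza
    balance leastconn     # spread persistent encoder connections evenly
    server wowza1 10.0.0.11:1935 check
    server wowza2 10.0.0.12:1935 check
```

If I understand the HAProxy runtime API correctly, retiring a server would then look something like `echo "set server wowza/wowza1 state drain" | socat stdio /var/run/haproxy.sock`, and once `show servers conn` reports zero current connections on wowza1, the instance can be terminated. Happy to be corrected if someone has actually run this pattern.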
I understand this is a very specific use case: the large majority of load-balancing configurations are viewer-based, which doesn’t need quite the same stickiness. However, it is my use case and something I have toyed with solving for the long term.