We've been using a load balancing setup where there's a single primary server that runs the origin, load balancer, and edge applications, and when we start up edge servers to handle a large load, they connect to it. This has been working well, but as we expand, the primary server gets connections for every stream times every edge server. So for 20 streams and 20 edges, it has 400 connections.
To fix this, we're first moving the load balancing and edge applications off the primary server and onto a new origin server. Five edge-origin servers will connect to that origin, and each of those will have five edge servers connecting to it.
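To make the motivation concrete, here's the connection arithmetic for both layouts, using the stream and server counts from this thread (and assuming each tier pulls every stream from the tier above it):

```python
streams = 20

# Flat layout: one primary server, every edge pulls every stream from it.
edges_flat = 20
primary_connections_flat = streams * edges_flat  # 20 x 20 = 400

# Tiered layout: 5 edge-origins pull from the origin,
# and 5 edges pull from each edge-origin.
edge_origins = 5
edges_per_edge_origin = 5
origin_connections = streams * edge_origins                    # 20 x 5 = 100
per_edge_origin_connections = streams * edges_per_edge_origin  # 20 x 5 = 100
```

So no single server carries more than 100 stream connections, versus 400 on the primary in the flat layout.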
This all looks possible, except for the part where we start up edge servers. If I understand correctly, we need to specify the edge-origin server to connect to in the loadbalancertargets.txt file. But if we're starting the edge-origin servers up only temporarily, their addresses will change often, and we'll have to customize the edge servers' startup package for each edge-origin.
Should I be using only one load balancer server, or do I need a load balancer server on each edge-origin machine? If I'm telling them all to use my main origin server for the load balancer, will they be able to connect to all the streams?
Is this correct, or is there an easier way to do all of this?
Hi. You're asking how to configure a load balancer in a LiveRepeater origin > edge-origin > edge configuration, correct? I think you're on the right track. Let's look at your questions:
1. Should I be using only one load balancer server?
2. ...will clients be able to connect to all the streams?
3. We'll have to customize the startup package for the edge servers for each edge-origin.
The key thing to understand is that the loadbalancer concept is independent of the LiveRepeater (origin/edge-origin/edge) concept.
Regarding best practices for dynamic IP addresses: short answer, don't do that. Long answer: dynamic IP addresses don't fit the server paradigm well, so you need a workaround. For some use cases you can use domain names and set a short TTL at your registrar, changing the A record to point to your new servers. Or you could include a script in the edge-origin and edge startup packages that checks a webservice on your main server for the information each instance needs to configure itself; then you never have to alter the startup packages. Also, on AWS an Elastic IP can be used to keep the same IP address across server instantiations.
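As a rough sketch of that self-configuration idea: the endpoint URL and the `{{ORIGIN_HOST}}` template token below are hypothetical; the point is that the instance asks your main server for its assigned origin at boot, then fills it into its own config before Wowza starts.

```python
import urllib.request

# Hypothetical webservice on your main server that returns the address this
# new instance should point at (e.g. the edge-origin assigned to this edge).
CONFIG_URL = "http://main.example.com/edge-config"

def fetch_assigned_origin(url=CONFIG_URL):
    """Ask the central webservice which origin host this instance should use."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("ascii").strip()

def render_application_xml(template, origin_host):
    """Fill the assigned origin host into an Application.xml template."""
    return template.replace("{{ORIGIN_HOST}}", origin_host)
```

Run something like this from the instance's startup package, write the result over the edge's Application.xml, then launch Wowza; the same server image then works for every edge-origin group.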
#1: Yes, you will use one loadbalancer server (listener). This is the server that your domain name points to; the domain name you want customers to use for your service.
-It is possible to have nested loadbalancer listeners, for example if you need to use geolocation to keep certain clients within certain geographic groups of edge servers.
#2: Yes, if that is your goal. Clients can connect to whatever origin or edge-origin streams you've referenced in your edge application.
#3: That's correct. When you start a new edge application, you have to tell it what origin server to connect to.
To recap: Your loadbalancer senders point to your single loadbalancer listener. Your edge applications point to their group's edge-origin applications. Your edge-origin applications point to your main origin server.
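As a concrete sketch of the "edge points at its edge-origin" piece: an edge application's Application.xml sets the liverepeater-edge stream type and an origin URL, roughly as below. The host and application names here are placeholders; check the element names against your Wowza version's Application.xml.

```xml
<!-- Edge Application.xml (fragment) - placeholder host and application names -->
<Streams>
    <StreamType>liverepeater-edge</StreamType>
</Streams>
<Repeater>
    <OriginURL>rtmp://edge-origin-1.example.com/liveorigin</OriginURL>
</Repeater>
```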
I'm not sure how Rackspace cloud servers are started, but EC2 servers can be started manually through a web interface (the AWS Console) or via the EC2 Tools API, and I would assume Rackspace has an equivalent for both. They must also have a way of imaging a server: an image with Wowza installed and configured as an edge and load balancer sender, with .stream files in place, and so on, such that when it starts up, the LB sender begins reporting to the LB listener, which then starts referring client requests to it. A new instance in this scenario gets new connections first because it is the least-loaded edge.
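On the ".stream files in place" part: a .stream file is just a text file whose contents are the source URI for that stream, so one can be baked into the image per stream. The host and stream names below are placeholders:

```text
rtmp://edge-origin-1.example.com/liveorigin/myStream
```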
There are two general ways to point a newly started edge at its origin:
1. Via configuration: setting the origin hostname in your Application.xml as soon as you have the origin IP available.
2. Via code: since there's no built-in way for your server instances to "see" each other as they are turned up, you'll need to build your own modules for this. One option is to set up a socket service (outside of Wowza) that relays messages between instances as they are turned up and down. Another option is to use a key-value store with pub/sub functionality (such as Redis), and have each of your servers subscribe to the messages on the store and publish a message as it comes up.
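As a sketch of the second option, here is the registry logic with an in-memory stand-in for the store; in production you would replace the dictionary with Redis pub/sub channels and keys. All names here are illustrative, not part of any Wowza API.

```python
class ServerRegistry:
    """Tracks which servers of each role are up, so a new edge can pick an
    edge-origin to attach to. In production, back this with a shared store
    such as Redis using its pub/sub support."""

    def __init__(self):
        self.servers = {}       # role -> set of host addresses currently up
        self.subscribers = []   # callbacks fired on every up/down message

    def subscribe(self, callback):
        """Register a callback(event, role, host) for up/down messages."""
        self.subscribers.append(callback)

    def publish(self, event, role, host):
        """A server announces itself; event is 'up' or 'down'."""
        if event == "up":
            self.servers.setdefault(role, set()).add(host)
        else:
            self.servers.get(role, set()).discard(host)
        for callback in self.subscribers:
            callback(event, role, host)

    def pick(self, role):
        """Choose an available server of the given role (lowest address wins
        here; a real version might pick the least-loaded one)."""
        hosts = sorted(self.servers.get(role, ()))
        return hosts[0] if hosts else None
```

A new edge-origin would publish an "up" message as it boots, and each starting edge would call something like `pick("edge-origin")` to decide which origin URL to write into its Application.xml.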