Monday, July 7, 2014

Load Balancing with OpenStack

One of the key strengths of cloud applications is high availability, which ensures that applications and services are accessible to users all the time with as little downtime as possible. The backbone of high availability is load balancing, which is also offered as a cloud service model known as LBaaS (Load Balancing as a Service). In my earlier article (see "Configuring Load Balancers in OpenStack" below), I explained how to configure your OpenStack setup to enable load balancing. In this article, we will see how to load balance services in OpenStack using some basic examples.

Let's begin with some concepts of load balancing in OpenStack -
VIP - it acts as a listener for clients, receiving requests on behalf of the services being load balanced. It consists of an IP address and a port on which it listens.

LB Pool - it is a pool of VMs that are being load balanced. The LB pool is responsible for deciding which member will process a request arriving at the VIP. Each VIP has one pool.

Member - it is the application/server that is being load balanced.

Health Monitor - it is responsible for identifying which members are fit to process requests coming to the VIP and which are not. A pool can be associated with many monitors. The following types of health monitors are supported - PING, TCP, HTTP, HTTPS.

Below are the high-level steps used for load balancing in OpenStack -
1. Create a pool.
2. Create one or more members to be added to the pool.
3. Create one or more health monitors to be associated with the pool.
4. Add the members to the pool.
5. Associate the health monitors with the pool.
6. Create a VIP.
7. Associate the VIP with the pool.

Let's take the example of load balancing the httpd service, where we will send wget or curl requests to the VIP and, in return, get back the host names of the members being load balanced.

1. Create a pool
Log in to the dashboard and navigate to the Load Balancer section to create a new pool.

While creating the pool we can specify the provider (e.g. HAProxy), the subnet to be associated with the pool, the protocol, and the load-balancing method.
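The same pool can also be created from the command line. A minimal sketch using the neutron LBaaS (v1) CLI, where the pool name http-pool and the <subnet-id> placeholder are illustrative values:

neutron lb-pool-create --name http-pool --lb-method ROUND_ROBIN --protocol HTTP --subnet-id <subnet-id>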

2. Add members to pool
In this step we will add the member VMs to the pool. Here I have added two members to the pool.
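If you prefer the CLI, the equivalent calls look roughly like this, assuming the two member VMs have the addresses 10.0.0.3 and 10.0.0.4 and the pool is the http-pool created earlier (both are example values):

neutron lb-member-create --address 10.0.0.3 --protocol-port 80 http-pool
neutron lb-member-create --address 10.0.0.4 --protocol-port 80 http-pool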


3. Create a health monitor

The health monitor periodically inspects the members of the pool to check whether they are available and healthy enough to process requests.
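A rough CLI equivalent, creating an HTTP monitor that probes every 5 seconds, times out after 3 seconds, and marks a member down after 3 failed checks (the values are only an example):

neutron lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3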


4. Associate the health monitor with the pool
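This step ties the monitor created above to the pool so that its checks are run against the pool members. A minimal CLI sketch, assuming <healthmonitor-id> is the ID returned by the previous command and http-pool is the example pool name used earlier:

neutron lb-healthmonitor-associate <healthmonitor-id> http-pool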

5. Create VIP

The final step is to create the virtual IP address (VIP), which will handle the incoming traffic. The VIP should come from the same subnet as that of the pool. It may be allocated a public (floating) IP so that it is reachable from the external network.
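A rough CLI equivalent, where http-vip is an illustrative name and <subnet-id> is the same subnet used for the pool; the --address flag is optional, and if it is omitted an address is picked from the subnet:

neutron lb-vip-create --name http-vip --protocol HTTP --protocol-port 80 --subnet-id <subnet-id> --address 10.0.0.10 http-pool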

We have now configured a load balancer over two VMs, each running the httpd service, in order to cater to requests coming to the VIP.

Testing

Let's hit the VIP (10.0.0.10) and see the responses:

for i in {1..4} ; do curl -w "\n" 10.0.0.10 ; done
vm1
vm2
vm1
vm2

We can see the responses from the individual VMs coming back in a round-robin manner.

Next, you can disable one of the members of the pool and try the above command again. You will observe that despite one VM going down, requests to the VIP are still fulfilled.
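One simple way to simulate a member going down (assuming you can log in to one of the member VMs) is to stop the httpd service on it and re-run the curl loop; once the health monitor marks that member down, only the remaining VM's host name should come back:

# on one of the member VMs, e.g. vm2
sudo service httpd stop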

Hope it helped!

Configuring Load Balancers in OpenStack


After you install OpenStack through RDO (one of the most widely used installation methods), the load balancer is not enabled by default. To enable the load balancer in OpenStack Havana or Icehouse, the following steps need to be followed:
Load balancing is achieved with HAProxy, which is a fast and reliable solution for high availability and load balancing. HAProxy needs to be installed in order to enable load balancing in OpenStack.
To install HAProxy on CentOS 5/RHEL 5 you need to add the EPEL repository via
# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

Next, install HAProxy -
# yum install haproxy

The following changes are required in the OpenStack configuration files:
1. Edit /etc/neutron/neutron.conf and add the following in the [DEFAULT] section:
      service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

   Also, comment out the following line in the [service_providers] section (it is set in neutron-dist.conf in the next step):

 service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default


2. Set the HAProxy service provider in /usr/share/neutron/neutron-dist.conf:

service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

3. Edit the /etc/neutron/lbaas_agent.ini file and add the HAProxy device driver:

device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

4. The interface_driver (in the same lbaas_agent.ini file) will depend on the core L2 plugin being used:

For OpenVSwitch:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

For linuxbridge:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
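If you are not sure which core plugin your deployment is using, one quick way to check is to look at the core_plugin setting in neutron.conf, for example:

# grep core_plugin /etc/neutron/neutron.conf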

5. Make the following change in the /etc/openstack-dashboard/local_settings file so that the Load Balancer section appears in the dashboard (on newer releases this dictionary is named OPENSTACK_NEUTRON_NETWORK):

OPENSTACK_QUANTUM_NETWORK = {
    'enable_lb': True
}

6. Restart the services 
sudo service neutron-lbaas-agent start
sudo service neutron-server restart
sudo service httpd restart
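Once the services are up, you can verify that the LBaaS agent has registered itself with Neutron (this assumes the neutron CLI is available and admin credentials have been sourced):

# service neutron-lbaas-agent status
# neutron agent-list | grep -i loadbalancer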

7. Log in to the dashboard to see the Load Balancer section.