Build a 3-node MongoDB cluster using Puppet (for use with High Availability Graylog in this case)

One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

Anyway, from an architecture perspective, we want a highly available Graylog deployment, aka Graylog HA, about which there is little documentation. From a technical perspective you’ve got:

  • Incoming log traffic load-balancer
  • Multiple Graylog servers
  • Multiple MongoDB nodes (also Graylog servers)
  • Multiple ElasticSearch nodes

In our case, we chose to use:

  • A NetScaler to listen on UDP 514 and also host the SSL certificate.
    • The NetScaler will also do an API call against the Graylog servers to verify health (see the example check just after this list).
    • The NetScaler will then pass the traffic to the active Graylog server on the active input that's listening on UDP 5140.
  • The two Graylog servers will be part of a MongoDB cluster, and then a third VM will be used as a MongoDB witness server.
  • Three servers will be used as ElasticSearch nodes.
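
The health check mentioned above works because Graylog exposes a load balancer status endpoint on its REST API. The exact port and path depend on your Graylog version (older releases listen on 12900, newer ones serve the API on 9000 under /api), so treat this as a sketch, reusing the node names from the Puppet manifest below:

curl -s http://node1.domain.local:12900/system/lbstatus
# returns "ALIVE" while the node should receive traffic, and "DEAD" (or an error) when it should be taken out of rotation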

From a configuration management perspective, we wanted to leverage Puppet to do the installation of the MongoDB cluster.

The Puppet manifest we used is:

class encore_rp::profile::mongopeer {

  # Data directories for MongoDB
  file { ['/data', '/data/db']:
    ensure => 'directory',
  }

  # Install the Java JRE
  class { 'java':
    distribution => 'jre',
  }

  # MongoDB client tools
  class { '::mongodb::client': }

  # MongoDB server, bound to the node's IP and joined to the 'graylog' replica set
  class { '::mongodb::server':
    ensure  => present,
    auth    => false,
    port    => 27018,
    bind_ip => $::ipaddress,
    replset => 'graylog',
  }

  # Define the replica set across all three nodes
  mongodb_replset { 'graylog':
    ensure          => present,
    initialize_host => 'node1.domain.local',
    members         => ['node1.domain.local:27018', 'node2.domain.local:27018', 'node3.domain.local:27018'],
  }

  # Database and user for Graylog
  mongodb::db { 'graylog':
    user          => 'graylog',
    password_hash => 'hashed password',
  }

}
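
Once Puppet has converged on all three nodes, it is worth confirming that the replica set actually formed. A quick check from any node, using the hostname and port from the manifest above, might look like this:

mongo --host node1.domain.local --port 27018 --eval 'printjson(rs.status())'
# expect one member in state PRIMARY and the other two in state SECONDARY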

Using haproxy as a load balancer for OpenStack services on RedHat OpenStack

Configuring RedHat OpenStack to be highly available is not like VMware, where you just enable a couple of features, check some boxes and voila! Quite the contrary… In fact, configuring RedHat OpenStack to be highly available is quite elegant.

Let's look at it like this. You have quite a few services that make up OpenStack as a product: Keystone (auth), Neutron (networking), Glance (image storage), Cinder (volume storage), Nova (compute/scheduler) and MySQL (database). Some of these services are made highly available via failover, and some via cloning; aka having multiple copies of the same service distributed throughout the deployment.

[Image: OpenStack HA]

In this case, we need to configure a cloned pair of load balancers to use for our OpenStack services. This will allow us to reference a single Virtual IP that is load balanced across all cloned service nodes. For this functionality, we're going to use haproxy as our load balancing software and pacemaker for clustering.

[Image: Load Balanced OpenStack Services]

The first step is to add the RHEL HA and LB yum channels with the following:

rhn-channel --user username --password passw0rd -a -c rhel-x86_64-server-ha-6 -c rhel-x86_64-server-lb-6
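
If you want to double-check that the channels were added before installing anything, rhn-channel can list what the system is subscribed to:

rhn-channel --list
# the output should now include rhel-x86_64-server-ha-6 and rhel-x86_64-server-lb-6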

Then simply install the HA and LB packages with:

yum install -y pacemaker pcs cman resource-agents fence-agents haproxy

The next step is to configure /etc/haproxy/haproxy.cfg to look like this on both of your load balancer nodes:

global
    daemon
defaults
    mode tcp
    maxconn 10000
    timeout connect 10s
    timeout client 10s
    timeout server 10s
frontend qpidd-vip
    bind 172.16.56.227:5672
    default_backend qpidd-mrg
frontend keystone-admin-vip
    bind 172.16.56.227:35357
    default_backend keystone-admin-api
frontend keystone-public-vip
    bind 172.16.56.227:5000
    default_backend keystone-public-api
frontend glance-vip
    bind 172.16.56.227:9191
    default_backend glance-api
frontend glance-registry-vip
    bind 172.16.56.227:9292
    default_backend glance-registry-api
frontend cinder-vip
    bind 172.16.56.227:8776
    default_backend cinder-api
frontend neutron-vip
    bind 172.16.56.227:9696
    default_backend neutron-api
frontend nova-vnc-novncproxy
    bind 172.16.56.227:6080
    default_backend nova-vnc-novncproxy
frontend nova-vnc-xvpvncproxy
    bind 172.16.56.227:6081
    default_backend nova-vnc-xvpvncproxy
frontend nova-metadata-api
    bind 172.16.56.227:8775
    default_backend nova-metadata
frontend nova-api-vip
    bind 172.16.56.227:8774
    default_backend nova-api
frontend horizon-vip
    bind 172.16.56.227:80
    default_backend horizon-api
frontend ceilometer-vip
    bind 172.16.56.227:8777
    default_backend ceilometer-api
frontend heat-cfn-vip
    bind 172.16.56.227:8000
    default_backend heat-cfn-api
frontend heat-cloudw-vip
    bind 172.16.56.227:8003
    default_backend heat-cloudw-api
frontend heat-srv-vip
    bind 172.16.56.227:8004
    default_backend heat-srv-api
backend qpidd-mrg
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:5672 check inter 10s
    server oscon2.domain.net 172.16.56.225:5672 check inter 10s
    server oscon3.domain.net 172.16.56.226:5672 check inter 10s
backend keystone-admin-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:35357 check inter 10s
    server oscon2.domain.net 172.16.56.225:35357 check inter 10s
    server oscon3.domain.net 172.16.56.226:35357 check inter 10s
backend keystone-public-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:5000 check inter 10s
    server oscon2.domain.net 172.16.56.225:5000 check inter 10s
    server oscon3.domain.net 172.16.56.226:5000 check inter 10s
backend glance-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:9191 check inter 10s
    server oscon2.domain.net 172.16.56.225:9191 check inter 10s
    server oscon3.domain.net 172.16.56.226:9191 check inter 10s
backend glance-registry-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:9292 check inter 10s
    server oscon2.domain.net 172.16.56.225:9292 check inter 10s
    server oscon3.domain.net 172.16.56.226:9292 check inter 10s
backend cinder-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8776 check inter 10s
    server oscon2.domain.net 172.16.56.225:8776 check inter 10s
    server oscon3.domain.net 172.16.56.226:8776 check inter 10s
backend neutron-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:9696 check inter 10s
    server oscon2.domain.net 172.16.56.225:9696 check inter 10s
    server oscon3.domain.net 172.16.56.226:9696 check inter 10s
backend nova-vnc-novncproxy
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:6080 check inter 10s
    server oscon2.domain.net 172.16.56.225:6080 check inter 10s
    server oscon3.domain.net 172.16.56.226:6080 check inter 10s
backend nova-vnc-xvpvncproxy
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:6081 check inter 10s
    server oscon2.domain.net 172.16.56.225:6081 check inter 10s
    server oscon3.domain.net 172.16.56.226:6081 check inter 10s
backend nova-metadata
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8775 check inter 10s
    server oscon2.domain.net 172.16.56.225:8775 check inter 10s
    server oscon3.domain.net 172.16.56.226:8775 check inter 10s
backend nova-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8774 check inter 10s
    server oscon2.domain.net 172.16.56.225:8774 check inter 10s
    server oscon3.domain.net 172.16.56.226:8774 check inter 10s
backend horizon-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:80 check inter 10s
    server oscon2.domain.net 172.16.56.225:80 check inter 10s
    server oscon3.domain.net 172.16.56.226:80 check inter 10s
backend ceilometer-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8777 check inter 10s
backend heat-cfn-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8000 check inter 10s
    server oscon2.domain.net 172.16.56.225:8000 check inter 10s
    server oscon3.domain.net 172.16.56.226:8000 check inter 10s
backend heat-cloudw-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8003 check inter 10s
    server oscon2.domain.net 172.16.56.225:8003 check inter 10s
    server oscon3.domain.net 172.16.56.226:8003 check inter 10s
backend heat-srv-api
    balance roundrobin
    server oscon1.domain.net 172.16.56.224:8004 check inter 10s
    server oscon2.domain.net 172.16.56.225:8004 check inter 10s
    server oscon3.domain.net 172.16.56.226:8004 check inter 10s
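
Before handing haproxy over to the cluster, it is a good idea to validate the file on both nodes; haproxy can parse the configuration in check-only mode:

haproxy -c -f /etc/haproxy/haproxy.cfg
# "Configuration file is valid" means all of the frontends and backends above parsed cleanly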

The next step is to configure pacemaker on each of the nodes so that it starts haproxy once the cluster resources are configured:

chkconfig pacemaker on
sysctl -w net.ipv4.ip_nonlocal_bind=1
pcs cluster setup --name lb-cluster oslb1.domain.net oslb2.domain.net
pcs cluster start
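
At this point both load balancer nodes should be members of the cluster; a quick sanity check:

pcs status
# oslb1.domain.net and oslb2.domain.net should both be listed as Online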

The next step is to define the cluster resources:

pcs resource defaults resource-stickiness=100
pcs resource create lb-master-vip IPaddr2 ip=172.16.56.227
pcs resource create lb-haproxy lsb:haproxy
pcs resource group add haproxy-group lb-haproxy lb-master-vip
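
Pacemaker will start the group on one of the two nodes. To confirm which node owns the Virtual IP and that haproxy came up alongside it, something like the following (run on the active node) should show the address used in the haproxy config above:

pcs status resources
ip addr show | grep 172.16.56.227
# the VIP should be bound on whichever node is currently running haproxy-group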

Now the only thing left to do is to create the stonith resources:

pcs stonith create fence_oslb1 fence_vmware_soap login=root passwd=vmware action=reboot ipaddr=vcenteripaddress port=/vsandc/vm/oslb1 ssl=1 pcmk_host_list=oslb1.domain.net

pcs stonith create fence_oslb2 fence_vmware_soap login=root passwd=vmware action=reboot ipaddr=vcenteripaddress port=/vsandc/vm/oslb2 ssl=1 pcmk_host_list=oslb2.domain.net
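
With fencing defined, the configuration is complete. Depending on your pcs version, you can list the fence devices to make sure they registered and started:

pcs stonith show
# fence_oslb1 and fence_oslb2 should both appear as Started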

You should now have haproxy running as a clustered service, serving a Virtual IP address for your OpenStack services.