Build a 3-node MongoDB cluster using Puppet (for use with High Availability Graylog in this case)

One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

Anyway, from an architecture perspective, we want a highly available Graylog deployment, aka Graylog HA, about which there is little documentation. So from a technical perspective you’ve got:

  • Incoming log traffic load-balancer
  • Multiple Graylog servers
  • Multiple MongoDB nodes (also Graylog servers)
  • Multiple ElasticSearch nodes

In our case, we chose to use:

  • A NetScaler to listen on UDP 514 and also host the SSL certificate.
    • The NetScaler will also make an API call against the Graylog servers to verify health (see the example check after this list).
    • The NetScaler will then pass the traffic to the active Graylog server on the active input that’s listening on UDP 5140.
  • The two Graylog servers will be part of a MongoDB cluster, and then a third VM will be used as a MongoDB witness server.
  • Three servers will be used as ElasticSearch nodes.
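
Graylog exposes a load-balancer status endpoint that works well for this kind of health probe. A minimal check from the load balancer’s point of view might look like the following; the port (12900 was the Graylog 1.x REST API default) and the exact path should be treated as assumptions and verified against your Graylog version:

# Ask a Graylog node whether it wants to receive traffic.
# A healthy node answers HTTP 200 with the body "ALIVE"; an unhealthy one returns 503/"DEAD".
curl -s -o /dev/null -w "%{http_code}\n" http://node1.domain.local:12900/system/lbstatus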

From a configuration management perspective, we wanted to leverage Puppet to do the installation of the MongoDB cluster.

The Puppet manifest we used is:

class encore_rp::profile::mongopeer {

  # MongoDB data directories
  file { ['/data', '/data/db']:
    ensure => 'directory',
  }

  # Install Java JRE (these nodes also run Graylog)
  class { 'java':
    distribution => 'jre',
  }

  # MongoDB client tools
  class { '::mongodb::client': }

  # MongoDB server, listening on a non-default port and joined to the 'graylog' replica set
  class { '::mongodb::server':
    ensure  => present,
    auth    => false,
    port    => 27018,
    bind_ip => $::ipaddress,
    replset => 'graylog',
  }

  # Define the replica set and its members
  mongodb_replset { 'graylog':
    ensure          => present,
    initialize_host => 'node1.domain.local',
    members         => ['node1.domain.local:27018', 'node2.domain.local:27018', 'node3.domain.local:27018'],
  }

  # Create the graylog database and user
  mongodb::db { 'graylog':
    user          => 'graylog',
    password_hash => 'hashed password',
  }

}
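
Once Puppet has run on all three nodes, the replica set can be checked from any member. A quick sanity check, assuming the port and hostnames used in the manifest above, would be:

# Connect to the mongod on the non-default port and print the replica set status.
# One member should report PRIMARY and the other two SECONDARY.
mongo --host node1.domain.local --port 27018 --eval 'printjson(rs.status())'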

OpenStack cloud-init cannot contact or ping 169.254.169.254 to establish meta-data connection – fix

Using OpenStack Open vSwitch with VLANs removes a lot of the trickery involved with using public and private IPs. That way each VM gets its own real IP address assigned to it.

In this case, our network layout looks as such:

[Diagram: Logical Network Layout]

That being said, the VMs still need a way to get back to 169.254.169.254 for access to the OpenStack metadata service. In this case, the fix was to add a static route to the VM and re-configure the neutron-dhcp-agent service as follows.

On the VM template, add /etc/sysconfig/network-scripts/route-eth0:

169.254.169.254/32 dev eth0

Then add the following to /etc/neutron/dhcp-agent.ini

enable_isolated_metadata = True
enable_metadata_network = False

and then restart the Neutron dhcp agent service:

service neutron-dhcp-agent restart

Once this is finished, you should then be able to add the updated image to glance and deploy without issue.
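
If you want to verify the fix from inside a freshly deployed guest, check that the static route landed and that the metadata service responds; the metadata paths below follow the standard EC2-compatible API:

# Confirm the static route to the metadata address is present on eth0
ip route show | grep 169.254.169.254

# Query the metadata service directly; it should return a list of metadata paths
curl http://169.254.169.254/latest/meta-data/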

Configuring OpenStack Havana Cinder, Nova and Glance to run on GlusterFS

Configuring Glance, Cinder and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you’ve already got GlusterFS up and running.

So let’s first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for Cinder, Glance and Nova.

[root@g1 ~(keystone_admin)]# gluster volume info
Volume Name: cinder
Type: Replicate
Volume ID: d71d0ab7-2c99-41c5-8495-fd68d1571f31
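
For reference, a replicated Gluster volume like the one above can be created with a couple of commands. The brick paths and the second server name (g2) below are illustrative assumptions, not taken from the output above:

# Create a two-way replicated volume for Cinder across two Gluster servers, then start it
gluster volume create cinder replica 2 g1:/bricks/cinder g2:/bricks/cinder
gluster volume start cinder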

OpenCompute Winterfell RHEL/CentOS PXE boot over serial console

The main issue with installing OSes on these Winterfell servers is the complete lack of a video card :)

Your only option is a serial console using the USB-to-serial header adapter.

So the task is to install Red Hat or CentOS (or other RHEL-style OSes) via PXE, with serial-console-only access. To do this, we need to pass boot parameters to the PXE menu entry.

In this particular case, my 1GbE interface on the OpenCompute V3 (Winterfell) servers is recognized as eth2, so my PXE menu entry has to point the installer at that interface and send all of its output to the serial console.
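
A minimal pxelinux menu entry along those lines might look like the sketch below. The serial device and baud rate, the kernel/initrd paths, and the kickstart URL are all assumptions to adjust for your environment, and ksdevice= is the RHEL 6-era anaconda syntax:

# pxelinux.cfg sketch: send the boot menu itself to the first serial port at 115200 baud
serial 0 115200

label centos-serial
  kernel images/centos/vmlinuz
  # console=ttyS0,115200 directs kernel and installer output to the serial console;
  # ksdevice=eth2 points anaconda at the NIC that comes up as eth2
  append initrd=images/centos/initrd.img console=tty0 console=ttyS0,115200 ksdevice=eth2 ks=http://installserver/ks.cfg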