One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

Anyway, from an architecture perspective, we want a highly available Graylog deployment, aka Graylog HA, about which there is very little documentation. So from a technical perspective you’ve got:

  • Incoming log traffic load-balancer
  • Multiple Graylog servers
  • Multiple MongoDB nodes (also Graylog servers)
  • Multiple ElasticSearch nodes

In our case, we chose to use:

  • A NetScaler to listen on UDP 514 and also host the SSL certificate.
    • The NetScaler will also do an API call against the Graylog servers to verify health.
    • The NetScaler will then pass the traffic to the active Graylog server on the active input that’s listening on UDP 5140.
  • The two Graylog servers will be part of a MongoDB cluster, and then a third VM will be used as a MongoDB witness server.
  • Three servers will be used as ElasticSearch nodes.

From a configuration management perspective, we wanted to leverage Puppet to do the installation of the MongoDB cluster.

The puppet manifests we used are:
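The original manifests aren’t reproduced here; as a minimal sketch, assuming the puppetlabs/mongodb Forge module (node names, IPs and the replica set name are placeholders), they would look roughly like this:

    # Sketch only -- assumes the puppetlabs/mongodb module; names and IPs are placeholders.
    node 'graylog01.example.com', 'graylog02.example.com' {
      class { 'mongodb::globals':
        manage_package_repo => true,
      } ->
      class { 'mongodb::server':
        bind_ip => ['0.0.0.0'],
        replset => 'graylog',
      } ->
      class { 'mongodb::client': }
    }

    node 'mongowitness01.example.com' {
      # Witness/arbiter node -- votes in elections but holds no data.
      class { 'mongodb::globals':
        manage_package_repo => true,
      } ->
      class { 'mongodb::server':
        bind_ip => ['0.0.0.0'],
        replset => 'graylog',
      }
    }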

Using OpenStack Open vSwitch with VLANs removes a lot of the trickery involved with using public and private IPs. That way each VM gets its own real IP address assigned to it.

In this case, our network layout looks as such:

Logical Network Layout

That being said, the VMs still need a way to get back to 169.254.169.254 for access to the OpenStack metadata service. In this case, the fix was to add a static route to the VM and re-configure the neutron-dhcp-agent service as shown below.

On the VM template, add /etc/sysconfig/network-scripts/route-eth0:
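The exact route depends on your subnet; as a sketch, using 10.0.0.1 as a placeholder for the tenant network gateway/DHCP address:

    # /etc/sysconfig/network-scripts/route-eth0
    # 10.0.0.1 is a placeholder -- use the gateway/DHCP address of your tenant network.
    169.254.169.254/32 via 10.0.0.1 dev eth0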

Then add the following to /etc/neutron/dhcp-agent.ini:
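In our case the setting that matters is enable_isolated_metadata, which has the DHCP agent serve the metadata service for these networks:

    # /etc/neutron/dhcp-agent.ini
    [DEFAULT]
    # Let the DHCP agent proxy the metadata service for isolated/VLAN networks.
    enable_isolated_metadata = True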

And then restart the Neutron DHCP agent service:
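On a RHEL/CentOS-style network node of that era, this amounts to:

    service neutron-dhcp-agent restart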

Once this is finished, you should then be able to add the updated image to glance and deploy without issue.

Configuring Glance, Cinder and Nova for OpenStack Havana to run on GlusterFS is actually quite simple; assuming that you’ve already got GlusterFS up and running.

So let’s first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for Cinder, Glance and Nova.
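The actual volume listing isn’t shown here; trimmed `gluster volume info` output for a layout like this would look roughly as follows (brick hosts and paths are placeholders):

    # gluster volume info (trimmed; hostnames and brick paths are placeholders)
    Volume Name: cinder
    Type: Replicate
    Status: Started
    Bricks:
    Brick1: gluster01:/bricks/cinder
    Brick2: gluster02:/bricks/cinder
    Brick3: gluster03:/bricks/cinder

    Volume Name: glance
    ...

    Volume Name: nova
    ...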

And I have each of these filesystems mounted.
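As a sketch, the Glance and Nova volumes could be mounted under assumed /mnt/gluster/* mount points (Cinder mounts its own share under /var/lib/cinder/mnt once configured, as shown at the end):

    # Assumed mount points -- adjust to your environment.
    mount -t glusterfs 127.0.0.1:/glance /mnt/gluster/glance
    mount -t glusterfs 127.0.0.1:/nova   /mnt/gluster/nova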

Now that I have OpenStack Cinder, Nova and Glance installed, I can configure them to use my Gluster mounts.

Modify /etc/cinder/cinder.conf to reflect the Gluster configuration:
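The Havana GlusterFS driver is pointed at a shares file and a mount point base, along these lines:

    # /etc/cinder/cinder.conf
    [DEFAULT]
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/shares.conf
    glusterfs_mount_point_base = /var/lib/cinder/mnt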

Also make sure that /etc/cinder/shares.conf has the Gluster share listed:
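In this setup that’s the local Gluster volume:

    # /etc/cinder/shares.conf
    127.0.0.1:/cinder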

Then, create the images folder for Glance.
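Assuming the Glance volume is mounted at /mnt/gluster/glance (a placeholder path), that’s simply:

    mkdir -p /mnt/gluster/glance/images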

Then, modify the file permissions so that it’s usable by Glance.
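For example, hand it to the glance user:

    chown -R glance:glance /mnt/gluster/glance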

Modify the Glance configuration to reflect the Gluster mount points in /etc/glance/glance-api.conf:
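The relevant option is the filesystem store data directory; using the assumed mount point from above:

    # /etc/glance/glance-api.conf
    [DEFAULT]
    default_store = file
    filesystem_store_datadir = /mnt/gluster/glance/images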

Restart the Glance services:
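On an RDO-style install:

    service openstack-glance-api restart
    service openstack-glance-registry restart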

Create the Nova folder structure:
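Again assuming a placeholder /mnt/gluster/nova mount point:

    mkdir -p /mnt/gluster/nova/instances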

Then, modify the file permissions so that it’s usable by Nova.
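For example, hand it to the nova user:

    chown -R nova:nova /mnt/gluster/nova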

Modify the Nova config in /etc/nova/nova.conf to reflect the Gluster mount points:
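The option to change is instances_path; with the assumed mount point:

    # /etc/nova/nova.conf
    [DEFAULT]
    instances_path = /mnt/gluster/nova/instances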

Restart Nova:
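On an RDO-style install, restarting the compute service is the important part (restart the other Nova services too if you changed their configuration):

    service openstack-nova-compute restart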

Verify the OpenStack services:
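For example, using the openstack-utils helper or nova-manage directly:

    openstack-status
    # or
    nova-manage service list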

You should now see that Cinder has mounted the GlusterFS share when issuing the mount command:

    127.0.0.1:/cinder on /var/lib/cinder/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

The main issue with installing OSes on these Winterfell servers is the complete lack of a video card :)

Your only option is the serial console, using the USB-to-serial header adapter.

So the task is to install Red Hat or CentOS (or other RHEL-style OSes) via PXE, with serial-console-only access. To do this, we need to pass boot parameters to the PXE menu entry.

In this particular case, my 1GbE interface on the OpenCompute V3 (Winterfell) servers is recognized as eth2. So my PXE menu entry looks as such:
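The original entry isn’t reproduced in full here; a sketch of a pxelinux entry that redirects output to the first serial port and pins the install interface to eth2 (kernel/initrd paths and the kickstart URL are placeholders) looks like this:

    # In pxelinux.cfg/default -- enable serial output for the boot menu itself:
    serial 0 115200

    label centos-serial
      menu label Install CentOS (serial console)
      kernel images/centos/vmlinuz
      append initrd=images/centos/initrd.img console=ttyS0,115200n8 ksdevice=eth2 ks=http://pxe.example.com/ks.cfg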