One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

Anyway, from an architecture perspective, we want a highly available Graylog deployment, aka Graylog HA, about which there is little documentation. So from a technical perspective you’ve got:

  • Incoming log traffic load-balancer
  • Multiple Graylog servers
  • Multiple MongoDB nodes (also Graylog servers)
  • Multiple ElasticSearch nodes

In our case, we chose to use:

  • A NetScaler to listen on UDP 514 and also host the SSL certificate.
    • The NetScaler will also make an API call against the Graylog servers to verify health.
    • The NetScaler will then pass the traffic to the active Graylog server on the active input that’s listening on UDP 5140.
  • The two Graylog servers will be part of a MongoDB cluster, and then a third VM will be used as a MongoDB witness server.
  • Three servers will be used as ElasticSearch nodes.

From a configuration management perspective, we wanted to leverage Puppet to do the installation of the MongoDB cluster.

The Puppet manifests we used install MongoDB on each node and configure the replica set.
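
A minimal sketch of what that can look like, assuming the puppetlabs-mongodb module (its mongodb::globals and mongodb::server classes plus the mongodb_replset type); the hostnames and replica set name are hypothetical:

    # Sketch, not the actual manifest: install MongoDB and join it to
    # a replica set. Applied to both Graylog servers and the witness VM.
    class { 'mongodb::globals':
      manage_package_repo => true,
    }
    ->
    class { 'mongodb::server':
      bind_ip => ['0.0.0.0'],
      replset => 'graylog',
    }

    # Declared once all three members are reachable; the third VM only
    # arbitrates elections (the MongoDB witness server).
    mongodb_replset { 'graylog':
      ensure  => present,
      members => ['graylog01:27017', 'graylog02:27017', 'witness01:27017'],
      arbiter => 'witness01:27017',
    }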

On our OCP Winterfell nodes running CentOS 6, the 10GB Mellanox NICs show up as eth0 and eth1, while the 1GB management interface shows up as eth2. We are also using Brocade 10GB top-of-rack switches, so configuring LLDP was necessary for the servers to advertise themselves to the upstream switches. To do this, we use the LLDPAD package available in the @base CentOS repo.

The next step is to create a Puppet module/manifest to do the following (see the sketch after this list):

  1. Install the LLDPAD RPM from YUM.
  2. Start the LLDPAD service.
  3. Ensure that the LLDPAD service is set to autostart at boot.
  4. Configure eth0 and eth1 to broadcast their LLDP status to the upstream switches.
  5. Ensure that it only runs once, not every time puppet agent runs.
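
A minimal sketch of such a module; the class name and the exact lldptool invocations (lldptool ships with the lldpad package) are assumptions, not the actual manifest:

    class profile::lldp {
      # 1. Install the LLDPAD RPM from the @base repo via YUM.
      package { 'lldpad':
        ensure => installed,
      }

      # 2. and 3. Keep the service running and enabled at boot.
      service { 'lldpad':
        ensure  => running,
        enable  => true,
        require => Package['lldpad'],
      }

      # 4. Enable LLDP transmit/receive on the 10GB interfaces.
      # 5. The `unless` guard stops the exec from firing on every
      #    puppet agent run once the setting is in place.
      exec { 'lldp-enable-eth0':
        command  => 'lldptool set-lldp -i eth0 adminStatus=rxtx',
        unless   => 'lldptool get-lldp -i eth0 adminStatus | grep -q rxtx',
        path     => ['/usr/sbin', '/usr/bin', '/bin'],
        provider => shell,
        require  => Service['lldpad'],
      }

      exec { 'lldp-enable-eth1':
        command  => 'lldptool set-lldp -i eth1 adminStatus=rxtx',
        unless   => 'lldptool get-lldp -i eth1 adminStatus | grep -q rxtx',
        path     => ['/usr/sbin', '/usr/bin', '/bin'],
        provider => shell,
        require  => Service['lldpad'],
      }
    }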


I was building an NFS server for our users’ home directories to work with our FreeIPA implementation, and instead of setting up a logical volume, filesystem and mount point manually, I decided to do it via Puppet. Since Puppet is our configuration management engine of choice, I might as well make something that’s reusable, right?

In our environment, we use a Puppet module called Profile. This Profile module allows us to create Puppet manifests for individual servers, something like this:
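
For example (the hostname and classes here are made up for illustration):

    # profile/manifests/web01.pp -- one manifest per server
    class profile::web01 {
      include ::profile::base
      include ::apache
    }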

This allows us to use one specific manifest for each server rather than each server having its own independent module.

For this server (nfs.pp), I’m going to use the puppetlabs-lvm and haraldsk/nfs Puppet modules. I then create my nfs.pp manifest in my Profile module’s manifests directory, and it breaks down piece by piece like this:

Here I specify the name of the manifest, and any includes.
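
Presumably an opening block along these lines (the include of the haraldsk/nfs server class is an assumption):

    class profile::nfs {
      include ::nfs::server
      # ... resources below ...
    }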

Here I ensure that /srv/nfs is a directory that gets created or already exists on the filesystem.
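
Something like a standard file resource:

    file { '/srv/nfs':
      ensure => directory,
    }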

Here I specify a Physical Volume (/dev/sdb1), Volume Group (vg_data), Logical Volume (nfs), and the LV size (480G). In this module I can also specify the mount point (/srv/nfs) and make it required (true).
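
A sketch of that, assuming puppetlabs-lvm’s physical_volume and volume_group types plus its lvm::logical_volume defined type; mountpath_require => true ties the mount to the File['/srv/nfs'] resource above:

    physical_volume { '/dev/sdb1':
      ensure => present,
    }

    volume_group { 'vg_data':
      ensure           => present,
      physical_volumes => '/dev/sdb1',
      require          => Physical_volume['/dev/sdb1'],
    }

    lvm::logical_volume { 'nfs':
      volume_group      => 'vg_data',
      size              => '480G',
      mountpath         => '/srv/nfs',
      mountpath_require => true,
      require           => Volume_group['vg_data'],
    }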

Here I create the entry in /etc/exports for /srv/nfs with the appropriate options that I wanted.
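
With haraldsk/nfs that’s an nfs::server::export resource; the client spec and export options below are placeholders, since the exact ones aren’t reproduced here:

    nfs::server::export { '/srv/nfs':
      ensure  => 'mounted',
      clients => '10.0.0.0/24(rw,sync,no_root_squash)',
    }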

Here is the full nfs.pp Puppet manifest:
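
Assembled from the pieces above (same assumptions and placeholders as noted):

    # profile/manifests/nfs.pp
    class profile::nfs {

      include ::nfs::server

      # The export directory must exist before the LV can be mounted on it.
      file { '/srv/nfs':
        ensure => directory,
      }

      # Build the LVM stack on /dev/sdb1 and mount it at /srv/nfs.
      physical_volume { '/dev/sdb1':
        ensure => present,
      }

      volume_group { 'vg_data':
        ensure           => present,
        physical_volumes => '/dev/sdb1',
        require          => Physical_volume['/dev/sdb1'],
      }

      lvm::logical_volume { 'nfs':
        volume_group      => 'vg_data',
        size              => '480G',
        mountpath         => '/srv/nfs',
        mountpath_require => true,
        require           => Volume_group['vg_data'],
      }

      # Export the directory; the client spec is a placeholder.
      nfs::server::export { '/srv/nfs':
        ensure  => 'mounted',
        clients => '10.0.0.0/24(rw,sync,no_root_squash)',
        require => Lvm::Logical_volume['nfs'],
      }
    }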