Build a 3-node MongoDB cluster using Puppet (for use with highly available Graylog, in this case)

One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

Anyway, from an architecture perspective, we want a highly available Graylog deployment, aka Graylog HA, about which there is little documentation. So from a technical perspective you’ve got:

  • Incoming log traffic load-balancer
  • Multiple Graylog servers
  • Multiple MongoDB nodes (also Graylog servers)
  • Multiple ElasticSearch nodes

In our case, we chose to use:

  • A NetScaler to listen on UDP 514 and also host the SSL certificate.
    • The NetScaler will also make an API call against the Graylog servers to verify health.
    • The NetScaler will then pass the traffic to the active Graylog server on the active input that’s listening on UDP 5140.
  • The two Graylog servers will be part of a MongoDB cluster, and a third VM will be used as a MongoDB witness server.
  • Three servers will be used as ElasticSearch nodes.
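The NetScaler health probe described above can be pointed at Graylog’s load-balancer status endpoint, which reports whether a node should receive traffic. A minimal sketch, assuming the REST API listens on port 9000 and a placeholder hostname:

```shell
# Query Graylog's load-balancer status endpoint; it answers
# HTTP 200 with "ALIVE" when the node can accept log traffic,
# and 503 with "DEAD" when it should be pulled from rotation.
# "graylog-node" and port 9000 are placeholders for your environment.
curl -s -o /dev/null -w '%{http_code}\n' http://graylog-node:9000/system/lbstatus
```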

From a configuration management perspective, we wanted to leverage Puppet to do the installation of the MongoDB cluster.

The puppet manifests we used are:

class encore_rp::profile::mongopeer {

  file { ['/data', '/data/db']:
    ensure => 'directory',

  # install Java JRE
  class { 'java':
    distribution => 'jre',

  class { '::mongodb::client': }

  class { '::mongodb::server':
    ensure  => present,
    auth    => false,
    port    => 27018,
    bind_ip => $::ipaddress,
    replset => 'graylog',

  mongodb_replset { 'graylog':
    ensure          => present,
    initialize_host => 'node1.domain.local',
    members         => ['node1.domain.local:27018', 'node2.domain.local:27018', 'node3.domain.local:27018'],

  mongodb::db { 'graylog':
    user          => 'graylog',
    password_hash => 'hashed password',
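Once Puppet has applied on all three nodes, the replica set can be checked from the mongo shell. This is a sketch using the placeholder node names and the non-default port from the manifest:

```shell
# Connect to the replica set on the custom port from the manifest
# and print each member's state; one node should report PRIMARY
# and the other two SECONDARY (or ARBITER for the witness).
mongo --host node1.domain.local --port 27018 \
  --eval 'rs.status().members.forEach(function(m) { print( + " " + m.stateStr); })'
```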


Building a two-node Elasticsearch cluster for Graylog using Puppet


Two servers, in this case:

  • –
  • –
  • 8 vCPU
  • 16GB vMem
  • A second hard disk of 500GB
    • /dev/sdb1
      • formatted XFS and mounted as /var/lib/elasticsearch
  • Hosts file configured to reference each other
  • The following Puppet modules are installed: saz-limits, puppetlabs-java, and elasticsearch-elasticsearch
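The modules in the last bullet come from the Puppet Forge and can be installed on the Puppet master like so (versions omitted here; pin them as appropriate for your environment):

```shell
# Install the three Forge modules the manifest below depends on
puppet module install saz-limits
puppet module install puppetlabs-java
puppet module install elasticsearch-elasticsearch
```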

Web Interface:

We use the KOPF Elasticsearch plugin to provide a web interface. Install the KOPF plugin:

./elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf/

It should now be available on each node at /_plugin/kopf/#!/cluster
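Short of the UI, cluster state can also be checked from the command line on any node, assuming Elasticsearch's default HTTP port of 9200:

```shell
# Ask the local node for overall cluster health; "green" means
# all primary shards and replicas are allocated.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```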

Elasticsearch Installation

Use the following puppet manifest to configure the nodes.

#filename :graylog.pp
#description :This is the base puppet manifest to configure an elastic search cluster
#author :Eric Sarakaitis
#date :1/26/17
#this is for graylog
class profiles::graylog {

  $config_hash = {
    'ES_HEAP_SIZE'      => '8g',
    'MAX_LOCKED_MEMORY' => 'unlimited',

  # configure memory limits
  class { 'limits': }

  limits::limits { '99-elasticsearch-memory.conf':
    ensure     => present,
    user       => 'username',
    limit_type => 'memlock',
    both       => unlimited,

  # install Java JRE
  class { 'java':
    distribution => 'jre',

  # install elasticsearch cluster
  class { 'elasticsearch':
    init_defaults     => $config_hash,
    version           => '2.3.5',
    restart_on_change => true,
    manage_repo       => true,
    repo_version      => '2.x',
    datadir           => '/var/lib/elasticsearch',
    config            => {
      ''                       => 'graylog',
      ''                                 => '150mb',
      'script.inline'                    => false,
      'script.indexed'                   => false,
      'script.file'                      => false,
      ''                        => $::hostname,
      ''                     => $::ipaddress,
      'network.publish_host'             => $::ipaddress,
      'http.enabled'                     => true,
      'node.master'                      => true,
      ''                        => true,
      'index.number_of_shards'           => '2',
      'index.number_of_replicas'         => '1',
      'discovery.zen.ping.unicast.hosts' => ',,',
      ''                                 => false,
      'discovery.zen.minimum_master_nodes' => '1',

  # define the node instance
  elasticsearch::instance { 'graylog':
    config => { '' => $::hostname }

  # install KOPF management UI
  elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
    instances => 'graylog'
  # closing frenchie
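After Puppet has run on both nodes, you can confirm they joined the same cluster from either one (default HTTP port 9200 assumed; in the output, the elected master is marked with an asterisk):

```shell
# List every node in the cluster along with its master/data roles
curl -s 'http://localhost:9200/_cat/nodes?v'
```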

Graylog Configuration

Install and configure the graylog appliance: –

Then edit /opt/graylog/conf/graylog.conf

Configure each of the node IPs in elasticsearch_discovery_zen_ping_unicast_hosts

elasticsearch_discovery_zen_ping_unicast_hosts =,,

Also formally define the graylog host itself

elasticsearch_network_host =

Now edit: /opt/graylog/elasticsearch/config/elasticsearch.yml

And configure Graylog to not be a node master or a data node.

node.master: false false

Then restart the graylog server
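On the Graylog OVA/appliance (which owns the /opt/graylog paths edited above), a restart via graylog-ctl picks up both the graylog.conf and elasticsearch.yml changes:

```shell
# Restart all appliance services so the edited configuration
# files take effect
sudo graylog-ctl restart
```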

Using Puppet to manage virt-who to map virtual guests to physical hosts in Satellite 6.1

The virt-who package maps virtual machines to their physical host so that you can take advantage of Red Hat Virtual Data Center licensing when using Satellite 6.1. It lets you use your hypervisor host (in this case VMware ESXi) as a content host within Satellite, allowing you to assign RHEL licenses to the hosts directly, rather than individually to each virtual machine.

To do this, I’m going to leverage Puppet. In my puppet manifest I have:
file { '/etc/virt-who.d/vcenter.conf':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/profiles/vcenter',
}

And on the puppet server in: /etc/puppet/modules/profiles/files/

I have a file called vcenter, it looks like this:
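The original screenshot of the file is not reproduced here, but a virt-who ESX source definition generally takes this shape (every value below is a placeholder for your environment):

```ini
[vcenter]
server=vcenter.example.local
username=virt-who-service-account
encrypted_password=<output of virt-who-password>
owner=Default_Organization
env=Library
hypervisor_id=hostname
```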

With this configuration, my ESX hosts will show up under Satellite > Hosts > Content Hosts

virt-who content hosts

Here you can see that Satellite can now identify the VMs running on the hypervisor host.

Create a Logical Volume, EXT4 filesystem, mounted mount point and NFS export all via Puppet

I was building a NFS server for our users home directories to work with our FreeIPA implementation, and instead of setting up a logical volume, filesystem and mount point manually I decided to do it via Puppet. Since Puppet is our configuration management engine of choice, I might as well make something that’s reusable, right?

In our environment, we use a Puppet module called Profile, which allows us to create Puppet manifests for individual servers, something like this:


This allows us to use one specific manifest for each server rather than each server having its own independent module.

For this server (nfs.pp), I’m going to use the puppetlabs-lvm and haraldsk/nfs Puppet modules. I then create my nfs.pp manifest in my Profile module’s manifests directory, to look like this:

Here I specify the name of the manifest, and any includes.

class profile::nfs {
 include nfs::server

Here I ensure that /srv/nfs is a directory that gets created or already exists on the filesystem.

file { "/srv/nfs":
  ensure => "directory",
}

Here I specify a Physical Volume (/dev/sdb1), Volume Group (vg_data), Logical Volume (nfs), and the LV size (480G). In this module I can also specify the mount point (/srv/nfs) and make it required (true).

class { 'lvm':
  volume_groups => {
    'vg_data' => {
      physical_volumes => [ '/dev/sdb1' ],
      logical_volumes  => {
        'nfs' => {
          'size'              => '480G',
          'mountpath'         => '/srv/nfs',
          'mountpath_require' => true,
        },
      },
    },
  },
}

Here I create the entry in /etc/exports for /srv/nfs with the appropriate options that I wanted.

nfs::export { '/srv/nfs':
  clients => '*',
  options => ['rw', 'insecure', 'sync', 'all_squash', 'no_wdelay'],
}

Here is the full nfs.pp Puppet manifest:

class profile::nfs {
  include nfs::server

  file { "/srv/nfs":
    ensure => "directory",
  }

  class { 'lvm':
    volume_groups => {
      'vg_data' => {
        physical_volumes => [ '/dev/sdb1' ],
        logical_volumes  => {
          'nfs' => {
            'size'              => '480G',
            'mountpath'         => '/srv/nfs',
            'mountpath_require' => true,
          },
        },
      },
    },
  }

  nfs::export { '/srv/nfs':
    clients => '*',
    options => ['rw', 'insecure', 'sync', 'all_squash', 'no_wdelay'],
  }
}
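With the export in place, a client can verify and mount the share. The server name and mount point below are placeholders for your environment:

```shell
# Confirm the export is visible from a client...
showmount -e nfs.example.local

# ...then mount it, e.g. for FreeIPA-managed home directories
sudo mount -t nfs nfs.example.local:/srv/nfs /home
```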