Assumptions:

Two servers, in this case:

  • elastica.domain.com – 172.16.100.80
  • elasticb.domain.com – 172.16.100.81
  • 8 vCPU
  • 16GB vMem
  • A second hard disk of 500GB
    • /dev/sdb1
      • formatted XFS and mounted as /var/lib/elasticsearch
  • Hosts file configured to reference each other
  • The following three puppet modules are installed: saz-limits, puppetlabs-java, and elasticsearch-elasticsearch
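
These can be pulled straight from the Puppet Forge, for example:

puppet module install saz-limits
puppet module install puppetlabs-java
puppet module install elasticsearch-elasticsearch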

Web Interface:

We use the KOPF Elasticsearch plugin to give us a web interface. Install the KOPF plugin:
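If you're installing it by hand (the Puppet manifest below also installs it via elasticsearch::plugin), a typical invocation for Elasticsearch 2.x looks like:

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf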

It should now be available on each node:

http://elastica.domain.com:9200/_plugin/kopf/#!/cluster

Elasticsearch Installation

Use the following puppet manifest to configure the nodes.

#=============================================================================
# filename    : graylog.pp
# description : Base puppet manifest to configure an Elasticsearch cluster
# author      : Eric Sarakaitis
# date        : 1/26/17
#=============================================================================
# this is for graylog
class profiles::graylog {

  $config_hash = {
    'ES_HEAP_SIZE'      => '8g',
    'MAX_LOCKED_MEMORY' => 'unlimited',
  }

  # configure memory limits
  class { 'limits': }

  limits::limits { '99-elasticsearch-memory.conf':
    ensure     => present,
    user       => 'elasticsearch',  # the user the elasticsearch service runs as
    limit_type => 'memlock',
    both       => 'unlimited',
  }

  # install Java JRE
  class { 'java':
    distribution => 'jre',
  }

  # install elasticsearch cluster
  class { 'elasticsearch':
    init_defaults     => $config_hash,
    version           => '2.3.5',
    restart_on_change => true,
    manage_repo       => true,
    repo_version      => '2.x',
    datadir           => '/var/lib/elasticsearch',
    config            => {
      'cluster.name'                             => 'graylog',
      'indices.store.throttle.max_bytes_per_sec' => '150mb',
      'script.inline'                            => false,
      'script.indexed'                           => false,
      'script.file'                              => false,
      'node.name'                                => $::hostname,
      'network.host'                             => $::ipaddress,
      'network.publish_host'                     => $::ipaddress,
      'http.enabled'                             => true,
      'node.master'                              => true,
      'node.data'                                => true,
      'index.number_of_shards'                   => '2',
      'index.number_of_replicas'                 => '1',
      'discovery.zen.ping.unicast.hosts'         => '172.16.100.80, 172.16.100.81, 172.16.100.77',
      'discovery.zen.ping.multicast.enabled'     => false,
      'discovery.zen.minimum_master_nodes'       => '1',
    },
  }

  # define the node instance
  elasticsearch::instance { 'graylog':
    config => { 'node.name' => $::hostname },
  }

  # install KOPF management UI
  elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
    instances => 'graylog',
  }

# closing frenchie
}

Graylog Configuration

Install and configure the graylog appliance: graylog.domain.com – 172.16.100.77

Then edit /opt/graylog/conf/graylog.conf

Configure each node's IP in the elasticsearch_discovery_zen_ping_unicast_hosts setting, and also formally define the graylog host itself in that list.
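
With the nodes in this example, the line would look something like this (9300, the default Elasticsearch transport port, is an assumption):

elasticsearch_discovery_zen_ping_unicast_hosts = 172.16.100.80:9300, 172.16.100.81:9300, 172.16.100.77:9300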

Now edit: /opt/graylog/elasticsearch/config/elasticsearch.yml

And configure graylog to be neither a master node nor a data node.
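
The relevant settings:

node.master: false
node.data: false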

Then restart the graylog server.
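
On the graylog appliance, this is typically:

sudo graylog-ctl restart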

The virt-who package allows you to map virtual machines to their physical host so that you can take advantage of Red Hat Virtual Data Center licensing when using Satellite 6.1. It lets you use your hypervisor host (in this case VMware ESXi) as a content host within Satellite, allowing you to assign RHEL licenses to the hosts directly rather than to each virtual machine individually.

To do this, I’m going to leverage Puppet. In my puppet manifest I have:

file { '/etc/virt-who.d/vcenter.conf':
  ensure => file,
  owner  => 'root',
  mode   => '0644',
  group  => 'root',
  source => 'puppet:///modules/profiles/vcenter',
}
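
You'll likely also want Puppet to keep the virt-who service running and restart it when the config changes; a sketch, assuming the stock virt-who package and service names:

package { 'virt-who':
  ensure => installed,
}

service { 'virt-who':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/virt-who.d/vcenter.conf'],
}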

And on the puppet server, in /etc/puppet/modules/profiles/files/, I have a file called vcenter that looks like this:

[vcenter.domain.internal]
type=esx
server=vcenter.domain.internal
username=administrator@domain.internal.vmw
password=Password1!
#encrypted_password=
owner="1"
env=Library
hypervisor_id=hostname

With this configuration, my ESX hosts will show up under Satellite > Hosts > Content Hosts

(Screenshot: virt-who content hosts)

Here you can see that Satellite can now identify the VMs running on the hypervisor host.

I was building an NFS server for our users' home directories to work with our FreeIPA implementation, and instead of setting up a logical volume, filesystem, and mount point manually, I decided to do it via Puppet. Since Puppet is our configuration management engine of choice, I might as well make something that's reusable, right?

In our environment, we use a Puppet module called Profile; this module lets us create puppet manifests for individual servers, something like this:
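
The pattern, sketched with a hypothetical node name:

node 'nfs.domain.com' {
  include profiles::nfs
}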

This allows us to use one specific manifest for each server rather than each server having its own independent module.

For this server, I'm going to use the puppetlabs-lvm and haraldsk/nfs Puppet modules. I then create my nfs.pp manifest in my profile Puppet module's manifests directory. The snippets below sketch each piece using the modules' documented resource types; exact option values are assumptions.

Here I specify the name of the manifest, and any includes:
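
# class name assumed to match the profile pattern above;
# the closing brace appears in the full manifest below
class profiles::nfs {

  include ::nfs::server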

Here I ensure that /srv/nfs is a directory that gets created or already exists on the filesystem:
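
  file { '/srv/nfs':
    ensure => directory,
  }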

Here I specify a Physical Volume (/dev/sdb1), Volume Group (vg_data), Logical Volume (nfs), and the LV size (480G). In this module I can also specify the mount point (/srv/nfs) and make it required (true):
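
  physical_volume { '/dev/sdb1':
    ensure => present,
  }

  volume_group { 'vg_data':
    ensure           => present,
    physical_volumes => '/dev/sdb1',
    require          => Physical_volume['/dev/sdb1'],
  }

  lvm::logical_volume { 'nfs':
    ensure            => present,
    volume_group      => 'vg_data',
    size              => '480G',
    mountpath         => '/srv/nfs',
    mountpath_require => true,
    require           => Volume_group['vg_data'],
  }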

Here I create the entry in /etc/exports for /srv/nfs with the appropriate options that I wanted:
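
  # client list and export options are assumptions; substitute your own
  nfs::server::export { '/srv/nfs':
    ensure  => 'mounted',
    clients => '172.16.0.0/16(rw,sync,no_root_squash)',
  }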

Here is the full nfs.pp Puppet manifest, assembled from the pieces above:
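
class profiles::nfs {

  include ::nfs::server

  # mount point for the exported filesystem
  file { '/srv/nfs':
    ensure => directory,
  }

  # carve the second disk into an LVM-backed volume mounted at /srv/nfs
  physical_volume { '/dev/sdb1':
    ensure => present,
  }

  volume_group { 'vg_data':
    ensure           => present,
    physical_volumes => '/dev/sdb1',
    require          => Physical_volume['/dev/sdb1'],
  }

  lvm::logical_volume { 'nfs':
    ensure            => present,
    volume_group      => 'vg_data',
    size              => '480G',
    mountpath         => '/srv/nfs',
    mountpath_require => true,
    require           => Volume_group['vg_data'],
  }

  # export the directory; client list and options are assumptions
  nfs::server::export { '/srv/nfs':
    ensure  => 'mounted',
    clients => '172.16.0.0/16(rw,sync,no_root_squash)',
  }

}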