Assumptions:

Two servers, in this case:

  • elastica.domain.com – 172.16.100.80
  • elasticb.domain.com – 172.16.100.81
  • 8 vCPU
  • 16GB vMem
  • A second hard disk of 500GB
    • /dev/sdb1
      • formatted XFS and mounted as /var/lib/elasticsearch
  • Hosts file configured to reference each other
  • The following three Puppet modules are installed: saz-limits, puppetlabs-java, and elasticsearch-elasticsearch (a preparation sketch follows this list)
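The disk and Puppet module preparation can be scripted ahead of time. A minimal sketch, assuming /dev/sdb1 already exists and the Forge module names listed above:

# Format the second disk and mount it for Elasticsearch data
mkfs.xfs /dev/sdb1
mkdir -p /var/lib/elasticsearch
echo '/dev/sdb1  /var/lib/elasticsearch  xfs  defaults  0 0' >> /etc/fstab
mount /var/lib/elasticsearch

# Install the required Puppet modules on the puppet master
puppet module install saz-limits
puppet module install puppetlabs-java
puppet module install elasticsearch-elasticsearch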

Web Interface:

We use the KOPF Elasticsearch plugin to give us a web interface; the Puppet manifest below installs the plugin on each node.
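If you would rather install it by hand instead of through the Puppet manifest below, something like this should work on an RPM-based Elasticsearch 2.x install (path assumed):

/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf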

It should now be available on each node:

http://elastica.domain.com:9200/_plugin/kopf/#!/cluster

Elasticsearch Installation

Use the following puppet manifest to configure the nodes.

#=============================================================================
# filename    : graylog.pp
# description : Base puppet manifest to configure an Elasticsearch cluster
# author      : Eric Sarakaitis
# date        : 1/26/17
#=============================================================================
# this is for graylog
class profiles::graylog {

  $config_hash = {
    'ES_HEAP_SIZE'      => '8g',
    'MAX_LOCKED_MEMORY' => 'unlimited',
  }

  # configure memory limits
  class { 'limits': }
  limits::limits { '99-elasticsearch-memory.conf':
    ensure     => present,
    user       => 'elasticsearch',  # the user the Elasticsearch service runs as
    limit_type => 'memlock',
    both       => 'unlimited',
  }

  # install Java JRE
  class { 'java':
    distribution => 'jre',
  }

  # install the Elasticsearch cluster
  class { 'elasticsearch':
    init_defaults     => $config_hash,
    version           => '2.3.5',
    restart_on_change => true,
    manage_repo       => true,
    repo_version      => '2.x',
    datadir           => '/var/lib/elasticsearch',
    config            => {
      'cluster.name'                             => 'graylog',
      'indices.store.throttle.max_bytes_per_sec' => '150mb',
      'script.inline'                            => false,
      'script.indexed'                           => false,
      'script.file'                              => false,
      'node.name'                                => $::hostname,
      'network.host'                             => $::ipaddress,
      'network.publish_host'                     => $::ipaddress,
      'http.enabled'                             => true,
      'node.master'                              => true,
      'node.data'                                => true,
      'index.number_of_shards'                   => '2',
      'index.number_of_replicas'                 => '1',
      'discovery.zen.ping.unicast.hosts'         => '172.16.100.80, 172.16.100.81, 172.16.100.77',
      'discovery.zen.ping.multicast.enabled'     => false,
      'discovery.zen.minimum_master_nodes'       => '1',
    },
  }

  # Define the node instance
  elasticsearch::instance { 'graylog':
    config => { 'node.name' => $::hostname },
  }

  # install the KOPF management UI
  elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
    instances => 'graylog',
  }

  # closing frenchie
}
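Once the manifest has been applied on both nodes (for example with puppet agent -t), a quick sanity check from either node, assuming the IPs above, is to query the cluster health API; it should report the graylog cluster with both nodes once they can reach each other on port 9300:

curl http://172.16.100.80:9200/_cluster/health?pretty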

Graylog Configuration

Install and configure the graylog appliance: graylog.domain.com – 172.16.100.77

Then edit /opt/graylog/conf/graylog.conf

Configure each of the node IPs in elasticsearch_discovery_zen_ping_unicast_hosts

Also formally define the graylog host itself
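A minimal sketch of the relevant graylog.conf settings, assuming the node IPs used above and the default Elasticsearch transport port of 9300 (key names as used by the 2.x-era appliance):

elasticsearch_cluster_name = graylog
elasticsearch_discovery_zen_ping_unicast_hosts = 172.16.100.80:9300,172.16.100.81:9300,172.16.100.77:9300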

Now edit: /opt/graylog/elasticsearch/config/elasticsearch.yml

And configure graylog to not be a master node or a data node.
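For example (cluster name assumed to match the one configured in the manifest above):

cluster.name: graylog
node.master: false
node.data: false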

Then restart the graylog server
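On the appliance this can be done with graylog-ctl (assuming the OVA/appliance install referenced above):

sudo graylog-ctl restart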

This document is dependent on the following assumptions:

  • NetBIOS names of the IPA domain and AD domain must be different.
    • In addition, NetBIOS names of the IPA server and AD DC server must be different.
  • Encoredev.local is the AD domain
    • encoredev1.encoredev.local will host this domain and associated DNS
  • Linux.local is the IPA domain
    • ipa1.linux.local will host this domain and associated DNS records
  • The /etc/hosts file is configured
  • The server's hostname is configured correctly
  • The server has firewalld disabled or the appropriate firewall ports have been opened.
  • NS1/NS2 = 172.16.40.2/172.16.40.3
  • DEVNS1/DEVNS2 = 172.16.104.2/172.16.105.3
  • Windows Domain = encoredev.local
  • IPA domain = linux.local
  • Active Directory Linux Admins Group = LinuxAdmins
  • NFS Server = nfs.linux.local
    • nfs.linux.local has been added as an IPA Client


The virt-who package allows you to map virtual machines to their physical host so that you can take advantage of Red Hat Virtual Datacenter licensing when using Satellite 6.1. It allows you to use your hypervisor host (in this case VMware ESXi) as a content host within Satellite, so you can assign RHEL subscriptions to the hosts directly rather than individually to each virtual machine.

To do this, I’m going to leverage Puppet. In my puppet manifest I have:

file { '/etc/virt-who.d/vcenter.conf':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => 'puppet:///modules/profiles/vcenter',
}
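The file resource above only deploys the configuration. As an addition of my own (not part of the original manifest), the virt-who package and service can be managed alongside it so the service picks up configuration changes automatically:

# Hypothetical addition: install virt-who and keep the service running,
# restarting it whenever the configuration file changes.
package { 'virt-who':
  ensure => installed,
}

service { 'virt-who':
  ensure    => running,
  enable    => true,
  require   => Package['virt-who'],
  subscribe => File['/etc/virt-who.d/vcenter.conf'],
}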

And on the puppet server in: /etc/puppet/modules/profiles/files/

I have a file called vcenter, which looks like this:

[vcenter.domain.internal]
type=esx
server=vcenter.domain.internal
username=administrator@domain.internal.vmw
password=Password1!
#encrypted_password=
owner="1"
env=Library
hypervisor_id=hostname
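Here owner is the Satellite organization and env the environment (Library). If you would rather not keep the plain-text password in the file, the virt-who-password utility can be used to generate a value for encrypted_password instead.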

With this configuration, my ESX hosts will show up under Satellite > Hosts > Content Hosts

[Screenshot: virt-who content hosts in Satellite]

Here you can see that Satellite can now identify the VMs running on the hypervisor host.

On our OCP Winterfell nodes running CentOS 6, the 10GB Mellanox NICs show up as eth0 and eth1, while the 1GB management interface shows up as eth2. We are also using Brocade 10GB top-of-rack switches, so configuring LLDP was necessary for the servers to advertise themselves to the upstream switches. To do this, we use the LLDPAD package available in the @base CentOS repo.

The next step is to create a Puppet module/manifest to do the following (a sketch appears after the list):

  1. Install the LLDPAD RPM from YUM.
  2. Start the LLDPAD service
  3. Ensure that the LLDPAD service is set to autostart at boot
  4. Configure eth0 and eth1 to broadcast their LLDP status to the upstream switches
  5. Ensure that it only runs once, not every time puppet agent runs
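A minimal sketch of such a manifest, assuming lldptool's adminStatus=rxtx setting is what we want pushed to eth0 and eth1 (class and resource names here are mine, not from the original module):

class profiles::lldp {

  # 1. Install the lldpad RPM from the @base repo
  package { 'lldpad':
    ensure => installed,
  }

  # 2 & 3. Start the lldpad service and enable it at boot
  service { 'lldpad':
    ensure  => running,
    enable  => true,
    require => Package['lldpad'],
  }

  # 4 & 5. Enable LLDP transmit/receive on the 10GB interfaces; the unless
  # check keeps each exec from running again once adminStatus is already rxtx
  exec { 'lldp-enable-eth0':
    command => '/usr/sbin/lldptool set-lldp -i eth0 adminStatus=rxtx',
    unless  => '/usr/sbin/lldptool get-lldp -i eth0 adminStatus | /bin/grep -q rxtx',
    require => Service['lldpad'],
  }

  exec { 'lldp-enable-eth1':
    command => '/usr/sbin/lldptool set-lldp -i eth1 adminStatus=rxtx',
    unless  => '/usr/sbin/lldptool get-lldp -i eth1 adminStatus | /bin/grep -q rxtx',
    require => Service['lldpad'],
  }
}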
