2018’s Puppet Community Award Winners!!

A big shout out to my peer @nmaludy for his Puppet community award!

Each year the Puppet community puts our collective heads together and selects some outstanding community members to recognize, and this year was no different. We had a lot of really great nominees and a few clear leaders. I’m sure that none of you will be surprised by who we selected. This year it seems that the focus was on some of the non-technical parts of being a good community member, like encouraging others or building a welcoming environment. Each of our winners embodies that ethos and is a real positive presence to have.

Nick has not only done a fantastic job of supporting our organization and our customers’, but you can also see that he’s a fantastic open-source community member.

Nick Maludy

The Bolt team was almost unanimous when we asked them about community awards. Nick is the number one community contributor to the project and has contributed new features like handling sensitive data and other security improvements, and plan logging. He even built StackStorm integration for Bolt and answers community questions about the project.

We were told that we’d never be forgiven if we didn’t give Nick an award for his involvement!

Source: https://puppet.com/blog/big-thank-you-years-community-award-winners

Build a 3-node mongodb cluster using puppet (for use with High Availability Graylog in this case)

One of the core components of a Graylog installation is MongoDB. Quite possibly the worst database ever to grace the planet :)

Hopefully, from a Graylog perspective, MongoDB will disappear from the solution soon.

Anyway, from an architecture perspective, we want to use a highly available Graylog deployment, aka Graylog HA, about which there is little documentation. So from a technical perspective you’ve got:

  • Incoming log traffic load-balancer
  • Multiple Graylog servers
  • Multiple MongoDB nodes (also Graylog servers)
  • Multiple ElasticSearch nodes

In our case, we chose to use:

  • A NetScaler to listen on UDP 514 and also host the SSL certificate.
    • The NetScaler will also make an API call against the Graylog servers to verify health.
    • The NetScaler will then pass the traffic to the active Graylog server, on the active input that’s listening on UDP 5140.
  • The two Graylog servers will be part of a MongoDB cluster, and a third VM will be used as a MongoDB witness server.
  • Three servers will be used as Elasticsearch nodes.
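As a sketch of that health check: Graylog exposes a load-balancer status endpoint the NetScaler monitor can poll. The hostname and API port below are assumptions (the endpoint path also varies slightly between Graylog versions), so adjust them to your deployment:

```shell
# Hypothetical health-check probe, mirroring what the NetScaler monitor does.
# Graylog answers "ALIVE" (HTTP 200) or "DEAD" (HTTP 503) on this endpoint.
curl -s -o /dev/null -w '%{http_code}\n' http://graylog.domain.com:9000/api/system/lbstatus
```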

From a configuration management perspective, we wanted to leverage Puppet to do the installation of the MongoDB cluster.

The puppet manifests we used are:

class encore_rp::profile::mongopeer {

  file { ['/data', '/data/db']:
    ensure => 'directory',
  }

  #install Java JRE
  class { 'java':
    distribution => 'jre',
  }

  class { '::mongodb::client': }

  class { '::mongodb::server':
    ensure  => present,
    auth    => false,
    port    => 27018,
    bind_ip => $::ipaddress,
    replset => 'graylog',
  }

  mongodb_replset { 'graylog':
    ensure          => present,
    initialize_host => 'node1.domain.local',
    members         => ['node1.domain.local:27018', 'node2.domain.local:27018', 'node3.domain.local:27018'],
  }

  mongodb::db { 'graylog':
    user          => 'graylog',
    password_hash => 'hashed password',
  }
}
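Once the manifest has applied on all three nodes, the replica set can be sanity-checked from any member. This assumes the mongo shell is on the PATH and uses the non-default 27018 port from the manifest; expect one PRIMARY and two SECONDARY members:

```shell
# Print each replica set member and its current state from the local mongod.
mongo --port 27018 --eval 'rs.status().members.forEach(function(m){ print(m.name + " " + m.stateStr); })'
```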

Building a two-node elasticsearch cluster for Graylog using Puppet

Assumptions:

Two servers, in this case:

  • elastica.domain.com – 172.16.100.80
  • elasticb.domain.com – 172.16.100.81
  • 8 vCPU
  • 16GB vMem
  • A second hard disk of 500GB
    • /dev/sdb1
      • formatted XFS and mounted as /var/lib/elasticsearch
  • Hosts file configured to reference each other
  • The following Puppet modules are installed: saz-limits, puppetlabs-java, and elasticsearch-elasticsearch
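The /dev/sdb1 data disk above can also be put under Puppet. A minimal sketch, assuming the partition already exists (partitioning itself is out of scope) and using only core Puppet resource types:

```puppet
# Format /dev/sdb1 as XFS (only if it carries no filesystem yet) and mount
# it where Elasticsearch expects its data directory.
exec { 'mkfs_sdb1':
  command => '/usr/sbin/mkfs.xfs /dev/sdb1',
  unless  => '/usr/sbin/blkid /dev/sdb1',
}

file { '/var/lib/elasticsearch':
  ensure => 'directory',
}

mount { '/var/lib/elasticsearch':
  ensure  => 'mounted',
  device  => '/dev/sdb1',
  fstype  => 'xfs',
  options => 'defaults',
  require => [Exec['mkfs_sdb1'], File['/var/lib/elasticsearch']],
}
```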

Web Interface:

We use the KOPF Elasticsearch plugin to give us a web interface. Install the KOPF plugin:

./elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf/

It should now be available on each node:

http://elastica.domain.com:9200/_plugin/kopf/#!/cluster

Elasticsearch Installation

Use the following puppet manifest to configure the nodes.

#=============================================================================
#filename       :graylog.pp
#description    :This is the base puppet manifest to configure an Elasticsearch cluster
#author         :Eric Sarakaitis
#date           :1/26/17
#==============================================================================
#this is for graylog
class profiles::graylog {

  $config_hash = {
    'ES_HEAP_SIZE'      => '8g',
    'MAX_LOCKED_MEMORY' => 'unlimited',
  }

  #configure memory limits
  class { 'limits': }

  limits::limits { '99-elasticsearch-memory.conf':
    ensure     => present,
    user       => 'username',
    limit_type => 'memlock',
    both       => 'unlimited',
  }

  #install Java JRE
  class { 'java':
    distribution => 'jre',
  }

  #install elasticsearch cluster
  class { 'elasticsearch':
    init_defaults     => $config_hash,
    version           => '2.3.5',
    restart_on_change => true,
    manage_repo       => true,
    repo_version      => '2.x',
    datadir           => '/var/lib/elasticsearch',
    config            => {
      'cluster.name'                             => 'graylog',
      'indices.store.throttle.max_bytes_per_sec' => '150mb',
      'script.inline'                            => false,
      'script.indexed'                           => false,
      'script.file'                              => false,
      'node.name'                                => $::hostname,
      'network.host'                             => $::ipaddress,
      'network.publish_host'                     => $::ipaddress,
      'http.enabled'                             => true,
      'node.master'                              => true,
      'node.data'                                => true,
      'index.number_of_shards'                   => '2',
      'index.number_of_replicas'                 => '1',
      'discovery.zen.ping.unicast.hosts'         => '172.16.100.80, 172.16.100.81, 172.16.100.77',
      'discovery.zen.ping.multicast.enabled'     => false,
      'discovery.zen.minimum_master_nodes'       => '1',
    },
  }

  #Define the node instance
  elasticsearch::instance { 'graylog':
    config => { 'node.name' => $::hostname },
  }

  #install KOPF management UI
  elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
    instances => 'graylog',
  }

#closing frenchie
}
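After Puppet has run on both nodes, cluster formation can be verified against either node's HTTP port (9200 is the Elasticsearch default assumed here):

```shell
# Ask Elasticsearch for overall cluster health; with one replica per index
# we expect "status" : "green", and number_of_nodes should count both data
# nodes (plus the Graylog node, once it joins).
curl -s 'http://elastica.domain.com:9200/_cluster/health?pretty'
```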

Graylog Configuration

Install and configure the graylog appliance: graylog.domain.com – 172.16.100.77

Then edit /opt/graylog/conf/graylog.conf

Configure each node’s IP in elasticsearch_discovery_zen_ping_unicast_hosts:

elasticsearch_discovery_zen_ping_unicast_hosts = 172.16.100.77:9300, 172.16.100.80:9300, 172.16.100.81:9300

Also explicitly define the Graylog host itself:

elasticsearch_network_host = 172.16.100.77

Now edit: /opt/graylog/elasticsearch/config/elasticsearch.yml

And configure Graylog so that it is not a master or data node:

node.master: false
node.data: false

Then restart the Graylog server.
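On the Graylog appliance (which is what the /opt/graylog paths above belong to), the services are managed with graylog-ctl; on a package-based install you would use systemctl instead. Assuming the appliance:

```shell
# Restart all Graylog appliance services so the new graylog.conf and
# elasticsearch.yml settings take effect.
sudo graylog-ctl restart
```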

Configure a datacenter mail relay through Office 365 based on Postfix using Puppet

When standing up a new greenfield environment, one of the first services you typically need is an internal mail relay. We use Office 365, so we wanted our mail relay to send mail through it. To do that I used Puppet along with a Puppet module from jlambert121, which you can find here. Note that I also used the firewalld Puppet module from crayfishx to manage my firewall ports on RHEL 7, which you can find here.

Once I had the puppet module installed, I was able to use the following puppet manifest.
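For reference, both modules are published on the Puppet Forge and can be installed like this (versions left unpinned here; pin them in a Puppetfile for production):

```shell
# Install the Postfix and firewalld modules (plus their dependencies)
# from the Puppet Forge.
puppet module install jlambert121-postfix
puppet module install crayfishx-firewalld
```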

#=============================================================================
#filename       :postfix_relay.pp
#description    :This is the base puppet manifest for a postfix mail relay for EMS
#author         :Eric Sarakaitis
#date           :9/29/16
#==============================================================================

class profiles::postfix_relay {

  #open the firewall ports
  firewalld_service { 'Allow smtp from the external zone':
    ensure  => 'present',
    service => 'smtp',
    zone    => 'external',
  }

  #open port 25 TCP for SMTP
  firewalld_port { 'Open port 25 TCP in the public zone':
    ensure   => present,
    zone     => 'public',
    port     => 25,
    protocol => 'tcp',
  }

  #install and configure postfix
  class { 'postfix':
    smtp_relay     => true,
    relay_networks => '172.16.209.0/24, 172.16.208.0/24, 192.168.1.0/24',
    relay_host     => '[smtp.office365.com]',
    relay_username => 'relay@domain.com',
    relay_password => 'Passw0rd',
    relay_port     => '587',
  }

  #closing frenchie
}
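Once the manifest has applied, the relay can be smoke-tested from any host inside the allowed relay networks. The relay hostname and destination address below are placeholders; check /var/log/maillog on the relay afterwards to confirm the message was handed off to smtp.office365.com:

```shell
# Send a test message through the new relay using heirloom mailx
# (available on RHEL 7 as the mailx package).
echo "relay test" | mailx -s "Postfix relay test" -S smtp=relay.domain.com someone@domain.com
```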