Building a two-node Elasticsearch cluster for Graylog using Puppet

Assumptions:

Two servers, in this case:

  • elastica.domain.com – 172.16.100.80
  • elasticb.domain.com – 172.16.100.81
  • 8 vCPU
  • 16GB vMem
  • A second hard disk of 500GB
    • /dev/sdb1
      • formatted XFS and mounted as /var/lib/elasticsearch
  • Hosts file configured to reference each other (a Puppet sketch for the disk and hosts prerequisites follows this list)
  • The following three Puppet modules are installed: saz-limits, puppetlabs-java, and elasticsearch-elasticsearch
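
The data-disk and hosts-file prerequisites can also be handled with Puppet's built-in host, mount, and exec resources. A minimal sketch, assuming /dev/sdb1 is dedicated to Elasticsearch and using the names and IPs listed above:

# Sketch: prepare the data disk and hosts entries on each node
host { 'elastica.domain.com':
  ip           => '172.16.100.80',
  host_aliases => ['elastica'],
}

host { 'elasticb.domain.com':
  ip           => '172.16.100.81',
  host_aliases => ['elasticb'],
}

# format /dev/sdb1 as XFS only when blkid finds no existing filesystem
exec { 'mkfs_sdb1':
  command => 'mkfs.xfs /dev/sdb1',
  unless  => 'blkid /dev/sdb1',
  path    => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],
}

file { '/var/lib/elasticsearch':
  ensure => directory,
}

mount { '/var/lib/elasticsearch':
  ensure  => mounted,
  device  => '/dev/sdb1',
  fstype  => 'xfs',
  options => 'defaults',
  require => [Exec['mkfs_sdb1'], File['/var/lib/elasticsearch']],
}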

Web Interface:

We use the KOPF Elasticsearch plugin to provide a web interface. Install the KOPF plugin on each node:

./elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf/

It should now be available on each node:

http://elastica.domain.com:9200/_plugin/kopf/#!/cluster

Elasticsearch Installation

Use the following Puppet manifest to configure both nodes.

#==============================================================================
# filename    : graylog.pp
# description : Base Puppet manifest to configure an Elasticsearch cluster for Graylog
# author      : Eric Sarakaitis
# date        : 1/26/17
#==============================================================================
# this is for graylog
class profiles::graylog {

  $config_hash = {
    'ES_HEAP_SIZE'      => '8g',
    'MAX_LOCKED_MEMORY' => 'unlimited',
  }

  # configure memory limits so Elasticsearch can lock its heap
  class { 'limits': }

  limits::limits { '99-elasticsearch-memory.conf':
    ensure     => present,
    user       => 'elasticsearch',
    limit_type => 'memlock',
    both       => 'unlimited',
  }

  # install the Java JRE
  class { 'java':
    distribution => 'jre',
  }

  # install the Elasticsearch cluster
  class { 'elasticsearch':
    init_defaults     => $config_hash,
    version           => '2.3.5',
    restart_on_change => true,
    manage_repo       => true,
    repo_version      => '2.x',
    datadir           => '/var/lib/elasticsearch',
    config            => {
      'cluster.name'                             => 'graylog',
      'indices.store.throttle.max_bytes_per_sec' => '150mb',
      'script.inline'                            => false,
      'script.indexed'                           => false,
      'script.file'                              => false,
      'node.name'                                => $::hostname,
      'network.host'                             => $::ipaddress,
      'network.publish_host'                     => $::ipaddress,
      'http.enabled'                             => true,
      'node.master'                              => true,
      'node.data'                                => true,
      'index.number_of_shards'                   => '2',
      'index.number_of_replicas'                 => '1',
      'discovery.zen.ping.unicast.hosts'         => ['172.16.100.80', '172.16.100.81', '172.16.100.77'],
      'discovery.zen.ping.multicast.enabled'     => false,
      'discovery.zen.minimum_master_nodes'       => '1',
    },
  }

  # define the node instance
  elasticsearch::instance { 'graylog':
    config => { 'node.name' => $::hostname },
  }

  # install the KOPF management UI
  elasticsearch::plugin { 'lmenezes/elasticsearch-kopf':
    instances => 'graylog',
  }

  # closing frenchie
}
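
With the profile in place, classify both Elasticsearch nodes with it. A minimal sketch for node classification in site.pp, assuming the hostnames listed above (your classifier or role layer may differ):

node 'elastica.domain.com', 'elasticb.domain.com' {
  include profiles::graylog
}

After a Puppet run on both nodes, the KOPF UI linked above should report the graylog cluster with both nodes joined.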

Graylog Configuration

Install and configure the Graylog appliance: graylog.domain.com – 172.16.100.77

Then edit /opt/graylog/conf/graylog.conf

Configure each of the node IPs in elasticsearch_discovery_zen_ping_unicast_hosts:

elasticsearch_discovery_zen_ping_unicast_hosts = 172.16.100.77:9300, 172.16.100.80:9300, 172.16.100.81:9300

Also explicitly define the Graylog host itself:

elasticsearch_network_host = 172.16.100.77
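
If you prefer to keep these appliance settings under Puppet as well, a minimal sketch using the file_line resource from puppetlabs-stdlib (an extra module, not listed in the assumptions above) might look like this:

# Sketch: pin the Graylog appliance's Elasticsearch settings in graylog.conf
file_line { 'graylog unicast hosts':
  path  => '/opt/graylog/conf/graylog.conf',
  match => '^elasticsearch_discovery_zen_ping_unicast_hosts',
  line  => 'elasticsearch_discovery_zen_ping_unicast_hosts = 172.16.100.77:9300, 172.16.100.80:9300, 172.16.100.81:9300',
}

file_line { 'graylog network host':
  path  => '/opt/graylog/conf/graylog.conf',
  match => '^elasticsearch_network_host',
  line  => 'elasticsearch_network_host = 172.16.100.77',
}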

Now edit: /opt/graylog/elasticsearch/config/elasticsearch.yml

And configure Graylog's embedded Elasticsearch so it acts as neither a master node nor a data node:

node.master: false
node.data: false
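
The same file_line approach (again assuming puppetlabs-stdlib) covers these two keys; a sketch:

# Sketch: keep the appliance's embedded Elasticsearch as a client-only node
file_line { 'graylog es node.master':
  path  => '/opt/graylog/elasticsearch/config/elasticsearch.yml',
  match => '^node\.master:',
  line  => 'node.master: false',
}

file_line { 'graylog es node.data':
  path  => '/opt/graylog/elasticsearch/config/elasticsearch.yml',
  match => '^node\.data:',
  line  => 'node.data: false',
}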

Then restart the Graylog server for the changes to take effect.
