How do you operationalize OpenStack?

So after “living with OpenStack” for a couple of months now, I’ve been taking notes, and I figured I’d share them. This view is from a managed services environment, but most of it applies to an enterprise environment as well.

  • Our OpenStack architecture cannot, and will not, look like our VMware architecture, where each hypervisor node also provides storage, etc.
    • Compute nodes would be just that: compute nodes with storage client connectivity (Gluster, Cinder, NFS, GPFS, etc.)
    • Compute nodes would also need to run the Neutron agents
    • Controller nodes would be service nodes, running:
      • Heat
      • Glance
      • Horizon
      • Keystone
      • Nova
      • Neutron
    • Storage nodes would be service nodes, running:
      • Cinder
      • NFS
      • GPFS/Gluster
    • Storage tiering as we know it (pick the datastore you want) does not exist in OpenStack
      • For storage tiering, you can skin that cat many ways (see the sketch after this list):
        • Multiple Cinder backends/nodes with different storage backings
        • Multiple Gluster/Ceph nodes with different storage backings
    • Networking… so far, we’ve approached networking the same way we do today, where each VM has its own real IP address.
      • OpenStack allows us to empower the customer to build/destroy/configure their own networks
        • Faults with this approach: we’re not mature enough in managed services (skill set throughout the stack)
        • Our customers are not mature enough
        • Further complicates an already complicated stance
    • HA OpenStack (VMware clone) would be impossible to operationalize
      • Some services would need to be cloned then load balanced
      • Some services would need to be configured for failover
      • This further obfuscates the entire stack beyond what it already is.
    • Horizon is a huge step in the right direction for customer interaction.
      • The Horizon community is lacking essential service plugins (billing, automation, networking, storage)
    • Image management
      • Tooling here is essentially nonexistent, and the process is very cumbersome
      • The “best” method I’ve found is to build your images on VMware, then import the VMDK.
      • Tinkering with qemu-kvm is bothersome because of the network separation between qemu-kvm and OpenStack
      • Static routes need to be present in Linux VMs; additional routes are added to the VLANs for Windows VMs
    • Ceilometer
      • API/CLI interaction only; not available via Horizon
    • Heat
      • More of a vApp-style utility than an orchestration engine as we know it
      • Still very powerful, but very difficult to interact with (for both customers and operations)
  • My #1 complaint is that there are so, so many interfaces to interact with just to run OpenStack:
    • Linux
    • Nova
    • Keystone
    • Cinder
    • Neutron
    • Heat
    • Ceilometer
    • Horizon
    • Glance
    • Storage subsystems:
      • NFS
      • iSCSI
      • Local
      • Gluster
      • Ceph
      • GPFS
    • Pacemaker
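
For the storage tiering point above, one way to skin that cat (a rough sketch, not something we run in production; the tier names, drivers, and paths below are placeholders) is Cinder’s multi-backend support paired with volume types:

# /etc/cinder/cinder.conf (sketch)
[DEFAULT]
# expose two tiers as separate Cinder backends
enabled_backends = gold,silver

[gold]
volume_backend_name = GOLD
# placeholder driver; substitute whatever backs your fast tier
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

[silver]
volume_backend_name = SILVER
# placeholder driver; substitute whatever backs your slow tier
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares

Restart openstack-cinder-volume, then map volume types to the backends so a tenant effectively picks a tier when creating a volume:

service openstack-cinder-volume restart
cinder type-create gold
cinder type-key gold set volume_backend_name=GOLD
cinder type-create silver
cinder type-key silver set volume_backend_name=SILVER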

How do you then operationalize OpenStack? Even if you use a product like IBM SmartCloud Orchestrator or VMware vCAC to “hide” OpenStack behind, you still need operations procedures for when “it” breaks… and as we all know, “it” will break.

How do you operationalize OpenStack?

OpenStack cloud-init cannot contact or ping 169.254.169.254 to establish meta-data connection – fix

Using OpenStack Open vSwitch with VLANs removes a lot of the trickery involved with using public and private IPs. That way, each VM gets its own real IP address assigned to it.

In this case, our network layout looks as such:

[Figure: Logical Network Layout]

That being said, the VMs still need a way to get back to 169.254.169.254 for access to the OpenStack metadata service. In this case, the fix was to add a static route to the VM template and reconfigure the neutron-dhcp-agent service as follows.

On the VM template, add /etc/sysconfig/network-scripts/route-eth0:

169.254.169.254/32 dev eth0

Then add the following to /etc/neutron/dhcp-agent.ini:

enable_isolated_metadata = True
enable_metadata_network = False

Then restart the Neutron DHCP agent service:

service neutron-dhcp-agent restart

Once this is finished, you should then be able to add the updated image to glance and deploy without issue.
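
To sanity-check the fix, boot an instance from the updated image and hit the metadata service directly from inside the guest (169.254.169.254 is the standard metadata endpoint, so these two commands are enough):

ip route show | grep 169.254.169.254
curl http://169.254.169.254/latest/meta-data/instance-id

If the curl returns an instance ID, cloud-init will be able to pull its metadata at boot.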

Using haproxy as a load balancer for OpenStack services on Red Hat OpenStack

Configuring Red Hat OpenStack to be highly available is not like VMware, where you just enable a couple of features, check some boxes and voila! Quite the contrary… In fact, configuring Red Hat OpenStack to be highly available is quite elegant.

Let’s look at it like this: quite a few services make up OpenStack as a product, including Keystone (auth), Neutron (networking), Glance (image storage), Cinder (volume storage), Nova (compute/scheduler), and MySQL (database). Some of these services are made highly available via failover, and some via cloning, i.e. by running multiple copies of the same service distributed throughout the deployment.

[Figure: OpenStack HA]

In this case, we need to configure a cloned pair of load balancers to front our OpenStack services. This allows us to reference a single virtual IP that is load balanced across all of the cloned service nodes. For this functionality, we’re going to use haproxy as the load balancing software and Pacemaker for clustering.

[Figure: Load Balanced OpenStack Services]

The first step is to configure the RHEL HA and LB yum channels with the following:

rhn-channel --user username --password passw0rd -a -c rhel-x86_64-server-ha-6 -c rhel-x86_64-server-lb-6

Then simply install the HA and LB packages with:

yum install -y pacemaker pcs cman resource-agents fence-agents haproxy

Next, configure /etc/haproxy/haproxy.cfg on both of your load balancer nodes to look like this:

global
daemon
defaults
mode tcp
maxconn 10000
timeout connect 10s
timeout client 10s
timeout server 10s
frontend qpidd-vip
bind 172.16.56.227:5672
default_backend qpidd-mrg
frontend keystone-admin-vip
bind 172.16.56.227:35357
default_backend keystone-admin-api
frontend keystone-public-vip
bind 172.16.56.227:5000
default_backend keystone-public-api
frontend glance-registry-vip
bind 172.16.56.227:9191
default_backend glance-registry-api
frontend glance-api-vip
bind 172.16.56.227:9292
default_backend glance-api
frontend cinder-vip
bind 172.16.56.227:8776
default_backend cinder-api
frontend neutron-vip
bind 172.16.56.227:9696
default_backend neutron-api
frontend nova-vnc-novncproxy
bind 172.16.56.227:6080
default_backend nova-vnc-novncproxy
frontend nova-vnc-xvpvncproxy
bind 172.16.56.227:6081
default_backend nova-vnc-xvpvncproxy
frontend nova-metadata-api
bind 172.16.56.227:8775
default_backend nova-metadata
frontend nova-api-vip
bind 172.16.56.227:8774
default_backend nova-api
frontend horizon-vip
bind 172.16.56.227:80
default_backend horizon-api
frontend ceilometer-vip
bind 172.16.56.227:8777
default_backend ceilometer-api
frontend heat-cfn-vip
bind 172.16.56.227:8000
default_backend heat-cfn-api
frontend heat-cloudw-vip
bind 172.16.56.227:8003
default_backend heat-cloudw-api
frontend heat-srv-vip
bind 172.16.56.227:8004
default_backend heat-srv-api
backend qpidd-mrg
balance roundrobin
server oscon1.domain.net 172.16.56.224:5672 check inter 10s
server oscon2.domain.net 172.16.56.225:5672 check inter 10s
server oscon3.domain.net 172.16.56.226:5672 check inter 10s
backend keystone-admin-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:35357 check inter 10s
server oscon2.domain.net 172.16.56.225:35357 check inter 10s
server oscon3.domain.net 172.16.56.226:35357 check inter 10s
backend keystone-public-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:5000 check inter 10s
server oscon2.domain.net 172.16.56.225:5000 check inter 10s
server oscon3.domain.net 172.16.56.226:5000 check inter 10s
backend glance-registry-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:9191 check inter 10s
server oscon2.domain.net 172.16.56.225:9191 check inter 10s
server oscon3.domain.net 172.16.56.226:9191 check inter 10s
backend glance-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:9292 check inter 10s
server oscon2.domain.net 172.16.56.225:9292 check inter 10s
server oscon3.domain.net 172.16.56.226:9292 check inter 10s
backend cinder-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:8776 check inter 10s
server oscon2.domain.net 172.16.56.225:8776 check inter 10s
server oscon3.domain.net 172.16.56.226:8776 check inter 10s
backend neutron-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:9696 check inter 10s
server oscon2.domain.net 172.16.56.225:9696 check inter 10s
server oscon3.domain.net 172.16.56.226:9696 check inter 10s
backend nova-vnc-novncproxy
balance roundrobin
server oscon1.domain.net 172.16.56.224:6080 check inter 10s
server oscon2.domain.net 172.16.56.225:6080 check inter 10s
server oscon3.domain.net 172.16.56.226:6080 check inter 10s
backend nova-vnc-xvpvncproxy
balance roundrobin
server oscon1.domain.net 172.16.56.224:6081 check inter 10s
server oscon2.domain.net 172.16.56.225:6081 check inter 10s
server oscon3.domain.net 172.16.56.226:6081 check inter 10s
backend nova-metadata
balance roundrobin
server oscon1.domain.net 172.16.56.224:8775 check inter 10s
server oscon2.domain.net 172.16.56.225:8775 check inter 10s
server oscon3.domain.net 172.16.56.226:8775 check inter 10s
backend nova-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:8774 check inter 10s
server oscon2.domain.net 172.16.56.225:8774 check inter 10s
server oscon3.domain.net 172.16.56.226:8774 check inter 10s
backend horizon-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:80 check inter 10s
server oscon2.domain.net 172.16.56.225:80 check inter 10s
server oscon3.domain.net 172.16.56.226:80 check inter 10s
backend ceilometer-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:8777 check inter 10s
backend heat-cfn-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:8000 check inter 10s
server oscon2.domain.net 172.16.56.225:8000 check inter 10s
server oscon3.domain.net 172.16.56.226:8000 check inter 10s
backend heat-cloudw-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:8003 check inter 10s
server oscon2.domain.net 172.16.56.225:8003 check inter 10s
server oscon3.domain.net 172.16.56.226:8003 check inter 10s
backend heat-srv-api
balance roundrobin
server oscon1.domain.net 172.16.56.224:8004 check inter 10s
server oscon2.domain.net 172.16.56.225:8004 check inter 10s
server oscon3.domain.net 172.16.56.226:8004 check inter 10s
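
Before handing haproxy over to Pacemaker, a couple of sanity checks I’d suggest on both load balancer nodes: validate the config syntax, and make sure haproxy is not set to start at boot (Pacemaker will be the one starting it):

haproxy -c -f /etc/haproxy/haproxy.cfg
chkconfig haproxy off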

The next step is to configure Pacemaker on each of the nodes so that it brings haproxy up once the resources are configured.

chkconfig pacemaker on
sysctl -w net.ipv4.ip_nonlocal_bind=1
pcs cluster setup --name lb-cluster oslb1.domain.net oslb2.domain.net
pcs cluster start
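
Note that sysctl -w only changes the running kernel; to keep the non-local bind setting across reboots, it also needs to land in /etc/sysctl.conf on both nodes, for example:

echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
sysctl -p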

The next step is to define the cluster resources:

pcs resource defaults resource-stickiness=100
pcs resource create lb-master-vip IPaddr2 ip=172.16.56.227
pcs resource create lb-haproxy lsb:haproxy
pcs resource group add haproxy-group lb-haproxy lb-master-vip

Now the only thing left to do is to create the STONITH resources:

pcs stonith create fence_oslb1 fence_vmware_soap login=root passwd=vmware action=reboot ipaddr=vcenteripaddress port=/vsandc/vm/oslb1 ssl=1 pcmk_host_list=oslb1.domain.net

pcs stonith create fence_oslb2 fence_vmware_soap login=root passwd=vmware action=reboot ipaddr=vcenteripaddress port=/vsandc/vm/oslb2 ssl=1 pcmk_host_list=oslb2.domain.net

You should now have haproxy running as a clustered service, serving a virtual IP address for your OpenStack services.
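
To confirm everything came up, check the cluster state and hit one of the VIP-fronted endpoints; Keystone’s public port is an easy one to test in this sketch:

pcs status
curl http://172.16.56.227:5000/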

Installing an active/passive MySQL database cluster for OpenStack Havana on RHEL 6.5

Since RHEL 6.5 does not ship with an active/active version of MySQL, you’re limited to an active/passive configuration out of the box. There are alternative methods; however, this is the configuration supported by Red Hat.

The requirements for creating an active/passive MySQL cluster are simple: you need shared storage. In this case, NFS is sufficient, as the database is not all that transaction-heavy. From here on out, I’m assuming that you’re running a registered copy of RHEL 6.5 on two servers, with a common NFS mount on both servers under /var/lib/mysql, and that your MySQL servers are virtual machines running on an ESX cluster.
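
For reference, the shared-storage prerequisite is just the same NFS export mounted at /var/lib/mysql on both nodes; something along these lines, where nfsserver:/export/mysql is a placeholder for your actual export:

mount -t nfs nfsserver:/export/mysql /var/lib/mysql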

First step, set up the RHEL HA channel.
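
Mirroring the channel registration from the haproxy section above, that would presumably look something like this:

rhn-channel --user username --password passw0rd -a -c rhel-x86_64-server-ha-6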