How do you operationalize OpenStack?

So after "living with OpenStack" for a couple of months now, I've been taking notes, and I figured I'd share them. This view is from a managed services environment, but most of it applies to an enterprise environment as well.

  • Our OpenStack architecture cannot, and will not, look like our VMware architecture, where each hypervisor node also provides storage, etc.
    • Compute nodes would be just that: compute nodes with storage client connectivity (Gluster, Cinder, NFS, GPFS, etc.)
    • Compute nodes would also need to run the Neutron agents
    • Controller nodes would be service nodes, running:
      • Heat
      • Glance
      • Horizon
      • Keystone
      • Nova
      • Neutron
    • Storage nodes would be service nodes, running:
      • Cinder
      • NFS
      • GPFS/Gluster
    • Storage tiering as we know it (pick the datastore you want) does not exist in OpenStack
      • For storage tiering, you can skin that cat many ways (see the cinder.conf sketch after this list):
        • Multiple Cinder nodes with different storage backings
        • Multiple Gluster/Ceph nodes with different storage backings
    • Networking… so far, we've approached OpenStack networking the same way we do it today: each VM has its own real IP address.
      • OpenStack allows us to empower the customer to build/destroy/configure their own networks
        • Faults to this: we’re not mature enough in managed services (skill-set throughout the stack)
        • Our customers are not mature enough
        • Further complicates an already complicated stance
    • HA OpenStack (VMware clone) would be impossible to operationalize
      • Some services would need to be cloned then load balanced
      • Some services would need to be configured for failover
      • This further obfuscates an already complicated stack.
    • Horizon is a huge step in the right direction for customer interaction.
      • The Horizon community is lacking essential service plugins (billing, automation, networking, storage)
    • Image management
      • This is essentially nonexistent, and what does exist is very cumbersome
      • The "best" method I've found is to build your images on VMware, then import the VMDK.
      • Tinkering with qemu-kvm is bothersome because of the network separation between qemu-kvm and OpenStack
      • Static routes need to be present in Linux VMs; additional routes are added to the VLANs for Windows VMs
    • Ceilometer
      • API interaction/CLI interaction only, not available via Horizon
    • Heat
      • More of a vApp utility than an orchestration engine as we know it
    • Still very powerful, but very difficult to interact with (customer/operations)
  • My #1 complaint is that there are so, so many interfaces to interact with when it comes to just running OpenStack:
    • Linux
    • Nova
    • Keystone
    • Cinder
    • Neutron
    • Heat
    • Ceilometer
    • Horizon
    • Glance
    • Storage subsystems
      • NFS
      • iSCSI
      • Local
      • Gluster
      • Ceph
      • GPFS
    • Pacemaker
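
On the storage tiering point above: the usual way to get a "pick your tier" experience is Cinder's multi-backend support plus volume types. A minimal sketch, with hypothetical backend names, drivers and share files, in /etc/cinder/cinder.conf:

```
[DEFAULT]
# expose two tiers from one cinder-volume node
enabled_backends = nfs-gold,gluster-silver

[nfs-gold]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = GOLD
nfs_shares_config = /etc/cinder/nfs_shares_gold

[gluster-silver]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
volume_backend_name = SILVER
glusterfs_shares_config = /etc/cinder/glusterfs_shares_silver
```

Volume types then map to those backends so users can pick a tier at volume-create time:

```
cinder type-create gold
cinder type-key gold set volume_backend_name=GOLD
cinder type-create silver
cinder type-key silver set volume_backend_name=SILVER
```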

How do you then operationalize OpenStack? Even using a product like IBM SmartCloud Orchestrator or VMware vCAC to "hide" OpenStack behind, you still need operations procedures for when "it" breaks… and as we all know, "it" will break.

How do you operationalize OpenStack?

OpenStack cloud-init cannot contact or ping 169.254.169.254 to establish meta-data connection – fix

Using OpenStack Open vSwitch with VLANs removes a lot of the trickery involved with using public and private IPs. That way, each VM gets its own real IP address assigned to it.

In this case, our network layout looks like this:

[Figure: Logical Network Layout]

That being said, the VMs still need a way to get back to 169.254.169.254 to reach the OpenStack metadata service. In this case, the fix was to add a static route to the VM and reconfigure the neutron-dhcp-agent service as shown below.

On the VM template, add /etc/sysconfig/network-scripts/route-eth0:
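
As a minimal sketch, assuming the Neutron DHCP port on the VM's subnet sits at 10.0.0.2 (a placeholder; use the DHCP server address handed out on your network), the route file looks something like this:

```
# /etc/sysconfig/network-scripts/route-eth0
# Host route that sends metadata traffic to the Neutron DHCP port on this subnet.
# 10.0.0.2 is a placeholder -- substitute the DHCP server address on your network.
ADDRESS0=169.254.169.254
NETMASK0=255.255.255.255
GATEWAY0=10.0.0.2
```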

Then add the following to /etc/neutron/dhcp_agent.ini:
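
The key setting is the isolated-metadata option under [DEFAULT], which tells the DHCP agent to proxy the metadata service on these VLAN-backed networks; something along these lines:

```
# /etc/neutron/dhcp_agent.ini  ([DEFAULT] section)
# Have the DHCP agent spawn a metadata proxy on networks without an L3 router
enable_isolated_metadata = True
```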

and then restart the Neutron DHCP agent service:
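
On a RHEL 6 based install, that's simply:

```
service neutron-dhcp-agent restart
```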

Once this is finished, you should then be able to add the updated image to glance and deploy without issue.

Using haproxy as a load balancer for OpenStack services on RedHat OpenStack

Configuring Red Hat OpenStack to be highly available is not like VMware, where you just enable a couple of features, check some boxes, and voila! Quite the contrary… in fact, configuring Red Hat OpenStack to be highly available is quite elegant.

Let's look at it like this: quite a few services make up OpenStack as a product, services like Keystone (auth), Neutron (networking), Glance (image storage), Cinder (volume storage), Nova (compute/scheduler), and MySQL (database). Some of these services are made highly available via failover, and some via cloning, i.e. running multiple copies of the same service distributed throughout the deployment.

[Figure: OpenStack HA]

In this case, we need to configure a cloned pair of load balancers to use for our OpenStack services. This will allow us to reference a single virtual IP that is load balanced across all cloned service nodes. For this functionality, we're going to use haproxy as our load balancing software and Pacemaker for clustering.

[Figure: Load Balanced OpenStack Services]

The first step is to configure the RHEL HA and LB yum channels with the following:
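
With RHN Classic, the channel names are along these lines (subscription-manager based setups will differ):

```
# High Availability and Load Balancer add-on channels for RHEL 6 (RHN Classic)
rhn-channel --add --channel=rhel-x86_64-server-ha-6
rhn-channel --add --channel=rhel-x86_64-server-lb-6
```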

Then simply install the HA and LB packages with:
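
Roughly the following, on both load balancer nodes:

```
yum install -y pacemaker cman pcs fence-agents haproxy
```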

Next, configure /etc/haproxy/haproxy.cfg to look like this on both of your load balancer nodes:
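
This is only a sketch; the addresses below are placeholders (192.168.1.20 as the virtual IP, .21/.22 as the cloned controller nodes), and the same pattern repeats for every cloned API service. Note that binding to a VIP that isn't local yet requires net.ipv4.ip_nonlocal_bind=1 on both nodes.

```
global
    daemon

defaults
    mode tcp
    option tcpka
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Keystone public and admin APIs
listen keystone-api
    bind 192.168.1.20:5000
    balance roundrobin
    server ctl01 192.168.1.21:5000 check
    server ctl02 192.168.1.22:5000 check

listen keystone-admin-api
    bind 192.168.1.20:35357
    balance roundrobin
    server ctl01 192.168.1.21:35357 check
    server ctl02 192.168.1.22:35357 check

# Glance API
listen glance-api
    bind 192.168.1.20:9292
    balance roundrobin
    server ctl01 192.168.1.21:9292 check
    server ctl02 192.168.1.22:9292 check

# Repeat the same listen-block pattern for nova-api (8774), neutron-server (9696),
# cinder-api (8776) and the Horizon dashboard (80).
```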

The next step is to configure Pacemaker on each of the nodes so that it starts haproxy once the resources are configured.
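
Assuming pcsd is running on both nodes (lb01 and lb02 are placeholder host names), the cluster itself is created roughly like this; haproxy is taken out of the normal init sequence because Pacemaker will be the one starting it:

```
# authenticate the nodes to each other and build the two-node cluster
pcs cluster auth lb01 lb02
pcs cluster setup --name lb-cluster lb01 lb02
pcs cluster start --all

# Pacemaker manages haproxy, so don't let init start it at boot
chkconfig haproxy off
```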

The next step is to define the cluster resources:
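
A sketch of the resource definitions, again with placeholder names and addresses: the virtual IP as an IPaddr2 resource, haproxy cloned across both nodes, and constraints to keep the VIP on a node where haproxy is actually running.

```
# sane default for a two-node cluster
pcs property set no-quorum-policy=ignore

# the virtual IP the OpenStack endpoints will point at (placeholder address)
pcs resource create lb-vip ocf:heartbeat:IPaddr2 ip=192.168.1.20 cidr_netmask=24

# haproxy as an LSB resource, cloned so it runs on both load balancer nodes
pcs resource create lb-haproxy lsb:haproxy --clone

# keep the VIP with a running haproxy instance, and start haproxy first
pcs constraint colocation add lb-vip with lb-haproxy-clone
pcs constraint order lb-haproxy-clone then lb-vip
```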

Now the only thing left to do is to create the STONITH resources:
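
What the fence devices look like depends entirely on your hardware; as a hypothetical example for load balancer nodes that are vSphere VMs (all names and credentials below are placeholders), fence_vmware_soap would look something like:

```
# one fence device per node
pcs stonith create fence-lb01 fence_vmware_soap \
    ipaddr=vcenter.example.com login=fenceuser passwd=fencepass \
    ssl=1 port="lb01" pcmk_host_list="lb01"
pcs stonith create fence-lb02 fence_vmware_soap \
    ipaddr=vcenter.example.com login=fenceuser passwd=fencepass \
    ssl=1 port="lb02" pcmk_host_list="lb02"

# never run a fence device on the node it is meant to fence
pcs constraint location fence-lb01 avoids lb01
pcs constraint location fence-lb02 avoids lb02
```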

You should now have haproxy running as a clustered service, serving a virtual IP address for your OpenStack services.

Installing an active/passive MySQL database cluster for OpenStack Havana on RHEL 6.5

Since RHEL 6.5 does not ship with an active/active version of MySQL, you're limited to an active/passive configuration out of the box. There are alternative methods; however, this is the configuration supported by Red Hat.

The requirements for creating an active/passive MySQL cluster are quite simple: you just need shared storage. In this case, NFS is sufficient, as the database is not all that transaction-heavy. From here on out, I'm assuming that you're running a registered copy of RHEL 6.5 on two servers, with a common NFS mount on both servers under /var/lib/mysql, and that your MySQL servers are virtual machines running on an ESX cluster.
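
For the shared storage itself, a plain NFS mount on both nodes is enough; the filer address and export path below are placeholders. (In a stricter setup you'd let Pacemaker manage the mount as an ocf:heartbeat:Filesystem resource instead of fstab, so only the active node has it mounted.)

```
# /etc/fstab on both MySQL nodes -- placeholder filer and export
filer.example.com:/exports/osp-mysql  /var/lib/mysql  nfs  defaults,_netdev  0 0
```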

First step, set up the RHEL HA channel:
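
As in the haproxy post above, with RHN Classic that's simply:

```
rhn-channel --add --channel=rhel-x86_64-server-ha-6
```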