So after “living with OpenStack” for a couple of months now, I’ve been taking notes, and I figured I’d share them. This view is from a managed services environment, but it should translate to an enterprise view as well.

  • Our OpenStack architecture cannot, and will not, look like our VMware architecture, where each hypervisor node also provides storage, etc.
    • Compute nodes would be just that: compute nodes with storage client connectivity (Gluster, Cinder, NFS, GPFS, etc.)
    • Compute nodes would also need to run Neutron
    • Controller nodes would be services nodes:
      • Heat
      • Glance
      • Horizon
      • Keystone
      • Nova
      • Neutron
    • Storage Nodes would be services nodes:
      • Cinder
      • NFS
      • GPFS/Gluster
    • Storage tiering as we know it (pick the datastore you want) does not exist in OpenStack
      • For storage tiering, you can skin that cat many ways:
        • Multiple Cinder nodes with different storage backings
        • Multiple Gluster/Ceph nodes with different storage backings
    • Networking… so far, we’ve approached networking the same way we’re doing it now, where each VM has its own real IP address.
      • OpenStack allows us to empower the customer to build/destroy/configure their own networks
        • Faults to this: we’re not mature enough in managed services (skill-set throughout the stack)
        • Our customers are not mature enough
        • Further complicates an already complicated stance
    • HA OpenStack (VMware clone) would be impossible to operationalize
      • Some services would need to be cloned then load balanced
      • Some services would need to be configured in a failover capacity
      • This further obfuscates the entire stack beyond what it already is.
    • Horizon is a huge step in the right direction for customer interaction.
      • Horizon community is lacking essential services plugins (billing, automation, networking, storage)
    • Image management
      • This is nonexistent and very cumbersome
      • The “best” method I’ve found is to build your images on VMware, then import the VMDK.
      • Tinkering with qemu-kvm is bothersome because of the network separation between qemu-kvm and OpenStack
      • Static routes need to be present in Linux VMs; additional routes are added to VLANs for Windows VMs
    • Ceilometer
      • API interaction/CLI interaction only, not available via Horizon
    • Heat
      • More of a vApp utility than an orchestration engine as we know it
    • Still very powerful, but very difficult to interact with (customer/operations)
  • My #1 complaint is the sheer number of interfaces you have to interact with just to run OpenStack:
    • Linux
    • Nova
    • Keystone
    • Cinder
    • Neutron
    • Heat
    • Ceilometer
    • Horizon
    • Glance
    • Storage subsystems
      • NFS
      • iSCSI
      • Local
      • Gluster
      • Ceph
      • GPFS
    • Pacemaker

How do you then operationalize OpenStack? Even using a product like IBM SmartCloud Orchestrator or VMware vCAC to “hide” OpenStack behind, you still need operations procedures for when “it” breaks… and as we all know, “it” will break.

How do you operationalize OpenStack?

Configuring Red Hat OpenStack to be highly available is not like VMware, where you just enable a couple of features, check some boxes and voila! Quite the contrary… in fact, configuring Red Hat OpenStack to be highly available is quite elegant.

Let’s look at it like this: quite a few services make up OpenStack as a product. Services like Keystone (auth), Neutron (networking), Glance (image storage), Cinder (volume storage), Nova (compute/scheduler) and MySQL (database). Some of these services are made highly available via failover, and some via cloning; that is, having multiple copies of the same service distributed throughout the deployment.

OpenStack HA


In this case, we need to configure a cloned pair of load balancers for our OpenStack services. This will allow us to reference a single virtual IP that is load balanced across all cloned service nodes. For this functionality, we’re going to use haproxy as our load balancing software and Pacemaker for clustering.

Load Balanced OpenStack Services


The first step is to configure the RHEL HA and LB yum channels with the following:
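On a system registered with RHN Classic, that could look like the sketch below. The channel names assume RHEL 6; a system using subscription-manager would enable the equivalent HA and LB repos instead.

```shell
# Add the High Availability and Load Balancer channels (RHEL 6, RHN Classic)
rhn-channel --add --channel=rhel-x86_64-server-ha-6
rhn-channel --add --channel=rhel-x86_64-server-lb-6
```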

Then simply install the HA and LB packages with:
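Package names vary slightly between releases, but on RHEL 6 something along these lines should pull in the Pacemaker stack, its management tooling, and haproxy itself:

```shell
# Cluster stack (pacemaker/cman), pcs for management, fencing agents, and haproxy
yum install -y pacemaker cman pcs fence-agents haproxy
```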

Next, configure /etc/haproxy/haproxy.cfg to look like this on both of your load balancer nodes:
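The exact contents depend on your topology. A minimal sketch, assuming two controller nodes at 192.168.1.11/.12 and a VIP of 192.168.1.10 (all placeholders), with one listen block per OpenStack API; only Keystone and Glance are shown, and the pattern repeats for Nova, Neutron, Cinder, Horizon, etc.:

```
global
    daemon
    maxconn 10000

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Keystone public API
listen keystone-api
    bind 192.168.1.10:5000
    balance roundrobin
    server controller1 192.168.1.11:5000 check
    server controller2 192.168.1.12:5000 check

# Glance API
listen glance-api
    bind 192.168.1.10:9292
    balance roundrobin
    server controller1 192.168.1.11:9292 check
    server controller2 192.168.1.12:9292 check
```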

With haproxy in place, configure Pacemaker on each of the nodes so that it starts haproxy once the resources are configured.
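Assuming two load balancer nodes named lb01 and lb02 (placeholders) and the pcs tooling, bringing up the cluster itself could look roughly like this; note that haproxy is removed from init control so Pacemaker owns it:

```shell
# Pacemaker, not init, will start haproxy
chkconfig haproxy off

# Authenticate the nodes to each other, then create and start the cluster
pcs cluster auth lb01 lb02 -u hacluster
pcs cluster setup --name lb-cluster lb01 lb02
pcs cluster start --all
```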

Then define the cluster resources:
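A sketch of the resources, assuming a VIP of 192.168.1.10 (a placeholder); putting the VIP and haproxy in the same group keeps them running on the same node:

```shell
# Virtual IP that the OpenStack service endpoints will reference
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 \
    ip=192.168.1.10 cidr_netmask=24 --group lb-group

# haproxy itself, in the same group so it always follows the VIP
pcs resource create HAProxy lsb:haproxy --group lb-group
```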

Now the only thing left to do is create the stonith resources:
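The fencing device depends entirely on your hardware. As an illustration only, IPMI-based fencing for the two hypothetical lb01/lb02 nodes might look like the following; every address and credential here is a placeholder:

```shell
# One fence device per node (all values are placeholders)
pcs stonith create fence-lb01 fence_ipmilan \
    pcmk_host_list="lb01" ipaddr="10.0.0.101" login="admin" passwd="changeme"
pcs stonith create fence-lb02 fence_ipmilan \
    pcmk_host_list="lb02" ipaddr="10.0.0.102" login="admin" passwd="changeme"

pcs property set stonith-enabled=true
```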

You should now have haproxy running as a clustered service, serving a virtual IP address for your OpenStack services.

When trying to launch a dual-homed VM within OpenStack Havana, I would get an error in the Horizon interface.

When looking in Nova, I would see the error below when running nova show 'instancename':

fault | {u'message': u'No valid host was found. ', u'code': 500

Which, by itself, is pretty damn ambiguous, so I started digging through the instance logs and the nova-compute logs. However, by default, debug mode was enabled in /etc/nova/nova.conf and was drowning out all the useful information. After disabling debug mode and restarting openstack-nova-compute, I was able to see the following error messages:

In my neutron networking I had the following created:
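The same state can be inspected with the Havana-era neutron CLI, which shows each network and which subnets (if any) are attached to it:

```shell
# List networks and their attached subnets, then the subnets themselves
neutron net-list
neutron subnet-list
```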

As you can see, I did not have a subnet associated with the 192.168.128.0/24 network. After adding the subnet like so:
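With the Havana-era neutron CLI, that would be roughly the following; the network name net-128 is a placeholder for whatever your 192.168.128.0/24 network is actually called:

```shell
# Attach a subnet to the network that was missing one
neutron subnet-create net-128 192.168.128.0/24 --name subnet-128
```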

I was able to then successfully deploy the virtual machine.

Playing with KVM and OpenStack I wanted to create a custom Linux virtual machine template. The easiest way to do this is to first create a blank disk using the qemu-img command.
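A sketch of that command, with the image path as a placeholder:

```shell
# Create a blank 20 GB qcow2 disk for the template
qemu-img create -f qcow2 /var/lib/libvirt/images/template.qcow2 20G
```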

Here we specify the format as qcow2 with the -f switch, the path to where we want to create it, and the size by specifying 20G.

See the qemu-img man page for more detail.

Once you have the disk created, you can then use the virt-install command to install an OS to the blank disk.

With the following command, we specify the amount of memory, hardware acceleration, name, path to the blank disk, what ISO file we want attached and console arguments so I can do the install via SSH session.
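Something along these lines, with the name, memory size, and paths as placeholders; note --location is used rather than --cdrom so that --extra-args can pass the serial console option through to the installer:

```shell
virt-install \
    --name rhel-template \
    --ram 2048 \
    --accelerate \
    --disk path=/var/lib/libvirt/images/template.qcow2 \
    --location /tmp/rhel-server-6.5-x86_64-dvd.iso \
    --nographics \
    --extra-args "console=ttyS0"
```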

Without using the console specifications, you will not be able to fully boot the image, as it will hang at:

Not giving you much information as to what is going on :)

Once your image build is completed, you can then import the image into OpenStack using glance
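With the Havana-era glance CLI, the import looks something like this; the image name and file path are placeholders:

```shell
glance image-create --name rhel-template \
    --disk-format qcow2 --container-format bare \
    --is-public True --file /var/lib/libvirt/images/template.qcow2
```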

You should now be able to see the image registered with Nova using the nova image-list command.

Now you can use the nova boot command to create a VM from the image and start it.
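For example, where the image, flavor, and instance names are all placeholders:

```shell
# Boot a new instance from the imported template image
nova boot --image rhel-template --flavor m1.small myinstance
```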