Using OpenStack Open vSwitch with VLANs removes a lot of the trickery involved with using public and private IPs. That way, each VM gets its own real IP address assigned to it.

In this case, our network layout looks as such:

Logical Network Layout

That being said, the VMs still need a way to reach 169.254.169.254 for access to the OpenStack metadata service. In this case, the fix was to add a static route to the VM template and reconfigure the neutron-dhcp-agent service as follows.

On the VM template, add /etc/sysconfig/network-scripts/route-eth0:
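A minimal sketch of such a route file, assuming 10.0.0.1 as the VM's gateway (substitute your network's actual gateway):

```sh
# /etc/sysconfig/network-scripts/route-eth0
# Send metadata-service traffic out via the gateway.
# 10.0.0.1 is a placeholder -- use your own gateway address.
169.254.169.254/32 via 10.0.0.1 dev eth0
```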

Then add the following to /etc/neutron/dhcp-agent.ini
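The relevant option is likely enable_isolated_metadata, which has the DHCP agent serve metadata routes itself; a sketch:

```ini
# /etc/neutron/dhcp-agent.ini
[DEFAULT]
enable_isolated_metadata = True
```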

and then restart the Neutron dhcp agent service:
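On RHEL-style systems of that era, that would be:

```sh
service neutron-dhcp-agent restart
```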

Once this is finished, you should then be able to add the updated image to glance and deploy without issue.

Configuring Glance, Cinder and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you've already got GlusterFS up and running.

So let's first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for each of Cinder, Glance and Nova.
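A layout along those lines could be created like this (server and brick names are illustrative, not the ones from my setup):

```sh
# Two-node replicated volumes; substitute your own hosts and brick paths.
gluster volume create cinder replica 2 server1:/bricks/cinder server2:/bricks/cinder
gluster volume create glance replica 2 server1:/bricks/glance server2:/bricks/glance
gluster volume create nova   replica 2 server1:/bricks/nova   server2:/bricks/nova
gluster volume start cinder
gluster volume start glance
gluster volume start nova
```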

And I have each of these filesystems mounted.
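For Glance and Nova, the mounts might look like the following /etc/fstab entries (Cinder mounts its own share via shares.conf, so it is not listed here); the mount points are assumptions based on the default state directories:

```sh
# /etc/fstab
127.0.0.1:/glance  /var/lib/glance  glusterfs  defaults,_netdev  0 0
127.0.0.1:/nova    /var/lib/nova    glusterfs  defaults,_netdev  0 0
```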

Now that I have OpenStack Cinder, Nova and Glance installed; I can then configure them to use my Gluster mounts.

Modify /etc/cinder/cinder.conf to reflect the Gluster configuration
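For Havana, the GlusterFS driver settings look like this:

```ini
# /etc/cinder/cinder.conf
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/mnt
```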

Also make sure that /etc/cinder/shares.conf has the Gluster share listed
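One share per line, matching the mount shown later in the mount output:

```
# /etc/cinder/shares.conf
127.0.0.1:/cinder
```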

Then, create the images folder for Glance.

Then, modify the file permissions so that it's usable.
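The two steps above boil down to something like:

```sh
mkdir -p /var/lib/glance/images
chown -R glance:glance /var/lib/glance
```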

Modify the glance configuration to reflect the Gluster mount points in /etc/glance/glance-api.conf
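With the Gluster volume mounted on /var/lib/glance, the filesystem store just needs to point at the images folder created above:

```ini
# /etc/glance/glance-api.conf
default_store = file
filesystem_store_datadir = /var/lib/glance/images
```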

Restart Glance Services
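```sh
service openstack-glance-api restart
service openstack-glance-registry restart
```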

Create nova folder structure

Then, modify the file permissions so that it's usable.
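As with Glance, those two steps are along the lines of:

```sh
mkdir -p /var/lib/nova/instances
chown -R nova:nova /var/lib/nova
```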

Modify nova config in /etc/nova/nova.conf to reflect the Gluster mount points.
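With the Gluster volume mounted on /var/lib/nova, only the instances path matters here:

```ini
# /etc/nova/nova.conf
[DEFAULT]
instances_path = /var/lib/nova/instances
```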

Restart nova
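Restart whichever Nova services run on the node, for example:

```sh
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-compute restart
```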

Verify OpenStack services
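On an RDO-style install, something like the following gives a quick health check (openstack-status ships in the openstack-utils package):

```sh
openstack-status
nova-manage service list
```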

You should now see that Cinder has its GlusterFS share mounted when issuing the mount command:

Note: 127.0.0.1:/cinder on /var/lib/cinder/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

The main issue with installing OSes on these Winterfell servers is the complete lack of a video card :)

Your only option is a serial console using the USB-to-serial header adapter.

So the task is to install RedHat or CentOS (or other RHEL-style OSes) via PXE, with serial-console-only access. To do this, we need to pass boot parameters to the PXE menu entry.

In this particular case, my 1GbE interface on the OpenCompute V3 (Winterfell) servers is recognized as eth2. So my PXE menu entry looks as such:
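A pxelinux entry along those lines might look like this; the label, kernel and initrd paths are illustrative, while ksdevice=eth2 matches the interface above and the console= arguments redirect installer output to the serial port:

```
# Top of the pxelinux.cfg file, so the PXE menu itself renders over serial:
serial 0 115200

label centos-serial
  menu label Install CentOS over serial console
  kernel images/centos/vmlinuz
  append initrd=images/centos/initrd.img ksdevice=eth2 console=tty0 console=ttyS0,115200n8
```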

Playing with KVM and OpenStack, I wanted to create a custom Linux virtual machine template. The easiest way to do this is to first create a blank disk using the qemu-img command.

Here we specify the format as qcow2 with the -f switch, the path to where we want to create it, and the size by specifying 20G.
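Putting that together (the target path is a placeholder; put the image wherever suits your setup):

```sh
qemu-img create -f qcow2 /var/lib/libvirt/images/template.qcow2 20G
```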

See the qemu-img man page for more details.

Once you have the disk created, you can then use the virt-install command to install an OS to the blank disk.

With the following command, we specify the amount of memory, hardware acceleration, name, path to the blank disk, what ISO file we want attached and console arguments so I can do the install via SSH session.
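A sketch of such an invocation (VM name and ISO path are placeholders; note that --extra-args only works with --location, not --cdrom, which is what lets us pass console= to the installer kernel):

```sh
virt-install \
  --name template \
  --ram 2048 \
  --accelerate \
  --disk path=/var/lib/libvirt/images/template.qcow2,format=qcow2 \
  --location /tmp/CentOS-6.5-x86_64-minimal.iso \
  --nographics \
  --extra-args "console=ttyS0,115200"
```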

Without using the console specifications, you will not be able to fully boot the image, as it will hang at:

Not giving you much information as to what is going on :)

Once your image build is completed, you can then import the image into OpenStack using Glance.
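With the Havana-era glance client, that looks something like this (image name is a placeholder):

```sh
glance image-create --name centos-template \
  --disk-format qcow2 --container-format bare \
  --is-public True \
  --file /var/lib/libvirt/images/template.qcow2
```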

You should now be able to see the image registered with Nova using the nova image-list command.
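```sh
nova image-list
```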

Now you can use the nova boot command to create a VM from the image and start it.
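For example (image, flavor and instance names are illustrative):

```sh
nova boot --image centos-template --flavor m1.small myvm
```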