Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO)

The Open Compute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10Gb NICs and a boatload of locally attached cheap storage!

In preparation for deploying Red Hat RDO on RHEL, the distributed filesystem I chose was GlusterFS. It's simple to deploy and only takes a couple of minutes to get up and running.
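To give a sense of how little is involved, a basic two-node replicated volume boils down to a handful of commands. A rough sketch, assuming the GlusterFS server packages are already available to yum; the hostnames and brick paths are purely illustrative:

```
# On each node: install and start the Gluster daemon (RHEL/CentOS 6)
yum install -y glusterfs-server
service glusterd start
chkconfig glusterd on

# From one node: probe the other (hostname illustrative)
gluster peer probe gluster2

# Create and start a 2-way replicated volume (brick paths illustrative)
gluster volume create gv0 replica 2 \
    gluster1:/export/brick1 gluster2:/export/brick1
gluster volume start gv0
```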

The first thing I did was configure my local 10Gb interfaces for heartbeat traffic. To do that, I created a sub-interface on VLAN 401 on each node, using 10.124.1.0/24 addressing. Continue reading “Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO)”
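For reference, a sketch of one node's heartbeat sub-interface config. Here eth0 stands in for whatever the 10Gb NIC is actually named, and the address is one example out of the /24:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.401
# VLAN 401 sub-interface for GlusterFS heartbeat traffic (RHEL 6 style)
DEVICE=eth0.401
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.124.1.11
NETMASK=255.255.255.0

# Bring it up with: ifup eth0.401
```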

VMware vSAN on Open Compute

As we all know, the largest cost component of virtualization is typically shared storage. Be it Fibre Channel, NAS, or iSCSI, it’s all expensive! Let alone a flash-based array like Tintri or Whiptail.

OCP Container

One of the use cases for our OCP (Open Compute Platform) Container (above) gear is testing VMware’s new vSAN product. With vSAN, VMware not only takes the expensive disk cost out of the picture, but also locks you into using their hypervisor exclusively. Continue reading “VMware vSAN on Open Compute”

Static drive mapping using Open Compute Windmill with CentOS 6.4 and the Open Compute Open Vault (Knox Unit) JBOD

In my previous post about Installing CentOS on the Open Compute Windmill servers, all of the testing was completed without the OCP Knox Unit attached. Once it was connected, it routinely caused drive mapping issues: /dev/sda would become /dev/sdb, /dev/sdo, or /dev/sdp at reboot, causing the server to hang at boot since it could not find the appropriate filesystem.

The problem was that the megaraid_sas driver was being loaded prior to the libsas driver, causing the Knox Unit drives to come online before the internal drives. Unfortunately, the ordering was not consistent enough to just use /dev/sdb, /dev/sdo, or /dev/sdp as the boot drive, since it rotated depending on which server I was connected to.

After a plethora of testing, the working solution I came up with was the following (example snippets follow the list):

1. Blacklist megaraid_sas in the PXE menu as a kernel parameter using: rdblacklist=megaraid_sas

2. Blacklist megaraid_sas in the kickstart file as a kernel parameter using: rdblacklist=megaraid_sas

3. Blacklist megaraid_sas in /etc/modprobe.d/blacklist.conf with blacklist megaraid_sas

4. Load megaraid_sas in /etc/rc.modules with modprobe megaraid_sas
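Taken together, steps 1, 3, and 4 look roughly like the following. This is a sketch: the PXE label, image paths, and kickstart URL are purely illustrative, not from my actual config:

```
# PXE menu entry (pxelinux.cfg) — label, paths, and ks URL illustrative
label centos64
  kernel images/centos6.4/vmlinuz
  append initrd=images/centos6.4/initrd.img ks=http://10.0.0.1/ks.cfg rdblacklist=megaraid_sas

# /etc/modprobe.d/blacklist.conf — keep megaraid_sas out of the normal load order
blacklist megaraid_sas

# /etc/rc.modules — load it late, after the internal drives have enumerated.
# rc.sysinit only runs this file if it is executable (chmod +x /etc/rc.modules).
modprobe megaraid_sas
```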

My updated kickstart:
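A minimal sketch of the pieces that matter; the drive names, partitioning, and %post contents here are illustrative, not copied verbatim from my actual file:

```
# Illustrative excerpts — not the complete kickstart
bootloader --location=mbr --driveorder=sda --append="rdblacklist=megaraid_sas"
clearpart --all --drives=sda --initlabel

%post
# Blacklist megaraid_sas for normal boots...
echo "blacklist megaraid_sas" >> /etc/modprobe.d/blacklist.conf
# ...then load it late so the Knox Unit drives come online after the internals
echo "modprobe megaraid_sas" >> /etc/rc.modules
chmod +x /etc/rc.modules
%end
```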


Open Compute Windmill + Open Compute Open Vault hangs at booting from local disk fix

Working through installing CentOS 6.4, ESXi, and others, I started running into issues where the systems would run their PXE installations just fine, then hang at “booting from local disk” afterwards. As it turns out, the systems were trying to boot from /dev/sda when /dev/sda was not always where the OS had been installed: depending on the boot, the local SSD might show up as /dev/sda, /dev/sdo, etc. This is due to the mpt_sas driver getting loaded after the megaraid_sas driver.
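A quick way to sanity-check which driver owns which disk before trusting the /dev/sdX names. A sketch, assuming a stock CentOS 6.4 install (lsscsi is an extra package):

```
# Map each sdX back to its controller via the persistent by-path names
ls -l /dev/disk/by-path/

# Or, with the lsscsi package installed, list devices with their HBAs
lsscsi -v

# After blacklisting megaraid_sas, confirm the internal SSD enumerates first
cat /proc/partitions
```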

Continue reading “Open Compute Windmill + Open Compute Open Vault hangs at booting from local disk fix”