After working this issue with Quanta/VMware since July of 2014, we finally have a new working BIOS for the Quanta Winterfell motherboards.
What was happening: You could see everything on the SOL interface, but you could not send any key sequences (no keyboard input was being accepted).
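For context, this is how we open the SOL console with ipmitool (the BMC address and credentials below are placeholders, not real values from our environment):

```shell
# Activate a Serial-over-LAN session against the node's BMC
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol activate

# If a stale session is holding the console, drop it first
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol deactivate
```

With the broken BIOS, the session above would display output normally but silently drop everything you typed.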
You can grab the BIOS here: F03C3A05 Working BIOS
Continue reading “Updated Quanta Winterfell BIOS fixes issue where you cannot send keystrokes to ESXi via IPMItool”
On our OCP Winterfell nodes running CentOS 6, the 10Gb Mellanox NICs show up as eth0 and eth1, while the 1Gb management interface shows up as eth2. We are also using Brocade 10Gb top-of-rack switches, so configuring LLDP was necessary for the servers to advertise themselves to the upstream switches. To do this, we use the LLDPAD package available in the @base CentOS repo.
The next step is to create a Puppet module/manifest to:
- Install the LLDPAD RPM from YUM
- Start the LLDPAD service
- Ensure that the LLDPAD service is set to autostart at boot
- Configure eth0 and eth1 to broadcast their LLDP status to the upstream switches
- Ensure that it only runs once, not every time puppet agent runs
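A minimal sketch of what that manifest could look like (the class name and flag-file path are hypothetical; `lldptool set-lldp ... adminStatus=rxtx` is the standard LLDPAD command for enabling LLDP on an interface):

```puppet
# Hypothetical layout: modules/lldpad/manifests/init.pp
class lldpad {

  # Install the LLDPAD RPM from the @base repo
  package { 'lldpad':
    ensure => installed,
  }

  # Start the service and enable it at boot
  service { 'lldpad':
    ensure  => running,
    enable  => true,
    require => Package['lldpad'],
  }

  # Enable LLDP transmit/receive on the 10Gb interfaces.
  # The flag file plus 'creates' makes this run once,
  # not on every puppet agent run.
  exec { 'enable-lldp-eth0-eth1':
    command => '/usr/sbin/lldptool set-lldp -i eth0 adminStatus=rxtx && /usr/sbin/lldptool set-lldp -i eth1 adminStatus=rxtx && /bin/touch /etc/lldpad.configured',
    creates => '/etc/lldpad.configured',
    require => Service['lldpad'],
  }
}
```

The `creates` guard is what satisfies the run-once requirement: once the flag file exists, the exec is skipped on subsequent agent runs.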
Continue reading “Using Puppet to configure LLDPAD on the Open Compute Winterfell”
Configuring Glance, Cinder and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you’ve already got GlusterFS up and running.
So let’s first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for each of Cinder, Glance and Nova.
[root@g1 ~(keystone_admin)]# gluster volume info
Volume Name: cinder
Volume ID: d71d0ab7-2c99-41c5-8495-fd68d1571f31
Continue reading "Configuring OpenStack Havana Cinder, Nova and Glance to run on GlusterFS"
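For reference, wiring Cinder up to that volume in Havana mostly comes down to pointing the GlusterFS driver at a shares file. The hostname `g1` matches the Gluster node shown above; the rest is a sketch of the relevant settings, not my full config:

```ini
# /etc/cinder/cinder.conf (Havana) -- relevant lines only
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
```

```ini
# /etc/cinder/shares.conf -- one Gluster share per line
g1:/cinder
```

Glance and Nova don’t need a special driver: you simply mount their Gluster volumes at `/var/lib/glance/images` and `/var/lib/nova/instances` respectively.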
The OpenCompute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10Gb NICs and a boatload of locally attached cheap storage!
In preparation for deploying RedHat RDO on RHEL, the distributed filesystem I chose was GlusterFS. It’s simple to deploy and only takes a couple of minutes to get up and running.
The first thing I did was configure my local 10Gb interfaces for heartbeat traffic. To do that, I created a sub-interface on VLAN 401 on each node, using 10.124.1.0/24 addressing. Continue reading “Installing GlusterFS on RHEL 6.4 for OpenStack Havana (RDO)”
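On RHEL 6 that sub-interface is just an ifcfg file per node. Here’s a sketch for the first node (the `.10` host octet is illustrative; each node gets its own address from 10.124.1.0/24):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0.401
DEVICE=eth0.401
VLAN=yes
BOOTPROTO=static
IPADDR=10.124.1.10
NETMASK=255.255.255.0
ONBOOT=yes
```

A `service network restart` (or `ifup eth0.401`) brings the tagged interface up, and the Gluster peers can then heartbeat over the dedicated VLAN.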