AB of Gluster fame is off working on another kick-butt storage project. Again it's in the Software Defined Storage realm; this time it's called Minio, a play on the minimal-I/O phrase/mindset. Written in Go, the focus is a simple, easy-to-deploy, 100% S3-compatible, object-based storage platform.


I talked about this project here, nearly two years ago, when it was just getting off the ground. Today, you've got a full-blown storage server along with a full-blown client for interacting with the server and other S3-compatible services!

Some of the features are:

  • Written in Go, so it's super easy to update and develop in or against.
  • Native integrated replication.
  • 100% Amazon S3 compatible.
  • Erasure coding and bitrot protection.
  • No need for RAID.
  • Platform agnostic.
  • Already containerized for Docker.
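Getting a test server running really is a one-binary affair; a minimal sketch (the download URL reflects the project's Linux amd64 build at the time, and the data path is an assumption, so check their GitHub for the current release location):

```shell
# Grab the standalone server binary (download location may have changed;
# see the project's GitHub releases)
wget https://dl.minio.io/server/minio/release/linux-amd64/minio
chmod +x minio

# Serve the contents of /data over the S3 API on the default port (9000)
./minio server /data
```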


The Minio project is inspired by Amazon’s S3 for its API and Facebook’s Haystack for its immutable data structure.



You can track, check out, and even contribute to the project at their GitHub. Like Hadoop, expensive RAID controllers are not needed; instead, Minio uses Rubberband Erasure Coding to dynamically protect the data.

Simplification of installation, configuration, updates and management are some of the key features. All being developed by seasoned storage veterans.

Object based storage is quickly becoming a much sought after solution in many IT organizations. Minio is definitely something to keep an eye on!

Configuring Glance, Cinder, and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you've already got GlusterFS up and running.

So let's first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for each of Cinder, Glance, and Nova.
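For reference, volumes like these can be created with the gluster CLI; a sketch assuming two hypothetical nodes (node1, node2) with bricks under /bricks:

```shell
# Create one replicated volume per OpenStack service
# (hostnames and brick paths are hypothetical)
gluster volume create cinder replica 2 node1:/bricks/cinder node2:/bricks/cinder
gluster volume create glance replica 2 node1:/bricks/glance node2:/bricks/glance
gluster volume create nova   replica 2 node1:/bricks/nova   node2:/bricks/nova

# Start the volumes and confirm the layout
gluster volume start cinder
gluster volume start glance
gluster volume start nova
gluster volume info
```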

And I have each of these filesystems mounted.
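The Glance and Nova mounts can be made persistent in /etc/fstab; a sketch assuming the hypothetical node1 host from above (the Cinder share doesn't need an entry here, since the Havana GlusterFS driver mounts it itself):

```shell
# /etc/fstab entries (server and volume names are hypothetical);
# _netdev delays mounting until the network is up
node1:/glance  /var/lib/glance/images   glusterfs  defaults,_netdev  0 0
node1:/nova    /var/lib/nova/instances  glusterfs  defaults,_netdev  0 0
```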

Now that I have OpenStack Cinder, Nova, and Glance installed, I can configure them to use my Gluster mounts.

Modify /etc/cinder/cinder.conf to reflect the Gluster configuration.
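For Havana's GlusterFS driver, the relevant cinder.conf settings look something like this (the shares file and mount base shown are the usual defaults; adjust to your layout):

```shell
# /etc/cinder/cinder.conf
# Use the GlusterFS volume driver and point it at the shares file
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/mnt
```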

Also make sure that /etc/cinder/shares.conf has the Gluster share listed.
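The shares file is just one Gluster share per line; a sketch with a hypothetical host and volume name:

```shell
# /etc/cinder/shares.conf -- one share per line
# (hostname and volume name are hypothetical)
node1:/cinder
```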

Then, create the images folder for Glance.

Then, modify the file permissions so that it's usable.
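The two steps above boil down to something like this, assuming the default RDO images path and the glance service user:

```shell
# Create the images directory on the Gluster mount and hand it to Glance
mkdir -p /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images
```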

Modify the glance configuration to reflect the Gluster mount points in /etc/glance/glance-api.conf
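Assuming the Gluster volume is mounted at the default images path, the only change needed is the filesystem store location:

```shell
# /etc/glance/glance-api.conf
# Point the filesystem store at the Gluster-backed directory
filesystem_store_datadir = /var/lib/glance/images/
```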

Restart Glance Services
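On a RHEL/RDO install, the Glance services are typically restarted like so:

```shell
# Bounce both Glance daemons to pick up the new config
service openstack-glance-api restart
service openstack-glance-registry restart
```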

Create nova folder structure

Then, modify the file permissions so that it's usable.
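As with Glance, those two steps are a mkdir and a chown, assuming the default instances path and the nova service user:

```shell
# Create the instance store on the Gluster mount and hand it to Nova
mkdir -p /var/lib/nova/instances
chown -R nova:nova /var/lib/nova/instances
```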

Modify nova config in /etc/nova/nova.conf to reflect the Gluster mount points.
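Assuming the default instances path from above, the relevant setting is just:

```shell
# /etc/nova/nova.conf
# Store instance disks on the Gluster-backed directory
instances_path = /var/lib/nova/instances
```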

Restart nova
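Again using the RHEL/RDO service names, that's something like:

```shell
# Restart the core Nova daemons to pick up the new instances path
service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-compute restart
```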

Verify OpenStack services
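On RDO, openstack-status (from the openstack-utils package) gives a quick health summary; the individual services can also be checked directly:

```shell
# One-shot summary of all OpenStack service states (openstack-utils package)
openstack-status

# Or poke at the services individually
nova-manage service list
cinder list
glance image-list
```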

You should now see the GlusterFS share mounted for Cinder when issuing the mount command:

Note: on /var/lib/cinder/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

The OpenCompute systems are the ideal hardware platform for distributed filesystems. Period. Why? Cheap servers with 10GbE NICs and a boatload of locally attached, cheap storage!

In preparation for deploying Red Hat RDO on RHEL, the distributed filesystem I chose was GlusterFS. It's simple and easy to deploy, and it only takes a couple of minutes to have it up and running.

The first thing I did was configure my local 10GbE interfaces for heartbeat traffic. To do that, I created a sub-interface on VLAN 401 for each node. In this case I used addressing.
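On RHEL, a tagged sub-interface is just another ifcfg file; a sketch for VLAN 401 on a hypothetical eth2 parent with a made-up heartbeat address:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth2.401
# (parent interface and IP address are hypothetical)
DEVICE=eth2.401
VLAN=yes
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.40.1.11
NETMASK=255.255.255.0
```

With one of these per node (each with its own IPADDR), a `service network restart` brings the heartbeat network up.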