Using ZFS on Linux is an attractive solution for a high-performance NFS server due to several key factors:

  • Cost: commodity hardware with free software
  • Simplicity: quick to install and easy to configure and manage
  • Flexibility: ZFS offers a plethora of options for your filesystem needs

In this case, I installed ZFS on CentOS 6.4, available here:

The hardware I used was an HP DL370 G6 with 11 3 TB disks dedicated to ZFS.

After updating the system (yum -y update), the next step is to install ZFS. I followed the instructions here:
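The installation commands are missing from the post. At the time, installing ZFS on Linux on CentOS 6.x typically meant adding the zfsonlinux repository and installing via yum; the repository URL below is the historical one and is an assumption, so verify it before use:

```shell
# Add the ZFS on Linux repository package for EL6 (historical URL; verify first)
sudo yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm

# Install ZFS and its dependencies (builds the spl/zfs kernel modules via DKMS)
sudo yum install -y zfs
```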

Next, load the ZFS module (drivers) with the following command:
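The command itself was lost from the post; loading the ZFS kernel module is normally done with modprobe:

```shell
# Load the ZFS kernel module (automatically pulls in spl and its dependencies)
sudo modprobe zfs
```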

Now that you’ve installed the ZFS driver, let’s make sure it loaded correctly with the following command:
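The verification command is missing; it was presumably lsmod filtered for the ZFS-related modules:

```shell
# List loaded kernel modules and filter for the ZFS stack
lsmod | grep -E 'zfs|spl'
```

You should see entries such as zfs, zcommon, znvpair, zavl, zunicode, and spl.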

The output should show the loaded ZFS modules as below:

ZFS Modules Loaded


Now I want to create my ZFS array; to do that I need to find the device IDs of my hard drives. Running the command:
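The command was stripped from the post; one common way to enumerate the disks (an assumption about the author's exact approach) is:

```shell
# List all disks and their sizes to pick out the 3 TB data drives
fdisk -l 2>/dev/null | grep '^Disk /dev/sd'
```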

gives me the list of the 3 TB drives that I’m going to use for the ZFS array. The output looks like:

So now that I have the device IDs, let’s create the array using the ZFS RAIDZ RAID type.

RAIDZ is very popular among many users because it gives you a good tradeoff between protection from hardware failure and usable storage. It is similar to RAID5, but without the write-hole penalty that RAID5 suffers. The drawback is that for random reads, a RAIDZ vdev delivers roughly the IOPS of a single drive, since each block (and its parity) is striped across all drives in the vdev. This causes slowdowns when doing random reads of small chunks of data, which is why RAIDZ is especially popular for storage archives where data is written once and accessed infrequently.

Here we create the array named nfspool in RAIDZ format, using devices sda through sdl:
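The pool-creation command is missing; based on the description, it was presumably along these lines (the sda..sdl device range is taken from the post):

```shell
# Create a single RAIDZ vdev named nfspool from the data disks sda..sdl
sudo zpool create nfspool raidz sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl

# Confirm the pool layout and health
sudo zpool status nfspool
```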

Then we go ahead and create a filesystem on top of the array using:
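The command is missing; creating a dataset likely looked like this (the dataset name nfs is an assumption):

```shell
# Create a dataset inside the pool; it mounts automatically at /nfspool/nfs
sudo zfs create nfspool/nfs
```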

We then set the filesystem permissions for NFS:
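The permissions command was stripped; it presumably opened up the dataset's mountpoint for NFS clients (the mode shown is an assumption, and you would want something tighter in production):

```shell
# Allow NFS clients to write to the export's mountpoint
sudo chmod 777 /nfspool/nfs
```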

Let’s now share the ZFS filesystem using NFS (built in to the filesystem!):
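ZFS's built-in NFS sharing is controlled by the sharenfs property; the exact options the author used are unknown, but the basic form is:

```shell
# Enable NFS sharing on the dataset (read-write for all hosts;
# restrict by subnet, e.g. sharenfs='rw=@10.0.0.0/24', in practice)
sudo zfs set sharenfs=on nfspool/nfs
```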

Now let’s start the ZFS NFS share using:
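The command is missing; on CentOS 6 this would mean making sure the NFS services are running and then publishing the ZFS shares:

```shell
# Start the supporting NFS services, then publish all ZFS-managed shares
sudo service rpcbind start
sudo service nfs start
sudo zfs share -a
```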

While you’re at it, go ahead and copy that same command into /etc/rc.local so the share comes back after a reboot.

Now for one minor performance tune!
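The tune itself is missing from the post. A common (and risky) tweak for ESXi NFS datastores, where every write arrives as a synchronous write, is disabling sync on the dataset; this is an assumption about what the author did, and it trades crash safety for speed:

```shell
# Treat synchronous writes as asynchronous -- much faster for ESXi over NFS,
# but data in flight can be lost on power failure. A dedicated SLOG device
# is the safer way to get sync-write performance.
sudo zfs set sync=disabled nfspool/nfs
```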

You should now be able to mount the NFS export from ESXi and write to it!

Using tcptrack, I can then monitor my 10 Gb bond0 interface to see the transfer rates.
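The invocation isn't shown; tcptrack is pointed at an interface like so:

```shell
# Watch per-connection throughput on the bonded 10 Gb interface
sudo tcptrack -i bond0
```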

TCPtrack bond0


Gotta love the creator of tcptrack; clearly a Spaceballs fan.




Eric Sarakaitis

Virtualization Engineer at CBTS
I'm Eric, I love to cook, sing, garden and enjoy cold beverages!