
NexentaStor ZFS Performance Testing

[Image: Seagate ST9500620SS hard drives]

Over the last couple of weeks I have been testing SAN appliance software called NexentaStor. The software builds on parts of the Sun Solaris operating system to provide a feature-rich, high-performance storage system. The underlying ZFS file system can do some pretty neat things and looks to be an excellent contender for replacing costly hardware-RAID-driven setups.

My goal was a SAN capable of serving iSCSI LUNs to my VMware vSphere virtual infrastructure with respectable performance for under $6000. Commercial SAN products with a comparable feature set and a minimal amount of drive space seem to start at around $10k and only go up sharply from there. The NexentaStor platform, in theory, should allow me to build my own system with specifications best suited to my environment at a lower upfront cost than commercial products.

I had the opportunity to test 2 different drive setups in the machine before production deployment. The NexentaStor machine has 4 gigE ports to the switch, configured as 2 sets of aggregated ports (2 ports each). There are 2 gigE ports at the VMware host level, set up to do round-robin multipathing between the available paths. To ensure a good distribution of bandwidth over the links, I set the I/O Operation Limit for the LUNs being tested to 3. There is an excellent article explaining this, along with lots of other great information about VMware iSCSI multipathing, at http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html. On with the benchmarks!
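
For reference, this is roughly how the path policy and I/O Operation Limit can be set per LUN from the vSphere 4.x command line. It is only a sketch: the naa device ID below is a placeholder for the actual LUN, and newer ESXi releases move these commands under "esxcli storage nmp".

# switch the LUN to round robin path selection, then drop the I/O operation limit to 3
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 3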

The NexentaStor Hardware:

In NexentaStor I am using 2 different pools (volumes) of disks for comparison. The first pool (pool1) tests 8 x Fujitsu MHZ2160BK-G2 160GB 7200RPM SATA drives in various RAIDZ configurations. The same is done for the second pool (pool2) using 8 x Seagate ST9500620SS 500GB 7200RPM SAS drives (which are intended to be the production drives). Each pool has autoexpand and deduplication turned off, and sync is set to standard. All tests were done using the iozone command:

iozone -ec -r 32 -s 32768m -l 2 -i 0 -i 1 -i 8
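
For anyone not familiar with iozone, roughly what those switches do (per the iozone documentation):

-e          include flush (fsync/fflush) in the timings
-c          include close() in the timings
-r 32       use a 32 KB record size
-s 32768m   use a 32768 MB (32 GB) file per process
-l 2        throughput mode with a lower limit of 2 processes
-i 0        run the write/rewrite test
-i 1        run the read/re-read test
-i 8        run the mixed workload test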

The first tests are run at the NexentaStor host level from the NMC to get an idea of how the overall ZFS pool performs. For the VM tests, we configure a zvol as an iSCSI LUN and set up multipathing aggregated over a total of 2 gigE ports to VMware ESXi. However, a single thread will still only be able to utilize one link at a time (which the tests expose).
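
NexentaStor handles the LUN setup through NMC/NMV, but underneath it is the standard COMSTAR stack from OpenSolaris. A rough sketch of the equivalent manual steps (the zvol name, size, and GUID are placeholders):

# create a zvol on the pool and register it as a COMSTAR logical unit
zfs create -V 200G pool2/vm-lun
sbdadm create-lu /dev/zvol/rdsk/pool2/vm-lun

# make the logical unit visible to initiators and create an iSCSI target
stmfadm add-view <lu-guid-reported-by-sbdadm>
itadm create-target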

Test numbers (the corresponding zpool layouts for tests 2 and 4 are sketched after the list):
  1. 7 x Fujitsu MHZ2160BK-G2 in RAIDZ2.
  2. 7 x Seagate ST9500620SS in RAIDZ2.
  3. 8 x Fujitsu MHZ2160BK-G2 (2 groups x 4 drive RAIDZ1).
  4. 8 x Seagate ST9500620SS (2 groups x 4 drive RAIDZ1).
  5. 4 x Western Digital WD1500HLFS (4 drive RAID10).
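
For anyone recreating these layouts, the ZFS side of tests 2 and 4 looks roughly like this (a sketch; the cXtYd0 device names are placeholders, and the NexentaStor NMC wraps the same operations):

# test 2: seven drives in a single double-parity RAIDZ2 vdev
zpool create pool2 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

# test 4: eight drives as two 4-drive RAIDZ1 vdevs striped together
zpool create pool2 raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 raidz1 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# properties used for every pool in these tests
zpool set autoexpand=off pool2
zfs set dedup=off pool2
zfs set sync=standard pool2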

Interesting to note here are the low Mixed Workload numbers and the fact that the initial write outperforms the rewrite. Rewriting the same data should perform slightly better, since the filesystem metadata is already in place. Next, the same tests were run on a VM using iSCSI. Here, the theoretical max performance for the VM should be 100 MB/s (a single gigE link).

Again we notice the initial write outperforming the rewrite for test 1. I reran tests 1 and 3 multiple times and the results were the same. The results are closer to what we would expect over a gigE link, and the newer Seagate drives show only a slight lead in this case. The Mixed Workload numbers are still very low for what I would consider real-world performance. For comparison, I included a VM test on the local RAID drives in the host (4 x Western Digital VelociRaptors in RAID10). The local VM read and write tests can push more single-thread bandwidth, but the Mixed Workload performance is still worse than in the iSCSI tests.

While the Mixed Workload numbers are all low, they still show the relative performance of each configuration. Here is a graph of just the Mixed Workload tests.

The local VM test still jumps ahead here, but VM performance for test 4 still seems acceptable compared to the other configurations. I am still learning how to interpret these results and may do additional testing with jumbo frames, different types of aggregation, multiple iozone processes, and maybe even a different performance testing tool. These tests should give a good point of comparison for others running iozone benchmarks on NexentaStor.