PernixData and Dell – first test results

Before I go too deep into the layout of this benchmark, let me say that Frank Denneman has come out with some great articles on testing SSDs. I highly recommend reading his posts to understand how to benchmark hardware and how to interpret the results. To see what PernixData is all about, check out this post from Jason Nash.

I’ve been testing PernixData with Dell hardware this week, trying to find the ceiling on local SSD drives first. My plan is to test what I can place as close to the hypervisor as possible (within a blade). That’s right, I’m testing a Dell M620 blade solution with Dell Compellent storage on FC. Add-on PCIe cards are more common for rack-mount servers in SSD solutions, but I want to see what kind of performance I can get out of a blade system with SSD drives on a PERC controller. I will be testing SLC SSD drives (Toshiba MK4001GRZB) attached to the local PERC H710 controller. PernixData has a great set of documents for configuring disk controllers.

I am not using the H710P controller, which offers FastPath: I/O bypasses the controller cache and is committed directly to the physical disk from host RAM through the controller's dual-core ROC processor. It sounds a little like EMC XtremIO, but on a much smaller scale. CTIO and FastPath provide enhanced performance benefits for SSD volumes. It is important to remember that if you are working with multiple drives on a RAID controller and JBOD is not an option, you should configure each disk as its own RAID 0 volume rather than grouping them into a single RAID 0 (although grouping can be done to take advantage of the performance of both drives at once).

The tests I ran involved 5 VMs running IOMeter: 4K transfers, 100% reads, against a 30GB file. The queue depth is the VMware default of 64. Of course, all workloads are different; not all applications are built the same. If you are looking to test something like SQL, I recommend Benchmark Factory from Quest (Dell). You can record a production workload and replay it on the test platform to see how a solution like this would behave in your environment. The purpose of my test is simply to find out how many IOPS I can get out of the solution; I would not recommend relying on a tool like IOMeter to benchmark for production.

Make sure your guest VM has a separate paravirtual SCSI controller for the data drive you are testing. Also make sure everything along the storage fabric is tuned for best performance: the server BIOS, PERC controller, HBA cards, fiber switches, fiber interconnects, and storage controllers.

Compellent Disk configurations in VMware

My first test was with Write Back. These results were better, of course, but only by 10K or so IOPS. I saw as high as 150K IOPS for the FVP cluster, but it usually stayed around 120K IOPS.

PD Cluster level performance 01 post 1-5 upgrade (Write Back)

PD Cluster level performance 02 post 1-5 upgrade (Write Back)

My second test was with Write Through, which is my preferred mode since the data is written to the datastore at the same time. You can see that IOPS came in just under 120K. Still not bad! The dip in this chart is from me starting up another VM with the same test.
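As a sanity check, the throughput and queue depth imply a latency figure via Little's Law (outstanding I/Os = IOPS × average latency). A quick sketch using the numbers from this test; the formula itself is general:

```python
# Little's Law for storage queues: outstanding_ios = iops * latency_seconds
vms = 5
queue_depth = 64                   # default VMware device queue depth used in the test
outstanding = vms * queue_depth    # 320 I/Os in flight across the cluster

observed_iops = 120_000            # approximate FVP cluster total (Write Through)

# Implied average latency per I/O, in milliseconds
latency_ms = outstanding / observed_iops * 1000
print(f"{latency_ms:.2f} ms")      # ~2.67 ms average service time
```

Roughly 2.7 ms per I/O at queue depth 64 is what keeps 120K IOPS sustained; if the flash served I/Os faster, the same queue depth would push the IOPS number higher.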

PD Cluster level performance 01 post 1-5 upgrade (Write thru)

PD Cluster level performance 02 post 1-5 upgrade (Write thru)


You can see what happened with my Compellent storage on the back end during the same tests:

PD Compellent volume last day perf (Write Back)

PD Compellent SSD last day perf (Write Back)

PD Compellent 15K last day perf (Write Back)

PD Compellent 7K last day perf (Write Back)

All I can say is: holy cow! SSDs sure do give great performance when they are closer to the server! I do start to wonder what running at a constant rate like this does to the life cycle of the drives. But like I said, every workload is different. I saw as high as 60K IOPS per SSD in the Dell M620 blades. Would I call this first hardware test an enterprise solution? Perhaps; it is definitely cost effective! It depends on your level of comfort with the hardware and your use case.
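On the drive life-cycle question, a back-of-envelope sketch helps. My test was 100% reads, so it didn't wear anything, but if a write-heavy workload sustained rates like these, the math looks roughly like this (the endurance rating below is a placeholder, not the real MK4001GRZB figure; check the vendor spec sheet):

```python
# Rough SSD wear estimate for a hypothetical sustained 4K write workload.
# NOTE: rated_endurance_tb is a placeholder, not Toshiba's actual
# MK4001GRZB rating -- consult the spec sheet for the real number.
io_size_bytes = 4 * 1024
sustained_write_iops = 60_000            # peak per-SSD rate seen in my (read) test
seconds_per_day = 86_400

bytes_per_day = io_size_bytes * sustained_write_iops * seconds_per_day
tb_per_day = bytes_per_day / 1e12
print(f"{tb_per_day:.1f} TB written per day")   # ~21.2 TB/day

rated_endurance_tb = 10_000              # hypothetical total-bytes-written rating
days_to_wear_out = rated_endurance_tb / tb_per_day
print(f"~{days_to_wear_out:.0f} days at that rate")
```

The point isn't the exact numbers; it's that a cache layer driving flash at full tilt can burn through an endurance budget far faster than a typical datastore workload would, which is why the per-workload question matters.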

Working with the PernixData software is easy! It is very simple to install and manage, and a breeze to remove when you are done with a POC. If you are working with iSCSI, you will need to adjust your path selection policies after it is removed. You can also use the software without any SSDs to see what type of performance you are getting from your datastore. PernixData FVP works with block storage protocols today (FC, iSCSI, and FCoE) and will soon support NFS. FVP uses server-side flash (SSDs or PCIe cards) to increase storage performance in vSphere environments.


My next tests? I think they will involve the Dell M620 blade with a PCIe flash card to see what results I can get from that with PernixData. Dell is really pushing me to use FluidCache, but that is something I will get to down the road.
