In a previous post, I wrote about how improvements in storage performance have been achieved through changes to software.  Vendors like DataCore and Nutanix have implemented improvements that are in some cases quite remarkable.  As the storage market diverges into dozens of vendor platforms, how can end users validate the claims vendors are making?  Verifying vendor performance claims has become more pressing with the development of all-flash systems, where vendors are aiming for the two magical numbers of <1ms response time and 1 million IOPS.  Unfortunately, even these simple metrics aren’t directly comparable, as vendors use block sizes in their measurements that suit the architecture of their own hardware platform – for example, Pure Storage quotes figures at 32KB, while most other vendors use 8KB or less.
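To see why block size matters, a back-of-envelope calculation helps: the same IOPS headline implies very different throughput depending on the block size behind it.  The short Python sketch below is purely illustrative – the figures and function name are my own, not taken from any vendor benchmark.

```python
# Illustrative only: convert an IOPS headline into throughput at a given block size.
def bandwidth_gbs(iops: int, block_size_kib: int) -> float:
    """Return approximate throughput in GB/s for a given IOPS rate and block size."""
    return iops * block_size_kib * 1024 / 1e9

# The same "1 million IOPS" claim moves four times as much data at 32KB as at 8KB.
for bs in (4, 8, 32):
    print(f"1M IOPS at {bs}KB blocks = {bandwidth_gbs(1_000_000, bs):.1f} GB/s")
```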

In recent months I’ve had a few briefings from Load DynamiX, a company that focuses on storage performance measurement and testing software.  Rather than just building a synthetic workload profile with tools like Iometer or fio, Load DynamiX allows customers to sample their existing workload (using the Workload Sensor appliance) and use that data as the profile for generating I/O against other storage hardware.  The idea of using dedicated appliances both to sample and to generate work to stress-test hardware may seem excessive; however, there are good reasons why this process makes sense:

  • Black Boxes – storage arrays act essentially like black boxes, with each responding differently to specific workloads.  This behaviour should be expected because vendors implement their technology in very different ways.  The same workload can give unexpected results with different products.
  • Synthetic workloads generated by Iometer and fio never quite reflect the variable, mixed I/O profiles seen in production systems and can only ever be an approximation of the real world (a sketch of such a fixed-profile test follows this list).
  • Modern storage arrays are very hard to push to their limits unless you have a lot of hardware, especially all-flash systems.
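To illustrate the second point, here is a minimal sketch of the kind of fixed-profile synthetic test that fio produces, driven from Python.  The target file, 70/30 read/write mix, 8KB block size and queue depth are illustrative assumptions on my part – the point is that every parameter is static, whereas a production workload varies them continuously.

```python
import subprocess

# A minimal sketch of a fixed-profile synthetic test using fio.
# The target file, 70/30 read/write mix, 8KB block size and queue depth of 32
# are illustrative assumptions; a real production workload shifts all of these
# over time, which a static job definition cannot capture.
fio_cmd = [
    "fio",
    "--name=mixed-approximation",
    "--filename=/tmp/fio-testfile",  # hypothetical target; use a dedicated test device in practice
    "--size=1G",
    "--rw=randrw",                   # random mixed I/O
    "--rwmixread=70",                # 70% reads, 30% writes
    "--bs=8k",                       # fixed block size, unlike real workloads
    "--iodepth=32",
    "--ioengine=libaio",
    "--direct=1",
    "--runtime=60",
    "--time_based",
    "--output-format=json",
    "--output=mixed-approximation.json",
]
subprocess.run(fio_cmd, check=True)
```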

I’ve had a quick play with Load DynamiX in their lab environment, running I/O workloads against traditional SAN and NAS platforms.  Compared to using Iometer or fio, the web-based interface makes it much easier to generate test cases that can be run against a variety of hardware platforms.  The tool also produces some interesting reports that go deeper than the data available from Iometer/fio.  Having automated graphical reporting in place makes the process of running tests much easier than using Iometer – which after a few weeks of use turns you into an Excel spreadsheet expert.
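For comparison, here is a rough sketch of the manual route: pulling headline figures out of fio’s JSON output with a few lines of Python before the inevitable trip to Excel.  The filename is assumed (it matches the sketch above), and the field names follow recent fio versions, so treat this as illustrative rather than definitive.

```python
import json

# Illustrative sketch: summarise headline figures from an fio JSON results file.
# The filename is an assumption; field names (iops, bw, clat_ns) follow recent
# fio releases and may differ in older versions that report latency in usec.
with open("mixed-approximation.json") as f:
    results = json.load(f)

for job in results["jobs"]:
    for direction in ("read", "write"):
        stats = job[direction]
        if stats["iops"] == 0:
            continue
        latency_ms = stats["clat_ns"]["mean"] / 1e6   # mean completion latency
        print(f"{job['jobname']} {direction}: "
              f"{stats['iops']:.0f} IOPS, "
              f"{stats['bw'] / 1024:.1f} MiB/s, "      # bw is reported in KiB/s
              f"mean latency {latency_ms:.2f} ms")
```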

Although we tend to think of using performance tools only on traditional arrays, another area that would be interesting to investigate is the performance of open-source solutions such as Ceph and Swift, and of “software-defined” solutions more generally.  The same applies to hyper-converged offerings.  We’ve seen lots of talk about the performance of VSAN, for example.  This white paper shows five tests using Iometer, three of which (all read, sequential read, sequential write) are pointless as they don’t reflect real-world configurations.

The Architect’s View

Load DynamiX’s workload capture and replay tools are pretty heavyweight and won’t be appropriate for every customer.  However, enterprises that run labs and need to validate multiple vendor claims on a regular basis will get significant value from the solution.  For end users whose operations depend on storage performance, the Load DynamiX features can be used for more than just new product testing.  Imagine, for example, wanting to validate the impact of a software/microcode upgrade on hardware.  Anyone who has used enterprise-class arrays will know that software updates can themselves affect performance.
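To make that concrete, the sketch below compares mean read latency from the same replayed workload captured before and after an upgrade.  This is not the Load DynamiX product itself – the filenames, field names and 10% regression threshold are all assumptions for illustration – but it shows the kind of check an upgrade validation run would perform.

```python
import json

# Hypothetical before/after check on the same replayed workload, run once before
# and once after a software/microcode upgrade. Filenames and the 10% regression
# threshold are illustrative assumptions, not taken from any product.
def mean_read_latency_ms(path: str) -> float:
    with open(path) as f:
        job = json.load(f)["jobs"][0]
    return job["read"]["clat_ns"]["mean"] / 1e6

before = mean_read_latency_ms("baseline-run.json")
after = mean_read_latency_ms("post-upgrade-run.json")
change_pct = (after - before) / before * 100

print(f"Mean read latency: {before:.2f} ms -> {after:.2f} ms ({change_pct:+.1f}%)")
if change_pct > 10:
    print("Warning: read latency regression exceeds the 10% threshold")
```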

One place where the solution could be very interesting is in POC testing by VARs for customers who can’t afford their own lab setup.  Prospective customers could bring a sample of their workload profile and run it against vendor-recommended configurations.  As an end user, I’d be more comfortable knowing I’d been able to test a vendor’s claims more fully before committing to an expensive storage spend.

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2016 – Chris M Evans, first published on, do not reproduce without permission.



Written by Chris Evans