Don’t you just hate it when someone beats you to the blog post you intended to write? Following on from Vaughn Stewart’s Pure blog (Comparing and Contrasting All Flash Arrays), Calvin Zito (HP, Around the Storage Block Blog) has updated the sheet and added the HP 3PAR 7450. Obviously, it’s Calvin’s job to show HP products in a good light, so kudos to him for producing the updated version. However, I want to look at the bigger picture, show as many vendors as possible and expand on the figures being shown. So here it is, my extended summary of all-flash arrays.
First of all, let’s qualify what’s meant by all-flash. Version 1 all-flash arrays basically placed flash devices into existing hardware. I’m personally excluding these from the data and only including products that have been specifically designed or modified for flash. In the examples chosen, the HP 3PAR 7450 was specifically modified (both hardware and software), as was the Hitachi HUS VM (FMD hardware and software changes). Products like VMAX and VNX aren’t included as they don’t meet this criterion.
Next, I’ve extended the metrics being recorded. I’ve added a section for performance (glaringly omitted from Vaughn’s sheet), plus some other categories, including those Calvin added. I’ve also indicated whether the platform is a scale-out architecture, which will be important going forward.
As you can see, this is a work in progress and I have lots of gaps. There are a number of issues here:
- This data takes a long time to trawl for and interpret. Vendors don’t use the same terminology.
- Some references are textual in product descriptions and not explicitly placed into data sheets.
- Some data is questionable.
What do I mean by questionable? Well, there’s always a difference between read & write latency and between read & write IOPS, but some vendors don’t split out these two values. Some figures, like IOPS, are based on a single measurement, for example 4KB blocks. Others, like bandwidth, are based on 1MB blocks and don’t make it clear whether this is sequential or random throughput. Not all vendors show before/after RAID in their figures (presumably because they are variable). Nimbus Data’s latency figures seem remarkably good compared to others on the market. NetApp don’t even provide latency figures and only say “sub-millisecond”, which has to be taken as meaning 1 millisecond.
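To illustrate why figures quoted at different block sizes can’t be compared directly, here is a minimal sketch (the function name and figures are my own illustrative assumptions, not any vendor’s published numbers): IOPS multiplied by block size gives implied throughput, so a headline 4KB IOPS number and a headline 1MB bandwidth number describe quite different workloads.

```python
def iops_to_throughput_mb_s(iops, block_size_kb):
    """Throughput in MB/s implied by an IOPS figure at one block size."""
    return iops * block_size_kb / 1024

# Hypothetical figures for illustration only:
# 500,000 IOPS at 4KB blocks implies roughly 1.9 GB/s of throughput...
small_block_mb_s = iops_to_throughput_mb_s(500_000, 4)

# ...while a quoted 5 GB/s bandwidth figure at 1MB blocks corresponds
# to only ~5,000 I/Os per second. The two headline numbers measure
# different workloads, so neither can be read across to the other.
large_block_iops = (5 * 1024) / 1  # MB/s divided by MB per I/O

print(f"4KB workload:  {small_block_mb_s:.0f} MB/s implied")
print(f"1MB workload:  {large_block_iops:.0f} IOPS implied")
```

This is also why the sequential-versus-random question matters: a 1MB sequential stream and a 4KB random mix stress an array in entirely different ways.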
What’s the point of this kind of comparison sheet if the data isn’t consistent? Well, having everything in one table makes the outliers easier to spot, which in turn allows customers to question vendors on how their figures are calculated. Presenting as much data as possible means potential customers can match up their requirements with product offerings. It’s all about the requirements. A single table, even with an imperfect baseline for comparison, is a good thing.
The Architect’s View
This is a version 1 “work in progress” spreadsheet, taking data from existing public online sources. I would welcome corrections or additions from any vendor, with the only caveat being that the data must be publicly available (please provide a link), so anyone reading this blog can check its validity. If you want to reference this chart, I provide no guarantees of accuracy, but you may reproduce it as long as you reference the source. I’ve noticed a few sites reproducing some of my graphics without permission.
- Comparing and Contrasting All-Flash Arrays (Virtual Storage Guy Blog, 3 December 2013)
- Comparing and Contrasting All-Flash Arrays, including HP 3PAR StoreServ (Around The Storage Block, 4 December 2013)
Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2013 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.