It’s interesting reading the comments that were added to my post of last week discussing the summary of all-flash arrays, initially started by Vaughn Stewart and continued by HP storage blogger Calvin Zito.
Unsurprisingly, most of the negativity came from EMC and their resellers, who are defending a late entrance into the market. Their comments have focused on the technical excellence of their solution, rather than on what customers actually want. Let’s move on from that; in the five years since flash became part of mainstream enterprise storage, the picture has evolved. How have things changed?
The first wave of solutions targeted tricky-to-fix applications such as high-performance Oracle databases, where low latency and/or high IOPS were needed. These were niche requirements and played well into solutions that added flash to existing products. EMC had success with DMX/VMAX, and their solution improved once block-level tiering was available. NetApp, due to their architecture, has achieved better results with caching, initially with PAM cards and later with SSD caching layers. Violin Memory was initially successful because they could fix problems with applications that couldn’t be solved in any other way than buying lots of disks and short-stroking them. Features such as dedupe and replication were less important or could be mitigated in other ways.
Wave 2 of flash deployments has seen a widening and maturing of flash usage. Customers are running higher-density workloads than ever before, thanks to faster processors and the widespread use of virtualisation. Processor performance continues to track Moore’s Law and transistor scaling. Consequently, high-density apps need high-density IOPS. While disk capacities have increased, IOPS/TB as a ratio has dropped and will continue to do so. Quite simply, this means flash will become more mainstream in enterprise solutions as hard drives fail to deliver on increased performance needs.
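To see why the IOPS/TB ratio keeps falling, a rough back-of-the-envelope calculation helps. The figures below are illustrative assumptions, not vendor specifications: a spinning drive delivers roughly the same random IOPS per spindle regardless of how large it gets, so doubling capacity halves IOPS density.

```python
# Illustrative sketch of IOPS density (IOPS per terabyte).
# All device figures here are assumed round numbers for illustration,
# not measurements of any specific product.

def iops_per_tb(iops_per_device: float, capacity_tb: float) -> float:
    """Return the IOPS density of a single device in IOPS per terabyte."""
    return iops_per_device / capacity_tb

# A hard drive's random IOPS are bounded by seek and rotation, so assume
# ~180 IOPS per spindle at every capacity point. As capacity grows,
# density falls in proportion.
for capacity in (0.3, 0.6, 1.2, 4.0):
    print(f"{capacity:4.1f} TB HDD: {iops_per_tb(180, capacity):8.1f} IOPS/TB")

# A flash device (assumed 50,000 random IOPS at 1 TB) for comparison:
print(f" 1.0 TB SSD: {iops_per_tb(50_000, 1.0):8.1f} IOPS/TB")
```

The point of the sketch is the trend, not the absolute numbers: each step up in hard drive capacity divides the density, while even a conservative flash assumption sits orders of magnitude above any spindle.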
There are two ways to go at this point: pick an all-flash solution that has been designed for the purpose, or look at the evolved solutions from existing vendors. The all-flash vendors such as SolidFire, Whiptail, Kaminario and Pure Storage offer viable solutions, each with slightly different use cases. The established vendors have taken two routes: either adding flash to existing products, or acquiring/building new ones. EMC, IBM and NetApp have acquired technology, with NetApp using their acquisition as a stopgap while they develop a brand-new platform from scratch. HP and Hitachi have chosen to amend their existing product lines to support flash and so retain the benefits of existing features such as replication, management and data mobility.
The Architect’s View
So what is the right product to choose? Well, there is no pure definition of an all-flash array; it isn’t a unique category in its own right. It really comes down to customer requirements. For example, sometimes integration into existing infrastructure may be paramount, and if you’re already a 3PAR or Hitachi customer, then that may be the best solution. There may be constraints on space, or a requirement to use only a global supplier, in which case the options change again. It’s more important to look at requirements than to choose a technology for its own sake.
Footnote: Just a quick shout-out to the hybrid guys out there – their IOPS density makes them great choices at the moment. Their longevity will depend on how well they can keep up as centralised workloads increase over time.
Comments are always welcome; please read our Comments Policy. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2009-2018 – Post #61A4 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.