Just when we thought there were enough all-flash products on the market, Hitachi (or HDS if you prefer) has released a new line of all-flash systems under the brand name “Hitachi Flash Storage”, or HFS.  The Cheetah reference in the title is the project name the product carried during its development.  Before digging into the technical details, many people's first thought will be whether we need yet another flash product and why it is being brought to market now.  Those are valid questions, and ones this post will explore.

Technical Specs

Hitachi HFS A

HFS is based on a completely new design from Hitachi, new in both form factor and software.  The full-depth 2U chassis houses up to 60 solid-state disks (SSDs) of up to 1.6TB each.  Today these drives are based on MLC NAND flash.  The architecture uses dual active/active controllers, although I am told it is capable of scale-out configurations.  The code base is entirely new (which presents an interesting risk, more on that later); however, I suspect there is some heritage in the design from previous platforms like HUS (Hitachi Unified Storage).  At GA (18th January 2016) three models will be available:

  • A220 – 10x 1.6TB SSDs, 16TB raw, 64TB usable (after data optimisation)
  • A250 – 30x 1.6TB SSDs, 48TB raw, 192TB usable (after data optimisation)
  • A270 – 60x 1.6TB SSDs, 96TB raw, 384TB usable (after data optimisation)
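The raw figures above all follow the same arithmetic: drive count multiplied by 1.6TB, with the quoted usable numbers implying a consistent 4:1 data-optimisation assumption across the range.  A quick sketch:

```python
# Capacity maths implied by the three HFS models listed above.
# Raw = drive count x 1.6TB; the quoted "usable" figures imply
# a uniform 4:1 data-optimisation ratio.

DRIVE_TB = 1.6  # per-SSD capacity

models = {"A220": 10, "A250": 30, "A270": 60}

for name, drives in models.items():
    raw = drives * DRIVE_TB
    usable = raw * 4  # 4:1 effective ratio implied by the quoted figures
    print(f"{name}: {raw:.0f}TB raw -> {usable:.0f}TB usable")
```

Whether real-world workloads achieve that 4:1 ratio will, of course, depend entirely on the data being stored.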

Each model supports the Fibre Channel and iSCSI block protocols (no FCoE here!).  Cache varies per model, and upgrades between the model lines are supported too. An A220->A250 upgrade is simply the addition of 20 drives; upgrading to an A270 adds either 30 or 50 drives depending on the starting model (plus some extra cache).
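The upgrade paths are simply the drive-count deltas between the models, which a trivial sketch makes explicit:

```python
# Drive counts per model, from the specs listed earlier.
drives = {"A220": 10, "A250": 30, "A270": 60}

def upgrade_drives(src: str, dst: str) -> int:
    """Extra SSDs needed to move from one model to another."""
    return drives[dst] - drives[src]

print(upgrade_drives("A220", "A250"))  # 20 drives
print(upgrade_drives("A250", "A270"))  # 30 drives
print(upgrade_drives("A220", "A270"))  # 50 drives
```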

HFS supports all of the data services expected of today’s all-flash systems, including inline data de-duplication, inline compression, thin provisioning, QoS, snapshots and replication.  More importantly, the data features can be enabled on a per-RAID-group basis, giving customers the choice of taking either the performance or the capacity hit for each data type.  Features like de-duplication naturally make more sense across larger volumes of data, which is why they are applied per-RAID group and not per-LUN.

Why Now?

Getting back to the initial thoughts, many potential clients will ask “why now?”.  Why is HDS bringing another flash system to market, well after companies like EMC and Pure Storage have become established players?  I’ve been shown some NDA competitive comparisons, and I think they point to a few things that are key in the current market:

  • Efficiency – To make flash more attractive from a price perspective, systems need to optimise for density, performance and energy efficiency, as all of these go towards justifying flash on TCO (rather than raw $/GB).  HFS looks like it offers high levels of efficiency, better than the competition.
  • Perception – Hitachi’s existing flash solutions, including the all-flash VSP F Series released only last November, are seen as heavyweight enterprise products.  There’s no doubt that Hitachi were “leaving money on the table” with customers that wanted smaller flash solutions.  In contrast, HFS starts at around the $125,000 mark.
  • Cost – As mentioned, the VSP F Series is more of an enterprise-class platform, with an enterprise-class price.  The platform doesn’t (yet) support data de-duplication, which makes the VSP seem expensive when doing effective or usable $/GB calculations.  Even if TCO or IOPS calculations show the VSP to be cost-efficient, in many instances this information falls on deaf ears, as customers look at the most basic cost metrics most of the time.
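As a worked example of why those basic metrics matter, and assuming the ~$125,000 entry price mentioned above applies to the A220, the gap between raw and effective $/GB is roughly 4:1:

```python
# Illustrative only: rough $/GB for the entry model, assuming the
# ~$125,000 starting price quoted above applies to the A220.

price_usd = 125_000        # assumed entry price (A220)
raw_gb = 16 * 1000         # 16TB raw, decimal GB
usable_gb = 64 * 1000      # 64TB usable after data optimisation

raw_cost = price_usd / raw_gb            # ~$7.81/GB raw
effective_cost = price_usd / usable_gb   # ~$1.95/GB effective

print(f"raw: ${raw_cost:.2f}/GB, effective: ${effective_cost:.2f}/GB")
```

A platform without de-duplication only gets to quote the first number, which is exactly the perception problem described above.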

The Gaps

HFS is a new platform, so it isn’t surprising that there are a few feature gaps.  Here’s what I think is missing today:

  • Automated Management – currently management is GUI-based, which harks back to storage deployments of 15 years ago.  I’m hoping that the GUI is currently hiding a REST-based API and that we’ll see both an API and CLI in the near future.
  • TLC Flash – the current products ship with MLC drives, but price competition is going to be driven by TLC.  At this stage there are few TLC drive vendors and only one (Samsung) shipping 3D TLC.  I expect that once there is a more diverse supply chain for 3D TLC, we’ll see it adopted.
  • Scale-out – the initial product releases are based on a single chassis, with only the ability to add drives within the chassis (subject to a maximum of 60).  Scale-out is in the design, so we should expect some evolution in the future.

Of course timing is key for HDS here.  New features need to come quickly.

The Architect’s View

The release of a new all-flash system is an interesting move by Hitachi, one that carries the risk inherent in introducing a truly new product and code base.  Other vendors and customers alike will question how well the code manages flash (from an endurance perspective), as well as how efficiently compression and de-dupe operate.  The market is getting super-competitive, with hybrid players like Tintri and Tegile introducing products over the last 12-18 months.  However, if the future is Flash & Trash, as predicted by Enrico Signoretti, then Hitachi would be missing a massive market opportunity with the VSP G/F Series alone.  To date Hitachi has been very successful in its flash business; however, this has been massively under-reported by the analysts due to crazy rules on acceptance to the “all-flash club”.  See the quote below from Bob Madaio of HDS – 50PB in a single quarter and rising.  Clearly there’s an opportunity to capture even more flash business.

We continue to see strong growth in our flash portfolio.  In fact, in 2015 we shipped more than 150PB of flash, with Q4 being our largest quarter ever at more than 50PB shipped. With the introduction of the new Hitachi Flash Storage solution and the November 2015 release of the VSP F series, we expect 2016 to bring continued expansion in our total shipped capacity as our customers rapidly adopt flash storage for more of their applications and a greater percentage of their production data. – Bob Madaio, Senior Director Product Marketing, HDS.

I’d like to see features like scale-out and automated management rolled out quickly.  I’m also interested to see what the QoS features look like.  2016 is going to be a very interesting year indeed for the evolution of storage.  I expect we will see fewer rather than more flash vendors as we exit this year, and, unlike SolidFire, not all of them will exit by acquisition.

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2016 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.

Photo Source: Miriadna.com (http://miriadna.com/preview/the-cheetah)


Written by Chris Evans