The market for hyper-convergence seems to know no bounds as more and more vendors move into this segment of IT.  Last week EMC released VxRail, a rebranding of the failed VSPEX BLUE platform, built on EMC-supplied hardware and VMware software.  Looking across the market, we have the two leaders (Nutanix and SimpliVity) offering hardware appliances that support multiple hypervisors – including, in Nutanix's case, their home-grown Acropolis product.  Then there are the SDS vendors that have pivoted towards hyper-convergence.  I did a little research on the appliance part of the market, and there's a spread of hardware- and software-based appliance solutions now available:

  • Cisco – SimpliVity, StorMagic, Atlantis Computing, Maxta
  • Dell – Nutanix XC, Atlantis Computing, Maxta, StarWind
  • HDS – Hitachi Hyper Scale-Out Platform
  • HPE – HPE StoreVirtual, Atlantis Computing, Maxta
  • Lenovo – StorMagic, Atlantis Computing, Maxta, SimpliVity
  • Quanta – Maxta (also EMC?)
  • SuperMicro – Scale Computing, Atlantis Computing, Maxta

So what about NetApp?  So far the company has developed reference architectures under their FlexPod brand.  FlexPod has been in the market for a while, but really only qualifies as converged rather than hyper-converged.  Looking deeper at the software, Data ONTAP is probably one of the most "software-defined" platforms around.  A fully functional ONTAP simulator has been available for years, and there's the software-based ONTAP Edge for virtual environments.  Clustered ONTAP scales to eight nodes with SAN/block protocols and 24 nodes with NAS protocols; however, the architecture doesn't really fit the hyper-converged model, as nodes have to be deployed in pairs.  There's no "loosely coupled" or "shared nothing" scale-out design.

So how does NetApp get into hyper-convergence?  I see the company as having two main options: use SolidFire, or purchase a startup.  This is where things could get interesting.  SolidFire's Element OS is fully scale-out and has even previously been shipped as a software-only product called Element X, albeit with specific hardware requirements.  So we could imagine a hyper-converged solution from NetApp that uses SolidFire with multiple hypervisor types – in fact, any that support iSCSI LUNs.  I don't think this kind of solution would be that hard to build.  SolidFire's initial market was service providers – customers running scale-out storage and virtual servers.  Most of the background scripting and integration needed has probably already been done for customers, as the SolidFire platform can be driven entirely through its API.
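
To illustrate how thin that integration layer could be, here's a minimal sketch (in Python) of provisioning storage through the Element OS JSON-RPC API.  The cluster address, credentials, API version and account ID below are placeholder assumptions for illustration; CreateVolume and its parameters are part of the published Element API, though exact fields vary by release, so treat this as a sketch rather than production code.

```python
import requests

# Placeholder values - substitute a real cluster management VIP (MVIP),
# admin credentials and a supported Element API version.
MVIP = "192.168.0.100"
ENDPOINT = f"https://{MVIP}/json-rpc/8.0"
AUTH = ("admin", "password")


def element_call(method, params=None):
    """Issue one JSON-RPC call to the Element OS API and return the result."""
    payload = {"method": method, "params": params or {}, "id": 1}
    # verify=False because clusters typically ship self-signed certs;
    # don't do this in production.
    resp = requests.post(ENDPOINT, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    body = resp.json()
    # Element returns HTTP 200 even on API-level errors, so check the payload.
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]


# Create a 100 GiB volume with QoS limits.  The volume is then presented
# as an iSCSI LUN that any hypervisor with an iSCSI initiator can mount.
result = element_call("CreateVolume", {
    "name": "hci-datastore-01",
    "accountID": 1,                   # assumes a pre-created tenant account
    "totalSize": 100 * 1024 ** 3,     # size in bytes
    "enable512e": True,               # 512-byte sector emulation
    "qos": {"minIOPS": 500, "maxIOPS": 5000, "burstIOPS": 8000},
})
print("Created volume ID:", result["volumeID"])
```

A hyper-converged layer on top of SolidFire wouldn't need to do much more than this per datastore: create the volume, set QoS, and log the hypervisor hosts' iSCSI initiators into the cluster's storage VIP.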

Thinking about how hyper-converged could be implemented, I don't have enough information to understand how Element OS currently runs on each node – whether it is a single Linux/Unix OS or has already been containerised.  There may, of course, need to be engineering work to put both the hypervisor and the storage O/S onto the same host – this might prove the tricky part, but I can't imagine it being insurmountable.

The second alternative is to buy a startup.  Atlantis, Maxta and Springpath are the most obvious choices.  This would mean NetApp spending more cash, but it wouldn't be that expensive to pick up some good IP.

The Architect’s View

NetApp's recent sales figures yet again paint a picture of a shrinking storage market, certainly in the legacy/traditional platform area.  Hyper-convergence is taking a slice of the revenue (as has all-flash), and vendors need a hyper-converged story to compete.  With the acquisition of SolidFire, NetApp has some great storage IP that could be leveraged in other ways.  It would be good to see the company moving on from its permanent focus on Data ONTAP; that process, of course, has already started.  Can they keep the momentum going?

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2016 – Chris M Evans, first published on, do not reproduce without permission.


Written by Chris Evans

  • klstay

    Seeing Pure add the "compute layer" and become a hyperconverged platform would be far more interesting. First, there is NetApp's less-than-stellar track record of acquisitions providing long-term value in any kind of reasonable timeframe. (Sure, Engenio has been a nice-to-have for those smart enough not to run SQL on WAFL, though still dumb enough to insist on a NetApp nameplate on the box.) Second, the kind of innovative thinking needed to pull that off left the building a long time ago. Third, the whole "hyperscale/webscale" thing consists of architectures that scale down poorly to the mere size of the majority of typical corporate datacenters.

    It is clear Pure is working on cluster-style scale-out, though when it will ship is anyone's guess. It would be very surprising if that was not implemented with NVMeF. Personally, I hope they stick to a four-node limit initially, which would make either ring or mesh interconnects doable with no external switching needed. Unlike VSP, but like 3PAR, inter-node commits prior to ack back upstream would be store-and-forward, though over shared PCIe (NVMeF) instead of IP-encapsulated as with SolidFire – since for most workloads of most datacenter-sized customers (yeah, just my estimate here) the majority of traffic only ever touches one of those nodes. The important part is that the active and passive controllers in a node have continuous shared access over PCIe to a single pool of NVRAM. (At least if I am recalling the architecture correctly: there is no dedicated per-controller set of NVRAM necessitating a copy to the other controller's dedicated NVRAM.)

    There are two salient characteristics that make Pure more interesting as a potential hyperconverged platform: the size of "most" datacenter quanta of workload would obviate the need for ANY off-node storage communication, and the local compute-hosting servers in a node (assume up to four) would probably be NVMeF-interconnected to the on-node storage (a la DSSD), thus eliminating ALL of the IP/Ethernet overhead for volume access.

    Sure, today, with SQL DBAs and programmers abusing tempdb like they know they need to since commits are SO painful, such a platform is not that much better than Nutanix in the real world. However, reduce commit space from being thousands of times slower than in-memory database transactions to maybe only ten times slower, and those folks will figure it out. At that point such an architecture will crush IP-encapsulated store-and-forward at the same price point.