In a recent post I talked about whether NetApp may choose to move into the hyper-converged appliance market.  Since then, Cisco has released their own hyper-converged offering called HyperFlex, a product based on UCS and software from Springpath (rebranded as the Cisco HyperFlex HX Data Platform).  Until now, Cisco has been part of the hyper-converged market through partnerships with software vendors, the most notable being SimpliVity with OmniStack, although relationships also exist with StorMagic, Atlantis Computing and Maxta.  This makes the move to offer a Cisco-branded platform interesting, to say the least.

Let’s look first at the StorMagic, Atlantis and Maxta situation.  StorMagic is more suited to ROBO type deployments that scale out to hundreds of systems; think department stores (or other shops with lots of branches), medical practices or remote areas where technical support is limited.  This is particularly niche and not directly suited to high-end deployments in the data centre.  Atlantis and Maxta are more midrange, but the relationship is perhaps the inverse of what is being done with HyperFlex, in that these vendors are using Cisco UCS hardware as one of many solutions to provide hardware choice, rather than using Cisco as their only channel to market.

Cisco has been (to my mind) more of a channel opportunity for SimpliVity; however, this position is slowly changing with the availability of OmniStack on Lenovo (previously IBM) hardware.  This move could prove interesting as Lenovo looks to grow its brand in a similar fashion to its laptop business.  What about the view from where SimpliVity sits?  From Chris Mellor’s perspective over at The Register, nothing changes, as OmniCube/OmniStack offer more efficient and advanced features than those in HyperFlex.  How long that lasts remains to be seen.

Springpath

Looking in more detail at HyperFlex, it’s worth talking for a moment about Springpath.  The company (formerly known as Storvisor) presented at Storage Field Day 7 back in March 2015, and without a doubt the delegates were massively impressed by what the technology offered.  Springpath’s HALO platform is a distributed and virtualised storage layer, similar to the storage component found in Nutanix’s and SimpliVity’s products (similar in the sense that storage is distributed in a redundant/protected fashion across all nodes in a cluster).
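As a purely illustrative sketch (and emphatically not Springpath’s actual HALO code), the general principle of distributed, protected storage can be reduced to placing multiple copies of each data block on different nodes in the cluster; the node names, replication factor and hashing scheme below are all assumptions for the example.

```python
# Illustrative only: a generic way to place redundant copies of data blocks
# across the nodes of a hyper-converged cluster. This is NOT the Springpath
# HALO implementation, just the general idea of distributed, protected storage.

import hashlib

NODES = ["node-1", "node-2", "node-3", "node-4"]   # hypothetical cluster
REPLICATION_FACTOR = 2                             # each block stored twice

def place_replicas(block_id, nodes=NODES, rf=REPLICATION_FACTOR):
    """Choose rf distinct nodes for a block, spreading placement by hash."""
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(rf)]

for block in ("vm42-disk0-block-0001", "vm42-disk0-block-0002"):
    print(block, "->", place_replicas(block))
```

The point of the sketch is simply that any single node failure leaves at least one copy of every block intact elsewhere in the cluster, which is the property HALO-style platforms are built around.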

Springpath went quiet back in November 2015, with no blog posts or press releases since that time.  The Register posted an article on the situation in December 2015, highlighting cancelled events and the dropping of their PR company.  Given this new (and surely good news) relationship with Cisco, you would have expected Springpath to at least issue a press release on the subject.  However, there has been radio silence.  More mysteriously, the slider on the Springpath homepage had been updated to include a link to the Cisco blog post announcing HyperFlex; this has since been removed.  It looks like Springpath aren’t in a good place, and the relationship with Cisco may be their lifeline to cashing out the company.

The Architect’s View

We’re in a very interesting position in the marketplace as hyper-convergence matures and the server vendors start pushing their own products.  The market has become hardware focused, with Cisco now selling their own product line, EMC/Dell pushing VxRAIL, Dell working with Nutanix and HPE focusing on StoreVirtual.  Even though there are (significant) differences between these platforms, the message may get lost as the leaders in the market focus on higher-level messaging such as operational efficiency, consolidation and flexibility (like multi-hypervisor support).  This is going to make life tough for the software-defined startups, who will have to focus on showing how their platforms tick all of those boxes and more.  2016 could see more casualties than just Springpath as this market segment gets even hotter.


Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2016 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.

Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.

1 Comment

klstay

Wow, three great posts all so close together! (Can’t stop myself from commenting…)

All the “goings on” in hyperconverged lead to an inevitable end point, especially given what is happening in flash storage. Before blathering about that, though, there is the ongoing and widening disconnect between web-scale architectures and datacenter-scale architectures. I love it when a vendor drones on about scale-out of the product to tens of thousands of nodes, VMs, workloads, etc. Well, what about scaling down to where a LOT of enterprise datacenters live? The current inherent inefficiencies of Nutanix or SimpliVity fade into the noise with 60+ of their “bricks” in your racks. With the more typical (my opinion) datacenter site workloads of ~1000 VMs, that is simply not the case. That end point in hyperconverged, I think, bodes very well for both web-scale AND datacenter-scale architectures.

What matters most in the virtualized datacenter (enterprise scale or web scale) is the ability to reliably deliver performance isolation for entire application stacks. Scaling overall capacity efficiently as requirements evolve is a secondary and largely economic consideration; it still matters, just less than what matters most. So, the salient characteristic of the virtualized datacenter is NOT the ability to consistently deliver X performance for application A, but the ability to deliver X performance for applications A, B, and C and onward at the same time.

Yes, the need for performance isolation seems obvious, but at the same time what real benefit is there to each of those applications in being able to “scale out” the entire hosting environment beyond just its needs? Technically none, and economically only to a certain point. Now, consider a hyperconverged architecture where a single “brick/chassis” consists of, say, 4 dual-socket hosts, each with up to 1.5TB RAM, all “mid-plane” connected via NVMeF to their shared storage, which uses the latest high-speed NAND as a caching layer in front of a much bigger TLC pool.

As long as a single application stack (specialized HPC-type workloads aside) can fit on such a brick, does it really matter that other bricks in your racks do not all have the exact same access to that storage, and vice versa? Hooking up 2 or maybe 4 of those bricks via external NVMeF, just between the storage, takes care of HA. The implications of that kind of direct storage access for the hosting servers in each brick are not limited to just higher performance. Remember, the economic side of things (which also matters) dictates that it is not just how fast you can run a workload, but how many workloads can be run at that level of performance per brick.
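To put rough numbers on that, here is a back-of-the-envelope sketch using the brick described above (4 dual-socket hosts with up to 1.5TB RAM each); the per-VM RAM size and headroom fraction are assumptions for illustration, not figures stated anywhere in this thread.

```python
# Back-of-the-envelope workload density for the hypothetical "brick" above.
# Per-VM RAM and usable headroom are assumed values, purely illustrative.

hosts_per_brick = 4
ram_per_host_gb = 1536      # 1.5 TB per host, as described above
ram_per_vm_gb = 4           # assumed average VM size (hypothetical)
usable_fraction = 0.8       # assumed headroom for hypervisor, caching, HA

brick_ram_gb = hosts_per_brick * ram_per_host_gb
vms_per_brick = int(brick_ram_gb * usable_fraction / ram_per_vm_gb)
print(f"{brick_ram_gb} GB RAM per brick -> roughly {vms_per_brick} VMs")
# 6144 GB per brick -> roughly 1228 VMs, i.e. in the region of the ~1000-VM
# datacenter site mentioned earlier in this comment.
```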

Now, compare that type of “brick” with what you get from SimpliVity or Nutanix or a VSAN-type solution today. Such an all-software, store-and-forward-over-IP storage layer will be absolutely crushed, both performance- AND economics-wise, by something like what I have described. Why the performance will be so much better is obvious; why the same is true of the economics is less obvious.

Up until now, application admins and DBAs have known to request as much RAM as they can for a VM and then use it to avoid going to disk wherever possible. That is understandable given storage has been six orders of magnitude or more slower than RAM. Even the latest all-flash arrays still carry all the overhead and encapsulation (which is eliminated in DSSD-like designs), keeping storage access times well above the pain threshold. When storage is truly only 1 to 2 orders of magnitude slower, the real potential bottlenecks to whole-application-stack performance shift dramatically. Such large amounts of system RAM, and all the machinations involved in avoiding going to disk, disappear to a large degree. The result is that the workload density such a “brick” can host is MUCH higher than current designs allow for a wide variety of situations.
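As a rough sketch of the latency gap being described here, the figures below are ballpark assumptions (typical published numbers, not measurements, and the exact gaps vary by hardware and what you count as an access):

```python
# Rough latency arithmetic behind the "orders of magnitude" point above.
# All figures are illustrative ballpark values, not quotes from the post.

import math

latency_ns = {
    "DRAM access":              100,         # ~100 ns
    "NVMeF / DSSD-like flash":  10_000,      # ~10 us
    "All-flash array I/O":      1_000_000,   # ~1 ms incl. protocol overhead
    "Spinning disk I/O":        10_000_000,  # ~10 ms
}

dram = latency_ns["DRAM access"]
for tier, ns in latency_ns.items():
    orders = math.log10(ns / dram)
    print(f"{tier:26s} ~{ns:>11,} ns  ({orders:.0f} orders of magnitude vs DRAM)")
# Once the gap shrinks from ~5 orders of magnitude to ~2, the case for
# hoarding RAM purely to avoid storage access weakens considerably.
```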
