Chris Mellor at The Register has an interesting article following up on rumours of NetApp moving into the hyper-convergence market. This intention shouldn’t be a surprise in some respects. As the storage market fragments, the incumbents have to adapt to the needs of customers and hyper-converged systems are – by any measure – popular.
Storage Pure Play
It’s debatable whether any company can continue to be a storage “pure play” in the current storage market. I discussed this subject in a short video with Calvin Zito and Mark Peters (ESG) earlier this year (link). Historically, storage was based around one or two major products, whereas today there are many (think primary storage, secondary storage, archive, backup/dedupe, SDS, enterprise, midrange, all-flash, hybrid, object, etc.) and that means having a portfolio of products to fit all requirements. However, more interesting is the move away from centralised storage to hyper-converged and converged solutions. Here we see a range of players that we’ve discussed many times, like Nutanix, SimpliVity, VMware (with Virtual SAN), HPE, Atlantis Computing, HyperGrid and many more.
The trend away from traditional arrays is clearly visible in the revenues of vendors that still derive the majority of their business from this segment of the market. You can see the figures in a recent blog post, showing how the storage array business is declining while revenues from storage in general are rising slightly (link here).
The potential in hyper-convergence is seen by looking at the revenues of the soon-to-IPO Nutanix. Check out Chris Mellor’s post on this, which shows rapidly rising revenue and flattening losses (link here). VMware have been pushing hard with Virtual SAN and via EMC with VxRail; SimpliVity featured prominently in Forrester’s recent hyper-convergence report (link, registration required). Gartner predicts hyper-convergence will be a $2 billion market this year and $5 billion by 2019 (link), so there’s obviously revenue to go for, which in part has come from storage.
NetApp and SCM
Getting back to Chris’s article, there’s an indication that NetApp (who would be late to the hyper-converged market by some margin) could be looking to leverage Storage Class Memory (SCM) as a way to leapfrog the competition. SCM is a class of storage products that put persistent storage onto the memory bus of the server (see a primer on Diablo Technologies here). SCM devices like Diablo’s Memory Channel Storage provide the capability to store data persistently across reboots on memory cards that fit into the DIMM sockets of the server, so-called NVDIMMs or non-volatile DIMMs. Other SCM products could include battery-backed memory and 3D XPoint, when we see it.
The benefit of being on the memory (rather than I/O) bus of the server is I/O performance, and in particular massively reduced latency. Data is also byte addressable, rather than being stored and retrieved in blocks, as it would be for an I/O bus device (although the products themselves may not directly provide byte addressability, instead emulating it). Low latency provides the capability to run both virtual machine and container instances at significantly higher density than typically achieved today, and was the premise of PernixData’s FVP product (although that used flash and volatile memory).
SCM means large volumes of I/O can be served from memory and potentially stored in memory with fewer requirements to create multiple copies to protect against controller or server failure. Exactly how this is done remains to be seen, but there are obvious benefits from not having to continually commit to relatively slow external disk.
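To make the byte-addressability contrast above concrete, here’s a minimal Python sketch. It uses an ordinary temporary file as a stand-in for persistent media (a real NVDIMM would typically be exposed as a DAX-mapped file); the block size and offsets are illustrative assumptions, not details of any NetApp or Diablo product.

```python
import mmap
import os
import tempfile

BLOCK_SIZE = 4096  # typical block-device transfer unit

# A small backing file standing in for persistent media. On a real
# NVDIMM this would be a DAX-mapped region; the temp file is purely
# illustrative.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\x00" * BLOCK_SIZE, 0)

# Block-style access: changing one byte means a read-modify-write
# cycle on a whole block across the I/O bus.
block = bytearray(os.pread(fd, BLOCK_SIZE, 0))
block[42] = 0xFF
os.pwrite(fd, bytes(block), 0)

# Byte-addressable access: map the media into the address space and
# touch a single byte in place -- no block transfer required.
with mmap.mmap(fd, BLOCK_SIZE) as mem:
    mem[43] = 0x7F      # one-byte store
    mem.flush()         # ensure the store reaches the backing media

single = os.pread(fd, 1, 43)
os.close(fd)
os.remove(path)
```

The second pattern is why SCM latency is so much lower: an update is a CPU store instruction rather than a block I/O round trip through the storage stack.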
What could prove interesting is how NetApp chooses to integrate the idea of hyper-convergence into their existing product line. You can read my thoughts on this in a February blog post that speculates on NetApp getting into the hyper-converged market (link). The acquisition of SolidFire could well be the catalyst for this move. One question not yet answered, though, is which hypervisor might be used. Would NetApp choose to build their own hypervisor based on KVM and integrate that into (or alongside) the SolidFire Element OS? Will the solution be a mix of storage and compute nodes in a loosely coupled hyper-converged solution? How will this business affect the NetApp relationship with Cisco and FlexPod?
The Architect’s View
NetApp continues to transform its business as the company moves away from a single platform towards delivering services to a wider audience. A hyper-converged solution was always on the drawing board; the question was when, not if. At the most recent NetApp/SolidFire analysts’ day in June this year, George Kurian closed the session by stressing that NetApp is a data management company, further re-emphasising a focus on the Data Fabric (see Tech Field Day presentations from NetApp at Storage Field Day 9 earlier this year). Hyper-convergence fits this strategy, even if it is a little more “left field” than the traditional storage platforms of old. However, if NetApp wants to compete, this is a market it needs to be in.
- Opinion: Is Enterprise Storage a Stagnant Market?
- Will NetApp Build A Hyper-Converged Appliance?
- Will Pure Play Storage Vendors Be Gone With The Wind? (Around The Storage Block Blog), retrieved 27 September 2016, published 20 June 2016
- We read Nutanix’s homework… and the numbers look good (The Register), retrieved 27 September 2016, published 20 September 2016
Comments are always welcome; please read our Comments Policy first. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2009-2016 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.