Technology is a fickle business, and surviving more than a few short years is hard to achieve. The industry changes quickly, as the next great thing consigns existing solutions and products to legacy status, even while they are still running many organisations’ core business. Storage has evolved into a major part of any IT implementation, and companies such as NetApp have grown on the back of the insatiable demand for data. The company was founded in 1992 and celebrated its 25th anniversary last year. With the public cloud threatening to engulf traditional infrastructure companies, NetApp is embracing the cloud and making changes to adapt for the future. How is this being achieved?
Embrace the Cloud
Primarily, the focus for NetApp is to embrace, rather than fight, the public cloud. Towards the end of last year, I had a chance to interview NetApp founder Dave Hitz. During the conversation, Hitz reiterated the assertion from his on-stage presentation that the public cloud can’t be beaten. Amazon, Google, Microsoft and Alibaba have deep pockets, funding their cloud growth from existing retail and IT businesses.
- Soundbytes #014: A Conversation with NetApp Founder Dave Hitz
- Is NetApp Becoming a Service Provider?
- Azure Enterprise NFS by NetApp – Initial Thoughts
The key point, though, is that the cloud providers offer a relatively standard set of services which, especially from a data management perspective, are less mature than what enterprise customers experience on-premises. In addition, there is little incentive for the existing CSPs to interoperate well with each other or with private clouds/data centres, and this provides significant opportunity.
NetApp recently reorganised and introduced the Cloud Infrastructure Business Unit. This encompasses hardware platforms, including FlexPod, HCI, SolidFire and StorageGRID. The existing Cloud BU, run by Anthony Lye, has been renamed Cloud Data Services (CDS) and includes all of the software offerings that were previously discussed as part of the Data Fabric concept.
- Cloud Field Day 3 Preview: NetApp
- Soundbytes: The Data Fabric Explained with NetApp CTO Mark Bregman
At Cloud Field Day 3, Eiki Hrafnsson, Technical Director of the Cloud Data Services BU, presented more detail on exactly what the CDS BU will focus on. Hrafnsson came to NetApp through the acquisition of GreenQloud in 2017. Over seven years, GreenQloud had been involved in developing for CloudStack, OpenStack and Kubernetes. The company had also run a public cloud environment for four years and offered a private cloud solution called Qstack.
There are five main areas:
- Data Volumes (Cloud Volumes and ONTAP Cloud)
- Data Protection (Cloud Backup, SaaS Backup)
- Data Integration & Orchestration (Applications, APIs, Cloud Sync)
- Data & Cloud Optimisation (TBA)
- Data Security & Compliance (TBA)
Some of these offerings have still to be announced, so there’s nothing yet around the optimisation, security and compliance pieces, although these are presumably in development. Two demos of the Data Volumes technology were given at CFD3. Here’s some background on what they are.
ONTAP Cloud is a version of the ONTAP storage platform running in the public cloud. What does this mean? In the simplest form, you can imagine spinning up an instance with the ONTAP operating system running inside it, and in fact, that’s exactly what you can do within the AWS Marketplace. But there’s more to making storage available in the cloud than spinning up a VM. Processes are needed around licensing and the storage that backs the instance, for example. NetApp has gained new customers purely through the public cloud offerings, without a salesperson ever being involved. So, this product works purely as a cloud service.
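As a sketch of the Marketplace route, the standard AWS CLI can list the available images. The `*ONTAP*` name filter is an assumption for illustration; the actual Marketplace image names may differ:

```shell
# List AWS Marketplace AMIs whose name mentions ONTAP.
# Assumes the AWS CLI is installed and credentials configured;
# the name filter value is illustrative, not an exact product name.
aws ec2 describe-images \
  --owners aws-marketplace \
  --filters "Name=name,Values=*ONTAP*" \
  --query "Images[].{Name:Name,Id:ImageId}" \
  --output table
```

Launching one of these images gives you the "ONTAP in a VM" model described above, though as noted, licensing and backing storage still need to be handled.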
Cloud Volumes takes the features of ONTAP Cloud and makes storage volumes available as a service. This is the basis of the Azure Enterprise NFS service that I talked about last year. Storage becomes directly integrated into the Azure platform, which means volumes can be created programmatically and associated with other services of the platform, such as security.
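To illustrate what “created programmatically” looks like, here’s a sketch using the Azure CLI’s NetApp Files command group (`az netappfiles`, the productised form of this service). The resource group, account, pool, volume and network names are all hypothetical, and exact flags may vary by CLI version:

```shell
# Create a NetApp account, a capacity pool, then an NFS volume.
# All names (rg1, acct1, pool1, vol1, vnet1, subnet1) are hypothetical.
az netappfiles account create \
  --resource-group rg1 --name acct1 --location westeurope

az netappfiles pool create \
  --resource-group rg1 --account-name acct1 --name pool1 \
  --location westeurope --size 4 --service-level Premium

az netappfiles volume create \
  --resource-group rg1 --account-name acct1 --pool-name pool1 \
  --name vol1 --location westeurope --service-level Premium \
  --usage-threshold 100 --file-path "vol1" \
  --vnet vnet1 --subnet subnet1 --protocol-types NFSv3
```

The point is the integration: the volume is a first-class Azure resource, so it can be scripted, templated and tied into the platform’s security model like any other service.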
Why bother? Remember, at the top of this post I mentioned the relative immaturity of cloud storage solutions compared to those in the enterprise. This is where the value is added. Cloud-based applications get the benefit of mature, higher-performance storage services (in this case delivered by NetApp), fully baked into the platform. For NetApp Cloud Volumes delivered on Azure and AWS, this means NFSv3, NFSv4 and CIFS/SMB support. Performance is around 3,000 IOPS/TB and is backed by an SLA. Features include instant snapshots and clones, driven by an open API. At CFD3, Hrafnsson indicated that future implementations will be multi-platform. That presumably means being able to SnapMirror between cloud providers and on-premises (as well as other features).
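Since the quoted figure scales per terabyte, the expected IOPS ceiling for a volume is simply linear in provisioned capacity. A quick sketch (the helper function is mine, not a NetApp tool):

```shell
# Rough expected IOPS for a Cloud Volume, using the quoted
# figure of ~3000 IOPS per provisioned TB.
volume_iops() {
  local size_tb=$1
  echo $(( size_tb * 3000 ))
}

volume_iops 4    # a 4 TB volume -> prints 12000
```

So a modestly sized volume already lands in territory that many enterprise workloads would find comfortable, which is the substance behind the maturity argument above.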
Through Cloud Central, customers can consume services provided by the CDS business unit. This portal provides access to Cloud Volumes, ONTAP Cloud, Cloud Sync, Cloud Backup, SaaS Backup and any other features NetApp chooses to provide. Crucially, the portal resources are tagged to an account, which enables services to cross-authenticate with each other. Imagine being able to create an authorisation token that easily connects an ONTAP Cloud instance running in AWS with backup running in Azure.
I recommend checking out the video at the link above, which has the entire CFD3 session at NetApp, including demonstrations of the portal (the demos start around 25 minutes in).
The Architect’s View
Although public cloud providers have created amazing ecosystems in a short time, the services they offer can’t be all things to all people. There will inevitably be shortcomings and gaps within their service offerings. Vendors like NetApp have the opportunity to take decades of knowledge and IP and bridge the gap between the enterprise data centre and the cloud. This strategy is also good business sense for the CSPs as they create a route for getting applications and data into the cloud and driving up service adoption.
Thus far, it seems that NetApp is the only “legacy” storage company executing well on this strategy. Others like Dell, HPE and Pure Storage are focused on the data centre alone, while IBM is both data centre and cloud-focused (although I’m not aware of any synergy between the two).
I like the approach that NetApp is taking because it abstracts discussion on hardware away from the real subject – the data. However, there is still so much more to deliver. Optimisation, security and compliance either need partnerships or acquisitions. OCI (OnCommand Insight) is expected to be offered as a SaaS solution in the near future but will need some cloud-native focus to be truly useful.
There’s also more work to do on mobility. NetApp HCI promises to allow multi-hypervisor workloads to run on-premises. What’s needed here is an efficient process for moving that data in and out of the cloud. Some of that work is there (SnapMirror integration). What I haven’t seen yet is the overall orchestration integration.
Of course, this is a journey, so things will take time. But there is a clear strategy of transformation, which might well keep the company competitive for another quarter of a century.
Comments are always welcome; please read our Comments Policy. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2007-2018 – Post #4426 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission. Photo credit iStock.
Disclaimer: NetApp is a client of Brookend Ltd.