In September 2018, NetApp acquired a little-known cloud services company called StackPointCloud.  StackPoint offers a managed Kubernetes service that enables customers to build Kubernetes clusters across a range of public cloud providers.  Why would NetApp want to acquire this technology and where could it be used?

Containerisation

First of all, let’s take a step back and look at how application orchestration has evolved over the last few years.  Container-based applications were popularised by the development of Docker (the platform and the company).  As a way of quickly running applications, containers are easy, lightweight and relatively platform agnostic.  You can run a Linux-based container, for example, across any of the major distributions.

Kubernetes

As container orchestration platforms have evolved, Kubernetes has taken over as the dominant orchestration tool, both on-premises and in the public cloud.  Kubernetes provides the encapsulation for ensuring that containers launch in a managed way (via Pods) and have networking and persistent storage to match.  Cloud service providers have all developed their own container and Kubernetes services (like AWS Fargate & Azure Kubernetes Service), along with secure registries for container images.  Docker may have kickstarted the modern container revolution, but Kubernetes has created much more widespread adoption and quickly become the de facto standard.
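To make the encapsulation concrete, here is a minimal, illustrative Pod manifest showing a container launched via a Pod with persistent storage mounted from a claim.  All names (the Pod, image and claim) are hypothetical, chosen purely for the sketch.

```yaml
# Illustrative Pod spec: one container, launched via a Pod,
# with persistent storage mounted from a PersistentVolumeClaim.
apiVersion: v1
kind: Pod
metadata:
  name: web-demo                 # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.15          # any container image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: web-data      # claim provisioned separately
```

The Pod is the managed launch unit; networking and storage are attached declaratively rather than configured inside the container itself.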

Why Managed Kubernetes?

What’s the benefit of having a managed Kubernetes service?  End users could build their own service offerings, but like all self-deployed technology, these environments need to be developed and maintained over time.  Someone has to monitor and manage failures, upgrades and patching.  A managed service like those in the public cloud takes away that overhead and puts the onus on the service provider.

Let’s remember that the cloud providers are not offering any secret sauce here.  Like many managed public cloud services, managed Kubernetes/Containers are simply another set of services running on virtual instances.  However, managed services do take the operational burden away and can be useful for businesses and DevOps teams that don’t need a heavily tailored or more complex service.

Cross-Cloud Managed Services

So, why use a service like StackPointCloud?  Well, the most obvious benefit is to gain a consistent view of container orchestration across multiple cloud providers.  Instead of having to code to each one separately (because they all implement different APIs), services like StackPoint provide the orchestration to abstract away the specifics of a platform and enable applications to be deployed through a single consistent API.  It’s also possible to bring in on-premises solutions to that framework, which we’ll discuss again in a moment.

NetApp Cloud Services

NetApp as a Service Provider

Before we dig into why NetApp would acquire StackPoint, it’s helpful to look at a wider perspective.  In the last two to three years, NetApp has been evolving from a company selling mainly hardware appliances and software to one selling storage services.  This transition has been incremental, starting with the ability to use existing platforms like ONTAP and StorageGRID in the public cloud as virtual instances.

The real step forward was the consolidation of these services under the banner of the Data Fabric and the NetApp Cloud Services Portal.

Greenqloud

A lot of this technology is underpinned by IP acquired through the purchase of Greenqloud in 2017.  Greenqloud had initially created their own public cloud computing platform, later pivoting to offer a cloud management platform called Qstack.  We’ve seen the development of service provider capabilities in the Cloud Business Unit, led by Anthony Lye.  Eiki Hrafnsson (Greenqloud co-founder) is now leading the Data Fabric initiative.  Greenqloud allowed NetApp to develop a cloud-based framework for developing Data Fabric services.  This video from a recent Tech Field Day provides some additional background on Data Fabric 2.0.

NetApp Kubernetes Service

The acquisition of StackPoint has enabled NetApp to create NKS, the NetApp Kubernetes Service.  This runs as a part of NetApp Cloud Services and is available today as a tool to deploy applications across a range of public cloud providers and crucially, NetApp HCI.

I see two scenarios for how NKS will play out.

NKS for Customers

The first is to give customers that want to deploy container-based workloads a way of bringing together both the orchestration and data parts of the application.  Increasingly, containers have become stateful in the sense that they use data that persists over time.  The Data Fabric (and components like Trident and Cloud Volumes) will allow a container to gain access to data wherever it runs – including on-premises.  Think about this for a moment.  Containers allow applications and micro-services to be run pretty much anywhere.  The biggest issue in taking advantage of this application mobility is aligning with data.  The Data Fabric will make this process much easier to achieve, while not compromising on the issues of security, compliance and data sprawl.  Both data and application become mobile across on-premises and cloud platforms.
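As a sketch of how that looks in practice: Trident acts as a dynamic storage provisioner for Kubernetes, so a stateful workload simply requests storage through a claim against a Trident-backed storage class.  The class name below (“ontap-gold”) is a hypothetical example, not a fixed name.

```yaml
# Illustrative PersistentVolumeClaim against a Trident-backed
# storage class; the class name "ontap-gold" is hypothetical.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ontap-gold   # satisfied by NetApp Trident
  resources:
    requests:
      storage: 10Gi
```

The application never references a specific array or cloud volume directly, which is what lets the same claim travel with the workload across on-premises and cloud platforms.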

NKS for Data Services

The second scenario is to use NKS to orchestrate NetApp and third-party data management solutions.  Part of the Data Fabric strategy was always to open up the platform to external service providers that could bring their own IP to the party.  NKS makes this possible by creating the framework to easily implement data services as individual micro-services.  As an example, imagine exposing data to a service for backup or compliance scanning.  NetApp could develop an API that allows service providers to simply upload a container that runs the data management service.  The customer pays for usage, either in time, data capacity or some other service-based metric.

Serverless

Taking this a step further, the next level of code automation could be to implement a serverless framework into NetApp’s Cloud Services.  In this scenario, data processing triggers serverless code that could perform transcoding or other tasks.  With serverless offerings already in the cloud, it would be relatively simple to extend the Data Fabric to make use of AWS Lambda, for example.
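To illustrate the pattern, here is a minimal handler in the style of an AWS Lambda function fired by a data event.  The event shape mirrors an S3 object-created notification; the “processing” step is a hypothetical placeholder, since the actual transcoding or scanning logic would live in the service provider’s code.

```python
# Sketch of a serverless handler triggered by a data event, in the
# style of AWS Lambda. The event shape mirrors an S3 object-created
# notification; actual processing is a hypothetical placeholder.
def handler(event, context=None):
    """Return the keys of newly written objects that need processing."""
    to_process = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # A real function would kick off transcoding or a compliance
        # scan here; this sketch simply collects the object keys.
        to_process.append(key)
    return {"processed": to_process}

# Example: a minimal event describing one newly written object
sample_event = {"Records": [{"s3": {"object": {"key": "video/clip.mp4"}}}]}
print(handler(sample_event))  # {'processed': ['video/clip.mp4']}
```

The appeal of this model is that no infrastructure sits idle: code runs only when data arrives, which fits a consumption-based data services billing model.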

The Architect’s View

Summing up, where does this position NetApp?  I’ve recently written about whether we’re seeing vendors talking more about advanced storage management (aka data asset management) than true data management capabilities.  Here’s the podcast where we discussed where the market is headed.

It’s one thing to improve access to data, quite another to develop processes that actually do something with the content.  NetApp seems to be heading the right way in terms of putting a framework around how IT organisations will want to manage data in a hybrid environment.  It might seem like a stretch for a data management company to be offering application orchestration; however, when it’s in support of the data management functions, it makes total sense.

Let’s also not forget about where NetApp has come from.  Many customers still use on-premises solutions, including NetApp HCI.  Using Cloud Services, on-prem can simply be another “availability zone” as part of an overall solution.  Customers that want to stagger their cloud journey, or can’t fully commit, get to achieve the benefits of extending their existing technology investments, while not building a new island of data and compute in the cloud.

So, let’s reality check this for a moment.  There is lots of potential here.  The measure of success will be in both how well these services integrate and how mature the services become.  This will be the challenge for NetApp during 2019 and 2020 – keeping the momentum up and continuing to bring a wider ecosystem of services to their customers.

Copyright (c) 2007-2019 – Post #8B73 – Brookend Ltd, first published on https://blog.architecting.it, do not reproduce without permission. 

Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.