Earlier this week I was reading an article in the UK’s Computing magazine, which listed research on key priorities for businesses over the next three years.  The top three items were Storage Virtualisation, Desktop Virtualisation and solid state drives (SSDs).  I was interested that 56% of respondents said they were looking into virtualising their storage infrastructure; a term that of course could have multiple meanings.  Before we discuss that, it’s worth mentioning that VDI is clearly in people’s minds, despite each of the last five years having been proclaimed the “year of VDI”.  As for SSDs, well we’ve discussed that subject many times and I will be releasing a white paper covering this in the next couple of weeks.

When we talk about storage virtualisation there are two scenarios.

External Controller Virtualisation

External virtualisation describes a configuration in which one storage array is connected behind a virtualising controller, with its resources presented out of that managing array as if they were part of the array itself.  This is the technology Hitachi have successfully been selling for a decade as part of their USP/VSP range.  IBM sell this technology as SVC (SAN Volume Controller), also with a long history.  More recently NetApp introduced this feature through Data ONTAP Cluster mode and EMC implement it either through VPLEX or now directly within VMAX.  The benefits of virtualising one array through another are manifold: it reduces cost when buying large quantities of storage; it enables the controller to scale to more capacity than it would physically be capable of supporting, using it more efficiently; it allows controller-based functions such as replication and snapshots to encompass the external storage and be transparent to the user (potentially reducing licensing costs too).  Of course one big selling point is the ability to use the technology to do transparent data migrations.  Although this technology may be what the article refers to, I have a feeling that it isn’t quite what data centre managers have in mind.

Software Defined Storage

I suspect that the real intention of customers referenced in this article is to look at abstracting the storage array itself, in effect virtualising the storage hardware.  I’m sure many data centre managers would like to completely remove storage from their infrastructure, given a chance.  It has grown to become one of the most expensive and resource-consuming parts of any data centre deployment, due to the explosive growth in the volume and types of data we are creating and retaining.  Storage arrays are regarded as expensive to manage (especially those using Fibre Channel) and there are many hidden costs around data migration, performance management and administration.

So what products fall into the “Software Defined Storage” category?  Clearly “software defined” implies hardware agnostic, which means the range of virtual storage appliances fits this requirement.  There are commercial products from HP (StoreVirtual, StoreOnce), Nexenta (NexentaStor), Open-E, Nasuni, StorMagic (SvSAN), NetApp (ONTAP Edge), OpenFiler, FalconStor (Network Storage Server), EMC (ScaleIO), VMware (vSphere Storage Appliance) and DataCore (SANsymphony).  Some of these products run within a hypervisor or operating system, some can run natively on server hardware, but ultimately they should not be dependent on specific hardware configurations.

There are also the hyper-converged solutions that move storage into the hypervisor, including VMware’s VSAN, Nutanix, SimpliVity, Scale Computing and Pivot3.  These offerings remove the need for dedicated storage hardware and effectively disperse the storage resources between physical hypervisor servers.  Within this category we also see Maxta, which is a software-only implementation that isn’t sold as a packaged hardware and software solution.

Finally we should highlight that there are some open source solutions appearing on the market, such as Ceph, which can deliver distributed storage in a similar fashion to ScaleIO.

The Architect’s View

The term “Storage Virtualisation” encompasses a wide range of products and solutions.  Although external controller-based virtualisation remains a good solution, I think SDS is where most organisations will look to gain best value as they move forward and push towards greater levels of server virtualisation.  The speed and reliability of modern server hardware means dedicated storage arrays aren’t always required.  The pendulum is swinging back towards distributed storage again and SDS provides the opportunity to make this happen.

Related Links

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2018 – Post #C851 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission. 

Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.