Will TCO Drive Software Defined Storage?


With the imminent announcement of pricing for VMware Virtual SAN, there has never been a more complex landscape for storage in the enterprise.  But how much will future decisions be based on technology, and how much on understanding the TCO of the solutions being deployed?

Rumour has it that VSAN will be priced at about $2,500 per CPU.  That’s on top of existing licensing and storage hardware (flash and disk).  Presumably this means that on a 3-node cluster with 4 CPUs per node, VSAN will add an additional $30,000 in costs at list price.  Anyone who follows enterprise storage pricing would expect vendors to offer pricing around the $5/GB mark, so for the same price as licensing VSAN, we could go out and purchase a 6TB+ array.  This doesn’t seem much in terms of capacity but, to be fair, there are many small array vendors out there who could supply a huge amount of storage capacity for $30,000.  This is rather a simplistic example, so perhaps we need to look at things in more detail.
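As a rough illustration, here is a minimal back-of-envelope version of that calculation.  The $2,500 per CPU figure is only a rumour, and the cluster size and $5/GB array price are the assumptions used above, not confirmed vendor pricing.

# Back-of-envelope comparison of the rumoured VSAN licence cost with the
# array capacity the same spend might buy. All figures are assumptions
# carried over from the paragraph above, not confirmed pricing.

VSAN_PRICE_PER_CPU = 2500       # rumoured list price, USD
NODES = 3                       # small cluster
CPUS_PER_NODE = 4
ARRAY_PRICE_PER_GB = 5          # typical enterprise array price point, USD/GB

vsan_licence_cost = VSAN_PRICE_PER_CPU * NODES * CPUS_PER_NODE
equivalent_array_gb = vsan_licence_cost / ARRAY_PRICE_PER_GB

print(f"VSAN licensing (list): ${vsan_licence_cost:,}")                   # $30,000
print(f"Equivalent array capacity: {equivalent_array_gb / 1000:.1f} TB")  # 6.0 TB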

Storage Options

Today users have multiple ways to implement storage in a virtual data centre.

Hyper-Converged – In this instance storage and compute are merged into the same physical server chassis.  Software within the hypervisor kernel (e.g. VSAN) or in a VM (e.g. Nutanix) provides virtual storage resources based on the disks and flash in the server itself.  There are software-only solutions (e.g. Maxta) and packaged solutions such as Nutanix and SimpliVity.  From a total cost of ownership perspective, these solutions can be implemented without needing separate storage skills or teams, and there may also be savings in data centre space (facilities) and environmental costs.  On the negative side, the storage can’t be used for other purposes, and in busy environments there is greater potential for impact and risk in understanding how storage and compute workloads are prioritised on the same infrastructure.

Simple-SAN – In this instance, rather than deploy a storage array that is highly complex, the solution is to deploy hardware that provides some of the basic shared storage requirements of availability, resilience and performance.  In many cases, these arrays can be managed by the same team handling virtualisation as they are easy to deploy and generally don’t need a lot of management once in place.  Probably one of the best examples here is ISE from X-IO.  ISE systems are “black-box” implementations of storage capacity using hard drives and flash.  Technology within the array controls and manages transient and permanent device failures, resulting in a much lower maintenance cycle.  Calling ISE simple is unfair as there are many more features in the product than high availability, but this is one of its strengths.  Cost savings in using simple SANs can be made in terms of reducing management overhead and eliminating the need for high-end skills.

Complex-SAN – The traditional method of storage deployment is to use high-end arrays with lots of resilience and availability built in, as well as complex functionality like block-level tiering, compression and de-duplication.  Of course, the hardware cost per TB is high and the skills required to maintain the hardware can be expensive too.  However, many end users look at these solutions for more than just virtualisation and value availability over the cost of deployment.

So, which solution is right for you?  Each solution has its technical merits and issues; however, without an idea of your sensitivity to cost (whether that’s infrastructure, skills or facilities), the decision becomes much more difficult to make.  There are also a number of platforms that don’t easily fit into the above definitions, including Tintri’s VMstore.

The Architect’s View

Storage for the Software Defined Data Centre is becoming more complex than ever.  Developing and maintaining a TCO model is essential in order to evaluate the myriad options available to a customer.  TCO and requirements together form the basis of sound purchasing decisions.  It’s good to see intense vendor competition, as this drives the market forward and makes storage a key feature of the SDDC to come.
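To make that concrete, below is a minimal sketch of what a simple TCO model might look like.  The cost categories and every figure in it are hypothetical placeholders; a real model would be populated with quotes, salary data and facilities costs from your own environment, and would span the deployment models discussed above.

# Minimal, illustrative TCO model. All names and numbers are placeholders,
# not real vendor pricing or recommendations.
from dataclasses import dataclass

@dataclass
class StorageOption:
    name: str
    hardware: float           # capital cost over the comparison period, USD
    software_licences: float  # hypervisor / storage software licensing, USD
    facilities: float         # power, cooling, rack space, USD
    staff: float              # storage admin time / specialist skills, USD

    def tco(self) -> float:
        # A fuller model would discount future costs and include migration
        # and refresh; here TCO is simply the sum of the categories.
        return self.hardware + self.software_licences + self.facilities + self.staff

# Placeholder figures only - replace with real quotes and internal costs.
options = [
    StorageOption("Hyper-converged", 60_000, 30_000,  5_000, 10_000),
    StorageOption("Simple SAN",      55_000, 10_000,  8_000, 15_000),
    StorageOption("Complex SAN",     90_000, 25_000, 12_000, 40_000),
]

for option in sorted(options, key=lambda o: o.tco()):
    print(f"{option.name:16s} estimated TCO: ${option.tco():,.0f}")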


Comments are always welcome; please indicate if you work for a vendor as it’s only fair for others to judge context.  If you have any related links of interest, please feel free to add them as a comment for consideration.

Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.