Virtualisation – Solving the Storage Problem

One of the interesting take-aways from VMworld EMEA this year was the number of companies looking to solve the “storage problem” in virtualisation.  I’m sure many of our traditional storage vendors will say that no problem exists; however, there is a perception that storage, in particular Fibre Channel, is a costly solution that requires expensive administrators to keep tidy and in place.  As an aside, I don’t believe that’s true, but it has done no harm for the legions of skilled storage administrators who’ve done handsomely out of the industry over the last 10-15 years (myself included, I may add).

But getting back to the point in hand, there are now a range of hardware/software and software-only solutions looking to do away with centralised storage and distribute the resources throughout a virtual infrastructure.  Anyone following virtualisation will know that VMware are pushing their own technology in the form of Virtual SAN (or VSAN, possibly the worst acronym choice ever), which is currently in public beta.  However, there are other, more mature solutions that have been out there for some time, including Nutanix (hardware/software), Simplivity (hardware/software), Atlantis Computing (software), Virsto (software, now VMware-owned), Scale Computing (hardware/software), Pivot3 (hardware/software) and ScaleIO (although that’s a presumption on my part, not a released product).  It’s also possible to use things like HP’s StoreVirtual VSA, which can exist in a multi-node architecture, and there are some vendors touching the edges of this converged model, like Tintri and Violin Memory.  The latest company to join this group is Maxta Inc, which formally came out of “stealth” on 12 November 2013 with their software-only solution called MxSP.

So what does Maxta offer?  In a deployment very similar to other solutions, MxSP is a virtual machine/appliance that sits on the virtual infrastructure and owns the storage resources for that hypervisor installation.  Guest VMs access the storage presented back to the hypervisor as an NFS share.  For data resilience between nodes, a private connection is required between the nodes, either as a physical switch or VLAN.  As MxSP replicates data RAID-1 style, this network is likely used for both communications and data transfer.  All this seems simple enough, and at some stage I’ll do a more comprehensive hands-on review.  However, in the meantime here are a few things to think about when reviewing these products.
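To make the RAID-1 style replication concrete, here is a minimal sketch of a synchronous mirror: a write is only acknowledged once every replica holds the block, and a read can be satisfied by any surviving replica.  The `Node` and `MirroredStore` classes and node names are purely illustrative assumptions; Maxta has not published MxSP internals.

```python
class Node:
    """A storage node holding blocks in a dict (a stand-in for local disk)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)


class MirroredStore:
    """RAID-1 style mirror: writes go to every node before being
    acknowledged; reads are served from any node holding the block."""
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, lba, data):
        # Synchronous replication: all replicas updated before returning.
        # In a real product this traffic crosses the private inter-node network.
        for node in self.nodes:
            node.write(lba, data)

    def read(self, lba):
        # Any surviving replica can satisfy the read.
        for node in self.nodes:
            data = node.read(lba)
            if data is not None:
                return data
        raise IOError("block %d lost on all replicas" % lba)


store = MirroredStore([Node("node-a"), Node("node-b")])
store.write(0, b"vmdk-block")
print(store.read(0))  # served from the first replica that holds the block
```

The point of the sketch is the failure behaviour: if one node's data disappears, reads still succeed from the remaining replica, which is exactly the property the availability questions below probe.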

Resources 

  • How much resource is required on each hypervisor, including compute, disk capacity and memory?
  • How is resource usage managed and monitored (including how resources can be constrained)?
  • What features are available for data reduction (thin provisioning, compression, dedupe)?
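As a worked illustration of the data-reduction question above, the sketch below shows block-level deduplication: blocks are fingerprinted and each unique block is stored only once.  The `DedupStore` class and the tiny block size are assumptions for demonstration, not any vendor's actual implementation (real systems typically use 4KB+ blocks).

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for demonstration only


class DedupStore:
    def __init__(self):
        self.chunks = {}   # fingerprint -> block data, stored once
        self.files = {}    # object name -> ordered list of fingerprints

    def put(self, name, data):
        fps = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            # Duplicate blocks hash to the same fingerprint and are not
            # stored a second time -- this is where the capacity saving comes from.
            self.chunks.setdefault(fp, block)
            fps.append(fp)
        self.files[name] = fps

    def get(self, name):
        # Reassemble the object from its fingerprint list.
        return b"".join(self.chunks[fp] for fp in self.files[name])


store = DedupStore()
store.put("vm1", b"AAAABBBBAAAA")   # three blocks written, but only two unique
print(len(store.chunks))            # 2
```

When evaluating a product, the useful follow-up is whether dedupe happens inline (as here, at write time) or as a post-process, since that changes both the capacity and the performance answers.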

Availability

  • How does the solution cope with hardware and software failure?
  • How many redundant copies of data are being kept, and how are they replicated (sync/async)?
  • What happens if a whole node fails while data is being accessed from elsewhere in the cluster?  Does that affect performance?
  • Are automated rebuilds across the remaining cluster in place to re-establish data integrity?
  • Can I vary the number of copies of data to increase my resiliency?
  • What level of data protection (local and remote) is available?
  • What level of data validation is in place (data scrubbing, CRC checking etc)?
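The last question above can be sketched in a few lines: data scrubbing means storing a checksum alongside each block, then periodically re-reading and re-verifying every block to catch silent corruption (bit rot).  The `scrub()` function and block layout below are illustrative assumptions.

```python
import zlib

blocks = {}  # lba -> (crc32 at write time, data)


def write_block(lba, data):
    # Record a checksum with every block as it is written.
    blocks[lba] = (zlib.crc32(data), data)


def scrub():
    """Re-verify every block's checksum; return LBAs that fail (bit rot)."""
    bad = []
    for lba, (crc, data) in blocks.items():
        if zlib.crc32(data) != crc:
            bad.append(lba)
    return bad


write_block(0, b"good data")
write_block(1, b"other data")
assert scrub() == []            # freshly written blocks verify clean

# Simulate silent corruption of block 1, then detect it on the next scrub.
crc, _ = blocks[1]
blocks[1] = (crc, b"xther data")
print(scrub())  # [1]
```

In a replicated cluster, a failed scrub would normally trigger a repair from a known-good replica rather than just a report, which is where this question meets the rebuild question above.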

Performance

  • How do I manage the performance of a single node?
  • How do I manage the performance of storage resources across the cluster?
  • What automated load-balancing algorithms are in place?
  • What are the limits of scalability on the solution?

Management

  • How easy is it to add and remove resources?  Can I add a node and/or just disk non-disruptively?
  • What happens if I want to take a node out of the cluster?  Can I drain its resources and move my data elsewhere?
  • What security controls are in place on the appliance/software?

Compatibility

  • What hypervisors are supported?
  • Can I run different versions of the same hypervisor and/or a heterogeneous hypervisor environment at the same time?
  • What support is there for hypervisor features such as FT & HA?  How do these integrate?
  • What is the hardware support matrix/HCL?

Maintenance 

  • How is the software supplied?  Is it a “black-box” VM or can the user configure it?
  • Does the vendor produce hardened versions?
  • Could nodes in a cluster run differing versions of the software to allow me to upgrade without an outage?
  • What level of tolerance is accepted in version differences?

Financial

  • What is the licensing model used?  How is licensing affected by usable/used capacity?
  • How easily can licences be tuned up/down?

The Architect’s View

It’s good to see these new solutions attacking some of the issues encountered in delivering storage for virtual environments.  However centralised storage has had many years of evolution that provides a solid basis for data protection and availability.  Any of the new solutions need to be reviewed in light of this; what is being lost and gained by distributing storage at the VM level?  Am I still delivering a solution that is as resilient as the one before?

One interesting thought is where these solutions are headed.  I’m sure Nutanix and Simplivity could provide software-only versions of their solutions, but enrobing them in hardware provides better and more controllable performance (and the ability to add more margin).  So, will we see more vendor-agnostic solutions?  I think the answer has to be yes; VMware are challenging with Virsto and VSAN; I expect Atlantis to do something in the server space; EMC will do something with ScaleIO.  Interoperability will be the key here, rather than specific vendor lock-in.

I’m hoping to review Maxta soon; in the meantime, check out some of the other good links I’ve listed here.

Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

Subscribe to the newsletter! – simply follow this link and enter your basic details (email addresses not shared with any other site).

Copyright (c) 2013 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.

  • Kevin Stay

    I would love to see someone like Extreme scoop up Simplivity. (Yeah, the Enterasys deal makes that unlikely…) Now you really have a full solution if packaged correctly to go against the UCS type offerings with the financial stability lacking in these upstarts.

    At least 80% of these companies will be a fading memory in 3 years and who is willing to bet your storage on those odds? Right now you have three paths to a ‘Unified Computing Platform’ which is a place you probably want your datacenter to be.

    1. Roll your own. Most of the folks who take this path do so out of a deluded sense of vanity. Few shops have the horsepower, engineering discipline or organizational structure in place to pull this off.

    2. Buy a packaged-up set of fairly traditional building blocks, pre-configured and ready to go with a nice abstracted ‘single pane of glass’ from which to manage it all. The offerings worth buying do not come cheap, but they are generally a darn sight more effective than trying to cobble it all together yourself. The biggest problem, and it is significant, is that underneath the wrapper it is all actually still fairly complex goings-on.

    3. Go with one of these new hyperconverged players like Nutanix or Simplivity. (I like the latter of those two in this space.) These solutions have a nice simple abstraction layer largely because under the wrapper the implementation actually is relatively simple. Will any of these guys be around in a few years? Who knows, but personally I cannot afford to take that chance.

    There is a reason packaged, point-solution silos are so prevalent in business – few IT groups successfully deliver the layers of unified global services needed as a foundation for offering specific business process computerization/automation. As it becomes easier for departments to simply pull out a credit card and have a ‘cloud’ application ready for use in literally hours, that will change; one way or another.
