Enterprise Computing: What Next For Virtualisation?

Earlier this month, Texas Memory Systems announced they had acquired the intellectual assets of Incipient, a company that produced SAN virtualisation hardware and software. With Incipient gone and EMC hardly bothering to mention Invista, what is the future of SAN LUN virtualisation?

I talked about Incipient last year, here and here, when discussing the costs of performing migrations. As I said at the time, I couldn’t see how much of a saving deploying their iNSP would bring to the burdensome migration work we all have to manage on an ongoing basis. So there’s got to be a more compelling benefit to using virtualisation products. If there is, what is it?

Excluding the defunct Invista, that leaves Hitachi with Universal Volume Manager (UVM) and IBM with SAN Volume Controller (SVC) still in the marketplace. From experience, I know UVM is a great product and, surprise, I’ve commented on that recently too, especially here, where I reference the fact that Hitachi are offering UVM for free. Clearly, the drawback to UVM is that it is integrated into the array itself. When the NSC55 first came out, I heard rumours that it might be a diskless virtualisation “head” and, although it can be deployed that way, it isn’t sold as such. If Hitachi decided to offer the USP VM or its successor as a diskless virtualisation controller, it would put them squarely in competition with SVC from IBM.

Earlier this year I was fortunate to have an invitation to meet Barry Whyte, “Master Inventor” and Performance Architect on the SVC product. You can find Barry’s blog here if you’re not already subscribed to it. I highly recommend it, especially for understanding the ins and outs of SVC itself. During my trip I got to see some of the hardware used for interoperability testing of SVC, against both the storage it virtualises and the servers it connects to. It’s by no means a trivial task; there are 80 people in Hursley alone working on development and testing of the product, with a further 64 scattered around the globe. Obviously virtualising storage is a complex business and requires huge amounts of testing. I’d go as far as to suggest that testing takes far more cycles than writing the code itself.

What’s all this got to do with the future of virtualisation? Well, I think it highlights what a complex process it is. Even though standards for interoperability exist, IBM (and presumably Hitachi, EMC and, at one time, Incipient) have to deal with complex interoperability issues and interleave that work with additional features and functionality, whilst guaranteeing data integrity. The slide, taken from an SVC presentation deck, gives you an idea of what’s involved. Thanks to Barry for permission to reproduce it.

Both Hitachi and IBM have been successful with virtualisation products that don’t sit within the SAN fabric itself. This seems counter-intuitive to me, as I’ve always thought the fabric was the right place for virtualisation. After all, every I/O leaving a host hits the fabric first, and that naturally becomes the best place to route the I/O to its final destination, whether that is a “real” LUN or one created by a virtualisation product.
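
To make the routing idea above a little more concrete, here is a minimal, purely illustrative sketch in Python (all names are made up; no vendor’s API is implied) of the extent-map lookup a block virtualisation layer performs: every I/O against a virtual LUN is translated to a back-end array, LUN and block address. That level of indirection is also what turns a migration into copying blocks and updating the map, invisible to the host.

# Illustrative only: a toy extent map of the kind a virtualisation layer
# (controller-based or fabric-based) consults on every I/O. Hypothetical names throughout.
from dataclasses import dataclass

@dataclass
class Extent:
    virtual_start: int    # first block of this extent on the virtual LUN
    length: int           # number of blocks in the extent
    backend_array: str    # physical array holding the data
    backend_lun: int      # LUN on that array
    backend_start: int    # starting block on the back-end LUN

class VirtualLun:
    def __init__(self, extents):
        self.extents = sorted(extents, key=lambda e: e.virtual_start)

    def route(self, block):
        """Translate a virtual block address to (array, lun, block)."""
        for e in self.extents:
            if e.virtual_start <= block < e.virtual_start + e.length:
                return (e.backend_array, e.backend_lun,
                        e.backend_start + (block - e.virtual_start))
        raise ValueError(f"block {block} is not mapped")

# One virtual LUN spread across two arrays; migrating an extent means copying
# its blocks and repointing the map entry, with the host none the wiser.
vlun = VirtualLun([Extent(0, 1000, "array-A", 12, 0),
                   Extent(1000, 1000, "array-B", 7, 5000)])
print(vlun.route(1500))   # ('array-B', 7, 5500)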

Perhaps SAN fabric virtualisation was simply too complex and costly to deploy. After all, recent history has told us that paying for a fabric-based virtualisation product is a non-starter, otherwise we’d see more Invista and iNSP deployments. Perhaps fabric-based virtualisation didn’t provide the feature set that mature IT organisations required from the technology. Either way, virtualisation in the fabric needs a rethink. Maybe FCoE provides (or provided) that opportunity?

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://www.ibm.com/developerworks/blogs/page/storagevirtualization Barry Whyte

    Hey Chris, nice photo… of Hursley House that is :) Was good to catch up with you.

  • http://blog.drstorage.de Sebastian Wormser

    Hi Chris,

    I wonder why there is no mention of DataCore, FalconStor or LSI storage virtualization products in your post?
    From experience I know that they all work (one a bit better, another a bit worse) compared with IBM SVC and Hitachi UVM. We actually won a couple of customers with DataCore storage virtualization against IBM SVC. And I do know that the Hitachi USP-VM can be ordered as an (almost) diskless version for virtualization or 3-datacenter configurations.

    You are right as far as fabric-based virtualization goes. It seems too expensive or complex to implement, and even LSI’s approach (using a proprietary fabric-switch-like appliance) can be considered an appliance-based product, since it still needs an existing SAN for fan-out.
    The never-ending discussion about where storage virtualization belongs (controller/SAN/appliance) has no real value for any customer; it is solely a marketing instrument of the vendors. One should judge the different implementations by comparing reliability, scalability, redundancy, performance, etc. If there is no single point of failure and the performance meets requirements, there should be no debate over whether controller-based virtualization outweighs SAN-based.

    I doubt that we will ever get there, as long as there is EMC, NetApp, IBM, HDS, etc. marketing going on. Can we as independent solution providers focus more on facts instead of marketing, please? (This is a rhetorical question, not specifically directed at you, Chris ;-)

    Sebastian

  • Chris Evans

    Sebastian

    I agree with your comments. As for me personally, I’ll go away and familiarise myself with DataCore a little more… :-)

    Chris

  • http://blogs.rupturedmonkey.com Snig

    Chris,

    You left the NetApp V-Series out of your post as well. I’m a huge fan of HDS’s implementation of virtualization and what it can do for a storage environment. I’ve done a number of V-Series implementations over the past couple of years as well, and I think the NetApp direction will probably prove to be a better solution than HDS and IBM. Having the ability to use all of the cool NetApp software on the front end and still have all of the back-end functionality and performance is definitely a plus!

  • http://blogs.rupturedmonkey.com Nigel

    Nice photos of Tony’s lab. Not as impressive as the EVA test lab though -

    http://blogs.rupturedmonkey.com/?p=536

    SVC – top notch product though!
