VMware’s Virtual and Physical SAN Misdirection


I almost fell off my chair laughing as I read this post by Chuck Hollis that talks about using vSphere VSAN (Virtual SAN) with external storage arrays.  Is this the ranting of someone clinging to the vestiges of a former life, or perhaps a veiled way for VMware to start unravelling their failed VVOL strategy?

Let’s do some scene setting.  VSAN, or Virtual SAN, is VMware’s attempt to counter the hyper-converged players who have carved out a niche in the market by physically merging the storage and hypervisor hardware and software into a single unit.  VSAN allows customers to cluster three or more ESXi hypervisors together and use local storage, supplemented with flash, to act as a single virtual datastore repository.  Data is spread across the cluster and can be made resilient to tolerate node and/or disk failure.
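To make the resilience trade-off concrete, here is a minimal sketch (illustrative Python, not VMware code) of the capacity arithmetic behind the “failures to tolerate” setting: tolerating n failures means keeping n+1 mirrored copies of each object, so usable space shrinks accordingly.

```python
# Illustrative sketch only: rough usable-capacity arithmetic for a mirrored
# VSAN-style cluster.  FTT (failures to tolerate) = n implies n+1 copies of
# each object; witness and metadata overheads are ignored for simplicity.

def usable_capacity_tb(nodes: int, raw_tb_per_node: float, ftt: int = 1) -> float:
    """Approximate usable capacity of a cluster that mirrors its objects."""
    if nodes < 3:
        raise ValueError("VSAN needs at least three hosts contributing storage")
    replicas = ftt + 1                      # FTT=1 -> two full copies of every object
    return (nodes * raw_tb_per_node) / replicas

# Four hosts with 10 TB of local disk each, tolerating one node or disk failure:
print(usable_capacity_tb(nodes=4, raw_tb_per_node=10.0, ftt=1))  # ~20.0 TB usable from 40 TB raw
```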

VMware have made a massive push on VSAN, with their corporate bloggers writing dozens of posts over recent months (Duncan Epping 38 posts, Cormac Hogan 25 posts and William Lam 14 posts, to name but a few) and speakers presenting at every VMUG around the world.  The company clearly wants to grab a share of the revenue that Nutanix, SimpliVity, Maxta, Atlantis and others are managing to generate by simplifying or in some cases removing external storage from virtual server and desktop implementations.

VVOLs are VMware’s attempt to encapsulate the virtual machine into a logical object on disk that can then have policies (performance, resiliency, etc.) applied to it, rather than today’s somewhat messy collection of files that comprise a vSphere virtual machine.  Startup vendors such as Tintri (founded in 2008, shipping product since 2011) have been managing VMs at the object level for some time by storing the VM on NFS, which is much easier to manage than block storage.  Technical previews of VVOL have been around for 18 months or more, yet the feature still hasn’t materialised.
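As a loose illustration of the difference (plain Python, none of this is a real API): today a VM is a scattered set of files on a datastore, whereas the VVOL model treats the whole VM as one object with a policy attached to it.

```python
# Today's vSphere VM on a datastore: a loose collection of files.
vm_as_files = [
    "web01.vmx",        # configuration
    "web01.vmdk",       # disk descriptor
    "web01-flat.vmdk",  # disk data
    "web01.nvram",      # firmware state
    "web01.vmsd",       # snapshot metadata
    "vmware.log",       # logs
]

# The VVOL idea: the VM as a single logical object with a policy applied to it.
# Field names here are invented for illustration.
vm_as_object = {
    "name": "web01",
    "policy": {"failures_to_tolerate": 1, "performance_tier": "gold"},
}
```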

Virtual on Physical

So let’s talk about VSAN and its applicability to shared storage.  The whole focus of VSAN was to move customers away from needing shared physical storage.  In fact this has been one of the marketing messages since the technology was first conceived.  VSAN removes the issues of dealing with those pesky storage teams, those expensive and complex storage arrays and takes us to a new world of simplification and ease of use where all of our resources live happily in the server.

However, as I have discussed many times, the external storage market has brought us resilient, highly available and feature-rich products that deliver consistent and predictable performance, especially as we move into the widespread deployment of solid state.  Shared arrays offload “heavy lifting” tasks through VAAI.  They support efficient replication, compression and de-duplication, deliver quality of service, and provide proactive device sparing, data scrubbing, integrity checking and multi-tenancy, plus a whole lot more.

Of course many of these features are not available in VSAN 1.0.  The initial release doesn’t even support vSphere features such as Fault Tolerance, Storage DRS, SIOC or Distributed Power Management.  Some of these are features which work well with shared storage and would therefore be lost in a VSAN/PSAN (physical SAN) model.

But could VSAN with PSAN offer anything?  Screenshot snippets from Chuck’s blog highlight soft benefits such as “simpler, automated provisioning”, “application storage resource managed by VM teams”, “Leverage strong vSphere integration” and “Frees storage team from day-to-day requests”.  Frankly, these statements are nothing more than hogwash or marketing fluff at its best.

VVOL becomes VSAN

But what about VVOL?  The implementation of VVOL is hampered by two things: the first is the Fibre Channel protocol itself; the second is the effort required for storage vendors to re-design their products to support VVOL at the scale VMware are looking to achieve.  No vendor today, for example, would be comfortable with, or even capable of, supporting arrays containing hundreds of thousands or millions of VVOL objects.  See my previous comments here and here for more details.

If VMware can’t deliver VVOLs, then why not create another layer of abstraction between the storage and the hypervisor and use VSAN as the datastore “container” for virtual volumes?  Now I get to apply my policies of resilience and performance at the VM level, rather than within the storage, and VSAN acts as the arbitration layer, determining the best physical storage on which to deploy my VM and deliver the policy features I require.  In some ways it almost makes sense, although we have to be careful we’re not approaching Frankenstorage again.
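Sketched in invented Python, the arbitration idea looks something like the following: take a per-VM policy and pick whichever backing physical datastore can satisfy it.  None of these names or fields correspond to a real VMware API; this is only a sketch of the concept.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    free_gb: int
    max_iops: int
    replicated: bool

def place_vm(policy: dict, datastores: list[Datastore]) -> Datastore:
    """Pick a physical datastore that satisfies the VM's policy (illustration only)."""
    candidates = [
        ds for ds in datastores
        if ds.free_gb >= policy["capacity_gb"]
        and ds.max_iops >= policy["min_iops"]
        and (ds.replicated or not policy["needs_replication"])
    ]
    if not candidates:
        raise RuntimeError("no physical storage satisfies this policy")
    # Prefer the array with the most free capacity remaining.
    return max(candidates, key=lambda ds: ds.free_gb)

pool = [
    Datastore("fast-array", free_gb=2048, max_iops=50000, replicated=True),
    Datastore("slow-array", free_gb=8192, max_iops=5000, replicated=False),
]
print(place_vm({"capacity_gb": 100, "min_iops": 10000, "needs_replication": True}, pool).name)  # fast-array
```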

The Architect’s View

Hyper-converged solutions are a great idea, but there are better implementations out there than VSAN (more on this soon).  External block storage is still an issue with vSphere.  VMware even killed off their Virsto acquisition, which could have bridged the two.  VVOLs have gone no further than technical previews.  VMware is owned by the biggest storage company in the world.  It’s not difficult to see the win-win scenario in which VMware takes away the pain of VVOLs and the embarrassment of being owned by a storage company through the model of VSAN & PSAN.  This may be a good thing.

Frankly, however, with posts like this latest one from Chuck, customers will be left wondering exactly what VMware’s storage strategy actually is.  Some may even think that VMware has no strategy at all.

Note: in follow-up posts I’ll discuss why technology like Atlantis Computing’s USX could be a much better solution for using VSAN.

 

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Subscribe to the newsletter! – simply follow this link and enter your basic details (email addresses not shared with any other site).

Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Hector Linares

    Hey Chris, great post. I think you have it flipped around. vVOL attempts to normalize at the protocol level and vSAN is VMware’s implementation of a vVOL ‘object store’. I agree that layering vSAN on top of pSAN is not the way to go. However, I think what customers are looking for is a DataCore/IBM SVC model with storage virtualization: take storage from anywhere and aggregate it into a virtualized/normalized storage pool. I am not saying vSAN offers this or is meant to do this, but it’s a common assumption customers make. For example, with Microsoft Storage Spaces, customers ask me if that technology can create a virtualized storage pool on top of existing FC/iSCSI storage. The customer has some older arrays or arrays from multiple vendors and they want to simplify management by making it all look and ‘act’ the same by layering software on top. So in the Microsoft stack, for example, you can then enable deduplication on a volume. You can do that today with Spaces or a traditional LUN. However, if Spaces layered on top of pSAN, then I can magically grow my storage pool by unmasking more LUNs and then adding those disks to the pool. Extending the Space is a simple online operation. Similar scenarios apply for tiering. Note: Spaces (similar to vSAN) is not meant to layer on top of pSAN.

    • http://thestoragearchitect.com/ Chris M Evans

      Hector, thanks for the comment. Yes, I’ve used Storage Spaces (and written about it) and in fact I deployed it with SAN storage – see this post’s comments http://blog.architecting.it/2012/06/21/windows-server-2012-windows-server-8-storage-spaces/

      Tintri have an interesting solution where they understand the content of the data on the array, which gives them great insight into managing performance and prioritisation. I’d expect this to be key to scaling virtual environments, whether local, hyper-converged or shared storage.

      I have an article coming up on Computer Weekly soon referencing exactly these points.

      Regards
      Chris

  • John

    I’d like to point out that Storage DRS is kind of useless with VSAN. You don’t have containers or pools (it’s all one giant pool) and you define storage, or capacity, at a VMDK level against the ENTIRE pool of resources (and it does auto-balancing between disk groups already). Now, a NEW version of Storage DRS that adjusted VSAN policies in reaction to latency spikes (increase stripes or cache) is something I’ve been tempted to hack together as an orchestrator script, but that wouldn’t really be Storage DRS.
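    A rough sketch of that orchestrator idea (every function below is a hypothetical placeholder, not a real vSphere or VSAN API) might look like this:

    ```python
    # Sketch of a policy-adjusting orchestrator: watch per-VM latency and bump the
    # VSAN policy (stripe width, then flash read-cache reservation) when a
    # threshold is breached.  All helpers are invented placeholders for illustration.
    import time

    LATENCY_THRESHOLD_MS = 30          # illustrative threshold
    POLL_INTERVAL_S = 300

    def get_vm_latency_ms(vm: str) -> float:
        return 45.0                                    # placeholder: would query vCenter stats

    def get_policy(vm: str) -> dict:
        return {"stripe_width": 1, "flash_read_cache_pct": 0}   # placeholder: current policy

    def apply_policy(vm: str, policy: dict) -> None:
        print(f"would re-apply policy to {vm}: {policy}")       # placeholder: reapply amended policy

    def react_to_latency(vms: list[str]) -> None:
        for vm in vms:
            if get_vm_latency_ms(vm) > LATENCY_THRESHOLD_MS:
                policy = get_policy(vm)
                # Widen the stripe first (VSAN allows widths of 1-12), then add read cache.
                if policy["stripe_width"] < 12:
                    policy["stripe_width"] += 1
                else:
                    policy["flash_read_cache_pct"] = min(100, policy["flash_read_cache_pct"] + 5)
                apply_policy(vm, policy)

    if __name__ == "__main__":
        while True:                                    # simple polling loop
            react_to_latency(["web01", "db01"])        # illustrative VM names
            time.sleep(POLL_INTERVAL_S)
    ```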

    • http://thestoragearchitect.com/ Chris M Evans

      John, thanks for all your feedback! So, what about mixed environments with differing pool characteristics – say some SATA-backed VSAN and some 15K-backed VSAN, where there’s a desire to get the read-miss I/O performance right? Could that not be where SDRS fits?

      Chris

      • Alex

        S-DRS is designed to move VMs within a single cluster. The feature is very useful in traditional environments that are based on LUNs.
        For VSAN, all of the storage of the cluster is managed in aggregate. VSAN will automatically manage your de-stage operations from SSD to HDD based on your policy settings. This policy-based management is granular to the virtual disk and more flexible than traditional auto-tiering. S-DRS in its current state is not needed for VSAN.
        Alex Jauch
        Product Line Manager
        VMware Storage & Availability

      • John

        Currently, at release, you can only have a single VSAN store in a cluster. Given enterprise flash is ~$2.25/GB (Intel 3700 list), I honestly see very little value in using 15K drives for anything with VSAN. If data’s hot, give it enough cache. If you’ve got some crazy 100% random-read OLAP workload that isn’t cache-friendly and the IOPS/GB/$ sweet spot is 15K drives, then maybe VSAN is not your best bet :) That said, those workloads are a lot rarer than people want to admit, and for ~$3600 I can have 1.6TB of cache in my 4-node cluster… Is tiering at a VMDK level (what SDRS does) REALLY that useful in a world with 1MB block-size cache that cheap? (I’m guessing this is VSAN’s block size, feel free for someone to disagree.)

        • Mark William Kulacz

          John – Mark Kulacz (analyst at NetApp) here. Have you been able to learn anything further about the VSAN block size? I realize that the stripe size is 1MB, but that doesn’t necessarily mean the block size is also 1MB. I would guess 8KB (that would match Isilon OneFS), but without any guidance it is a calculated belief. It is also not clear what happens with thin provisioning – are random sets of 8KB blocks put into 1MB stripes? Are full 1MB stripes allocated each time a single block within the stripe is written to? Any insight is appreciated. Thank you. Mark

  • John

    Storage IO Control (which is designed to prevent a single VM from gaining an unfair share of the I/O queues on LUNs and target HBAs) is also not needed in a VSAN world. You can set flash reservations (and IOPS limits), but beyond that VSAN actually removes the purpose of Storage IO Control.

    • http://thestoragearchitect.com/ Chris M Evans

      John, I’d still have seen some benefit from SIOC, especially with things like SATA disks that are poor at random-read I/O. Wouldn’t there be a need to restrict hosts from killing SATA? Or are you saying that functionality has been moved into VSAN with IOPS limits?

      • Alex

        With VSAN, you control read IOPS performance by managing the SSD reservation percentage on a per-VM basis. Keep in mind that SIOC is latency-based, not IOPS-threshold-based. We do not believe that customers want to manage absolute IOPS limits, since very few VMs are running highly characterized workloads whose IOPS loads can be predicted in advance. In addition, committing to an IOPS threshold without a latency threshold and a characterized workload is meaningless.
        VSAN is not a traditional SAN and does not always benefit from features designed to support traditional SAN limitations.
        Alex Jauch
        Product Line Manager
        VMware Storage & Availability

  • John

    While not supporting DPM is in theory a limitation, I would question whether other hyper-converged vendors support the feature in a meaningful way (and one actually used in production). Most people buy power based on their maximum draw in a datacenter, so while cool, I’ve always found this feature of limited use, and I actually find spin-down storage/NAS/object integration to be a much cooler use of this technology (like what HCP can do).

  • John

    If you’re going to highlight array vendors not being able to keep up with VMware, I’d point not at VVOLs but at the mess that was the 5.0 release of SCSI UNMAP, and the data loss, corruption and 90% performance hit that forced the feature to be pulled back.

    My understanding is that VVOLs was delayed to push VSAN out the door (internal resources got re-tasked to focus on it). Large VSP/VMAX customers with 1000s of Oracle RAC consistency groups, though, are slower-moving beasts than the shift in the market to hyper-converged. It is an opportunity to show what the technology can do when implemented right (VVOLs are used in VSAN) and to prevent vendors launching with untested implementations that cause heartburn. They are setting the bar high for VASA/VVOL/Storage Profile integration. Hopefully this should prevent the big guys from launching with an untested mess like everyone did with SCSI UNMAP.

    • http://thestoragearchitect.com/ Chris M Evans

      Interesting feedback on VVOLs. That’s a technology I’ve thought would be excellent for delivering policy-based storage. I only hope it works as it sounds like it should. I’d prefer not to be disappointed.
