Virtual Volumes, or VVOLs, is a technology first introduced by VMware in 2012 to enable the application of storage policy at the individual VM level.  In the five-plus years since the technology was announced, have VVOLs lived up to their initial billing?


The need for Virtual Volumes was apparent fairly early in the server virtualisation journey.  Presenting shared storage to a vSphere cluster posed a number of challenges, not least the limit on the number of storage devices a vSphere host could support.  That meant using large block-based LUNs/volumes or NFS shares and amalgamating multiple VMs into one datastore.  Once volumes reach terabyte sizes and hold many VMs, applying policy becomes tricky.  Storage arrays work at the LUN level, so tiering, QoS and availability are applied on that basis.  This means every VM within a datastore gets whatever policy the storage array applies to the underlying LUN.
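The granularity problem can be sketched with a toy model (all names here are illustrative, not a real storage or vSphere API): when the array only sees the LUN, every VM inherits the datastore's policy, and a per-VM policy is simply not expressible.

```python
# Toy model contrasting LUN-level policy with per-VM policy.
# Hypothetical names for illustration only; not a real storage API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Datastore:
    name: str
    policy: str            # tiering/QoS/replication applied at the LUN level
    vms: list = field(default_factory=list)

@dataclass
class VM:
    name: str
    policy: Optional[str] = None  # per-VM policy, only meaningful with VVOLs

def effective_policy(vm: VM, datastore: Datastore) -> str:
    # Without VVOLs the array only sees the LUN, so the datastore
    # policy wins; with VVOLs the per-VM policy can be honoured.
    return vm.policy or datastore.policy

gold = Datastore("ds-gold", policy="tier-1, replicated")
web = VM("web01")                      # inherits the LUN-level policy
db = VM("db01", policy="tier-0, QoS")  # only honoured in a VVOL world

print(effective_policy(web, gold))  # tier-1, replicated
print(effective_policy(db, gold))   # tier-0, QoS
```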

The implications of grouping VMs like this depend on the policy type.  For QoS and performance, a VM could simply be moved to another datastore, but this introduces complexity, plus the time, effort and expense of moving the data and maintaining extra storage capacity.  Availability is even more complicated.  Failing over a replicated LUN to a remote location after a failure or DR event means every VM in the shared datastore/LUN has to move at the same time.  Alternatively, the VM to be failed over would first need to be moved to a dedicated LUN.  Neither is a practical process in a failure scenario, where the benefit of pre-planning isn't always available.

Enter VVOLs

The solution here is to find a way to apply policy at a more granular level.  Virtual Volumes was proposed as a solution and introduced in vSphere 6 with VASA 2.0 (vSphere APIs for Storage Awareness).  The technology introduced the concepts of the Storage Container and Protocol Endpoint (PE).  A Storage Container is effectively a pool of physical or logical capacity from which VM VVOLs can be created.  A PE is the name given to a device that exposes visibility of VVOLs and has been described as an “I/O demultiplexer”.  In reality, a PE is simply a standard LUN.  VVOLs are dependent LUNs attached to a PE, as described by the SCSI protocol.
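The PE's role as an "I/O demultiplexer" can be sketched as follows.  This is a conceptual model only, with illustrative names; in practice the binding of a VVOL to a PE is negotiated through the VASA provider and addressed at the SCSI layer.

```python
# Minimal sketch of a Protocol Endpoint demultiplexing I/O to bound VVOLs.
# Illustrative names only; not a real array or VASA interface.

class ProtocolEndpoint:
    """A single LUN visible to the host; VVOLs bind to it on demand."""

    def __init__(self, lun_id: int):
        self.lun_id = lun_id
        self.bindings = {}  # secondary ID -> VVOL name

    def bind(self, secondary_id: int, vvol: str) -> None:
        # In a real array the VASA provider performs this bind (e.g. at
        # VM power-on), so the host never enumerates VVOLs directly.
        self.bindings[secondary_id] = vvol

    def route_io(self, secondary_id: int) -> str:
        # The host addresses the PE; the array routes the I/O to the
        # dependent LUN identified by the secondary ID.
        return self.bindings[secondary_id]

pe = ProtocolEndpoint(lun_id=256)
pe.bind(1, "vm01-data.vvol")
pe.bind(2, "vm01-config.vvol")
print(pe.route_io(1))  # vm01-data.vvol
```

The point of the model is that the host only ever sees one device (the PE), however many VVOLs sit behind it, which is what keeps the per-host device limit manageable.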


Almost all of the work in implementing VVOLs falls to storage vendors, working to meet the VASA 2.0 (and later) specification.  With multiple dependent LUNs used per virtual machine (there are five types – Config, Data, Mem, Swap and Other), the number of devices a storage array needs to support rises rapidly.  VASA requires delegating storage administration tasks to the vSphere cluster, so VVOLs can be created without storage administrator involvement.  While this seems like a good thing, this way of working posed headaches for storage administrators who couldn't see what was going on within the storage array – and, more importantly, couldn't easily manage performance.
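The scale problem is easy to quantify with some back-of-the-envelope arithmetic.  The assumptions below (roughly three VVOLs per running VM – config, data and swap – plus one per additional virtual disk and one per snapshot) are a simplification; real counts vary by vendor and VM configuration.

```python
# Rough estimate of array-side objects under VVOLs.
# Assumes config + swap + one data VVOL per disk, plus one per snapshot;
# a simplification for illustration, not a vendor-accurate formula.

def vvol_count(vms: int, disks_per_vm: int = 1, snapshots_per_vm: int = 0) -> int:
    per_vm = 1 + disks_per_vm + 1  # config + data VVOL(s) + swap
    return vms * (per_vm + snapshots_per_vm)

print(vvol_count(1000))                                      # 3000
print(vvol_count(1000, disks_per_vm=3, snapshots_per_vm=2))  # 7000
```

Compare that with the traditional model, where the same thousand VMs might sit on a few dozen datastore LUNs, and it is clear why arrays designed around a small number of large volumes struggled to add VVOL support.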

Some vendors were quick to support VVOLs.  HPE 3PAR was one of the first, with VVOL support from day one.  As Eric Siebert highlights, there were only four vendors listed in the Hardware Compatibility Guide at GA.  Other vendors took longer to release support, notably VMware's own parent, EMC.  SolidFire (now part of NetApp) released VVOL support as part of ElementOS release 9 (Fluorine).  The company could have argued that VVOL support was implicitly there already, with LUN-level QoS and the ability to assign one LUN per VM.  However, that configuration carries much more management overhead and doesn't drive management from the hypervisor, which is clearly the better position to be in.

VASA 3.0

The release of VASA 3.0 in vSphere 6.5 introduced what might be called VVOLs 2.0 (although this isn't an "official" title).  New features include the ability to replicate VVOLs (and manage the replication process), and "lines of service" for assigning array capabilities at a more granular level.  Returning to Eric Siebert's analysis of VVOLs 2.0, his opinion is that we've not yet seen huge adoption of VVOLs.  I suspect part of that was the lack of replication support, plus the time taken to upgrade environments to a position where they could support either VVOLs 1.0 or 2.0.  Initial vendor support for VVOLs 2.0 was also as scarce as it had been for 1.0.


Clearly, the maturity of the API standard and the passing of time have given some vendors the opportunity to implement a more rounded solution.  For example, with the release of Purity O/S 5.0, Pure Storage has added VVOL support to FlashArray.  The VASA provider runs natively on the platform, whereas early implementations from other vendors required a separate VM to manage this.  There's good integration with snapshots, offloading some of this work from the hypervisor.  Plus, there's the ability to "undelete" a VM within 24 hours if it is deleted accidentally.  Probably the most interesting feature is the ability to take a VM and run the image on a physical host or push it to the public cloud.  Check out the video by Cody Hosterman in this post, where a VM is moved from a snapshot to a physical host.  I'm not sure whether this is a unique feature of FlashArray or has already been demonstrated by other vendors – there's no reason to think it couldn't be done elsewhere too.  Cody has a more detailed blog here covering the VVOL features.

The Architect’s View

From a slow start, VVOLs are starting to get interesting, because we can see some of the mobility features offering capabilities that would be hard to achieve any other way.  Part of the problem has been the work required from storage vendors to implement VVOLs.  Platforms with good virtual LUN support (i.e. the ability to create logical LUNs easily, on demand and with no overhead) have been in a strong position from day one – hence 3PAR being an early adopter.  Obviously, VVOLs haven't been the only way to apply policy-based service levels to individual VMs, as VMware has demonstrated with Virtual SAN.  Is this the direction VMware has taken with its storage marketing dollars, because Virtual SAN is chargeable?

Another alternative to VVOLs was pioneered by Tintri, which was founded in 2008 and has offered VM-aware storage policy management since coming out of stealth in 2011.  Tintri has always had features in its platforms well ahead of the VASA 2.0 and 3.0 releases, partially because the platform works with file-based datastores rather than LUNs.  The file system provides a structure around which policy can be applied.

What do you think about the implementation of VVOLs?  Are you using them in your environment?  I’d be interested to hear any stories of how VVOLs have been used effectively and of course anyone who found significant benefit from using alternatives such as Tintri.

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2018 – Post #7E19 – Chris M Evans, first published on, do not reproduce without permission.

Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.