
Storage protocols for VMware


I’ve been doing more VMware work recently. The deployment I’m working on uses SAN-presented disk. The storage started as 50GB LUNs, quickly grew to 100GB, and now we’re deploying on 200GB LUNs, using VMFS and placing multiple VM guests on each meta volume.

Now, this presents a number of problems. First, it was clear the original LUN sizes weren’t big enough in the first place. Second, migrating guests to larger LUNs had to be an offline process: present the new LUNs, shut down the guest, clone the guest, restart the guest, blow the old guest away. A time-intensive process, especially if it has to be repeated regularly.

Using FC-presented LUNs/metas also brings another problem: if we choose to use remote replication (TrueCopy/SRDF) to provide DR failover, then all the VM guests on a meta have to fail over together. This may not be practical (almost certainly isn’t!).

Add in the lack of true active/active multipathing and the restriction on the number of LUNs presentable to an ESX server, and FC LUNs don’t seem that compelling.

The options are to consider iSCSI or to store data on CIFS/NFS. I’m not keen on the CIFS/NFS option; iSCSI seems more attractive. It pushes the storage management away from the ESX server and onto the VM guest, with security managed at the array level rather than within ESX. Personally I think this is preferable; let ESX (system) administrators do their job and storage administrators do theirs. One last benefit: I can present as many iSCSI LUNs as I like, of whatever size. It also means I can stripe across multiple LUNs, something I’m unlikely to do on VMFS-presented devices.

Therefore I think iSCSI could be a great option. Then I thought of one curve ball: what if I could do thin provisioning on FC? Here’s the benefit. Imagine creating 20 VM guests on a server, all running Win2K3. Standard deployment is 10GB for the root/boot disk, but each guest only actually uses about 5GB. The remainder is left to allow for maintenance/patching/temporary space (we don’t want to have to rebuild servers); applications and data go on separate volumes. I’ll use a 200GB meta. Unfortunately it’s 50% wasted. But bring in thin provisioning and I can allocate 10GB drives with impunity. I can allocate 20 or 30 or 40! FC is back on the menu. Incidentally, I’m more than aware that iSCSI devices can already be presented thin provisioned.
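The arithmetic above is worth making explicit. Here is a quick back-of-envelope sketch (a hypothetical illustration using the figures from the paragraph; the variable names are my own, not anything from VMware or the array vendors):

```python
# Back-of-envelope comparison of thick vs thin provisioning for
# the scenario above: 20 Win2K3 guests, 10GB allocated boot disks,
# roughly 5GB of each disk actually in use.
GUESTS = 20
ALLOCATED_GB = 10   # standard boot/root disk size per guest
USED_GB = 5         # approximate real usage per guest

thick_total = GUESTS * ALLOCATED_GB   # space consumed on a thick 200GB meta
thin_total = GUESTS * USED_GB         # space actually consumed if thin provisioned
wasted = thick_total - thin_total     # capacity sitting idle under thick provisioning

print(f"Thick: {thick_total}GB, thin: {thin_total}GB, "
      f"wasted: {wasted}GB ({wasted / thick_total:.0%})")
```

Which is where the “50% wasted” figure comes from: the 200GB meta holds only about 100GB of real data, and thin provisioning lets the other 100GB back new allocations instead.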

Lots of people ask me why bother with thin provisioning. I think in VMware I’ve found a perfect use case.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Cedric

    Hi Chris,

    If we use TrueCopy for DR failover, we need to have the guest OS in the failover too. You say that it might not be practical? Could you please explain?

    Moreover, how do you provide DR redundancy with iSCSI? You’ve got two layers where you need to build in redundancy:

    a redundant path to two iSCSI gateways, and a redundant failover between two storage arrays in the backend.

  • Cedric

    Hi Chris,

    You say TrueCopy might not be practical? Why?

    If we choose the iSCSI path: to have a robust iSCSI solution, we need two boxes. How will VMware react if we lose the iSCSI gateways? Seamless failover?
