
Hitachi Bloggers Day 2011 – Part I


As previously mentioned, last week I attended the second HDS/Hitachi Bloggers’ Day, this year located at Sefton Park in the UK.  Setting the event on this side of The Pond (rather than requiring me to take a 10,000 mile round-trip) was certainly a benefit and I welcomed the lack of jet lag!  I suspect of all the attendees I had the shortest journey although I still managed to be one of the last to arrive at Storage Beers on Tuesday evening.

The event itself was well organised and informative.  Hitachi weren’t afraid to be put on the spot and answer those hard questions bloggers like to ask.  This open approach is refreshing; unfortunately, few vendors are prepared to take it.

Anyway, back to the subject in hand.  During the course of the two days, the content was deep and varied, covering block hardware (VSP), file and content (HCP and HDI – more on that later), healthcare, servers and Storage Economics.  Rather than lump everything into a single post, I’ll start with the discussion of VAAI support.

Everybody’s Doing It

Yes, it’s true to say anyone who’s anyone is supporting VAAI.  Check out Stephen Foskett’s great post that covers VAAI support across the major vendors.  Clearly Hitachi can’t afford *not* to support VAAI, as it is essential for large-scale virtualised environments.  The reason for this is the need to move data around both within and between hypervisors to support load balancing and cloning.  In addition, VAAI support increases the granularity of file locking from the LUN (i.e. a complete VMFS object) to the block level, with two benefits: it reduces contention on the VMFS (so improving performance) and also allows datastores to be scaled to more than 2TB in size (although there are other restrictions preventing this).  Lastly, VAAI reduces the need to write “empty” data by offloading block writes of zeros to the array.  In arrays that support thin provisioning this should also mean reduced storage consumption, as those “empty” blocks need never be physically allocated.
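To make the mechanics concrete, here’s a minimal Python sketch (illustrative only, and not any vendor’s implementation) of how the three block-level primitives map onto standard SCSI opcodes, with a simplified WRITE SAME(16) descriptor for the zero-offload case:

```python
# Illustrative sketch only: the three block-level VAAI primitives map onto
# standard SCSI commands. Opcodes are from the SPC/SBC specs; the CDB layout
# below is simplified, not a complete implementation.
import struct

ATS_OPCODE        = 0x89  # COMPARE AND WRITE - replaces LUN-level locking
XCOPY_OPCODE      = 0x83  # EXTENDED COPY - offloads clone/vMotion data movement
WRITE_SAME_OPCODE = 0x93  # WRITE SAME(16) - offloads zeroing of "empty" blocks

def write_same_zero_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a simplified WRITE SAME(16) CDB asking the array to repeat a
    single zeroed block num_blocks times, instead of the host sending zeros."""
    return struct.pack(">BBQIBB",
                       WRITE_SAME_OPCODE,
                       0x00,        # flags (no UNMAP bit in this sketch)
                       lba,         # starting logical block address (8 bytes)
                       num_blocks,  # number of blocks to write (4 bytes)
                       0x00,        # group number
                       0x00)        # control byte

# Zero a 1 GiB region of 512-byte blocks with one 16-byte command,
# rather than shipping a gigabyte of zeros across the fabric.
cdb = write_same_zero_cdb(lba=0, num_blocks=(1 << 30) // 512)
print(cdb.hex())
```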

So why is Hitachi so keen to talk about VAAI?  Well, there are a number of reasons.  Most obviously, they want all their customers to know they “get” virtualisation and VMware – definitely a marketing position.  It also lets them draw on Hitachi’s heritage in mainframe environments, where extent-level locking was the norm.  Possibly more important, to my mind, is that VAAI will be essential to extracting the scalability and performance of the new VSP.

3D Scaling

Following the VSP release last year, I wrote this post talking about some of the internal VSP architecture that allows it to scale better than previous models, including support for external storage (the third dimension).  Extracting every ounce of performance from an array is something customers will surely want, but if the server platform can’t produce that demand then the VSP becomes an expensive commodity.  By supporting VAAI, Hitachi are ensuring that virtualised environments can offload as much workload as possible to the array, thereby fully utilising the performance capabilities of the storage hardware.

Without a doubt, high performance is a Good Thing, but let’s face it, you can have too much of a good thing, and this is the area where I have my concerns.  A VSP can scale to support multiple virtual environments; it would therefore be possible for virtualisation administrators to overload the storage array by initiating multiple cloning and vMotion activities at the same time.  What controls are there to prevent this?  It would appear there are none, or at least The Heff wasn’t sure what they were.

Storage I/O Control

What we need is a control mechanism for the VAAI features.  When all I/O came from outside the array, things were simple; too much I/O could be throttled by restricting queue depth per host.  That’s not to say an array couldn’t be overwhelmed; of course it could, especially if remote replication was used.
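A queue depth is simply a cap on outstanding commands, which a semaphore models neatly; this is a hedged sketch of that idea (names and numbers are mine, not any real driver code):

```python
# Minimal sketch of the host-side throttle described above: an HBA queue
# depth is effectively a cap on in-flight commands, modelled by a semaphore.
import threading

class HostQueue:
    def __init__(self, queue_depth: int = 32):
        # Only queue_depth commands may be in flight to the array at once;
        # further I/O waits on the host rather than flooding the array.
        self._slots = threading.BoundedSemaphore(queue_depth)

    def submit(self, io):
        self._slots.acquire()        # wait for a free slot
        try:
            return io()              # send the command to the array
        finally:
            self._slots.release()    # free the slot on completion
```

The catch with VAAI is that offloaded copies execute inside the array itself, beyond the reach of any such host-side limit; hence the concern above.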

Having been involved in some in-depth discussions in the past with the engineers in Hitachi Japan, I *know* there are throttling mechanisms in other parts of the VSP design.  Workload is prioritised to ensure that higher priority activities (like responding to cache destages for replication) are handled before other work.  Therefore I’d expect that the VAAI functions would be treated as lower priority and handled after normal I/O activity.  Heff has promised to go away and confirm this, so hopefully we’ll see the answer soon.
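If the prioritisation works the way I expect (and to be clear, this is my speculation rather than confirmed Hitachi design), the behaviour would look something like a priority queue in which VAAI offload work only runs once more urgent work has drained:

```python
# Speculative sketch of priority-ordered internal scheduling; the priority
# levels and class names here are invented for illustration.
import heapq
import itertools

REPLICATION_DESTAGE, HOST_IO, VAAI_OFFLOAD = 0, 1, 2  # lower = more urgent

class ArrayScheduler:
    def __init__(self):
        self._queue = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order per priority

    def enqueue(self, priority: int, task):
        heapq.heappush(self._queue, (priority, next(self._seq), task))

    def run_next(self):
        if self._queue:
            _, _, task = heapq.heappop(self._queue)
            task()

sched = ArrayScheduler()
sched.enqueue(VAAI_OFFLOAD, lambda: print("xcopy segment"))
sched.enqueue(REPLICATION_DESTAGE, lambda: print("destage"))
sched.run_next()   # runs "destage" first, despite it arriving second
```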

One other thought – the subject of security.  The VAAI copy features are implemented through the SCSI EXTENDED COPY command.  From what I have read (and posted here) there are no security controls to prevent any user from issuing this SCSI copy command.  So, there’s nothing to stop me writing a SCSI call to copy data from another LUN on the array into a LUN I have access to.  I would hope I am wrong on this assumption and that there would be a higher level check within Fibre Channel/iSCSI/FCoE to validate I have access to the source and target LUNs.  If there is, unfortunately I haven’t been able to find it.  The upshot of this (potential) loophole is that I could obtain access to secure data.  Worse, I could change or overwrite that data either destructively or for my benefit.  Hopefully someone will prove me wrong.
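To illustrate what a proper check would look like, here’s a hypothetical sketch (all names invented) of the validation I’d hope the array or fabric performs before executing an EXTENDED COPY:

```python
# Hypothetical illustration of the missing check: before executing an
# EXTENDED COPY, the array (or fabric) would need to verify the requesting
# initiator is masked to *both* the source and target LUNs, not just the
# LUN the command was addressed to. All identifiers here are invented.
lun_masking = {
    "initiator-wwpn-a": {"lun-1"},            # can see only lun-1
    "initiator-wwpn-b": {"lun-1", "lun-2"},   # can see both
}

def xcopy_permitted(initiator: str, source_lun: str, target_lun: str) -> bool:
    visible = lun_masking.get(initiator, set())
    return source_lun in visible and target_lun in visible

# Without this check, initiator-a could name lun-2 as a copy source inside
# the XCOPY parameter list and pull its data into lun-1.
assert not xcopy_permitted("initiator-wwpn-a", "lun-2", "lun-1")
assert xcopy_permitted("initiator-wwpn-b", "lun-2", "lun-1")
```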

So VAAI is a good thing; Hitachi demonstrated VAAI working on a vMotion and datastore creation task.  What we need to see next is a demonstration at scale.

For reference you can find a copy of Heff’s presentation here.

Disclosure:  The Hitachi Bloggers’ Day was an invitation-only event.  My accommodation and meals (but not my travel) were paid for by Hitachi; however, I was not compensated for my time.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.