Hitachi Attacks Migration Costs with Non-Disruptive Migration Feature

It’s probably fair to say that data migration is one of the most tedious tasks in storage management.  Moving data from one array to another, simply to decommission a piece of hardware or for load and capacity balancing, doesn’t ever inspire me. But it’s a necessary evil and one I’ve done many times.  Unless the existing storage configuration has been well planned, migrations can take months to accomplish, at high cost, in order to avoid outages and maintain data integrity.

So during last year’s Hitachi Influencers event in San Jose, I was extremely interested to see that the migration issue might finally have been put to bed by one of the vendors.  This week we saw the release of last year’s demo: Hitachi’s non-disruptive migration service.

At its heart the concept isn’t that complicated: by enabling a new target storage array to act as if it is another set of data paths through the SAN to the original array, a LUN can be presented from the target while its data is copied across from the source in the background.  Once the copy is complete, drop the first set of paths that connect to the LUN and in effect the LUN “migrates” to the new target array in a completely transparent fashion.
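
To make the sequence concrete, here is a rough Python sketch of the three steps just described.  This is purely illustrative: the class and function names are mine, not Hitachi’s API, and the real work is of course done by the arrays and the SAN fabric.

```python
# Illustrative sketch only -- names are invented, not Hitachi's actual API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lun:
    wwn: str                                          # WWN the host addresses
    paths: List[str] = field(default_factory=list)    # active SAN paths

def mirror_in_background(source: str, target: str, wwn: str) -> None:
    # Stand-in for the background copy the arrays perform.
    print(f"copying {wwn}: {source} -> {target}")

def non_disruptive_migrate(lun: Lun, source: str, target: str) -> None:
    # 1. The target array presents the LUN under the same WWN,
    #    so the host simply sees an extra set of paths.
    lun.paths.append(f"{target}/{lun.wwn}")
    # 2. Data is mirrored from source to target while host I/O continues.
    mirror_in_background(source, target, lun.wwn)
    # 3. Once in sync, the original paths are dropped; the LUN has "moved"
    #    without the host ever losing access.
    lun.paths = [p for p in lun.paths if not p.startswith(source)]

lun = Lun(wwn="50:06:0e:80:12:34:56:78",
          paths=["source-array/50:06:0e:80:12:34:56:78"])
non_disruptive_migrate(lun, "source-array", "target-array")
print(lun.paths)   # only the target-array path remains
```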

This migration process works because Hitachi are able to both virtualise a LUN through another storage array (technology that has existed for quite some time) and move the World Wide Name (WWN) of the source array across and present it out from the target.  In essence, the target array “spoofs” the host into believing that the WWN of the old array still exists on the network.

What’s interesting here is that the migration process makes use of standard Fibre Channel protocols, so it could be used to move data from any array, not just Hitachi products.  There’s also no disruption to the existing configuration, as the virtualisation of the source array can be achieved behind the scenes.  There are of course some restrictions or issues to consider; migrating LUNs that are array-replicated needs to be thought through, and of course the target array continues to present the source array’s WWN rather than its own, which could be confusing during and after migration work.

However, the power of this technology is the ability to avoid cost.  Hitachi’s storage economist, Dave Merrill, estimates that data costs around $7K-$15K per TB to migrate between arrays, which can be far more than the new storage costs to acquire in the first place.  Considering that many customers will put a new vendor on the hook to cover migration, this service puts Hitachi in a much stronger position when tendering for swap-out business.
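
To put that in perspective, here is a quick back-of-the-envelope calculation.  The per-TB range is Merrill’s estimate as quoted above; the 200 TB estate size is my own invented example.

```python
# Back-of-the-envelope migration cost using the $7K-$15K/TB estimate.
# The 200 TB estate size is an invented example for illustration.
capacity_tb = 200
low_per_tb, high_per_tb = 7_000, 15_000   # USD per TB migrated
print(f"{capacity_tb} TB estate: "
      f"${capacity_tb * low_per_tb:,} - ${capacity_tb * high_per_tb:,} to migrate")
# -> 200 TB estate: $1,400,000 - $3,000,000 to migrate
```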

As with everything, what’s usually more interesting is not what can be achieved today, but what can be done tomorrow.  The ability for a Hitachi array to present virtual WWN port names means that an entire physical array could be split into multiple virtual arrays, in a similar way to the MultiStore feature offered by NetApp.  Now, instead of having WWNs that map to a physical device, a virtual array could be created.  This virtual array could be managed with its own QoS, or migrated or shared between hardware platforms, without the user needing any knowledge of where the data is actually sitting.  Imagine using a single physical array to create secure multi-tenant virtual arrays, each managed with an individual QoS (something that MultiStore can’t do today).
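
As a thought experiment, the sort of model this implies might look something like the sketch below.  Everything here is speculative: the structure and field names are mine and this does not describe any shipping Hitachi feature.

```python
# Speculative sketch of a multi-tenant virtual array model.
# Field names are invented; this is not a description of a shipping product.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VirtualArray:
    tenant: str
    virtual_wwns: List[str]      # WWNs presented to this tenant only
    qos_iops_limit: int          # per-tenant performance ceiling
    qos_mbps_limit: int

@dataclass
class PhysicalArray:
    serial: str
    virtual_arrays: Dict[str, VirtualArray] = field(default_factory=dict)

    def carve(self, va: VirtualArray) -> None:
        # Each tenant gets its own virtual array, independent of the hardware
        # it currently sits on, so it can be migrated or shared transparently.
        self.virtual_arrays[va.tenant] = va

frame = PhysicalArray(serial="VSP-0001")
frame.carve(VirtualArray("tenant-a", ["50:06:0e:80:aa:00:00:01"], 50_000, 2_000))
frame.carve(VirtualArray("tenant-b", ["50:06:0e:80:bb:00:00:01"], 20_000, 800))
```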

Although I can’t claim to have a crystal ball, I did predict this kind of feature on the release of the VSP in 2010 (see my previous article where I discuss this possibility).  I have no idea whether Hitachi will deliver this feature, but I hope they do.

My only disappointment with Non-Disruptive Migration is that it may have come too late for many organisations.  As virtualisation becomes more prevalent, migrations will be achieved using (for example) Storage vMotion in the hypervisor, negating the need to care whether the array can perform the migration on the host’s behalf.  However, server virtualisation isn’t everywhere and the option of multi-tenancy is still a powerful one, even with the ability to virtualise the server.


Disclaimer: Last year I attended Hitachi’s Influencer Forum in San Jose.  Hitachi paid for my travel and accommodation as well as most meals.  Most of the content of this event was NDA only and so hasn’t been discussed until now.  There is no requirement on me to blog about any of the content presented during the event.  I am not otherwise employed by Hitachi, or compensated for my time.

Comments

  • storagebod (http://www.storagebod.com)

    Chris, migration is certainly one of those painful realities of the ever-growing storage estates that we manage, but I am beginning to wonder whether doing it at the array level or in this manner is the best way to go. I wonder if migration would not be better driven from the host and applications? Applications could see pools of storage and then drive their own migrations: drain pool A to pool B, etc…

    The other issue is that this only addresses block storage; NAS storage is even more painful to migrate and I’ve yet to see a good solution to this. NAS and the unstructured data stored on it will probably outstrip block growth; if we then throw cloud-based storage into the mix, migration gets even harder and more problematic.

    So, having storage-aware applications seems to me a better way to go long-term.

    • Chris M Evans

      Martin

      I agree that host-based migrations are a good alternative. However, I think circumstances conspire against them. First, storage migrations are usually done to replace ageing hardware or cope with expansion. It isn’t typically to the benefit of the application owner to make the migration, so getting access to the server, qualified staff and so on ends up as a political nightmare. This is usually compounded by change control issues, never mind having to explain the performance hit on the server as data is migrated. Having the ability to bypass all of that addresses the main issue – cost. If organisations were more focused on delivering storage as a service, they could perhaps make new storage cheaper than old and incentivise migrations, but I have rarely seen that.

      As for file, you are right; that is a total POS. I think part of the issue can be blamed on the protocols, especially CIFS/SMB, which just isn’t helpful. I guess pNFS should help with that, but it always seems to me that pNFS is the elusive pot of gold at the end of the rainbow. We talk about it but never reach it. If you want to make your millions, how about fixing the NAS migration issue!

      Storage aware applications – agreed. That was my reference to VMware and the hypervisor, where a lot of these issues are resolved.

      Chris

