
Storage Migration Costs


I’ve not paid much attention to Incipient (their news page doesn’t provide an RSS feed, so there’s little chance of me seeing their press releases easily), but my attention was recently drawn to a release relating to their iADM and iNSP products (catchy names, those).

Now, if you want to know about their products, have a look at their website for yourself. My interest, rather, was sparked by a claim in their press release, quoted below:

The High Cost of Today’s Data Migration

Industry estimates and field data captured by Incipient indicate that SAN storage is growing at 40 – 60 percent annually and 25 percent of data under management is moved annually at an average cost of $5,000 per terabyte. Based on these estimates, a data center with one petabyte of storage under management today spends $1.25 million annually on data migration operations. Two years later, the data center is likely to grow to nearly two petabytes increasing the annual data migration cost to nearly $2.5 million.

Source: Incipient Press Release 11 June 2008

So the estimate is $5,000 per TB of data movement, with 25% of data being moved each year. I can understand the latter; it’s simple logic that if you have a 3-4 year lifecycle on technology then on average 25% of your estate will be refreshed each year (although that figure is slightly distorted by the fact that you’re also deploying an additional 40-60% each year). Now, how to get to a $5,000 per TB figure…
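Incipient’s headline numbers are easy to sanity-check. Here’s a quick sketch of the arithmetic (function names are mine, purely illustrative; it assumes the low end of their stated growth range, 40%, and 1 PB = 1,000 TB):

```python
def annual_migration_cost(capacity_tb, moved_fraction=0.25, cost_per_tb=5000):
    """Annual migration spend: fraction of estate moved each year times cost per TB."""
    return capacity_tb * moved_fraction * cost_per_tb

def projected_capacity(capacity_tb, growth_rate, years):
    """Capacity after compound annual growth."""
    return capacity_tb * (1 + growth_rate) ** years

today_tb = 1000  # 1 PB under management
print(f"${annual_migration_cost(today_tb):,.0f}")  # matches the quoted $1.25 million

# Two years of 40% growth gives "nearly two petabytes"...
future_tb = projected_capacity(today_tb, 0.40, years=2)  # ~1,960 TB
print(f"${annual_migration_cost(future_tb):,.0f}")  # ...and nearly $2.5 million
```

So the press release’s figures are internally consistent; the real question is where that $5,000/TB unit cost comes from.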

Excluding new storage acquisition, network bandwidth and so on, I’d assume that the majority of migration costs will be people time, covering both the planning and the execution of migrations. In environments of 1PB or more, I could (almost) bet my house on the fact that there will be a significant amount of the storage infrastructure which is (a) not understood, (b) badly deployed and (c) backlevel, amongst many other issues. $5,000/TB would therefore seem quite reasonable, based on the amount of work needed to refresh. The problem, though, is that the majority of that manpower cannot be eliminated by software alone: it covers documenting the environment, bringing server O/S, firmware and drivers up to date, negotiating with customers for data migrations, migration schedule planning, clearing up wastage, deploying new server hardware and so on.

It would be an interesting exercise to determine what percentage of the $5,000/TB cost is actually attributable to data movement work (i.e. having someone sitting at a screen issuing data replication commands). I suspect it is quite low. From experience, I’ve been able to move large volumes of data in quite short timespans. In fact, assuming sensible preparation and planning, most of the time spent doing migrations is sitting around (previous employers, disregard this statement).

So how much money would Incipient save? My bet is not much.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Han_Solo

    I think they are getting some/most of that $5000/TB because that is simply what some large vendors with 3-letters charge for the service of data migration.

    Funny how they can charge that sort of money for migrating from one of their arrays to one of their new arrays.

    You think they would throw that sort of thing in for free, but there are lots of companies that pay those high rates for 3-letter people to come in and use SRDF to automatically migrate their data for them.

    Personally, I say put SVC in front of your storage and you will never have this issue or many others associated with using ancient big array technology.

  • Chris M Evans

    H,

    I think in one respect you are right; we need to move to a scenario where an intermediate device sits and polices the traffic on the network, redirecting to new arrays as required. What we really need, though, is an easy way to visualise this entire concept – and that certainly doesn’t exist.

  • Pingback: Enterprise Computing: What Next For Virtualisation? « The Storage Architect

  • Pingback: What Next For Virtualisation? – Gestalt IT
