
Symantec Disaster Recovery Study 2010


I was recently briefed on the latest Symantec Disaster Recovery Study (2010), the details of which can be found here.  Some 1,700 companies (of 5,000 employees or more) were interviewed about various aspects of their backup environments.  As usual with this kind of survey, there were some interesting results (I guess there have to be interesting results, otherwise the surveys wouldn’t be worth talking about).

  • 56% of data on virtual systems is regularly backed up. This seems like a small number but perhaps the term “regularly” is one to consider here.  If environments are cloned and used for test, then perhaps there’s no need to back these environments up as they’re deleted and re-seeded as required.  It would be interesting to know how this figure breaks down by production and non-production environments.
  • Only 20% of virtual environments are protected by replication or failover technologies. This is a remarkable figure and implies a number of things: array-based replication is still hard to get right with virtualisation; users don’t consider virtual environments “production enough” to replicate; and, probably most important, there is still a lot of work to be done getting replication right.  Of course VMware, with initiatives such as VAAI, is looking to fix this problem, and features such as Storage vMotion help, but we know there is a disconnect between LUN-based VMFS datastores and the granularity required to fail over individual virtual guests to remote locations.
  • 60% of virtual environments are not covered in DR plans. The issue here could be similar to the first point above; most of these environments might not be production and so are re-seeded as required.  However, as virtualisation becomes the norm rather than the exception, DR will become an increasingly important consideration.
  • 72% of organisations experience downtime from system upgrades and 70% experience downtime from power outages and failures. There’s always going to be a certain amount of “fat-finger” syndrome in system upgrades, but I think these two statistics indicate a lack of failure planning in infrastructure design.  Yes, hardware and software will fail; they always do.  The skill lies in designing for this and building an infrastructure that meets requirements, including resiliency.
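The granularity problem mentioned above can be made concrete with a minimal sketch. The VM and LUN names here are purely illustrative (not from the study), but the logic shows why LUN-level array replication is awkward for virtual environments: the unit of failover is the datastore’s backing LUN, so every guest sharing that LUN moves together.

```python
from collections import defaultdict

# Hypothetical inventory: each VM and the LUN backing its VMFS datastore.
vms = [
    ("web01",  "LUN-01"),
    ("web02",  "LUN-01"),
    ("db01",   "LUN-01"),
    ("test01", "LUN-02"),
]

# Array-based replication operates per LUN, not per guest, so the
# smallest thing you can fail over is everything on a given LUN.
failover_groups = defaultdict(list)
for vm, lun in vms:
    failover_groups[lun].append(vm)

for lun, group in sorted(failover_groups.items()):
    print(f"{lun}: failover moves {group} as one unit")
```

If only `db01` needs to run at the remote site, there is no way to express that at the array level without moving `web01` and `web02` too; hence the push for mechanisms that understand individual guests.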

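To put a number on the value of designing for failure, here is a back-of-envelope availability calculation. The 99% figure is an assumption for illustration only; the point is simply that redundancy (assuming independent failures) turns an unremarkable component into a far more resilient pair.

```python
# Illustrative only: availability of one component vs. a redundant pair,
# assuming the two components fail independently.
single = 0.99                        # assumed availability of one power feed
redundant = 1 - (1 - single) ** 2    # outage requires both feeds to fail

print(f"single feed:    {single:.4%} available")
print(f"redundant pair: {redundant:.4%} available")
```

That is the kind of arithmetic that belongs in the design phase, rather than discovering after a power event that there was only ever one path.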
I’ve attached the study in its entirety to this post.  I’d be interested in feedback on your experiences with the points raised, and whether you feel they are valid.

Symantec 2010 Disaster Recovery Study

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.