
Guest Post: Closing the Server/Storage Virtualisation Gap


This is a guest post from Anand Babu (AB) Periasamy, CTO and co-founder of Gluster Inc.

In the virtualization footrace, the server has clearly been playing the hare, with storage cast as the tortoise bringing up the rear. Server virtualization got a lengthy head start and has continued to outpace storage ever since, leaving some to question whether the tortoise will ever catch up.

Waiting for them at the finish line are a truly virtualized data center and a fully enabled cloud, two things that go hand in hand. They may be waiting for quite a while if the industry does not intelligently approach and successfully tackle the problem of virtualizing the storage layer.

At the Starting Blocks

Initially, the purpose of server virtualization was simply to increase hardware utilization. As the technology matured, that ceased to be the main driver. Now, the real power of virtualization lies in a dynamic computing environment, one in which you can react more quickly, be more agile, and recover from disasters rapidly and reliably.

As the server layer became abstracted, thanks to Service Oriented Architectures and virtual machines, computing came to be treated as a software problem, and many new business models emerged as a result. The same potential awaits us on the storage side, if the industry addresses it as a software problem, just as it did with servers.

In a lot of ways, storage is the Achilles' heel of full data center virtualization: it has been strained by virtual server environments and by the explosive growth of unstructured data. To gain greater efficiency and agility, we need new storage architectures and better virtualization technology for storage. We have to close the virtualization gap: storage needs to catch up with compute, and the way to do that is to adopt a new software-based approach that scales the architecture out.

Picking up the Pace

So what does virtualization mean for storage? Quite a bit, actually.

It means the data itself cannot be tied to hardware, and storage needs to behave like virtual servers, which can start, stop, and move as necessary. The storage system also has to support multi-tenancy and easy sharing of data, which is particularly important in cloud environments. As with server virtualization, open source technology will be key to innovation and cost control.
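To make that concrete, here is a purely conceptual Python sketch of what "storage that behaves like a virtual server" might look like: a volume modelled in software, decoupled from its host, with per-tenant namespaces. The Volume class, its methods, and the host names are invented for illustration and do not correspond to any real product API.

```python
# Conceptual sketch only: a storage volume as a software object that can
# start, stop, and move between hosts, with per-tenant isolation.
# All names here are hypothetical, not a real storage API.
from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str
    host: str                                    # current location, not identity
    running: bool = False
    tenants: dict = field(default_factory=dict)  # tenant -> isolated namespace

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False

    def migrate(self, new_host: str) -> None:
        # Clients address the volume by name, so a move between hosts is
        # transparent to them, much like live-migrating a virtual machine.
        self.host = new_host

    def write(self, tenant: str, path: str, data: bytes) -> None:
        # Each tenant sees only its own namespace within the shared volume.
        self.tenants.setdefault(tenant, {})[path] = data

vol = Volume("shared-vol", host="server1")
vol.start()
vol.write("tenant-a", "/reports/q1.csv", b"...")
vol.migrate("server2")   # the data follows demand, not the hardware
```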

In addition, storage must be able to scale on demand, in both capacity and performance, to keep pace with general data growth. And because everything is becoming web-scale, automation is also a crucial aspect of storage virtualization.

What's interesting is that the lines between storage, server, and cloud administrators are blurring. In the server world, cloud administrators already operate at very large scale. As we've seen there, storage needs to adopt a scale-out model, because costs can never be controlled if we rely on scale-up silos, and storage, like the server layer, should leverage commodity hardware.
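As a minimal sketch of what scale-out placement looks like when storage is treated as software, here is a toy consistent-hashing example in Python. It is a generic illustration, not Gluster's actual elastic hashing algorithm, and the server names and vnode count are made up: files hash onto a ring of commodity nodes, and adding a node grows capacity while relocating only a small fraction of the data.

```python
# Toy scale-out placement: files hash onto a ring of commodity storage
# nodes. Adding a node adds capacity without reshuffling most data.
# Generic consistent hashing for illustration, not a product's algorithm.
import bisect
import hashlib

class ScaleOutPlacement:
    def __init__(self, vnodes: int = 64):
        self.vnodes = vnodes   # virtual points per node, for even balance
        self.ring = []         # sorted list of (hash, node) points

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}:{i}"), node))

    def locate(self, filename: str) -> str:
        # Walk clockwise to the first ring point at or after the file's hash.
        h = self._hash(filename)
        idx = bisect.bisect(self.ring, (h, "")) % len(self.ring)
        return self.ring[idx][1]

cluster = ScaleOutPlacement()
for n in ("server1", "server2"):
    cluster.add_node(n)
files = ("vm1.img", "vm2.img", "logs.tar")
before = {f: cluster.locate(f) for f in files}
cluster.add_node("server3")   # scale out: just add commodity hardware
after = {f: cluster.locate(f) for f in files}
moved = [f for f in files if before[f] != after[f]]
print(f"files relocated after adding server3: {moved}")
```

The design point of the sketch is the economics: capacity and aggregate performance grow by adding ordinary servers, rather than by forklift-upgrading a monolithic scale-up array.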

Entering the Final Lap

The bottom line is that storage should be more scalable, be easier to share, provide a file storage platform for storing virtual machine disk images as ordinary files, cost less, be fully functional, and be easy to use.

At the end of the day, storage is critical to the true, complete virtualization of the data center and to the full realization of the cloud's power. Treating storage as a software problem will accelerate the data center's virtualization and enable competitive advantages. It's time to give the tortoise a turbo boost so it can cross the finish line alongside the hare, because in that scenario we all win.


About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.