Cost Reduction Is Still On The Agenda for Enterprise Storage


Dave Merrill has another great article on his “Storage Economist” blog, this time covering predictions for 2014.  It's an interesting one, discussing the price erosion versus growth curve we have seen over the last few years; he predicts CAPEX spend will actually increase as we move into 2014.

Managing growth continues to be the number one issue for IT and storage managers (see slide 7 of this SlideShare presentation from EMC) and has been for many years.  However, changing technology is causing problems for us in some areas.

Enterprise Storage

Efficiency in enterprise storage is generally good.  Optimised storage deployments can be looked at from two directions.  First, there's the additional capacity needed to cater for resiliency: implementing RAID typically consumes anywhere from 12-50% extra depending on other factors, but around 20% on average.  Then there's the counterbalance of savings from compression, de-duplication and thin provisioning (although the latter doesn't technically reduce storage volumes), moving the efficiency pendulum back the other way.  In general though, enterprise storage tends to be efficient, if done correctly.  For reference, check out my storage waterfall diagram (posts in the related links below).
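The 12-50% range above is just the arithmetic of parity and mirroring.  A rough sketch (the drive-group layouts below are illustrative examples, not a recommendation):

```python
# Rough illustration of RAID protection overhead as a fraction of raw
# capacity; this is standard parity arithmetic, not vendor-specific.

def raid_overhead(data_drives: int, parity_drives: int) -> float:
    """Fraction of raw capacity consumed by protection data."""
    return parity_drives / (data_drives + parity_drives)

layouts = {
    "RAID-1 (mirror)": raid_overhead(1, 1),   # 50% - the top of the range
    "RAID-6 (6+2)":    raid_overhead(6, 2),   # 25%
    "RAID-5 (4+1)":    raid_overhead(4, 1),   # 20% - the typical average
    "RAID-5 (7+1)":    raid_overhead(7, 1),   # 12.5% - the bottom of the range
}

for name, frac in layouts.items():
    print(f"{name}: {frac:.1%} of raw capacity lost to protection")
```

Wider stripes reduce the overhead but increase rebuild exposure, which is one of the "other factors" that pushes real-world figures around within that range.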

Emerging Storage Technologies

However, if we look at emerging storage technologies, we see a return to protecting data using replicas.  Hadoop keeps three copies of data by default.  Technologies like Ceph and GlusterFS use replication for data protection (although, to be fair to Gluster, you could keep a single copy of the data on a SAN-connected array).  Converged solutions like VMware's VSAN use the same process, copying data between nodes for resiliency.  Those developing these platforms will likely respond that their solutions can use cheap storage, and compared to buying enterprise arrays that is true, but it only holds for deployments at scale.  However, if (as Dave predicts) prices start to flatten and the new range of high-capacity drives (6TB and above) aren't suitable for everyday workloads, then we have a problem.
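The gap between replica-based protection and parity-based protection is easy to put numbers on.  A back-of-envelope comparison, assuming 100TB of raw capacity (the figure is arbitrary) and a RAID-6 6+2 layout as the enterprise baseline:

```python
# Usable capacity from the same raw pool under three-way replication
# (the Hadoop default) versus RAID-6 (6+2).  Raw figure is arbitrary.

raw_tb = 100

replicas = 3
usable_replicated = raw_tb / replicas                         # ~33.3 TB

raid_data, raid_parity = 6, 2
usable_raid = raw_tb * raid_data / (raid_data + raid_parity)  # 75 TB

print(f"3x replication: {usable_replicated:.1f} TB usable from {raw_tb} TB raw")
print(f"RAID-6 (6+2):   {usable_raid:.1f} TB usable from {raw_tb} TB raw")
```

Put another way, the replica-based design needs more than twice the raw capacity for the same usable storage, which is why "cheap disk" only balances the equation at scale.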

Evolving Protection

Many people have claimed that RAID is dead.  Of course it isn't, any more than tape or the mainframe is; RAID still has many years of useful life ahead.  There are, however, new technologies for protection, such as erasure coding (also known as forward error correction), which creates data resiliency using only a fraction of additional capacity, much as RAID does.  To date, erasure coding has remained in the object storage space, presumably because of the overhead of calculating and redistributing data on write, which is an intensive process.  We also shouldn't forget that buying an expensive enterprise array isn't always required; there are plenty of "build your own" storage solutions based on software only, plus some very good midrange storage appliances.
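To see why erasure coding is attractive against replicas, consider a k+m scheme: an object is split into k data fragments plus m coded fragments, and survives the loss of any m fragments.  A quick sketch (the scheme parameters below are illustrative, not from any particular product):

```python
# Capacity overhead of k+m erasure coding versus replication.
# In a k+m scheme, any m fragment losses are survivable.

def ec_overhead(k: int, m: int) -> float:
    """Extra raw capacity required, as a fraction of the data size."""
    return m / k

schemes = {
    "EC 10+6 (tolerates 6 losses)": ec_overhead(10, 6),  # 60% extra
    "EC 9+3 (tolerates 3 losses)":  ec_overhead(9, 3),   # ~33% extra
    "3x replicas (tolerates 2)":    2.0,                 # 200% extra
}

for name, extra in schemes.items():
    print(f"{name}: {extra:.0%} additional capacity")
```

The trade, as noted above, is compute: every write means encoding and distributing fragments, which is why the technique has so far stayed in the object storage space where write latency matters less.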

The Architect’s View

Storage is cheap, and the $/GB trend continues to move towards zero, but other factors may disrupt this assumption.  The Thailand floods were one example; another may be the ability of drive manufacturers to keep increasing capacity while maintaining current drive performance levels.  In any case, the IOPS/GB figure continues to fall.  Flash helps solve the performance issue, but increases cost again.  New hyper-converged solutions are selling the benefits of removing the SAN; achieving this with simple data replicas and no space management functionality isn't a great solution.  Enterprises need to make sure the storage they have today is used as efficiently as possible, while looking carefully at the TCO of new solutions.  Disk drives may be cheap, but the servers, memory, power, cooling and racking needed to support them can add significantly to the cost of a solution.
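The falling IOPS/GB figure follows directly from drive mechanics: random IOPS per spindle is roughly fixed by rotational speed, while capacities keep climbing.  A sketch, assuming the common rule-of-thumb figure of around 75 random IOPS for a 7,200 RPM drive (an assumption, not a measurement):

```python
# Back-of-envelope IOPS/GB for 7,200 RPM drives: IOPS per spindle stays
# roughly constant while capacity grows, so IOPS/GB keeps falling.
# The 75 IOPS figure is a rule-of-thumb assumption.

spindle_iops = 75

for capacity_gb in (1000, 2000, 4000, 6000):
    print(f"{capacity_gb // 1000} TB drive: "
          f"{spindle_iops / capacity_gb:.4f} IOPS/GB")
```

A 6TB drive delivers a sixth of the IOPS/GB of a 1TB drive, which is exactly why the new high-capacity drives may not suit everyday workloads without a flash tier in front of them.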

Related Links

Comments are always welcome; please indicate if you work for a vendor as it’s only fair for others to judge context.  If you have any related links of interest, please feel free to add them as a comment for consideration.


About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.