
Developing a Tiering Strategy


Implementing a storage tiering strategy is a big thing these days. Everyone should do it. If you don’t then you’re not a “proper” storage administrator. Being serious and moving away from the hype for a second, there is a lot of sense in implementing tiering. It comes down to one thing: cost. If disk and tape storage were free, we’d place all our data on the fastest media. Unfortunately storage isn’t free, so matching the value of data to an appropriate tier of storage is an effective way of saving money.

Choosing the Metrics

In order to create tiers it’s necessary to set the metrics that define different tiers of storage. There are many to choose from:

  • Response time
  • Throughput
  • Availability (e.g. five nines, 99.999%)
  • Disk Geometry (73/146/300/500GB)
  • Disk interconnection (SATA/FC/SCSI)
  • Usage profile (Serial/Random)
  • Access Profile (24×7, infrequent)
  • Data value
  • Array Type (modular/enterprise)
  • Protection (RAID levels)

There are easily more, but these give you a flavour of what could be selected. In reality, to determine which metrics to use, you need to look at what would act as a differentiator in your environment. For example, would it really be necessary to use 15K drives rather than 10K? Is availability important enough that RAID6 should be considered over RAID5? Is there data in the organisation that would sit happily on SATA drives rather than Fibre Channel? Choosing the metrics is a difficult call to make, as it relies on knowing your environment in some detail.

There are also a number of other options to consider. Tiers may be used to differentiate functionality; for example, a tier could specify whether remote replication or point-in-time copies are permitted.
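To make this concrete, a tier catalogue can be sketched as a simple data structure combining performance, protection and functional attributes. This is only an illustration; the tier names, attribute values and the `eligible_tiers` helper below are all hypothetical, not a recommendation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageTier:
    """One entry in a hypothetical tier catalogue."""
    name: str
    drive_type: str          # e.g. "FC", "SATA"
    raid_level: str          # e.g. "RAID5", "RAID6"
    availability_pct: float  # availability target, e.g. 99.999
    replication: bool        # is remote replication permitted?

# Example catalogue -- values are illustrative only.
TIERS = [
    StorageTier("Tier 1", "FC",   "RAID1", 99.999, True),
    StorageTier("Tier 2", "FC",   "RAID5", 99.99,  False),
    StorageTier("Tier 3", "SATA", "RAID6", 99.9,   False),
]

def eligible_tiers(needs_replication: bool, min_availability: float):
    """Return the tiers that satisfy a data owner's requirements."""
    return [t for t in TIERS
            if t.availability_pct >= min_availability
            and (t.replication or not needs_replication)]
```

The useful property of writing tiers down like this is that placement becomes a lookup against stated requirements, rather than a negotiation each time.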

Is It Worth It?

Once you’ve outlined the tiers to implement, you have to ask a simple question: will people actually use the storage tiers you’ve chosen? Tiering only works if you can retain a high usage percentage of the storage you deploy; it’s no use deploying 20TB of one tier and using only 10% of it. This is a key factor. Each tier has a minimum footprint and capacity that must be purchased, and unless you can guarantee that storage will be used, any saving from tiering may be negated by unused resources. Narrow your tiering choices down to those you think are actually practical to implement.
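The arithmetic behind this is worth spelling out. The sketch below uses entirely hypothetical prices ($3,000/TB for a premium tier, $1,000/TB for a lower tier) to show how idle capacity inflates the effective cost per terabyte actually used:

```python
def effective_cost_per_tb(purchase_cost: float, capacity_tb: float,
                          utilisation: float) -> float:
    """Cost per TB actually used; idle capacity inflates it."""
    return purchase_cost / (capacity_tb * utilisation)

# Hypothetical figures: a 20TB lower tier at $1,000/TB versus
# leaving the data on a premium tier at $3,000/TB.
premium  = effective_cost_per_tb(3000 * 20, 20, 0.85)  # well-used premium tier
low_full = effective_cost_per_tb(1000 * 20, 20, 0.85)  # well-used lower tier
low_idle = effective_cost_per_tb(1000 * 20, 20, 0.10)  # only 10% used

# At 10% utilisation the "cheap" tier costs $10,000 per used TB --
# more than the premium tier it was meant to undercut.
```

The exact figures don’t matter; the shape of the curve does. Below some utilisation threshold the cheaper tier is more expensive per used terabyte than the tier it replaces.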

Making the Move

So, the tiers are set, storage has been evaluated and migration targets have been identified. How do you make it worthwhile for your customers to migrate? Again, it comes back to cost. Tiers of storage will attract different costs for the customer, and calculating and identifying those cost savings provides the justification for investing in the migration. In addition, tiers can be introduced as part of a standard technology refresh, a process that happens regularly anyway.

Gotcha!

There are always going to be pitfalls with implementing tiering:

  1. Don’t get left with unusable resources. It may be appealing to identify lots of storage that can be pushed to a lower tier. However, if the existing tier of storage is not end-of-life, and you have no customers for it, you could end up with a lot of unused high-tier storage, which reflects badly on your efficiency targets. Make sure new storage brought in for tiering doesn’t hurt your overall storage usage efficiency.
  2. Avoid implementing technology-specific tiers that may change over time. One example: it is popular to tier by drive size, on the assumption that higher-capacity drives offer lower performance and are therefore matched to a lower tier. But what happens when the predominant drive type changes, or you buy a new array in which the larger drives perform just as well as the smaller drives in an older array? How should those tiers then be classified?
  3. Be careful when choosing absolute parameters for tiers. For example, it is tempting to quote response time figures for each tier. However, no subsystem can guarantee consistent response times. It may be more appropriate to set confidence limits, such as committing to meet a response time target for a stated percentage of I/O requests rather than for every request.
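The confidence-limit idea in point 3 is easy to check against measured data. A minimal sketch (the sample values and the 10ms/95% target are made up for illustration):

```python
def meets_target(samples, target_ms, pct=95.0):
    """True if at least pct% of response-time samples (ms)
    are at or below target_ms."""
    within = sum(1 for s in samples if s <= target_ms)
    return within / len(samples) * 100 >= pct

# Hypothetical measurements: mostly fast I/O with a few outliers.
samples = [4, 5, 5, 6, 6, 7, 8, 9, 30, 120]

meets_target(samples, 10)          # 8 of 10 within 10ms: 80%, target missed
meets_target(samples, 10, pct=80)  # the same data passes an 80% target
```

Note how the two outliers sink an absolute guarantee entirely, while a percentile-based target remains meaningful and measurable.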

Iterative Process

Developing a tiering strategy is an iterative process that will be constantly refined over time. There’s no doubt that, implemented correctly, it will save money. Just don’t implement it and forget about it.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Aaron

    Great article and a topic that is top of my mind right now. In my case, I have selected the technology tiers and am now working out how to “advertise” them effectively to our internal business units so they will want to use them.

    We created our tiers by collecting and classifying the many thousands of business applications running across the environment. It was fortuitous that the information we needed for this analysis had actually been collected as part of a Disaster Recovery remediation program that swept through before this round of storage consolidation. We started the analysis with the hope of collecting our applications into a maximum of 6 groups with common storage requirements. As it turned out, the groups came to more like 4 or 5. We then created 4 “classes” of storage with attributes matching these application requirement groupings.

    Now, the key is to get the business to put all new data in the right class and reach the “tipping point” where they beg us to move existing data to the right tier. The former is simpler than the latter, as it is about education, communication and price. The migration screams $$$’s in order to mitigate the risk of business disruption. So, I start with the former and aim for the “tipping point” where the risk and cost to migrate don’t appear so big to any of us.

    The Service Catalog will be the primary forum; the question is what information is included and how it is represented. A table displaying the classes with ranges of targets for attributes like availability, performance etc. is an obvious start. The real winner in my mind is to align examples of “headline” applications with the storage classes. These “headliners” are the applications everyone knows, like Exchange and desktop file sharing, but also the highly publicised internal applications that exist in every business, like “Critical_External_Customer_Application_X”.

    Would be great to hear your thoughts on this approach and if you have any recommendations on how to make it a success.
