
Enterprise Computing – Death of Tiering?


I’m not the first to post on the subject of Netapp President and CEO Tom Georgens’ comments, made during their latest earnings call, on the apparent death of tiering as we know it today.

In Netapp’s view, there will be no tiering of storage in the future.  Instead we will be using SATA drives for our data, fronted by cache cards.  Here are Tom’s words, taken from the call transcript:

Second of all, frankly I think the concept of tiering is dying. And I probably don’t want to go into a long speech on that, but at the end of the day, the simple fact of the matter is tiering is a way to manage migration of data between fiber-channel based systems and serial ATA-based systems. With the advent of Flash, and we talked about our performance acceleration module, basically these systems are going to large amounts of Flash, which are going to be dynamic with serial ATA behind them, and the whole concept of HSM and tiered storage is going to go away.

So what are Netapp thinking?  I thought the PAM module was a read cache for accelerating random I/O read-intensive environments?  How would that help scenarios where there’s heavy write activity?  This is where FC and SSDs are most suited and Netapp are saying they won’t be needed?

Perhaps Tom’s comments preview a change to the Netapp architecture in which the PAM cards are used to improve write caching.  That’s how Sun’s 7000 Unified Storage Systems work: they use SSDs for caching writes and serve data from SATA.  Maybe SSDs are too fast for Netapp’s architecture and this is why they need to be implemented via the back door.

Tiering today enables customers to match workload to the price of the underlying storage hardware.  EMC and Compellent are best known for introducing technologies that move data at a granular level, making best use of SSD.  No doubt the other major vendors will follow suit as they release the next generation of their products.  Storage arrays that can move small blocks of data onto the most appropriate tier of storage will deliver the next wave of efficiency.  After that we should expect the array to manage the underlying hardware automatically, while we specify policies that dictate the service we expect.  Tiering isn’t going away any time soon.
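
To make that concrete, a sub-LUN tiering pass boils down to something like the sketch below.  This is purely illustrative – the tier names, thresholds and relocation call are invented for the example, and are not any particular vendor’s implementation.

    # Illustrative sketch of a policy-driven, sub-LUN tiering pass (Python).
    # Tier names, thresholds and the relocation hook are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class BlockStats:
        block_id: int
        io_per_hour: float          # combined read/write activity for this block

    # Policy: hottest blocks on SSD, warm blocks on FC, everything else on SATA.
    TIERS = [("ssd", 500.0), ("fc_15k", 50.0), ("sata", 0.0)]

    def target_tier(stats: BlockStats) -> str:
        """Return the cheapest tier whose activity threshold this block exceeds."""
        for tier, threshold in TIERS:
            if stats.io_per_hour > threshold:
                return tier
        return "sata"

    def rebalance(blocks, placement, move_block):
        """Relocate only the blocks whose ideal tier has changed since the last pass."""
        for stats in blocks:
            tier = target_tier(stats)
            if placement.get(stats.block_id) != tier:
                move_block(stats.block_id, tier)   # array-specific relocation hook
                placement[stats.block_id] = tier

The interesting part isn’t the decision logic, which is trivial, but how cheaply the array can track per-block statistics and move the data without disrupting host I/O.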

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://milek.blogspot.com Robert Milkowski

    Actually in Sun’s 7000 series you can use SSDs both for caching writes and for caching reads. They sell different kinds of SSDs for each task.
    Read caching on SSD makes sense as well, as it gives much more random read IOPS than SATA drives – so ideally, if you could store your data on SATA drives and end up having the entire working set cached on SSDs, you would effectively serve all requests from SSDs.

  • http://www.brookend.com Chris Evans

    Robert

    Good point, I forgot different models offer different features.

    Chris

  • http://storagebod.typepad.com Martin G

    Chris,
    I’ve been pondering the antipathy of NetApp to tiering and I’ve come to the conclusion that doing automated tiering the way EMC, Compellent and just about everybody else will do it is actually impossibly hard for NetApp.

    The aggregate abstraction and WAFL itself make it very hard for NetApp to implement a LUN or file system which includes different types of storage media. Automated storage tiering would be so complex for them that they would need to completely re-engineer their architecture.

    OnTap 8 with its much larger aggregates might make it easier, but automated tiering would only be possible within an aggregate. So despite my pronouncement on my blog that it’s all a matter of semantics, it isn’t…NetApp can only cache with their current architecture; they cannot tier!

    I was going to blog this…but I’ve upset NetApp enough on my own blog for this month!!

  • http://ewan.to Ewan

    To me, a cache is a relatively small but very fast storage area which is used for temporary workloads.

    However, I know a good number of people wouldn’t agree with this simple definition of a cache vs a fast tier so I’d go with something like this:

    * If I can move data into the fast storage in advance of it being read from the slower disk, it’s a tier.
    * If data permanently resides on the fast storage, with a copy on slower disk only used as a backup in case of hardware failure, it’s a tier.
    * If the data remains in the fast storage area, even when that area is full, rather than being evicted, because some kind of classification rules keep it there, it’s a tier.

    However, I agree that manual “tiering” has a limited life-span; I certainly hope it goes away soon, to be replaced with policy-based decisions made by the array management software.

    Having a “Tier1 (Flash) -> Tier2 (15K RPM SAS/FC) -> Tier3 (7.2K RPM SATA)” model doesn’t work as well in the new structures of IT delivery as a model of “High, Medium and Low Priority” and “High, Medium and Low Reliability” which can be applied to data belonging to specific applications, and which can be changed dynamically.

    Simplistic examples could be:

    Production Oracle Database – High Priority, High Reliability
    Images for Sharepoint Server – Low Priority, Low Reliability

    But slightly more complicated policies like this one should be equally easy to use:

    Oracle Database – High Priority during working hours (9-5 Mon-Fri), Med Priority otherwise, High Reliability

    Once we’ve got these kinds of policy-based management tools, the method that the array uses to achieve them becomes fairly irrelevant to anyone; the only thing left to work on would be the target SLAs that you’d want the array to achieve, something like:

    High Priority = 0.01ms Response time
    Medium Priority = 0.5ms Response time
    Low Priority = 5ms Response time
    High Reliability = 99.99% Data Availability
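
    As a purely illustrative sketch (the application names, class keys and the medium/low reliability figures below are invented for the example), such policies could be declared as simply as:

        # Invented example of the policy definitions an array manager might accept.
        PRIORITY_SLA = {           # response-time targets, per Priority class (ms)
            "high":   0.01,
            "medium": 0.5,
            "low":    5.0,
        }
        RELIABILITY_SLA = {        # data-availability targets, per Reliability class (%)
            "high":   99.99,
            "medium": 99.9,        # medium/low figures invented for the example
            "low":    99.0,
        }

        POLICIES = {
            "oracle_prod_db": {
                "priority":    {"mon-fri 09:00-17:00": "high", "otherwise": "medium"},
                "reliability": "high",
            },
            "sharepoint_images": {
                "priority":    {"always": "low"},
                "reliability": "low",
            },
        }

    The array would then be free to meet those targets however it likes – cache, tier, or something else entirely.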

    This probably isn’t going to happen very quickly, but I hope it does.

  • http://blogs.hds.com/michael Michael Hay

    I tend to agree that NetApp is having a hard time with this concept. They only got what amounts to LUN-level migration last year, and I know some folks who have left NetApp within the last year because they see an architectural dead end unless something happens. Here is one point that is pretty interesting: it has taken nearly 8 or so years to get some portion of the Spinnaker IP included in OnTap. I think that this is a good example of innovation at the company coming to a grinding halt. So, as Martin said, tiering may be something that is nearly impossible for NetApp to contemplate, so they’ve gone all ostrich on us and stuck their head in the sand instead.

  • http://www.brookend.com Chris Evans

    Martin

    I agree with your prognosis – the architecture of Data ONTAP precludes easily implementing a granular tiering model. It brings me back to my original point in this post: Netapp need a wider product portfolio rather than continuing to extend their existing 18-year-old architecture.

    Chris

  • http://www.brookend.com Chris Evans

    Ewan

    I agree with the way you are defining cache – it is essentially a temporary work area for speeding up I/O in and out of more permanent storage. A tier is a permanent storage location, which has particular performance characteristics.

    I think the specifics of what determines a tier aren’t particularly relevant. What’s more important is whether a tier of storage can deliver the required service level. Over time, faster components have improved the perceived performance of (for example) 10K drives. The improvement has been in the array hardware surrounding the disk itself.

    Having the granularity to add fast/medium/slow hardware (whether SSD or HDD) into the mix of a shared storage environment will let users choose the right mix of hardware for their data profile, which will of course be unique and constantly changing.

    I look forward to the day we don’t have to care about the performance from the storage array and we can concentrate on more interesting things!

    Chris

  • http://www.recoverymonkey.org Dimitris Krekoukias

    “Maybe SSDs are too fast for Netapp’s architecture and this is why they need to be implemented via the back door” – that’s not the reason. The architecture isn’t at all inherently slow (a local customer is doing 4GB/s on their VM farm, the box isn’t even maxed out – yes, a lot is cached and deduped).

    The only tiering scheme I respect is Compellent’s – they REALLY are granular.

    EMC just moves the entire LUN (with more granularity to come sometime in the future, but last I checked customers aren’t in the business of buying futures). The current incarnation of EMC FAST for Symm and CX (not the same between the two, mind you) is really not that intelligent or impressive IMO. The CX one is especially boring: have Navi Analyzer send perf data to a server that then figures out whether the LUN should be moved and, in turn, sends NaviCLI commands to actually do the LUN move.
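
    In pseudo-code, that whole-LUN loop amounts to roughly the following – a generic sketch of the workflow, with invented function names rather than real Navisphere/NaviCLI calls:

        # Generic sketch of a whole-LUN tiering pass; names are invented, not NaviCLI.
        def lun_tiering_pass(luns, collect_perf_stats, migrate_lun):
            for lun in luns:
                stats = collect_perf_stats(lun)            # e.g. exported analyzer data
                if stats["avg_response_ms"] > 20 and lun["tier"] == "sata":
                    migrate_lun(lun["id"], "fc_15k")       # moves the *entire* LUN
                elif stats["avg_response_ms"] < 5 and lun["tier"] == "fc_15k":
                    migrate_lun(lun["id"], "sata")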

    Not that dynamic and doesn’t work with virtual provisioning AFAIK, to which I say – what’s the point?

    File-level tiering via Rainfinity will work with any NAS, and that product isn’t without its own issues.

    So, my question to the gentle audience is:

    What use case are you trying to deal with? I’m not for a moment saying tiering is dead etc etc, I’m just trying to find out what the perception of use cases is.

    It’s just that lately everyone has been frothing at the mouth about tiering; it was never a big deal before. Maybe I’m an ostrich.

    D

  • SRJ

    Again with the illogical EMC love… I agree, Compellent is known for granular migration of data between tiers. Kudos to them…they’re ahead of the game here.

    EMC? If you mean they’re known for pre-announcing a feature a year or more in advance of it shipping (is v2 shipping yet?), then I suppose I can’t disagree.

    FAST v1 has to be an inside joke within the EMC marketing department. Other products have done LUN-level migration for years. If EMC playing catch-up with everyone else makes them notable, I don’t know what qualifies as status quo. Granted, NetApp has also been slow to the game with this as well. But seriously…EMC?

    What NetApp does have today, though, has proven exceptionally valuable in certain use-cases. The PAM cards have been successful because they work well and are extremely efficient (being dedupe aware and all). Slapping some SSDs in an array is not innovative, efficient, or terribly useful. PAM is all of the above.

  • http://www.cinetica.it/blog Enrico Signoretti

    Dimitris,
    I’ve been a Compellent reseller since 2008 and I can say that every Compellent customer is very happy with its automated tiering (not comparable to EMC, of course).
    There are a lot of savings and I can show you a lot of real-world success stories; here’s an example: http://www.cinetica.it/2010/02/04/the-best-space-guarantee-program/ .

    Not all workloads are equal and not all data is equal: some customers have plenty of Tier1 and others have huge quantities of T2 or T3.

    Tiering is a foundational capability for fully virtualized storage environments, and it does a great job of automatically fine-tuning your storage while reducing complexity!

    Enrico

  • http://storagebod.typepad.com Martin G

    SRJ, where’s the EMC love? I don’t see a particular EMC bias here.

    And actually EMC have done LUN-level optimisation for years with Optimizer. I know because I canned our license for it four years ago because we weren’t using it! But it’s been a shipping product for a long, long time.

    To be honest, I was amazed that EMC announced FAST v2 when they did….it was insane! If they fail to deliver on time, or if it doesn’t work, or is horribly complex and needs a gazillion hours of PS to deliver benefits…you will see a pretty negative blog from me.

    EMC have potentially hung themselves out to dry, and if another major storage vendor delivers before them, they’ll be looking very stupid. And that could happen, as apart from NetApp, all the major players are working on similar tech!

  • SRJ

    Martin – *EXACTLY!* They’ve done LUN-level for years…so has everyone else. Calling it by a new name and announcing it as a new game-changing feature just because they finally tied it to some automation software and productized it is a poor joke in my opinion.

    Agreed in full on FAST v2, but I’m not totally convinced it will be the game-changer they claim it will be. Cool? Yes. Compellent is cool too, but they haven’t come out and significantly altered the market because of their tech.

    EMC love from me? :) I’ll admit you probably won’t see that from me any time soon. Call me “pro-anyone but EMC” if you like. (kidding) The main reason I don’t love them is their arrogance. (Though Sakac and team seem to be the exception to the rule. Nice work guys! Love to see that old culture get shattered!) I like to think I’m fairly reasonable when it comes to the technology. In my opinion most other storage vendors are passionate, but not quite so arrogant. I *typically* don’t bash their tech…just their FUD and marketing spin. All vendors do it to a certain extent. For some reason, EMC does it in a way/amount that ticks me off. Can’t explain it better than that.

    Want to emphasize again that Sakac and his team are starting to change my perception of EMC. He’s the best thing that could have happened to EMC’s public image, IMHO. If only Hollis, Burke, and Twomey would follow suit I think I could come to actually like EMC.

    Chris wasn’t really biased toward EMC in this post, but I read it right after a previous post where he was. I’ll get over it. :) Sorry Chris.

  • http://bluearc.com/html/blueviews/shmuel Shmuel Shottan

    Well, I know I’m late, and this subject is already yesterday’s news.
    Looks like most of what I would have said has already been said …:-)
    Yet, I “just had to” :-) chime in…

    http://bluearc.com/html/blueviews/shmuel/2010/02/27/on-the-differences-between-memory-hierarchy-and-intelligent-tiering/

  • Visiotech

    Strangely, that is exactly what I wrote back on December 14 on Storage Monkeys.

    http://www.storagemonkeys.com/index.php?option=com_content&view=article&id=232:infosmack-episode-31-emc-gets-fast&catid=69:infosmack&Itemid=143

    Looks like Netapp management are reading my comments…
