Has NetApp Reached a Strategic Inflection Point?


Chris Mellor recently covered IDC’s quarterly disk shipment report, which measures the market leaders in enterprise storage.  As Chris points out, NetApp continues to decline, with reduced revenue over the last three quarters.  I’ve talked for some time about NetApp reaching an inflection point with Data ONTAP, their storage array operating system.  Has the legacy of running two incompatible storage operating systems finally started a change in customer behaviour?

Background

NetApp has been selling two versions of its Data ONTAP operating system for some time. Legacy or "traditional" ONTAP (previously known as 7G) is called 7-mode and is the operating system that has driven NetApp filers for over 20 years. The software has survived surprisingly well, being amended and enhanced to include features such as block device (LUN) support with Fibre Channel and Fibre Channel over Ethernet. However, the cracks are starting to show. 7-mode supports only dual controllers, making it purely a scale-up solution (which incidentally is probably part of the reason NetApp have put so much horsepower into the new 8000 series). 7-mode also doesn't work well with flash, and NetApp have taken the decision to use flash more as a cache than as a storage tier.

NetApp acquired Spinnaker Networks in 2004. Their SpinServer and SpinOS technology was a scale-out clustered NAS solution and global file system capable of growing to 500+ nodes and 11PB of storage. NetApp's aim was to introduce these features into the Data ONTAP family, with the first product release being Data ONTAP GX in 2006. However, the evolution of Data ONTAP GX wasn't as stellar as NetApp might have hoped and adoption was low.

Rather than run two product lines, NetApp made the decision to harmonise the two separate operating systems in an attempt to make them look and feel compatible. NetApp started to bring the two code bases of GX and Data ONTAP 7G together in release 8.0 of both products. Unfortunately this meant a step backwards for some customers, as not all features in Data ONTAP 7G release 7.3.2 were supported in 8.0 7-mode. Gradually, NetApp has replicated 7-mode features in cluster mode (now called c-mode), to the extent that almost all features are supported.

Making The Transition

The intention is for 7-mode to fade away gracefully, and in fact it is already considered a legacy platform. However, migration to c-mode is far from seamless:

  • There is no upgrade-in-place from 7-mode to c-mode.  Data has to be migrated to new hardware.  For any customers with large estates, this represents a significant investment.
  • 7-mode to c-mode data migration isn't directly supported – there is no support for LUNs, traditional volumes, restricted volumes, SnapLock volumes or FlexCache volumes.  SnapMirror between modes is not supported either, so data has to be moved outside of the storage system, and even with professional services involvement LUN snapshots are lost.  Although NetApp offers tools such as the 7-Mode Transition Tool, these come with serious restrictions too.

For the storage administrator there are other issues:

  • c-mode uses a totally different command set, requiring storage administrators to acquire new skills and knowledge (see the illustrative comparison after this list).
  • Existing scripts and processes built around the 7-mode CLI will need to be rewritten.
  • OnCommand Core v6 will not support 7-mode – customers will need to retain version 5.  In transition environments, where customers have a mixture of c-mode and 7-mode, either some systems will not be supported (v6) or some new features will be unavailable (v5).
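
To give a flavour of the difference, here's a rough, illustrative comparison of creating a simple volume in each command set. The syntax is approximate and the object names (vol1, aggr1, svm1) are made up for the example, so treat this as a sketch rather than a reference:

  # 7-mode: positional arguments, issued per controller (illustrative)
  vol create vol1 aggr1 500g

  # clustered ONTAP (c-mode): named parameters, scoped to a storage virtual machine (illustrative)
  volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 500g

The point isn't the individual commands; it's that c-mode introduces concepts such as Vservers and a clustered namespace that have no direct 7-mode equivalent, so scripts and run-books generally can't be translated one command at a time.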

I could go on digging deeper into the issues, but the problem is clear: 7-mode and c-mode are fundamentally different products, and migration between the two is no different from choosing technology from another vendor.

When faced with the choice of moving to another incompatible NetApp platform or looking at the market to see if there’s something potentially better, cheaper and easier to manage, it’s not surprising that many IT departments see this as their opportunity for change.

A Changing World

If ever there was a truism, it's that the world of IT never stands still.  As anyone employed in the industry knows, step out of IT for one or two years and you are massively behind the curve.  Even at a time when many people could look at storage and say "it's done", we see new entrants into the market.  These new players are starting their architectures from scratch, learning from the past and taking advantage of new technologies such as flash, NVDIMM, high-performance processors and interfaces.  Companies such as SolidFire, Coho Data and Nimble Storage have brought us new ideas on storing data, with scale-out, native API interfaces, intelligent use of flash and ease of management.  Many are positioning themselves for the next wave of data management, which sees the storage array taking on the analytics function directly.  There are still more, such as Tintri, who are "application aware", understanding the requirements of the compute platform.  There are also object storage companies pushing to take a greater slice of the infrastructure, and of course the move to cloud computing.

Diversification

Why is this a risk for NetApp and not the other storage vendors?  The answer is diversification.  EMC (or at least Joe Tucci) saw the writing on the wall many years ago and started acquiring other businesses.  Some were more successful than others, the net result being that EMC isn't wholly dependent on selling more storage arrays, as their financial reports show.  Other big players have followed a similar model: IBM is shedding its hardware business piece by piece; HDS have moved into converged infrastructure and object storage as well as evolving their existing business.  HP have never been wholly dependent on storage; however, they have evolved from EVA and are making great gains with 3PAR.  Take a look at NetApp's product page (here) and you see a focus on a single product line: FAS.

There’s also a threat to traditional storage from hyper-converged solutions, which look to remove shared storage altogether.  Products such as those from SimpliVity and Nutanix offer more than just the elimination of hardware; they remove the need for the PhD-grade storage administrators required to understand the complexities and intricacies of Data ONTAP.

The Architect’s View

The world of storage is changing.  Storage at scale needs to be simple; the startup vendors are proving this is the right approach.  NetApp continues to be focused on a single product line.  Unfortunately they have boxed themselves into a corner by following the marketing mantra that Data ONTAP fixes all ills.  For the company to continue to grow, they need to get past their only-child fixation and acquire and diversify; otherwise time will run out and it will be too late to change.

image by freefoodphotos – stockarch.com

Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.  


Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

 

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • klstay

    Excellent summary.

    We have been NetApp for over 8 years (took me 4 years to get them in the door to begin with), but are beginning a pilot of SimpliVity and also have plans to pilot Hitachi Content Platform. Honestly, it is a bit brazen to even call it cluster mode, and the same goes for the claimed QoS.

    As far as the 8000 series goes, there's nothing wrong with a scale-up design for a lot of the workloads out there, IMHO. However, if I am looking for that, I guarantee you 3PAR and HUS VM are still way in front of them in any RFP I put together. (DKCMAIN is a spring chicken compared to ONTAP…)

  • Dimitris Krekoukias

    Hi all, Dimitris from NetApp here (recoverymonkey.org).

    Chris (Evans), remember that Chris (Mellor) likes to show quarterly reports but omits full-year performance. A year-over-year chart is simpler (though less bumpy and therefore less visually interesting than a quarterly one) – and it shows NetApp doing better in 2013 than 2012 from both revenue and capacity shipped standpoints.

    Like most things in life, storage sales are cyclical events, best not even analyzed year over year…

    In addition, your article is focusing a lot on transition from 7-mode to cDOT. All new customers go on cDOT… existing ones can choose to transition or not, and many are simply waiting for their next tech refresh, and that alone is delaying spending in some cases.

    Regarding diversification – it doesn't ensure a company makes better products, just that it makes many different ones. And it doesn't matter if some tank, as long as the company survives (HP could survive even if they dropped their storage business completely, for example). As you said, IBM is getting rid of venerable hardware lines. Most of their revenue is services – they don't really care whose storage it is…

    We are already getting heat over how different cDOT is from 7-mode! :) Plus we have E-Series and the awesome FlashRay coming up, plus stuff I can't talk about in a public forum. But yes, overall we don't make servers or networking or printers or mouse pads; we'd rather make quality storage.

    And finally – and this is all public info – NetApp is, by far, the biggest storage supplier to the US government. Government spending on storage was significantly down recently, which accounts for downturns in the quarterly earnings.

    The devil is in the details.

    Thx

    D

  • Frank Davidso

    I see that you have referenced Chris Mellor's Register article here, and in particular the comment "IDC's quarterly disk shipment report, which measures the market leaders in enterprise storage. As Chris points out, NetApp continues to decline, with reduced revenue over the last three quarters." I discussed this with Chris offline and he agreed: the devil is in the details. One, three quarters don't make a year, and for calendar year 2013, also according to IDC, NetApp is up on both revenue and capacity shipped over 2012. Also according to IDC, even though both are up, our revenue gain percentage is lower than our capacity gain percentage. This means customers are getting more value for each dollar they are spending with NetApp, as they are spending less and getting more. Obviously one could pick three quarters from any vendor and make it look like they are doing worse than they are, as some quarters are up and some are down. It is what they do in a full year's time that tells the tale. The net is that NetApp is still growing and taking share, not losing share as the comment and article might imply. Also consider that the US Federal Government, of which NetApp has 42% share, had a spending sequester and did not spend as much in 2013 as it has in the past. So the numbers are down not because competitors are taking share from NetApp, but in part because our biggest customer spent considerably less; even so, NetApp still gained market share in calendar year 2013.

    Regarding 7-mode to cDOT migration: admittedly, customers have to plan this, it isn't something you do overnight. As such, customers are waiting for a window like a tech refresh to do the migration. I have many customers who want to move to cDOT but are waiting for a tech refresh, as that is the easiest time to do it. So there is some delay in the migration due to customer timing, not because customers don't want to make the transition or have switched to another storage vendor, but because they are waiting for the best window of opportunity to do so. Sometimes the devil is in the details.

    A few final comments. Yes, NetApp has two operating systems, three if you count SANtricity for E-Series. That is still far fewer OS types than any of the other tier-one vendors, and NetApp is only asking customers to migrate once in 20 years, as you mentioned in the article. If you look at EMC, for example, customers have had to migrate from Sym to VMAX and from Clariion to VNX/VNXe, etc. HP customers have had to move from EVA to 3PAR, IBM customers from whatever to V7000. Each of these in the past 5 years. Nobody is immune to this, so to ask "Has NetApp Reached a Strategic Inflection Point?" is to ask "Did EMC, HP, Dell, IBM and Hitachi reach a strategic inflection point each time they required customers to migrate to a new platform?"

    • http://architecting.it Chris M Evans

      All fair points, and I thank you for a balanced response. To defend EMC slightly, Symm -> VMAX was possible with SRDF as long as you didn't stray too far ahead on Enginuity; VNX to VNX2, however, is a painful migration.

      The interesting thing about inflection points is that they occur but don’t become obvious for some time after.

      I made a couple of points – the pain of migrating from 7-mode to cDOT, plus the lack of diversity. That wasn't just about the storage platform but about technology in general. Do you really think NetApp (or any other vendor for that matter) can continue to be solely a storage array seller?

      • Dimitris Krekoukias

        Maybe not solely a storage array seller – rather, storage in general. Yes, I would argue there is a very bright future for a large storage-specific company like NetApp. The value prop isn’t the hardware anyway. It’s how hardware is used.

        Oh, and regarding your Symm -> VMAX comment… it’s still Enginuity. cDOT is sufficiently different that we can do an easy mirror for migration but at the moment for file protocols only.

        What is missing from the Symm -> VMAX or CX -> VNX -> VNX2 or AMS -> HUS etc equation: the ability to keep your old disk shelves and controllers.

        7-mode to cDOT is NOT a forklift when it comes to the hardware. Indeed, the hardware is redeployed!

        There is investment protection there.

        Most other vendors don’t let you do stuff like that, especially not when the target OS is different enough.

        Check: http://bit.ly/L3CIGM

        Thx

        D

  • Frank Davidso

    Full Disclosure: NetApp employee here and my comments are my own and not that of NetApp.

    One additional comment. In this article you reference and link to your previous article from 2010. Since the 2010 article, NetApp has gained share every year, including 2013. Is it possible customers are seeing something in NetApp that you are missing in your analysis?

  • klstay

    I do love watching the folk of this or that vendor shuffle onstage whenever you suggest anything negative! In the case of NetApp, for me, what I perceive right now is bittersweet; such a great company with such great employees! Only time will really tell if the path they are on now leads to a place they and many of their customers want to be. For us it did not, so some legacy things will stay NetApp for the next few years, but that is it.

    Everyone always talks about scaling up and out, but that is only half the equation. Scaling down but keeping the enterprise features in a globally manageable service without breaking the bank is just as important to a lot of companies. At the other end NetApp still has some work to do in the object storage space if they actually want to compete and that is where a lot of the growth is in the multi-petabyte market.

    Tough sledding for all the incumbents in the coming years and IBM certainly is the weakest of the set. EMC also faces some technical challenges, but they have that broad portfolio with all the pluses and minuses that brings AND a proven sales force from hell that takes no prisoners. 3PAR is my favorite architecture of the 5 ‘players’, but unfortunately they are part of HP; no thanks, at least not for now… The mini VSP from HDS (with the unfortunate name HUS VM) has some just plain great features at that price point. Plus HCP for object stuff from the same company. Plus their unified compute platform is actually pretty darn good. (LPARs on x86 anyone? Fantastic!)

    Anyway, as stated, time will tell.


  • Jay M

    Disclosure: former NetAppian, now EMC. My opinions.

    Great discussion in this thread. The future of IT infrastructure will be very interesting. Our customers are lucky to have so many choices, but therein also lies the challenge. SAN, NAS, object, all flash, hybrid flash, converged, hyper-converged, on-prem, off-prem, both… Whew. I try to stay grounded with, “it’s about the applications, stupid”.

    Great ride with NetApp, and FAS was a great choice for many business apps, file systems and VMs. But after 20 years ONTAP is showing signs of the innovator's dilemma. They could just never successfully integrate acquisitions to create a portfolio. All is good with SnapManager and SnapVault for copies and DR, but only if you have all homogeneous arrays – not only that, but now all arrays must be running the same mode of ONTAP. All of our customers have heterogeneous platforms/data. I keep hearing a common theme in customer discussions: "we want to evolve to a tiered service catalog on hybrid cloud with highly automated provisioning, management, and protection". These are not just techies that love to experiment, but folks responsible for business-critical applications. Ultimately software will eat specially built hardware. I think this will be a challenge for EMC, as it has accelerated the shift to software-defined architectures to solve for the "3rd platform", but it could be a real iceberg melter for NTAP…

  • Rich Cramer

    NTAP is (was) a great company but they are pseudo-unified storage; huge in the NAS space but weak in SAN. I think Nimble Storage is going to dominate the storage space (eventually) – at least for a while, till someone new comes along. Check out the YouTube video of Nimble’s CEO – the legacy players would have had to rewrite their OS completely to take full advantage of flash, multi-core processors etc., which would have been expensive/suicidal? So most vendors reacted by just jerry-rigging the new technology onto old platforms. Also, C-DOT is turning out to be a huge pain-point for NTAP customers – they have to migrate to it (yes, great benefits lie ahead) but they have to pay for it! This is a great opportunity for NTAP competitors to acquire customers, except for companies that are “file shops” and even then, maybe they could be convinced to go to Isilon or HNAS….

  • wingknut

    This could have been pulled from a conversation I had with my management over at the ‘N about 5-6yrs ago, word for word. :(

    Love everyone there, but we parted ways after 14 years in 2012 based on 'direction'. Since then about 95% of my old group has been RIF'd and all but shut down.

    The main issue with scale-up is that a single user or workflow can easily hold hostage every other player under the CPU they share, much less a shared domain.

    Even c-mode cannot manage scale-out storage below the directory level. When workflows live within narrow sets of directories (and this can change quicker than an admin could migrate around it), a workflow can saturate and hold hostage other users/flows beneath the shared controller, and you're back to the 7-mode problem again. Think DFS, and you have the basic idea of c-mode junctions/scale-out.

    But… I don't think NetApp deliberately ended up with two products with no upgrade path between them. They spent the better part of 20 years with a very elegant "Don't touch the storage, just upgrade the head!" approach… and the concept of data migration did not exist within the corporate DNA; it was ignored in the engineering approach, left for others to sort out on their own… if the customer REALLY wants us, THEY will have to manage their own migration. Not… my… problem, fax over the PO.

    But when your Vendor X is the same vendor as the new kit… and you're told that you're on your own to upgrade and migrate… ya… might as well investigate what else is out there. As a customer, you've got nothing to lose except your job for not taking the wise opportunity to do so.

    A very wise man who owned a company I worked at in Houston… a Mr. Bill Holbert, once told me 'When it comes to marketing, never believe your own bullshit'.

    Wiser words were never said.

