
Enterprise Computing: USP-V – So Long And Thanks For All The Fish


So HDS’s announcement has turned out to be a complete disappointment.  What it’s not:

  • It’s not new hardware.
  • It’s not providing more physical capacity.
  • It’s not providing dynamic tiering.
  • It’s not providing enhanced replication technology.

What is on offer is the ability to cluster USPs – a feature called Hitachi High Availability Manager. By cluster, HDS means connecting two USP arrays together and having them work in an active-active configuration, with data replicated in either direction. This new feature seems to answer only one problem - how do I get off my USP?

Back in 2004 (I think it was), when I first sat with HDS and had a presentation on the USP, the key question (especially with virtualisation) was how to cope with the fact that a single USP is a SPOF (Single Point of Failure). People in the room viewed the USP like a network switch - subject to failure, requiring upgrades and so on. HDS was at pains to say that clustering USPs simply wasn’t necessary as the hardware was fully resilient. In fact, HDS have gone so far since then as to offer 100% data availability on the USP. So why is clustering a USP so necessary? How can we have higher than 100% availability?

Let’s not forget, the availability of any system is determined by the availability of its least resilient component, so if we have a USP (with 100.00% availability) virtualising external storage (with 99.9% availability), the weak point is the external storage. Clustering USPs doesn’t improve this and never will. All HDS have answered with this offering is the migration issue. That isn’t a new feature.
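To put some rough numbers on the availability point, here is a back-of-the-envelope sketch (illustrative figures and helper names of my own choosing, not vendor quotes) of why the least resilient tier always wins:

```python
# Components in series: the data path is only as available as every tier in it,
# so end-to-end availability is the product of the individual availabilities.
# The figures below are purely illustrative.

def chain_availability(*tiers):
    """Availability of a path in which every tier must be up."""
    result = 1.0
    for a in tiers:
        result *= a
    return result

usp = 0.99999        # the virtualisation layer, close to the claimed "100%"
external = 0.999     # externally virtualised, lower-cost storage

print(chain_availability(usp, external))            # ~0.99899 - dominated by the external tier

# Clustering only improves the USP term (very simplistically, two independent USPs):
clustered_usp = 1 - (1 - usp) ** 2
print(chain_availability(clustered_usp, external))  # ~0.99900 - the weak link hasn't moved
```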

Here are some questions that arise from the presentation and which weren’t answered:

  • How far apart can my USP cluster arrays be?
  • What’s the impact on latency?
  • How is data integrity maintained?
  • Does clustering also support TrueCopy, ShadowImage and COW Snapshots?
  • Does clustering change my array World Wide Names, and if so, how?
  • Can cluster arrays be at different microcode levels?
  • Can clusters be TrueCopy secondary devices, if so what replication links are required?
  • Do I need specific multipathing software?

So, you may be asking why the odd title of this post. Have a look here. It’s what the dolphins said before they left Earth. Time to say goodbye to the USP-V as a player in the Enterprise array space.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://www.contemplatingit.com Tony Asaro

    Chris,

    I disagree with you on this and I think you had expectations, and since they weren’t met you’ve overreacted. I believe you will see the USP V thrive in the marketplace and this is just another practical capability added to the growing list that the USP V has over the competition. This release isn’t about hype but practical implications in the data center, and it does change the Enterprise-class storage game. First, customers don’t ever have to do another data migration. That alone offers major value. Second, if they plan it correctly then customers can totally avoid paying extended maintenance fees. Extended maintenance fees can be ridiculous and kill IT budgets. And third, what other storage system can guarantee that applications never have to be brought offline again? For some customers this totally mitigates risk for mission-critical applications.

    Tony

  • Pingback: Cinetica Blog » ANNUNCIO IMPORTANTISSIMO!!!!!

  • laurencedavenport

    Chris

    Hope I am not going to start a load of Hitchhiker’s Guide quote comments, but I appreciated your title; it is one I use myself and often get confused looks. One other favourite:

    “You know,” said Arthur, “it’s at times like this, when I’m trapped in a Vogon airlock with a man from Betelgeuse, and about to die of asphyxiation in deep space that I really wish I’d listened to what my mother told me when I was young.”
    “Why, what did she tell you?”
    “I don’t know, I didn’t listen.”

    Laurence
    http://www.StorageManagementAutomation.com

  • http://www.contemplatingit.com Tony Asaro

    Chris,

    I think you need to break it down into more practical terms. First, there are already a number of customers that have more than one USP V in their environments, and there are certain applications for which they want this level of redundancy because of their critical nature to the business. That is a no-brainer. Keep in mind it is not an all or nothing proposition. Second, for greenfield opportunities IT professionals can look at how they want to design their environments and they can leverage this capability based again on specific requirements of the applications and business. But they now have an option to use this capability if it makes sense to their business. Having a zero application downtime option is powerful and has practical implications that IT professionals may want to leverage based on requirements.

    Additionally, we both know the challenges of data migrations and the impact that this has on organizations. The ROI on this is also a no-brainer within many Enterprises based on how much data they move. We both also know that many a storage vendor has made a great deal of money on extended maintenance fees and again this capability can address this as well. This was from an article in Search Storage – Christopher Crowhurst, VP of strategic technology for Thomson Reuters Professional Division – “I have not experienced a situation where we’ve had availability problems [with the single-controller USP-V],” he said. “We plan to use this more as a non-disruptive way to swap out infrastructure.”

    I am all for next generation stuff but there has to be a reason for it. If you are referencing the V-Max – who knows what the impact will be? Will it increase EMC’s addressable market? I am far from convinced of that. Additionally, none of the new features of the V-Max are unique – other vendors have them – and who actually knows when all of the capabilities will be available? Having said that, external storage virtualization is still unique to the USP V as a storage system and a big reason why they win business. I predict that this year and next it will drive business even further because of the focus on optimization and utilization in the data center. Another quote from Crowhurst – “However, as both an EMC and an HDS customer, Crowhurst said the Symmetrix needed to increase the amount of available cache, and that’s not a problem with USP-V. ‘The biggest thing about V-Max was the transformational step for the underlying architecture, which needed refreshing to continue to grow and scale,’ he said. ‘HDS doesn’t need to refresh to overcome those limits yet.’”

    Personally, I think you are howling at the moon on this one. We can debate endlessly on this but the real impact is what happens in the field. The first part of the call was Jack Domme talking about how well their storage business is doing even in this economy – that should tell you something. Let’s talk in December to see how well the USP V is doing compared to the competition.

  • http://blogs.hds.com/claus Claus Mikkelsen

    OK, guys….my turn, and hopefully this answers some outstanding questions. I’ll respond in a somewhat random order but hopefully get the points across.

    Chris, on your first 4 points, I’ll say:

    1. It’s not new hardware – man-o-man, do you work for a forklift company? Is there a stock tip I’m not aware of? Seriously, why would I want to have to invest in new hardware to get this new cool function? The echo I hear from customers (constantly!!) is don’t make me have to invest in new HW just to get new function.

    2. It’s not providing more physical capacity – well, you’re right, it doesn’t. If you fully populate a USP-V with 1TB SATA drives you’re well north of a PB. I chose the SATA option since that’s the standard that EMC uses. Add to that a 247PB limit on virtualized storage, and you’ve got one very beyond-humongous array image.

    3. It’s not providing dynamic tiering – Well, our HCAP product already moves data dynamically amongst tiers. Or is this a reference to FAST from EMC, which is currently a PowerPoint slide deck?

    4. It’s not providing enhanced replication technology – So replication with failover that’s transparent to apps and servers is not new on the enterprise platform? I think it is. And besides, we already provide every replication technology I can think of anyway. Pass on some new ideas and we’ll see if we can get them implemented.

    Now, as far as your other questions, I’ll C/P with answers inserted:

    How far apart can my USP cluster arrays be? – TrueCopy Synchronous distances. So, like 20 miles, 50 miles, 100 miles, whatever, and the further you go the greater the performance impact is on “writes”. In other words, it’s your basic synchronous replication issues. Nothing new here.

    What’s the impact on latency? – above and beyond the normal synchronous replication stuff above, just a few microseconds. Not milliseconds, but microseconds.
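    To give a rough feel for the numbers behind those basic synchronous replication issues (propagation delay only, nothing HDS-specific, and assuming a single round trip over the link per synchronous write – the function name and figures below are illustrative):

    ```python
    # Back-of-the-envelope only: light in fibre covers roughly 1 km every
    # 5 microseconds, so each synchronous write pays at least one round trip
    # of propagation delay on top of the normal local write time.

    ONE_WAY_US_PER_KM = 5.0   # approximate propagation delay in fibre

    def sync_write_penalty_ms(distance_km, round_trips=1):
        """Added write latency from propagation delay alone, in milliseconds."""
        return 2 * ONE_WAY_US_PER_KM * distance_km * round_trips / 1000.0

    for km in (30, 80, 160):  # roughly 20, 50 and 100 miles
        print(f"{km:>4} km: +{sync_write_penalty_ms(km):.2f} ms per write")
    # 30 km: +0.30 ms, 80 km: +0.80 ms, 160 km: +1.60 ms - the distance, not
    # the clustering feature itself, is what drives the write penalty.
    ```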

    How is data integrity maintained? – With TrueCopy synchronous replication. Additionally, the quorum disk is checked to ensure that pairs are properly duplexed before any swap occurs.

    Does clustering also support TrueCopy, ShadowImage and COW Snapshots? – Yes

    Does clustering change my array World Wide Names, and if so, how? – No, device addresses are the same between the two controllers; ports are different and must be configured as such.

    Can cluster arrays be at different microcode levels? – Yes, pretty cool, huh?

    Can clusters be TrueCopy secondary devices, if so what replication links are required? – TrueCopy is mandatory for AM using standard links as required by sizing of the environment. (I’m intentionally not calling it HAM since I’m a bit fatigued of the “swine flu” and “pork” jokes). Replication links are basic TrueCopy links.

    Do you need specific multipathing software? – Yes, HDLM for now

    Some other miscellaneous comments on stuff in this thread….

    You’re not doubling all of your resources. You’re doubling the capacity that you deem critical and wish to recover. This is not a totally replicated array environment. If you have a 200TB array and think only 10TB needs clustering support, then duplicate just that 10TB. External virtualized storage does not need to be replicated since it just does a path failover.

    Is this a “big” announcement? Yes, I think it is. Being able to – given your currently installed hardware – transparently failover your applications given loss of access to data, is a pretty big deal.

    As a sidelight, I’ve been NDA’ing this thing for a while, and at first there’s a bit of yawning, but once the details come out, you have to believe there is some serious excitement. And while you talk about being disappointed that this is not new hardware, the fact that these customers can implement AM on what they already have is big.

    Hope that helps clarify some things…post more questions if it does not. This is good dialogue…Claus

  • Pingback: Online Storage Optimization » Blog Archive » Storage News and Notes - May 29

  • Ced

    Hi Chris,

    I do not understand the conclusion of your post: “Time to say goodbye to the USP-V as a player in the Enterprise array space.”

    It’s not new hardware, but personally I might be interested in this feature, provided the licensing cost is not too high ;).

    Here are my thoughts:

    Let me explain why. Today we use host-based mirroring for our Unix environment, based on ZFS. It works well, gives us a lot of flexibility for migration from one array to another and it’s cheap, but it does create some side effects. The biggest one is the control frames sent by ZFS to detect whether or not disks are present on both sites. The net result is a lot of small frames on the SAN that consume buffer credits, especially on costly inter-site links. This consumption prevents the full usage of our links (a high number of buffer credits are consumed by small frames rather than by long ones), so we might need to upgrade/add links and spend more money.
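    To put rough numbers on the buffer-credit problem (simplified, assumed figures and helper names for illustration, not our real link configuration):

    ```python
    # Rough sketch only: each frame in flight on a Fibre Channel link holds one
    # buffer-to-buffer credit whatever its size, so small ZFS control frames tie
    # up the same credits as full data frames while moving almost no data.

    LINK_GBPS = 4.0              # assumed inter-site link speed
    ONE_WAY_US_PER_KM = 5.0      # approximate propagation delay in fibre

    def max_throughput_mbps(distance_km, frame_bytes, credits):
        """Best-case throughput when the credit pool limits frames in flight."""
        rtt_us = 2 * ONE_WAY_US_PER_KM * distance_km
        # at most 'credits' frames can be outstanding per round trip
        return credits * frame_bytes * 8 / rtt_us   # bits per microsecond == Mbit/s

    for size in (64, 2048):      # small control frame vs full-size data frame
        print(f"{size:>5} byte frames, 50 km, 64 credits: "
              f"~{max_throughput_mbps(50, size, 64):.0f} Mbit/s usable")
    # ~66 Mbit/s with small frames vs ~2097 Mbit/s with full frames on the same
    # link and the same credit pool - which is why the small frames hurt so much.
    ```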

    With this feature, we might turn off the mirroring at the ZFS level and manage the redundancy at the storage layer, improving the usage of our links thanks to TrueCopy. Please note that I said turn off and not remove, the main reason being to keep the ability to run host-based mirroring during a migration phase from one USP to another.

    Ced

  • soikki

    “HDS will start to fall behind in the features race”

    Would you like to give us more insight into this? Do you mean hardware, software, or the functionality of both?

    What people who use these arrays need is functionality that we can use: online data movement and migration, easy and quick provisioning, virtualization etc. We don’t care so much what the hardware looks like; we don’t live next to the boxes.

    Please update your detailed information on this, and please give us your valued opinion :)

    On some high-end arrays you can do online data migrations inside or outside the box for all LUNs, use dynamic provisioning with zero-page reclaim etc.; on some high-end arrays you can’t do any of the above.

  • soikki

    First, please read EMC’s document “Best Practices for Nondisruptive Tiering via EMC® Symmetrix® Virtual LUN”

    Yes, the HW on V-Max looks promising. However, currently the features are missing: there’s no possibility for real tiering, as you cannot migrate data even inside the box from “normal” LUNs to “thin” LUNs. And as I’ve heard, you cannot do LUN migrations on thin LUNs at all. This is seriously flawed and I’m very surprised that there’s no attention to this. And when migrating LUNs, the source is _always_ erased, so you cannot reuse it for repurposing data.

    Also, the only way to expand LUNs is by creating metas, and this kind of storage configuration is from the dark middle ages.

    At some point I’m sure that the V-Max will be usable, but currently there is too little to get excited about when you are the person who has to really work with it. I’m quite sure that the lacking features are due to the microcode inheritance, which brings too many restrictions originating from the ’90s.

    It is also too bad that HDS has a lot of nice features but, as you stated, the user interface is lacking and of poor design. I wonder when we will get the best of both worlds…

    -Soikki

    And BTW, Open Replicator means that data is copied offline.

  • Pingback: HDS’ HAM-Fisted Announcement Can’t Be All – Gestalt IT

  • Pingback: A Taste Of HAM (Apologies To The Doctor) – Gestalt IT

  • Pingback: HDS High Availability Manager: How It Works – Gestalt IT

  • Pingback: Enterprise Computing: New HDS AMS – Do We Need Enterprise Storage? « The Storage Architect

  • Pingback: Enterprise Computing: New HDS AMS – Do We Need Enterprise Storage? – Gestalt IT

  • Mpho

    How do I calculate the storage capacity needed if I am given the following?

    Number of accounts = 5000-100000
    Number of ATM’s = 256
    Number of books store = 500
    Number of call centres = 9
    Estimated transaction rates = 500/sec

    I want to know if I will need an AMS or a USP.

  • http://www.tp.stasinlessequipment.com storage thanks

    1. How do you deal with the fact that the USP is a SPOF (Single Point of Failure)?
    2. What causes the USP to be a SPOF (Single Point of Failure)?
    3. What is the primary function of clustering USPs?
    4. What are the weaknesses and strengths of clustering USPs?
    5. To use USP clustering, is additional hardware required?

  • Pingback: Hitachi VSP SPC-1 Results posted « TechOpsGuys.com

  • Chris Evans

    Tony

    I’m afraid I can’t agree with you on this. Universal Volume Manager has been a tool HDS have sold to map external storage through USPs for a variety of reasons. Typically, the externally connected storage is lower cost, lower resiliency/availability technology. Adding extra resiliency to the virtualisation layer is pointless if the storage technology doesn’t have the same availability characteristics. In addition, I see a very small number of customers who would consider pairing two USPs and replicating their internal disk locally in order to make a solution more highly available. Doubling the cost for an incremental increase in resiliency will be hard to justify in the current economic climate.

    You are right that I did have expectations. I expected (and hoped) that HDS were announcing something more radical than they did. The thinly veiled pre-announcements from Hu and Claus on their blogs, plus Twittering from @HDSCorp made me (and I suspect others) hope HDS were planning something big, as I’ve never seen them do this kind of joined up planning. Unfortunately we were let down.

    Compare this announcement to that of “Switch It On” which was much more impactful to customers in terms of cost savings but wasn’t announced in anywhere near the same way.

    I applaud HDS for trying to up the ante in order to match the marketing of their competitors, but we need to see the next generation of USP with EMC-beating features and see it soon.

    Chris

  • Chris Evans

    Tony, thanks for the comments. I think it would be interesting to review, as you say, in 6 months time to see how many deployments of HAM there are (assuming HDS are prepared to release the information).

    Chris

  • Chris Evans

    Ced, are you clustering your Unix servers? Are they more or less available than the storage? If the answer is less, then why pay for and use HAM if the other parts of your infrastructure are less resilient? The conclusion of “time to say goodbye” is a reflection on HDS’s PR surrounding this announcement. It appeared to be PR related to a new hardware release. If there’s not one in the pipeline (and soon) then HDS will start to fall behind in the features race.

    Chris

  • Chris Evans

    Soikki

    I mean both - on the hardware front, the clear direction is commoditisation - a move to (more reliable) standardised components and design. As this happens, adding new technology becomes easier - the move from 3.5″ to 2.5″ drives, for instance, just becomes a case of deploying a different shelf. There’s also a move to cheaper, commodity hardware - like Intel processors - and away from lots of dedicated custom ASIC and component design, which is expensive.

    On the software side, HDS need to improve their host-side applications. Device Manager and Tuning Manager, for example, are still poorly implemented products. There’s no direct API to the USP arrays other than CLIEX, which doesn’t cover all features and is risky - you can delete LUNs without any confirmation, checking or process locking.

    Online migrations are great – if you’re not changing the layout of your disks – i.e. if your configuration was optimal in the first place, which is usually not the case. Don’t forget EMC have Open Replicator, which allows migration to/from non-EMC arrays too…

    I agree Zero-page reclaim isn’t on other arrays – but it has restrictions; as far as I am aware it won’t run in conjunction with TrueCopy or ShadowImage.

    Chris

  • Chris Evans

    Mpho

    I’m not sure what you’re trying to achieve here. You’ve lots of different variables but I don’t know how they combine. What figure are you trying to calculate?

    Chris
