
HDS Virtualisation


I may have mentioned before that I’m working on deploying HDS virtualisation. I’m deploying a USP100 with no disk (well, four drives, the bare minimum), virtualising 3x AMS1000 units with 65TB of storage each. So now the tricky part: how to configure the storage and present it through the USP.

The trouble is that with the LUN size the customer requires (16GB), the AMS units can’t present all of their storage. The limit is 2048 devices per AMS (whilst retaining dual pathing), so that means either having only 32TB of usable storage per AMS or increasing the LUN size to 32GB. That presents a dilemma: one of the selling points of the HDS solution is the ability to remove the USP and talk directly to the AMS if I chose to remove virtualisation (unlikely in this instance but, as Sean Connery learned, you should Never Say Never). Since I can’t present the final LUN size of 16GB from the AMS, I’ll have to present larger LUNs, carve them up using the USP and forgo the ability to remove the USP in the future. In this instance this may not be a big deal, but bear it in mind; for some customers it may be.
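The arithmetic behind that 32TB figure is worth spelling out. A quick back-of-envelope check (using the device limit and LUN sizes discussed above, nothing more):

```python
# Rough capacity check for the AMS device-count limit described above.
# Figures come straight from the post: 2048 addressable devices per AMS
# (whilst retaining dual pathing), 65 TB of raw storage per unit.

MAX_DEVICES = 2048          # per-AMS device limit with dual pathing
AMS_CAPACITY_TB = 65        # raw storage per AMS1000

for lun_gb in (16, 32):
    presentable_tb = MAX_DEVICES * lun_gb / 1024
    print(f"{lun_gb} GB LUNs -> {presentable_tb:.0f} TB presentable "
          f"of {AMS_CAPACITY_TB} TB")

# 16 GB LUNs -> 32 TB presentable of 65 TB
# 32 GB LUNs -> 64 TB presentable of 65 TB
```

So at 16GB per LUN, half the array is stranded; only doubling the LUN size gets the full 65TB addressable.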

So, presentation will be a 6+2 array group: six data drives plus two parity, using 300GB disks, which actually results in 1607GB of usable storage. These are obviously salesman-sized disk allocations; my 300GB disk actually gives me 267.83GB. I’ll then carve up this 1607GB of storage using the USP. At this point it is very important to consider dispersal groups.

A little lesson for the HDS uninitiated: the USP (and the NSC and 99xx before it) divides disks into array groups (also called RAID groups), which with 6+2 RAID means 8 drives. It is possible to create LUNs from the storage in an array group sequentially, i.e. LUN 00:00, then 00:01, 00:02 and so on. This is a bad idea, as the storage for a single host will probably be allocated sequentially by the Storage Administrator, and then all the I/O for that host will hit a small number of physical spindles. More sensible is to disperse the LUNs across a number of array groups (say 6 or 12), where the 1st LUN comes from the first array group, the 2nd from the second, and so on until the series repeats at the 7th (or 13th, in our examples) LUN. This way, sequentially allocated LUNs are dispersed across a number of array groups.
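The dispersal scheme above is just round-robin allocation. A minimal sketch (the function name and layout are mine, not HDS terminology):

```python
# Dispersal-group allocation as described above: sequentially numbered
# LUNs are spread round-robin across array groups, so the series wraps
# once every group has been used. Illustrative only.

def dispersal_order(num_luns, num_array_groups):
    """Return (lun_number, array_group) pairs, round-robin across groups."""
    return [(lun, lun % num_array_groups) for lun in range(num_luns)]

# With 6 array groups, LUN 0 comes from group 0, LUN 1 from group 1,
# and LUN 6 wraps back to group 0 - so a host's sequentially allocated
# LUNs land on many spindles rather than a few.
for lun, group in dispersal_order(8, 6):
    print(f"LUN {lun:02d} -> array group {group}")
```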

Good, so lesson over. Using external storage presented this way, it will be even more important to ensure LUNs are dispersed across what are effectively externally presented array groups. If not, performance will be terrible.

Having thought it over, what I’ll probably do is divide the AMS RAID group into four and present four LUNs of about 400GB each. This will be equivalent to having a single disk on a disk loop behind the USP, the way internal storage would be configured, and will be better than a single 1.6TB LUN. I hope to have some storage configured by the end of the week, and an idea of how it performs; watch this space!
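The division works out roughly as follows (a back-of-envelope sketch using the formatted-capacity figure quoted earlier):

```python
# Capacity plan for one 6+2 array group: six data drives of "300 GB"
# disks, each actually formatting to ~267.83 GB, split into four LUNs
# for external presentation to the USP.

DATA_DRIVES = 6
FORMATTED_GB = 267.83

usable_gb = DATA_DRIVES * FORMATTED_GB      # ~1607 GB per array group
lun_gb = usable_gb / 4                      # four external LUNs per group

print(f"usable per array group: {usable_gb:.0f} GB")
print(f"per-LUN size (4 LUNs):  {lun_gb:.0f} GB")
# usable per array group: 1607 GB
# per-LUN size (4 LUNs):  402 GB
```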

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • mackem

    Hi Chris, I’m not sure if you’re aware, but HDS have just this week launched a user forum at http://forums.hds.com
    Maybe this would be a good place to leave some of your comments re the number of volumes you can configure on the AMS, as well as lend some of your own expertise to other people’s questions and issues

    mackem
