Enterprise Computing: The Exchange Storage Bandwagon


I love the late-evening banter on Twitter, where a conversation between a number of individuals turns into a personal rant from yours truly. Tonight’s subject: performance management of Microsoft Exchange and the over-configuration of storage for email.

Some four years ago, I was working for a large investment bank (which may now be defunct) and I did the storage configuration and testing for the new Exchange deployment. Having been called in at the last minute, I had to take the storage configuration provided by the previous experts and the vendor. This consisted of a DMX1000-P2 (the performance model), using only the fastest 50% of the drives.

As the pre-deployment testing progressed, all the Microsoft Exchange servers were installed, configured and loaded with the Jetstress software to test performance. Unsurprisingly, as the setup had been so hideously over-configured, the testing passed with flying colours. As I checked the configuration of the individual servers, I found wide variations in their setup: HBAs running at 1Gb/s rather than 2Gb/s (with HBAs on the same servers running at different speeds); inconsistent drivers and firmware; differences in the host logical volume layout. Despite all this, the configuration worked flawlessly, even with all of the intended production servers running stress loading at the same time.

This isn’t the only over-configured Exchange implementation I’ve seen; another springs to mind that used 300GB drives as if they were 146GB models. I’ve also seen the same treatment given to Notes. In that instance, however, common sense prevailed and it became clear very quickly that each Notes server could be loaded more heavily with data and that there was no need to short-stroke the drives to achieve the desired throughput. Performance/capacity logic was applied and the configuration streamlined.
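To illustrate that performance/capacity logic, here’s a minimal back-of-envelope sketch. The per-spindle IOPS, drive size and workload figures are illustrative assumptions, not values from either deployment; the point is simply to check which constraint actually drives the spindle count before deciding to short-stroke.

```python
# Back-of-envelope check: does capacity or performance drive the spindle count?
# All figures below are illustrative assumptions, not measured values.
import math

def spindles_required(total_iops, total_capacity_gb,
                      iops_per_spindle=180,          # assumed 15k RPM FC drive
                      capacity_per_spindle_gb=300):  # assumed usable size per drive
    """Return the spindle counts needed for performance and for capacity."""
    for_iops = math.ceil(total_iops / iops_per_spindle)
    for_capacity = math.ceil(total_capacity_gb / capacity_per_spindle_gb)
    return for_iops, for_capacity

perf, cap = spindles_required(total_iops=6000, total_capacity_gb=15000)
print(f"Spindles for IOPS:     {perf}")
print(f"Spindles for capacity: {cap}")
if cap >= perf:
    print("Capacity drives the spindle count - no need to short-stroke.")
else:
    print("IOPS drive the spindle count - short-stroking (or faster media) may be justified.")
```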

The moral of this story? (a) Don’t over-configure purely based on what the vendor recommends; chances are they’re doing CYA to ensure they can’t be blamed for poor response times and throughput. (b) Review your configuration regularly and, if response times are overly good, tune things down: use that extra disk space; load the servers more heavily.

Don’t just assume that because everything works normally you can’t squeeze an extra level of performance from the configuration.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Jim

    With Exchange 2010 it will be a lot easier to over-configure the solution. People will still do it, though.

  • Tzvika

    Sound advice. But I wouldn’t argue 100% against short-stroking, because there are many valid cases where it is extremely useful. Sometimes you just need the IOPS more than the space.

  • http://www.brookend.com Chris Evans

    Tzvika

    See my post yesterday on Violin SSD – if you’re using only 1/2 the drive to get throughput on spindle count, then perhaps you’re using the wrong device….

    Chris

  • https://mvp.support.microsoft.com/profile=7BF8E3B6-6CC2-48D2-9960-E62FEE70252C John F.

    Oh, Exchange can be a beast at times. Back in Exchange 2003, short-stroking was in vogue. The IOPS/user were high, and things like BlackBerry or desktop search engines (all the rage back then) drove them much higher. This is perhaps the genesis of over-engineering the IOPS for Exchange to ensure adequate performance. Back then, a 50M mailbox was considered a good size.

    On to Exchange 2007, which really did drop the IOPS 70%. Some of it was by increasing the IO size from 4K to 8K, but the majority was a reduction in read IO, due mainly to the larger database cache. It’s not uncommon to see a 53:47 R/W ratio for cached Outlook clients. With the decrease in IO, mailbox sizes tended to grow; the 300-500M range was not uncommon.

    On to the latest and greatest: Exchange 2010. The cache increases again, with a few efficiency tricks. The IO size increases from 8K to 32K. Changes in log handling actually improve write performance. With DAG replication (log shipping), the IO ratio drifts closer to 60:40. The IOPS/user drops 70% again. If you’re using mailboxes over about 700M, then you’re in the SATA realm. (A rough worked example of these figures follows after the comments.)

    Why is it I still see some vendors/consultants designing for Exchange 2003 when deploying Exchange 2010? When should you use SATA, and when is SAS more appropriate? How do you calculate the IO? What’s the impact of Blackberry in Exchange 2007 and Exchange 2010? What’s all this DAG stuff about? No, I don’t expect you to have all the answers Chris, but I do believe you’ve inspired me to write a blog on the subject…

    Thanks

    J

  • http://www.exchangemymail.com hosted exchange service

    I think there’s nothing wrong with over-configuring. Sometimes it just helps to have those extra-fast response times.
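To turn the figures in John F.’s comment into a rough sizing calculation, here’s a minimal sketch. The ~70% per-version reductions and the later R/W ratios come from his comment; the Exchange 2003 baseline of 1.0 IOPS/user and its read fraction are placeholder assumptions, not quoted values.

```python
# Rough per-version Exchange IOPS estimate using the reductions quoted above.
# The Exchange 2003 baseline (1.0 IOPS/user) and its read fraction are
# placeholder assumptions; the ~70% drops and later R/W ratios follow the comment.

VERSIONS = [
    # (version, IOPS per user, read fraction of total IO)
    ("Exchange 2003", 1.0,             0.67),  # baseline and ratio: assumed
    ("Exchange 2007", 1.0 * 0.3,       0.53),  # ~70% drop, ~53:47 R/W per the comment
    ("Exchange 2010", 1.0 * 0.3 * 0.3, 0.60),  # another ~70% drop, ~60:40 per the comment
]

def estimate_iops(users):
    """Print an estimated total IOPS figure, split into reads and writes, per version."""
    for version, iops_per_user, read_fraction in VERSIONS:
        total = users * iops_per_user
        reads = total * read_fraction
        writes = total - reads
        print(f"{version}: ~{total:,.0f} IOPS "
              f"({reads:,.0f} read / {writes:,.0f} write)")

estimate_iops(users=5000)
```

The output makes the thrust of the comment obvious: for the same user count, the spindle count justified by an Exchange 2003-style design is wildly over the top for Exchange 2010.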
