
Enterprise Computing: It’s All About The Process!


I’ve been doing some work at a large financial organisation over the last couple of weeks.  As always, the result of my analysis is that the technology isn’t the main problem. 

Let’s face it, most enterprise storage technology is pretty similar; arrays all have similar features – RAID, Fibre Channel support, massive scalability and so on.  Switches are no different; 90% of the functionality required is connecting hosts to storage.  

What matters is how the technology is used, and that all comes down to how processes and procedures are implemented.

ITIL gives you a framework in which to work; it does the categorisation for you, but you’re going to have to implement the processes yourself.  Here are a few things that always seem to crop up:

  • Decommissioning Old Technology.  Or what we should really call Technology Lifecycle Refresh.  Many sites run technology stretching back n-1, n-2, n-3 or even n-4 generations.  They have kit from multiple vendors, often running at different code levels.
  • Keeping Host Firmware/Driver Levels Current.  Unfortunately this piece of work tends to fall between teams – the platform team won’t do it because it’s storage software; the storage team can’t do it because they aren’t permitted to make host changes.  Not keeping host levels up to date is a disaster waiting to happen.  It introduces risk into the environment and will eventually block upgrades, as there is a limit to the level of code that vendors will support.
  • Demand Management.  By this I don’t mean capacity planning, which is a whole subject in its own right.  I mean negotiating with the business to understand their requirements 3, 6, 12 or 18 months out.  This also means discussing the specifics of those requirements – what tier, what performance, what replication and so on.  By understanding customers’ needs, it becomes easier to identify technology that can reduce cost and increase business advantage.

There are lots more; feel free to throw a few out for discussion.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Ron Major

    In my opinion, Demand Management is the most difficult aspect of storage management. If I could get people to accurately project 6 to 12 months out, my job would be so much easier. In my experience, the reality never matches the projections. Projects pop up out of nowhere and they never know how much storage they really need, but they need it right away. At least with life cycle refreshes, you know when they will happen and can plan for them. It is difficult to plan for the unknown.

  • Agathian

    IMHO, the “specifics of the requirement” point you included under Demand Management holds the key in large environments.

    How would you classify your tiers – SATA/FC? Enterprise/modular array? RAID level? HA? There are so many permutations and combinations – and technology keeps changing so quickly. For example, today’s modular array might actually be better than yesterday’s enterprise array on certain factors.

    It would be ideal if we could classify tiers against specific, measurable criteria and map them to application requirements. Thoughts?

  • Chris Evans

    Ron

    All good points. The key piece here is the “they need it right away” part. It’s assumed storage resources are infinitely available, but as we know the reality is far different. The surrounding/related processes have to be efficient too – for instance, I’m sure most Storage Managers would get the green light to purchase more storage whenever they needed it if the chargeback model was fully implemented and the business requesting new storage had a cost centre against which those purchases could be pre-validated (a rough sketch of that idea follows below). Most of the time, processes are not that mature, so time-consuming POs have to be raised, justified and signed off before hardware is even deployed – and that’s a discussion on its own!

    Chris
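
    To make that pre-validation idea concrete, here is a minimal sketch, assuming a simple per-tier $/GB/month rate and a remaining monthly budget per cost centre; the tier rates, cost centre codes and budget figures are all hypothetical.

    ```python
    # Hypothetical sketch: pre-validating a storage request against a cost centre
    # budget before a PO is raised. All rates, cost centres and budgets are
    # illustrative assumptions, not real figures.

    TIER_RATE_PER_GB_MONTH = {"gold": 10.0, "silver": 6.0, "bronze": 3.0}

    # Remaining monthly budget per cost centre (hypothetical)
    COST_CENTRE_BUDGET = {"CC-1234": 50_000.0, "CC-5678": 8_000.0}

    def pre_validate(cost_centre: str, tier: str, capacity_gb: int) -> bool:
        """Return True if the requesting cost centre can absorb the monthly charge."""
        monthly_charge = capacity_gb * TIER_RATE_PER_GB_MONTH[tier]
        return COST_CENTRE_BUDGET.get(cost_centre, 0.0) >= monthly_charge

    # Example: a 2 TB Silver request from cost centre CC-1234
    print(pre_validate("CC-1234", "silver", 2048))  # 2048 GB * $6 = $12,288 <= $50,000 -> True
    ```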

  • Chris Evans

    Agathian

    You raise lots of good points. I think you've identified the key criterion when designing storage tiers – don't make it hardware specific. Typically, customers have no idea what kind of performance they need from their storage. If you quote them IOPS, millisecond response times and so on, they wouldn't really know. In addition, they don't want to go through a laborious classification process. What they do know is that they want certain service criteria – availability, consistent response time and so on.

    I prefer to use a tiering model where I rate storage in simple relative terms. For example: Gold Tier – highest performance, highest availability; Silver Tier – good performance, high availability; Bronze Tier – acceptable performance, acceptable availability. Obviously you can flesh this out a bit more; this is just an example. The tiers then need to be associated with a cost – e.g. Gold = $10/GB/month, Silver = $6/GB/month, Bronze = $3/GB/month. The aim is to use the incentive of cost to drive behaviour. Most storage will be targeted at Silver, with high-performing databases justifying Gold. Bronze represents low-cost, archive-type storage.

    At the back end, you can then deliver the storage requirements in whatever way you choose – so as technology becomes faster, cheaper and better, you can deploy Silver on next-generation hardware without reference to the customer, as long as you're meeting their service levels. For product-to-product comparisons, you may want to choose a nominal response time; for instance Gold = <=10ms for 99.99% of samples in a 5-minute measurement, Silver = <=20ms, Bronze = <=30ms. Again, these are only examples, not real figures. Having an internal benchmark makes sure you're providing consistent response times as you move in new hardware.

    The specific definition of the tiers depends on what you feel is important to your customers, so you may want to include an element of remote replication, recoverability, choice of enterprise or modular arrays and so on into the definitions.

    In terms of data classification, use some sensible broad categories; for example, non-production gets Bronze level for everything. Production gets Silver as default. Where databases require additional performance, Gold applies. Using generic tier names rather than numbers allows others to be slotted in; for instance, if Gold isn't good enough and a SSD tier is needed, this could be slotted in as Platinum.

    I guess what I'm saying is: make it service-based; don't expose specific technology to the customer; charge for usage by tier; and use the cost structure to incentivise behaviour. A rough sketch of what that tier model might look like follows below.
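
    As an illustration only, here is a minimal sketch of the tier model described above. The Gold/Silver/Bronze names, rates and response-time ceilings come from the example figures in the comment; applying the 99.99% target to Silver and Bronze, and the classification helper and its inputs, are assumptions of this sketch.

    ```python
    # Minimal sketch of a service-based tiering model (illustrative only).
    # Rates and response-time ceilings use the example figures from the comment;
    # the 99.99% target for Silver/Bronze and the classification rules are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        rate_per_gb_month: float  # chargeback rate in $/GB/month
        response_ms: float        # response-time ceiling
        percentile: float         # fraction of samples that must meet the ceiling

    TIERS = {
        "gold":   Tier("Gold",   10.0, 10.0, 0.9999),
        "silver": Tier("Silver",  6.0, 20.0, 0.9999),
        "bronze": Tier("Bronze",  3.0, 30.0, 0.9999),
    }

    def classify(environment: str, is_database: bool, needs_extra_performance: bool) -> Tier:
        """Broad classification: non-production -> Bronze, production -> Silver,
        production databases needing extra performance -> Gold."""
        if environment != "production":
            return TIERS["bronze"]
        if is_database and needs_extra_performance:
            return TIERS["gold"]
        return TIERS["silver"]

    def meets_sla(tier: Tier, samples_ms: list[float]) -> bool:
        """Check a 5-minute window of response-time samples against the tier target."""
        within = sum(1 for s in samples_ms if s <= tier.response_ms)
        return within / len(samples_ms) >= tier.percentile

    # Example: a production database needing extra performance lands on Gold
    tier = classify("production", is_database=True, needs_extra_performance=True)
    print(tier.name, f"${tier.rate_per_gb_month}/GB/month")
    print(meets_sla(tier, [4.2, 5.1, 8.9, 9.7]))  # True: all samples within 10ms
    ```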
