
Enterprise Computing: 9×7=63 – The Feeling of Power


I’m on holiday this week, relaxing in the south of Spain with some lovely warm temperatures, which have been a boon compared to the summer we had in the UK this year. Part of my holiday reading has been Isaac Asimov; I’m ploughing through his short stories and am currently on one called “The Feeling of Power”.

The story discusses the rediscovery of basic mathematics, computed on paper rather than through a wholesale dependency on computers. The analogy to the storage industry wasn’t lost on me; we developed techniques in the ’70s and ’80s for managing mainframe storage which appear, to a certain degree, to have been lost in the move to plentiful resources and the Windows/Unix age.

Like the Asimov story (which focuses on two warring nations and their tit-for-tat technology advances), the storage fight is against the continual growth in demand for storing information versus new technologies which improve our capabilities to store more data.

I wonder whether we should go back to first principles for data storage – and what were the ’80s “golden age” methodologies that get referred to so often? Well, firstly we have to accept that it was a different time. The volume of data was nowhere near the levels we have today. However, there was a focus on cost – as there is today. My experience of mainframe storage revolved around the following:

  • Standards. I worked at a site recently where the Storage Architect didn’t believe in setting sensible provisioning standards, being happy to rely on the ability of software to handle multi-pathing numbering settings. Whilst this is technically possible, from a practical standpoint standards need to be adhered to. It’s common sense, really. When you’re diagnosing problems, looking at the loading and balancing of a system or planning its scalability, standards are essential.
  • Process. This is one of the key pieces of mainframe storage management. In the past I regularly trawled volumes for uncatalogued datasets (files), scanned catalogues, VVDSs and VTOCs for rogue entries, and ensured all datasets adhered to the standards laid out in the DFSMS configuration. There was a continuous focus on ensuring all datasets on disk were valid and required; DFHSM sucked up unreferenced datasets and moved them to tape, then on to eventual expiration (subject to retention of a backup copy, of course).
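
That housekeeping sweep can be sketched in modern terms. A minimal, hypothetical illustration of the idea – flag any dataset that is uncatalogued or unreferenced beyond a retention threshold as a migration candidate. The dataset names, the 90-day threshold and the data structures are all assumptions for the sake of the example, not real DFSMS/DFHSM behaviour:

```python
from datetime import date, timedelta

RETENTION_DAYS = 90  # assumed site policy, not a DFSMS default

def sweep(datasets, catalogue, today):
    """Classify datasets on a volume, DFHSM-style.

    datasets:  list of dicts with 'name' and 'last_ref' (date of last reference)
    catalogue: set of catalogued dataset names
    Returns (valid, migrate) lists of dataset names.
    """
    valid, migrate = [], []
    cutoff = today - timedelta(days=RETENTION_DAYS)
    for ds in datasets:
        uncatalogued = ds["name"] not in catalogue
        stale = ds["last_ref"] < cutoff
        if uncatalogued or stale:
            # candidate for migration to tape, then eventual expiration
            migrate.append(ds["name"])
        else:
            valid.append(ds["name"])
    return valid, migrate

# Example run with invented dataset names
today = date(2012, 9, 1)
volume = [
    {"name": "PROD.PAYROLL.DATA", "last_ref": date(2012, 8, 30)},
    {"name": "TEST.SCRATCH.OLD",  "last_ref": date(2012, 1, 5)},
    {"name": "ROGUE.NOCAT.DSN",   "last_ref": date(2012, 8, 31)},
]
catalogue = {"PROD.PAYROLL.DATA", "TEST.SCRATCH.OLD"}
valid, migrate = sweep(volume, catalogue, today)
print(valid)    # ['PROD.PAYROLL.DATA']
print(migrate)  # ['TEST.SCRATCH.OLD', 'ROGUE.NOCAT.DSN']
```

The point of the sketch is that the process is simple and mechanical – which is exactly why it was run continuously rather than left to chance.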

Of course in the Asimov short story, the ultimate result of re-discovering mathematics was not explained.  I’d hope that in the Storage world, we can learn from the past and improve on what we already know.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://www.bupa.com.au Gerard

    Agree wholeheartedly on this. The biggest challenge around storage growth is environment proliferation in relation to major change programs. In practice, 5 out of 7 environments will have less than 5% data difference. I miss the diligence of process and control of the mainframe days, but know we also need to ensure customer agility.
    At an IT development level nobody wants to share anymore. We are not prepared to rationalise our change/test footprint at a data level. Why? Because this takes time, time is money, and storage is “cheap” by simplistic comparison.
    But! If we have a defined and accepted process for data isolation (cut-down environments), we can meet our development/testing requirements and also reduce storage.
    Our storage vendors will tell us that a data de-duplication and thin-provisioned storage solution is the answer. Is it? Then of course it needs to be backup-integrated or otherwise replicated – $, $, $. Thank you very much… $$… next please!
    I draw a long bow comparison to the home utility products sold on daytime TV. The reality is the only way to keep your carpet and floor tiles clean is to do the work: housekeeping.

    Gerard

  • http://mattpovey.wordpress.com Matt Povey

    While I couldn’t agree more about the value of re-learning some lessons, I’m not sure that I agree with your analogy. It strikes me that it might be of more utility (and more fun) to look at computing paradigms in terms of politics. Like this:

    The mainframe era was monarchy (or autarchy at any rate). Thorough centralisation, the high cost of infrastructure and support, and the necessary complexity and rigour of process meant that computing could only really be delivered and controlled by a tiny elite. The mainframe team, then, were collectively the IT overlords. The upshot was that everything worked, and worked efficiently (if expensively).

    With the advent of open systems, a libertarian revolution swept the IT industry. Open interfaces, cheap hardware and a burgeoning desire among the business to automate and manage business processes led to an explosion of IT across organisations. Initially, this revolution saw the creation of mini-IT departments across organisations which were able to build new applications quickly and without the fuss of process and authorisation that was necessary if you had to ask the Mainframe autarchs.

    Over time though, the finance departments realised that this libertarian experiment had gone too far. The expense of multiple mini IT departments was causing strain, and so a dictatorship was imposed. Sadly for this new dictatorship, the open systems world lacked the tools, and particularly the integration, necessary to re-create the rigour of the mainframe era. Worse, by trying to impose that rigour, the newly centralised IT departments found themselves unable to deliver services to their customers as quickly and simply as had been possible when they each had their own IT teams. The business began to demand their freedom back and IT, tired of the pressures of power, agreed!

    And so the age of the cloud dawned. Much of the operational rigour available to the mainframe is now possible to apply to open systems. This time though, that rigour is coupled with an openness and simplicity of provision that allows much of the agility of the libertarian era to be re-created while finally being manageable.

    The standards and processes are back but this time, they’re free. The birth of IT liberty and democracy is upon us!

    I don’t think I have ever stretched a metaphor that far before :)

  • Dan

    Amen to that brother! OS/390 DFSMShsm rules…!
