Managing Risk in IT

The IT debacle at RBS has highlighted the dependency large financial organisations (and other companies) have on their IT infrastructure. From what has leaked out into the press, the RBS issue relates to a piece of software called CA-7, used for mainframe batch job scheduling. When I first started in IT in 1987, CA-7 (and its sister product CA-1, used for tape management) were already legacy technology. From memory, I believe CA acquired both products from another company; both had archaic configuration processes and poor documentation. However, they did work and were reasonably reliable.

 

If it Ain’t Broke…

There’s an old adage that says, if it ain’t broke, don’t fix it; meaning if the software works, why change it? Any change inherently introduces risk; make no changes and you don’t introduce unnecessary risk. However, IT infrastructure doesn’t run forever. Change is necessary to accommodate new features & functionality and to cope with growth. Eventually vendors stop supporting certain versions of software and hardware as they entice (and force) you to upgrade and purchase new products.

The hardware risk profile is pretty well understood by most organisations. As servers and storage, for instance, get older, the cost of support increases as parts become more difficult to obtain (and more expensive). There’s a tipping point where maintenance costs outweigh the cost of an upgrade or new purchase, and at that point a justification can be made to replace old hardware. There are also a number of other factors involved for hardware, including space, power and cooling costs, all of which help create a reasonably mature TCO model that can be used as part of a technology refresh.
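To make the tipping point concrete, here’s a minimal sketch of how the comparison might be modelled. Everything below is illustrative: the function names, growth rates and figures are invented, and a real TCO model would use actual maintenance quotes, power and cooling rates, depreciation and so on.

```python
# Illustrative only: estimate when keeping old hardware starts to cost
# more than replacing it. All figures are invented for the example.

def cumulative_keep_cost(years, base_maintenance, maintenance_growth,
                         power_cooling_per_year):
    """Total cost of keeping ageing kit running for `years` years.

    Maintenance grows each year as parts become scarcer and pricier.
    """
    total = 0.0
    for year in range(years):
        total += base_maintenance * ((1 + maintenance_growth) ** year)
        total += power_cooling_per_year
    return total


def cumulative_replace_cost(years, purchase_price, new_maintenance,
                            new_power_cooling_per_year):
    """Total cost of replacing now and running the new kit for `years` years."""
    return purchase_price + years * (new_maintenance + new_power_cooling_per_year)


# Find the tipping point: the first planning horizon at which keeping
# the old kit becomes the more expensive option.
for horizon in range(1, 11):
    keep = cumulative_keep_cost(horizon, base_maintenance=20_000,
                                maintenance_growth=0.25,
                                power_cooling_per_year=8_000)
    replace = cumulative_replace_cost(horizon, purchase_price=60_000,
                                      new_maintenance=5_000,
                                      new_power_cooling_per_year=4_000)
    if keep > replace:
        print(f"Replacement pays for itself within {horizon} year(s)")
        break
```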

 

The Software Risk Profile

However, I’m not sure we can say the same for software upgrades. Working out the risk profile for software is more complex. Firstly, software has no equivalent of hardware parts replacement; software components don’t wear out. Bugs do get discovered in code, but these usually get fixed with service packs and patches.

Going back to CA-7, this software originally ran in mainframe environments supporting perhaps hundreds or a few thousand batch jobs in an overnight schedule.  In an organisation like RBS, the software may be supporting tens if not hundreds of thousands of complex batch interactions.  These may have dependencies on platforms other than the mainframe, which make things even more complex.
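To give a feel for why that matters, an overnight schedule is effectively a large directed graph of job dependencies, and the run order has to be derived from it; one broken or circular dependency definition can stall the whole batch. The sketch below is purely illustrative: the job names are invented, and a real CA-7 definition carries far more detail (triggers, calendars, resource and dataset rules).

```python
# Illustrative sketch of an overnight schedule as a dependency graph.
# Job names and dependencies are invented; a real schedule would hold
# thousands of jobs spanning mainframe and non-mainframe platforms.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each job maps to the set of jobs that must complete before it runs.
schedule = {
    "UPDATE_ACCOUNTS":      {"LOAD_PAYMENTS", "LOAD_STANDING_ORDERS"},
    "LOAD_PAYMENTS":        {"RECEIVE_FEED_MAINFRAME"},
    "LOAD_STANDING_ORDERS": {"RECEIVE_FEED_MIDRANGE"},  # off-mainframe dependency
    "PRINT_STATEMENTS":     {"UPDATE_ACCOUNTS"},
}

# Derive a valid run order. A circular dependency raises CycleError,
# which is exactly the kind of definition error that can stall an
# entire overnight batch.
run_order = list(TopologicalSorter(schedule).static_order())
print(run_order)
```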

It’s easy to see that too much risk had been concentrated in a single piece of infrastructure software, if a failed upgrade could result in such disastrous consequences. When software becomes this complex, upgrades tend to be deferred again and again until they become critical, at which point a failed upgrade has massive consequences.

The risk of failure in this instance was clearly not understood. The upgrade took place midweek on a system that appears to handle the account updates for every customer across three banks. With such a high risk profile, this change should have been scheduled for a quiet period such as a bank holiday. The change and any subsequent backout should have been covered by senior staff – The Register article implies junior staff were involved.

Finally, questions have to be asked as to how a junior member of staff could delete the entire input queue updating millions of customer records, which then required “manual” input. That statement either makes no sense or demonstrates huge flaws in RBS’ batch structure.

The Architect’s View

Software and application upgrades are complex, and in large organisations that complexity can be one risk too many. Centralising to reduce costs shouldn’t come at the expense of introducing excessive risk. RBS (and probably many other financial organisations) need to reflect on their system designs and look to mitigate these kinds of scenarios. From my own experience, I know we could see another one of these incidents happen at any time.


  • Anonymous – sorry, ITB

    Chris, not sure what you mean by “in 1987, CA-7 (and its sister product CA-1, used for tape management) were already legacy technology”; my recollection is that in 1987 CA-7 was one of the two leading schedulers, and it still is.

    The crux of the matter, in my opinion, is that CA-7 is installed in hundreds of the largest companies around the world, and if this minor release upgrade (it wasn’t even a version upgrade) was the cause, it’d have been repeated in other banks and businesses. So the problem seems to lie with RBS.

    CA bought CA-7 from University Computing Company (UCC); it was originally known as UCC-7.

    • chrismevans

      I see your point about RBS, but what we don’t know is how this change came to occur. I’m not sure I’d be confident changing a component of my infrastructure that could cause the whole bank to collapse without (a) having scheduled it for the quietest period possible and (b) having worked out every fallback angle available. Maybe many other banks have been forced into this upgrade but have chosen to tackle it differently – maybe those other banks upgraded in a scheduled rather than an enforced manner – but it’s all speculation, I suppose.

      As for legacy CA-1/CA-7, I worked with those products for the first 10 years of my career and I don’t remember them changing one bit. The documentation was poor and frequently wrong, and the implementation was poor, but, as you mention, the products were probably some of the only ones around.

      Eventually I moved to using OPC/A more, and that was infinitely better than the CA product. CA-1, though, seemed to survive due to the lack of any competition. IBM brought out a competitor as part of DFSMS/MVS, but it was even worse than CA-1.

      Thanks for the comment, though.
      Regards
      Chris
