
Choosing Between Monolithic and Modular Architectures – Part I


The recent proposed acquisition of 3Par by Dell and/or HP has made me think a little more about the direction the storage industry is taking in terms of storage array design architecture.  Since storage arrays became a category of devices in their own right, we’ve seen the growth of the monolithic, sometimes called Enterprise, storage array.  Hu Yoshida discusses the subject in one of his recent blog posts.  Looking at the wide range of storage devices, I’ve categorised arrays into the following groups:

  • Monolithic – this architecture is characterised by the Hitachi USP, HP XP & EMC DMX, and consists of a shared memory architecture with multiple redundant components.
  • Multi-Node – these devices use loosely coupled storage “nodes” with a high-speed interconnect providing scalability by adding extra nodes to the storage “cluster”.  Products in this category include EMC VMAX and 3Par InServ.
  • Closely Coupled Dual Controller – this is the typical “modular” storage architecture characterised by IBM DS8000, EMC CLARiiON, Hitachi AMS and HP EVA.
  • Loosely Coupled Dual Controller - this category describes technologies that are capable of device failover but aren’t closely coupled enough to enable individual LUN failover as the Closely Coupled model permits.  This category is characterised by arrays such as NetApp FAS filers and Compellent Storage Center.
  • Single Controller – this category covers devices that act as standalone products, including SOHO storage devices such as the Iomega IX4 & Data Robotics Drobo series.

The above list isn’t exhaustive and it’s my own personal categorisation.  There are many more vendors of technology than I’ve listed here.  In addition, none of these categories qualifies as “Enterprise” in its own right.  The use of that term is a hotly debated subject.

Monolithic Architectures

[Figure: EMC DMX High Level Architecture]

Monolithic arrays use a shared cache architecture to connect front-end storage ports to back-end disk.  This is shown in the architecture diagrams here, representing the internal connectivity of the EMC DMX and Hitachi USP storage arrays.  Each of the memory units is connected to each of the front-end directors and the back-end disk directors.  Hitachi divide their cache into two halves for Clusters 1 & 2 in the array; EMC have up to eight cache modules.  This architecture has both benefits and drawbacks.  Firstly, having directors connect to all cache modules ensures resources aren’t fragmented; unless cache becomes completely exhausted, there’s always connectivity to another cache module to process a user request.  It also doesn’t matter which port a request comes in on; a cache module can process any request from any port to any back-end disk.  This connectivity is also beneficial in failure scenarios.  If a cache module fails, for example, only the cache on that module is lost; in a fully deployed architecture the total cache would drop (by 1/8th in EMC’s case), but front and back-end connectivity would remain the same.  With this model it is possible to pair up storage ports and have a single LUN presented from one or more ports with no performance impact; the path length between a storage port and a disk adaptor is always the same.
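To make the failure behaviour concrete, here’s a minimal sketch in Python (module counts and capacities are invented for illustration, not real DMX or USP figures):

class SharedCacheArray:
    """Toy model of a shared-cache (monolithic) array: every front-end
    port reaches every cache module, so losing a module costs cache
    capacity but never connectivity."""

    def __init__(self, fe_ports=8, cache_modules=8, gb_per_module=64):
        self.fe_ports = list(range(fe_ports))
        self.cache = {m: gb_per_module for m in range(cache_modules)}

    def fail_cache_module(self, module):
        self.cache.pop(module, None)   # capacity lost; all paths remain

    def total_cache_gb(self):
        return sum(self.cache.values())

    def serve(self, port, lun):
        module = next(iter(self.cache))   # any surviving module will do
        return f"port {port} -> cache module {module} -> {lun}"

array = SharedCacheArray()
array.fail_cache_module(0)
print(array.total_cache_gb())      # 448 of 512 GB: cache drops by 1/8th
print(array.serve(3, "LUN 0x1A"))  # any port still reaches back-end disk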

This any-to-any model also has disadvantages.  The connectivity is complex, and therefore becomes expensive and requires overhead to manage and control the interaction between the various components.  In addition, there’s a limit to the practical scalability of this architecture.  With eight FE, eight BE and eight cache modules, there are 128 connections in place (8 x 8 x 2).  Adding a single cache module requires an additional 16 connections; similarly, adding more front or back-end directors requires more connectivity.  Monolithic arrays are also based on custom components and custom designs, increasing the ongoing maintenance and development costs for the hardware.
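As a quick sanity check on those numbers (plain Python, using the director and module counts quoted above):

# Every FE and BE director connects to every cache module.
def connections(fe_directors, be_directors, cache_modules):
    return (fe_directors + be_directors) * cache_modules

print(connections(8, 8, 8))                          # 128 (8 x 8 x 2)
print(connections(8, 8, 9) - connections(8, 8, 8))   # +16 for one extra cache module
print(connections(9, 8, 8) - connections(8, 8, 8))   # +8 for one extra FE director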

One other point to remember: front and back-end directors have their own processors.  It is possible for the traffic across the directors to be unbalanced and for some processors to be more heavily utilised than others.  I’ve seen a number of configurations where USP V FED ports are running at 100% processor utilisation due to small block sizes.  This means manual load balancing is required, both in initial host placement and subsequently as traffic load increases.  This is worth bearing in mind as we move to more highly virtualised environments, since host port utilisation is likely to start low and rise over time as more virtual machines are created.
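The placement logic itself is simple; it’s the ongoing re-checking that creates the work.  A toy sketch in Python (port names and utilisation figures are invented):

def place_host(port_util, expected_load):
    """Greedy placement: put each new host on the least-utilised FED port."""
    port = min(port_util, key=port_util.get)
    port_util[port] += expected_load
    return port

fed_ports = {"CL1-A": 0.35, "CL1-B": 0.80, "CL2-A": 0.55, "CL2-B": 0.20}
for host, load in [("esx01", 0.15), ("esx02", 0.10), ("dbsrv1", 0.25)]:
    print(host, "->", place_host(fed_ports, load))
# esx01 -> CL2-B, esx02 -> CL1-A, dbsrv1 -> CL2-B

# These placements only stay balanced while the loads stay static; in a
# growing virtual environment they won't, hence the periodic rebalancing.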

[Figure: Hitachi USP High Level Architecture]

Now that the DMX platform has been put out to pasture in favour of the VMAX, it appears Hitachi are the only vendor continuing down the monolithic route.  Next time I’ll discuss Multi-Node arrays and why they may (or may not) be a replacement for today’s monolithic devices.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.

  • http://storagebuddhist.wordpress.com/ Jim Kelly

    I wonder if some of these labels might be a bit arbitrary. Monolithic (i.e. a big stone) seems a strange way to label a switched processor architecture with mirrored cache. Also, modular seems a strange description for the DS8000, which is a highly controlled product and which, from an installation and operation point of view, is not in any sense modular. VMAX is another interesting one, which you term multi-node. The main difference between DMX and VMAX appears to be a move from shared memory to distributed memory. I wonder how many of the VMAXs sold have more than two directors; if few do, is it really that different from a coupled system? The true multi-node system is of course XIV, but you didn’t mention that.

  • http://www.brookend.com Chris Evans

    Jim

    Agreed, the definitions are somewhat generic. Unfortunately it’s difficult to choose definitions that don’t offend. If I said, for instance, that Monolithic was Enterprise, there’d be uproar. :-)

    I haven’t mentioned XIV; however, I’m hoping to build a bigger list that contains as many devices as possible.

    Regards
    Chris


  • http://joshkrischer.com Josh Krischer

    Chris,
    You can’t put the DS8000 in the same category as the rest of the typical two-controller based storage. The DS8000 is a two-node, four-processor (POWER6) clustered SMP (Symmetric Multiprocessing) design, with buses handling data and command movement between subsystems, as well as RAID controller cards offloading RAID functionality from the nodes. The SMP cluster structure is difficult to compare with the EMC DMX and the Hitachi USP because, instead of using a dedicated cache, the DS8000’s cache is allocated as part of the System p server memory. The POWER6 server has two levels of cache (L1 and L2) in addition to its main memory, which creates three levels of hierarchy. IBM claims that the tightly clustered SMP, the processor speeds, the L1/L2 cache sizes and speeds, and the memory bandwidth deliver better performance in comparison to dedicated, single-level caches. The FE and BE use PowerPC processors and ASICs. In summary, the DS8000 has three levels of processors, in comparison to the DMX, for example, which has only two (PowerPC in the FE and BE).

    “One other point to remember: front and back-end directors have their own processors. It is possible for the traffic across the directors to be unbalanced and for some processors to be more heavily utilised than others. I’ve seen a number of configurations where USP V FED ports are running at 100% processor utilisation due to small block sizes.”

    This is true for any structure; in fact, in the large subsystems there is a better chance of accessing the data through another unblocked path.
    See the following paper on Enterprise CU comparison.
    http://www.joshkrischer.com/files/Storage_is_not_a_comodity_v2.pdf

  • Han SOlo

    I look at it more like we look at servers.

    Monolithic is a single big server box. I don’t call it modular just because it has more than one CPU on the motherboard. And modular is like when you have a bunch of nodes but they are all virtualized or clustered together.

    Heck even a blade chassis is really more of a monolithic design.

    In storage, this means to me that if you have a big ole cabinet full of disks from a single vendor, no matter if it’s a USP, DMX, VMAX, Clariion, FAS, EVA, or whatever, it’s monolithic.

    The only true modular storage is something you can actually add nodes to at will from different vendors, each with a different price/performance/capability point.

    This is basically a SAN with arrays from many vendors on it, some providing high end storage, some providing low end storage, and probably behind some sort of virtualization solution so they can be managed in a single namespace/storage pool.

    As a storage admin, that is REAL modular storage to me. Arguing over the pieces the vendors put inside their big cabinets is a waste of time for the most part.

    The really modular storage is where I want to be as a storage guy…think VMWare with storage, not arguing over if a HP Blade Chassis is more modular than the IBM Pclass AIX chassis, or the Sun UltraSparc 6000 chassis.

  • http://www.brookend.com Chris Evans

    Josh

    I have to say that I disagree with you. The IBM DS8X arrays have no shared cache between the processor complexes. SMP isn’t that much different from having a multi-core processor or multi-processor motherboards. IBM might claim the SMP structure gives better performance, but it isn’t all about that; the DS8X arrays rate poorly in terms of floor space efficiency and power utilisation, for example. I’ve done comparisons of them before. I will discuss the IBM architecture at length in one of the posts.

    Regards
    Chris

  • http://joshkrischer.com Josh Krischer

    Chris,
    I agree that the DS8000 has no shared cache between the two SMP sides, but that was not the issue. Putting the DS8000 in the same category as the AMS, CLARiiON, EVA etc. downgrades it to Tier 2, which is not correct. The issue is that another company spread FUD painting the DS8000 as a two-controller subsystem, and we should not follow them.
    Adding an active layer (the SMP) between the FE and BE will draw more energy in comparison to a passive element matrix, but at average capacity most of the energy is consumed by the HDDs, and all the companies use the same HDDs from Hitachi and Seagate.

    Regards
    Josh

  • http://www.brookend.com Chris Evans

    Josh

    Again I have to disagree. Have you just pointed out one of the weaknesses of the DS8K series? If a single SMP complex crashes, the data in cache is lost because it isn’t replicated to the other complex? If cached data isn’t replicated, how can this array be classed as tier 1 or Enterprise? Surely that makes it less reliable than the dual controller models as there’s no cache redundancy. Am I missing something here? I don’t believe I am. I have to say that data availability would be my most important requirement rather than performance or power consumption.

    Regards
    Chris

  • http://joshkrischer.com Josh Krischer

    Chris, that is not true.
    Each side of the cluster has its own cache and also holds the persistent memory – still carrying the name Non-Volatile Storage (NVS) – of the other side. During normal operation, the DS8000 preserves fast writes using the NVS copy on the other side. This cross-connection protects against write data loss in the case of a power failure or other malfunction. The DS8000 uses 4 KByte cache pages, which prevents polluting the cache with unnecessary data during interactive operations.
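
    In outline, the fast-write path looks something like this (an illustrative sketch with invented names, not IBM’s actual implementation):

    class ProcessorComplex:
        def __init__(self, name):
            self.name = name
            self.cache = {}   # volatile read/write cache
            self.nvs = {}     # persistent copy held for the *other* side's writes

    def fast_write(local, partner, page, data):
        local.cache[page] = data   # stage the write in local cache
        partner.nvs[page] = data   # mirror it into the partner's NVS
        return "ack"               # only now is the host told the write is safe

    side_a, side_b = ProcessorComplex("A"), ProcessorComplex("B")
    fast_write(side_a, side_b, page=0x10, data=b"payload")

    # If side A is lost, its in-flight write survives in side B's NVS
    # and can be destaged to disk from there.
    print(side_b.nvs)   # {16: b'payload'}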

    regards

    Josh


  • T.C. Ferguson

    Chris, great post. Even at two years old, it addresses a very relevant question I see customers asking almost every day: modular vs monolithic. The one thing I haven’t been able to discern is whether 3Par leverages global coherent caching or whether its cache layer is isolated in some regard.

    -T.C. Ferguson
    http://www.CornerCafe.net
