Containers vs Virtualisation


With the release of Docker 1.0, Linux containers are one of IT's current hot topics.  How do Docker, and containers in general, fit into the world of virtualisation, and is the technology simply riding a hype curve?

Background

The idea of containers isn't a new one.  Sun Microsystems' Solaris operating system introduced containers (or Zones) as early as 2004, and features such as Linux control groups have allowed process group isolation since 2007.  The term "containers" is best described as operating-system-level virtualisation.  Rather than creating a completely separate virtual machine (VM) instance for each new application (as traditional virtualisation would do), containers allow multiple isolated user-space instances of an operating system to exist on the same Linux or Unix-based machine.

To understand how this is achieved, we need to know how operating systems like Linux divide up processes and virtual memory.  To implement tight security and fault tolerance, code executes either in kernel mode (or kernel space) or user mode (sometimes called userland).  This segregation allows sensitive or privileged tasks, such as process scheduling or device driver support, to run in the kernel, while application functions run in user mode.  User-mode processes communicate with and use the kernel through the kernel API and system calls.
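
To make the kernel/user mode split more concrete, here's a minimal Go sketch (my own illustration, assuming a Linux host) of a user-mode process asking the kernel to do work on its behalf through a raw system call; every library print or file operation performs the same transition under the covers.

```go
// Minimal sketch, assuming a Linux host: a user-mode process asks the
// kernel to perform I/O on its behalf via the write(2) system call.
// (The standard library's print functions make the same transition
// behind the scenes.)
package main

import (
	"syscall"
	"unsafe"
)

func main() {
	msg := []byte("hello from user mode, written by the kernel\n")

	// Control transfers to kernel mode, the kernel writes to stdout
	// (file descriptor 1), then execution returns to user mode.
	syscall.Syscall(syscall.SYS_WRITE,
		uintptr(1),
		uintptr(unsafe.Pointer(&msg[0])),
		uintptr(len(msg)))
}
```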

This method of operation will be familiar to anyone with a mainframe background.  On the z/OS operating system (and previous variants such as MVS/XA, MVS/ESA and OS/390), user processes run in an address space, with multiple tasks (TCBs) providing multi-tasking support, but share a common "kernel" through libraries on the SYSRES, or system residence volume.  Privileged instructions are executed using supervisor calls (SVCs), which run in supervisor mode – the equivalent of running a process in the kernel.  Thus each address space is logically isolated from the others using virtual memory addressing; however, all address spaces and tasks are processed, or "dispatched", on the same z/OS instance.

Traditional Virtualisation

Containers implement virtualisation by effectively running multiple copies of userland on the same operating system instance.  These copies all use the same kernel and so have similar dependencies and functionality.  Compare this to traditional server virtualisation, where each virtual server deploys its own entire copy of the operating system – kernel and libraries included.
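
To illustrate what "multiple copies of userland on one kernel" looks like at the lowest level, here's a minimal Go sketch (assuming a Linux host and root privileges) using the kernel namespace features that container runtimes build on.  The shell it starts gets its own hostname and process tree, yet uname -r inside it still reports the host's kernel.

```go
// Minimal sketch, assuming a Linux host and root privileges: start a
// shell in its own UTS, PID and mount namespaces – the kernel features
// that container runtimes such as LXC and Docker build on. The shell
// shares the host kernel but sees its own hostname and process tree.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Inside the new namespaces, "hostname foo" won't affect the host
	// and "echo $$" reports PID 1, yet "uname -r" shows the host kernel.
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```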

The difference between the two forms of virtualisation starts to become clear.  Containers allow the user to run multiple similar instances of an operating system or application within the same O/S whereas server virtualisation creates machines that are logically isolated and can run different platforms.  Therefore, containers can be used to provide highly efficient deployments of similar applications that all use the same kernel code.  Server virtualisation allows each instance to run entirely separately, supporting many different (and non-Linux) operating systems.

There are some obvious benefits and disadvantages in using containers over VMs:

  • Containers can be created almost instantly – as fast as spawning a new Linux process.  This makes them excellent for scenarios where many transient, temporary instances need to be created and destroyed.
  • Containers are “lightweight”, sharing the same kernel and libraries and taking very little additional disk space.
  • Containers scale well – Linux already manages high scalability in processes, which equates to containers.

However:

  • Containers all run on the same O/S instance; so if that O/S goes down or is rebooted, they all go down.
  • Containers can’t run other operating systems like Windows (or anything not based on the Linux kernel).
  • Containers aren’t great for permanent data storage as they are easy to destroy, so other techniques need to be used to store data used by containers.
  • Containers aren’t as flexible as VMs in terms of resilience or portability (think vMotion).

Application Use

So, why would anyone use containers?  The most obvious benefit is high scalability.  Imagine running many web server instances, each with its own database.  A traditional deployment might place them all on a single VM, making it difficult to manage and prioritise the workload generated by each site.  Containers provide more flexibility in workload management without having to resort to deploying many virtual machines.  This kind of implementation is effectively PaaS, or Platform as a Service, where the container doesn't need to be maintained or patched as this is handled by the owning operating system.
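
As a rough sketch of that kind of scale-out (the nginx image, the port range and the use of the Docker CLI here are illustrative assumptions, not a recommended deployment), the following Go snippet starts several similar web server containers side by side on one host:

```go
// Rough sketch: scale out several similar web server instances as
// containers by shelling out to the Docker CLI. The "nginx" image and
// the 8080+ port range are illustrative assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for i := 0; i < 5; i++ {
		name := fmt.Sprintf("web-%d", i)
		ports := fmt.Sprintf("%d:80", 8080+i)

		// Each site runs as a lightweight container sharing the host
		// kernel, rather than as a full virtual machine of its own.
		out, err := exec.Command("docker", "run", "-d",
			"--name", name, "-p", ports, "nginx").CombinedOutput()
		if err != nil {
			fmt.Printf("failed to start %s: %v (%s)\n", name, err, out)
			continue
		}
		fmt.Printf("started %s: %s", name, out)
	}
}
```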

Where does Docker fit in?  Docker's tools simply make the process of creating and managing containers easier; under the covers they use existing containerisation software such as LXC, along with Docker's own container library, libcontainer.

The Architect’s View

Containers and Docker aren't going to change the world and won't replace traditional server virtualisation, as there are limitations to the way containers can be used.  However, they do represent a new opportunity to scale environments more effectively and potentially reduce VM sprawl.  This will require developers to understand the differences in the deployment model for software and application execution.  I'll be doing more work in the coming weeks with "How To's" on getting started with Docker and other container solutions.


Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Subscribe to the newsletter! – simply follow this link and enter your basic details (email addresses not shared with any other site).

Copyright (c) 2009-2014 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Mlambert890

    Have to respectfully disagree. Docker, and the current state of containerization, is evolutionary, but long term there is no way monolithic guest OS on hypervisor continues to be the standard IMO. I elaborated a bit on this topic a while back (don't know if linking is OK, but I'll give it a shot): http://complaintsincorporated.com/2012/06/14/what-lessons-can-we-learn-from-hpc/

    • Chris M Evans (http://architecting.it)

      Thanks for the comments, and yes your link does work!

      I think server virtualisation became popular because it was easy for people to understand and manage. If you already had a server team, all they had to do was move to supporting virtual rather than physical devices. All your processes around change control, maintenance and so on stay the same.

      Containerisation represents a different challenge, with as much change to process as technology because the risk profile, the management processes and of course the skill sets are different.

      I agree with you that in an ideal world we'd have moved to break down the fundamentals of computing into their constituent parts; however, I don't think hardware architectures have been flexible enough to achieve that until now.

      More importantly, wholesale change is extremely difficult for people to grasp (they love working in the way they have always done) and of course there’s a whole legacy world that would have to be rewritten and tested. This brings in uncertainty and risk.

      Ultimately, in 50 years' time, people may be looking back and laughing at the naive way we simply packaged physical servers into virtual ones, in a similar way to how we laugh at the first IBM 5MB disk drive that needed a forklift to put it into a jumbo jet.

      Change is both fast and slow depending on how you measure it; for now I still think nothing will change quickly, but ask me again in 30 years' time (when I hope to still be alive) and perhaps the landscape will look totally different!

      Chris

      • Mlambert890

        Let’s check back in 5. Hopefully we’re both alive! :-)

  • Tim Wort

    One small correction. Sun marketing overloaded the term "Containers"; the technical detail is that while a zone is a container, not all containers are zones: for example, Solaris projects are containers as well. Oracle has moved away from the term "container" for Solaris zones.

    A container is something that contains a defined workload and allows the application of resource controls.

    Not a big deal but, as is often said, the devil is in the details. :) I look forward to Linux containers having the sophistication of Solaris zones, someday.

  • Pingback: Getting Started with Docker | Architecting IT Blog
