The Christmas 2015 edition of The MagPi magazine created a world’s first – an entire computer given away on the front of a magazine.  A free Raspberry Pi Zero is certainly a step up from a disc of software, or even the annoying AOL CDs we remember from years gone by.  This is an entire functional computer with a 1GHz processor and 512MB of RAM, delivered on a board not much bigger than a stick of DRAM.

Ever-smaller units of compute have become fashionable, with the original Raspberry Pi being wildly successful (I have to admit to owning ten of them myself), along with a host of similar products from other manufacturers.  HPE even tried to get in on the act at the enterprise level with Moonshot.  I currently use some of my RPis for testing out Docker Swarm.

It’s easy to dismiss the Pi Zero as a hobbyist device, but it represents another step in the continuing trend towards $0 hardware cost.  Although this first model doesn’t have any built-in networking, it does have USB capability that can be used to extend its functionality, and as with everything, generations 2, 3 and 4 will be successively better.

I see these devices as offering an amazing step forward in the design and distribution of computing resources.  With the right chassis, we could create systems with hundreds or thousands of nodes, all capable of processing independent workloads.  However, there are at least two immediate problems.

Problem 1 – Networking.  These devices need to be networked.  Currently the Pi Zero has no real networking capability and depends on a USB On-The-Go (OTG) adaptor.  Solutions need to emerge that allow many devices to be connected together easily without compromising the small-form-factor benefits of the device.  Looking to the enterprise and Moonshot, those chassis include networking modules to interconnect the 45 server cartridges each system can hold, plus additional uplink modules.  Connectivity for both Ethernet and storage also has to be considered: where should my data sit?  If the aim is to build a truly stateless design, then the answer has to be off-board from the server chassis, and that means supporting plenty of bandwidth.

Problem 2 – Software.  Today we’re used to running applications that sit within virtual servers on physical hardware.  Despite what we might think, this is still the most common way of deploying applications, and we rely on nurturing these VMs and scaling up (faster processors, more memory) rather than scaling out across many separate nodes.  That’s not to say scale-out doesn’t exist; we can scale out applications today, but it’s mainly limited to hyper-scale environments running a small number of very large applications, which makes the process considerably easier.  Containerisation is one approach that allows a more dispersed computing model, but it’s in relative infancy, with many issues around storage and networking still to resolve.

So there’s the quandary: we have lots of low-cost compute, but we need a new model for splitting and dividing compute workloads.  We need something that spans both parallel computing (breaking a piece of work down into separate tasks) and concurrent computing (running multiple applications at the same time) as efficiently as possible.  I think we have parts of the solution, but not an answer to the overall problem.  What we still need to think through includes:

  • Shared storage or storage with compute – should a “workload unit” be shipped for execution together with its data, or should it read its data across the network?  Lots of questions there around data concurrency, locking, integrity, bandwidth, caching and so on.
  • How do we determine where to execute a unit of compute – this means having systems to manage load balancing and distribution, cost/performance calculations, locality (running code closest to where the data resides) and platform capability (e.g. ARM versus Intel).  A simple placement sketch follows this list.
  • Application & data integrity – how do we build in policy that ensures applications running on (inherently unreliable) hardware are protected against equipment and data centre failure as well as meeting compliance and other requirements?
  • Commissioning – how do we commission and decommission hardware within an infrastructure environment?  Do we look at automating discovery and have new devices “announce” their capabilities?  A discovery sketch also follows below.
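
To make the placement question above a little more concrete, here is a minimal sketch of the kind of scoring a workload scheduler might apply.  It is purely illustrative – the Node and WorkloadUnit structures, the weights and the dataset names are my own assumptions rather than any existing scheduler’s model – but it shows how platform capability, memory headroom and data locality could feed a single placement decision.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """A compute node and the capabilities it advertises (all names illustrative)."""
    name: str
    arch: str                        # e.g. "arm" or "x86_64"
    free_memory_mb: int
    local_datasets: set = field(default_factory=set)

@dataclass
class WorkloadUnit:
    """A unit of work: the platform it needs plus the data set it operates on."""
    name: str
    required_arch: str
    required_memory_mb: int
    dataset: str

def score(node: Node, unit: WorkloadUnit) -> float:
    """Higher is better; a negative score means the node cannot run the unit at all."""
    if node.arch != unit.required_arch:                # platform capability (ARM vs Intel)
        return -1.0
    if node.free_memory_mb < unit.required_memory_mb:  # not enough memory
        return -1.0
    locality_bonus = 10.0 if unit.dataset in node.local_datasets else 0.0
    headroom = node.free_memory_mb - unit.required_memory_mb
    return locality_bonus + headroom / 1024.0          # prefer local data, then spare capacity

def place(unit: WorkloadUnit, nodes: list) -> Optional[Node]:
    """Pick the best-scoring node for a unit, or None if no node qualifies."""
    scored = [(score(n, unit), n) for n in nodes]
    best_score, best_node = max(scored, key=lambda pair: pair[0])
    return best_node if best_score >= 0 else None

# Example: two Pi-class ARM nodes, only one of which holds the data set locally.
nodes = [Node("pi-01", "arm", 512, {"sensor-logs"}), Node("pi-02", "arm", 512)]
unit = WorkloadUnit("aggregate-logs", "arm", 128, "sensor-logs")
print(place(unit, nodes).name)   # -> pi-01, because data locality breaks the tie
```

A real system would also need to account for current load, cost and failure domains, but even this toy version shows how quickly placement becomes a multi-dimensional optimisation problem.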
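
On the commissioning point, one very simple pattern would be for each new node to broadcast its capabilities when it powers on and for a controller to build an inventory from those announcements.  The sketch below is a bare-bones illustration using a UDP broadcast and a made-up port number; a production implementation would more likely build on something like mDNS/zeroconf and would need authentication, de-registration and health checking on top.

```python
import json
import socket

ANNOUNCE_PORT = 50000   # hypothetical port; any value agreed across the estate would do

def announce_capabilities(caps: dict) -> None:
    """Run on a new node: broadcast its capabilities as a single JSON datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(json.dumps(caps).encode(), ("<broadcast>", ANNOUNCE_PORT))

def listen_for_nodes(inventory: dict) -> None:
    """Run on a controller: record every announcing node in a simple inventory."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", ANNOUNCE_PORT))
        while True:
            data, (address, _port) = sock.recvfrom(4096)
            inventory[address] = json.loads(data)
            print(f"registered {address}: {inventory[address]}")

# On a freshly commissioned node, for example:
# announce_capabilities({"arch": "arm", "memory_mb": 512, "nics": ["usb0"]})
```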

This area of application development and design is one I find really interesting, and it’s definitely a direction for further investigation.  If any readers have suggestions for projects or solutions that are being worked on in this space, please let me know and I’ll include them in future posts.

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2015 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission.

Header image: Raspberry Pi Foundation.

 

Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.