At the end of 2016, ClusterHQ, the developers of Flocker, decided to shut down the project with a rather blunt blog post (link).  Rather than take additional funding, CEO Mark Davis (and presumably the board) decided to shut the company down on the basis that it wasn’t clear where revenue would come from.

Based on my attempts to install Flocker, I can say that the software wasn’t straightforward to deploy, and the idea of building out what was basically a failover management process could perhaps have been implemented more simply.  Now we have the emergence of Rook, a scale-out open-source storage solution that sits atop Ceph.

The idea of Rook seems to be to provide an interface between orchestration platforms like Kubernetes and the Ceph storage layer.  Rather than re-invent another scale-out storage platform (and goodness knows we have enough of those), Rook acts as the glue to make the two interoperate, allowing storage to be orchestrated through the Kubernetes command line.
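To make that concrete, here’s a rough sketch of what “storage orchestrated through the Kubernetes command line” looks like in practice.  The resource names and the provisioner string below are illustrative assumptions, not taken from the Rook documentation; the point is that the developer deals only in standard Kubernetes objects, never in Ceph commands:

```yaml
# Illustrative sketch only: the storage class name and provisioner value
# are assumptions, not confirmed against the Rook docs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block          # hypothetical class backed by the Rook operator
provisioner: rook.io/block  # assumed name for Rook's Ceph block provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical claim a pod would mount
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

A pod would then reference the `app-data` claim as an ordinary volume, with Rook translating the request into Ceph block storage behind the scenes; at no point does the user interact with Ceph directly.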

Using Ceph as a foundation for storage is an interesting choice.  The website (and GitHub notes) imply that Ceph has had 10 years of production deployments.  This seems a little generous, as the first agreed “stable” version of Ceph was only released in 2012.  However, putting that issue aside, Ceph does at least provide object, block and file interfaces, even if none of them is the most efficient implementation of its kind.

So therein lies the problem.  Ceph isn’t going to be the platform of choice for everyone.  In fact it’s a gross assumption that anyone would want to use Ceph at all, especially with the proliferation of SDS and hardware-based solutions on the market.  Here’s another thought – one of the issues with Flocker (in my opinion) was the focus on block storage for deploying application data.  Flocker mapped a block device to a host, then formatted the device with an ext3/4 file system before mounting it to the host.  This becomes really restrictive when thinking about sharing data between (for example) Windows and Linux platforms.  It also represents a management overhead if the original LUN size is miscalculated.

The Architect’s View

Having tighter integration with the orchestration layer is a good idea for fixing the Persistent Storage for Containers problem.  However, fixing on a single platform and a single storage layer seems architecturally restrictive.  Perhaps the intention is to start small and build out from a basic configuration.  Certainly Rook isn’t intended to be in the data path; rather it merely acts as a management conduit.

Have you looked at Rook?  Do you have an opinion?  Once I’ve had an attempt at installation, I’ll come back with a little more detail.  In the meantime, feel free to comment if you have some experience or thoughts in this area.

Related Links

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2017 – Chris M Evans, first published on, do not reproduce without permission.


Written by Chris Evans

  • Evan Powell

    Hi Chris – great to see you digging into the storage for stateful workloads on containers space. I’m biased since I’m working w/ OpenEBS – however I truly do think the signs are pointing towards containerization enabling and requiring a fundamentally new storage architecture. However, that does imply – sorry – that we need a new scale-out approach, one that incorporates the orchestration and containerization infrastructure into the storage system itself. The way I think about it is that as you know step one of moving enterprise applications onto containers is often a lift and shift; that works, however it results in a monolithic app crammed into a container, which tends to negate many of the benefits of containerization. Well – Ceph itself is such a monolithic application that has yet to be refactored or rewritten specifically for containers. What the world needs is something written from the ground up – ideally in a language like Go 🙂 – that containerizes controllers and leverages the orchestration of Kubernetes or other container orchestrators and schedulers wherever possible. That’s like the second phase of a migration of applications to containers – when container-native applications emerge, enabling a much more dynamic and developer-friendly environment.

  • Pingback: Rook is the New Flocker? - Gestalt IT()

  • I attended a breakout at Kubecon Berlin 2017 and the video is posted here: My takeaway was two things: (1) they have containerized Ceph to ease management of a Ceph cluster and (2) a reminder that Ceph volumes are supported in Kubernetes (k8s). I could imagine creating a dedicated k8s cluster for Rook, and then serving that up to other k8s clusters that run your containerized apps and DBs, and/or to OpenStack for IaaS. If Ceph isn’t your thing then Rook isn’t either. In that case (forgive my bias since I work at NetApp) the NetApp + Trident dynamic storage provisioner might be: