Hyper-convergence has been one of the biggest IT trends of the last 24 months, with start-ups and established technology vendors offering a suite of hardware and software solutions. Many have conflated hyper-convergence and private cloud as a single way of deploying and managing IT infrastructure and applications. Although there are many similarities between the two technologies (and hyper-convergence does enable private cloud), one can't simply be substituted for the other. As we will discuss, hyper-convergence is an enabler of, rather than a definition of, private cloud, and a number of components needed for a complete cloud solution are missing from hyper-convergence. As we walk through what cloud means, we will discuss hyper-convergence and the gaps that have to be filled to deliver a true private cloud solution.
Taking a step back, it's worth putting some definitions in place. Cloud computing is well defined, and the generally accepted characteristics are those put forward by NIST (the US National Institute of Standards and Technology), which include:
- Elasticity – a cloud service should provide the ability for the customer to scale their resource requirements up or down, without needing to consider how the services are delivered or implemented – resources should be seen as "infinite" in availability.
- On-Demand – services should be available on-demand to the customer, without the need for direct human interaction (note that in private cloud infrastructure, the provisioning of services may involve authorisation that has some human component). This effectively means rapid provisioning of services.
- Resource Pooling – resources are pooled together to deliver a multi-tenant solution with no requirement for customers to have an understanding of the location of hardware infrastructure. Pooling is required to mitigate the fragmentation of resources seen in traditional enterprise deployments.
- Broad Network Access – services are accessible through network services via a range of end-user computing devices, including phones, laptops, tablets and desktops.
- Measurability – services are measured and billed based on service usage rather than hardware consumption. Charges are not directly related to the cost of deploying the underlying infrastructure.
Cloud in The Enterprise
These definitions work whether we are talking about private, public or hybrid cloud deployments. In the enterprise, IT resources have generally been implemented on a project basis, with infrastructure acquired to meet the requirements of a specific project. Many enterprises will have moved towards implementing private cloud through the use of infrastructure-based service catalogues (a list of products and services that internal customers can consume) as well as billing through show-back or charge-back. The more difficult step is to implement the characteristics of Elasticity and On-demand provisioning. Both represent technical, operational and financial challenges that we will discuss later. In general terms, we can see that the basis of private cloud is in changing the consumption model for the customer from one that is infrastructure-based to one that is service-based.
Private cloud is typically thought of as being based on IaaS, or Infrastructure-as-a-Service. While this is the most common deployment, it’s also possible to have PaaS (Platform-as-a-Service) or SaaS (Software-as-a-Service) implementations. Private PaaS could be implemented around platforms such as .NET or Java. A good example of SaaS in a private cloud could be email, where end users are charged for their usage per mailbox.
The Rise of Hyper-Convergence
Hyper-convergence represents an evolution of the converged infrastructure solutions that were introduced to the market from 2009. In reality, converged infrastructure solutions were simply packaged offerings comprising server, storage and networking, where the vendor had brought together their own products and pre-certified the combination for compatibility. Some vendors added orchestration tools and developed pre-tested configurations for specific application workloads. Converged solutions don't provide the ability to implement private cloud, as the technology is usually difficult to expand on-demand, typically requiring the deployment of another "chunk" of infrastructure to add new resources. The saving here is in the design and configuration of the hardware, rather than the operation of the IT service.
Hyper-convergence goes a step further by merging the separate hardware components (server and storage) into a single physical form factor. The services that were previously provided by a dedicated storage array are now delivered through software using a distributed storage platform across all of the servers in a multi-node cluster configuration. Hyper-converged solutions deliver resiliency at both the disk and node level; if a disk fails in a server, the data can be recreated elsewhere in the cluster; if a node fails the compute is moved and the data recreated on a different node.
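The rebuild behaviour described above can be sketched in a few lines. This is a deliberately simplified model, not any vendor's implementation: it assumes a replication factor of two and invented node names, and simply re-replicates any block that loses a copy when a node fails.

```python
# Simplified sketch (assumed two-way replication, hypothetical node names):
# each data block is kept on two different nodes; when a node fails, the
# surviving copy is replicated to another node to restore redundancy.
from collections import defaultdict

class Cluster:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.replicas = defaultdict(set)  # block -> set of nodes holding a copy

    def write(self, block, primary, secondary):
        assert primary != secondary, "replicas must live on different nodes"
        self.replicas[block] = {primary, secondary}

    def fail_node(self, failed):
        self.nodes.discard(failed)
        for block, holders in self.replicas.items():
            holders.discard(failed)
            if len(holders) < 2:
                # redundancy lost: rebuild the block on a surviving node
                target = next(n for n in self.nodes if n not in holders)
                holders.add(target)

cluster = Cluster(["node1", "node2", "node3"])
cluster.write("blockA", "node1", "node2")
cluster.fail_node("node1")
assert cluster.replicas["blockA"] == {"node2", "node3"}  # redundancy restored
```

A real platform would also throttle rebuild traffic and account for node capacity, but the essential idea is the same: redundancy is a property of the cluster, not of any single device.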
Compared to deploying multiple separate components (whether as part of a bespoke or converged solution), hyper-converged is seen as a simpler and more efficient way to implement infrastructure services because:
- It provides a single platform to manage, eliminating traditionally expensive skills such as storage management. Most hyper-converged solutions provide a single management interface or integrate directly into the hypervisor management tools, removing the need to understand or manage storage resources. Concepts like LUNs are banished in favour of a pool of available resources.
- It uses resources efficiently; spare processor capacity on external storage arrays and compute nodes is brought together and can be used for either VMs or delivering I/O. Hardware is collapsed to the minimum required, saving on space, power and cooling.
- It provides efficient scaling, through the addition of nodes to a hyper-converged cluster. Increasing capacity can be as simple as racking, networking and powering on a new node, with the rebalancing of resources handled automatically.
- It potentially reduces cost through lower capital expenditure and the ability to add capacity on-demand in a more granular fashion.
- It removes the traditional overhead associated with hardware replacement; new capacity can be added to a cluster, then old hardware nodes evacuated and simply powered off. This migration process can be achieved with little or no impact to the end user.
Hyper-convergence within a Private Cloud Strategy
As we start to compare private cloud and hyper-converged solutions, we can see that both are aligned in the way that resources are deployed and consumed. This alignment implies that hyper-converged solutions can be used as part of a private cloud strategy. Looking back at our cloud definitions, taking each in turn:
Elasticity – cloud services enable scaling up or down on demand. Hyper-converged solutions typically provide this capability. Compute and storage (capacity & performance) can be increased by adding nodes to a hyper-converged configuration. The process of adding nodes is typically quick and handled by the hyper-converged platform directly, with resources rebalanced across all nodes. This highly granular incremental process for increasing resource capacity allows private cloud providers to be responsive to their internal customers; the process of adding nodes takes hours rather than the days or months that traditional resource deployment takes. Simply adding nodes to a configuration removes much of the design and planning work traditionally associated with deploying server and storage infrastructure. From a financial perspective, adding a node can be achieved at a lower incremental cost than having to deploy an entire storage array or new server cluster. It's worth mentioning, however, that just-in-time purchasing of new equipment may be difficult to reconcile against budgets that are planned 12 months or more in advance.
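The rebalancing step mentioned above can be illustrated with a toy scheduler. This is a hypothetical greedy policy, not a specific product's algorithm: when a node joins (or loads drift), VMs move from the busiest node to the least loaded one, but only while a move actually narrows the gap.

```python
# Hypothetical rebalancing sketch: nodes maps a node name to its list of
# (vm_name, vcpus) tuples. VMs migrate greedily from the busiest node to
# the least loaded node until no move improves the balance.
def rebalance(nodes):
    def load(name):
        return sum(vcpus for _, vcpus in nodes[name])

    moved = True
    while moved:
        moved = False
        busiest = max(nodes, key=load)
        emptiest = min(nodes, key=load)
        if nodes[busiest]:
            vm = min(nodes[busiest], key=lambda entry: entry[1])
            # move only if the donor stays at least as loaded as the receiver
            if load(busiest) - vm[1] >= load(emptiest) + vm[1]:
                nodes[busiest].remove(vm)
                nodes[emptiest].append(vm)
                moved = True
    return nodes

# A freshly added empty node ("node3") attracts work from the busy one:
cluster = {"node1": [("vm1", 4), ("vm2", 4)], "node2": [], "node3": []}
rebalance(cluster)
```

Production schedulers weigh memory, storage locality and migration cost as well, but the shape of the problem — automatic levelling when capacity is added — is what makes node-based scaling attractive.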
On-Demand – cloud services should allow end users to consume resources on-demand. Hyper-converged solutions certainly provide the basis to deliver an on-demand service. In most solutions, the physical attributes such as storage LUNs or volumes are abstracted away from the end user. Storage is typically implemented as a distributed file system across all nodes and so the LUN/volume concept simply doesn’t exist. The placement of virtual machines across the node cluster isn’t a concern of the end user. Many solutions automate this process (in conjunction with hypervisor resource levelling features) so the user just has to know that the VM exists somewhere in the cluster. This makes it much easier to deploy virtual machines using quantitative descriptions like logical disk size, virtual memory and number of logical processors. As long as sufficient resource capacity exists in the whole cluster, then there’s no other consideration.
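The "cluster-wide capacity is the only consideration" idea above can be expressed as a simple admission check. The resource names and figures below are assumptions for illustration only:

```python
# Illustrative admission check (assumed policy, not a product API): a VM
# request is accepted as long as the cluster as a whole has enough free
# vCPU, memory and disk, regardless of which node it eventually lands on.
def can_place(request, free_per_node):
    free = {key: sum(node[key] for node in free_per_node)
            for key in ("vcpu", "mem_gb", "disk_gb")}
    return all(request[key] <= free[key] for key in request)

nodes_free = [
    {"vcpu": 8, "mem_gb": 64, "disk_gb": 500},
    {"vcpu": 2, "mem_gb": 16, "disk_gb": 200},
]
assert can_place({"vcpu": 6, "mem_gb": 48, "disk_gb": 400}, nodes_free)
assert not can_place({"vcpu": 16, "mem_gb": 32, "disk_gb": 100}, nodes_free)
```

The key point is what the user does *not* have to supply: no LUN, no datastore, no node name — only the quantitative description of the VM itself.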
Resource Pooling – as we dig further into the cloud definitions, we can already see that pooling of resources is a key feature of hyper-converged solutions. Storage exists as one logical pool across the cluster, with most solutions permitting heterogeneous configurations (nodes of different hardware configurations). VMs are deployed to servers with the right level of available resources. This resource management capability is one of the key features of hyper-converged solutions, in that the benefit of using hyper-converged over “DIY” is the ability to automate resource management.
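Placement across a heterogeneous pool can be sketched as follows. The "most free memory" rule here is an assumed policy chosen for simplicity; real platforms use richer heuristics:

```python
# Illustrative placement over a heterogeneous pool (assumed policy): pick,
# from the nodes that can fit the VM, the one with the most free memory,
# then deduct the VM's footprint from that node's free resources.
def place_vm(vm, nodes):
    candidates = [n for n in nodes
                  if n["vcpu"] >= vm["vcpu"] and n["mem_gb"] >= vm["mem_gb"]]
    if not candidates:
        return None  # cluster needs another node before this VM can run
    best = max(candidates, key=lambda n: n["mem_gb"])
    best["vcpu"] -= vm["vcpu"]
    best["mem_gb"] -= vm["mem_gb"]
    return best["name"]

pool = [{"name": "n1", "vcpu": 4, "mem_gb": 32},
        {"name": "n2", "vcpu": 16, "mem_gb": 128}]
assert place_vm({"vcpu": 8, "mem_gb": 64}, pool) == "n2"
```

Note that the caller never names a node: the pool abstraction does the choosing, which is exactly the automation benefit claimed over "DIY" builds.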
Broad Network Access – this characteristic doesn't really speak to hyper-convergence itself, but rather reflects how hyper-converged solutions are integrated into an existing infrastructure framework. As with any server/storage hardware, hyper-converged solutions provide the same access capabilities as any hardware deployment.
Measurability – in private cloud we want the ability to measure the consumption of resources, because billing is based on service offerings rather than hardware. Hyper-converged solutions abstract resources and provide the capability to report on resource consumption; a question to ask, however, is whether this reporting is mature enough for private cloud environments. We will discuss the maturity of this requirement in a moment.
Hyper-convergence & Private Cloud – Where are the Gaps?
From the discussion so far we can see that hyper-converged solutions provide many aspects of deploying a private cloud. However, there are a number of features that a mature private cloud infrastructure needs that we have yet to discuss.
Multi-tenancy – by their nature, private clouds need to implement multi-tenancy. This means being able to support more than one customer on the infrastructure while isolating the resources of each, so that every customer appears to be the only user of the system. Multi-tenancy therefore has security implications (one user's data should be separate from, and inaccessible to, anyone else), performance implications (excessive resource usage by one user should not affect another's service) and capacity implications (one user shouldn't be able to starve others of processor time or disk space).
Multi-tenancy also comes into play when we talk about measurability. Private cloud providers will require the ability to report on usage by some internal company division, such as line-of-business or department. This means attributing the resources consumed to those logical entities and building reports based on these groupings.
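A minimal show-back report along these lines might look like the sketch below. The metering records, field names and unit rates are all invented for illustration; the point is the grouping of per-VM usage under a logical entity such as a department:

```python
# Hypothetical show-back report: attribute per-VM metered usage to the
# owning department and total the cost. Records, fields and rates are
# assumptions for illustration, not a real metering schema.
from collections import defaultdict

usage = [  # (vm, department, vcpu_hours, gb_hours) - sample records
    ("vm-01", "finance", 120, 4800),
    ("vm-02", "finance", 60, 1200),
    ("vm-03", "marketing", 200, 9600),
]

RATES = {"vcpu_hour": 0.03, "gb_hour": 0.0004}  # illustrative unit prices

report = defaultdict(float)
for _, dept, vcpu_h, gb_h in usage:
    report[dept] += vcpu_h * RATES["vcpu_hour"] + gb_h * RATES["gb_hour"]

for dept, cost in sorted(report.items()):
    print(f"{dept}: ${cost:.2f}")
```

Whether a hyper-converged platform can emit metering records at this tenant/department granularity out of the box is precisely the maturity question raised above.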
Workflow – this could also be considered to be automation, but is actually a superset of the features automation offers. An automated service provides the ability to remove many of the manual steps from provisioning new applications – for example, building a virtual machine, deploying and configuring the operating system and integrating into security frameworks like Active Directory. Automation in itself is a desirable feature that reduces the burden on the administrator; workflow, however, takes that a step further. When end users request resources from a private cloud, there will be processes to follow that validate the user is authorised to make the request. This checking may validate against an internal budget, a pre-defined quota limit, or some other process that ensures end users can't simply consume resources ad infinitum.
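The authorisation step described above can be sketched as a quota gate. The quota table, department name and figures are invented for illustration; a real workflow engine would also route rejected requests to a human approver:

```python
# Sketch of a quota-based authorisation check (all values are assumed):
# a request is granted only if the requesting department stays within its
# pre-defined quota after the grant; otherwise it is refused for review.
QUOTAS = {"finance": {"vcpu": 32, "mem_gb": 256}}    # hypothetical quota table
CONSUMED = {"finance": {"vcpu": 28, "mem_gb": 128}}  # current usage

def approve(dept, request):
    quota, used = QUOTAS[dept], CONSUMED[dept]
    if any(used[k] + request.get(k, 0) > quota[k] for k in quota):
        return False  # would exceed quota: escalate to manual approval
    for key, amount in request.items():
        used[key] += amount  # record the grant against the department
    return True

assert approve("finance", {"vcpu": 2, "mem_gb": 64})      # within quota
assert not approve("finance", {"vcpu": 4, "mem_gb": 16})  # vCPU would exceed
```

This is the piece hyper-converged platforms generally leave to higher-level cloud management tooling: the platform can provision instantly, but something must decide whether it should.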
Product Marketplace – an important part of a private cloud is the ability to provide more than simply the ability to create virtual machines. End users want to deploy applications that are built from operating systems, databases and other code. Each of these components will be interrelated and have dependencies that control their rollout. The move to agile deployment or DevOps is providing organisations with competitive advantage by speeding up the development process through constant iteration. Having a marketplace within the private cloud provides the framework around which this development can take place and be delivered in a controlled fashion.
IaaS Only – hyper-converged solutions only address the IaaS subset of private cloud offerings and are very much an infrastructure-based offering. Using hyper-converged to implement PaaS and SaaS will require the layering on of additional software and services to achieve a fully rounded solution.
As we have seen, hyper-converged solutions address many of the operational issues involved in deploying private clouds. Using hyper-converged represents a great way to enable the implementation of Infrastructure-as-a-Service, with some gaps needing to be filled around the workflow, multi-tenancy and product marketplace. Of course vendors never rest when there are opportunities to gain more market share and so we can expect hyper-converged solutions to evolve into complete ecosystems that close off some of the issues raised in this article and deliver complete packaged solutions for the enterprise data centre. This evolution is part of what is termed the Software Defined Data Centre (SDDC), a vision that sees all hardware components abstracted into software. Enhanced hyper-converged solutions could offer a great alternative to a full SDDC implementation, one that many IT organisations may find easier to migrate to, in order to realise the vision of being fully software defined.
- The NIST Definition of Cloud Computing, NIST Special Publication 800-145, September 2011