The IT world is great at delivering technically elegant solutions that intrigue and pique my (and others’) interest. In recent years, the virtualisation and container movements have been good examples of this. Producing something technically interesting doesn’t always lead to a solution that benefits the business. However, in the case of Droplet Computing, the company is proposing an application virtualisation technology that could have significant benefits in an ecosystem of new and legacy applications.
The Spectrum of Applications
The world of application development isn’t black or white, but a continuous spectrum of applications that ranges from 1960s mainframe languages (COBOL, FORTRAN, etc.) through C++, Java and a host of other languages that continue to evolve even today. Businesses invest significant time, effort and money in building applications. Refactoring apps onto new technology is rarely done without some incentive. Businesses need to see value in moving to a new platform, or a move can be forced for support or security reasons. However, even when support doesn’t exist, the inertia of moving or upgrading to another platform still means that many enterprises are running legacy applications in the data centre.
There’s another problem to throw into the mix – for many applications, there may not even be an upgrade path. Critical software products go out of active development yet may still be required for production use. I’m reminded of an application that was used by a UK energy supplier in the mid-1990s to estimate bills. The software was originally written on IBM VM/ESA and the source code was lost. While the algorithms were redeveloped and rewritten, an entire VM system had to be maintained, with considerable dedicated hardware. There was no way to move this application off VM/ESA.
Droplet Computing came out of stealth at Cloud Field Day 3. The company demonstrated a technology called the Universal Container, which implements a form of application virtualisation. Imagine, for example, being able to take a version of Microsoft Word 2003 and run it in a browser on macOS. This is the level of virtualisation Droplet Computing is offering. Legacy desktop applications can be run within a standard browser (such as Chrome, Firefox or Edge) without any modification and on a range of devices, including mobile. There are also plans to run across multiple processor architectures, for example running x86 applications on ARM and vice versa.
How does this work? The Droplet solution takes an existing application and uses two main technologies. WINE (Wine Is Not an Emulator) provides the ability to run Windows applications on Linux. This technology has been around a long time, and I’m assured that the latest releases overcome many of the issues with emulating Windows, namely DirectX and back-door API calls. The second technology is WebAssembly, a project to produce a low-level binary instruction format that can run sandboxed inside common browsers. Effectively, each running application would be isolated within its own set of processes.
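WebAssembly’s sandboxing model is easy to see in miniature. The sketch below is a hand-encoded standard WebAssembly module exporting a single add function (nothing Droplet-specific; this is just the published binary format), instantiated from JavaScript as a browser or Node.js runtime would. The key point is that the module can only reach capabilities the host explicitly passes in via imports:

```javascript
// A minimal, hand-encoded WebAssembly module:
// (func (export "add") (param i32 i32) (result i32))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate; the second argument is the import object.
// An empty object means the module gets no access to the host at all
// beyond the exports it chooses to expose.
const mod = new WebAssembly.Module(bytes);
const inst = new WebAssembly.Instance(mod, {});

console.log(inst.exports.add(2, 3)); // 5
```

A real application container would export far more than one function and be handed emulated file-system and graphics interfaces as imports, but the isolation principle is the same: the host controls every capability the sandboxed code can touch.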
Droplet Computing’s IP is based around taking existing x86 application binaries and translating them to run just-in-time as WebAssembly code. To achieve that, some translation goes on behind the scenes, such as handling memory access and registers. Emulation layers are also needed to handle persistent storage, such as file systems, and graphics. In my experience, getting programs to run with emulation software like WINE and Linspire (formerly Lindows) usually involved major issues with graphics, because the original developers coded in direct hardware access to get the best performance. Theoretically, that should be less of an issue here because the x86 instructions themselves are being emulated.
The first application of Droplet’s technology is focused on the desktop. There are plenty of legacy applications in use today that depend on older versions of operating systems. The kinds of examples quoted include MRI or other medical equipment still running Windows XP, or ATMs that run on OS/2. The ability to encapsulate these applications and run them on a supported operating system has real benefits. Older Windows versions have been subject to hacking exploits like WannaCry. Businesses want to get off unsafe (and unsupported/un-patched) operating systems but have the inertia of legacy apps to deal with.
Droplet Computing separates the app from the operating system, which means un-patchable Windows can be decommissioned. In its place, the business can run Linux, a browser and then the application, with a much more secure footprint. Obviously, the application itself may have exposures and some O/S components are still required, but that’s unavoidable.
Some of the use cases suggested by CEO Steve Horne seemed a little tenuous. I think it’s unlikely that any business will want to run old versions of Office 2003, even if they do have the licences. More likely is the scenario that Droplet’s Universal Container could be used to abstract the O/S and application, allowing organisations to dump Windows in favour of free Linux. Alternatively, where older Office products (or other software) are not supported on the latest Windows releases, these could be supported in-browser as part of a long-term migration strategy.
Enterprise is the Prize
Whilst the ability to virtualise desktop applications is attractive and definitely has some use cases, I think the enterprise data centre is where the technology is headed. The most obvious use is to let IT organisations abstract applications away from unsupported operating systems that might previously have been virtualised. The attack surface for those apps becomes vastly reduced. One solution could be to run Droplet Containers on-premises, to remove legacy or ageing hardware.
However, a better candidate would be to push legacy applications into the public cloud. As we said at the start, IT is a continuum. It’s expensive to re-write legacy applications, and it rarely happens without a business driver. Imagine being able to lift and shift an entire on-premises workload into public cloud, then rewrite code over time as business drivers dictate a need. The “last mile” is always the problem in any migration; 90% of apps get moved, while the remaining 10% consume a disproportionate amount of cost. Being able to move them unaltered into public cloud could prove very attractive indeed.
The Architect’s View
Oracle acquired Ravello for a shed-load of money, more than seemed credible. However, the Ravello technology allowed Oracle to onboard applications that were hypervisor dependent, with little or no changes. Now the same scenario can happen with Droplet Computing, where only the application needs to move. The legacy of the operating system is gone. Surely this makes Droplet Computing (or at least the parent company App-UX Ltd) an acquisition target?
One last thing – Docker Inc has spent a lot of time talking about “transforming” applications into a more efficient deployment model (e.g. containers, not VMs). With this technology, why bother? Just spin up the application in a WebAssembly container and run it unmodified. If this truly can be made seamless, then Droplet Computing could have a very bright future ahead indeed.
Comments are always welcome; please read our Comments Policy. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2007-2018 – Post #1DF9 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.
Disclaimer: I was personally invited to attend Cloud Field Day 3, with the event teams covering some of my travel and accommodation costs. However, I was not compensated for my time. I am not required to blog on any content; blog posts are not edited or reviewed by the presenters or the respective companies prior to publication.