Yesterday EMC finally revealed the details on their not-so-secret Lightning and Thunder flash projects. Fortunately this launch event didn’t include cramming small people into Minis or firing Chad Sakac out of a cannon, but was more focused on the market and the products EMC are bringing to it. There was also a large amount of Twitter activity; look back at the #VFCache hashtag – VFCache being the final product name of the Lightning project. So what exactly did EMC announce?
Lightning AKA VFCache
VFCache (Very Fast Cache) is the final product name for the project that was called Lightning. It turns out that this offering is nothing more than a PCIe SSD card for servers (not all servers mind you, but at this stage quite a few). The initial offering combines with software to act as a very fast read cache for the host. In Windows terms, this is implemented as a filter driver that sits above the STORPORT driver in the I/O stack, with similar implementations on Open Systems platforms. The software component of VFCache tracks I/O and caches reads in order to speed up future I/O requests without needing to go to external disk. Writes to disk are not cached by VFCache, and EMC tried to make a virtue of the fact that their product acts as a “write-through” cache, meaning write I/Os have to be committed to physical disk before the cache acknowledges them to the host. Rather than being a benefit, write-through mode in this instance is more likely to make the cache less effective, polluting the cache with writes that can’t be released until confirmed externally. When the difference in I/O latency is microseconds versus milliseconds, this really matters. However, I don’t think this is a design flaw, merely a placeholder for the future, as I’ll discuss later.
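To make the write-through behaviour concrete, here’s a minimal illustrative sketch of a write-through read cache in Python. This is purely my own illustration of the general technique – the class name, LRU policy and capacity are assumptions, not EMC’s implementation – but it shows the key property: writes are committed to the backing store (the array) before completing, and those writes still occupy cache slots.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Illustrative write-through read cache with LRU eviction.
    Not EMC's code - a sketch of the general caching technique."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # dict standing in for the external array
        self.capacity = capacity
        self.cache = OrderedDict()     # LRU ordering: least recently used first

    def read(self, block):
        if block in self.cache:            # cache hit: microsecond-class I/O
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]         # cache miss: millisecond-class I/O
        self._insert(block, data)
        return data

    def write(self, block, data):
        # Write-through: commit to the array FIRST, then update the cache.
        # The host only sees the write complete at array speed, and the
        # written block consumes a cache slot ("cache pollution").
        self.backing[block] = data
        self._insert(block, data)

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
```

A write-back design, by contrast, would acknowledge the write from local flash and destage to the array later – which is why write caching is the obvious next step for this product.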
Disappointingly for EMC, VFCache 1.0 really is a 1.0 version in terms of feature support. Within VMware ESXi, for instance, the card installs with a device driver that only delivers the cache benefits when the filter driver is deployed into each ESXi guest, so it’s not simply a case of inserting the card and off you go. Moreover, VFCache appears as a DAS device within VMware and so can’t be used in conjunction with vMotion. For many organisations this is a huge omission, as there’s a strong correlation between the need for high performance and the need for high availability; the lack of vMotion support isn’t acceptable.
We can’t go any further on the VFCache discussion without mentioning the competition and in one of the presented slides, EMC paid homage to the market leaders, Fusion-IO. Their ioCache product already accelerates VMware ESXi and Windows 2008 environments, using a similar hypervisor plugin approach. ioCache already offers double the capacity of VFCache and it’s likely Fusion-IO have larger capacity cards in the pipeline as they already offer a range of SLC and MLC flash devices.
Thunder Follows Lightning
Surely Hitachi must be enjoying the irony of EMC choosing product code names based on already defunct HDS hardware (Thunder and Lightning were the mid-range and Enterprise products respectively that preceded AMS and USP). The next product announcement moves the flash-in-server story forward and addresses how this technology is limited in terms of availability. The move to centralised SAN environments was made precisely to fix the issues that occur with server-side SSD today: data is locked into the server, is difficult to expand (requiring downtime and physical intervention) and is isolated from access should a physical failure of the server occur. So, step up Project Thunder, EMC’s purpose-built all-flash array. This device allows multiple servers to share data across what EMC are calling the “server-area network”. What they mean is a physically local, high-speed interconnect (such as Infiniband or RapidIO) between the server and a fast all-flash storage array.
The interconnect technology between devices already exists today (as already mentioned), but placing it into the server and using it for shared storage presents more of a challenge. Where Lightning was a simple filter driver, Thunder will require deeper integration in order to manage consistency across all connected servers. This isn’t something new to EMC – think of how VMAX nodes interconnect and you have the model already there. However, the implementation may require closer co-operation with server vendors than EMC can achieve – the same companies they already compete with for storage: IBM, HP and Dell. This could mean Thunder becomes a VCE-only product or is severely restricted when deployed in other manufacturers’ hardware. We will have to wait and see. (Side Note: This also means that other all-SSD array manufacturers could become more attractive to HP, IBM & Dell as acquisition targets – check out here and here).
Let’s not forget that dedicated all-flash arrays are already out there. Recently I’ve discussed Pure Storage and SolidFire and there’s also Violin Memory, who have been going at this market for quite some time. They already have the SSD array technology to a mature level including support for Infiniband; all that’s needed is a software driver to bring clustering to their products.
What can we expect going forward? There are lots of gaps in the product releases we’ve seen today – lack of vMotion support and no write-back cache, to name only two. The question we should be asking is what could be delivered in the future. EMC have access to every piece of the I/O stack, from the hypervisor and the multi-path driver through to the array. Using PowerPath, EMC can develop more intelligent algorithms that choose whether to cache I/O locally in the server/hypervisor, destage to the array, leverage pre-fetching from disk, and find other clever ways to squeeze the best level of performance out of the hardware stack.
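To illustrate the kind of decision-making such integrated software might perform, here’s a hypothetical policy sketch. Everything here – the function name, the routing outcomes and the “hot block” heuristic – is my own invention for illustration; it simply shows how host-side software with full-stack visibility could route each I/O to the cheapest place it can be served.

```python
def route_io(op, block, hot_blocks, cache_has):
    """Hypothetical host-side I/O placement policy (illustrative only).

    op         -- "read" or "write"
    block      -- the block being accessed
    hot_blocks -- set of blocks the software has observed as frequently read
    cache_has  -- callable reporting whether the local flash cache holds a block
    """
    if op == "read":
        if cache_has(block):
            return "serve-from-local-flash"   # hit: microsecond-class latency
        if block in hot_blocks:
            return "fetch-and-cache"          # promote hot data to the card
        return "fetch-from-array"             # cold data: don't pollute the cache
    # Write path: a future write-back design could ack locally and destage
    # later; today's write-through VFCache must hit the array first.
    return "write-through-to-array"
```

The interesting part is not the routing itself but the telemetry behind `hot_blocks`: with PowerPath sitting in the data path, EMC could build that picture across the whole stack rather than guessing from one layer.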
EMC have validated both the PCIe SSD and dedicated flash array markets with their announcements of Lightning and Thunder. At this stage they are bringing only “me too” products to the market, with other vendors having already delivered more advanced technology than that announced today. However, EMC have two big advantages: (a) they are a huge organisation, with access to the majority of customers in the market and a great marketing team – they have the ability to place their products into customer environments and use price as the main differentiator; (b) they have a huge R&D budget and never stand still on product development. Today’s 1.0 releases will be superseded within months by versions that address some of the shortcomings we can see today. The future battle will not be over the hardware, but over the software that integrates I/O in the server with I/O on the array, delivering the benefits of local flash with the safety of external storage. The eventual winner will be the vendor that gets that software and hardware integration right.
You may be interested in the following related articles from this and other sites.
- Fusion-IO Shares Tumble as New Entrants Prepare to Enter The Market
- Emulex – Evolution of the HBA
- Solid State Arrays: SolidFire
- Solid State Arrays: Pure Storage Inc
- Who Will Be The First Solid State Array Vendor To Be Acquired?
- HP & Violin?
- Enterprise Computing: Violin Memory Inc Release New All-SSD Array
- EMC VFCache (aka “Project Lightning”) Is One Small Step, But an Important One (Stephen Foskett)
- Cache Splash (Storagebod)
- VFCache Means Very Fast Cache Indeed (Chuck Hollis)
- VFCache: Hello World! (…and covers come off Project Thunder) (Chad Sakac)
- My take on EMC’s project lightning (Enrico Signoretti)
Comments are always welcome; please read our Comments Policy. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2007-2018 – Post #EC07 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.