This week has been an interesting one in the world of all-flash storage. Both Pure Storage and HP have announcements; this post covers Pure and their new “FlashArray//m”.
The new FlashArray//m platform is touted as a “modular upgradeable architecture” and claims to deliver a 50% increase in performance, a 2.6-fold increase in storage density and a 2.4-fold improvement in power efficiency compared to previous generations of the FlashArray platform. So far, three new models have been announced:
- //m20 controller – 5-40TB raw capacity, 150,000 IOPS (at 32KB) and 120TB+ usable with compression/dedupe.
- //m50 controller – 30-88TB raw capacity, 220,000 IOPS (at 32KB) and 250TB+ usable with compression/dedupe.
- //m70 controller – 44-136TB raw capacity, 300,000 IOPS (at 32KB) and 400TB+ usable with compression/dedupe.
Actual usable capacity will, of course, depend on each customer’s data profile.
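For a feel for what the quoted figures mean, the IOPS numbers can be converted into throughput, and the raw capacities into usable capacity at an assumed data-reduction ratio. The sketch below uses the announced figures; the 3:1 compression/dedupe ratio is my assumption purely for illustration (it happens to land close to Pure’s “usable” numbers), not a figure Pure guarantees.

```python
# Back-of-the-envelope figures for the three FlashArray//m models.
# IOPS and maximum raw capacities are from the announcement; the
# 3:1 data-reduction ratio is an assumption for illustration only.

BLOCK_SIZE_KB = 32  # Pure quotes IOPS at a 32KB block size

models = {
    "//m20": {"iops": 150_000, "max_raw_tb": 40},
    "//m50": {"iops": 220_000, "max_raw_tb": 88},
    "//m70": {"iops": 300_000, "max_raw_tb": 136},
}

def throughput_gbps(iops: int, block_kb: int = BLOCK_SIZE_KB) -> float:
    """Aggregate throughput in GB/s at the quoted block size."""
    return iops * block_kb / 1_000_000

def usable_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Effective capacity after compression/dedupe at a given ratio."""
    return raw_tb * reduction_ratio

for name, spec in models.items():
    gbps = throughput_gbps(spec["iops"])
    eff = usable_tb(spec["max_raw_tb"], reduction_ratio=3.0)  # assumed 3:1
    print(f"{name}: {gbps:.1f} GB/s, ~{eff:.0f}TB usable at 3:1")
```

At 32KB blocks, 150,000 IOPS works out to roughly 4.8 GB/s for the //m20 and 9.6 GB/s for the //m70; a workload with a lower reduction ratio would, of course, see proportionally less usable capacity.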
The hardware itself is based on the Intel Haswell processor (maximum of 64 cores and 1TB of memory), with two controllers per chassis, connected by PCIe non-transparent bridging. Each controller supports up to 20 2.5″ SSD drive caddies, with two SSD drives per caddy/slot, ranging from 512GB to 2TB in size. The caddies are SAS or PCIe/NVMe connected and can be mixed and matched to expand capacity over time.
Although this architecture is being described as an evolution of the previous FlashArray platforms, it is in fact a radical departure from the first-generation hardware. The notable difference is in the use of NVRAM and PCIe/NVMe. The previous hardware had no NVRAM in the controller modules, which were effectively stateless: NVRAM was deployed on the disk shelves (SLC drives in the outer drive slots), and the controllers were connected via InfiniBand. The platform was a scale-up solution only; capacity and performance were increased by adding disk shelves.
In the new architecture, NVRAM moves to the controller chassis, with one or two DDR3 hot-plug PCIe/NVMe modules mapped to each controller. Controllers within a chassis are also connected via PCIe to deliver high availability. This change of architecture positions FlashArray to become a scale-out rather than a scale-up system. A PCIe backbone within the architecture potentially allows HA to be implemented between controllers in separate chassis, including shared access to NVRAM, for scalable write I/O performance. SSD drives will in future be NVMe- as well as SAS-connected.
What makes the new architecture interesting is an offering known as Evergreen Storage. This option allows customers to earn credits for the trade-in of existing controllers as they transition to new models. With this, customers could build a scale-out storage cluster, swapping out old controllers over time as they move to new hardware and paying only the upgrade cost. Combine this with Forever Flash, which provides controller upgrades every three years, and customers get significant peace of mind about managing future costs. (Note: as part of the new product release, customers who bought FA hardware after 1st February 2015 get a free upgrade.)
At first glance the upgrade to Pure’s controller hardware may not seem like a big deal; however, this move is a positioning exercise. Generation 1 hardware was all about stateless controllers. Generation 2 has introduced more closely coupled hardware, and I expect generation 3 to introduce scale-out capability. Remember also that Brian Pawlowski joined Pure from NetApp at the beginning of April 2015. He will have brought a wealth of knowledge around scale-out and clustering, and who knows, perhaps even a focus on file-based protocols.
Comments are always welcome; please read our Comments Policy first. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2009-2015 – Chris M Evans, first published on http://blog.architecting.it, do not reproduce without permission. Post #ECF4.