Last week I was reading an article on Ars Technica discussing the latest Samsung M.2 format drives to hit the market. In a form factor resembling a DIMM, you get up to 2TB of capacity and read/write speeds of 3.5GB/s and 2.1GB/s respectively. M.2 comes in a range of sizes and was originally developed for motherboards and laptops as a replacement for the mSATA specification. I've mentioned these devices in a few recent presentations, as I think the potential for such small devices could be enormous. So why have we seen no enterprise storage arrays based on M.2?
The first thing to note is that today, M.2 devices are not hot-swappable, unlike PCIe storage. This means shutting down a system in order to replace a failed device. Whilst this is an issue with monolithic storage designs, I'm not sure it's really a big deal for scale-out solutions, where a single node could easily be taken down and devices replaced. A second problem is possibly one of geometry: building a motherboard capable of mounting the drives vertically. I don't think this is such a big problem either, as the result would resemble the design Violin Memory uses for their VIMMs, which all sit vertically and have done since the original Violin Memory 1010 all-memory array.
One major issue has perhaps been one of capacity. One large drive is certainly easier to service than many small ones, although, as far as I've seen, there's no real price saving from consolidating into larger drives. Costs are basically driven by the number of NAND chips an SSD contains, so cost reduction only comes from increasing the density of each NAND chip. In addition, there are real benefits to building systems with many devices, as I/Os can be run in parallel across all of them at the same time.
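The parallelism point can be illustrated with a minimal Python sketch. This is purely illustrative: plain temporary files stand in for M.2 devices (no real block I/O or NVMe queues involved), and the `read_device`/`parallel_read` helpers are names I've made up for the example. The idea is simply that one read stream per device lets the operating system service many devices concurrently, so aggregate throughput scales with device count rather than being bounded by a single drive.

```python
import concurrent.futures
import os
import tempfile

def read_device(path, chunk=1024 * 1024):
    """Read one 'device' (here just a file) end-to-end in fixed-size chunks."""
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total

def parallel_read(paths):
    """Issue one read stream per device; the kernel can service them in parallel."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return sum(pool.map(read_device, paths))

# Simulate eight small M.2 'devices' with 256 KiB temp files.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(8):
    p = os.path.join(tmpdir, f"dev{i}.bin")
    with open(p, "wb") as f:
        f.write(os.urandom(256 * 1024))
    paths.append(p)

print(parallel_read(paths))  # 8 * 256 KiB = 2097152 bytes read in total
```

With real hardware the same pattern applies: a scale-out node with many M.2 devices can keep them all busy simultaneously, which is much harder to achieve with one mega-SSD behind a single controller.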
The use of M.2 devices came up on the In Tech We Trust podcast this week (episode 104, The Flexible Cloud). Marc Farley discussed how hyperscalers have picked up on M.2 as a great device format for cramming storage into a single server. The likes of Google, Facebook and Microsoft (with Azure) are cost conscious from both a capital and an operational perspective, so if they are using M.2, you can be sure they have checked out the price point and decided it makes operational sense. I'd love to find out exactly how the hyperscalers are using M.2. If you have links to presentations from the recent SNIA Storage Developer Conference or Flash Memory Summit that cover M.2, let me know and I'll post up the links.
The Architect’s View
M.2 represents the diversity we now see in the NAND flash market, which at the other end of the scale is seeing the release of 60TB drives (see my recent article). Do we go with mega-SSDs, or should we consider many smaller devices? There's no clear answer here, as the choice depends on requirements. However, what we can say is that flash storage is matching and exceeding the configuration options that were available in the HDD market. With PC-era hard drives we had 5.25″, 3.5″, 2.5″ and 1.8″ devices, all pretty much variations on the same basic design; the only recent physical innovation has been to make drives thinner. Flash has more flexibility and more deployment options, another reason why it will be the primary storage medium of choice for the future.
- Seagate Ups The Ante with 60TB SSD
- Samsung unveils crazy-fast 960 Pro and 960 Evo M.2 NVMe SSDs (Ars Technica Website), retrieved 28 September 2016, published 21 September 2016
- In Tech We Trust Podcast (InTechWeTrust Website), retrieved 28 September 2016, published 26 September 2016
Comments are always welcome; please read our Comments Policy first. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2009-2016 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.