Samsung has just announced 30.72TB in a single SAS SSD.  Let’s think for a moment about what that means.  That’s around 6,500 DVDs or 1,200 Blu-ray discs, depending on how you do your maths.  It’s an amazing achievement.  In that form factor, 30TB in a 2.5″ drive is around 6x the largest 2.5″ HDD available and around 11x the capacity of the biggest enterprise-class drive (at 10K RPM).  But is there really a market for this size of drive?
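Those equivalents are easy to sanity-check.  A quick back-of-envelope sketch (assuming single-layer media, 4.7GB per DVD and 25GB per Blu-ray, which is one way of doing the maths):

```python
# Back-of-envelope capacity equivalents for a 30.72TB drive.
# Assumes single-layer media: 4.7GB per DVD, 25GB per Blu-ray.
DRIVE_TB = 30.72
DVD_GB = 4.7
BLURAY_GB = 25.0

dvds = DRIVE_TB * 1000 / DVD_GB        # decimal TB -> GB
blurays = DRIVE_TB * 1000 / BLURAY_GB

print(f"{dvds:,.0f} DVDs")       # ~6,536
print(f"{blurays:,.0f} Blu-rays")  # ~1,229
```

Dual-layer media roughly halves those counts, hence the rounded figures above.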

Specs

First of all, let’s check out the specifications.  The new PM1643 is a 2x step up from the previous PM1633a at 15.36TB.  The technology is implemented using 64-layer TLC V-NAND (3D-NAND) that produces 512Gb chips.  Sixteen chips are combined into a 1TB NAND “package”, of which there are 32 in the drive.  Performance figures are quoted at 400,000 read IOPS, 50,000 write IOPS and read/write speeds of 2,100MB/s and 1,700MB/s respectively over a SAS 12Gb/s interface.  Note that there are no details on the block sizes used to generate the performance figures, nor any latency values, both of which are significant omissions.  As an aside, the PM1643 has 40GB of DRAM, so it would be interesting to see those latency figures.
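The quoted geometry checks out, as a quick sketch shows (attributing the gap between raw and usable capacity to over-provisioning and spare area is my assumption):

```python
# Sanity-check the PM1643 NAND geometry quoted above.
CHIP_GB = 512 / 8            # 512Gb V-NAND chip = 64GB
PACKAGE_GB = 16 * CHIP_GB    # 16 chips stacked per package
PACKAGES = 32                # packages in the drive
raw_tb = PACKAGES * PACKAGE_GB / 1000

print(PACKAGE_GB)  # 1024.0 -- the "1TB" package
print(raw_tb)      # 32.768 raw TB vs 30.72TB usable
```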

How do these numbers stack up?  Compared to the Samsung EVO drives, the PM1643 is clearly faster (100,000 read IOPS/90,000 write IOPS and 560/530MB/s read/write).  However, the EVO drives only scale to 4TB, so pit eight of them against a single PM1643 and the drive looks slow.  Compared to a Samsung 960 Pro NVMe M.2 device, the PM1643 is positively leisurely.
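To put some numbers on that comparison, here’s a sketch using the spec-sheet figures quoted above.  The eight-EVO aggregate is a naive linear scale-up; real numbers depend on the controller and HBA sitting in front of the drives:

```python
# Spec-sheet comparison: one PM1643 vs eight 4TB EVO drives.
pm1643 = {"read_iops": 400_000, "write_iops": 50_000,
          "read_mbs": 2_100, "write_mbs": 1_700}
evo = {"read_iops": 100_000, "write_iops": 90_000,
       "read_mbs": 560, "write_mbs": 530}

# Naive aggregate of eight EVOs (ignores controller/HBA limits).
evo_x8 = {k: v * 8 for k, v in evo.items()}

for metric in pm1643:
    print(metric, pm1643[metric], "vs", evo_x8[metric])
```

On paper the eight-drive set wins every metric; the point is that a single PM1643 trades peak performance for density.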

TCO

So flat-out performance isn’t the benefit here; perhaps it’s TCO?  The PM1633a retailed for around $10,000 a unit.  That’s around $0.65/GB.  The previously mentioned 960 Pro M.2 is available on Amazon for $0.62/GB.  Intel M.2 NVMe drives are available at half that cost; however, it’s not easy to find pricing on the high-end enterprise drives.  Per GB, the new drive doesn’t seem cost competitive (unless the PM1643 retails at the same price as the PM1633a), so perhaps it’s a scalability and chassis discussion.  Building out a 1PB all-flash analytics platform would, for example, be much easier using 30TB drives.  As the PM1643 still offers 2 million hours MTBF and 1 DWPD endurance, large-capacity (PB+) analytics solutions are where this drive hits a sweet spot, simply because there will be fewer failures and less data to recreate.  Obviously, the impact of a single failure will be greater.
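The cost and endurance arithmetic behind those numbers looks like this.  A sketch only: the $10,000 PM1643 price and the five-year warranty period are my assumptions, not announced figures:

```python
# Cost-per-GB and endurance arithmetic for the figures above.
price_usd = 10_000       # assumed: same street price as the PM1633a
pm1633a_gb = 15_360      # 15.36TB
pm1643_gb = 30_720       # 30.72TB

print(round(price_usd / pm1633a_gb, 2))  # ~0.65 $/GB (PM1633a)
print(round(price_usd / pm1643_gb, 2))   # ~0.33 $/GB if the price held

# 1 DWPD endurance over an assumed 5-year warranty period.
tbw = 30.72 * 365 * 5
print(tbw)  # ~56,064TB (~56PB) written
```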

Competition

There’s little competition at this end of the SSD market.  Last year I talked about Intel’s new “ruler” SSD form factor, which should be capable of 32TB per drive and 1PB in 1U.  Samsung could easily do something similar with 32x PM1643 drives; however, I can’t help thinking that at scale, the continued use of SAS for such a large volume of high-capacity drives would represent a serious bottleneck to any high-performance platform.  Remember that today’s Intel P4500 Ruler SSDs at 8TB already use NVMe and deliver similar (slightly better) performance figures to a single PM1643.  So a chassis of these drives, all NVMe-connected, would operate much more effectively.
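The chassis-level arithmetic shows why the interface matters.  An illustrative sketch, taking 32 drives per 1U from the ruler discussion above:

```python
# Why SAS becomes the bottleneck at chassis scale (illustrative sketch).
drives_per_1u = 32
drive_tb = 32
drive_read_mbs = 2_100   # PM1643 quoted sequential read

capacity_pb = drives_per_1u * drive_tb / 1000
demand_gbs = drives_per_1u * drive_read_mbs / 1000

print(capacity_pb)  # ~1PB per 1U
print(demand_gbs)   # ~67GB/s if every drive ran flat out
```

Feeding that aggregate demand through shared SAS expanders is the bottleneck being described; a per-drive NVMe/PCIe fabric scales the bandwidth out rather than funnelling it.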

The Architect’s View

How should we view the availability of a 30TB SSD?  As we’ve already said, it’s a remarkable achievement.  However, this part of the industry never stands still.  Intel has Ruler and will get to 32TB per unit.  Seagate has already demonstrated a (rather clunky-looking) 60TB drive.  Viking announced 50TB drives last year, although both this and the Seagate drive were in a 3.5″ form factor, so Samsung has a lead here (the Viking drive is also hideously slow for some reason).  The question to ask is whether SAS is the right long-term drive interface.  HDDs suffered because IOPS/GB dropped to such a low level that spindle count (the number of drives) became important.  SSDs are heading towards a similar issue in that SAS becomes the limiting factor: not for a single drive (although that is an issue for latency), but for the system as a whole.
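The IOPS/GB argument can be illustrated with rough numbers.  Note the HDD figure is a typical 10K-RPM estimate on my part, not a quoted spec:

```python
# IOPS-per-GB: the metric that made spindle count matter for HDDs,
# and the one to watch as SAS SSD capacities keep doubling.
hdd_10k = {"iops": 200, "gb": 2_400}      # typical 10K 2.5" HDD (estimate)
pm1643 = {"iops": 400_000, "gb": 30_720}  # quoted read IOPS

for name, d in [("10K HDD", hdd_10k), ("PM1643", pm1643)]:
    print(name, round(d["iops"] / d["gb"], 2), "IOPS/GB")
```

Each capacity doubling at fixed interface performance halves IOPS/GB, which is exactly the trajectory HDDs followed.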

I expect we will see 60TB drives from Samsung within 2 years.  QLC and increased layering will make that possible in a relatively short timespan.  But market demand will be limited.  Any system would need at least six of these drives for resiliency, so that’s roughly 180TB as a minimum starting point.  There are definitely cheaper ways to build 180TB of flash capacity than using 30TB drives.  So it’s likely the technology will be used for highly scalable analytics-type solutions.
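That minimum footprint is worth spelling out.  A sketch, taking six drives as the resiliency assumption (roughly a RAID-6-style minimum set):

```python
# Smallest resilient building block with 30.72TB drives (sketch).
drive_tb = 30.72
min_drives = 6  # assumed minimum for a resilient (e.g. RAID-6-style) set

print(min_drives * drive_tb)  # ~184TB raw: the smallest capacity step
```

Capacity then only grows in ~30TB increments, which is fine for PB-scale analytics but clumsy for everything else.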

On a positive note, the technology will cascade down to the lower capacity devices, making them cheaper for the rest of us, even if the PM1643 doesn’t gain widespread adoption.


Copyright (c) 2009-2018 – Post #9675 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.


Written by Chris Evans

With 30+ years in IT, Chris has worked on everything from mainframe to open platforms, Windows and more. During that time, he has focused on storage, developed software and even co-founded a music company in the late 1990s. These days it's all about analysis, advice and consultancy.