It wasn’t that long ago we were being promised hard drives of unimaginable capacities.  I’m sure that at some stage triple digit numbers were being talked about.  However today we’re stuck with a “mere” 12TB of capacity per drive.  So what went wrong, or are we just being a little too hopeful?

I did some looking back to see exactly what we had been promised and what was more conjecture on the part of journalists.  Here’s what I found:

ASTC Roadmap 2016

Of course a lot of these postulations by the media could come from the same source and so it’s not surprising that we see both a range of predictions and some commonality too.  However earlier in this decade we were seeing predictions of perhaps 60TB by the end, whereas this seems to have been more reasonably reduced to 20TB drives in 2020 with a more aggressive 100TB by 2025.  The spike to 100TB drives seems massively hopeful – a five-fold increase in 5 years, when we haven’t seen that level of growth for some time.

The latest ASTC roadmap diagram for hard drive capacities shows a slight increase in the CAGR (compound annual growth rate) to around 30%, which from 20TB in 2020 would mean 26TB in 2021, 33.8TB in 2022, 44TB in 2023, 57TB in 2024 and 74TB in 2025.  This is still short of the predicted 100TB, but increases in areal density don’t immediately translate into higher-capacity products.
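As a sanity check, the compounding is easy to reproduce.  A quick sketch (the 30% CAGR and 20TB starting point come from the roadmap figures above; rounding differs slightly from the numbers quoted in the text):

```python
# Project hard drive capacities forward from a starting point at a given
# CAGR.  Figures are the roadmap's 20TB-in-2020 at ~30% per annum.

def project_capacity(start_tb: float, cagr: float, years: int) -> list:
    """Compound a starting capacity forward year by year, rounded to 0.1TB."""
    return [round(start_tb * (1 + cagr) ** n, 1) for n in range(1, years + 1)]

# 20TB in 2020 at 30% CAGR
for year, tb in zip(range(2021, 2026), project_capacity(20, 0.30, 5)):
    print(f"{year}: {tb}TB")
```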

New Technology

It’s clear that to make improvements, vendors need to commercialise new technology, which of course is exactly what’s happening.  Perpendicular Magnetic Recording (PMR) and Shingled Magnetic Recording (SMR) have allowed capacities to reach 12TB today, based on areal densities of up to 1Tb/in².  Two-Dimensional Magnetic Recording (TDMR) improves the accuracy of reading data by using multiple read heads to measure the value of an individual cell or island, which is typically composed of many magnetic grains.  With this “oversampling” and error correction, read/write accuracy can be increased as the island size decreases.
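As a loose analogy for the oversampling idea (not the actual TDMR signal processing, which operates on analogue read-back signals), combining several noisy reads of the same bit and taking a majority vote drives down the effective error rate:

```python
# Illustrative analogy only: majority-voting several unreliable reads of the
# same bit reduces the effective error rate, which is the intuition behind
# oversampling with multiple read heads.
import random

def noisy_read(true_bit: int, error_rate: float) -> int:
    """Simulate one read of a bit, flipped with some probability."""
    return true_bit ^ (random.random() < error_rate)

def majority_read(true_bit: int, error_rate: float, heads: int) -> int:
    """Combine reads from several heads by majority vote."""
    votes = sum(noisy_read(true_bit, error_rate) for _ in range(heads))
    return 1 if votes > heads // 2 else 0

random.seed(42)
trials = 100_000
single = sum(noisy_read(1, 0.05) != 1 for _ in range(trials)) / trials
voted = sum(majority_read(1, 0.05, 3) != 1 for _ in range(trials)) / trials
print(f"single-read error ≈ {single:.3%}, 3-head majority ≈ {voted:.3%}")
```

With a 5% per-read error rate, three-way voting brings the error down by roughly an order of magnitude.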

However, to gain future increases, we need to see HAMR (Heat-Assisted Magnetic Recording) and other technologies (BPMR and HDMR) coming to commercial viability.  Seagate has stated that HAMR is in the advanced stages of development, with products in testing this year (2017) and due for delivery in 2018/19.  Seagate (in the blog post just linked) expands on how future technologies should lead to 100TB drives – without any specific commitment.  Initially this is based on HAMR scaling to 5Tb/in².
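To see why 5Tb/in² points towards 100TB drives, a back-of-envelope calculation helps.  The platter dimensions and eight-platter count below are illustrative assumptions, and formatting/ECC overheads are ignored:

```python
# Back-of-envelope sketch: translating areal density (terabits per square
# inch) into raw drive capacity.  Platter geometry and count are assumed
# values for illustration; real drives lose capacity to formatting and ECC.
import math

def raw_capacity_tb(density_tb_per_in2: float, platters: int,
                    outer_r_in: float = 1.8, inner_r_in: float = 0.7) -> float:
    """Raw capacity in TB for a drive recording on both platter surfaces."""
    area_in2 = math.pi * (outer_r_in**2 - inner_r_in**2)  # usable annulus
    terabits = density_tb_per_in2 * area_in2 * platters * 2
    return terabits / 8  # terabits -> terabytes

print(f"1Tb/in2, 8 platters: ~{raw_capacity_tb(1.0, 8):.0f}TB")
print(f"5Tb/in2, 8 platters: ~{raw_capacity_tb(5.0, 8):.0f}TB")
```

At 1Tb/in² this gives a high-teens raw figure (overheads explain the gap to today’s 12TB products); at 5Tb/in² it lands in the right ballpark for the 100TB claim.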

Bit Patterned Media

Bit Patterned Media Recording (BPMR) looks interesting.  As shown in the diagram here, instead of spreading each bit of data across a mosaic of many randomly sized magnetic grains, each bit is stored in its own fixed-size, lithographically patterned magnetic “island”.  This technology represents a challenge to implement, not least in the way in which each bit is read.  Read heads would have to follow eccentric paths, rather than fixed circular ones, which could represent a significant problem.

Heated Dot Magnetic Recording (HDMR) is the combination of the BPMR and HAMR technologies.  As we’ve not even seen BPMR yet and HAMR is only just coming to market, HDMR is likely some time away, perhaps in that 2025 timescale.  The industry seems to have a habit of combining multiple technologies (which makes sense) and using that to generate the next leap forward.

Micro vs Macro View

When we look at the detail of the engineering and development needed to increase hard drive densities, the achievements are truly impressive, although these technologies have been in development for many years.  There are research papers on some of the upcoming technologies that stretch back at least a decade.  However, for end users, the real value is the increase in capacities and decrease in cost.  Many of us might be interested in the micro view – the specific detail of how we’re generating this increase in capacity – but in general most probably only care about what’s next on the horizon.

Hard drive manufacturers have done an amazing job at reducing costs and increasing capacity.  A look at the Wikipedia History of Hard Disk Drives page and its timeline gives us something of an understanding.  The first 1TB hard drive was released only 10 years ago.  6TB drives are only four years old.  The macro view shows that we can expect cost reductions and capacity increases to continue in that 30-50% per annum range (in fact the cost of capacity drives seems to remain flat, regardless of their capacity).

The problem we have to deal with, though, is that performance relative to capacity will continue to decline.  Capacity drives will remain at the 7200 RPM range (or lower) and will likely be the only hard drives sold, as performance requirements move towards flash.  There’s a really interesting question here of how much of the HDD market flash will acquire.  Jim Handy discusses the price erosion of HDDs versus SSDs in a blog post from 2016 and postulates that we won’t see a price crossover (with SSDs being cheaper) until at least 2025, because both technologies decline in price at similar rates.
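The crossover argument is easy to model.  The starting prices and decline rates below are purely illustrative assumptions (not Handy’s figures), but they show how slowly even a 10x price gap closes when both cost curves are falling:

```python
# Hypothetical sketch of the HDD/SSD price-crossover argument: if both $/TB
# curves decline each year, the gap closes only as fast as the *difference*
# in decline rates allows.  All numbers below are illustrative assumptions.

def years_to_crossover(hdd_per_tb, ssd_per_tb,
                       hdd_decline, ssd_decline, max_years=50):
    """Years until SSD $/TB drops below HDD $/TB, or None if it never does."""
    for year in range(1, max_years + 1):
        hdd_per_tb *= (1 - hdd_decline)
        ssd_per_tb *= (1 - ssd_decline)
        if ssd_per_tb <= hdd_per_tb:
            return year
    return None

# e.g. SSD at 10x the $/TB of HDD, declining 30%/yr against HDD's 20%/yr
print(years_to_crossover(30, 300, 0.20, 0.30))
# ...and with identical decline rates, the gap never closes
print(years_to_crossover(30, 300, 0.30, 0.30))
```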

So capacity drives will be around for some time, with performance served by SSDs – something we’d pretty much worked out already.

The Architect’s View

The way in which storage capacities have increased over the years is simply amazing.  In 1996 I had a job managing (on one mainframe platform) a mere 300GB of data.  I can get more than that now on a micro-SD card.

We can assume the industry will continue to make great strides in increasing capacities and reducing cost – that’s a given.  We may have to wait only a little past 2025 to see 100TB drives.  As ever, the more taxing question will be how we manage to use these drives.  SMR has already reduced write performance.  Interface speeds have increased, but in reality drive speeds haven’t changed that much.  The rotational speed of a hard drive limits throughput in one dimension, with linear bit density determining another.  As the read/write head flies over the drive surface, smaller bits mean more data can be read in the same time; although BPMR may complicate that.
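A rough sketch shows how these two factors cap sequential throughput.  The linear bit density and track radius below are illustrative assumptions:

```python
# Back-of-envelope sketch of why rotational speed and linear bit density
# cap HDD sequential throughput: bits-per-track times rotations-per-second.
# The density and radius figures are illustrative assumptions.
import math

def max_throughput_mb_s(rpm: int, bits_per_inch: float,
                        track_radius_in: float) -> float:
    """Peak sequential transfer rate at a given track, in MB/s."""
    circumference_in = 2 * math.pi * track_radius_in
    bits_per_rev = bits_per_inch * circumference_in
    bits_per_sec = bits_per_rev * (rpm / 60)
    return bits_per_sec / 8 / 1e6  # bits -> megabytes

# 7200 RPM, ~2 million bits/inch on an outer track of ~1.8in radius
print(f"~{max_throughput_mb_s(7200, 2e6, 1.8):.0f} MB/s")
```

Note that spinning faster or packing bits more tightly along the track are the only levers here; adding platters or tracks raises capacity, not transfer rate.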

So we need to assume drives will remain slow and be “semi-random” in performance.  This continues to highlight the use of large-scale object stores as the best use of this technology (excluding Infinidat, who seem to be bucking the trend here).  The final point to raise is the eternal one of data protection on these drives.  With 12TB+ of capacity, failing the entire drive is a huge waste.  We’ll save that conversation for another post.


Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2017 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.


Written by Chris Evans