On 17 October 2017, the LTO consortium announced the availability of LTO-8, the next generation of LTO tape technology. LTO is governed by three technology provider companies (TPCs): HPE, IBM, and Quantum. Since the first products were released in 2000, LTO media capacity has increased from 100GB with LTO-1 to 12TB with LTO-8. The roadmap now points to cartridges capable of holding half a petabyte of compressed data within the next decade.
Figure 1 shows the LTO timeline of product releases, with capacities and throughput. The left axis shows capacity, scaling from 100GB with LTO-1 to 12,000GB with LTO-8. I’ve drawn the graph on a logarithmic scale because the early products otherwise don’t show on the graph at all. Figures are quoted in GB rather than TB because capacities below 1TB (100GB = 0.1TB) have negative logarithms, which makes a log axis awkward to plot and read. The right axis shows throughput in MB/s, from 20MB/s initially to 300MB/s with LTO-8. Again, this scale is logarithmic.
We can see straight-line growth in capacity from the technology, to the point where you could almost draw a line with a ruler across the data points. Throughput has been more challenging, with modest improvements until the jump at LTO-6. From LTO-9 onwards (where the figures are projections rather than shipping products), throughput shows bigger jumps, with two further step increases taking it to around 1,100MB/s.
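That straight line on a log scale corresponds to roughly constant exponential growth. A quick sketch makes the point, using the capacities quoted in this post; note that the release years here are my approximations for illustration, not official LTO dates:

```python
import math

# Native capacities in GB, as quoted in the article.
# The years are approximate assumptions, used only to estimate growth.
capacities_gb = {
    2000: 100,      # LTO-1
    2017: 12_000,   # LTO-8
}

# Why quote GB rather than TB on a log axis: sub-1TB values go negative.
print(math.log10(0.1))   # 100GB expressed as 0.1TB -> -1.0 on a log10 axis
print(math.log10(100))   # the same capacity in GB -> 2.0

# Compound annual growth rate of native capacity, LTO-1 to LTO-8.
years = 2017 - 2000
cagr = (capacities_gb[2017] / capacities_gb[2000]) ** (1 / years) - 1
print(f"Capacity CAGR 2000-2017: {cagr:.1%}")  # roughly 33% per year
```

A constant percentage growth rate is exactly what plots as a straight line on a logarithmic axis, which is why the ruler trick works.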
The increases in capacity continue from LTO-9 onwards, with a commitment to LTO-11 and LTO-12 generations that weren’t on the previous roadmap. LTO-12 will have a raw capacity of 192TB (480TB with 2.5:1 compression) and throughput of 1100MB/s. The idea of being able to store half a petabyte of data on a single cartridge seems hardly imaginable compared with where LTO originally started.
One of the interesting aspects of LTO and tape continuing to develop at such a rate is the way in which hard drive technology gets incorporated into tape over time. LTO-8 drives, for example, use TMR (tunnel magnetoresistive) heads rather than GMR. TMR was originally introduced in disk drives around 2004. So tape (not just LTO) benefits from the development work done in the hard drive industry.
As a small bonus, the new LTO-8 drives will accept new (unused) LTO-7 cartridges and provide 50% more capacity than the same media delivers in an LTO-7 drive. This capability is being called LTO-8 Type M and is aimed at easing the transition from LTO-7 to LTO-8 for customers who have already invested in LTO-7 media. So LTO-7 media (typically 6TB) will store 9TB when used in LTO-8 drives.
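The capacity arithmetic quoted above is easy to verify. A minimal sketch using the figures from this post (the 2.5:1 compression ratio assumes compressible data, as all such quotes do):

```python
# Figures as quoted in the article.
LTO12_RAW_TB = 192
COMPRESSION_RATIO = 2.5     # LTO-12 is quoted at 2.5:1 compression
LTO7_RAW_TB = 6
TYPE_M_UPLIFT = 1.5         # LTO-8 Type M adds 50% to new LTO-7 media

lto12_compressed = LTO12_RAW_TB * COMPRESSION_RATIO
type_m_capacity = LTO7_RAW_TB * TYPE_M_UPLIFT

print(f"LTO-12 compressed capacity: {lto12_compressed:.0f}TB")          # 480TB
print(f"LTO-7 media in an LTO-8 drive (Type M): {type_m_capacity:.0f}TB")  # 9TB
```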
Changing Role of Tape
When LTO was first introduced, tape was a mainstay of the backup world. In the 1990s we saw huge tape silos from vendors like StorageTek that used tape for both backup and archive. In most cases, archive wasn’t really a distinct discipline, but just a collection of historical backups from which data was restored. The industry has moved on, with dedicated backup appliances now replacing the first generation of disk-based backup storage. It’s much more practical to use disk for backup, so tape is being positioned more as an archive technology.
On pure media costs alone, tape is way cheaper than disk and a fraction of the cost of using online cloud services like S3. Obviously, TCO includes drives, libraries, software, and people, as well as media, so we can’t just look at the cost of a single tape. However, in a well-structured archive, the cost of the media becomes the incremental cost of managing more capacity. So as an archive scales, the $/GB cost continues to reduce.
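To illustrate the scaling point, here’s a minimal cost model. All prices here are invented placeholders, not real quotes: the fixed infrastructure cost (drives, library, software) is amortised across the cartridges, so the $/TB figure falls towards the raw media price as the archive grows.

```python
def cost_per_tb(cartridges: int,
                fixed_cost: float = 50_000.0,   # drives, library, software (placeholder)
                media_price: float = 100.0,     # per cartridge (placeholder)
                cartridge_tb: float = 12.0) -> float:
    """Amortised $/TB for an archive of `cartridges` LTO-8 tapes."""
    total_cost = fixed_cost + cartridges * media_price
    total_tb = cartridges * cartridge_tb
    return total_cost / total_tb

for n in (100, 1_000, 10_000):
    print(f"{n:>6} cartridges: ${cost_per_tb(n):.2f}/TB")
# As n grows, the figure approaches media_price / cartridge_tb ($8.33/TB here).
```

Whatever placeholder numbers you plug in, the shape of the curve is the same: fixed costs dominate a small archive, media costs dominate a large one.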
Gateway to Tape
Tape is of course purely the medium for storing data. We need a way to get data on and off tapes and that’s where we see a challenge. In a recent Storage Unpacked podcast, Martin and I talked about some of the challenges, like using LTFS for format independence. We also discussed Black Pearl from Spectra Logic. The Black Pearl appliance is effectively a cache in front of one or more Spectra tape libraries. It manages the translation of API calls based on AWS S3 into storing data onto tape media.
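Conceptually, a gateway like Black Pearl accepts S3-style object writes into a disk cache and later migrates them to tape in bulk. A toy sketch of that pattern follows; nothing here is the actual Black Pearl API, and all names and behaviour are invented for illustration:

```python
class TapeGatewaySketch:
    """Toy model of a cache-fronted tape gateway (hypothetical, not Black Pearl)."""

    def __init__(self) -> None:
        self.cache: dict[str, bytes] = {}   # disk cache in front of the library
        self.tape: dict[str, bytes] = {}    # stands in for cartridges

    def put_object(self, key: str, body: bytes) -> None:
        # An S3-style PUT is acknowledged once the data is safely cached,
        # so the client never waits on tape mount and seek times.
        self.cache[key] = body

    def flush_to_tape(self) -> int:
        # Batch migration: sequential tape writes are efficient in bulk.
        moved = len(self.cache)
        self.tape.update(self.cache)
        self.cache.clear()
        return moved

gw = TapeGatewaySketch()
gw.put_object("archive/report.pdf", b"...")
print(gw.flush_to_tape())  # 1 object migrated
```

The design choice the cache represents is the whole value of the gateway: clients get disk-like write latency while the library gets large, sequential, tape-friendly workloads.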
S3 and object storage in general are seen as a great way to archive content; however, using disk at large scale (or even S3 itself) may not be cost-effective. Much of the data in an archive can be inactive, making disk an expensive place to keep it. AWS itself is probably using tape for Glacier, given how long the access times for content are. This is, of course, reflected in the cost of the service.
I’m not sure that the object storage vendors have fully embraced tape yet. To scale fully, object stores will need to support tape, and in a way that is flexible and easy to use.
The Architect’s View
Shiny fun stuff tends to grab the headlines in storage (and probably all of IT). However, storage has always had a cost/performance/capacity balancing act to achieve. Data tiering has been around forever and there’s no reason not to include tape. While backup may be better served by disk, long-term retention of data suits tape well. This could be for compliance or as part of an active archive.
You can find more on this discussion in our recent podcast, which is available to play in this post. Or you can listen at the Storage Unpacked website.
Comments are always welcome; please read our Comments Policy. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2009-2017 – Post #1386– Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission.