IBM Has No Retirement Party Planned For Tape
May 27, 2014 Dan Burger
There’s nothing heavier than the burden of a great legacy. Rather than extending expectations for proven products, the predominant bias is to push them aside for newer, flashier, must-be-better replacements. Magnetic tape storage is one of those products that proves what’s old is actually new. If you know someone who believes tape has reached the end of its capacity, bet them a cheeseburger and a beer that it hasn’t.
Few people know this better than Mark Lantz, manager of exploratory tape storage technologies at IBM‘s research facilities in Zurich, Switzerland. He will show you a deeply researched tape feasibility roadmap for the next 10-plus years, one that rides along with a history of indisputable product development as ingrained in enterprise IT as tires are to cars.
From experimental research to product development, tape has both the track record and the promise of a bright future. Lantz talked with IT Jungle at the Edge2014 conference last week in Las Vegas.
Areal density, the amount of bits you can cram into a unit of area, is the leverage that tape exerts to maintain its storage advantage. “It’s the vehicle we use to push the technology to its limits,” Lantz says. When you push things to the limit, you identify the weak points. When things break, it opens the door to solutions that fuel further development. The goal is to transfer those discoveries to products.
The recording surface area for tape is huge: a half-inch tape that’s 1,000 meters long offers far more surface than a disk platter. Compared to disk, tape can therefore operate at a much lower areal density and still deliver more capacity. The capacity for tape has doubled every two years dating back to the last century, and the feasibility roadmap provides statistics indicating this historical rate of scaling will continue at least 10 more years. With the current technology, the expectations are for 4 TB capacity in the existing LTO 6 form factor. Beyond that, Lantz says, there are other technologies that could push the limits farther.
IBM in fact demonstrated with partner Fujifilm that it could achieve a density of 85.9 gigabits per square inch, which would provide a capacity of 154 TB with an LTO 6 drive, if one were built today. Lantz added that with the new read heads, which are down to 90 nanometers, 180 nanometer track widths on the tape media, and an improved version of the barium ferrite material on the tape, the two partners were pretty sure they could push it up to 100 gigabits per square inch.
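To see how areal density translates into cartridge capacity, the figures above can be plugged into a quick back-of-the-envelope calculation. This is a sketch of raw, unformatted capacity only: shipping cartridges hold less because actual tape length varies by generation and space is lost to servo tracks, error-correction coding, and formatting overhead.

```python
# Back-of-the-envelope raw tape capacity from areal density.
# Figures from the article: half-inch tape, 1,000 meters long, 85.9 Gbit/in^2.
# Real formatted capacity is lower (servo bands, ECC, formatting overhead).

INCHES_PER_METER = 1 / 0.0254  # exactly 39.3700787... inches per meter

def raw_capacity_tb(width_in, length_m, density_gbit_per_in2):
    """Raw capacity in terabytes for tape of the given dimensions."""
    area_in2 = width_in * length_m * INCHES_PER_METER
    bits = area_in2 * density_gbit_per_in2 * 1e9
    return bits / 8 / 1e12  # bits -> bytes -> terabytes

print(round(raw_capacity_tb(0.5, 1000, 85.9)))  # roughly 211 TB raw
```

The gap between this raw figure and the quoted 154 TB demonstration capacity is consistent with the caveats above: real cartridges use somewhat shorter tape and give up a share of the surface to formatting.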
If that degree of scaling is possible, why not make a giant leap tomorrow? If it is known that capacity can double every two years for the next ten, why not go there now?
“Rather than take one big leap, there are incremental steps along the way,” Lantz explains. “The reason has to do with maintaining backward compatibility. When new tape drives come out, we want to assure it can read two generations back. Customers like this because it is easier to migrate tape from one generation to the next. They can use old cartridges in new drives. In order to keep backward read capability there is a limit to how much scaling can be accomplished. The market likes backward read capability. We try to preserve that.”
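The two-generations-back read rule Lantz describes can be sketched as a simple compatibility check. This is an illustrative model only; the exact read/write compatibility matrix for any given tape format is defined by its specification, and the function below merely encodes the “new drives read two generations back” rule from the quote.

```python
def can_read(drive_gen, cartridge_gen):
    """True if a drive can read a cartridge: its own generation
    and up to two generations back, per the rule in the article."""
    return drive_gen - 2 <= cartridge_gen <= drive_gen

# A generation-6 drive reads gen 4-6 cartridges, but not gen 3 or gen 7.
print(can_read(6, 4))  # True
print(can_read(6, 3))  # False
print(can_read(6, 7))  # False
```

This is also why scaling proceeds in steps: each new generation's format must stay readable by the heads and channels designed around it, which caps how far any single generation can jump.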
All technologies have advantages and disadvantages.
For tape, the big advantage is cost: per gigabyte, per unit of overall capacity, and over the lifetime of the technology. Then there’s that old time-proven trait called reliability.
But the access time for tape is slow. The huge recording surface that provides an areal density advantage also means it can take a comparatively long time to reach data that needs to be retrieved, even when spooling at high speed. For archival purposes, where the data is not needed often or quickly, tape is great. When fast data access is required, disk gets the job. And when data access requirements become paramount, flash is the favorite pick.
That’s what leads to different tiers of storage.
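The tradeoffs above suggest a simple tiering heuristic: place data on flash, disk, or tape depending on how quickly and how often it must be accessed. Here is a minimal sketch, with hypothetical thresholds chosen purely for illustration:

```python
def pick_tier(accesses_per_day, max_latency_s):
    """Choose a storage tier from access frequency and latency tolerance.
    Thresholds are hypothetical, for illustration only."""
    if max_latency_s < 0.01:                      # needs near-instant access
        return "flash"
    if accesses_per_day >= 1 or max_latency_s < 60:
        return "disk"                             # warm, regularly read data
    return "tape"                                 # cold, archival data

print(pick_tier(0.01, 3600))   # archival: tape
print(pick_tier(100, 0.001))   # hot: flash
print(pick_tier(5, 5))         # warm: disk
```

A real tiering policy would of course weigh cost, data size, and migration overhead as well, which is exactly the kind of automation the next paragraph describes.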
IBM Research is also exploring ways to make tape easier to use alongside other storage options, including automating when data should be moved. This has led in recent years to the Linear Tape File System (LTFS) on the tapes themselves and the General Parallel File System (GPFS) for parallel storage clusters where bandwidth and capacity are both important. Lantz describes these two as “the building blocks for better capability in moving data back and forth.”
Sustained data rates for streaming will also make a difference, so server throughput will be important as data flows on and off the tape.