IBM Off To The Races With New Memory Tech
Published: December 12, 2011
by Timothy Prickett Morgan
The chip heads at IBM's Research and Microelectronics divisions had a busy week last week, showing off their new memory and computing toys at the IEEE's International Electron Devices Meeting in Washington, D.C. Some of the chip technology that Big Blue was showing off to its peers is far out into the future, but some of it could end up in systems sooner than you might think.
As you old AS/400 shops are well aware, IBM used to make its own memory chips and disk drives for its systems and that memory was designed and fabbed in the same Rochester, Minnesota, labs where the AS/400 and its forebears were all born. The magnetoresistive disk heads and super-dense disk platters came out of Rochester during the AS/400 C Series generation, and so did dense DRAM main memory back in the D Series generation. (I used to have one of these chips, which I think was one of the first 1Mbit chips ever, on my desk, but I have no idea where it went). IBM also figured out how to stack up cheaper DRAM chips in twos and fours with AS/400 and iSeries machines back at the dawn of time so it could make super-dense memory cards out of relatively low-cost DRAM chips. IBM still knows a thing or two about memory, as evidenced by its L1 and L2 caches in its mainframe and Power Systems processors and the embedded DRAM (eDRAM) that it put into the Power7 and z10 processors last year.
Perhaps more importantly for a new memory technology called Hybrid Memory Cube (HMC), which is a new kind of 3D memory stacking created by Samsung Electronics and Micron Technology, IBM has lots of expertise in advanced chip making and has come up with some breakthroughs that will allow for HMC memory--not to be confused with the Hardware Management Console for Power Systems servers--to come to market.
Here's the idea behind 3D memory. If you stack up lots of plain-old DDR3 memory chips, link them together vertically with a substrate of wires, snap a crossbar interconnect onto those wires to tie the chips together, and interface the stack to a motherboard socket, then you can run memory in a denser and more parallel fashion than you can with standard 2D DDR3 memory modules and sockets today. By allowing memory accesses to run in parallel across a wide bus front-ended by that crossbar, you can clock the memory slower, and thus generate less heat, while still getting a tremendous amount of bandwidth between the memory blocks and the processors that need the data on one side and the I/O systems that feed data into memory on the other side.
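The wide-and-slow versus narrow-and-fast tradeoff is just multiplication: peak bandwidth is lanes times transfer rate. A minimal Python sketch of that arithmetic, with made-up round numbers that are not actual HMC or DDR3 specifications:

```python
# Illustrative arithmetic only: peak bandwidth = lanes * rate * bytes per transfer.
# The lane counts and rates below are invented round numbers, not HMC specs.

def peak_bandwidth_gbs(lanes: int, gigatransfers_per_sec: float,
                       bytes_per_transfer: float = 1.0) -> float:
    """Peak bandwidth in GB/sec for a parallel link."""
    return lanes * gigatransfers_per_sec * bytes_per_transfer

# A narrow, fast link and a wide, slow link can deliver the same bandwidth,
# but the wide, slow one runs at an eighth of the clock rate and burns less power:
narrow_fast = peak_bandwidth_gbs(lanes=8, gigatransfers_per_sec=2.0)    # 16 GB/sec
wide_slow   = peak_bandwidth_gbs(lanes=64, gigatransfers_per_sec=0.25)  # 16 GB/sec

assert narrow_fast == wide_slow == 16.0
```

Stacking the DRAM dies and wiring them vertically is what makes that very wide bus physically practical in a small footprint.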
Under a deal that IBM cut with Micron last week, the memory maker, based in Boise, Idaho, will be licensing some intellectual property that Big Blue cooked up to manufacture the connecting wires in the HMC block, called Through Silicon Vias, or TSVs. IBM and Micron are not saying exactly what they have figured out, but the TSVs are made of copper and integrate well with the chip baking processes that Micron has in its Idaho fabs. As part of the deal, IBM is also making the logic circuits that implement the HMC crossbar and the pinout to the motherboard, which will be etched in Big Blue's 32 nanometer copper/high-k metal gate chip processes. IBM will make these logic circuits and sell them to Micron, but it is Micron that will make the DRAM, stack it up with the TSVs, and then hook the memory logic to the block.
With HMC memory, a bit of data can be transferred from memory to the CPU interconnect with about 70 percent less energy than with conventional 2D DDR3 memory and an on-chip controller, and a given capacity of main memory will take up about one-tenth the space. And because of the parallel nature of the interconnect between the HMC and the system board, prototypes have been able to deliver 128 GB/sec of bandwidth out of the HMC block, which compares quite favorably to the 12.8 GB/sec peak you can get out of a DDR3 memory stick running at 1.6 GT/sec in a 2D configuration.
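You can sanity-check those figures yourself: a standard DIMM has a 64-bit (8-byte) data bus, so DDR3-1600 peaks at 12.8 GB/sec, and the HMC prototype's 128 GB/sec works out to roughly a factor of ten. A quick Python check of that arithmetic:

```python
# Rough sanity check of the figures in the text. A standard DIMM has a
# 64-bit (8-byte) data bus; DDR3-1600 moves 1,600 megatransfers/sec.
transfers_per_sec = 1600e6
bus_bytes = 8
ddr3_peak_gbs = transfers_per_sec * bus_bytes / 1e9  # 12.8 GB/sec

hmc_proto_gbs = 128.0  # GB/sec, per the prototype figure above
ratio = hmc_proto_gbs / ddr3_peak_gbs  # roughly 10x

assert abs(ddr3_peak_gbs - 12.8) < 1e-9
assert abs(ratio - 10.0) < 1e-9
```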
Micron expects to have HMC memory ready for market by the second half of 2013, and server makers are already lining up to see how it might fit into their system designs. Such super-dense, low-power memory modules are absolutely necessary for future exascale supercomputers, but it remains to be seen when they might appear in general purpose servers, like those running Web servers and databases.
IBM also said last week at the IEDM event that it has come a bit closer to commercializing another memory technology, called racetrack memory.
I told you all about racetrack memory back in January, and it is a truly funky technology that promises memory densities more than two orders of magnitude better than disk areal density while at the same time being more energy efficient and less expensive to make than flash memory. Disks move a head over spinning media to read data, while tape drives move the media over a stationary head to read it. Racetrack memory encodes data on zillions of nano-scale wires and uses spintronics to move the data up and down a loop of wire as if it were a short piece of tape. But in this case, the spintronic effect moves only the data, not the wire, and it can do so at hundreds of miles per hour along those tiny wires. If you put a lot of loops together with a similar parallel access method and crossbar, you can make a very high bandwidth, low latency memory device.
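Logically, each racetrack behaves like a circular shift register: the bit pattern rotates past one fixed read/write head. Here is a toy Python model of that behavior; the class, loop length, and method names are invented for illustration, and real racetrack memory shifts magnetic domain walls with spin-polarized current rather than rotating software lists:

```python
from collections import deque

class Racetrack:
    """Toy model of one racetrack loop: bits shift past a single fixed head.

    Invented for illustration; the real device moves magnetic domain walls
    along a nanowire with spin current. Only the data moves, never the wire.
    """

    def __init__(self, length: int):
        self.loop = deque([0] * length)  # magnetic domains around the loop
        # Index 0 represents the domain currently under the read/write head.

    def shift(self, steps: int = 1) -> None:
        """A pulse of spin current shifts the whole bit pattern past the head."""
        self.loop.rotate(-steps)

    def read(self) -> int:
        return self.loop[0]

    def write(self, bit: int) -> None:
        self.loop[0] = bit

track = Racetrack(length=8)
track.write(1)   # magnetize the domain under the head
track.shift(3)   # move the stored pattern three positions along the loop
track.shift(5)   # after a full loop of 8 shifts, the bit is back under the head
assert track.read() == 1
```

A bank of many such loops, read and written in parallel behind a crossbar, is what gives the device its bandwidth.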
With last week's announcement at IEDM, IBM was able to integrate the nanowires and the read and write heads for the loops, as well as the circuits that encode the data magnetically on the wire and use quantum-mechanical spin to move that data up and down the wire as if it were on a virtual piece of tape. More importantly, IBM was able to fabricate the racetrack memory device, consisting of 256 racetracks, using standard CMOS processes on 200 millimeter wafers. The racetrack wires were about 150 nanometers wide, 20 nanometers thick, and 10 micrometers long, which is pretty large by modern circuit standards. And thus far, IBM is only able to move one bit of data up and down the loops and needs to be able to handle lots of bits for racetrack memory to be practical.
You'll see HMC memory in servers well before racetrack memory.
Thinking way out into the future, IBM's techies are playing around with carbon nanotube transistors and said last week at IEDM that they were able to make a carbon nanotube transistor with channel lengths that were smaller than 10 nanometers, which is better than silicon-based chip fabbing technology can do. Heaven only knows when we will switch computers from silicon to carbon-based circuits.
RELATED STORIES

IBM Is One Step Closer to High Speed, Low Power Racetrack Memory

IBM Goes Vertical with Chip Designs