IBM and Partners Work on Future Chip Tech
March 16, 2009 Timothy Prickett Morgan
With Intel and Advanced Micro Devices hogging most of the press these days when it comes to processors, the design and manufacturing techniques that go into creating chips, and the fabs where they are made, IBM, which is no slouch itself at designing and making chips, wants to get the word out that it, too, is working on advanced technologies.
In late February, IBM said that it had extended its partnership with PDF Solutions, a company that provides yield improvement and other services to the integrated circuit manufacturing industry. IBM has worked with PDF (which has nothing to do with documents or Adobe) since it deployed its 90 nanometer chip processes. Those processes were used, you will remember, in the Power5 and Power5+ chip generations, and PDF has undoubtedly helped IBM improve yields on its 65 nanometer processes, which are used to make the current dual-core Power6 chips. The Power chips ran out of gas a bit in the Power5+ cycle, with clock speeds nowhere near the 3 GHz that many had expected from reading earlier Power roadmaps. IBM used PDF's pdBRIX processes and technologies on 90 nanometer chips, and the extended partnership covers 32 nanometer technologies, so IBM may not have used pdBRIX for its 65 nanometer and 45 nanometer Power chips. That might explain why the Power6 chip was delayed and why Power5+ didn't crank as high as we expected.
Anyway, the extended relationship with PDF now covers future 32 nanometer, 28 nanometer, and 22 nanometer process technologies, which is pretty far down the road. As I reported last fall, IBM's chipheads have cooked up a new set of chip making technologies, called computational scaling, which will be a kicker to the current immersion lithography technique that IBM's East Fishkill, New York, chip plant began ramping up last September. Conventional dry lithography starts running out of steam past the 65 nanometer node, so immersion lithography puts a layer of water between the lens and the silicon wafer; because water bends light more sharply than air, the optics can focus the same light into finer features, etching tighter circuits onto the wafers. But that trick only works down to maybe 32 nanometers, and it is one that the entire industry has adopted after IBM's lead.
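To get a rough feel for why the water helps, lithographers lean on the Rayleigh criterion: the smallest printable feature is roughly k1 times the wavelength divided by the numerical aperture (NA) of the optics, and immersion raises the NA. Here is a back-of-the-envelope sketch; the NA and k1 values are typical published industry figures, not IBM's actual numbers:

```python
# Rayleigh criterion: smallest printable feature ~ k1 * wavelength / NA.
# Immersion raises the numerical aperture (NA) because water (n ~ 1.44)
# bends light more sharply than air, so the same 193 nm ArF light can
# resolve finer features. Figures below are typical industry values,
# not IBM-specific numbers.

def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.3):
    """Approximate smallest printable feature, in nanometers."""
    return k1 * wavelength_nm / numerical_aperture

dry = min_feature_nm(193, 0.93)   # dry ArF scanner: NA limited to under 1.0
wet = min_feature_nm(193, 1.35)   # water immersion pushes NA to about 1.35

print(f"dry:       {dry:.0f} nm")   # roughly 62 nm
print(f"immersion: {wet:.0f} nm")   # roughly 43 nm
```

Which is why immersion carried the industry from the 65 nanometer range toward 45 and 32 nanometers before running out of headroom.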
With computational scaling, which IBM is developing in conjunction with Mentor Graphics and its Calibre line of chip design software, and Toppan Printing, a maker of masks for chip fabrication processes, the partners are doing something very clever. Rather than trying to force the physics of water and light to ever tighter resolutions to etch ever smaller circuits, computational scaling uses a technique called source mask optimization: the software deliberately distorts the chip masks so that, when the physics of the printing process comes into play, the two distortions cancel out, yielding a crisp etching. (Those of you with astigmatism in one or both eyes, as I have, know how well this works.) The other part of computational scaling is called the Virtual Fabricator, where IBM simulates the entire chip plant ahead of time, including masks, wafers, people, and the plant floor, so it can optimize the whole process and work out kinks that lower yields before actual production even begins.
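The pre-distortion idea is easier to see in miniature. In the hypothetical sketch below, which is not IBM's actual algorithm, the "process" blurs a pattern by bleeding each printed spot into its neighbor; solving for the mask that, once blurred, lands exactly on the target pattern is the same inverse-problem trick that source mask optimization plays at full fab scale:

```python
import numpy as np

# A toy stand-in for the printing process: each spot on the wafer
# "bleeds" 30 percent of its intensity into the next spot over.
# (Purely illustrative numbers, not a real lithography model.)
n = 8
target = np.array([0, 0, 1, 1, 0, 0, 1, 0], dtype=float)

bleed = np.eye(n) + 0.3 * np.roll(np.eye(n), 1, axis=1)
process = bleed / bleed.sum(axis=1, keepdims=True)  # conserve intensity

# Source mask optimization, in miniature: solve for the pre-distorted
# mask whose *blurred* image is the pattern we actually want.
mask = np.linalg.solve(process, target)
printed = process @ mask

print(np.allclose(printed, target))  # the two distortions cancel -> True
```

Printing the naive mask (the target itself) through the same process would smear the pattern; printing the deliberately distorted one lands on it exactly. The real problem is vastly harder, of course, which is where the supercomputing cycles come in.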
IBM has also announced that chip fabrication equipment maker Applied Materials and the College of Nanoscale Science and Engineering at the University at Albany (part of the State University of New York system) are going to be working together to create process modeling technology for 22 nanometer logic and memory chips. This also plays into the computational scaling effort. Now, IBM wants to be able to model not only the factory and the wafers, but everything down to the individual transistors on the chips, so it can see how the etching process will behave. It needs Applied Materials' help in securing the thin-film deposition and etching data to build the models that will feed back into the processes that IBM and its partners create for 22 nanometer chips.
I don’t know about you, but I don’t want to think too hard about all of the computing power this computational scaling is going to take. It doesn’t sound like it will yield a cheaper Power8 or Power9 processor, but that’s IBM’s business, not mine.
Anyway, the research relating to the models for the 22 nanometer processes will be carried out mostly at the CNSE's labs, where both IBM and Applied Materials have equipment, and presumably it will be done by grad students, professors, and company engineers. IBM will do some additional modeling in its East Fishkill and Yorktown Heights labs, and Applied Materials will kick in from its Maydan Technology Center in Sunnyvale, California. The Computational Center for Nanotechnology Innovations at Rensselaer Polytechnic Institute in nearby Troy, New York, seems to be kicking in the flops to drive the models. The CNSE is a 450,000 square foot, $4.5 billion lab that is one of the key parts of the future Power Systems platform, not to mention of IBM's ability to hold on to its lucrative contracts with Microsoft, Sony, Toshiba, and Nintendo for the Power chips used in game consoles and other electronics. Without that underlying Power chip business, IBM would have long since been driving to the X64 architecture.
Speaking of which, IBM said last week that it has shipped the 50 millionth Power-derived chip to Nintendo for its Wii game console, which started shipping in 2006. The Nintendo processor, code-named "Broadway," is based on a 90 nanometer process and is said to run at a bizarre 729 MHz, with its "Hollywood" graphics processor (made by ATI, now part of AMD) running at 243 MHz. Power System i shops should be glad of those 50 million Broadway chip shipments. Whatever price break came with the Power6 generation of machines happened thanks in part to these game consoles, which keep manufacturing costs down in East Fishkill.