The Power 575: Grandfather of the Multi-Teraflops Power7 Monster
Published: April 10, 2008
by Timothy Prickett Morgan
This week, as part of the further fleshing out of IBM's Power6-based servers and alongside the launch of the 64-core Power 595 server, Big Blue also launched a new super-dense Power 575 server node aimed specifically at high performance computing workloads, not the general database and application serving that other Power-based servers typically do. The Power 575 is not just one of the densest computers ever built. It is also laying the groundwork for a future Power7-based machine that promises an order of magnitude improvement in processing capacity per unit of rack space.
Of course, to accomplish this task, IBM has had to resort to water-cooling of the 2U chassis that the Power 575 server uses, and this stands to reason considering that the company is cramming 16 dual-core Power6 processors running at 4.7 GHz into that chassis. IBM is cheating a little bit on the density that this 2U form factor implies, since it is using the 24-inch chassis style of its high-end System p 595 and Power 595 servers and its System z mainframe line instead of the 19-inch racks that the rest of the industry (including IBM in its other X64 and Power servers) employs. Those extra five inches of width would have had to be made up in extra height in a 19-inch rack, of course, perhaps boosting it to a 3U form factor. And equally importantly, customers cannot cram 21 of these units into a standard 42U rack. In fact, IBM is only allowing 14 of these Power 575 HPC server nodes in a rack, which leaves a little air space for cooling. The extra space in the rack can presumably also be used for I/O drawers for disk and other peripherals.
It is hard to make the water-cooling of computers, which went out of favor back in the 1990s, sound cool, but IBM's marketeers have given it their best try by referring to the HPC clusters based on the Power 575 nodes as "Hydro Clusters." The air-cooling of big systems and servers became possible in the 1990s because of advances in processor technology, which shrank chips and made them run cooler. Because these smaller chips, eventually including CMOS-based mainframe processors, did not run as hot, the relatively inefficient medium that is air could be used instead of the water-infused heat sinks that IBM and other mainframe makers used from the mid-1960s through the early 1990s to keep their big iron from melting. According to Dave Turek, vice president of deep computing at IBM, the Power 575 uses both water-infused heat sinks on the Power6 processors and water-cooled jackets for the racks, launched last year under the product name Cool Blue, to keep the Power 575 cool.
The Power 575 server uses standard Power6 chips running at 4.7 GHz, with 4 MB of L2 cache per core, 32 MB of L3 cache per chip, and AltiVec vector math units available for number-crunching jobs. The server has 64 DDR2 memory slots and supports from 32 GB to 256 GB of main memory per node. With 14 nodes in a rack, Turek says a cluster of Power 575 machines can deliver a little more than 8 teraflops of computing power. Customers who want to gang up racks of Power 575 machines are able to do so with IBM's Cluster Systems Management (CSM) software for AIX and Linux, and rather than use the "Federation" switching created for and deployed in the ASCI Purple AIX-based supercomputer at Lawrence Livermore National Laboratory, IBM has put InfiniBand and Gigabit Ethernet ports on the Power 575 board and wants customers to use these networks for interconnecting server nodes into clusters.
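That 8 teraflops rack-level figure checks out against the listed clock speed and core counts if you assume a peak of four floating point operations per clock per core (two fused multiply-add units per Power6 core) — an assumption on our part, since the announcement does not spell out the per-core flops. A back-of-the-envelope sketch:

```python
# Peak floating point throughput of a rack of Power 575 nodes,
# assuming 4 flops per clock per core (two FMA units, an assumption).
nodes_per_rack = 14
cores_per_node = 32          # 16 dual-core Power6 chips
clock_ghz = 4.7
flops_per_cycle = 4          # assumed: 2 fused multiply-adds per cycle

peak_tflops = nodes_per_rack * cores_per_node * clock_ghz * flops_per_cycle / 1000
print(round(peak_tflops, 2))  # 8.42 -- "a little more than 8 teraflops"
```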
Looking out ahead, Lawrence Livermore is going to be focusing on IBM's Blue Gene/L supercomputer to break through the petaflops barrier, even though it has made substantial investments in the ASCI Purple machine and continues to make use of this 100 teraflops box. The University of Illinois will become the new standard bearer for high-end Power-based supercomputing, says Turek, and in fact, the technology deployed in the Power 575 is laying the groundwork for a multi-petaflops machine that IBM and the university are working on based on the future Power7 family of multicore processors, expected to be delivered in the 2011 timeframe. To get to that performance level, IBM is going to have to deliver approximately an order of magnitude more floating point performance per rack than the new Power 575 provides.
How is IBM going to do this? Beyond saying that IBM will do more engineering on the power and cooling issues and move to multicore processors, Turek is not saying. But IBM, he says, has been working on these issues for a long time. "These problems have been staring at us since we launched Blue Gene back in 1999," says Turek. "This has not been a cavalier embrace of the green revolution on our part, but rather our desire to push the boundaries of supercomputing and expressly dealing with the implications of power and cooling in large machines."
Some customers will go the Blue Gene route, others will go the Power 575 route, and still others may even use Power 550 or Power 595 machines as nodes, depending on their workloads. Turek says that when IBM began working on Blue Gene, even organizations such as Lawrence Livermore did not think their algorithms, which scaled to maybe 16, 32, or 64 nodes, would scale beyond that. But those algorithms had not been updated since the 1970s, and these shops have since spent many years reworking and better parallelizing them, and lo and behold, some supercomputing centers are now saying that they might be able to scale to 1 million cores. As for the Power 575, IBM is already sold out of the boxes, having presold them to key customers, including some of the larger national weather modeling centers, which just so happen to be in the middle of an upgrade cycle.
The base Power 575 costs $5,300; a system rack with base power costs $30,000. The processor board, which has all 32 cores on it, costs $110,000, and turning on each core for use by AIX or Linux costs $1,000. A 16 GB memory card with no memory turned on costs $6,070, and activating 1 GB of memory costs $1,515. So a base machine without disk, I/O, or operating system and with 32 cores and 256 GB of main memory will run you $571,210.
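The itemized prices above can be tallied to reproduce that published total, with one caveat: the $571,210 figure works out only if a single memory card charge is counted, which appears to be how IBM quoted the configuration (filling all 256 GB across 16 of the 16 GB cards would add another $91,050 in card charges). A sketch of the arithmetic:

```python
# Tallying IBM's published list prices for a loaded Power 575 node
# (April 2008 announcement), in US dollars.
base_chassis    = 5_300
system_rack     = 30_000       # rack with base power
processor_board = 110_000      # all 32 cores on the board
core_activation = 32 * 1_000   # turning on each of the 32 cores
memory_card     = 6_070        # a single 16 GB card charge, as quoted
mem_activation  = 256 * 1_515  # activating 256 GB of memory

total = (base_chassis + system_rack + processor_board
         + core_activation + memory_card + mem_activation)
print(total)  # 571210, matching the published figure
```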
IBM is supporting AIX 5.3 at the 5300-08 technology level on the Power 575 as well as Novell SUSE Linux Enterprise Server 10 Service Pack 2 (due in the fall) and Red Hat Enterprise Linux 4.6 or 5.2 (the latter not yet ready, either). The announcement letters did not say when AIX 6.1 will be supported on the Power 575. The Power 575 becomes available on May 6.
IBM Merges System p and System i Server Lines
IBM Readies Big Power6 Boxes, New X64 Servers
Entry System p Servers Get Power6 Chips, System i Boxes Await
The Power6 Server Ramp: Better Than Expected
IBM Fleshes Out p5 Line with More Power5+ Processors
IBM Finishes Building ASCI Purple Super for DOE
IBM Readies Super-Dense 16-Way p5 Rack Server
IBM Gets Ready to Ship p5 575 Eight-Way Cluster Nodes