Future iSeries Servers, Part 2
by Timothy Prickett Morgan
In last week's issue, I told you about IBM's plans for the iSeries server line in the Power5, Power6, and Power7 generations, between 2004 and 2010. This week, I want to get into what the future iSeries machines will look like, and what they will not look like. IBM has a few surprises up its big blue pin-striped sleeves, and many of them are good ones. I have, as usual, a few gripes. This wouldn't be The Four Hundred if I didn't.
Moving from the outside in, according to people familiar with IBM's plans, the company is readying--breathe a sigh of relief, everyone--rack-mounted iSeries machines.
As many of you old-timers know, the AS/400 minicomputers and 9370 mini-mainframes that date from the late 1980s were all rack-mounted machines. In 1994, IBM switched to its own tower configurations for entry and midrange AS/400s and to the big refrigerator-style boxes for high-end AS/400s. While IBM has made rack-mounted variants of the RS/6000 and pSeries Unix server lines--something it had to do to compete in telecom and dot-com accounts from 1995 onward--it has not believed it needed rack-mounted AS/400 or iSeries machines. IBM made some half-hearted attempts to rack up Model 270 and Model 820 servers a few years ago when it was chasing the application service provider (ASP) market, but it has done nothing to make the density of the iSeries or its packaging come even close to what other servers provide.
With the "Squadron" series of Power5 machines, due next year, my sources say that IBM will indeed deliver machines that can be put in industry-standard 42U racks. What I am hearing is that IBM will deliver a two-way 4U form factor that can be used as a rack-mounted server or can be flipped on its side to be used as a tower machine. This is what IBM does with many of its xSeries and pSeries machines.
It also looks like there will be an 8U form factor machine that could house one or two four-way Squadron servers linked through a high-speed, NUMA-like SMP interconnect, like the one IBM uses on its 16-way "Summit" servers, the xSeries 440 and xSeries 445 (32-bit Xeon MP) and the xSeries 455 (64-bit Itanium 2). If it works like the Summit boxes, then each 8U chassis will have from 2 to 8 processors, and two of the chassis will be linked to make a 16-way. The details on this are fuzzy, and it could instead be that four four-ways (each being 8U in size) get linked to make the 16-way. One approach has twice the density of the other, but functionally there is no difference in terms of running software. I don't know the exact processor count in the multichip module in the Power5 line, but in the Power4 and Power4+, each MCM has potentially eight active cores. With the midrange machines, IBM may just glue two or four individual two-way chips onto a motherboard in each chassis and not use MCMs except in the very big 64-way boxes. Those MCMs are what make the Power4 and Power4+ chips run fast, but they are also what make them very expensive.
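Just to put numbers on that density point, here is a quick back-of-the-envelope sketch in Python. It is built entirely on the unconfirmed chassis sizes and processor counts I just described, so treat it as an illustration of the packaging math, not a specification.

    # Two rumored ways to package a 16-way Squadron machine out of 8U chassis.
    # Chassis size and processor counts are the estimates discussed above,
    # not confirmed IBM specifications.
    RACK_UNITS_PER_CHASSIS = 8
    TOTAL_PROCESSORS = 16

    def rack_units_needed(processors_per_chassis: int) -> int:
        # Ceiling division: how many chassis are needed, times 8U apiece.
        chassis = -(-TOTAL_PROCESSORS // processors_per_chassis)
        return chassis * RACK_UNITS_PER_CHASSIS

    print(rack_units_needed(8))  # two 8-way chassis: 16U
    print(rack_units_needed(4))  # four 4-way chassis: 32U, half the density

Either way, a 16-way fits in a standard 42U rack with room to spare; the difference is how much room is left over.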
The people I have yammered with at IBM over the past several years about the whole rack-mounted server issue have said that density such as that provided by today's 1U and 2U two-way RISC/Unix and X86 Wintel and Lintel servers is not a priority for iSeries customers. Well, if you look at the customer base, that is probably true. Also, with the support for OS/400 subsystems and logical partitions, which respectively allow multiple applications and multiple operating systems to run on a single iSeries box, you can make the argument that a single iSeries machine with two Power5 processors running at 1.5 GHz to 2 GHz and at 70 to 80 percent of capacity is as good as four two-way, 1U Intel boxes running at 10 to 15 percent of CPU capacity. It takes up the same space and probably will provide the same oomph running applications. This will be IBM's argument, of course.
But the fact remains that 92 percent of the servers shipped in the third quarter, according to Gartner, were X86-based machines, and the fact that shipments grew by 21 percent in the quarter is largely attributable to companies picking server sprawl over efficiency. The reason is economics. An iSeries 1.1 GHz or 1.3 GHz Power4 processor costs anywhere from $30,000 to $35,000 at list price. A similarly powerful 3.2 GHz "Prestonia" Pentium 4 Xeon DP processor sells for $1,699 if you buy it for a Hewlett-Packard DL360 two-way, 1U server. The iSeries is unquestionably superior architecturally. The Intel box has to run Windows or Linux. Partitions come from third parties and have nowhere near the sophistication of those on the iSeries. And forget subsystems. Microsoft and the Linux community have no idea how useful these are, nor the slightest clue about how to deliver them. (The open source FreeBSD Unix does, however. They're called "jails," and Sun Microsystems will be borrowing the concept to create its "zone" partitions for Solaris 10 next year.)
All I have to say to that is, so what? If you're right, and you don't win the business, what good is it? If an iSeries processor can do only four times as much real work as an Intel processor--which is what the consolidation argument above boils down to--then it ought to cost only four times as much. That works out to maybe $7,000, which means that iSeries processor pricing is easily four times as high as it ought to be.
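If you want to check my math, here is that back-of-the-envelope arithmetic in a few lines of Python. The 4:1 work ratio is my own consolidation assumption from above and the prices are the list prices just quoted, so this is an illustration of the argument, not a formal benchmark.

    # The pricing argument above, spelled out. Assumptions: one iSeries
    # processor does roughly four times the real work of one Xeon DP
    # (the 4:1 consolidation claim), and the list prices are as quoted.
    xeon_dp_list_price = 1_699          # 3.2 GHz "Prestonia" Xeon DP
    iseries_power4_list_price = 32_500  # midpoint of the $30,000 to $35,000 range
    work_ratio = 4                      # iSeries work per processor vs. one Xeon

    fair_iseries_price = work_ratio * xeon_dp_list_price              # about $6,800
    overpricing_factor = iseries_power4_list_price / fair_iseries_price

    print(f"Fair per-processor price: about ${fair_iseries_price:,}")
    print(f"Actual list price is roughly {overpricing_factor:.1f} times that")

Call the fair price $7,000; the actual list price is four to five times that, which is exactly the gap I am griping about.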
IBM has to fix this, and it has to fix it in the iSeries Squadron generation of machines. The company can make the best box in the world, but at those prices it will never bring the tens of thousands of new customers it needs to the iSeries platform. With every new iSeries generation, IBM has a chance to set things right. Now is the time to start speaking up, people.
Big Boxes Just Keep Getting Bigger
At the high end, the iSeries will eventually get the same 64-way Squadron Power5 box that will be sold in the pSeries line. These machines will come in their own chassis, just as zSeries, iSeries, and pSeries machines do today. As we have previously reported, the top-end Power5 machine will deliver four times the processing capacity of the original 32-way "Regatta-H" Power4 servers. Running AIX, that should mean at least 1.6 million transactions per minute on the TPC-C online transaction processing benchmark, and 2 million TPM if IBM boosts maximum main memory in the machines to 1 TB instead of 512 GB. When running OS/400, performance is expected to be somewhat lower, as has been the case for years.
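For those keeping score, that projection is simple multiplication: a 4X jump implies a baseline of roughly 400,000 TPM for the 32-way Regatta-H, and the bigger memory is worth about another quarter on top of the 1.6 million figure. Here is that arithmetic as a small Python sketch; the 400,000 TPM starting point is my inference from the "four times" claim, not a published number.

    # The TPC-C projection above, as arithmetic. The roughly 400,000 TPM
    # starting point for the 32-way Regatta-H is implied by the "four times"
    # claim; it is my inference, not a figure from this report.
    regatta_h_tpm = 400_000            # implied 32-way Power4 baseline
    squadron_tpm = regatta_h_tpm * 4   # 1,600,000 TPM with 512 GB of memory
    squadron_tpm_1tb = 2_000_000       # the projection if memory doubles to 1 TB

    uplift = squadron_tpm_1tb / squadron_tpm - 1
    print(f"{squadron_tpm:,} TPM, plus about {uplift:.0%} more with 1 TB of memory")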
I'm not entirely sure, but the big Power5 machines may be purple, not black, paying homage to the "ASCI Purple" 100 teraflops parallel supercomputer that IBM is building for Lawrence Livermore National Labs. ASCI Purple will be made up of 195 of the 64-way Squadron servers, each with 256 GB of main memory and access to over 2 PB (that's petabytes; a petabyte is 1,000 TB) of disk capacity. The whole shebang is connected by IBM's third generation of parallel switches, code-named "Federation" and sold as the High Performance Switch (HPS). Federation just started shipping in October. It connects right into the GX memory and I/O bus inside the Regatta and Squadron servers, and it provides as much bandwidth, at low latency, as the current memory and I/O buses used in the Regatta line. In essence, IBM can make a cluster look like a very big SMP.
The reason I bring this up is this: No one is asking for it, but IBM could implement the DB2 Multisystem parallel extensions to DB2/400, which have been in OS/400 since V4R1, across clustered iSeries machines and build very large and very powerful servers. The iSeries performance ceiling is, in theory, a lot higher than just a 64-way Power5 box. Such a cluster might, for instance, make a great utility computing platform for vendors that want to rent rather than sell computers and applications. Yes, this is déjà vu all over again with the ASP idea.
So Long, BladeCenter--See Ya in 2006
I've been going on and on about how IBM can and should create an OS/400 blade server that goes into the BladeCenter chassis. (You can see the latest rant in the September 22 issue of this newsletter.) I've checked with people at Rochester and at IBM Microelectronics, and every time I have asked whether the 64-bit PowerPC 970 processor--which we know, and some of us love, as the G5 processor sold in the latest Apple Macs--could support the unique memory tagging scheme used by OS/400 for its single-level storage architecture, I have been told it could. Given that the PowerPC 970 is a knockoff of the Power4, when I got a yes answer, I went with it. I mean, why would IBM excise that feature from the chip?
But high-level IBM sources have confirmed that this is exactly what has happened with the PowerPC 970/G5 processor. Maybe it's because I am a middle child myself, but I know how the AS/400 feels. This situation is the kind of thing that drives me absolutely crazy.
I have two problems with this. The first is that the people I have talked to at IBM about the PowerPC AS instructions being in the PowerPC 970 were uninformed and shouldn't have confirmed it. But people make mistakes, so that is forgivable. The second problem is the big one: not putting the instructions in the chip, if this is indeed true--which part of IBM do you trust more than the other?--is absolutely unforgivable. You see, AIX and Linux will run on that PowerPC blade server, and run like a top. But OS/400 was completely neglected.
Hope springs eternal, though. According to my sources, IBM is really going to get the cost per Power processor way down in the Power6 generation, and that chip, which is based on 65 nanometer technology, will be small enough that it can be easily embedded in a blade server. The iSeries team is looking at how they might make a Power6-based OS/400 blade server. The Power4 and Power5 processors are too big and too hot to work in the BladeCenter chassis, and some might argue that Advanced Micro Devices's 64-bit Opteron and Intel's Itanium processors are, too.
That doesn't mean that the current BladeCenter won't have a place in iSeries shops that want to run applications on adjunct 32-bit Intel X86 processors running Windows or Linux. The team in the Rochester labs is right now trying to figure out the best way to provide systems services and storage for BladeCenter machines that are linked to the iSeries, in much the same way that external xSeries machines attach today through the High Speed Link (HSL) buses in iSeries servers. Right now, that connectivity is a one-to-one deal: one iSeries IFS disk partition is linked to one xSeries server. What Rochester wants to do is link many BladeCenter blades to an iSeries in many-to-one configurations that allow those BladeCenter machines to act more in concert and to be controlled more easily through OS/400. This will probably not be delivered in 2004, according to sources, and 2005 is only a maybe.
The fact that OS/400 is not supported on the PowerPC 970 also seems to shoot dead my idea of creating an inexpensive PowerPC-based Integrated OS/400 Server card for the iSeries to run application servers against the central iSeries database engine. (See my iSeries and OS/400 Wish List for details on this idea.) I said seems. IBM must have tens of thousands of S-Star processors running at 600 MHz or 750 MHz. Use these until a Power5 or Power6 chip is available and cost-effective. The idea is to keep RPG, COBOL, and Java applications running on OS/400, and not to send an implicit signal to customers that moving Windows applications onto the Integrated xSeries Server (IxS) is the way to go. This will still do the trick.
Next week, I'll give you the lowdown on some of the advanced error-detection and correction technologies that future iSeries machines will have, to give them the same legendary reliability as the S/390 and zSeries mainframes. I will also talk about the I/O roadmap for the iSeries. Lots of fun.