Bang for the Buck: Big Iron Boxes, Even Bigger Bucks
October 2, 2006 Timothy Prickett Morgan
In the past few months, I have examined the price/performance of baby, small, midrange, and enterprise System i5 servers against their Windows, Linux, and Unix counterparts. But there are bigger boxes still, the so-called big iron machines, which are the largest single-system image servers that any vendor puts into the field. While the System i5 machines do poorly compared to their peers at the low end when it comes to value for the dollar, on the biggest boxes, the System i5 can be competitive.
That’s not to say that there is not plenty of room for improvement, so don’t get the wrong idea.
By their very nature, the big iron class of servers created by IBM, Hewlett-Packard, Sun Microsystems, Fujitsu-Siemens, and Unisys is more expensive to develop, to manufacture, and to sell. Developing machines that have the fastest processors, low-latency backplanes, and cache coherent memory architectures is by itself very expensive. Equally important, engineering tolerances tighten in these machines. Everything is tested to another level of satisfaction, since the largest companies in the world use these big iron boxes. If a marquee customer loses its computer systems, it doesn’t just hurt the server vendor at that one site; it hurts that vendor at all the sites that might be using or thinking of using a particular server and operating system combination.
With big iron, server makers use belts and suspenders to be safe–and then add a safety harness, bungee cords, a fire-proof padded suit, a football helmet, and garters. (That is metaphorical, obviously. No server was ever equipped with garters.) All of this safety costs a lot of money as well as time, and time, as we all know, is also money. That is why server buyers who are looking for low-end gear always face a dizzying array of options. It is relatively easy to make a processor, a memory subsystem, and an I/O subsystem for an entry server, so there are a lot of options. But making a processor architecture scale efficiently to 16, 32, 64, or more processor cores–and then getting the operating system that rides on top of it to scale, so that the applications that in turn ride on it can scale effectively across that iron–is a very big task.
And that is why choices are limited in the big iron arena. When you get to the largest big iron boxes, the pack really starts to thin out. IBM, HP, Sun, and Fujitsu-Siemens deliver the highest performance scalability in the market when it comes to online transaction processing workloads. IBM offers its System z9 mainframes as well as the Power-based System i5 and System p5 servers (which run i5/OS and AIX) to customers that need the most scalability; and its System x servers, which currently only use Xeon MP processors from Intel, also break into the low end of the big iron space.
If IBM sold Itanium-based servers that deployed Intel’s new “Montecito” dual-core Itanium 9000 processors, those boxes would, without a doubt, deliver performance comparable to the Power5+ boxes it has in the field. That is probably why IBM stopped making servers that use the Itanium chips: eventually, Intel was going to get a decent Itanium chip into the field. It is also possible that IBM believed Intel would never get a decent Itanium chip into the field, and therefore wanted to stop spending to extend its Summit and Hurricane chipsets, which were designed essentially for the Xeon MP processors, to support Itanium. But I don’t think that is the case. Selling Itanium, Power, and mainframe systems side-by-side just makes life too complicated for IBM, particularly with Itanium being able to support Linux, Windows, and AIX (which was ported to the Itanium chip under Project Monterey and delivered in 2001, but quickly killed). If there were an Itanium server of any consequence in the IBM catalog, customers would want AIX on it, and then they would have asked for OS/400 and then i5/OS to be ported to it, too. And then why would IBM need to spend so much money on Power chip and server designs? It was clearly best to bash Itanium and promote Power–even if Windows doesn’t run on it–at least from the Big Blue point of view.
Hewlett-Packard, of course, can run four major operating systems–HP-UX, Windows, Linux, and OpenVMS–on the same big iron box, the Integrity Superdome server. The Superdomes have been around since 2000, and are now entering their third generation. The latest machines couple HP’s “Arches” chipset with Intel’s dual-core Montecito processors. HP has only just announced the availability of these Superdome boxes, and has not yet delivered performance stats on the machines. The Arches chipset significantly boosts the memory and I/O bandwidth in the Superdomes, and coupled with the dual-core Itanium 9000s, the Superdomes can bring 128 cores running at 1.6 GHz to bear on big jobs. And because the Montecito chips have HyperThreading (HT) enabled, a fully loaded Superdome has 256 virtual threads for the operating system and its applications to play with. It is a coincidence of numbers, but the extra oomph provided by HT exactly makes up for the performance degradation that normally happens when two cores are put onto a single chip. (Montecito is a true dual-core chip, too, with dual 12 MB L3 caches, which also helps smooth out performance.) In any event, when HP finally gets these Superdomes into the field and running tests, they could scale almost as well as IBM’s p5 595 servers. The p5 595s use dual-core Power5+ chips, which have IBM’s variant of simultaneous multithreading embedded in their electronics, so software sees 128 threads on a p5 595 with all of its processor cores in place.
Sun’s biggest boxes have just been updated with 1.8 GHz dual-core “Panther” UltraSparc-IV+ processors, up from the 1.5 GHz chips that were available in the Sun Fire E25000 servers last year. The E25000 can have up to 18 cell boards in a single system image, each board containing four dual-core Panther chips, for a total of 144 cores. That is a lot of processor cores, and it makes relational database software that is priced on the number of cores expensive on the Sun boxes. But after years of being far behind IBM and HP in terms of raw processor performance, Sun has closed the gap considerably with the Panthers. At least, it looks that way. Sun has not released an online transaction processing benchmark result on the Sparc server line in so long that it is hard to say for sure.
Fujitsu-Siemens has two product lines that play in the big iron space: the Sparc64-based PrimePower 2500 machines, which run Solaris, and the Primequest 480 servers, which are based on Itanium chips and run Windows and Linux. While the partnership between Japanese server maker Fujitsu and German server maker Siemens was very enthusiastic about demonstrating the performance and price/performance of its PrimePower line on the TPC-C online transaction processing benchmark a number of years ago, it has, like Sun, quietly backed away from the TPC-C test. Also like Sun, Fujitsu-Siemens does not publish list prices for its high-end boxes, and because it doesn’t run benchmarks on the PrimePowers, there is no easy way to find pricing or performance data for these machines. The company has released an interesting benchmark test on its Primequest Itanium servers running Linux, however.
The venerable big iron box, of course, is IBM’s mainframe, and just for fun–and to make i5/OS, Windows, and Unix shops feel good about the prices they pay for their own monster machines–I have ginned up some estimated TPC-C OLTP results for IBM’s System z9 EC class mainframes. Very few customers dedicate these largest mainframes solely to running DB2-based OLTP workloads–they are often running very big batch jobs for much of the day, some against IMS databases, and others still against VSAM flat files. Even so, I think it is illustrative to ponder how much more expensive mainframes would be if they ran a mixed OLTP workload solely, at the upper limits of mainframe scalability.
The Metrics of Comparison
As usual, I have created a table outlining the feeds, speeds, and pricing of the big iron servers compared in this installment of the Bang for the Buck series. The machines in the table have the hardware features shown. I have tried to keep the configurations across server architectures and operating system platforms as similar as is practical based on the natures of the product lines.
I am well aware that I am showing the estimated or actual OLTP performance of a given processor complex and comparing the cost of a base configuration. In this way, I am trying to isolate the base cost of a server and show its potential performance on the TPC-C online transaction processing benchmark. Yes, the Transaction Processing Performance Council frowns on this sort of thing. But someone has to do like-for-like comparisons–you certainly do when you are thinking about buying a server, for instance.
For the comparisons, I have put a RAID 5 disk controller on each machine, two 36 GB disks, and 2 GB of main memory for each processor core in the box (there are some exceptions on the core count, of course). Each server also has a basic tape backup, shown in the table. Obviously, a very large server will have many RAID controllers, many more disk drives, and probably a more substantial tape library. But I have kept the basic configuration consistent across all of the machines in this Bang for the Buck series. I like consistency.
In terms of the software stack on these big iron servers, I have added an operating system and a relational database management system, and unlike in past years, I have thrown in virtual machine or logical partitioning hypervisors, because I think people are going to start using these in production. IBM’s AS/400, iSeries, and i5 servers have had such software embedded for years–and IBM’s mainframes have had it for even longer–and to make it a fair comparison, this functionality should be added to X64 servers as well.
Windows and Linux machines are configured with VMware‘s top-of-the-line ESX Server 3 with all of the bells and whistles. Windows machines are running Windows Server 2003 Data Center Edition, and the sole Linux configuration in the big iron comparison is running Red Hat Enterprise Linux 4 AS.
The System i5 servers are equipped with i5/OS V5R4, which has a DB2 database embedded in it. I have two i5 configurations–one using i5/OS Standard Edition, which has no green-screen processing capacity, and another running Enterprise Edition, which has 5250 capacity fully activated on the processors.
On the Unix servers, the Integrity machines are running HP-UX with Virtual Server Environment (VSE) partitions, the Sparc machines are running Solaris 10 with Solaris containers, and the Power machines are running AIX 5.3 with the Virtualization Engine hypervisor. This is the same logical machine hypervisor used on the i5 servers. The Unix boxes have all been equipped with Oracle‘s 10g Enterprise Edition database.
The z9 mainframes shown are equipped with z/OS 1.7 and DB2 UDB for z/OS; these mainframes have IBM’s LPAR logical partitioning installed. While IBM used to sell mainframe software under what it called a basic one-time charge, it has not done so for years, and now mainframe shops have to rent their software. For this comparison, I have used Parallel Sysplex License Charge (PSLC) pricing, which assumes that the mainframe is running in a clustered environment; PSLC is a bit cheaper than the variable workload pricing scheme. Mainframe software is priced according to the millions of service units (MSUs) allocated to the software, with MSUs roughly scaling with MIPS. (An MSU is roughly equal to just under 6 MIPS.) This rental price on mainframe software includes patches and updates as well as tech support, and runs from thousands to hundreds of thousands of dollars per month for the full complement of mainframe software.
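To make that MSU-to-MIPS arithmetic concrete, here is a back-of-the-envelope sketch. The 6 MIPS per MSU ratio is the rough figure cited above; the per-MSU monthly rate is a made-up number for illustration, not an actual IBM price:

```python
# Rough sketch of MSU-based monthly software pricing.
# Assumes the roughly 6 MIPS per MSU ratio cited above; the $120 per
# MSU per month rate is hypothetical, for illustration only.
MIPS_PER_MSU = 6
HYPOTHETICAL_RATE_PER_MSU = 120  # dollars per MSU per month

def monthly_software_charge(mips: float) -> float:
    """Convert a machine's MIPS rating to MSUs, then to a monthly charge."""
    msus = mips / MIPS_PER_MSU
    return msus * HYPOTHETICAL_RATE_PER_MSU

# A 6,000 MIPS machine works out to 1,000 MSUs, or $120,000 a month
# at this invented rate.
print(monthly_software_charge(6000))  # 120000.0
```

The point of the sketch is simply that the monthly bill scales with the capacity of the box, which is why the same software stack can run from thousands to hundreds of thousands of dollars per month.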
None of the configurations have any hardware or software support costs. Pricing is just for hardware acquisition and basic installation support.
How the i5 Stacks Up to Other Big Iron
The entry 16-core i5 595 machines overlap in performance with the high-end i5 570 machines that were detailed a few weeks ago in the enterprise server comparison. The i5 595 machines offer considerably worse value for the dollar than the i5 570s because the expandability of the i5 595 does not come for free. The good news is that the price/performance of the i5 595 gets better as the machine gets bigger, and it gets a lot better for Enterprise Edition machines. For the largest 64-core configuration, the premium to activate 5250 processing on all 64 cores is a mere 18 percent compared to a machine with no 5250 capacity.
This is about the only premium that IBM should be charging for 5250 processing, in my opinion, and it is a pity that 5250 capacity is so much more expensive on other i5 machines. On a 16-core i5 595, the green-screen tax runs to 62 percent. At the low end of the i5 570 line, the premium is nearly 100 percent. On the hypothetical quad-core System iQ machines I created a few weeks ago, I gave 5250 processing a 3X premium, but only after radically boosting performance. On the i5 520 and i5 550, IBM is charging a 2.5X to 3X premium for green-screen processing.
You would think IBM would want customers to love 5250 green-screen applications, and those apps that have been Web-enabled using that protocol, since the i5 is the only machine that supports it natively. It is a plus, not a minus, but IBM sure does punish those who want to use it.
The 16-core i5 595 machine is not competitive, in terms of raw price/performance, with the entry Windows, Linux, and Unix boxes in the same power class. This is particularly true of machines using the new dual-core Montecito Itaniums, which give 32-core Xeon MP boxes a good thumping. However, when the new dual-core “Tulsa” Xeon 7100s are dropped into those Xeon boxes, perhaps boosting performance by 60 to 70 percent, and if vendors keep prices roughly the same, the Windows and Linux boxes should pull roughly even with the Montecito boxes in the same class.
On the Unix front, HP’s rx8640 server using the 1.6 GHz Montecito chips is leading in terms of bang for the buck on 16-core machines, besting even IBM’s p5 595 using the new 2.3 GHz Power5+ processors. And Sun’s 1.8 GHz Panther UltraSparc-IV+ chips in the Sun Fire E6900 cannot even come close to touching either HP’s or IBM’s Unix boxes.
The interesting bit is that HP’s Superdome servers using the older single-core Itanium 2 chips and the “Pinnacles” sx1000 chipset deliver about 30 percent less performance and about the same bang for the buck as i5 595 machines running i5/OS Standard Edition. However, with the Arches chipset delivering a 30 percent performance boost and the Montecitos doubling the performance of the raw processors, the Superdomes are set to get a radical boost in price/performance–if HP holds the line on prices, as it has said it would. It will not be long before the value for the dollar is twice as good on HP’s iron, running either Windows or HP-UX, as the i5 Standard Edition boxes.
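For what it is worth, the arithmetic behind that projection compounds simply. The 30 percent chipset boost and the 2X processor boost come straight from the paragraph above; everything else here is just multiplication:

```python
# Compound speedup for the new Superdomes, per the figures above:
# the Arches chipset adds about 30 percent, and the dual-core
# Montecitos roughly double raw processor performance.
chipset_boost = 1.30
core_doubling = 2.00

combined = chipset_boost * core_doubling
print(combined)  # 2.6

# If HP holds the line on prices, price/performance improves by the
# same factor: each transaction costs about 1/2.6, or roughly 38
# percent, of what it did on the sx1000-based Superdomes.
```

That 2.6X figure is why a Superdome that today merely matches the i5 595 Standard Edition on bang for the buck could soon deliver twice the value for the dollar.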
And while HP will not completely close the gap with IBM’s high-end p5 595s, which do more than twice as much work as a setup running i5/OS and its integrated DB2 (for reasons that IBM has yet to explain), HP will come pretty close. And Sun, even with the performance boost from moving to a 1.8 GHz Panther chip, cannot come close to catching either HP or IBM on OLTP workloads. Sun needs a new high-end architecture that does not rely so much on lots of processors and cores, and this is supposed to be the result of its partnership with Fujitsu to deliver the future “Jupiter” Sparc64 servers. These machines should have been here yesterday. Sun needed them three years ago, in fact.
Which brings us to the last column in the table: the gold column labeled Mainframe. As best as I can estimate, IBM’s new System z9 EC mainframes might be whizbang on highly tuned batch jobs and CICS queries, but on dense OLTP workloads like the TPC-C test, the mainframe delivers very poor performance compared to the RISC, Xeon, and Itanium architectures–and it does so at an absurdly high cost. And when I say absurd, I mean crazy.
Core for core, a System z9 EC will do about half the work of a System i5 595, and cost about ten times as much at list price. (I have obtained estimated list prices for mainframes as well as monthly rental fees from a source well acquainted with such matters.) When you do the math, the i5 Standard Edition machines offer about 15 times better value for the dollar running the TPC-C workload than a mainframe does. Even if you assume that customers can get a mainframe for half of list price and then add in their monthly fees for three years (which should be what a perpetual license should cost, if IBM offered one), the difference in price/performance between an i5 machine and a mainframe is 10 to 1.
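The arithmetic behind that claim can be sketched as follows. The per-core throughput and price figures here are normalized placeholders chosen only to match the ratios described above, not actual list prices:

```python
# Hypothetical per-core numbers chosen only to reproduce the ratios
# described above: the mainframe does about half the work per core
# of an i5 595, at roughly ten times the cost.
i5_work_per_core = 1.0       # normalized throughput
i5_cost_per_core = 1.0       # normalized list price

z9_work_per_core = 0.5       # about half the work...
z9_cost_per_core = 10.0      # ...at about ten times the price

i5_value = i5_work_per_core / i5_cost_per_core    # work per dollar
z9_value = z9_work_per_core / z9_cost_per_core

print(i5_value / z9_value)  # 20.0
```

That is the raw per-core gap of 20 to 1; full system configurations, with their memory, disk, and software stacks, narrow it to the roughly 15 to 1 advantage cited above, and to about 10 to 1 once you assume deep mainframe discounts.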
If you want to judge how much value there is in having a legacy platform, just look at those prices. A monopoly is a terrible thing–unless you happen to have one yourself, of course.
To be fair to the mainframe, customers do not buy them to run huge OLTP workloads so much as very highly tuned batch jobs that were coded years or decades ago–often by people who have long since departed the company. The amount of money it would take to recode and retune those applications far exceeds the savings in hardware and software–even at these ridiculous prices. The risks of changing code and possibly screwing up the entire business far outweigh the rewards of porting these legacy batch applications. But you can bet that anyone looking to develop new applications takes a hard look at mainframe pricing. And the only shops that continue to invest in mainframes are those wishing to preserve their mainframe empires, or those with so much mainframe skill that they can negate the price differences by operating far more efficiently than the typical Windows or even Unix shop.
Lucky for IBM, there are about 10,000 such companies in the world.