How Low Can You Go?
Published: February 16, 2006
by Timothy Prickett Morgan
We are at the dawn of a new virtualized age of X64 server computing, and very inexpensive and powerful servers are coming our way. The advent of inexpensive dual-core processors in the X64 server market will force all server makers to change the way they price and package their products, and it will eventually cause software makers to rethink their pricing, too. I feel a crunch coming, and it is one that is going to be tough on vendors and not necessarily good for customers over the long haul.
The core-based pricing practices of software makers cannot hold as more and more cores are added to a socket, if for no other reason than that companies do not double their end user counts every 18 months. Moore's Law has to stop, at some point, when it comes to software.
Some vendors are, of course, trying to buck these trends, and for good reason: They need to make money while they can. There's no way, for instance, that IBM can treat the hardware and software on each Power5 core as a whole processor when the rest of the entry and midrange server business is going agnostic about cores and is focusing on sockets. While IBM has tried to set the pace with its zSeries, iSeries, and pSeries products for the past five years, counting each processor core as if it were a whole processor and charging for processor activations and systems software accordingly, the server industry is heading in the other direction. For dual-core processors at least, other operating system and middleware players--including IBM's own Software Group when talking about X64 machines--are pricing software at the socket, not the core, level. This has the effect of cutting the price of systems and application software in half. (Well, if you want to be precise, moving from a single-core to a dual-core chip in the X64 architecture only gets you about 40 percent more performance, so it is more like a 40 percent discount if you gauge pricing against performance.)
The reasoning behind this core agnosticism is that no one can crank up clock speeds to boost performance any more because of thermal issues, and so, the thinking goes, counting processing elements is no longer a fair way to reckon either the cost of a processor or the software that runs on it. Chip makers like Intel, AMD, and Sun Microsystems want to price based on the socket because it is easier, it is practical, and it causes companies that price by the core competitive grief. There's no right or wrong here, of course. Each core in a processor socket can be isolated from the others and can run distinct software in most virtualized server architectures today, so you could make the argument the other way just as easily--as IBM continues to do for its iSeries, pSeries, and zSeries products, which have all been based on dual-core designs since 2001.
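The socket-versus-core arithmetic in the parenthetical above can be made explicit. Here is a minimal sketch; the $10,000 license fee is a made-up illustration, the 1.4X dual-core scaling factor is the rough figure cited above, and the exact discount percentage depends on which baseline you measure against:

```python
# Compare per-core and per-socket software pricing for a dual-core chip.
# The $10,000 license fee is hypothetical; the 1.4X scaling factor is
# the rough dual-core figure cited in the text for X64 chips.

def cost_per_perf(license_fee, cores_billed, perf_factor):
    """Software cost per unit of single-core-equivalent performance."""
    return (license_fee * cores_billed) / perf_factor

FEE = 10_000.0        # per-core (or per-socket) list price, made up
DUAL_CORE_PERF = 1.4  # a dual-core does ~40% more work than a single core

single_core = cost_per_perf(FEE, cores_billed=1, perf_factor=1.0)
per_core = cost_per_perf(FEE, cores_billed=2, perf_factor=DUAL_CORE_PERF)
per_socket = cost_per_perf(FEE, cores_billed=1, perf_factor=DUAL_CORE_PERF)

# Socket pricing cuts the nominal bill in half versus per-core pricing,
# but measured against a single-core baseline the performance-adjusted
# discount is smaller than the nominal 50 percent.
discount = 1 - per_socket / single_core
print(f"single-core baseline:          ${single_core:,.0f} per unit of work")
print(f"dual-core, per-core pricing:   ${per_core:,.0f} per unit of work")
print(f"dual-core, per-socket pricing: ${per_socket:,.0f} per unit of work")
print(f"discount vs. single-core baseline: {discount:.0%}")
```

Measured against per-core pricing on the same chip, the socket scheme is a straight 50 percent cut; measured against a single-core box at the same license fee, the performance-adjusted saving works out to roughly 29 percent.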
You can understand why IBM, Hewlett-Packard, and a few other makers of enterprise systems are holding out. Server average selling prices have taken a beating, and they are all in a rat race to sell more and more computing capacity just to stay in place. The analysts at IDC cooked up a chart that shows server volumes over time and the average server value (ASV) of the servers sold from 1996 through 2004, projecting out from 2005 through 2008. (This data was published in August 2005.)
Figure 1: Server Shipments and Average Server Values. Source: IDC
On this chart, server volumes were a piddling 2 million units in 1996, and the ASV was north of $35,000 per unit. Why is that? Well, this was the heyday of the Unix market, and even if Unix volumes were low compared to X86 servers, Unix machines cost big bucks because of their reliability and scalability, which drove up ASVs. Also, IBM's mainframes and AS/400s were still going strong in 1996 (relatively speaking, of course), and Digital Equipment, HP, and a few others were still selling lots of proprietary minicomputers and mainframes. Relentless competition in the Unix space during the dot-com, ERP, and Y2K booms of the late 1990s nearly doubled server volumes, and server makers started grabbing features from high-end machines and throwing them into midrange gear and putting midrange features into entry gear in an effort to keep selling prices from falling. This strategy propped up server ASVs, of course, but it had the long-term effect of erasing some of the distinctions between server classes. This phenomenon has happened many times in the past several decades of the system and server business, and it will keep on happening.
When the worldwide economy tanked in 2000, server volumes declined in 2001 and barely rose in 2002, and those vendors that could shifted their emphasis to Windows and Linux products and tried to keep prices as high as possible on their Unix and proprietary boxes. They did this not because they disliked their Unix and proprietary customers, but because it was the only way to keep from going broke. Technology cycles are not driven by innovation as much as those in the IT industry like to believe. Plenty of good technologies and lots of bad ones are not adopted until there is massive--I would say tectonic--pressure to get off of a technology that is suddenly perceived as being too expensive. Technological innovation is a necessary condition for change in the data center, but economic and cultural pressure is what usually makes it happen.
What I love about this IDC chart is the fact that projecting out to 2008, IDC slapped down a ruler and drew a nice straight line from 2002 through 2008 to reckon the server volumes, which the company predicts will more than double in that time. I don't know about you, but given server consolidation and virtualization, which are all the rage right now, I have a hard time believing server volumes will not hit a plateau--and soon. Once we all have a virtualized server, we are not going to add new footprints at the same rate. The rate will decline, in fact, because we will just activate processor cores that are probably already in the box. I agree that ASVs will not be able to fall much further than IDC shows--an average of $6,000 across all server types is way, way low by historical standards--but the implication of what I think about decreased server volumes is this: the aggregate revenues in the server business will fall, and fall fast. And when that happens, vendors will look around and re-architect their machines to have a lot more features so they can prop up their ASVs and stay in business. Or, at least so their server units can post good numbers. Or, they will merge their software and server units to hide the bloodbath of red ink.
Figure 2: Performance of Servers on the TPC-C OLTP Benchmark, 1993-2005. Source: TPC
It is hard to imagine what server makers might chuck into the servers to make them worth the money they need to make their numbers. Performance has been the feature of choice since the dawn of the system business. You cut the price of a unit of performance, but you convince customers to buy a lot more, either by only offering processing capacity that comes in big chunks (as IBM did with the first-generation PowerPC machines back in 1995) or by moving new workloads onto the box (as IBM has done wonderfully with the zSeries mainframes in the past four years and is trying to do with the iSeries and, to a smaller extent, with the pSeries). HP has consolidated five Compaq and HP server lines onto a single Itanium platform that spans five operating systems. Unisys was in the process of doing the same until late last year, when it threw in the towel on server development and partnered with NEC to create a single server platform; Unisys is keeping control of its mainframes--at least for now.
Figure 3: Bang for the Buck on the TPC-C OLTP Benchmark, 1993-2005. Source: TPC
The problem with using performance as bait for buyers is that we have lots and lots of it now. If you take all of the TPC-C data for every benchmark ever run and plot it out as two scatter graphs, you will see that performance has grown along a power curve between 1993 and 2005, with throughput in transactions per minute (TPM) moving from a few thousand per server up to around 300,000 TPM. Entry machines can now process hundreds of thousands of transactions per minute, and as you double the cores again--which starts happening in 2007--there is even more performance coming. At the same time, if you plot out the dollars per TPM over the same 1993 to 2005 time span, you'll see the curve drop from around $1,200 per TPM when the TPC-C test was launched to under a buck today; it is rapidly approaching zero. That's not the TPC-C test being irrelevant. That is increasing performance in a set footprint size being irrelevant. People are downsizing their footprints, just like they downsized their data centers and then their mainframes and then their Unix systems to Linux and Windows. Down, down, down.
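The price/performance collapse described above can be put in annualized terms. A rough back-of-the-envelope sketch, using the approximate endpoint figures quoted in the text (about $1,200 per TPM in 1993, about $1 per TPM in 2005):

```python
# Annualize the drop in TPC-C cost per transaction-per-minute (TPM)
# between 1993 and 2005, using the approximate figures from the text.
import math

start_cost, end_cost = 1200.0, 1.0   # dollars per TPM, approximate
years = 2005 - 1993                  # 12 years of results

total_improvement = start_cost / end_cost         # ~1,200X cheaper overall
annual_factor = total_improvement ** (1 / years)  # per-year improvement rate
halving_time = years * math.log(2) / math.log(total_improvement)

print(f"total improvement: {total_improvement:,.0f}X")
print(f"annual improvement: {annual_factor:.2f}X per year")
print(f"cost per TPM halves roughly every {halving_time:.1f} years")
```

On those rough numbers, the cost of a transaction per minute has been falling by about 1.8X a year, halving in well under 18 months--a faster cadence than Moore's Law itself.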
In the dash to hold up server ASVs that I think might be on the horizon, I believe that those companies with their own operating systems, virtualization, middleware, management software, application software, and other gadgets are going to be sorely tempted to bundle more and more of these goodies into their offerings. And, as crazy as that sounds, they may even go so far as to give away their servers to get software and services revenues.
Heaven only knows if anyone can make money in such a hardware world. But we may just find out the hard way.