The X Factor: How Many Servers, How Much Juice, How Much Money?
March 5, 2007 Timothy Prickett Morgan
Data center managers around the world are complaining that they are running out of electricity to build out their server farms, and processor and server makers have been struggling in recent years to deliver high performance products that can be virtualized and therefore used more efficiently. Power and cooling for servers is clearly a big problem, but nailing down exactly how big has been difficult.
There are reasons for this. First, few companies know how many servers they have installed with any degree of accuracy. Servers pop up all over the place, and because they are often so inexpensive to acquire, they can be bought with discretionary funds inside departments and offsite offices without the data center manager even knowing. And because the server vendors themselves are secretive about how many machines they ship, and are even more wary of telling anyone how many machines are on maintenance and therefore in use (or how many machines are in use without maintenance), getting a feel for the worldwide installed base is a problem. People make guesses on this, and necessarily so.
Further complicating the task of assessing the actual power and cooling needed for the world’s servers is the fact that server utilization varies over time as workloads change throughout the days, weeks, and months of a year. The amount of power that a server draws is typically nowhere near its maximum potential footprint–mainframes and certain high-end RISC boxes might be an exception. You can’t just look at the size of the power supply–450 watts, or 600 watts, or 1,000 watts–and say that a server draws down that much electricity and will need at least that much cooling to get rid of the heat that is generated from information processing inside the server.
Finally, on the cost front, the price of electricity varies with usage within a facility–the more you use, the more you pay in a lot of cases, and electric prices sometimes change with the season, too–so the electric bill for the data center changes as processing workloads change. And, if you want to figure out the global cost for the power and cooling of servers, you have to take into account price fluctuations throughout the year and across different regions within a country and across countries.
To try to get a handle on the use of electricity for power and cooling for servers in the world, Jonathan Koomey, a staff scientist at the Lawrence Berkeley National Laboratory and a professor at Stanford University, sat down with a spreadsheet and built a sophisticated model of the server installed base and the electricity usage of those servers. The report is available online thanks to Advanced Micro Devices, which has been using energy efficiency as a lever to sell against rival Intel for the past three years and which has managed to get 20 percent share of the X64 server racket largely because of the performance and performance per watt that its Opteron processors delivered compared to Intel's Xeon and Itanium chips.
First, Koomey took on the installed base issue, and got some estimates for server shipments, takeouts, and installed base from IDC. According to those numbers, in 2000, the height of the dot-com boom years, companies the world over acquired 4.2 million servers, retired 1.9 million units, and pushed the installed base to 14.1 million servers. Volume servers–meaning X86 and entry RISC boxes that cost less than $25,000–accounted for 93 percent of new server sales and 87 percent of the installed base. Fast forward to 2005. After years of unplugging midrange and high-end servers and either replacing them with more compact, more powerful midrange or high-end machines (and consolidating workloads) or replacing them with volume servers (which had lots more processing, memory, and I/O capacity), IDC believes that the installed base of volume servers in the world has more than doubled to just under 26 million units. Over this same time, the base of midrange boxes has dropped from 1.8 million to 1.3 million machines, and the high-end base has shrunk from 65,600 units to 59,400 units.
Koomey wanted to isolate just the power and cooling associated with servers, so he ignored the other equipment in the data center, such as disk drives and arrays, tape backups, network equipment, and other peripherals and their power and cooling needs. This report only considers the server electricity usage for processing and the percentage of the data center cooling allocated to the server equipment. Depending on the data center, disk, tape, network, and other gear account for anywhere from 20 percent to 40 percent of electricity usage, he estimates.
Rather than just come up with an average power use for an average server, Koomey selected a range of volume, midrange, and high-end servers from a bunch of different vendors, and then calculated the maximum power of a configured machine with lots of disk and memory using either the vendor’s online configuration tools or spec sheets. He then set the typical power use at 66 percent of maximum load.
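In code, Koomey's typical-power rule of thumb looks something like this; the 330-watt maximum used in the example is an illustrative number, not one from the report:

```python
# Sketch of the estimation method described above: take the maximum power
# draw of a fully configured machine (from a vendor configurator or spec
# sheet) and assume typical draw is 66 percent of that maximum.
TYPICAL_LOAD_FACTOR = 0.66

def typical_power(max_watts):
    """Estimate typical draw, in watts, from a configured machine's maximum."""
    return max_watts * TYPICAL_LOAD_FACTOR

# e.g. a hypothetical volume server whose configurator reports a 330-watt maximum
print(round(typical_power(330)))  # → 218, roughly the 2005 volume-server average
```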
In 2000, the average volume server burned about 183 watts, the average midrange box burned 423 watts, and the average high-end box burned 5,322 watts. For 2005, Koomey estimates that an average volume server burned 218 watts, an average midrange server burned 638 watts, and an average high-end box burned 12,682 watts. Volume servers have not changed all that much, but midrange and big iron servers sure have. Of course, these machines have lots more computing power, too. (Probably around two to three times the processing power, on average, and in some cases, a factor of four or five is closer to the truth.)
When you do the math on these numbers, the worldwide installed base of volume servers directly consumed around 3.4 million kilowatts of electricity in 2000, with another 3.3 million kilowatts being used to cool the servers. Midrange servers burned 800,000 kilowatts of juice worldwide, and 1.5 million kilowatts when you add in cooling, while high-end servers burned 300,000 kilowatts directly and another 400,000 kilowatts for cooling on top of that. So the total electricity used by the 14.1 million servers running worldwide in 2000 was 6.7 million kilowatts.
Of course, electricity is not measured in a static unit (pun intended), but rather is paid for in units over time, or kilowatt-hours. When you do that conversion, you end up with 29 billion kilowatt-hours for the servers in 2000, and 59 billion kilowatt-hours for the servers and their cooling together.
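The conversion is easy to check yourself, assuming the servers run around the clock: a steady draw in kilowatts times the 8,760 hours in a year gives kilowatt-hours per year.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, assuming servers run continuously

def annual_kwh(steady_kw):
    """Convert a steady power draw in kilowatts to kilowatt-hours per year."""
    return steady_kw * HOURS_PER_YEAR

# 6.7 million kilowatts for servers plus cooling in 2000:
print(annual_kwh(6.7e6) / 1e9)  # ≈ 58.7, i.e. roughly 59 billion kWh
```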
In 2005, the numbers get bigger, and this is the problem. There are more servers in the world, and they are burning either a little or a lot more juice, and therefore requiring a lot more cooling, too.
IDC estimates that there were 27.3 million servers installed and running at the end of 2005. Volume servers, which made up just under 26 million of these machines, burned up 5.8 million kilowatts of power directly, and a total of 11.5 million kilowatts once cooling is added, Koomey reckons. Oddly enough, midrange consumption stayed the same, with 800,000 kilowatts of direct electricity consumption and 1.5 million kilowatts including cooling. High-end boxes more than doubled their consumption, even as the base shrank, burning up 500,000 kilowatts directly and 1 million kilowatts with cooling. When you add it all up, over 7 million kilowatts were used worldwide to power the servers in 2005, and another 7 million kilowatts were used to cool them. That works out to a total of 123 billion kilowatt-hours worldwide.
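The 2005 tally can be sketched the same way, using the class-by-class figures above (a rough check, not Koomey's actual model):

```python
HOURS_PER_YEAR = 24 * 365  # assuming servers run continuously

# Direct power and power-plus-cooling by server class for 2005,
# in millions of kilowatts, taken from the figures cited above
direct_mkw = {"volume": 5.8, "midrange": 0.8, "high_end": 0.5}
total_mkw = {"volume": 11.5, "midrange": 1.5, "high_end": 1.0}

direct = sum(direct_mkw.values())  # just over 7 million kW to power the servers
total = sum(total_mkw.values())    # 14 million kW with cooling included

# Convert the steady 14 million kW draw to kilowatt-hours for the year
print(round(total * 1e6 * HOURS_PER_YEAR / 1e9))  # → 123 (billion kWh)
```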
What seems obvious is that the server base has more than doubled in size in the span from 2000 to 2005. Moreover, by conservative estimates, the computing power in the average server has gone up by at least a factor of five in this time, yielding an aggregate increase in performance of a factor of 10 worldwide. This is a lot more computing power, obviously. (Those are my estimates, not Koomey’s or IDC’s.) That works out to a factor of five improvement in performance per watt, by the way. And that is not a bad trade. However, it does come at a cost.
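The back-of-the-envelope arithmetic behind those estimates (mine, remember, not Koomey's or IDC's) runs like this:

```python
# Rough check of the performance-per-watt estimate above.
# The factor-of-five per-server performance gain is an assumption.
base_growth = 27.3 / 14.1            # installed base roughly doubled
perf_per_server = 5                  # assumed per-server performance gain
aggregate_perf = 2 * perf_per_server # ~10x worldwide computing power

power_growth = 123 / 59              # total kWh roughly doubled, 2000 to 2005
perf_per_watt = aggregate_perf / power_growth
print(round(perf_per_watt, 1))       # → 4.8, call it a factor of five
```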
By Koomey’s estimates, using inflation adjusted electricity prices averaged out across the world and pegged to the 2006 dollar exchange rate, the 14.1 million servers in the world burned up $3.2 billion in electricity in 2000. The base of 27.3 million machines in 2005 cost $7.3 billion for direct power and cooling. When you do the math, a server’s power and cooling averaged out to about $227 in 2000, and only rose to $267 per server, on average, in 2005. None of those numbers are as big as many have suggested in past estimates, which is the interesting bit. Other estimates have been an order of magnitude or more higher for power and cooling costs.
The question now is what happens to power costs in the future, and are these costs for power and cooling enough to shift server designs and buying patterns? Many people still think so.