For Some Customers, the Mainframe Is Green
Published: August 26, 2008
by Hesh Wiener
IBM mainframes are the low energy choice, says Big Blue. It likes to pit a mainframe running zLinux intensively on IFL engines against racks of X64 Linux servers that are lightly loaded. IBM's analysis suggests that the mainframe solution can help you slash power and cooling costs. But IBM glosses over the details. In practice, it comes down to cases. Still, it has become easier to get information about your own particular situation and to see how your practices can affect power and cooling requirements.
If you want to measure the power your mainframe is using, IBM is ready to help and in fact has been for more than a year. But the technology requires current microcode, and z9 boxes, the first to have the feature, only began providing what Big Blue calls its Mainframe Gas Gauge in May 2007. Users whose machines were installed before that date have to get upgraded firmware to expose the power monitoring functionality. Once a mainframe is able to watch its power consumption, the user organization can use IBM software tools to turn the data into useful reports.
IBM has done some of its own studies of installed z9 machines, and this has enabled the company to sketch out what it believes is a good if rough picture of power consumption in its installed base. Unsurprisingly, IBM mainframes don't use all the power their labels say they might use, because the maximum power listed on spec sheets is the power used by a system that is fully loaded with memory, channels, and other features. Few, if any, mainframes in the field are that fat, and some are quite a lot skinnier. (IBM explains this and some related topics in one of its white papers.)
When IBM examined power usage of its largest z9 boxes, machines with the four processor modules (which it calls books) that are needed to yield up to the system's maximum 54 production cores (which are supplemented by other engines that provide system services), it found that 90 percent of the installed base used less than 70 percent of the machines' maximum rated power. The z9 can draw up to 18.4 kilowatts; 90 percent of the machines IBM studied drew 14 kilowatts or less. In its presentations, IBM also talks about the half of the installed base that uses even less power, but customers who are big on consolidation and keen on making their mainframes really work hard might well be inclined to add memory and other features that add to computing capacity. Pretty soon that extra hardware brings a system into the higher power bracket. But, it turns out, that might be a good thing.
As IBM explained when it announced the Gas Gauge, its mainframes don't use a lot more power when they are working hard compared to when they are loafing. IBM says that a z9 that is run at 99 percent utilization will use only about 150 more watts than the same machine left idle. And turning engines on doesn't change a z9's power draw much, either. IBM believes that configuring an IFL in a mainframe, which means changing the microcode so the system puts a core to work as a zLinux engine, adds only about 20 watts to the system's power consumption. In practice, when users turn on more engines they also are likely to add memory, channels, and network cards. The extra hardware will use some juice, possibly boosting power consumption by 10, 20, even 30 percent. Still, as more of the hardware gets put to work, a mainframe's MIPS will rise a lot faster than its electric bill.
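The near-flat power curve IBM describes can be sketched as a simple model. The constants below are the z9 figures quoted above; the linear interpolation between idle and full utilization is an assumption for illustration, not anything IBM has published:

```python
# Sketch of the near-flat z9 power curve, using the figures quoted
# above (14 kW typical draw, ~150 W extra at 99 percent utilization,
# ~20 W per enabled IFL). The linear ramp is an assumption.

IDLE_POWER_KW = 14.0       # typical draw for a big z9, per IBM's survey
BUSY_DELTA_KW = 0.150      # extra draw at 99 percent utilization
IFL_DELTA_KW = 0.020       # extra draw per enabled IFL core

def power_draw_kw(utilization: float, extra_ifls: int = 0) -> float:
    """Approximate system draw, treating the load-dependent part as
    linear between idle and 99 percent utilization."""
    return IDLE_POWER_KW + BUSY_DELTA_KW * utilization + IFL_DELTA_KW * extra_ifls

idle = power_draw_kw(0.0)
busy = power_draw_kw(0.99, extra_ifls=4)
print(f"idle: {idle:.2f} kW, busy with 4 IFLs: {busy:.2f} kW")
print(f"increase: {100 * (busy - idle) / idle:.1f} percent")
```

Even driven hard with four IFLs lit, the model's draw rises by well under 2 percent, which is the point of IBM's argument: the MIPS go up much faster than the meter does.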
IBM says that a mainframe shop can turn on an IFL and not change its power and cooling requirements very much. The company likes to point out that an IFL, or a group of them, can replace quite a few X64 Linux servers and that the mainframe, unlike the Linux servers, is built to work at high utilization all the time. Critics of IBM's claims, who, naturally enough, are often affiliated with IBM's competitors, argue that IBM likes to compare mainframes with X64 servers that are not used efficiently. They add that just about every kind of server other than a mainframe has power management built into hardware and systems software. This is certainly the case with IBM Power Systems (and their predecessors, the System p Unix boxes and System i midrange computers), and with System x machines, which use chips from Intel and Advanced Micro Devices. In fact, the X64 world has a lot of experience with power management. The technology has roots in chips used in laptop computers, but these days every X64 chip has energy management technology. Power management is not only supported by Linux and Windows, it's so much a part of the X64 world that it is one of the features Sun has emphasized in its Solaris for X64 variant.
What all this means is that a mainframe shop that just assumes moving work from X64 servers into a mainframe will save on power and cooling might turn out to be wrong. Sure, if a mainframe has a bunch of hot engines already jogging in place, turning them on for production purposes will provide computing power without much extra electric power. But if that consolidation means adding one or more books to a mainframe, boosting memory, and hooking up channels, it pays to do a little research before boasting about power savings that might not materialize.
In addition, a mainframe shop has to realize that an IFL (plus its maintenance) and memory and z/VM and zLinux can be expensive. It's nice to reduce your carbon footprint. But if the bottom line shows that your costs increase by a couple hundred grand, you might reconsider that plan to replace your X64 boxes with IFLs. So you need a plan that not only saves money on power and cooling, but also ends up saving money when hardware, software, and support are figured in. Some organizations win big with consolidation, while others don't have the mix of workloads and servers that lend themselves to living economically inside a z box. The comparison you have to make is between the cost of moving to IFLs on the one hand and the cost of moving to a new (and most likely virtualized) X64, Power, Itanium, or Sparc box on the other. If you compare the z alternative (or any other alternative) to a replica of some three-year-old server farm you won't get a valid answer. You also have to do your best to estimate future capacity growth, because that can make a big difference in environmental issues and financial costs down the road.
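The comparison described above can be sketched as a simple five-year total: acquisition cost plus recurring software and maintenance plus power and cooling. Every figure below is a placeholder, not a quoted price, and the cooling-equals-power assumption is a rough rule of thumb; the point is the shape of the calculation, which can come out either way once real numbers are plugged in:

```python
# Hypothetical five-year cost comparison. All dollar figures and
# kilowatt loads are placeholders for illustration, not real prices.

def five_year_cost(hw, sw_per_year, maint_per_year, kw, rate_per_kwh=0.10):
    """Total cost: acquisition, plus five years of software and
    maintenance, plus electricity (doubled as a rough proxy for
    cooling overhead) at 24x7 operation."""
    hours = 5 * 365 * 24
    power = kw * hours * rate_per_kwh * 2  # double the kW for cooling
    return hw + 5 * (sw_per_year + maint_per_year) + power

# Placeholder scenario: a few IFLs plus memory, z/VM, and zLinux
# versus a rack of new, virtualized X64 servers.
ifl_path = five_year_cost(hw=125_000, sw_per_year=30_000, maint_per_year=10_000, kw=0.5)
x64_path = five_year_cost(hw=60_000, sw_per_year=15_000, maint_per_year=8_000, kw=3.0)
print(f"IFL path: ${ifl_path:,.0f}  new X64 path: ${x64_path:,.0f}")
```

With these made-up inputs the IFL path loses despite using a sixth of the power, which is exactly the trap the paragraph above warns about: the energy line is small compared to hardware, software, and support.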
The most abundant data on mainframe power consumption describes the z9 rather than the newer z10. IBM is bound to publish z10 data soon, but for now the only way to look at some of the details is to stick with z9 information and to presume that the z10, as the heir to z architecture, will have similar characteristics. Whether a z10 is more or less of an energy hog is impossible to say, but the chances are good that the z10 will sometimes be more efficient and sometimes less so. This is because each of the up to four books (processing modules) in a z10 has more circuitry than the books used in a z9. A z10 book with only one or a few active processing engines might use more juice than a corresponding z9 book, while a z10 with a book that is mostly active probably uses less power than a z9 with as many live engines, because the z9 will need more books for the same engine count.
If a z9 EC can fit in one book it will use either an 8-core or an 18-core book. IBM's studies indicate that for the most part (specifically, for 90 percent of installed systems), z9 machines with up to 8 processing cores used 6 kilowatts while those with up to 18 live cores used 9 kilowatts. Just a simple comparison of these two configurations suggests that a shop that runs with 8 live cores in the smallest book that will provide this capacity is going to use 0.75 kilowatts per core. (The power requirements will be the same whether a core is running native z code or is set up as a specialty engine, such as an IFL.) A bigger machine, one that uses all 18 cores in the larger single book, will, on average, use 0.5 kilowatts per core. IBM says its surveys of z9 systems show that two-book boxes with up to 28 live cores generally use no more than 12 kilowatts, or 0.43 kilowatts per core; z9 systems with three books and 38 processing cores use only 13 kilowatts, or 0.34 kilowatts per core, if all the cores are lit up; and z9 systems with four books and up to 54 cores also burn about 13 kilowatts, reducing the figure to less than 0.25 kilowatts per core when every core is used.
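The per-core arithmetic above falls out directly from the survey figures, as a quick sketch shows (the table entries are the z9 numbers quoted in this article):

```python
# Kilowatts per core for the z9 configurations IBM surveyed, using
# the 90th-percentile draw figures quoted above.
# Each tuple: (books, live cores, typical kW)
configs = [
    (1, 8, 6.0),
    (1, 18, 9.0),
    (2, 28, 12.0),
    (3, 38, 13.0),
    (4, 54, 13.0),
]

for books, cores, kw in configs:
    print(f"{books} book(s), {cores} cores: {kw / cores:.2f} kW/core")
```

The trend is the article's point in miniature: the fuller the books, the cheaper each core is to feed, dropping from 0.75 kW per core at the small end to about 0.24 kW per core on a fully lit four-book box.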
At the other end of the range of possibilities, a z9 with only one live core will still use 6 kilowatts, making it a real hog. Systems configured sparsely, meaning machines with many offline cores in one or all books, will use a lot more juice per core than the fully utilized examples that yield the highest power efficiency. Depending on workloads and other factors, it may not be possible for a particular system to be configured for the highest power efficiency. Throughput requirements will come first, and power efficiency second.
Basically, you can't simply turn on some cores in a mainframe and expect the only result to be more processing power. More cores can mean more bottlenecks, and more cores in the lowest possible number of books might yield disappointing performance results even if the outcome is greener. Sometimes a system that seems to be choking can stay with tightly packed processor books, but it will need to spread its I/O across more physical connections. I/O cages can add a lot to a mainframe's power usage, and in the case of machines with modest engine counts but big I/O dispersions, the cages can actually double a mainframe's peak power requirements.
Sometimes the strongest case IBM can make for the mainframe turns out to be the simplest one. If a customer has a mainframe and the mainframe has unused engines and ample resources such as memory and channels, turning on an IFL to run zLinux won't have any visible impact on the system's power and cooling requirements. From an environmental standpoint, that IFL will be free. If it also lets you unplug a rack of servers, then it's actually going to save power.
And sometimes the questions a customer has to ask before committing to consolidation on a System z also turn out to be pretty simple. First, the user might want to make sure the consolidation can proceed without requiring lots of extra power-hungry hardware. Then the user has to figure out how much the upgrade to hardware, software, maintenance, and support will cost as well as how much can be saved by unplugging the servers that will be folded into the mainframe. Finally, a diligent customer should also think about the possibility that the best choice is a third solution involving the consolidation of racks of old servers into fewer racks of newer and more efficient ones.
RELATED STORIES

IBM Is Enjoying the Role of Green Giant
Making the Case for System z10 Server Consolidation
Sun Brags About Its New Green Data Center
IBM Takes Its Own Server Consolidation Medicine
IBM Sees Green in Going Green in Data Centers
How To Build a Green Data Center
Uncle Sam Pushes Energy Star Ratings for Servers