The System iWant, 2010 Edition: Blade and Cookie Sheet Boxes
February 8, 2010 Timothy Prickett Morgan
I’ve done big boxes, and little boxes, and in-between boxes in my theoretical and completely hypothetical System iWant, 2010 Edition, Power7-based machines. As I sit down to write the next installment in the System iWant series, IBM is getting ready to launch some kind of Power7-based servers on February 8, and as far as I know, blades are not part of the announcement. So I feel perfectly comfortable in giving Big Blue whatever advice I can come up with to make its Power7 blades better.
With the Power7 server designs, IBM could do some really interesting things with blade servers and blade-like "cookie sheet" servers, as I call them in deference to Google's homemade servers.
Let's talk about the cookie sheets first because they are fun. In the early years, Google took a rubber mat, put it on a cookie sheet, and plunked down a motherboard, a disk drive, and a power supply, and that was a server behind its famous search engine. These cookie sheets were stacked up in racks and didn't have a lot of metal obstructing airflow or adding cost and weight to the server. In its simplicity, the idea was brilliant, which is what you'd expect from a game-changing company like Google, after all.
Since that time, a number of vendors have created cookie sheet-style machines, including Hewlett-Packard, Silicon Graphics (back when it was still Rackable Systems, before it bought the carcass of the old SGI), and Super Micro (which makes both motherboards and whitebox servers). These designs drop systems onto trays that are either housed in a rack or half rack, with open tops and sides, or that slide into a standard 2U or 4U rack chassis that can hold multiple machines. Even IBM's latest 2U iDataPlex compute chassis has two servers on trays that slide in and out of the chassis independently; a tray can also be equipped with one server plus storage or other peripherals, as need be.
The cookie sheet server design is about minimizing cost and weight for hyperscale deployments, in data centers running Web 2.0-style workloads on thousands, tens of thousands, or even hundreds of thousands of machines. The way SGI does it makes sense in the data center, with the highest layer of hardware abstraction being the rack, but Google has been building cookie sheet servers and stacking them up inside of shipping containers since 2005. For Google, which has massive scale problems, this makes sense. For most customers, and certainly for midrange shops, a shipping container with 1,160 two-socket servers is a bit overkill. For a lot of shops, even a rack of cookie sheets might be a lot.
That said, I think there is something to this minimalist design for servers, something that works for whatever size installation you are talking about. Computers are not nearly as delicate as we treat them, and they do not have to be run in their optimal design range for temperature, humidity, and dust. (Believe me, I know.) The extra cost of building a fussy computer may not be worth the trouble, as Google has so aptly shown. Getting into a rack to change a component on one of your rack-based servers is a huge pain in the neck. Google reaches in, unplugs the server, slaps a new one in, and it boots up on the network like a human taken over by the Borg collective. No fuss, no muss. No pesky screws and lids over the rack server. Of course, such a cookie sheet setup means the data center itself has to be more secure, since the individual box is not.
My point is, if IBM wants the Power7 line to thrive in the hyperscale world where the X64 chip rules, it has to do more of the things that a company like Google likes. Minimalist, cheap approaches. If IBM did this and also brought to bear the performance and performance per watt advantages of the Power7 architecture, those hyperscale companies, which by and large use Linux and other open source software, would take a look at the boxes and even think about recompiling their code.
As for blades, I think that IBM is married to the form factor of its BladeCenter-H and BladeCenter-S blade enclosures, and it would be very surprising for IBM to come up with a new, more dense, radically different chassis. The BladeCenter-H comes in a 9U form factor and supports 14 full-height blades, and the BladeCenter-S is a 7U box that can house six of the same blades. For SMP scalability with both Opteron and Power6/6+ blades, IBM allows two adjacent two-socket blades to be linked together through an inter-blade link that converts them into a four-socket server, which comes in pretty handy. A blade server from the original BladeCenter chassis from 2002 can still plug into these newer chassis.
Both HP and Dell, in their drive to get more density out of blade setups, offer half-height blades. This allows the HP BladeSystem c7000 enclosure (at 10U of rack capacity) to support eight full-height blades or 16 half-height blades; the smaller c3000 (which takes up 6U of space) supports half as many blades. Where IBM takes two full-height blades to make a four-socket SMP, HP only takes up one slot, and HP can put 16 two-socket blades in its chassis compared to IBM's 14. HP even has a double-server blade, the ProLiant BL2x220c G5, that puts two physical servers on a single half-height blade, so it can get 32 whole servers in the 10U chassis. That's pretty dense.
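To put those density figures side by side, here is a quick back-of-the-envelope sketch in Python. The servers-per-chassis and chassis-height numbers come from the text; the 42U rack and the decision to ignore switches, power distribution, and other rack overhead are my own simplifying assumptions.

```python
# Back-of-the-envelope blade density comparison using the chassis
# figures cited in the text: (two-socket servers per chassis, height in U).
# Assumes a standard 42U rack and ignores switches, PDUs, and cabling.

RACK_UNITS = 42

chassis = {
    "IBM BladeCenter-H (full-height)":      (14, 9),
    "HP c7000 (half-height)":               (16, 10),
    "HP c7000 (BL2x220c, 2 servers/blade)": (32, 10),
}

for name, (servers, height) in chassis.items():
    enclosures_per_rack = RACK_UNITS // height
    per_rack = enclosures_per_rack * servers
    print(f"{name}: {servers / height:.2f} servers/U, "
          f"{per_rack} servers per {RACK_UNITS}U rack")
```

Even with whole-chassis rounding, the doubled-up half-height approach ends up with roughly twice the servers per rack of the full-height design.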
While this density arms race is interesting, and it has its benefits, it misses the larger point. With the AS/400 back in 1988, IBM put computers and their peripherals in 19-inch racks so components could be standardized and mixed and matched within a data center. The advent of blades for servers, disk storage, and other devices should have opened up the possibility for mixing and matching all kinds of components into a system. What vendors wanted was customer lock-in, and they sure got it. But blades comprise only about 20 percent of server shipments when they should be 100 percent.
Yeah, I know that sounds crazy. Let me explain. For the past two generations of its supercomputers, SGI has actually done something very smart. Compute elements–processors of different kinds–are put on one blade. And I/O expansion, disks, and memory are put on other blades. To build a system of a particular configuration, you buy the blades and slide them all together into a system. The blades are interchangeable and you can make a machine with lots of processors and memory if that suits you, or few processors and lots of I/O and disk if you need that. This is the approach that IBM should have taken with Power Systems, and indeed, all of its machines.
Two years ago, when IBM announced the convergence of the System i and System p lines and created the Power Systems division, it also created another division called Modular Systems. This latter division, whose name is never used by IBMers, comprised the System x and BladeCenter products. The name was right, and I am sure that the ideas that led up to it being chosen were right, but for some reason, IBM is still building rack, tower, and blade servers that are really distinct from each other. This makes no sense to me. IBM should be making one set of motherboards–or more precisely, snippets of motherboards that I am calling blade components–that can be deployed however customers want. Just because everything is on a blade doesn't mean it is used like the box we know as a blade server–with its integrated management and switching and its chassis lock-in. But it is just plain stupid to not have standardized components so they can be interchanged within IBM systems at the very least–as disk drives have been for years, by the way, across all Big Blue's server lines.
I know this would mean a lot of engineering on the front end, but think about it. Most of the guts of a Power 720 or Power 750 machine you buy today would be deployed on blades, even if you bought a so-called tower or rack server. So when you wanted to move to a real blade box or even a minimalist cookie sheet design, you could reuse those components. In essence, what I am suggesting does away with blades by making everything a blade. You would fill up a tower, a rack, or a data center with the blades you need to do the work you need to do. Each element would have a blade midplane interconnect, but it could be ignored if customers wanted to operate rack-style, with independent elements and management tools of their choosing.
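The "everything is a blade" idea can be sketched in a few lines of Python. Everything here is hypothetical and invented for illustration–the element kinds, capacities, and enclosure names are my own, not IBM's or SGI's–but it shows the SGI-style composition model: standardized compute, memory, I/O, and disk elements assembled into whatever enclosure suits the workload.

```python
# Hypothetical sketch of interchangeable blade elements, in the spirit
# of the SGI supercomputer approach described above. All names and
# numbers are invented for illustration.

from dataclasses import dataclass


@dataclass
class BladeElement:
    kind: str      # "compute", "memory", "io", or "disk"
    capacity: int  # cores, GB, ports, or drives, depending on kind


@dataclass
class System:
    enclosure: str            # "tower", "rack", "chassis", "cookie sheet"
    elements: list

    def total(self, kind: str) -> int:
        # Sum the capacity of every element of the given kind.
        return sum(e.capacity for e in self.elements if e.kind == kind)


# Two very different machines built from the same standardized parts:
number_cruncher = System("rack",
                         [BladeElement("compute", 8)] * 4
                         + [BladeElement("memory", 64)] * 2)

file_server = System("tower",
                     [BladeElement("compute", 8)]
                     + [BladeElement("disk", 12)] * 4
                     + [BladeElement("io", 8)] * 2)

print(number_cruncher.total("compute"))  # 32 cores
print(file_server.total("disk"))         # 48 drives
```

The point of the sketch is that the enclosure becomes a packaging detail: swap the `enclosure` string and the same element list could ship as a tower, a rack server, or a cookie sheet.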
It’s something to think about.
That’s the blue sky theory for the System iWant, 2010 Edition, for blades and cookie sheet servers. What I think will really happen is IBM will announce two new blade enclosures–let’s call them the BladeCenter-H2 and the BladeCenter-S2–that offer half-height blades as well as supporting the existing full-height blades. IBM might even do doubled-up servers, like that HP ProLiant above.
See how much more boring that is than what I am talking about? I guess there is hope for the Power8 generation.
RELATED STORIES

The System iWant, 2010 Edition: Entry Boxes
The System iWant, 2010 Edition: Midrange Boxes
IBM Preps Power7 Launch For February
Looks Like i 7.1 Is Coming In April
The System iWant, 2010 Edition: Big Boxes
Power Systems i: The Word From On High
Power Systems i: The Windows Conundrum
Power Systems i: Thinking Inside the Box
Rolling Thunder Rollout for Power7 Processors Next Year
IBM Rolls Up an i 6.1.1 Dot Release
The Curtain Rises a Bit on the Next i OS, Due in 2010
Start Planning for Power7 Iron Now
IBM to Reveal Power7 Secrets at Hot Chips
Power 7: Lots of Cores, Lots of Threads