The X Factor: Form Follows Function
November 6, 2006 Timothy Prickett Morgan
In nature, the shape of an animal or a plant is the result of the interplay between the organism and its environment–the latter being the sum of the forces at play, the pressures between competing life forms, and the materials at hand with which to build and sustain life. In the data center, similar competitive pressures are at work on computer designs, and instead of unfolding over geological timescales, evolution happens over a human generation or less. But sometimes evolution is stalled by greed.
While there has been plenty of evolution under the skins of servers in the data center, there has been less in the skins themselves. Rack-mounted server form factors that are decades old persist, and the blade server form factors that should have easily replaced them have seen a slower uptake than many would have predicted. (Having said that, blade servers are seeing very large revenue and shipment growth–in the double digits each quarter–but the growth is slowing each year.)
Mounting electronics gear in racks that are a standard 19 inches in width has been a customary practice in the electronics industry for decades, and the reason why the height of a unit of measure in a rack is 1.75 inches is a bit of a mystery. (When people say 1U, 2U, or 4U, this is a multiple of that rack unit.) Somewhat humorously, the vershok–a unit equal to exactly 1.75 inches–is a standard unit of measure that Russia used prior to adopting the metric system in 1924. So we could blame the Russian scientific and military community for picking such a bizarre and non-round unit of measure for the height of a piece of rack-mounted equipment. 44.45 millimeters is a very precise unit of measure, but it is somewhat silly. Then again, the 482.6-millimeter width of rack-mounted equipment is not exactly round, either. Racks usually come in 42U-high versions, and sometimes in 20U and 25U variants.
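The rack-unit arithmetic above is simple enough to sketch. This snippet is purely illustrative; it just converts the numbers discussed (1U = 1.75 inches, a 19-inch width, a 42U rack) into metric:

```python
# Illustrative arithmetic only: the rack-unit (U) dimensions discussed above.
RACK_UNIT_IN = 1.75      # height of 1U in inches (the same as one vershok)
RACK_WIDTH_IN = 19.0     # standard rack width in inches
MM_PER_INCH = 25.4

def u_height_mm(units: int) -> float:
    """Height in millimeters of a piece of gear that is `units` U tall."""
    return units * RACK_UNIT_IN * MM_PER_INCH

# 1U works out to the oddly precise 44.45 mm, and the 19-inch width to 482.6 mm.
print(round(u_height_mm(1), 2))                  # 44.45
print(round(RACK_WIDTH_IN * MM_PER_INCH, 1))     # 482.6
# A full 42U rack offers 42 * 1.75 = 73.5 inches of mounting height.
print(42 * RACK_UNIT_IN)                         # 73.5
```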
In any event, Compaq and Sun Microsystems usually get credit for using standard racks first in the server business with pizza box servers in the 1990s, but IBM’s AS/400 and 9370 minicomputer chassis from the 1980s were all rack-mounted gear, and used the 19-inch form factor standard. But the rack-mounting of server gear started in earnest as air-cooled computing became the norm in data centers and as companies installed RISC/Unix and X86 servers by the dozens, hundreds, and thousands to support new kinds of infrastructure workloads–application, e-mail, Web, file, and print serving being the common ones. The move from host-based, mainframe-style computing to distributed, n-tier computing saved companies a lot of money, but with tower-based PC servers stacked up all over the place, computing sprawled and took up a lot of very expensive space in the data center. And so, the industry embraced rack-mounted, pizza box servers. Now, X86-style servers could be packed 21 or 42 to a rack, which meant they could be packed into data centers at two, three, or four times the density of towers.
In the early 2000s, the industry went nuts over the idea of blade servers, which flipped servers and their chassis on their sides, put the servers on cards that resembled fat peripheral cards more than they did whole servers, integrated networking functions, and mounted a blade chassis inside a standard rack. By moving to blades, the compute density within a rack could be doubled or tripled again. The blade servers had an integrated system management backplane that all machines plugged into, and internalized switches to outside networks and storage, all of which cuts down substantially on wiring and saves money on system administration and real estate.
And by having an integrated backplane, the blade server chassis allows something not available with rack-based servers–account control. And that is why there is still not a standard for form factors for commercial blade servers, and why customers should demand one. In fact, the time has come to offer a unified blade server standard that spans both the telecom and service provider world and enterprises. No computer maker can afford to make both enterprise and AdvancedTCA blades, the latter being the latest in a long line of blade standards for the telecom industry.
To its credit, Hewlett-Packard’s “Powerbar” blade server, which was killed off in the wake of the Compaq merger so HP could sell the “QuickBlade” ProLiant blade servers instead, adopted the predecessor to the ATCA telecom blade server standard. Sun has also been an aggressive supporter of the telecom blade form factors. And these and other companies that make ATCA blades did so because their telecom customers, who pay a premium for DC-powered ATCA blades, gave them no choice.
This is the power of a true standard. It levels the playing field, unlike IBM’s Blade.org pseudo-standard, announced in conjunction with Intel, which seeks to make IBM’s BladeCenter chassis the standard other vendors have to adhere to.
The density that blade servers allow is important to data centers today, since many are running out of space. Blade servers have shared peripherals and shared power supplies, too, which means they are inherently more efficient than standalone, rack-mounted servers. But there are other issues related to server form factors that need to be standardized.
First, power distribution should be built into the rack, whether a customer is opting for rack-mounted or blade servers. Power supplies are wickedly inefficient and often oversized compared to the loads that are typically in the machine; moreover, they generate heat inside the box, which only makes the box that much more difficult to cool. Putting a power supply into each server makes little sense in a server world where shared resources are becoming the rule–as long as those shared power supplies are redundant. Rather than have AC power go into a server and then be converted into DC, racks should come with DC power modules that can scale up as server loads require. Conversion from AC to DC should be done in the rack. And all blade server chassis and rack-mounted servers should be able to plug into this power standard. No server of any kind should have an AC power supply. This is an idea that has been commercialized by Rackable Systems within its own racks, but now it is time to take it to the industry at large.
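A back-of-the-envelope sketch makes the rack-level DC argument concrete. The efficiency figures and loads below are assumptions for illustration, not measurements from any vendor: a low number for small, oversized per-server AC supplies, and a higher one for a large shared rectifier running near its sweet spot.

```python
# Illustrative comparison: per-server AC supplies vs. one rack-level DC module.
# All loads and efficiency figures are assumed numbers, not vendor data.

def wasted_watts(load_w: float, efficiency: float) -> float:
    """Heat thrown off by a power conversion stage delivering load_w watts."""
    return load_w / efficiency - load_w

SERVERS = 42                 # one rack of 1U servers
LOAD_PER_SERVER = 250.0      # watts of DC load per server (assumed)

# 42 small AC supplies, each running at an assumed 70 percent efficiency,
# dumping their waste heat inside the very boxes being cooled:
per_server_waste = SERVERS * wasted_watts(LOAD_PER_SERVER, 0.70)

# One shared AC-to-DC conversion stage at an assumed 92 percent efficiency,
# with its waste heat produced in the rack, outside the servers:
rack_level_waste = wasted_watts(SERVERS * LOAD_PER_SERVER, 0.92)

print(f"per-server supplies waste {per_server_waste:.0f} W inside the servers")
print(f"a rack-level DC module wastes {rack_level_waste:.0f} W")
```

Under these assumptions the per-server supplies turn roughly 4,500 watts into heat inside the servers, while the shared module wastes around 900 watts, and does so where it is easier to cool.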
The other thing that needs to be standardized is the blade server itself. Just as peripheral cards adhere to standards, a blade server’s shape and the way it plugs into a blade server chassis need to be standardized so customers can mix and match blades from different vendors within a chassis and across racks. The way that chassis interconnect should also be standardized, so they can share power and extend the systems management backplane beyond a single chassis and across racks if necessary. Switches, storage blades, and other devices should also be standardized so they work within this blade server standard.
Finally, the rack that holds blade chassis and rack servers should have integrated cooling features, too. As little heat as possible should leave a rack, and if that means integrating water blocks onto processors and other components inside servers (as PC gamers do today) and putting water chillers on the outside of racks (as many supercomputer centers are starting to do), then so be it. Data centers cost millions to hundreds of millions of dollars to build, and the goal should be to use the density afforded by blades without melting all of the computers. Cooling with moving air alone does not work: data centers develop hot spots, and moving huge volumes of conditioned air around is very inefficient. These cooling features should be standardized, just like the blades and rack servers themselves.
The form factors of servers are supposed to serve the needs of customers, not those of vendors.