Why Do Rack Servers Persist When Blade Servers Are Better?
by Timothy Prickett Morgan
In December, I attended UBS Server Forum, a gathering of Wall Street players, IT executives, and IT vendors. About half of the presentations were concerned in some way with blade servers or blade PCs. The blade server is one of those "eureka!" ideas that makes intuitive sense, and everyone at the forum was generally optimistic about blades' prospects in the IT market. But while listening to IBM, Hewlett-Packard, RLX Technologies, and ClearCube Technology talk about the blade market, I kept wondering why rack-mounted servers persist when blade servers are clearly better for whole classes of applications.
It is a bit perplexing, but it is beginning to make sense to me, even though the server makers continue to make a clear case for the superiority of blade servers for infrastructure and high-performance computing (HPC) workloads.
Susan Whitney, general manager of IBM's xSeries X86 server business, started the forum with a presentation focused on the company's BladeCenter blade servers. IBM jumped into the game a little later than the formerly separate HP and Compaq, but it has quickly taken the pole position in blade server shipments, forcing HP into the second market-share slot and relegating RLX, the company that founded the blade server concept, to a distant third, a position that has compelled it to abandon the business of designing and manufacturing blade servers altogether. This may be an exciting new server business, but it sure ain't easy.
Whitney expressed what is becoming the mantra for IBM's Systems and Technology Group: use big SMP servers with virtualized partitions to consolidate scale-up workloads, and use blade servers, possibly with virtualization software, to consolidate scale-out workloads. IBM has found that most IT shops have three to four servers per workload, on average. For companies running hundreds or thousands of workloads, this turns into a huge sprawl of difficult-to-manage servers. So IBM is creating products that will allow customers to first consolidate physical servers into fewer footprints, employing virtualization technologies and workload managers to reduce the physical server count. Having done that, IBM wants companies to go further and logically consolidate their machines into more flexible systems that are dynamic and highly virtualized, including on-the-fly provisioning as well as virtual machine partitioning.
This all sounds great, but this approach is not restricted to blade servers, and there is nothing about such simplification that necessitates the use of a blade server. In fact, IBM's blade server management program, IBM Director, was originally used on regular rack servers, then ported to blade machines, and is now being extended to full-blown SMP Unix and OS/400 boxes. IBM's long-term goal is to use the same tool across all of its servers. While this is a good idea for IBM and some of its True Blue customers, the fact that IBM Director does not really work with other vendors' products gets to the heart of the blade server problem.
Back in the late 1990s, when the rack-mounted server market was taking off, the tools for managing servers were really primitive, and if you wanted to provision a server, you had people on the payroll to do it. There wasn't a tool. Systems management was an expensive framework that only Global 2000 companies could afford to buy, and it is debatable whether the vast sums that companies spent on these tools were worth the trouble. Even today, according to Whitney, it can take anywhere from 13 to 23 days to provision a server, but with the virtualization and partitioning tools that IBM and its partners can bring to bear, the time to provision a server can be cut to a half day. Again, this is a benefit for a modern server, not just a blade server.
Where the blade server wins out is on compactness, greater integration of server and network components, and lower power consumption. Whitney presented a chart showing that in a typical 1U rack-mounted server, about 30 percent of the power use of the entire machine came from the CPUs, with memory accounting for 11 percent, PCI buses for 3 percent, the backplane for 4 percent, the disk drives for 6 percent, standby components for 2 percent, and the remaining 44 percent going to power and cooling components such as power supplies and fans. In a blade server, that 44 percent is reduced to 10 percent because these components are shared, according to Whitney. This gives the blade server a tremendous advantage when it comes to electricity consumption and heat dissipation. An individual blade can cost 25 percent less than a configured 1U server, is 33 percent more efficient when it comes to power use, and takes up half the floor space. The virtualization and internalization of the network in the blade approach also cuts cabling costs by as much as 86 percent, according to Whitney, which cleans up the spaghetti wire mess in the back of the machine and reduces complexity. If you have ever configured a rack of servers, you know that this wiring mess is a real problem.
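Whitney's numbers are worth checking with a little arithmetic. The sketch below assumes a hypothetical 300-watt draw for a configured 1U server (the wattage is my assumption for illustration; the component percentages are from her chart) and works out what cutting the power-and-cooling overhead from 44 percent to 10 percent implies:

```python
# Back-of-the-envelope power comparison between a 1U rack server and a
# blade, using the component shares from Whitney's chart. The 300 W
# baseline is an assumed figure for illustration, not an IBM number.

rack_budget = {          # share of total power in a typical 1U server
    "CPUs": 0.30,
    "memory": 0.11,
    "PCI buses": 0.03,
    "backplane": 0.04,
    "disk drives": 0.06,
    "standby": 0.02,
    "power and cooling": 0.44,   # supplies and fans
}
assert abs(sum(rack_budget.values()) - 1.0) < 1e-9  # shares total 100%

rack_watts = 300.0                                   # assumed baseline
compute_watts = rack_watts * (1.0 - rack_budget["power and cooling"])

# In a blade chassis, shared supplies and fans shrink the overhead
# share from 44 percent to 10 percent, so the same compute load makes
# up 90 percent of the blade's total draw.
blade_watts = compute_watts / 0.90
savings = 1.0 - blade_watts / rack_watts

print(f"rack: {rack_watts:.0f} W, blade: {blade_watts:.0f} W, "
      f"savings: {savings:.0%}")
```

On these assumptions, the blade draws about 187 watts against the rack server's 300, a saving of roughly 38 percent, in the same ballpark as the 33 percent power-efficiency edge Whitney claimed.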
So why, then, aren't we all using blades wherever they can be used?
As I said to the audience at the forum, it is helpful sometimes to remember that a blade server is just a rack turned on its side and miniaturized, with its network infrastructure internalized and systems software that makes a blade chassis or a collection of chassis look like a standalone system that has been virtualized with partitions. The problem with blade servers, I said, is that there is no standard form factor for a chassis and a server, and there is no standard software for managing all the different kinds of servers. When you buy a BladeCenter from IBM or a BladeSystem from HP, you are buying into a whole system management architecture as well as a very specific hardware architecture. With racks, there were no system management tools, so you didn't expect them, and a 1U server was a 1U server, your network was externalized, and you could mix and match servers inside of a rack and across racks at will. To get the full benefits of blades, you have to pretty much stick to one vendor. You have to drink their flavor of Kool-Aid, and you have to keep on drinking it.
This is, of course, exactly why there are no blade standards, no matter how much Whitney and her counterparts at BladeCenter partner Intel claim otherwise with their opening up of portions of the BladeCenter spec. Intel and IBM do freely license the specs for network blades, switches, and other components that you might want to plug into an IBM or Intel chassis, but not the specs for building a server blade or the BladeCenter chassis itself. This is a half standard, not a standard. Without standards, blade server makers get a certain amount of lock-in, and that means they can wring some profits out of the box.
The lock-in factor, the lack of blade hardware and software standards, and, according to the UBS IT staff that also attended the forum, the lack of maturity of the blade management tools are why UBS is still keen on rack-mounted servers. And this is probably why blade servers have not taken off like rack-mounted servers did.
Brad Anderson, general manager of the Industry Standard Servers unit in HP's Technology Solutions Group, followed Whitney on stage at the forum, and he talked a lot about blades. He said that HP has shipped about 8 million servers since Compaq invented the X86 server business in 1989 with the advent of the Compaq SystemPro; to date, HP has shipped over 100,000 blade servers. Anderson said that the industry will have sold about $1 billion in blade servers in 2004 (about 5 percent of the market), and that HP reckons it will grow to around $7 billion by 2008, with tower servers still comprising about $10 billion in sales and rack-mounted servers accounting for about $16 billion in sales. (Those numbers cover only X86 servers, of course, not other architectures.)
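A quick computation puts Anderson's projection in perspective. The sketch below uses only the dollar amounts from his talk:

```python
# Blade share of the X86 server market, computed from Anderson's figures.

# 2004: about $1 billion in blade sales, "about 5 percent of the market",
# which implies a total X86 server market of roughly $20 billion.
implied_market_2004 = 1.0 / 0.05

# 2008 projection (in billions of dollars): blades $7, towers $10,
# rack-mounted servers $16.
blade, tower, rack = 7.0, 10.0, 16.0
total_2008 = blade + tower + rack
blade_share_2008 = blade / total_2008

print(f"implied 2004 market: ${implied_market_2004:.0f}B")
print(f"projected 2008 blade share: {blade_share_2008:.0%}")
```

In other words, HP's forecast has blades growing from about 5 percent to roughly 21 percent of an X86 server market that itself grows by more than half.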
Perhaps the most interesting chart I saw during the forum was put up by Anderson, who contrasted the first few years of blade server shipments against the first few years of rack-mounted shipments, roughly comparing rack-mounted server shipments from 1998 through 2000 with blade server shipments from 2001 through 2004. The steepness of those curves is telling. The rack adoption curve is the classic hockey stick, starting at 45 degrees in the first year and rocketing almost straight up in the following two years, while blade adoption follows a flatter, more logarithmic curve. The difference is a measure of the two different times at which these technologies were introduced and the problems they solved.
When rack servers took off, the IT world was in the midst of the dotcom boom and dealing with the biggest server sprawl the industry had ever seen. When you consider how many tower servers must have been crammed into data centers, compact rack servers solved a big problem at just about every company. And while blade servers offer significant benefits over rack servers, companies that are loath to embrace any proprietary-smelling technology--and blades do have that whiff about them--are going to have to be sold on the idea of using blades.
That, says Anderson, is exactly what HP is doing. He said that companies were deploying blades as a means of speeding up the time from conception to solution deployment, reprovisioning servers faster, and improving organizational efficiency. He added that 80 to 90 percent of IT spending goes to ongoing costs, such as personnel, and that hardware accounts for only 10 to 20 percent. Blade servers, he said, attack the bigger part of the budget.
Then again, so does system management, virtualization, and provisioning software that spans blade, rack, and big SMP servers. Doug Erwin, the CEO of RLX who is taking the company out of the hardware business and into the software business exclusively, needs to aim a little higher than just allowing its Control Tower software--what he called the secret sauce in the RLX blades--to run on other vendors' blade and 1U rack servers. What the server industry needs is a robust tool that can provision, manage, and patch any Windows, Linux, or Unix server, whether it is a blade or not. It will be interesting to see whether RLX will try to do this, or just stay in its comfort zone in the blade market, extending Control Tower to run on IBM, HP, and Dell blades.