September 10, 2007 Timothy Prickett Morgan
This week, server and workstation virtualization superstar VMware will be hosting VMworld 2007, a trade show that VMware started in 2004 to bring partners and customers together to collaborate about all things virtual. VMware has long been owned by disk array maker EMC, which partially spun it out through an initial public offering in mid-August. Of course, going virtual is nothing new to midrange and mainframe server customers, and it is natural enough to wonder what all the hubbub is about.
Like VMware’s market capitalization, just under $26 billion as I write this, the attendance count for VMworld is going through the roof. The word on the street is that the show will have somewhere between 12,000 and 15,000 attendees. That is nearly the size of a big LinuxWorld event that I went to back in 2001 here in New York, and it is an impressive number of people to bring under any roof at the same time. VMware’s public float of 13 percent of its 375 million shares is part of the reason everyone is excited, and not just among those lucky enough to get VMware’s shares when they opened around $50 a pop on the Nasdaq exchange on August 14, or the more fortunate original VMware shareholders who got the shares at their initial price of $29 and did a quick flip. With the stock now trading at above $70 a share, somebody made a lot of money, a heck of a lot more than VMware will make for years to come selling its ESX Server hypervisor for X64 processors and its related management tools. VMware is on the way to being a $1.2 billion to $1.4 billion subsidiary of EMC in 2007, and it is fairly profitable, bringing 12 percent of its $703 million in sales for 2006 to the bottom line. VMware is also the juggernaut in X64 virtualization, the place in the server market where the installed base is largest, current sales volumes are highest, and server utilization is lowest.
But virtualization is old hat for proprietary minicomputers and mainframes as well as for the popular RISC/Unix platforms that are still in existence. The System i midrange machines and the System z mainframes from IBM and the ClearPath mainframes from Unisys are the only proprietary platforms that really matter these days, with the possible exception of OpenVMS 8.3, which has been ported from Alpha RISC to Itanium processors by Hewlett-Packard. The only Unixes that matter inside the data center these days are IBM’s AIX, Sun Microsystems’ Solaris, and HP’s HP-UX. All of these platforms have had sophisticated virtualization technologies, and in most cases 64-bit memory extensions, for nearly a decade; the mainframes got virtualization earlier and 64-bit addressing later. And here is the funny bit: Each one of these platforms went into revenue decline after massive waves of consolidation engendered by server virtualization.
Of course, neither VMware nor its nemesis XenSource, which is in the process of being bought by Citrix Systems for $500 million, is a server hardware vendor, so the big server crunch that is coming to the X64 platform is not going to affect them directly, at least not until the footprint counts settle down after waves and waves of consolidation and only grow according to workload growth patterns, adjusted by the efficiency of mixing workloads for maximum total system usage. Here’s what I mean: A partition running a Web server by day will run a Monte Carlo simulation by night, eliminating two footprints. And at the end of the month, the simulation might be pushed aside for a few hours so end-of-month batch runs can hog more resources for the ERP system, eliminating more footprints. Basically, you get the peaks for various workloads out of phase in time so you can maximize utilization of the system. And that will result in a compression factor that is more than four or five to one in the X64 server market.
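To see how out-of-phase peaks compress footprints, here is a toy model in Python; the hourly demand profiles are invented for illustration, not measured from any real system:

```python
import math

# Toy model: hourly CPU demand for three workloads over a 24-hour day,
# expressed as a fraction of one server's capacity. Their peaks fall at
# different times of day, which is the whole trick.
web   = [0.1] * 8 + [0.6] * 10 + [0.1] * 6   # busy during business hours
batch = [0.6] * 6 + [0.1] * 18               # overnight batch window
sim   = [0.1] * 18 + [0.6] * 6               # evening Monte Carlo runs

# Dedicated iron: one server per workload, each sized for its own peak.
dedicated = 3

# Consolidated: one pool sized for the peak of the *combined* demand.
combined_peak = max(w + b + s for w, b, s in zip(web, batch, sim))
consolidated = math.ceil(combined_peak)

print(dedicated, "servers shrink to", consolidated)
```

Three workloads that would each need a dedicated box can share a single server once their peaks are staggered in time, which is exactly the consolidation arithmetic behind the coming footprint crunch.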
Just how far that compression goes depends on one thing: How I/O virtualization gets woven into the X64 architecture. As you might imagine, following the VMware IPO, vendors selling software to manage hypervisors, their virtual machine guests, and the software running inside of the partitions are coming out of the woodwork. By association with VMware and XenSource, their valuations are huge, even if they are just coming out of stealth mode, and they want to have a run at this virtualization feeding frenzy, too. One neat new company is called Xsigo, which has invented an in-band I/O virtualization appliance that sits between servers, storage, and networks and completely virtualizes the connections between them. (You can read more about Xsigo in our other newsletters and on the Breaking News section of the IT Jungle site.) Advanced Micro Devices and Intel are working on ways to virtualize I/O in their future processors, but the Xsigo appliance does it completely and, arguably, more elegantly. And given Moore’s Law, it is not hard to imagine the InfiniBand switching infrastructure and the virtualized Ethernet, iSCSI, and Fibre Channel links being implemented in silicon, perhaps snapped into sockets on the motherboard to provide virtualized I/O in an integrated fashion for an X64 platform. We are years away from seeing such integrated capability, but Xsigo is happy to sell you the appliance today.
Of course, if you are an IBM midrange shop, you have had access to logical partitions since 1999, and in some ways I/O has always been virtualized in the midrange platform, going all the way back to the System/38 in 1979. What ESX Server is to Windows and Linux, the System Licensed Internal Code, or SLIC, was to OS/400. And by many metrics, SLIC was a better way to virtualize a system. But, of course, the System i is not the volume product in the server business, and the Virtualization Engine hypervisor, which IBM created for the AS/400, started shipping in 1999, and has since extended to its AIX-based Power servers, does not run on X64 processors. Similarly, the variant of the z/VM operating system, which should really be called a hypervisor, that is used to enable the Integrated Facility for Linux on mainframes is also more scalable and more granular than ESX Server. But when it comes to market influence, all the innovation in the world does not matter as much as volume. Time and again, IT history has proven this out. (That doesn’t mean non-volume products are a poor choice, either technically or economically. It just means no one can write an exciting IPO story about them.)
So IBM Rochester is learning, like everyone else, to co-exist with VMware’s ESX Server, and like everyone else, the engineers of the System i platform will eventually have to support the XenEnterprise hypervisor, too. There’s no way that VMware will have all of this market to itself, not with customers and vendors all wanting two suppliers to grind against each other. As we have previously reported, IBM has done the integration work necessary to get the System i and its internal disk arrays certified as an iSCSI disk array suitable for attachment to external servers running ESX Server; as far as ESX Server is concerned, the System i box looks like any other iSCSI array. (This capability ships on September 14.)
If IBM really wanted to do something smart, it would try to ride the VMware wave a little better. For instance, no one likes the Hardware Management Console, which is necessary to create and manage partitions on the System i box, excepting Linux partitions on a few entry boxes, which can be done with another, simpler tool, one more akin to VMware’s VirtualCenter.
Let’s go all the way out on the limb, then. It would be nice for IT shops if either ESX Server or Xen were the dominant hypervisor, and if this dominant hypervisor ran on all processor architectures and masked the differences between processors and their related hardware, presenting operating systems designed for those architectures with virtualized CPU, memory, and I/O pools. This is probably not going to happen, but with so much money now, VMware could take a stab at it. Absent this, VMware could open up the application programming interfaces on its VirtualCenter tool and collaborate with makers of other hypervisors to create a common management tool that spans all hypervisors, in effect masking the differences at the management layer instead of at the hypervisor layer. This would be equally good as far as IT shops are concerned. So not only would the System i, for instance, be an iSCSI array for ESX Server instances running on outboard X64 iron, but VirtualCenter would be used to control those ESX Server instances as well as i5/OS, AIX, and Linux partitions back on the System i proper. Ditto for mainframes and RISC/Unix boxes, which have their own logical and virtual partitions.
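The management-layer idea boils down to a thin adapter pattern: one common interface, one adapter per hypervisor. Here is a minimal sketch in Python; every class, method, and partition name below is invented for illustration and bears no relation to the real VirtualCenter, Xen, or HMC APIs:

```python
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    """Hypothetical common interface a cross-hypervisor console would use."""
    @abstractmethod
    def list_partitions(self):
        ...
    @abstractmethod
    def start_partition(self, name):
        ...

class ESXAdapter(HypervisorAdapter):
    def list_partitions(self):
        return ["web01", "db01"]          # would call the ESX API here
    def start_partition(self, name):
        print("ESX: starting", name)

class PowerLPARAdapter(HypervisorAdapter):
    def list_partitions(self):
        return ["i5os-prod", "aix-dev"]   # would call the LPAR tooling here
    def start_partition(self, name):
        print("LPAR: starting", name)

def inventory(adapters):
    # The console sees one flat partition list, regardless of hypervisor.
    return [p for a in adapters for p in a.list_partitions()]

print(inventory([ESXAdapter(), PowerLPARAdapter()]))
```

The console code only ever touches the common interface, so adding a z/VM or Xen adapter would not change the tool itself, which is the whole point of masking the differences at the management layer rather than the hypervisor layer.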
And if VMware isn’t smart enough to do this, you can bet that XenSource, backed up by Citrix, surely is, as are a few dozen hungry startups looking for the big meal they see in San Francisco this week.