Mad Dog 21/21: Virtual’s Impatience
January 30, 2006 Hesh Wiener
The more you know about virtualization, the ability of a computer to support working images of systems that don't physically exist, the less sure you can be about its roots. For IBM's big commercial customers, virtualization arrived in the mid-1970s. Now the leader in virtualization, IBM was a laggard back then, and there is every possibility that virtualization technology from others will yet upstage Big Blue's achievements. In computing, stardom can be as ephemeral as the theatrical ghosts of 1862, whose stunning impression on audiences set the stage for technology that first appeared nearly a century later.
IBM offers virtualization technology in all its computer lines, as it explains through its Systems Software Information Center. The most mature and arguably the most comprehensive offerings are available to users of zSeries mainframes. The iSeries and pSeries servers provide quite sophisticated virtualization, too, first developed for IBM's OS/400 operating system six years ago and then turned into the Virtualization Engine hypervisor for OS/400, AIX, and Linux. The story isn't quite as impressive in the xSeries arena, where most of the products use X86 and now X64 chips and, consequently, are still limited by an architecture that stands in the way of virtualization. If you like, you can pin some of the blame on Alan Turing.
In 1936, at Cambridge, Turing proved that there could be a theoretical computing machine, which we now call a Turing Machine, capable of imitating the functions of any other computing machine. What is not guaranteed is the imitating machine's efficiency. Nevertheless, the work done by Turing showed very clearly that you could have computing wheels within wheels, an idea brilliantly turned into actual computers by IBM with its System/360. Various models in the System/360 product line were based on different underlying computers, and all these computers did only one thing: They emulated the System/360 architecture. In a sense, they were all virtual machines supporting a single image of a System/360 built of firmware and hard logic.
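Turing's result is easy to demonstrate in miniature. A few lines of Python can interpret the rule table of any single-tape machine, which makes the interpreter a crude universal machine and the rule table the machine it imitates. This is an illustrative sketch with invented names, not anything drawn from Turing's paper:

```python
# A tiny interpreter for single-tape Turing machines. The rule table maps
# (state, symbol) -> (symbol_to_write, head_move, next_state); "_" is blank.
def run_tm(rules, tape, state="start", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# One machine the interpreter can imitate: invert every bit, then halt.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(invert, "1011"))  # prints 0100
```

Feed the interpreter a different rule table and it becomes a different machine, which is the "wheels within wheels" idea in its purest form; the only thing the theorem does not promise is speed.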
Still, the System/360 line didn't have any of the virtualization features that are now becoming universal. The only step toward virtualization that users could see in the System/360 line appeared in one very special model, the System/360 Model 67, which had virtual memory. With its next generation of mainframes, the System/370, IBM first offered real-memory systems, then models that included virtual memory, as well as an add-on that gave installed fixed-memory machines virtual memory.
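The core mechanism of virtual memory, on the Model 67 as on everything since, is address translation: a program's addresses are mapped page by page onto whatever real storage happens to be available. A minimal sketch in Python, with page size, table contents, and names all invented for illustration:

```python
# Toy address translation, similar in spirit to what dynamic address
# translation hardware does. All values here are invented for illustration.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3}  # virtual page number -> real (physical) frame number

def translate(virtual_address):
    vpage, offset = divmod(virtual_address, PAGE_SIZE)
    if vpage not in page_table:
        # In a real system this is a page fault; the operating system would
        # fetch the page from backing store, update the table, and retry.
        raise LookupError(f"page fault at virtual address {virtual_address}")
    return page_table[vpage] * PAGE_SIZE + offset
```

A reference to an unmapped page triggers a page fault, which the operating system resolves by bringing the page in from backing store. That is the trick that let these machines pretend to have more memory than they physically did.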
IBM's virtual memory was not a first. Burroughs had virtual memory on its B5000 and B5500 systems a couple of years before the debut of the System/360. And Burroughs learned about virtual memory from the Atlas project at the University of Manchester.
Still, it took IBM to make virtual memory a commercial reality, because IBM had the whole package: disk drives, tape drives, microelectronics fabrication technology, memory manufacturing technology, software expertise, an outstanding sales team, and a customer base that trusted IBM to provide strategic technology for bookkeeping and other record keeping.
IBM also had a vision that encouraged the creation of virtual machines, even if its sales force and, for the most part, its commercial customers had absolutely no idea where this concept would lead. Like many ideas at IBM, this vision became the basis of products only after a shock. In the mid-1960s, MIT wanted a machine with hardware that would support multiple levels of security for its Project MAC. IBM lost out to General Electric, and the people at MIT built the impressive Multics time-sharing system on GE’s hardware. Bell Labs, at the time the most prestigious industrial research facility in the United States, joined the project, left the project, and glommed a number of key ideas that led to the creation of Unix and its descendants.
In reaction, IBM made an incredible effort to go beyond the technology GE provided and created CP/CMS, a system that, in effect, made a single computer look like a collection of complete System/360s. From there to the VM family of operating systems was a path straight enough to be mapped out in a brief reflective essay on the still lively Web site dedicated to the story of Multics and its developers. The timesharing wars of the 1960s and 1970s are long since over, and their lessons may be forgotten by IBM's managers, but the ghosts still lurk on the battlefield, which is now the Internet. And Pepper's Ghost lives in cyberspace, too.
Pepper’s Ghost, which probably should be called Dircks’ and Pepper’s Ghost, is an astounding theatrical effect that, for years after its public debut in 1862, captivated theatre audiences and inspired many other theatrical effects, magical illusions, and other developments that culminated in the invention of the motion picture. The connection between ghosts and virtual reality persists in our culture and language. Intelligence operatives whose tradecraft includes the assumption of virtual identities are sometimes called spooks, and the head of the CIA is none other than a fellow named Polter Geist, or something like that.
The concept was that of Henry Dircks, an inventor from Liverpool. John Henry Pepper, a lecturer at the Royal Polytechnic Institution in London, perfected it in collaboration with Dircks. Basically, the image of an actor was projected onto the surface of a piece of glass placed between the audience and a stage with other actors, so that the virtual image, the ghost, could interact with the directly visible players. The audience didn't know the glass was there; all they saw was the translucent image of the projected player interacting with the live players, who could pass objects, including themselves, through the image.
It might have been a poor virtual reality, although not for an audience that wanted to believe what it saw, but so is the virtualization on X86 and X64 platforms.
The processors used in all IBM's other servers, Sun Sparc engines, Itanium processors, and pretty much all the other chips created with servers in mind can provide multiple levels of control, so a program at the highest level can stay on top of things done by software, including whole operating environments, running at lower levels. But X86 chips do not yet provide an adequate hardware basis for virtualization. Both Intel and AMD are enhancing their processors to remedy this lack of electronic support for virtualization. Intel is rolling out its so-called Virtualization Technology, or VT, on its future dual-core Xeon and Itanium processors, and AMD is putting its "Pacifica" features into its next Opterons. I expect the rollout of VT and Pacifica will be measured, and the technology will be adjusted as necessary once the chip makers have gained field experience.
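The classical requirement here is that every instruction able to observe or change privileged machine state must trap when executed outside the most privileged level, so a hypervisor can intercept and emulate it. The pre-VT x86 fails that test: instructions such as POPF simply do less in user mode instead of trapping. The toy model below, with all class and value names invented, sketches why that silence is fatal to a trap-and-emulate hypervisor:

```python
# Toy trap-and-emulate model. PRIVILEGED instructions trap when the guest runs
# deprivileged, so the hypervisor can step in; SENSITIVE ones do not, which is
# the pre-VT x86 flaw. The instruction names are real x86 mnemonics; the rest
# is invented for illustration.
PRIVILEGED = {"HLT"}   # traps outside ring 0: the hypervisor can emulate it
SENSITIVE = {"POPF"}   # in user mode, quietly drops its change to the interrupt flag

class Hypervisor:
    def __init__(self):
        self.trapped = []  # instructions the hypervisor actually got to emulate

    def run_guest_instruction(self, instr):
        if instr in PRIVILEGED:
            self.trapped.append(instr)  # trap delivered: emulate it safely
            return "trapped and emulated"
        if instr in SENSITIVE:
            # No trap is delivered, so the guest's view of its own state
            # silently diverges from what the hypervisor believes.
            return "executed incompletely, no trap"
        return "executed directly"
```

Run a guest through this model and the hypervisor sees HLT but never sees POPF; VT and Pacifica close the gap by adding a new operating mode in which such sensitive instructions can be made to trap as well.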
Two of the best known X86 virtualization hypervisors, from VMware and Xen, explicitly acknowledge the limitations of the processors on which their code runs, and the technical community has been well informed about the situation. But living with the state of the art and being content with it are two different things entirely.
The possibility that X86 systems and applications can violate the integrity of virtualization schemes means that VMware, Xen, and their ilk are still developers’ toys or lab curiosities, not a proper basis for systems that confine an application or entire operating environment the way virtualized servers can. Both schemes work well if the user plays by the rules and no accident or hack penetrates the mirrors and smoke behind the virtual illusion. But that’s not good enough for serious users, who daily encounter worms, viruses, and intrusion attempts on corporate systems, personal computers, and even their PDAs and cell phones.
The rush to virtualize the X64 world is now underway, and the first solutions with hardware support to bolster the software of virtualization engines will reach the market later this year. Microsoft is also moving ahead with virtual versions of its Windows Server software but, like everyone else, really needs Intel and AMD to provide hardware support. Linux shops are likely to favor Xen, which, like other schemes, will be adapted to take advantage of any hardware assistance the chip makers can provide. Solaris users can pretty much count on Sun to exploit any new hardware wrinkles in the X86 space, too: Solaris is supported by VMware's ESX Server hypervisor, and Sun is participating in the Xen project and has its own domain and container virtualization technologies built into Solaris 10.
The result, if not this year then certainly next, will be inexpensive servers with a lot more security and stability than current ones. The benefits will be of great help not only to corporate users, who constantly seek to improve the resilience of their servers, but also to small companies that depend on ISPs with shared servers for the integrity of their Web sites, email services, and other Internet-based functions. Between these extremes, mid-sized companies with X86 servers will quickly come to appreciate the benefits of systems and applications that run inside protective supervisory programs. In short, virtualization is going to bring about a boom in server replacements, but only when the technology is shown to work as promised.
For the rest of the server world, which already has access to machines with their favorite architecture that can deliver virtualization, there will be progress, too. All the server makers will have to improve their virtualization schemes so their premium products can remain ahead of the less costly alternatives in the X86 universe. If they don’t, their claims of superiority will become as transparent and ephemeral as Pepper’s Ghost.