Wanted: Cloud-i i-nfrastructure
March 14, 2011 Timothy Prickett Morgan
It looks like IBM is finally getting around to adding features to the Power Systems-IBM i combination to make it more amenable to so-called cloud computing. It is not clear when these features, such as live migration of logical partitions, will be available, but hopefully it will be soon. Once these updated features are available in the next release of the operating system, it will be truly possible for IBM, hosting companies, or maybe brand new start-ups to offer cloudy i infrastructure.
As The Four Hundred reported three weeks ago, Colin Parris, the new IBM vice president in charge of the Power Systems division, put out an IBM i strategy and roadmap whitepaper to give midrange shops a sense of what enhancements are coming down the pike. There are a number of other changes that will be necessary, I think, to convert the IBM i platform from a very respectable virtualized server platform into excellent cloudy infrastructure. And such changes could not only save the IBM i business, but revitalize it. The problem is, I think there is a lot of work to do, and little time.
First, let’s get a little perspective on this cloud phenomenon. To simplify a bit, cloud computing is what happens when you try to cobble together a mainframe from cheap components and try to mask the complexity of the underlying system. I wasn’t even born yet (and maybe a few of you were not, either) when some of the central tenets of commercial data processing were established. These include, but are not limited to, the need to share a compute facility across many users, which implies virtualization and high security. It also implies workload management, like the OS/400 and i operating systems have with their subsystem software architecture. The high cost of such compute complexes does not just imply pay-per-use chargeback, but also renting raw capacity instead of buying it. We could return to a world where people don’t buy back-end systems any more, but rather lease or rent them, or like our smartphones, get them “free” in exchange for making a commitment to a data plan.
The IBM i platform has some interesting strengths that could be deployed to make it an excellent foundation for cloudy infrastructure. For one thing, it is expensive, just like a smartphone actually is. My Motorola Droid, for instance, had a list price approaching $700 when it was announced a year and a half ago, but I didn’t pay anywhere near that for the device. Burying the cost of the device into the service does two things. First, it makes you focus on the service and the software on the device, not just the device. It’s a complete application system, as it were. When I got my Droid, I was as concerned with staying on the Verizon network as I was with getting a smartphone with a touchscreen, camera, and funky application software for checking the weather, traffic, and maps wherever I am in the world, as well as reading email and such.
Second, the bundling allows you to mask the cost of the device, making it more palatable. You still have to do the numbers and peel it all apart, but it shifts your thinking and your accounting from the capital budget to the operational budget. This makes it easier for both consumer and provider.
It wasn’t until after IBM’s near-death experience with antitrust lawyers, running from 1952, when the U.S. government sued IBM over its control of the punch card business, to the 1956 consent decree, which settled that lawsuit and put controls on IBM’s behavior in the nascent computer business, that Big Blue actually sold tabulating and computing devices. The consent decree made IBM sell its machines to customers who wanted to own them, including the licensing of software for those devices, at a reasonable price based on the rental prices that IBM charged already for the same functionality.
The shift from owned, physical computing to shared, virtual computing is not just about infrastructure flexibility, but economic flexibility. I expect there to be a mix of acquired, leased, and rented computing engines at most companies, and the high initial cost of the IBM i platform means that utility-style pricing can help make the i more palatable, just like the plan at Verizon made my Droid seem less expensive than it is.
As expensive as the IBM i platform is to acquire, it is legendary for its uptime, availability, security, and ease of use. All of these should, in theory, make a Power Systems box running IBM i less expensive to operate than alternative mainframe, Unix, and proprietary platforms, and it may even close the gap compared to X64-based machinery running the VMware vSphere server virtualization and vCloud Director cloud fabric. IBM has never proved to my satisfaction that an AS/400 was easier to set up and administer than a Windows or Unix box, but cloudy infrastructure gives IBM another chance to prove this box is better in some ways that justify its costs. (I remain both skeptical and hopeful until I see some hard data here.)
This closed but well-engineered approach is what makes modern Apple machines so popular. It is also why Apple co-founder Steve Jobs is vilified as much as worshipped out there in IT Land. Some people like open systems, and some people like machines that just work. (As if the two were mutually exclusive, but the goals are in contention, admittedly.)
The OS/400 platform was the first IBM platform beside the mainframe that had virtualization, and it had it way before VMware even started thinking about taking its toy Workstation hypervisor for X86-based PCs and moving it over to servers. The combination of OS/400 subsystems (which are akin to AIX workload partitions in terms of how they are used to manage workloads) and OS/400 logical partitions (which supported OS/400, then Linux, then AIX) should have made the OS/400 platform a premier platform for application clouds. They would not have been called that back then, of course, because one of the core ideas behind cloud–having enough network bandwidth to move a workload from machine to machine, and across the corporate firewall if necessary–was not yet workable.
Luckily, with the next release of the IBM i platform, Big Blue has promised to finally deliver some of the key features that will be necessary to make what I am calling (and I hate this) cloud-i i-nfrastructure. We already have logical partitioning and subsystems, but we have been missing workload and partition migration, which AIX has had for years now. Many of us have complained about these missing features, and now apparently the people who matter have done so, too.
As the updated IBM i Strategy and Roadmap explained, the Virtual I/O Server, a key component that virtualizes access to I/O devices like disk arrays and tape arrays for logical partitions on Power Systems machines, will be integrated with the native IBM i management tools, which don’t really speak AIX or PowerVM very well. IBM is also promising a means of managing virtual machine images (they are called logical machines here, IBM) and better management of these images, including “mobility,” which means live migration of running workloads. (Or rather, it had better mean that.) IBM is also bringing a number of storage virtualization principles, including thin provisioning, to the IBM i platform. IBM is pretty vague about when this will happen, but my cursory reading of old and new roadmaps suggests the next big release is 2012. That’s later than I would like, but better than 2013.
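For a sense of what that mobility might look like in practice, AIX shops already drive Live Partition Mobility from the HMC command line with the migrlpar command; presumably the IBM i incarnation will look something like it. Here is a hypothetical dry-run sketch (the managed system and partition names are invented, and the exact syntax on your HMC release may differ):

```shell
#!/bin/sh
# Hypothetical sketch of live partition mobility as AIX shops drive it
# from the HMC today with PowerVM Enterprise Edition. The system and
# partition names are made up, and the commands are echoed rather than
# run; IBM i mobility may well work differently when it finally ships.

SRC_SYS="Power750-A"    # source managed system (hypothetical name)
DST_SYS="Power750-B"    # target managed system (hypothetical name)
LPAR="prodi61"          # the logical partition to move (hypothetical)

# Validate the move first (-o v), then perform it (-o m). On a real
# HMC you would drop the echo and run these commands directly.
echo migrlpar -o v -m "$SRC_SYS" -t "$DST_SYS" -p "$LPAR"
echo migrlpar -o m -m "$SRC_SYS" -t "$DST_SYS" -p "$LPAR"
```

The point of the validate pass is that the HMC checks memory, processor, and VIOS mappings on the target box before anything moves, which is exactly the kind of plumbing the IBM i management tools will have to grow to speak.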
With these features woven in, plus some high availability tools (PowerHA, independent ASPs, and third party tools) and systems management tools (such as Systems Director and the Tivoli provisioning tools) already in existence, IBM should be able to create cloudy infrastructure to run RPG and COBOL applications and their DB2 for i databases.
There’s nothing wrong with hosting OS/400 and i applications in the traditional way, of course, and many companies will continue to do that. But even IBM thinks that cloudy infrastructure is going to eat into its traditional strategic outsourcing business, and for good reason. People need to pay for what they use, and not a CPW or megabyte of memory or terabyte of disk or gigabit of memory bandwidth more than that.
In short, they want to get back to the time-sharing rental base that made IBM the king of the systems business back in the 1960s and 1970s. Given this, you would think Big Blue would be trying to figure out how to build a giant OS/400 and i cloud in Rochester, Minnesota, and getting customers who haven’t bought iron in years to upgrade to new systems that way. And it should be pulling an AS/400 maneuver and getting all of the independent software vendors certified on said cloud. What am I missing here?