Still Wanted: A Power-i System of Systems
July 18, 2011 Timothy Prickett Morgan
IBM is all about “smarter systems” and “workload optimized systems” these days in its marketing, and as far as I can tell, in the past two years the company has done nothing to bring the kind of focused design it has applied to supercomputing and mainframe shops, just to name two, to bear on the Power Systems/IBM i base. I think the company is missing a big opportunity to re-engage with those midrange customers.
What got me to thinking about this, of course, was last week’s launch by Big Blue of the mini mainframe called the System zEnterprise 114, which has five or 10 mainframe engines and MIPS that range from 26 to 3,100 running z/OS, z/VM, or z/VSE workloads. Like other modern mainframes, the box holds a bunch of other engines, which can be used as spares in the unlikely event that a core on a chip fails, or which can be configured as I/O processors, Linux engines, or special accelerators for speeding up Java/XML code or DB2 database processing.
On top of that, starting with the System zEnterprise 196 announced last year–the machine responsible for a rebound in mainframe sales over the past three quarters, soon to be four when IBM reports its second quarter 2011 financial results today, July 18, after the market closes–IBM is also extending the internal, secure networks that link Hardware Management Consoles to mainframes so they can manage the provisioning, monitoring, and patching of up to 112 blade servers running in four BladeCenter chassis.
This is a special configuration called a System z BladeCenter Extension, or zBX for short, that IBM thinks will give the mainframe a new lease on life. And not so much because of hardware–anyone can integrate hardware, and I say that as a guy who has had great fun doing just that–but because of the Unified Resource Manager that allows IBM to take control of the PR/SM partitioning on mainframes, the PowerVM hypervisor on Power7 blades in the zBX, and the KVM hypervisor from Red Hat for IBM’s HX5 blades in the zBX. The latter were just announced as zBX options as part of last week’s announcements. They use Intel‘s high-end Xeon 7500 and E7 processors, which have eight and 10 cores, respectively, can address lots of memory, and have big ole price tags. IBM is only supporting Linux on the Xeon blades atop KVM right now, but customers have been clamoring since last year for Windows support, and IBM has now promised that it will deliver it, but is not foolish enough to say when that will happen.
The new zEnterprise 114 can host the same two racks of zBX iron as the bigger zEnterprise 196. IBM is not crimping the external processing on the smaller mainframe, which was an interesting, but not necessarily predictable, choice.
All of this zBX scale would be overkill for most Power System-IBM i shops. In fact, the processing power in one blade server would likely be more than the typical shop with a Power 520 or Power 720 server needs. That, however, does not mean that OS/400 and i shops don’t need a “system of systems.” In fact, I would argue that they need it more than mainframe shops do and would be more enthusiastic about an integrated heterogeneous system than even IBM’s mainframe shops.
The modern Power System machine could easily be the foundation of a midrange system of systems, of course, and I pointed that out a year ago when the System zEnterprise 196 was launched. Power Systems machines can support IBM i, AIX, and Linux natively on logical partitions. BladeCenter machines can be lashed to the Power Systems iron and storage arrays using InfiniBand or 10 Gigabit Ethernet links and Fibre Channel networks, where appropriate. So IBM could easily cobble together a baby version of the zBX aimed at Power Systems and probably even port its Unified Resource Manager software–or some analog running on its HMCs and service processors–to make it all work and be just as secure as the mainframe.
But after thinking this over for about a year now, I don’t think the best idea for IBM i shops is a system of systems, but rather something that aggregates and controls systems, their applications, their PCs, and any applications that might be running out there on the cloud. I want the OS/400 operating system to be the traffic cop for the whole shebang, not just the control point for applications running on Power and X64 systems.
Let’s start with the reality at most OS/400 and i shops. They have the largest machine their budgets allow, and because the box is so expensive, it is much smaller than it could be and arguably a lot less than the aggregate Windows server capacity at their company. I would venture a guess that there are anywhere from a handful to dozens of X64 servers, most of them running Windows but maybe a smattering running Linux, running a variety of workloads. You can’t dislodge most of those Windows applications because Windows is now just as established as OS/400 and IBM i and is arguably the politically safer choice for all kinds of workloads (this is not necessarily justifiable, but it is so nonetheless). Ditto for Windows on the desktop. There’s no point in trying to take a direct run at Windows with Linux or Mac OS.
I am, however, going to make a strong assertion. The security of the OS/400 and IBM i platform is much more sophisticated and less well understood than for Windows and Linux, and therefore makes a good insulation layer not just for native applications on the box, but for any single sign-on and authentication that needs to be done anywhere on the network. So why not put it at the center of access to PC images running on desktops and laptops (preferably in virtual mode) as well as for access to external cloud applications, too? Make IBM i software the control point for everything a user needs, regardless of where it is located. That way, the machine always remains relevant, no matter where the applications are.
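The pattern being proposed here–one trusted machine authenticates a user once, then hands out a credential that every other system honors–can be sketched in a few lines. This is a toy illustration only: the names, shared key, and token format are my own assumptions, not any real IBM i API. The broker issues an HMAC-signed, expiring token, and any relying service (a PC image, a cloud app) can verify it without holding the user directory itself.

```python
import hashlib
import hmac
import json
import time

# Assumption: the central machine and the relying services share this key,
# distributed out of band. All names here are hypothetical.
SECRET_KEY = b"shared-with-relying-services"

def issue_token(user: str, ttl_seconds: int = 3600) -> str:
    """Central sign-on: authenticate the user once, issue a signed, expiring token."""
    payload = json.dumps({"user": user, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str) -> bool:
    """Any relying service checks the signature and expiry locally."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return json.loads(payload)["exp"] > time.time()

token = issue_token("qsecofr")
print(verify_token(token))        # True: valid, unexpired token
print(verify_token(token + "x"))  # False: signature no longer matches
```

The point of the sketch is the division of labor: only the central box ever sees passwords or the user directory, while everything downstream just checks a signature, which is exactly the insulation-layer role being argued for.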
I think that PC virtualization needs to be brought into this system of systems I am proposing for midrange customers because PCs are every bit as important to midrange shops as their Windows and IBM i servers. I find the approach that MokaFive has taken with virtual desktop infrastructure (VDI) interesting in that the company does not do VDI by streaming PC images from expensive servers back in the data center, as Citrix Systems and VMware do with their respective XenDesktop and View products. Rather, MokaFive puts a thin Linux kernel out on each PC that is under control of the MokaFive Suite central server, which lets a Windows or Linux image run on the local machine (a laptop or desktop) and keep running even when the network connection is broken. You centralize control, but you localize processing on the cheapest iron possible. (A server is considerably more expensive than a PC, which is why VDI has seen decent but not spectacular uptake in the market thus far.)
I think IBM needs to position IBM i in much the same way as MokaFive is positioning its systems management software. In fact, I would say that IBM should buy MokaFive and use it as a foundation of a product that manages virtualized PC and server images, with IBM i or AIX, depending on the flavor you want to pick, at the heart of it all. Then you use IBM i or AIX to do all authentication for all users, regardless of what kind of apps they run and for access to servers no matter what kind they are.
That wouldn’t be so much a system of systems as it would be a data center, network, and clients in a box. Not all that different from a System/38 back in 1980, when you think about it. Just more modern and cloudy.
The important thing about the approach I am talking about is not that it gives IBM control, but that it gives IT shops control and a sense of security. And that is something that midrange shops will pay for.
Once this software stack is done, with IBM i acting as a kind of universal resource and access manager, then you can talk about integrating all kinds of neat hardware together in ways that are more appropriate for midrange shops than the current BladeCenter machines. As I have said before, I think server components need to be more modular, meaning that a memory module can be used no matter what processor module you have, and ditto for network and I/O modules. I am thinking of building systems out of child boards, not motherboards, and making a family out of it. I am also thinking that the architecture has to allow for lots of loosely coupled machines as well as tightly coupled processors as conditions dictate. But that is an engineering problem for another day.