Flex Platform: An IBM System That Goes With The Tech Flow
Revised: February 7, 2012
by Timothy Prickett Morgan
IBM had the right idea back in April 1964 when it announced the System/360 mainframe, and it even had a name that reflected its understanding of what customers wanted. The idea was that a single line of machines, using different processor and storage technologies, would be united by levels of abstraction that would allow them to all run the same applications, but just on a different scale and with different price points.
It is a lesson that IBM quickly forgot when it bifurcated its market in 1969 with the advent of the System/3 minicomputer, and I still believe that IBM did this partly for legal reasons--so it would have something to spin off in the event that it lost its antitrust battle with the U.S. Justice Department--and partly as a recognition that it needed a less complex and less costly system to sell to midrange customers. The 9370 mini-mainframe and the AS/400, launched in the late 1980s, were byproducts of IBM's attempt to bring its disparate and incompatible mainframes and minicomputers back to one architecture, and the Project ECLipz convergence of the System i, System p, and System z lines a decade ago was yet another attempt to get the underlying hardware closer together.
The dream of a converged system dies hard at IBM and, in fact, is slowly being accomplished by mainframe rival Unisys, which has ported most of its MCP and OS 2200 mainframe environments to run on X86 iron. The last bits of I/O for OS 2200 are hard to port to Xeon processors right now, and running compiled Sperry and Burroughs applications on top of X86 instructions in an emulation layer is fine for low-end and midrange mainframes, but the performance penalty is still too high for the largest Unisys mainframe shops. Within a few years, if the rumors are correct, Intel is expected to do a little convergence of its own--something that is long overdue--by putting its Xeon and Itanium processors into a common socket, allowing a single system to support either architecture.
Convergence is hard, and there are a lot of ways of moving in that direction without having to go all the way to a single processor running all workloads. IBM has long since converged the AS/400 and RS/6000 processors and their I/O and memory subsystems. These families of midrange gear used to employ distinct Power processors, distinct memory addressing, and very different I/O subsystems; essentially, IBM stripped down an implementation of OS/400 to create a hypervisor for OS/400 and AIX (and eventually Linux) and then put that hypervisor and those operating systems on RS/6000 hardware. The AS/400 lost its distinct database-as-file system and its unique asymmetric I/O embodied in intelligent I/O processors (which allowed the complete system to do more work despite using a 48-bit CISC processor that was relatively unimpressive in terms of raw oomph), among other things. The System z mainframe processors use the same cache memory, decimal, floating point, and other processing elements as the most current Power7 chips, but still have unique instruction units and instruction sets (despite what some people erroneously believe). The Power and System z machines share I/O and memory subsystems and a slew of other peripherals. The same is largely true of the System x rack and tower and BladeCenter blade servers using X86 processors, but the cycle time is set by Intel and to a much lesser extent Advanced Micro Devices, not IBM, and is a bit shorter--about two years.
No matter what platform you are talking about, IBM systems are still largely monolithic systems that are built as a single set of machines that are tested with specific operating systems, verified with specific applications, and sold as new systems roughly every two to three years. This is why IBM's server sales go up and down like a roller coaster.
The impetus behind convergence is that IBM or Unisys or Hewlett-Packard can bring as few machines to market as possible and reap more profit as those machines are differentiated through their software stacks. But IBM wants more.
In fact, IBM seems to want two things, if a presentation by Ambuj Goyal, general manager of development and manufacturing for the Systems and Technology Group, presumably given to STG employees sometime last year, is any indication. Goyal, as I explained in last week's issue, has been charged with smoothing out the ups and downs in IBM's systems development and revenue streams, and he thinks that "refactoring" hardware, using the same iterative methods that operating system development employs, is the answer. When you refactor, you might change the key algorithms within a software stack to improve performance, but you do so in such a way as to not break compatibility, so existing applications don't puke their bits all over the chips.
It is important to note that Goyal was talking out loud at the front end of a different way of doing systems development, not laying out product roadmaps and launch times.
So, what are the two things that Goyal laid out for the STG faithful?
The first thing that IBM wants to do is make better use of its Power processor designs and the chip fabs that pump out its chips in New York and Vermont. As most of us recall from the PowerPC Alliance more than two decades ago, the idea was for IBM, Motorola, and Apple to create a family of compatible processors as an alternative to Intel's X86 processors--one that would leverage the strengths of Motorola's embedded business and of IBM's systems business, and come up with a new and better desktop processor than Motorola, the bane of Apple's desktop PC biz, had been able to deliver.
But for a lot of complicated reasons, IBM's own PowerPC business computers were stillborn, Apple suffered from delays and performance issues relating to Motorola's and IBM's desktop processors (and so Apple switched to X86 chips), and the PowerPC chips only did well as the follow-on to the 68K chips from Motorola in the embedded space and in midrange and high-end servers and clusters made by IBM. The problem, as Goyal laid it out, is that IBM and the PowerPC partners took a $100 billion revenue opportunity and shrunk it down to a $10 billion high-end Unix market (and, I would add, OS/400 market) and a $2 billion supercomputer market.
Goyal wants to do better than that, and open up the Power processor to play in more markets, thus:
GRAPHIC REMOVED AT IBM'S REQUEST
If you do the math, by moving out into cost-optimized systems, microservers, the midrange and low-end of technical computing, and big data and analytics--and by underpinning those systems with Power processors instead of X86 chips--IBM can more than quintuple its addressable market. None of this means that IBM wins five times as much business, of course, but it is playing in more markets. Anything that makes Power chips more appealing gives AIX and IBM i a longer and perhaps even a better future.
IBM would go broke if it tried to attack all of those distinct markets individually, from an engineering standpoint. And so the idea with refactoring is to stop building isolated, monolithic systems and to move to a component model for all systems and their related peripherals, drivers, and systems software, so you can improve one aspect of a system without disturbing the other parts. It seems obvious when you say it that way, but if you think about it, computers are not always engineered this way, and I would argue that they are increasingly monolithic, not decreasingly so. What can you change in an iPad? Or a smartphone? If you change a service processor in a mainframe or Power system, you need to recode the hypervisor. Same difference. And that doesn't make a lot of sense from an engineering point of view. A new service processor should just plug into a system with the equivalent of a BIOS update to the hypervisor.
GRAPHIC REMOVED AT IBM'S REQUEST
Now that IBM has bought Blade Network Technologies, which gives it switching, and XIV and Storwize, which gave it some missing assets in its storage business, the company can actually start building something that we can rightfully call an IBM system once again. (Even if IBM doesn't make the I/O cards, memory chips, and disk drives like it used to way back when.) And now instead of virtualizing networks in many different ways for all of its systems, it can do it in a consistent way that all of the systems--AIX, Linux, IBM i, Windows, z/OS, and so on as well as the z/VM and PowerVM hypervisors--can make use of. Ditto for storage virtualization and systems management across all the physical and virtual iron.
What customers want, and what IBM wants, is for Big Blue to make a machine that can run any workload in any fashion, with any networking and any storage using any operating system. This is done through componentized hardware and software, and the concept machine that Goyal talked about that embodies this modular approach is called the Flex Platform.
GRAPHIC REMOVED AT IBM'S REQUEST
The Flex Platform concept machine is not a rack server and it is not a blade server. It is a hybrid that falls somewhere in between, and is perhaps more useful than either of those constructs. While blades are interesting and useful, they have limits in terms of thermals, memory and processor capacity, and I/O bandwidth, and, quite frankly, some people just don't want them because they have a long and happy history with rack servers, which have more expansion capacity than blade servers and which can run faster processors if need be.
From the look of the graphic above, the Flex Platform is a 10U rack chassis that can hold seven full-wide servers or fourteen half-wide servers, each around 2.5 inches in height. That's roughly halfway between a 1U and a 2U server in height, but get over that for the moment. The servers are called IT Elements, or ITEs, not blades, and they slide into the front of the chassis. The Flex Platform has ITEs based on Power and X86 processors--and maybe even mainframes someday--as well as ITEs for shared storage and presumably storage on each ITE for direct-attached disk and SSDs. As with blade servers, you slide switch modules into the back of the chassis, which can scale up with multiple cards, much as a modular rack switch from Cisco Systems, Juniper Networks, or other suppliers does.
If IBM wanted to go really crazy, it could open up the specs for the Flex Platform and let other switch makers like Cisco and Juniper and other server makers like Super Micro, Quanta Computer, or--heaven forbid--rivals Oracle, HP, and Dell put in their own server units. (This seems very unlikely.) At the very least, IBM can add systems full of Netezza FPGAs and other kinds of co-processors, perhaps GPUs from Nvidia and AMD or even its own 18-core PowerPC A2 processors (used in the 20 petaflops BlueGene/Q supercomputer for Lawrence Livermore National Laboratory).
It's all very intriguing, and I am very curious about what Big Blue will actually do.
Big Blue's Software Gurus Rethink Systems
IBM Taps Software Exec For Power Systems Marketing
Q&A With Power Systems Top Brass, Part One
Q&A With Power Systems Top Brass, Part Two
IBM Lays Out Plans for Future Growth and Profits
IBM Puts Power Systems and System z Server Under One Manager
IBM Reorganization Tucks Systems Under Software
Palmisano Says IBM Will Double Up Profits By 2015
Bye Bye System p and i, Hello Power Systems
Why Blade Servers Still Don't Cut It, and How They Might
Why Do Rack Servers Persist When Blade Servers Are Better?