A Mainframe Renaissance
Published: February 12, 2008
by Chris Craddock
Many of the digerati in the IT industry and on the blogosphere have been debating whether a mainframe renaissance is under way. As you might expect, IBM touts its case in favor, and many in the mainframe camp assert that the "little boxes" couldn't possibly replace mainframes. There are also naysayers who believe the mainframe is a dinosaur: cumbersome and slow. Objectively, there are elements of truth on both sides of this argument.
But as I will describe below, each has been exploring and adopting elements of the other's technology. So I began to wonder: what would a mainframe renaissance look like, and how could we tell it was going on? And is that the only renaissance at work here?
The Renaissance (French for "rebirth") was a cultural movement that profoundly affected all aspects of European intellectual life from roughly the 14th century onward. So called "Renaissance thinkers" sought out learning from ancient texts and tried to improve their secular and worldly knowledge through the revival of ideas from antiquity and novel approaches to thought. (This definition was shamelessly appropriated from Wikipedia.)
The history of IT has some parallels to this description of the European Renaissance. All early IT processing was mainframe batch. Virtual storage, networking, databases and online transaction processing each added new wrinkles, but we never really trusted the technology enough to do real-time updates; so we still did a lot of batch master-file update processing. And since capacity was always a bottleneck, we spent our lives figuring out how to shave bytes and milliseconds.
The first IBM mainframe I worked on was a System/370 Model 158. According to IBM, the 370/158 had a cycle time of 115 nanoseconds, which is about 8.7 MHz. It was generally rated at 1.0 "old school" MIPS, so by happy coincidence the 370/158 became the benchmark to which all subsequent processors were compared.
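The cycle-time figure converts directly to a clock rate. As a minimal sketch (the 115 ns figure is IBM's; the helper function name is mine):

```python
def cycle_time_to_mhz(cycle_ns: float) -> float:
    """Clock frequency in MHz for a given cycle time in nanoseconds.

    A 1 ns cycle time is 1 GHz (1,000 MHz), so MHz = 1000 / ns.
    """
    return 1_000.0 / cycle_ns

# The 370/158's 115 ns cycle time works out to about 8.7 MHz.
print(round(cycle_time_to_mhz(115), 1))  # → 8.7
```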
The mainframe cultural preference for slow, methodical change was a sensible response to system limitations, but the world has changed. Minicomputers, PCs, Macs, and later still, microprocessor-based servers running Windows, Unix, or Linux, fundamentally changed our notions about user interfaces and networked accessibility to computing resources. Their world was never orderly, but chaotic, creative, and more than a little brash. They freed users from perceived (and real) limits of the glass house hegemony; and since they were located on users' desktops or departments and their prices were dramatically lower than the corporate mainframe, they flourished with little or no supervision from the bean counters.
To the chagrin of the mainframe crowd, these "small" systems blew past raw mainframe compute performance a decade or more ago, and more recently even the database transaction-processing crown has begun to look shaky. The current z9 microprocessor runs at 1.7 GHz, and while estimates vary, that is somewhere in the range of 600 to 700 "old school" MIPS, depending on workload mix and other factors. It's fast compared to the 370/158, but much slower than many contemporary microprocessors.
So the mainframe world has been looking staid in comparison to the distributed world. But that is changing. At the Hot Chips Conference in Silicon Valley last fall, IBM fellow Charles Webb unveiled the next-generation mainframe microprocessor, the z6. Its 4.2 GHz clock rate is almost two and a half times that of the architecturally similar z9. So, even with conservative assumptions, the z6 is likely to deliver at least 1,000 "old school" MIPS. That's a nice, round three orders of magnitude (1000:1) improvement over the 370/158; and while the 370/158 had only one CPU, the current z9 has up to 54, and z6-based systems will have even more. (The z6 is also a quad-core processor, IBM's first.)
IBM's holistic approach to mainframe RAS spans everything from circuit design to operating systems and middleware, so it has always been qualitatively superior to the other platforms and remains so today. Leaving aside all of the measurement variability, we have a simplistic scale with which to compare mainframe systems. Aggregate processor capacity has increased by over 50,000 times during my working life. Memory and I/O capacity and bandwidth have likewise increased far beyond anything I could have imagined 30 years ago. Energy consumption and floor space have declined dramatically, and price/performance and RAS have improved by an even greater degree; so these systems are not only getting faster, but they are also getting cheaper and more reliable. For the first time in a decade, mainframes will be competitive in compute power as well as in their traditional strengths of secure database and transaction processing. That raises the question of why more people haven't gotten the memo.
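The 50,000x figure checks out as a back-of-the-envelope calculation using the numbers already cited: a one-MIPS, single-CPU 370/158 versus a z6-class system assumed (conservatively, per the estimates above) to deliver about 1,000 MIPS per CPU across 54 CPUs:

```python
# Back-of-the-envelope sketch; the per-CPU MIPS and CPU counts are the
# article's estimates, not measured benchmarks.
OLD_MIPS_PER_CPU, OLD_CPUS = 1.0, 1        # 370/158
NEW_MIPS_PER_CPU, NEW_CPUS = 1_000.0, 54   # conservative z6-era assumption

ratio = (NEW_MIPS_PER_CPU * NEW_CPUS) / (OLD_MIPS_PER_CPU * OLD_CPUS)
print(f"{ratio:,.0f}x")  # → 54,000x, in the ballpark of "over 50,000 times"
```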
We have a strong case for a mainframe renaissance, but the technology flow hasn't been a one-way street. Webb's presentation contains surprising details of the commonality between the p6 and z6. They target different machine architectures and system designs, but much of the chip-level design is shared, and most of their physical design (frames, power, I/O buses, and so forth) has been shared between System p, System i, and System z for years. So while the mainframe has been learning from the open systems world, the open systems world has had somewhat of a renaissance of its own in learning from some classic mainframe strengths.
Now we find ourselves in an interesting place. In the past, capacity and performance were always concerns, and our businesses often grew faster than our machines. So there was always some latent demand waiting to be liberated at the next upgrade. But not even the most wildly successful businesses have approached a 50,000-fold increase over the last 30 years. If our demand for mainframe capacity remains geared to our old-world approaches, then IBM will have massively overshot the mark and modern mainframes will have (or could have) more processing power than we can possibly use. But will they?
We have to assume that IBM believes there is a market for such atom-smashing systems. The company's strategy is to soak up capacity by capturing and consolidating new workloads. Linux and z/VM provide a way to consolidate some of the server sprawl in the distributed world, which has led to huge management complexity and inefficient use of those resources, not to mention floor space and energy consumption. Mainframe-like virtualization is now available across most server lines. Software solutions like VMware's ESX Server and Citrix Systems' XenServer can accomplish for Windows and Linux much of what VM did for mainframe operating systems a generation ago. So we can argue that the distributed world is reaching back into mainframe history and learning from the past.
Another thing that has changed dramatically is the cost of people versus the cost of hardware and software. With a 370/158 it was worth spending my time trying to shave runtimes, whereas on a z9 or a z6, that notion is doubtful. Moreover, our traditional application tools and techniques fall far short of the distributed world in both productivity and the subjective quality of the end-user experience. IBM has tried to address that by bringing the best of the open world to the mainframe in the form of Java, WebSphere, and related technologies such as the zAAP processor. These favor human factors and productivity over traditional views of efficiency.
It can be argued they are more economical than our traditional approaches because they provide business agility. Some would say that IBM has not effectively advertised its success, but it has been phenomenal. We can all gain business value by leveraging what IBM has done to improve business agility without sacrificing the core values of integrity and reliability. And sure, we will burn more MIPS than we're used to; but 21st century people are expensive, and we've got MIPS to burn.
This economic shift is more of a challenge to our mainframe culture than to the mainframe itself. The platform is (finally) competitive in terms of compute power, space efficiency, energy-use and application software technology. Best-of-breed ideas are now shared between the mainframe and the distributed world, so each has had something of a renaissance. I think the real renaissance will come when our customers leverage and exploit those best-of-breed capabilities to deliver business value.
Vive la révolution!
Chris Craddock is a 30-year industry veteran with 20 years of commercial product development experience. He is currently senior vice president and distinguished engineer in the Office of the CTO at mainframe software maker CA.