The Supercomputer At The Heart Of The Power Systems Revival
November 28, 2016 Timothy Prickett Morgan
With both the Supercomputing 2016 conference and Thanksgiving Day behind me, I find myself being thankful that the engineers who helped IBM make the transition from proprietary 48-bit CISC processors to 64-bit PowerPC processors in 1995 (yes, that was more than two decades ago) were forward thinking. But as it turns out, there were other technologies, including massive floating point performance and database acceleration, that may in the long run help the entire Power Systems line not only survive, but thrive.
We all know that the PowerPC-AS version of the Power line of chips was the only one of several early attempts at 64-bit processing among Apple, IBM, and Motorola that actually worked out and had reasonably high performance and decent enough volumes to make it a business. The folks in Rochester, Minnesota, right down the road from some of the smartest supercomputer designers in the world, created an elegant and smart 64-bit design that also had a neat feature: a double-pumped floating point math coprocessor, built right into the core. That double-pumped math unit lived on in the Power4 chip that put IBM back in the RISC/Unix and supercomputer games in 2001, in the wake of the dot-com bust and a global recession, and that math unit lives on, heavily modified, mind you, in the current Power8 and future Power9 chips from Big Blue.
Back in September, when IBM updated its Power Systems LC line of Linux-only machines with a server code-named “Minsky” aimed specifically at high performance computing workloads such as modeling and simulation done at large enterprises and academic and government supercomputing centers, we talked about how IBM needed to think outside of the box and bring this very high performance system to bear on IBM i workloads. I explained at the time that IBM needed to make use of this new hybrid supercomputer (that is what it really is, after all) as a means of doing remote visualization for desktops (kind of a modern analog of the 5250 green screen terminal) and of bringing machine learning and database acceleration under the skins of the “integrated system” that the System/38, AS/400, iSeries, and System i have always been.
How is it that IBM i is getting machine learning and database function acceleration by GPUs last instead of first? What happened to IBM Rochester? IBM Austin? Is anybody out there listening? Is this thing on? (Tap, tap, tap. . . .) Once again, I will remind the people that run the IBM Systems group at Big Blue that there are at least 125,000 midrange shops that expect the company to provide an integrated machine learning and VDI system that also runs business and infrastructure applications, all in one fell swoop.
Last week at the SC16 conference, IBM outlined its new PowerAI machine learning platform, which I went into in detail at my other job at The Next Platform. (You can read all about it here.) This Minsky machine, which pairs two Power8 chips with four Nvidia Tesla P100 graphics coprocessors, has a total of 21 teraflops of double precision floating point performance for simulation and modeling, and it has 170 teraflops of half precision floating point power that makes it a very attractive machine on which to train the neural networks that drive machine learning applications these days, which do everything from image recognition to speech translation to recommendation engines. Next year, as I learned at the SC16 conference, IBM will work with Nvidia to fire up a two-socket Power9 system (probably with 48 cores running at near 3 GHz), codenamed “Witherspoon,” with a total of six future “Volta” Tesla V100 coprocessors that will deliver at least 40 teraflops of number-crunching performance at double precision and four times that, or 160 teraflops, at the half precision used for machine learning (and we think 320 teraflops at the one-quarter precision that is starting to evolve among the machine learning set).
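The bargain behind those half precision teraflops is precision traded for throughput: FP16 values take half the storage and move through the math units faster, but carry only about three decimal digits of resolution, which is enough for neural network training yet far too coarse for most simulation work. A minimal sketch of that loss of resolution, using only Python's standard library (the `struct` module's `e` format encodes IEEE 754 half precision; the GPUs do this in hardware, of course, not by emulation):

```python
import struct

def to_half(x: float) -> float:
    # Round-trip a Python double through IEEE 754 half precision (binary16).
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Double precision keeps a small increment added to 1.0 ...
d = 1.0 + 1e-4
print(d)  # 1.0001

# ... but near 1.0, half precision values are spaced about 0.001 apart,
# so the same increment is rounded away entirely.
h = to_half(to_half(1.0) + to_half(1e-4))
print(h)  # 1.0
```

This is why the same silicon is quoted at very different teraflops figures depending on precision: the half precision number only matters for workloads, like neural network training, that tolerate the coarser arithmetic.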
This is a tremendous amount of performance to cram into a 2U rack server, and as I said before, I want this to be brought to bear. I want the new PowerAI machine learning stack, based on open source software, that IBM has woven together and certified to be a commercially supported product. There is no reason whatsoever that this Minsky server with the PowerAI software cannot have partitions carved out of it to run the IBM i software stack and deliver unprecedented performance to accelerate a slew of workloads. Rather than bringing AI to the Power Systems that can run IBM i, we can bring IBM i to the machines that are aimed at supercomputing and machine learning workloads. The same machine that keeps the books at discrete manufacturers (who build physical products) could therefore be used to design products and help customers decide on what products to buy from that manufacturer. For process manufacturers, who tend to make chemicals or food, the same machines could be used to do genetic modeling or simulate recipes for products as well as keep the books and provide visualization for the simulations. For retailers, it could be a vast recommendation engine and accelerated database for web applications and online stores as well as the backoffice system. For regional banks, the system could do advanced fraud detection, accelerated by those GPUs, plus run the banking systems and online front ends for the applications. The list could go on and on.
Here is the point: IBM has to stop bifurcating its Power Systems business and get back to creating a single, integrated system. This is how to beat Intel. This is how to beat ARM. And the future of the IBM i platform depends on IBM getting this right. It has the hardware down to a science. It has a compelling Linux story, but Linux is only 30 percent of the market, and IBM does not have 125,000 Linux customers. It has 125,000 IBM i customers, and just like it took the Application System/400 to make relational database processing not only relevant to small and midrange businesses, but also affordable, it will take an integrated Power Systems/400 to bring transaction processing, simulation and modeling, machine learning, database acceleration, Java acceleration, and virtual desktop infrastructure all under the same server skins.
I have said it before, and I will say it again: It won’t be long before Microsoft makes the Office suite available on Linux, and that will add the front office. Rather than just being a datacenter in a box, as the Power Systems-IBM i combo is for most of the companies that use it, this hybrid IBM i machine would be an entire business in a box. A true International Business Machine that does it all, and in an integrated fashion that makes it valuable to customers. IBM needs to bring IBM i shops the future that awaits them. How could it forget this?