New OpenPower Servers Present Interesting IBM i Possibilities
September 19, 2016 Timothy Prickett Morgan
It seems increasingly likely to us that over the long haul IBM will get out of manufacturing all but the largest of its Power-based servers and its System z mainframe line, which by definition is big iron. There are a number of implications of this strategy for IBM i shops, of course, but let’s be honest here. Connect the dots and this seems inevitable.
Back in the AS/400 days, IBM made a custom machine, complete with homegrown processors and auxiliary compute, its own memory, and its own disk drives. When IBM started converging the AS/400 and RS/6000 lines on the same PowerPC processors in the late 1990s, it also moved to a common I/O architecture that largely eliminated the use of asymmetric processing and the I/O processors (IOPs) that made the AS/400 distinct from other types of systems. Not only did IBM make all kinds of peripheral cards common between AS/400s and RS/6000s over time, it eventually stopped making them itself and used the same cards as other server makers. Letting someone else fab its chips, as GlobalFoundries does, is just another step away from being International Business Machines, and it will come as no surprise to us when IBM moves to standard memory with the Power9 chips in 2017 for entry machines and in 2018 for scale-up machines. Others can bend the metal to make a server, and we suspect they will. The only thing IBM will do is the fun bit, which is designing the system architecture and the processors that embody it.
It was with great interest, then, that we saw that motherboard and white box server maker Supermicro is now making systems based on the Power8 processor, and that Wistron is making a killer CPU-GPU hybrid box that packs a serious punch. (I detailed these machines over at The Next Platform.)
But don’t get too excited. Even though it is technically feasible to run IBM i on these Power Systems LC machines, there is no way in hell that Big Blue is going to let customers do that. Unless we give it a good reason, that is. Which, by the way, I can do. Let’s have some fun.
These two Supermicro systems are aimed at taking on X86 systems in the scale-out cluster market, and a third machine, which starts shipping next week, employs the modified Power8 chip that supports Nvidia's NVLink high speed interconnect for lashing GPUs to CPUs. This one is aimed specifically at supercomputing clusters that use GPUs to accelerate the performance of massively parallel simulations.
The latter machine, the Power Systems S822LC for High Performance Computing, is overkill for most IBM i shops, which have no need (as yet) to accelerate any of their work using a GPU. Or so it would seem at first blush. But there are plenty of databases that have been written from the ground up to run on GPUs and offload parallel routines to these adjunct processing units, and we think that if IBM truly believes in hybrid computing and accelerators then it would be wise to accelerate its own DB2 database management systems–the one for IBM i, the other for Windows, Linux, and Unix, and the final one for System z mainframes–with GPUs. At some point in the not too distant future Java will also be able to offload parallel routines to GPUs, something that IBM and the Java community have been working on for many years, and if this capability comes to Java then there is no reason it cannot come to RPG and other languages deployed on the IBM i operating system.
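To make that concrete, here is a minimal sketch in plain Java (not tied to any IBM-specific API; the class and variable names are illustrative) of the kind of data-parallel loop that GPU-enabled Java runtimes target. The IBM SDK for Java, for example, can push certain parallel stream loops of this shape onto a GPU when the feature is enabled; on a stock JVM, the same code simply spreads across all of the CPU cores.

```java
import java.util.stream.IntStream;

public class ParallelAdd {
    // Element-wise vector add expressed as a parallel stream. Loops with
    // independent iterations over primitive arrays like this are the shape
    // that GPU-enabled Java runtimes aim to offload; on an ordinary JVM
    // the work is divided across CPU cores instead.
    static float[] add(float[] a, float[] b) {
        float[] c = new float[a.length];
        IntStream.range(0, a.length).parallel().forEach(i -> c[i] = a[i] + b[i]);
        return c;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        float[] a = new float[n];
        float[] b = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }
        float[] c = add(a, b);
        System.out.println(c[10]); // 10 + 20 = 30.0
    }
}
```

The point is that nothing in the source changes when the runtime decides to offload: the same loop runs on CPU cores or a GPU, which is exactly why the capability could flow down to RPG and other IBM i languages the same way.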
But I have another reason for wanting to do a hybrid CPU-GPU setup with IBM i. I want to bring the desktop terminal back under the skins of the Power Systems machine, in essence creating a modern version of a 5250 green screen. Instead of using the 5250 terminal protocol and a literal dumb terminal (or an emulated one), I want IBM to serve up virtual PC instances from a Power server and virtualize the GPUs to offer virtual graphics. This would eliminate the need for a full-blown PC on every corporate desktop and would allow for absolute security and maintainability of each PC instance. It would require a switch to Linux, which runs on Power chips, for the desktop operating system, but this could all be masked or made to look like a Windows desktop to the end users. It won't be long before Microsoft makes the Office suite available on Linux (why not?), and that would pretty much end the discussion. My point is, rather than just being a datacenter in a box, as the Power Systems-IBM i combo is for most of the companies that use it, this hybrid IBM i machine would be an entire business in a box. A true International Business Machine, as it were.
IBM would have to get some tweaked variants of the Tesla GPU accelerators from Nvidia to do both compute and virtual GPUs for desktops at the same time. For some reason–extracting maximum profits from different customers–Nvidia bifurcates its product lines, with the Tesla accelerators being restricted to compute and the GRID accelerators being restricted to virtual GPUs for virtual desktops. In this case, the system needs to do both, and installing two types of GPU accelerator is just silly. The same engines that virtualize desktops can also accelerate databases and other functions, and IBM should do this. IBM needs to believe in the future for its most loyal customers, and I am kind of annoyed that database acceleration using GPUs is not already available on DB2 for i. I remember the days when things like encoded vector indexes appeared first on the IBM midrange platform, not last or–worse yet–never. Machine learning training and inference routines could also be offloaded to the GPUs, and trained neural nets could be used to run recommendation engines and all kinds of data manipulation and automatic encoding applications in commercial settings. IBM has to THINK.
The Power Systems S822LC for High Performance Computing, which is code-named "Minsky" inside of IBM, is a killer box, with two Power8 processors running either eight cores at 3.25 GHz or ten cores at 2.86 GHz, plus four of the latest "Pascal" Tesla P100 GPU accelerators. It has a stupid amount of parallel processing capability: 21.2 teraflops of aggregate double precision floating point capacity across those four Tesla P100s, twice that at single precision, and four times that at half precision (which is important for training neural networks and for running inference engines once a net is trained). Loaded up with all that compute and 128 GB of main memory, a system with 20 cores of Power8 runs under $50,000, IBM tells me. Charge $2,000 a core for IBM i and $500 a core for Linux and this could be a phenomenal midrange platform for doing all kinds of things.
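The flops math checks out if you assume the published peak rate of roughly 5.3 teraflops of double precision per Tesla P100 (a figure not stated in the article itself); this little sketch just does the arithmetic:

```java
public class MinskyFlops {
    // Assumed published peak for one Tesla P100: ~5.3 teraflops at
    // double precision (FP64).
    static final double FP64_PER_GPU = 5.3;
    static final int GPUS = 4;

    static final double FP64 = FP64_PER_GPU * GPUS; // aggregate DP: 21.2 TF
    static final double FP32 = FP64 * 2;            // single precision: 42.4 TF
    static final double FP16 = FP64 * 4;            // half precision: 84.8 TF

    public static void main(String[] args) {
        System.out.printf("FP64: %.1f TF, FP32: %.1f TF, FP16: %.1f TF%n",
                FP64, FP32, FP16);
    }
}
```

That 84.8 teraflops of half precision in a 2U box is what makes the machine interesting for neural network training, where the reduced precision is good enough.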
So yes, again, I am suggesting that IBM drink its own Kool-Aid and embed machine learning in IBM i. And why not?
The System/38 was IBM's first relational database engine, not the mainframe, and the AS/400 was the first machine to bring relational databases to the masses at a price they could afford, embedded in such a way that they did not have to be propellerheads to use it. Rather than arguing over why IBM should be adding machine learning and cognitive computing to the IBM i platform, the real question is why this was not done years ago, so it was ready when this impressive hardware came to market. So why is that, IBM? The answer must be that IBM does not see the 125,000 or so IBM i customers as being sophisticated enough to need all of the modern tools of business. IBM has also forgotten its own genius. But it is never too late. IBM i 9.1 and Power9 processors are coming in 2017, and the foundations for a modern, cognitive IBM i platform can still be built.
As for the two Supermicro machines, which are code-named "Briggs" and "Stratton" after the small engine maker I remember from lawnmowers, these are more traditional servers. The Stratton machine, technically known as the Power Systems S821LC, has two sockets in a 1U form factor with four 3.5-inch SATA drives. It uses a geared-down Power8 chip that runs at 2.09 GHz with ten cores or 2.32 GHz with eight cores, both of which fit in a 130 watt thermal envelope. The Stratton machine costs around $5,500 in a base configuration and around $10,500 with two Power8 chips and 128 GB of memory (which maxes out at 512 GB). The CPU in the Stratton server is not the Power8 variant with the NVLink ports on it, but you can attach older Tesla K80 GPUs to the Power8 complex using PCI-Express links.
The Briggs machine, known as the Power Systems S822LC for Big Data, is a 2U box that has room for a dozen 3.5-inch SATA drives and has the same processor speeds and core counts as the Minsky machine, but, like the Stratton box, it does not support NVLink. The base Briggs box costs around $6,000, and with two ten-core chips and 128 GB of memory, you are talking about paying $11,500.
The Briggs and Stratton machines are obviously a lot less expensive than regular Power Systems that can run IBM i, and I think that Big Blue should be making these available to hosters building IBM i clouds. Supermicro is the server supplier for its SoftLayer cloud, which is why it is being tapped to build these two Power Systems LC boxes in the first place. There is no reason why IBM can’t support IBM i on these machines and help its business partners build more cost-effective clouds, and it should do just that. Time is a-wasting.