Power8 Iron Gets New I/O Options
June 1, 2015 Timothy Prickett Morgan
As part of last month's rollout of slightly faster processors for the entry Power8 machines and the fleshing out of the high end of the Power Systems line, IBM rolled out a bunch of new peripheral features and a new peripheral expansion unit that will allow Power8 shops, both high and low, to attach more PCI-Express 3.0 peripherals to their systems than is currently possible on these machines.
Such I/O and storage expansion is important, given the ever-increasing amount of compute and memory that is being put into ever-smaller system packages in the Power Systems line. But as is pointed out elsewhere in this issue of The Four Hundred, in some cases the increased expansion for I/O and storage is still less than was possible with prior generations of Power7 and Power7+ systems.
The new I/O options are outlined in announcement letter 115-021. The key new item is the PCI-Express Gen3 I/O Drawer, and this is particularly important to IBM i shops that have relatively modest compute needs but a lot of storage and I/O. Such customers will be steered towards a two-socket Power S824 and the new I/O drawers because IBM is not allowing IBM i to run on the quad-socket Power E850 midrange machine. Thus far the reaction to this has been kind of tepid. But once customers and resellers understand the implications of these moves, as Doug Fulmer points out elsewhere in this issue, they might start kicking up a fuss and asking for IBM i to be supported on the Power E850, because it broadens their storage options and, significantly, allows them to hang a lot more direct-attached disk off their systems than can be done using the Power S824 and the new I/O expansion drawers.
IBM announced this PCI-Express 3.0 expansion drawer, Feature #EMX0, last year for attachment to the Power E870 and Power E880 enterprise-class machines. Now the number of drawers supported on those systems is being doubled, and the Feature #EMX0 drawer is being designated as the I/O expansion unit for the smaller Power8 machines, which IBM calls "scale-out" systems because, you will remember, it hopes to sell them as alternatives to X86 machinery in server clusters. Here is what it looks like for the entry Power8 machines:
For most IBM i shops, an entry machine is their main database engine; the same holds true for the OpenVMS base over at Hewlett-Packard, too. But these shops often need more storage than can be attached directly inside the system. IBM, unlike its midrange peers, adopted InfiniBand the way it was intended, using it to link the central processing complex to external I/O expansion drawers through what IBM called a GX++ port and a 12X link. No other entry and midrange machines that I know of allow for such high-speed expansion of what is in effect direct-attached peripherals. With the Power8 machines, IBM has abandoned InfiniBand-based GX++ ports and is using PCI-Express links to hook external peripherals to the main system. Here is what it looks like hooking the new expansion drawer to the back-end of a two-socket Power8 machine:
As you can see, the peripheral expansion eats two PCI-Express slots in the system for each half of the expansion box. IBM could have moved to much faster InfiniBand links, which come in 56 Gb/sec and 100 Gb/sec speeds right now from Mellanox Technologies; the old GX++ ports ran at 20 Gb/sec because they were based on InfiniBand running at that speed. Here is how the expansion drawers, old and new, stack up against each other:
The new Feature #EMX0 expansion drawer has a lot more bandwidth, with two links running at 32 GB/sec each, compared to a mere 20 GB/sec for the GX++ port and 12X loops, which, as the table points out, might have to be spread across two 12X loops. The older expansion drawer also only supported PCI-Express 1.0 x8 slots, which is fine for storage and network controllers but not so good for accelerators and some flash cards these days, which want fatter and faster PCI-Express slots. The new chassis was engineered for high-end work and perhaps for the adoption of certain kinds of accelerators, I think, not necessarily for the needs of entry IBM i shops. But despite that, the bandwidth will come in handy. IBM says that a Power7 machine with Feature #5877 expansion drawers will yield an average of 1 GB/sec to 2 GB/sec of bandwidth per slot, depending on how many drawers and slots hang off it, which is a lot less than the 5 GB/sec per slot that the Feature #EMX0 drawer will deliver. That said, the GX++ port plus 12X loops could hang 20 slots off a single machine, compared to a maximum of 12 for the new box. That is the trade-off between the two architectures.
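That trade-off can be made concrete with a little back-of-the-envelope arithmetic using the figures above. This is only a rough sketch: it assumes aggregate link bandwidth is divided evenly across fully populated slots, which real workloads never do.

```python
# Rough per-slot bandwidth comparison of the two expansion architectures,
# using the figures quoted in the article. Actual throughput depends on
# workload, slot population, and how traffic spreads across the links.

def per_slot_bandwidth(total_gb_per_sec: float, slots: int) -> float:
    """Average bandwidth per slot if all slots are driven at once."""
    return total_gb_per_sec / slots

# Old: GX++ port with 12X loops, ~20 GB/sec aggregate, up to 20 slots.
old = per_slot_bandwidth(20, 20)

# New: Feature #EMX0, two PCI-Express links at 32 GB/sec each, max 12 slots.
new = per_slot_bandwidth(2 * 32, 12)

print(f"Old GX++/12X: {old:.1f} GB/sec per slot across 20 slots")
print(f"New #EMX0:    {new:.1f} GB/sec per slot across 12 slots")
```

The new drawer wins on bandwidth per slot by a wide margin, which squares with IBM's 1 GB/sec to 2 GB/sec versus 5 GB/sec figures; the old architecture wins on sheer slot count.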
Here is how the slots stack up on the entry Power8 machines coupled with the new I/O expansion drawers:
On the entry Power S812L, Power S822, and Power S822L machines, the expansion slot count is nearly double what it was before the expansion unit was announced; on the Power S814 and Power S824 machines (which run IBM i), the slot count is more than double with the expansion unit only half filled; and on the Power S824 and Power S824L with both modules in the expansion unit, there are nearly three times as many peripheral slots.
With the requisite PCI-Express system cards, cable pairs, chassis, fan-out modules, and power cords, a half-populated Feature #EMX0 expansion drawer costs $12,224 at list price. Adding the second fan-out module, cables, and adapters will boost the price to $20,010. These boxes may not be systems, but they cost like them.
On the peripheral features front, IBM has rolled out a four-port 10 Gb/sec Ethernet adapter that works with IBM i, AIX, Linux, and VIOS, which is Feature #EN16 with fiber optic cables ($3,864) and Feature #EN18 with copper cables ($3,013). There is also a new 56 Gb/sec InfiniBand adapter, which comes in short and tall versions. Pricing was not available at press time for these cards, nor for the new two-port 10 Gb/sec Ethernet adapters that support RoCE, the Ethernet implementation of the Remote Direct Memory Access protocol that gives InfiniBand its low latency. There is also a new SAS adapter that includes RAID data protection and 12 GB of cache, which will be particularly useful for flash-heavy configurations.