Big Blue Rolls Out Red Hat Power Stack
February 15, 2021 Timothy Prickett Morgan
A few weeks ago, we told you about some of the announcements that Big Blue was packaging up for Power Systems hardware and separately for the combination of its Red Hat systems software stack and Power Systems iron for on-premises datacenters. These announcements are slated to go out on February 23, as far as we know, but the IBM Announcement Letter system often has other ideas and sometimes even violates the company’s own embargoes, as if it has a mind of its own.
(For all we know, something that old and so full of data does have a mind of its own.)
In announcement letter 121-009 last week, for whatever reason, IBM let slip one of the supposedly forthcoming announcements, which was for the IBM Power Private Cloud Rack solution, which is pretty close to the same name as the IBM Power Systems Private Cloud Rack Offering that we heard about. As we suspected and as the names clearly imply, this is a complete Kubernetes container system, top to bottom, from the rack to the OpenShift containers. IBM says that a traditional eight-week deployment of Power Systems iron and the Red Hat software stack can now be condensed to eight hours, and also that using Power9 processors will allow 3.2X the density of containers per core compared to current X86 server processors on the market. We would love to see the math behind this. AMD Epyc 7002 and 7003 series processors have 256 threads across two sockets (64 cores times two threads per core times two sockets), compared to 384 threads for IBM Power9 processors (24 cores times eight threads per core times two sockets), and by our math that is only a factor of 1.5X more thread density, and supposedly therefore container density. Intel has 28-core Xeon SP processors with two threads per core, which is 112 threads per two-socket server, and against that Power9 has a factor of 3.4X more thread density. But you can’t ignore the gap that AMD has closed with memory bandwidth, I/O bandwidth, and core and thread count per system. IBM is also talking about software savings for software that has per-thread pricing, but we don’t know of any such thing. Most of the software we see is priced on a per-socket or per-core basis, or tiered in some other fashion.
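If you want to check our arithmetic, here is a quick sketch of it; the per-socket core and thread counts are the ones cited above, and the ratios are simple thread-count comparisons, not a benchmark of actual container density:

```python
# Total hardware threads in a two-socket server, using the counts cited above.
def threads(cores_per_socket, threads_per_core, sockets=2):
    return cores_per_socket * threads_per_core * sockets

power9 = threads(24, 8)  # IBM Power9: 24 cores, SMT8
epyc = threads(64, 2)    # AMD Epyc 7002/7003: 64 cores, SMT2
xeon = threads(28, 2)    # Intel 28-core Xeon SP: two threads per core

print(f"Power9: {power9} threads")                  # 384
print(f"Epyc:   {epyc} threads")                    # 256
print(f"Xeon:   {xeon} threads")                    # 112
print(f"Power9 vs Epyc: {power9 / epyc:.1f}X")      # 1.5X
print(f"Power9 vs Xeon: {power9 / xeon:.1f}X")      # 3.4X
```

Which is how you get 3.4X against the Intel parts but only 1.5X against the AMD parts, and why IBM’s 3.2X per-core claim needs some explaining.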
Having said all that, we do believe there are benefits to an integrated stack of Red Hat on Power, particularly when that same system can have logical partitions that collapse IBM i and AIX workloads onto the same machines. We are all for that, and have been banging the drum for such an approach for so many years our fingers are sore.
The minimum configuration of the Red Hat Power Stack, as I might call it if I had worked at IBM, is three Power S922 servers, each equipped with a pair of 10-core Power9 processors running at a base speed of 2.9 GHz that can turbo up to 3.8 GHz. These machines are eligible for the shared utility capacity pricing options, which were announced last year and which allow the boxes to be activated with as little as one core and 256 GB of main memory per server and then upgraded, cloud style, as necessary as workloads expand. Each of the Power S922 machines has 256 GB of memory and 3.2 TB of NVM-Express flash. These machines are linked to a FlashSystem 5600 array with 9.6 TB of capacity through a pair of SAN24B-6 switches, which are, as the name suggests, 24-port Gen 6 Fibre Channel switches running at 32 Gb/sec. The systems are put into a 42U rack, and there is an optional 52-port 1 Gb/sec Ethernet switch you can buy, which is essentially useless as far as I am concerned in a world at 100 Gb/sec and moving rapidly to 200 Gb/sec and even 400 Gb/sec.
On the software front, the systems are configured with Red Hat Enterprise Linux 8 as their operating system, which runs atop IBM’s PowerVM hypervisor, not the OpenKVM variant that only supports Linux inside of virtual machines. This is important because the OpenShift Container Platform actually runs atop the CoreOS variant of Linux (which has been rebranded as RHEL CoreOS), and our guess is that IBM had a much easier time running this on PowerVM than it did on OpenKVM. IBM’s implementation of the OpenStack cloud controller, PowerVC, is installed as well to manage VMs, rather than Red Hat’s OpenStack distribution, and that is because IBM tuned PowerVC to run on Power iron and Red Hat probably did not couple its OpenStack as tightly with the IBM management toolchain. Data management tools in the Spectrum Scale family can also be deployed on this Red Hat Power Stack. (Again, that’s my name, not IBM’s, which abbreviates it to the PCP Rack Solution at least once in the announcement letter. I try to stay away from Angel Dust and being put on the rack, myself. . . .)
For those who want a single node of this stack for development, IBM offers a single Power S922 that has all of the stack components running in PowerVM logical partitions.
Interestingly, there are no Red Hat Ansible modules for managing this stuff, which is surprising, and we wonder how IBM will use its toolbox to give customers a way to manage the on-premises gear with Power instances running Linux, AIX, and IBM i on the IBM Cloud, Skytap/Microsoft Azure, and Google Cloud.
The Red Hat Power Stack – isn’t that name just a whole lot better? – will be available on March 12.
Also available on March 12 is a new 16 Gb/sec dual-port Fibre Channel adapter (feature codes #EN1G and #EN1H), which was originally only available to AIX customers and is now supported with Linux. No word on whether this is supported with IBM i, but it probably is through the Virtual I/O Server, given the initial AIX support.
Additionally, IBM is now allowing a specific code (feature #ENSM) to be put into the configuration system that allows IBM i namespaces to be duplicated at build time for new orders across a pair of U.2 NVM-Express flash drives that are used as boot devices. (This is not for the M.2 gumstick drives, which are often used for operating system boot on X86 servers, but the bigger 2.5-inch and 3.5-inch NVM-Express devices.) This dual IBM i namespace configuration only applies to IBM i machines running Power9 chips and in the P05 software tier, which means a single-core Power S922 and a four-core Power S914.