How IBM Stacks Power Cloud Up Against AWS And Azure
August 3, 2020 Timothy Prickett Morgan
We have our own habits when it comes to thinking about bang for the buck, and it is refreshing sometimes to just think about how other companies think about it.
As part of the July 14 announcements, now almost three weeks ago, IBM not only rolled out new entry Power9 machines with higher I/O bandwidth as well as utility-style pricing for on-premises gear to lower capital expenditures, but also did a direct comparison of a memory-heavy Power Systems Virtual Server instance running on the IBM Cloud against fat memory slices running on Amazon Web Services and Microsoft Azure. These are, of course, the two public clouds to beat when it comes to enterprise accounts – particularly Azure with those who are familiar with the Windows Server platform in their datacenter, which is still preferred by roughly three times as many IBM i shops as an adjunct platform to their core database and online transaction processing engine.
This is as much about sharing the data with you as anything else, as food for thought for both you and us. So without further fuss, here is the chart IBM embedded in some of its presentations from the launch:
The interesting thing here is that IBM has to look at it in dollars per TB per month for the database engines because it is charging for compute, memory, disk, and flash storage capacity by the hour. IBM's prices are found at this link; the comparable unit prices for R5 memory-intensive instances on AWS are found here; and the prices for the SUSE Linux instances on Azure for running SAP applications and their databases, which Big Blue chose for this comparison, are found here as well. It is not at all clear what configuration IBM chose for the Power Systems slice on the cloud, but we know that it was equipped with 960 GB of main memory. The AWS r5.16xlarge instance has 64 virtual CPUs, 512 GB of main memory, Elastic Block Storage network storage instead of local disk or flash that delivers 13.6 Gb/sec of storage bandwidth, and a 10 Gb/sec virtualized Ethernet link. The M64s instance on Azure has 64 virtual CPUs, 1 TB of main memory, and 2 TB of local storage. The r5.16xlarge is not the largest memory footprint on AWS – there is an R5 instance with 96 virtual CPUs and 768 GB of memory.
Presumably, whatever IBM set up on the Power Systems iron had 64 virtual CPUs as well or something close, although probably not 64 cores because you can’t get more than 15 cores in a logical partition on the Power S922 system that IBM has on the cloud or more than 50 cores on a logical partition with the Power E880 on the cloud. (We went into detail about IBM’s pricing on the cloud for slices of Power Systems iron back in June, so reference that for the details.)
Let’s assume for the moment that the computing oomph was equivalent and the memory was in the same ballpark, and focus on how IBM is looking at how much main memory is used, presumably mostly for the database processing. The interesting bit for us is looking at a server slice based on its memory capacity over time, not on its compute capacity over time. And if you do that, and price things at a cost per TB per month, the Power Systems slice is 41 percent cheaper than the fat memory instance aimed at OLTP workloads on AWS and is 30 percent less expensive than the special SLES for SAP instance on Azure that IBM chose for the comparison.
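For readers who want to reproduce this kind of normalization themselves, here is a minimal sketch of the dollars-per-TB-per-month arithmetic. The hourly prices below are illustrative placeholders, not the actual list prices from IBM, AWS, or Azure; only the memory capacities come from the configurations discussed above.

```python
# Sketch of the dollars-per-TB-of-memory-per-month metric IBM is using.
# All hourly prices here are made-up placeholders; plug in real list
# prices from each cloud's pricing page to do the actual comparison.

HOURS_PER_MONTH = 730  # common cloud-billing convention: 8,760 hours / 12

def cost_per_tb_month(hourly_price_usd: float, memory_gb: float) -> float:
    """Normalize an instance's hourly price by its memory capacity."""
    monthly_cost = hourly_price_usd * HOURS_PER_MONTH
    return monthly_cost / (memory_gb / 1024.0)  # dollars per TB of RAM per month

# Memory sizes from the article; hourly prices are hypothetical:
power_slice = cost_per_tb_month(hourly_price_usd=4.00, memory_gb=960)    # IBM Cloud slice
aws_r5_16xl = cost_per_tb_month(hourly_price_usd=4.03, memory_gb=512)   # r5.16xlarge
azure_m64s = cost_per_tb_month(hourly_price_usd=5.50, memory_gb=1024)   # M64s

savings_vs_aws = 1 - power_slice / aws_r5_16xl
```

The point of the metric is that a cheaper-per-hour instance with less memory can still be more expensive per TB per month, which is why IBM frames the database comparison this way rather than on raw compute.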
We are not saying these are necessarily good or bad comparisons. We are going to play around with the configurators for all three and see what is what. This is a good starting point.