Advice For The IBM i Shop Buying X86 Servers
October 1, 2018 Timothy Prickett Morgan
We spend a lot of time at The Four Hundred talking about the Power Systems servers and the IBM i platform, but we also keep a keen eye on what is going on in the rest of the IT world, particularly when it comes to alternative server hardware and transaction processing and data analytics platforms. There are many, many ways to skin the cats that are the backbone of the business.
Ok, so that was a bad metaphor. It happens. Like a line of bad code.
Anyway, it is not lost on us that somewhere around 95 percent of IBM i shops are also running alternative platforms for specific jobs that either wrap around the Db2 for i database and its applications or actually run the application code itself. Or, in yet other cases, some workloads that might have otherwise been created or bought for the IBM i platform were bought for Unix, Windows, or Linux server platforms from the get-go and are sitting beside the IBM i-Power Systems combo as a peer. With so much going on in the server racket these days, we thought it was time to talk a bit about these other machines, which end up being application servers, web servers, file servers, and database servers for OLAP and other analytics workloads, and which sometimes present a threat to the IBM i platform and in other cases present an opportunity to consolidate those workloads onto the Power Systems iron in the shop on logical partitions. This can be done directly with AIX and Linux workloads, of course, but for Windows workloads you have to find analogous applications to move them to Linux, AIX, or IBM i on Power iron.
The important thing to consider is that, for the first time in a decade, AMD is winning the technology war on many fronts in its battle to take back some of the server business from Intel, which dominates the market: X86 vendors have well north of 80 percent of overall server revenues and well north of 95 percent of shipments. It is, except for some special cases like the IBM i platform, largely an X86 datacenter and largely a Xeon one. But that is starting to change. The “Naples” Epyc 7000 series of chips that AMD announced in June 2017 is starting to get traction. At this point, five of the top eight hyperscalers and cloud builders have deployed Epyc infrastructure, and some of them had quite heavy appetites for Opteron processors more than a decade ago, when AMD last did well against Intel in servers. Intel has done better with its “Skylake” Xeon SP processors, launched in July 2017, than many might have expected, but the server market has expanded a lot more than anyone expected, too, so there has been plenty of money sloshing around for server upgrades and everybody is getting a piece of the action.
I am not about to rewrite the substantial amount of research and analysis that I have done on the Intel and AMD server chip lines over at The Next Platform, but I will point out links to them and then talk about the strategies that IBM i shops should employ when buying servers or putting competitive pressure on IBM and its Power Systems resellers.
For the Skylake Xeon SP chips, here are the key things to read:
- Intel Melds Xeon E5 And E7 With Skylake – This one talks about the packaging, converged socket, and pricing bands for the Skylake Xeons ahead of the launch last year. It’s tricky.
- The X86 Battle Lines Drawn With Intel’s Skylake Launch – This is the launch announcement, with all the feeds and speeds and prices of the processors. Remember that these prices are for a single unit for OEMs and ODMs when they buy in 1,000-unit trays of finished processors.
- Drilling Down Into The Xeon Skylake Architecture – As the title suggests, this is an architectural deep dive into the Skylake Xeon processor family (there are really three Skylake SP chips, not one) and the systems that use them.
- Intel Stacks Up Xeons Against AMD Epyc Systems – Here is Intel’s own competitive analysis of its own advantages and the weaknesses of the Epyc chips.
- The Huge Premium Intel Is Charging For Skylake Xeons – No matter what way you cut it, Intel is charging more, not less, for compute with the Skylake Xeons. The curves don’t lie, and neither do I.
- The End of Xeon Phi – It’s Xeon And Maybe GPUs From Here – This talks about the Intel Xeon roadmap out to 2020 and Intel’s difficulties in radically changing the Skylake design until “Ice Lake” samples in late 2019 for delivery in 2020 in systems.
For analysis on the AMD Epyc line, you should take a gander at these core stories I did:
- AMD Winds Up One-Two Punch For Servers – This is the first revelation that AMD was going to try to make a market for powerful one-socket servers.
- Competition Returns To X86 Servers In Epyc Fashion – This is the original Epyc announcement, which has the feeds and speeds and prices.
- Why Intel Must Respond To AMD’s Single Socket Threat – AMD is cramming as much into a single-socket Epyc server, in terms of cores, memory, and I/O bandwidth, as is typically found in a two-socket Xeon server with a modest core count, and it is beating the Xeon on price/performance pretty handily.
- AMD Coils For 7 Nanometer Leap Over Intel And Nvidia – Nvidia was first out of the gate with a semi-custom 12 nanometer chip making process from Taiwan Semiconductor Manufacturing Corp, but AMD is going to get the shrink to 7 nanometers first with its future “Navi” Radeon GPUs and the “Rome” Epyc CPUs, beating Intel to market by about a year with the advanced process technology. (Intel’s 10 nanometer process is arguably close, in terms of actual transistor etching, to TSMC’s 7 nanometer process.) This has huge consequences for the server market, where smaller transistors generally mean more cores and cache and such and better bang for the buck. (Not with the Skylake Xeons, however, as we mentioned above.)
- Virtualization Is The Real Opportunity For Epyc – This discusses the enterprise server market and how core counts, memory bandwidth, and secure virtualization are all playing to AMD’s Epyc and putting pressure on Intel’s Xeon.
That’s enough homework reading. I just want you to be up to speed as we start the fourth quarter and you are probably looking to upgrade some X86 servers. Let’s talk for a moment about that.
Arm servers are not for you, IBM i shops, not yet anyway. It may seem exciting to join the Arm Army, but there is still a lot of work to be done on the hardware and software fronts and not a lot of price competition. The only Windows Server builds available for Arm chips are those deployed inside of Microsoft to run its own workloads, and none of the other systems software is being made available. Stick to X86 for now for these workloads, and prefer a move to Power over a move to Arm because Power is more mature and stable. (Arm will get its day, fear not.)
Seriously consider buying single-socket AMD Epyc servers with top bin CPUs over buying two-socket Skylake Xeons with middle bin CPUs. AMD wants to eat market share, with a goal of 5 percent of shipments as it exits 2018 and maybe 10 percent or 15 percent in 2019, depending on whose whispers you want to listen to. I happen to think maybe 20 percent of the market is in play; others think maybe 25 percent. We shall see. But AMD has created a 32-core Epyc 7000 that can hold its own against Intel’s top 28-core Skylake Xeon SP part or even a pair of chips with 14 cores or 16 cores further down the line. The AMD chips deliver a lot more I/O (128 PCI-Express 3.0 lanes in a single-socket machine versus 80 with Intel’s two-socket machine), plus eight memory controllers to Intel’s six, and therefore have 33 percent more memory capacity and 33 percent more memory bandwidth per socket. More cores, more bandwidth, more capacity. Less money. The old adage about IBM back in the 1990s may hold for Intel Skylakes: You can find better, but you can’t pay more.
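That back-of-the-envelope comparison can be sketched in a few lines of Python. The channel counts come from the paragraph above; the per-channel bandwidth figure is an assumption on my part (the nominal rate of a DDR4-2666 channel, roughly 21.3 GB/s), and the exact number does not matter because the ratio is what counts:

```python
# Rough per-socket comparison of Epyc 7000 and Skylake Xeon SP memory
# subsystems. Channel counts are from the article; the per-channel
# figure is an assumed nominal value for DDR4-2666 (~21.3 GB/s).
DDR4_2666_GBS = 21.3

epyc_channels = 8  # eight memory controllers per Epyc socket
xeon_channels = 6  # six memory controllers per Xeon SP socket

epyc_bw = epyc_channels * DDR4_2666_GBS
xeon_bw = xeon_channels * DDR4_2666_GBS

# The ratio of channel counts gives the per-socket advantage.
extra = (epyc_channels - xeon_channels) / xeon_channels
print(f"Epyc per-socket bandwidth: {epyc_bw:.1f} GB/s")
print(f"Xeon per-socket bandwidth: {xeon_bw:.1f} GB/s")
print(f"Epyc advantage: {extra:.0%}")  # prints "Epyc advantage: 33%"
```

The same two-extra-channels-out-of-six ratio is where the 33 percent figures for both capacity and bandwidth come from, since both scale with the number of controllers per socket.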
Stick with platforms that can support both Windows Server and Linux, and support them well. This is more about the configuration of the system than the instruction set of the processor. If you are going to invest in X86 servers that might be around for five, six, or seven years – and at a time when memory and flash are very expensive – get ones that are configured to provide a good balance between compute, memory capacity, memory bandwidth, and I/O bandwidth. This is really important. For a lot of workloads in the modern era, memory bandwidth is the constraint, not compute capacity, and in some cases memory capacity can be sacrificed for memory bandwidth. In the old days on the AS/400, when systems were built differently, we always told people to only half populate their memory slots so they had room to double up capacity later, when the memory inevitably became cheaper. This was important because for OLTP workloads, adding memory and adding disk arms allowed the machine to do more work (with less CPU headroom, mind you) and allowed customers to forgo an even more expensive CPU upgrade that might have bumped them up into a higher OS/400 software tier. But with main memory being so expensive now, it might make more sense to have more bandwidth feeding into the CPU cores than more capacity behind them, so fully populating the memory slots on a server with lower capacity DIMMs might be the advice for a lot of workloads. My point is this: Think about the workloads and how they might change, and think hard about the configurations, well in advance of buying. Don’t just accept some low-ball configuration with not enough memory. And don’t forget that flash will help speed up disk I/O when it is used as a cache, but it also puts more strain on the CPU-memory complex. If you want to push more work through a system, it has to be balanced.
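To make the capacity-versus-bandwidth tradeoff concrete, here is a small sketch comparing two hypothetical configurations of the same server: half the slots populated with big DIMMs versus all slots populated with small DIMMs. The slot counts, DIMM sizes, and the simple one-DIMM-per-channel bandwidth model are all illustrative assumptions, not vendor specs:

```python
# Illustrative sketch: same total capacity, very different bandwidth.
# Assumes one slot per channel and bandwidth that scales with the
# number of channels that have at least one DIMM in them.
def config(slots_populated, dimm_gb, channels=8, gbs_per_channel=21.3):
    active_channels = min(slots_populated, channels)
    return {
        "capacity_gb": slots_populated * dimm_gb,
        "bandwidth_gbs": active_channels * gbs_per_channel,
    }

half_big = config(slots_populated=4, dimm_gb=64)    # 4 x 64 GB DIMMs
full_small = config(slots_populated=8, dimm_gb=32)  # 8 x 32 GB DIMMs

print(half_big)    # 256 GB of capacity on 4 channels
print(full_small)  # 256 GB of capacity on all 8 channels
```

Both configurations land at 256 GB, but the fully populated one drives twice the memory bandwidth into the cores, which is the point of the advice above when bandwidth, not capacity, is the bottleneck.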
Don’t be afraid to leverage the threat of an X86 conversion to get better pricing on the IBM i-Power Systems combo. Even if you don’t do it, these X86 machines are very powerful and considerably less expensive than the ones running the IBM i stack. Threaten to move and then demand Power-Linux pricing at the very least. If IBM is willing to cut breaks for new customers, it should be willing to cut them for companies that stay in the Power fold.
I don’t claim that this is an exhaustive list, and if you have some other advice, I am all ears and happy to share your thoughts.