Power Systems Not Getting 3D XPoint Memory Anytime Soon
April 1, 2019 Timothy Prickett Morgan
A lot of people don’t remember this, but Intel was founded in 1968 as a maker of semiconductor main memory for mainframes, and in the early 1970s the company commanded almost as much market share in main memory as it does in datacenter compute today. But as competitors in Japan did a better job of ramping up new technologies, by the early 1980s Intel’s market share had dropped to somewhere between 2 percent and 3 percent, and with no easy or affordable way to get back into the game, by 1984 it had to wind down its memory operations.
That move out of the memory business forced the company’s founders, Gordon Moore and Robert Noyce, to hand the reins of the company over to Andy Grove, who was director of engineering when the company was founded, became president in 1979 just as the memory business started to implode, and was tapped in 1987 to be chief executive officer to clean up the mess and have a go at expanding the use of microprocessors. There is no question that Grove’s strategies worked, and Intel is living off the genius of Grove every bit as much as IBM’s System z mainframe line is still living off the genius of Gene Amdahl and the IBM i platform is still living off the genius of Glenn Henry. (Ironically, after creating the System/38 architecture and working on the AS/400, Henry left IBM in the 1990s to create a clone X86 chip maker, called Centaur Technology, because he thought Intel did not have enough competition in processors.)
Sometime soon, concurrent with the launch of the “Cascade Lake” processors, the second generation in its Xeon SP line, Intel will be back in the memory business proper for the first time in three and a half decades. Alongside that processor launch, which is just a rev on the first generation “Skylake” Xeon SP processors that have been out since July 2017, Intel is launching a variant of its 3D XPoint persistent memory, implemented with bit-addressable controllers that are functionally equivalent to DRAM and in form factors that slip right into DRAM DIMM slots. Code-named “Apache Pass,” these 3D XPoint DIMMs were originally supposed to come out as part of the Skylake server platform nearly two years ago, but the 3D XPoint memory did not yet have high enough yields, and did not offer the kind of duty cycle and capacity at the right price point, to come in as a much cheaper but also slower augmentation for DRAM main memory.
Main memory capacity is a big bottleneck for a lot of workloads, and memory bandwidth is a limiting factor in a bunch of other ones. The 3D XPoint DIMMs will be sold under the Optane Persistent Memory Module, or PMM, brand and will initially be available in 128 GB, 256 GB, and 512 GB sizes, which is 2X to 4X the top-end capacity of typical DDR4 DRAM these days. The 3D XPoint memory was developed in conjunction with memory maker Micron Technology, and was initially manufactured at the shared fab that the two operate in Lehi, Utah. Last year, Intel and Micron split apart their flash and 3D XPoint memory businesses, and one of the things that Intel apparently got out of the divorce settlement is exclusive rights to sell 3D XPoint main memory in the DIMM form factor until 2021. That means IBM will either have to use other kinds of persistent main memory in Power Systems, or wait more than two years until Micron can deliver it. It also means that for all intents and purposes, Intel has a monopoly on 3D XPoint memory in the X86 server space, and has effectively barred peddlers of IBM Power9, AMD Epyc, and various Arm server chips from being able to add high volume persistent main memory into their machines.
When Intel and Micron announced 3D XPoint memory back in August 2015, it was pitched as a kind of hybrid sitting halfway between DDR4 DRAM main memory and NVM-Express flash, which was just starting to be commercialized in a limited way. The idea, according to the two companies, was that 3D XPoint would have 10X the density of DRAM, and would therefore eventually be cheaper by far, and would have 1,000X the endurance and 1,000X the performance of flash. The jury is still out on how well 3D XPoint accelerates systems while driving down their costs, but the word on the street is that 3D XPoint DIMMs have 2X lower latency than even NVM-Express flash drives and cost about one-quarter the price of DRAM in this initial batch of Optane DC PMMs from Intel. (That is not quite 1,000X on the performance scale.) Over time, as 3D XPoint chips get memory cells that can be stacked up higher and higher, the hope is to get that cost down to one-tenth the price of DRAM. This means that Optane DIMMs can be used to extend main memory as well as offer a fast tier for flash, all while keeping persistence.
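To put that pricing math in perspective, here is a back-of-the-envelope sketch in Python. The per-GB prices and DIMM counts below are illustrative assumptions, not quotes; the only figure taken from the discussion above is the rough one-quarter-of-DRAM price ratio for Optane.

```python
# Back-of-the-envelope memory cost comparison. The $/GB figures are
# hypothetical placeholders, not street prices; Optane is pegged at the
# rough one-quarter-of-DRAM ratio cited above.
DRAM_PER_GB = 10.0                 # assumed $/GB for fat DDR4 sticks
OPTANE_PER_GB = DRAM_PER_GB / 4    # "about one-quarter the price of DRAM"

def memory_cost(capacity_gb, price_per_gb):
    return capacity_gb * price_per_gb

# A hybrid socket: 768 GB of DRAM acting as cache plus six 512 GB Optane
# PMMs, versus buying the full capacity in DRAM alone.
dram_gb = 768
optane_gb = 6 * 512
hybrid = memory_cost(dram_gb, DRAM_PER_GB) + memory_cost(optane_gb, OPTANE_PER_GB)
all_dram = memory_cost(dram_gb + optane_gb, DRAM_PER_GB)
print(f"hybrid: ${hybrid:,.0f} versus all-DRAM: ${all_dram:,.0f}")
```

At these assumed prices the hybrid configuration comes in at well under half the cost of the all-DRAM build, which is the whole pitch for Optane as a main memory extender.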
There are a couple of different ways to make use of Optane memory. The first is to simply use it as main memory and to treat DRAM as a kind of fast cache for it. (You can’t boot a system without DRAM, however, so don’t take it too far.) You can also treat the Optane DIMMs as a block storage device, so it looks to the system like very fast storage. There is also an App Direct mode that knows Optane DIMMs are persistent memory; applications and systems software are tweaked to know what data to put in very fast DRAM (which can’t be persistent) and what to put in somewhat slower 3D XPoint (which is persistent by nature, like flash).
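The App Direct idea, loads and stores into a byte-addressable region that survives restarts, can be sketched with an ordinary memory-mapped file standing in for the persistent region. On real hardware this would go through a DAX filesystem and a library such as PMDK’s libpmem; the file name here is hypothetical and the `flush()` call is only an analogue of a true persistence barrier.

```python
# Sketch of the App Direct programming model: byte-addressable, persistent
# access. A plain memory-mapped file stands in for a DAX-mounted 3D XPoint
# namespace; this is an illustration, not actual persistent memory.
import mmap
import os
import struct

PATH = "pmem_region.bin"   # hypothetical stand-in for a pmem device file
SIZE = 4096

# Create and size the backing region, then map it so stores are plain
# memory writes rather than read()/write() system calls.
with open(PATH, "a+b") as f:
    f.truncate(SIZE)
    with mmap.mmap(f.fileno(), SIZE) as pm:
        struct.pack_into("<Q", pm, 0, 42)  # a "persistent" 64-bit counter
        pm.flush()                          # analogue of pmem_persist()

# After a (simulated) restart, the value is still there.
with open(PATH, "rb") as f:
    with mmap.mmap(f.fileno(), SIZE, access=mmap.ACCESS_READ) as pm:
        (value,) = struct.unpack_from("<Q", pm, 0)
print(value)  # prints 42
os.remove(PATH)
```

The point of the mode is exactly this: software that knows the region is persistent can keep durable data structures in it directly, instead of serializing them through a block device.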
The reason why this matters now is that the Cascade Lake chips from Intel will have an advantage over all other processors in that main memory will be able to be doubled, tripled, or even quadrupled affordably, thus allowing more parts of the system stack and larger applications and datasets to be pulled into main memory. At some point, Micron will be able to sell 3D XPoint DIMMs, and everyone else will get a chance to play, too. Other technologies, such as phase change memory (PCM) and resistive RAM (ReRAM, a close cousin to 3D XPoint), are in development, but they are not anywhere near as far along as 3D XPoint, which has been shipping in SSD form factors to the hyperscalers and cloud builders since 2017.
This could give Intel and its server partners a serious advantage on any in-memory workload, such as Apache Spark or SAP HANA, both of which are wrapped around in-memory databases. IBM is very keen on growing its SAP HANA business on Power Systems, and has done a good job capturing market share away from Hewlett Packard Enterprise, Dell, and others who play in this arena. But the fact remains that fat DDR4 DRAM sticks will continue to be more expensive than even fatter Optane DC PMMs from Intel for the foreseeable future.
Big Blue could cushion the blow for IBM i shops a little by porting the Active Memory Expansion memory compression features from AIX, which it announced alongside the Power7+ processors way back in 2012, to the IBM i operating system. (That was when Intel and Micron first started divulging the 3D XPoint technology to key customers and OEM and ODM partners, by the way.) Doing this might boost effective memory capacity by 40 percent to 50 percent over raw capacity, while driving up CPU utilization in the process to get that extra memory. You are just trading CPU cycles for memory capacity, and although it can boost throughput by 60 percent to 70 percent, there is less headroom in the system to deal with spikes.
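As a rough sketch of that trade-off: the expansion ratio below comes from the 40 percent to 50 percent range above, while the CPU overhead figure is an assumed placeholder, not an IBM benchmark.

```python
# Illustrative AME-style trade-off. The expansion ratio reflects the
# 40%-50% effective-capacity range; the CPU overhead is an assumption.
raw_gb = 256                    # physical DRAM installed
expansion = 0.45                # midpoint of the 40%-50% effective gain
compress_cpu = 0.15             # assumed CPU fraction spent (de)compressing

effective_gb = raw_gb * (1 + expansion)
spare_cpu = 1.0 - compress_cpu  # headroom left to absorb load spikes

print(f"{raw_gb} GB raw presents as {effective_gb:.0f} GB effective, "
      f"with {spare_cpu:.0%} of the CPU left for spikes")
```

The arithmetic makes the point plain: the extra effective capacity is paid for out of the same CPU budget that would otherwise absorb demand spikes.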