Intel’s New Xeon E5s Push Back Against Power7+ Processors
September 16, 2013 Timothy Prickett Morgan
The tug of war between the Xeon and Power processor lines from Intel and IBM continued last week as the largest chip maker in the world got its “Ivy Bridge-EP” Xeon E5-2600 v2 processors into the field.
IBM, of course, has just finished rolling out its eight-core Power7+ chips across the Power Systems and Flex Systems lines, and only last month was previewing the capabilities of the future 12-core Power8 chips due maybe in the middle of next year. Now it is Intel’s turn, and it is starting with the rollout of chips for the workhorse two-socket servers that dominate the data center landscape. The Xeon E5-2600 v2 chips offer about 50 percent more performance for top-bin parts compared to the prior generation of “Sandy Bridge-EP” Xeon E5-2600 v1 processors that launched in March 2012. (And you thought IBM’s marketeers were the only ones with product naming issues. . . . )
The interesting thing about the new Ivy Bridge Xeon E5-2600 v2 is that there are actually three different variations of the processor: one with six cores, another with 10 cores, and a third with 12 cores. Here’s what the 10-core chip looks like:
Forget all the feeds and speeds for a second. Yes, I know I just said that. Here is what you need to appreciate about what Intel has done with these new Xeon E5-2600 v2 chips: Intel is plunking three related but very different processors into the same socket (and indeed, into the same socket as the prior Xeon E5-2600s) so it can offer chips that are more tailored to specific customer workloads. So what, you say? Well, Intel’s manufacturing prowess is second to none when it comes to pushing the process envelope–it is at 22 nanometers for server chips now and will be starting up 14 nanometer production for desktop and mobile chips before this year ends, while IBM is talking about 22 nanometers next year–and that is nothing new.
But now, Intel is offering a kind of mass customization within a processor socket, and not just by turning off elements of the chip. I would guess that this is expensive to do, but Intel’s dominant share of the server market and its chip fab capacity and skill mean it can do this. Add in the Atom C2000 chips announced a few weeks ago for microservers, storage arrays, and network devices, plus the Xeon E3-1200 v3 chips based on the “Haswell” core, the three different Xeon E5 chips, and possibly one or two impending “Ivy Bridge-EX” Xeon E7s, the expected Xeon E5-2400 v2 chips for low-cost servers, and the Xeon E5-4600 v2 chips for four-socket machines, and that is a very broad product line indeed. Intel is not just going to make it up in volume, but with a wide array of chips.
This is what will make it tough to compete against Intel, whether you are peddling a Power chip, a MIPS chip, or an ARM chip.
With the three different Xeon E5-2600 v2 chips, Intel can put 18 processors into the field with a wide range of core counts, cache sizes, memory bandwidths, and clock speeds. In general, the new chips cost about the same as or a little more than their Sandy Bridge predecessors at the same approximate spot in the lineup. And even when they cost more, the incremental cost to the overall system is on the order of a few percent–we are talking 2 to 5 percent, according to the server makers I talked to last week–and the incremental performance of 40 to 50 percent justifies that increase.
Once again, Intel wins. If IBM wants Power to carve out its niche–and I think it can do that, and the OpenPower consortium is a step in the right direction–it will have to get clever about its fabbing as well as its chip design. That’s just the way it is. If you want the really detailed analysis of the new Xeon E5-2600 v2 chips, you can check out the in-depth piece I did at The Register.
The other important point is that the future Power8 chips have to compete against these Ivy Bridge Xeons, and based on what IBM has previewed so far, it looks like IBM can put a chip in the field that will have higher clocks, as many cores, more cache, and four times as many threads, as well as some very interesting ways to integrate memory and external accelerators. Don’t count Big Blue out yet.