Thoughts On The Power E850 And I/O Contraction
June 1, 2015 Timothy Prickett Morgan
Sorry to be so late making my comment about your very good article concerning the Power E850 and IBM i. I agree with most of what you said in the article. I have had a couple of exciting conversations about the Power E850 with Mark Olson, who is an old friend from my Rochester days. The E850 brings back memories of when I worked for Rochester as the interface with Software Group and we were always two years behind on support for all the WebSphere products. It was all about the optics of being behind, not about the immediate need for the latest versions of the WebSphere products. But. . .
The Power E850 exposes the lie of the march to “OpenPower.” If openness is so key to Power, then why in the world would you not support IBM i on the E850? It’s not like it is some sort of weird chip set that requires vast testing expense for the operating system.
All that said, I do agree with Mark’s argument that most midrange IBM i customers have more than enough memory and CPW in a Power S824. The sleeper here, however, is not so much the IBM drive to keep IBM i customers from downshifting from the P30 tier to P20 (the Power E850 would be a P20, not a P10) as it is the drive to push them away from direct-attached storage and/or toward lots of virtualization and solid state storage, things that deliver better margins for IBM.
The new I/O drawer technology available on the Power S824 significantly reduces the amount of direct-attached storage potentially available. A Power 740 with the Power7+ chip allowed you to attach up to four Feature 8202 expansion drawers, each with ten card slots, which gives you 40 card slots available to attach EXP24s, Ethernet, and magnetic media.
On a Power S824, you can attach only two (one per node) of the new #EMX0 drawers, each of which can have two Feature EMXF fan-out modules. Each fan-out module provides six card slots, so you end up with only 24 card slots available for storage and other things. That’s a 40 percent reduction in the number of card slots for a machine that only allows a single CEC with two nodes.
For a customer with two to three partitions, that’s not a problem, but those folks don’t need 225,000 CPWs. The customers that need over 200,000 CPWs have more partitions. And unless they want to use VIOS to virtualize everything, as in the AIX world, they first burn through five to eight card slots for Ethernet and tape. Then they are down to 14 to 16 card slots for storage. All the disk controllers for IBM i require pairing, so you’re down to seven to eight pairs. Pretty quickly, you are down to needing either the 387 GB SSDs or a Storwize SAN instead of direct-attached storage, where you have to worry more about having enough disk arms to cover the data.
A Power E850 is a four-node machine, so in theory it would support twice as many fan-out modules (48 card slots). That’s where it might have an impact in the midrange space where I live and work each day. So fret less about the lack of Power E850 support for IBM i, a religious battle, and more about how IBM is driving the cost of computing up for the P20 customer, a total cost of ownership battle.
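For readers who want to check the slot math above, here is a back-of-the-envelope sketch of it. The drawer, module, and slot counts come straight from the letter; the function names and the model itself are my own illustration, assuming the configurations as described, not output from any IBM configurator.

```python
def total_slots(drawers: int, modules_per_drawer: int, slots_per_module: int) -> int:
    """PCIe card slots available across all expansion drawers."""
    return drawers * modules_per_drawer * slots_per_module

def controller_pairs(storage_slots: int) -> int:
    """IBM i disk controllers must be paired, so usable controllers
    come two slots at a time."""
    return storage_slots // 2

# Power 740: four Feature 8202 drawers with ten slots each (one module per drawer)
power_740 = total_slots(drawers=4, modules_per_drawer=1, slots_per_module=10)  # 40

# Power S824: two #EMX0 drawers, each with two EMXF fan-out modules of six slots
power_s824 = total_slots(drawers=2, modules_per_drawer=2, slots_per_module=6)  # 24

# Power E850: four nodes, so in theory twice the S824's fan-out capacity
power_e850 = 2 * power_s824  # 48

reduction = 1 - power_s824 / power_740  # 0.40 -- the "40 percent reduction"

# After Ethernet and tape adapters, 14 to 16 slots remain for storage,
# which the pairing rule turns into seven to eight controller pairs.
print(controller_pairs(14), controller_pairs(16))  # 7 8
```

The pairing rule is what makes the contraction bite: each halving of free slots halves the number of usable storage controllers, which is why the discussion ends up at SSDs or a SAN.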
And one more thing: Thanks for all your work through the years. Maybe IBM will listen more closely to your concerns about the Power E850 than to old ex-IBMers like me.
Senior Systems Architect
Thanks for thinking this through and sharing. Once again, you help me learn. We are always grateful for your insight out here in the IBM i community. It takes a whole bunch of us to make sense of this, I think.