IBM Launches Hybrid, Flexible Systems Into The Data Center
Published: April 16, 2012
by Timothy Prickett Morgan
It takes a little time and a lot of money to roll out a new server architecture, and even a company as large as IBM can't do it very often. The System/360 in 1964. The System/38 in 1979 and its follow-on, the AS/400, in 1988. The RS/6000 in 1990. The BladeCenter in 2002, and the Sequent-inspired clustered server nodes in the xSeries and pSeries in the mid-2000s. iDataPlex in 2008. And now the PureSystem converged infrastructure, launched last week.
IBM started designing the BladeCenter blade server back in 1999, when some enterprise-class companies were pretty fed up with the space requirements and complexity of rack-based servers. For a while there, it looked like blade servers might become a standard and that companies of all sizes might adopt the technology, but for a variety of reasons, blades only account for about 20 percent of server revenues in any given quarter. But there is a new category of machine, driven by the needs of hyperscale Web and supercomputing customers, called density-optimized by many, that is growing fast. These include microservers, single-socket machines that you cram by the dozen into a chassis, as well as fatter nodes that you put into a 2U or 4U chassis in twosies or foursies. In either case, these dense machines have no midplane or integrated management and switching like a blade chassis does. The basic assumption is that the application and storage layers in the Web application infrastructure have enough parallelism and replication to make many of the reliability features of a general purpose server unnecessary. With a modern app, you don't need these features because you are not putting all of your application eggs in one server basket.
At the unveiling of the PureSystem family of machines at the Barclays Center in Brooklyn, Steve Mills, general manager of the combined Systems and Software Group, threw up some pie charts that showed the cost of a system and other ancillary costs over time, including people costs. (You couldn't see the charts from the webcast.)
"What overwhelmingly determines the cost of the system is people," said Mills. "People, people, people."
And, he added, while information technology has been transformative for business, eliminating people, the irony is that IT is one of the last places to use automation to eliminate the need for people. (I don't think it is ironic, myself. I think it was absolutely intentional, to give services companies like IBM Global Services a revenue stream and to allow IT people to maintain their jobs and prestige inside the data center.)
"We're going to have to help you deal with the real cost of computing," Mills conceded. "We have to break through the barriers that exist." Those barriers include the ones that exist between servers, storage, and networking, but also the one between the moment you decide to buy a system and the many months later when that system is typically up and running with real applications in the data center. The problem, explained Mills, is that there is a tension between having an integrated system with limited options that is easy to deploy and the flexibility that allows a system to be adapted to fit different needs. In essence, you want to strike the balance somewhere between the AS/400 of the mid-1990s and the X86 server sprawl of the modern data center.
Just at the point where Rod Adkins was brought up on stage to show off the new Flex System iron and the PureFlex raw infrastructure boxes and PureApplication application servers, the fake newscast format that IBM used for the launch event cut away from the Barclays Center. So you are stuck with me walking you through the machines. This is a very high-level view, and I will spend time drilling down into the feeds and speeds later.
As I told you back in February, the Flex System chassis that is at the heart of the PureSystems is a totally new design, one that has been in the works since the summer of 2008, known initially under the code-name "Clean Slate," then as "Project Troy" inside of IBM, and finally to the outside world as the "Next Generation Platform," or NGP.
The FlexSystem chassis is a 10U rack-mounted unit that can hold 14 half-width or seven full-width server nodes, or a mix of server and storage nodes once IBM eventually creates an integrated version of its Storwize V7000 disk array for the box. That is a little less dense than the BladeCenter chassis, which could cram 14 vertically oriented blades into a 9U chassis, but as Steve Sibley, director of Power Systems, explained it to me, the BladeCenter design was a bit challenging in terms of adding peripherals or top-bin and hot processors to a server node. Flipping the servers to a horizontal orientation makes them easier to slide in and out of the chassis, and lets you make the nodes a bit taller than the blades were thick, so long as you make a half-wide motherboard for two-socket machines. This is what Cisco Systems did with its "California" Unified Computing System, and it is what IBM decided to do with the FlexSystem chassis. Moreover, the BladeCenter design was subject to overheating in some cases, and these problems have been engineered out of the FlexSystem.
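A quick back-of-the-envelope check on that density comparison, using the node and chassis-height figures above:

```python
# Density comparison from the figures above: 14 blades in a 9U
# BladeCenter chassis versus 14 half-wide nodes in a 10U FlexSystem.
bladecenter = {"nodes": 14, "rack_units": 9}   # vertical blades
flex_system = {"nodes": 14, "rack_units": 10}  # horizontal half-wide nodes

def density(chassis):
    """Server nodes per rack unit of chassis space."""
    return chassis["nodes"] / chassis["rack_units"]

print(f"BladeCenter: {density(bladecenter):.2f} nodes/U")
print(f"FlexSystem:  {density(flex_system):.2f} nodes/U")
```

So the new chassis gives up roughly a tenth of a node per rack unit in exchange for the easier servicing and taller nodes.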
IBM is supporting server nodes based on its own Power7 processors as well as Intel's new "Sandy Bridge-EP" Xeon E5-2600 processors. At the moment, there are nodes with Power7 processors with either two sockets in a half-wide or four sockets in a full-wide configuration (meaning it eats up two bays in the chassis); the Intel node only comes in a half-wide configuration now, but it is reasonable to conjecture that when Intel gets the Xeon E5-4600 processor for four-socket machines out later this year, IBM will support a full-wide (or two-bay) node sporting four of these processors. While Sibley didn't say this, IBM's documentation says that there are some four-bay compute nodes coming to the FlexSystem at some point in the future, and the integrated Storwize V7000 will eat four bays as well.
Each FlexSystem chassis has an integrated 10 Gigabit Ethernet switch module and optional Fibre Channel switches from QLogic and Brocade to link out to storage area networks. Each node has two local disks, but you are meant to store operational data on the Storwize V7000 that is shared by the nodes. There are top-of-rack switches from IBM's new networking unit (based on its acquisition of Blade Network Technologies) for linking multiple FlexSystem enclosures together, and you can chain up to four racks, or 16 enclosures, together and manage them as a single domain.
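Those chaining limits imply a maximum management domain size, which you can sketch like so (the four-enclosures-per-rack figure is implied by the 16-enclosures-in-four-racks limit):

```python
# Maximum single management domain, per the chaining limits above:
# up to four racks, or 16 enclosures, managed as one domain.
racks = 4
enclosures_per_rack = 16 // racks    # implied: 4 enclosures per rack
nodes_per_enclosure = 14             # half-wide server nodes per chassis

enclosures = racks * enclosures_per_rack
max_nodes = enclosures * nodes_per_enclosure
print(f"{enclosures} enclosures, up to {max_nodes} half-wide nodes")
```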
You can run the FlexSystem server nodes in bare metal mode if that suits your applications--data analytics and Web farms run bare metal, as do supercomputer clusters. But if you want or need to virtualize the nodes, then on the Power7 nodes you do so with PowerVM and its Virtual I/O Server sidekick, and on the X86 nodes with either VMware's ESXi hypervisor or Microsoft's Hyper-V hypervisor. Sibley told me that support for Red Hat's KVM hypervisor is coming. The Power nodes can run IBM i 6.1 and 7.1 or AIX 6.1 or 7.1, as well as Red Hat Enterprise Linux and SUSE Linux Enterprise Server. The Xeon nodes can run Microsoft Windows Server or Linux from either Red Hat or SUSE.
The FlexSystem chassis comes with two 2,500 watt power supplies, and you can add another four as you need to power up more gear or want redundancy. It comes with four 80mm fans for cooling compute nodes and four 40mm fans for cooling switch modules, with an optional four more 80mm fans for extra airflow. The chassis has one management module (similar to the one in the BladeCenter chassis), with a second, backup module being optional. You also need to eat one of the server bays for the FlexSystem Manager appliance, which manages all of the server, hypervisor, and network settings and is, as best as I can tell from my initial briefing, a mashup of the Systems Director Management Console and its VMControl add-on plus some Tivoli and other tools from the storage and networking parts of IBM. Sibley says that the FlexSystem Manager uses the same graphical user interface that was developed for the XIV and Storwize arrays, and it can run in any Web browser, including those on a smartphone or tablet.
Mills: 99 and 44/100 percent pure systems
No one buys a FlexSystem chassis by itself. The base acquisition is something called a PureFlex system, which is a new rack that IBM populates with a FlexSystem chassis and other components:
IBM's PureFlex system configurations.
There are three configurations of the PureFlex systems, which differ from each other in price and componentry. None of the machines include processor nodes in their base configurations. The Express configuration has a chassis with two power supplies and four fans, a management node running FlexSystem Manager Standard Edition, a 10 GE top-of-rack switch, an 8 Gb Fibre Channel switch, a V7000 array with eight disk drives and two SSDs, the rack, and three years of support with an annual microcode update, as well as three days of lab services from IBM. This will run you $100,000.
The Standard version of the PureFlex platform will cost you $200,000, and it adds a second Fibre Channel switch, two more fans, and two more power supplies to the chassis, as well as eight more disks to the V7000 array. IBM also tosses in something it calls an account advocate, special hand-holding like IBM customers used to get back in the days of the System/360 and System/38, with advocates available on a 9x5 business-hour basis. The support contract is upgraded to 24x7 for the iron as well, and the FlexSystem Manager software is upgraded to the Advanced Edition. Interestingly, IBM also activates its SmartCloud Entry (SCE) management software, the same stuff Big Blue uses to run its public cloud, on the machines for you to use. You also get five days of lab services.
If you want to go full out, the PureFlex Enterprise configuration runs $300,000, and you get a microcode update twice a year and account advocates that are available around the clock, plus a week of lab services to help with installation. IBM tosses in some more iron, too, to justify the higher cost: an extra 10 GE switch, two more SSDs, and four more disks for the Storwize array, plus the full complement of six power supplies and eight fans.
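Summing the add-ons described above gives a rough tally of the three tiers. The counts here are my own derivation from the prose, not an official spec sheet:

```python
# Rough tally of the three PureFlex tiers, built by adding each
# upgrade's extras to the base Express configuration described above.
tiers = {
    #             price,  psus, fans, fc_sw, ge_sw, disks, ssds
    "Express":    (100_000, 2, 4, 1, 1,  8, 2),
    "Standard":   (200_000, 4, 6, 2, 1, 16, 2),
    "Enterprise": (300_000, 6, 8, 2, 2, 20, 4),
}
for name, (price, psus, fans, fc, ge, disks, ssds) in tiers.items():
    print(f"{name:10s} ${price:>7,}  {psus} PSUs, {fans} fans, "
          f"{fc} FC + {ge} 10GE switches, {disks} disks, {ssds} SSDs")
```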
The FlexSystem chassis and PureFlex systems will be available on May 21.
IBM is also configuring a set of PureFlex machines based on X86 nodes to run WebSphere middleware and DB2 databases, to serve as application platforms. These are called PureApplication systems, and eventually there will be variants based on Power processors and the AIX operating system. There will be four initial configurations of the PureApplication systems, which are all based on the eight-core 2.6GHz Xeon E5 processor from Intel and which all have 6.4 TB of flash storage and 48 TB of disk storage. The configurations will have 96, 192, 384, or 608 cores and 1.5 TB, 3 TB, 6.1 TB, or 9.7 TB of main memory across those X86 nodes. The exact configurations of the software stack were not available at press time, but we can tell you that they will ship to early adopter customers in the middle of next month, with general availability scheduled for July 31. Pricing for the PureApplication systems was not divulged.
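A little arithmetic on those figures implies the node counts behind each configuration, assuming two-socket half-wide Xeon E5 nodes (my assumption; IBM did not break out node counts):

```python
# Implied node counts for the four PureApplication configs, assuming
# two-socket half-wide nodes (an assumption, not an IBM-stated spec)
# with the eight-core Xeon E5, i.e. 16 cores per node.
CORES_PER_NODE = 2 * 8
for cores, mem_tb in [(96, 1.5), (192, 3.0), (384, 6.1), (608, 9.7)]:
    nodes = cores // CORES_PER_NODE
    gb_per_core = mem_tb * 1024 / cores
    print(f"{cores:3d} cores -> {nodes:2d} nodes, ~{gb_per_core:.0f} GB/core")
```

Notably, the memory works out to roughly 16 GB per core across all four configurations, so IBM appears to be scaling the memory linearly with the compute.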
We will be drilling down into the details of these machines in the coming weeks, taking our time and being thorough. I am trying to get my brain wrapped around the "built in expertise" that IBM says these machines have, which automates their deployment and maintenance.
RELATED STORIES
Some Thoughts About IBM's Next Generation Platform
IBM's Next Generation Platform Prepped For Launch
Flex Platform: An IBM System That Goes With The Tech Flow
Big Blue's Software Gurus Rethink Systems
IBM Taps Software Exec For Power Systems Marketing
Q&A With Power Systems Top Brass, Part One
Q&A With Power Systems Top Brass, Part Two
IBM Lays Out Plans for Future Growth and Profits
IBM Puts Power Systems and System z Server Under One Manager
IBM Reorganization Tucks Systems Under Software
Palmisano Says IBM Will Double Up Profits By 2015
Bye Bye System p and i, Hello Power Systems
Why Blade Servers Still Don't Cut It, and How They Might
Why Do Rack Servers Persist When Blade Servers Are Better?