Flex System Gets New Control Freak, Switches, And Inside V7000 Array
November 26, 2012 Timothy Prickett Morgan
The November 13 Flex System mini-announcement didn’t just feature some new Power7+ p260 server nodes and configurations of the PureFlex infrastructure stacks sporting all of the various server nodes instead of just the x240 Xeon E5 node. IBM also made a few other announcements in systems management, switching, and storage related to the platform.
In announcement letter 212-475, you will see that the Flex System Manager, the control freak for the entire server-storage-network mashup that is PureSystems, has been upgraded to V1.2. And in that update, the software not only speaks to the new Power7+ nodes reported on elsewhere in this issue of the newsletter, but also has support for the promised internalized version of the Storwize V7000 array and a bunch of new network and storage switches that Big Blue rolled out. Flex System Manager runs on one of the server nodes in a single management domain that can span four racks with up to 16 Flex System enclosures, for a total maximum of 223 nodes (not including the one node running the FSM tool).
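That 223-node ceiling falls out of simple bay math: each Flex System enclosure has 14 half-wide node bays, 16 enclosures give 224 bays, and one bay is eaten by the node running FSM itself. A quick sketch of the arithmetic (the 14-bays-per-enclosure figure is the standard Flex System chassis layout; the variable names are mine):

```python
# Maximum managed nodes in a single FSM V1.2 management domain.
BAYS_PER_ENCLOSURE = 14   # half-wide node bays in one Flex System chassis
MAX_ENCLOSURES = 16       # up to 16 enclosures across four racks
FSM_NODES = 1             # one bay is consumed by the FSM appliance itself

total_bays = BAYS_PER_ENCLOSURE * MAX_ENCLOSURES   # 224 bays in the domain
managed_nodes = total_bays - FSM_NODES             # 223 nodes left to manage
print(managed_nodes)
```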
V1.2 of the software not only supports that new iron, but can manage IBM’s homegrown DVS 5000V virtual switch, which can run on hypervisors on the server nodes to provide virtual switching for virtual machines and logical partitions. The update also allows for Fibre Channel over Ethernet (FCoE) management across the entire hardware stack and provides bare-metal provisioning of VMware‘s ESXi and Red Hat‘s KVM hypervisors on x86 iron as well as their respective vSwitch and Open vSwitch virtual switches. (I am going to be talking about software-defined networks and how they affect the IBM i platform in a coming issue of The Four Hundred.) The FSM V1.2 software is available on December 3.
The integrated Storwize V7000 disk array, which can be deployed as an iSCSI NAS or Fibre Channel SAN using the same iron, probably should have come out when the “Project Troy” modular servers were announced in April, but for whatever reason, IBM’s hardware engineers didn’t get the job done. Now, instead of feeding storage nodes with external V7000 arrays with their own power supplies and cabling, there is a variant of the array that has mezzanine cards that plug into the midplane of the Flex System chassis, just like server nodes do, and reaches out through integrated switches to link to the server nodes in a chassis and possibly to other nodes in other enclosures. This integrated V7000 also draws its power from the Flex System chassis and can be managed more directly by the FSM software.
The new integrated Flex System V7000 storage node is a double-wide, double-high node that supports up to two dozen 2.5-inch disk drives. The dual controllers in the unit (they are called canisters, for some reason) can also drive disks in external expansion enclosures that tuck into the Flex System chassis, and up to 240 drives can be configured on one internal V7000. If you cluster multiple units together, which is also an option, you can have up to 960 disks in a single clustered storage image. You can see now why IBM chose the V7000 as the basis of its PureSystem iron. The V7000 supports solid state disks in 200 GB and 400 GB capacities, as well as SAS disk drives spinning at 15K RPM (146 GB and 300 GB), at 10K RPM (300 GB, 600 GB, and 900 GB), and at 7,200 RPM (500 GB and 1 TB).
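The drive ceilings stack multiplicatively: 24 drives inside the node, expansion enclosures take one node to 240 drives, and the 960-disk cluster limit implies a four-unit cluster. A back-of-the-envelope sketch (the four-way cluster count is inferred from 960 divided by 240, not stated outright in the announcement):

```python
INTERNAL_DRIVES = 24        # 2.5-inch drives inside the storage node itself
MAX_DRIVES_PER_NODE = 240   # with external expansion enclosures attached
MAX_CLUSTERED_DRIVES = 960  # ceiling for one clustered storage image

# Implied number of V7000 units in a maximally built-out cluster.
cluster_size = MAX_CLUSTERED_DRIVES // MAX_DRIVES_PER_NODE
print(cluster_size)
```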
The internal V7000 enclosure costs $14,500, and an expansion chassis costs $3,500. Disk and flash drives cost exactly the same in the V7000 as they do for equivalent models in Power Systems servers, from hundreds to thousands of dollars, depending on speed, capacity, and type. The software that makes the Storwize product useful, which you can see in announcement letter 212-385, is not included with this machine–yeah, I know, but they all do this stuff now–and real-time data compression costs $5,500 per V7000, external virtualization support costs $5,500 per machine, and remote mirroring costs $3,000. The base Storwize file system needed for the machine costs $11,000 per box. This is not cheap storage by any stretch of the imagination.
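To put that “not cheap” claim in numbers, here is a tally of the enclosure and software charges listed above for one fully licensed internal V7000, before a single drive goes in. The assumption that a shop buys every option is mine; the list prices are from the announcement letters:

```python
# List prices for one internal Flex System V7000, software fully licensed.
prices = {
    "V7000 enclosure": 14_500,
    "base Storwize file system": 11_000,
    "real-time compression": 5_500,
    "external virtualization": 5_500,
    "remote mirroring": 3_000,
}
total = sum(prices.values())   # before any disk or flash drives are added
print(f"${total:,}")
```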
That leaves the new switch and adapter features for the PureSystem iron.
Feature #ESW2 is a 10 Gigabit Ethernet switch that supports the Fibre Channel over Ethernet (FCoE) protocol, which makes SANs think they are talking over Fibre Channel when they are really doing so over 10GE links. The idea is to converge server and storage traffic onto the same wires and switches in a machine, and to beef up Ethernet so it is lossless, like Fibre Channel and InfiniBand are. (Disk drives really hate it when you drop bits, which plain vanilla Ethernet does.) This switch comes with 14 internal ports (one pointing at each server node in the chassis) and 10 ports used as uplinks to storage arrays or aggregation switches outside the chassis. It can be upgraded with two modules that boost it to as many as 42 internal ports plus a mix of Omni Ports (which support native Fibre Channel for linking to SANs) and 10GE or 40GE uplinks, for a maximum of 64 total ports across the EN4093 switch, as this device is known. This switch costs $20,899, with the port upgrades costing $10,999 for each of the two hops. The EN4093R switch, which is feature #ESW7, doesn’t have the Omni Ports and just does 10GE and 40GE links, with 42 internal ports and 22 external ports. It costs $13,999, plus the same $10,999 for each of two upgrade modules.
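The EN4093 port arithmetic is worth sanity-checking: the base switch exposes 14 internal plus 10 external ports, and a fully upgraded unit hits 42 internal plus 22 external for 64 total, which implies each of the two upgrade modules adds 14 internal ports. How the 12 extra external ports split between the two modules is not spelled out in the announcement, so this sketch only checks the end state:

```python
# EN4093 / EN4093R port counts, base versus fully upgraded.
base_internal, base_external = 14, 10
full_internal, full_external = 42, 22

internal_added = full_internal - base_internal   # 28 ports across two upgrades
per_upgrade_internal = internal_added // 2       # 14 internal ports per module
total_ports = full_internal + full_external      # 64-port ceiling
print(per_upgrade_internal, total_ports)
```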
In addition to these new switches, the Flex System has two new server node adapters. The CN4058 adapter, feature #EC24, is used in conjunction with the switches above if you plan to use FCoE to link to storage. It has eight ports running at 10GE speeds and costs $3,000.
IBM also has a two-port 10GE server adapter that supports the Remote Direct Memory Access (RDMA) protocol on top of Converged Ethernet, which is a funky way of mimicking InfiniBand’s low-latency RDMA capability, letting one server reach into the memory of another without going through the operating system’s network stack. RDMA over Converged Ethernet, or RoCE as it is called, is what gives Ethernet that lower latency, and the CN4132 adapter has two ports and supports RoCE. It costs $2,100.