Cisco’s California Dream: One Vendor to Supply It All
March 23, 2009 Timothy Prickett Morgan
As rumored, router and switch maker Cisco Systems did indeed jump with both feet into the server market last Monday with the launch of its “California” Unified Computing System. And as I told you last week, those of us in the AS/400 community have a wry grin inasmuch as Cisco is taking a couple pages out of the System/38 and AS/400 playbooks to bring a unified network and server platform together with an integrated system management software stack.
Then again, if you are like me, you are probably more than a little annoyed that IBM had not long since provided an integrated switch fabric for Power Systems, making all the links between servers, storage, and the outside world into one single set of infrastructure. If you have a long memory, this, of course, was exactly what the InfiniBand specification was supposed to be, and it is interesting to me that what Cisco is doing is twisting 10 Gigabit Ethernet to support Fibre Channel disk protocols (used to link servers to storage area networks) as well as to link servers to each other through switches and therefore to networks of end user devices. If IBM had stuck to its guns and done InfiniBand end-to-end with Power5 or Power6 machines, it might have been the trendsetter.
They say a picture is worth about 1,000 words, so let me save both you and me some blather:
Got all that? Cool. My work here is done. . . .
But seriously, folks. Let’s take the California system from the top. Right on top of the rack will sit the UCS 6100 Series Fabric Interconnect, which is a fancy way of saying a glorified Cisco Nexus 5000 switch with some more brains so it can run some extra software called the UCS Manager, which manages the switches, the network adapters, the blade servers, and other hardware comprising the California System. The Nexus 5000 implements what Cisco calls a unified fabric, which means the switch can carry normal TCP/IP traffic over Ethernet as well as a tweaked version of the Fibre Channel SAN protocol over Ethernet. That collapses two networks down into one. (Like InfiniBand was supposed to do, you’ll remember.) Cisco is selling two different variants of this switch, a 20-port 10 Gigabit Ethernet model and a 40-port model.
The UCS 2100 Series Fabric Extenders, which provide up to four 10 Gigabit Ethernet links between the UCS 6100 switch and each blade chassis, extend that fabric to the blades, as the name suggests. They sit in the UCS 5100 Blade Server Chassis, which is also where the California blade servers live. Rather than do vertical blades like IBM, Hewlett-Packard, Dell, and Sun Microsystems do, Cisco has gone horizontal. The company has full-width and half-width blades, which are known as the UCS B Series Blade Servers. (Like the AS/400, there was no A Series at launch time; in IBM’s case, back in 1988, that was because Burroughs, one half of Unisys, had been selling mainframes under the A Series name since 1984.) Cisco has admitted that it will be using Intel’s forthcoming “Nehalem EP” Xeon processors in the B Series blades, but has not provided the feeds and speeds of the blades themselves. The Nehalems sport on-chip memory controllers and the QuickPath Interconnect, which does away with the frontside bus architecture for Xeon processors and provides up to four times the memory bandwidth as well as point-to-point connections between processors, memory, and peripherals.
What Cisco has said is that the B Series blades will include a custom memory expansion chip (an ASIC, or application specific integrated circuit, to be precise) that will allow the Nehalems used in the Cisco blades to address four times as much main memory as standard Nehalem motherboards. A regular two-socket Nehalem board has 96 GB of main memory across six DDR3 memory slots (three per socket), so that implies Cisco can do 384 GB of memory on a two-socket B Series blade. That assumes the use of 16 GB DDR3 DIMMs in 24 memory slots. It could be that Cisco can cram that much memory onto a half-width blade, but my gut tells me that the full-width blade is the one with all these memory slots and the half-width one will have half as many slots, or up to 192 GB of memory. But I am just guessing. Cisco could also, later in the year, do a “Nehalem EX” four-socket blade in the full-width form factor. If it can do that, Cisco will be able to cram 1.5 TB of main memory into a four-socket box. With main memory being a key ingredient, if not the key ingredient, to performance in virtualized server environments, Cisco is banking that this extra memory support will drive sales.
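For those keeping score at home, the memory math works out like this. A quick sanity check in Python; the slot counts and the 16 GB DIMM size are the assumptions stated above, not Cisco-confirmed specs:

```python
GB_PER_DIMM = 16                    # assuming 16 GB DDR3 DIMMs

# Standard two-socket Nehalem EP board: six slots, three per socket
standard_slots = 6
standard_gb = standard_slots * GB_PER_DIMM    # 96 GB

# Cisco's memory expansion ASIC quadruples the addressable memory
ucs_slots = 4 * standard_slots                # 24 slots
ucs_gb = ucs_slots * GB_PER_DIMM              # 384 GB on a two-socket blade

# If the half-width blade gets half the slots, as I am guessing
half_width_gb = ucs_gb // 2                   # 192 GB

print(standard_gb, ucs_gb, half_width_gb)     # 96 384 192
```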
And so, of course, will Cisco’s enthusiastic support of VMware’s ESX Server hypervisor and related tools in the forthcoming vSphere suite, due perhaps in May. VMware is cooking up a whole slew of server management goodies with its next iteration of products. Cisco and VMware have even created a virtual switch, called the Nexus 1000V, which runs in an ESX Server VM partition and which virtualizes the links between the physical blade servers and the physical switch extenders. By talking only to the virtual switch instead of a physical one, a VM can move from one physical blade to another without breaking its network links. (This is an issue right now with VMs and logical partitions that span multiple machines.) Cisco is OEMing its own version of the future VMware vSphere software stack and will sell it as part of a complete, closed system.
The California blade servers will also come with UCS network adapters, which come as mezzanine cards that plug into the blades and that have different performance characteristics. Cisco says that it will have one network adapter card that is tuned for ESX Server virtualized environments, another one tuned for legacy network protocols (presumably that means IPv4) and legacy software drivers, and yet another one for high-performance networking (presumably that means IPv6).
So, when you add it all up, a single 40-port fabric interconnect can talk to 40 separate California chassis, each of which can have a maximum of eight half-width blade servers, for a total of 320 servers. With two quad-core Nehalem EPs per blade, that works out to 2,560 cores, all in a single management domain.
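The scale-out arithmetic can be checked the same way. A sketch; the chassis, blade, socket, and core counts are those given above:

```python
chassis = 40              # one chassis per port on the 40-port UCS 6100
blades_per_chassis = 8    # half-width B Series blades per UCS 5100 chassis
sockets_per_blade = 2     # two-socket blades
cores_per_socket = 4      # quad-core "Nehalem EP" Xeons

servers = chassis * blades_per_chassis                  # 320 servers
cores = servers * sockets_per_blade * cores_per_socket  # 2,560 cores

print(servers, cores)     # 320 2560
```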
Oh, one more thing: As rumored, Cisco has tapped BMC Software to get an OEM version of its BladeLogic operating system and application management software, which does server and application provisioning, patching, and monitoring. The UCS Manager does the hardware management and BladeLogic does the software management, and the two have been tightly integrated to act as a single whole in the California system.
As I said last week, the UCS stack does not have support for Power Systems and PowerVM logical partitions, or mainframes and LPARs, or Hewlett-Packard Integrity machines and Integrity VMs, or Sun Microsystems Sparc T servers and their logical domains. And the storage for the product is not unified, either. The California blades will sport hard disks or flash-based solid state disks for local storage, but external storage comes from EMC or NetApp. Cisco is supporting Windows and Linux on this machine, but not Solaris as far as I know.
The California system is in beta testing at 10 customer sites now and is in production at both Cisco’s and Intel’s own IT operations. The product is expected to ship around the end of the second quarter, which is when I expect the kinks will be worked out of the vSphere tools after a May launch from VMware.
Cisco throws California virt-server gauntlet (The Register)
Dell, HP, Sun, IBM unmoved by Cisco blades (The Register)
IBM not worried about Cisco blades (The Register)
Cisco ‘California’ blade server launch imminent? (The Register)