IBM Replaces the HMC with New Systems Director Console
April 18, 2011 Timothy Prickett Morgan
Big Blue is putting the Hypervisor Mangling Controller, er, the Hardware Management Console, which manages the PowerVM hypervisor for Power Systems, out to pasture. The device, which was launched with the Power5-based servers way back in 2004, is a glorified and expensive PC or rack server that runs Linux and the microcode that controls PowerVM and other aspects of the hardware system.
The idea with the HMC is to get the management of the hypervisor off the machines it manages. Earlier incarnations of logical partitioning for OS/400 used what is called a type 2, or hosted, hypervisor, with OS/400 acting as the partition controller and logical machine partitions layered on top of that master OS/400 instance, allowing multiple OS/400 partitions to run on a single box. With the HMC in 2004, the hypervisor, which has gone by a number of names but is now called PowerVM, was laid down on bare metal (a type 1 hypervisor, in the lingo), and control of that hypervisor as well as its guest operating systems was moved outside of the server to another machine. This is called out-of-band systems management, and it is exactly what VMware does with its vCenter console for the ESX Server hypervisor and what Citrix Systems does with its XenCenter console for its XenServer hypervisor, both of which run only on X64-based servers.
The idea may have been good, but the HMC caused a lot of weeping and gnashing of teeth in the early years, and it also represents a single point of failure in the system (just like vCenter and XenCenter do, although VMware has just added vCenter clustering for high availability, following IBM's lead with the HMC, which also allows clustering). That said, the whole point of the HMC is that you could set up OS/400 and i, AIX, and Linux partitions on a server, then shoot the HMC with a howitzer, and nothing bad would happen to the partitions running on the Power box or to the system settings for the server hardware. The HMC is required for configuration, but not for operation.
The problem IBM has is that there are too many different tools for system administrators to use. You need Systems Director to manage Power, mainframe, and X64 servers, and the BladeCenter chassis has its own extra goodies to manage the backplane and the integrated switching in the blade chassis. And if you mixed Power and X64 gear, there were slightly different toolsets, including iSeries Navigator for the OS/400 and i platforms.
Going forward, IBM wants everyone to use one tool, says Steve Silbey, director of product management for the Power Systems line at IBM, and that tool is going to be the Systems Director Management Console, or SDMC for short.
The SDMC V6.7.3 tool, announced last week, is based on IBM Systems Director 6.2 Express Edition and will run on hardware that is very similar to the existing HMC tower PC and rack servers, except that it will have more memory and disk capacity configured on it. It will be able to manage Power Systems rack and tower servers as well as Power-based blades parked in a BladeCenter chassis. Moreover, if you are managing collections of entry Power Systems or Power blades, you can use your own hardware and run the SDMC code on top of an X64 hypervisor. Specifically, IBM is supporting the SDMC tool on VMware's ESXi 4.0 or 4.1 or the KVM hypervisor embedded in Red Hat Enterprise Linux 5.5. Of course, IBM is only supporting this virtualized SDMC on System x server iron, but there is no logical reason why it shouldn't work on any X64 box.
The SDMC will be able to manage up to 48 "small tier" Power-based systems or up to 32 "large tier" machines (these tiers are not explained) and keep track of up to 1,024 total logical partitions running across those physical servers. The SDMC collects information on all aspects of the physical systems and can be used to manage power, firmware, memory dumps and retrievals, and error reporting; it also has the "call home" features of IBM's online support services and is able to report components that need to be replaced in a system and verify when this has been accomplished. The SDMC will also manage the Virtual I/O Server, a baby AIX-based partition with real drivers for disk and tape arrays; other logical machines talk to it to reach those assets without having to support those drivers themselves. All aspects of partitions–micropartitioning, or carving a single CPU core into as many as ten slices, as well as mobility (live migration), suspend and resume, and active memory sharing, among other PowerVM features–can be controlled through the SDMC, as they were with the HMC.
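Those partition-count ceilings are simple arithmetic to sanity-check. Here is a minimal sketch, assuming a 0.1-core minimum processor entitlement per partition (implied by the ten-slices-per-core figure); the function names are illustrative, not part of any IBM tooling:

```python
# Back-of-the-envelope micropartitioning math. The 0.1-core minimum
# entitlement follows from the "ten slices per core" figure; these
# helpers are illustrative assumptions, not an IBM API.

def max_micropartitions(cores: int, min_entitlement: float = 0.1) -> int:
    """Upper bound on partitions if every one gets the minimum slice."""
    # round() guards against floating-point noise in the division
    return round(cores / min_entitlement)

def entitlements_fit(entitlements: list[float], cores: int) -> bool:
    """Do the summed processor entitlements fit on the physical cores?"""
    return sum(entitlements) <= cores

# A 16-core box could host at most 160 minimum-size partitions, though
# a single SDMC tops out at 1,024 partitions across all managed servers.
print(max_micropartitions(16))
print(entitlements_fit([0.5, 1.2, 2.0], 4))
```

In other words, a handful of midrange boxes can exhaust the SDMC's 1,024-partition budget long before they run out of cores to slice.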
The SDMC can be mirrored in an active-active pair for redundancy, or you can use an SDMC and keep your existing HMC as a backup in the event the SDMC fails. You can also cluster two SDMCs together in an active-passive pair if that makes you happy.
Perhaps most importantly, the SDMC can now span Power rack and tower servers as well as Power blades. Up until now, many of the features, like active memory sharing and partition suspend/resume, were only available through the HMC, while the Integrated Virtualization Manager (IVM) console was only useful for creating a small number of Linux partitions on a machine that had AIX or OS/400/IBM i as the primary operating system.
The SDMC code will run on the 7042 rack-mounted HMC hardware at the moment. You have to have at least one Power7-based system under management, but the SDMC can be used to manage Power6, Power6+, and Power7 servers. Power5 and Power5+ servers are not supported with the SDMC, and neither are the I/O drawers that were created for Power5 or Power5+ servers–even if they are attached to Power6 or Power6+ machines. If you run the SDMC as a virtual appliance on top of ESXi or KVM, you need a quad-core Xeon E5630 processor running at 2.53 GHz (or a faster one), 6 GB of main memory, and 500 GB of disk capacity.
The SDMC will be available on May 13. The software costs $237 per server, whether you are running it on physical HMC iron or within a KVM or ESXi guest on a System x server.