Server Virtualization and Consolidation Require More Resiliency
March 10, 2008 Bill Hammond
Server consolidation will be a top priority for many IT departments in the coming year. In a recent research study conducted by Gartner, 61 percent of the companies polled were already paring their server count, and 28 percent were planning to do so in the near future. The back story on consolidation is interesting. Life often smacks of irony and the renewed interest in centralization is certainly ironic for those of us who know the history of mainframes and minicomputers.
Starting with IBM’s 1400 series machines in 1960, mainstream computing topologies were centralized. When Datapoint launched its Attached Resource Computer (ARC) system in 1977, a significant new computing paradigm emerged. On paper, this decentralized model allowed organizations to continue extracting value from early investments in hardware and software by simply adding resources to their existing network of systems. Rather than scrapping a reasonably good mainframe or minicomputer whenever capacity ran out, organizations could make incremental and relatively inexpensive enhancements in the form of additional memory and disk capacity.
Over time, server decentralization has mutated into server sprawl. Underutilized hardware litters the floors and racks of enterprise data centers, and large numbers of technicians are needed to maintain all of these systems. Unused and underused software licenses, and the hardware they run on, add up to big bucks and eat into company profits. And further complicating matters, rogue departments have been known to select and implement their own software and only later call on IT to sort out the problems. These factors and others have pushed the argument in favor of IT centralization past the tipping point.
Organizations are now striving to become lean and green and to derive maximum value from their investments in technology, manpower, and energy. Reducing server counts by consolidating user workloads onto fewer systems is swiftly becoming a top-down driven priority for many organizations.
Virtualization is a key component of the consolidation movement, and the System i has been poised to handle virtualization for years. Logical partitions (LPARs) were introduced to the AS/400 platform 10 years ago, well in advance of the present virtual machine partitioning movement on x86 and x64 servers. The platform supports fully dynamic logical partitioning and the ability to divide a single processor among multiple partitions, allowing multiple instances of i5/OS, Linux, and AIX to run on one System i server. New BladeCenter and iSCSI attachment options extend support even further to Linux, Windows Server 2003, and even to desktop Windows XP and Windows Vista platforms. The upside of such virtualization is a fully integrated application environment.
Transitioning to a centralized environment that supports virtual machines is a big undertaking, one that requires systems to be unavailable for several hours in best-case situations, or several days in more complicated ones.
A Single Point of Failure
As is immediately obvious, when the whole of your business runs on one or two systems, a hardware, software, or network failure that results in downtime has a much greater impact on the enterprise. In distributed topologies, a single failed system out of several is certainly going to hurt, but it will only impact the segment of the business it serves.
To enjoy the benefits of server consolidation and minimize the shock of planned and unplanned downtime, organizations can deploy a high availability solution to protect hard and soft assets. Compared to tape backups, vaulting, and hot site backups, recovery with high availability clustering is almost immediate, a consideration that matters greatly when 24×7 access to applications is necessary or when Web-based, market-facing applications are supported. Sometimes you can even use one of your decommissioned servers, and the data center it resides in, as your high availability backup server and disaster recovery site. (This is a good kind of recycling.)
A high availability configuration also allows a consolidated computing environment to be established gradually, without interrupting business, by switching system users from the primary production system to the backup. Application availability is maintained throughout the reengineering process, with the exception of an interval of roughly 20 to 40 minutes that can be scheduled over a weekend or holiday. Even more value can be derived from the high availability approach because it can serve as the data transfer agent in the consolidation process, replicating data from multiple distributed servers to the consolidation point. By contrast, the tapes traditionally used to perform this critical step can fail during the restore process because of normal wear, accidental damage, or environmental issues.
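The replication step described here can be sketched in miniature. The model below is purely illustrative; the server names, journal format, and functions are invented for this example and do not reflect any actual high availability product. The idea is simply that each source server keeps a journal of changes, and those journals are replayed, in order, against the consolidation target.

```python
# Illustrative sketch of journal-based replication from several distributed
# servers to a single consolidation target. All names are hypothetical;
# real products work against database and operating system journals.

from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    seq: int    # per-source sequence number; entries are applied in order
    key: str    # record identifier
    value: str  # new record contents

@dataclass
class SourceServer:
    name: str
    journal: list = field(default_factory=list)

    def write(self, key, value):
        # Every change is recorded in the journal as it is applied locally.
        self.journal.append(JournalEntry(len(self.journal), key, value))

def replicate(sources, target):
    """Apply each source's journal, in sequence order, to the target.

    Keys are prefixed with the source name so consolidated data from
    different servers cannot collide at the consolidation point."""
    for src in sources:
        for entry in sorted(src.journal, key=lambda e: e.seq):
            target[f"{src.name}/{entry.key}"] = entry.value

branch_a = SourceServer("branch_a")
branch_a.write("cust42", "balance=100")
branch_a.write("cust42", "balance=75")   # later journal entry wins

branch_b = SourceServer("branch_b")
branch_b.write("cust42", "balance=900")  # same key on a different server

consolidated = {}
replicate([branch_a, branch_b], consolidated)
print(consolidated)
# {'branch_a/cust42': 'balance=75', 'branch_b/cust42': 'balance=900'}
```

Because the journal preserves ordering, the target always converges to each source's latest state, which is the property that makes replication safer than a one-shot tape restore.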
Finally, workload management is a key facet to maintaining acceptable response times in a consolidated computing environment. When the work of eight servers is performed by one or two, for example, acceptable response times can be tough to deliver. And if the server is accessible to large groups of users over the Web, demand can be unpredictable.
Automatic load balancing features are available in some high availability solutions. While load balancing is not very complicated when users have read-only access, read/write servers are trickier because of contention issues. High availability tools are well suited to keeping primary and backup servers synchronized, which sidesteps these problems.
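The read/write distinction can be shown with a small routing sketch. Everything here is hypothetical and invented for illustration; it is not the interface of any particular high availability product. Writes are pinned to the primary to avoid contention between copies, while reads rotate across all synchronized servers.

```python
# Illustrative read/write-aware load balancer across a primary server and
# its replicated backups. Server names and the class are hypothetical.

import itertools

class ReadWriteBalancer:
    def __init__(self, primary, replicas):
        self.primary = primary
        # Round-robin over primary plus replicas for read traffic.
        self._read_pool = itertools.cycle([primary] + replicas)

    def route(self, operation):
        # Writes must go to the primary to avoid contention between
        # copies; replication then propagates them to the backups.
        if operation == "write":
            return self.primary
        # Reads can be spread across all synchronized servers.
        return next(self._read_pool)

lb = ReadWriteBalancer("primary", ["backup1", "backup2"])
print([lb.route(op) for op in ["read", "write", "read", "read", "write"]])
# ['primary', 'primary', 'backup1', 'backup2', 'primary']
```

The design choice is the one the article describes: because the backups are kept synchronized by replication, they are safe targets for reads, so the extra capacity of the backup servers is not wasted while they wait for a failover.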
A high availability solution that is part of a server virtualization and consolidation effort will require some additional investment, but the benefits of using high availability clustering can be easily justified by the value of providing a simplified transition path and a markedly shorter recovery time should a failure occur.
Bill Hammond directs product marketing efforts for information availability software at Vision Solutions. Hammond joined Vision Solutions in 2003 with over 15 years of experience in product marketing, product management and product development roles in the technology industry.