Server Virtualization and Consolidation Require More Resiliency

March 10, 2008, by Bill Hammond

Server consolidation will be a top priority for many IT departments in the coming year. In a recent research study conducted by Gartner, 61 percent of the companies polled were already paring their server counts, and 28 percent were planning to do so in the near future. The back story on consolidation is interesting: for those of us who know the history of mainframes and minicomputers, the renewed interest in centralization is certainly ironic.

Starting with IBM's 1400 series machines in 1960, mainstream computing topologies were centralized. When Datapoint launched its Attached Resource Computer (ARC) network in 1977, a significant new computing paradigm emerged. On paper, this new decentralized computing model allowed organizations to continue to extract value from early investments in hardware and software by letting them simply add needed resources to their existing network of systems. Instead of scrapping a reasonably good mainframe or minicomputer whenever it ran out of capacity, organizations could make incremental and relatively inexpensive enhancements in the form of additional memory and disk capacity.

Over time, server decentralization has mutated into server sprawl. Underutilized hardware litters the floors and racks of enterprise data centers, and large numbers of technicians are needed to maintain all of these systems. Unused and underused software licenses, and the hardware they run on, add up to big bucks that eat into company profits. And moving a step further away from simplicity, rogue departments have been known to select and implement their own software, only calling on IT later to sort out the problems. These factors and others have pushed the argument in favor of IT centralization past the tipping point.

    Organizations are now striving to become lean and green and to derive maximum value from their investments in technology, manpower, and energy. Reducing server counts by consolidating user workloads onto fewer systems is swiftly becoming a top-down driven priority for many organizations.

Virtualization is a key component of the consolidation movement, and the System i has been poised to handle virtualization for years. Logical partitions (LPARs) were introduced on the AS/400 platform 10 years ago, well in advance of the present virtual machine partitioning movement on x86 and x64 servers. LPARs support fully dynamic logical partitioning and the ability to divide a single processor into multiple partitions, allowing multiple instances of i5/OS, Linux, and AIX to run on one System i server. New BladeCenter and iSCSI attachment options extend support even further, to Linux, Windows Server 2003, and even desktop Windows XP and Windows Vista. The upside of such virtualization is a fully integrated application environment.
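To make the partitioning arithmetic concrete, here is a minimal sketch in Python of the capacity check a planner might do when carving one system into several LPARs. The partition names, operating system mix, and processing-unit figures are all invented for illustration; real System i configuration is done through IBM's management tools, not code like this.

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str       # illustrative partition name
    os: str         # i5/OS, Linux, or AIX
    units: float    # fractional processing units assigned to this LPAR

def plan_lpars(physical_processors: float, partitions: list[Partition]) -> float:
    """Check that the requested shares fit on the box; return spare capacity."""
    requested = sum(p.units for p in partitions)
    if requested > physical_processors:
        raise ValueError(
            f"over-committed: {requested} units on {physical_processors} processors"
        )
    return physical_processors - requested

# A hypothetical 4-processor System i hosting three OS instances:
spare = plan_lpars(4.0, [
    Partition("PROD", "i5/OS", 2.5),
    Partition("WEB", "Linux", 1.0),
    Partition("DEV", "AIX", 0.25),
])
print(f"{spare:.2f} processing units left for future workloads")
# prints "0.25 processing units left for future workloads"
```

The point of the fractional `units` field is exactly the capability the text describes: a single physical processor can be divided among several partitions rather than dedicated to one.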

Transitioning to a centralized environment that supports virtual machines is a big undertaking, and it requires that systems be unavailable for several hours in the best case, or several days in more complicated situations.

    A Single Point of Failure

    As is immediately obvious, when the whole of your business runs on one or two systems, a hardware, software, or network failure that results in downtime has a much greater impact on the enterprise. In distributed topologies, a single failed system out of several is certainly going to hurt, but it will only impact the segment of the business it serves.

To enjoy the benefits of server consolidation and minimize the shock of planned and unplanned downtime, organizations can deploy a high availability solution to protect hard and soft assets. Compared to tape backups, vaulting, and hot site backups, recovery is almost immediate when high availability clustering is deployed, an important consideration when 24×7 access to applications is necessary or when Web-based, market-facing access to applications is supported. Sometimes you can even use one of your decommissioned servers, and the data center it resides in, as your high availability backup server and disaster recovery site. (This is a good kind of recycling.)

A high availability configuration also allows a consolidated computing environment to be established gradually, without interrupting business, by switching system users from the primary production system to the backup. Application availability is maintained throughout the reengineering process, except for an interval of roughly 20 to 40 minutes that can be scheduled over a weekend or holiday. Even more value can be derived from the high availability approach because it can serve as the data transfer agent during consolidation, replicating data from multiple distributed servers back to the consolidation point. By contrast, the tapes traditionally used to perform this critical step can fail during the restore process because of normal wear, accidental damage, or environmental issues.
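The two-phase flow described above (continuous replication while users keep working, then a short freeze and a final catch-up pass before the switchover) can be sketched in a few lines of toy Python. The server names, record keys, and dictionary-based "replication" are purely illustrative stand-ins, not how any commercial high availability product actually moves data.

```python
def replicate(source: dict, target: dict) -> None:
    """Copy new or changed records from a source server into the consolidation hub."""
    for key, row in source.items():
        if target.get(key) != row:
            target[key] = row

# Phase 1: initial replication while users keep working on the distributed servers.
east = {"cust:1": "Alice"}   # invented sample data on two old servers
west = {"cust:2": "Bob"}
hub: dict = {}
replicate(east, hub)
replicate(west, hub)

# Users are still active, so changes keep arriving on the old servers...
east["cust:1"] = "Alice Q."

# Phase 2: the short scheduled freeze (the 20-to-40-minute window in the text):
# quiesce updates, run one final catch-up pass, then switch users to the hub.
replicate(east, hub)
replicate(west, hub)
assert hub == {"cust:1": "Alice Q.", "cust:2": "Bob"}
```

Because only the final catch-up pass needs a freeze, the outage is a function of the changes made since the last pass, not of the total data volume, which is why the window can be so short.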

    Seeking Balance

    Finally, workload management is a key facet to maintaining acceptable response times in a consolidated computing environment. When the work of eight servers is performed by one or two, for example, acceptable response times can be tough to deliver. And if the server is accessible to large groups of users over the Web, demand can be unpredictable.

Automatic load balancing features are available in some high availability solutions. While load balancing is not very complicated when users have read-only access, read/write servers are trickier because of contention issues. High availability tools that keep the primary and backup servers properly synchronized can sidestep these problems.
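As a simple illustration of that split, the sketch below sends every write to the primary and spreads read-only work across the synchronized machines. This is a generic round-robin policy in Python with invented server names, not a feature of any particular high availability product.

```python
import itertools

class HARouter:
    """Route read-only work across synchronized replicas; send writes to the primary."""

    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        # One simple read policy: round-robin over the primary plus its backups.
        self._reads = itertools.cycle([primary] + replicas)

    def route(self, is_write: bool) -> str:
        # All writes go to a single server so the replicas never see
        # conflicting updates; that is what avoids the contention issues.
        return self.primary if is_write else next(self._reads)

router = HARouter("prod-i", ["backup-i"])
print([router.route(is_write=w) for w in (False, True, False, False)])
# prints "['prod-i', 'prod-i', 'backup-i', 'prod-i']"
```

Directing reads to the backup only works because replication keeps the backup current; with a stale backup, the same policy would serve users old data.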

    A high availability solution that is part of a server virtualization and consolidation effort will require some additional investment, but the benefits of using high availability clustering can be easily justified by the value of providing a simplified transition path and a markedly shorter recovery time should a failure occur.

    Bill Hammond directs product marketing efforts for information availability software at Vision Solutions. Hammond joined Vision Solutions in 2003 with over 15 years of experience in product marketing, product management and product development roles in the technology industry.

    RELATED STORIES

    Emerging Markets and Virtualization Drive Q3 Server Sales

    IBM Takes Its Own Server Consolidation Medicine

    Virtualization, Consolidation Drive Server Sales in Q1

    The X Factor: Virtualization Belongs in the System, Not in the Software

    Is the Adoption Rate of Server Virtualization Technology Over Estimated?

    Windows Consolidation with the System i: Is It Happening?

    The X Factor: Virtual Server Sprawl

    IDC Quantifies the iSeries Payback for Server Consolidation



Volume 17, Number 10 -- March 10, 2008

