Moving Off Big Iron? Be Very Careful, Gartner Says
December 9, 2019 Alex Woodie
IBM i and mainframe professionals who have grown weary of defending their systems against people who want to replace them with more “modern” X86 and cloud platforms found an unlikely ally in the form of Gartner, which earlier this year published a report that cautioned against making rash, emotionally charged technological decisions when it comes to big iron migrations.
“Replacing existing systems because people perceive them to be old can be a costly mistake,” Gartner senior analyst Thomas Klinect and vice president Mike Chuba wrote in a March piece titled Considering Leaving Legacy IBM Platforms? Beware, as Cost Savings May Disappoint, While Risking Quality.
The pressure to modernize older, or “traditional,” systems is growing among technology decision makers, Klinect and Chuba write. “Emotionally charged terms like ‘fragile,’ ‘legacy,’ and ‘technical debt’ lead to the assumption that older so-called legacy platforms and software must be replaced,” they write. “Sometimes modernizing 40-year-old software is the right answer. Yet you need to perform a proper analysis to make the correct decision.”
Expect the drumbeat to migrate or modernize IBM i and mainframe applications to get louder in 2020. Across many industries – but especially those that touch consumers, like retail, financial services, healthcare, and government – executives are feeling immense pressure to be more engaging with their customers.
Organizations are hearing that they must be more innovative in expanding into digital delivery channels, especially mobile, and the big iron systems often get blamed for holding them back. That may be true in some cases, but it may not in others.
Sometimes You Gotta Go
In some cases, an older system is to blame, and clinging to it can be a career-ender. Business requirements change over time, and sometimes a system just doesn’t have the underlying capability to support new requirements.
Consider the case of Vanguard, the 44-year-old Malvern, Pennsylvania, mutual fund company. For the past six years, the financial services firm has been actively executing a migration away from its monolithic mainframe-centric IT system in favor of a microservices-based architecture that heavily relies on pre-packaged AWS cloud services.
“We knew if Vanguard was going to stay competitive in the digital age, we needed to be better at the business of IT,” Vanguard IT executive Jeff Dowds said during an AWS re:Invent keynote last week. “We wanted to accelerate the pace of innovation. We wanted to deliver business value at startup speed.” The cloud was key to that goal, he said.
Vanguard had what Dowds called “a traditional technology stack” that was heavily virtualized. At the center was an MVS mainframe running monolithic COBOL applications with upwards of 50 million lines of code each. Surrounding the core record-keeping applications on the mainframe were many ancillary products.
After deciding against a private cloud and picking AWS as its cloud provider, Vanguard’s first order of business was establishing security via 150 security controls. Then it started moving the microservice-based applications that ran on premises in its application platform as a service (aPaaS) into the cloud, followed by network services and its big data environment.
Most of Vanguard’s data still lived on premises, so the company started migrating databases. It began by replicating data from its core Db2 database on the mainframe to AWS databases using change data capture (CDC) technology. It also added cloud-based key-value stores to satisfy developer demands.
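The mechanics of CDC-style replication can be illustrated with a toy sketch. Real CDC tools tail the source database’s transaction log; this simplified version just replays a list of change events against a target key-value store, and all names and event shapes here are hypothetical, not Vanguard’s or Db2’s actual implementation:

```python
# Toy sketch of change data capture (CDC): replay row-level change
# events from a source log onto a target store. Event format and
# field names are illustrative assumptions.

def apply_change(target, event):
    """Apply one logged change (insert/update/delete) to the replica."""
    op, key, row = event["op"], event["key"], event.get("row")
    if op in ("insert", "update"):
        target[key] = row        # upsert the new row image
    elif op == "delete":
        target.pop(key, None)    # remove the row if present
    return target

# A short stream of changes, as a real log reader might emit them
changes = [
    {"op": "insert", "key": 1, "row": {"acct": "A", "bal": 100}},
    {"op": "update", "key": 1, "row": {"acct": "A", "bal": 150}},
    {"op": "delete", "key": 1},
]

replica = {}
for event in changes:
    apply_change(replica, event)
```

Replaying the log in order keeps the replica consistent with the source without ever querying the source tables directly, which is why CDC is attractive for draining data off a production mainframe.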
The company ultimately decided upon several modernization approaches to wean itself off the mainframe, including emulating (re-hosting), re-engineering (automated and manual re-writing), re-platforming (to Java and Linux), repurchasing, and retiring, according to a 2017 slide deck.
It’s in line with the Strangler Pattern described in Martin Fowler’s 2004 article: “Gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled,” Fowler wrote.
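A minimal sketch of how a Strangler-style facade works: requests for operations that have already been carved out go to new handlers, while everything else falls through to the legacy system until it is fully strangled. The class and operation names below are illustrative assumptions, not anyone’s actual code:

```python
# Hypothetical sketch of the Strangler pattern: a routing facade that
# sends migrated operations to new handlers and the rest to the old
# monolith. Names are illustrative only.

class LegacySystem:
    def handle(self, operation, payload):
        return f"legacy:{operation}"

class StranglerFacade:
    def __init__(self, legacy):
        self.legacy = legacy
        self.migrated = {}  # operation name -> new microservice handler

    def migrate(self, operation, handler):
        """Carve one operation out of the monolith."""
        self.migrated[operation] = handler

    def handle(self, operation, payload):
        handler = self.migrated.get(operation)
        if handler is not None:
            return handler(payload)                    # new path
        return self.legacy.handle(operation, payload)  # old path

facade = StranglerFacade(LegacySystem())
facade.migrate("get_balance", lambda payload: "microservice:get_balance")
```

Each call to `migrate` shrinks the legacy system’s footprint one operation at a time, which is the gradual, multi-year shape of the pattern Fowler describes.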
“We are now starting to drain our microservices from our aPaaS,” Dowds said at re:Invent. “We are accelerating the pace of our monolith decomposition, and this should allow us to decommission our aPaaS in the near future.”
After six years, the goal is within reach. “Here’s our end state, just about a 100 percent cloud-native architecture,” Dowds said. Vanguard reports a 30 percent decrease in compute costs and a 30 percent increase in development speed, numbers any Global 2000 CIO would envy.
High Risk, High Reward?
Vanguard has enjoyed success with its large-scale application migration, even if the last mainframe has yet to be turned off. But tackling this sort of project can be extremely risky, and according to the folks at Gartner, may not be worth it.
“Some companies that abandoned traditional platforms have come to regret the transition later on,” Klinect and Chuba write. “Performing platform due diligence ensures that the decision to move is not predicated on hearsay and popular lore. Whether you choose to abandon your core platform or not, there are ways to lower your risk and assess your true cost savings.”
Companies should weigh the relative benefits of cost and agility, which cannot always be delivered simultaneously in the real world, the Gartner analysts say. They should take inventory to determine whether they have fully leveraged all the acceleration technologies available in their big iron system, and also whether they have sufficient expertise on the platform that would replace the mainframe or IBM i server (usually Windows or Linux, with Linux much more popular in the cloud).
“Some IT leaders believe they are moving to a less expensive hardware environment, with the mistaken belief that one can overpower a tool problem with hardware,” Klinect and Chuba write. “The historically low average utilization level of distributed servers has called into question the efficiency of those platforms when compared with the mainframe environment. Even with virtualization, the utilization of these platforms routinely falls well below the continuous utilization rates of traditional systems. It is not all about clock cycles. Input/output is the typical bottleneck for applications. Here, these traditional systems shine.”
Instead of taking a whole-hog approach to migration, the analysts advocate migrating some of the “lightweight” applications surrounding the core transaction processing system as a way to get a quick win. This has the added benefit of lessening the load on the mainframe or IBM i system, which can increase performance.
In the end, Klinect and Chuba make a compelling case for thinking very hard about migrating off a mainframe or IBM i server. These migrations typically cost tens of millions of dollars, they write, and once completed, can actually leave the company “with more complexity, more technical debt, and more support issues than when they started.”
“The value gained by moving applications from the traditional enterprise platform onto the next ‘bright, shiny thing’ rarely provides an improvement in the business process or the company’s bottom line,” they write. “A great deal of analysis must be performed and each cost accounted for.”
It’s refreshing to hear that kind of candor when it comes to the topic of legacy migration. Too often, these discussions degenerate into emotionally charged arguments full of ad hominem attacks against “old” technology. It’s true that the mainframe and IBM i platforms have stood the test of time, and judging by some of the “new” challenges that backers of modern distributed architectures are facing, they will be around for a while yet.