Boadway’s 25-Year Performance Shows No Let Up
March 2, 2020 Alex Woodie
Batch jobs running a little long? Throw some hardware at it. For as long as Mike Boadway can remember, that’s been the default response to most performance issues on the IBM i server. But when today’s fast Power9 processors and flash drives fail to move the performance needle, it may be worth considering Boadway’s approach: tweaking the code and the data instead.
As the CEO of MB Software & Consulting, Boadway makes his living solving other people’s IBM i performance issues. Since founding the company in 1995, he has used his proprietary software to deliver in-depth inspections of his clients’ production IBM i code to identify the causes of performance bottlenecks. His claim to fame is the ability to show the duration of execution by line of code for any software running natively on IBM i or in its PASE runtime, something he says nobody else can offer. Depending on what the inspection finds, Boadway recommends different courses of action to address the problem.
Performance issues on IBM i servers are just as common today as they were when Boadway broke into the AS/400 business 25 years ago. In fact, the performance issues are often worse, he says.
“I have clients today that have 3.5-day batch jobs that need to get done over the weekend, and they just can’t get them done,” Boadway tells IT Jungle. “Some of the biggest companies in the world – I can’t mention their names without permission – can’t figure out how to get it done. I engage with them, run my tools, show duration by line of code and all the other things we do, and that 3.5-day job is not only done in a weekend, it’s done in 3.5 hours.”
That might sound like a tall claim. But according to Boadway, when the root cause of a performance problem lies in the code or the data, removing those roadblocks can have a huge impact and let the application run as quickly as its developers originally intended. It’s hard to say whether those “aha” moments are as satisfying to the developers as they are to Boadway.
“When we show duration by line of code for a long-running batch job, and the developer sees where that 12-hour batch job is spending all of its time, they’re stunned by that one line of code that’s responsible for it,” Boadway says. “We recommend a change to that line of code if they can, and the 12-hour job is done in 20 minutes now. It’s something they’ve been throwing hardware at for years and it’s not solving the problem anymore. Their CPU is 10 percent used, and their job is still running for 10 hours, because it’s not a hardware issue. It’s a code issue.”
There’s no one-size-fits-all answer to addressing software or code issues. If there’s a problem with the code, MB Software & Consulting may be able to help clients identify which line of code is causing the slowdown. But if clients don’t have access to source code, or programmers who are willing to modify it, then there’s not much that can be done on that front.
Boadway is running into more issues related to how data is stored. He says IBM i shops are not sufficiently purging or archiving data from their production files, which causes data to build up and bog down database operations, particularly SQL queries.
“So, 29 years ago, there was a year’s worth of data, and now there’s 30 years of data in the file,” he says. “Over time, if you’re never archiving anything, you still have old data sitting in open transaction files that were designed to contain only current data. The open orders file has 29.5 years of closed orders in it, and the SQL is table scanning, reading every single row to select the one you want, because it was never indexed properly.”
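The table-scan problem Boadway describes isn’t specific to Db2 for i. A minimal sketch of the same effect, using Python and SQLite as a stand-in (the orders table, column names, and data are all hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical "open orders" file that has accumulated decades of closed orders.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
cur.executemany(
    "INSERT INTO orders (status, total) VALUES (?, ?)",
    [("OPEN" if i % 1000 == 0 else "CLOSED", float(i)) for i in range(100_000)],
)

# Without an index, selecting the handful of open orders reads every row.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'OPEN'"
).fetchall()
print(plan_before[0][3])  # plan shows a full-table scan

# An index on the selection column lets the optimizer skip the closed rows.
cur.execute("CREATE INDEX idx_orders_status ON orders (status)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'OPEN'"
).fetchall()
print(plan_after[0][3])  # plan now searches via idx_orders_status
```

The query returns the same rows either way; what changes is how many rows the engine must touch to find them, which is exactly where a 29.5-year backlog of closed orders hurts.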
Some performance issues arise when data is normalized but should be stored in a de-normalized form, and others when data is de-normalized but should really be normalized to fit efficiently into a relational database. Some IBM i shops would benefit tremendously from having a frank discussion about the best way to store their data, and from making architectural changes to accommodate today’s data storage and access patterns.
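The right shape depends on the workload, but the trade-off itself is easy to show. A small sketch in Python with SQLite (schema and names hypothetical): a de-normalized table repeats customer details on every order row, so a change must touch every historical order, while the normalized design changes one row and picks up the fix through a join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# De-normalized: customer details repeated on every order row, so a
# customer address change means rewriting every historical order.
cur.execute("""CREATE TABLE orders_flat (
    order_id INTEGER PRIMARY KEY, cust_name TEXT, cust_city TEXT, total REAL)""")

# Normalized: the customer is stored once; orders reference it by key.
cur.execute("CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY, cust_id INTEGER REFERENCES customers(cust_id),
    total REAL)""")

cur.execute("INSERT INTO customers VALUES (1, 'Acme', 'Toledo')")
cur.executemany("INSERT INTO orders (cust_id, total) VALUES (?, ?)",
                [(1, t) for t in (10.0, 20.0, 30.0)])

# One single-row UPDATE, and every joined order sees the new city.
cur.execute("UPDATE customers SET city = 'Dayton' WHERE cust_id = 1")
rows = cur.execute("""SELECT o.order_id, c.city FROM orders o
                      JOIN customers c ON c.cust_id = o.cust_id""").fetchall()
print(rows)  # [(1, 'Dayton'), (2, 'Dayton'), (3, 'Dayton')]
```

The flip side, which Boadway also sees, is that a heavily joined normalized design can be slower to read than a flat one, which is why the discussion has to start from the actual access patterns.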
The widespread use of SQL on IBM i has also exacerbated some database access issues, Boadway says. While SQL has made it extremely easy for anybody with very basic technical knowledge to fetch data from the Db2 for i database, there is often little thought given to the overall data access patterns and overhead associated with SQL.
“SQL is great for record set processing. It’s not as good as RPG from a performance standpoint for record-level access,” Boadway says. “What was taking 10,000 microseconds in RPG is now taking a million microseconds in SQL. It’s still a fraction of a second, so the human eye can’t see it. But when you’re doing a billion of them inside of your application, inside of a big batch process, every minute – that’s a huge increase in overhead.”
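The overhead Boadway describes is per-statement cost rather than correctness: both approaches produce the same answer, but the row-at-a-time version pays statement dispatch on every key. A minimal illustration in Python with SQLite (table and names hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
cur.executemany("INSERT INTO items (id, price) VALUES (?, ?)",
                [(i, i * 0.5) for i in range(10_000)])

# Row-at-a-time: one SQL statement per key, paying full statement
# overhead (dispatch, optimization, fetch) on every single call.
total_chatty = 0.0
for key in range(10_000):
    row = cur.execute("SELECT price FROM items WHERE id = ?", (key,)).fetchone()
    total_chatty += row[0]

# Set-based: one statement, and the aggregation happens inside the engine.
total_set = cur.execute("SELECT SUM(price) FROM items").fetchone()[0]

print(total_chatty == total_set)  # True: same answer, 10,000x fewer statements
```

Multiply that per-statement cost by the billions of accesses in a large batch run and the “fraction of a second the human eye can’t see” becomes hours.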
Boadway doesn’t advocate a return to record-level access in RPG, but rather taking the time to create a better database design, including creation of indexes that can alleviate some of these performance issues. It’s part and parcel of his approach of tweaking code and data to drive better performance.
Boadway got into the performance gig in the late 1980s, when his employer at the time was migrating a 30-year-old application from the IBM S/390 to the new AS/400 and SSA’s PRMS application. He didn’t have any tools to help him, and had to figure out how to get the data from the (much larger) mainframe to fit into the (much smaller) midrange machine.
“The thing I learned back then was it’s the software and the data that drives performance of the application,” he says. “The hardware is where we store that data and run that code, and you can just keep upgrading that hardware to accommodate that growth in data volumes and the growth in activity and the number of users and transactions coming in and out of your system. But [you also need] to tune the code and tune your data. That’s what I had to do, to manually figure out how to accomplish that.”
That mainframe migration project turned into a career with MB Software & Consulting. Like many midrange ISVs, the company has had its ups and downs. 9/11 hit hard, but Boadway stuck to his guns and has built a successful business. In terms of his software, Boadway does all his own performance data collection and does not rely on IBM PM/400. He last overhauled his core tools in C a decade ago, and today offers access to his tools through a 5250 greenscreen and a Web portal.
It’s not always easy to win over decision-makers in IBM i shops who are accustomed to solving performance problems with hardware upgrades. “It’s a hard sell, and it has been for 25 years to convince potential clients that there’s an alternative to hardware upgrades,” Boadway says.
Ironically, the availability of fast hardware, Power9 processors and solid state drives (SSDs) in particular, is actually making life somewhat easier for Boadway.
“Flash drives help,” he says. “They make the billion I/Os faster. But even after flash drives, there are still application-level I/O bottlenecks. Where are you going to go after flash drives? You could throw more memory at it, but it wasn’t a memory bottleneck to begin with. And now you’ve upgraded CPU, memory, and disk performance, and the job is still running for eight hours.
“Now that they’ve done all the upgrades to the hardware, it’s a lot easier to show them, ‘Hey, I was right, wasn’t I?'” he continues. “It’s not a hardware issue. It’s a software issue or a data issue. It is now time to address the software and data issues because the hardware you just spent all the money on didn’t solve it.”