EvolveWare Makes Progress With RPG Code Modernization Using AI
November 3, 2025 Alex Woodie
IBM i shops that are looking for a partner to help them modernize RPG using AI have a number of companies to choose from, including IBM, which unveiled Project Bob last month. Another company to keep in mind is EvolveWare, which recently released its RPG code understanding tool and has plans to release a code generation product for RPG based on large language models (LLMs) in 2026.
EvolveWare has been treading the code modernization waters since it was founded by chief executive officer Miten Marfatia in Santa Clara, California, back in 2001. The company has primarily been involved in COBOL projects targeting the IBM mainframe, but in recent years it has experienced more demand for modernizing RPG applications running on IBM i.
Earlier this year, the company released support for RPG code documentation in its Intellisys platform. Intellisys currently supports code generation for RPG using its older, traditional machine-learning-based tools. The goal is to support RPG with the new AI-based code generation component of the product in early 2026, Marfatia told IT Jungle in a recent interview.
Early tests using LLMs to generate new code from extracted business logic are promising across all languages, Marfatia says. However, that early success comes with a caveat: certain steps must be taken.
“One of the things we are finding is that as long as our prompts are written well and the metadata that we extract using Intellisys is provided as context, the results are promising,” Marfatia says. “But you need to have context. If the context is not provided along with your prompts, then things go haywire.”
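The approach Marfatia describes, pairing a well-written prompt with extracted metadata as context, can be sketched roughly as follows. This is an illustrative outline only; the function names, metadata fields, and prompt wording are hypothetical and are not EvolveWare's actual Intellisys prompts:

```python
# Illustrative sketch: assemble an LLM prompt that carries extracted
# metadata as context rather than sending a bare instruction.
# All names and wording here are hypothetical, not EvolveWare's API.

def build_prompt(business_rule: str, metadata: dict) -> str:
    """Combine an instruction with extracted metadata so the model
    has context instead of a context-free request."""
    context_lines = "\n".join(f"- {k}: {v}" for k, v in metadata.items())
    return (
        "You are converting legacy RPG business logic to Java.\n"
        f"Context extracted from the source application:\n{context_lines}\n"
        f"Business rule to implement:\n{business_rule}\n"
        "Generate only the Java method that implements this rule."
    )

prompt = build_prompt(
    "If order total exceeds credit limit, place the order on hold.",
    {"file": "ORDHDR", "fields": "ORDTOT, CRDLMT, ORDSTS", "program": "ORD500"},
)
```

The point of the sketch is the structure: without the context block, the model is left to guess at file layouts and field meanings, which is where, in Marfatia's words, things go haywire.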

Source: EvolveWare
EvolveWare is using a mix of proprietary and open source LLMs to generate code within its Intellisys platform. Most of the early adopters are choosing to use open source LLMs that can be installed on-prem, as they don’t want their source code to be uploaded to a proprietary model, such as Grok or ChatGPT, chief technology officer Bruce Kirchner says.
“Each model has its own quirks and nuances,” Kirchner says. “Sometimes, for certain tasks, some models work better than others. And so we are pretty open to doing that testing in-house and figuring out which models work best for the particular application that we have.”
EvolveWare has taken the plain vanilla open source models and done some additional training to suit its needs and the needs of its customers. This work concentrates on the prompts and the context modeling, which is especially important in legacy code modernization projects.
“We’ve learned over a period of the last, I would say six to eight months or more, that the models that we used, when they are used along with the right prompts, as well as the extracted metadata context – or the context that we capture in the extracted metadata – the results are quite promising,” Marfatia says.
There is a lot more work involved in getting a good result from code transformation than just uploading a bunch of source code into an LLM and hitting the go button, Kirchner says.
“A lot of people just think they can throw a big BLOB of code at a large language model and tell it to generate new Java code. They might get something out of it, but they’re not going to get anything that’s really useful,” he says. “A part of the puzzle here is pulling out the pieces that are required and constructing a model in a very distinct way to produce the results that you’re looking for.”
When people migrate from COBOL to Java or C#, they are usually adopting a new application architecture, too, and that must be factored into how the new code is generated. For instance, when breaking up monolithic architectures into newer microservices architectures, the business logic responsible for managing a CICS transaction, for example, should be handled differently than the logic responsible for the user interface layer, Kirchner says.

Source: EvolveWare
“The way we extract metadata from applications, we’re able to pull the pieces of the application that we need to generate the different components, and we can ask the model to generate them in the way that we need them to be generated to work on the front end, the back end, for whatever architecture we’re looking at,” he adds.
Sometimes, getting the right output from the LLM will depend on giving it certain instructions as part of the prompt. Other times, the best result comes from providing examples of what the customer is looking for, Kirchner says. What works with COBOL may not work with RPG. While COBOL code is relatively straightforward, RPG can be more difficult to work with.
“I think we’ll come up with a strategy to make RPG transformation work,” Kirchner says. “But it’s going to be different than our COBOL strategy, just like it’s different for the other types of transformations that we do.”
EvolveWare has created different prompts for each source and target, such as for COBOL-to-Java transformation or for COBOL-to-C#. The prompts are used along with a human-in-the-loop process to avoid hallucinations in the LLM’s code generation, Marfatia says. The company has begun work on RPG prompts, but it hasn’t finalized them, he says.
“We’ve got the first piece done for RPG, the extraction of metadata and the rules,” Marfatia says. “We’ve done some testing on RPG. We’ve not finalized those prompts. They will be finalized early part of next year because we are swamped with some other transformers right now. But we’ve tested it enough to know that it works.”

Some things are different in how EvolveWare modernizes legacy code with AI, but the first part of the process is unchanged: EvolveWare ingests the RPG source code, uses traditional machine learning tools to extract the metadata, and then extracts the business logic from that metadata.
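That sequence, ingest the source, derive metadata, then pull business logic out of the metadata, can be outlined as a simple pipeline. Each stage below is a toy stand-in for Intellisys's actual (proprietary, ML-based) tooling, shown only to make the shape of the process concrete:

```python
# Toy outline of the pipeline described above. Each function is a
# placeholder for the real Intellisys stage, not its implementation.

def ingest(source_text: str) -> list[str]:
    """Stage 1: take in raw source, keeping non-empty lines."""
    return [ln.rstrip() for ln in source_text.splitlines() if ln.strip()]

def extract_metadata(lines: list[str]) -> dict:
    """Stage 2: record facts about the code; the real stage uses
    traditional machine learning, not simple string tests."""
    return {
        "line_count": len(lines),
        "comment_lines": [ln for ln in lines if ln.lstrip().startswith("//")],
        "code_lines": [ln for ln in lines if not ln.lstrip().startswith("//")],
    }

def extract_business_logic(metadata: dict) -> list[str]:
    """Stage 3: pull candidate business rules out of the metadata;
    here, crudely, any conditional statement."""
    return [ln for ln in metadata["code_lines"] if "if " in ln.lower()]

rpg_like = """
// Check credit before release
if ordtot > crdlmt;
  ordsts = 'H';
endif;
"""
logic = extract_business_logic(extract_metadata(ingest(rpg_like)))
```

The business logic that falls out of stage 3 is what gets handed, together with the metadata, to the LLM-based code generation step described next.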
What’s different is what happens next. Instead of using its own older pattern-based code generation tool – which relied on classical machine learning technology – the company is relying on LLMs to generate the new code from the business logic and the metadata.
“It’s not that the results are any better than what we were generating before,” Marfatia says. “But they are faster, meaning if we were to support, for example conversion from RPG to Java, previously writing such a transformer would take us between six to eight months. Today, that transformer can be written in between two to three months.”
AI helps in other ways, too. Many monolithic code bases are filled with process-oriented code that’s specific to the platform but really doesn’t impact the business logic. EvolveWare is leaning on AI to help it identify those strings of process-oriented code so that it can separate them from the morass of spaghetti code.
“From a business rules extraction perspective, the idea is just to get to the business rules and eliminate all that other noise,” Kirchner says. “We can tell the language model what we’re looking for, and it can go out and find those things and tag them for us.”
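The tagging step Kirchner describes, separating platform plumbing from business rules so the noise can be dropped, might look like the following. In EvolveWare's pipeline an LLM does the classifying; the keyword list here is a purely illustrative rule-based stand-in, using a handful of real RPG operation codes:

```python
# Stand-in for the LLM tagging step: mark lines that are platform
# plumbing (screen I/O, file positioning, program control) versus
# candidate business logic. The keyword list is illustrative only.

PLATFORM_NOISE = ("exfmt", "setll", "reade", "chain", "*inlr", "dsply")

def tag_lines(lines: list[str]) -> list[tuple[str, str]]:
    """Label each line 'noise' or 'logic' so noise can be filtered
    out before business rule extraction."""
    tagged = []
    for ln in lines:
        kind = "noise" if any(k in ln.lower() for k in PLATFORM_NOISE) else "logic"
        tagged.append((kind, ln))
    return tagged

sample = [
    "exfmt ORDSCR;",        # screen I/O: platform-specific
    "if ordtot > crdlmt;",  # candidate business rule
    "*inlr = *on;",         # program-termination plumbing
]
tags = tag_lines(sample)
```

An LLM-based version of this step can generalize past a fixed keyword list, which is the advantage Kirchner points to: tell the model what to look for and let it tag the findings.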
EvolveWare is also using AI for test-case generation. It doesn’t necessarily speed up the process, but it does free up engineers to get more work done.
“It can eliminate a lot of that manual work that we used to have to do in the past,” he says. “You can kick it off and let it run overnight and come in the next day and look at the results. So from a speed perspective, what it does is it frees me and other team members up to do what they want.”
We will check in with EvolveWare again in early 2026 to see how it’s progressing with the AI-based code generation for RPG. IBM hasn’t given a timeline for the release of Project Bob, which it unveiled last month as a replacement for watsonx Code Assistant. The field for AI-based code transformation for RPG appears wide open at this point.
RELATED STORIES
Why Modernize Applications? The Reasons Might Surprise You
The Disconnect In Modernization Planning And Execution

