Strategic Topics To Think About For 2026, Part 2
March 9, 2026 Philippe Magne
Last month, in the first part of this series, I presented a number of strategic topics that I think the IBM i community needs to think about as we figure out what needs to be done in our application estates in 2026 and beyond. I talked predominantly about pragmatism versus hype with AI, and then about DevSecOps transformation and AI integration into DevSecOps. You can read Part 1 at this link, and doing so is probably a good idea before diving into Part 2.
In Part 2, I want to focus on application security. This is a big theme, and I will address it from three angles: cybersecurity, change processes, and non-regression.
Before I do that, you may be thinking: Why non-regression in a security discussion? My personal view is that non-regression can be seen as a way to secure application changes.
With that out of the way, let’s start with cybersecurity, and specifically with static analysis security tools. I hope everyone is equipped with these because they are part of the basic toolkit. If you develop software, these tools systematically check that you have no vulnerabilities in your code.
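To illustrate what such tools do under the hood, here is a minimal sketch of a rule-based static check. The rule IDs and patterns are hypothetical and purely illustrative; they are not CodeChecker's actual rules:

```python
import re

# Hypothetical rules: each maps a rule ID to a regex flagging a risky pattern.
RULES = {
    "HARDCODED-PASSWORD": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    "SQL-CONCAT": re.compile(r"EXECUTE\s+IMMEDIATE\s+.*\|\|", re.IGNORECASE),
}

def scan(source: str):
    """Return (line_number, rule_id) for every line matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule_id))
    return findings

sample = (
    "dcl-s pwd char(10);\n"
    "password = 'secret123';\n"
    "EXECUTE IMMEDIATE 'SELECT * FROM t WHERE id=' || userInput;"
)
print(scan(sample))  # → [(2, 'HARDCODED-PASSWORD'), (3, 'SQL-CONCAT')]
```

Real tools go far beyond pattern matching, of course, with data-flow analysis and continuously updated rule sets, but the principle of systematically checking every line against a catalog of known vulnerability patterns is the same.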
Security Vulnerabilities
Reports show security vulnerabilities in code are increasing amid faster AI-driven development, with high-severity flaws surging 36 percent year-over-year. On IBM i, our CodeChecker solution and associated service help you check source code continuously. We monitor vulnerabilities declared online and can update you with all corresponding rules if you are subscribed to this service.
A trend we have observed is that some jurisdictions – two in Asia in particular, Malaysia and Hong Kong – have made static analysis security tools mandatory at the government level. That creates market momentum.
Our online CodeChecker portal delivers new rules to users and has now reached a strong cruising speed.
Of course, a key (non-IBM i) player in this space is SonarQube, with which we are compatible. We are not at all in opposition. For organizations using SonarQube at the enterprise level, we bring our deep expertise in IBM i. With CodeChecker, we remain strictly focused on the so-called “legacy” world: our predilection, IBM i, of course, but soon also the neighboring IBM z niche.
Data Leaks
Another key cybersecurity issue is data leaks. Let’s remember that the world over, there is a regulatory framework for securing against data leaks. I was surprised to see that even in 2025 the data protection market was sluggish, despite the many high-profile incidents reported. That is a shame. According to the 2025 Annual Data Breach Report, the number of data compromises in 2025 was 3,322 – a 5 percent increase compared to 2024, and a staggering 79 percent jump over five years!
Starting out some years ago from specific demands within our customer base, we developed our multi-DBMS solution DOT Anonymizer to address precisely this challenge.
Companies need to be more aware that there is real responsibility around data leaks. Because behind that, there is the reality of phishing. I’m speaking as a user: My personal telecom company had data stolen, and I received a phishing message with my banking details written in plain text. Thankfully, I reacted correctly, otherwise I would have been trapped like many other unfortunate victims.
So please, CIOs and executive teams: Put more emphasis on this domain. People are tired of having their personal data stolen.
Of course, the AI revolution and Big Data have intensified concern about sensitive data exposure. Organizations now experience hundreds of incidents per month in which employees send sensitive information to AI tools, including regulated personal or financial data.
And at the same time, AI training needs much more data – often very large and diverse datasets. To protect users, this training data must be anonymized before use. This is already boosting demand for solutions like DOT Anonymizer, which secure data with multiple irreversible techniques.
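The “irreversible” part is what matters. Here is a minimal sketch of two such techniques, masking and salted one-way hashing. The function names, salt, and record format are illustrative assumptions, not DOT Anonymizer’s actual methods:

```python
import hashlib

def mask_email(email: str) -> str:
    """Replace the local part with asterisks, keeping the domain for realism."""
    local, _, domain = email.partition("@")
    return "*" * len(local) + "@" + domain

def pseudonymize(value: str, salt: str) -> str:
    """Salted one-way hash: the same input always yields the same token,
    so referential consistency is preserved, but the original value
    cannot be recovered from the token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

record = {"name": "Alice Martin", "email": "alice@example.com"}
anonymized = {
    "name": pseudonymize(record["name"], salt="train-2026"),
    "email": mask_email(record["email"]),
}
print(anonymized["email"])  # → *****@example.com
```

Note the design trade-off: masking destroys the value entirely, while salted hashing keeps joins and lookups working across tables, which is often what AI training pipelines need.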
On this very topic, we have an interesting announcement for 2026: we are releasing the DOT Anonymizer “Magic Button.” It is a simple system that takes all the PDF documents you wish to share, anonymizes their contents, and returns them cleanly formatted. Beyond its value in a generative AI context, you can also use the Magic Button to anonymize on the fly whenever you have a document to share with someone who does not need to know all the details.
Segregation Of Duties
On the subject of the application change process, I want to stress something: In the hurry to DevOps transformation, the IT world has tended to ignore that there is a real regulatory framework behind change management. Now with some level of DevOps maturity, these ‘duties’ are coming back into focus. Separation of roles and responsibilities has existed for 25 years or more. Here at ARCAD, this was our main argument for a long time. Then suddenly with DevOps, companies sidelined the topic, but it remains very real: the person who develops is not the person who should deploy to production, and certainly not the person who tests. This has been true forever. And it is why you need code reviews and, above all, multiple validation processes.
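The principle can even be enforced mechanically as a pipeline gate. A minimal sketch, assuming a hypothetical change record that names who held each duty (the role names and format are my own, not any specific product’s):

```python
def check_segregation(change: dict) -> list:
    """Return a violation message for each pair of duties held by the same person."""
    violations = []
    roles = ["developer", "tester", "deployer"]
    for i, a in enumerate(roles):
        for b in roles[i + 1:]:
            if change.get(a) and change.get(a) == change.get(b):
                violations.append(f"{a} and {b} are the same person: {change[a]}")
    return violations

change = {"developer": "jdoe", "tester": "asmith", "deployer": "jdoe"}
for v in check_segregation(change):
    print("BLOCKED:", v)  # → BLOCKED: developer and deployer are the same person: jdoe
```

A deployment tool would run a check like this before promoting to production and refuse the promotion if the list is non-empty, turning a regulatory requirement into an automated gate rather than a manual policy.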
Non-Regression
This brings us to our last security topic: non-regression. Think of non-regression as the “firewall” for application changes. Let me explain.
Legacy applications are increasingly critical. If you have even a small bug inside, the side effects can be enormous for operations. You need guardrails so that as you increase your rate of change, you remain protected. Only automated regression testing provides that level of protection.
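In practice, the “guardrail” is a set of reference outcomes captured from the version in production and replayed after every change. A minimal sketch of the idea, with an illustrative business rule and hand-picked reference values (not tied to how Verifier actually works):

```python
# A hypothetical business rule we do not want to break by accident.
def invoice_total(lines, vat_rate=0.20):
    """Sum quantity * price over the invoice lines and apply VAT."""
    subtotal = sum(qty * price for qty, price in lines)
    return round(subtotal * (1 + vat_rate), 2)

# Reference cases recorded from the version currently in production.
REFERENCE = [
    ([(2, 10.0), (1, 5.5)], 30.6),
    ([], 0.0),
]

def run_regression():
    """Replay every reference case; return the cases whose output changed."""
    return [(inp, invoice_total(inp), expected)
            for inp, expected in REFERENCE
            if invoice_total(inp) != expected]

print("regression OK" if not run_regression() else run_regression())
```

The point is that the reference set, not a human tester, decides whether behavior changed, which is what makes it safe to raise the rate of change.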
The need is even stronger now in the era of AI-driven development, where code is generated at a whole new cadence, piling pressure on both QA and Operations teams.
That is why the Verifier product exists in the ARCAD toolbox. We continue to invest in it. It was originally released in 2003 and has proven itself with many customers who have thanked us for the value they obtained and the mistakes they avoided in their change processes. Each time, it can prevent hundreds of thousands of euros from being lost. What we are doing now is moving toward a lighter client and better integration with third-party products you may already use, especially for web applications.
The Cloud
That brings me to the last thing I want to bring up: The Cloud.
As with AI, we need pragmatism before dogma. We lived through the Amazon Web Services and Microsoft Azure storyline that everyone would move to the cloud and everything would be fine. Now we are in a much more nuanced reality. There are not many organizations that are fully “all-in” on the cloud. The key concept people are now embracing is hybrid cloud. Everyone acknowledges that not everything will end up in the cloud. Some will be cloud, some will remain legacy, and legacy itself often remains on-premises. That is not a failure; it is a strategic choice.
We can see how systems are structured: A highly protected, highly secured back end with enormous data processing capacity, and a digital front end that is more flexible and modern and consumes the data from that back end.
All of this means there is no longer much of a battle against “legacy.” Those who predicted legacy’s death are gone themselves. Now we must deal with the reality of hybridization, which will make everyone happy — while avoiding siloing, which is still a strong reality.
To avoid siloing, we have the remedy: a product called DROPS. Many of you already know it, and it continues to evolve with cloud technologies – especially orchestration of Kubernetes environments. DROPS has a universal positioning, across IBM i, Windows, and Linux, hybrid and multi-cloud, and now z/OS, as part of our initiative to enter the mainframe world. DROPS spans all development platforms, and its key differentiator is its ability to treat the legacy world in a highly integrated way. Multi-platform and inter-dependent artifacts are deployed synchronously and monitored from one single, auditable point of control. That goes a long way towards a true enterprise-level DevOps pipeline.
What we are building at ARCAD is modernization and mastering critical systems with DevOps adapted to legacy environments. And to bring it all the way back to AI, we integrate AI responsibly and in a controlled way. For us, AI is obviously ultra-strategic. In our strategic plan, the goal is that 80 percent of our tools will integrate AI by default.
There are many potential productivity gains. For example, in data anonymization, AI now automatically creates anonymization scripts. And conversely, for knowledge of existing systems, we are making our proven, deterministic logic available from the latest AI platforms, so that users can benefit from our 34 years of technology, with more accuracy and relevance for their specific challenges.
We have a long-term vision, not an opportunistic one. What is a bit unfortunate is that people talk a lot about AI, and many use it just because it is a buzz topic. We are focused on concrete use cases with a controlled approach.
That is why, ultimately, we speak more of evolution than revolution. As one of my colleagues said, why do we talk about artificial intelligence when, in fact, the field is built on data – and data is very “real”? I found that point of view very interesting. So yes, it is an evolution – stronger than what we have known before. The last big revolution was the Internet. Now we are at another step. But overall, we are still in evolution rather than revolution.
And I insist again on pragmatism, because pragmatism is what allows us to absorb such an evolution. As I have mentioned several times, the key word is hybrid: hybrid infrastructures, hybrid applications, hybrid everywhere, and hybrid between deterministic and probabilistic approaches.
Register for the webinar: Trends for 2026 – ARCAD Software’s Strategic Vision
Philippe Magne is chairman and chief executive officer of ARCAD Software.
This content is sponsored by ARCAD Software.
RELATED STORIES
Strategic Topics To Think About For 2026, Part 1
Three Things For IBM i Shops To Consider About DevSecOps
VS Code Will Be The Heart Of The Modern IBM i Platform
Using AI To Derive Application Intelligence And Drive Modernization
ARCAD Discover: Global Application Analysis With An AI Interface
How To Have The Wisdom Of Experts Woven Into Your Code
Untangling Legacy Spaghetti Code To Cook Up Microservices
DevOps Means Using The Tools You Already Have Better
Hybrid Release Management Means Creating An Application Schema
Take A Progressive Approach To DevOps
The First Step In DevOps Is Not Tools, But Culture Change
VS Code Is The Full Stack IDE For IBM i
Realizing The Promise Of Cross Platform Development With VS Code
If You Aren’t Automating Testing, You Aren’t Doing DevSecOps
The Lucky Seven Tips Of IBM i DevSecOps
Git Is A Whole Lot More Than A Code Repository
Learning To Drive Fast On The DevOps Roadmap
Expanding Fields Is A Bigger Pain In The Neck Than You Think
Value Stream Management: Bringing Lean Manufacturing Techniques To IBM i Development
Unit Testing Automation Hits Shift Left Instead of Ctrl-Alt-Delete Cash
It’s Time For An Application Healthcheck
The New Economy Presents New Opportunities For IBM i
Creating Web Services APIs Can Be Easy On IBM i
Jenkins Gets Closer IBM i Hooks, Courtesy Of ARCAD
DevOps Transformation: Engage Your IBM i Team
The All-Knowing, Benevolent Dictator Of Code
Software Change Management Has To Change With The DevOps Times
Attention Synon Users: You Can Automate Your Move To RPG Free Form And DevOps
Git Started With GitHub And ARCAD On IBM i
One Repository To Rule The Source – And Object – Code
Data Needs To Be Anonymized For Dev And Test

