Take A Progressive Approach To DevOps
October 16, 2023 Jeff Tickner
As Gene Amdahl, the chief architect of the IBM System/360 mainframe, observed, in an insight later codified as Amdahl's Law, any system that implements a process is only as fast as its slowest component. What is true of bottlenecks in human processes is equally true in systems design, and also in that overlapping area known as DevOps – the confluence of application development and system operations.
If you can only be as fast as your slowest bottleneck, then the converse also holds: you can speed up the overall throughput of a workflow or a system by improving the performance of its slowest component. In DevOps, this act of identifying the biggest bottlenecks in the application development and deployment workflow and then working to fix them is called value stream management. It is a new name for a very old idea, applied to a new set of processes and systems.
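That intuition can be put in Amdahl's terms. A minimal illustration in Python (the 40 percent and 10x figures below are hypothetical, chosen only to show the shape of the math):

```python
# Illustrative only: Amdahl's Law applied to a delivery pipeline.
# If a stage takes fraction `p` of total elapsed cycle time and you
# speed that stage up by factor `s`, the whole pipeline speeds up by:
#     speedup = 1 / ((1 - p) + p / s)
def overall_speedup(p: float, s: float) -> float:
    """End-to-end speedup from accelerating one stage by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Automating a testing stage that is 40% of cycle time, making it 10x faster:
print(round(overall_speedup(0.4, 10.0), 2))  # prints 1.56
```

Note the ceiling this implies: even an infinite speedup of a stage that is half your cycle time can never better than double end-to-end throughput, which is exactly why you attack the biggest bottleneck first.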
It is also key to a successful transformation to DevOps from the waterfall development methods and monolithic application code practices that are still prevalent today. The most important idea to carry with you as you start your journey to DevOps is that you can take a progressive approach to the evolution of your coding and operations practices. You don't have to take on the entire change in technology and culture that is DevOps all at once. You do not have to move to VS Code or Jenkins or Jira, and you do not have to jump in one fell swoop to a fully automated, continuous pipeline.
The goal of DevOps, and the reason why we promote a progressive approach, is to reduce disruptions in the development and operations process, and therefore for the business that these teams ultimately support. If it is a tremendous disruption to go to DevOps, then that sort of defeats the purpose, doesn’t it? And hence, we always talk to IBM i customers about a progressive approach.
DevOps is a collaboration between development and operations, and part of that collaboration involves automating some of the manual steps that people are doing today. You do not have to put every process in your workflow into a Jenkins pipeline right away. You can take one aspect of your workflow – whatever is the highest pain point – and automate your current process for that part of the workflow. The beauty of the Arcad DevOps tools that we deliver on the IBM i platform is that they are open, with APIs that you can hook into your current processes. This gives you two benefits. First, you get an immediate return on investment because that process no longer requires manual intervention. Second, any manual intervention – a developer choosing an option from a list or keying in a release number or other key value, say – is an opportunity to introduce an error, which can tremendously reduce the ROI of the development effort. A small mistake in one of these tedious, repeated manual tasks can generate a major cleanup effort (and we have all seen one bring production down at least once in a lifetime).
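To make the hand-keyed release number concrete, here is a minimal sketch of automating that one step. The `vMAJOR.MINOR.PATCH` tag format and the function name are illustrative assumptions, not ARCAD's API; the point is only that a value derived by code cannot be mistyped:

```python
# Hypothetical sketch: replacing a "key in the release number" manual
# step with a value derived from the previous release tag. The tag
# format (vMAJOR.MINOR.PATCH) is an assumption for illustration.
import re

RELEASE_PATTERN = re.compile(r"^v(\d+)\.(\d+)\.(\d+)$")

def next_release(last_release: str) -> str:
    """Derive the next patch release from the previous tag instead of
    asking a developer to type it in (and possibly mistype it)."""
    match = RELEASE_PATTERN.match(last_release)
    if match is None:
        raise ValueError(f"unrecognized release tag: {last_release!r}")
    major, minor, patch = (int(g) for g in match.groups())
    return f"v{major}.{minor}.{patch + 1}"

print(next_release("v2.4.9"))  # prints v2.4.10
```

A script like this would slot into whatever promotion process you already have; the validation step also rejects malformed input instead of letting it flow downstream.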
It is popular, and maybe even a little cool, to implement Git for source control, and a lot of people want to start there. Sometimes, however, source control isn't going to add immediate value. The developers are perfectly happy with how they are controlling the source code, and it is when companies start automating and streamlining the other parts of their development process that source control becomes the answer to their latest bottleneck. Usually that bottleneck is getting code into the automated process they have built, so that development is not shut down for a week while all the source code for a release is gathered by hand.
The lesson is that this progressive approach to the implementation of DevOps itself presents a real opportunity for continuous optimization. This is something that we have learned at ARCAD Software. One important aspect of DevOps is the feedback loop. You take a process that is already in place and you automate it in some fashion. Then you have to ask: Was that actually useful to the overall health of the organization? Or did you just automate a process because you could? So, for instance, we never want a customer to adopt source control because they think they should or because they think it will fix a bad change management process. An unfortunate discovery has been that if you have a bad change management process, going to source control doesn't automatically or magically fix it. You have to look at your change management process, your entire application development flow, and ask: How can I improve this? Where are the real pain points? Where am I getting friction?
A lot of people think that if they go to source control, they can automatically do concurrent development with no issues. Source control certainly streamlines and reduces the friction in concurrent development, but it doesn't solve all of its problems. Yes, it stops you from ever overwriting code, merging changes together instead, which is a big help. But that merge doesn't necessarily result in something that works. We have to remember that Git doesn't understand the RPG or COBOL language, so it doesn't apply any intelligence in the merge; it just merges the code together in a mechanical fashion. Ideally, once that merge happens, it is treated as a source change: your process picks up from there and builds it. But then you have to find out if it will compile, and then get it tested to see that the code is doing what it is supposed to do and not doing anything it is not supposed to do. So, yes, source control means that developers can do concurrent development with more confidence, which gives you more flexibility. That is especially important in the IBM i world, where lots of shops are still dealing with large programs, and where the pain of concurrent development really shows. For example, you may have one big program that does all of your customer file updates – Read, Write, Delete – and if I want to change the Read process, I have to wait until the person changing the Delete process is finished and has gone all the way to production before I can start, because it is one big program. That is a major bottleneck right there. With source control, I can have overlapping development with a higher level of confidence.
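The mechanical nature of that merge is easy to demonstrate. The toy three-way merge below, written in Python for illustration, is a drastic simplification of what Git actually does hunk by hunk, but it captures the point: two edits to a free-form RPG snippet combine "cleanly" even though the result is broken, because one developer narrowed the field to `INT(3)` (a one-byte integer on IBM i) while the other started adding 1000 to it:

```python
def three_way_merge(base, ours, theirs):
    """Toy line-based three-way merge (assumes equal-length files for
    brevity). Take whichever side changed a line; if both sides changed
    the same line differently, flag a conflict. Purely textual: it has
    no idea what RPG or COBOL means."""
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:
            merged.append(o)      # identical on both sides
        elif o == b:
            merged.append(t)      # only "theirs" changed this line
        elif t == b:
            merged.append(o)      # only "ours" changed this line
        else:
            merged.append(f"<<< CONFLICT: {o!r} vs {t!r} >>>")
    return merged

base   = ["DCL-S count INT(10);", "count += 1;",    "DSPLY count;"]
ours   = ["DCL-S count INT(3);",  "count += 1;",    "DSPLY count;"]  # narrowed the field
theirs = ["DCL-S count INT(10);", "count += 1000;", "DSPLY count;"]  # bigger increments

merged = three_way_merge(base, ours, theirs)
# The merge "succeeds" with no conflict markers, but INT(3) is a
# one-byte integer (-128 to 127), so adding 1000 overflows when the
# merged program actually runs. Only a build and a test catch that.
```

That is exactly why an automated build and test step has to sit downstream of every merge: the merge tool's job ends at the text.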
But it’s not magic. Part of making source control work is openness to other tools and the ability to make incremental changes: immediately build new code, get it into test, see the result of the change, and assess whether we are accomplishing something productive. We want to start at the point of pain and optimize that process, and once we do, something else becomes the bottleneck in the workflow. Eventually the developers themselves can be the bottleneck, because they are not feeding their code into this optimized process fast enough, and that is when source control is definitely the answer.
As you begin your DevOps journey, you also need to consider that the most common pain point traditional IBM i shops face is testing. Without a doubt. The second biggest pain point is deployment.
Interestingly, the developers usually aren’t a problem until they can’t feed code into this optimized process fast enough. As a developer, I can generally make enough changes to overwhelm the rest of the process. When that process is streamlined and automated as much as possible, it becomes really compelling, because now I can feed in changes faster. The side effect for a lot of IBM i shops is that the big program is locked up for less time. So concurrent development becomes even more valuable once you have optimized the process.
The other problem we run into is multi-speed development, and this is really endemic on the IBM i platform. You have a lot of small changes – changing a program or a few programs to provide an enhancement or fix a problem – and meanwhile I have to expand a field or add fields to a file. While the development effort for each of these may be similar, the amount of testing you need is massively different, because the impact of a field change is so much wider. So again, we are back to testing being the slowdown. Even when we optimize the testing and apply automation to a field change, there is a ripple effect across all the programs that are impacted, which increases the testing requirements. That is why one of the functions we have taken from the open source world is dynamically creating test environments for large projects. It is actually very popular amongst our customers. We had this capability before we ever looked at source control: we could create a test environment on the IBM i on the fly, so that we could put a large project in there and isolate it.
But then we need a way to bring that large project back into the testing environment with the other changes in process, because there is a danger in having separate test environments for different projects. You don’t want to test the integration of those projects in production, which is what some people end up doing. They have their big-project test environment and their PTF test environment, and both go from there into production. But then they are actually testing the integration in production, which is reckless at best. You need to know that the code and the PTFs work together before you promote to production.
To learn more about a progressive approach to IBM i DevOps:
Jeff Tickner is chief technology officer North America at ARCAD Software.
This content is sponsored by ARCAD Software.