Take A Progressive Approach To DevOps

    October 16, 2023 | Jeff Tickner

    As Gene Amdahl, the chief architect of the IBM System/360 mainframe, correctly observed – an insight subsequently codified as Amdahl’s Law – any system that implements any kind of process is only as fast as its slowest component. What is true of bottlenecks in human processes is equally true in systems design, and also in that overlapping area known as DevOps – the confluence of application development and system operations.

    If you can only be as fast as your slowest bottleneck, the converse is also true: you can speed up the overall throughput of a workflow or a system by improving the performance of its slowest component. With DevOps, this act of identifying the portions of the application development and deployment workflow that are the biggest bottlenecks and then working to fix them is called value stream management. It is a new name for a very old idea, applied to a new set of processes and systems.
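
    To put rough numbers on that idea, here is a minimal sketch in Python, using made-up stage timings, of how much more you gain by improving the slowest stage of a delivery workflow than by polishing a stage that is already fast.

        # Minimal sketch with made-up timings: the end-to-end time of a
        # sequential delivery workflow is dominated by its slowest stage, so
        # speeding up that stage pays off far more than polishing a fast one.
        stages = {"code": 2.0, "build": 1.0, "manual_test": 16.0, "deploy": 1.0}  # hours

        def total_time(timings):
            return sum(timings.values())

        baseline = total_time(stages)                                # 20.0 hours
        faster_build = total_time({**stages, "build": 0.5})          # 19.5 hours
        automated_test = total_time({**stages, "manual_test": 4.0})  # 8.0 hours

        print(f"baseline {baseline}h, faster build {faster_build}h, automated testing {automated_test}h")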

    It is also key to a successful transformation from the waterfall development methods and monolithic application code practices that are still prevalent today to DevOps. The most important idea to hold onto as you start your journey to DevOps is that you can take a progressive approach to the evolution of your coding and operations practices. You don’t have to take on the entire change in technology and culture that is DevOps all at once. You don’t have to move to VS Code or Jenkins or Jira, and you don’t have to move to an automated, continuous pipeline in one fell swoop.

    The goal of DevOps, and the reason why we promote a progressive approach, is to reduce disruptions in the development and operations process, and therefore in the business that these teams ultimately support. If it is a tremendous disruption to go to DevOps, then that sort of defeats the purpose, doesn’t it? That is why we always talk to IBM i customers about a progressive approach.

    ARCAD for DevOps – continuous software delivery for IBM i

    DevOps is a collaboration between development and operations, and part of that collaboration involves automating some of the manual steps that people are doing now. You don’t have to put all of the processes in your workflow into a Jenkins pipeline. You can take one aspect of your workflow – whatever is the highest pain point – and automate whatever your current process is for that part of the workflow. The beauty of the ARCAD DevOps tools that we deliver on the IBM i platform is that they are open, and they have APIs that you can hook into your current processes. This gives you two benefits. First, you get an immediate return on investment because manual intervention is no longer required for the process you automated. Second, any time you have a manual intervention – a developer has to choose an option in a list or key in a release number or some other key value, say – that is an opportunity to introduce an error, which can tremendously reduce the ROI of the development effort. A small mistake in one of these tedious, repeated manual tasks can generate a major cleanup effort (and we’ve all seen one bring production down at least once in a lifetime).
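
    As a hedged illustration of what removing one of those manual touch points can look like, the sketch below replaces “a developer keys in the release number” with a script that derives the number from the latest Git tag and posts it to the next step over HTTP. The endpoint URL, tag convention, and token handling are assumptions made up for the example; they are not a description of ARCAD’s APIs.

        # Hypothetical sketch: instead of a developer keying a release number
        # into a screen (one more chance for a typo), derive it from the latest
        # Git tag and hand it to the next step over an HTTP API. The endpoint
        # URL and token are placeholders, not ARCAD's actual interface.
        import json
        import os
        import subprocess
        import urllib.request

        def latest_release_tag() -> str:
            """Return the most recent tag reachable from HEAD, e.g. 'v7.4.12'."""
            return subprocess.run(
                ["git", "describe", "--tags", "--abbrev=0"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()

        def notify_deploy_step(release: str) -> None:
            """POST the release number to a placeholder deployment endpoint."""
            request = urllib.request.Request(
                "https://deploy.example.com/api/releases",  # placeholder URL
                data=json.dumps({"release": release}).encode(),
                headers={
                    "Content-Type": "application/json",
                    "Authorization": "Bearer " + os.environ["DEPLOY_TOKEN"],
                },
            )
            with urllib.request.urlopen(request) as response:
                print(f"deploy step accepted release {release}: HTTP {response.status}")

        if __name__ == "__main__":
            notify_deploy_step(latest_release_tag())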

    It is popular, and maybe even a little cool, to implement Git for source control, and a lot of people want to start there. Sometimes, however, source control isn’t going to add immediate value. The developers are perfectly happy with how they’re controlling the source code today, and it’s when companies start automating and streamlining the other parts of their development process that source control becomes the answer to their latest bottleneck. That bottleneck is usually that they need to get their code into the automated process they have built, so they aren’t shutting down development for a week while someone pulls all of the source code together for a release by hand.

    The lesson is that this progressive approach to the implementation of DevOps itself presents a real opportunity for continuous optimization. This is something that we have learned at ARCAD Software. One important aspect of DevOps is the feedback loop. You take a process that is already in place and you automate it in some fashion. Then you have to ask: Was that actually useful to the overall health of the organization? Or did you just automate a process because you could? So, for instance, we never want a customer to adopt source control because they think they should, or because they think it will fix a bad change management process. It has been an unfortunate discovery for some shops that if you have a bad change management process, going to source control doesn’t automatically or magically fix it. You have to look at your change management process, your entire application development flow, and ask: How can I improve this? Where are the real pain points? Where am I getting friction?

    A lot of people think that if they go to source control, they can automatically do concurrent development with no issues. Source control certainly streamlines and reduces the friction in concurrent development, but it doesn’t solve all of the problems of concurrent development. Yes, it stops you from ever overwriting code, because you merge it instead, and that is a big help. But that merge doesn’t necessarily result in something that works. We have to remember that Git doesn’t understand the RPG or COBOL languages, so it doesn’t use any intelligence in the merge; it just merges the code together in a mechanical fashion. Ideally, once that merge happens, it is considered a source change: your process picks up from there and builds it. But then you have to find out whether it will compile, and then get it tested to see that the code is doing what it is supposed to be doing and not doing something it is not supposed to be doing.

    So, yes, source control means that developers can do concurrent development with more confidence, which gives you more flexibility. That is especially important in the IBM i world, where lots of folks are still dealing with large programs, and that is where the pain of concurrent development comes through. For example, you may have one program that does all of your customer file updates – read, write, delete – and when I want to change the read logic, I have to wait until the developer changing the delete process is finished and his change goes all the way to production before I can change my read process, because it’s one big program. That is a major bottleneck right there. So if I have source control, I can have overlapping development with a higher level of confidence.
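
    As a minimal sketch of what “your process picks up from there and builds it” can look like after a mechanical merge, here is a hypothetical Git post-merge hook written in Python. Git really does run an executable named .git/hooks/post-merge after a successful merge, but the build and test commands below are placeholders, not ARCAD or IBM i specific tooling.

        #!/usr/bin/env python3
        # Hypothetical .git/hooks/post-merge hook: Git merges RPG or COBOL
        # source mechanically, so immediately after the merge we rebuild and
        # retest to catch a merge that does not compile or breaks behavior.
        # The build and test commands are placeholders for your own process.
        import subprocess
        import sys

        def run(step: str, command: list[str]) -> None:
            print(f"[post-merge] {step}: {' '.join(command)}")
            result = subprocess.run(command)
            if result.returncode != 0:
                print(f"[post-merge] {step} failed with exit code {result.returncode}")
                sys.exit(result.returncode)

        if __name__ == "__main__":
            run("build", ["make", "build"])  # placeholder build command
            run("test", ["make", "test"])    # placeholder test command
            print("[post-merge] merged source built and tested cleanly")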

    But it’s not magic. Part of making this SCM work is the openness to other tools and the ability to make incremental changes, immediately build the new code, get it into test, see the result of that change, and assess whether we are accomplishing something productive. We want to start at the point of pain and optimize that process, and once we do, something else becomes the bottleneck in the workflow. Eventually the developers themselves can be the bottleneck, because they can’t feed their code into this optimized process fast enough, and that is when source control is definitely the answer.

    But remember: It’s not magic.

    You also need to consider, as you begin your DevOps journey, that the most common pain point traditional IBM i shops are facing is testing. Without a doubt. The second biggest pain point is deployment.

    Interestingly, the developers usually aren’t a problem until they can’t feed code into this optimized process fast enough. As a developer, I can generally make enough changes to overwhelm the rest of the process. When that process is streamlined and automated as much as possible, it becomes really compelling, because now I can feed in changes faster. The side effect for a lot of IBM i shops is that the big program is locked up for less time. So concurrent development becomes even more valuable now that you have optimized the process.

    The other problem we run into is multi-speed development, and this is really endemic on the IBM i platform. You have a lot of small changes – changing a program or a few programs to provide an enhancement or to fix a problem – and meanwhile, I have to expand a field or add fields to a file. While the development effort for each of these may be similar, the amount of testing you need is massively different, because the impact of a field change is much wider. So again, we are back to testing being the slowdown. And even when we optimize the testing and apply automation to a field change, there is a ripple effect across all of the programs that are impacted, which increases the testing requirements. That’s why one of the functions we have taken from the open source world is dynamically creating test environments for large projects. It’s actually very popular among our customers. We had this capability before we ever looked at source control: we could create a test environment on the IBM i on the fly, so that we could put a large project in there and isolate it.
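
    As an illustration of why a field change ripples outward, the sketch below walks a made-up where-used map – the kind of cross-reference data you would pull from a repository or impact analysis tool – to list every program that would need retesting. The file, field, and program names are invented for the example.

        # Minimal sketch: estimate the retest scope of a field change by walking
        # a hypothetical where-used map. The map is made-up example data; in
        # practice it would come from a cross-reference or repository tool.
        from collections import deque

        # item -> programs that use or call it
        where_used = {
            "CUSTMAST.BALANCE": ["CUSTUPD", "CUSTRPT"],
            "CUSTUPD": ["ORDENTRY", "INVPOST"],
            "CUSTRPT": [],
            "ORDENTRY": [],
            "INVPOST": ["MONTHEND"],
            "MONTHEND": [],
        }

        def retest_scope(changed_item: str) -> set[str]:
            """Return every program reachable from the changed field or file."""
            impacted, queue = set(), deque(where_used.get(changed_item, []))
            while queue:
                program = queue.popleft()
                if program not in impacted:
                    impacted.add(program)
                    queue.extend(where_used.get(program, []))
            return impacted

        print(sorted(retest_scope("CUSTMAST.BALANCE")))
        # ['CUSTRPT', 'CUSTUPD', 'INVPOST', 'MONTHEND', 'ORDENTRY']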

    But then we need to have a way to bring that large project back into the testing environment with the other changes in progress. The danger here is when you have different test environments for testing different projects: you don’t want to test the integration of those projects in production, which is what some people end up doing. They have their big project test environment and they have their PTF test environment, and both go from there into production. But then they are actually testing that integration in production, which is reckless at best. You need to know the code and the PTFs work together before you promote to production.

    To learn more about a progressive approach to IBM i DevOps:

    [Webinar] Power of Automation: Integration brings Transparency

    Jeff Tickner is chief technology officer North America at ARCAD Software.

    This content is sponsored by ARCAD Software.

    RELATED STORIES

    The First Step In DevOps Is Not Tools, But Culture Change

    VS Code Is The Full Stack IDE For IBM i

    Realizing The Promise Of Cross Platform Development With VS Code

    If You Aren’t Automating Testing, You Aren’t Doing DevSecOps

    The Lucky Seven Tips Of IBM i DevSecOps

    Git Is A Whole Lot More Than A Code Repository

    Learning To Drive Fast On The DevOps Roadmap

    Expanding Fields Is A Bigger Pain In The Neck Than You Think

    Value Stream Management: Bringing Lean Manufacturing Techniques To IBM i Development

    Unit Testing Automation Hits Shift Left Instead of Ctrl-Alt-Delete Cash

    It’s Time For An Application Healthcheck

    The New Economy Presents New Opportunities For IBM i

    Creating Web Services APIs Can Be Easy On IBM i

    Jenkins Gets Closer IBM i Hooks, Courtesy Of ARCAD

    DevOps Transformation: Engage Your IBM i Team

    The All-Knowing, Benevolent Dictator Of Code

    Software Change Management Has To Change With The DevOps Times

    Attention Synon Users: You Can Automate Your Move To RPG Free Form And DevOps

    Git Started With GitHub And ARCAD On IBM i

    One Repository To Rule The Source – And Object – Code

    Data Needs To Be Anonymized For Dev And Test

    Getting Progressive About Regression Testing

    Transforming The Art Of Code And The Face Of IBM i






    One thought on “Take A Progressive Approach To DevOps”

    • Paul Houston Harkins says:
      October 18, 2023 at 12:57 pm

      “As Gene Amdahl, the chief architect of the IBM System/360 mainframe, correctly observed and what was subsequently codified as Amdahl’s Law, any kind of system that implements any kind of process is only as fast as its slowest component. What is true of bottlenecks in human processes is equally true in systems design and also in that overlapping area known as DevOps – the confluence of application development and system operations.

      If you can only be as fast as your slowest bottleneck, the converse is also true that you can speed up overall throughput of a workflow or a system by improving the performance of its slowest component. ”

      As IBM i programmers now spend 75 percent of their time, 1,500 hours a year, trying to understand what programs are actually doing by using the primitive and labor-intensive IBM Debug and staring at the source code, it should be obvious that IBM Debug must be replaced immediately.

