In Praise Of One-Off Tools

    December 17, 2014, by Ted Holt

    This is the last issue of Four Hundred Guru for 2014, and in the last issue of a year I try to write about something unusual, something different from the routine stuff we usually run in this august publication. I worked on several interesting projects in 2014, but the one I want to talk about today will seem really retrograde to you. I wrote a bunch of System/36 RPG II programs. How I wrote them is the real story.

    In early 2014, a friend of mine emailed a request. A client of his, who still runs an S/36, was going to migrate to a GUI-based package on a modern platform. The client needed RPG programs that would read their data files and write CSV (comma-separated values) files, which they would be able to import into the new software. They needed one program per data file. That could run into a lot of programs. My friend had just been diagnosed with a serious disease and would not be able to support the transition. Would I be able to write the conversion programs for him?
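The job each generated program performs is mechanical: slice a fixed-width record into fields and write them out comma-separated. A rough sketch of that job in Python (not the author's RPG; the field layout here is a made-up example):

```python
import csv
import io

# Hypothetical field layout of the kind an RPG I-spec describes:
# (field name, start position, length) -- illustrative only.
LAYOUT = [("CUSTNO", 0, 5), ("NAME", 5, 20), ("BALANCE", 25, 9)]

def record_to_row(record):
    """Slice one fixed-width record into trimmed field values."""
    return [record[start:start + length].strip()
            for _, start, length in LAYOUT]

def to_csv(records):
    """Write a header row plus one CSV row per fixed-width record."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([name for name, _, _ in LAYOUT])
    for rec in records:
        writer.writerow(record_to_row(rec))
    return buf.getvalue()
```

Every conversion program is this same loop with a different layout, which is what makes the family of programs such a natural target for generation.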

    My first reaction was that I would not be able to help. I was already too busy with a family, a full-time job, and this august publication. There was no way I could take on more work.

    But as I thought about it, I realized that I could fulfill my friend’s request. I didn’t need to write a lot of conversion programs. I only had to write one RPG II conversion program and one RPG IV program to write that RPG II program. That is to say, with the proper tool, I could churn out the conversion programs.

    My RPG IV program read two files: one containing RPG II F and I specifications, and another containing all other required information, such as the output record length, the name of the menu from which the program would run, and the menu option number.

    It’s not hard to write a program that writes source code. People use such programs every day. Many of the tools that build GUI interfaces for IBM i applications write PHP, JavaScript, CSS, and HTML source code. EGL generates Java and COBOL. SDA and RLU generate DDS. These are only a few examples.

    There are two basic approaches to generating source code. The one that usually makes the most sense (to me, at least) is to build a template that contains place-holders, which the generator replaces with actual values. For an example of this, read what Susan Gantner wrote about RSE and snippets.
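In Python terms, the template approach looks something like this (a minimal sketch; the skeleton statements and place-holder names are invented for illustration):

```python
from string import Template

# A skeleton source member with place-holders ($infile, $outfile,
# $reclen) that the generator fills in. The statements are invented.
SKELETON = Template(
    "dcl-f $infile disk usage(*input);\n"
    "dcl-f $outfile disk($reclen) usage(*output);\n"
)

def generate(values):
    """Replace every place-holder in the skeleton with an actual value."""
    return SKELETON.substitute(values)
```

Template.substitute() raises KeyError for any place-holder left unfilled, which is a handy safety net in a generator.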

    The other way is to embed the source statements in output commands. At first glance, this may appear foolish, but in cases where much of the generated code does not fit a template, it can be more effective. It’s the approach I used in this project.
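Sketched in Python (statement shapes and function names invented), the embed approach puts the source statements inside small output routines, one per statement type:

```python
# Each helper embeds the layout of one statement type; the generated
# lines accumulate in a list and are joined at the end.
def write_f_spec(lines, filename, reclen):
    """Emit a file declaration."""
    lines.append(f"dcl-f {filename} disk({reclen}) usage(*output);")

def write_eval(lines, target, expression):
    """Emit an assignment statement."""
    lines.append(f"eval {target} = {expression};")

def generate_program():
    lines = []
    write_f_spec(lines, "CUSTCSV", 256)
    write_eval(lines, "count", "count + 1")
    return "\n".join(lines)
```

The payoff is that conditional logic in the generator can decide, statement by statement, what to emit, which is exactly what a template struggles with.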

    RPG IV makes easy work of it. Create a data structure (I use qualified template data structures) and one or more subprocedures for each type of output.

    Here’s a program that illustrates what I’m talking about.

    ctl-opt actgrp(*new) option(*srcstmt: *nodebugio);
    
    dcl-f  ToLibr   disk(92)   usage(*output);
    
    dcl-ds  CSpec_t  qualified  template;
               Sequence                  char(5);
               FormType                  char(1);
               LevelInd                  char(2);
               CondInd                   char(9);
               Factor1                   char(10);
               OpCode                    char(5);
               Factor2                   char(10);
               Result                    char(6);
               Length                    char(3);
               DecPos                    char(1);
               HalfAdjust                char(1);
               ResultingInd              char(6);
               Comment                   char(21);
    end-ds;
    
    dcl-ds  SourceRec_t    len(92)   qualified  template;
      Sequence    zoned(6);
      Date        char(6);
      Data        char(80);
    end-ds;
    
    *inlr = *on;
    WriteCSet ('SETON': 'LR');
    WriteC (*blank: ' 02' : 'SIZE': 'MULT ': '12' : 'XTND':
            *blank: *blank: *blank: *blank: *blank);
    WriteCNoInd (*blank : 'Z-ADD': '1': '#O': '5': '0': *blank);
    return;
    
    
    dcl-proc WriteCNoInd;
       dcl-pi *n;
          inFactor1   like(CSpec_t.Factor1)    const;
          inOpCode    like(CSpec_t.OpCode)     const;
          inFactor2   like(CSpec_t.Factor2)    const;
          inResult    like(CSpec_t.Result)     const;
          inLength    like(CSpec_t.Length)     const;
          inDecPos    like(CSpec_t.DecPos)     const;
          inHalfAdj   like(CSpec_t.HalfAdjust) const;
       end-pi;
    
       WriteC (*blanks: *blanks: inFactor1: inOpCode: inFactor2:
               inResult: inLength: inDecPos: inHalfAdj:
               *blanks: *blanks);
    end-proc;
    
    
    dcl-proc WriteCSet;
       dcl-pi *n;
          inOpCode        like(CSpec_t.OpCode)        const;
          inResultingInd  like(CSpec_t.ResultingInd)  const;
       end-pi;
    
       WriteC (*blank: *blank: *blank: inOpCode: *blank:
               *blank: *blank: *blank: *blank: inResultingInd: *blank);
    end-proc;
    
    
    dcl-proc WriteC;
       dcl-pi *n;
          inLevelInd     like(CSpec_t.LevelInd)      const;
          inCondInd      like(CSpec_t.CondInd)       const;
          inFactor1      like(CSpec_t.Factor1)       const;
          inOpCode       like(CSpec_t.OpCode)        const;
          inFactor2      like(CSpec_t.Factor2)       const;
          inResult       like(CSpec_t.Result)        const;
          inLength       like(CSpec_t.Length)        const;
          inDecPos       like(CSpec_t.DecPos)        const;
          inHalfAdj      like(CSpec_t.HalfAdjust)    const;
          inResultingInd like(CSpec_t.ResultingInd)  const;
          inComment      like(CSpec_t.Comment)       const;
       end-pi;
    
       dcl-ds  CSpec   likeds(CSpec_t);
    
             CSpec.FormType     = 'C';
             CSpec.LevelInd     = inLevelInd;
             CSpec.CondInd      = inCondInd;
             CSpec.Factor1      = inFactor1;
             CSpec.OpCode       = inOpCode;
             CSpec.Factor2      = inFactor2;
             CSpec.Result       = inResult;
       evalr CSpec.Length       = %trim(inLength);
             CSpec.DecPos       = inDecPos;
             CSpec.HalfAdjust   = inHalfAdj;
             CSpec.ResultingInd = inResultingInd;
             CSpec.Comment      = inComment;
       WriteSrc (CSpec);
    end-proc;
    
    
    dcl-proc  WriteSrc;
       dcl-pi *n;
          inString     char(80)    const;
       end-pi;
       dcl-ds  SrcRec    likeds(SourceRec_t);
    
       dcl-s   SequenceNumber   packed(6)     static;
       SequenceNumber += 1;
       SrcRec.Sequence = SequenceNumber;
       SrcRec.Data = inString;
       write ToLibr SrcRec;
    
    end-proc;
    

    Notice the template for the C specifications, data structure CSpec_t. Notice the three subprocedures that build C specs: WriteC can handle any C spec at all. WriteCNoInd can write C specs that do not use conditioning or resulting indicators. WriteCSet handles SETON and SETOF operations. Obviously my program was much longer, but this is the basic idea.

    Also, I should point out that this example passes only literals into the subprocedure calls, but in the real application, many subprocedure calls contained variable data: file name, field name, field length, decimal precision, etc.

    Here’s the generated source code.

    000001           C                     SETON                     LR
    000002           C   02      SIZE      MULT 12        XTND
    000003           C                     Z-ADD1         #O      50
    

    As with most projects, requirements changed along the way. The client discovered more data files that were needed in CSV format, and the generated programs had to be tweaked a few times. Since I did not write each conversion program individually, dealing with changes was easy. I added F and I specs to my file of specifications and general information to the other file, then called my program to regenerate all the conversion programs in a matter of seconds.

    In his classic work of software engineering, Code Complete, Steve McConnell wrote:

    Suppose you’re given five hours to do the job and you have to make a choice:

    1. Do the job comfortably in five hours, or

    2. Spend four hours and 45 minutes feverishly building a tool to do the job, and then have the tool do the job in 15 minutes.

    Most good programmers would choose the first option one time out of a million and the second option in every other case.

    This is not the first time I’ve written programs to write source code, but I have to say it was the most rewarding.

    • The client got what he needed.
    • The project was finished on time, even though I only worked on it some nights and weekends.
    • My friend got to take medical treatments without having to worry about the client.

    Here’s wishing you success with your projects in 2015. Don’t hesitate to build a tool if you need it, even if you only need it once.

    RELATED STORY

    Decisions, Decisions: Templates Or Snippets?




Volume 14, Number 27 -- December 17, 2014