How To Read A Program

    January 28, 2009, by Steve Kilner

    When you have to figure out a big, complicated program you’re not familiar with, what do you do? Can you explain the process you go through? If you’re a manager of people who maintain programs, what do you know about how they figure out programs? How are they doing it? Are they doing it efficiently? Correctly?

    Research studies have shown that maintenance programmers spend about half their time trying to figure out the programs they are tasked with modifying. That’s half the maintenance budget. What can be done to improve this process, to increase quality, value delivered, and speed to market?

    Over the last 20 years, a branch of software engineering known as Program Comprehension has been developing quietly in the background, somewhere behind the architecture, language, and OS wars. Arguably it is more important than any of these other activities, as software maintenance represents the majority of costs in an application’s life cycle. By many estimates, more programmers today are engaged in maintenance than in new development. And again, half of what they do is to try to understand existing code. Improving the productivity and quality of that work should be a high priority for IT organizations. The field of Program Comprehension holds valuable information that AS/400/System i organizations can put to use if they begin by understanding some of the basics.

    While Program Comprehension does not divide into neat, hierarchical subject blocks, the following outline gives a reasonable overview of what I will introduce in this article.

    1. Program Modeling (the mental model the programmer builds):

    • Concept assignment
    • Feature location
    • The use and importance of beacons
    • Control flow
    • Data flow
    • Strategies for program modeling
    • Impediments to program modeling

    2. Static Analysis (of source code):

    • Program slicing – backward and forward
    • Key statement analysis
    • Source exploration tools and their features

    3. Dynamic Analysis (of execution trace data):

    • Basic dynamic slicing
    • Simultaneous slicing (of comparative data sets)
    • Dynamic frequency analysis

    1. Program Modeling

    How do programmers go about building a mental model of an unfamiliar program? One of the processes of knowledge acquisition that a maintenance programmer engages in is called “concept assignment.” In concept assignment the programmer takes knowledge he already has about the domain, by which I mean knowledge of the real world, business, and software applications, and attempts to map that knowledge to the source code.

    Mapping domain knowledge to source code usually begins by posing a hypothesis and then trying to prove it. With the knowledge gained from that process, the programmer then poses another hypothesis, and so on, until the driving goal has been reached.

    For example, the programmer may think, “I’m pretty sure this program prints invoices.” The programmer might then look for a printer file in the RPG file specifications, examine its DDS, and then search the RPG code for corresponding write statements to the formats. This could confirm that hypothesis. The programmer might then think, “I believe this is driven by reading the Orders file.” The programmer would then look for that file spec, read the source statements pertaining to that file in the program, and probably also look for printer file output being executed after Orders file input, and so on.

    By following this process the programmer maps business concepts about printing invoices to both some RPG and some DDS code, and has also mapped the concept of driving the invoices from the orders file to specific sections of code. This mapping constitutes new knowledge that this programmer now possesses and can act upon.

    A similar process occurs when a programmer engages in “feature location.” This is usually done in response to a specific maintenance request and the programmer must track down where a specific feature is implemented in the source code. This is concept assignment applied to a particular feature. Features are often represented by source code in multiple locations, possibly in multiple programs and other objects. The programmer must build a mental model that maps the feature’s real-world effect to the various pieces of source code.

    How the programmer goes about the task of finding the relevant source code is a key question. If the programmer does an incomplete or inaccurate job of mapping the concept or feature to the code, then any modifications he or she makes are more likely to have errors. There is wide agreement in the research community that programmers rely heavily on “beacons” to locate relevant source code. A beacon can be any sort of word or phrase in code or comments, such as field or file names, subroutine names, data structures, programming patterns, recognizable coding techniques or styles, and so on. Programmers find beacons by executing searches, by scanning the code for them, and by picking them up through intelligent serendipitous wandering. The ability to find and recognize beacons is key to maintenance productivity and quality. Indeed, studies have shown that the depth of a programmer’s internal beacon repository is a key differentiator between expert and novice programmers.

    Why is this so important? Let me offer another example based on the invoice printing program mentioned earlier. Let’s say the programmer is looking for the printing of invoice detail and finds the following:

    C            WRITE INVDETL
    C            MOVE HDRDATE      LGDATE
     …
    C            WRITE IVLOGF
    

    In this example the programmer recognizes INVDETL as a format in the printer file, probably the one on which invoice detail lines are printed. But what’s IVLOGF? A novice programmer may shrug it off or not even notice it, but an experienced programmer sees the letters “LOG” and may think, “a-ha! Looks like there’s a log file of the print output. If I modify the print output, I’d better modify the log as well.” This may seem like a trivial example to experienced developers, but in software maintenance the ability to recognize relevant beacons is crucial to success.

    A programmer’s beacon repository consists of knowledge about the business, knowledge about the application, knowledge of programming, the programming language, and programming practices both within and without the organization. The larger the repository the greater the ability to recognize relevant–and especially, unexpected–beacons.
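
    To make the search side of this concrete, here is a minimal sketch, in Python rather than RPG, of the kind of brute-force beacon scan a programmer (or a very simple tool) might run across downloaded source members. The directory name, file extension, and beacon terms are invented for illustration.

    from pathlib import Path

    # Hypothetical beacon terms the programmer wants to locate: format names,
    # name fragments such as "LOG", subroutine names, and so on.
    BEACONS = ["INVDETL", "LOG", "PRTDTL"]

    def scan_for_beacons(source_dir: str) -> None:
        """Print every source line that contains one of the beacon terms."""
        for member in sorted(Path(source_dir).glob("*.rpg")):
            for line_no, line in enumerate(member.read_text().splitlines(), start=1):
                hits = [b for b in BEACONS if b in line.upper()]
                if hits:
                    print(f"{member.name}:{line_no:5}  {','.join(hits):12}  {line.rstrip()}")

    # Hypothetical directory holding the source members to scan.
    scan_for_beacons("./qrpgsrc")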

    As the programmer attempts to map domain knowledge to source code, he or she is also constructing mental models of “control flow” and “data flow” for the program.

    The control flow model consists primarily of understanding the sequence of operations that will occur when the program executes. This typically involves knowledge of the driving loops in a program, the subroutines, and key conditions. “There is a loop in the mainline that reads through the order header file. For each record it checks to see if an invoice should be printed, and if so, it calls the PRTHDR subroutine. Then it calls the READDTL subroutine, which calls the PRTDTL subroutine, which prints if the quantity is greater than zero.”

    The data flow model consists of a mental picture of the data coming into the program, the functional transformations that will act upon it, and the output direction of the data. “There are order header and detail records in a file consisting of customer, item, quantity, and price data. Taxes, discounts and final prices are computed and invoice totals are accumulated. Much of the order data and the computed data are output to printed invoices and written to a log file.”

    One technique programmers use to form these models is “chunking”: reading through lines of code and mentally aggregating them into bigger and bigger chunks. For example, this may be an initial chunk:

    C            WRITE INVDETL
    

    This may then be chunked into a significant control block:

    C      ITEMQTY   IFGT *ZERO
     …
    C            WRITE INVDETL
     …
    C            END
    

    And this may be further chunked, for example:

    C      PRTDTL   BEGSR
     …
    C      ITEMQTY   IFGT *ZERO
     …
    C            WRITE INVDETL
     …
    C            END
    C            ENDSR
    

    Each of these chunks, and the relationships among them, now represents new knowledge that the programmer has obtained.

    Programmers employ different strategies when building these mental models depending on their assigned task and their existing knowledge.

    Programmers with no domain knowledge at all (i.e., no business or application knowledge whatsoever) typically follow a “bottom-up” strategy. This is done by reading lines of code and attempting to chunk them upward into new chunks as the programmer discerns meaning. In unfamiliar territory programmers typically first try to construct a control flow model of the program, and as they acquire partial knowledge they attempt to piece together a data flow model. Some programmers approach this situation by using the “read the code for an hour” method. With no existing domain knowledge to attach to, the programmer typically relies on his or her existing knowledge of programming patterns and attempts to map the source code to such a pattern. Hopefully you do not have to frequently rely on programmers with no domain knowledge whatsoever.

    Though the bottom-up strategy is the fall-back technique of programmers with no domain knowledge, it is also used by even very knowledgeable programmers when they cannot figure out how to establish connections to their existing knowledge.

    The most common approach for programmers with some existing domain knowledge is the “top-down” strategy. This is what was described earlier when talking about concept assignment. It begins, for example, with “I believe this program prints invoices.” The programmer then proceeds to form hypotheses and map existing domain knowledge to lines and chunks of code.

    Another aspect to the strategy of comprehending a program is whether the programmer decides to comprehend the entire program or only the portions needed to accomplish the assigned task. Some programs are simply too big to be comprehended from scratch. As Winston Churchill once said about a government planning report, “by its very size, this document defends itself against being read.” I think we’ve all seen some programs like that. The trade-off here, of course, is that the programmer can understand a subset of the program much more quickly, but the risk of mistakes is greater. This is where a large repository of beacons and their effective use is crucial to a successful outcome.

    An important barrier to program comprehension is “delocalization”. This comes into play anytime a programmer is attempting to understand a line of code and has to navigate to another source location to understand something else first. Examples of this would be calling a subroutine, procedure or another program, looking up the fields on a format being read or written, looking up the attributes or text for a given field, and so on. While delocalization has its obvious benefits in programming, it is important to understand that its navigation requirements can be a significant impediment to program comprehension. Anytime the programmer encounters delocalized code, he or she must interrupt his or her train of thought to engage in source code navigation. Typically this is a multi-step process. And, rather unbelievably when you think about it, in many source editors the programmer must engage all over again in navigation just to get back to where they started from.

    Given the limitations of humans’ short-term memory, researchers have shown that these navigations are important contributors to what eventually become defects, as programmers can forget one or more aspects of their mental state as it existed prior to navigation. This is not to say that delocalization should not be used when developing new programs. It is saying that: 1) the program comprehension aspect should be taken into consideration; and 2) the source tools used for maintenance should be written to minimize the navigation effort, and hence, interruptions and defects. Some studies have shown that maintenance programmers spend 25 to 30 percent of their time in navigation (i.e., being mentally interrupted).

    Another important barrier to program comprehension is the use of poor or inconsistent naming conventions. As discussed, programmers rely heavily on beacons, the most important of which are the names of program tokens, e.g., fields, files, subroutines, etc. Imagine, from the earlier examples, that the invoice print format had been named FMT2 instead of INVDETL. The programmer would then recognize the write statement as a relevant beacon only if FMT2 was already known to be the invoice detail format. In other words, without pre-existing application knowledge, the format name can no longer serve as a beacon in itself.

    2. Static Analysis

    The term “static analysis” refers to analyzing the source code or other “static” documentation for a program in an attempt to comprehend it. Looking at the source code in green-screen SEU is probably the most basic sort of static analysis an RPG programmer can do.

    Static analysis is performed for three primary purposes:

    • To help the programmer understand what the program does
    • To help the programmer locate code being sought for maintenance, understanding, or documentation
    • To help the programmer analyze the impact of prospective changes

    Much of what the programmer does while engaged in these processes has been described above. One additional technique that is frequently used is called “program slicing,” of which there are two flavors with distinct purposes: backward slicing and forward slicing. In both cases, slicing is the effort of the programmer to “slice away” sections of code not relevant to the task at hand, thus simplifying and reducing what the programmer must comprehend.

    Backward slicing always begins with a particular statement that the programmer has located and decided is of interest. For example, it may set a variable in a way that the programmer needs to modify. To understand the conditions that lead to the execution of that statement, the programmer works backward from there, finding all possible paths through the code that lead to the execution of that statement. The complete set of all the statements in all the paths leading to that statement is the statement’s backward slice. These are all the statements that possibly affect the statement of interest. The backward slice is in effect a program-within-a-program, and in theory should itself be executable. Most experienced programmers perform backward slicing intuitively to analyze the conditions leading to a given statement.

    Forward slicing also begins with a particular statement of interest, but works forward from that point. This is most often done when the programmer is considering modifying the particular statement, such as “what happens if I change or delete this statement?” Forward slicing involves calculating all the downstream paths through the program starting from the particular statement. The complete set of all those statements on all those paths is the “forward slice” and represents the complete set of statements that must be considered for impact analysis if the statement is changed or deleted. Again, most experienced programmers engage in forward slicing intuitively.
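
    Both kinds of slice are, at bottom, reachability computations over a graph of statement dependencies. The following sketch, a toy example in Python with an invented dependency graph rather than one computed from real RPG source, shows the idea; a real tool would build the graph from control flow and data flow analysis.

    from collections import defaultdict

    # Toy dependency graph: statement number -> statements it directly depends on
    # (through control or data flow). The line numbers and comments are invented.
    depends_on = {
        10: [],        # read order header
        20: [10],      # IF the order should be invoiced
        30: [20],      # EXSR PRTHDR
        40: [20, 10],  # compute line total
        50: [40],      # WRITE INVDETL
        60: [50],      # WRITE IVLOGF
    }

    # Invert the graph once so forward slicing is just as simple.
    depended_on_by = defaultdict(list)
    for stmt, deps in depends_on.items():
        for dep in deps:
            depended_on_by[dep].append(stmt)

    def slice_from(start, edges):
        """Transitive closure from 'start', following the given edge map."""
        seen, work = set(), [start]
        while work:
            stmt = work.pop()
            if stmt not in seen:
                seen.add(stmt)
                work.extend(edges.get(stmt, []))
        return sorted(seen)

    print(slice_from(50, depends_on))       # backward slice of 50: [10, 20, 40, 50]
    print(slice_from(20, depended_on_by))   # forward slice of 20: [20, 30, 40, 50, 60]

    The backward slice of the WRITE at line 50 contains everything that could affect it; the forward slice of the IF at line 20 contains everything that would have to be considered if that statement were changed or deleted.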

    Key statement analysis (KSA) is a technique meant to identify the most important statements in a program as a means of understanding what the program does. KSA has several different algorithms, two of which build on program slicing. I will mention here only backward KSA, which begins by identifying all variables output by the program. For each such variable, the full backward slice is computed from every statement that sets it. All the backward slices for all the variables are then combined and the statements are analyzed for frequency. The statements that occur most frequently across these slices, that is, the statements that affect the most output variables, are considered the program’s key statements. Reviewing these statements should give the programmer a head start in comprehending the program.
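
    As a sketch of that backward-slice flavor of KSA, again in Python over the same kind of invented dependency graph, the frequency count might look like the following; the statements said to set output variables are assumptions made purely for the example.

    from collections import Counter

    # Toy dependency graph: statement number -> statements it directly depends on.
    depends_on = {
        10: [], 20: [10], 30: [20], 40: [20, 10],
        50: [40], 60: [50], 70: [10],
    }

    # Hypothetical statements that set variables output by the program.
    output_setting_stmts = [50, 60, 70]

    def backward_slice(start):
        """All statements that can affect 'start', including 'start' itself."""
        seen, work = set(), [start]
        while work:
            stmt = work.pop()
            if stmt not in seen:
                seen.add(stmt)
                work.extend(depends_on.get(stmt, []))
        return seen

    # Count how often each statement appears across all the backward slices.
    counts = Counter()
    for stmt in output_setting_stmts:
        counts.update(backward_slice(stmt))

    # The statements that turn up most often are the candidate key statements.
    for stmt, freq in counts.most_common():
        print(f"line {stmt}: appears in {freq} of {len(output_setting_stmts)} slices")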

    Most programming languages other than RPG have a number of what are called “source exploration” tools available to support maintenance programmers. Some of the functions these tools provide to support maintenance work include:

    • Views–To reduce the mental cost of analyzing delocalized code, multiple, simultaneous source code views are provided, often with one-click navigation, so the programmer does not have to leave the code being examined or incur overhead in navigating to the other code. This can apply to called subroutines or procedures, file format definitions, field definitions, and so on.
    • Navigation–The tool should be intelligent enough, for example, to navigate directly from an EXSR statement to the corresponding BEGSR, if the programmer requests it, and vice-versa (a small sketch of such a cross-reference appears after this list). A history should be kept, like the history feature in a browser, so the programmer can easily go backward or forward through the statements that have been viewed. There should also be a facility to allow multiple programs to be open and viewed at once.
    • Search–A search feature should be more functional than merely finding the next statement containing the search term. A navigable list of matching statements is more meaningful and useful to programmers.
    • Call graph–One or more views should be provided of the calling structure of a program, in essence, a diagram of the subroutine, procedure, and program calls. This should be navigable with the source code. Additional information that describes the interfaces of the calls is also very useful.
    • Data flow model–A summary of the data input and output by the program should be provided to aid the programmer in forming the data flow model of the program.
    • Backward slicing support–A feature that shows all instances of variable usage throughout the program assists the programmer in narrowing the comprehension task to only the relevant code for the task at hand.
    • Forward slicing support–A feature that shows downstream impact from any point in the program assists the programmer in analyzing the impact of changing the program at that particular point.

    • Key statement analysis–A feature that analyzes the frequency of statements across the backward slices of all output variables, or that counts the statements in the forward slices of every statement in the program.
    • Frequency analysis–This feature gives the programmer a list of all tokens (names, variables, etc.) in the program, typically in descending order of frequency, as an indication of the relative importance of the various tokens. This is one way to gain quick impressions during program comprehension.
    • Visual cues–Many studies have shown that reading comprehension of source code is greatly improved through the use of visual cues for the programmer. Using different text colors and fonts to indicate types or meanings reduces the time and cognitive demands placed on the programmer. Using other visual cues to highlight control blocks or other sections of code (such as subroutines) also reduces the cognitive workload on the programmer.
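
    As a small illustration of the navigation feature mentioned above, here is a minimal sketch in Python of a subroutine cross-reference built from fixed-format source lines. The sample lines and the regular expressions are simplifications for illustration, not a real RPG parser.

    import re

    # Sample fixed-format source lines (invented) containing EXSR calls and
    # BEGSR/ENDSR subroutine definitions.
    SOURCE = [
        "C                     EXSR PRTHDR",
        "C                     EXSR READDTL",
        "C           PRTHDR    BEGSR",
        "C                     ENDSR",
        "C           READDTL   BEGSR",
        "C                     EXSR PRTDTL",
        "C                     ENDSR",
        "C           PRTDTL    BEGSR",
        "C                     ENDSR",
    ]

    defs, calls = {}, {}
    for line_no, line in enumerate(SOURCE, start=1):
        begsr = re.search(r"(\w+)\s+BEGSR", line)
        if begsr:
            defs[begsr.group(1)] = line_no
        exsr = re.search(r"EXSR\s+(\w+)", line)
        if exsr:
            calls.setdefault(exsr.group(1), []).append(line_no)

    # With this index an editor can jump from any EXSR to its BEGSR and back.
    for name, sites in calls.items():
        print(f"{name}: defined at line {defs.get(name)}, called from lines {sites}")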

    3. Dynamic Analysis

    The term “dynamic analysis” refers to analyzing the trace data produced by executing a program in an attempt to comprehend it. While most programmers may be familiar with using trace data to debug a program, it is perhaps less recognized as a means to comprehend what a program does. That is the purpose of dynamic analysis, and with the support of a good tool it can provide a quick and effective means of understanding an unfamiliar program. This is such an effective technique for learning about legacy systems that there is an annual conference on Program Comprehension through Dynamic Analysis. Additionally, a variation of it can be used to assist with feature location, or finding the code that implements a particular feature being sought.

    Basic dynamic slicing is useful in a variety of ways, the most common of which are: 1) to facilitate development of the programmer’s mental model of control flow; and 2) as a means of slicing the source code down to a more workable size.

    By examining the actual execution data with the use of a good visualization tool, the programmer can often quickly grasp the control flow model of a program. If the tool supports visualization of subroutine, procedure, and program calls in particular, it can be very straightforward to develop a quick model of the program’s control flow.

    Simultaneous dynamic slicing is used to locate the code that implements a particular feature. Two sets of trace data are collected: 1) trace data from an execution that does not exercise the feature in question; and 2) trace data from an execution that does exercise the feature. With a good analysis tool, the statements that implement the feature can then be identified by discerning which statements execute only when the test data exercises the feature. In other words, find the differences between the two sets of trace data. This can be a highly effective means of locating feature-related code, especially when it is scattered amongst multiple locations.
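
    The set arithmetic involved is simple enough to show directly. Here is a minimal sketch in Python, with invented trace data, of comparing the two traces to surface the feature-only statements.

    # Simultaneous dynamic slicing as a set difference over two execution traces.
    # Each trace is the list of (program, statement number) pairs that executed;
    # the sample data is invented for illustration.
    trace_without_feature = [("INV001", 10), ("INV001", 20), ("INV001", 50)]
    trace_with_feature = [("INV001", 10), ("INV001", 20), ("INV001", 30),
                          ("INV001", 40), ("INV001", 50)]

    # Statements that executed only when the feature was exercised are the
    # prime candidates for where the feature is implemented.
    feature_candidates = set(trace_with_feature) - set(trace_without_feature)
    for program, stmt in sorted(feature_candidates):
        print(f"{program} line {stmt} executed only in the feature run")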

    Dynamic frequency analysis is a quick means of grasping which variables or other tokens in a program are most important, by analyzing execution trace data and calculating the frequency with which each token appears in actual execution. This can differ substantially from static frequency analysis, which counts token frequency in the source file.
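
    A short sketch, again with invented data, shows how the two views can diverge: a token that dominates the trace may barely appear in the source, and vice versa.

    from collections import Counter

    # Invented example: tokens referenced by executed statements (dynamic view)
    # versus tokens appearing anywhere in the source member (static view).
    executed_tokens = ["ORDQTY", "ORDQTY", "ORDQTY", "INVTOT", "ORDQTY", "INVTOT"]
    source_tokens = ["ORDQTY", "INVTOT", "RTNCOD", "RTNCOD", "RTNCOD"]

    print("dynamic:", Counter(executed_tokens).most_common())  # ORDQTY dominates at run time
    print("static: ", Counter(source_tokens).most_common())    # RTNCOD dominates in the source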

    According to a number of research studies, as much as half of the budget for software maintenance operations is expended on program comprehension. I’m not sure who said it first, possibly Joel Spolsky, but there is a lot of truth in the statement: “It is harder to read code than write it.” What’s more, in many organizations, this activity takes place unseen, unmanaged, with no strategy, no accountability, and little thought about supporting tools, training, and process improvement. IT organizations that focus on this activity and apply themselves to investigating and implementing Program Comprehension solutions can make important gains in the productivity and quality of what may be their single most costly task: software maintenance.

    Steve Kilner is CEO of vLegaci Corp. You can read Steve’s blog at www.kilnerblog.com.


