Don’t Disable Blocking
September 19, 2007 Ted Holt
Do you ever bother to look at warning messages after a successful program compilation? Most warning messages contain no useful information, but the few that do can make a difference. One message that I am always glad to see in RPG compiler listings is RNF7086.
So what is RNF7086? The text reads, “RPG handles blocking for the file. INFDS is updated only when blocks of data are transferred.” This message is a good indicator that the system is going to handle the transfer of data between a database file and the program’s buffers in an efficient manner. As efficiently as possible? Maybe not, but well enough.
Assume a program that reads a database file sequentially. Each time the program issues a read operation (sometimes called a get operation), the system gives it the data for one record. This behavior suits the program just fine, because the program can only process one record at a time anyway. If the system has to go to disk each time the program reads, the program is left with nothing to do while the slow disk access takes place.
Blocking has long been used to speed up input/output operations. Memory operations are fast; disk access is much slower. Blocking makes a program use more memory in order to reduce the number of disk accesses. Reducing the number of disk accesses makes the program run faster.
Assume the same program with blocking applied. Each time the system goes to disk for data, it brings back more than one record into memory. Maybe it brings back the data of 10, 50, or 100 records. Each time the program reads, the system first looks to see if there are any unprocessed records in memory. If so, it sends data for one record to the program. This is a memory operation, which is fast. But if not, it goes to disk for another group of records, then passes the first one of the group to the program.
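The mechanism the last two paragraphs describe can be sketched in a few lines of Python (an illustration only, not RPG; the `BlockedReader` class, the block size, and the record values are all invented for the example). The reader hands the program one record per read, but only goes to "disk" when its in-memory block is empty:

```python
class BlockedReader:
    """Serve records one at a time, fetching a block from 'disk' as needed."""

    def __init__(self, records, block_size):
        self.records = records        # stands in for the file on disk
        self.block_size = block_size  # records transferred per disk access
        self.buffer = []              # in-memory block of unprocessed records
        self.pos = 0                  # next record position on "disk"
        self.disk_accesses = 0        # how many times we went to disk

    def read(self):
        if not self.buffer and self.pos < len(self.records):
            # Buffer empty: one slow disk access brings back a whole block.
            self.buffer = self.records[self.pos:self.pos + self.block_size]
            self.pos += self.block_size
            self.disk_accesses += 1
        if not self.buffer:
            return None               # end of file
        return self.buffer.pop(0)     # memory operation: fast

records = list(range(1000))

blocked = BlockedReader(records, 100)  # 100 records per disk access
unblocked = BlockedReader(records, 1)  # block size 1: no blocking

while blocked.read() is not None:
    pass
while unblocked.read() is not None:
    pass

print(blocked.disk_accesses)    # 10 disk accesses for 1,000 records
print(unblocked.disk_accesses)  # 1,000 disk accesses
```

Both versions deliver the same 1,000 records to the program, one per read; the only difference is how many trips to disk were needed behind the scenes.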
Occasionally I work on a program in which a programmer has inadvertently disabled blocking. That is, each read to a file retrieves only one record. The programmer disabled blocking by including an unnecessary random-access operation. The operation I most often see is the Set Lower Limit (SETLL) operation in RPG programs. (COBOL programmers will know this operation as START.)
FSomeFile  if   e           k disk
 /free
     setll *loval SomeFile;
     dow '1';
        read SomeFile;
        if %eof();
           leave;
        endif;
        // do more stuff to process the record
     enddo;
The program processes the file sequentially, from top to bottom. That is, the READ op code is a sequential operation. The programmer has disabled blocking by using the unnecessary SETLL op code.
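A Python analogy may make the cost concrete (an analogy only, not RPG; the file name, the 80-byte record layout, and the `CountingRaw` class are invented for the example). The same sequential loop runs twice, once through a buffered reader that fetches blocks of 100 records and once against the raw file, while we count how many read calls actually reach the operating system:

```python
import io
import os
import tempfile

class CountingRaw(io.FileIO):
    """A FileIO that counts how many raw read calls reach the OS."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.raw_reads = 0

    def readinto(self, b):          # used by io.BufferedReader
        self.raw_reads += 1
        return super().readinto(b)

    def read(self, size=-1):        # used when reading the raw file directly
        self.raw_reads += 1
        return super().read(size)

# Build a small test file of 1,000 fixed-length 80-byte "records".
path = os.path.join(tempfile.mkdtemp(), 'somefile.dat')
with open(path, 'wb') as f:
    for i in range(1000):
        f.write(b'%-80d' % i)

# Blocked: the buffer holds 100 records, so most reads are memory operations.
raw = CountingRaw(path, 'r')
reader = io.BufferedReader(raw, buffer_size=80 * 100)
while reader.read(80):
    pass
blocked_reads = raw.raw_reads

# Unblocked: every record costs a separate trip to the operating system.
raw2 = CountingRaw(path, 'r')
while raw2.read(80):
    pass
unblocked_reads = raw2.raw_reads

print(blocked_reads, unblocked_reads)  # far fewer OS reads when blocked
os.remove(path)
```

The program logic is identical in both loops; only the layer underneath changes, which is exactly the situation the stray SETLL creates.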
How much of a gain could he expect if he removed SETLL? It’s hard to say, but I ran a quick unscientific test against a file of almost eleven and a half million records, and the version with SETLL ran 25 percent longer.
So that’s your tip for the day: Don’t disable blocking. Now for another tip: If you don’t need to access a file by key, then don’t. I ran a third test, with the record address type of K removed from the F spec. Run time was about one-fourth that of the blocked, keyed version. The following short table summarizes the relative results of the three runs, taking the blocked, keyed run as 100.

   Run                                     Relative run time
   Keyed, with SETLL (unblocked)           ~125
   Keyed, without SETLL (blocked)          100
   Arrival sequence, no key (blocked)      ~25