*NOMAX? No Way!

April 25, 2012

Hey, Ted,
I just read "*NOMAX Does Not Mean Infinite Capacity" and I politely beg to differ. I work at some shops with smaller and older systems with smaller disk capacity (70 GB, 140 GB). Some are already using more than 70 percent of available space. If I read the capacities correctly, a physical file can hold over four billion records. I did some calculations, and I think I'd fill up a system before reaching capacity.
For DDS-described files, I usually set some maximum number of records, since the default of 10,000 seems ridiculously low. Now that you have mentioned SQL-described tables, which I use unless I'm at a shop that uses DDS, I realize that I'm running with *NOMAX. I had better code carefully.
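If you're not sure which of your files are running with *NOMAX, the member description will tell you. A minimal sketch in CL (the library and file names here are hypothetical):

```
/* Display member attributes, including the current record count    */
/* and the size settings. A file created as SIZE(*NOMAX) shows no   */
/* maximum number of records.                                        */
DSPFD FILE(MYLIB/MYTABLE) TYPE(*MBR)
```

Running this against a table created with SQL, rather than CRTPF, is an easy way to confirm the *NOMAX default for yourself.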
Several people responded to the article. The most detailed response came from my colleague Joe Hertvik, for whom I have the highest regard and respect. Here’s what he had to say:
I was just going through your *NOMAX story and thinking about your recommendation to change file sizes to *NOMAX. Here are the things that bug me about this:
1. While IBM does put limits on those file sizes, those limits are very, very large, which means that a rogue job is still going to take up a lot of space before failing. Every once in a while, I see queries go nuts in my shop that I have to cancel.
If you have a production file that got caught in a looping cycle and the file size is set to *NOMAX, you're still going to have millions and millions of records that may be corrupted or need to be deleted before the job eventually errors out with a CPF5272 error. This file may have many logical files sitting over it that are also going to grow extremely large and take up more CPU time and disk rebuilding their access paths. When you get around to purging the excess records, you're then going to have to reorg the file, which takes a lot of time, and the system will then have to rebuild the logical file indexes, which also takes a lot of time. The file won't be available while it's being reorged. It turns into a big mess very quickly.
2. There's also the user effect in files with *NOMAX when a looping job fills up the file with crap records. If allowed to go on too long, the bad data could affect user processing, production, etc. If a critical file fills up, errors out, and the staff has to deal with it, that at least somewhat minimizes the effect on production.
3. A lot of people are running at high disk utilization, especially when they are several years into the upgrade cycle and need more disk space. In the shops I've been in, it's not uncommon for one or more partitions to be running at 80 to 85 percent or more disk utilization. If a rogue job starts filling up disk space, it doesn't take much to push that utilization to 90 percent or more. And that's when the system can crash.
I'm not sure that setting all of your file sizes to *NOMAX would be a good idea. I still hold to the old-school philosophy of calculating a reasonable file size when the file is created, setting the file to that size, and letting all other automatically created files use default sizes. If the file breaches that size, it indicates something is wrong and should be dealt with. Most shops might be asking for trouble if they set their file sizes to *NOMAX, especially when they are dealing with user-created files, such as query output.
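Joe's old-school approach is expressed through the SIZE parameter when the file is created. SIZE takes an initial record count, an increment size, and a maximum number of increments. A sketch, with library, file, and numbers all illustrative:

```
/* Create a physical file that holds 500,000 records initially,     */
/* then grows in 50,000-record increments at most 3 times --        */
/* a hard ceiling of 650,000 records.                                */
CRTPF FILE(MYLIB/ORDERS) SRCFILE(MYLIB/QDDSSRC) SRCMBR(ORDERS) +
      SIZE(500000 50000 3)
```

When a job tries to push past that ceiling, it gets a file-full condition instead of quietly consuming disk, which is exactly the early warning Joe describes.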
Then there’s this question from Ken:
Your tip “*NOMAX Does Not Mean Infinite Capacity” is great fodder for discussion within my group. We would be thrilled if you could provide some of the reasoning or justification for your stance. I’ve worked in shops from both schools of thought and would like to know what tipped the scales toward *NOMAX for you.
Two things tipped the scales, Ken. First was the nuisance of having to answer "file full" error messages because somebody (who could have been me) forgot to change the default. Second was the observation that IBM evidently feels confident enough with *NOMAX to make it the default for SQL tables.
I used the Change Physical File (CHGPF) command to change the member size of an SQL table to a small number, then ran a program with an INSERT in an infinite loop. The system gave me message CPA5303 (Record not added. Member SOMETABLE is full.) Maybe this is the tip I should have published:
SQL tables are created with a member size of *NOMAX. Failure to change the size to something more reasonable could allow a runaway program to fill up the disk.
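Following that tip, the member size of an existing SQL table can be capped after creation with CHGPF, as in Ted's experiment. A sketch (the library and the size numbers are hypothetical; SOMETABLE is the table from the test above):

```
/* Change SOMETABLE from the SQL default of SIZE(*NOMAX) to a cap   */
/* of 100,000 records plus up to 3 increments of 10,000 each.       */
CHGPF FILE(MYLIB/SOMETABLE) SIZE(100000 10000 3)
```

Once the cap is reached, an insert fails with the file-full condition (the CPA5303 inquiry message Ted saw), and an operator can decide whether to extend the member or cancel the runaway job.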
Thanks to everyone who wrote.