Admin Alert: Six Tips For Managing IBM i Spooled File Storage
September 11, 2013 Timothy Prickett Morgan
While recently reviewing system storage on an IBM i partition, we were shocked to discover that spooled files (SPLFs, pronounced "spliffs") took up over 10 percent of our usable system storage. Based on that experience and what our shop learned cleaning it up, here are six techniques for keeping your spooled file storage under control.
The Big Six For Spooled File Storage Management
There are other techniques you can use, but these will get you started slimming down your SPLF usage. Here’s how each technique works.
Technique #1: Don’t have FNDBIGSPLF? Get FNDBIGSPLF!!!
IBM has a downloadable utility command called Find Big Spooled File (FNDBIGSPLF) that’s valuable for weeding out SPLF hogs. FNDBIGSPLF is a command that shows you which SPLFs in which output queues (OUTQs) are taking up the most storage.
The command is simple to use. It allows you to locate and display all large spooled files on your system that are above a certain storage size, specified in 4K increments.
So if I wanted to see all the SPLFs on my system that were larger than 1000K, I could run FNDBIGSPLF with these parameters:

FNDBIGSPLF MINSPLFSZ(1000)
And FNDBIGSPLF would go through each output queue and list out any spooled files that are larger than 1000K in size (the Minimum Spooled File Size parameter, MINSPLFSZ). You could then review the printout and evaluate which SPLFs should be deleted to recover storage. The printout could also be sent to others for review and action.
If you want to run FNDBIGSPLF interactively without generating a printout, you can run FNDBIGSPLF this way:
FNDBIGSPLF MINSPLFSZ(1000) PRTREPORT(*NO)
And this command will put you into QSHELL (QSH), where you can see your big spooled file listing as it is being generated.
Here’s a sample of what a FNDBIGSPLF screen looks like.
Because it pinpoints only the largest spooled files, FNDBIGSPLF provides valuable info and can be used to find many different spooled file abuses (more on that later).
The other thing FNDBIGSPLF provides is the total disk space your spooled files occupy. No matter what minimum spooled file size value (the MINSPLFSZ parameter) you run it with, the last page of the FNDBIGSPLF report always displays how much disk storage all of the spooled files on the system are consuming. That is how we determined that our spooled files were taking up 10 percent of our system storage, as I mentioned in the introduction. It's a handy way to attach a number to your total spooled file storage.
IBM offers FNDBIGSPLF as a download and it’s incredibly easy to install on your system. I had it up and running on my production box within five minutes. Just go to IBM’s Find Jobs with Big Spooled Files (FNDBIGSPLF) Web site and download the utility.
Note that FNDBIGSPLF is an unsupported IBM utility, so there's no help available if you run into problems. But it works well for identifying system problem areas where spooled file usage is getting out of control.
Technique #2: Check your IBM i cleanup parameters (GO CLEANUP) to delete old job logs and system generated output
For controlling job log spooled files and other system output, it's important to check your IBM i cleanup parameters to see how often the system purges these spooled files. The cleanup parameters can be set up to automatically delete old job logs and system-generated output every night. Issue the following command on an IBM i green screen to review and change your cleanup parameters:

GO CLEANUP
You’ll see a screen that looks something like this.
Take option 1=Change cleanup options and you’ll see this screen.
This screen tells you whether automatic cleanup of these system objects is occurring (Allow automatic cleanup = Y or N) and what time the cleanup runs every day. It also tells you how many days' worth of job logs in the QEZJOBLOG output queue and other system-generated output the system will keep (the Number of days to keep value).
If you find you are not using automatic cleanup or you’re keeping job logs for an excessive amount of time, you can modify the parameters on this screen to keep fewer job logs and system generated output through automatic deletion.
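If you'd rather adjust these values from a command line than through the menu, the Change Cleanup (CHGCLNUP) command covers the same options. Here's a minimal sketch with example values; the keyword names shown are from memory, so prompt the command with F4 to confirm them on your release:

```
/* Allow automatic cleanup, start it at 10 PM, and keep   */
/* job logs and other system output for seven days.       */
/* Keywords recalled from memory -- prompt with F4 first. */
CHGCLNUP ALWCLNUP(*YES) STRTIME('22:00:00') JOBLOG(7)
```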
If there’s a reason you can’t run automatic cleanup daily or you only want to run cleanup on a weekly or monthly basis, see this article for more information on manually running system cleanup.
Technique #3: Check for automatically scheduled jobs that are generating unnecessary spooled files
To cut down on spooled file clutter, you can also use FNDBIGSPLF to find regularly scheduled jobs that are sending out an excessive number of spooled files, or excessively large ones. When we did our spooled file audit using FNDBIGSPLF, we found several situations where our scheduled jobs were working against us.
Once these situations are identified, you can modify the jobs to streamline their output. FNDBIGSPLF may not find many auto-scheduled jobs that are abusive spooled file generators, but it will probably show you a few correctable situations where spooled file abuse is occurring.
Technique #4: The 2.5 million-page SPLF problem and other user situations
It is worth running FNDBIGSPLF on a regular basis, if only to find user-generated situations where large spooled files exist on the system, such as the 2.5 million-page spooled file mentioned in this technique's title.
FNDBIGSPLF can help you find these types of situations and free up some space.
Technique #5: Selectively Clearing User and Application-Specific Output Queues
Years ago, I wrote an article on selectively deleting spooled files from designated output queues. User and application output queues are not automatically cleaned up by the operating system. The solution is to set up your own system for automatically clearing output queues, so they don't fill up with everyday output.
You can either write your own software to do this or you can use a vendor tool. There are several tools on the market that clear output queues, and I highly recommend that you set up a schedule for performing this function. Many of these tools also allow you to delete aged SPLF output, where the spooled file is older than a certain date (deleting only spooled files that are older than 30 days, for example). This is valuable for keeping SPLFs that you might need for only a set amount of time, while deleting the rest.
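If you roll your own, one low-tech approach is to use the IBM i job scheduler to periodically clear output queues whose contents are disposable. Here's a minimal sketch; the queue name QGPL/NIGHTLYRPT is a made-up example, and note that CLROUTQ deletes every spooled file in the queue, so use it only where nothing needs to be retained (age-based deletion still requires a program or a vendor tool):

```
/* Clear a disposable output queue every Sunday at 2 AM.  */
/* QGPL/NIGHTLYRPT is a hypothetical queue -- substitute  */
/* your own library and output queue name.                */
ADDJOBSCDE JOB(CLRNIGHTLY) +
           CMD(CLROUTQ OUTQ(QGPL/NIGHTLYRPT)) +
           FRQ(*WEEKLY) SCDDAY(*SUN) SCDTIME('02:00:00')
```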
Technique #6: Deleted spooled file storage may not be recovered in a day, unless . . .
Because of the way IBM set up the spooled file reclamation function, you will not automatically get your spooled file storage back. Rather, spooled file storage is reclaimed on a schedule that is controlled through the Reclaim Spooled File Storage (QRCLSPLSTG) system value. QRCLSPLSTG specifies the number of days it will take for the system to reclaim spooled file storage, after one or more spooled files have been deleted.
QRCLSPLSTG is set to eight days by default. This means the operating system will hold the storage space formerly occupied by your deleted spooled files for eight days. So if you undertake a massive spooled file clean-up effort, you may not see your storage usage drop until the operating system reclaims the space eight days after you delete your spooled files.
There is a way to get around this. To recover your spooled file storage before its QRCLSPLSTG date, try this technique from an IBM i green screen.
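As I understand it, the workaround is to temporarily change the QRCLSPLSTG system value itself: setting it to *NONE tells the system to reclaim empty spooled file storage immediately, after which you restore the value so normal spooled file behavior resumes. Try this during off-hours first:

```
/* Setting QRCLSPLSTG to *NONE causes the system to       */
/* reclaim deleted spooled file storage immediately.      */
CHGSYSVAL SYSVAL(QRCLSPLSTG) VALUE('*NONE')

/* Restore the default of eight days (or your site's      */
/* prior value) so reclamation goes back to normal.       */
CHGSYSVAL SYSVAL(QRCLSPLSTG) VALUE('8')
```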
This technique should help you get back your deleted spooled file storage without having to wait eight days.
Follow Me On My Blog, On Twitter, And On LinkedIn
Check out my blog at joehertvik.com, where I focus on computer administration and news (especially IBM i); vendor, marketing, and tech writing news and materials; and whatever else I come across.
Joe Hertvik is the owner of Hertvik Business Services, a service company that provides written marketing content and presentation services for the computer industry, including white papers, case studies, and other marketing material. Email Joe for a free quote for any upcoming projects. He also runs a data center for two companies outside Chicago. Joe is a contributing editor for IT Jungle and has written the Admin Alert column since 2002.