Admin Alert: The 4 GB Access Path Size Time Bomb

    November 6, 2013 Joe Hertvik

    Many IBM i shops run enterprise software originally created more than 10 years ago. While this allows you to run older applications on newer hardware, those older apps can also carry file settings that are no longer suitable for today’s processing volumes. This week, let’s look at one older file parameter that, if not changed, can stop application processing dead in its tracks: the 4 GB access path size time bomb.

    Time Bomb, What Time Bomb?

    The time bomb I’m referring to is the Access path size (ACCPTHSIZ) parameter in some application files. In older versions of the OS/400 operating system (the precursor to the modern IBM i OS), the ACCPTHSIZ parameter was set to *MAX4GB by default. With *MAX4GB files, the maximum file access path size can be no larger than 4 gigabytes (4 GB). And for many files that were delivered with third-party application software, the ACCPTHSIZ was automatically set to *MAX4GB.

    The problem occurs when a *MAX4GB file has been in use for several years, and its access path size approaches 4 GB. When a *MAX4GB physical or logical file access path size hits the 4 GB limit, the file will stop accepting updates and it will crash your application programs with the dreaded MCH2804 error message.

    MCH2804 - Tried to go larger than storage limit for object &1
    

    Your file will not accept updates until the access path size is modified to *MAX1TB (1 Terabyte) with either a Change Physical File (CHGPF) or a Change Logical File (CHGLF) command. And the system will immediately rebuild the object’s file index after you change the access path size, which can take a very long time and hold up production even further.

    The problem is that unless you hunt for *MAX4GB files and change them before it’s too late, you won’t know this problem exists until it crashes your application. And chances are good that any file whose access path size approaches or exceeds 4 GB is probably seeing a lot of usage, which can stop processing when you rebuild the path to a larger size.

    The rest of this article will show you how to find these files and change 4 GB files to a much larger access path size of 1 TB, which is more suitable for files in the big data era.

    How To Detect And Change 4 GB Access Path Size Files

    Here are the steps you need to take to audit your system for 4 GB access path files and change them to a 1 TB access path size.

    1. Run the Display File Description (DSPFD) command to gather file information.
    2. Use SQL statements to discover which files have 4 GB access paths.
    3. Determine whether you need to increase the access path size (ACCPTHSIZ) parameter for any given file.
    4. Increase the file’s access path size to 1 TB, if necessary.

    Let’s look at each of these steps in detail.

    Step 1: Run the Display File Description (DSPFD) command to gather file information.

    You can use the Display File Description command to gather file attribute information on all physical and logical files on your system. Because of the way the command works, you need to run the command twice to create two separate output files containing: 1) the file attributes for the physical files on your system; and 2) the file attributes for the logical files on your system.

    Here’s the command to gather the file attributes for your system’s physical files.

    DSPFD FILE(*ALL/*ALL) TYPE(*ATR) OUTPUT(*OUTFILE) FILEATR(*PF)
     OUTFILE(libname/PHYSFILES)
    

    This command gathers all the physical file attribute data into a newly created PHYSFILES file, including the access path size. The PHYSFILES file will be built using the record format and field names listed in the QAFDPHY system file.

    You can use this command to gather file attribute information for your system’s logical files.

    DSPFD FILE(*ALL/*ALL) TYPE(*ATR) OUTPUT(*OUTFILE) FILEATR(*LF)
     OUTFILE(libname/LOGIFILES)
    

    This command gathers all the logical file attribute data into a newly created LOGIFILES file, again including the access path size for each file. The LOGIFILES file will be built using the record format and field names listed in the QAFDLGL system file.

    Step 2: Use SQL statements to discover which files have 4 GB access paths.

    Once you have files containing your physical and logical file attributes, it’s a simple process to use SQL to determine which files on your system have 4 GB access paths.

    You can run this SQL statement over the PHYSFILES file to find the physical files that have 4 GB access paths.

    SELECT PHFILE, PHLIB, PHAPSZ FROM PHYSFILES WHERE PHAPSZ = '0'
    

    Where:

    • PHFILE equals the names of the physical files with 4 GB access paths.
    • PHLIB equals the library names the physical files reside in.
    • PHAPSZ equals the size of the physical file’s access path. A ‘0’ in the PHAPSZ field indicates the file has a 4 GB access path size.

    You can run this SQL statement over the LOGIFILES file to find the logical files on your system that have 4 GB access paths.

    SELECT LGFILE, LGLIB, LGAPSZ FROM LOGIFILES WHERE LGAPSZ = '0'
    

    Where:

    • LGFILE equals the names of the logical files with 4 GB access paths.
    • LGLIB equals the library names the logical files reside in.
    • LGAPSZ equals the size of the logical file’s access path. A ‘0’ in the LGAPSZ field indicates the file has a 4 GB access path size.

    Print out the results of these SQL statements and you’ll have two lists that contain all the files on your system that need to be looked at for 4 GB access path sizes.
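    If you would rather work from a single combined list, the two queries above can be merged with a UNION. This is just a sketch: the column names come from the QAFDPHY and QAFDLGL outfile formats described in Step 1, and the ‘PF’/‘LF’ literals are labels I’ve added for readability, not fields in the outfiles.

```sql
-- Combine physical and logical files with 4 GB access paths into one list.
-- PHYSFILES and LOGIFILES are the DSPFD outfiles created in Step 1;
-- the 'PF'/'LF' literal is an added label for readability.
SELECT 'PF' AS FILETYPE, PHFILE AS FILENAME, PHLIB AS LIBNAME
  FROM PHYSFILES WHERE PHAPSZ = '0'
UNION ALL
SELECT 'LF', LGFILE, LGLIB
  FROM LOGIFILES WHERE LGAPSZ = '0'
ORDER BY LIBNAME, FILENAME
```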

    Step 3: Determine whether you need to increase the access path size (ACCPTHSIZ) parameter for any given file.

    Changing a file’s access path size can take time, sometimes an hour or more depending on the size of the file. You also need to isolate the file from any program or job access while the access path size change is taking place. Changing a file’s access path size can also impact application performance, because a *MAX4GB access path size provides better performance than a *MAX1TB file if there is high contention for file keys on your system.

    So it’s worthwhile to evaluate whether you want or need to change the size for any 4 GB access path size files that show up on your list. If a file has a *MAX4GB access path size and the file’s current access path isn’t close to breaching the 4 GB limit, you may want to leave the path alone and change it at a later time.

    Run this DSPFD command to find the current index size for any individual physical and logical files that show up on your lists:

    DSPFD FILE(library/filename) TYPE(*MBR)
    

    In the Access Path Activity Statistics, you’ll see the current access path size on the Access path size line, like this.

    Figure 1


    My rule of thumb is that if the current access path size is 75 percent or less of 4 GB, you may not want to change the access path size to 1 Terabyte at this time. If the path is over 75 percent of 4 GB and especially if the access path size is over 85 percent of 4 GB, you will probably want to change the maximum access path size to 1 Terabyte.

    You can use the following formula to determine if you should change the access path size.

    Change Access Path Size if:
    (current access path size/4000000000) > .75
    

    Feel free to change the 75 percent (.75) number in this equation to fit your shop’s own needs.

    Step 4: Increase the access path size, if necessary.

    If you decide that you need to change a particular access path size to 1 Terabyte, you’ll need to find a time when no job or object has a lock on the file you want to change, perhaps when the system is in restricted state or during a maintenance window. Once you’re sure the file will be unlocked, run one of the following commands to change the access path size to 1 Terabyte.

    For physical files:

    CHGPF FILE(library/filename) ACCPTHSIZ(*MAX1TB)
    

    For logical files:

    CHGLF FILE(library/filename) ACCPTHSIZ(*MAX1TB)
    

    When you change a file’s access path size, it will automatically kick off an index rebuild for that object. So be sure to allow plenty of time before any users or processes will attempt to use the file again, as index rebuilds can sometimes take an hour or more to finish.

    Step 5: Automate the process.

    Once you understand how to evaluate and change file access paths, you can automate the process with a CL program run from your job scheduler. As mentioned above, plan to run the automated access path change during a time when you can allocate the objects to be changed, or your automation will not work.
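    As a starting point, the change for a single file can be wrapped in a small CL program like the sketch below. This is only an outline under stated assumptions: LIBNAME and FILENAME are placeholders for a file from your Step 2 list, and the program relies on ALCOBJ to fail cleanly if it cannot get an exclusive lock, rather than attempting the change against a file that is in use.

```
/* Hedged CL sketch: change one file's access path size to 1 TB.       */
/* LIBNAME/FILENAME are placeholders for a file from your Step 2 list. */
PGM
  /* Try for an exclusive lock; give up after 60 seconds. */
  ALCOBJ     OBJ((LIBNAME/FILENAME *FILE *EXCL)) WAIT(60)
  MONMSG     MSGID(CPF1002) EXEC(GOTO CMDLBL(NOLOCK))

  /* Change the access path size; this kicks off the index rebuild. */
  CHGPF      FILE(LIBNAME/FILENAME) ACCPTHSIZ(*MAX1TB)

  /* Release the lock when done. */
  DLCOBJ     OBJ((LIBNAME/FILENAME *FILE *EXCL))
  RETURN

NOLOCK:
  /* Could not get the lock; log it and try again in a later window. */
  SNDPGMMSG  MSG('Could not allocate file; access path not changed.')
ENDPGM
```

    For logical files, the same pattern applies with CHGLF in place of CHGPF. A production version would take the library and file as parameters and loop over your Step 2 list.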

    And that’s how you defuse the 4 GB Access Path Size Time Bomb.

    Follow Me On My Blog, On Twitter, And On LinkedIn

    Check out my blog at joehertvik.com, where I focus on computer administration and news (especially IBM i); vendor, marketing, and tech writing news and materials; and whatever else I come across.

    You can also follow me on Twitter @JoeHertvik and on LinkedIn.

    Joe Hertvik is the owner of Hertvik Business Services, a service company that provides written marketing content and presentation services for the computer industry, including white papers, case studies, and other marketing material. Email Joe for a free quote for any upcoming projects. He also runs a data center for two companies outside Chicago. Joe is a contributing editor for IT Jungle and has written the Admin Alert column since 2002.





Volume 13, Number 21 -- November 6, 2013