DB2 for i 7.2 Features and Fun, Part 1
June 30, 2014 Michael Sansoterra
The dump truck of database features driven by IBM is delivering yet again, dropping off a full load of great enhancements for the beloved DB2 for i RDBMS. In addition to the new features included in the various IBM i 7.1 technology refresh (TR) releases, 7.2 brings even more security-, developer-, and administration-friendly enhancements that I'll be discussing in this and future tips.
TRUNCATE TABLE
The TRUNCATE TABLE statement deletes all of the rows in a table in one fell swoop. You can think of this statement as SQL's version of the Clear Physical File Member (CLRPFM) command.
TRUNCATE TABLE TEMP_SALES_JOURNAL -- Remove all rows
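For comparison, here is a sketch of the equivalent CL command (the library name MYLIB is a placeholder; CLRPFM clears the first member by default):

```
CLRPFM FILE(MYLIB/TEMP_SALES_JOURNAL)
```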
Testing on a table with over two million rows, I didn't see a significant performance difference between DELETE and TRUNCATE, so I believe this statement was implemented primarily to offer more functionality (reuse storage, ignore triggers) and for compatibility with other DB2 versions and other RDBMSs such as SQL Server.
Some interesting option pairs that can be used with this statement are shown below:
DROP STORAGE (default) or REUSE STORAGE: DROP STORAGE reclaims the storage allocated to the table. REUSE STORAGE removes all of the rows but leaves the existing storage allocation in place (as evidenced by the object's size).
IGNORE DELETE TRIGGERS (default) or RESTRICT WHEN DELETE TRIGGERS: the former simply removes the rows without firing any delete triggers defined on the table. If RESTRICT WHEN DELETE TRIGGERS is specified, the statement terminates with an error if a delete trigger is defined on the table. If you want your delete triggers to fire, use a DELETE statement instead.
CONTINUE IDENTITY (default) or RESTART IDENTITY: this setting determines whether the table's identity column (if present) continues from its current value or is reset to its original starting value.
IMMEDIATE: this option prevents a ROLLBACK from restoring the rows to the table. By default, a TRUNCATE TABLE statement running within a transaction can be rolled back; IMMEDIATE overrides that behavior and makes the truncation permanent.
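As a quick sketch of the more conservative option pair, the statement below (reusing the earlier example table) fails with an error if a delete trigger is defined on the table, and leaves the identity counter untouched:

```sql
-- Fail rather than silently bypass any delete triggers;
-- leave the identity column's next value unchanged
TRUNCATE TABLE TEMP_SALES_JOURNAL
  RESTRICT WHEN DELETE TRIGGERS
  CONTINUE IDENTITY
```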
The following TRUNCATE TABLE statement will clear the table, reclaim the storage, reset the identity column, and leave the table truncated even if the current transaction is rolled back:
TRUNCATE TABLE TEMP_SALES_JOURNAL DROP STORAGE RESTART IDENTITY IMMEDIATE
Increased Timestamp Fractional Second Resolution
The TIMESTAMP data type has been enhanced so that a developer can specify how many fractional digits of a second (0-12) should be stored. A fractional precision of zero means that the timestamp will only store a date and time value to the nearest second. A fractional precision of three means that the timestamp can store a value to the nearest millisecond. And now, with a fractional precision of 12, propellerhead science application developers can rejoice because they can store a timestamp value with a resolution down to the picosecond!
The fractional precision is specified on the TIMESTAMP data type in parentheses as shown here (omitting the fractional seconds results in a default of six, the pre-IBM i 7.2 standard):
CREATE TABLE DEV.TS_TEST (
  T1 TIMESTAMP(0)  NOT NULL DEFAULT CURRENT_TIMESTAMP,
  T2 TIMESTAMP(3)  NOT NULL DEFAULT CURRENT_TIMESTAMP,
  T3 TIMESTAMP(6)  NOT NULL DEFAULT CURRENT_TIMESTAMP,
  T4 TIMESTAMP(9)  NOT NULL DEFAULT CURRENT_TIMESTAMP,
  T5 TIMESTAMP(12) NOT NULL DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO DEV.TS_TEST
VALUES (DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT);
When the inserted values are queried, the length of each TIMESTAMP column obviously differs.
Interestingly, if you query the CURRENT_TIMESTAMP special register, you will only see a resolution down to the microsecond. However, in the above example, DB2 did fill in all 12 fractional digits for column T5. Even with fast processors, I wonder whether the entire fractional value is accurate: DB2 can store a timestamp value to the nearest picosecond, but it's only as good as the accuracy of the code that supplies the value.
The current SQL Reference manual doesn't give much information, but LOCALTIMESTAMP is a synonym for CURRENT_TIMESTAMP. Both can now be given a fractional second precision.
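For example, a precision can be requested for both special registers like so (the actual values returned will of course vary from run to run):

```sql
-- Request specific fractional second precisions
-- from the special registers
VALUES (CURRENT_TIMESTAMP(0),
        LOCALTIMESTAMP(9),
        CURRENT_TIMESTAMP(12))
```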
Early IBM i 7.2 adopters beware: older JDBC, OLE DB, and other middleware providers may not interpret timestamps with the new fractional precisions correctly. In Run SQL Scripts, for instance, CURRENT_TIMESTAMP(0) still returned fractional seconds. Depending on the statement I tried to run, I received many errors relating to the TIMESTAMP columns I was testing.
I have to be honest, I’ve never liked the implementation of the TIMESTAMP data type on DB2 for i. With a fractional second precision of a microsecond, it is a space-wasting 26-byte hog. In contrast, the SQL Server DATETIME2 data type requires anywhere from six to eight bytes and while its range may not be as large, its usefulness suits the vast majority of business application requirements.
But DB2 for i has made a minor improvement here. Whereas TIMESTAMP used to be a fixed length of 26 bytes, the storage requirement now varies between 19 and 32 bytes depending on the given fractional second precision. If you don’t need anything beyond the nearest second, save the space and specify a precision of zero.
The SQL Reference manual indicates that the TIMESTAMP storage requirement varies from 7 to 13 bytes, but I believe this description is mistaken for DB2 for i. When reviewing a table with timestamp values, DSPFFD showed that anywhere from 19 to 32 bytes are required.
ALTER TRIGGER
ALTER TRIGGER is a new statement in IBM i 7.2. This statement is not intended to take the place of CREATE OR REPLACE TRIGGER (for example, you can't change the body of an SQL trigger). Currently this statement offers control of two trigger-related settings:
ENABLE or DISABLE: activates or temporarily deactivates the trigger.
SECURED or NOT SECURED: controls whether the trigger is considered secure, which matters when the trigger's table is protected by row and column access control (RCAC).
I'll cover more about RCAC in a future tip. The ability to temporarily disable and re-enable a trigger using SQL is a welcome feature. Disabling a trigger is often useful during application maintenance when there are no users on the system. Such maintenance typically includes scenarios involving loading a large number of rows into a table where the trigger processing is not wanted, either due to performance or undesired side effects.
ALTER TRIGGER ADVWORKS.TRANSACTIONHISTORY_INSERT DISABLE;

-- Load a large number of rows
INSERT INTO ADVWORKS.TRANSACTIONHISTORY
SELECT ....;

ALTER TRIGGER ADVWORKS.TRANSACTIONHISTORY_INSERT ENABLE;
Hopefully IBM will give us the ALTER INDEX statement as well so that indexes can be temporarily disabled and then re-enabled using SQL. Large data insertion is one area where disabling indexes can help performance.
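Until then, one common workaround is to drop an index before a large load and rebuild it afterward in a single pass. A rough sketch, where the index name TRANSACTIONHISTORY_IX1 and the column PRODUCTID are hypothetical placeholders:

```sql
-- Drop the index so the bulk insert doesn't maintain it row by row
DROP INDEX ADVWORKS.TRANSACTIONHISTORY_IX1;

-- Perform the large data load here ...

-- Rebuild the index in one pass over the loaded table
CREATE INDEX ADVWORKS.TRANSACTIONHISTORY_IX1
    ON ADVWORKS.TRANSACTIONHISTORY (PRODUCTID);
```

The trade-off is that the index is unavailable to queries during the load, so this technique is best reserved for maintenance windows.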
That’s it for now. Stay tuned for more info about the great features in IBM i 7.2 (V7R2) coming your way.
Author’s Note: With respect to my statement about the TIMESTAMP data type requiring 26 bytes, this is incorrect. IBM’s Scott Forstie provided the following information:
Thanks Scott! I labored under this false assumption for quite a while and it affected my database design decisions. It makes sense, though, that IBM wouldn't have such a clumsy implementation compared to other RDBMSs; that should've been my first clue.