As the saying goes, "all good things must come to an end." And alas, this is the last in my series of tips on improvements to DB2 for i in V6R1. This final tip covers the hodgepodge of remaining notable features added since V5R4 that haven't been covered in the prior tips. Stick around, because there is some good stuff in here.
V5R4 gave us the ALTER PROCEDURE statement, and now, at long last, DB2 has the ALTER FUNCTION statement. Prior to 6.1, when a function needed a modification, developers were forced to drop and re-create it. Dropping the function in turn caused all specially assigned permissions on the object to be lost. ALTER FUNCTION allows a function to be changed without losing its current security assignments.
For functions written using the SQL Procedural Language, ALTER FUNCTION will come in handy when changing logic within the function's body. For external functions written in a high-level language, ALTER FUNCTION will be useful when you realize you've forgotten to specify options such as "DETERMINISTIC" or "NOT FENCED," or when you've incorrectly specified the subprocedure name.
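As a sketch of what that might look like for an SQL scalar function (the function name, library, and parameters here are hypothetical):

```sql
-- Hypothetical example: replace the body of the Calc_Bonus SQL
-- function without dropping it, so its authorities are preserved.
ALTER FUNCTION DataLib/Calc_Bonus
   (InSalary Dec(9,2), InPct Dec(5,2))
BEGIN
   RETURN InSalary * InPct / 100;
END
```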
You may have noticed the function's parameter list was included in this example. The parameter list signature is only required when the function is overloaded. In this case specifying the parameter list allows DB2 to identify which of the overloaded function definitions you intend to modify. Alternatively, you can use the ALTER SPECIFIC FUNCTION syntax and specify the function's specific name. If your function is not overloaded, then the parameters can be omitted.
For external functions, you do not need to re-specify all existing options. In this next example, only a few options that need to be changed are specified:
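A sketch, assuming an existing external function named GET_RATE (a hypothetical name):

```sql
-- Only the options being changed are listed; all other options
-- retain their current values.
ALTER FUNCTION DataLib/GET_RATE
   NOT FENCED
   DETERMINISTIC
```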
For an SQL function the same rules apply. You can modify as little as a single option, or you can re-specify the UDF's logic to revamp it entirely.
Please note that you cannot alter parameter names or the returns clause for scalar or table functions. (What a bummer!) I suppose this is because if you add a parameter, you're not modifying a function, you're actually creating a new one thanks to DB2's ability to overload a single function with several parameter signatures. This limitation is particularly grievous when dealing with table functions because it is a common requirement to add an additional column to the function's returned table.
SQL Pre-Compiler Enhancements
RPG variables defined at the subprocedure level are now correctly interpreted and used by the pre-compiler. Consider the problem of days gone by: often a single variable is defined in the main procedure's D-specs and defined again in one or more subprocedures. With duplicate variable definitions, the older pre-compiler was not always able to determine which variable it should use. Now, when an SQL statement with host variables is embedded in a subprocedure, the pre-compiler assigns host variable references to variables local to the subprocedure first, whenever possible.
In the sample code below, variable OrderId is defined twice. The subprocedure's SQL statement will fetch its data into the subprocedure's OrderId variable while leaving the globally scoped variable's value untouched.
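A minimal fixed-format sketch (file, procedure, and column names are hypothetical; column positions are approximate):

```rpg
     * Global definition of OrderId
     D OrderId         S              9P 0

     P GetLatestOrder  B
     * Local definition; the pre-compiler now binds :OrderId here
     D OrderId         S              9P 0
      /free
         exec sql
            Select Max(Order_Id) Into :OrderId
              From OrderHeader;
      /end-free
     P GetLatestOrder  E
```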
Moving on, the pre-compiler has a new "extended indicator" option that is now available for positioned Update and Insert/Value statements. As a brief reminder, since the RPG compiler doesn't know anything about NULL values, the SQL pre-compiler uses something called indicator variables as a way to flag whether or not the value of a certain column or variable contains a NULL.
An extended indicator expands the indicator host variable's functionality by allowing it to do more than just flag whether a variable is null. In prior releases, an indicator variable had two valid values: 0 = not null and -1 = null. Extended indicators can hold the following values:

-1, -2, -3, -4, or -6 indicates a NULL value.
-5 indicates the column's DEFAULT value should be used.
-7 indicates the value is UNASSIGNED (left unchanged).
The new values allow extended indicator variables to indicate whether a column's value should remain unchanged during an update (-7) or use the column's default value (-5). When declaring a cursor or preparing a statement that uses extended indicators, make sure you specify the "With Extended Indicators" clause so DB2 will accept these new extended values.
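Here is a hedged sketch of a positioned update in embedded RPG SQL; the table, column, and variable names are all hypothetical:

```rpg
// NameInd = 0 updates EmpName normally; RateInd = -7 marks
// PayRate as UNASSIGNED so the column is left untouched.
exec sql Declare C1 Cursor For
   Select EmpName, PayRate From Employee
   For Update With Extended Indicators;

exec sql Open C1;
exec sql Fetch C1 Into :EmpName :NameInd, :PayRate :RateInd;

NameInd = 0;
RateInd = -7;

exec sql Update Employee
   Set EmpName = :EmpName :NameInd,
       PayRate = :PayRate :RateInd
   Where Current Of C1;
```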
In this release, RPG has improved support for SQL's large object data types (BLOB, CLOB, and DBCLOB). The previous RPG compiler limited character variables to about 64K in length (approximately 32K for double-byte strings). This restriction limited RPG developers to working with entire BLOBs and CLOBs of at most 64K; bigger large objects required the use of a LOB locator and reading chunks of a LOB column into an RPG variable. Now, however, RPG allows character variables to be defined up to about 16MB in length, which lets a single RPG host variable store up to 16MB of LOB data. When working with LOBs larger than 16MB in your RPG programs, you still need to use a LOB locator in order to work with the LOB in piecemeal fashion.
Finally, the CRTSQLxxxI and CRTSQLxxxM commands have been enhanced to allow a program compile from an IFS file in addition to a source member. This is useful as many developers use PC-based tools to write and store their RPG and RPG SQL programs in "stream files" instead of traditional source members.
Tolerance for Unsupported DB2 Syntax
For those database gurus who code for more than one DB2 platform (i, Linux, Unix, Windows, and z/OS), much of the fiddling and worry about platform-specific features within your database scripts will be a thing of the past. IBM now allows DB2 statements and clauses that are valid on other platforms (but unsupported in DB2 for i) to exist within your DB2 for i scripts. DB2 for i will ignore these unsupported features (so long as the syntax is valid) as a way to help developers write portable code.
Hidden Columns
A hidden column is one that will not appear in a generic SELECT * query; it is returned only when explicitly named in a query's column list. Tagging a column as hidden is useful for columns meant for internal use by developers and for columns containing sensitive information such as a social security number (to prevent inadvertent discovery of data by users with a querying tool).
When creating a new table, specify IMPLICITLY HIDDEN on any column that should be hidden:
CREATE TABLE DataLib/BASEWAGE
(EMPLOYEE INT NOT NULL,
EFFDATE DATE NOT NULL DEFAULT CURRENT_DATE,
PAYRATE DEC(9,4) NOT NULL IMPLICITLY HIDDEN,
CONSTRAINT PK_BASEWAGE PRIMARY KEY (EMPLOYEE,EFFDATE))
The query result for "Select * From DataLib/BaseWage" will not include the PayRate column.
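Naming the hidden column explicitly still returns it:

```sql
-- PAYRATE appears in the result only because it is named:
SELECT EMPLOYEE, EFFDATE, PAYRATE
  FROM DataLib/BASEWAGE
```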
Columns in an existing table can be hidden using the ALTER TABLE statement:
ALTER TABLE CORPDATA/EMPLOYEE ALTER COLUMN SALARY SET DATA TYPE
DECIMAL (9,2) IMPLICITLY HIDDEN
To make this column visible again, specify the NOT HIDDEN value:
ALTER TABLE CORPDATA/EMPLOYEE ALTER COLUMN SALARY SET DATA TYPE
DECIMAL (9,2) NOT HIDDEN
While not the strongest security measure, hidden columns offer one more way to help protect your data. They also keep some of those pesky special internal columns from showing up by default when you're doing ad hoc queries. Finally, note that every table must have at least one column that is not flagged as hidden.
Skip Locked Data
Skip Locked Data is a new option on the SELECT, DELETE, and UPDATE statements that allows DB2 to bypass locked rows it encounters without causing the statement to fail.
Why would developers want this option? Consider that it is common for legacy data maintenance programs to employ pessimistic locking. When an operator uses a legacy program to retrieve a database row, the program places a lock on the row in anticipation of a possible update from the user. If the operator brings up a record in this manner and then walks away from his terminal, the row can be locked indefinitely. If another SQL statement such as an update or delete is executed against the same database table, the SQL statement may halt with a record lock message when it encounters a locked row.
Fortunately, the Skip Locked Data clause allows statements running with a transaction isolation level of READ COMMITTED (CS, a.k.a. cursor stability) or REPEATABLE READ (RS, a.k.a. read stability) to ignore these locked rows and continue on their merry way.
For example, if a process is locking one or more rows in the ItemHistory table, then the following Delete statement will just skip any locked rows it encounters instead of halting:
Delete From ItemHistory
Skip Locked Data;
So if your application can wait until the next time around to get these locked rows, then "Skip Locked Data" can be used to avoid locking related error messages.
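The clause works the same way on a query; a quick sketch (the Item_Id column is hypothetical):

```sql
-- Rows locked by other jobs are simply omitted from the result set.
SELECT *
  FROM ItemHistory
 WHERE Item_Id = 1001
  WITH CS SKIP LOCKED DATA
```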
FOR EACH STATEMENT on INSTEAD OF Triggers
"Instead of" triggers are built on views (not tables) and are special because they allow a developer to specify logic to execute when an insert, update, or delete operation is executed. See Perform the Impossible: Update a Multi-Table View using INSTEAD OF Triggers for more information on INSTEAD OF triggers.
In prior releases, INSTEAD OF triggers could only process code with a row by row granularity (FOR EACH ROW). This means that when a modification statement (INSERT/UPDATE/DELETE) is executed against the view, the trigger would run for each row affected by the statement.
Allowing statement level granularity (FOR EACH STATEMENT) on an INSTEAD OF trigger allows the trigger to be executed only once for each modification statement, which can be more efficient than row by row processing. Which granularity to choose (statement vs. row) depends on the requirements of the trigger, but the point here is that you can now choose either when coding an INSTEAD OF trigger.
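As a sketch (view, table, and trigger names are hypothetical), a statement-level INSTEAD OF trigger might use a transition table to process all affected rows at once:

```sql
-- Fires once per INSERT statement against the view, no matter
-- how many rows that statement supplies.
CREATE TRIGGER Emp_View_Insert
   INSTEAD OF INSERT ON Employee_View
   REFERENCING NEW TABLE AS NewRows
   FOR EACH STATEMENT
BEGIN
   INSERT INTO Employee_Master (EmpId, EmpName)
      SELECT EmpId, EmpName FROM NewRows;
END
```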
COMMENT on Constraints
The COMMENT and LABEL statements are great for documenting your database by allowing you to associate explanatory text with various SQL objects (parameters, triggers, stored procedures, functions, etc.). Starting with 6.1, COMMENT also supports adding text to constraints. For instance, if you want to help subsequent developers know why a check constraint was added to a table column, document it using COMMENT as shown in this sample:
/* Define Constraint */
ALTER TABLE EMPLOYEE
ADD CONSTRAINT CK_PAY_TYPE CHECK (PAY_TYPE IN ('S', 'H'))

/* Comment on Constraint */
COMMENT ON CONSTRAINT CK_PAY_TYPE IS 'Allowed values are
''S'' for Salary and ''H'' for Hourly'
COMMENT is somewhat analogous to DDS' TEXT keyword. However, COMMENT text can contain up to 2,000 characters.
Unfortunately, IBM's current tools do not make this "comment text" available in an easy-to-access fashion. In the green screen you pretty much have to query the appropriate system table to find the comment on the specified object or column. (For example, system view SYSCST in library QSYS2 contains each constraint defined on the system and further contains a column called LONG_COMMENT that contains the comment text for the constraint.)
The System i Navigator (formerly iSeries Navigator) "databases" node provides a little bit better visibility for comments. For one example, consider procedure objects within Navigator's database object tree. The comment text will show up in the procedure's description section when you right-click on a stored procedure and choose properties. Or you can right-click on a procedure and choose the "comments" option.
As an aside (not related to 6.1), the related LABEL statement is very similar to COMMENT, with the additional ability to add column headings to a table column (similar to the COLHDG DDS keyword). Often LABEL is used to assign the specified text to the underlying i/OS object's text property (the well-known text field that shows in the object description). For instance, if you have a table with a long name, such as ProductCategory, the table will have a system name of something like PRODU00001. If you have many tables with long names, it can be difficult to discern which table is which using command-line tools such as WRKOBJPDM. Use LABEL as follows to identify the table for easy review:
LABEL ON TABLE PRODUCTCATEGORY IS 'ProductCategory Table'
This LABEL "object description" text also shows up prominently in System i Navigator for many SQL objects.
RUNSQLSTM & System i Access Run SQL Scripts Enhancements
The Run SQL Scripts tool in System i Navigator has a few nice enhancements, including the ability to save a script to a source member or to a stream file in the IFS. (The source member option has a few caveats, since source members are fixed length while Run SQL Scripts is free form.)
Run SQL Scripts also finally gives programmers the ability to save a query's result set to a file (such as an Excel or Lotus file). To use this option, choose "Allow Save Results" from the Options menu, run your query, and then right-click on the results pane and choose "Save As". Skip Marchesani gives more details about these improvements in V6R1 Enhancements for Run SQL Scripts.
The RUNSQLSTM command has likewise been enhanced with CL command support and the ability to process stream files. Stream file support is a much-needed enhancement, as many SQL scripts are written using text editors or tools such as Run SQL Scripts. Ever since I've been on V6R1 System i Access, I've saved the majority of my Run SQL Scripts work directly to IFS files, which can now thankfully be re-run from the green screen using RUNSQLSTM.
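For example, a script saved to a (hypothetical) IFS path can be run like so:

```
RUNSQLSTM SRCSTMF('/home/scripts/build_tables.sql')
          COMMIT(*NONE) NAMING(*SQL)
```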
RUNSQLSTM now also has the ability to run CL commands, which is useful because many scripts require embedded OS commands. To run a CL command within a RUNSQLSTM script, prefix the command with CL: and terminate it with a semicolon, as follows:
CL: DSPJOBLOG OUTPUT(*PRINT);
This CL command processing syntax is the same syntax allowed in System i Navigator's RunSQL Scripts utility, which improves the compatibility between the two tools.
Create SQL Procedure Enhancement
When an SQL stored procedure is built, behind the scenes an ILE C program with embedded SQL is created. If you have database group PTF #4 or higher installed on your system, a new PROGRAM TYPE clause on CREATE PROCEDURE gives the developer the option to create either a program or a service program. Here is an example:
Create Procedure xxxxx.Sample1
(In @Number1 Dec(15,5),
In @Number2 Dec(15,5),
Out @Number3 Dec(15,5))
Language SQL
Program Type Sub /* Main=Program Sub=Service Program */
Begin
Set @Number3 = @Number1 + @Number2;
End
PROGRAM TYPE accepts two values: Sub (service program) or Main (program). In the Sample1 procedure above, specifying Sub creates a service program object called Sample1 with a subprocedure entry point called Sample1_1. Theoretically, a service program should give slightly better performance when the stored procedure is called repeatedly.
Improved SQE Support
Since V5R2 we've had two query optimizers running under the covers: the newer SQL Query Engine (SQE) and the Classic Query Engine (CQE). It is desirable to have the SQE process the query because it has numerous improved query optimization capabilities compared to its CQE predecessor.
When first introduced, SQE would only handle relatively simple queries while the remaining complicated queries would be routed to the CQE. However, with each successive release, SQE has been able to handle increasingly complex queries. V6R1 removes some of the last big hurdles that used to prevent a query from running in the SQE (including queries utilizing User-Defined Table functions and various character translations).
For an overview of SQE and its capabilities, see IBM DB2 for i SQL Query Engine (SQE) Information here.
More Features = Improved Productivity
With so many new features boasted in this release, it is easy to envision many ways to improve developer productivity. Whether you want to include summary rows in your result set, intertwine CL commands with your SQL scripts, or utilize the latest indexing technologies, this release has features that can help. IBM is doing a great job of keeping DB2 for i a world-class database engine.
Michael Sansoterra is a programmer/analyst for i3 Business Solutions, an IT services firm based in Grand Rapids, Michigan. Send your questions or comments for Mike to Ted Holt via the IT Jungle Contact page.
RELATED STORIES
Treasury of new DB2 6.1 (V6R1) Features, Part 5: New Functions and Change Stamp Columns
Treasury of new DB2 6.1 (V6R1) Features, Part 4: Index and Data Type Enhancements
Treasury of New DB2 6.1 (V6R1) Features, Part 3: Client Special Registers
Treasury of New DB2 6.1 Features, Part 2: Grouping Sets and Super Groups
Treasury of New DB2 6.1 Features, Part 1: Query Enhancements
V6R1 Enhancements for Run SQL Scripts
SQL Cross Platform Interoperability: The Proper Function
New in V5R4: OLAP Ranking Specifications
Perform the Impossible: Update a Multi-Table View using INSTEAD OF Triggers
V5R3 SQL Enhancements