DB2 on i: The Time, Money, and Risk of Modernization
October 4, 2010 Dan Burger
You are sitting on a gold mine of database technology and probably don’t even know it. You have many of the essential pieces undiscovered in your existing system. Untapped capabilities in DB2 for i, the integrated relational database at the heart of IBM i 6.1 and 7.1, can be put to use meeting business requirements that have executives nervously searching for answers. Your first discovery is that a database does far more than store information. Take a closer look at what you have here.
This is an information management system, not a container. It’s not just a barn, a garage, or an attic.
Your company likely has business requirements that are not being met because of IT bottlenecks. And if the executive team is doing its job, additional business requirements are being anticipated. If growth is being limited by IT idiosyncrasies, some reassessment needs to be done. You have to look at factors such as application performance and scalability. Business intelligence and more sophisticated applications need serious consideration.
Mike Cain is a member of IBM's senior technical staff for DB2 on i. He's a business intelligence and data warehousing subject matter expert. As a consultant, he's been in a position to help companies analyze their business objectives and advise them what to do. The job comes with a lot of responsibility and high expectations.
Cain advises companies to implement technology that’s built into the IBM i operating system to meet business requirements. His approach could be described as “if you got ’em, smoke ’em.” A cigar was made to smoke. The DB2 database running on IBM i was made to do a lot more than most companies are getting out of it.
“Letting the database do more work–keeping track of things, making inferences, coming up with logic so the operator doesn’t have to, taking care of repeatable processes, and adding higher efficiency,” Cain says, are some of the ways database technology can help meet business requirements.
“A database management system can be ‘more aware’ of the business entities and business rules. It can do some clever logic and guarantee things. It can stay vigilant better than the application developer can,” Cain adds.
OK. It’s easy to understand how a more automated database system can go beyond customer records and order synchronization. Believing that a more intelligent query optimizer can speed up database queries is reasonable.
Most people, Cain included, would refer to this as database-centric processing. Getting there requires a certain amount of database modernization.
Ah, so now we’re talking about time and money and risk–the three devils of any IT project.
We’ll go down that dark alley, but first let’s note that no database needs to be entirely modernized. That’s one major distinction between a database modernization and a database migration. There’s no mandate to dump anything that is working perfectly as is. If everyone is happy, no time, money, or risk is involved.
"No one is selling new features and functions," Cain says. "But if there are business or technical requirements that are being ignored, you have to do something to meet those requirements. Meeting those requirements is the motivation and the reasoning for a modernization project."
The kind of features and functions that define a database modernization project are introduced using SQL–the industry standard. It’s not a coincidence that SQL Data Definition Language (DDL) is one of the hottest topics at technical conferences, local user group meetings, and seminars where programmers and analysts are learning to define objects and attributes and create tables and indexes. It’s also not a coincidence that IBM has invested in SQL, building a great deal of its functionality into the IBM i operating system. Little or no investment is being made in DDS, the traditional record-level access to databases.
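To make the DDL idea concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for DB2 for i. The table, column, and index names are hypothetical, and DB2 for i syntax differs in details (data types, schema qualification), but the shape of the DDL is the same: tables and indexes defined in SQL rather than in DDS record descriptions.

```python
import sqlite3

# sqlite3 stands in for DB2 for i; names are hypothetical and DB2 syntax
# differs in details.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The table, its keys, and its rules are defined in the database itself,
# rather than buried in application code -- the point of moving from DDS
# to SQL DDL.
cur.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        order_total REAL    NOT NULL CHECK (order_total >= 0),
        order_date  TEXT    NOT NULL
    )
""")

# An index created with DDL plays the access-path role a DDS logical
# file once did.
cur.execute("CREATE INDEX orders_by_customer "
            "ON orders (customer_id, order_date)")

cur.execute("INSERT INTO orders VALUES (1, 100, 25.50, '2010-10-01')")
row_count = cur.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(row_count)  # prints 1
```

Note that the CHECK constraint and NOT NULL rules now live with the data: the database enforces them for every application, which is exactly the "staying vigilant" role Cain describes.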
“If you polled the IBM i users, there are probably 80 percent who are using SQL in some way,” Cain says.
At first that percentage sounds way too high. But the key to that estimate is that most are unwittingly using SQL.
“I could say 100 percent because the operating system uses SQL,” Cain notes. “All the people who are connecting some kind of client to their systems–let’s say their core application is RPG with native record-level access–if they connect with Microsoft Excel or Access, or an off-the-shelf query tool, or ODBC driver, that’s SQL. It doesn’t take any special training to do that, but this is why a person and a company should be motivated to incorporate SQL and modernize. Without the knowledge and the implementation, the apps don’t work as they should and it’s likely the performance or scalability won’t be acceptable.”
What about that risk factor when taking on a database modernization task? Anytime you make changes, you introduce risk, Cain points out.
“There are methodologies to minimize the risk and minimize the impact of change on an application, but changes are being made to things that have possibly been working fine for years. It requires good project management, good testing, and good quality control,” he says.
This is an issue of data integrity, and the impact of losing data has to be considered.
“When making changes from one container to another, you don’t want anything to spill on the floor,” the database expert notes. “There is some accounting and vigilance required. If you started with a million rows, you want to finish with a million rows. If you start with a specific number of orders, you need to finish with the same number and they need to mean the same thing. IBM i programmers have been doing this forever. People have added fields, columns, and introduced changes, reorganized databases. Risk is avoided by being responsible.”
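That accounting can itself be done with SQL. The sketch below, again using sqlite3 as a stand-in for DB2 for i with hypothetical table names, moves rows from an old layout to a new one and then verifies the counts match.

```python
import sqlite3

# Verify nothing "spills on the floor" when moving rows between containers.
# sqlite3 stands in for DB2 for i; tables and layouts are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders_old (id INTEGER, amount REAL)")
cur.executemany("INSERT INTO orders_old VALUES (?, ?)",
                [(i, i * 1.5) for i in range(1000)])

# The new layout adds a status column with a default; copy everything across.
cur.execute("CREATE TABLE orders_new (id INTEGER, amount REAL, "
            "status TEXT DEFAULT 'OPEN')")
cur.execute("INSERT INTO orders_new (id, amount) "
            "SELECT id, amount FROM orders_old")

# Start with 1,000 rows, finish with 1,000 rows.
old_count = cur.execute("SELECT COUNT(*) FROM orders_old").fetchone()[0]
new_count = cur.execute("SELECT COUNT(*) FROM orders_new").fetchone()[0]
assert old_count == new_count
print(old_count, new_count)  # prints 1000 1000
```

In practice the check would also cover sums and key totals, not just row counts, so that the rows not only arrive but still mean the same thing.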
The reasons companies are taking on modernization projects are sometimes intertwined and sometimes separate, but these three factors are the primary fire starters: new features, more performance, and more scalability.
New features make database apps more functional, more resilient, and capable of better performance. Increased performance translates into being faster and more efficient. Scalability is different from performance. Data volumes keep growing because companies don't throw data away while the business keeps generating more. Acquisitions are another factor: adding a new company or a new set of customers puts pressure on applications to scale. The same tried-and-true business logic now has to handle far more data. Modernized applications are better able to take advantage of computing and I/O resources.
Some people are going to wonder why modernization is a better idea than migration to another system.
Cain hears about this every day. In many cases, his team gets called in for this reason.
“The onus is on the IBM i team because this is their business, their area of expertise,” Cain says. But there is often a SQL Server development team and an Oracle team inside the organization.
"When the other camps (SQL Server and Oracle) are involved, they go to the meetings speaking in a modern language. They talk tables not physical files, indexes not logical files. They are also talking SQL not RPG chain or COBOL read/write. So right away it sounds like old versus new just in the conversation. And when the conversation opens up to topics like query optimization, clever data-centric techniques with SQL, and bringing more functionality to solve problems faster and more elegantly . . . the business person is going to infer that the new and exciting technology should be implemented rather than the old tired thing."
But that, as it turns out, is a bad assumption.
“The traditional IBM i community is in a bad position to make a good argument or have a valid debate because they don’t understand the features and functions and the nomenclature. It is a black box to them. They can’t talk about the features that are in the platform that they have, because they are not using them or aware of them.”
“There’s also the issue of performance. People connect to IBM i with ODBC or JDBC and run requests and it doesn’t perform very well. Instead of figuring out best practice for DB2, they say ‘just give me the data over on the SQL or Oracle side.’ So someone writes an RPG program to extract the data and move it to another platform where best practices are used to access the data. Controlling that data on the IBM i is now lost.”
Scalability comes into play as well: companies have grown their data sets while continuing to use the same RPG or COBOL logic, whether in batch or OLTP mode, and as a result it doesn't scale. When users can't get the throughput they need, they can't process records in the given amount of time. That traces back to the inefficient techniques being used.
Executives question whether this is a function of the system, or the database, or the application. Sometimes they decide it’s best to just start over, and that leads to looking at all the platforms available. And Cain says, “Then there’s a tendency to have amnesia in terms of the positive attributes of the IBM i.”
Executives have to get out the checkbooks either way, and they are going to introduce risk and do more work either way. What can be lost in the discussion is the central question: which platform and process actually meets the technical and business requirements?
Cain makes the point, when it's his advice that's being sought, that it's smart to rethink the IBM i platform as a database server, getting more out of it while preserving the positive attributes it has provided.
“The goal should be making a proper business decision,” he says. “If moving is best, then do it. But when the move is based on undiscovered information (also known as lack of knowledge, which some people might call ignorance), that may not be the best thing for the business.”
Finally, let’s get to the time and money part of this.
The standard caveat is that every situation is unique, but there are considerations that help formulate estimates.
It begins with analysis. Some companies can do this internally, while others have to hire the expertise. The initial analysis should take one person a week or two. It includes an assessment of questions such as: How many objects and tables need to be changed? What is the quality assurance process? How many applications need to be tested?
Costs vary according to the scope of the change, but in many cases where the project is tactical rather than strategic, Cain says, it can be accomplished in days or weeks rather than weeks or months. Other projects, which Cain refers to as strategic, can take many months or a year or more. It’s large scale and systematic.
System and database performance analysis are used to locate pain points. Those are the target areas for change. Determine which objects need new features and functions to deliver a business requirement. Make skills transfers part of the template for achieving other objectives. In other words, make this a learn-as-you-go procedure to gain additional efficiencies in the future.
Cain points out there will be net new processes that make use of existing information, but warns not to take for granted how those new processes will behave against the existing data set.
In strategic projects, companies identify a team leader and key people who can acquire skills and work the process through the organization. It might take a month or a year, but it's big and systematic. The tactical project takes steps to eliminate a pain point and get the fix in place. Usually one or two people are assigned to that area. They get guidance and education (maybe) and are able to make changes in a matter of weeks. The variables can swing time and money from one end of the pendulum to the other.
“It could be that no RPG applications are to be touched,” Cain explains. “Those apps need to work as they always have. But the underlying files must change because another app needs something in those files that’s never been there before.”
That sequence of events happens often, and it could take a week or two including making the changes and testing. The result is a new app gets an enhanced data set.
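One common way to pull that off is to add the new column with a default value and give the unchanged programs a view that preserves the original record layout. The sketch below illustrates the idea with sqlite3 standing in for DB2 for i; all names are hypothetical.

```python
import sqlite3

# Change the underlying file while existing applications keep working:
# a new column for the new app, a view preserving the old layout for
# the old apps. sqlite3 stands in for DB2 for i; names are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
cur.execute("INSERT INTO customer VALUES (1, 'Acme')")

# The new requirement: a column that has never been there before.
cur.execute("ALTER TABLE customer ADD COLUMN credit_limit REAL DEFAULT 0.0")

# Existing programs keep reading the original layout through a view.
cur.execute("CREATE VIEW customer_v1 AS SELECT id, name FROM customer")

old_shape = cur.execute("SELECT * FROM customer_v1").fetchone()
new_shape = cur.execute("SELECT * FROM customer").fetchone()
print(old_shape)  # prints (1, 'Acme')
print(new_shape)  # prints (1, 'Acme', 0.0)
```

The new application reads the enhanced table directly, while anything coded against the old layout is insulated from the change.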
The scalability and performance example can also be accomplished in weeks. It involves a circumstance where RPG is doing a big batch job with record-at-a-time processing, and it can't keep up; it can't scale. The fix is taking logic that lives primarily in the RPG program and reconstituting it as an SQL statement, or a series of SQL statements, where data-centric processing shows its advantages over record-at-a-time processing.
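The contrast can be sketched in a few lines. Below, sqlite3 stands in for DB2 for i, and the invoice table and business rule are hypothetical: first the record-at-a-time pattern, then the same work done as one set-based statement.

```python
import sqlite3

# Record-at-a-time versus set-based processing.
# sqlite3 stands in for DB2 for i; table and rule are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE invoices (id INTEGER, amount REAL, status TEXT)")
cur.executemany("INSERT INTO invoices VALUES (?, ?, 'OPEN')",
                [(i, 100.0) for i in range(10000)])

# Record-at-a-time: read each row, decide, update it individually --
# one operation per record, like an RPG read/update loop.
for (row_id,) in cur.execute(
        "SELECT id FROM invoices WHERE amount >= 50").fetchall():
    conn.execute("UPDATE invoices SET status = 'BILLED' WHERE id = ?",
                 (row_id,))

# Reset, then do the same work set-based: one statement, one pass,
# letting the database engine apply the business rule itself.
conn.execute("UPDATE invoices SET status = 'OPEN'")
cur.execute("UPDATE invoices SET status = 'BILLED' WHERE amount >= 50")

billed = cur.execute(
    "SELECT COUNT(*) FROM invoices WHERE status = 'BILLED'").fetchone()[0]
print(billed)  # prints 10000
```

Both versions produce the same result, but the set-based statement hands the repetition to the database engine, which is where the scalability gains Cain describes come from.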
To cost a project, the requirements need to be identified and prioritized.
Numerous factors go into determining the scope of the change. Quantifying can be done by noting the objects and tables that need to be changed and identifying the sizes of those tables and their usage patterns.
If logic needs to be changed in a program or a process, what is the scope of those changes? Will the logic be simplified or will it remain complex? Sometimes a lot of complex stuff can be simplified and it is easy to do. Pin down the variables. How many apps need to change? How much code needs to be written? How many stored procedures need to be created?
All of the above factor into the time and money that will be invested. It’s only a starting point for costing the job.
Some people will always throw hardware at a problem in hopes of solving it. Sometimes, Cain says, they are successful. But modernizing the data access is another way to do it and it’s a way to get more out of what is already available in the system.