Testing For Security Inadequacies
April 12, 2016 Dan Burger
Organizations are often reluctant to admit that they have inadequate system security. Quite possibly they don’t even realize it. That’s not uncommon, particularly in the IBM midrange community, where a false sense of security is deeply rooted in the conviction that IBM i is inherently secure. Nothing could be further from the truth, and thinking any system is inherently secure is an equally foolish mistake. It’s all a matter of understanding how to manage security on a system.
The preconceived notion that the IBM i platform is some sort of superhero in a box is one of the biggest obstacles. It gets talked about a lot, but it is really a statement of potential rather than a standalone fact. We have to get past that. The security experts all agree that the box is only as secure as IT configures it to be. Companies that take no steps to use the security tools that come with the system are left without the protection they think they have. But if you’re reading anything about IBM i security these days, you’ve already been made aware of that.
Let’s shine the light into a few of the dark corners. I’ve talked with quite a few security experts during 20 years or so of poking around the IBM midrange. And I’ve found that Pat Botz holds one of the brightest flashlights. Botz is a former IBMer and a security architect for IBM i. He keeps busy now with a consulting business. He occasionally writes technical tips for IT Jungle’s Four Hundred Guru newsletter and is a session speaker at conferences such as the RPG & DB2 Summit and COMMON.
Botz and I started our conversation last week with a discussion about who has responsibility for security. Is it at the systems administrator level or the developer level? There are a few dynamics to take into account. For example, developers usually look to system admins to handle security. That’s pretty much a traditional division of labor and naturally follows the “we’ve always done it that way” line of thinking.
Keep in mind that at one time candles and torches were what we used to see in the dark. Then someone came up with an alternative that worked better.
Developers have a great deal of knowledge about the data in an application. They are familiar with all the pieces in the application and they know what those pieces do. When security is a priority, developers write programs that use tools, inherent in the system, to access data appropriately. Notice that it is the tools that are inherent in the system, not the security. And, of course, it is the use of a tool that creates security, not the tool’s existence.
The admin usually doesn’t know the intricacies of the program or how it was written. But the admin should be making sure the data in the application is not accessible to everyone and that exceptions to that are approved. Sounds simple, but then why isn’t everyone doing it? Existing applications, whether they are developed in-house or purchased off the shelf, can be changed to secure data.
All data on the system can be protected. But a poorly written application that requires public access makes it vulnerable.
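The idea of changing an application environment so data is no longer publicly accessible can be sketched in CL. The library, file, and profile names below are hypothetical; the point is simply to replace default public access with access by exception:

```
/* Deny the default public authority to a sensitive file.           */
GRTOBJAUT OBJ(PAYLIB/CARDDATA) OBJTYPE(*FILE) USER(*PUBLIC) +
          AUT(*EXCLUDE)

/* Grant read-only access to the one profile that needs it.         */
GRTOBJAUT OBJ(PAYLIB/CARDDATA) OBJTYPE(*FILE) USER(ARCLERK) +
          AUT(*USE)
```

Application programs can still reach the excluded data on behalf of other users by adopting their owner’s authority (compiling with USRPRF(*OWNER)), which is one way an existing application can be changed to secure data without rewriting it.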
Broadening the knowledge and skills of both the system admin and the developer and getting both on the same page is a great starting point for improving security. It will likely take some experimentation and some practice to develop a tactical plan and make use of the available tools. One source of information is the white paper Getting Started with IBM i Security.
Which Comes First, The Audit Or The Fear?
Auditors are supposed to be looking for these kinds of gaps in security and asking organizations to fix them, for reasons that range from personal information being snatched by those who know how to turn that data into cash, to cooked books that paint a company’s financial picture a little rosier than reality allows.
Despite what is widely recited about the ferocity of financial audits, the auditing process at IBM i shops is actually pretty lightweight. It generally focuses on whether users have limited capabilities and seldom touches on how applications are written, which is where the trouble lies. The limited capabilities placed on certain users may never actually be invoked, and users who were thought to be locked out may still retain the capability to execute commands. Cases where auditors look this deep are rare. And it’s not uncommon for auditors to accept what the IT director in an IBM i shop tells them rather than go looking for system security weaknesses themselves.
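To make that concrete: the limited-capability setting on a user profile (LMTCPB) only restricts what a user can type at a command line. Commands whose ALWLMTUSR attribute is *YES can still be run by a limited user, and some network interfaces don’t honor LMTCPB at all. A quick way to check is sketched below in CL; the profile and command names are hypothetical:

```
/* Show whether a profile is limited: look for                      */
/* "Limit capabilities . . . : *YES" in the output.                 */
DSPUSRPRF USRPRF(ARCLERK)

/* Show whether a command is still runnable by limited users:       */
/* look for "Allow limited users . . . : *YES".                     */
DSPCMD CMD(QSYS/SNDMSG)

/* Tighten a command so limited users cannot invoke it.             */
CHGCMD CMD(MYLIB/MYCMD) ALWLMTUSR(*NO)
```

This is exactly the kind of gap a lightweight audit misses: the profile checks out on paper, but the capability it was supposed to remove is still reachable.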
Even when an auditor does admonish a company for widespread and unnecessary authority privileges, administrators have been known to wiggle off the hook by expressing grave concern that “something is going to break” if changes are made, and that if changes do get made, things will get costly.
When things get costly, we hear about it.
IBM publishes an annual Cost of Data Breach Study that is independently conducted by the Ponemon Institute. Its report from 2015 noted the average cost of a corporate data breach is $3.8 million, or $145 for each lost or stolen record containing sensitive and confidential information. The related costs include recovering the lost data, security and infrastructure overhauls, and IT efforts diverted from other mission-critical tasks.
Identifying and understanding security gaps by way of a breach is a painful method of discovery. It’s a dark cloud with no silver lining. If you need some help finding security gaps, the white paper titled The Cost of a Data Breach will be helpful.
The sensitive data is most often privacy related, but it can also be business-advantage related. There’s a market for both. When a breach involves personal data, it is widely reported to include Social Security numbers, credit card numbers, home addresses, birthdates, bank account numbers, user IDs, and passwords. The theft of business-specific data, such as trade secrets, is also a growing concern. In either case, the incidence is rising.
Here’s another common occurrence that keeps organizations from discovering better ways of doing things.
Widely known as fear of the unknown, it manifests itself when a system admin backs away from making changes because of uncertainty about what the change may bring. It’s the choice of flight over fight. Rather than learning what a change will bring, the status quo is accepted while waving the nearly universal warning flag that change will impact the business. Whether it will or not is unknown, but the fear that it might makes the decision. Whether there are cost-effective ways of accomplishing the changes goes unexplored, with little effort put into investigation.
Security Is A Process
The problem is that security is an ongoing process, just like everything else, yet few organizations have a process at all. The IBM i is no longer an island with a very limited number of people having access. As the environment expands, the security process needs to expand with it. New functions have been continually added during the evolution of OS/400 to IBM i, and security tools tend to follow new functions. For instance, digital certificates and validation lists followed Web applications and Web servers on IBM i.
Tools are available to help manage the process. IBM has provided a great many and the IBM i vendor community has introduced many more. All the necessary pieces are there to create a Fort Knox type of security, but the tools have to be used in an appropriate way.
“If you take a new Power8 running IBM i and you put one file on it that contains credit card numbers, how does the system know who should have access to those numbers and who shouldn’t? It doesn’t,” says Botz to make his point that the machine is not inherently secure until the user configures it properly. To take on the topic of proper configuration, an article titled When Would You Notice an IBM i Security Breach? by security subject matter expert Robin Tatam should be on your recommended reading list.
“The only way to measure the effectiveness and efficiency of the security function in an operating system is to step back and evaluate what it cost to enforce the access requirements,” Botz says.
The costs are a result of the system overhead, including its security function, plus additional tools purchased from third-party vendors that make things easier to implement and manage.
“You should be able to see how a tool will allow you to do something more cheaply than you could do it without using that tool. Tools that allow you to enforce a requirement and manage the system over time are good tools,” Botz says. “Take into account the cost of the tool and its maintenance.”
Being secure means enforcing requirements that are identified and understood.
It’s the requirements that make a system secure, and requirements are defined by what’s being prevented and what’s being allowed.
No tool makes the system secure on its own. That’s like saying a screwdriver will fix your motor.