IBM i Security Expert Interprets PCI and Multi-Factor Authentication
July 17, 2017 Dan Burger
With data security written boldly at the top of many organizations’ priority lists, the Payment Card Industry Data Security Standard (PCI DSS) is viewed as a top line defense against data breaches. Whether a company handles credit cards and is required to implement mandated security measures or uses the PCI standard as a best practices model, IT security gurus pay attention to the PCI DSS.
We are well beyond the realization that organizations need to be secure. The emphasis has clearly shifted to how organizations become secure: how to build and maintain a secure network, protect data, and regularly monitor and test those networks.
Last week, after reading Pat Botz’s blog, I contacted the IBM i security expert to talk about changes in PCI DSS, primarily regarding multi-factor authentication (MFA) and the ripple effect it will have in the IBM i Power Systems community.
MFA (also known as two-factor authentication) requires multiple types (or factors) of authentication. Typically, one of the factors is something you know, such as a user ID and password. At least one additional type – either something you have (a magnetic stripe card or digital certificate) or something you are (fingerprint, iris scan, and so forth) – must also be provided. Using the same factor twice (for example, two user IDs and passwords for the same person) is not multi-factor authentication.
PCI requires at least two factors.
“This is kind of a big deal because of the guidance for implementing MFA correctly,” Botz wrote in his blog. “Why is this so interesting now? Because currently there is no way to implement MFA – as recommended by PCI guidance – on IBM i!”
The PCI MFA standard says that all non-console administrative access to the cardholder data environment (CDE) and all remote access to the CDE requires MFA. PCI MFA implementation requires that both factors are authenticated before any indication of success or failure is returned to the user. In addition, only an indication that authentication failed – not which factor failed – can be returned.
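The rule that every factor must be evaluated before any result is returned can be sketched in a few lines. This is an illustrative Python sketch of the principle, not IBM i code; the verify helpers and their placeholder credentials are hypothetical:

```python
# Sketch of the PCI MFA rule: evaluate every factor unconditionally,
# then return only a single combined pass/fail result.
# The verify_* helpers are hypothetical stand-ins for real checks.

def verify_password(user, password):
    return password == "s3cret"      # placeholder password check

def verify_token(user, token):
    return token == "123456"         # placeholder second-factor check

def authenticate(user, password, token):
    # Evaluate BOTH factors with no short-circuit, so a failure
    # reveals nothing about which factor was wrong.
    password_ok = verify_password(user, password)
    token_ok = verify_token(user, token)
    return password_ok and token_ok  # single yes/no answer

print(authenticate("dan", "s3cret", "123456"))  # True
print(authenticate("dan", "wrong", "123456"))   # False
print(authenticate("dan", "s3cret", "000000"))  # False
```

The caller sees one boolean either way; a failed login looks identical whether the password or the token was bad.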
IBM i admins looking for the rose in the fisted glove of PCI DSS compliance can find it in the PCI guidance for implementation of MFA. First, the guidance says that if MFA is required to access the network, then accessing the CDE from that network can use a single factor as long as that factor meets certain requirements. Those requirements state that the single factor can be a password, provided it is not the same as the password used as a factor to access the network, or it can be a different type of factor (i.e., something you have or something you are).
With MFA guarding the entrance to the corporate network, organizations have two-factor authentication at the network level, which avoids the dilemma of not being able to put MFA on the IBM i.
The reason MFA can’t be implemented in a way compliant with PCI requirements (or NIST recommendations, for that matter) on IBM i, Botz says, is that the authentication is buried deep in the operating system. The flow of logic stops when a bad password is entered into the system. So, the only way to implement multi-factor authentication is through an exit point. Botz says exit points are called after the password is authenticated. If the password fails, the exit point never gets called. That leaks valuable information to attackers about which factor is failing and lets them know where to focus their reconnaissance.
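That ordering problem can be modeled abstractly. The sketch below is hypothetical Python, not actual IBM i exit-point code: because the OS validates the password first and only calls the exit program on success, a second factor hooked onto the exit point inevitably reveals which factor failed:

```python
# Hypothetical model of why an exit-point-based second factor leaks
# information: the OS authenticates the password FIRST, and the exit
# program (where a vendor would hook in a second factor) is only
# reached after the password succeeds.

def os_check_password(user, password):
    return password == "s3cret"              # stand-in for OS logic

def exit_point_second_factor(user, token):
    return token == "123456"                 # vendor-supplied check

def exit_point_style_signon(user, password, token):
    if not os_check_password(user, password):
        return "password failed"             # leaks WHICH factor failed
    if not exit_point_second_factor(user, token):
        return "second factor failed"        # leaks again
    return "signed on"

# An attacker who sees "second factor failed" now knows the
# password was correct -- exactly what PCI guidance forbids.
print(exit_point_style_signon("dan", "s3cret", "000000"))
```

Contrast this with the PCI-compliant pattern, where both checks run before any answer is produced and only a combined result is returned.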
“PCI is worried that if a company is using two factors to get into the network, one factor is a password, and that password is also the IBM i password, then an attacker who finds a path to the IBM i that doesn’t require going through the corporate network – for example, a Web service, Web application, and so on – would only need to compromise one factor,” Botz says. “In addition, the guidance says that the implementation must verify both factors ‘prior to the authentication mechanism granting the requested access. No prior knowledge of the success or failure of any factor should be provided to the individual until all factors have been presented.’ The key part of PCI that makes it impossible for any third party to implement MFA for the IBM i and have that alone be acceptable, is the requirement that both factors have to be assessed before any result is returned, so that no information about which factor was invalid is given to the user.”
If PCI compliance is not your concern, password synchronization is an option, but Botz suggests another plan that’s PCI compliant and, therefore, more difficult for a hacker to defeat.
That plan meets PCI guidelines and combines Kerberos and enterprise identity mapping (EIM) (a.k.a. Single Sign-On). It involves configuring Kerberos on the IBM i and the Windows domain so the IBM i can participate in the Windows domain authentication. EIM is configured on IBM i so that the IBM i can determine which user profile to use for the authenticated Windows user IDs.
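Conceptually, the EIM piece is a lookup from an authenticated enterprise identity to a platform-specific user. A minimal model of that mapping, with invented names rather than the actual EIM APIs or any real registry contents:

```python
# Minimal model of what an EIM lookup does: map an authenticated
# Kerberos principal to the IBM i user profile to run under.
# The table contents and names are invented for illustration.

EIM_REGISTRY = {
    "dburger@CORP.EXAMPLE.COM": "DBURGER",   # Windows principal -> profile
    "pbotz@CORP.EXAMPLE.COM":   "PBOTZ",
}

def map_to_profile(kerberos_principal):
    """Return the IBM i user profile for an authenticated principal."""
    profile = EIM_REGISTRY.get(kerberos_principal)
    if profile is None:
        raise LookupError("no EIM mapping for " + kerberos_principal)
    return profile

print(map_to_profile("dburger@CORP.EXAMPLE.COM"))  # DBURGER
```

The point is that the IBM i never sees a password here; it only decides which profile an already-authenticated identity maps to.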
Kerberos has a reputation for being complex. Botz, who has been working with Kerberos since 1984, says that reputation comes from a lack of knowledge.
“Anything that is not understood is harder to configure than something you do understand,” Botz says with a mouthful of understatement. “To understand Kerberos requires knowledge of the protocol, the Windows Domain Controller, and the IBM i.”
With Kerberos-based SSO, a user provides a user ID and password once when logging into the Windows domain. This generates a cryptographically secured ticket. Windows applications use the ticket to authenticate to the IBM i – passwords are never sent over the network. This works for all green-screen applications, Web services, and Web applications. The only exception is old client-server applications where the IBM i-based application server receives the password and authenticates it. Unless those applications have been updated to optionally accept Kerberos tickets, users will need to continue providing their IBM i user profiles and passwords.
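The key property – the password never crosses the network after the initial domain logon – can be modeled with a toy ticket exchange. This is a pure-Python simulation of the idea, not the real Kerberos protocol; the key and message format are invented:

```python
import hashlib
import hmac
import time

# Toy model of the Kerberos idea: after the initial logon, the client
# presents a cryptographically verifiable ticket, never a password.
# The shared key and ticket format here are invented for illustration.

SERVICE_KEY = b"shared-secret-between-kdc-and-service"

def kdc_issue_ticket(principal):
    """Simulated KDC: sign (principal, expiry) with the service's key."""
    expiry = str(int(time.time()) + 300)          # valid for 5 minutes
    body = principal + "|" + expiry
    sig = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "|" + sig

def service_accept_ticket(ticket):
    """Simulated server side: verify the signature, extract the principal."""
    body, _, sig = ticket.rpartition("|")
    expected = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid ticket")
    principal, _, expiry = body.partition("|")
    if int(expiry) < time.time():
        raise PermissionError("expired ticket")
    return principal                              # no password ever sent

ticket = kdc_issue_ticket("dburger@CORP.EXAMPLE.COM")
print(service_accept_ticket(ticket))  # dburger@CORP.EXAMPLE.COM
```

Only the signed ticket travels to the service; tampering with the principal invalidates the signature, and the password stays with the domain logon.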
Kerberos is a cross-platform protocol that most IBM i admins do not know or understand. It’s also a fair appraisal to note many IBM i admins would not recognize a Windows Domain Controller (WDC), let alone know how to configure one. And let’s guess what percentage of WDC admins know the difference between an IBM i, an AS/400 and a Star Trek control panel.
“Kerberos is not a user ID/password authentication mechanism and despite common beliefs, the ticket does not contain passwords,” Botz explains. “Therefore, using MFA to access the network – and then using SSO via Kerberos and EIM – meets the new PCI DSS guidance.”
Botz, also recognized as an SSO subject matter expert, doesn’t take credit for inventing anything. The technologies he talks about are all part of the out-of-the-box IBM i system.
With Kerberos-based SSO, a user is authenticated once at the beginning of the session and no longer needs to sign in to each application or system individually, as long as the system or app supports the Kerberos protocol.
Because many smaller companies have been overcome by PCI fumes, outsourcing all things related to PCI has become an almost common occurrence.
“It’s a very good option for companies that do not want to be burdened with the PCI-related work,” Botz says. “It makes financial sense for many shops that do smaller numbers of transactions. However, they cannot store any type of cardholder information without the PCI requirements becoming a factor. And not having access to the credit card data may be a marketing issue for some companies.”
From his perspective, Botz sees companies weighing the cost of changing all the processes and procedures they have in place for handling credit card transactions so that outsourcing can take place against the cost of becoming PCI compliant. He has helped companies analyze the decision to become PCI compliant or to outsource all credit card processing.