API Operations Management for Safe, Powerful, and High Performance APIs
May 18, 2022 Daniel Magid
Now that API enablement for IBM i applications has become mainstream, IBM i users are starting to ask how best to manage API operations. To get the most out of your APIs and to provide your API users with the best possible experience, you must:
- Ensure your APIs are up, running and providing rapid responses.
- Protect your APIs from attacks, misuse and faulty calls coming from API end users.
- Have a strategy for creating, deploying and maintaining your APIs over time.
In other words, you need to make sure that your APIs deliver the same reliability, security, performance and simplicity that IBM i users are accustomed to getting from their IBM i systems.
Fortunately, there are a lot of tools and techniques that can help you implement a comprehensive API management strategy. Let’s take a look at some of the things you can do.
There is an old saying that “you cannot expect what you cannot inspect.” You can’t manage things you can’t measure or monitor. So, if you want to ensure great API performance, the first thing you need is a way to monitor your APIs. API monitors come in two flavors: active and passive. Using both together provides the kind of comprehensive and timely information necessary to keep your API consumers satisfied and your systems protected.
Passive API Monitoring
Passive API monitors track all API activities and provide reports and dashboards that keep you up to date. Here is an example:
This dashboard provides an up-to-date summary of API performance. You can see the current response times users are getting (Event Loop Lag), the resources your APIs are consuming (CPU and Memory) and the number of API calls that produce errors rather than successful results. One number to pay attention to is the “Apdex Score”. Apdex is an open standard for measuring overall user satisfaction with application response time. The formula for calculating Apdex is:

Apdex = (Satisfied Count + (Tolerating Count / 2)) / Total Samples
In this equation Satisfied Count equals the number of users who received a response within your ideal defined response time objective. Tolerating users are users who received a response outside of the ideal response time but within a tolerable amount of time. Frustrated users are users who received a response outside the tolerable timeframe. In this model, you define the ideal and tolerable response timeframes. The Apdex number then gives you a quick score to understand your overall API performance according to your objectives.
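As a concrete illustration, here is a minimal Python sketch of the Apdex calculation. The function name and parameters are ours, not from any particular monitoring product; it follows the usual Apdex convention that the tolerable threshold is four times the target response time.

```python
def apdex(response_times_ms, target_ms=500):
    """Apdex = (satisfied + tolerating/2) / total.

    Satisfied:  response within target_ms.
    Tolerating: response within 4x target_ms (the standard Apdex convention).
    Frustrated: everything slower than that.
    """
    total = len(response_times_ms)
    if total == 0:
        return 1.0  # no samples, nothing to complain about
    satisfied = sum(1 for t in response_times_ms if t <= target_ms)
    tolerating = sum(1 for t in response_times_ms if target_ms < t <= 4 * target_ms)
    return (satisfied + tolerating / 2) / total
```

With a 500 ms target, for example, samples of 100, 200, 600 and 3,000 ms score (2 + 0.5) / 4 = 0.625.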
The dashboard allows you to quickly zero in on any errors that might be occurring. You can see the number of API responses that return a success status (2XX) vs. those that return a redirection (3XX) or an error (4XX and 5XX). You can then drill into the error reports to see each specific error for troubleshooting.
When you need more granularity, you can see all of the same statistics for every API endpoint. With this information, you can quickly focus on any APIs that might be problematic.
You can also monitor how much data is being transferred via each endpoint, how many calls are coming from different users or different IP addresses, where delays are being introduced into API processing, and lots of other useful information. Passive API monitors provide a treasure trove of actionable data about your APIs.
Active API Monitoring

Passive monitors are great for keeping you up to date on how users are experiencing your APIs, but active monitors help you identify potential problems before a user is impacted. Active monitors periodically “ping” your API endpoints to check on their health. The active monitor can alert you if an API is unavailable or if it is taking an unacceptable amount of time to respond. It will track those statistics over time so you can isolate and remediate any problematic APIs.
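An active health check can be sketched in a few lines. In this illustrative Python snippet, the `probe` callable is a stand-in for whatever actually contacts your endpoint (an HTTP GET, for instance); the names and thresholds are ours, not from any particular product.

```python
import time

def check_endpoint(probe, timeout_s=2.0):
    """Run one health check and classify the result.

    `probe` is a stand-in for the real call to the endpoint (e.g., an
    HTTP GET). It returns the HTTP status code, or raises on failure.
    """
    start = time.monotonic()
    try:
        status = probe()
    except Exception as exc:
        return {"healthy": False, "status": None, "error": str(exc)}
    latency = time.monotonic() - start
    return {
        "healthy": status == 200 and latency <= timeout_s,
        "status": status,
        "latency_s": round(latency, 3),
    }
```

A scheduler would call this on an interval and raise an alert when `healthy` flips to False or latencies trend upward.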
For more information on API health checks, read our recent blog at: https://eradani.com/eradani-blog/.
Securing Your APIs
API operations management can be a critical piece of your API security efforts. Your API code should be using the latest techniques for controlling access to your APIs (e.g., JWTs, OAuth, etc.), but you can also protect your systems via API operations.
You can use the data coming from your API monitor to control the flow of calls to your APIs. This not only allows you to manage the resources that API calls are consuming (more on that later) but it can also help ensure that your APIs are not a vector for Distributed Denial of Service (DDoS) attacks. In a DDoS attack, a malicious user attempts to overwhelm your system with API calls thus denying valid users access to your applications. You can use API throttling to limit the number of calls coming into your system from any source. You can limit calls by API endpoints, users, originating IP address, API groupings, applications and other criteria.
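One common throttling technique is the token bucket. The Python sketch below is illustrative (the class and parameter names are ours, not from a specific API gateway): it limits each originating IP address to a sustained call rate while allowing a small burst.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: `rate` tokens/second, burst of `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # start each client full
        self.stamp = defaultdict(time.monotonic)      # last refill time

    def allow(self, client_ip):
        """Return True if this call is permitted, False if it should be throttled."""
        now = time.monotonic()
        elapsed = now - self.stamp[client_ip]
        self.stamp[client_ip] = now
        # Refill tokens earned since the last call, capped at capacity.
        self.tokens[client_ip] = min(
            self.capacity, self.tokens[client_ip] + elapsed * self.rate
        )
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False
```

With `rate=1, capacity=2`, a client can burst two calls and is then held to one call per second; keying the buckets by user, endpoint, or application instead of IP address works the same way.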
IP Address White/Black Listing
You can configure your API monitors to block specific IP addresses or to only accept API calls from specific IP addresses. These techniques can help ensure that you are only getting API calls from known, trusted users. They can also help prevent DDoS attacks by eliminating the ability of anonymous users to reach your system.
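Python's standard `ipaddress` module makes allow/block list checks straightforward. The networks and addresses below are placeholders for illustration only.

```python
import ipaddress

# Placeholder rules: accept only these networks, always refuse this address.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.1.0/24"),
]
BLOCKED_ADDRESSES = {ipaddress.ip_address("10.9.9.9")}

def is_permitted(client_ip):
    """Block list wins over allow list; anything unlisted is refused."""
    addr = ipaddress.ip_address(client_ip)
    if addr in BLOCKED_ADDRESSES:
        return False
    return any(addr in network for network in ALLOWED_NETWORKS)
```

A check like this would typically run in middleware, before any application code sees the request.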
Eliminating the need to use native IBM i credentials to access an API is another good technique for protecting your IBM i. API access should be configured to use modern authentication methods like JSON Web Tokens (JWTs) and third party verification systems like OAuth. That way, even if a malicious actor was able to discover the credentials or spoof an authorized user, they would not have native access to your IBM i.
You can use APIs to add multifactor authentication (MFA) to IBM i access. Since many attacks originate from compromised machines on your network, it can be a great safeguard to add MFA to even ordinary functions like signing onto a 5250 session. For more information on securing your APIs and using APIs for IBM i security, check out these blog posts: https://eradani.com/2022/02/18/help-ive-been-hacked/, https://eradani.com/2021/12/16/keeping-your-ibm-i-safe-in-the-face-of-attacks-like-log4shell/, https://eradani.com/2021/06/16/securely-add-apis-to-your-ibm-i-applications/
Managing API Call Volume/Monetizing APIs
Rate Limiting APIs
Since APIs give external users access to your IBM i resources, you run the risk that they can impact your system performance by overusing the APIs. Many customers have experienced the problems that result when an API consumer accidentally writes an infinite loop into their API-calling program, resulting in an overwhelming number of unnecessary calls. You can use API rate limiting and alerts to ensure you do not fall victim to that kind of mistake.
API Rate Limiting allows you to place limits on the number of API calls and the amount of data that can be transferred via your APIs. The limits can be set by user, API endpoint, IP addresses, and other criteria. If a user exceeds their limit, you can have the system throttle or completely prevent their access or you can have it simply send you a notification. You can use Rate Limiting as a method to monetize your APIs. It allows you to charge fees for API access based on the volume of API calls.
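A per-user daily quota is one simple way to implement rate limits that map onto pricing tiers. This is an illustrative sketch; the tier names and limits are hypothetical.

```python
from collections import defaultdict
from datetime import date

class DailyQuota:
    """Count calls per account per day against a plan limit.

    `limits` maps hypothetical plan names to daily call allowances,
    e.g. {"free": 1000, "pro": 100000}.
    """

    def __init__(self, limits):
        self.limits = limits
        self.counts = defaultdict(int)

    def record_call(self, account, plan):
        """Return True if the call is within quota; False if it should be
        throttled, billed at an overage rate, or flagged for an alert."""
        bucket = (account, date.today().isoformat())
        if self.counts[bucket] >= self.limits[plan]:
            return False
        self.counts[bucket] += 1
        return True
```

Billing by volume then becomes a matter of reading the same counters at the end of the period.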
You can also use Rate Limiting to control the number of API calls you make from your IBM i. If you are accessing an API from a provider that charges fees per call, or if the provider enforces a hard limit on the number of calls you can make before you lose access, you can cap the number of calls you make to their API. This will protect you from unexpected expenses and from losing API access at an inconvenient time.
If you are initiating API calls from your IBM i to another system, you might want to automate those calls. For example, we talked with a company that wanted to synchronize customer information updates they were making on their IBM i with the information in their cloud-based Salesforce database. They were currently transferring files of changed data on a nightly basis to make the updates. However, that meant that for a period of time, the two systems were out of sync. We recommended that they call a Salesforce API via a trigger program on the customer master file that would send the updates to Salesforce as they were made on the IBM i. That way the systems would always be in sync. Salesforce would even send back an acknowledgement that the update was successful. The API automation system would queue the requests if the Salesforce API was unavailable and run them later so that no updates would be lost.
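The queue-and-retry pattern described above can be sketched like this. The `send` callable stands in for the real HTTP call to the remote API (e.g., a Salesforce REST endpoint, in the scenario above); when the remote side is unavailable, updates simply wait in the queue until a later flush succeeds.

```python
from collections import deque

class SyncQueue:
    """Queue outbound updates for a remote API and retry failures later.

    `send(record)` is a stand-in for the real HTTP call; it returns True
    when the remote system acknowledges the update, False when it is
    unavailable or rejects the call.
    """

    def __init__(self, send):
        self.send = send
        self.pending = deque()

    def on_row_changed(self, record):
        """Called from the trigger path when a row changes on the IBM i."""
        self.pending.append(record)
        self.flush()

    def flush(self):
        """Deliver queued updates in order; stop at the first failure so
        no update is lost and ordering is preserved."""
        while self.pending:
            if not self.send(self.pending[0]):
                break  # remote unavailable: keep the record for later
            self.pending.popleft()
```

A scheduler or the next trigger invocation would call `flush()` again to drain whatever accumulated while the remote API was down.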
Other customers have added polling APIs to gather information from outside sources on a scheduled basis. We have worked with trucking companies that needed to get data from mobile devices used by drivers. Their IBM applications would periodically call APIs provided by the mobile application software to get the latest data updates. They were able to automate what had been an onerous manual task.
Organizing APIs so You Can Find Them When You Need Them
One of the problems of a growing API base is finding the right API when you need it. It is important to avoid the situation where a developer decides it is easier to create a new API rather than reuse an existing one. Establishing a well thought out structure for organizing APIs by business area or purpose will make it easy for developers to find and use them.
Promotion and Deployment
Just like your other application changes, APIs need to move through testing stages before being deployed to production. At each stage of the lifecycle (e.g., Dev, Test, QA, User Acceptance, Production), the build results need to be deployed to the appropriate servers and IBM i libraries. It is critical that the API accesses the IBM i resources using the appropriate library list for that stage. Managing your API development with a single set of DevOps tools or an integrated set of tools can help you avoid errors and production outages by ensuring that the APIs and the core IBM i business code changes stay in sync.
In many cases, APIs are built for machine-to-machine communication. There is no GUI or green screen interface. To test them, developers must write scripts to mimic how the consumers will access the APIs. The scripts must include valid tests as well as tests that exercise the error handling capabilities of the APIs. If you are expecting large volumes of API calls, it is important to perform load testing to ensure that API calls will be handled in a timely fashion.
Open source testing tools can test both the function of your APIs and their ability to handle projected API call volumes. Using a load testing tool (there are many open source options with fun names like Locust, Fiddler, The Grinder, Gatling… : https://testguild.com/load-testing-tools/) can automate the entire testing process.
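Even without a dedicated tool, a rough load test takes only a few lines. In this sketch, `call` is a stand-in for issuing a real HTTP request to an endpoint; the harness fires it from a thread pool and reports latency statistics.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(call, total_requests=100, concurrency=10):
    """Fire `call()` concurrently and report latency stats in seconds."""
    def timed(_):
        start = time.monotonic()
        call()  # in practice: an HTTP request against the API under test
        return time.monotonic() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed, range(total_requests)))
    return {
        "mean": statistics.mean(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
        "max": latencies[-1],
    }
```

Dedicated tools add the parts this sketch omits: ramp-up schedules, distributed load generation, and pass/fail thresholds wired into your pipeline.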
Whether you are providing API access to outside users or to internal users of other applications, you need to ensure that everyone knows which version of the API is current. If an API change will require consumers to change their systems, it is important to notify them of the coming change. Typically, you will want to continue supporting the old version for some period while users move to the new version, and then deprecate the old version over time. To make this easy for your customers, you should include a version identifier in your API names (for example, a /v1/ or /v2/ path segment) so users can control which version they are accessing. Keeping your users informed of your update schedules will ensure they remain happy consumers of your APIs.
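Path-based versioning can be as simple as routing on the version segment. This toy dispatcher (the handler names and payloads are invented for illustration) shows how an old and a new version of an endpoint can run side by side during a deprecation window.

```python
# Hypothetical handlers: v1 keeps the legacy response shape while
# consumers migrate; v2 is the current version.
HANDLERS = {
    ("v1", "orders"): lambda: {"format": "legacy"},
    ("v2", "orders"): lambda: {"format": "expanded"},
}

def route(path):
    """Dispatch paths of the form /v{n}/{resource} to the right handler."""
    _, version, resource = path.split("/", 2)
    handler = HANDLERS.get((version, resource))
    return handler() if handler else {"error": 404}
```

Retiring v1 later is then a one-line change that turns /v1/ calls into explicit errors rather than silent behavior changes.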
It’s exciting to see how many IBM i users are opening up their systems to the API economy. It guarantees that the IBM i will remain an important platform in our increasingly integrated IT universe. By implementing an API operations strategy, you can ensure that you will be providing a safe, powerful, and high performing API environment. If you would like to learn more about API Operations, check out our website at www.eradani.com or contact us at firstname.lastname@example.org.
Daniel Magid is founder and chief executive officer at Eradani.
This content is sponsored by Eradani.