Kafka, JSON, DevOps: Future-Proof Your IBM i With Secure, High-Performance APIs
October 26, 2022 Daniel Magid
It is an exciting time to be working with an IBM i! The Rochester lab and IBM partners are rapidly pushing out new technology options that let you do anything with the IBM i that you can do on any other platform. The latest in web and mobile user interfaces, the most modern languages, comprehensive security, machine learning, data visualization, the Internet of Things, APIs – all are available to IBM i users. When you combine all that technology with the unmatched reliability, ease of management, and low cost of ownership of IBM i, you can be confident that your company can rely on IBM i as its core platform for many years to come.
The secret to making it easy to adopt all this technology is building an open connection strategy based on robust, resilient IBM i APIs.
The API-Driven Technology Revolution
APIs are the fundamental building blocks that allow IBM i users to quickly take advantage of emerging integration and modernization opportunities. Without requiring extensive changes or rewrites, APIs enable access to your existing RPG/COBOL/DB2 application functions and data from new technologies while providing the ability to reach out from the IBM i to outside applications and web services. They make it possible to combine the latest innovations with the proven RPG and COBOL applications upon which your business depends.
There are several things that make APIs so powerful:
“Loosely Coupled” Connections for Flexibility and Easy Maintenance
An API is an interface that operates on a “contract” basis. The agreement between the API producer and the API consumer is “if you send me this set of data, I will execute this business function with it and, if required, return to you this other set of data”. Neither the consumer nor the producer is constrained in what changes they make to their applications as long as they continue to support the contract. Since the API is independent of the business logic, the same APIs can be used by a wide variety of applications.
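The contract idea can be sketched in a few lines of Python. Everything here is hypothetical for illustration – the endpoint name, field names, and sample data are not a real IBM i interface:

```python
# Sketch of an API "contract": a fixed request shape in, a fixed
# response shape out. The producer can rewrite the underlying RPG
# program or database schema freely as long as this shape is preserved.

REQUIRED_FIELDS = {"customer_id"}

def lookup_orders(customer_id: str) -> list:
    # Placeholder back end; in practice this would call into an
    # RPG/COBOL program or run SQL against Db2.
    sample = {"C001": [{"order_id": "A-17", "total": 125.50}]}
    return sample.get(customer_id, [])

def get_open_orders(request: dict) -> dict:
    """Contract: given {"customer_id": ...}, return {"orders": [...]}."""
    missing = REQUIRED_FIELDS - request.keys()
    if missing:
        # Reject requests that break the contract instead of guessing.
        return {"error": f"missing fields: {sorted(missing)}"}
    return {"orders": lookup_orders(request["customer_id"])}
```

Because the consumer only depends on the request/response shape, the same endpoint can serve a web app, a mobile app, or another server without change.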
By connecting through an API layer, you eliminate the proliferation of individual application-to-application database connections. Your API server becomes the central switching station for all connections. You don’t need to worry about uncontrolled connections interfering with your ability to do database maintenance. Nor do you need to give direct access to your data tables – the API layer can ensure that users get access only to the data they absolutely need.
APIs allow outside applications to access data and functions in real time. Many IBM i users are replacing FTP file transfers, EDI, and other batch-oriented techniques for sharing data because those older methods leave users working with out-of-date data. In addition, file transfers require multiple copies of the same data, raising the possibility that the copies will get out of sync. Providing real-time access to the data means a single “source of truth” with up-to-date data for all users.
We recently worked with a company that wanted to allow their salespeople to create quotes and collect signed orders on a mobile tablet while onsite at a customer. They had dozens of salespeople in the field, all booking orders at the same time. The problem they faced was that multiple salespeople could accidentally sell the same inventory to different customers because they lacked real-time access to their inventory data. By providing an API to their inventory system, salespeople could see the status of the inventory in real time. When a salesperson books an order, the system immediately updates the status to reserved so no other salesperson can sell that inventory. In the real world, there are endless examples where this kind of real-time data access is critical to avoiding errors and ensuring efficient business operations.
Security and Authentication
Beyond authenticating potential users, the API layer can also limit outside developers to just the data they need to perform a specific operation. Too often, outside users are given direct SQL, ODBC, or JDBC access to entire tables. Using APIs, you can control access at a very granular level, without granting access to entire tables or views. For example, we worked with a customer who provided API access to their open orders. Customers accessing the system could see only their own orders, company salespeople could see all the orders for their customers, and company executives could see all orders for all customers. The API layer controlled each user’s access.
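That per-role filtering can be sketched in a few lines. The role names and record fields below are illustrative; a production API layer would derive the caller’s identity and role from an authenticated token, not a plain dictionary:

```python
# Sketch of role-based filtering in the API layer: each caller sees
# only the orders their role entitles them to.

def visible_orders(user: dict, orders: list) -> list:
    role = user["role"]
    if role == "executive":
        return orders  # executives see all orders
    if role == "salesperson":
        return [o for o in orders if o["salesperson"] == user["id"]]
    if role == "customer":
        return [o for o in orders if o["customer"] == user["id"]]
    return []  # default deny for unknown roles
```

Because the filtering lives in the API layer, no consumer ever needs direct access to the underlying order table.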
Resilience And High Performance
As business becomes increasingly dependent on API communications, it becomes critical that those APIs remain up, running and available. Consumers must be able to get their data quickly (no one wants to wait for their web page to load or for responses to their API calls) and producers must be able to handle high volumes of transactions. IBM’s support for Kafka provides a great platform for high speed, highly reliable APIs.
Kafka for High Performance, Highly Reliable APIs
Resilience and high performance are two reasons that Kafka is seeing increasing adoption on the IBM i. Kafka is an open source, event-driven, publish-subscribe (“pub/sub”) messaging platform that provides loosely coupled connections between a variety of message producers and consumers. If that sentence seems like a whole bunch of confusing jargon to you – read on!
According to the Apache Software Foundation, more than 80% of the Fortune 500 use Kafka. It is also in use at a large number of small to medium-sized businesses. They use it to ensure that their users get the best possible response time when accessing their applications and to handle the rapidly increasing volume of machine-to-machine communications.
So why use Kafka? The answer includes:
- Highly responsive user experience – millisecond response times even for large numbers of simultaneous requests
- Capacity to handle high volumes of requests – potential to process billions of transactions per day
- Resilience – replicated brokers keep your systems available even if an individual server fails
- Simple maintenance – no need to build and maintain multiple application-to-application integrations
Let’s look at a common sample use case:
As an example, many customers are using Kafka to integrate their IBM i applications with their e-commerce systems. They want real time sharing of transaction data among a variety of systems.
Let's say I am selling my products through my e-commerce website. I might want to take several actions when a customer is preparing to place an order (there certainly could be many more than these):
- Check the customer’s credit availability
- Check inventory for product availability
- Reserve the items in the order in my IBM i inventory immediately so the same inventory is not sold more than once.
- Get a shipping quote
- Generate a price quote
The traditional way to do this would be to write a separate direct integration between the e-commerce system and each of these back-end systems.
With Kafka, you avoid creating these direct integrations. The e-commerce system simply publishes each order inquiry to Kafka, and Kafka makes those records available to each subscribing application. This is why Kafka is called an event-driven, “pub/sub” (publisher/subscriber) application. When an event occurs (an order inquiry is submitted), Kafka publishes a message that is accessible to each subscriber. The publisher does not need to know the subscribers and the subscribers do not need to know the publisher. The API layer and Kafka control access and authentication.
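The pub/sub flow described above can be modeled in a deliberately tiny in-process sketch. Real Kafka adds partitions, offsets, persistence, and replication on top of this pattern; the topic name and handlers below are invented for illustration:

```python
from collections import defaultdict

class MiniBroker:
    """A toy in-memory model of the publish/subscribe pattern."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher never names its consumers; every subscriber
        # to the topic receives the event.
        for handler in self._subscribers[topic]:
            handler(message)

broker = MiniBroker()
received = []
# Two independent subscribers to the same order-inquiry events,
# standing in for the credit-check and inventory systems.
broker.subscribe("order-inquiries", lambda m: received.append(("credit", m)))
broker.subscribe("order-inquiries", lambda m: received.append(("inventory", m)))
broker.publish("order-inquiries", {"customer": "C001", "item": "WIDGET-9"})
```

Note that the e-commerce publisher’s code would not change if a shipping-quote subscriber were added tomorrow – that is the loose coupling the article describes.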
That kind of loosely coupled architecture not only eliminates the need to create multiple individual integrations, but also means you can maintain the e-commerce applications and the back-end applications separately without worrying about breaking the connection. In a Kafka environment, the same e-commerce integration becomes a single publisher feeding multiple independent subscribers.
To ensure reliability and high performance, Kafka can automatically replicate incoming messages across multiple Kafka brokers. This protects you from downtime and from losing data if a Kafka server has a problem: incoming requests are simply routed to a running broker. And since these replicated brokers can run on different systems, you also get practically unlimited scale in the volume of messages you can handle. There are Kafka users handling billions of messages per day. (LinkedIn, the original developer of Kafka, recently surpassed 7 trillion messages per day.)
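Replication is configured per topic when the topic is created. As an assumed example using the `kafka-topics.sh` tool that ships with Apache Kafka (the broker address, topic name, and partition count are placeholders):

```shell
# Create a topic whose messages are replicated to 3 brokers, so the
# loss of any single broker loses neither data nor availability.
bin/kafka-topics.sh --create \
  --topic order-inquiries \
  --partitions 6 \
  --replication-factor 3 \
  --bootstrap-server broker1:9092
```

With a replication factor of 3, two brokers can fail before the topic becomes unavailable.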
So, the advantages of using Kafka are:
- It is extremely fast
- It can support many-to-many application connections without writing multiple integrations
- Kafka connections are easy to maintain
- It protects you from unexpected downtime
- It can handle extremely high volumes of requests
There is still one challenge to using Kafka on the IBM i: how do you get messages, which are typically in open formats such as JSON, to and from formats that the IBM i understands? To avoid losing all the performance and reliability advantages of Kafka, you need something that is scalable and resilient and can perform the data transformations at the speed of Kafka.
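To make the transformation problem concrete, here is a small Python sketch that converts a JSON message into the kind of fixed-width record layout a legacy RPG program might expect. The field names and column widths are invented for illustration; this is not Eradani Connect’s actual implementation:

```python
import json

# Hypothetical record layout: (field name, column width).
LAYOUT = [("order_id", 10), ("customer", 10), ("qty", 5)]

def json_to_fixed(payload: str) -> str:
    """Flatten a JSON message into one fixed-width record."""
    data = json.loads(payload)
    parts = []
    for field, width in LAYOUT:
        value = str(data.get(field, ""))[:width]  # truncate overlong values
        parts.append(value.ljust(width))          # pad to the column width
    return "".join(parts)
```

Doing this translation outside the IBM i, at Kafka speeds, is exactly the work the article says the transformation layer takes on.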
At Eradani, we’ve solved that problem by encapsulating the transformation code in a high-speed layer within Eradani Connect. This not only ensures extremely fast transformations, but also avoids placing a huge transformation processing load on your IBM i.
Without Kafka and Eradani Connect, your IT staff has to build and maintain many individual system-to-system integrations. When you make a change, you must update all of the integrations. Programmers have to write the code to secure your APIs, and they have to translate open message formats like JSON, XML, comma-delimited files, and others into formats the IBM i understands. With Kafka and Eradani Connect, most of that work is done for you.
Managing API Development With Open Source DevOps
Once you begin developing APIs and integrating applications built with different technologies, it can be useful to manage all of that development using a single set of management tools.
Managing And Monitoring API Performance
Once you have implemented secure, high speed, resilient APIs, you should monitor them and ensure they are performing satisfactorily. That can be done through an API monitoring dashboard that tracks all API activity.
Future Proofing Your IBM i Platform
Daniel Magid is founder and chief executive officer at Eradani.
This content is sponsored by Eradani.