It Is Time To Have A Group Chat About AI

    January 23, 2023 Timothy Prickett Morgan

    The first rule of any technology is that it can be used for good or for evil, but generally it is used for something vaguely in between. The second rule, rarely invoked, is that some technologies need to be tightly controlled because of the global-scale damage they can cause. Nuclear fission and nuclear fusion come immediately to mind. And so does the special branch of machine learning called deep learning, which most people call AI training these days.

    I can’t remember precisely when I first started writing about AI training and the neural networks and frameworks underneath them, but I wrote up the Watson Jeopardy! grand challenge in April 2009, when Big Blue threw down the gauntlet to the human players, and I was at IBM Research in January 2011 when David Ferrucci unveiled the Watson DeepQA system to Wall Street – a month before Watson took on human champions on the Jeopardy! game show.

    At the time, I half jokingly – but also half seriously – asked Ferrucci if I could come to IBM Research with everything I ever wrote and create a virtual TPM to do my job for me. He laughed and politely demurred.

    On that same visit, I also told one of the researchers who created some of the core statistical algorithms that gave Watson its human-like knowledge processing that if the world ever went all Terminator and we needed to go back in time to kill someone, he was the one who would have to be killed. I was kidding, and we all had a good laugh, but just the same, all the color drained out of his face. Hopefully, that was not prophetic.

    A year later, AI researchers had created a new and different breed of convolutional neural networks (CNNs) that ran on massively parallel GPU engines and tackled the ImageNet image recognition challenge. At that time, the machines could not yet match human experts at image recognition. But it took less than three years for them to do so. And by the summer of 2015, I was sitting in the deep learning tutorial at Hot Chips 27 when I saw the first hint that my joke about replacing myself with a machine – and perhaps you as well, RPG programmers of the world – might actually be prophetic.

    Roland Memisevic, then an assistant professor at the University of Montreal, one of the hotbeds of AI research, and now a senior director at chipmaker Qualcomm, gave a presentation that showed something I had been expecting and fearing just a bit. (You can see it here and get the PDF of it there.) In that presentation, Memisevic showed recurrent neural networks (RNNs) that, trained on a corpus of data pulled from the Internet, could take a single sentence and craft a story backward from that sentence to a beginning and forward to an ending. In the next part of the presentation, he showed the same RNN generating a program from one line of code, backwards and forwards. (It might have been Python. I am not sure.) The RNNs Memisevic was showing were tiny compared to today’s GPT-3.5 model, which has over 175 billion parameters, underlies the ChatGPT application that has taken the world by storm this month, and has made its developer, OpenAI, a household name.

    GPT-3.5 is a natural language processing model developed by OpenAI, and it is an example of what is called a foundation model or a large language model (LLM). ChatGPT is a chatbot front end on GPT-3.5, and it can write text as well as code reasonably well some of the time, and about half the time it generates something between gobbledygook and crap. The basic idea with these foundation models is that you tell them what to do and they do it, based on a massive statistical analysis of the relationships in the text of the dataset you train them on. That is what the parameters are: the model learns statistical weights that allow it to figure out what to do, in much the same manner that Watson translated a voice statement into text and ran statistical analysis over the 200 million pages of text in its corpus to formulate a Jeopardy! question. These neural networks are a trick of statistics and data retrieval that simulates, mimics, or replicates human imagination. (You pick one. I can’t tell. I don’t feel like a statistical trick as I write this, but maybe I’m wrong.)
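    To make that "statistical trick" concrete, here is a minimal sketch of the idea in Python. It is a toy bigram counter, nothing remotely like the transformer architecture behind GPT-3.5, and the tiny corpus is made up for illustration, but the spirit is the same: learn weights from the statistics of text, then generate new text from those weights.

```python
# Toy illustration of "statistical weights over text relationships":
# count which word follows which, then emit the most likely next word.
# This is a bigram counter, not GPT-3.5, but it shows the basic move of
# learning statistics from text and then generating from them.
from collections import Counter, defaultdict

corpus = (
    "the program reads the file and the program writes the report "
    "the operator runs the program and the operator reads the report"
).split()

# "Training": tally how often each word follows each other word.
weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def generate(seed: str, length: int = 8) -> str:
    """Greedily emit the most probable next word, starting from seed."""
    words = [seed]
    for _ in range(length):
        followers = weights.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the program reads the program reads ..."
```

    Scale the corpus up to a good chunk of the Internet, swap the bigram table for 175 billion learned parameters, and you have the rough shape of what ChatGPT is doing when it answers you.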

    Beware ChatRPG – Or Not

    OpenAI is keeping very tight control over the GPT-3.5 model, but let’s do a thought experiment here. Let’s assume that the kind of language translation capabilities in OpenAI’s GPT-3.5, Google’s BERT, Nvidia’s NeMo Megatron, and other LLMs were available for us to use, and that we had access to the compute capacity – and to the money – to train a model. Or that we got a pretrained model that is expert at language translation and pruned it and tuned it to do application code translation. We create something called OpenRPG, a foundation model trained on all of the end user source code we can gather, and we teach it not only how to write RPG code, but also how to convert fixed-format ILE RPG to free form RPG, or RPG II or RPG III to free form RPG. Or we call it OpenIBMi and we let it convert from legacy RPG to any combination of languages. If we gave it enough actual code, could we solve the application modernization problem on IBM i through a different kind of automation? If we gave it a best practices coding engine, from say ARCAD, would that help?
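    If such an OpenRPG model existed, driving it would look less like running a compiler and more like writing a prompt. Here is a minimal sketch of what that might look like, assuming a hypothetical complete() function standing in for whatever API a pretrained code-translation model would expose; the function, the fixed-format RPG snippet, and the prompt wording are all invented for illustration and are not a real OpenAI, IBM, or ARCAD interface.

```python
# Hedged sketch of driving a hypothetical "OpenRPG" translation model.
# complete() is a stand-in for whatever endpoint such a model would expose;
# nothing here is a real vendor API.
import textwrap

def complete(prompt: str) -> str:
    """Placeholder for a call to a pretrained code-translation LLM."""
    # In a real setup this would send the prompt to the model and return its text.
    return "// model output would appear here"

FIXED_FORMAT_RPG = textwrap.dedent("""\
    C                   EVAL      TOTAL = QTY * PRICE
    C                   IF        TOTAL > LIMIT
    C                   EVAL      FLAG = '1'
    C                   ENDIF
""")

prompt = (
    "Convert the following fixed-format RPG to free-form ILE RPG, "
    "keeping variable names and logic unchanged, and following shop "
    "best practices for indentation and comments:\n\n" + FIXED_FORMAT_RPG
)

free_form = complete(prompt)
print(free_form)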

    This may seem absurd, but there is a company named Jasper that has access to trained LLMs, already has nearly 100,000 customers – and a market valuation of over $1.5 billion – and is using these models to kick out product manuals in multiple languages based on product specs, as well as to generate product marketing materials and web pages for companies all over the world.

    OK, so that explains that. Now we all know how those terrible manuals happened. . . .

    But seriously, Jasper co-founder Dave Rogenmoser said that its current models (not based on GPT-3) can get about 70 percent of a manual done instantly, with the remaining 30 percent done by a human. The company is moving to wafer-scale supercomputer systems from Cerebras and to the GPT-3.5 LLM behind the ChatGPT chatbot, and it obviously hopes to get better results as it scales up its AI processing and its business.

    I will remind you that in 2010, when machine learning applications were first tackling the ImageNet image classification challenge, the error rate for the machines was 28 percent. By early 2015 they had crossed below the 5 percent human error rate, and they have long since dropped far, far below that level, approaching damned near perfect. It is not going to be too long before these LLMs will be able to create programs based on best practices coding techniques for specific languages with well over 95 percent accuracy. Ditto for writing content on the Internet.

    We are all going to have to up our game in uniqueness and synthesis, and this is going to be a John Henry moment for all of us who have not already been replaced and displaced by the Industrial Revolution. Now, here in the late Digital Age, the steam drill is a synthetic neural network and the sledgehammer you have is your own brain. Who knows precisely how this is all going to play out? All I know for sure is that someone is going to make some money off this, and it is probably not going to be me or you.

    There is an outside chance I am wrong about how AI will displace so many people, but if I am, it will be an error of timing, not of kind. I hope it takes a lot longer than I expect, and I hope so for both our sakes.

    One last thing. AI technology is a kind of nuclear bomb, and to be precise in the metaphor, it is like a neutron bomb: It kills the people, but it leaves the buildings standing. Human beings are dicey at best with collective action, but they are a self-preserving lot for the most part. And in that spirit, we need to start seriously considering how to regulate the use of AI technology – and long before it is able to replace the functions of tens of millions to billions of people.

    We all can’t just sit around watching media, playing games, and eating fatty sludge out of a bucket paid for by a universal basic income. That is not a life with meaning.

    Earth should be about cultivating lives with meaning – somewhere around 8 billion of them, in fact. Earth is for people, and people also need to take care of the Earth and all that is on it. We are at the top of the food chain and the highest-level being on the planet (thus far) and it is up to us to make sure that Earth works. I neither want nor need AI to replace us all, and I especially do not need to make Google, Microsoft, Amazon, Facebook, Alibaba, Baidu, Tencent, ByteDance, and Apple richer and more powerful than they already are.

    This is about more than the Super 9 or even any particular nation and its government. This is something that is about all of us, and we all have an equally valid say by virtue of our natural endowment. We were endowed by our Creator with certain inalienable rights, after all, and we had better wake the hell up if we want life, liberty, and the pursuit of happiness.

    It is time to have a little family chat of our own, people.

    RELATED STORIES

    Is That What You Think Modernization Means?

    Modernization Trumps Migration for IBM i and Mainframe, IDC Says

    On the Spectrum of Application Modernization

    Debunking Modernization Myths

    Most App Modernization Projects a Struggle, Survey Finds

    Modernization Starts with the Business, and the Tech Follows

    The Great Resignation Intersects Application Modernization And Digital Transformation

    Want to Modernize? Great! Now Get to Work

    Technical Debt: The Silent Killer

    Planning A Modernization Project? Read This First

    Thoroughly Modern: Making Quick Wins Part Of Your Modernization Strategy

    So You Want To Do Containerized Microservices In the Cloud?

    Thoroughly Modern: Innovative And Realistic Approaches To IBM i Modernization


    Tags: AI, ChatGPT, GPT-3, IBM i, ILE RPG, OpenAI, OpenIBMi, OpenRPG, RPG, RPG II, RPG III



    2 thoughts on “It Is Time To Have A Group Chat About AI”

    • Paul Houston Harkins says:
      January 23, 2023 at 10:31 am

      Great job Timothy Prickett Morgan !!!

      This seems to me to be about halfway (or less) to the already available Real-Time Program Audit (RTPA) for IBM i software (www.realtimeprogramaudit.com), which already translates and enhances IBM i RPG, COBOL and CLP source programs into streaming video and fully audited (as in a security camera) programs, with instantaneous output for AI beyond what you just illustrated, and could save IBM i companies billions of dollars annually.

      RTPA could be easily and quickly enhanced for any compiled programming language, including IBM System z Enterprise COBOL and JCL, and for C++ as in this YouTube video:

      Multiply IBM i Programmer Productivity, Capability and Value with streaming video and automation

      https://youtu.be/Kv0L3e9Wqh4

      Your blue sky is already here, and much more, largely because of your great vision and efforts in the past with ITJungle.

      The Microsoft CEO just announced that all Microsoft products would soon use AI.

      Good luck with trying to get IBM to save itself.

    • Chris Pando says:
      January 23, 2023 at 5:10 pm

      Until the users learn to ask for what they need, our jobs are safe.

