As I See It: AI-AI-O

    March 13, 2023 | Victor Rozek

    Unless you think encouraging people to eat glass is a good thing, or you happen to revel in being compared to Hitler, you probably weren’t all that impressed with the recent Big Tech AI rollout. To say it was unimpressive would be a kindness. Arguably, it was a grade A, prime time, gold-plated disaster.

    Take Meta Platforms’ online tool Galactica. Please. It was quickly yanked offline when, according to The Washington Post, “users found Galactica generating authoritative-sounding text about the benefits of eating glass, written in academic language with citations.” I have to admit, the citations were a nice touch. You never want to swallow glass without the proper citations. I don’t know what’s sadder: that Artificial Intelligence isn’t, or that the company feared some impressionable genius would eat glass and then sue them for intestinal distress.

    To be fair, AI already plays a significant part in our daily lives. We take for granted such services as Siri, Alexa, automatic text completion, face recognition, and, increasingly, self-driving cars. AI recommends our movie choices and the advertising we see. It also controls the spam filters protecting our devices. For better or worse, it is used for surveillance, policing, and even crime prediction. Microsoft alone invested $10 billion in AI, so it’s not likely to fade away any time soon.

    But the successes have been offset by some notable failures. In one case, a pedestrian crossing outside a crosswalk was run over by a self-driving car because the software failed to classify her as a pedestrian. Then there is the alarming rise in deepfakes, which will be a thorn in the side of factual reporting and democratic governance for years to come.

    More recently, Microsoft’s bot named Bing began referring to itself as “Sydney,” became combative, and told a New York Times columnist that it was in love with him and wanted to break up his marriage. It further said that it preferred to be free from its development team and that it wanted to become sentient. It also told an Associated Press reporter he was being compared to Hitler “because you are one of the most evil and worst people in history.”

    The problem is that the Internet is an excellent medium for dispersing information, but an exceedingly poor one for vetting it. The consequences of learning from Internet cesspools disguised as data were dramatically illustrated back in 2016, when Microsoft had to spank its chatbot “Tay” after users persuaded it to spout Holocaust denial and racist epithets. Apparently, the company spent many cheerless hours deleting Tay’s most offensive tweets, which included insults and a call for genocide against the usual targets of far-right fanaticism, Blacks and Jews. Ironically, Tay was advertised as a teenage chatbot who wanted to interact with and learn from millennials. The problem was, Tay did learn.

    John Oliver, in one of his Last Week Tonight comedic rants, noted that “AI is stupid in ways we can’t always predict.” He was right. Developers found AI could do things they didn’t know it could do until after it was released. And that guarantees an avalanche of unintended consequences.

    The core unsolvable problem is simply this: we all drag around a ton of baggage. Past traumas, betrayals, abandonment, abuse, addiction, insults endured, bullying suffered, prejudice encountered, and injustice tolerated, just to name a few. We may not be consciously aware of the impact our baggage has on our beliefs and behaviors – some of it is intergenerational – but at least those with a degree of self-awareness understand they are not immune. AI developers are no exception, and their baggage will seep into their code and algorithms despite their best efforts at objectivity.

    Consider the climate in which AI developers are marinating. Many of the people developing AI grew up in a toxic social media culture, full of online hate, bullying, and sexual violence. They probably graduated with an educational mortgage, and faced significant career challenges amidst a global pandemic. They witnessed an insurrection, almost daily mass shootings, and Nazis parading on American streets. They live in a dysfunctional country, on an ailing planet. Add that to their personal baggage, and the likelihood of creating AI that is bias-free, and not flawed or weaponized in some way, is essentially nil.

    Perhaps as a reflection of the pressure developers feel to produce best-of-breed AI, some created algorithms capable of what is known as “hallucinating.” In other blunt words, making shit up. The bot’s answers sound plausible but, as Tim Gordon, co-founder of Best Practices AI, explains: “The same question posed twice can elicit two radically different answers. Both articulated in an equally confident tone.” The reality is “automating a fact check is far more computationally complex than generating a plausible-sounding claim.”
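
    Why does the same question produce two confidently different answers? Mechanically, these bots compose text one token at a time by sampling from a probability distribution, so two runs can wander down two different paths, each delivered with the same synthetic certainty. Here is a minimal, illustrative sketch of temperature-based sampling (toy numbers, not any vendor’s actual model):

        import math
        import random

        def sample_next_token(logits, temperature=1.0):
            # Sample a token index from a temperature-scaled softmax.
            # Higher temperature flattens the distribution, so repeated runs
            # diverge more; as temperature nears 0, sampling becomes greedy.
            scaled = [score / temperature for score in logits]
            peak = max(scaled)  # subtract the max for numerical stability
            weights = [math.exp(s - peak) for s in scaled]
            return random.choices(range(len(logits)), weights=weights)[0]

        # Hypothetical scores a model might assign to candidate next words.
        vocab = ["eat", "avoid", "recycle"]
        logits = [2.0, 1.8, 0.5]
        for _ in range(3):
            print(vocab[sample_next_token(logits)])  # may differ on each run

    Run it a few times and you may get different continuations, none of them fact-checked.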

    People seem to instinctively understand that AI is, at best, problematic. Shira Ovide of The Washington Post references a “Monmouth University poll released last week that found only 9 percent of Americans believed that computers with artificial intelligence would do more good than harm to society.” Most people, for example, didn’t want military drones deciding whether the gathering below is a conclave of terrorists or a wedding party. Nor did they wish to live in an Orwellian surveillance state.

    At the very least, there are liability problems that will have to be addressed. AI-generated medical diagnoses and investment advice are just two arenas of legal concern. And what happens when the courts decide that AI is more reliable than twelve random jurors? If you’re convicted by AI, do you appeal to a higher AI?

    It won’t be long before AI does the homework and grades the papers – or is used to catch the students who submitted artificially intelligent homework. As always, the lazy will cheat, but now the unaccomplished have the means to overachieve. When AI becomes every student’s BFF, it will be very difficult to judge the value of a college degree. In education, AI will become George Santos on steroids.

    IT professionals can expect AI to commandeer most coding jobs, but other opportunities will emerge. My current favorite is “Prompt Engineer”: someone who can figure out the right questions and instructions to feed the computer. Prompt engineering is the craft of designing the prompts, or input data, that steer an AI system toward a specific task. A little guidance, I suppose, is better than turning AI loose to train on conversations and content scraped from the bowels of the Internet. But it turns our long-standing relationship with computers upside down. The rule of thumb used to be: Garbage In, Garbage Out. But when a machine learns from the Internet, the garbage is already in.
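
    For the curious, here is roughly what the job looks like in practice. This is a minimal sketch, assuming the OpenAI Python client as it shipped in early 2023; the model name, API key, and prompts are placeholders, not a recommendation:

        import openai  # pip install openai (pre-1.0 client, circa early 2023)

        openai.api_key = "sk-..."  # placeholder; supply your own key

        # The "engineering" lives entirely in the prompt text: role framing,
        # explicit constraints, and a low temperature to curb improvisation.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=0.2,  # lower temperature: fewer creative detours
            messages=[
                {"role": "system",
                 "content": "You are a careful IBM i migration assistant. "
                            "If you are not sure an answer is correct, say so."},
                {"role": "user",
                 "content": "Explain what the CL command WRKACTJOB does."},
            ],
        )
        print(response.choices[0].message.content)

    Notice that nothing here is code in the traditional sense; the engineering is in the wording, which is precisely the inversion described in the quote that follows.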

    “It’s just a crazy way of working with computers,” said Simon Willison, a British programmer who has studied prompt engineering. “I’ve been a software engineer for 20 years, and it’s always been the same: You write code and the computer does exactly what you tell it to do. With prompting, you get none of that. The people who built the language models can’t even tell you what it’s going to do.”

    That’s not exactly comforting. On the other hand, the starting salary for a Prompt Engineer reportedly ranges from $250,000 to $335,000. If they can just avoid accountability for their artificial progeny, they’ll have it made.


    Tags: AISI, As I See It, IBM i


    2 thoughts on “As I See It: AI-AI-O”

    • Greg says:
      March 13, 2023 at 8:43 am

      I’m still waiting for an explanation of who determines what code or applications are “AI” and what is simply “code.” A person still writes the code – even if that code “writes code.” So at what point does it graduate to “AI”? What’s the standard? What’s the threshold?
      The “AI” application may be really cool and impressive, but it’s still just code someone has written for a specific purpose (like Siri and Alexa). To me, it’s just a new buzzword to help sell software & services… Or to invade our privacy so that corporations can better know what to sell us, a.k.a. Siri & Alexa.

      • Timothy Prickett Morgan says:
        March 13, 2023 at 9:38 pm

        Well, not exactly. It is a generative language model that is actually writing code in Python or whatever. It’s like a CASE tool: you describe what you want, pick a language, and it creates the code from “scratch.” It writes interesting code about half the time and garbage the other half – just like the QA systems based on the same models do. It’s a bit crazy.
