As I See It: The Tells

    November 4, 2024 Victor Rozek

    In its early incarnations, AI-generated media was often easy to spot: groups of the same people appearing multiple times in a crowd scene; multi-directional light sources in outdoor settings; mouths moving out-of-sync with the words being spoken; people whose anatomy mysteriously sprouted extra fingers or additional limbs.

    But things quickly improved to the point that deepfake images and manipulated video are no longer obvious constructions. Misinformation from both domestic and foreign sources is now rampant on the Internet, especially during the election cycle, and at a glance it’s difficult to know whether what we’re seeing is real.

    Which is why legitimate news organizations like The Washington Post and social media giants now use a number of AI detection tools to ensure the authenticity of the photographs and videos they publish.

    The Post explains how the process works. It involves uploading images, sound bites, or video clips into a deepfake detection tool composed of a series of algorithms designed to identify indicators of inauthenticity. For example, one algorithm examines the head and face for signs that they were digitally planted on another person’s body. Another tracks unnatural lip movement and abnormal facial expressions. Still another analyzes vocal patterns, looking for irregular frequencies, out-of-context pauses, and other unlikely speech patterns.

    The algorithms also scan down to the pixel level, examining imagery for patterns of visual disturbance that deviate from surrounding areas and indicate that the image has been altered. They also compare how pixels move between video frames: in authentic video, motion is smooth and consistent, while unnatural blur, or an image that jerks from one frame to the next, suggests tampering.
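    The Post doesn’t publish the detectors’ code, but the frame-comparison step can be sketched in a few lines. In the toy below (the function name, the synthetic frames, and the spike threshold are all my own assumptions), a clip is flagged when one frame-to-frame pixel difference towers over the clip’s typical motion:

```python
import numpy as np

def frame_motion_check(frames, spike_factor=5.0):
    """Flag a clip whose frame-to-frame pixel change spikes abnormally.

    frames: list of equally shaped grayscale arrays with values in [0, 1].
    Returns (per-transition mean differences, True if a spike was found).
    """
    diffs = np.array([np.abs(b - a).mean() for a, b in zip(frames, frames[1:])])
    # A genuine clip tends to change gradually; a difference far above
    # the clip's median motion suggests a spliced or generated frame.
    threshold = spike_factor * (np.median(diffs) + 1e-8)
    return diffs, bool((diffs > threshold).any())

# Smooth synthetic "video": each 8x8 frame drifts slightly.
smooth = [np.full((8, 8), 0.01 * i) for i in range(10)]
_, jerky = frame_motion_check(smooth)      # no spike: stays False

# The same clip with one frame abruptly replaced.
tampered = list(smooth)
tampered[5] = np.full((8, 8), 0.9)
_, jerky2 = frame_motion_check(tampered)   # spike at frame 5: True
```

    A production detector would rely on optical flow and trained models rather than a fixed multiple of the median, but the underlying tell is the same: authentic motion is gradual, splices jump.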

    Then an algorithm tries to reconstruct the image using a technique called diffusion. The goal isn’t to flawlessly recreate the image, but to discover what diffusion was unable to duplicate, thereby indicating possible manipulation.
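    Running an actual diffusion model is beyond a short example, but the reconstruct-and-compare idea can be illustrated with a crude stand-in: “reconstruct” the image with a simple smoothing model and inspect what it failed to reproduce. The box blur and the pasted test patch below are illustrative assumptions, not the real pipeline:

```python
import numpy as np

def reconstruction_residual(img, k=3):
    """Rebuild `img` with a k x k box blur and return what the
    "reconstruction" could not reproduce. Regions whose residual stands
    out from their surroundings are candidates for manipulation."""
    h, w = img.shape
    padded = np.pad(img, k // 2, mode="edge")
    recon = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            recon += padded[dy:dy + h, dx:dx + w]
    recon /= k * k
    return np.abs(img - recon)

# A smooth gradient "photo" with a high-frequency patch pasted into it.
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
img[8:16, 8:16] = np.indices((8, 8)).sum(0) % 2   # pasted checkerboard

residual = reconstruction_residual(img)
patch_res = residual[9:15, 9:15].mean()   # inside the pasted patch
rest_res = residual[20:, 20:].mean()      # untouched background
# patch_res dwarfs rest_res: the blur reproduces the smooth gradient
# almost perfectly but cannot recreate the alien high-frequency patch
```

    A trained diffusion model plays the role of the blur here: it reproduces content it has learned to expect and stumbles on content that was pasted in.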

    The last algorithm looks for a unique pattern in the pixel distribution that would indicate the content was created using an earlier image generation technique called GAN, short for generative adversarial network.
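    The “unique pattern in the pixel distribution” is often described in the research literature as a spectral fingerprint: the upsampling layers in many GAN generators leave periodic artifacts that concentrate the image’s Fourier energy in a few frequency bins. A rough, numpy-only illustration, with contrived test images:

```python
import numpy as np

def spectral_peak_ratio(img):
    """Ratio of the strongest non-DC frequency to the mean spectral
    energy. Natural images show a smooth 1/f-like falloff; a repeating
    (e.g. checkerboard) artifact piles energy into isolated bins."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spec.shape
    spec[h // 2, w // 2] = 0.0        # remove the DC component
    return spec.max() / (spec.mean() + 1e-12)

rng = np.random.default_rng(0)
photo_like = rng.random((64, 64))              # noise stand-in for a photo
artifact = np.indices((64, 64)).sum(0) % 2.0   # periodic GAN-style pattern

r_photo = spectral_peak_ratio(photo_like)
r_artifact = spectral_peak_ratio(artifact)
# r_artifact is orders of magnitude larger than r_photo: the
# checkerboard's energy lives in a single frequency bin
```

    Real GAN fingerprints are subtler than a full checkerboard, so detectors train a classifier on the spectrum rather than thresholding a single ratio.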

    Each algorithm estimates the probability of image tampering. As a final step, these deductions are blended and the detection tool offers its conclusion on whether the content is likely genuine or fake. It all sounds rigorous and thorough. Nonetheless, it’s far from foolproof.
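    The article doesn’t say how the individual deductions are blended; a weighted average over per-detector probabilities is one plausible scheme. The detector names, scores, and 0.5 cutoff below are invented for illustration:

```python
def blend_verdicts(scores, weights=None, cutoff=0.5):
    """Combine per-detector tampering probabilities into one verdict.

    scores: detector name -> estimated probability of tampering (0-1).
    weights: optional detector name -> relative trust (defaults to equal).
    """
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    combined = sum(scores[name] * weights[name] for name in scores) / total
    return combined, ("likely fake" if combined >= cutoff else "likely genuine")

scores = {
    "face_swap": 0.85,   # head/face transplant detector
    "lip_sync": 0.70,    # lip movement / expression tracker
    "voice": 0.40,       # vocal pattern analysis
    "pixel_scan": 0.90,  # pixel-level disturbance scan
}
combined, verdict = blend_verdicts(scores)
# combined is about 0.71, so the blended verdict is "likely fake"
```

    In practice each detector would also report a confidence, and the blending weights would be learned from labeled data rather than set by hand.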

    The Post reports: “Last year, researchers from universities and companies in the United States, Australia and India analyzed detection techniques and found their accuracy ranged from 82 percent to just 25 percent.”

    The problem, as articulated by Hany Farid, a computer science professor at the University of California at Berkeley, is that “the algorithms that fuel deepfake detectors are only as good as the data they train on. The datasets are largely composed of deepfakes created in a lab environment and don’t accurately mimic the characteristics of deepfakes that show up on social media.”

    Which is why the government is pressuring tech companies to create some sort of embedded identifier or watermark in their AI products: online labels that would identify AI-generated content. The fear is that as AI improves, it will become increasingly difficult to spot deepfakes. And even if AI identifiers existed, unscrupulous users would likely find deceptive workarounds. The hope is that at some point technology may be able to police itself. And it may, but it can never remedy the impacts of its users’ lack of integrity.

    Ultimately, deepfakes are not created to convince us of the validity of any particular image, but to cast doubt on everything that is real, factual, and truthful. They seek to sow confusion and distrust so that policy can be forged by corrupt people with dishonorable intentions.

    In a world where no one can trust what they see or hear, where sensory evidence is manipulated for nefarious purposes, what remains is division and suspicion. As satirist Alexandra Petri quips, when facts don’t matter, the question becomes: “Who has created a nicer story for you?” Or perhaps a more frightening one.

    Conceivably even more frightening than the use of AI as a tool of deception is its use as a tool of depraved persuasion.

    The Post reports that in a horrific effort to radicalize ignorant and impressionable young people: “Extremists are using artificial intelligence to reanimate Adolf Hitler online for a new generation, recasting the Nazi German leader who orchestrated the Holocaust as a ‘misunderstood’ figure whose antisemitic and anti-immigrant messages are freshly resonant in politics today. In audio and video clips that have reached millions of viewers over the past month on TikTok, X, Instagram and YouTube, the führer’s AI-cloned voice quavers and crescendos as he delivers English-language versions of some of his most notorious addresses…”

    And if you’re wondering, as I did, just how many people would be interested in listening to something so hateful and repugnant: researchers at the Institute for Strategic Dialogue found that “content glorifying, excusing or translating Hitler’s speeches into English has racked up some 25 million views across X, TikTok and Instagram since August 13.”

    All of this creates a dilemma for a country steeped in free speech. That right is not absolute, of course. Yelling “fire” in a crowded theatre, for example, is frowned upon. But lying is essentially protected. What isn’t protected is an absolute right to amplification, which unrestricted AI loosed upon the internet all but guarantees.

    Out of concern that politicians could use AI to mislead or deceive voters, a number of states are passing laws requiring candidates to disclose when they use the technology to generate political ads. But laws seldom deter liars, and ads can be launched from unknown sources.

    John Hopfield is a physicist and a professor emeritus at Princeton. This year he won the Nobel Prize in physics for “foundational discoveries and inventions that enable machine learning with artificial neural networks.”

    But Hopfield was apparently unsettled by his own success. He said that recent advances in AI technology were “very unnerving” and warned of possible catastrophe if not kept in check.

    Ultimately, many of the issues surrounding AI regulation and free speech may be decided by the Supreme Court. And that may be the biggest catastrophe of all.



    2 thoughts on “As I See It: The Tells”

    • GregW says:
      November 4, 2024 at 8:26 am

      I stopped reading when you called The Washington Post a “legitimate news organization”. When one of their own employees quits and walks off a live “broadcast”, it’s time to question their legitimacy (and yours for continuing to believe them).
      AI is the LEAST of our problems when it comes to news-related manipulation.

    • Ema Tissani says:
      November 7, 2024 at 9:12 am

      You are conflating several distinct aspects of the same issue, which creates confusion, maybe due to some preconceived personal perspective.
      One is the use of AI, with the proper introductory disclaimers, to translate and dub the führer’s speeches in full from German to English while maintaining fidelity and inflection (rather than cuts without context, as is typically done). To me, that has historical research value and merit on its own. Along the same lines, the widespread book Mein Kampf is accessible in various translations and contains Adolf’s values and ideas. It too can be used for persuasion. Nothing to do with AI. Remember, it was the Nazis who burned books and inconvenient recordings, not us; we pride ourselves on being open (are we sure?).
      Regarding the use of AI as factual truth: newspapers can lie with or without AI. AI just adds another tool to the arsenal. A text article can be “false” with a “true” image on it. The press creates the facts; it never merely comments on them. Perspective is a fact.


TFH Volume: 34 Issue: 53
