As I See It: With Deepfakes, Nothing Is Real Except The Consequences

    April 15, 2024 Victor Rozek

With an election looming that promises to be conducted with snake-oil ethics, Congress recently turned its attention from its usual dysfunctional finger-pointing to hear testimony about the growing plague of deepfakes.

The usual suspects, in this case Google and Meta, were represented by a spokesman from the industry-funded lobbying group NetChoice. Not surprisingly, they wrapped themselves in the First Amendment and accepted little or no responsibility for what their users post online. They argued that existing laws governing fraud and harassment were sufficient to protect the targets (aka victims) of deepfakes.

Standing against the “it’s-not-our-fault” brigade was the mother of a teen who was victimized by the distribution of AI-generated nude images. There was, in fact, little disagreement that women and children are the most common victims of deepfakes. A startling statistic was read into the record by Representative Gerry Connolly of Virginia: He referenced a report claiming that 98 percent of all online deepfake videos were pornographic, and that women and girls (or adults made to look like girls) were depicted in 99 percent of them.

The problem with relying on existing laws to combat AI abuses, explains Wendy Maltz, a psychotherapist and sexuality expert (and, full disclosure, a family friend), is that “with AI, as with written pornography, there is no crime scene, no sentient victim.” In fact, the entire experience is insensate. The user has to fantasize that it’s real. Essentially, said Maltz, AI porn “digitally dismembers and reconstitutes human flesh for sexual stimulation. It’s no more real than relating to a shiny doorknob.” In other words, it can be legally tricky to identify an actual victim and charge a perpetrator.

People whose lives have been ruined by deepfakes would disagree about the difficulty of identifying victims. There have been children who reportedly committed suicide after being targeted by their peers. But there is little to be gained from suing some creep living in his parents’ basement for damages. Hence the focus on the AI companies that enable him.

But since AI regulation is essentially non-existent, deepfakes are proliferating in everything from comedy and satire to pornography and politics. And your company, where you make your living, could be next.

The first notable attempts at voter suppression were robocalls with an AI-generated voice that sounded like President Biden telling voters not to bother voting in an upcoming primary. While most voters would be skeptical of politicians telling them not to vote, the larger concern is the deliberate spread of misinformation and the denial of reality itself. Hany Farid, a professor at the University of California, Berkeley, who studies digital propaganda and misinformation, says AI creates a “liar’s dividend” in that it provides plausible deniability. When, for example, someone secretly records a politician raging against minorities, or police officers beating a suspect, or provides video evidence of a judge taking a bribe, the immediate defense is that it was AI-generated. Thus, the very concept of truth is destabilized, and the ability to make critical decisions from a shared factual reality is undermined.

If you can’t trust what you see, you have to trust the source presenting the information, which further contributes to the tribalism and factionalism already plaguing the country. AI becomes a perfect instrument in a post-truth world: On one hand, AI can create convincing video of people saying and doing the most abhorrent things; on the other, those who are recorded actually saying and doing those things will deny it and blame it on AI.

However, I believe the problems with AI will not be, and cannot be, solved by legislation. The sources of deepfakes are global, and they are already being used to influence elections at home and abroad. Kari Lake’s image was used in a deepfake by a group calling itself Arizona Agenda, and it was good enough to fool a seasoned reporter. Recently, London’s Mayor Sadiq Khan said that fake audio of him making inflammatory comments before last year’s Armistice Day almost caused “serious disorder.” Even people as powerful and wealthy as Taylor Swift are not immune. As has been widely reported, she has been the victim of musical deepfakes and fake nude photos that proliferated on the Internet.

In fact, the rich and powerful are themselves raising the alarm. At the recent aren’t-we-special conference of CEOs and their pet political lackeys in Davos, Switzerland, AI took center stage. The Washington Post reports that in her opening remarks, Swiss President Viola Amherd called AI-generated propaganda and lies “a real threat” to world stability, “especially today when the rapid development of artificial intelligence contributes to the increasing credibility of such fake news.”

But those who traffic in instability are arguably winning. Greed and malice are powerful motivators, and AI is simply a ready tool in their service. The larger problem has always been humanity’s extraordinary gullibility. To a greater or lesser degree, ours has always been a post-truth world, because people will believe absolutely anything. From the prospect of hanging with Odin in Valhalla, to burning women for being witches, to Bigfoot and Nessie, to the QAnon claim that Democrats eat babies: facts are superfluous, proof is unnecessary. Only belief counts.

Recall the 39 members of the Heaven’s Gate cult who committed mass suicide because they were told a spaceship hidden in the tail of the comet Hale-Bopp was coming to scoop them up and return them to the planet of their origin, which they called the Next Level. There must not have been a Walmart on the Next Level, because they each packed a suitcase for the journey before drinking the final Kool-Aid.

And so it goes. Discernment cannot be legislated. It is the antithesis of the instant-gratification culture. It takes time, work, research, and a heavy dose of skepticism. In a pivotal election year, we will be deluged by deepfakes from unknown sources with unknown intentions. In a very real sense, truth, transparency, and consent will also be on the ballot. These virtues are not exclusive to any party or ideology, but they will require ethics from those who present information, and a great deal of discernment from those who view it.



Tags: As I See It, IBM i




    One thought on “As I See It: With Deepfakes, Nothing Is Real Except The Consequences”

    • Greg Wilburn says:
      April 15, 2024 at 10:17 am

Let’s be abundantly clear here… Google, Meta, and Twitter (at the time) certainly did not “wrap themselves in the First Amendment” when they extinguished the “Hunter Biden Laptop” story (or anything else that damaged Biden and/or hurt Trump).
Media manipulation was here LONG before AI. The legacy/mainstream media outlets have been practicing this since before 2016. It was (and still is) a conscious effort from the liberal elites that control 90% of mainstream and online information. They’re just not trying to hide it anymore. I need only say three letters: N-P-R.


