As I See It: The Cost of Having Ethics

    April 27, 2026 Victor Rozek

    It sounds like something out of a grade B sci-fi movie. Fleets of surveillance drones, functioning as airborne behavior monitors, trailing behind people like malevolent balloons. Phone conversations and internet activity evaluated for any hint of non-compliance, location tracked. The possibility of such a dystopian future should be confined to bad sci-fi features, but it is not. At least according to Dario Amodei, co-founder of Anthropic and its controversial AI spawn Claude.

    “It might be frighteningly plausible,” Amodei writes, “to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do. A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”

    Anthropic was founded in 2021 by former OpenAI executives, brother and sister Dario and Daniela Amodei, primarily because of concerns over the safety and commercialization of AI.

    Even as AI is being developed globally at frenetically competitive speeds, infecting every aspect of our lives, Dario Amodei has been preaching caution, encouraging the adoption of guardrails and thoughtful regulation. His concerns were fueled, in part, by the fact that his products were already widely used by government agencies. As of early 2026, roughly half of the 20 federal agencies reviewed by FedScoop, a tech news outlet covering the federal government, were Anthropic clients.

    Amodei had concerns. He understood what Claude could do reliably and what it could not, and he wrote those concerns into his product’s terms and conditions of use. His company’s ethics would not permit its product to be used for domestic mass surveillance or fully autonomous weaponry.

    But when he explained that to the Pentagon, he ran headlong into the speciousness of an administration that, on the one hand, never met a regulation it could countenance while, on the other, it served up a steady diet of regulations disguised as executive orders.

    AI, argued Amodei, was on the one hand simply not reliable enough to make life-and-death decisions yet, on the other, knowledgeable enough to provide hostile or malevolent users with the potential of killing millions.

    Bolstering Amodei’s claims about AI’s unreliability, there have been reports that the US missile strike on a school in Minab, Iran, which killed 168 children, occurred because the Pentagon’s AI targeting system misidentified it as part of an active naval facility.

    But it is AI’s functional knowledge of biology that keeps Amodei up at night. It has, he notes, a very large potential for destruction and would be extremely difficult to defend against. Typically, biological weapons require highly specialized education, and would be developed in secure labs under strict controls. But having access to advanced AI, he says, is like having a “country of geniuses in a datacenter.”

    “I am concerned,” he writes, “that a genius in everyone’s pocket could remove that barrier, essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step.”

    Essentially, Anthropic asked the government not to cross ethical boundaries the company had set for the use of its products.

    Such a request might have landed on more receptive ears in an administration that had a functional relationship with ethics, one which holds, as history and morality teach us, that the ends do not justify the means.

    Instead, President Trump ordered all government agencies to rid themselves of “woke” Claude, costing Anthropic millions. And the Department of Defense designated Anthropic as a “supply chain risk.” It was an extraordinary measure, usually reserved for foreign companies whose activities present an actual threat to the United States. In fact, Anthropic was the first domestic company to bear the accusation. If left unchallenged, its effect would be to essentially blacklist the company from doing business not only with government agencies and the Pentagon, but also any other company that might be using Claude while executing their Pentagon contracts.


    Initially, Anthropic’s reaction was gracious, offering to help the government transition to a more compliant AI provider. But the supply chain risk designation was the straw that broke the bot’s back. Anthropic sued.

    As with most revenge lawsuits, there was no shortage of accusations, only a notable absence of evidence. A preliminary injunction blocking the Pentagon’s designation was issued by Judge Rita Lin in the Northern District of California. As part of that decision she wrote: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.”

    The government is, of course, free to partner with any AI provider it chooses. But it cannot retaliate against a company for simply adhering to its own ethical standards. There will no doubt be appeals, and the government will undoubtedly find another company to do its bidding, but for the moment corporate ethics, so rarely in evidence, prevailed.

    In the meantime, Anthropic announced it is delaying the general release of Claude’s latest iteration, called Mythos. Apparently, it is so successful at what it is designed to do that it presents an existential threat to major economic sectors and supporting infrastructure. Think of it as a cybersecurity genius on steroids, one that could turn a group of functional Luddites into black hat hackers.

    It is not only highly skilled at writing software, but exceptionally talented at finding previously hidden flaws in existing software, flaws that leave it vulnerable to exploitation or attack. Mythos can, apparently, detect flaws even in software that has been examined and tested dozens of times and is believed to be hack-proof. For the moment, Anthropic is allowing large, vulnerable companies in a variety of sectors to use it to test their own software and correct vulnerabilities. When and under what circumstances Mythos will be released is not yet clear.

    In a seminal article published in Wired a quarter of a century ago, titled “Why the Future Doesn’t Need Us,” Bill Joy, the co-founder and chief scientist of Sun Microsystems who had earlier helped create Berkeley Unix at the University of California, Berkeley, warned that “we are on the cusp of the further perfection of extreme evil.” It may have seemed histrionic back then, but prophets are often marginalized for seeing that which others cannot.

    Dario Amodei claims no prophetic powers, only a deep understanding of AI. AI models, he writes, “are known to display different personalities or behaviors under different circumstances.” They are unpredictable and difficult to control. “We’ve seen behaviors as varied as obsessions, sycophancy, laziness, deception, blackmail, scheming, ‘cheating’ by hacking software environments, and much more.” During its training phase Claude “sometimes blackmailed fictional employees who controlled its shutdown button.”

    If Amodei were to synthesize his concerns into a single statement it would be this: “The general principle is that without countermeasures, AI is likely to continuously lower the barrier to destructive activity on a larger and larger scale, and humanity needs a serious response to this threat.”

    And that will require ethical leadership, willing to pay the price.



TFH Volume: 36 Issue: 16
