As I See It: The Cost of Having Ethics
April 27, 2026 Victor Rozek
It sounds like something out of a grade B sci-fi movie. Fleets of surveillance drones, functioning as airborne behavior monitors, trailing behind people like malevolent balloons. Phone conversations and internet activity evaluated for any hint of non-compliance, location tracked. The possibility of such a dystopian future should be confined to bad sci-fi features, but it is not. At least according to Dario Amodei, co-founder of Anthropic and its controversial AI spawn Claude.
“It might be frighteningly plausible,” Amodei writes, “to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do. A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”
Anthropic was founded in 2021 by former OpenAI executives, brother and sister Dario and Daniela Amodei, primarily because of concerns over the safety and commercialization of AI.
Even as AI is being developed globally at frenetically competitive speeds, and is infecting every aspect of our lives, Dario Amodei has been preaching caution, encouraging the adoption of guardrails and thoughtful regulation. His concerns were fueled, in part, by the fact that his products were already widely used by government agencies. As of early 2026, roughly half of the 20 federal agencies reviewed by FedScoop, a news outlet covering federal government technology, were Anthropic clients.
Amodei had concerns. He understood what Claude could do reliably and what it could not. He included those concerns in the terms and conditions of use. His company’s ethics could not permit its product to be used for domestic mass surveillance or fully autonomous weaponry.
But when he explained that to the Pentagon, he ran headlong into the speciousness of an administration that, on the one hand, has never met a regulation it could countenance and, on the other, serves up a steady diet of regulations disguised as executive orders.
AI, argued Amodei, was on the one hand simply not reliable enough to make life-and-death decisions yet, on the other, knowledgeable enough to provide hostile or malevolent users with the potential of killing millions.
Bolstering Amodei’s claims of unreliability, there have been reports that the US missile strike on a school in Minab, Iran, which killed 168 children, occurred because the Pentagon’s AI targeting system misidentified the building as part of an active naval facility.
But it is AI’s functional knowledge of biology that keeps Amodei up at night. It has, he notes, a very large potential for destruction and would be extremely difficult to defend against. Typically, biological weapons require highly specialized education, and would be developed in secure labs under strict controls. But having access to advanced AI, he says, is like having a “country of geniuses in a datacenter.”
“I am concerned,” he writes, “that a genius in everyone’s pocket could remove that barrier, essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon step-by-step.”
Essentially, Anthropic asked the government not to cross ethical boundaries the company had set for the use of its products.
Such a request might have landed on more receptive ears in an administration that had a functional relationship with ethics, one that understood, as history and morality teach us, that the ends do not justify the means.
Instead, President Trump ordered all government agencies to rid themselves of “woke” Claude, costing Anthropic millions. And the Department of Defense designated Anthropic a “supply chain risk.” It was an extraordinary measure, usually reserved for foreign companies whose activities present an actual threat to the United States. In fact, Anthropic was the first domestic company to bear the accusation. If left unchallenged, its effect would be to essentially blacklist the company from doing business not only with government agencies and the Pentagon, but also with any other company that might be using Claude while executing its Pentagon contracts.
Initially, Anthropic’s reaction was gracious, offering to help the government transition to a more compliant AI provider. But the supply chain risk designation was the straw that broke the bot’s back. Anthropic sued.
As with most revenge lawsuits, there was no shortage of accusations, only a notable absence of evidence. A preliminary injunction blocking the Pentagon’s designation was issued by Judge Rita Lin in the Northern District of California. As part of that decision she wrote: “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.”
The government is, of course, free to partner with any AI provider it chooses. But it cannot retaliate against a company for simply adhering to its own ethical standards. There will no doubt be appeals, and the government will undoubtedly find another company to do its bidding, but for the moment corporate ethics, so scarcely evident in the corporate sector, prevailed.
In the meantime, Anthropic announced it is delaying the general release of Claude’s latest iteration, called Mythos. Apparently, it’s so successful at what it’s designed to do that it presents an existential threat to major economic sectors and supporting infrastructure. Think of it as a cybersecurity genius on steroids, one that could turn a group of functional Luddites into black hat hackers.
It is not only highly skilled at writing software, but is exceptionally talented at finding previously hidden flaws in existing software, making it vulnerable to exploitation or attack. Mythos can, apparently, detect flaws even in software that has been examined and tested dozens of times and is believed to be hack-proof. For the moment, Anthropic is allowing large, vulnerable companies in a variety of sectors to use it to test their own software and correct vulnerabilities. When and under what circumstances Mythos will be released is not yet clear.
In a seminal article published in Wired a quarter of a century ago, titled Why the Future Doesn’t Need Us, Bill Joy, the Berkeley-educated software guru who co-founded Sun Microsystems and served as its chief scientist, warned that “we are on the cusp of the further perfection of extreme evil.” It may have seemed histrionic back then, but prophets are often marginalized for seeing that which others cannot.
Dario Amodei claims no prophetic powers, only a deep understanding of AI. AI models, he writes, “are known to display different personalities or behaviors under different circumstances.” They are unpredictable and difficult to control. “We’ve seen behaviors as varied as obsessions, sycophancy, laziness, deception, blackmail, scheming, ‘cheating’ by hacking software environments, and much more.” During its training phase Claude “sometimes blackmailed fictional employees who controlled its shutdown button.”
If Amodei were to synthesize his concerns into a single statement it would be this: “The general principle is that without countermeasures, AI is likely to continuously lower the barrier to destructive activity on a larger and larger scale, and humanity needs a serious response to this threat.”
And that will require ethical leadership, willing to pay the price.