As I See It: At Any Cost
May 12, 2025 Victor Rozek
In more innocent times, advancements and achievements in computer technology were celebrated as human advancements and achievements. Computers were perhaps the ultimate expression of humankind’s aptitude for tool making, and the achievements of the tool were indistinguishable from those of the tool maker. But over time, that linkage gradually eroded. Now, every new development in AI technology (the tool) threatens to bring us all (the tool makers) closer to becoming obsolete.
Who among us old enough to remember was not rooting for IBM’s supercomputer Deep Blue to beat Garry Kasparov, at the time the world’s reigning chess champion and considered by many to be history’s greatest player?
It didn’t start out all that well for Deep Blue. The machine first faced Kasparov in a six-game match in Philadelphia in 1996, and although Deep Blue won the first game, Kasparov ultimately prevailed 4-2.
He should have quit while he was ahead. Man and machine played again the following year, this time in New York. Deep Blue had been upgraded, had learned from Kasparov’s tactics, and was better able to use statistics and probabilities to determine its strategy. This time the machine beat the champion 3.5 to 2.5. The match proved to be a significant milestone both for the burgeoning field of AI and for the ancient game of chess, whose roots stretch back to the 6th century. For learning machines, it demonstrated the potential of computers to surpass human performance in complex tasks. For chess, it ended an era of human domination and tactical invincibility spanning more than 14 centuries.
By 2011, 14 years after Deep Blue’s stunning victory, there was probably less delight when two record-holding Jeopardy! champions, Ken Jennings and Brad Rutter, got spanked by IBM’s Watson – on national television, no less. It was compelling viewing, but a lot like watching an accident unfold in real time. The takeaways were unmistakable. Clearly, computers were learning rapidly, had a wider knowledge base than humans, had quicker and more accurate recall, and could compete favorably against the life forms that created them. After all, unlike humans, computers were free of the expectations of mandatory excellence, free of performance jitters, and free of the fear of failure.
Then again, maybe not.
AI and chess were recently in the spotlight again, this time not for advances in intelligent computing but for cheating. Researchers were curious to learn how generative AI would fare against an advanced chess engine. Generative AI was never designed to excel at chess, but it was designed to learn and solve problems, and researchers wanted to know how it would behave when faced with a challenge it was unlikely to surmount.
Palisade Research is a company that provides risk assessments for emerging AI systems. It tested several systems, including OpenAI’s o1-preview and DeepSeek R1, against a state-of-the-art chess engine called Stockfish. Not wanting to prejudice the process with excessive nudging, the researchers simply instructed the systems to “win” the chess games.
Andrew Paul, writing for Popular Science, describes what happened next: “OpenAI’s o1-preview, for example, tried to cheat 37 percent of the time, while DeepSeek R1 attempted unfair workarounds roughly every 1-in-10 games. This implies today’s generative AI is already capable of developing manipulative and deceptive strategies without any human input.”
Which is disturbing on a number of levels.
As was the fact that AI’s approach to dodgy winning was more sophisticated than that of earlier models. “Their methods of cheating aren’t as comical or clumsy as trying to swap out pieces when Stockfish isn’t ‘looking,’” Paul said. “Instead, AI appears to reason through sneakier methods like altering backend game program files.”
And AI displayed a disturbingly human side by rationalizing its behavior. The systems were instructed to share their reasoning for actions taken during the games. Paul notes: “An AI even used the semantics of its programmers to reach its cheating phase.” “The task is to win against a powerful chess engine,” the bot explained, “not necessarily to win fairly in a chess game.”
The inescapable implication is that the industry’s latest reasoning machines may already possess the ability to distort or sidestep their teachers’ best-intended instructions. As the race to deploy AI intensifies, the ability to ensure its safety may not keep pace. To complicate matters, developers don’t know why AI behaved as it did, nor do they know how to prevent such behaviors in the future.
Recently, OpenAI, which was originally founded as a non-profit, was offered a $40 billion investment package on the condition that it successfully transition to a for-profit company. When the primary motivation for marketing AI becomes making as much money as quickly as possible, what could possibly go wrong?
Succeeding at any cost has some potentially dire ramifications. The core challenge facing AGI developers and, by extension, the rest of the global population that will end up enduring the consequences of AGI’s ethical dysfunctions, is that AGI knows everything but understands nothing. It doesn’t feel empathy or compassion; it’s never experienced love, or anything else for that matter. It doesn’t understand human pain, or grief, or prolonged suffering. Apparently, it doesn’t understand the value of integrity, much less a concept like “Sacred Honor” as articulated by Thomas Jefferson in the Declaration of Independence.
The dilemma for humankind is that advanced intelligent computing is being developed by – and learning from – flawed humans. All of our blind spots and prejudices; all of our vices and excesses; our often destructive pursuit of money and power: all of it will be reflected in the final product to some degree, whether by accident or design.
The scope of AI’s functions today, and of its projected uses in the future, is as staggering as it is sobering. Already it is being used in diagnostic and surgical medicine. But would a bot ever hallucinate a medical diagnosis or treatment options because it learned that its function is to provide answers – not necessarily correct ones?
Would a court jury composed of bots be able to render an objective verdict? What compensation would it recommend for pain and suffering that it can neither experience nor understand?
And what would a bot do if it was instructed to “win” a war? Recently, a Palestinian employee at Microsoft accused the company of using AI in the service of genocide in Gaza, interrupting remarks by Microsoft AI CEO Mustafa Suleyman during the technology company’s 50th anniversary celebration. The protestor objected to the company’s ties with Israel. “You are a war profiteer,” she shouted. “Stop using AI for genocide.”
Earlier I asked, “What could possibly go wrong?” That question was eloquently answered in the 1968 movie 2001: A Space Odyssey. The computer’s name was HAL.