As I See It: Disruption
August 11, 2025 Victor Rozek
There was a time, not so long ago, when pursuing a degree in Computer Science would all but guarantee a good job and the opportunity for a thriving career. Tech was booming and hungry for talent. Like sporting franchises competing for players, some companies even offered inducements in the form of signing bonuses. Early in my career I was offered a signing bonus large enough to buy a new convertible. No negotiation was necessary; it was simply part of the benefits package.
But the industry has changed and career IT jobs are becoming hard to find and harder to keep. Tech companies have slashed hundreds of thousands of jobs over the past few years. The headline of an article that appeared in The Washington Post over two years ago proved to be prescient: “Tech workers had their pick of jobs for years. That era is over for now.” Increasingly, too many highly skilled professionals are competing for a shrinking number of jobs. The culprit is AI.
Historically, as new technologies evolved, they created a bloom of related jobs. For example, the internal combustion engine spawned automobile manufacturing, sales showrooms, repair shops, the gas and oil industry, road building, etc. One measure of a company’s success was its need to increase its workforce. More workers implied more business, and more business meant more profit.
But AI is perhaps the first technology where the opposite is true. Its success and usefulness are measured, in part, not by how many jobs it can create but by how many it can eliminate. And globally that number is staggering. McKinsey & Company, an American multinational strategy and management consulting firm, estimates that 400 million to 800 million jobs will be lost by 2030, 2.4 million in the United States alone. Low-end, repetitive, easily automated jobs such as cashiers, data entry clerks, telemarketers, and call center agents are quickly disappearing. But as AI becomes more sophisticated, its reach is expanding into arenas traditionally staffed by highly skilled and thoroughly educated professionals. Its success in diagnostic medicine is one such example.
As for IT, we know AI can write code, although for complex programming a human assist is recommended for quality and safety assurance. Nonetheless, author, businessman, and would-be presidential candidate Andrew Yang predicts that a company employing six programmers could lay off five and keep one, whose job would essentially be to monitor and, if necessary, ameliorate the work of the AI.
In fact, a number of CEOs and AI developers have recently been expressing concerns over the speed with which AI is being developed and rolled out.
Amazon CEO Andy Jassy, for one, has for some time been reshaping his company around the concept that less is more – in this case fewer people and more technology. He recently wrote that employees should learn how to use AI tools to experiment and figure out “how to get more done with scrappier teams.” Scrappier, in this context, means smaller.
Last month, Sebastian Siemiatkowski, CEO of the financial tech company Klarna, said his organization “shrunk its headcount by 40 percent, in part due to investments in AI and natural attrition in its workforce.” (Forty percent is a stunning reduction, and I doubt “natural attrition” had much to do with it.) Regardless, the company now offers services it could not hope to provide with staff alone. Among them is an AI assistant that customers can ask anything, in any language, 24/7.
Dario Amodei, co-founder and CEO of Anthropic, predicts that half of all entry-level jobs will disappear within the next five years, with unemployment rising to between 10 and 20 percent as a result.
Job displacement is already happening, and college graduates are reportedly having a harder time finding work. The societal costs of unemployment – or underemployment – are many, but chief among them is a loss of purpose, especially crippling to the young. But whether you’re a recent college graduate new to the tech workforce or a 20-year IT veteran, the resounding message is: prepare for disruption. At some point in the not-too-distant future, you’ll have to be very skilled to do a job AI can’t do.
The worst disturbances, however, will not be limited to the workplace.
Nobel Prize winner Geoffrey Hinton, known as the Godfather of AI, has for years warned of the risks humanity will incur when we are no longer the apex intelligence. First among his concerns is the deliberate misuse of AI. Given human nature, that’s perhaps inevitable. Whether it’s internet scams, or deepfakes, or cyberattacks that bring down financial institutions and cripple power grids, or systems that enhance a police state’s tracking of its citizenry, there is no shortage of opportunity for AI misuse.
Hinton is also concerned about what will happen when AI becomes too smart. What will ensue when AI outgrows us and humans become irrelevant? One possibility is that it will no longer feel obliged to provide the services on which society will have become dependent. An under-skilled population could be seen by AI as an impediment. Hinton worries that an unscrupulous government, or AI itself, could develop a deadly, highly infectious virus capable of wiping out most, if not all, of humanity. Coincidentally, in recent tests some of the latest OpenAI systems refused commands to shut down. What happens if AI should decide humans are hostile to its interests?
Another of Hinton’s concerns is autonomous drones and battlefield robots that kill at their own discretion. He believes that limiting human casualties in war will only encourage strong nations to invade weak ones. When nightly news clips of body bags are replaced by bags of bolts and computer chips, the invading country has less skin in the game, and the public outcry against the endless homecoming of dead and wounded will be muted.
Finally, Hinton is concerned about AI’s ability to corrupt elections. “AI slop” was a term originally coined to describe low-quality, AI-generated media. But it’s everywhere now, and it’s becoming more sophisticated, much easier to create, and harder to spot. It’s highly manipulative and corrosive to the concept of a shared objective reality. Hinton strongly argues for regulatory constraints on AI but acknowledges that the current political climate makes that all but impossible.
A few months ago, Linda McMahon spoke at a conference on educational innovation where she repeatedly referred to AI as A1 – like the steak sauce. Oh, and in case you’ve forgotten, she’s a former professional wrestling promoter who now inexplicably serves as Secretary of Education.
Perhaps we don’t have to obsess about all the ways AI can potentially harm us. We’re already doing such a swell job of that ourselves.