As I See It: The Surgical Years
March 16, 2026 Victor Rozek
Should you be lucky enough to live so long, you will enter what I call “The Surgical Years.” Inevitably, regardless of how much kale you eat, or exercise you get, or protein drinks you guzzle, your body will betray you and you will require serious medical intervention. That poses a two-pronged dilemma for millions of people: cost and availability.
Insurance premiums and co-pays are skyrocketing. Out-of-network care is often simply unaffordable. Rural hospitals are closing, and medical appointments, particularly with specialists, are often unavailable for months. Many people are forced to ignore nagging pains until their condition becomes unbearable, which compromises the possibility of prevention and amplifies the cost of care.
Lower-income households can spend over a third of their income on healthcare; middle-class households just under 16 percent. Having few cost-effective options, people are increasingly relying on AI for medical advice. That can be problematic, according to Dr. Virginia Haoyu Sun of Harvard Medical School.
Dr. Sun argues that expectations for AI’s accuracy and precision fall short as it struggles to bridge the gap between design concepts and real-world results. AI typically answers questions with great certainty and confidence, regardless of correctness. It provides answers based on available data, not necessarily all relevant information. Medical data, such as patient and hospital records, are private and protected by HIPAA regulations—a wealth of applicable data unavailable as training material for AI.
Nor is it clear what data AI is actually digesting. Certain conditions and diseases will be more pronounced in specific regions. Certain populations or ethnic groups may be more vulnerable but have not been adequately studied. Where and how AI is trained will determine its biases, and therefore its accuracy.
Personal values may also conflict with AI recommendations. AI, Dr. Sun notes, can’t discern what a patient’s personal values may be: surgery versus radiation, longevity versus quality of life, additional treatment versus palliative care. These are intensely personal and subjective choices which may not neatly align with machine logic.
For those and other reasons, at its current state of development, Dr. Sun sees AI as a decision support tool, rather than a decision-maker. It can, however, reduce what she calls “decision-making fatigue” and liberates physicians from endless hours of documentation.
But even as the accuracy of AI is being questioned within the healing profession, more and more people are turning to AI for medical advice. OpenAI claims that in excess of 230 million users already ask ChatGPT health and wellness questions each week. Which is odd since less than a third of Americans say they trust AI, perhaps because of its predicted impact on jobs. But the specter of losing income coupled with ease of access to medical information may be precisely why AI is rapidly becoming the physician of choice.
One popular pastime is asking ChatGPT to answer questions about health issues by allowing it to access the user’s medical records, and/or requesting analysis of health and fitness levels using data from fitness trackers.
Geoffrey Fowler of The Washington Post decided to ask ChatGPT’s new offering, ChatGPT Health, to grade his cardiac health by giving it access to a decade of fitness data: “29 million steps and 6 million heartbeat measurements stored in my Apple Health app.” The grades ranged from A to F and the bot flunked him.
Properly panicked, he did two things: went for a run, and passed his fitness data to medical professionals, who strongly disagreed with the bot’s assessment.
AI companies claim their health apps are not designed to provide clinical assessments (after all, they cannot perform physical exams), but disclaimers are conditional and few and far between. Less than 1 percent of AI outputs included a medical disclaimer in 2025, down from over 26 percent in 2022, which speaks to the near-total absence of regulation. Experts also fear that bots are trained to prioritize user satisfaction, sometimes at the cost of accuracy.
The reality is that people are paying more money for health insurance plans that give them less protection. Many are burdened with student debt, or have kids whose independence has been usurped by debt and who cannot afford to live outside the parental home. Others have crushing eldercare costs. Still others are either nearing retirement or elderly themselves. Once AI has evolved enough to be a reliable medical soothsayer, it will doubtless become a valued and valuable tool. Early diagnosis, effective prevention/treatment recommendations, an accurate prognosis, can save a great deal of needless worry and expense.
But if the medical ethic is First Do No Harm, then AI is largely antithetical to that principle.
Job losses and displacement, large-scale surveillance systems, autonomous weapons: without regulation, AI technology will doubtless cause enormous harm, ironically even to the industry that spawned it. Eleanor Pringle, writing in Fortune, notes that: “Software stocks in particular suffered a wipeout amid mounting concerns that large language models may replace current service offerings.” The S&P 500 software and services index shed approximately $1 trillion in market value by early February 2026.
It gets worse. “Companies in the legal, IT, consulting and logistics sectors were also impacted. JP Morgan wrote last week that some $2 trillion had been wiped off software market caps alone as a result. . . .”
Small wonder that in 2025 alone, over 50,000 US layoffs were attributed to AI, with companies like Amazon, Salesforce, and IBM citing AI as a factor in restructuring.
Former Secretary of Labor Robert Reich writes: “I remember when IBM was the nation’s most valuable company and among its largest employers, with a payroll in the 1980s of nearly 400,000 employees. Today, Nvidia is nearly 20 times as valuable as IBM was then and five times as profitable (adjusted for inflation), but it employs just over 40,000. Nvidia, unlike the old IBM, designs but doesn’t make its products.”
Likewise, says Reich: “Over the past three years, Google parent Alphabet’s revenue has grown 43 percent while its payroll has remained flat. Amazon’s revenue has soared, but it’s eliminating jobs.”
No, AI is not a doctor, but it plays one on the Internet. And although some of the medical information provided by AI is valid and useful, the ultimate irony is that it’s being provided by a technology whose impacts will make a lot of people sick.