As I See It: Upgrade Exhaustion
May 13, 2024 Victor Rozek
I recall seeing, several decades ago, a 60 Minutes episode about an arms transfer (or perhaps it was a sale) of state-of-the-art American fighter jets to Israel. The interviewer traveled to the Middle East to see how Israeli pilots were adapting to the latest in American military technology. After being assured that the jet performed as advertised, the interviewer looked into the cockpit and marveled at all the screens, dials, switches, and gadgetry confronting the pilot. A deluge of supposedly useful information was available to the pilot in real time. But the sheer volume of it seemed daunting. Curious about how pilots manage to absorb this bounty of data, the interviewer asked: “If you’re engaging an enemy fighter, how in the world do you keep track of all this stuff?”
“Oh, we don’t,” was the reply, “we turn all that off.” Apparently, it was more distracting than useful. The manufacturer probably sought to impress the Air Force during the plane’s design phase by suggesting every conceivable addition and upgrade, knowing that every bit of new technology would increase the appeal and the cost – whether it was ultimately useful to pilots or not.
But something may be nice to have without actually being practical or even functional. I’d love to have a pet giraffe, but for obvious reasons a cat is a more manageable choice.
Practical low-tech solutions are often overlooked or dismissed simply for not being new. But in spite of the technological bounty, the Israeli pilots made several low-tech modifications to make the fighter jets better able to survive combat situations. One brilliant addition was mounting mirrors on the outside of the fuselage. During a multi-plane dogfight, with swarms of high-speed moving targets, it’s distracting for a pilot to constantly swivel his/her head to see who might be angling behind them for the kill shot. Mirrors allowed the pilots to concentrate more on flying and less on looking.
Essentially, the pilots weren’t adapting to the technology as much as they were adapting the technology to suit their own needs.
The same technological excesses plague the automobile industry. Consider that current average new car prices hover around $48,000, in large part because there are now between 1,000 and 3,000 semiconductor chips in every vehicle. But does the average driver actually use even a small portion of the available features? Full disclosure: I drive a 22-year-old truck. It has over 336,000 miles on it and features such oddities as hand-cranked windows that let me roll the window down to exactly the spot I want, without making multiple minute adjustments because an electric motor is overly sensitive to the touch. And it has a simple key that starts the engine and can be replaced for $5, instead of a key fob that may cost over $800 to replace.
My wife, on the other hand, drives a newish van. It has scores of cameras, computers, bells, whistles, and other safety features that drive me nuts when I drive it. It pings if I happen to stray out of my lane; it beeps to warn me of this or that, and it has a large computer console with dozens of menus for features I will never use. It has WiFi and, in case I get lost driving into town, GPS tracks the progress of the vehicle just in case I can’t tell where I am by looking out the window. Like the Israeli pilots, I turn most of that stuff off.
Safety is, of course, an important consideration, but what happens when safety features become distracting, or drivers become less attentive believing the vehicle will manage their safety? In spite of all the computer chips, cameras, and warning systems, in my state the number of traffic fatalities per year rose 73 percent from 2010 to 2022.
The question is: How much technology is enough, and why does everything need to be computerized? Appliances tend to break more frequently these days because they are overly complex. Does anyone really need a refrigerator that makes grocery lists, or a smart toothbrush?
IT professionals and technology users alike chronically deal with upgrade exhaustion. Operating systems and apps change with unannounced regularity, often introducing features that no one requested, and changing the way perfectly functional software worked in the prior version. Migrating to new systems can be a formidable task. When the IT salesperson assures you that her new system will cut your work in half, say: “That’s great, in that case give me two.” But it never seems to work out that way.
Naturally, skill development and professional growth are essential to a successful career. But how much time do we invest relearning to do things we are already competently doing? Upgrades, whether in jet fighters, automobiles, smartphones, or computers should increase functionality without sacrificing ease and usability. It’s clear, as is often the case, that the people who design the product aren’t the ones who actually use it.
With the introduction of AI, the upgrade problem will become less visible but more consequential. Consider that at any given moment in time we believe ourselves to be at the apex of human knowledge. Our limitations are hard to see and even harder to admit. Imagine if AI existed in a historical context. Who would 15th century AI claim discovered the “New World” and what conclusions would it draw about the “savages” encountered there? In the late 17th century, would AI have found justification for the Salem Witch Trials and valid grounds for continuing to hang women as witches? If, during the Civil War, the Confederacy had AI, would it produce compelling arguments for slavery?
One of the problems with AI is that it can be confidently mistaken, and the more dependent on it we become, the less we will question its output. As AI becomes more easily scalable, every government, every faction, every scammer and corporation will design their own AI systems to help advance their particular agenda. AI model training and the algorithms that feed it will be culturally, politically, and strategically biased. It will be impossible to control who teaches AI, how, and for what purpose. And it’s a short jump from accidental hallucination to deliberate mistake.
It is fascinating to ponder how people three hundred years from now will judge the infancy, growth, and ultimate uses of our emerging AI. Assuming, of course, we’re still around then.