As I See It: Up The Downside
October 31, 2022 Victor Rozek
For all its transformational benefits, technology has an undeniable downside which regularly produces a range of disagreeable outcomes, from annoyance to overt threat. As a subject, technological dysfunction is a target-rich environment, but my word count limit – not to mention the limits of a reader’s attention span – prevents a full accounting. So, I’ve chosen three illustrative examples.
On the annoyance end of the spectrum is the near-complete destruction of customer service. These days, reaching an actual human being on the phone requires the patience of a monk and the persistence of a mosquito. No matter who you call, be advised their menu options have changed. No matter when you call, they are always experiencing high caller volumes and all available agents (apparently both of them) are helping other customers. All the while you’re on perpetual hold, being told ad nauseam that your call is very important (to whom is not clear) until frustration boils over and all you want to do is hang somebody from the phone tree.
When you eventually do get through, it’s to someone in a third-party call center in Vietnam or Indonesia with questionable English skills, minimal training, and little authority to resolve issues. They’ll gladly transfer you to someone who can help, but surprise, surprise, that person usually turns out to be unavailable. I’m beginning to think that the only reason customer service calls are recorded is to make sure no one is accidentally being helped.
Some corporations have decided to drop the pretense that your call has any import whatsoever, and simply tell you how much of your life you’ll have to waste in order to get in touch with one of their indifferent employees. My wife was recently told her estimated wait time would be over two hours! Suffice it to say there is a new dent in the wall.
Customer service has been castrated by computers. Perhaps AI will usher in a new generation of compassionate, helpful, and available customer service. But don’t bet on it. At best, it will enhance the pretense of customer service. Combine Artificial Intelligence and Customer Service and what do you get: Artificial Customer Service.
Besides, AI has its own range of issues, from predictions of massive job elimination to warnings of a dystopian future. Worse, AI will soon be contributing to an avalanche of forged content, all but indistinguishable from reality. AI apps are already facilitating the creation of images of people saying things they never said, and doing things they never did. What could possibly go wrong?
It was only last April that research lab OpenAI debuted the latest version of its text-to-image generator DALL-E. Five months later, reportedly 1.5 million users are generating 2 million images a day.
Not to be outdone, Google and Meta rushed to announce that they, too, were developing similar products, while a start-up, Midjourney, outdid them all by creating an image that won an art competition at the Colorado State Fair.
Forging art is one thing, but so-called deepfakes are quite another. Politicians and celebrities are already frequent targets of humorous, salacious, or downright absurd videos. Soon we’ll see Ron DeSantis administering Covid vaccines to immigrants, while Nancy Pelosi eats babies. But then there’s this: About a month after Russia invaded Ukraine, a video was posted showing President Volodymyr Zelensky ordering his country’s troops to surrender to the Russians.
The problem, as articulated by Wael Abd-Almageed, a professor at the University of Southern California’s school of engineering, is one of trust. “Historically, people trust what they see,” he said. “Once the line between truth and fake is eroded, everything will become fake, we will not be able to believe anything.” Which is to say, the deliberate computer-aided destruction of trust in our institutions has been ongoing for years and will only accelerate.
The warnings finally got loud enough to get Washington’s attention. In response, the crack policy minds at the White House’s Office of Science and Technology Policy came up with an “AI Bill of Rights,” ostensibly to protect the public. It contains a wish list of five guidelines, so general as to be essentially meaningless. Here’s how The Washington Post summarized it:
- Users should be ‘protected from unsafe or ineffective’ automated systems, and tools should be expressly ‘designed to proactively protect you from harms.’
- Discriminatory uses of algorithms and other AI should be prohibited and tools should be developed with an emphasis on equity.
- Companies should build privacy protections into products to prevent ‘abusive data practices’ and users should have ‘agency’ over how their data is used.
- Systems should be transparent so that users ‘know that an automated system is being used’ and understand how it’s affecting them.
- Users should be able to ‘opt out of automated systems in favor of a human alternative, where appropriate.’
How these mandates would be enforced is not clear.
Which brings us to the last unintended consequence of evolving computer technology: perpetual cyberwar.
The Pentagon was recently embarrassed after Facebook and Twitter removed fake accounts suspected of being run by the Defense Department. The military conducts overseas influence operations in many languages. In most cases they are designed to counter propaganda and misinformation undermining U.S. interests. But running afoul of the very low standards of Facebook and Twitter raises concerns about efficacy and competence. After all, the first rule of clandestine ops is: Don’t get caught.
The issue of competence is a pressing one not only because our adversaries have long had an active presence in cyberspace, but because the very nature of warfare has changed.
In a Washington Post op-ed, David Ignatius reviews the analysis found in the recent book “Cyber Persistence Theory: Redefining National Security in Cyberspace.”
Its authors offer a historical perspective on warfare. Ever since humans first began slaughtering each other, the object has been to force the enemy into submission. Whether it’s more guys throwing more rocks and wielding bigger clubs, or nations building larger armies with more guns, more ships, and the like, the object was the same: forced capitulation. That changed with the first atomic bomb. The object then became deterrence—everyone was going to lose in an atomic war. Now, cyber weapons have fundamentally changed the nature of warfare again. “Borders don’t matter much to digital code,” says Ignatius, “and cyberwar is a continuum (always happening at a low level), rather than an on-off switch.”
The authors view cyberspace as “an environment of exploitation rather than coercion.” Thus, strategic gains can be achieved without requiring any concession from the target.
“Weapons can’t be counted, identified, tracked or easily controlled. They are used in a borderless electronic world where traditional ideas of sovereignty don’t work very well.” The authors argue that this domain is both easily exploitable and stable; computers are more easily replaced than infrastructure.
The authors recommend that strategists develop “rules for continuous engagement, rather than plan for contingencies; they should prepare for continuous operations not episodic ones, and they should seek cumulative gains, rather than final victory.”
It’s certainly messy and unpredictable, but it’s a lot safer than guns, tanks, and bombs. Wouldn’t it be ironic if forever cyberwar turned out to be an unintended gift, rather than an unintended consequence, of ever-evolving computer technology?
Now, if only someone could fix customer service.