As I See It: To Think Or Not To Think
July 28, 2014 Victor Rozek
Bloomington, Indiana, is not the stuff of science fiction. Yet for the past 30 years, teams of scientists toiling in a nondescript house near Indiana University have been quietly conducting a decidedly Asimovian experiment. They have been teaching computers to think. Given the highly publicized achievements of IBM's playful R&D department–the spanking of chess champion Garry Kasparov by Deep Blue, and the Jeopardy dominance of everybody's favorite know-it-all, Watson–it is tempting to believe that the challenges of creating thinking machines have largely been solved.
But those breakthroughs relied more on brute force than nuanced understanding. Dave Ferrucci, Watson’s daddy, readily admits his group had no intention of trying to model human cognition. “Absolutely not,” he says. “We just tried to create a machine that could win at Jeopardy.”
But while IBM gorged on the publicity generated by its programmable prodigies, the Bloomington project has been progressing without fanfare. Not that the program is swaddled in secrecy. Rather, its reticent profile is a reflection of the man who created it.
Douglas Hofstadter was once the wunderkind of artificial intelligence. In 1979, as a first-time author no less, he published a book titled Gödel, Escher, Bach: An Eternal Golden Braid that quickly became “the bible of artificial intelligence.” So remarkable were his insights that they earned the 35-year-old Hofstadter a Pulitzer Prize and the National Book Award for science. But soon he and the AI community parted ways. There were many pressing and profitable problems that computers could solve without achieving a state of simulated consciousness. While Hofstadter remained fixated on teaching computers how to think, the majority of his colleagues refocused their energies on teaching computers what to think.
The latter was by far the easier task and was therefore conducive to faster progress and greater reward. It wasn't really necessary for computers to comprehend what they did; they simply had to do it quickly and efficiently. If the goal was intelligence, machines could be trained through the use of algorithms, probabilistic programming, and pattern recognition. They "learned" from experience (another word for repetition) by digesting massive amounts of data, discerning trends, and extrapolating outcomes. Without mastering the workings of the human mind, computers could nonetheless master chess and match dating partners; pilot drones and operate cars; conduct web searches and pluck a single fanatic's communications from an ocean of global babble. They provided practical and purchasable solutions to an array of difficult problems. It turned out that, for the present, being clever was more useful than being conscious. AI had moved on.
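For readers who want the flavor of what "discerning trends and extrapolating outcomes" means in practice, here is a minimal sketch: fit a straight line to past observations, then project it forward. The data points and function name are invented for illustration; the point is that the program predicts without comprehending anything about what the numbers represent.

```python
# Toy illustration of "learning" as pattern extraction, not understanding:
# fit a trend to past observations, then extrapolate to a new input.
# The history data below is made up purely for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# "Experience": five past observations of some quantity over time.
history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]
xs, ys = zip(*history)
slope, intercept = fit_line(xs, ys)

# "Prediction": extend the discerned trend to an input never seen before.
forecast = slope * 6 + intercept
```

The machine here gets a usable forecast out of repetition and arithmetic alone, which is exactly the kind of clever-but-not-conscious competence the paragraph describes.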
But that wasn’t good enough for Hofstadter. Understanding, he said, was being replaced by engineering, and he would not bolster the pretense that machines possessed authentic intelligence. Hofstadter’s challenge, however, was that no one fully understood the workings of the mind, so for three decades he and his graduate assistants have been tutoring machines by writing programs that “think.” The curriculum includes the subtleties of language, communicating through analogies, and using recognition as the pathway to cognition.
Though consciousness remains elusive, Hofstadter was doubtless glad to learn that computers are developing an affinity for human nature–at least its distasteful side. In 2009, researchers in Switzerland discovered that robots had learned to lie. Business Insider reports that the bots were designed “to cooperate in finding beneficial resources like energy and avoiding hazardous ones. The robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.” Greed, and the desire to win at any cost: sounds like a perfect gift for the hedge fund manager who has everything.
Machines that lie, if not specifically programmed to do so, suggest an element of self-awareness. That, at least, is Louis Del Monte’s fear. Del Monte is a physicist and author of The Artificial Intelligence Revolution. Lately, he’s been sounding the alarm about machines that will soon outmatch, or at least have access to, the world’s combined human intelligence. He thinks they will eventually find us “an unpredictable and dangerous species.” After all, we are demonstrably unstable, argues Del Monte. We “create wars, have weapons to wipe out the world twice over, and make computer viruses.”
Or perhaps computers will simply look upon us as inferior. Like Downton Abbey footmen: useful to have around, but not to be trusted with matters truly important. “As our machines get faster and ingest more data, we allow ourselves to be dumber,” writes James Somers in an article about Hofstadter that appeared in the October 2013 issue of The Atlantic. That’s true, no doubt, because the possibility of sustaining life with minimal effort is seductive. Soon everyone and everything will be tethered to vast networks of machines ostensibly working on our behalf. Paradoxically, computers amplify intelligence, but they also make us lazy. They provide knowledge without scholarship, connection without commitment, and distraction on demand. In the process, the ability (or need) to think becomes compromised.
For the moment, computers excel at calculation, not contemplation. But contemplation, the essence of self-directed thought, requires the ability to turn down the static, sit quietly, and focus. And that is a discipline increasingly in short supply. Kerry Sheridan, writing for AFP News, reported on a troubling study that showed “people would rather inflict pain on themselves than spend 15 minutes in a room with nothing to do but think.”
About 200 people were asked to simply sit in an empty room for up to 15 minutes and report on their experience. “More than 57 percent found it hard to concentrate and 89 percent said their minds wandered. About half found the experience was unpleasant.” Those results may not be all that surprising, but then researchers wondered just how far people would go to distract themselves from the horrors of keeping their own company.
In one experiment, participants were told they could distract themselves by administering a mild electric shock. “Two-thirds of the male subjects gave themselves at least one shock while they were alone. Most of the men shocked themselves between one and four times.” But one guy, with a high tolerance for pain and a low tolerance for exploring his inner landscape, “shocked himself 190 times.”
The women were a bit more restrained. Only a quarter decided to zap themselves, but they did it more often, up to nine times. According to Sheridan, all of those who shocked themselves had previously said they would have paid to avoid it. But in the end, they preferred a painful stimulus to none at all.
So, we’re left with machines that can’t yet think and a growing number of people who can but won’t. Hofstadter once wrote: “I would like to understand things better, but I don’t want to understand them perfectly.” Not to worry. In his continuing effort to unravel and replicate the intricate mysteries of the human mind, perfect understanding will not be among his disappointments.