As I See It: In Search of Digital Wisdom
November 17, 2014 Victor Rozek
Logic suggests that natives always precede immigrants, and history records that the succession process frequently doesn’t work out well for the natives. At least until now. Technology, it seems, has turned succession on its head. According to Marc Prensky, when it comes to adapting to technology, not only did the immigrants precede the natives, but the natives are doing better than their precursors.
In 2001, Prensky, an educator by trade and a futurist by inclination, published an article introducing the concepts of “Digital Immigrants” and “Digital Natives.” In his model, the immigrants are the elders among us who remember a time before computers ruled the world. For immigrants, transitioning to a life dominated by technology was sometimes clumsy and always challenging. Natives, on the other hand, are the generation that, from birth, has been surrounded by electronics, weaned on a diet of video games, digital music players, computers, and cell phones. They are, says Prensky, “native speakers of the digital language,” neither frightened nor overwhelmed by the sheer volume and diversity of consumer technology.
While “immigrant” and “native” remain useful concepts, as time passed the distinctions between the two groups became less relevant. Digital technology grew to be the irresistible force, sweeping all of civilization before it like a tsunami. Given the velocity of AI research and the ubiquitous nature of computers, Prensky believes some form of singularity is near. The issue, then, is no longer one of comfort with technology, but of accelerated evolution, powered by machines designed to enhance human intelligence. Prensky calls it “digital wisdom” and goes so far as to suggest a new variation of human is emerging: H. Sapiens Digital.
Digital wisdom, in this context, refers not only to the benefits arising from the use of digital technology but also “wisdom in the prudent use of technology to enhance our capabilities.” That, at least, is Prensky’s hope, although when an opportunity to establish dominance presents itself, humanity has seldom shown an inclination toward self-restraint. Technology may not replace intuition or compassion, Prensky admits, but it will bestow an unimaginable advantage on the enhanced population. Individuals with access to all of recorded knowledge and history will acquire a concentration of wisdom heretofore unimagined. However wise a digitally unenhanced person may be, he will have a difficult time competing with his ignorant but enhanced brethren.
“When we are all enhanced by implanted lie detectors, logic evaluators, and executive function and memory enhancements–all of which will likely arrive in our children’s lifetimes–who among us will be considered wise?” asks Prensky. “The advantage will go, almost certainly, to those who intelligently combine their innate capacities with their digital enhancements.”
Looming singularity will require a fundamental redesign of our educational system. If our schools appear to be floundering, their biggest failure, Prensky argues, is their inability to understand that students have fundamentally changed. “Today’s students are no longer the people our educational system was designed to teach.”
Not only are digital natives techno-centric, but evidence suggests that because of their life-long immersion in digital technology, their brains are actually different. The average college graduate will have spent twice as much time playing video games as reading. The pace and information-delivery style that natives are accustomed to bear little resemblance to traditional teaching methods. Simply put, natives think and process information differently. But our schools, says Prensky, are full of digital immigrants trying to teach digital natives. Their pre-digital-age bias toward slow, methodical, step-by-step, lecture-based learning is foreign to natives. As one high school student reportedly said: “Every time I go to school I have to power down,” a remark that may be both literal and metaphorical. Natives prefer a fast pace and graphics rather than text, with instant feedback and frequent rewards, all while being connected to their peers. (If you like your networking with a dash of science fiction, University of Washington researchers recently completed the first brain-to-brain interface, which allowed two people in separate locations to complete a task by communicating thoughts over the Internet.)
When singularity does occur, students may find learning from artificial intelligence more compelling than traditional classroom instruction. But if the goal is to attain a higher level of “wisdom” through the amplification of human skills, can computers develop a corresponding amplified sense of morality and ethics? And if so, according to whose standards?
Prensky believes that the digitally wise will make better ethical choices, based on their ability to process far more data and simulate a wider array of possible consequences. By that standard, the scope of “wisdom” narrows to the right use of information. Under those conditions, the nuances of morality will bypass computational deliberation and remain the purview of the less intelligent but more sensitive human.
In fact, Prensky thinks there will come a time when humans will no longer be distinguished by the exceptional size of their brains. However dominant by evolutionary standards, the brain’s capacity will be dwarfed by the power of artificial intelligence. Instead, he believes humans will be recognized for their “social skills, emotional capacities, and moral intuitions.” The ability to duplicate a subset of machine skills will not be necessary or marketable; people instead will be valued for the softer side of human nature. Unless, of course, machines decide that “wisdom” resides primarily in self-interest. Then, the quest for securing personal advantage will, once again, produce a handful of winners and legions of losers.
Prensky, however, is optimistic that the digitally wise will ensure enhancements are widely available and not just reserved for the ruling elite. To date, however, the use of artificial intelligence suggests the opposite. AI has been a pivotal factor in the consolidation at the top tiers of the Internet. There is a circular reinforcement that results from the use of artificial intelligence. Companies that make decisions based on computer analysis typically have access to large amounts of data. And the deeper the pool of data, the more effective the analysis, thus enabling companies to make better decisions and grow larger still. David Brooks offers corroboration from Astra Taylor’s book The People’s Platform: “In 2001, the top 10 websites accounted for 31 percent of all U.S. page views. But by 2010 they accounted for 75 percent of them.” The power, the money, and the proliferation of AI are concentrated at the top. That this would change with singularity is not only unclear, it is unlikely. Singularity plus infrastructure will always be more powerful than stand-alone singularity.
In any event, there is a difference between being digitally smart and being truly wise. Singularity may offer easy-to-maintain intelligence, obtainable without scholarship, but at the cost of sacrificing wisdom for ease. Regardless, the definition of wisdom in the digital age remains as elusive as ever, influenced as it is by context, religion, and culture. One man’s humanist remains another man’s infidel. One man’s computer gives offense to the God of another.
Confucius said: “By three methods may we learn wisdom. First by reflection, which is noblest; second by imitation, which is easiest; and third by experience, which is bitterest.” One thing is certain, if “digital wisdom” leads us down the bitter path, it won’t be computers that suffer.