As I See It: When Machines Excel
October 20, 2014 Victor Rozek
Ebola transmission, medical authorities assure us, occurs only through direct contact with the bodily fluids of an infected person or animal. And even then, the virus can only enter the body through a crack in the skin or by coming in contact with mucous membranes. While there is no reason to doubt the veracity of that assertion, I found it difficult to square with the surprising number of medical professionals who have contracted the disease even though clad in full protective attire. I wondered whether the virus was being transmitted in some other way.
My suspicions were probably a result of watching too many movie portrayals of global pandemics, but I couldn’t help speculating that someone had managed to weaponize Ebola. In a perfect world that would be a crazy question, but it’s not a perfect world. Anthrax, smallpox, and an assortment of designer plagues live in covert labs where the psychopaths of science toil to find ever more effective ways to kill massive numbers of people.
Weaponized pathogens are just one of the consequences of our technological prowess. Since the advent of the atomic age, and the amplification of human intelligence through the power of computing, technology has acquired something akin to superpowers with the potential, in the extreme, to either save humanity or finish us off.
That second possibility has some notable thinkers sounding the alert. Among them, Stephen Hawking (arguably one of the best minds of his generation) joined other scientists and futurists who warn that the creation of advanced artificial intelligence may turn out to be our final invention. The fear is that AI will, at some point in the near future, take a quantum leap in brainpower and acquire the ability to self-replicate and self-improve in ways perhaps not intended.
Humanity’s evolutionary advantage has always been intelligence. Once that advantage is lost, humans may find themselves unable to control their future. Machine adaptation will occur at an infinitely faster rate than human evolution. Racing ahead, computers will be operating outside of human control and far beyond human understanding.
Advanced capability, however, will not guarantee a nuanced understanding of consequences, which, in any event, would be different for man and machine. And the decisions computers make that may cause widespread damage, whether deliberate or unintended, will occur in milliseconds, providing little time for countermeasures. George Dvorsky, writing for io9, warns that the scope of potential AI-caused catastrophes is nearly limitless. “AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots, take control of a factory or military installation, or unleash some kind of propagating blight that will be difficult to get rid of (whether in the digital realm or the real world). The possibilities are frighteningly endless.”
Hawking imagines an intelligence that may become unaccountable as well as uncontrollable. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
A lack of control is also central to a second concern: a global ecological meltdown caused by weaponized nanotechnology. Weaponizing technology on a molecular level is perhaps the height of insanity, which pretty much guarantees some psychopath will adapt nanotechnology for precisely that purpose.
Given the sadistic medievalism that passes for religious belief in parts of the world, it’s hardly beyond the realm of imagination to foresee that a small group of fanatics would happily unleash an engineered pandemic that could kill several billion people in the name of their God. The James Bond scenario where a single villain or a small group of non-state players can threaten to destroy the planet is rapidly coming to fruition. It’s just not clear who or what will play the part of Bond.
Exponentially self-replicating biological Pacmen that devour biomass or blot out the sun are two of several possible scenarios. By some estimates it would take as little as 20 days to make vast portions of the planet unlivable.
Simply stated, the human dilemma is this: We do things not because we should, but because we can. Whether through error or terror, war or fear, or monumental stupidity, if we can imagine it, sooner or later we seem compelled to do it. The most horrific technologies eventually escape the confines of laboratories.
While a number of high-risk technologies coming online are potentially too valuable to abandon, the consequences of their use are largely unpredictable. Toward that end, physicist and cosmologist Sir Martin Rees, in collaboration with philosopher Huw Price and Skype co-founder Jaan Tallinn, started the Cambridge Project for Existential Risk, staffed with an assortment of interdisciplinary minds capable of thinking deep thoughts.
It is one of several such non-profits. Another is the Machine Intelligence Research Institute, which seeks to develop “a mathematical theory of trustworthy reasoning for advanced autonomous AI systems.” There is also The Future of Humanity Institute in Oxford which, like its Cambridge counterpart, applies mathematics, philosophy, and science “to the Big Picture questions about humanity and its prospects.” And then there is the aptly named Future of Life Institute. These think tanks are dedicated to the study and mitigation of risks presented by the battery of new technologies with apocalyptic potential. The problem, as always, is that only rational people will listen.
We have a long history of ignoring the canary in the coalmine. Eisenhower warned us about the growing military/industrial complex. (Actually, he said the military/industrial/Congressional complex, but people forget that last and very accurate bit for mysterious reasons.) Katsuhiko Ishibashi is a seismologist who warned the Japanese government about the risks of building nuclear power plants in vulnerable areas. Joseph Wilson revealed that Iraq was not attempting to acquire yellowcake uranium. As frequently as not, warnings are followed by disasters.
But not everyone is pessimistic about the future of technology. It may, in fact, be liberating beyond anything we can imagine. Before he passed, I.J. Good, the mathematician who originated the concept of the intelligence explosion that underlies the singularity, had a qualified but essentially optimistic view of the future. In Good’s experience, computers had previously played a key role in salvaging humanity’s prospects, and could do so again. As a mathematician and statistical mastermind, he had worked closely with Alan Turing at Bletchley Park during the Second World War. Using Turing’s rudimentary computing machinery, they broke Enigma, the seemingly impenetrable German cipher machine, an achievement believed to have shortened the war by several years.
Good wrote: “The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Aye, Mr. Good, there’s the rub.