As I See It: Future Schlock
February 27, 2006 Victor Rozek
Raymond Kurzweil and Bill Joy are the Yin and Yang of technological consequence prediction. Kurzweil, the optimist, is part inventor, part futurist, and part philosopher. He is a best-selling author, recipient of the 1999 National Medal of Technology, and member of the National Inventors Hall of Fame, so he knows whereof he speaks. Joy, the pessimist, is the quintessential nerd, turned computer architect, turned dark futurist. He is the co-founder of Sun Microsystems and until 2003, he served as its chief scientist. He was also co-chair of the presidential commission on the future of IT research. So he, too, knows whereof he speaks.
These are accomplished and complicated men, not given to speaking without reflection. It is curious, therefore, that when these two experts look at identical data, they draw startlingly different conclusions, neither of which is terribly appealing.
In a nutshell, Kurzweil believes that if you can just live long enough and avoid the major killers like heart disease and cancer, you may be able, with the help of emerging technologies, to achieve immortality. Not coincidentally, his second career has become eating supplements. He ingests about 250 pills and powders of various sorts each day in an attempt to stave off his genetics until the good ship immortality is ready for boarding. His latest book, co-authored with Terry Grossman, M.D., is Fantastic Voyage: Live Long Enough to Live Forever. Now, that's optimism.
Joy's position is summarized in an article he authored for Wired in 2000. In it, he predicts a world in which technology is out of, and beyond, our control. He quotes Hans Moravec, a robotics guru and one of the founders of the world's largest robotics research program at Carnegie Mellon University. "Biological species," Moravec notes, "almost never survive encounters with superior competitors." And, according to Joy, emerging technologies such as robotics, nanotechnology, and genetic engineering will produce infinitely superior competitors. Humans, he predicts, will become unnecessary, a belief reflected in the title of his article, "Why the Future Doesn't Need Us." Now, that's pessimism.
Kurzweil's case hinges on his belief in the inevitability of something called "technological singularity." The concept of singularity dates back to a conversation between Stanislaw Ulam and John von Neumann, recounted by Ulam in 1958. Basically, singularity predicts a time when technology will make a quantum leap after which none of the old suppositions will apply. Change will come so fast that we cannot even fully imagine the outcomes from our current trammeled perspective, and humanity as we know it will cease to exist. Presumably, everyone will be fitted with a conversion kit and will reemerge as something called "posthumanity." What that will be exactly, nobody knows, but we can postulate that it will be part human, part machine, with an artificial intelligence boost and a broadband connection to group consciousness. Something like a Borg with a heart.
For true believers who can’t wait to become 21st century bio-mechanical, chip embedded, polymer-based composites, this is nothing less than the promise of the techno-rapture.
Kurzweil offers four postulates in support of singularity. The first is: Acceptance, and Striving for the Idea of Living Forever. It's a fancy way of saying, wishing will make it so. Kurzweil argues that many predictions that seem far-fetched at the time appear inevitable after the fact (man walking on the moon, a computer beating a chess champion, the Red Sox winning the World Series). So, if we believe in the possibility of achieving immortality, we will find a way to attain it.
Kurzweil’s second postulate is: The Law of Accelerating Returns, which states that technology is progressing toward singularity at an exponential rate. It’s Moore’s Law on steroids. Kurzweil provides models which show that not only the return on technological investment, but the rate of return is increasing exponentially. Thus, singularity has its own gravitational pull. But that’s if everything goes according to plan. Those living during the Golden Age of Greece and the Glory That Was Rome probably couldn’t foresee the Dark Ages.
His third postulate assumes: An Objective Measurement of Cerebral Processing Power. He asserts that the functionality of the brain is quantifiable in terms of technology that we can build in the near future. This becomes important because after singularity, humans, as we know them, will cease to be the dominant force in scientific and technological progress. Computers will become smarter than humans, and perhaps even conscious. The quantum increase in the rate of technological change is predicted to follow the liberation of our consciousness from the confines of our biology. Plus, presumably, we would not wish to have sitters for all eternity who are dumber than we are.
Kurzweil’s last postulate has to do with Sufficient Medical Advancements. It’s a race between death and immortality, and death has had a very long head start. For Kurzweil’s generation to have a shot at immortality, it must be kept alive long enough for the exponential growth of technology to transcend the processing power of the brain (the whole brain, not just the fraction we typically use). Kurzweil predicts that, eventually, microscopic nanobots will replace and repair failing body parts and supply medicines as needed to keep body chemistry in perfect balance. The trick is surviving until then, so Kurzweil keeps popping the supplements, confident that medical science can keep him ticking until the second coming of singularity.
Kurzweil envisions a merger of biology and robotics with a resulting technological utopia where man and intelligent machine stroll into the sunset together joined at the brain if not the hip. That prediction is particularly unsettling to Joy because, as he puts it, Kurzweil the inventor has a “proven ability to imagine and create the future.”
But even if Kurzweil is successful, Joy sees a very different final outcome. “We have yet to come to terms with the fact that the most compelling 21st century technologies–robotics, genetic engineering, and nanotechnology–pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once, but one bot can become many, and quickly get out of control.”
Joy foresees a future in which genetically engineered organisms and nanobots will run amok. He uses the example of computer networking to make his point. "The sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk–a risk of substantial damage in the physical world."
The damage may occur as the result of an accident, but Joy doubts it, because unlike the WMDs of the past, which required rare materials and huge manufacturing and assembly facilities, biological weapons require only a lab and a bit of specialized knowledge. "I think," says Joy, "it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states; on to a surprising and terrible empowerment of extreme individuals."
Beyond the traditional motivations for using destructive weaponry, Joy worries that when we reach the point where human work is no longer necessary because it is being performed by intelligent machines, the masses will become superfluous. His fear is that some malcontent, or perhaps the elite who control the technology, may decide that the masses are also expendable.
If Kurzweil and Joy disagree, it is not about technology but about human nature. And, ultimately, both men may be right. Technology gives expression to both our higher selves and our darker impulses. And since we have tended to use all of the weapons we invent, there is no reason to suppose we will suddenly stop, or that machines will become bright enough to prevent us from picnicking on one another. Achieving immortality is one thing, deserving it is another.
It is tempting to dismiss Joy as a cautious disciple of Murphy's law; a pessimist given to the belief that if anything can go wrong, it will. But Joy offers a self-fulfilling reminder in support of his position. Murphy's law, he says, is actually Finagle's law, "which in itself shows that Finagle was right."
Let’s hope Joy isn’t.
Ian Pearson, Britain’s leading futurist, has a slightly more benign vision of the future. “We can already use DNA to make electronic circuits,” he says. “So it’s possible to think of a smart yogurt some time after 2020 or 2025, where the yogurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yogurt before you eat it.”
Yum. I can hardly wait.