As I See It: Celebrating Ignorance
September 26, 2011 Victor Rozek
At one time, bloody bandages were hung on poles to dry after patients were bled by physicians. It was the origin of the now innocuous red-striped barber pole. Of course, the association of barber poles with primitive medical practices has long been lost because bleeding is no longer an unchallenged medical practice. But that’s the way of certainty: it crumbles like a stale cookie.
Certitude is a beast with a short life span. What people believe, what they live for, die for, and kill for, changes as surely as the seasons. Truth is mutable and yesterday’s facts become today’s folly. One moment the world is flat, and the next its edges round into a sphere. Primitive men worship animal spirits, then morph them into gods, which are then condensed into a single deity. Whether it’s the Roman Empire or the 1,000-year Reich, sure things come and go and future generations shake their collective heads and marvel at the ignorance and barbarism of their predecessors.
Stubbornly clinging to a single certainty has proven to be empirically risky. Francis Crick (who, along with James Watson, won a Nobel Prize for deciphering the structure of DNA) once advised a promising young scientist to beware of those who beat a single drum. “The dangerous man is the one who has only one idea,” he said, “because then he’ll fight and die for it. The way real science goes is that you come up with lots of ideas, and most of them will be wrong.”
The advice resonated with the young researcher who, by temperament, valued creativity, testing, and tolerance more than yesterday’s certainty. His own beliefs crystallized from annoyance with the age-old argument over the existence of God. He rejected both atheism and theism, reasoning that we don’t know nearly enough to dismiss the existence of God outright, but we know far too much to subscribe to any particular religious story. He reveled in the mystery, content to write a chapter in an unfinished story with multiple potential endings. He refused to be pigeonholed, describing himself as a “Possibilian.”
His name is David Eagleman and he is an assistant professor of neuroscience at the Baylor College of Medicine. No sooner had he identified himself as a Possibilian during a 2009 National Public Radio interview than a surprising number of people responded that they were Possibilians, too. Overnight, Possibilianism became the new, hip Internet-dispersed philosophy. (Although strictly speaking, it’s not really a philosophy, more like an intellectual position or a fast-food version of philosophy.) Nevertheless, it found adherents around the globe. Among others, Kevin Kelly, founding editor of Wired magazine, declared himself a Possibilian. And, by April of this year, nearly 1,000 Facebook members announced they were switching their religious affiliation to Possibilianism, which itself is mind-bending: Having been sure the day before, they now declared themselves to be sure that they weren’t sure about anything.
Asked what he hoped to achieve, Eagleman answered: “With Possibilianism, I’m hoping to define a new position–one that emphasizes the exploration of new, unconsidered possibilities. Possibilianism is comfortable holding multiple ideas in mind; it is not interested in committing to any particular story.”
And the “multiple ideas” that crowd Eagleman’s mind are both fascinating and subversive. Since human origin is, as yet, scientifically unproven, why not, suggested Eagleman, imagine ourselves as bits of networked hardware in a cosmic program, or as cells of some vast celestial organism. He even wrote a book called Sum in which he invents 40 stories describing alternative afterlives.
A synonym for Possibilianism might be curiosity, whose path is often risky and leads to unexpected destinations. Needless to say, traditionalists are dismissive of Possibilianism. But curiosity is as old as reason itself and forms the cornerstone of the Enlightenment. As Voltaire famously put it: “Uncertainty is an uncomfortable position. But certainty is an absurd one.”
In the evolutionary sense, it’s been but a blink of an eye since plagues were considered punishment from God, slavery was a common practice, and women were chattel. It is compelling to look ahead five centuries and consider which of our current beliefs will collapse under the dead weight of reality. And which of our practices will be viewed as ignorant or barbaric. Will we discover how to genetically alter our propensity for violence, making the savagery of war inconceivable? Will the cut/burn/poison method of treating cancer seem as primitive as leeching? And what will we believe about computer technology?
There is, of course, a broad spectrum of possible answers to that question. But, depending on whether computers help save the world, or are used to destroy it, one of two scenarios could frame future beliefs about technology.
It’s highly possible that 500 years from now, technology will be so advanced and so integrated into human existence that living without it would be unimaginable if not impossible. Like the Borg, our bodies may be collections of spare parts, our consciousness connected to a universal grid. Technology will perform the functions of the labor force, manage the weather and other environmental hazards, and facilitate interplanetary travel. Perhaps Kurzweil’s wet dream of singularity will be realized and humans will fully merge with machines so as to become indistinguishable. Under those circumstances, technology would become our new Mother: much like the original version, indispensable, and largely ignored.
The other scenario follows the thinking of Cecile Andrews as described in her book The Circle of Simplicity. People looking for substance discover that virtual reality is a menu and not a meal. Rather than offering enrichment, technology makes it easy to live artificial, inauthentic lives. And for all their usefulness, computers dramatically amplify the world’s ills. Modern wars cannot be fought without computers; corporations could not easily move jobs overseas without computers; governments and marketeers could not invade our privacy and amass our personal information without computers; money could not be moved at light speed around the globe without computers. Perhaps after the next global conflict, or a large-scale environmental collapse, or some cataclysmic biological mishap, people will conclude that technology is simply too dangerous and will seek a return to simpler living. Technology will be taboo.
For Andrews, the questions are: Will technology bring about more justice and equity; will it preserve and sustain the planet and its people? To which Eagleman would reply that he doesn’t have the answer: “I’m just celebrating the vastness of our ignorance.”
Well, if Eagleman wants to celebrate ignorance, he’ll never run out of reasons to party. “Possibilianism,” he says, “is simply an appeal for intellectual humility.” Given the power of our technology and the function hubris plays in advancing human misery, his appeal sounds like a cautionary tale.
Personally, I like the idea of Possibilianism. There is so much I don’t know that if I couldn’t make my peace with ignorance, I’d be perpetually miserable. One thing I’ve come to accept with some certainty, however, is that the great universal questions are not only bigger than we think, they’re bigger than we can think. Unlike most philosophers, who seek to explain the world, Eagleman would recommend withholding judgment pending further research.
But perhaps my favorite feature of Possibilianism is that it can’t be misspelled. (Believe me, for me that’s a plus.) Eagleman understands that he made the word up, and he is not attached to any single rendering. Unlike established philosophies, his embraces vagaries in spelling, which should make him the patron saint of bad spellers everywhere.