The community of thinkers on distant-future questions stretches across disciplinary bounds, united chiefly by a willingness to treat the future as a topic for objective study, rather than a space for idle speculation or science-fictional reverie. They include theoretical cosmologists like Sean Carroll of the California Institute of Technology, who recently wrote a book about time, and nonacademic technology mavens like Ray Kurzweil, the precocious inventor and theorist. What binds this group together is that they are not, says Bostrom, “just trying to tell an interesting story.” Instead, they aim for precision. In its fundamentals, Carroll points out, the universe is a “relatively simple system,” compared, say, to a chaotic system like a human body, and thus “predicting the future is actually a feasible task,” even “for ridiculously long time periods.”
Also among the cosmologists is Rees, the speaker at the Royal Institution, who turned his attention to the end of time after a career in physics reckoning with time’s beginning. An understanding of these vast time scales, he contends, should have a large and humbling effect on our predictions about human evolution. “It’s hard to think of humans as anything like the culmination of life,” Rees says. “We should expect humans to change, just as Darwin did when he wrote that ‘no living species will preserve its unaltered likeness into a distant futurity.’ ” Most probably, according to Rees, the most important transformations of the species will be nonbiological. “Evolution in the future won’t be determined by natural selection, but by technology,” he says — both because we have gone some distance toward mastering our biological weaknesses, and because computing power has sped up to a rate where the line between human and computer blurs. (Some thinkers call the point when technology reaches this literally unthinkable level of advancement the “singularity,” a coinage by science fiction writer Vernor Vinge.)
Bostrom, the Oxford philosopher, puts the odds of human extinction at about 25 percent, and says that many of the greatest risks to human survival are ones that could play out within the scope of current human lifetimes. “The next hundred years or so might be critical for humanity,” Bostrom says, listing as possible threats the usual apocalyptic litany of nuclear annihilation, man-made or natural viruses and bacteria, or other technological threats, such as microscopic machines, or nanobots, that run amok and kill us all.
This is quite literally the stuff of Michael Crichton novels. Thinkers about the future deal constantly with those who dismiss their speculation as science fiction. But Bostrom, who trained in neuroscience and cosmology as well as philosophy, says he’s mining the study of the future for guidance on how we should prioritize our actions today. “I’m ultimately interested in finding out what we have most reason to do now, to make the world better in some way,” he says.
There is, both in Bostrom’s scenarios and in Rees’s, the possibility of a long and bright future, should we manage to have any future at all. Some of the key technologies capable of going awry also have the potential to keep us alive and prospering, making humans and post-humans a more durable species. Bostrom imagines that certain advances that are currently theoretical could combine to free us from some of the more fragile aspects of our nature, such as our vulnerability to being wiped out by a simple virus, and keep the species around indefinitely. If neuropsychologists learn to manipulate the brain with precision, we could drug ourselves into conditions of not only enhanced happiness but enhanced morality as well, aiming for societies less fragile and less violent, and far more durable than the one we enjoy now, in the nuclear shadow.
And if human minds could be uploaded onto computers, for example, a smallpox plague wouldn’t be so worrisome (though maybe a computer-virus outbreak, or a spilled pot of coffee, would be). Not having a body means not being subject to time’s ravages on human flesh. “When we have friendly superintelligent machines, or space colonization, it would be easy to see how we might continue for billions of years,” Bostrom said, far beyond the moment when Rees’s post-human would sit back in his futuristic lawn chair, pop open a cold one, and watch the sun run out of fuel.
There is one surprising survival scenario of particular worry for Bostrom, however — one that involves not a physical death but a moral one. The technologies that might liberate us from the threat of extinction might also change humans not into post-humans, but into creatures who have shed their humanity altogether. Imagine, he suggests, that the hypothetical future entities (evolved biologically, or uploaded to computers and enhanced by machine intelligence) have slowly eroded their human characteristics. The mental properties and concerns of these creatures might be unrecognizable.
“What gives humans value is not their physical substance, but that we are thinking, feeling beings that have plans and relationships with others, and enjoy art, et cetera,” Bostrom says. “So there could be profound transformations that wouldn’t destroy value and might allow the creation of even greater value” by having a deeper capacity to love or to appreciate art than we present humans do. “But you could also imagine beings that were intelligent or efficient, but that don’t add value to the world, maybe because they didn’t have subjective experience.”
Bostrom ranks this possibility among the more likely ways mankind could extinguish itself. It is certainly the most insidious. And it could happen in any number of ways: with a network of uploaded humans that essentially abolishes the individual, making her a barely distinguishable module in a larger intelligence. Or, in a sort of post-human Marxist dystopia, humans could find themselves dragooned into soulless ultra-efficiency, without all the wasteful acts of friendship and artistic creation that made life worth living when we were merely human.
“That would count as a catastrophe,” Bostrom notes.