Showing posts with label nick bostrom. Show all posts

May 21, 2011

Boston Globe sneaks a peek into the deep future

The Boston Globe asks: "What will happen to us?" To answer the question, writer Graeme Wood highlights the work of futurists Nick Bostrom, Sir Martin Rees, Sean Carroll and Ray Kurzweil. Highlights:
The community of thinkers on distant-future questions stretches across disciplinary bounds, with the primary uniting trait a willingness to think about the future as a topic for objective study, rather than a space for idle speculation or science fictional reverie. They include theoretical cosmologists like Sean Carroll of the California Institute of Technology, who recently wrote a book about time, and nonacademic technology mavens like Ray Kurzweil, the precocious inventor and theorist. What binds this group together is that they are not, says Bostrom, “just trying to tell an interesting story.” Instead, they aim for precision. In its fundamentals, Carroll points out, the universe is a “relatively simple system,” compared, say, to a chaotic system like a human body — and thus “predicting the future is actually a feasible task,” even “for ridiculously long time periods.”
---
Also among the cosmologists is Rees, the speaker at the Royal Institution, who turned his attention to the end of time after a career in physics reckoning with time’s beginning. An understanding of these vast time scales, he contends, should have a large and humbling effect on our predictions about human evolution. “It’s hard to think of humans as anything like the culmination of life,” Rees says. “We should expect humans to change, just as Darwin did when he wrote that ‘no living species will preserve its unaltered likeness into a distant futurity.’ ” Most probably, according to Rees, the most important transformations of the species will be nonbiological. “Evolution in the future won’t be determined by natural selection, but by technology,” he says — both because we have gone some distance toward mastering our biological weaknesses, and because computing power has sped up to a rate where the line between human and computer blurs. (Some thinkers call the point when technology reaches this literally unthinkable level of advancement the “singularity,” a coinage by science fiction writer Vernor Vinge.)
---
Bostrom, the Oxford philosopher, puts the odds [of human extinction] at about 25 percent, and says that many of the greatest risks for human survival are ones that could play themselves out within the scope of current human lifetimes. “The next hundred years or so might be critical for humanity,” Bostrom says, listing as possible threats the usual apocalyptic litany of nuclear annihilation, man-made or natural viruses and bacteria, or other technological threats, such as microscopic machines, or nanobots, that run amok and kill us all.

This is quite literally the stuff of Michael Crichton novels. Thinkers about the future deal constantly with those who dismiss their speculation as science fiction. But Bostrom, who trained in neuroscience and cosmology as well as philosophy, says he’s mining the study of the future for guidance on how we should prioritize our actions today. “I’m ultimately interested in finding out what we have most reason to do now, to make the world better in some way,” he says.
---
There is, both in Bostrom’s scenarios and in Rees’s, the possibility of a long and bright future, should we manage to have any future at all. Some of the key technologies capable of going awry also have the potential to keep us alive and prospering — making humans and post-humans a more durable species. Bostrom imagines that certain advances that are currently theoretical could combine to free us from some of the more fragile aspects of our nature, such as our susceptibility to being wiped out by a simple virus, and keep the species around indefinitely. If neuropsychologists learn to manipulate the brain with precision, we could drug ourselves into conditions of not only enhanced happiness but enhanced morality as well, aiming for less fragile or violent societies far more durable than we enjoy now, in the nuclear shadow.

And if human minds could be uploaded onto computers, for example, a smallpox plague wouldn’t be so worrisome (though maybe a computer-virus outbreak, or a spilled pot of coffee, would be). Not having a body means not being subject to time’s ravages on human flesh. “When we have friendly superintelligent machines, or space colonization, it would be easy to see how we might continue for billions of years,” Bostrom said, far beyond the moment when Rees’s post-human would sit back in his futuristic lawn chair, pop open a cold one, and watch the sun run out of fuel.

There is one surprising survival scenario of particular worry for Bostrom, however — one that involves not a physical death but a moral one. The technologies that might liberate us from the threat of extinction might also change humans not into post-humans, but into creatures who have shed their humanity altogether. Imagine, he suggests, that the hypothetical future entities (evolved biologically, or uploaded to computers and enhanced by machine intelligence) have slowly eroded their human characteristics. The mental properties and concerns of these creatures might be unrecognizable.

“What gives humans value is not their physical substance, but that we are thinking, feeling beings that have plans and relationships with others, and enjoy art, et cetera,” Bostrom says. “So there could be profound transformations that wouldn’t destroy value and might allow the creation of even greater value” by having a deeper capacity to love or to appreciate art than we present humans do. “But you could also imagine beings that were intelligent or efficient, but that don’t add value to the world, maybe because they didn’t have subjective experience.”

Bostrom ranks this possibility among the more likely ways mankind could extinguish itself. It is certainly the most insidious. And it could happen any number of ways: with a network of uploaded humans that essentially abolishes the individual, making her a barely distinguishable module in a larger intelligence. Or, in a sort of post-human Marxist dystopia, humans could find themselves dragooned into soulless ultra-efficiency, without all the wasteful acts of friendship and artistic creation that made life worth living when we were merely human.

“That would count as a catastrophe,” Bostrom notes.

August 2, 2010

HuffPo: Sims, Suffering and God: Matrix Theology and the Problem of Evil

Check out Clay Farris Naff's latest article, Sims, Suffering and God: Matrix Theology and the Problem of Evil:
And that brings us back to the Sims. How can we know whether we're simulations in some superduper computer built by posthumans? Some pretty amusing objections have been raised, such as quantum tests that a simulation would fail. It seems safe to say that any sim-scientists examining the sim-universe they occupy would find that the laws of that universe are self-consistent. To assert that a future computer could simulate us, complete with consciousness, but crash when it came to testing Bell's Inequality strikes me as ludicrous. Unless, of course, the program were released by Microsoft. Oooh, sorry, Bill, cheap shot. Let's take it for granted that we could not expose a simulation from within -- unless the Creators wanted us to.

But the problem of pointless suffering leads me to a very different conclusion. Recall Bostrom's first conjecture: that few or none of our civilizations reach a posthuman stage capable of building computers that can run the kind of simulation in which we might exist. There are many ways civilization could end (just ask the dinosaurs!), but the one absolutely necessary condition for survival in an environment of continually increasing technological prowess is peace. Not a mushy, bumper sticker kind of peace, but the robust containment of conflict and competition within cooperative frameworks. (Robert Wright, in his brilliant if uneven book NonZero: The Logic of Human Destiny, unfolds this idea beautifully.)

What is civilization if not a mutual agreement to sacrifice some individual desires (to not pay taxes, for example, or to run through red lights) for the greater common good? Communication, trust, and cooperation make such agreements possible, but the one ingredient in the human psyche that propels civilization forward even as we gain technological power is empathy.
Link.

June 22, 2010

Nick Bostrom on the Fermi Paradox [video]


IEET Chair Nick Bostrom discusses the Great Silence with Robert Lawrence Kuhn on Closer to the Truth. Nick and I are totally on the same wavelength here, including our shared view that the discovery of life in the solar system would be bad news.

April 10, 2009

Welcome to the Machine, Part 3: The Simulation Argument

Previously in series: The Ethics of Simulated Beings and Descartes's Malicious Demon.

No longer relegated to the domain of science fiction or the ravings of street corner lunatics, the "simulation argument" has increasingly become a serious theory amongst academics, one that has been best articulated by philosopher Nick Bostrom.

In his seminal paper "Are You Living in a Computer Simulation?" Bostrom applies the assumption of substrate-independence, the idea that mental states can reside on multiple types of physical substrates, including the digital realm. He speculates that a computer running a suitable program could in fact be conscious. He also argues that future civilizations will very likely be able to pull off this trick and that many of the technologies required to do so have already been shown to be compatible with known physical laws and engineering constraints.

Harnessing computational power

Similar to futurists Ray Kurzweil and Vernor Vinge, Bostrom believes that enormous amounts of computing power will be available in the future. Moore's Law, which describes an eerily regular exponential increase in processing power, is showing no signs of waning, nor is it obvious that it ever will.

To build these kinds of simulations, a posthuman civilization would have to embark upon computational megaprojects. As Bostrom notes, determining an upper bound for computational power is difficult, but a number of thinkers have given it a shot. Eric Drexler has outlined a design for a system the size of a sugar cube that would perform 10^21 instructions per second. Robert Bradbury gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits – this would likely require a quantum computer, or computers built out of nuclear matter or plasma.
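To put these bounds in perspective, here's a rough back-of-the-envelope comparison. The per-machine figures are the ones quoted above; the 10^36 figure for one ancestor-simulation is drawn from Bostrom's own rough estimate (he puts simulating all of human mental history at around 10^33–10^36 operations), and everything here should be read as order-of-magnitude only:

```python
# Back-of-the-envelope comparison of the computational bounds quoted above.
# All figures are order-of-magnitude estimates, not precise values.

OPS_FOR_ANCESTOR_SIM = 1e36   # rough high-end estimate for simulating
                              # all of human mental history (10^33 - 10^36 ops)

machines = {
    "Drexler sugar-cube nanocomputer": 1e21,   # instructions per second
    "Bradbury planet-mass computer":   1e42,   # operations per second
    "Lloyd 1 kg ultimate laptop":      5e50,   # logical operations per second
}

for name, ops_per_sec in machines.items():
    seconds = OPS_FOR_ANCESTOR_SIM / ops_per_sec
    print(f"{name}: ~{seconds:.0e} seconds per ancestor-simulation")
```

On these assumptions, a single sugar-cube device would grind away for on the order of 10^15 seconds (tens of millions of years), while a planet-mass computer would finish in about a microsecond -- which is exactly why Bostrom frames this as a question of computational megaprojects.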

More radically, John Barrow has demonstrated that, under a very strict set of cosmological conditions, indefinite information processing (pdf) can exist in an ever-expanding universe.

At any rate, this extreme level of computational power defies human comprehension. It’s like imagining a universe within a universe -- and that's precisely how it may be used.

Worlds within worlds

"Let us suppose for a moment that these predictions are correct," writes Bostrom. "One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears." And because their computers would be so powerful, notes Bostrom, they could run many such simulations.

This observation, that there could be many simulations, led Bostrom to a fascinating conclusion. It's conceivable, he argues, that the vast majority of minds like ours do not belong to the original species but rather to people simulated by the advanced descendants of the original species. If this were the case, "we would be rational to think that we are likely among the simulated minds rather than among the original biological ones."

Moreover, there is also the possibility that simulated civilizations may become posthuman themselves. Bostrom writes,
They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration...we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.
Given this matrioshkan possibility, "real" minds across all existence should be vastly outnumbered by simulated ones. The suggestion that we're not living in a simulation must therefore address the apparent gross improbabilities in question.

Again, all this presupposes, of course, that civilizations are capable of surviving to the point where it's possible to run simulations of forebears and that our descendants desire to do so. But as noted above, there doesn't seem to be any reason to preclude such a technological feat.

Next: Kurzweil's nano neural nets.

April 27, 2008

Nick Bostrom: "Why I hope the search for extraterrestrial life finds nothing."

Transhumanist philosopher Nick Bostrom desperately hopes that we never find signs of extraterrestrial life -- advanced or otherwise.

Why?

Because he understands the Fermi Paradox.

Or more accurately, he understands the implications of the Fermi Paradox and The Great Silence.

Because the Galaxy appears uncolonized and unperturbed by intelligent life, and because there has been ample time and motive for this to happen, we have to conclude that some kind of filter is in place that prevents life from arriving at this advanced phase.

In his recent article for Technology Review, Bostrom writes:
...the evolutionary path to life-forms capable of space colonization leads through a "Great Filter," which can be thought of as a probability barrier...The filter consists of one or more evolutionary transitions or steps that must be traversed at great odds in order for an Earth-like planet to produce a civilization capable of exploring distant solar systems. You start with billions and billions of potential germination points for life, and you end up with a sum total of zero extraterrestrial civilizations that we can observe. The Great Filter must therefore be sufficiently powerful--which is to say, passing the critical points must be sufficiently improbable--that even with many billions of rolls of the dice, one ends up with nothing: no aliens, no spacecraft, no signals. At least, none that we can detect in our neck of the woods.

Now, just where might this Great Filter be located? There are two possibilities: It might be behind us, somewhere in our distant past. Or it might be ahead of us, somewhere in the decades, centuries, or millennia to come.
We are hoping that the filter resides in our past, that we have already overcome highly improbable odds.
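The "probability barrier" can be made concrete with one line of arithmetic: given N germination points and a per-planet probability p of passing every step of the filter, the expected number of observable civilizations is N * p. A silent sky with billions of starting points forces p down toward 1/N. The specific numbers below are illustrative assumptions of mine, not Bostrom's:

```python
# Illustrative Great Filter arithmetic (all numbers assumed, not sourced).
candidate_planets = 1e11   # assumed germination points for life in the galaxy
filter_pass_prob = 1e-12   # assumed chance a planet passes every filter step

# Expected number of observable, space-colonizing civilizations:
expected = candidate_planets * filter_pass_prob
print(expected)            # well below 1: consistent with the Great Silence

# A filter even modestly weaker would predict a crowded sky:
print(candidate_planets * 1e-6)   # on the order of 100,000 civilizations
```

The asymmetry is stark: to reconcile "billions and billions" of starting points with zero observations, the combined odds of passing the filter must be crushingly small -- the only open question is where along the path that improbability sits.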

More disturbingly, however, it's likely that the Great Filter still awaits us in the future: some kind of technologically instigated event out there -- one that no species can avoid.

Again, Bostrom writes:
Throughout history, great civilizations on Earth have imploded--the Roman Empire, the Mayan civilization that once flourished in Central America, and many others. However, the kind of societal collapse that merely delays the eventual emergence of a space-colonizing civilization by a few hundred or a few thousand years would not explain why no such civilization has visited us from another planet. A thousand years may seem a long time to an individual, but in this context it's a sneeze. There are probably planets that are billions of years older than Earth. Any intelligent species on those planets would have had ample time to recover from repeated social or ecological collapses. Even if they failed a thousand times before they succeeded, they still could have arrived here hundreds of millions of years ago.

The Great Filter, then, would have to be something more dramatic than run-of-the-mill societal collapse: it would have to be a terminal global cataclysm, an existential catastrophe. An existential risk is one that threatens to annihilate intelligent life or permanently and drastically curtail its potential for future development. In our own case, we can identify a number of potential existential risks: a nuclear war fought with arms stockpiles much larger than today's (perhaps resulting from future arms races); a genetically engineered superbug; environmental disaster; an asteroid impact; wars or terrorist acts committed with powerful future weapons; superintelligent general artificial intelligence with destructive goals; or high-energy physics experiments. These are just some of the existential risks that have been discussed in the literature, and considering that many of these have been proposed only in recent decades, it is plausible to assume that there are further existential risks we have not yet thought of.
Bostrom, who is the director of the Future of Humanity Institute at the University of Oxford, concludes his article by making a case for increased foresight and vigorous inquiry into potential risks.

But even so, Bostrom asks, what makes us think we'd be immune to such a powerful filter?

Which is why, when he looks up at the stars, he is thankful that we have yet to see any signs of extraterrestrial life.

Read the entire article, "Where are They?"

October 11, 2007

New Scientist video featuring de Grey, Bostrom and Sandberg


Check out this New Scientist video featuring Anders Sandberg, Nick Bostrom and Aubrey de Grey. Topics discussed include transhumanism, whole brain emulation and radical life extension.

August 14, 2007

The dark side of the Simulation Argument

Kudos goes out to Nick Bostrom for having his Simulation Argument (SA) featured in the New York Times today. The SA essentially states that, given the potential for posthumans to create a vast number of ancestor simulations, we should probabilistically conclude that we are in a simulation rather than the deepest reality.

Most people give a little chuckle when they hear this argument for the first time. I've explained it to enough people now that I've come to expect it. The chuckle doesn't come about on account of the absurdity of the suggestion; it's more a chuckle of logical acknowledgment -- a reaction to the realization that it may actually be true.

But this is no laughing matter; there are disturbing implications to the SA. We appear to be damned if we're in a simulation, and damned if we're not.

Dammit, we're in a simulation!

If we were ever to prove that we exist inside a simulation, it would be proof that the transhumanist assumption is correct -- that the transition from a human to a posthuman condition is in fact possible. But that will be of little solace to us measly sims! The simulation -- er, our world -- could be shut down at any time. Or, the variables that make up our modal reality could be altered in undesirable ways (e.g. our world could be turned into a Hell realm).

Also, should we reside in a simulation, we have to pretty much assume that our digital benefactors are rather indifferent to our plight. Based on the amount of suffering going on around here, we should probably assume a gnostic religious sensibility. These gods are not our allies; they may have created us, but they are not looking out for our best interests.

Dammit, we're not in a simulation!

Now, on the other side of the virtual coin, should we ever prove that we are not in a simulation, that would also be bad. It would be potential evidence that the transition to a posthuman condition may not be possible.

This problem is similar to the Fermi Paradox and the possible resolution that we are the first intelligent civilization to emerge in the Galaxy -- a hard pill to swallow given the extreme odds against it.

Similarly, we should be disturbed that we are not in a simulation because it may imply that we don't have a very bright future -- that civilizations destroy themselves before developing the capacity to create simulations. Otherwise, we have to take on a exceptionally optimistic frame and assume that we'll survive the Singularity and be that special first civilization that spawns simulations. Again, a probabilistically unsatisfactory proposition.

Of course, advanced civilizations may not create simulations on this scale. The Fermi Paradox offers yet another example as to why this is a problematic suggestion. Given the technological potential to colonize the Galaxy, why haven't advanced civilizations done so? Similarly, why wouldn't advanced civilizations create simulations given the technological capacity to do so?

The NYT article goes over a number of these issues and Bostrom provides some possible solutions. Ultimately, however, the answers are unsatisfactory.

The Simulation Argument solves the Fermi Paradox! Maybe...

Perhaps the answer to the Fermi Paradox is that we are in a simulation. It would certainly explain the Great Silence. Why bother simulating extraterrestrials? Maybe that's the point of the simulation -- to study how a civilization advances without any outside intervention.

Or maybe the Fermi Paradox exists because all civilizations are busy working on their simulations....

Or perhaps.....ah, forget it. My brain (which is probably sitting in a vat somewhere) hurts.