The community of thinkers on distant-future questions stretches across disciplinary bounds, with the primary uniting trait a willingness to think about the future as a topic for objective study, rather than a space for idle speculation or science fictional reverie. They include theoretical cosmologists like Sean Carroll of the California Institute of Technology, who recently wrote a book about time, and nonacademic technology mavens like Ray Kurzweil, the precocious inventor and theorist. What binds this group together is that they are not, says Bostrom, “just trying to tell an interesting story.” Instead, they aim for precision. In its fundamentals, Carroll points out, the universe is a “relatively simple system,” compared, say, to a chaotic system like a human body — and thus “predicting the future is actually a feasible task,” even “for ridiculously long time periods.”
---
Also among the cosmologists is Rees, the speaker at the Royal Institution, who turned his attention to the end of time after a career in physics reckoning with time’s beginning. An understanding of these vast time scales, he contends, should have a large and humbling effect on our predictions about human evolution. “It’s hard to think of humans as anything like the culmination of life,” Rees says. “We should expect humans to change, just as Darwin did when he wrote that ‘no living species will preserve its unaltered likeness into a distant futurity.’ ” Most probably, according to Rees, the most important transformations of the species will be nonbiological. “Evolution in the future won’t be determined by natural selection, but by technology,” he says — both because we have gone some distance toward mastering our biological weaknesses, and because computing power has sped up to a rate where the line between human and computer blurs. (Some thinkers call the point when technology reaches this literally unthinkable level of advancement the “singularity,” a coinage by science fiction writer Vernor Vinge.)
---
Bostrom, the Oxford philosopher, puts the odds of human extinction at about 25 percent, and says that many of the greatest risks for human survival are ones that could play themselves out within the scope of current human lifetimes. “The next hundred years or so might be critical for humanity,” Bostrom says, listing as possible threats the usual apocalyptic litany of nuclear annihilation, man-made or natural viruses and bacteria, or other technological threats, such as microscopic machines, or nanobots, that run amok and kill us all.
This is quite literally the stuff of Michael Crichton novels. Thinkers about the future deal constantly with those who dismiss their speculation as science fiction. But Bostrom, who trained in neuroscience and cosmology as well as philosophy, says he’s mining the study of the future for guidance on how we should prioritize our actions today. “I’m ultimately interested in finding out what we have most reason to do now, to make the world better in some way,” he says.
---
There is, both in Bostrom’s scenarios and in Rees’s, the possibility of a long and bright future, should we manage to have any future at all. Some of the key technologies capable of going awry also have the potential to keep us alive and prospering — making humans and post-humans a more durable species. Bostrom imagines that certain advances that are currently theoretical could combine to free us from some of the more fragile aspects of our nature, such as our vulnerability to being wiped out by a simple virus, and keep the species around indefinitely. If neuropsychologists learn to manipulate the brain with precision, we could drug ourselves into conditions of not only enhanced happiness but enhanced morality as well, aiming for societies less fragile and violent, and far more durable, than the ones we enjoy now in the nuclear shadow.
And if human minds could be uploaded onto computers, for example, a smallpox plague wouldn’t be so worrisome (though maybe a computer-virus outbreak, or a spilled pot of coffee, would be). Not having a body means not being subject to time’s ravages on human flesh. “When we have friendly superintelligent machines, or space colonization, it would be easy to see how we might continue for billions of years,” Bostrom said, far beyond the moment when Rees’s post-human would sit back in his futuristic lawn chair, pop open a cold one, and watch the sun run out of fuel.
There is one surprising survival scenario of particular worry for Bostrom, however — one that involves not a physical death but a moral one. The technologies that might liberate us from the threat of extinction might also change humans not into post-humans, but into creatures who have shed their humanity altogether. Imagine, he suggests, that the hypothetical future entities (evolved biologically, or uploaded to computers and enhanced by machine intelligence) have slowly eroded their human characteristics. The mental properties and concerns of these creatures might be unrecognizable.
“What gives humans value is not their physical substance, but that we are thinking, feeling beings that have plans and relationships with others, and enjoy art, et cetera,” Bostrom says. “So there could be profound transformations that wouldn’t destroy value and might even allow the creation of greater value” by having a deeper capacity to love or to appreciate art than we present humans do. “But you could also imagine beings that were intelligent or efficient, but that don’t add value to the world, maybe because they didn’t have subjective experience.”
Bostrom ranks this possibility among the more likely ways mankind could extinguish itself. It is certainly the most insidious. And it could happen any number of ways: with a network of uploaded humans that essentially abolishes the individual, making her a barely distinguishable module in a larger intelligence. Or, in a sort of post-human Marxist dystopia, humans could find themselves dragooned into soulless ultra-efficiency, without all the wasteful acts of friendship and artistic creation that made life worth living when we were merely human.
“That would count as a catastrophe,” Bostrom notes.
Showing posts with label future of intelligence. Show all posts
May 21, 2011
Boston Globe sneaks a peek into the deep future
The Boston Globe asks: "What will happen to us?" To answer the question, writer Graeme Wood highlights the work of futurists Nick Bostrom, Sir Martin Rees, Sean Carroll and Ray Kurzweil. Highlights:
August 22, 2010
SETI on the lookout for artificial intelligence

Yay to SETI for finally figuring this out; shame on them for taking so long. Marvin Minsky has been telling them to do so since the Byurakan SETI conference in 1971.
John Elliott, a SETI research veteran based at Leeds Metropolitan University, UK, agrees. "...having now looked for signals for 50 years, SETI is going through a process of realising the way our technology is advancing is probably a good indicator of how other civilisations—if they're out there—would've progressed. Certainly what we're looking at out there is an evolutionary moving target."
Both Shostak and Elliott admit that finding and decoding any eventual message from thinking machines may prove more difficult than in the "biological" case, but the idea does provide new directions to look. Shostak believes that artificially intelligent alien life would be likely to migrate to places where both matter and energy—the only things he says would be of interest to the machines—would be in plentiful supply. That means the SETI hunt may need to focus its attentions near hot, young stars or even near the centres of galaxies.
Personally, I find that last claim to be a bit dubious. While I agree that matter and energy will be important to an advanced machine-based civilization, close proximity to the Galaxy's centre poses a new set of problems, including an increased chance of running into gamma ray bursters and black holes, not to mention the problem of heat, which would be especially acute for a supercomputing civilization.
Moreover, SETI still needs to acknowledge that the odds of finding ETIs are close to nil. Instead, Shostak and company are droning on about how we'll likely find traces in about 25 years or so. Such an acknowledgement isn't likely to happen, though; making a concession like that would likely mean they'd lose funding and have to close up shop.
So their search continues...
Source.
August 15, 2010
Scruton: The Uses of Pessimism: And the Danger of False Hope [book]

According to Scruton, institutions progress but human beings don't. And at the same time the human capacity for cruelty and violence remains infinite. Ultimately, Scruton's anti-transhumanist argument boils down to, "To be truly happy we must be pessimistic"—an adage that most transhumanists reject outright.
Book description:
Ranging widely over human history and culture, from ancient Greece to the current global economic downturn, Scruton makes a counterintuitive yet persuasive case that optimists and idealists -- with their ignorance about the truths of human nature and human society, and their naive hopes about what can be changed -- have wrought havoc for centuries. Scruton's argument is nuanced, however, and his preference for pessimism is not a dark view of human nature; rather his is a 'hopeful pessimism' which urges that instead of utopian efforts to reform human society or human nature, we focus on the only reform that we can truly master -- the improvement of ourselves through the cultivation of our better instincts. Written in Scruton's trademark style -- erudite, sweeping in scope across centuries and cultures, and unafraid to offend -- this book is sure to intrigue and provoke readers concerned with the state of Western culture, the nature of human beings, and the question of whether social progress is truly possible.
From Richard King's review:
This dose of pessimism is necessary, not because of the leftist intellectuals whom Scruton endlessly takes to task, but largely because of unchecked capitalism. That, if you like, is the snail in the bottle of this conservative philosopher's engaging treatise: it fails to acknowledge that sometimes crises result from conservative patterns of thinking, and not from those who seek to challenge them.
From Kenan Malik's review:
Scruton appears equally complacent about the contemporary impact of tradition. The liberalisation of social norms in recent decades undermines tradition and defies human nature, he argues. So why, he asks, should the onus be on conservatives to defend the importance of traditional forms of marriage against "innovations" such as gay partnerships?
The answer is the one that would have been given to those who argued against miscegenation or giving women the vote. The unequal treatment of gay people is a moral wrong and no amount of tradition can make it right. It is up to Scruton to defend discrimination, not liberals to have to justify treating all equally.
Scruton insists that he is averse to optimism only in its "unscrupulous" form. The trouble is, what makes an optimist unscrupulous is, in his eyes, a belief in the possibility of "goal-directed politics". He dismisses as a "fallacy" the "belief that we can advance collectively to our goals by adopting a common plan, and by working towards it". Progressive changes, however, rarely happen by chance. History is a narrative of humans rationally and consciously transforming the world. To give up on "goal-directed politics" is to give up possibilities of betterment.
More from Roger "The Gloom Merchant" Scruton:
Such fallacies have led to disastrous results on account of the false hopes that are built on them. Many of these false hopes have fizzled out. But there is truth in the view that hope springs eternal in the human breast, and false hope is no exception. In the world that we are now entering there is a striking new source of false hope, in the “trans-humanism” of people like Ray Kurzweil, Max More and their followers. The transhumanists believe that we will replace ourselves with immortal cyborgs, who will emerge from the discarded shell of humanity like the blessed souls from the grave in some medieval Last Judgement.
The transhumanists don’t worry about Huxley’s Brave New World: they don’t believe that the old-fashioned virtues and emotions lamented by Huxley have much of a future in any case. The important thing, they tell us, is the promise of increasing power, increasing scope, increasing ability to vanquish the long-term enemies of mankind, such as disease, ageing, incapacity and death.
But to whom are they addressing their argument? If it is addressed to you and me, why should we consider it? Why should we be working for a future in which creatures like us won’t exist, and in which human happiness as we know it will no longer be obtainable? And are those things that spilled from Pandora’s box really our enemies – greater enemies, that is, than the false hope that wars with them? We rational beings depend for our fulfilment upon love and friendship. Our happiness is of a piece with our freedom, and cannot be separated from the constraints that make freedom possible – real, concrete freedom, as opposed to the abstract freedom of the utopians. Everything deep in us depends upon our mortal condition, and while we can solve our problems and live in peace with our neighbours we can do so only through compromise and sacrifice. We are not, and cannot be, the kind of posthuman cyborgs that rejoice in eternal life, if life it is. We are led by love, friendship and desire; by tenderness for young life and reverence for old. We live, or ought to live, by the rule of forgiveness, in a world where hurts are acknowledged and faults confessed to. All our reasoning is predicated upon those basic conditions, and one of the most important uses of pessimism is to warn us against destroying them. The soul-less optimism of the transhumanists reminds us that we should be gloomy, since our happiness depends on it.
Link.