April 27, 2009

The Abolitionist Project: Using biotechnology to abolish suffering in all sentient life

David Pearce is guest blogging this week

First, many thanks to George for inviting me to blog on Sentient Developments. I asked George what I should blog about. He suggested I might start with The Hedonistic Imperative. This topic might be more interesting to readers of Sentient Developments if I respond to critical questions or blog on themes readers feel I've unjustly neglected. If so, please let me know.

Briefly, some background. In 1995 I wrote an online manifesto advocating the use of biotechnology to abolish suffering in all sentient life. The Hedonistic Imperative predicts that the world's last unpleasant experience will be a precisely dateable event in the next thousand years or so - probably a "minor" pain in some obscure marine invertebrate. More speculatively, HI predicts that our descendants will be animated by genetically preprogrammed gradients of intelligent bliss - modes of well-being orders of magnitude richer than today's peak experiences.

I write from the perspective of what is uninspiringly known as negative utilitarianism i.e. I'd argue that we have an overriding moral responsibility to abolish suffering. If my background had been a bit different, I'd probably just call myself a scientifically-minded Buddhist. True, Gautama Buddha didn't speak about biotechnology; but to Buddhists (and Jains) talk of engineering the well-being of all sentient life is less likely to invite an incredulous stare than it does in the West.

I should also add that credit for the first published scientifically literate blueprint for a world without suffering belongs IMO to Lewis Mancini. See "Riley-Day Syndrome, Brain Stimulation and the Genetic Engineering of a World Without Pain", Medical Hypotheses (1990) 31, 201-207. As far as I can tell, Mancini's original paper sank with barely a trace. However, it is now online where it belongs; I've uploaded the text here: http://www.wireheading.com/painless.html.
[I confess my jaw dropped a couple of years ago when I stumbled across it.]

HI was originally written for an audience of analytic philosophers. The Abolitionist Project (2007) http://www.abolitionist.com/ and Superhappiness (2008) http://www.superhappiness.com/ are (I hope) more readable and up-to-date. I won't now go into the technical reasons for believing we can use biotech, robotics and nanotechnology to eradicate the molecular substrates of suffering and malaise from the biosphere. Given the exponential growth of computing power and biotechnology, the abolitionist project could in theory be completed in two or three centuries or less. This timescale is unlikely for sociological reasons. So why should anyone think it's ever going to happen? All sorts of stuff is technically feasible in principle; but a lot of so-called futurology is just a mixture of disguised autobiography and wish-fulfillment fantasy. Is this any different?

Quite possibly not; but here are two reasons for guarded optimism.

Futurists spend a lot of time discussing the possibility of posthuman superintelligence. Whatever else superintelligence may be, we implicitly assume that it must be at least weakly related to what IQ tests measure - just completely off the scale. However, IQ tests ignore one important and extraordinarily cognitively demanding skill that non-autistic humans possess. At least part of what drove the evolution of our uniquely human intelligence was our superior "mind-reading" skills and enhanced capacity for empathetic understanding of other intentional systems. This capacity is biased, selective, and deeply flawed; but I'd argue its extension and enrichment are going to play a critical role in the development of intelligent life in the universe. By contrast, conventional IQ tests are "mind-blind"; they simply ignore social cognition. I'd argue that our posthuman descendants will have a vastly richer capacity to understand the perspective of "what it is like to be" other sentient beings; and this recursively self-improving empathetic capacity will be a vital ingredient of mature superintelligence and posthuman ethics. Of course "super-empathy" doesn't by itself guarantee a utopian outcome. And I'm personally sceptical that digital computers with a classical von Neumann architecture will ever be sentient, let alone superintelligent. But a future (hypothetical) superhuman capacity for empathetic understanding does, I think, make a universal compassion for all sentient beings more likely.

Viewing the way we currently treat other sentient beings as a cognitive and not just a moral limitation is of course controversial. So secondly, let's fall back on a more cynical and conservative assumption. Assume, pessimistically, that what Bentham says of humans will be true of posthumans too: "Dream not that men will move their little finger to serve you, unless their advantage in so doing be obvious to them. Men never did so, and never will, while human nature is made of its present materials." Does this bleak analysis of (post)human nature rule out a world that supports the well-being of all sentience?

No, I don't think so. If this bleak analysis is broadly correct, what it does mean is that morally serious actors today should strive to develop advanced technology that makes the expression of (weak) benevolence towards other sentient beings trivially easy - so easy that its expression involves less effort on the part of the morally apathetic than raising one's little finger. For example, whereas one way to combat the cruelty of factory farming is to use moral arguments to promote its abolition - as, in their very different ways, do PETA and Peter Singer - the other, complementary strategy is to promote technologies that will allow "us" all to lead a cruelty-free lifestyle at no personal cost. See, for example, the nonprofit research organization New Harvest, which advances meat substitutes: http://www.new-harvest.org/.

Thirty years hence, if meat-eaters are presented with two equally tasty products - one "natural", from an intensively reared factory-farmed animal butchered for its flesh as now, the other labelled "cruelty-free", in the form of attractively branded vatfood - how many consumers are deliberately going to choose the cruel option if it doesn't taste better? I'm aware that this kind of optimism can sound naive. Yes, we can all be selfish; but I think relatively few people are malicious, and still fewer people are consistently malicious. So long as the slightest personal inconvenience to members of the master species can be avoided, I think we can extend the parallel of developing cruelty-free cultured meat to the eradication of suffering throughout the living world: ecosystem redesign, depot-contraception, rewriting the vertebrate genome, the lot. With sufficiently advanced technology, the creation of a living world without cruelty needn't be effortful or burdensome to the morally indifferent. Technology can make what is today impossibly difficult soon merely challenging, then relatively easy, and eventually trivial. And of course a lot of people do aspire to be more than merely weakly benevolent. Maybe we're "really" just signalling to potential mates our desirability as nurturing fathers [or whatever story evolutionary psychology tells us explains our altruistic desires]. But what matters is not our motivation or its ultimate cause, but the outcome.

A cruelty-free world is one thing; but many of us feel ambivalent about extreme happiness, let alone lifelong superhappiness of the kind promised by utopian neurobiology. One reason we may feel ambivalent is that we contemplate, for instance, the selfishness and drug-addled wits of the heroin addict; or the crazed lever-pressing of the rodent wirehead; or the impaired judgement of the euphorically manic. Intellectuals especially may be resistant to the prospect of superhappiness, fearing that their intellectual acuity would be compromised. Beyond a certain point, must there be some kind of tradeoff between hedonic tone and intellectual performance?

Not necessarily. Here is just one way in which reprogramming our reward circuitry could actually serve as a tool for intelligence-amplification and cognitive enhancement. Recall Edison's much-quoted dictum: “Genius is one percent inspiration and ninety-nine percent perspiration.” The relative percentages are disputable; but the contribution of sheer hard work and intellectual focus to productivity isn't in doubt. Now if you're a student, an academic or an intellectual, imagine if you could selectively amplify the subjective reward you derive from all and only the cerebral activities that you think you ought to enjoy doing most; and conversely, imagine if you could diminish or switch off altogether the reward from life's baser pleasures. What might you achieve intellectually if you could reprogram your reward circuitry so that you could work pursuing your highest aspirations for 14 hours a day? By way of contrast, using the Internet offers an uncomfortable insight into what one is really interested in. [Sadly, I lose track of the endless hours I've wasted online viewing complete fluff. I tell myself that I'm soon going to enjoy writing a 500 page scholarly tome, The Abolitionist Project. Alas in practice it's more fun surfing the Net for trivia.] In any event, IMO the enemy of intelligence isn't bliss but indiscriminate, uniform bliss; and in the future I think superhappiness and superintelligence can be fused - seamlessly or otherwise.

Are there pitfalls here? Yes, lots. But they are technical problems with a technical solution.

Here's another example. One reason we may be ambivalent about extreme happiness is that we see how it can make people antisocial. One thinks of the heroin addict who neglects his family for the sake of his opioid habit. But what if safe, sustainable designer drugs or gene therapies were available that conferred an unlimited capacity for altruistic pleasure? It has only recently been discovered that the empathogenic hugdrug MDMA (Ecstasy) http://www.mdma.net/ triggers copious release of the "trust hormone" oxytocin: oxytocin seems to be the missing jigsaw piece in explaining MDMA's unique spectrum of action. So to take one scenario, what if mass oxytocin-therapy enabled us to be chronically kind, trusting and empathetic towards each other - the very opposite of the "selfish hedonism" of popular stereotype?

Moreover this option isn't just a matter of personal lifestyle choice; I think the implications are more far-reaching. Thoughtful researchers are increasingly concerned about existential and global catastrophic risks in an era of biowarfare, nanotechnology and weapons of mass destruction. Britain's Astronomer Royal, Sir Martin Rees, puts the odds of human extinction this century at 50%. I suspect this figure is too high, but clearly the risk is not negligible. Anyhow, arguably the greatest underlying source of existential and global catastrophic risk lies in the Y chromosome: testosterone-driven males are responsible for the overwhelming bulk of the world's wars, aggression and reckless behaviour. Decommissioning the Y chromosome isn't currently an option; but the potential civilizing influence of pro-social drugs and gene therapies on dominant alpha males shouldn't be lightly dismissed as a risk-reduction strategy. In general, a world where intelligent agents are happier, more trusting and more trustworthy is potentially a much safer world - and much more civilised too.

Are there pitfalls to modifying human nature? Again yes, lots. But there are also profound risks in retaining the biological status quo.

David Pearce


Hervé Musseau said...

About meat-in-the-vat, you might be a tad optimistic. In France, when meds are 100% reimbursed and people get the choice between the old med and its generic form, many choose the old one they know, because they are unconvinced that the two are really the same, even though the cost to them is nil in both cases and the generic is less costly to society.
Likewise, many people would reject this new form of meat as potentially unsafe.
Given the way GMOs have been handled, vatmeat isn't going to be trusted any time soon, IMO.

Carl said...

"And I'm personally sceptical that digital computers with a classical von Neumann architecture will ever be sentient, let alone superintelligent."

Could you elaborate on this?

Spaceweaver said...

If any thought or action one might have or do will culminate in only one possible emotional result - happiness - what is going to drive conscious selection? Once the driving forces behind conscious selection are weakened or disappear entirely, it is difficult to fathom how the human mind will continue to evolve.

It seems the hedonistic imperative presents as an ideal a kind of mental homeostasis not unlike the Buddhist concept of Nirvana. Such a state also spells the end of evolution. As far as I understand, evolution at all stages - be it biological, social or mental - is driven by sets of opposing forces that create evolutionary pressures and therefore evolutionary motion. It also stands to reason that the emergence of a new evolutionary stage will be defined and characterized by the emergence of an entirely novel kind of driving force.

Having said that, I would propose that both suffering and happiness as we know them belong to our current evolutionary stage - the same stage that gave rise to human language, civilization and technology. At the next stage of evolution, if and when it emerges (call it the evolution of Mind), both suffering and happiness are going to become obsolete as evolutionary driving forces. We will lose interest in them and in their consequences. Our posthuman horizon has everything to do with the emergence of a new stage of evolution and, with it, a novel kind of driving force. What kind of driving force that might be is yet largely unknown.

If so, the Hedonistic Imperative projects the end of our current evolutionary stage by eliminating its primary driving forces. From an evolutionary perspective I see it as an end, not as a beginning.

Would much appreciate David's response to that.

Visigoth said...

"What if mass oxytocin-therapy enabled us to be chronically kind, trusting and empathetic towards each other?" Then you'd be a sucker, my friend. A person so nice would easily be taken advantage of by less benevolent individuals. You will never get 100 percent of the population to agree to such a reformatting. There will always be predators, and people who have become lambs will be easy prey for those of us who still wear wolf's clothing. Most people will choose not to become so G-rated, to avoid being exploited.

midnightsun said...

In regard to cultured meat -- the difference between it and the preference people may feel for brand-name prescriptions over generics is that cultured meat will be healthier and tastier than "regular" meat. You will also be able to culture exotic animal meats, such as tiger, etc., that people normally can't try. They'd really be missing out if they didn't try it; whereas with no cost to them, of course they want the name-brand prescription - everyone likes to get something for "free." GMOs are probably a more apt comparison.

@ Spaceweaver-- think of all the evolution that hasn't taken place because people are so depressed or mentally ill that they can't advance themselves or help society. Imagine if the generation before language was invented had said, "once we have language, where will we go from there?" and fought against it.

@ Visigoth-- I have wondered how this would come about in society as well. It would have to be something that all of society agrees to, which may not be a problem as people realize it's better. (Kind of like how now, if you gave someone the choice of having surgery without anesthesia, they would immediately leave the operating room, go straight to the police, and report you to have your surgical license taken away; 100 years ago that would have been seen as a valid choice because many people then had objections to anesthesia.) The proposed human of the future would not just be "nice" but presumably more intelligent than the human of today and more able to recognize scams.

It is actually Darwinian evolution that has rendered us susceptible to scamming by our fellow citizens: humans are used to trusting other members of the "tribe" for everything we need, and are therefore naturally trusting of others unless we have an explicit reason to think otherwise. So really this is a problem with current evolution, not a problem with the Hedonistic Imperative.

keystrike said...


Perhaps the fact that medications are 100% reimbursed is the reason that people pick the more expensive one they know. As health care costs rise, the state could elect to offer only generics for free. Brand names would probably be selected less often if they cost more. The comparison to meat is not exact, as food is not free - although it would be interesting to look at subsidies to make more precise measurements.