April 28, 2009

Guest blogger David Pearce answers your questions

David Pearce is guest blogging this week

In yesterday's post I promised I'd try to respond to comments and questions. Here is a start - but alas I'm only skimming the surface of some of the issues raised.

t.theodorus ibrahim asks: "what are the current prospects of a merger or synergy between transhumanist concerns of the rights of natural humans within a world dominated by greater intelligences and the existing unequal power relations between animals - especially between human and non-human animals? Will the human reich give way to a transhuman reich?!"

The worry that superintelligent posthumans might treat natural humans in ways analogous to how humans treat non-human animals is ultimately (I'd argue) misplaced. But reflection on such a disturbing possibility might (conceivably) encourage humans to behave less badly to our fellow creatures. Parallels with the Third Reich are best used sparingly - the term "Nazi" is too often flung around in overheated rhetoric. Yet when a Jewish Nobel laureate (Isaac Bashevis Singer, The Letter Writer) writes: "In relation to them, all people are Nazis; for the animals it is an eternal Treblinka", he is not using his words lightly. Whether we're talking about vivisection to satisfy scientific curiosity or medical research, or the industrialized mass killing of "inferior" beings, the parallels are real - though like all analogies, they eventually break down when pressed too far.

One reason I'm optimistic that some kind of "paradise engineering" is more likely than dystopian scenarios is that the god-like technical powers of our descendants will most likely be matched by a god-like capacity for empathetic understanding - a far richer and deeper understanding than our own faltering efforts to understand the minds of sentient beings different from "us". Posthuman ethics will presumably be more sophisticated than human ethics - and certainly less anthropocentric. Also, it's worth noting that a commitment to the well-being of all sentience is enshrined in the Transhumanist/H+ Declaration -- a useful reminder of what (I hope) unites us.

Spaceweaver, if I understand correctly, argues that perpetual happiness would lead to stasis. Lifelong bliss would bring the evolutionary development of life to a premature end: discontent is the motor of progress.

I'd agree this sort of scenario can't be excluded. Perhaps we'll get stuck in a sub-optimal rut: some kind of fool's paradise or Brave New World. But IMO there are grounds for believing that our immediate descendants (and perhaps our elderly selves?) will be both happier and better motivated. Enhancing dopamine function, for instance, boosts exploratory behaviour, making personal stasis less likely. By contrast, it's depressives who get "stuck in a rut". Moreover if we recalibrate the hedonic treadmill rather than dismantle it altogether, then we can retain the functional analogues of discontent minus its nasty "raw feels". See "Happiness, Hypermotivation and the Meaning Of Life."

FrF asks: "Could it be that transhumanists who are, for whatever reason, dissatisfied with their current lives overcompensate for their suffering by proposing grand schemes that could potentially bulldoze over all other people?"

For all our good intentions, yes. The history of utopian thought is littered with good ideas that went horribly wrong. However, technologies to retard and eventually abolish ageing, or tools to amplify intelligence, are potentially empowering - liberating, not "bulldozing". Transhumanists/H+-ers aren't trying to force anyone to be eternally youthful, hyperintelligent or even - God forbid - perpetually happy.

Genuine dilemmas do lie ahead: for instance, is a commitment to abolishing aging really consistent with a libertarian commitment to unlimited procreative freedom? But I think one reason many people are suspicious of radical future mood-enhancement technologies is the curious fear that someone, somewhere is going to force them to be happy against their will.

It's also worth stressing that radically enriching hedonic tone is consistent with preserving most of your existing preference architecture intact: indeed in one sense, raising our "hedonic set-point" is a radically conservative strategy to improve our quality of life. Uniform bliss would be the truly revolutionary option: and uniform lifelong bliss probably isn't sociologically viable for an entire civilisation in the foreseeable future. However, stressing a future of hedonic gradients and the possibility of preference architecture conservation is arguably to underplay the cognitive significance of the hedonic transition (IMO) in prospect. Just as the world of the clinically depressed today is unimaginably different from the world of the happy, so I reckon the world of our superhappy successors will be unimaginably more wonderful than contemporary life at its best. In what ways? Well, even if I knew, I wouldn't have the conceptual equipment to say.

Visigoth rightly points out that any long-acting oxytocin-therapy [or something similar] would leave us vulnerable to being "suckered". Could genetically non-identical organisms ever wholly trust each other - or ever be wholly trustworthy - without compromising the inclusive fitness of their genes? Maybe not. I discuss the nature of selection pressure in a hypothetical post-Darwinian world here: "The Reproductive Revolution: Selection Pressure in a Post-Darwinian World". Needless to say, any conclusions drawn are only tentative.

Away from genetics, it's worth asking what it means to be "suckered" when life no longer resembles a zero-sum game and pleasure is no longer a finite resource - a world where everyone has maximal control over their own reward circuitry. Right now this scenario sounds fantastical; but IMO the prospect is neither as fantastical nor as remote as it sounds. See the Affective Neuroscience and Biopsychology Lab for how close we're coming to discovering the anatomical location and molecular signature of pure bliss.

More tomorrow....

David Pearce

1 comment:

Carl said...

Why think that affective gradients are necessary for motivation at all? Consider minds that operate with formal utility functions instead of reinforcement learning. Humans are often directly motivated to act independently of pleasure and pain.