April 29, 2009

Guest blogger David Pearce answers your questions (part 2)

David Pearce is guest blogging this week.

Here are two more replies in response to questions about my Abolitionist Project article from earlier this week.

Carl makes an important point: "Why think that affective gradients are necessary for motivation at all? Consider minds that operate with formal utility functions instead of reinforcement learning. Humans are often directly motivated to act independently of pleasure and pain."

Imagine if we could find a functionally adequate substitute for the signaling role of negative affect - a bland term that hides a multitude of horrors - and replace its nastiness with formal utility functions. Why must organic robots like us experience the awful textures of physical pain, depression and malaise, while our silicon robots function well without them? True, most people regard life's heartaches as a price worth paying for life's joys. We wouldn't want to become zombies.
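The contrast Carl draws can be made concrete with a toy sketch. This is purely illustrative (the states, numbers, and function names below are my own hypothetical examples, not anything from the article): one agent consults an explicit, formal utility function and acts on it directly, with no hedonic signal involved; a reinforcement-style agent starts ignorant and needs reward/punishment feedback to converge on the same preferences.

```python
# Hypothetical sketch: motivation via an explicit utility function
# versus motivation via learned reward signals. All states and
# values here are illustrative, not drawn from the original post.

def utility(state: str) -> float:
    """A formal utility function: values are read off directly,
    with no pleasure/pain signal needed to instil them."""
    return {"tissue_damage": -10.0, "rest": 1.0, "repair": 5.0}.get(state, 0.0)

def choose_action(options: list[str]) -> str:
    # Pick the highest-utility option -- action selection without affect.
    return max(options, key=utility)

# By contrast, a reinforcement-style agent must *learn* these values
# from feedback, which is the functional role pain and pleasure play.
def reinforcement_update(estimates: dict, state: str,
                         reward: float, lr: float = 0.5) -> dict:
    old = estimates.get(state, 0.0)
    estimates[state] = old + lr * (reward - old)  # move toward the signal
    return estimates

print(choose_action(["tissue_damage", "rest", "repair"]))  # -> repair
```

The point of the sketch is only that the signaling role of negative affect is, in principle, functionally substitutable: the first agent avoids tissue damage just as reliably as the second, without anything playing the role of pain.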

But what if it were feasible to "zombify" the nasty side of life completely while amplifying all the good bits - perhaps so we become "cyborg buddhas"?

More radically, if the signaling role of affect proves dispensable altogether, it might be feasible computationally to offload everything mundane onto smart prostheses - and instead enjoy sublime states of bliss every moment of our lives, without any hedonic dips at all. I say more on this theme in my reply to "Wouldn't a permanent maximum of bliss be better?" I need scarcely add this is pure speculation.

Leafy asks me to comment on an "animal welfare state, and [...] how your views about the treatment of nonhuman animals (e.g., that animals need care and protection, not liberation, and when animal use or domination might be morally acceptable) differ from those of people such as Singer and Francione".

First, let's deal with an obvious question. Millions of human infants die needlessly and prematurely in the Third World each year. Shouldn't we devote all our energies to helping members of our own species first? To the extent humans suffer more than non-humans, I'd answer: yes - though rationalists should take extraordinary pains to guard against anthropocentric bias. Critically, there is no evidence that domestic, farm or wild mammals are any less sentient than human infants and toddlers. If so, we should treat their well-being impartially. A critic will respond here that human infants have moral priority because they have the potential to become full-grown adults - with the moral primacy that we claim. But we wouldn't judge a toddler with a terminal disease who will never grow up to deserve any less love and care than a healthy youngster. Likewise, the fact that a dog or a chimpanzee or a pig will never surpass the intellectual accomplishments of a three-year-old child is no reason to let them suffer more. Thus I think it's admirable that we spend a hundred thousand dollars trying to save the life of a 23-week-old extremely premature baby; but it's incongruous that we butcher and eat billions of more sophisticated sentient beings each day.

Actually, IMO words can't adequately convey the horror of what we're doing in factory farms and slaughterhouses. Self-protectively, I try to shut it out most of the time. After all, my intuitions reassure me, they're only animals; what's going on right now can't really be as bad as I believe it to be. Yet I'm also uncomfortably aware that this is moral and intellectual cowardice.

Is a comprehensive welfare system for non-human animals technically feasible? Yes. The implications of an exponential growth of computing power for the biosphere are exceedingly counterintuitive. See, for example, The Singularity Institute or Ray Kurzweil - though I'm slightly more cautious about timescales. In any event, by the end of the century we should have the computational resources to micromanage an entire planetary ecosystem. Whether we will actually use those computational resources systematically to promote the well-being of all sentient life in that kind of timeframe is another matter; presumably it's unlikely. However, we already - and without the benefit of quantum supercomputers - humanely employ, for example, depot-contraception rather than culling to control the population numbers of elephants in some overcrowded African national parks. Admittedly, ecosystem redesign is only in its infancy; and we've barely begun to use genetic engineering, let alone genomic rewrites. But if our value system so dictates, we could use nanobots to go to the furthest ends of the Earth and the deep oceans and eradicate the molecular signature of unpleasant experience wherever it is found. Likewise, we could do the same to the genetic code that spawns it. In any case, for better or worse, by the mid-century large terrestrial mammals are unlikely to survive outside our "wildlife" reserves simply in virtue of habitat destruction. How much suffering we permit in these reserves is up to us.

Gary Francione and Peter Singer? Despite their different perspectives, I admire them both. As an ethical utilitarian rather than a rights theorist, I'm probably closer to Peter Singer. But IMO a utilitarian ethic dictates that factory-farmed animals don't just need "liberating"; they need to be cared for. Non-human animals in the wild simply aren't smart enough to look after themselves adequately in times of drought, famine or pestilence, for instance, any more than are human toddlers and infants, and any more than were adult members of Homo sapiens before the advent of modern scientific medicine, general anaesthesia and painkilling drugs. [Actually, until humanity conquers ageing and masters the technologies needed reliably to modulate mood and emotion, this control will be woefully incomplete.]

At the risk of over-generalising, we have double standards: an implicit notion of "natural" versus "unnatural" suffering. One form of suffering is intuitively morally acceptable, albeit tragic; the other is intuitively morally wrong. Thus we reckon someone who lets their pet dog starve to death or die of thirst should be prosecuted for animal cruelty. But an equal intensity of suffering is re-enacted in Mother Nature every day on an epic scale. It's not (yet) anybody's "fault." But as our control over Nature increases, so does our complicity in the suffering of Darwinian life "red in tooth and claw". So IMO we will shortly be ethically obliged to "interfere" [intervene] and prevent that suffering, just as we now intervene to protect the weak, the sick and the vulnerable in human society.

But here comes the real psychological stumbling-block. One of the more counterintuitive implications of applying a compassionate utilitarian ethic in an era of biotechnology is our obligation to reprogram and/or phase out predators. In the future, I think a lot of thoughtful people will be relaxed about phasing out/reprogramming, say, snakes or sharks. But over the years, I've received a fair bit of hate-mail from cat-lovers who think that I want to kill their adorable pets. Naturally, I don't: I'd just like to see members of the cat family reprogrammed [or perhaps "uplifted"] so they don't cause suffering to their prey. As it happens, I've only once witnessed a cat "playing" with a tormented mouse. It was quite horrific. Needless to say, the cat was no more morally culpable than a teenager playing violent videogames, despite the suffering it was inflicting. But I've not been able to enjoy watching a Tom-and-Jerry cartoon since. Of course the cat's victim was only a mouse. Its pain and terror were probably no worse than mine the last time I caught my fingers in the door. But IMO a sufficiently Godlike superintelligence won't tolerate even a pinprick's worth of pain in post-human paradise. And (demi)gods, at least, is what I predict we're going to become...

David Pearce
dave@hedweb.com
http://www.hedweb.com/

7 comments:

Michael Kirkland said...

This Abolitionist Project of yours violates the most basic of moral principles: the reciprocity principle or "golden rule".

Had some other sapience with your ideas found Earth a million years ago and undertaken such a project, we would not have had the opportunity to evolve our own sapience.

Frankly, your ideas about predators are genocidal, and using euphemisms like "phase out" is monstrous. It's somewhat perplexing that you would posit something as outlandish as "uplifting" with all its logical inconsistency over simply giving the cat vat grown mouse meat.

keystrike said...

Michael,

I agree that the "golden rule" is a rule of thumb which humans created to live peacefully amongst each other during these early years. But I disagree that it is a fundamental principle of morality. I would argue that morality is about reducing suffering and possibly increasing happiness. This itself of course does not preclude the golden rule. Personally I wish an intelligence arrived on Earth a million years ago and built a better world here. As it is, natural selection has created too much unnecessary suffering.

How is it that we evolved our own sapience? Personally I had nothing to do with the development of human or animal intelligence. One might argue that no human that has ever lived shaped the development of intelligence to any great extent. Its development is a fact of natural selection pressure based on genetic competition.

If it is even possible to truly control the development of our intelligence at this stage, it can only be advanced by a project such as the one DP writes about. As he points out, "life in dopaminergic overdrive" is more rewarding and more likely to lead to advances in knowledge. This includes knowledge in the sciences such as physics and extends to an understanding of the nature of intelligence and what it means to be "an intelligent species".

Frankly, predators are homicidal, genocidal, etc. and do monstrous things. I don't see what's so great about preserving the status quo. Should we make H1N1 and allow it to infect cells in vitro?

Leafy said...

When you talk about "phasing out predators," do you mean phasing out the species that are currently predators? Or do you mean eliminating predation by uplifting them into, for example, post-felines?

I understand why you say your views are closer to Singer's since you are both utilitarians, and since unlike Francione you have no objection to "using" nonhuman animals (or humans) as long as no suffering is involved. But in a crucial way your views are closer to Francione's, since Singer thinks it's fine to murder (but not torture) nonhuman animals and you do not.

Carl said...

You seem to hold a Chalmers-style supernatural account of consciousness, e.g. in carbon-silicon comparisons. In Chalmers' account we just happen to exist in a world in which contingent psychophysical laws are astronomically fine-tuned so that the claims our brains make about conscious experience (e.g. that different functional states are accompanied by distinct supernatural phenomenal states) come out right.


Even if the psychological intuition that our own psychological experiences are matched with corresponding supernatural experiences is inescapable (despite the knowledge that it is essentially uncorrelated with its truth over possible worlds), you can't infer from that to experiences in other sorts of systems (an Occam's Razorish prior over contingent physical law).

Under the Chalmers account, in the utterly overwhelming majority (consider the number of degrees of freedom) of possible worlds in which your phenomenal experiences match your phenomenal judgments, the same correspondence does not hold of other functionally different systems (including your past and future selves). Anthropic reasoning then suggests active disbelief that in our world psychological experiences other than the ones we are having right now are organizationally matched by phenomenal experience.

Kosmonavtka said...
This comment has been removed by the author.
Kosmonavtka said...

"One of the more counterintuitive implications of applying a compassionate utilitarian ethic in an era of biotechnology is our obligation to reprogram and/or phase out predators."

Predatory animals are a vital part of the environment (they stop prey animals from overpopulating an ecosystem), and the natural world has been managing itself quite well for millions of years without human interference! Being squeamish about the actions of predators (such as the cat and mouse example) is no good reason to try to change them.


"In any case, for better or worse, by the mid-century large terrestrial mammals are unlikely to survive outside our "wildlife" reserves simply in virtue of habitat destruction. How much suffering we permit in these reserves is up to us."

Maybe humans should manage their own population (which is already far too high); then other animals won't be endangered.

Kenmeer livermaile said...

My main question (not a challenge but a question, a projected wonderment) is:

What forms of caprice will substitute for suffering?

Caprice, chance, chaos, what have you, seem essential components of conscious development.

Strife, i.e. striving to overcome suffering, is perhaps the quintessential motivator of what humans call progress.

Superhappiness doesn't mean perfection, so we will have to continue to solve problems. While it seems easy to say we'll just associate creative problem-solving with uncommonly high levels of bliss in order to inspire us to work for solutions, one of the problems might well be BOREDOM. The ennui of God. (The Big Bang might have been God blowing His brains out from sheer desperate apathy.)

This line of thought is based on the notion that what makes consciousness worth its cognizance is its raw ability to give a damn, period, and the possibility that bliss might make us apathetically sated.

(Not that bliss would distract us from taking care of business as in that classic old Vonnegut story about the happiness transmitter, but that we would increasingly find it hard to care enough to bother to project our imaginations into the future and deal with the inevitable Big Ass Problem coming our way, like a rogue dark star or something.)

But then, boredom might be its own answer. Boredom might cause us to back off from bliss just enough to get a little hit of misery and be thereby motivated to continue enough strife to keep the ball rolling.

Underneath all this is this simple question: would serendipity survive superhappiness?

But that's just me; I have an irrational security fetish for serendipity. For me, it is the music of the spheres, the quintessence of reality that holds everything together.

Thanks for giving me cause to pause and wonder on such matters.