August 17, 2006

More comments re: uplift, personhood, the species barrier and moral consideration

[my responses to a post on the Technoliberation list]:

>"How do you know species barriers are not relevant to moral
>considerations? We have many examples of species, and none of them,
>even dogs, "man's best friend," seems interested in government or
>literature. My view that they should be left alone, unless they enter
>into voluntary relationships with us, comes from what I have learned
>about evolutionary and behavioral biology, not from Star Trek."

First, the notion that my ethics or my perception of animal behaviour is derived from Star Trek is terribly condescending. Such an accusation betrays a bias (and even a degree of future shock) against seemingly futuristic and highly speculative scenarios involving advanced biotechnologies.

Moving on, let's discuss the merits of the ideas themselves.

Why must we talk about species as a moral limiter? What is so special about a species such that morphological or psychological characteristics are trumped? I know perfectly capable humans who have no interest in government or literature; why do they automatically warrant moral consideration? And what about severely disabled humans who are psychologically incapable of an interest in government or literature, or who have not ‘voluntarily’ entered into relationships with us? Why do they merit moral consideration where a nonhuman does not? Essentially, you want to have your cake and eat it too: you’re saying that species matters because it comes down to ingrained capability and intent, while simultaneously failing to recognize that these capabilities are a) not universal among humans and b) arguably present in some nonhumans. Great apes, for example, are social creatures who live according to hierarchies (I’d call that government, at least in proto form), and other species, like dolphins and elephants, have shown an interest in art.

>No amount of genetic engineering has ever transformed one species into
> another, so they must have acquired discrete, bounded identities over
> the course of evolution.

You’re describing the condition of nonhuman animals as they currently are, which is not in itself an argument for how they ought to remain, unless you’re buying into the naturalistic fallacy.

>You perhaps believe that genetic
>manipulation can "uplift" animals to make them more compatible with
>social and cultural practices that are the product of a very specific
>trajectory of human evolution. (The language certainly smacks of
>imperialism and the missionary imperative.)

I wouldn’t say that the goal is to make nonhumans more compatible with social and cultural practices (though there may be some merit to that). Rather, the whole uplift thought experiment is to determine, to the best of our abilities, what it is that nonhumans would want for themselves given the availability of uplift technologies, what our moral obligations are to nonhuman animals, and whether or not such a project could ever be safe and effective. You say we run the risk of being imperialistic or evangelical; I say we run the risk of being neglectful to the point of reckless indifference.

>Most likely, such efforts
>will just produce damaged animals, since the relationship between
>genes and traits is complex and context dependent (i.e., the same
>gene, or group of genes, will function very differently in the
>context of the human and pig genomes).

These are open empirical questions. My own view is that they are not intractable problems.

>Sentient computers -- a
>science fiction fantasy if there ever was one! Is there even a
>glimmer of this in the most powerful supercomputers?

Irrelevant, but telling: you’ve clearly got substrate bias issues.

>Your position
>seems to be a category error: since humans have for too long placed
>other humans in lower or nonequivalent rights categories for
>pernicious reasons such as gender, race, ethnicity, sexual
>orientation, why not just extend the same rights to nonhumans? But
>rights are a human product. We can generalize humane behavior to
>other groups, we can serve our cats vegetarian meals, we can even
>program robots to help rather than harm each other, but that doesn't
>make them members of the primary moral community."

Rights may be a human product, but they apply to all persons—human or otherwise. And your notion of a human monopoly on a ‘primary moral community’ is problematic for the reasons already discussed, namely your prejudice against beings we don’t describe as “human.”

You’re suggesting that we have to leave nonhumans alone because it would be ‘imperialistic’ of us to bring them into civilization, and because they are unable to actually ask to be uplifted. I say it’s an animal welfare issue, that we can assume consent, and that uplift falls not just within our moral bounds but within our social obligations, seeing as many of these persons already fall within the social contract.
