August 25, 2010

Phillippe Verdoux on the enhancement paradox

IEET contributor Phillippe Verdoux wonders if enhancing is necessary in order to decide whether or not enhancing is a good idea:
Many transhumanists are enthusiastic about the possibilities of cognitive enhancement. Such enthusiasts might say something like: “I want to use advanced technologies – from genetic engineering and psychoactive pharmaceuticals to neural implants and even mind-uploading – to increase my intelligence, to make me ‘smarter, wiser, or more creative’ [PDF], to produce a ‘smarter and more virtuous’ person, to mentally and emotionally augment myself.”

But...talk of enhancement presupposes some conception of the self. Specifically, it assumes that the self is capable of enduring such modifications, e.g., as a pattern, or as an immaterial soul, or whatever. The resulting enhanced being would thus still be me, it would just be a different and “better” (according to some set of criteria) version of me.
...
Now, an interesting paradox arises when one combines the above claims with a specific (and controversial) stance on what the self is...
...
More importantly, though, it must be pointed out that cognitive enhancement is only one route to the destination of greater-than-human intelligence: the other is artificial intelligence (AI). Another option would thus be to create a superintelligent AI system that could help us deliberate about whether or not we should use cognitive enhancements. This would offer a way out of the paradox, since it doesn't involve modifying ourselves.

The trouble is, however, that AI may turn out to be more difficult than enhancing the neurobiological core of Homo sapiens, which means that the paradox would remain intact: in this case, the most feasible way to engender a new species of ultra-smart posthumans would be through human enhancement and not AI.

Finally, one could generalize the basic idea to AI as well. That is, we might pose a general moral question about whether or not it would be good to create a species of posthumans through either method, enhancement or AI. Our ability to answer this question, though, is no doubt far more limited than the ability of a superintelligent biotechnological hybrid or completely synthetic posthuman to answer it.
More.

4 comments:

Anonymous said...

My working assumption is that cognitive enhancement will eliminate my current self and replace it with a new one.

I don't fear this any more than I fear going to sleep, as I don't expect to be the exact same person when I wake up.

I am not a state of being, I am an ongoing process, and change and flux are inherent in that. The only questions that remain are which changes to embrace, and which to avoid.

ZarPaulus said...

When you wake up, you're not the same person you were when you fell asleep. Just as you're not the same person when you fall asleep as you were when you woke up that morning. Just be sure there's a transition when you're cognitively enhanced, not just copied.

Anonymous said...

As long as the transformation process is gradual rather than sudden, there is probably no destruction of "self" in any meaningful sense. And one could presumably stop or reverse the process upon finding that one doesn't like what one is becoming.

Simon said...

Susan Schneider raises the same question in light of the traditional philosophical literature on the subject. Throw in Tooley's actualisation vs. capacity problem and the functionalist account doesn't quite fit.