February 19, 2012

Robots and AI are not the same thing

I'm becoming increasingly sensitive to how often people conflate robotics with artificial intelligence. I often hear people talk about "robot rights" and "robot ethics" as if the two terms were interchangeable.

They are not.

The former addresses the eventuality that robots will be endowed with AI (and thus become deserving of rights), while the latter refers to the ways in which humans choose to use robots in settings such as the workplace or the battlefield.

A robot, no matter how sophisticated, has no moral worth so long as it is devoid of subjective experience. Even the most complex robot is no more valuable from an ethical perspective than an automobile or a rock.

An AI, on the other hand, has the potential for moral consideration. It is quite possible that in the not-too-distant future we will develop an AI that has subjectivity, a sense of self, and emotional capacities. Once that happens, a piece of source code will cease to be a mere object and will instead be regarded as a subject.

It does not matter where the AI resides or what its external manifestation looks like. If an AI is uploaded to a robot, and has control over its body, then it can be said that the robot carries moral worth as a complete entity—in the same way a human, with mind and body, is afforded rights. In addition, a conscious AI that exists in non-corporeal form (e.g. an artificial intellect living in a computer simulated environment) is also deserving of rights. Substrate doesn't matter; presence of mind does.

2 comments:

Elf Sternberg said...

All well and truly agreed. But a robot with subjective experience-- an AI with power over the real world-- who is not utterly morally constrained is a real danger, in an "I have no mouth and I must scream" sense, rather than a paperclipper sense.

You're making a fine distinction few people care about. And the fact is we'll get to automatic sweethearts, about whom we emote and for whom we demand empathy, while somewhere, some programmer will have at least a sense of what's going on underneath that is unlike consciousness, that is stochastic and deterministic, but that passes the Turing test for enough people. That's when you have to worry-- because either they succeed, in which case un-self-reflective creatures have human rights-- or you succeed, in which case you've got a generational poisoning of our expectations of other "people."

Yordan Georgiev said...

IMHO AI would not be possible without robots feeding the data to it...
How would you define the AI then: the hardware (some kind of cluster of supercomputers), the software, or its interfaces (the robots and various feeding systems)?

When AI arrives (if it is not already here; ask the USA army or Google ;) it will be much smarter than any single human being, but legal rights?!

No, probably not... It will be just the property of some organizational entity, to which any other property laws will apply... until, of course, it starts participating in the law making ;)