I'm becoming a bit sensitive to how often people conflate robotics with artificial intelligence. I often hear people talking about "robot rights" and "robot ethics" as if they were interchangeable terms.
They are not.
The former addresses the eventuality that robots will be endowed with AI (and thus deserving of rights), while the latter refers to the ways in which humans choose to use robots in such settings as the workplace or the battlefield.
A robot, no matter how sophisticated, will never have any moral worth so long as it is devoid of subjective experience. Even the most complex robot will be no more valuable from an ethical perspective than an automobile or a rock.
An AI, on the other hand, has the potential for moral consideration. It is quite possible that in the not-too-distant future we will develop an AI that has subjectivity, a sense of self, and emotional capacities. Once that happens, a piece of source code will cease to be a mere object and will instead be regarded as a subject.
It does not matter where the AI resides or what its external manifestation looks like. If an AI is uploaded to a robot and has control over its body, then the robot can be said to carry moral worth as a complete entity—in the same way a human, with mind and body, is afforded rights. Likewise, a conscious AI that exists in non-corporeal form (e.g. an artificial intellect living in a computer-simulated environment) is also deserving of rights. Substrate doesn't matter; presence of mind does.