November 19, 2008

Wallach and Allen: Six ways to build robots that do humans no harm

[*Note: See addendum at the end of this article]

New Scientist has published an article about building robots that won't harm humans.

The article cites the work of Wendell Wallach and Colin Allen, co-authors of Moral Machines: Teaching Robots Right from Wrong. In a recent blog article, Wallach and Allen discuss six of the strategies that have been proposed to help prevent robots from turning on their human creators (the book discusses these and many other approaches to implementing moral decision making, though not as a simple list):
  1. Keep them in low-risk situations
  2. Do not give them weapons
  3. Give them rules like Asimov's 'Three Laws of Robotics'
  4. Program robots with principles
  5. Educate robots like children
  6. Make machines master emotion
Wendell Wallach critiques each of the six strategies, observing among other things that the U.S. military is already using robots to kill:
Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A few machine-gun-toting robots were sent to Iraq and photographed on a battlefield, though apparently were not deployed.

However, military planners are very interested in the development of robotic soldiers, and see them as a means of reducing deaths of human soldiers during warfare.

While it is too late to stop the building of robot weapons, it may not be too late to restrict which weapons they carry, or the situations in which the weapons can be used.

Indeed, the primary problem is that if someone wants to create a band of marauding robots, there is really nothing to stop them.

Even absent malign intentions, there is still the potential for disaster. As for giving robots 'rules' and 'principles,' that is easier said than done (a toy example is sketched below). This is known as the friendliness problem, an issue that continues to vex a number of AI theorists, including Eliezer Yudkowsky.
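
To see why, here is a minimal, purely hypothetical sketch in Python of the kind of Asimov-style rule filter that strategies 3 and 4 imagine. Every name in it (Action, permitted, flags like harms_human) is invented for illustration. The filter itself is trivial; the unsolved part is reliably computing a predicate such as "this action harms a human" in the open world.

```python
# Hypothetical sketch of a naive Asimov-style rule filter.
# All names here are illustrative inventions, not a real robotics API.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # First Law: may not injure a human
    disobeys_order: bool    # Second Law: must obey human orders
    endangers_self: bool    # Third Law: must protect its own existence

def permitted(action: Action) -> bool:
    """Reject any action that violates the laws, checked in priority order."""
    if action.harms_human:
        return False
    if action.disobeys_order:
        return False
    if action.endangers_self:
        return False
    return True

# The filtering logic is a few lines; the hard, open problem is that no one
# knows how to label real-world actions with flags like harms_human reliably.
candidates = [
    Action("deliver medicine", False, False, False),
    Action("push bystander aside", True, False, False),
]
safe = [a.name for a in candidates if permitted(a)]
print(safe)  # -> ['deliver medicine']
```

In this toy setting the boolean flags are handed to the filter for free; in reality, deciding whether an action harms a human is itself the whole problem, which is what makes the simple "give them rules" strategy simplistic.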

Read the entire article.
__________________

*Addendum: It appears that the New Scientist article was rather sloppily written. I've since rewritten this article and made the appropriate corrections. According to Wendell Wallach, writing on his blog, Moral Machines:
The New Scientist article has been misread by some commentators, who believe that we propose moral machines can be built with these simple strategies and that the critiques of the strategies were written by Simonite. The strategies and evaluation of the strategies were written by us. The original article from which this material was drawn can be found below on my October 13th posting.
Specifically, the New Scientist article's author, Tom Simonite, did not make it sufficiently clear who was responsible for the content and commentary. As Wallach noted to me in personal conversation, "90% of that article was written by me, including the strategies and the comments as to why the strategies were inadequate and simplistic. The piece was written as a tool to direct attention to the book, which is the first overview of the field of machine morality and a very serious and sophisticated look at the challenge."

The original version of the article that Simonite received can be viewed on the Moral Machines blog.

To Wendell Wallach, Colin Allen and my readers: I regret making any errors or mischaracterizations in the course of presenting this article.

1 comment:

Nato said...

I grant that we should try to reduce the chances of inadvertent harm caused by the semi-intelligent types of robots in planning and production today. In the long run, however, I don't hope for 'robots' that are no threat to humans. Self-defense should be a right for all persons. We already hurt one another all the time because of this principle, and I expect it will continue into the era when a subset of humanity enters the world from a factory rather than a hospital. Making separate rules for hardware humans seems discriminatory, wrong, and ultimately counterproductive. Serious ethicists should be very careful on this point, now that we are reaching the cusp of that era.