
November 25, 2008

NYT: Can battlefield robots behave more ethically than human soldiers?

According to computer scientist Ronald C. Arkin, the answer to this question is yes. Arkin is currently designing software for battlefield robots under contract with the U.S. Army.

“My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can,” he says.

Excerpt from the New York Times article:

In a report to the Army last year, Dr. Arkin described some of the potential benefits of autonomous fighting robots. For one thing, they can be designed without an instinct for self-preservation and, as a result, no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’ ” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

His report drew on a 2006 survey by the surgeon general of the Army, which found that fewer than half of soldiers and marines serving in Iraq said that noncombatants should be treated with dignity and respect, and 17 percent said all civilians should be treated as insurgents. More than one-third said torture was acceptable under some conditions, and fewer than half said they would report a colleague for unethical battlefield behavior.

...

“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield,” Dr. Arkin wrote in his report (PDF), “but I am convinced that they can perform more ethically than human soldiers are capable of.”

Dr. Arkin said he could imagine a number of ways in which autonomous robot agents might be deployed as “battlefield assistants” — in countersniper operations, clearing buildings of suspected terrorists or other dangerous assignments where there may not be time for a robotic device to relay sights or sounds to a human operator and wait for instructions.

Read the entire article, "A Soldier, Taking Orders From Its Ethical Judgment Center."

November 19, 2008

Wallach and Allen: Six ways to build robots that do humans no harm

[*Note: See addendum at the end of this article]

New Scientist has published an article about building robots that won't harm humans.

They cite the work of Wendell Wallach and Colin Allen, co-authors of Moral Machines: Teaching Robots Right from Wrong. In a recent blog article, Wallach and Allen discuss six of the strategies that have been proposed to help prevent robots from turning on their human creators (in the book, these and many other approaches to implementing moral decision making are discussed, though not listed in this way):
  1. Keep them in low-risk situations
  2. Do not give them weapons
  3. Give them rules like Asimov's 'Three Laws of Robotics'
  4. Program robots with principles
  5. Educate robots like children
  6. Make machines master emotion
Wendell Wallach critiques each of the six strategies, including the observation that the U.S. military is already using robots to kill:
Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A few machine-gun-toting robots were sent to Iraq and photographed on a battlefield, though apparently were not deployed.

However, military planners are very interested in the development of robotic soldiers, and see them as a means of reducing deaths of human soldiers during warfare.

While it is too late to stop the building of robot weapons, it may not be too late to restrict which weapons they carry, or the situations in which the weapons can be used.

Indeed, the primary problem is that if someone wants to create a band of marauding robots, there's really nothing to stop them.

And even absent malign intentions, there's still the potential for disaster. As for giving robots 'rules' and 'principles,' that's easier said than done. This is known as the friendliness problem, an issue that continues to vex a number of AI theorists, including Eliezer Yudkowsky.
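
To see why, consider a deliberately naive sketch of an Asimov-style rule filter (a toy of my own, in Python; none of the names come from Wallach and Allen). The control flow is trivial; all of the difficulty hides in the labels the code simply takes for granted, such as deciding whether an action harms a human.

    # A naive, hypothetical sketch of an Asimov-style rule filter.
    # The hard part is not this control flow but the labels it assumes:
    # nothing here says how to decide whether an action harms a human.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool        # who computes this, and how, is the open problem
        ordered_by_human: bool
        endangers_robot: bool

    def permitted(action: Action) -> bool:
        """Apply a literal reading of the Three Laws to a pre-labeled action."""
        if action.harms_human:           # First Law: never injure a human
            return False
        if action.ordered_by_human:      # Second Law: obey orders (First Law already checked)
            return True
        return not action.endangers_robot    # Third Law: protect its own existence

    # The filter only 'works' because the labels are handed to it ready-made.
    print(permitted(Action("open the door", harms_human=False,
                           ordered_by_human=True, endangers_robot=False)))  # True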

Read the entire article.
__________________

*Addendum: It appears that the New Scientist article was rather sloppily written. I've since rewritten this article and made the appropriate corrections. According to Wendell Wallach, writing on his blog Moral Machines:
The New Scientist article has been misread by some commentators, who believe that we propose moral machines can be built with these simple strategies and that the critiques of the strategies were written by Simonite. The strategies and evaluation of the strategies were written by us. The original article from which this material was drawn can be found below on my October 13th posting.
Specifically, the New Scientist article's author, Tom Simonite, did not make it sufficiently clear who was responsible for the content and commentary. As Wallach noted to me in personal conversation, "90% of that article was written by me, including the strategies and the comments as to why the strategies were inadequate and simplistic. The piece was written as a tool to direct attention to the book, which is the first overview of the field of machine morality and a very serious and sophisticated look at the challenge."

The original version of the article that Simonite received can be viewed on the Moral Machines blog.

To Wendell Wallach, Colin Allen and my readers: I regret making any errors or mischaracterizations in the course of presenting this article.

November 15, 2008

Convergence08: Opening panel on AI

Opening AI panel:
Peter Norvig, when asked how he would advise Obama if he were the CTO, responded, "Believe in reality." This got a great reaction from the audience.

Ben Goertzel notes that, instead of bailing out "corrupt banks" and "incompetent auto makers," the billions of dollars should be funneled into private enterprise in such fields as AI, nanotechnology, and so on. He feels the U.S. is misallocating its national resources by funding dying industries instead of AI and health care.

Panel is asked what kind of AI applications we can expect by 2015. Pell said we can expect to be able to talk to our game agents, and Omohundro talked about robotic cars and robots in the home (care for the elderly). Goertzel predicts AI for semi-automated scientific discovery and experiment design.

A questioner from the audience says that transhumanism suffers from a "public relations deficit," and wonders how transhumanists can better go about outreach and advocating for a technological future. Omohundro feels that the popular media is contributing to some of the scaremongering and negative characterizations. Goertzel thinks it's important that we roll out these technologies via positive applications; they need to be practical and helpful -- eventually these technologies will be interwoven with our lives and accepted.

Goertzel argues that the problem with AGI is not so much technical as it is financial. He says we could be moving five times faster with the requisite funding. Norvig says there isn't going to be one single breakthrough -- there will be thousands of applications, each along its own path; he disagrees with the singular focus that's characteristic of Goertzel's and Yudkowsky's research and thinking.

What can we do to accelerate things? Pell says we should focus on what we're really best at and pursue the paths that bear fruit the quickest; we should apply our talents to the problems at hand.

What are the misconceptions surrounding AI? Pell feels that the biggest myth is that AI is impossible and that humans are somehow special. Goertzel says that long predictive time scales are typically off the mark. Omohundro feels that most people can't grok rapid/radical change, while others think far too much of the future -- instead we need to walk a middle path. Norvig noted that there have been some very healthy changes to AI theory, including the shift from logical to probabilistic approaches.
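
As a footnote to Norvig's last point, here is a minimal illustration of what that shift looks like in practice (a toy of my own in Python, not anything presented on the panel): a logical rule returns a hard verdict, while Bayes' rule returns a graded degree of belief.

    # A toy contrast between a logical rule and a probabilistic judgment.
    # All names and numbers are invented for illustration only.

    def logical_spam_filter(contains_keyword: bool) -> bool:
        # Logical approach: a brittle, all-or-nothing rule.
        return contains_keyword

    def bayesian_spam_probability(prior: float, likelihood: float, evidence: float) -> float:
        # Probabilistic approach: Bayes' rule,
        # P(spam | word) = P(word | spam) * P(spam) / P(word).
        return likelihood * prior / evidence

    print(logical_spam_filter(contains_keyword=True))             # True: a hard verdict
    # With P(spam)=0.2, P(word|spam)=0.7 and P(word)=0.25, the posterior is 0.56.
    print(round(bayesian_spam_probability(0.2, 0.7, 0.25), 2))    # 0.56: a graded belief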