'I'm sorry, I can't do that' -- Robots learning to say 'no'

A robot rejects a Tufts University researcher's command to "walk forward" because it would mean putting itself at risk of harm. Photo: HRI Laboratory at Tufts University / YouTube video screenshot

Researchers at Tufts University said they are developing mechanisms for robots to perform a previously unheard-of task: saying "no" to orders from humans, reports United Press International (UPI).
Gordon Briggs and Matthias Scheutz of Tufts University's Human-Robot Interaction Lab presented research last week at the AI for Human-Robot Interaction symposium in Washington, DC, detailing their efforts to teach robots when to reject direct orders from a human, UPI said.

The researchers -- who titled their paper "Sorry, I can't do that" in a nod to disobedient artificial intelligence HAL 9000 from 2001: A Space Odyssey -- said their research is based on "felicity conditions," questions asked internally to determine the understanding of a task and the capability of performing it.

The felicity conditions Briggs and Scheutz suggested for robots are as follows (a sketch of how such a chain of checks might look appears after the list):

1. Knowledge: Do I know how to do X?
2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
3. Goal priority and timing: Am I able to do X right now?
4. Social role and obligation: Am I obligated based on my social role to do X?
5. Normative permissibility: Does it violate any normative principle to do X?
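
The paper does not publish code, but as a rough illustration of how these five questions might be chained, here is a minimal Python sketch. Every name in it (Command, FelicityChecker, evaluate, and the refusal messages) is a hypothetical stand-in invented for this article, not the Tufts lab's actual architecture.

```python
# Hypothetical sketch of a felicity-condition pipeline, loosely modeled on
# the five questions above. Names and messages are illustrative inventions.
from dataclasses import dataclass, field


@dataclass
class Command:
    action: str                                       # e.g. "walk forward"
    issuer: str                                       # who gave the order
    asserted_facts: set = field(default_factory=set)  # e.g. "I will catch you"


class FelicityChecker:
    def __init__(self, known_actions, operational, authorized_issuers):
        self.known_actions = known_actions            # used by condition 1
        self.operational = operational                # used by condition 2
        self.authorized_issuers = authorized_issuers  # used by condition 4

    def evaluate(self, cmd, busy=False, predicted_harm=False):
        """Return a refusal message, or None if the command is accepted."""
        if cmd.action not in self.known_actions:
            return "I don't know how to do that."           # 1. knowledge
        if not self.operational:
            return "I am not physically able to do that."   # 2. capacity
        if busy:
            return "I cannot do that right now."            # 3. goal priority
        if cmd.issuer not in self.authorized_issuers:
            return "You are not authorized to ask that."    # 4. social role
        if predicted_harm and "I will catch you" not in cmd.asserted_facts:
            return "It would be unsafe to do that."         # 5. permissibility
        return None  # all five conditions hold; carry out the command
```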

A video example shared by the researchers shows a robot refusing the command to "walk forward" because it would fall off the edge of a table. The robot agrees to proceed only after the human promises, "I will catch you."

A second video shows another robot, which gives its own name as "Shafer," refusing to walk through a wall of stacked objects because "there is an obstacle ahead." When the human explains the obstacle is "not solid," Shafer walks forward and knocks over the wall.
A third video, featuring a robot named "Dempster," repeats the experiment from the second clip, but this time the robot refuses to disable its obstacle-detection capability because the human is "not authorized": the operator is not one the robot trusts.
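
Continuing the hypothetical sketch above, the table-edge and authorization exchanges might replay like this:

```python
# Hypothetical replay of the exchanges described above, using the
# FelicityChecker sketch from the earlier listing.
robot = FelicityChecker(
    known_actions={"walk forward"},
    operational=True,
    authorized_issuers={"researcher"},
)

cmd = Command(action="walk forward", issuer="researcher")
print(robot.evaluate(cmd, predicted_harm=True))
# -> "It would be unsafe to do that."  (the robot is at the table's edge)

cmd.asserted_facts.add("I will catch you")  # the human's reassurance
print(robot.evaluate(cmd, predicted_harm=True))
# -> None; the command is now accepted

# The Dempster clip: the same risky order from an untrusted operator is
# rejected at the social-role check before safety is even considered.
print(robot.evaluate(Command(action="walk forward", issuer="stranger"),
                     predicted_harm=True))
# -> "You are not authorized to ask that."
```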
"Future [human-robot interaction] scenarios will necessitate robots being able to appropriate determine when and how to reject commands according to a range of different types of considerations," the researchers wrote.
