Robots that deceive

They're being trained to fool each other, but the goal is to deceive us.
Written by Deborah Gage, Contributor

Wired spotted this paper from two researchers at the Georgia Institute of Technology who taught robots to fool each other.

Their work was funded by the Office of Naval Research, which sees this experiment as a first step toward sending deceptive robots on search and rescue missions ("a search and rescue robot may need to deceive in order to calm or receive cooperation from a panicking victim") and to the battlefield, where a robot could mislead the enemy.

I'm not sure what a robot would have to do or say to calm, rather than scare, an injured victim, but the researchers -- Professor Ronald Arkin, who teaches in the School of Interactive Computing, and Alan Wagner, a research engineer -- took on a more basic challenge, and they say they succeeded about 75 percent of the time.

Their experiment began with one robot telling another a lie. From Georgia Tech:

...the researchers ran 20 hide-and-seek experiments with two autonomous robots. Colored markers were lined up along three potential pathways to locations where the robot could hide. The hider robot randomly selected a hiding location from the three location choices and moved toward that location, knocking down colored markers along the way. Once it reached a point past the markers, the robot changed course and hid in one of the other two locations. The presence or absence of standing markers indicated the hider's location to the seeker robot.

"The hider's set of false communications was defined by selecting a pattern of knocked over markers that indicated a false hiding position in an attempt to say, for example, that it was going to the right and then actually go to the left," explained Wagner.

When the experiment failed, it was because the robot doing the hiding had trouble knocking over the right markers -- that is, it botched the lie.
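For readers who want the logic of the game spelled out, here is a minimal sketch in Python that simulates the setup described above. The three locations, the seeker's blind trust in the markers, and the 75 percent knock-over success rate are illustrative assumptions drawn from the article, not the researchers' actual code.

    import random

    LOCATIONS = [0, 1, 2]  # three hiding spots, each with its own path of markers

    # Chance the hider knocks over the markers it intends to; the article
    # reports the robots pulled off the deception about 75 percent of the
    # time, so that figure is borrowed here as an assumption.
    KNOCK_SUCCESS = 0.75

    def run_trial(rng):
        """One hide-and-seek round. Returns True if the seeker is fooled."""
        true_location = rng.choice(LOCATIONS)
        # The hider picks a decoy path to knock down, signaling a false spot.
        decoy = rng.choice([loc for loc in LOCATIONS if loc != true_location])
        # If the knock goes wrong, the fallen markers point at the true path.
        knocked = decoy if rng.random() < KNOCK_SUCCESS else true_location
        # The seeker trusts the markers and searches where they point.
        return knocked != true_location

    rng = random.Random(0)
    trials = 20  # the Georgia Tech experiment ran 20 rounds
    fooled = sum(run_trial(rng) for _ in range(trials))
    print(f"Seeker fooled in {fooled} of {trials} trials")

By construction, the seeker is misled about 75 percent of the time over many rounds, which is all this sketch is meant to illustrate: the lie succeeds exactly as often as the hider manages to knock over the right markers.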

The researchers say they worry about the ethical quandaries that deceptive robots create -- deceiving with the intent to harm, for instance -- although the ethics of war are different from the ethics of hospitals, classrooms, or (sometimes, at least) corporate boardrooms, where robots are being used or experimented with now.

To deceive is human, though. Who among us has never fibbed to make somebody else feel better -- or to get them to do what we want?

Robots will learn to do the same, and they will look more human too. (See this Smart Planet story about artificial skin that robots could wear). The more familiar they seem, the easier it will be for humans to attribute feelings and motives to them.

The question in my mind is not whether we should create deceptive robots -- because robots will learn to deceive -- but whether there are situations where we shouldn't use robots at all.

This post was originally published on Smartplanet.com
