Robots are taking on more and more responsibility in our society as they move from the factory to the battlefield. But if a civilian is killed during a robot-run raid, or some other wartime mistake occurs, who takes the blame: the robot or the person who built it?
To gauge public opinion on the matter, researchers at the University of Washington had 40 students participate in a scavenger hunt with a robot named Robovie. The students believed the robot was autonomous, but it was actually controlled by a person in another room.
When the game began, participants had two minutes to locate a number of objects in a room. If a player found all of the objects, they would win a cash prize.
All of the participants found seven objects, the number needed to win $20. But when Robovie claimed the players had found only five, the participants started to play the blame game.
Interestingly, most of the players blamed the robot itself for the error, accusing it of lying or cheating.
"When interviewed, 65 percent of participants said Robovie was to blame, at least to a certain degree, for wrongly scoring the scavenger hunt and unfairly denying the participants the $20 prize."
The results suggest that humans hold man-made machines equipped with social capabilities to moral standards similar to those they apply to people.
"We're moving toward a world where robots will be capable of harming humans," lead author of the study Peter Kahn told ScienceDaily. Kahn and his colleagues argue that the laws of armed conflict should keep pace with innovation, so that the right person, or machine, is held accountable when robots injure humans.
So where do we go from here? Robot jails? Fines issued to machines for wrongdoing? I doubt the human penal code would have its intended effect on a robot.
But this question is important to answer, especially now that we are using more robots in military operations.
How do you think the issue should be addressed?
Photo via University of Washington