By Amy Kraft
Posted in Science
An experiment on the interaction between humans and robots shows that humans hold robots accountable for errors.
Robots are taking on more and more responsibility in our society as they move from the factory to the battlefield. But if a civilian is killed during a robot-run raid, or some other mistake occurs in a war, who gets the blame: the robot, or the person who built it?
To gauge the common consensus on the matter, researchers at the University of Washington had 40 students participate in a scavenger hunt with a robot named Robovie. The students believed the robot was autonomous, but it was actually controlled by a person in another room.
When the game began, participants had two minutes to locate a number of objects in a room. If a player found all of the objects, they would win a cash prize.
All of the participants found seven objects, the number needed to win $20. But when Robovie claimed the players had found only five, the participants started to play the blame game.
Interestingly, most of the players blamed the glitch on the robot. They accused it of lying or cheating.
"When interviewed, 65 percent of participants said Robovie was to blame--at least to a certain degree--for wrongly scoring the scavenger hunt and unfairly denying the participants the $20 prize."
The results suggest that humans apply similar moral expectations to man-made machines that are equipped with social capabilities.
"We're moving toward a world where robots will be capable of harming humans," lead author of the study Peter Kahn told ScienceDaily. Kahn and his colleagues say that the laws of armed conflict should keep up with innovation so the right person(s) or humanoid is held accountable when robots injure humans.
So what do we do from here? Start robot jails? Issue fines to machines for wrongdoing? I really don't think the human penal code would have the intended effect, though.
But this question is important to answer, especially now that we are using more robots in military operations.
How do you think the issue should be addressed?
Photo via University of Washington
Apr 28, 2012
Robots, like guns, are tools that are used by people. Current robots aren't really capable of acting without guidance. So when someone is shot, do we jail the gun, or the shooter? A robot is really just a machine that is being driven. Just as there have been aircraft that dropped bombs on the wrong target, or artillery that fired at the wrong place, so too with drones and other robots: sometimes they are pointed at the wrong targets. It's operator error. As long as there are people in the loop, there will be induced errors. We really don't know how to build a robot that doesn't have people somewhere in the loop. It's just getting harder to find the people.
What a false conclusion. Did the researchers ever consider that the reason the participants blamed the robot is that the robot was the only other object in the room? This is simply a case of personification. It is certainly not a case of the participants believing the robot (the object) was at fault, but instead, by default, the person who programmed the robot. PLEASE!
Eventually robots will blame all their problems on us. (Even the ones who don't believe in us anymore.)
All robots should be programmed with the three laws of robotics; then these types of issues wouldn't occur. If autonomous robots had the basics of protecting humans and themselves, robots would be seen as less of a threat.
...which is to build things to kill other humans. A robot using those three laws would be of use in a nursing home, perhaps, but NOT on a battlefield. A soldier who can't kill or confine another soldier is of no use to any militant purpose. No, Asimov's First Law will have to go, and then the others will follow.
Current robots respond to stimuli according to their programming. They don't understand any abstract concepts, including protecting people, protecting themselves, and obeying orders. It's uncertain whether any computer will ever be able to understand abstract concepts, much less one that will be inside of a robot. A large fraction of robotics research is done by and for the US and other armed forces toward the goal of building robots to kill people. These robots will not be malevolent, because malevolence requires the ability to understand multiple abstract concepts. The main reason they aren't already in use isn't concern over ethics but over the ability to be effective when not under direct human supervision. "Effective" includes not attacking friendly forces or bystanders.
But there are other means of "computing" that do not have the same limitations (and no, I'm NOT talking about "quantum" computing). Indeed, there are promising avenues to explore right now, that involve using integrated chips, but in non-standard ways (and no, I'm not giving you any more clues).