German researchers have created a collision-detection system for a robotic arm that keeps the robot from slashing human flesh. As robots make their way into our homes and workplaces, establishing international ethical guidelines seems long overdue.
Soon, we might finally have robots in our houses doing our chores for us. That would be awesome, of course, as long as the robot doesn't cause any unwanted accidents.
It certainly would be nice to have a futuristic robot roaming around your house, chopping up ingredients for a home-cooked meal with a steak knife - but it would suck if it gashed you in the wrong place.
Fortunately, German researchers are on the case. After studying how robots can cause injuries, they designed a collision-detection system so the robot would know better than to break human skin.
As you'll see in the video, the Institute of Robotics and Mechatronics researchers had robots hold bladed tools such as scissors, kitchen knives, a screwdriver, and a scalpel. The robots were then instructed to hit a piece of silicone and a pig's leg. And sure enough, the researchers found that the robot's jabs could definitely injure someone.
At the end of the video, the collision-detection system is turned on, and a brave volunteer offers his arm for a test - and does not get slashed.
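The idea of stopping the arm the moment contact forces climb toward injurious levels can be illustrated with a toy sketch. Everything here - the `Arm` class, the `step` interface, and the 30 N threshold - is an illustrative assumption, not the researchers' actual system, which classifies collisions from joint-torque signatures in far more sophisticated ways.

```python
# Toy sketch of threshold-based collision reaction (illustrative only).

FORCE_LIMIT_N = 30.0  # assumed safety threshold, chosen to stay below injurious forces


class Arm:
    """Stand-in for a robot arm with a contact-force sensor."""

    def __init__(self):
        self.stopped = False

    def step(self, measured_force_n: float) -> None:
        # Halt immediately if the measured contact force exceeds the limit;
        # otherwise the arm keeps executing its trajectory.
        if measured_force_n > FORCE_LIMIT_N:
            self.stopped = True


arm = Arm()
arm.step(5.0)       # light contact: keep moving
print(arm.stopped)  # False
arm.step(80.0)      # hard contact, e.g. blade meeting an arm: stop
print(arm.stopped)  # True
```

The point of the design is that the reaction happens inside the control loop, every cycle, rather than relying on the task planner to anticipate every possible collision.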
Perhaps putting constraints like this on robots is one way of giving them a moral compass. If we are going to live with them and interact with them, it might be good if they played by our ethical rules.
Some even want international ethics guidelines drafted to dictate how robots can be used. COSMOS previously reported:
The robotics professor [Noel Sharkey from the University of Sheffield] also points to the comments of Microsoft founder Bill Gates, who predicted that "over the next few years robots may be as pervasive as the PC."
"We were caught off guard by the sudden increase in Internet use and it would not be a good idea to let that happen with robots," Sharkey said. "It is best if we set up some ethical guidelines now before the mass deployment of robots, rather than wait until they are in common use."
Sharkey adds that it's not the robots that scare him - it's the humans who use them.
And let's not forget that GM and NASA recently unveiled Robonaut 2 (R2), a humanoid robot. GM wants R2 in its manufacturing plants and NASA wants the cool-looking robot to assist in space missions.
Assuming R2 doesn't get hold of any knives, the robot should be able to work with humans. The team would train it like it would any new human employee: It would be given detailed instructions and then left to figure things out on its own.
NASA brings up a good point:
Eventually, R2 could become such a familiar member of the crew, astronauts will find themselves saying "excuse me" when they bump into the humanoid. But how will R2 respond?
May 9, 2010
Surely, the Roomba is already a robot performing domestic chores. Also, it may be humorous sci-fi, but the graphic novel Shmobots provides an enlightening possibility of the future with AI automatons.
The sort of abstract thought required to comprehend & obey Asimov's 3 laws will be beyond the capabilities of mass-produced robots for the foreseeable future. Sharkey's call for ethical guidelines is so outdated as to be detached from reality. The biggest users of robots are the military. In the near future, there will be armed robots that can detect & attack targets on their own when their operators are out of contact or merely busy. These robots will not understand, let alone care, that they are killing people. Your tax dollars at work. And if you don't like it, the services will gladly accept qualified people to go to Afghanistan.
DadsPad - I read those laws over 30 years ago. I saw the "Outer Limits" episode "I, Robot" when it first aired in 1964. I read the Foundation Trilogy while in high school. The important part in that First Law is the phrase "through inaction, allow a human being to come to harm." That is the reference I was trying to make. Because of the Second Law, our telling the robots NOT to interfere while we engaged in potentially fatal activities would NOT WORK! They would be forced by the First Law to interfere to prevent us from doing harm to ourselves, from their point of view.
I suggest you read up on his robot laws and how he has them react to society. Don't worry, he was one of the most influential writers of all time.
The electronic circuitry already exists in table saws that retract and stop the saw blade when it comes into contact with a person. It should be easy enough to adapt this circuit to a robot.
By those definitions, robots would not allow humans to go skydiving, ride offroad vehicles, or work in many different jobs, because to do so would be to allow us to possibly come to harm. They would take the initiative, under those "Three Laws", and "protect" us from ourselves. NO THANKS!!
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But don't forget the Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
All they need to do is to follow the three laws of robotics! P.S. I think you meant "roaming", not "rooming!"