Just as teams of people have to spend time getting to know each other, researchers at MIT believe, so do teams of robots and humans.
As robotic technology becomes more firmly integrated within the manufacturing industry and brings robots into closer contact with humans, researchers from the Massachusetts Institute of Technology (MIT) have been trying to understand what factors can both harm and improve the efficiency of robot and human workers.
It is no longer simply a matter of safety, as machines take over once-human tasks and become a familiar sight in factories across the globe. With this in mind, Julie Shah, an assistant professor of aeronautics and astronautics at MIT and head of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL), and PhD student Stefanos Nikolaidis began experimenting.
In a paper to be presented at the International Conference on Human-Robot Interaction in Tokyo in March, the researchers have attempted to demonstrate that "cross-training" robots and their organic counterparts can be an effective way to "team build." It's not just about keeping people from getting hurt; the lead researcher says that it's also a question of making robots smart enough to work effectively with people. Shah commented:
"People aren't robots, they don’t do things the same way every single time. And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people."
In other words, the unknown human element of unpredictability and error doesn't always match up with a robot's logic. But can you make robots better team players by increasing interactivity?
Apparently so. Going beyond the reward methods used to train dogs -- think "good boy" and a treat -- Shah and Nikolaidis created cross-training exercises, in which a robot and its human partner would swap roles on different days, and compared this to a control group in which the robot was "rewarded" with positive, spoken reinforcement for performing a task correctly.
"This allows people to form a better idea of how their role affects their partner and how their partner's role affects them," Shah says.
First, the researchers modified an algorithm so that robots could learn not just from positive reinforcement but also from swapping roles, by adding the capability to learn from their human partners' demonstrations.
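To make the idea concrete, here is a minimal, hypothetical sketch of the distinction -- not the paper's actual algorithm. It assumes the robot keeps simple counts of which action its human partner takes in each task state; reward-only training gives the robot no such observations, while cross-training (the role-swap phase) feeds the human's demonstrated choices into that model.

```python
from collections import defaultdict

class RobotPartnerModel:
    """Toy model of a robot's beliefs about its human partner's actions.

    Hypothetical illustration only: states and actions are invented names,
    and the counting scheme is a stand-in for the learning update described
    in the MIT paper.
    """

    def __init__(self):
        # counts[state][human_action] -> times the human chose that action
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe_demonstration(self, state, human_action):
        """Cross-training: update the model from the human's demonstration."""
        self.counts[state][human_action] += 1

    def predict(self, state):
        """Return the most frequently demonstrated action, or None."""
        actions = self.counts[state]
        if not actions:
            return None
        return max(actions, key=actions.get)

model = RobotPartnerModel()
# During role-swapping, the robot watches the human handle one task state.
for _ in range(5):
    model.observe_demonstration("bolt_placed", "fetch_screwdriver")
model.observe_demonstration("bolt_placed", "fetch_wrench")

print(model.predict("bolt_placed"))  # fetch_screwdriver
```

A reward-only robot in this sketch would simply never call `observe_demonstration`, leaving `predict` with nothing to go on -- which is the gap cross-training is meant to close.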
Each human-robot team then carried out a task, with half using a reward-only approach, and the others using the cross-training technique.
Teams that cross-trained spent 71 percent more time working concurrently than the reward-only groups, and the time humans spent waiting around for robots to finish decreased by 41 percent.
That wasn't all. When the team studied the learning algorithms, they found that the robots recorded far less uncertainty about their human partner's next moves, which further increased efficiency when sharing a task.
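One common way to quantify that kind of uncertainty -- offered here as an illustrative assumption, not as the paper's own metric -- is the Shannon entropy of the robot's predicted distribution over the human's next action: the more the prediction concentrates on one action, the lower the entropy.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical numbers: before training, the robot finds all four possible
# human actions equally likely; after cross-training, demonstrations have
# concentrated its prediction on a single action.
before = entropy([0.25, 0.25, 0.25, 0.25])  # 2.0 bits: maximally unsure
after = entropy([0.85, 0.05, 0.05, 0.05])   # well under 1 bit: confident

print(before > after)  # True
```

Lower entropy means the robot can commit to complementary motions earlier instead of hedging, which is one plausible mechanism behind the concurrency gains reported above.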
Shah believes the improvement in team performance may be due to the greater involvement of both parties in the process. When a questionnaire was later handed out to gather feedback on the cross-training, the cross-training groups were far more likely than the reward-only groups to say that the robot had acted in line with their preferences.
"When the person trains the robot through reward, it is one-way: The person says 'good robot' or the person says 'bad robot,' and it’s a very one-way passage of information," Shah says.
"But when you switch roles, the person is better able to adapt to the robot’s capabilities and learn what it is likely to do, and so we think that it is adaptation on the person’s side that results in a better team performance."
Image credit: MIT