In the next 10 to 15 years, personal robots may become as ubiquitous as dishwashers. It is likely that we’ll want to program them to do specific things around our own households, things that manufacturers could never have imagined. So we’ll want them to be as easy to navigate and customize as our personal computers.
So how will we non-techies communicate with our robots?
Researchers at Georgia Tech’s Center for Robotics and Intelligent Machines (RIM) are working to make human-robot interaction easy to use. They have recently identified the types of questions a robot can ask to get more information from a human so that it can learn a new task. Such questions need to be ones a human can first understand, and then respond to.
One of the study’s authors, Maya Cakmak from the School of Interactive Computing at Georgia Tech, is quoted in a press release:
People are not so good at teaching robots because they don’t understand the robots’ learning mechanism. It’s like when you try to train a dog, and it’s difficult because dogs do not learn like humans do. We wanted to find out the best kinds of questions a robot could ask to make the human-robot relationship as ‘human’ as it can be.
So the researchers studied human behavior to uncover how humans might answer questions posed by a robot. This approach is called “active learning,” which gives machines more control over the information they receive and assimilate.
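The core idea of active learning can be sketched in a few lines: rather than passively receiving examples, the learner picks the question whose answer it is least certain about. This is an illustrative toy, not the study’s code; the model, the candidates, and the uncertainty score are all my own assumptions.

```python
# Minimal active-learning sketch (illustrative only, not the study's code).
# The learner scores candidate examples by uncertainty and asks the teacher
# about the one it is least sure of.

def uncertainty(example, model):
    """Hypothetical score: distance of the model's prediction from 0.5."""
    return abs(model(example) - 0.5)

def choose_query(candidates, model):
    # Pick the candidate closest to 0.5, i.e. the most uncertain one.
    return min(candidates, key=lambda ex: uncertainty(ex, model))

# Toy model: estimated probability that pouring salt from height h succeeds.
model = lambda h: 1.0 if h < 10 else (0.5 if h < 20 else 0.0)
candidates = [5, 15, 30]
print(choose_query(candidates, model))  # prints 15, the most uncertain pour
```

The point is simply that the learner, not the teacher, controls which example gets labeled next.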
The researchers asked their human subjects to assume the role of a robot attempting to learn a simple task.
The subjects tended to use three types of questions: feature, label, and demonstration. They adapted these for learning how to pour salt from a box.
A feature question might be, “Can I pour salt from any height?” Here the robot asks about specific variables that might affect how the task is completed.
A label question asks for a simple yes or no about an example: “Can I pour salt like this?” Here the robot tries a new way to perform the task and asks whether it was correct.
Finally, a demonstration question asks, “Can you show me how to pour salt from here?” This is similar to a label question, but here the robot presents a new situation and asks the human for a demonstration.
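The three query types above can be sketched as small data structures that each produce their kind of question. The class and field names here are my own, chosen for illustration; they are not from the researchers’ implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of the three query types from the study.
# Names and phrasing templates are assumptions, not the study's code.

@dataclass
class FeatureQuery:
    feature: str                       # a task variable, e.g. "pour height"
    def phrase(self):
        return f"Does {self.feature} matter for this task?"

@dataclass
class LabelQuery:
    attempt: str                       # a concrete attempt the robot executes
    def phrase(self):
        return f"Can I pour salt like this ({self.attempt})?"

@dataclass
class DemonstrationQuery:
    situation: str                     # a new situation the robot presents
    def phrase(self):
        return f"Can you show me how to pour salt from {self.situation}?"

queries = [FeatureQuery("pour height"),
           LabelQuery("a fast tilt"),
           DemonstrationQuery("the left side")]
for q in queries:
    print(q.phrase())
```

Each type trades off differently: a feature query needs shared vocabulary, a label query only needs a yes/no answer, and a demonstration query asks the human to do the most work.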
Subjects used feature queries 82 percent of the time, showing a clear preference for this kind of question when learning a new task. According to the researchers, feature queries are the most efficient, particularly when there is a shared language with common names for all the features the robot may want to ask about.
The researchers then had human subjects answer a robot’s questions and rate how “smart” they thought each question was. Feature questions were once again preferred, with 72 percent of participants favoring them over any other question the robots asked.
Cakmak is again quoted:
These findings are important because they help give us the ability to teach robots the kinds of questions that humans would ask. This in turn will help manufacturers produce the kinds of robots that are most likely to integrate quickly into a household or other environment and better serve the needs we’ll have for them.
But in an email interview, the lead author of the study, Andrea Thomaz, noted:
Because of the shared language, feature queries are the most complex to implement. But one of our insights is that embodied robot learners have an advantage: we can use action to make queries without having names for every feature. For example, robots could ask: At this part of the task, does my hand have to be like this? Which is a nicer question than asking: Is there any variance about the x-axis of the wrist?
Ultimately, the goal is to develop a robot interface that matches the way the mainstream public absorbs, processes, and learns information. Thomaz adds, “Robots that learn like people should be easier to teach.”