Q&A: Leila Takayama, research scientist, on human-robot interaction

Leila Takayama imagines a world where human-friendly robots interact successfully with people. But it's not always easy teaching a 500-pound machine not to block the office coffee maker.
Written by Christina Hernandez Sherwood, Contributing Writer


Leila Takayama imagines a world where human-friendly -- but not human-like -- robots interact successfully with people. These personal robots are polite (they never barrel down the hallway) and their intentions are readable (they look at a door handle before reaching for it).

A research scientist at Willow Garage, a company that develops hardware and software for personal robotics applications, Takayama studies human-robot interaction and works with designers, including a Pixar-trained character animator, to make robots more likable. We spoke recently about rude robots, Skype on wheels and the legal implications of personal robots. Below are excerpts from our interview.

Why robots? How did you get started in this work?

I'm a social scientist. I'm interested in people. The most interesting thing about people is that we do seemingly irrational things, especially when faced with new technology. A lot of my research started in human-computer interaction. I've been doing that research since 2002. I care about designing technologies that leverage what we know about people, so [the technologies are] designed to respect people and our capabilities. Robots aren't designed at all to be human-friendly right now. They need the most help of all the technologies out there. We've come a long way in web technologies. But in robotics there hasn't been quite the same collaboration with social science and design. Robotics could benefit a lot from perspectives other than robotics.

Why haven't we seen the same collaboration in robotics as we've seen in web technologies?

It hasn't been necessary. Personal robots don't exist in the same way personal computers do. You have robots on the battlefield and in factories. There you can train the people near the robots because they're professionals.

Your job is to make robots better at interacting with people. Why does it matter if robots have good manners?

A lot of the long-term deployments of robots end with the robot being shoved in a closet or 'accidentally' turned off. That's usually because the robot wasn't useful enough and was frustrating someone. All it takes is one frustrated bystander to completely hijack the robot. If we want to see any value produced by personal robots, they have to be acceptable to people in their environment. It's important for the technology to be at least somewhat socially acceptable. If it doesn't barrel down the hallway, then the robot will at least have a little longer to prove itself as a valuable technology.

What are the technological obstacles to making robots better at interacting with people?

Robotics needs more interaction designers. It's important for robots to be polite and socially acceptable, but it's even more important and basic for the robots to be safe. That's a hardcore technology problem. As people, we read intentionality into objects even though we know it's not there. If there's a moving object in a space, you know it's just a machine, but it starts feeling a little bit alive. We try to predict what it's going to do next. The more predictable its motions are, the safer it will be, because people will know not to jump in front of it when it moves. Both interaction designers and character animators can make robots more readable, so people know how not to get in the way.

What about outside challenges, like whether people are even ready for a robot that interacts with them?

I do see pushback against human-like robots. I have a similar reaction. Human-friendly robots could be dog-like. I think human-friendly will be more important in the near future. Robots aren't smart like people, so we don't want to design them to look like us. Then people will expect them to be smart like people. If you make it look like, say, a turtle, we have different expectations about what it can know and do and sense. There is pushback against human-like robots, especially in Western culture where sci-fi has shown us lots of scary things robots will supposedly do to us in the future. That's why we work so closely with designers and animators on our robots.

How do you go about studying how people feel about robots?

We've had a lot of time to work on our remote presence devices [which make remote workers accessible via a webcam that moves around the office independently]. They're like Skype on wheels. For those, we build prototypes and put them in the field with people who don't necessarily love robots just because they're robots. You want to see what normal people will do with these. We do observations and notes and interviews. Then we come back to our lab and sit with the engineering team to determine the biggest issues. What makes people want to run away from the robot? What makes them want to turn it off? Based on that list, we start changing the designs of the robots.

A lot of people in the field with the remote presence devices said, 'It's too tall' or 'It's too short.' But nobody had a straightforward solution; if there had been one, we'd have taken it. So we decided to experiment with what happened when we changed the height of the robot. We had a robot that was taller and another that was shorter than the average person. A lot of people want to be able to convince their colleagues that their idea is a good one. When you have someone in an interaction, they're more persuasive when they're in a taller robotic body. Now we can say to the product design team, 'If your customers want to be persuasive in the workplace, you should make the robot slightly taller.' Height is one dimension we now know empirically matters.

As someone who works outside the traditional office setting, I'm fascinated by your remote presence devices. Can you talk about how they work?

We didn't plan to build [a remote presence device]. It was done by Curt Meyers and Dallas Goecker. Dallas works in our office in California, but he lives in Indiana. For a while, he was just a voice in a box on the table. It wasn't so great for him. When we needed to make real decisions, we'd argue about it. If someone didn't like what Dallas was saying, they'd just hang up on him. It's easy to get lost when you're just a little voice. There were a lot of different prototypes, including a laptop on a cart using Skype. But that didn't work because Dallas would need Curt to push him around. Over a weekend, they took a body part from another robot [PR2] and built Texai, so Dallas could drive himself around the office. At first, I thought it was a little silly. Then I got to know Dallas as a person. He'd just stop by my office and chat. Before, I'd only talk to him in formal meetings. But when he was in the Texai, he became a person. If I don't answer his email, he comes to my office and asks me the question again. That was really powerful. It's human interaction, even though there's this machine in between. We had a need and met it with the technology we had. There's a lot of value there.

We created a spin-out company for what's now called the Beam. There's been more interest, from big companies and small ones alike, than we're able to respond to. They're trying to keep up. At Willow Garage, we bought four of the Beams for our own use. During flu season, they're great. If someone is sneezing or coughing, we just tell them to go home. Usually they're just bored in bed at home, so we beam them in and they can attend the meetings they feel up to going to. There are so many cool uses that we didn't anticipate. Families have been interested in it. We originally thought of it as remote collaboration in the workplace. But some people have said, 'It'd be nice to have one in my sister's house, so I can visit with my nieces and nephews.' It could be useful for teachers who want to have guest speakers come in. My friend is a professor at Notre Dame and her students wanted to see Willow Garage. They beamed in and I could give them a tour of the building without them having to pay for flights.

How did your studies of human interactions with robots help you design the PR2, a robot that can do mobile manipulation tasks in human environments? You've described how what once looked like a tarantula became more like a Mini Cooper -- and how you got help from Pixar to improve the robot's design.

The PR2 was imagined to be like the PR1, [an earlier iteration of the personal robot] made at Stanford. With PR2, we wanted to make a robot that could run around and grab stuff. It needed to work in the human environment, the home. It ended up big because it needed to reach top cupboards and pick up toys off the floor. It's 500 pounds. If you extend its spine all the way up, it's about six feet tall. When I showed up, it didn't have a head yet. It had sensors. It needed to be able to turn the sensors to where the arm was doing a manipulation task. We have all these sensors in our head, like eyes and nose and mouth and ears. The development team just piled on all these sensors, which is why it ended up looking like a tarantula. It had so many eyes. It was scary. People would back away from the robot. They asked me to make it better. We did a quick online survey of a bunch of different sensor configurations. We asked our development team which sensors they really needed. We got the number of sensors down. Then we had to figure out a way to make the configuration less scary and more approachable. That's how we ended up with the Mini Cooper as inspiration. It's a machine, but a likable machine.

For the way the PR2 moves, that's where we turned to a character animator who we hired from Pixar. When the robots aren't predictable, somewhat dangerous situations come up and people get in the way of the sensors. It's frustrating all around. For me, that came in the form of the robot being in the way between my office and the coffee machine. I'd end up running in front of the robot to get coffee. Every so often, I'd get scolded for messing up the sensor data. We needed help. Doug worked his magic. Now we can tell what the robots are going to do before they do it. They're more approachable. The robot can look in the direction it's going to go before it goes that way. The head turns first, before the body. When the robot is going to open a door, it looks around the corners of the door instead of silently staring at the door before suddenly reaching out for the handle. If you can show the forethought before the action, the robot is much more readable.
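For readers who want to see the idea concretely, this "show forethought before action" pattern can be sketched in a few lines of Python. This is a hypothetical illustration, not Willow Garage's actual software: look_at, move_base_toward, and reach_for are placeholder functions standing in for whatever head, base, and arm interfaces a real robot stack would expose.

```python
import time

# Hypothetical sketch of the anticipatory-motion pattern described above.
# None of these functions are real Willow Garage APIs; they are stubs that
# stand in for a robot's head, base, and arm controllers.

def look_at(target):
    """Point the robot's head (and sensors) at a target."""
    print(f"head: looking at {target}")

def move_base_toward(target):
    """Drive the mobile base toward a target."""
    print(f"base: moving toward {target}")

def reach_for(target):
    """Extend the arm toward a target."""
    print(f"arm: reaching for {target}")

def readable_navigate(goal, anticipation_delay=0.8):
    # Anticipatory cue: turn the head toward the goal first, so bystanders
    # can predict where the robot is about to go.
    look_at(goal)
    time.sleep(anticipation_delay)  # give onlookers time to read the cue
    move_base_toward(goal)

def readable_door_open(handle, anticipation_delay=0.8):
    # Glance at the handle before reaching, instead of silently staring at
    # the door and then suddenly extending the arm.
    look_at(handle)
    time.sleep(anticipation_delay)
    reach_for(handle)

if __name__ == "__main__":
    readable_navigate("hallway waypoint")
    readable_door_open("door handle")
```

The design choice is the ordering itself: the cheap, legible head motion always precedes the consequential body motion, giving people nearby time to predict what comes next.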

As you continue this work, what keeps you up at night? What worries you?

I worry a lot about building technology for the sake of building technology. We need to decide what we're going to invent. We're inventing the future. Is this the future we want to invent? Is this a good use for the technology or are we just making it because we can?

Another issue is the safety and legal side of this equation. There are already special categories of law for automobiles and airplanes. As long as the manufacturers follow the regulations, they're covered. We don't have that yet for personal robots. You can try to make the robots as safe as you can, but it's hard to guarantee it's going to be perfect every time. Who is liable if something goes wrong? It's going to be tricky and we need to work it out. We're moving in the right direction, but we're not there yet.

What's next for you and this work?

I've been doing a bunch of fieldwork with my team looking for needs. What are the big pressing needs people have that personal robots can help with? We've been doing in-home interviews and shadowing people on their jobs to look for opportunities. How can we help workers do the skilled work they're trained to do and free them from doing the mundane stuff? We've been prototyping new robots for that and showing them to potential end users.

Photo: Leila Takayama and the PR2

This post was originally published on Smartplanet.com
