Robots can deceive you. They can play hide-and-seek.
But as far as their vision goes, you might as well call them blind. Even equipped with GPS, radar, and sonar to help them navigate, they can’t use cameras to see.
Most robots do carry a camera, but not to see. The view is for the humans who remotely control the robot’s behavior.
Designing robots to see like humans has stumped robotics researchers for a long time. Researchers at Carnegie Mellon University are working on a way around that stumbling block.
Scientists at the university’s Robotics Institute have found a way for computers to interpret outdoor scenes, so that a robot planning which way to walk could reason about the space around it.
The program identifies the landscape by breaking an image down in stages. The ground and sky are tagged first; other objects are then assigned generic geometric shapes, and each shape gets a weight association. A brick wall, for instance, would be tagged as heavy, the researchers reported.
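The staged labeling described above can be sketched roughly in code. This is only an illustrative toy, not the researchers’ actual system: the color cues, the position heuristics, and the `SURFACE_WEIGHTS` table are all assumptions made up for the example.

```python
# Toy sketch of staged scene labeling: tag sky and ground first,
# then give remaining regions a generic surface label plus a
# weight association. All heuristics here are illustrative.

SURFACE_WEIGHTS = {"brick": "heavy", "foliage": "light", "glass": "fragile"}

def label_scene(pixels):
    """pixels: 2D grid of coarse color/material names, row 0 = top of image."""
    height = len(pixels)
    labels = []
    for r, row in enumerate(pixels):
        row_labels = []
        for cell in row:
            # Stage 1: sky and ground are tagged first, from position and color.
            if cell == "blue" and r < height // 3:
                row_labels.append(("sky", None))
            elif r >= 2 * height // 3:
                row_labels.append(("ground", None))
            else:
                # Stage 2: everything else becomes a generic vertical
                # surface, annotated with a weight class when known.
                row_labels.append(("vertical", SURFACE_WEIGHTS.get(cell)))
        labels.append(row_labels)
    return labels

scene = [
    ["blue", "blue"],       # top band: sky
    ["brick", "foliage"],   # middle band: vertical surfaces
    ["grass", "grass"],     # bottom band: ground
]
labeled = label_scene(scene)
```

Running this on the toy scene tags the top row as sky, the bottom row as ground, and the brick cell as a heavy vertical surface, mirroring the article’s example of a brick wall getting a "heavy" association.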
Vision is one sense that robots lack, and computers in general still don’t understand images well. Image search has a lot of catching up to do with keyword search.
The accuracy of the current system is limited by a lack of data; no one has tried to map the world this way before. At its best, the computer is 70 percent accurate at estimating the layout of the land (once the ground and sky are eliminated), but generally it is only about 30 percent accurate.
Driven partly by security applications, searching for faces and for particular objects has been on the development radar for a while; being able to search for a specific person would be especially useful. A spin-off company, Pittsburgh Pattern Recognition, has already been able to identify the characters in each episode of Star Trek.