Carnegie Mellon researchers have figured out how to analyze outdoor scenes using AI, a step that might help robots navigate outdoor terrain.
Robots can deceive you. They can play hide-and-seek.
But as far as their vision goes, you might as well call them blind. Even equipped with GPS, radar, and sonar to help them navigate, they can't use cameras to see the way humans do.
Most robots do carry a camera, but not to see: the video feed is there so a human operator can remotely control the robot.
Designing robots that see like humans has stumped robotics researchers for a long time. Carnegie Mellon University researchers are working on a way around that stumbling block.
The Robotics Institute scientists have found a way for computers to understand outdoor scenes, so that a robot planning which way to walk could draw on an artificial-intelligence interpretation of the space.
The program identifies the landscape by breaking down the image. The ground and sky are tagged first, and the remaining objects are assigned generic geometric shapes. Each shape then receives a weight association; a brick wall, for instance, would be associated with something heavy, the researchers reported.
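The pipeline described above can be sketched in a few lines of code. This is a toy illustration, not the CMU system: the region labels, shape categories, and weight associations below are all invented for the example.

```python
# Toy sketch of the described pipeline: tag ground and sky first,
# approximate remaining regions with generic geometric shapes,
# then attach a rough "weight" association to each shape.
# All labels and weight values are illustrative assumptions.

WEIGHT_PRIORS = {
    "brick wall": "heavy",   # e.g. a brick wall gets a heavy association
    "tree": "heavy",
    "bush": "light",
    "fence": "medium",
}

def segment_scene(regions):
    """regions: list of dicts with 'label' and 'bbox' (x, y, w, h).
    Returns an annotated copy: ground/sky tagged first, then shapes + weights."""
    annotated = []
    for r in regions:
        entry = dict(r)
        if r["label"] in ("ground", "sky"):
            entry["role"] = r["label"]  # tagged first; anchors the scene layout
        else:
            x, y, w, h = r["bbox"]
            # approximate the object with a generic geometric primitive
            entry["shape"] = "vertical slab" if h > w else "horizontal slab"
            entry["weight"] = WEIGHT_PRIORS.get(r["label"], "unknown")
        annotated.append(entry)
    return annotated

scene = [
    {"label": "sky",        "bbox": (0, 0, 100, 30)},
    {"label": "ground",     "bbox": (0, 70, 100, 30)},
    {"label": "brick wall", "bbox": (10, 30, 20, 40)},
]
for region in segment_scene(scene):
    print(region.get("role") or (region["shape"], region["weight"]))
```

A real system would derive the regions from pixels rather than take them as input, but the ordering (ground/sky, then shapes, then weights) mirrors the article's description.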
Vision is one sense that robots lack. We know this because computers still don't understand images very well: image search has some catching up to do with keyword search.
The accuracy of the current system is limited by a lack of data; no one has tried to map the world like this before. At its best, the computer is 70 percent accurate at estimating the layout of the land (when ground and sky are excluded), but generally only about 30 percent accurate.
Driven in part by security applications, searching for faces and for certain objects has been on the development radar for a while; it would be useful to be able to search for a particular person. A spin-off company, Pittsburgh Pattern Recognition, has been able to identify the characters in each episode of Star Trek.
Sep 10, 2010
@tech_ed, you're right - vision is easy, edge / pattern / object recognition is the problem. But reading between the lines, these Mellon-heads seem to be researching pattern recognition, not merely vision. Many others are beating the same path, I hasten to add. If that picture at the top-right is indicative of the fruits of their labour, I'd call that very encouraging. This is the future. The path to true automatons. The most important research in the world (my 2p's).
Unfortunately, seeing requires pattern recognition... the one thing that robots are not very good at. A robot can only recognize a pattern that it has been introduced to in the past and that is stored in a database. If it encounters an object that is not within its pattern database, it doesn't know what the object is... and the same is true even if the object is in its database but seen from an angle it does not recognize.
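The commenter's point can be sketched as a lookup-style recognizer: it only matches patterns it has stored, so a known object seen from an unstored angle comes back as unrecognized. The "database" and feature vectors below are toy assumptions, not any real vision system.

```python
# Toy lookup-style recognizer: matches only exact stored patterns.
# Feature vectors are invented placeholders for the example.

database = {
    ("cup", "front"): (1, 0, 1, 0),  # stored views of known objects
    ("cup", "side"):  (0, 1, 1, 0),
}

def recognize(features):
    """Return (name, view) if the features exactly match a stored pattern."""
    for (name, view), stored in database.items():
        if stored == features:
            return name, view
    return None  # not in the database: the robot doesn't know what it sees

print(recognize((1, 0, 1, 0)))  # a stored view is recognized
print(recognize((0, 0, 1, 1)))  # the same cup from an unstored angle is not
```

Generalizing beyond exact stored views (to new angles, lighting, or occlusion) is exactly the hard part the comment is pointing at.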