Microsoft's Kinect motion sensor has evolved from a novel gaming controller into a valued tool for robotics, and is now helping to interpret sign language in real time.
Scientists at Microsoft Research in China have figured out how to translate sign language into spoken or written language using specialized software built for Kinect. The company announced a prototype today that makes spontaneous conversations possible between deaf people and anyone who doesn't know sign language.
"We knew that information technology, especially computer technology, has grown up very fast. So from my point of view, I thought this is the right time to develop some technology to help [the deaf community]. That's the motivation," said Xilin Chen, deputy director of the Institute of Computing Technology at the Chinese Academy of Sciences. Chen's team combined Kinect with machine learning software.
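Microsoft has not published the details of its prototype, but the general idea, recognizing a sign from the 3D joint positions Kinect tracks, can be sketched in a few lines. The example below is purely illustrative: it matches an observed hand trajectory against stored templates using dynamic time warping, with made-up sign names and toy coordinates. It is not the research team's actual method.

```python
# Illustrative sketch only: classifying a sign by comparing a tracked
# hand-joint trajectory against stored templates with dynamic time
# warping (DTW). Sign names and coordinates here are hypothetical.
import math

def dtw_distance(a, b):
    """DTW distance between two sequences of (x, y, z) joint positions."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = math.dist(a[i - 1], b[j - 1])  # Euclidean step cost
            cost[i][j] = step + min(cost[i - 1][j],      # insertion
                                    cost[i][j - 1],      # deletion
                                    cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify_sign(trajectory, templates):
    """Return the label of the template closest to the observed trajectory."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))

# Toy templates: tiny hand paths for two hypothetical signs.
templates = {
    "hello":  [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.2, 0.4, 0.0)],
    "thanks": [(0.0, 0.0, 0.0), (0.0, -0.2, 0.1), (0.0, -0.4, 0.2)],
}
observed = [(0.0, 0.05, 0.0), (0.12, 0.22, 0.0), (0.21, 0.38, 0.0)]
print(classify_sign(observed, templates))  # the path is closest to "hello"
```

A real system would of course train on many recorded examples per sign rather than a single template, but template matching over skeletal trajectories conveys why Kinect's low-cost 3D tracking made this kind of application newly practical.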
One distinct advantage is the potential to translate between different forms of sign language, currently from American Sign Language to Chinese Sign Language and vice versa, with more to come. That would make it easier and less costly for a deaf person to travel internationally, since no interpreter would be needed.
Kinect came to Microsoft by way of its Rare gaming subsidiary and technology developed by an Israeli start-up called PrimeSense. It didn't take long for the robotics community to recognize that it could help machines interact with surrounding objects and navigate through environments at a low cost. Previous approaches were more complex and expensive, so Kinect solved a fundamental problem.
Microsoft Open Technologies, a subsidiary of the namesake software company, recently open sourced a toolkit that makes it easier to build Kinect applications on the Windows platform. That opens the door for more use cases to come.
(image credit: Microsoft Research)