A new automatic system can log subtle gestures, shifting eye contact, and fleeting facial expressions using Microsoft Kinect’s sensors, New Scientist reports. SimSensei is one of several new initiatives designed to partially automate one of the medical profession's trickiest tasks: diagnosing depression.
At the moment, diagnosis depends on patients' yes/no answers to standard questionnaires, and ignoring non-verbal cues can lead to missed diagnoses.
SimSensei’s digital avatar asks questions, says “hmm,” and guides the conversation according to the patient's answers. Behind the scenes, it uses face recognition technology and depth-sensing cameras to record and interpret body language.
To extract the right features, a team led by SimSensei developer Stefan Scherer from the University of Southern California interviewed volunteers who have and haven't been diagnosed with depression or post-traumatic stress disorder.
After filling out questionnaires to screen for these conditions, volunteers were interviewed with a high-definition webcam trained on their face, while Kinect logged their body movements.
Interviewees who were depressed were more likely to fidget and drop their gaze; they also smiled less, the team found.
Automated systems make for helpful, objective observers, and other teams are working on similar systems:
- At the University of Canberra, a system looks for slower-than-usual blinking, certain upper-body movements, and moments when someone looks away or gestures less than usual.
- At the University of Pittsburgh, a system tracks 66 points on the face to spot movements that betray particular emotions, revealing how the severity of depression changes over time.
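To give a flavour of how systems like these turn raw observations into depression-related cues, here is a minimal sketch in plain Python. It is not any team's actual pipeline: the `Frame` record and its fields are hypothetical per-frame annotations that a face- and body-tracker might emit, and the aggregate features (gaze-down fraction, smile fraction, a variance-based fidget proxy) simply mirror the cues described in the article.

```python
from dataclasses import dataclass
from statistics import pvariance

@dataclass
class Frame:
    """Hypothetical per-frame annotations from a face/body tracker."""
    gaze_down: bool  # gaze directed downward, away from the interviewer
    smiling: bool    # smile detected in this frame
    hand_y: float    # vertical hand position, normalised to body height

def behavioural_features(frames):
    """Aggregate per-frame cues into interview-level features."""
    n = len(frames)
    return {
        # how often the interviewee dropped their gaze
        "gaze_down_fraction": sum(f.gaze_down for f in frames) / n,
        # how often they smiled
        "smile_fraction": sum(f.smiling for f in frames) / n,
        # variance of hand position as a crude proxy for fidgeting
        "fidget_score": pvariance([f.hand_y for f in frames]),
    }

frames = [Frame(True, False, 0.2), Frame(True, False, 0.6), Frame(False, True, 0.4)]
print(behavioural_features(frames))
```

In a real system, features like these would be fed to a classifier trained on the questionnaire-screened interviews rather than interpreted directly.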
Some of these systems will be presented at the Automatic Face and Gesture Recognition conference in Shanghai next month, and the Association for Computing Machinery is hosting a contest in Spain this fall to see which is best at picking out clinically depressed patients from videos.