Rethinking Healthcare

Surgeons use Kinect to consult images mid-operation

Microsoft Research has teamed up with hospitals in London to develop a 'touchless' way for surgeons to look at scans and other images while still scrubbed in and standing at the operating table.

Having to scrub in and then scrub out a few minutes later just to scroll through medical images can be time-consuming and break a surgeon's flow.

This month, New Scientist reports, doctors in London began trials of a new device that uses an Xbox Kinect camera to sense body position. Just by waving their arms, surgeons can consult and sift through images – such as CT scans or X-rays – in the middle of an operation.

During any given surgery, a doctor must stop and consult images anywhere from once an hour to every few minutes. To avoid leaving the table or transferring contamination from non-sterile environments, many surgeons rely on assistants to handle the computer for them.

"Up until now, I'd been calling out across the room to one of our technical assistants, asking them to manipulate the image, rotate one way, rotate the other, pan up, pan down, zoom in, zoom out," says Tom Carrell at Guy's and St Thomas' hospital, who led an operation earlier this month to repair an aneurysm in a patient's aorta. With the Kinect, he says, "I had very intuitive control."

Carrell used the system to look at a 3D model of a section of the abdominal aorta, captured on a CT scan. This was projected onto a 2D live image feed of the operation site, taken with a fluoroscopic X-ray camera. That way, Carrell could see what was happening inside the patient while using the 3D model to help navigate the twists, turns and branches of the aorta.

To make a touchless medical image viewer workable in an operating room, the surgeons worked with Microsoft Research to develop a set of gestures that can be performed in a constrained space while standing at the operating table.

  • For the most common actions – such as rotating the 3D model – the team designed one-handed gestures that combine with voice commands, leaving the other hand free for operating.
  • To position a marker on an image, the surgeon simply points at the image to activate a cursor and says, "place marker."
  • Other functions, such as panning or zooming, require two hands.
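The division of labour described above – one-handed gestures paired with voice for common actions, two-handed gestures for panning and zooming – can be sketched as a simple event dispatcher. This is an illustrative sketch only: the `GestureEvent` type, the motion names, and the action strings are assumptions for demonstration, not the actual Microsoft Research system or the Kinect SDK.

```python
# Illustrative sketch of mapping gesture + voice events to viewer actions,
# in the spirit of the gesture set described above. All names are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class GestureEvent:
    hands: int                    # number of hands involved in the gesture
    motion: str                   # e.g. "rotate", "point", "drag", "spread"
    voice: Optional[str] = None   # accompanying voice command, if any


def dispatch(event: GestureEvent) -> str:
    """Map a gesture (plus optional voice command) to an image-viewer action."""
    # Common actions use one hand plus voice, leaving the other hand free.
    if event.hands == 1:
        if event.motion == "rotate":
            return "rotate 3D model"
        if event.motion == "point" and event.voice == "place marker":
            return "place marker at cursor"
    # Panning and zooming require both hands.
    if event.hands == 2:
        if event.motion == "drag":
            return "pan image"
        if event.motion == "spread":
            return "zoom image"
    return "no action"
```

For example, `dispatch(GestureEvent(hands=1, motion="point", voice="place marker"))` returns `"place marker at cursor"`, while pointing without the voice command does nothing – mirroring how the voice channel disambiguates one-handed gestures in the system described.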

You can watch a video demo of touchless interaction.

[Via New Scientist]

Image: Microsoft Research

Janet Fang

Contributing Editor

Janet Fang has written for Nature, Discover and the Point Reyes Light. She is currently a lab technician at Lamont-Doherty Earth Observatory. She holds degrees from the University of California, Berkeley and Columbia University. She is based in New York.