By Janet Fang
Posted in Science
The robotic assistant has two winglike arms that end in tiny claws to help surgeons navigate. And it runs on Linux-based, open-source software that helps researchers collaborate.
Okay, you may have already met Raven. Researchers unveiled the latest version of this robotic surgical assistant in January. But if you missed it, like me, here’s what’s up.
The surgical manipulator has two winglike arms that end in tiny claws to help surgeons see and navigate around, say, the heart. The naked eye just can’t see everything, and even a surgeon’s trained hands can’t feel everything.
Blake Hannaford at the University of Washington and Jacob Rosen at the University of California, Santa Cruz built the original Raven for telerobotic surgery study back in 2005 for around $250,000. Now, they’ve developed a new version, Raven II: it’s smaller, more dexterous in its hands, and able to hold surgical tools during operations.
Their approach uses 3D ultrasound imaging to show internal organs in real time. Volumetric images are taken, and fast image processing software locates the target tissue and the instrument. (The same graphics processors that produce high-quality computer-game images are ideal for real-time medical imaging.)
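To make the "fast image processing" step concrete, here's a toy sketch of one way software can locate an instrument in a volumetric frame: a metal tip reflects ultrasound strongly, so it shows up as a bright voxel. The tiny volume and the brightest-voxel heuristic are illustrative assumptions, not Raven's actual pipeline (real systems process far larger volumes on GPUs).

```python
# Hypothetical sketch: find a bright target (e.g., an instrument
# tip, which reflects ultrasound strongly) in a small volumetric
# frame by scanning for the highest-intensity voxel.

def locate_brightest_voxel(volume):
    """Return the (x, y, z) index of the highest-intensity voxel."""
    best_pos, best_val = None, float("-inf")
    for x, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for z, val in enumerate(row):
                if val > best_val:
                    best_pos, best_val = (x, y, z), val
    return best_pos

# A tiny 2x2x2 "volume" with one bright voxel at index (1, 0, 1).
frame = [
    [[0.1, 0.2], [0.1, 0.1]],
    [[0.3, 0.9], [0.2, 0.1]],
]
print(locate_brightest_voxel(frame))  # -> (1, 0, 1)
```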
They’ve created software to work with the Robot Operating System, a popular open-source robotics framework, so labs can easily connect the Raven to other devices and share ideas. The Linux-based, open-source system lets anyone modify and improve the original code, creating a way for researchers to experiment and collaborate, the Economist explains.
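The core idea that makes this connecting-and-sharing easy is ROS's publish/subscribe model: devices exchange messages on named topics without knowing about each other directly. Here's a toy stand-in for that pattern in plain Python (no actual ROS libraries, which need a running ROS system); the topic name and message contents are hypothetical examples.

```python
# Toy illustration of the publish/subscribe pattern ROS is built on
# (plain Python, not real ROS). The topic "/raven/joint_states" and
# its message are made-up examples.

from collections import defaultdict

class MiniBus:
    """A minimal in-process message bus, standing in for ROS topics."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Any number of devices/labs can listen on the same topic.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic.
        for callback in self.subscribers[topic]:
            callback(message)

bus = MiniBus()
received = []

# Another lab's device could attach its own callback the same way.
bus.subscribe("/raven/joint_states", received.append)
bus.publish("/raven/joint_states", {"joint_1": 0.42})
print(received)  # -> [{'joint_1': 0.42}]
```

Because publishers and subscribers only agree on a topic name and message format, a new device can be wired into the system without modifying Raven's own code, which is what makes sharing across labs practical.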
Ravens have been deployed to biorobotics labs around the country: Harvard, Johns Hopkins University, the University of Nebraska-Lincoln, UCLA, and UC Berkeley (Go Bears). Some things Raven can do:
- A Harvard team synchronizes the robot with beating heart tissue. When the surgical instrument reaches the tissue, a control loop is closed between instrument and tissue so that the instrument automatically moves in tandem with the beating motion of the heart. (It’s almost as if the surgeon is working on a stationary heart.)
- A Johns Hopkins team is investigating whether it could make invasive operations safer. In functional endoscopic sinus surgery, surgeons use an endoscope to find and treat nasal polyps and sinus inflammation. (The surgical field sits close to the eyes and brain.) Raven uses imaging to identify the 3D location of the endoscope’s tip as the surgeon maneuvers through the sinuses.
- Superimposing the surgeon’s field of view on standard medical images means surgeons don’t need to constantly look back and forth between a map of the patient's particular anatomy and the view of the patient through the endoscope.
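The Harvard beating-heart example above boils down to a simple feedback idea: each cycle, measure where the tissue is, and command the instrument to close a fraction of the gap. Here's a hedged sketch with a sinusoidal stand-in for heart motion; the 5 mm amplitude, heart rate, loop rate, and gain are all illustrative assumptions, not published Raven parameters.

```python
# Hypothetical sketch of a "closed control loop" around instrument
# and tissue: a proportional controller chases simulated heart
# motion, so the relative (instrument-to-tissue) error stays small.

import math

def track_heart(steps=2000, dt=0.001, gain=0.5):
    """Proportional tracking of sinusoidal tissue motion (in mm).

    Returns the worst tracking error seen after the loop settles.
    """
    instrument = 0.0
    max_late_error = 0.0
    for i in range(steps):
        t = i * dt
        # Simulated tissue motion: 5 mm amplitude at ~72 beats/min.
        tissue = 5.0 * math.sin(2 * math.pi * 1.2 * t)
        error = tissue - instrument
        instrument += gain * error  # close a fraction of the gap
        if i > steps // 2:  # measure error only after settling
            max_late_error = max(max_late_error, abs(error))
    return max_late_error

# With a fast loop, the residual error is a small fraction of the
# 5 mm motion amplitude, so the heart looks nearly stationary to
# the surgeon operating relative to the instrument.
print(track_heart())
```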
Watch a video of Raven at work.
"To do superhuman surgery will require robots to have enough intelligence to recognize what the surgeon is doing and to offer appropriate assistance, remotely setting up no-fly zones for safety, superimposing images,” says Gregory Hager of Johns Hopkins. “All of that is coming down the road."
[Via Popular Mechanics]
Image: University of Washington
Apr 5, 2012
Folks, surgeons don't refer to maps to remember the anatomy of an area. They went to medical school, interned, and then did residencies to bring that home to stay. What they need from the cines, x-rays, etc. is YOUR particular anatomy. We mostly all have two hands, but no two are alike, not even the two that are a pair. Thus the general anatomy is well known to doctors, and veterinarians too; it is the specifics of the individual, usually based on the particular pathology, that are referred back to for consideration.
Ah yes, I meant the map of the patient's particular anatomy, such as the carotid artery or optic nerve that's behind the tissue. The robot can help fuse the preoperative image with what's being seen through the endoscope.