Robotic surgical systems such as Intuitive Surgical's da Vinci system provide a rich source of motion and video data from surgical procedures. In principle, these data can be used to evaluate surgical skill, provide surgical training feedback, or document essential aspects of a procedure. If processed online, the data can be used to provide context-specific information or motion enhancements to the surgeon. In every case, however, the key step is relating the recorded motion data to a model of the procedure being performed. This paper describes our progress in developing techniques for "parsing" raw motion data from a surgical task into a labelled sequence of surgical gestures. Our current techniques have achieved >90% fully automated recognition rates on 15 datasets. © Springer-Verlag Berlin Heidelberg 2005.
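To make the parsing idea concrete, the following is a minimal sketch of the general pattern the abstract describes, labelling each frame of a motion stream and collapsing runs of identical labels into gesture segments. This is a hypothetical illustration using a nearest-class-mean frame classifier; it is not the authors' actual recognition pipeline, and the gesture names and feature values are invented.

```python
import numpy as np

def segment_gestures(frames, class_means):
    """Label each motion frame by nearest class mean, then merge
    consecutive identical labels into (label, start, end) segments.

    frames: (T, D) array of motion features (e.g. velocities).
    class_means: dict mapping gesture label -> (D,) mean feature vector.
    """
    names = list(class_means)
    means = np.array([class_means[n] for n in names])
    # Per-frame classification: nearest mean in feature space.
    labels = [names[int(np.argmin(np.linalg.norm(means - f, axis=1)))]
              for f in frames]
    # Collapse runs of identical labels into gesture segments.
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((labels[start], start, t - 1))
            start = t
    return segments

# Toy data: 1-D velocity-like features for two alternating "gestures".
frames = np.array([[0.1], [0.2], [0.15], [1.0], [1.1], [0.9], [0.2]])
means = {"reach": np.array([0.15]), "pull": np.array([1.0])}
print(segment_gestures(frames, means))
# → [('reach', 0, 2), ('pull', 3, 5), ('reach', 6, 6)]
```

In practice the per-frame classifier would be replaced by a statistical model trained on labelled surgical motion data, but the segment-extraction step shown here is the same.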
Lin, H. C., Shafran, I., Murphy, T. E., Okamura, A. M., Yuh, D. D., & Hager, G. D. (2005). Automatic detection and segmentation of robot-assisted surgical motions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3749 LNCS, pp. 802–810). Springer Verlag. https://doi.org/10.1007/11566465_99