Learning Where to Look While Tracking Instruments in Robot-Assisted Surgery

Abstract

Directing task-specific attention while tracking instruments in surgery holds great potential for robot-assisted intervention. For this purpose, we propose an end-to-end trainable multitask learning (MTL) model for real-time surgical instrument segmentation and attention prediction. Our model is designed with a weight-shared encoder and two task-oriented decoders and is optimized for the joint tasks. We introduce a batch-Wasserstein (bW) loss and construct a soft attention module to refine the distinctive visual regions for efficient saliency learning. In multitask optimization, it is challenging to obtain convergence of both tasks in the same epoch; we address this by adopting a 'poly' loss weight and two phases of training. We further propose a novel way to generate task-aware saliency maps and scanpaths of the instruments on the MICCAI robotic instrument segmentation dataset. Our model outperforms state-of-the-art segmentation and saliency models on most of the evaluation metrics.
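
To make the architecture described above concrete, the following is a minimal PyTorch sketch of a weight-shared encoder feeding two task-oriented decoders, combined with a 'poly'-style loss weight. It is not the authors' implementation: the layer choices, the poly_weight helper, the target shapes, and the MSE saliency term (standing in for the paper's batch-Wasserstein loss and soft attention module) are illustrative assumptions only.

import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Illustrative shared-encoder / dual-decoder layout (not the paper's exact network)."""
    def __init__(self, num_classes=8):
        super().__init__()
        # Shared encoder: its weights are reused by both task branches.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task decoder 1: per-pixel instrument segmentation logits.
        self.seg_decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_classes, 4, stride=2, padding=1),
        )
        # Task decoder 2: single-channel saliency (attention) map.
        self.sal_decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.encoder(x)          # shared features
        return self.seg_decoder(feats), self.sal_decoder(feats)

def poly_weight(epoch, max_epochs, power=0.9):
    """'Poly' decay schedule, assumed here to weight one task loss against the other."""
    return (1 - epoch / max_epochs) ** power

# Joint optimization: the total loss mixes the two task losses with the poly weight.
model = MultiTaskNet()
x = torch.randn(2, 3, 224, 224)
seg_logits, sal_map = model(x)
seg_target = torch.randint(0, 8, (2, 224, 224))   # toy segmentation labels
sal_target = torch.rand(2, 1, 224, 224)           # toy saliency ground truth
w = poly_weight(epoch=10, max_epochs=100)
loss = nn.CrossEntropyLoss()(seg_logits, seg_target) \
       + w * nn.functional.mse_loss(sal_map, sal_target)
loss.backward()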

Citation (APA)

Islam, M., Li, Y., & Ren, H. (2019). Learning Where to Look While Tracking Instruments in Robot-Assisted Surgery. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11768 LNCS, pp. 412–420). Springer. https://doi.org/10.1007/978-3-030-32254-0_46
