Weakly-supervised learning for tool localization in laparoscopic videos


Abstract

Surgical tool localization is an essential task for the automatic analysis of endoscopic videos. Existing methods for tool localization, tracking, and segmentation require fully annotated training data, which limits both the size of the usable datasets and the generalization of the approaches. In this work, we propose to circumvent the lack of spatially annotated data with weak supervision. We present a deep architecture, trained solely on image-level annotations, that can be used for both tool presence detection and tool localization in surgical videos. The architecture relies on a fully convolutional neural network, trained end-to-end, enabling us to localize surgical tools without explicit spatial annotations. We demonstrate the benefits of our approach on a large public dataset, Cholec80, which is fully annotated with binary tool presence labels and of which 5 videos have additionally been annotated with bounding boxes and tool centers for evaluation.
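The core idea behind this kind of weak supervision is that a fully convolutional network produces one spatial activation map per tool class, and a global spatial pooling step collapses each map into a single image-level score, so the network can be trained with only binary presence labels while the maps themselves yield localization at test time. A minimal NumPy sketch of that pooling-and-peak step is below; the heatmap shapes, the use of global max pooling, and the variable names are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

# Hypothetical per-class activation maps emitted by a fully
# convolutional network: shape (num_classes, H, W).
# Cholec80 annotates 7 tool classes, so we use 7 here.
rng = np.random.default_rng(0)
heatmaps = rng.standard_normal((7, 12, 20))

# Global max pooling collapses each spatial map to one image-level
# logit, so training needs only binary tool-presence labels.
logits = heatmaps.max(axis=(1, 2))                # shape (7,)
presence = 1.0 / (1.0 + np.exp(-logits)) > 0.5    # sigmoid + threshold

# At test time, localization comes "for free": the peak of each map
# serves as the predicted tool center (rescaled to the input
# resolution in practice, since the maps are lower resolution).
centers = [np.unravel_index(h.argmax(), h.shape) for h in heatmaps]
```

Because the pooled loss only requires the maximum activation to be high when a tool is present, the network learns to fire on discriminative tool regions, which is what makes the per-class maps usable for localization despite the absence of spatial labels.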

Citation (APA)

Vardazaryan, A., Mutter, D., Marescaux, J., & Padoy, N. (2018). Weakly-supervised learning for tool localization in laparoscopic videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11043 LNCS, pp. 169–179). Springer Verlag. https://doi.org/10.1007/978-3-030-01364-6_19
