AutoSNAP: Automatically Learning Neural Architectures for Instrument Pose Estimation

Abstract

Despite recent successes, the advances in Deep Learning have not yet been fully translated to Computer Assisted Intervention (CAI) problems such as pose estimation of surgical instruments. Currently, neural architectures designed for classification and segmentation tasks are adopted, ignoring significant discrepancies between those tasks and CAI. We propose an automatic framework (AutoSNAP) for instrument pose estimation problems, which discovers and learns architectures for neural networks. We introduce 1) an efficient testing environment for pose estimation, 2) a powerful architecture representation based on novel Symbolic Neural Architecture Patterns (SNAPs), and 3) an optimization of the architecture using an efficient search scheme. Using AutoSNAP, we discover an improved architecture (SNAPNet) which outperforms both the hand-engineered i3PosNet and the state-of-the-art architecture search method DARTS.
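The core idea sketched in the abstract — encoding an architecture as a short symbolic pattern and searching over that pattern space — can be illustrated with a toy loop. Everything below (the token vocabulary `OPS`, the scoring proxy, and plain random search) is a hypothetical placeholder, not the paper's actual SNAP grammar or its efficient search scheme:

```python
# Hypothetical sketch: architectures are encoded as short symbolic token
# sequences (SNAP-like patterns); a search loop proposes, decodes, and
# scores candidates. Vocabulary and scoring are illustrative only.
import random

# Illustrative symbolic vocabulary for one architecture pattern.
OPS = ["conv3x3", "conv1x1", "maxpool", "skip", "relu"]

def random_snap(length=4, rng=random):
    """Sample a candidate symbolic pattern (a SNAP-like token sequence)."""
    return tuple(rng.choice(OPS) for _ in range(length))

def proxy_score(snap):
    """Toy stand-in for training and validating the decoded network:
    rewards patterns that combine a convolution with a skip connection."""
    score = 0.0
    if any(op.startswith("conv") for op in snap):
        score += 1.0
    if "skip" in snap:
        score += 0.5
    score -= 0.1 * snap.count("maxpool")  # penalize excessive pooling
    return score

def search(n_trials=200, seed=0):
    """Plain random search over the pattern space; AutoSNAP instead uses
    an efficient learned search scheme over its SNAP representation."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        snap = random_snap(rng=rng)
        s = proxy_score(snap)
        if s > best_score:
            best, best_score = snap, s
    return best, best_score

best_snap, best_score = search()
print(best_snap, best_score)
```

In the actual framework, the scoring step corresponds to training candidate networks in the efficient testing environment, and the proposal step is driven by the learned search scheme rather than uniform sampling.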

Citation (APA)

Kügler, D., Uecker, M., Kuijper, A., & Mukhopadhyay, A. (2020). AutoSNAP: Automatically Learning Neural Architectures for Instrument Pose Estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12263 LNCS, pp. 375–384). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59716-0_36
