Global Multi-modal 2D/3D Registration via Local Descriptors Learning

Abstract

Multi-modal registration is a required step for many image-guided procedures, especially ultrasound-guided interventions that require anatomical context. While a number of such registration algorithms are already available, they all require a good initialization to succeed due to the challenging appearance of ultrasound images and the arbitrary coordinate system they are acquired in. In this paper, we present a novel approach to the problem of registering an ultrasound sweep to a pre-operative image. We learn dense keypoint descriptors from which we then estimate the registration. We show that our method overcomes the challenges inherent to registration tasks with freehand ultrasound sweeps, namely the multi-modality and multi-dimensionality of the data, the lack of precise ground truth, and the small number of training examples. We derive a registration method that is fast, generic, fully automatic, does not require any initialization, and can naturally generate visualizations aiding interpretability and explainability. Our approach is evaluated on a clinical dataset of paired MR volumes and ultrasound sequences.
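
The following is a minimal illustrative sketch of the generic pipeline the abstract describes, not the paper's actual network or loss: learned per-keypoint descriptors are matched across modalities, and a rigid transform is then estimated from the correspondences (here with a plain mutual-nearest-neighbour match and a Kabsch fit; all variable names such as `us_desc` and `mr_xyz` are hypothetical placeholders).

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Mutual nearest-neighbour matching between descriptor sets of shape (N, D) and (M, D)."""
    sim = desc_a @ desc_b.T                          # similarity matrix (assumes L2-normalised descriptors)
    nn_ab = sim.argmax(axis=1)                       # best match in B for each row of A
    nn_ba = sim.argmax(axis=0)                       # best match in A for each row of B
    keep = nn_ba[nn_ab] == np.arange(len(desc_a))    # keep only mutual (cycle-consistent) matches
    return np.flatnonzero(keep), nn_ab[keep]

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst via the Kabsch algorithm."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical usage: keypoint positions (in mm) and learned descriptors for an
# ultrasound sweep and an MR volume, as produced by some descriptor network.
us_xyz, us_desc = np.random.rand(200, 3), np.random.rand(200, 32)
mr_xyz, mr_desc = np.random.rand(500, 3), np.random.rand(500, 32)
us_desc /= np.linalg.norm(us_desc, axis=1, keepdims=True)
mr_desc /= np.linalg.norm(mr_desc, axis=1, keepdims=True)

i, j = match_descriptors(us_desc, mr_desc)
R, t = rigid_fit(us_xyz[i], mr_xyz[j])               # global pose estimate without any initialization
```

In practice, a robust estimator such as RANSAC would typically replace the plain least-squares fit to tolerate mismatched descriptors, but the structure of the pipeline stays the same.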

Citation (APA)

Markova, V., Ronchetti, M., Wein, W., Zettinig, O., & Prevost, R. (2022). Global Multi-modal 2D/3D Registration via Local Descriptors Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13436 LNCS, pp. 269–279). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16446-0_26
