Transformed Grid Distance Loss for Supervised Image Registration

Abstract

Many deep learning image registration tasks, such as volume-to-volume registration, frame-to-volume registration, and frame-to-volume reconstruction, rely on six transformation parameters or quaternions to supervise learning-based methods. However, these parameters can be very abstract for neural networks to comprehend. During optimization, an ill-considered representation of rotation may even trap the objective function in local minima. This paper exposes these issues and proposes the Transformed Grid Distance loss as a solution. The proposed method not only resolves the rotation-representation problem but also bridges the gap between translation and rotation. We evaluate our method on both synthetic and clinically relevant medical image datasets and demonstrate superior performance compared with conventional losses, while requiring no changes to the network input, output, or architecture.
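
The sketch below illustrates the general idea described in the abstract: instead of regressing abstract pose parameters directly, the loss compares where a fixed set of grid points lands under the predicted and ground-truth transforms, so rotation and translation errors are measured in the same geometric unit. The function name, the 3x4 rigid-transform convention, and the cube-corner grid are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def transformed_grid_distance(T_pred, T_gt, grid_points):
    """Sketch of a transformed-grid-distance style loss (assumed formulation).

    T_pred, T_gt: (B, 3, 4) rigid transforms [R | t].
    grid_points:  (N, 3) fixed 3D grid points (e.g., volume corners).
    Returns the mean Euclidean distance between corresponding points
    mapped by the predicted and ground-truth transforms.
    """
    # Homogeneous coordinates: (N, 4)
    ones = torch.ones(grid_points.shape[0], 1, dtype=grid_points.dtype)
    pts_h = torch.cat([grid_points, ones], dim=1)

    # Apply both transforms: (B, 3, 4) @ (4, N) -> (B, 3, N)
    pred_pts = T_pred @ pts_h.T
    gt_pts = T_gt @ pts_h.T

    # Mean distance over points and batch, in physical units
    return (pred_pts - gt_pts).norm(dim=1).mean()


if __name__ == "__main__":
    # Toy usage: unit-cube corners as the grid, identity pose vs. a small offset
    corners = torch.tensor(
        [[x, y, z] for x in (0.0, 1.0) for y in (0.0, 1.0) for z in (0.0, 1.0)]
    )
    T_gt = torch.eye(3, 4).unsqueeze(0)        # identity pose [I | 0]
    T_pred = T_gt.clone()
    T_pred[:, :, 3] += 0.1                     # 0.1 translation error per axis
    print(transformed_grid_distance(T_pred, T_gt, corners))  # ~0.173
```

Because the loss is expressed as point displacement, a rotation error far from the grid origin contributes comparably to a translation error of similar magnitude, which is the unification the abstract refers to.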

Citation (APA)
Song, X., Chao, H., Xu, S., Turkbey, B., Wood, B. J., Wang, G., & Yan, P. (2022). Transformed Grid Distance Loss for Supervised Image Registration. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13386 LNCS, pp. 177–181). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-11203-4_19
