SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration

Abstract

In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to self-supervision using segmentations, producing promising results across several benchmarks. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching, and estimation of deformation. We introduce one simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques. Our code is publicly available from https://github.com/balbasty/superwarp.
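To make the abstract's key modification concrete: instead of asking a plain U-Net to implicitly learn feature extraction, matching, and deformation all at once, the network warps its intermediate features with the current deformation estimate at each level before matching. The authors' repository is the authoritative implementation; the sketch below only illustrates the feature-warping primitive in 2D with NumPy, and the function name `warp_bilinear` and its conventions (channel-first features, a displacement field in voxels, zero padding outside the image) are our own assumptions, not taken from the SuperWarp codebase.

```python
import numpy as np

def warp_bilinear(feat, disp):
    """Warp a feature map by a dense displacement field.

    feat: (C, H, W) feature map.
    disp: (2, H, W) displacement in voxels; disp[0] is the row (y)
          offset, disp[1] the column (x) offset.
    Samples feat at (y + disp[0], x + disp[1]) with bilinear
    interpolation, treating everything outside the image as zero.
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = ys + disp[0]
    x = xs + disp[1]
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    wy = y - y0  # fractional parts -> bilinear weights
    wx = x - x0
    out = np.zeros_like(feat)
    # Accumulate the four corner contributions.
    for dy, dx, w in ((0, 0, (1 - wy) * (1 - wx)),
                      (0, 1, (1 - wy) * wx),
                      (1, 0, wy * (1 - wx)),
                      (1, 1, wy * wx)):
        yy = y0 + dy
        xx = x0 + dx
        valid = (yy >= 0) & (yy < H) & (xx >= 0) & (xx < W)
        yy_c = np.clip(yy, 0, H - 1)
        xx_c = np.clip(xx, 0, W - 1)
        out += feat[:, yy_c, xx_c] * (w * valid)
    return out
```

In a coarse-to-fine registration network, a warp like this would be applied to the moving image's decoder features at each resolution level, using the deformation field estimated so far, so that the subsequent layers only need to match already-aligned features and predict a residual update.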

Citation (APA)

Young, S. I., Balbastre, Y., Dalca, A. V., Wells, W. M., Iglesias, J. E., & Fischl, B. (2022). SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13386 LNCS, pp. 103–115). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-11203-4_12
