Hierarchical multi-modal image registration by learning common feature representations


Abstract

Mutual information (MI) has been widely used for registering images of different modalities. Since most inter-modality registration methods estimate deformations at a local scale while optimizing MI over the entire image, the estimated deformations for certain structures can be dominated by surrounding, unrelated structures. Moreover, because multiple structures typically coexist in each image, the intensity correlation between two images can be complex and highly nonlinear, which prevents global MI from precisely guiding local image deformation. To address these issues, we propose a hierarchical inter-modality registration method based on robust feature matching. Specifically, we first select a small set of key points at salient image locations to drive the entire image registration. Since image features computed from different modalities are often difficult to compare directly, we learn their common feature representations by projecting them from their native feature spaces into a common space, where the correlations between corresponding features are maximized. Due to the large heterogeneity between the two high-dimensional feature distributions, we employ Kernel CCA (Canonical Correlation Analysis) to reveal such nonlinear feature mappings. Our registration method can then exploit the learned common features to reliably establish correspondences for key points across modalities by robust feature matching. As more and more key points take part in the registration, the hierarchical feature-based registration efficiently estimates the deformation pathway between two inter-modality images in a global-to-local manner. We have applied the proposed method to prostate CT and MR images, as well as infant brain MR images acquired in the first year of life. Experimental results show that our method achieves more accurate registration than other state-of-the-art image registration methods.
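The core computational step described above is learning a common feature space with kernel CCA and then matching key points in that space. Below is a minimal, illustrative sketch of that idea, not the authors' implementation: the RBF kernel width `gamma`, the regularization `reg`, the number of canonical components, and all helper names are assumptions chosen for illustration only.

```python
import numpy as np
from scipy.spatial.distance import cdist


def rbf_kernel(A, B, gamma=1e-3):
    """RBF (Gaussian) kernel matrix between two sets of row-vector features."""
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))


def center_kernel(K):
    """Double-center a square kernel matrix (removes the feature-space mean)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H


def kernel_cca(X, Y, n_components=10, gamma=1e-3, reg=1e-2):
    """Regularized kernel CCA between two feature sets sampled at corresponding
    key points (e.g., X from CT, Y from MR). Returns dual coefficients
    (alpha, beta) used to project new features into the common space."""
    n = X.shape[0]
    Kx = center_kernel(rbf_kernel(X, X, gamma))
    Ky = center_kernel(rbf_kernel(Y, Y, gamma))
    # Simplified dual formulation: eigenvectors of
    # (Kx + reg*I)^-1 Ky (Ky + reg*I)^-1 Kx give the canonical directions.
    Rx = np.linalg.solve(Kx + reg * np.eye(n), Ky)
    Ry = np.linalg.solve(Ky + reg * np.eye(n), Kx)
    vals, vecs = np.linalg.eig(Rx @ Ry)
    order = np.argsort(-vals.real)[:n_components]
    alpha = vecs[:, order].real                      # dual weights, modality X
    beta = Ry @ alpha                                # paired weights, modality Y
    beta /= np.linalg.norm(beta, axis=0, keepdims=True) + 1e-12
    return alpha, beta


def project(F_new, F_train, weights, gamma=1e-3):
    """Map new per-key-point features into the learned common space.
    (Centering of the test kernel is omitted here for brevity.)"""
    return rbf_kernel(F_new, F_train, gamma) @ weights


# Toy usage: 200 corresponding key points with 64-dim and 80-dim features
# standing in for CT and MR descriptors, respectively.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
Y = rng.standard_normal((200, 80))
alpha, beta = kernel_cca(X, Y, n_components=5)
Zx = project(X, X, alpha)   # CT-side features in the common space
Zy = project(Y, Y, beta)    # MR-side features in the common space
# Correspondences between key points can then be proposed by nearest-neighbour
# search between rows of Zx and Zy, which is the role feature matching plays
# in driving the hierarchical registration.
```

In this sketch the common space makes features from the two modalities directly comparable, so key-point correspondences can be established by simple nearest-neighbour matching rather than by optimizing a global MI criterion.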

Cite

APA

Ge, H., Wu, G., Wang, L., Gao, Y., & Shen, D. (2015). Hierarchical multi-modal image registration by learning common feature representations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9352, pp. 203–211). Springer Verlag. https://doi.org/10.1007/978-3-319-24888-2_25
