Multimodal Image Alignment via Linear Mapping between Feature Modalities


Abstract

We propose a novel landmark-matching method for aligning multimodal images, accomplished solely by solving for a linear mapping between different feature modalities. This linear mapping yields a new similarity measure for images captured in different modalities. Moreover, our method solves for the linear mapping and the landmark correspondences simultaneously by minimizing a convex quadratic function. It can estimate complex cross-modality image relationships and nonlinear, nonrigid spatial transformations even in the presence of heavy noise, as demonstrated in experiments on a variety of image modalities.
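To make the core idea concrete, the following is a minimal, hypothetical sketch of the simplest piece of the formulation: once landmark correspondences are fixed, estimating a linear mapping between the two modalities' feature descriptors reduces to a least-squares problem (a convex quadratic). All names, dimensions, and the synthetic data below are illustrative assumptions; the paper's actual method additionally solves for the correspondences jointly, which this sketch does not attempt.

```python
import numpy as np

def fit_linear_mapping(F1, F2):
    """Illustrative sketch (not the paper's full method).

    F1, F2: (n_landmarks, d) feature matrices from modality 1 and 2,
    assumed to be in correspondence row by row.
    Returns A of shape (d, d) minimizing ||F1 @ A - F2||_F^2,
    a convex quadratic in A solved here via ordinary least squares.
    """
    A, *_ = np.linalg.lstsq(F1, F2, rcond=None)
    return A

# Synthetic example: features in modality 2 are a noisy linear image
# of features in modality 1 under a ground-truth mapping A_true.
rng = np.random.default_rng(0)
A_true = rng.normal(size=(4, 4))
F1 = rng.normal(size=(50, 4))
F2 = F1 @ A_true + 0.01 * rng.normal(size=(50, 4))

A_est = fit_linear_mapping(F1, F2)
# The residual ||F1 @ A_est - F2|| then serves as a similarity score
# between the two modalities' feature sets.
residual = np.linalg.norm(F1 @ A_est - F2)
```

Under this reading, a small residual indicates that one modality's features can be linearly mapped onto the other's, which is the similarity measurement the abstract refers to.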

Citation (APA)
Jiang, Y., Zheng, Y., Hou, S., Chang, Y., & Gee, J. (2017). Multimodal Image Alignment via Linear Mapping between Feature Modalities. Journal of Healthcare Engineering, 2017. https://doi.org/10.1155/2017/8625951
