Multimodal image registration by information fusion at feature level

Abstract

This paper proposes a novel multimodal image registration method that fully utilizes the multimodal information and yields a more accurate unified deformation field. Unlike existing methods, which fuse information at the image/intensity level, the proposed method fuses the multimodal information at the feature level through the Gabor wavelet transform. At this level, complementary and redundant information can be distinguished reliably and efficiently, and then combined and removed, respectively. Experiments on both simulated and real T1+DTI image sets show that the proposed method effectively incorporates the better characterization of white matter (WM) from DTI and of gray matter (GM) from the T1 image, leading to a more accurate and efficient multimodal registration that paves the way for subsequent multimodal population-based studies. © 2009 Springer-Verlag.
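The per-voxel features underlying this kind of fusion are responses to a bank of Gabor filters at several orientations. As an illustration only, a minimal 2-D sketch of such a feature extractor is given below; the function names, filter parameters, and the simple multi-orientation bank are assumptions for demonstration, not the paper's actual filter bank or fusion rule.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=1.0):
    """Real-valued Gabor kernel: cosine carrier under a Gaussian envelope.

    size  : odd kernel width/height in pixels
    sigma : std. dev. of the Gaussian envelope
    theta : filter orientation in radians
    lam   : wavelength of the sinusoidal carrier
    gamma : spatial aspect ratio of the envelope
    (all parameter values here are illustrative choices)
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / lam)

def gabor_features(image, thetas, sigma=2.0, lam=4.0, size=9):
    """Per-pixel Gabor feature vector: one response channel per orientation."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    channels = []
    for theta in thetas:
        k = gabor_kernel(size, sigma, theta, lam)
        resp = np.zeros(image.shape, dtype=float)
        # Direct (naive) convolution via sliding windows, for clarity
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                resp[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
        channels.append(resp)
    # Stack orientations into the last axis: shape (H, W, len(thetas))
    return np.stack(channels, axis=-1)
```

In a feature-level fusion setting, one would compute such feature stacks for each modality (e.g. the T1 image and a scalar map derived from DTI) and concatenate them per voxel before driving the registration, with redundant channels pruned; those later steps are method-specific and are not sketched here.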

Citation (APA)

Li, Y., & Verma, R. (2009). Multimodal image registration by information fusion at feature level. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5761 LNCS, pp. 624–631). https://doi.org/10.1007/978-3-642-04268-3_77
