Region-enhanced joint dictionary learning for cross-modality synthesis in diffusion tensor imaging

Abstract

Diffusion tensor imaging (DTI) has notoriously long acquisition times, and the sensitivity of the tensor computation makes the technique vulnerable to various interferences, such as physiological motion, limited scanning time, and differing patient conditions. Neuroimaging studies usually involve several modalities, and we consider the problem of inferring key information in DTI from other modalities. To address this problem, several cross-modality image synthesis approaches have recently been proposed, in which the content of one image modality is reproduced from that of another. However, these methods typically focus on two modalities of the same complexity. In this work, we propose a region-enhanced joint dictionary learning method that incorporates region-specific information in a joint learning manner. The proposed method encodes the intrinsic differences among modalities, while the jointly learned dictionaries preserve the structures they have in common. Experimental results show that our approach has desirable properties for cross-modality image synthesis in diffusion tensor images.
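The general idea behind joint dictionary learning for cross-modality synthesis can be illustrated with the minimal sketch below: paired training patches from two modalities are stacked and encoded with a single set of sparse codes, so each learned atom has coupled sub-atoms in both modalities; at synthesis time, a source patch is encoded with the source sub-dictionary and its target counterpart is reconstructed with the target sub-dictionary. This is not the authors' formulation: the region-enhanced weighting, patch extraction, and DTI-specific processing are omitted, and the random data, variable names, and use of scikit-learn's DictionaryLearning and sparse_encode are illustrative assumptions.

```python
# Minimal sketch of coupled dictionary learning with shared sparse codes
# (toy data; not the paper's region-enhanced method).
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Paired training patches: rows are samples, columns are patch features.
# X_a: source-modality patches, X_b: co-registered target (DTI-derived) patches.
n_samples, dim_a, dim_b, n_atoms = 200, 64, 64, 32
X_a = rng.standard_normal((n_samples, dim_a))
X_b = rng.standard_normal((n_samples, dim_b))

# Joint learning: stack the two modalities so every atom is a coupled pair
# of sub-atoms sharing one sparse code per training sample.
X_joint = np.hstack([X_a, X_b])
dico = DictionaryLearning(n_components=n_atoms, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=20, random_state=0)
dico.fit(X_joint)
D_a = dico.components_[:, :dim_a]   # sub-dictionary for the source modality
D_b = dico.components_[:, dim_a:]   # sub-dictionary for the target modality

# Synthesis: encode unseen source patches with D_a, reuse the codes with D_b.
X_a_test = rng.standard_normal((10, dim_a))
codes = sparse_encode(X_a_test, D_a, algorithm="lasso_lars", alpha=0.1)
X_b_pred = codes @ D_b
print(X_b_pred.shape)  # (10, 64): synthesized target-modality patches
```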

Cite (APA)
Wang, D., Huang, Y., & Frangi, A. F. (2017). Region-enhanced joint dictionary learning for cross-modality synthesis in diffusion tensor imaging. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10557 LNCS, pp. 41–48). Springer Verlag. https://doi.org/10.1007/978-3-319-68127-6_5
