Deep learning based imaging data completion for improved brain disease diagnosis

362 citations · 419 Mendeley readers


Abstract

Combining multi-modality brain data for disease diagnosis commonly leads to improved performance. A challenge in using multi-modality data is that the data are often incomplete; that is, some modality may be missing for some subjects. In this work, we propose a deep learning-based framework for estimating multi-modality imaging data. Our method takes the form of a convolutional neural network, where the input and output are two volumetric modalities. The network contains a large number of trainable parameters that capture the relationship between the input and output modalities. When trained on subjects with all modalities, the network can estimate the output modality given the input modality. We evaluated our method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, where the input and output modalities are MRI and PET images, respectively. Results showed that our method significantly outperformed prior methods. © 2014 Springer International Publishing.
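The abstract describes a convolutional network whose input and output are both volumetric images (e.g. an MRI volume in, an estimated PET volume out). The toy sketch below is not the authors' architecture; it only illustrates the core building block such a model relies on, a 3D convolution applied to a volume, implemented naively in numpy with hypothetical sizes chosen for illustration.

```python
import numpy as np

def conv3d(volume, kernels, bias):
    """Naive 'valid' 3D convolution.

    volume:  (D, H, W) input volume (e.g. an MRI patch)
    kernels: (K, kd, kh, kw) bank of K trainable 3D filters
    bias:    (K,) per-filter bias
    returns: (K, D-kd+1, H-kh+1, W-kw+1) feature volumes
    """
    K, kd, kh, kw = kernels.shape
    D, H, W = volume.shape
    out = np.zeros((K, D - kd + 1, H - kh + 1, W - kw + 1))
    for k in range(K):
        for d in range(out.shape[1]):
            for h in range(out.shape[2]):
                for w in range(out.shape[3]):
                    patch = volume[d:d + kd, h:h + kh, w:w + kw]
                    out[k, d, h, w] = np.sum(patch * kernels[k]) + bias[k]
    return out

# Hypothetical sizes: an 8x8x8 input patch, four 3x3x3 filters.
rng = np.random.default_rng(0)
mri_patch = rng.standard_normal((8, 8, 8))
kernels = rng.standard_normal((4, 3, 3, 3))
bias = np.zeros(4)

# One conv layer followed by a ReLU nonlinearity; a full model would
# stack several such layers and learn the kernels from subjects that
# have both modalities.
features = np.maximum(conv3d(mri_patch, kernels, bias), 0.0)
print(features.shape)  # (4, 6, 6, 6)
```

In a trained network of this kind, the filter weights are what encode the relationship between the two modalities; at test time, applying the learned layers to the available modality produces the estimate of the missing one.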

Citation (APA)

Li, R., Zhang, W., Suk, H. I., Wang, L., Li, J., Shen, D., & Ji, S. (2014). Deep learning based imaging data completion for improved brain disease diagnosis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8675 LNCS, pp. 305–312). Springer Verlag. https://doi.org/10.1007/978-3-319-10443-0_39
