Multi-stage diagnosis of Alzheimer’s disease with incomplete multimodal data via multi-task deep learning

Abstract

Utilizing biomedical data from multiple modalities improves diagnostic accuracy for neurodegenerative diseases. However, multi-modality data are often incomplete because not all data can be collected for every individual. When such incomplete data are used for diagnosis, current approaches to the missing-data problem, such as imputation, matrix completion, and multi-task learning, implicitly assume a linear data-to-label relationship, which limits their performance. We therefore propose multi-task deep learning for incomplete data, where prediction tasks associated with different modality combinations are learned jointly to improve the performance of each task. Specifically, we devise a multi-input multi-output deep learning framework and train our deep network subnet-wise, partially updating its weights according to which modalities are available. Experimental results on the ADNI dataset show that our method outperforms state-of-the-art methods.
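
As a rough illustration of the subnet-wise training described in the abstract, the sketch below (ours, not the authors' code) pairs modality-specific encoder subnets with one output head per modality combination; the modality names, feature dimensions, and layer sizes are hypothetical. Because a batch is routed only through the subnets whose modalities are present, gradients reach only those weights, giving the partial weight update the abstract describes.

```python
# Minimal PyTorch sketch of multi-task training with missing modalities.
# Assumes two modalities (MRI, PET) and three tasks (MRI-only, PET-only, both);
# all dimensions and names are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self, mri_dim=93, pet_dim=93, hidden=64, n_classes=3):
        super().__init__()
        # Modality-specific encoder subnets, shared across tasks.
        self.mri_enc = nn.Sequential(nn.Linear(mri_dim, hidden), nn.ReLU())
        self.pet_enc = nn.Sequential(nn.Linear(pet_dim, hidden), nn.ReLU())
        # One output head per modality combination (one prediction task each).
        self.head_mri = nn.Linear(hidden, n_classes)
        self.head_pet = nn.Linear(hidden, n_classes)
        self.head_both = nn.Linear(2 * hidden, n_classes)

    def forward(self, mri=None, pet=None):
        # Route the input through only the subnets whose modality is present.
        if mri is not None and pet is not None:
            fused = torch.cat([self.mri_enc(mri), self.pet_enc(pet)], dim=1)
            return self.head_both(fused)
        if mri is not None:
            return self.head_mri(self.mri_enc(mri))
        return self.head_pet(self.pet_enc(pet))

model = MultiModalNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch):
    # batch: dict with optional 'mri' / 'pet' feature tensors and a 'label' tensor.
    opt.zero_grad()
    logits = model(mri=batch.get("mri"), pet=batch.get("pet"))
    loss = loss_fn(logits, batch["label"])
    loss.backward()  # Gradients flow only through the subnets actually used,
    opt.step()       # so the update is partial when a modality is missing.
    return loss.item()
```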

Citation (APA)

Thung, K. H., Yap, P. T., & Shen, D. (2017). Multi-stage diagnosis of Alzheimer’s disease with incomplete multimodal data via multi-task deep learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10553 LNCS, pp. 160–168). Springer Verlag. https://doi.org/10.1007/978-3-319-67558-9_19
