Dual-Graph Learning Convolutional Networks for Interpretable Alzheimer’s Disease Diagnosis

Abstract

In this paper, we propose a dual-graph learning convolutional network (dGLCN) to achieve interpretable Alzheimer’s disease (AD) diagnosis by jointly investigating subject graph learning and feature graph learning in the graph convolutional network (GCN) framework. Specifically, we first construct two initial graphs to capture both subject diversity and feature diversity. We then fuse these two graphs into the GCN framework so that they are iteratively updated (i.e., dual-graph learning) during representation learning. As a result, the dGLCN achieves interpretability over both subjects and brain regions through the learned subject importance and feature importance, and improves generalizability by mitigating issues such as limited and noisy subjects. Experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) datasets show that our dGLCN outperforms all comparison methods for binary classification. The code for dGLCN is available at https://github.com/xiaotingsong/dGLCN.
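To make the dual-graph idea concrete, the following is a minimal NumPy sketch of a single forward propagation step that combines a subject graph (subjects × subjects) and a feature graph (features × features). This is an illustrative assumption, not the paper's exact layer definition: the iterative graph updates, losses, and importance scores from the paper are not reproduced here.

```python
import numpy as np

def normalize_adj(a):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as is standard for GCNs.
    a = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def dual_graph_conv(x, a_subj, a_feat, w):
    # Propagate the subject-by-feature matrix x along the subject graph (rows)
    # and the feature graph (columns), then apply a linear projection and ReLU.
    # In the paper both graphs are learned; here they are fixed for illustration.
    return np.maximum(normalize_adj(a_subj) @ x @ normalize_adj(a_feat) @ w, 0.0)

# Toy example: 6 subjects, 4 brain-region features, 3 hidden units (all hypothetical).
rng = np.random.default_rng(0)
n, d, h = 6, 4, 3
x = rng.standard_normal((n, d))
a_subj = (rng.random((n, n)) > 0.5).astype(float)
a_subj = (a_subj + a_subj.T) / 2          # symmetrize the subject graph
a_feat = (rng.random((d, d)) > 0.5).astype(float)
a_feat = (a_feat + a_feat.T) / 2          # symmetrize the feature graph
w = rng.standard_normal((d, h))

out = dual_graph_conv(x, a_subj, a_feat, w)
print(out.shape)  # (6, 3): one hidden representation per subject
```

In the full method the two adjacency matrices are parameters updated jointly with the representation, which is what yields the subject- and feature-level importance scores used for interpretation.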

Citation (APA)

Xiao, T., Zeng, L., Shi, X., Zhu, X., & Wu, G. (2022). Dual-Graph Learning Convolutional Networks for Interpretable Alzheimer’s Disease Diagnosis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13438 LNCS, pp. 406–415). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16452-1_39
