Deep multimodal learning for the diagnosis of autism spectrum disorder

Citations: 59
Mendeley readers: 95

Abstract

Recent medical imaging technologies, specifically functional magnetic resonance imaging (fMRI), have advanced the diagnosis of neurological and neurodevelopmental disorders by allowing scientists and physicians to observe the activity within and between different regions of the brain. Deep learning methods have frequently been implemented to analyze images produced by such technologies and perform disease classification tasks; however, current state-of-the-art approaches do not take advantage of all the information offered by fMRI scans. In this paper, we propose a deep multimodal model that learns a joint representation from two types of connectomic data offered by fMRI scans. Incorporating two functional imaging modalities in an automated end-to-end autism diagnosis system will offer a more comprehensive picture of the neural activity, and thus allow for more accurate diagnoses. Our multimodal training strategy achieves a classification accuracy of 74% and a recall of 95%, as well as an F1 score of 0.805, and its overall performance is superior to using only one type of functional data.
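The joint-representation idea described above — encode each fMRI-derived connectomic modality with its own branch, then fuse the branch outputs into a shared representation that feeds a binary ASD/control classifier — can be sketched in plain NumPy. This is a minimal illustration only: the branch sizes, the single-layer encoders, and the concatenation-based fusion are assumptions for demonstration, not the architecture or dimensions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, w, b):
    """One modality-specific encoder layer (ReLU)."""
    return np.maximum(0.0, x @ w + b)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical feature sizes for the two connectomic modalities
# (e.g., flattened functional-connectivity features); the paper's
# actual input dimensions and layer widths are not reproduced here.
d1, d2, h, k = 64, 64, 32, 16

w1, b1 = rng.standard_normal((d1, h)) * 0.1, np.zeros(h)
w2, b2 = rng.standard_normal((d2, h)) * 0.1, np.zeros(h)
wj, bj = rng.standard_normal((2 * h, k)) * 0.1, np.zeros(k)
wo, bo = rng.standard_normal((k, 1)) * 0.1, np.zeros(1)

def predict(x1, x2):
    """Encode each modality, fuse by concatenation, classify ASD vs. control."""
    h1 = branch(x1, w1, b1)
    h2 = branch(x2, w2, b2)
    joint = branch(np.concatenate([h1, h2], axis=1), wj, bj)  # shared representation
    return sigmoid(joint @ wo + bo)  # probability per subject

# Two synthetic "subjects", one feature vector per modality each
x1 = rng.standard_normal((2, d1))
x2 = rng.standard_normal((2, d2))
p = predict(x1, x2)
print(p.shape)  # one probability column per subject: (2, 1)
```

In a trained version of such a model, the fusion layer is what lets evidence from both functional modalities interact before the diagnostic decision, which is the mechanism the abstract credits for outperforming either modality alone.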

Citation (APA)

Tang, M., Kumar, P., Chen, H., & Shrivastava, A. (2020). Deep multimodal learning for the diagnosis of autism spectrum disorder. Journal of Imaging, 6(6). https://doi.org/10.3390/jimaging6060047
