ROI-Based Multimodal Neuroimaging Feature Fusion Method and Its Graph Neural Network Diagnostic Model


Abstract

Single-modality neuroimaging data often provide limited information and are constrained by technical issues such as signal-to-noise ratio and resolution limitations, potentially leading to biases and an incomplete understanding of brain complexity. This can hinder the development of diagnostic and therapeutic strategies for brain disorders. To address these challenges, this paper presents a Multimodal Graph Neural Network Model based on Feature Fusion (MMP-DGNN), which leverages sMRI and PET data. The model uses an autoencoder to extract and accurately describe sample features. During feature fusion, a shared adjacency matrix based on feature similarity and phenotypic data is constructed for graph representation. A dual-layer graph neural network then classifies the features, and the per-modality results are fused at the decision layer for final classification. Experimental results show that MMP-DGNN achieves a classification accuracy of 98.17%, outperforming other methods in multimodal neuroimaging data classification.
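The two graph-related steps the abstract names, building a shared adjacency matrix from feature similarity combined with phenotypic data, and passing features through a dual-layer graph neural network, can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's implementation: the similarity measure (cosine), the phenotypic-agreement rule (matching categorical labels), the threshold, and the GCN-style symmetric normalization are all assumptions.

```python
import numpy as np

def build_shared_adjacency(features, phenotypes, threshold=0.5):
    """Adjacency from feature similarity gated by phenotypic agreement.

    features:   (n_subjects, n_features) fused ROI feature vectors
    phenotypes: (n_subjects,) categorical labels (e.g. sex or scan site)
    NOTE: cosine similarity and the agreement rule are illustrative choices,
    not necessarily those used in the paper.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    sim = (features @ features.T) / (norms @ norms.T + 1e-8)
    # 1 where two subjects share the phenotype label, 0 otherwise
    agree = (phenotypes[:, None] == phenotypes[None, :]).astype(float)
    adj = sim * agree
    adj[adj < threshold] = 0.0   # sparsify weak edges
    np.fill_diagonal(adj, 1.0)   # self-loops keep each node's own features
    return adj

def two_layer_gnn(adj, x, w1, w2):
    """Forward pass of a two-layer GCN-style network (ReLU between layers)."""
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    a_hat = d_inv_sqrt @ adj @ d_inv_sqrt   # symmetric normalization
    h = np.maximum(a_hat @ x @ w1, 0.0)     # layer 1 + ReLU
    return a_hat @ h @ w2                   # layer 2 -> class logits
```

Decision-level fusion, as described in the abstract, would then combine the logits produced by running this pipeline separately on the sMRI-derived and PET-derived graphs (for example by averaging them) before the final argmax.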

Citation (APA)
Wang, X., Yang, X., Zhang, X., & Chen, Y. (2024). ROI-Based Multimodal Neuroimaging Feature Fusion Method and Its Graph Neural Network Diagnostic Model. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3435433
