Recent studies have combined multiple neuroimaging modalities to gain further understanding of the neurobiological substrates of aphasia. Following this line of work, the current study uses machine learning approaches to predict aphasia severity and specific language measures based on a multimodal neuroimaging dataset. A total of 116 individuals with chronic left-hemisphere stroke were included in the study. Neuroimaging data included task-based functional magnetic resonance imaging (fMRI), diffusion-based fractional anisotropy (FA) values, cerebral blood flow (CBF), and lesion-load data. The Western Aphasia Battery was used to measure aphasia severity and specific language functions. As a primary analysis, we constructed support vector regression (SVR) models predicting language measures based on (i) each neuroimaging modality separately, (ii) lesion volume alone, and (iii) a combination of all modalities. Prediction accuracy across models was subsequently statistically compared. Prediction accuracy across modalities and language measures varied substantially (predicted vs. empirical correlation range: r = .00–.67). The multimodal prediction model yielded the most accurate prediction in all cases (r = .53–.67). Statistical superiority in favor of the multimodal model was achieved in 28/30 model comparisons (p-value range:
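The modeling pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the feature blocks, their dimensions, and the simulated language score are all hypothetical stand-ins for the study's fMRI, FA, CBF, and lesion-load data, and scikit-learn's SVR with cross-validated prediction is assumed as the implementation. Accuracy is scored as the correlation between predicted and empirical values, mirroring the r-values reported in the abstract.

```python
# Hedged sketch of multimodal SVR prediction; all data are synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 116  # sample size matching the study's cohort

# Hypothetical per-modality feature blocks (dimensions are illustrative)
fmri = rng.normal(size=(n, 20))         # task-based fMRI features
fa = rng.normal(size=(n, 20))           # diffusion FA values
cbf = rng.normal(size=(n, 20))          # cerebral blood flow
lesion_load = rng.normal(size=(n, 10))  # lesion-load features

# Simulated language score carrying some signal from each modality
y = (fmri[:, 0] + fa[:, 0] + cbf[:, 0] + lesion_load[:, 0]
     + rng.normal(scale=1.0, size=n))

# Multimodal model: concatenate all feature blocks
X_multimodal = np.hstack([fmri, fa, cbf, lesion_load])

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
y_pred = cross_val_predict(model, X_multimodal, y, cv=5)

# Predicted vs. empirical correlation (the abstract's accuracy metric)
r = np.corrcoef(y, y_pred)[0, 1]
print(round(float(r), 2))
```

A single-modality model would use only one block (e.g. `fmri`) as `X`, and the study's comparison amounts to contrasting the resulting r-values across models.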
Kristinsson, S., Zhang, W., Rorden, C., Newman-Norlund, R., Basilakos, A., Bonilha, L., … Fridriksson, J. (2021). Machine learning-based multimodal prediction of language outcomes in chronic aphasia. Human Brain Mapping, 42(6), 1682–1698. https://doi.org/10.1002/hbm.25321