Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to distinct sensitivities to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between these bottom-up and top-down processes remains to be clarified. In the present study, we investigated how acoustics and attention to melodies or sentences contribute to the lateralisation of fMRI functional network topology. We used sung speech stimuli selectively filtered in the temporal or spectral modulation domains, with crossed and balanced verbal and melodic content. Perception of speech decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph-theoretical metrics to fMRI connectivity matrices, we found that local clustering, which reflects functional specialisation, increased linearly as spectral or temporal cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences and in right auditory regions for processing spectrally degraded melodies. In contrast, global network topology remained stable across conditions. These findings suggest that lateralisation for speech and music partially depends on an interplay of acoustic cues and task goals under increased attentional demands.
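To illustrate the kind of graph metric referred to above, the following is a minimal sketch of computing node-wise local clustering coefficients from a functional connectivity matrix. It is not the authors' actual pipeline: the random connectivity data, the region count, and the binarisation threshold are illustrative assumptions, and the metric is computed here with the networkx library.

    # Minimal sketch (illustrative, not the study's pipeline): local
    # clustering coefficients from a hypothetical fMRI connectivity matrix.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # Stand-in for a region-by-region functional connectivity matrix
    # (e.g., pairwise correlations between regional BOLD time series).
    n_regions = 90  # assumed atlas size, illustrative only
    conn = rng.uniform(-1, 1, size=(n_regions, n_regions))
    conn = (conn + conn.T) / 2   # symmetrise
    np.fill_diagonal(conn, 0)    # no self-connections

    # Binarise at an illustrative threshold to obtain an undirected graph.
    adjacency = (conn > 0.5).astype(int)
    G = nx.from_numpy_array(adjacency)

    # Local clustering coefficient per node: how densely a region's
    # neighbours are interconnected (a proxy for functional specialisation).
    local_clustering = nx.clustering(G)

    # Global summary: mean clustering across the whole network.
    print("mean local clustering:",
          np.mean(list(local_clustering.values())))

In the abstract's framing, higher local clustering around a node reflects greater functional specialisation of that region's neighbourhood, whereas global topology metrics summarise the network as a whole.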
Haiduk, F., Zatorre, R. J., Benjamin, L., Morillon, B., & Albouy, P. (2024). Spectrotemporal cues and attention jointly modulate fMRI network topology for sentence and melody perception. Scientific Reports, 14(1). https://doi.org/10.1038/s41598-024-56139-6