This paper is part of a larger study examining cross-linguistic perception of sad and happy speech when the information is conveyed semantically (linguistic) or prosodically (affective). Here we examine American English and Japanese speakers' ability to perceive emotion in Japanese utterances. Native listeners were expected to outperform non-natives at perceiving emotion expressed semantically, because they have access to the semantic information. However, Japanese listeners, like American English listeners, were not successful at discriminating emotion from the semantic content of the utterances. Both native and non-native listeners could perceive whether a speaker was sad or happy from the affective prosody. These results indicate that sadness and happiness are expressed in the same way universally, even in the auditory modality. Acoustic analysis showed differences in intensity, mora duration, and F0 range across the linguistic, affective, and neutral utterances and across the sad, happy, and neutral emotions. Linguistic utterances showed acoustic differences among the three emotional states in addition to differences in semantic content.
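As an illustration only, and not the procedure reported in the study, the following minimal sketch shows how two of the acoustic measures mentioned above (F0 range and mean intensity) might be extracted from a single recording using the parselmouth library, a Python interface to Praat. The file name and the default analysis settings are assumptions.

    # Sketch: per-utterance F0 range and mean intensity via parselmouth (Praat).
    # File name and analysis defaults are assumptions for illustration only.
    import parselmouth

    snd = parselmouth.Sound("utterance.wav")   # hypothetical recording

    # F0 contour with Praat's default pitch analysis; unvoiced frames are 0 Hz
    pitch = snd.to_pitch()
    f0 = pitch.selected_array['frequency']
    f0 = f0[f0 > 0]                            # keep voiced frames only
    f0_range = f0.max() - f0.min()             # F0 range in Hz

    # Mean intensity (dB) over the utterance
    intensity = snd.to_intensity()
    mean_db = intensity.values.mean()

    print(f"F0 range: {f0_range:.1f} Hz, mean intensity: {mean_db:.1f} dB")

Mora durations, also compared in the study, would additionally require a segmentation of each utterance (e.g., a TextGrid with mora boundaries), which is not shown here.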
Menezes, C., Erickson, D., & Franks, C. (2010). Comparison between linguistic and affective perception of sad and happy - A cross-linguistic study. In Proceedings of the International Conference on Speech Prosody. International Speech Communication Association. https://doi.org/10.21437/speechprosody.2010-112