Comparison between linguistic and affective perception of sad and happy - A cross-linguistic study


Abstract

This paper is part of a larger study that examines cross-linguistic perception of sad and happy speech when the information is transmitted semantically (linguistic) or prosodically (affective). Here we examine American English and Japanese speakers' ability to perceive emotions in Japanese utterances. Native listeners were expected to be better than non-natives at perceiving emotion expressed semantically, because only they have access to the semantic information. However, Japanese listeners, like American English listeners, were not successful in discriminating emotion from the semantic content of the utterances. Both native and non-native listeners could perceive that a speaker was sad or happy through the affective prosody. These results show that sad and happy are universally expressed the same way, even in the auditory modality. Acoustic analysis showed differences in intensity, mora duration and F0 range between the linguistic, affective and neutral utterances, and between the sad, happy and neutral emotions. Linguistic utterances revealed acoustic differences between the three emotional states in addition to the differences in semantic content.
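The acoustic measures named above (intensity, mora duration, F0 range) can be extracted with standard phonetics tooling. The following is a minimal sketch, assuming Python with the parselmouth library (a Praat wrapper); the file name, the pitch floor and ceiling, and the choice to compute F0 range as max minus min over voiced frames are illustrative assumptions, not the authors' procedure.

    import numpy as np
    import parselmouth

    # Load one utterance (file name is a placeholder).
    snd = parselmouth.Sound("utterance.wav")

    # F0 contour; parselmouth returns 0 Hz for unvoiced frames.
    pitch = snd.to_pitch(pitch_floor=75.0, pitch_ceiling=500.0)
    f0 = pitch.selected_array["frequency"]
    voiced = f0[f0 > 0]
    f0_range = voiced.max() - voiced.min() if voiced.size else 0.0  # Hz

    # Mean intensity in dB over the utterance (a simplification; Praat
    # also offers energy-weighted means).
    intensity = snd.to_intensity()
    mean_db = float(np.mean(intensity.values))

    print(f"F0 range: {f0_range:.1f} Hz, mean intensity: {mean_db:.1f} dB")

    # Mora durations would additionally require a time-aligned
    # segmentation (e.g., a Praat TextGrid), which this sketch omits.

Comparing such values across the linguistic, affective and neutral conditions would then be a matter of running this per utterance and aggregating by condition and emotion.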

Citation (APA)

Menezes, C., Erickson, D., & Franks, C. (2010). Comparison between linguistic and affective perception of sad and happy - A cross-linguistic study. In Proceedings of the International Conference on Speech Prosody. International Speech Communication Association. https://doi.org/10.21437/speechprosody.2010-112
