Emotion in speech: Towards an integration of linguistic, paralinguistic, and psychological analysis

Abstract

If speech analysis is to detect a speaker's emotional state, it needs to derive information from both linguistic information, i.e., the qualitative targets that the speaker has attained (or approximated) in conforming to the rules of language, and paralinguistic information, i.e., the allowed variations in the way those qualitative linguistic targets are realised. It also needs an appropriate representation of emotional states. The ERMIS project addresses the integration problem that these requirements pose. It comprises, principally, a paralinguistic analysis module and a robust speech-recognition module. Descriptions of emotionality are derived from these modules, guided by psychological and linguistic research that indicates what information is likely to be available. We argue that progress in registering emotional states depends on establishing an overall framework of at least this level of complexity. © Springer-Verlag Berlin Heidelberg 2003.
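The two-module architecture the abstract describes can be illustrated with a deliberately minimal sketch. Everything here is an assumption for illustration: the class names, the feature choices (pitch, energy, a toy negative-word list), and the rule-based fusion are hypothetical stand-ins, not the actual ERMIS modules or their representation of emotional states. The point is only the shape of the integration: linguistic output (what was said) and paralinguistic output (how it was said) feed one combined description of emotionality.

```python
from dataclasses import dataclass

# Hypothetical containers for the two information streams; the real
# ERMIS modules produce far richer output than this.

@dataclass
class LinguisticInfo:         # stand-in for the speech-recognition module
    words: list[str]          # qualitative targets the speaker attained

@dataclass
class ParalinguisticInfo:     # stand-in for the paralinguistic module
    mean_pitch_hz: float      # variation in how the targets are realised
    mean_energy_db: float

def describe_emotionality(ling: LinguisticInfo,
                          para: ParalinguisticInfo) -> dict:
    """Toy fusion rule: combine both streams into one description of
    emotional state (a coarse label plus the evidence behind it)."""
    negative_words = {"terrible", "awful", "hate"}  # illustrative lexicon
    lexical_negativity = sum(w.lower() in negative_words for w in ling.words)
    # Crude arousal cue from prosody: raised pitch together with raised energy.
    aroused = para.mean_pitch_hz > 200 and para.mean_energy_db > 65
    if lexical_negativity and aroused:
        label = "angry"
    elif aroused:
        label = "excited"
    elif lexical_negativity:
        label = "sad"
    else:
        label = "neutral"
    return {"label": label,
            "lexical_negativity": lexical_negativity,
            "aroused": aroused}

print(describe_emotionality(
    LinguisticInfo(words=["this", "is", "terrible"]),
    ParalinguisticInfo(mean_pitch_hz=240.0, mean_energy_db=70.0),
))  # combines negative wording with aroused prosody into "angry"
```

Note how neither stream alone suffices: the same words at low pitch and energy, or the same prosody over neutral words, would each yield a different label, which is the integration problem the abstract argues for.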

Citation (APA)

Fotinea, S. E., Bakamidis, S., Athanaselis, T., Dologlou, I., Carayannis, G., Cowie, R., … Taylor, J. G. (2003). Emotion in speech: Towards an integration of linguistic, paralinguistic, and psychological analysis. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2714, 1125–1132. https://doi.org/10.1007/3-540-44989-2_134
