Emotion recognition using physiological and speech signal in short-term observation


Abstract

Recently, there has been a significant amount of work on the recognition of emotions from visual, verbal, or physiological information. Most approaches to emotion recognition so far, however, concentrate on a single modality, while work on the integration of multimodal information, in particular on fusing physiological signals with verbal or visual data, is scarce. In this paper, we analyze various methods for fusing physiological and vocal information and compare the recognition results of the bimodal approach with those of the unimodal approach.
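
The abstract does not spell out which fusion schemes are compared. As a rough, non-authoritative illustration of one common option, the Python sketch below shows feature-level (early) fusion, where per-segment physiological and speech feature vectors are concatenated and classified jointly. The feature dimensions, the synthetic data, and the scikit-learn SVM classifier are assumptions made only for illustration, not the paper's actual setup.

# Illustrative sketch of feature-level (early) fusion for bimodal emotion
# recognition: physiological and speech features are concatenated and fed
# to a single classifier. All dimensions and data here are synthetic
# placeholders, not the features or results reported in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_physio, n_speech, n_classes = 200, 12, 20, 4

# Stand-ins for per-segment features (e.g., statistics of biosignals and
# prosodic/spectral statistics of speech); replace with real features.
X_physio = rng.normal(size=(n_samples, n_physio))
X_speech = rng.normal(size=(n_samples, n_speech))
y = rng.integers(0, n_classes, size=n_samples)  # emotion class labels

# Early fusion: concatenate the two modalities into one feature vector.
X_fused = np.hstack([X_physio, X_speech])

X_train, X_test, y_train, y_test = train_test_split(
    X_fused, y, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("bimodal accuracy:", accuracy_score(y_test, clf.predict(X_test)))

A unimodal baseline for comparison is obtained by training the same pipeline on X_physio or X_speech alone instead of X_fused; decision-level (late) fusion would instead train one classifier per modality and combine their outputs.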

Citation (APA)

Kim, J., & André, E. (2006). Emotion recognition using physiological and speech signal in short-term observation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4021 LNAI, pp. 53–64). Springer Verlag. https://doi.org/10.1007/11768029_6
