Recently, there has been a significant amount of work on the recognition of emotions from visual, verbal, or physiological information. Most approaches to emotion recognition to date, however, concentrate on a single modality, while work on the integration of multimodal information, in particular on fusing physiological signals with verbal or visual data, remains scarce. In this paper, we analyze various methods for fusing physiological and vocal information and compare the recognition results of the bimodal approach with those of the unimodal approaches. © Springer-Verlag Berlin Heidelberg 2006.
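The two standard ways to fuse modalities mentioned in this line of work are feature-level fusion (concatenating modality-specific feature vectors before classification) and decision-level fusion (combining the outputs of separate unimodal classifiers). The following is a minimal illustrative sketch of both strategies, not the authors' implementation; the feature arrays, class posteriors, and the weight `w` are all hypothetical:

```python
import numpy as np

def feature_level_fusion(physio_feats: np.ndarray, speech_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-sample feature vectors from the two modalities.

    Both inputs have shape (n_samples, n_features_modality); the result
    has shape (n_samples, n_physio + n_speech) and would be fed to a
    single classifier.
    """
    return np.concatenate([physio_feats, speech_feats], axis=1)

def decision_level_fusion(p_physio: np.ndarray, p_speech: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of per-class posteriors from two unimodal classifiers.

    `w` weights the physiological classifier; (1 - w) weights the speech
    classifier. The fused prediction is the argmax of the result.
    """
    return w * p_physio + (1.0 - w) * p_speech

# Hypothetical example: 2 samples, 2 physiological and 1 speech feature.
physio = np.array([[0.1, 0.2], [0.3, 0.4]])
speech = np.array([[1.0], [2.0]])
fused_feats = feature_level_fusion(physio, speech)   # shape (2, 3)

# Hypothetical per-class posteriors for one sample over 3 emotion classes.
p_physio = np.array([0.6, 0.3, 0.1])
p_speech = np.array([0.2, 0.5, 0.3])
fused_probs = decision_level_fusion(p_physio, p_speech, w=0.4)
predicted_class = int(np.argmax(fused_probs))
```

Feature-level fusion lets one classifier model cross-modal correlations, while decision-level fusion keeps the modalities' classifiers independent and only merges their confidences.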
Kim, J., & André, E. (2006). Emotion recognition using physiological and speech signal in short-term observation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4021 LNAI, pp. 53–64). Springer Verlag. https://doi.org/10.1007/11768029_6