Fusing face and body display for Bi-modal emotion recognition: Single frame analysis and multi-frame post integration

27 citations · 33 Mendeley readers

Abstract

This paper presents an approach to automatic visual emotion recognition from two modalities: expressive face and body gesture. Face and body movements are captured simultaneously using two separate cameras. For each face and body image sequence, single "expressive" frames are selected manually for the analysis and recognition of emotions. First, individual classifiers are trained on each modality for mono-modal emotion recognition. Second, facial expression and affective body gesture information are fused at the feature level and at the decision level. In the experiments performed, emotion classification using both modalities achieved higher recognition accuracy than classification using the facial modality alone. The affect analysis is further extended to whole image sequences by a multi-frame post-integration approach applied to the single-frame recognition results. In our experiments, post integration based on the fusion of face and body proved more accurate than post integration based on the facial modality alone. © Springer-Verlag Berlin Heidelberg 2005.
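The abstract names three generic techniques: feature-level fusion (concatenating modality features before a single classifier), decision-level fusion (combining per-modality classifier outputs), and multi-frame post integration over single-frame results. A minimal sketch of these ideas, with illustrative feature dimensions, weights, and function names that are assumptions rather than details from the paper:

```python
import numpy as np
from collections import Counter

# Hypothetical extracted feature vectors for one "expressive" frame
# (dimensions are illustrative only, not taken from the paper).
face_features = np.ones(10)
body_features = np.zeros(8)

def feature_level_fusion(face, body):
    """Concatenate modality features into one vector for a single classifier."""
    return np.concatenate([face, body])

def decision_level_fusion(face_probs, body_probs, w_face=0.5):
    """Combine per-modality class posteriors, e.g. by weighted averaging."""
    return w_face * face_probs + (1 - w_face) * body_probs

def multi_frame_post_integration(frame_labels):
    """Majority vote over per-frame labels, one simple post-integration rule."""
    return Counter(frame_labels).most_common(1)[0][0]

# Feature-level fusion: the two vectors become one 18-dimensional input.
fused = feature_level_fusion(face_features, body_features)

# Decision-level fusion: hypothetical posteriors from two mono-modal classifiers.
face_probs = np.array([0.7, 0.2, 0.1])
body_probs = np.array([0.5, 0.3, 0.2])
combined = decision_level_fusion(face_probs, body_probs)  # [0.6, 0.25, 0.15]

# Post integration over single-frame predictions from a sequence.
sequence_label = multi_frame_post_integration(["happy", "happy", "sad"])
```

The weighted-average combiner and majority vote stand in for whatever fusion and integration rules the paper actually evaluates; the point is only the order of operations: features merge before classification in one strategy, and classifier decisions merge after it in the other.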

Citation (APA)

Gunes, H., & Piccardi, M. (2005). Fusing face and body display for Bi-modal emotion recognition: Single frame analysis and multi-frame post integration. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3784 LNCS, pp. 102–111). https://doi.org/10.1007/11573548_14
