Adaptive on-line neural network retraining for real life multimodal emotion recognition

Abstract

Emotions play a major role in human-to-human communication, enabling people to express themselves beyond the verbal domain. In recent years, important advances have been made in unimodal speech and video emotion analysis, where facial expression information and prosodic audio features are treated independently. However, the two modalities clearly need to be combined in naturalistic contexts, where adaptation to the characteristics and expressivity of specific individuals is required and where a single modality alone cannot provide satisfactory evidence. This paper proposes appropriate neural network classifiers for multimodal emotion analysis within an adaptive framework, which is able to activate retraining of each modality whenever deterioration of the respective performance is detected. Results are presented on the IST HUMAINE NoE naturalistic database: both facial expression information and prosodic audio features are extracted from the same data, and feature-based emotion analysis is performed through the proposed adaptive neural network methodology. © Springer-Verlag Berlin Heidelberg 2006.
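The abstract does not give implementation details of the retraining trigger. As a rough illustration only, the sketch below shows one way such a mechanism could look in Python with scikit-learn: a per-modality neural classifier whose recent prediction confidence is tracked over a sliding window, with retraining activated when the mean confidence deteriorates below a threshold. All names, thresholds, and the confidence-based deterioration test are assumptions, not the authors' actual method.

```python
"""Hypothetical sketch of confidence-triggered retraining for one modality.
Class and parameter names are illustrative, not from the paper."""
from collections import deque

import numpy as np
from sklearn.neural_network import MLPClassifier


class AdaptiveModality:
    """Wraps a per-modality neural classifier (e.g. facial or prosodic
    features) and retrains it when recent prediction confidence degrades."""

    def __init__(self, window=50, conf_threshold=0.6):
        self.clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        self.recent_conf = deque(maxlen=window)   # sliding confidence window
        self.conf_threshold = conf_threshold
        self.buffer_X, self.buffer_y = [], []     # newly annotated samples

    def fit(self, X, y):
        """Initial off-line training of the modality classifier."""
        self.clf.fit(X, y)

    def predict(self, x):
        """Classify one feature vector and record the peak class probability
        as a crude proxy for current performance."""
        proba = self.clf.predict_proba(x.reshape(1, -1))[0]
        self.recent_conf.append(proba.max())
        return int(np.argmax(proba))

    def observe(self, x, y):
        """Store a sample whose true emotion label later became available."""
        self.buffer_X.append(x)
        self.buffer_y.append(y)

    def maybe_retrain(self):
        """Activate retraining when mean confidence over a full window has
        deteriorated below the threshold and enough labeled data exists."""
        if (len(self.recent_conf) == self.recent_conf.maxlen
                and np.mean(self.recent_conf) < self.conf_threshold
                and len(set(self.buffer_y)) > 1):  # need >1 class to refit
            self.clf.fit(np.array(self.buffer_X), np.array(self.buffer_y))
            self.recent_conf.clear()
```

In the paper's setting, one such wrapper would presumably exist per modality (facial and prosodic), with the multimodal decision fused on top; the confidence proxy used here stands in for whatever performance-deterioration signal the actual framework monitors.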

Citation (APA)

Ioannou, S., Kessous, L., Caridakis, G., Karpouzis, K., Aharonson, V., & Kollias, S. (2006). Adaptive on-line neural network retraining for real life multimodal emotion recognition. In Lecture Notes in Computer Science (Vol. 4131, pp. 81–92). Springer. https://doi.org/10.1007/11840817_9
