This work presents a software framework for real-time multimodal affect recognition. The framework supports categorical emotion models and simultaneous classification of emotional states along different dimensions. It also allows diverse state-of-the-art approaches to multimodal fusion to be incorporated, and it can adapt both to the context dependency of emotional expression and to different application requirements. Results of applying the framework to audio-video emotion recognition of audiences at different shows (a useful setting because the emotions of co-located people affect each other) confirm that the framework provides the desired functionality conveniently and demonstrate that using contextual information increases recognition accuracy. ©2009 IEEE.
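
The abstract mentions context-dependent multimodal fusion without detailing its form. A minimal sketch of one plausible scheme, decision-level fusion of audio and video classifier scores with context-dependent modality weights, is given below; all names, weights, and context labels are illustrative assumptions, not the framework's actual interface.

```python
# Hypothetical sketch of context-weighted decision-level fusion of audio and
# video emotion classifier outputs. Labels, contexts, and weights are
# illustrative assumptions, not the paper's API.
from typing import Dict

EMOTIONS = ["neutral", "positive", "negative"]  # assumed categorical model

# Assumed context-dependent modality weights (e.g. a noisy show favours video).
CONTEXT_WEIGHTS: Dict[str, Dict[str, float]] = {
    "quiet_show": {"audio": 0.6, "video": 0.4},
    "noisy_show": {"audio": 0.3, "video": 0.7},
}


def fuse(audio_scores: Dict[str, float],
         video_scores: Dict[str, float],
         context: str) -> str:
    """Return the emotion label with the highest context-weighted score."""
    w = CONTEXT_WEIGHTS[context]
    fused = {
        e: w["audio"] * audio_scores.get(e, 0.0)
           + w["video"] * video_scores.get(e, 0.0)
        for e in EMOTIONS
    }
    return max(fused, key=fused.get)


# Example: in a noisy show context the video evidence dominates the decision.
print(fuse({"neutral": 0.5, "positive": 0.3, "negative": 0.2},
           {"neutral": 0.2, "positive": 0.7, "negative": 0.1},
           context="noisy_show"))
```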
CITATION STYLE
Vildjiounaite, E., Kyllönen, V., Vuorinen, O., Mäkelä, S. M., Keränen, T., Niiranen, M., … Peltola, J. (2009). Requirements and software framework for adaptive multimodal affect recognition. In Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009. https://doi.org/10.1109/ACII.2009.5349393