On the use of multi-attribute decision making for combining audio-lingual and visual-facial modalities in emotion recognition

Abstract

In this chapter, we present and discuss a novel approach that we have developed for integrating audio-lingual and visual-facial modalities in a bi-modal user interface for affect recognition. Even though researchers acknowledge that the two modalities can provide complementary information with respect to affect recognition, satisfactory progress has not yet been achieved towards their integration. In the research reported herein, we approach the combination of the two modalities from the perspective of a human observer by employing a multi-criteria decision making theory for dynamic affect recognition of computer users. Our approach includes the specification of the strengths and weaknesses of each modality with respect to the recognition of six emotion states: happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We present two empirical studies that we have conducted involving …
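
The integration scheme outlined above lends itself to a short worked illustration. The Python sketch below uses simple additive weighting (SAW), one of the most common multi-attribute decision making methods, to fuse per-modality confidence scores into a single emotion ranking. It is not the authors' implementation; the per-emotion weights and the classifier scores are hypothetical placeholders chosen only to make the example runnable.

    # Illustrative sketch: fusing audio-lingual and visual-facial emotion
    # scores with simple additive weighting (SAW), a common multi-attribute
    # decision making method. All weights and scores are hypothetical.

    EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

    # Per-emotion weights expressing each modality's presumed strength at
    # recognizing that emotion (the two weights sum to 1 for each emotion).
    WEIGHTS = {
        "neutral":   {"audio": 0.5, "visual": 0.5},
        "happiness": {"audio": 0.4, "visual": 0.6},
        "sadness":   {"audio": 0.6, "visual": 0.4},
        "surprise":  {"audio": 0.3, "visual": 0.7},
        "anger":     {"audio": 0.6, "visual": 0.4},
        "disgust":   {"audio": 0.4, "visual": 0.6},
    }

    def combine(audio_scores, visual_scores):
        """Fuse per-modality confidences (each in [0, 1]) into one ranking.

        Returns the emotions sorted by their weighted combined score,
        best first.
        """
        combined = {
            e: WEIGHTS[e]["audio"] * audio_scores[e]
               + WEIGHTS[e]["visual"] * visual_scores[e]
            for e in EMOTIONS
        }
        return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical classifier outputs for a single user interaction.
    audio = {"neutral": 0.2, "happiness": 0.1, "sadness": 0.5,
             "surprise": 0.05, "anger": 0.1, "disgust": 0.05}
    visual = {"neutral": 0.1, "happiness": 0.05, "sadness": 0.6,
              "surprise": 0.1, "anger": 0.1, "disgust": 0.05}

    ranking = combine(audio, visual)
    print("Recognized emotion:", ranking[0][0])  # -> "sadness"

Weighting each emotion differently per modality mirrors the chapter's premise that the two modalities have distinct strengths and weaknesses; in the chapter itself, such weights would be grounded in the reported empirical studies rather than set by hand.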

Citation

Virvou, M., Tsihrintzis, G. A., Alepis, E., Stathopoulou, I. O., & Kabassi, K. (2015). On the use of multi-attribute decision making for combining audio-lingual and visual-facial modalities in emotion recognition. Smart Innovation, Systems and Technologies, 36, 7–34. https://doi.org/10.1007/978-3-319-17744-1_2
