Facial expression and hand gesture analysis plays a fundamental part in emotionally rich man-machine interaction (MMI) systems, since it employs universally accepted non-verbal cues to estimate the user's emotional state. In this paper, we present a systematic approach to extracting expression-related features from image sequences and inferring an emotional state via an intelligent rule-based system. MMI systems can benefit from these concepts by adapting their functionality and presentation to user reactions, or by employing agent-based interfaces to deal with specific emotional states, such as frustration or anger.
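The rule-based inference the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: the feature names, thresholds, and emotion labels below are assumptions chosen for illustration, standing in for the expression-related features the authors extract from image sequences.

```python
# Illustrative sketch only (not the authors' system): map a few
# hypothetical facial/gesture feature activations to a coarse
# emotional state with hand-written rules.

def infer_emotion(features):
    """Return an emotion label from a dict of feature activations
    normalized to [0, 1]. Feature names are illustrative assumptions."""
    brow_lower = features.get("brow_lowerer", 0.0)      # e.g. frowning
    lip_corner_up = features.get("lip_corner_puller", 0.0)  # e.g. smiling
    hand_clench = features.get("hand_clench", 0.0)      # gesture cue

    # A real rule-based system would combine many more cues,
    # possibly with fuzzy membership degrees rather than hard thresholds.
    if brow_lower > 0.6 and hand_clench > 0.5:
        return "anger"
    if brow_lower > 0.6:
        return "frustration"
    if lip_corner_up > 0.6:
        return "joy"
    return "neutral"

print(infer_emotion({"brow_lowerer": 0.8, "hand_clench": 0.7}))  # anger
```

In practice such rules would be derived from tracked facial and hand features over an image sequence rather than supplied directly, but the sketch shows how non-verbal cues can feed a rule-based estimate of emotional state.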
Citation:
Balomenos, T., Raouzaiou, A., Ioannou, S., Drosopoulos, A., Karpouzis, K., Kollias, S., … Bourlard, H. (2005). Machine Learning for Multimodal Interaction, 3361, 318–328. Retrieved from http://www.springerlink.com/content/by0j391g4hpr5gr4/