Machine Learning for Multimodal Interaction

  • Balomenos T
  • Raouzaiou A
  • Ioannou S
  • et al.

Abstract

Facial expression and hand gesture analysis plays a fundamental part in emotionally rich man-machine interaction (MMI) systems, since it employs universally accepted non-verbal cues to estimate the users’ emotional state. In this paper, we present a systematic approach to extracting expression-related features from image sequences and inferring an emotional state via an intelligent rule-based system. MMI systems can benefit from these concepts by adapting their functionality and presentation with respect to user reactions, or by employing agent-based interfaces to deal with specific emotional states, such as frustration or anger.
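
The abstract outlines a two-stage pipeline: expression-related features are extracted from image sequences, and an emotional state is then inferred by a rule-based system. The sketch below illustrates only the second, rule-based stage; every feature name, threshold, and rule here is an illustrative assumption, not a description of the authors' actual system.

    # Minimal sketch of rule-based emotion inference over pre-extracted
    # facial/gesture features. Feature names, thresholds, and rules are
    # hypothetical placeholders, not the method described in the paper.
    from dataclasses import dataclass

    @dataclass
    class FrameFeatures:
        """Hypothetical features extracted from one video frame."""
        eyebrow_raise: float    # 0..1, degree of eyebrow raising
        mouth_open: float       # 0..1, degree of mouth opening
        lip_corner_pull: float  # 0..1, smile intensity
        hand_speed: float       # 0..1, normalized hand movement speed

    def infer_emotion(f: FrameFeatures) -> str:
        """Map features to an emotion label with simple hand-written rules."""
        if f.lip_corner_pull > 0.6:
            return "joy"
        if f.eyebrow_raise > 0.7 and f.mouth_open > 0.5:
            return "surprise"
        if f.hand_speed > 0.8 and f.eyebrow_raise < 0.3:
            return "anger"
        return "neutral"

    sample = FrameFeatures(eyebrow_raise=0.8, mouth_open=0.6,
                           lip_corner_pull=0.1, hand_speed=0.2)
    print(infer_emotion(sample))  # prints "surprise"

In an adaptive MMI setting, the label returned by such a rule base would drive the interface response (e.g., an agent intervening when "anger" or frustration is detected over several consecutive frames).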

Citation (APA)

Balomenos, T., Raouzaiou, A., Ioannou, S., Drosopoulos, A., Karpouzis, K., Kollias, S., … Bourlard, H. (2005). Machine Learning for Multimodal Interaction, 3361, 318–328. Retrieved from http://www.springerlink.com/content/by0j391g4hpr5gr4/
