Robust feature detection for facial expression recognition


Abstract

This paper presents a robust and adaptable facial feature extraction system used for facial expression recognition in human-computer interaction (HCI) environments. Such environments are usually uncontrolled in terms of lighting and color quality, as well as human expressivity and movement; as a result, a single feature extraction technique may fail in some parts of a video sequence while performing well in others. The proposed system is based on a multicue feature extraction and fusion technique that provides MPEG-4-compatible features accompanied by a confidence measure. This confidence measure pinpoints cases where the detection of individual features may be wrong, reducing their contribution to the training phase or their weight in deducing the observed facial expression, while the fusion process ensures that the final feature values are based on the extraction technique that performed best under the particular lighting and color conditions. Real data and results are presented, involving both extreme and intermediate expression/emotional states, obtained within the Sensitive Artificial Listener HCI environment generated in the framework of related European projects.
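The confidence-weighted fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual method: it assumes each extraction technique returns a feature vector (e.g. MPEG-4-style facial parameters) together with a scalar confidence, and fuses them by confidence-weighted averaging, discarding low-confidence cues so a technique failing under the current lighting or color conditions does not corrupt the result. All names and the threshold value are hypothetical.

```python
import numpy as np

def fuse_features(estimates, confidences, min_conf=0.2):
    """Confidence-weighted fusion of feature estimates from multiple
    extraction techniques (illustrative sketch, not the paper's exact
    algorithm). Cues whose confidence falls below min_conf are dropped
    so that an extractor failing under the current conditions does not
    contaminate the fused feature vector."""
    estimates = np.asarray(estimates, dtype=float)      # shape: (n_cues, n_features)
    confidences = np.asarray(confidences, dtype=float)  # shape: (n_cues,)
    mask = confidences >= min_conf
    if not mask.any():
        raise ValueError("no cue exceeds the confidence threshold")
    # Normalize the surviving confidences into fusion weights.
    w = confidences[mask] / confidences[mask].sum()
    fused = w @ estimates[mask]
    # One simple overall confidence: that of the best surviving cue.
    fused_conf = confidences[mask].max()
    return fused, fused_conf
```

For example, with two cues reporting `[1.0, 2.0]` at confidence 0.9 and `[3.0, 4.0]` at confidence 0.1, the second cue falls below the threshold and the fused result is simply `[1.0, 2.0]` with confidence 0.9. The same per-feature confidence could also serve as a sample weight during classifier training, matching the abstract's idea of reducing the contribution of unreliable detections.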

Citation (APA)
Ioannou, S., Caridakis, G., Karpouzis, K., & Kollias, S. (2007). Robust feature detection for facial expression recognition. Eurasip Journal on Image and Video Processing, 2007. https://doi.org/10.1155/2007/29081
