Automatic initialization for facial analysis in interactive robotics

10 Citations · 13 Readers on Mendeley
Abstract

The human face plays an important role in communication: it allows different interaction partners to be distinguished and provides non-verbal feedback. In this paper, we present a soft real-time vision system that enables an interactive robot to analyze the faces of its interaction partners, not only to identify them but also to recognize their facial expressions as a dialog-controlling non-verbal cue. To ensure applicability in real-world environments, a robust detection scheme is presented that detects faces and basic facial features such as the positions of the mouth, nose, and eyes. Based on these detected features, facial parameters are extracted using active appearance models (AAMs) and passed to support vector machine (SVM) classifiers to identify both persons and facial expressions. This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm and their respective performance in two classification tasks, evaluated on both the DaFEx facial expression database and real-world data obtained from a robot's point of view. © 2008 Springer-Verlag Berlin Heidelberg.
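To make the described pipeline concrete (face detection, feature-based model fitting, and SVM classification of identity and expression), the following is a minimal illustrative sketch, not the authors' implementation. It uses an OpenCV Haar cascade as a stand-in for the paper's detection scheme, a placeholder function in place of the AAM fitting step, and scikit-learn SVMs fitted here on dummy data; all function names, paths, and labels below are assumptions for illustration only.

```python
# Illustrative sketch of the described face-analysis pipeline (not the authors' code):
# detect a face, extract a parameter vector, and classify identity and expression
# with separate SVMs. The AAM fitting step is replaced by a crude placeholder.
import cv2
import numpy as np
from sklearn.svm import SVC

# 1. Face detection: a Viola-Jones cascade as a stand-in for the robust
#    detection scheme described in the paper.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_parameters(gray_face):
    """Placeholder for AAM fitting: in the paper, an active appearance model
    is initialized from detected facial features (eyes, nose, mouth) and its
    converged parameters form the feature vector. Here we just downscale."""
    resized = cv2.resize(gray_face, (32, 32))
    return resized.flatten().astype(np.float32) / 255.0

# 2. Classification: separate SVMs for person identity and facial expression.
#    Fitted on random dummy data here so the sketch runs; in practice they
#    would be trained on labelled AAM parameter vectors.
identity_svm = SVC(kernel="rbf")
expression_svm = SVC(kernel="rbf")
_dummy_X = np.random.rand(4, 32 * 32).astype(np.float32)
identity_svm.fit(_dummy_X, ["person_a", "person_a", "person_b", "person_b"])
expression_svm.fit(_dummy_X, ["neutral", "happy", "neutral", "happy"])

def analyze(frame_bgr):
    """Run the full pipeline on one camera frame (e.g. from the robot's view)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        params = extract_parameters(gray[y:y + h, x:x + w]).reshape(1, -1)
        results.append({
            "identity": identity_svm.predict(params)[0],
            "expression": expression_svm.predict(params)[0],
        })
    return results
```

A real system would replace `extract_parameters` with an AAM fitter whose initial shape comes from one of the four initialization methods compared in the paper.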

Citation (APA)

Rabie, A., Lang, C., Hanheide, M., Castrillón-Santana, M., & Sagerer, G. (2008). Automatic initialization for facial analysis in interactive robotics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5008 LNCS, pp. 517–526). https://doi.org/10.1007/978-3-540-79547-6_50
