Adding facial actions into 3D model search to analyse behaviour in an unconstrained environment


Abstract

We investigate several methods of integrating facial actions into a 3D head model for 2D image search. The model on which the investigation is based has a neutral expression with eyes open; our modifications enable the model to change expression and close the eyes. We show that the novel approach of using separate identity and action models during search gives better results than a combined-model strategy. This enables monitoring of head and feature movements in difficult real-world video sequences, which show large pose variation, occlusion, and variable lighting within and between frames. This should enable the identification of critical situations such as tiredness and inattention, and we demonstrate the potential of our system by linking model parameters to states such as "eyes closed" and "mouth open". We also present evidence that restricting the model parameters to a subspace close to the identity of the subject improves results. © 2010 Springer-Verlag.
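The two ideas in the abstract — separate identity and action parameter sets, and restricting identity parameters to a subspace near the subject's estimated identity — can be sketched with a generic linear 3D shape model. This is an illustrative reconstruction, not the authors' code: all names, dimensions, and the clamping rule are assumptions.

```python
import numpy as np

# Illustrative linear shape model: shape = mean + identity modes + action modes.
# Dimensions and mode matrices are random stand-ins for a trained model.
rng = np.random.default_rng(0)
n_points = 50                                 # 3D model vertices
mean = rng.normal(size=3 * n_points)          # mean (neutral, eyes-open) shape
P_id = rng.normal(size=(3 * n_points, 10))    # identity variation modes
P_act = rng.normal(size=(3 * n_points, 5))    # facial-action modes (e.g. eye closure)

def synthesize(b_id, b_act):
    """Combine separate identity and action parameter vectors into a shape."""
    return mean + P_id @ b_id + P_act @ b_act

def clamp_to_identity(b_id, b_id_subject, radius=0.5):
    """Restrict identity parameters to a neighbourhood of the subject's
    estimated identity ("a subspace close to the identity of the subject").
    A simple box constraint is used here as a stand-in."""
    return b_id_subject + np.clip(b_id - b_id_subject, -radius, radius)

b_id = rng.normal(size=10)        # candidate identity parameters from search
b_act = np.zeros(5)
b_act[0] = 2.0                    # hypothetical "eyes closed" action parameter
shape = synthesize(clamp_to_identity(b_id, np.zeros(10)), b_act)
```

Keeping `b_act` out of the identity constraint is the point of the two-model strategy: expression changes can be large frame-to-frame while identity stays near a fixed estimate.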

Citation (APA)

Caunce, A., Taylor, C., & Cootes, T. (2010). Adding facial actions into 3D model search to analyse behaviour in an unconstrained environment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6453 LNCS, pp. 132–142). https://doi.org/10.1007/978-3-642-17289-2_13
