Shared Gaussian process latent variable model for multi-view facial expression recognition


Abstract

Facial-expression data often appear in multiple views, due either to head movements or to the camera position. Existing methods for multi-view facial expression recognition classify the target expressions either with classifiers learned separately for each view or with a single classifier learned for all views. However, these approaches do not exploit the fact that multi-view facial expression data are different manifestations of the same facial-expression-related latent content. To this end, we propose a Shared Gaussian Process Latent Variable Model (SGPLVM) for classification of multi-view facial expression data. In this model, we first learn a discriminative manifold shared by multiple views of facial expressions, and then apply a single facial expression classifier, based on k-Nearest-Neighbours (kNN), to the shared manifold. In our experiments on the MultiPIE database, which contains real images of facial expressions in multiple views, the proposed model outperforms state-of-the-art models for multi-view facial expression recognition. © 2013 Springer-Verlag.
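The pipeline the abstract describes — map each view into a shared latent space, then classify with kNN there — can be illustrated with a minimal numpy sketch. Note the simplifications: the paper's SGPLVM learns a nonlinear, GP-based shared manifold, whereas this toy uses linear back-projections to a known latent space as a stand-in, and all data, dimensions, and the `knn_predict` helper are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 expression classes, each sample has a shared 2-D latent content.
n_per_class, n_classes = 30, 3
labels = np.repeat(np.arange(n_classes), n_per_class)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
Z = centers[labels] + 0.3 * rng.standard_normal((n_per_class * n_classes, 2))

# Two "views" are different (here: linear) manifestations of the same latent content.
W1 = rng.standard_normal((2, 6))
W2 = rng.standard_normal((2, 8))
X1 = Z @ W1 + 0.05 * rng.standard_normal((Z.shape[0], 6))
X2 = Z @ W2 + 0.05 * rng.standard_normal((Z.shape[0], 8))

# "Training": learn a back-projection from each view into the shared space
# (a linear stand-in for the SGPLVM's learned mappings).
B1, *_ = np.linalg.lstsq(X1, Z, rcond=None)
B2, *_ = np.linalg.lstsq(X2, Z, rcond=None)

def knn_predict(query, train_Z, train_y, k=5):
    """Classify a latent-space query by majority vote over its k nearest neighbours."""
    d = np.linalg.norm(train_Z - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# A single kNN classifier in the shared space handles inputs from any view:
# here, view-2 observations are projected and classified against the shared manifold.
test_idx = rng.choice(Z.shape[0], 20, replace=False)
preds = np.array([knn_predict(X2[i] @ B2, Z, labels) for i in test_idx])
accuracy = float(np.mean(preds == labels[test_idx]))
```

Because both views are generated from the same latent content, a classifier trained once in the shared space transfers across views — the property the model exploits instead of training one classifier per view.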

Citation (APA)

Eleftheriadis, S., Rudovic, O., & Pantic, M. (2013). Shared Gaussian process latent variable model for multi-view facial expression recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8033 LNCS, pp. 527–538). https://doi.org/10.1007/978-3-642-41914-0_52
