Multiple-manifolds discriminant analysis for facial expression recognition from local patches set

Abstract

In this paper, a novel framework for feature extraction and classification in facial expression recognition, multiple-manifolds discriminant analysis (MMDA), is proposed. MMDA assumes that samples of different expressions reside on different manifolds and therefore learns multiple projection matrices from the training set. In particular, MMDA first extracts five local patches from each training sample — the regions of the left and right eyes, the mouth, and the left and right cheeks — to form a new training set. It then learns a projection matrix for each expression that maximizes the manifold margins between different expressions while minimizing the manifold distances within the same expression. A key feature of MMDA is that it extracts expression-specific rather than subject-specific discriminative information for classification, leading to robust performance in practical applications. Our experiments on the Cohn-Kanade and JAFFE databases demonstrate that MMDA effectively enhances the discriminant power of the extracted expression features.

Citation (APA)

Zheng, N., Qi, L., & Guan, L. (2015). Multiple-manifolds discriminant analysis for facial expression recognition from local patches set. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8869, pp. 26–33). Springer Verlag. https://doi.org/10.1007/978-3-319-14899-1_3
