Facial expression recognition based on multi-scale CNNs

Abstract

This paper proposes a new method for facial expression recognition, called multi-scale CNNs. It consists of several sub-CNNs that take input images at different scales. The sub-CNNs benefit from these variously scaled inputs to learn optimized parameters. After training all sub-CNNs separately, the facial expression of an image is predicted by extracting its features from the last fully connected layer of each sub-CNN at its scale, averaging these features, and mapping the averaged feature to the final classification probability. Multi-scale CNNs classify facial expressions more accurately than any single-scale sub-CNN. On the Facial Expression Recognition 2013 database, multi-scale CNNs achieved an accuracy of 71.80% on the testing set, which is comparable to other state-of-the-art methods.
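For illustration, a minimal PyTorch sketch of the described idea follows: several sub-CNNs operate on differently scaled copies of the same face, and their last fully connected features are averaged before the final classification mapping. The choice of scales (48, 36, 24), the layer sizes, and the FER2013-style settings (48x48 grayscale inputs, 7 expression classes) are assumptions made for the sketch, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SubCNN(nn.Module):
    """One sub-CNN operating on a single input scale (hypothetical layout)."""

    def __init__(self, in_size: int, feat_dim: int = 256, num_classes: int = 7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 64 * (in_size // 4) * (in_size // 4)
        self.fc = nn.Linear(flat, feat_dim)                  # last fully connected (feature) layer
        self.classifier = nn.Linear(feat_dim, num_classes)   # used when training this sub-CNN alone
        self.in_size = in_size

    def features(self, x: torch.Tensor) -> torch.Tensor:
        # Resize the face crop to this sub-CNN's scale, then extract FC features.
        x = F.interpolate(x, size=(self.in_size, self.in_size),
                          mode="bilinear", align_corners=False)
        x = self.conv(x).flatten(1)
        return F.relu(self.fc(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Standalone prediction, used while training each sub-CNN separately.
        return self.classifier(self.features(x))


class MultiScaleCNN(nn.Module):
    """Averages the FC features of the (separately trained) sub-CNNs and
    maps the averaged feature to class probabilities."""

    def __init__(self, scales=(48, 36, 24), feat_dim: int = 256, num_classes: int = 7):
        super().__init__()
        self.subnets = nn.ModuleList(SubCNN(s, feat_dim, num_classes) for s in scales)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([net.features(x) for net in self.subnets], dim=0)
        return F.softmax(self.head(feats.mean(dim=0)), dim=1)


if __name__ == "__main__":
    model = MultiScaleCNN()
    batch = torch.randn(4, 1, 48, 48)   # e.g. FER2013-style 48x48 grayscale crops
    print(model(batch).shape)           # -> torch.Size([4, 7])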

Citation (APA)

Zhou, S., Liang, Y., Wan, J., & Li, S. Z. (2016). Facial expression recognition based on multi-scale CNNs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9967 LNCS, pp. 503–510). Springer Verlag. https://doi.org/10.1007/978-3-319-46654-5_55
