Automatically query active features based on pixel-level for facial expression recognition

Abstract

Feature extraction-based subspace learning methods normally learn a projection that maps high-dimensional data to a low-dimensional representation. However, the resulting features may not be well suited to classification, because these methods ignore the discriminability of the individual pixels themselves. Given this, we propose a novel approach that automatically queries active features and combines them with sparse representation classification for facial expression recognition. The proposed approach aims to automatically query discriminative features from raw pixels, thereby fully exploiting the underlying characteristics of the source data. In particular, the approach operates at the pixel level and adaptively selects the most active and discriminative features for representation and classification. Intraclass low-rank decomposition and principal feature analysis are used jointly to guarantee that the extracted features capture the most active energy of the raw data, so the proposed approach can also be applied to other feature extraction and selection tasks. We conduct comprehensive experiments on four public datasets, and the results show superior performance compared with several state-of-the-art methods.
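
The abstract names two reusable building blocks: selecting the most active raw pixels (via principal feature analysis, preceded in the paper by an intraclass low-rank decomposition) and classifying with sparse representation classification (SRC). The sketch below is not the authors' implementation; it is a minimal illustration, assuming a standard principal feature analysis step (k-means clustering of PCA loading vectors to pick representative pixels) and a vanilla SRC classifier solved with orthogonal matching pursuit from scikit-learn. The low-rank decomposition is omitted, and all function and variable names are illustrative.

```python
# Minimal sketch (not the paper's code): pixel selection via principal
# feature analysis (PFA) followed by sparse representation classification
# (SRC). Assumes numpy and scikit-learn; names are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import OrthogonalMatchingPursuit


def principal_feature_analysis(X, n_features, n_components=None):
    """Pick `n_features` representative columns (pixels) of X.

    PFA clusters the PCA loading vectors of the features and keeps,
    for each cluster, the feature closest to the cluster centre.
    X has shape (n_samples, n_pixels).
    """
    if n_components is None:
        n_components = n_features
    pca = PCA(n_components=n_components).fit(X)
    loadings = pca.components_.T                      # (n_pixels, n_components)
    km = KMeans(n_clusters=n_features, n_init=10, random_state=0).fit(loadings)
    selected = []
    for c in range(n_features):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(loadings[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.sort(np.array(selected))


def src_predict(D, labels, y, n_nonzero_coefs=30):
    """Sparse representation classification of one test vector `y`.

    D: dictionary of training samples, shape (dim, n_train), unit-norm columns.
    labels: class label of each dictionary column.
    The class with the smallest class-wise reconstruction residual wins.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs,
                                    fit_intercept=False)
    omp.fit(D, y)
    coef = omp.coef_
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ coef[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)


# Toy usage with random "images" standing in for facial-expression data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 256))     # 120 images, 16x16 pixels flattened
y_train = rng.integers(0, 4, size=120)    # 4 expression classes (toy labels)
x_test = rng.normal(size=256)

pixels = principal_feature_analysis(X_train, n_features=64)
D = X_train[:, pixels].T                  # (64, 120) dictionary of selected pixels
D = D / np.linalg.norm(D, axis=0, keepdims=True)
print("predicted class:", src_predict(D, y_train, x_test[pixels]))
```

The design choice mirrored here is the one the abstract emphasizes: feature selection happens on the raw pixels themselves rather than on a learned projection, and the downstream SRC residual decides the expression class.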

Cite

APA

Sun, Z., Hu, Z., & Zhao, M. (2019). Automatically query active features based on pixel-level for facial expression recognition. IEEE Access, 7, 104630–104641. https://doi.org/10.1109/ACCESS.2019.2929753
