Uniform generic representation for single sample face recognition


This article is free to access.

Abstract

In this article, we propose a uniform generic representation (UGR) method to solve the single sample per person (SSPP) problem in face recognition, which seeks consistency between the global and local generic representations. For the local generic representation, we require each probe patch of an image to be reconstructed by the corresponding patch of the same gallery image together with the intra-class variation dictionaries; consequently, the coefficients of the probe patches over the patch gallery dictionaries should be similar to one another. For the global generic representation, the probe image's coefficient over the gallery dictionary should be similar to those of its probe patches. To satisfy both requirements, we combine the local and global generic representations in a soft form and obtain the representation coefficients by solving a simple quadratic optimization problem. UGR has been evaluated on the Extended Yale B, AR, CMU-PIE, and LFW databases. Experimental results show that our method is robust and effective under variations in illumination, expression, occlusion, time, and pose.
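The coding scheme described above can be illustrated with a small sketch. This is not the authors' implementation; dictionary sizes, the ridge penalty `lam`, and the consistency weight `mu` are all hypothetical. It codes a probe over a concatenated gallery and intra-class variation dictionary, and softly pulls a local (patch) coefficient toward the global one, which remains a closed-form quadratic problem:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_gal, n_var = 40, 5, 8
G = rng.standard_normal((d, n_gal))           # gallery dictionary (one atom per person)
V = rng.standard_normal((d, n_var))           # intra-class variation dictionary
y = G[:, 2] + 0.05 * rng.standard_normal(d)   # synthetic probe from person 2

def code(y, G, V, lam=0.01, mu=0.0, anchor=None):
    """Ridge coding of y over [G, V].

    If `anchor` is given, the gallery part of the coefficient is softly
    pulled toward it (a UGR-style consistency penalty). The objective is
    quadratic, so the minimizer comes from one linear solve.
    """
    D = np.hstack([G, V])
    k = D.shape[1]
    A = D.T @ D + lam * np.eye(k)
    b = D.T @ y
    if anchor is not None:
        # mu acts only on the gallery block of the coefficient vector
        mask = np.concatenate([np.full(G.shape[1], mu), np.zeros(V.shape[1])])
        A += np.diag(mask)
        b[:G.shape[1]] += mu * anchor
    coef = np.linalg.solve(A, b)
    return coef[:G.shape[1]]                  # gallery part identifies the subject

alpha_global = code(y, G, V)                  # global representation coefficient
patch = y + 0.1 * rng.standard_normal(d)      # a noisier "patch" of the same probe
alpha_local = code(patch, G, V, mu=1.0, anchor=alpha_global)
```

In this toy setting, the largest-magnitude gallery coefficient of both the global and the anchored local code points at person 2, mirroring how the soft consistency term keeps patch-level codes agreeing with the image-level code.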

Citation (APA)

Ding, Y., Liu, F., Tang, Z., & Zhang, T. (2020). Uniform generic representation for single sample face recognition. IEEE Access, 8, 158281–158292. https://doi.org/10.1109/ACCESS.2020.3017479
