Abstract
At present, deep learning drives the rapid development of face recognition. However, in unconstrained scenarios, changes in facial pose strongly affect recognition accuracy, and current models still fall short in accuracy and robustness. Existing research offers two approaches to these problems. One is to model and train each pose separately and then fuse the resulting decisions. The other is to "frontalize" faces at the image or feature level, reducing the task to frontal face recognition. Following the second idea, we propose a profile-to-frontal revise mapping (PTFRM) module. This module revises features of arbitrary poses at the feature level, transforming multi-pose features into an approximately frontal representation to enhance the recognition ability of existing models. Finally, we evaluate PTFRM on unconstrained face verification benchmark datasets such as Labeled Faces in the Wild (LFW), Celebrities in Frontal-Profile (CFP), and IARPA Janus Benchmark A (IJB-A). Results show that the proposed method achieves good performance.
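The abstract describes PTFRM only at a high level: a feature-level mapping that revises a pose-varying embedding toward an approximately frontal one. As a rough illustration of that idea only, a residual correction over a face embedding can be sketched as below; the architecture, weight shapes, and function names are assumptions for illustration, not the paper's actual module.

```python
import numpy as np

rng = np.random.default_rng(0)

def ptfrm_sketch(feat, w1, w2):
    """Hypothetical feature-level revision: map a pose-varying
    embedding toward an approximately frontal representation.
    A minimal residual-MLP sketch, not the paper's PTFRM."""
    hidden = np.maximum(w1 @ feat, 0.0)       # ReLU hidden layer
    revised = feat + w2 @ hidden              # residual correction of the feature
    return revised / np.linalg.norm(revised)  # re-normalize the embedding

d, h = 512, 256                               # assumed embedding / hidden sizes
w1 = rng.normal(scale=0.01, size=(h, d))      # stand-in learned weights
w2 = rng.normal(scale=0.01, size=(d, h))

profile_feat = rng.normal(size=d)             # stand-in profile-face embedding
profile_feat /= np.linalg.norm(profile_feat)
frontal_like = ptfrm_sketch(profile_feat, w1, w2)
print(frontal_like.shape)                     # (512,)
```

In practice such a mapping would be trained (e.g., to pull profile embeddings toward the frontal embeddings of the same identity) and the revised features fed to an existing recognizer unchanged, which is what lets the module augment pre-trained models.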
Ruan, S., Tang, C., Xu, Z., Jin, Z., Chen, S., Wen, H., … Tang, D. (2020). Multi-pose face recognition based on deep learning in unconstrained scene. Applied Sciences (Switzerland), 10(13). https://doi.org/10.3390/app10134669