3D-Guided Frontal Face Generation for Pose-Invariant Recognition


Abstract

Although deep learning techniques have achieved extraordinary accuracy in recognizing human faces, the pose variations of images captured in real-world scenarios still hinder reliable model application. To mitigate this gap, we propose to recognize faces via generating frontal face images with a 3D-Guided Deep Pose-Invariant Face Recognition Model (3D-PIM), consisting of a simulator and a refiner module. The simulator employs a 3D Morphable Model (3DMM) to fit the shape and appearance features and recover preliminary frontal images with less training data. The refiner further enhances image realism in both global facial structure and local details via adversarial training, while keeping the discriminative identity information consistent with the original images. An Adaptive Weighting (AW) metric is then adopted to leverage the complementary information from recovered frontal faces and original profile faces and to obtain credible similarity scores for recognition. Extensive experiments verify the superiority of the proposed "recognition via generation" framework over state-of-the-art methods.
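The adaptive score fusion described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names and the softmax-style confidence weight are assumptions standing in for the Adaptive Weighting (AW) metric, which combines similarity scores from the original profile face and the recovered frontal face.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def adaptive_fused_score(gallery_emb, profile_emb, frontal_emb, alpha=None):
    """Fuse similarity scores from the original profile face and the
    recovered frontal face against a gallery embedding.

    The weighting rule below is a hypothetical stand-in for the paper's
    AW metric: when alpha is not given, the view with the higher score
    receives proportionally more weight.
    """
    s_profile = cosine_sim(gallery_emb, profile_emb)
    s_frontal = cosine_sim(gallery_emb, frontal_emb)
    if alpha is None:
        # Illustrative adaptive weight via a softmax over the two scores.
        exp_p, exp_f = np.exp(s_profile), np.exp(s_frontal)
        alpha = exp_f / (exp_p + exp_f)
    return alpha * s_frontal + (1.0 - alpha) * s_profile
```

Because the fused score is a convex combination, it always lies between the profile-only and frontal-only similarities, so the fusion can never do worse than the weaker of the two views on a given pair.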

Cite

APA

Wu, H., Gu, J., Fan, X., Li, H., Xie, L., & Zhao, J. (2023). 3D-Guided Frontal Face Generation for Pose-Invariant Recognition. ACM Transactions on Intelligent Systems and Technology, 14(2). https://doi.org/10.1145/3572035
