Learning from synthetic faces, though appealing for its high data efficiency, may not deliver satisfactory performance due to the distribution discrepancy between synthetic and real face images. To mitigate this gap, we propose a 3D-Aided Deep Pose-Invariant Face Recognition Model (3D-PIM), which automatically recovers realistic frontal faces from arbitrary poses through a 3D face model in a novel way. Specifically, 3D-PIM incorporates a simulator, aided by a 3D Morphable Model (3DMM), to obtain shape and appearance priors that accelerate face normalization learning and reduce the amount of training data required. It further leverages a global-local Generative Adversarial Network (GAN) with multiple critical improvements as a refiner to enhance the realism of both global structures and local details of the simulator's output using unlabelled real data only, while preserving identity information. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks clearly demonstrate the superiority of the proposed model over state-of-the-art methods.
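The refiner described above is trained to balance realism against identity preservation. A minimal sketch of such a combined objective is shown below; the function names, the cosine-distance identity term, and the weighting scheme are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def identity_loss(feat_real, feat_fake):
    """Identity-preservation term (assumed form): cosine distance between
    recognition features of the input face and the refined face."""
    cos = feat_real @ feat_fake / (
        np.linalg.norm(feat_real) * np.linalg.norm(feat_fake)
    )
    return 1.0 - cos

def adversarial_loss(d_fake):
    """Non-saturating GAN loss on the refiner (generator) side, where
    d_fake is the discriminator's realism score for the refined image."""
    return -np.log(d_fake + 1e-8)

def refiner_loss(d_fake, feat_real, feat_fake, lam_id=1.0):
    """Hypothetical combined refiner objective: realism plus an
    identity-preservation penalty weighted by lam_id."""
    return adversarial_loss(d_fake) + lam_id * identity_loss(feat_real, feat_fake)
```

In this sketch, identical recognition features incur zero identity penalty, so the refiner is free to pursue realism; as the refined face drifts from the input identity, the second term grows and pulls it back.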
CITATION STYLE
Zhao, J., Xiong, L., Cheng, Y., Cheng, Y., Li, J., Zhou, L., … Feng, J. (2018). 3D-aided deep pose-invariant face recognition. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 1184–1190). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/165