Robust face frontalization in unconstrained images

Abstract

The goal of face frontalization is to recover a frontal view of a face appearing in a single unconstrained image. Previous works mainly focus on synthesizing the frontal view effectively, but they ignore the influence of occlusions in the input face images. To overcome this problem, this paper presents a novel yet simple scheme for robust face frontalization using only a single 3D model. We adopt the same scheme as T. Hassner’s work to render the non-frontal view to the frontal view and to estimate the invisible (self-occluded) region. Subsequently, for occlusion detection, we compute the differences between the local patches around each fixed facial feature point in the test image and in the average face (male or female average face). Finally, we combine the proposed local face symmetry strategy with Poisson image editing to fill the invisible and occluded regions. Experimental results demonstrate the advantages of the proposed method over previous work.
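
To make the filling step concrete, the sketch below illustrates one way a local face symmetry strategy could be combined with Poisson image editing, using OpenCV's seamlessClone as the Poisson solver. This is a minimal sketch, not the authors' implementation: the function name fill_by_symmetry, the input file names, and the assumption that the frontalized crop is roughly symmetric about its vertical midline are illustrative assumptions.

```python
import cv2


def fill_by_symmetry(frontal, occlusion_mask):
    """Fill occluded pixels of a frontalized face crop by mirroring the
    visible half about the vertical midline and Poisson-blending it in.

    frontal        : HxWx3 uint8 frontalized face image (assumed roughly
                     symmetric about the vertical midline of the crop)
    occlusion_mask : HxW uint8 mask, 255 where pixels are occluded/invisible
    """
    # Mirror the image left-right so occluded pixels can borrow content
    # from their symmetric counterparts.
    mirrored = cv2.flip(frontal, 1)

    # Only borrow mirrored pixels whose counterparts are themselves visible.
    visible_mirrored = cv2.flip(255 - occlusion_mask, 1)
    fill_mask = cv2.bitwise_and(occlusion_mask, visible_mirrored)

    if cv2.countNonZero(fill_mask) == 0:
        return frontal.copy()

    # Poisson image editing (seamless cloning) blends the mirrored content
    # into the occluded region without visible seams. The mask's bounding
    # rectangle is re-centered at its own position so the patch stays put.
    x, y, w, h = cv2.boundingRect(fill_mask)
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(mirrored, frontal, fill_mask, center,
                             cv2.NORMAL_CLONE)


if __name__ == "__main__":
    # Hypothetical inputs: a frontalized face crop and its occlusion mask.
    face = cv2.imread("frontalized_face.png")
    mask = cv2.imread("occlusion_mask.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("filled_face.png", fill_by_symmetry(face, mask))
```

In practice the paper's occlusion mask would come from the patch-difference test against the average face and from the self-occlusion estimate produced during rendering; here it is simply read from disk for illustration.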

Cite (APA)

Zhang, Y., Qian, J., & Yang, J. (2016). Robust face frontalization in unconstrained images. In Communications in Computer and Information Science (Vol. 662, pp. 225–233). Springer Verlag. https://doi.org/10.1007/978-981-10-3002-4_19
