Model and dictionary guided face inpainting in the wild

Abstract

This work presents a method for inpainting occluded facial regions in images with unconstrained pose and orientation. The approach first warps the facial region onto a reference model to synthesize a frontal view, and a modified Robust Principal Component Analysis (RPCA) step then suppresses warping errors. A novel local patch-based face inpainting algorithm hallucinates the missing pixels using a dictionary of face images pre-aligned to the same reference model, and the hallucinated region is warped back onto the original image to restore the missing pixels. Experimental results on synthetic occlusions show that the proposed method outperforms competing approaches, achieving PSNR gains of up to 0.74 dB over the second-best method. Experiments on the COFW dataset and several real-world images further show that the method successfully restores occluded facial regions in the wild, even in CCTV-quality images.
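The abstract mentions a modified RPCA step to suppress warping errors after frontalization. The sketch below shows plain RPCA (principal component pursuit solved with the inexact augmented Lagrange multiplier method), not the authors' modified formulation; the regularization weight, step size, and stopping tolerance are standard illustrative choices, not values from the paper.

```python
import numpy as np

def shrink(M, tau):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M, tau):
    """Singular value thresholding: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, max_iter=500, tol=1e-7):
    """Decompose D into low-rank L plus sparse S by minimizing
    ||L||_* + lam * ||S||_1  subject to  L + S = D  (inexact ALM)."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # standard weight from Candes et al.
    norm_D = np.linalg.norm(D, 'fro')
    mu = m * n / (4.0 * np.abs(D).sum() + 1e-12)  # common step-size heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)  # Lagrange multiplier estimate
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual, 'fro') < tol * norm_D:
            break
    return L, S
```

Stacking the vectorized frontalized face crops as the columns of D, L would then hold the low-rank (warp-error-suppressed) faces and S the sparse errors, which is the role the abstract assigns to this step.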

Citation (APA)

Farrugia, R. A., & Guillemot, C. (2017). Model and dictionary guided face inpainting in the wild. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10116 LNCS, pp. 62–78). Springer Verlag. https://doi.org/10.1007/978-3-319-54407-6_5
