Facial Image Inpainting with Deep Generative Model and Patch Search Using Region Weight

Abstract

Facial image inpainting is a challenging task because the missing region must be filled with new pixels that carry semantic information (e.g., noses and mouths). Traditional methods that search for similar patches are mature, but they are not suitable for semantic inpainting. Recently, methods based on deep generative models have been able to perform semantic image inpainting, although their results are often blurry or distorted. In this paper, after analyzing the advantages and disadvantages of the two approaches, we propose a novel and efficient method that combines them in series: it searches for the most reasonable similar patches using the coarse image generated by the deep generative model. When training the model, adding a Laplace loss to the standard loss accelerates convergence. In addition, we define a region weight (RW) for the patch search, which makes edge connections more natural. Our method addresses both the blurred results of deep generative models and the unsatisfactory semantic information of traditional methods. Experiments on the CelebA dataset demonstrate that our method achieves realistic and natural facial inpainting results.
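
For concreteness, a minimal sketch of the two ingredients the abstract names, a Laplace loss added to the standard training loss and a region-weighted patch distance, is given below in PyTorch. The 3x3 Laplacian kernel, the L1 reduction, the boundary-style weight map, and all identifiers (laplace_loss, region_weighted_distance, lambda_lap) are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

# 3x3 discrete Laplacian (an assumed choice; the paper may use another).
LAPLACE_KERNEL = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)

def laplace_loss(generated, target):
    # L1 distance between Laplacian responses of two (N, C, H, W) batches.
    c = generated.shape[1]
    kernel = LAPLACE_KERNEL.to(generated).repeat(c, 1, 1, 1)
    lap_gen = F.conv2d(generated, kernel, padding=1, groups=c)  # depthwise
    lap_tgt = F.conv2d(target, kernel, padding=1, groups=c)
    return F.l1_loss(lap_gen, lap_tgt)

def region_weighted_distance(patch, candidate, weight):
    # Weighted SSD between a target patch and a candidate patch; the
    # per-pixel weight map can emphasize pixels near the hole boundary
    # so that edge connections look natural (one plausible reading of RW).
    return (weight * (patch - candidate) ** 2).sum()

# Training objective (lambda_lap is a hypothetical hyperparameter):
# total_loss = standard_loss + lambda_lap * laplace_loss(fake, real)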

Citation (APA)

Wei, J., Lu, G., Liu, H., & Yan, J. (2019). Facial Image Inpainting with Deep Generative Model and Patch Search Using Region Weight. IEEE Access, 7, 67456–67468. https://doi.org/10.1109/ACCESS.2019.2919169
