Image Inpainting Based on Patch-GANs

Abstract

In this paper, we propose a novel image inpainting framework that exploits both the holistic and the structural information of a damaged input image. Unlike existing models, which complete damaged images using only the holistic features of the input, our method adopts patch-based generative adversarial networks (Patch-GANs) equipped with multi-scale discriminators and an edge-processing function to extract holistic and structural features and restore the damaged regions. After pre-training the Patch-GANs, the proposed network encourages the generator to find the best encoding of the damaged input in the latent space using a combination of a reconstruction loss, an edge loss, and global and local guidance losses. The reconstruction and global guidance losses ensure the pixel-level reliability of the generated images, while the remaining losses guarantee content consistency between the local and global parts. Qualitative and quantitative experiments on multiple public datasets show that our approach produces more realistic images than several existing methods, demonstrating its effectiveness and superiority.
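To make the four-term objective described above concrete, the following is a minimal PyTorch sketch of how such a combined loss might be assembled. The network interfaces (generator, global/local discriminators, edge extractor), the weighting coefficients, and the use of a simple WGAN-style generator term for the adversarial guidance losses are all hypothetical stand-ins; the paper's exact formulation and weights are not reproduced here.

```python
# Hypothetical sketch of the combined inpainting objective: reconstruction,
# edge, and global/local guidance losses. Not the authors' released code.
import torch
import torch.nn.functional as F

def inpainting_loss(generator, disc_global, disc_local, edge_fn,
                    broken, mask, target,
                    lambda_rec=1.0, lambda_edge=0.1,
                    lambda_global=0.01, lambda_local=0.01):
    """Combine the four losses; all weights are illustrative, not the paper's."""
    output = generator(broken, mask)                  # completed image
    # Reconstruction loss: pixel-level fidelity of the generated image.
    l_rec = F.l1_loss(output, target)
    # Edge loss: consistency of structural (edge) information.
    l_edge = F.l1_loss(edge_fn(output), edge_fn(target))
    # Global guidance loss: the whole image should fool the global critic
    # (WGAN-style generator term used here as a stand-in).
    l_global = -disc_global(output).mean()
    # Local guidance loss: the filled-in region should fool the local critic.
    local_patch = output * mask                       # masked (hole) region
    l_local = -disc_local(local_patch).mean()
    return (lambda_rec * l_rec + lambda_edge * l_edge
            + lambda_global * l_global + lambda_local * l_local)

if __name__ == "__main__":
    # Dummy callables so the sketch runs end to end (shapes: N, C, H, W).
    g = lambda x, m: x                    # identity "generator"
    d = lambda x: x.mean(dim=(1, 2, 3))   # trivial "critic", one score per image
    e = lambda x: x                       # identity "edge extractor"
    img = torch.rand(2, 3, 64, 64)
    msk = torch.zeros(2, 1, 64, 64)
    msk[..., 16:48, 16:48] = 1            # square hole in the center
    print(inpainting_loss(g, d, d, e, img, msk, img).item())
```

In this decomposition, the reconstruction and global terms anchor the output to the ground truth at the pixel and image level, while the edge and local terms push the filled region to match the surrounding structure, mirroring the division of labor the abstract describes.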

Cite (APA)

Yuan, L., Ruan, C., Hu, H., & Chen, D. (2019). Image Inpainting Based on Patch-GANs. IEEE Access, 7, 46411–46421. https://doi.org/10.1109/ACCESS.2019.2909553
