Deep Representation Calibrated Bayesian Neural Network for Semantically Explainable Face Inpainting and Editing

This article is free to access.
Abstract

Image inpainting seeks to fill corrupted regions with pixels whose texture and content are consistent with their surroundings. For highly structured data such as human faces, some recent works achieve quite realistic results. However, almost all existing methods learn a deterministic mapping from a corrupted input to a single result, ignoring that the same input may admit multiple plausible solutions. Furthermore, they have not explored the underlying connections between those plausible solutions and semantic conditions. In this work, we propose a novel deep representation calibrated Bayesian neural network (DRCBNN) for semantically explainable face inpainting and editing. Leveraging the ability of Bayesian decision theory to handle uncertainty, the proposed framework incorporates deep representations into Bayesian decision theory and derives a deep representation calibrated evidence lower bound (ELBO). Compared with the traditional ELBO in a BNN, the newly calibrated ELBO is a more task-specific loss function. After optimizing the calibrated ELBO, the model can infer the desired inpainting outputs in accordance with specific semantics. Finally, experiments demonstrate that our method produces multiple semantics-aware inpainting outputs and outperforms state-of-the-art methods.
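
For context, a standard reference point (this is the conventional variational objective, not the article's calibrated variant, which is defined in the full text): in a Bayesian neural network with weights w, data D, variational posterior q_phi(w), and prior p(w), the usual ELBO is

\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(w)}\big[\log p(\mathcal{D} \mid w)\big] - \mathrm{KL}\big(q_\phi(w) \,\|\, p(w)\big)

According to the abstract, the deep representation calibrated ELBO makes this objective more task-specific by incorporating deep representations, so that sampling from the optimized posterior yields inpainting outputs tied to chosen semantic conditions.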

Citation (APA)

Xiong, H., Wang, C., Wang, X., & Tao, D. (2020). Deep Representation Calibrated Bayesian Neural Network for Semantically Explainable Face Inpainting and Editing. IEEE Access, 8, 13457–13466. https://doi.org/10.1109/ACCESS.2019.2963675
