Unmasking the potential: evaluating image inpainting techniques for masked face reconstruction

Abstract

The performance of most face recognizers tends to degrade when dealing with masked faces, making face recognition challenging. Image inpainting, a technique traditionally used for restoring old or damaged images, removing objects, or retouching photos, could potentially aid in reconstructing masked faces. In this paper, we compared three state-of-the-art image inpainting models—PatchMatch, a traditional algorithm, and two deep-learning GAN-based models, EdgeConnect and free-form image inpainting with Gated Convolution—to assess their performance in regenerating masked faces. The evaluation was conducted on our own synthetic datasets, MaskedFace-CelebA and MaskedFace-CelebA-HQ, which pair masked images with their ground-truth counterparts for face verification. The Image Quality Assessment (IQA) results computed between ground-truth and reconstructed facial images indicated that the Gated Convolution model outperformed the other two. To further validate the results, the reconstructed and ground-truth images were also evaluated with a VGG16 classifier, a widely used benchmark model for image recognition. The classifier outcomes supported the quantitative and qualitative IQA-based assessment.
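The abstract compares reconstructed faces to ground truth using Image Quality Assessment metrics. The paper does not specify the exact metrics here, but PSNR and SSIM are the standard choices for this kind of paired comparison. The following is a minimal sketch using scikit-image, with synthetic arrays standing in for a real ground-truth/reconstruction pair:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic stand-ins: a "ground truth" image and a slightly noisy
# "reconstruction" (real evaluation would load inpainted outputs).
rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64))
reconstruction = np.clip(ground_truth + 0.05 * rng.standard_normal((64, 64)),
                         0.0, 1.0)

# Higher PSNR and SSIM indicate a reconstruction closer to ground truth.
psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```

In a paired-dataset evaluation such as the one described, these metrics would be averaged over all masked/ground-truth pairs for each inpainting model and the means compared.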

Citation (APA)

Agarwal, C., & Bhatnagar, C. (2024). Unmasking the potential: evaluating image inpainting techniques for masked face reconstruction. Multimedia Tools and Applications, 83(1), 893–918. https://doi.org/10.1007/s11042-023-15807-x
