A Two-Stage Deep Generative Model for Masked Face Synthesis

Abstract

Research on face recognition with masked faces has become increasingly important due to the prolonged COVID-19 pandemic. To make face recognition practical and robust, a large amount of face image data must be acquired for training. However, it is difficult to obtain masked face images for every human subject. To cope with this difficulty, this paper proposes a simple yet practical method for synthesizing a realistic masked face from an unseen face image. To this end, a cascade of two convolutional auto-encoders (CAEs) is designed. The first CAE generates a pose-matched face wearing a mask pattern, which is expected to fit the input face in terms of pose view. The output of the first CAE is then fed into the second CAE, which extracts a segmentation map localizing the mask region on the face. Using this segmentation map, the mask pattern can be fused with the input face by means of simple image processing techniques. The proposed method relies on face appearance reconstruction alone, without any facial landmark detection or localization. Extensive experiments on the GTAV Face database and the Labeled Faces in the Wild (LFW) database show that the two complementary generators can rapidly and accurately produce synthetic masked faces even for challenging inputs (e.g., low-resolution 25 × 25 pixel faces with out-of-plane rotations).
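To make the two-stage pipeline concrete, the PyTorch sketch below shows one way the cascade could be wired together at inference time. It is a minimal illustration under stated assumptions, not the paper's implementation: the layer layout of the CAEs, the names `CAE` and `synthesize_masked_face`, the 0.5 binarization threshold, and the pixel-copy fusion rule are all hypothetical choices standing in for details the abstract does not specify.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Minimal convolutional auto-encoder: a strided-conv encoder
    followed by a transposed-conv decoder (hypothetical layout)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def synthesize_masked_face(face, cae1, cae2, threshold=0.5):
    """Two-stage pipeline sketched from the abstract: CAE1 renders a
    pose-matched face wearing a mask, CAE2 segments the mask region,
    and the mask pixels are blended back onto the input face."""
    masked_ref = cae1(face)           # stage 1: pose-matched masked face
    seg = cae2(masked_ref)            # stage 2: mask-region probability map
    mask = (seg > threshold).float()  # binarize (threshold is an assumption)
    # simple image-processing fusion: copy mask pixels onto the input face
    return mask * masked_ref + (1 - mask) * face

# usage sketch (untrained weights, random input, for shape checking only)
cae1 = CAE(in_ch=3, out_ch=3)   # face -> masked reference face
cae2 = CAE(in_ch=3, out_ch=1)   # masked face -> segmentation map
face = torch.rand(1, 3, 64, 64)
result = synthesize_masked_face(face, cae1, cae2)
print(result.shape)  # torch.Size([1, 3, 64, 64])
```

How the two networks are trained is not described in the abstract; the sketch covers only the inference path, in which the segmentation map from the second CAE gates which pixels of the generated masked face replace pixels of the original input.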

Cite

APA: Lee, S. (2022). A Two-Stage Deep Generative Model for Masked Face Synthesis. Sensors, 22(20), 7903. https://doi.org/10.3390/s22207903
