Face recognition is one of the most active research topics in computer vision and pattern recognition. Deep learning-based recognition models already surpass human performance on open benchmark datasets, yet they still cannot fully carry out identity recognition in real-world scenarios without human assistance. In this paper, we analyze two main obstacles: the domain gap and the shortage of training data. We propose the unpaired Domain Transfer Generative Adversarial Network (DT-GAN) to alleviate both. We improve on the GAN baseline to bridge the domain gap between datasets: by learning the mapping between the source and target domains, the model generates images that conform to the style of the target domain. At the same time, the generator can synthesize faces at arbitrary viewpoints. The model is trained with a combination of style transfer loss, identity loss, and pose loss, which ensures successful domain transfer and effective data augmentation. We conduct experiments to verify the effectiveness and soundness of DT-GAN. Experimental results demonstrate that recognition performance improves markedly after domain transfer and data augmentation.
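The abstract states that the generator is trained with a combination of style transfer, identity, and pose losses. A minimal sketch of how such a combined objective is commonly formed is shown below; the weighted-sum structure, the function names, and the weights are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: combining the three loss terms mentioned in the
# abstract (style transfer, identity, pose) into one generator objective.
# Weights and names are illustrative assumptions, not taken from the paper.
import torch


def generator_objective(style_loss: torch.Tensor,
                        identity_loss: torch.Tensor,
                        pose_loss: torch.Tensor,
                        w_style: float = 1.0,
                        w_id: float = 1.0,
                        w_pose: float = 1.0) -> torch.Tensor:
    """Weighted sum of the three loss terms used to train the generator."""
    return w_style * style_loss + w_id * identity_loss + w_pose * pose_loss


# Usage with placeholder scalar losses:
total = generator_objective(torch.tensor(0.8),
                            torch.tensor(0.5),
                            torch.tensor(0.3))
```

In practice the relative weights would be tuned so that domain-style transfer does not overwhelm identity preservation or pose control; the paper itself should be consulted for the exact formulation.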
Liu, J., Li, Q., Zhang, P., Zhang, G., & Liu, M. (2020). Unpaired Domain Transfer for Data Augment in Face Recognition. IEEE Access, 8, 39349–39360. https://doi.org/10.1109/ACCESS.2020.2976207