Unpaired Domain Transfer for Data Augment in Face Recognition

Abstract

Face recognition is one of the most active topics in computer vision and pattern recognition. Deep learning-based recognition models already surpass human performance on open datasets, yet they still cannot fully perform identity recognition in real-world scenarios without human assistance. In this paper, we analyze two main obstacles: the domain gap and the shortage of training data. We propose the unpaired Domain Transfer Generative Adversarial Network (DT-GAN) to alleviate both. We improve the GAN baseline to bridge the domain gap among datasets, generating images that conform to the style of a target domain by learning the mapping between the source and target domains. The generator can also synthesize faces at arbitrary viewpoints. The model is trained with a combination of style transfer loss, identity loss, and pose loss, which ensures successful domain transfer and data augmentation. We conduct experiments to verify the effectiveness and soundness of DT-GAN. The results demonstrate that recognition performance is substantially boosted after domain transfer and data augmentation.
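The abstract states that the model is trained with a combination of style transfer, identity, and pose losses. A minimal sketch of such a combined objective is shown below as a weighted sum; the function name and the weights `lambda_id` and `lambda_pose` are illustrative assumptions, not the paper's actual formulation or values.

```python
# Hedged sketch: combining three training losses into one scalar objective.
# The weight names and defaults are illustrative assumptions; the paper
# defines its own loss formulation for DT-GAN.

def dt_gan_total_loss(style_transfer_loss: float,
                      identity_loss: float,
                      pose_loss: float,
                      lambda_id: float = 1.0,
                      lambda_pose: float = 1.0) -> float:
    """Weighted sum of the style transfer, identity, and pose losses
    (a common way to combine multiple GAN training objectives)."""
    return (style_transfer_loss
            + lambda_id * identity_loss
            + lambda_pose * pose_loss)
```

In practice each term would be computed from network outputs (e.g., a discriminator score for style transfer, a feature distance for identity, and a pose-regression error), and the weights would be tuned so that no single term dominates training.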

Citation (APA)

Liu, J., Li, Q., Zhang, P., Zhang, G., & Liu, M. (2020). Unpaired Domain Transfer for Data Augment in Face Recognition. IEEE Access, 8, 39349–39360. https://doi.org/10.1109/ACCESS.2020.2976207
