Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles can change facial appearance significantly, and matching makeup and non-makeup face images remains a challenging problem. This paper proposes a learning-from-generation approach to makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects of makeup, we first generate non-makeup images from makeup ones and then use the synthesized non-makeup images for verification. The two adversarial networks in BLAN are integrated into an end-to-end deep network: one operates at the pixel level to reconstruct appealing facial images, and the other operates at the feature level to preserve identity information. Together, these two networks reduce the sensing gap between makeup and non-makeup images. Moreover, we constrain the generator by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and produces photo-realistic non-makeup face images.
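The abstract describes a generator trained under two adversarial signals (pixel level and feature level) plus perceptual constraints. The following is a minimal sketch of how such a bi-level objective could be wired up in PyTorch; the module names (Generator, PixelDiscriminator, FeatureDiscriminator, IdentityEncoder), network shapes, and loss weights are illustrative assumptions, not the architecture or hyperparameters from the paper.

```python
# Hedged sketch of a bi-level adversarial objective (pixel-level + feature-level
# adversaries with perceptual terms). All module definitions and weights below
# are placeholders for illustration; they do not reproduce the paper's BLAN.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a makeup face image to a synthesized non-makeup image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PixelDiscriminator(nn.Module):
    """Pixel-level adversary: judges whether an image looks like a real non-makeup face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

class IdentityEncoder(nn.Module):
    """Stand-in for a face feature extractor producing identity embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return self.net(x)

class FeatureDiscriminator(nn.Module):
    """Feature-level adversary: judges identity embeddings of synthesized vs. real non-makeup faces."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
    def forward(self, f):
        return self.net(f)

def generator_loss(makeup, non_makeup, G, D_pix, D_feat, E,
                   w_pix_adv=1.0, w_feat_adv=1.0, w_rec=10.0, w_id=1.0):
    """Combined generator objective: two adversarial terms plus perceptual terms
    (pixel reconstruction and identity preservation). Weights are placeholders."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(makeup)  # synthesized non-makeup image
    real_label = torch.ones(fake.size(0), 1)
    l_pix_adv = bce(D_pix(fake), real_label)        # fool the image-level discriminator
    l_feat_adv = bce(D_feat(E(fake)), real_label)   # fool the feature-level discriminator
    l_rec = nn.functional.l1_loss(fake, non_makeup) # stay close to the paired non-makeup image
    l_id = nn.functional.mse_loss(E(fake), E(makeup))  # keep the input face's identity embedding
    return w_pix_adv * l_pix_adv + w_feat_adv * l_feat_adv + w_rec * l_rec + w_id * l_id

if __name__ == "__main__":
    G, D_pix, D_feat, E = Generator(), PixelDiscriminator(), FeatureDiscriminator(), IdentityEncoder()
    makeup = torch.randn(2, 3, 64, 64)
    non_makeup = torch.randn(2, 3, 64, 64)
    print(generator_loss(makeup, non_makeup, G, D_pix, D_feat, E).item())
```

In this sketch the discriminators would be trained with the usual real/fake objectives on images and on identity embeddings respectively, so that the generator is pushed to produce outputs that are both photo-realistic and identity-preserving, mirroring the pixel-level and feature-level roles described above.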
CITATION STYLE
Li, Y., Song, L., Wu, X., He, R., & Tan, T. (2018). Anti-Makeup: Learning a bi-level adversarial network for makeup-invariant face verification. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 7057–7064). AAAI Press. https://doi.org/10.1609/aaai.v32i1.12294