SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal


Abstract

Five key features must be considered when using generative adversarial networks to apply makeup to photographs of human faces: (1) facial components, (2) interactive color adjustments, (3) makeup variations, (4) robustness to poses and expressions, and (5) the use of multiple reference images. To address these features, we propose SLGAN, a novel style- and latent-guided generative adversarial network for makeup transfer and removal. We introduce a novel perceptual makeup loss and a style-invariant decoder that transfers makeup styles based on histogram matching while avoiding the identity-shift problem. Our experiments show that SLGAN performs better than or comparably to state-of-the-art methods. Furthermore, our framework can interpolate between facial makeup images to isolate distinctive features, compare existing methods, and help users find desirable makeup configurations.
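The makeup loss described above builds on histogram matching, a classical technique for transferring color statistics from a reference image to a source image. As an illustrative sketch of that underlying technique only (the function below is our own minimal NumPy version, not the paper's implementation, which operates on facial-component regions inside the network), single-channel histogram matching can be written as:

```python
import numpy as np

def match_histogram(source, reference):
    """Map the intensity distribution of `source` (uint8, values 0..255)
    onto that of `reference` using inverse-CDF lookup."""
    # Empirical CDFs at each possible intensity 0..255
    src_hist = np.bincount(source.ravel(), minlength=256)
    ref_hist = np.bincount(reference.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    # For each source CDF value, find the first reference intensity
    # whose CDF reaches it (inverse-CDF / quantile matching)
    lut = np.searchsorted(ref_cdf, src_cdf, side="left").clip(0, 255)
    return lut[source].astype(np.uint8)
```

In a makeup-transfer setting, this would be applied per color channel within corresponding facial regions (e.g. lips, eyes, skin) so that the source face adopts the reference makeup's color distribution without altering facial geometry.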

Citation (APA)

Horita, D., & Aizawa, K. (2022). SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal. In Proceedings of the 4th ACM International Conference on Multimedia in Asia, MMAsia 2022. Association for Computing Machinery, Inc. https://doi.org/10.1145/3551626.3564967
