RAG: Facial attribute editing by learning residual attributes

This article is free to access.

Abstract

Facial attribute editing aims to modify face images in a desired manner, such as changing hair color, gender, or age, or adding or removing eyeglasses. Recent research on this topic largely leverages the adversarial loss so that the generated faces are not only realistic but also correspond well to the target attributes. In this paper, we propose the Residual Attribute Generative Adversarial Network (RAG), a novel model for unpaired editing of multiple facial attributes. Instead of directly learning the target attributes, we propose to learn residual attributes, a more intuitive and interpretable representation that converts the original task into a problem of arithmetic addition or subtraction over attributes. Furthermore, we propose the identity preservation loss, which proves to facilitate convergence and yield better results. Finally, we leverage visual attention to localize the related regions and preserve unrelated content during transformation. Extensive experiments on two facial attribute datasets demonstrate the superiority of our approach in generating realistic, high-quality faces for multiple attributes. Visualizing the residual image, defined as the difference between the original image and the generated result, explains which regions RAG focuses on when editing each attribute.
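The "residual attribute" idea in the abstract can be sketched numerically: rather than learning the target attribute vector directly, the model is conditioned on the difference between target and source attributes, so each edit becomes a per-attribute addition or subtraction. The attribute names, vector values, and image shapes below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical binary attribute layout (illustrative, not from the paper).
ATTRIBUTES = ["black_hair", "blond_hair", "male", "young", "eyeglasses"]

source = np.array([1, 0, 1, 1, 0])  # e.g. black-haired young man, no glasses
target = np.array([0, 1, 1, 1, 1])  # blond hair, add eyeglasses

# Residual attribute vector: +1 = add attribute, -1 = remove it, 0 = keep it.
# This turns multi-attribute editing into simple arithmetic per attribute.
residual = target - source  # -> [-1, 1, 0, 0, 1]

# The "residual image" the paper visualizes is the pixel-wise difference
# between the generated result and the original image (shapes are assumed).
original = np.random.rand(128, 128, 3)
generated = np.random.rand(128, 128, 3)
residual_image = generated - original  # highlights regions the edit touched
```

Because unchanged attributes yield zeros in the residual vector, the generator only needs to act where the residual is nonzero, which matches the paper's observation that residual images localize the edited regions.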

Citation (APA)
Zhang, H., Chen, W., Tian, J., He, H., & Jin, Y. (2019). RAG: Facial attribute editing by learning residual attributes. IEEE Access, 7, 83266–83276. https://doi.org/10.1109/ACCESS.2019.2924959
