FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations

Abstract

Recent advances in generative adversarial networks have shown that it is possible to generate high-resolution and hyperrealistic images. However, the images produced by GANs are only as fair and representative as the datasets on which they are trained. In this paper, we propose a method for directly modifying a pre-trained StyleGAN2 model that can be used to generate a balanced set of images with respect to one (e.g., eyeglasses) or more attributes (e.g., gender and eyeglasses). Our method takes advantage of the style space of the StyleGAN2 model to perform disentangled control of the target attributes to be debiased. Our method does not require training additional models and directly debiases the GAN model, paving the way for its use in various downstream applications. Our experiments show that our method successfully debiases the GAN model within a few minutes without compromising the quality of the generated images. To promote fair generative models, we share the code and debiased models at http://catlab-team.github.io/fairstyle.
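The abstract's core idea — shifting a style channel of a pre-trained generator so that a target attribute appears at a balanced rate — can be illustrated with a toy sketch. Everything below is hypothetical stand-in code, not the paper's implementation: `attribute_probability` replaces the real StyleGAN2 generator plus attribute classifier, and a single scalar stands in for a channel of the model's style space.

```python
import math

# Hypothetical stand-in for one StyleGAN2 style channel: the probability
# that a generated image shows the target attribute (e.g. eyeglasses) is
# modeled as a sigmoid of the channel value. (The real method edits
# channels of the model's style space; this is only an illustration.)
def attribute_probability(channel_value: float) -> float:
    return 1.0 / (1.0 + math.exp(-channel_value))

def debias_channel(target: float = 0.5, lr: float = 1.0, steps: int = 200) -> float:
    """Shift the channel until the attribute frequency matches the
    target rate (0.5 = balanced), nudging it against the observed
    imbalance at each step."""
    c = -2.0  # biased initialization: attribute underrepresented
    for _ in range(steps):
        p = attribute_probability(c)
        c -= lr * (p - target)  # move the channel toward balance
    return c

c = debias_channel()
print(round(attribute_probability(c), 3))  # prints 0.5
```

The fixed-point update converges because each step shrinks the gap between the observed attribute rate and the target; the same idea extends to jointly balancing several attributes by adjusting one channel per attribute.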


Citation (APA)

Karakas, C. E., Dirik, A., Yalçınkaya, E., & Yanardag, P. (2022). FairStyle: Debiasing StyleGAN2 with Style Channel Manipulations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13673 LNCS, pp. 570–586). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19778-9_33
