Generating Synthetic Faces for Data Augmentation with StyleGAN2-ADA


Abstract

Generative deep learning models based on autoencoders and Generative Adversarial Networks (GANs) have enabled increasingly realistic face-swapping tasks. One such application is the generation of representative synthetic datasets. These datasets must encompass ethnic, racial, gender, and age diversity so that deep learning models trained on them do not reproduce the implicit biases of poorly constructed datasets and discriminate against certain groups of individuals. In this work, we implement StyleGAN2-ADA to generate representative synthetic data from the FFHQ dataset. This constitutes step 1 of a face-swap pipeline that uses synthetic facial data in videos for data augmentation in artificial intelligence problems. We were able to generate synthetic facial data, but found limitations due to the presence of artifacts in most images.
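The paper does not include its generation code, but the sampling step it relies on can be illustrated. StyleGAN2 maps a Gaussian latent z through a learned mapping network into a style vector w, and the widely used truncation trick pulls w toward the dataset-average style to trade sample diversity for image quality (a knob relevant when artifacts appear, as reported above). The sketch below is purely illustrative: the mapping network is a toy random linear map standing in for the real learned MLP, and all names (`mapping`, `truncate`, `w_avg`) are this example's own, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for StyleGAN2's mapping network. In the real model a
# learned 8-layer MLP maps z -> w; here a fixed random linear map is
# used purely for illustration.
Z_DIM, W_DIM = 512, 512
mapping_matrix = rng.standard_normal((Z_DIM, W_DIM)) / np.sqrt(Z_DIM)

def mapping(z: np.ndarray) -> np.ndarray:
    """Hypothetical mapping network: latent space z -> style space w."""
    return z @ mapping_matrix

# Estimate the mean style vector w_avg from many samples, as is done
# when preparing the truncation trick.
w_avg = mapping(rng.standard_normal((10_000, Z_DIM))).mean(axis=0)

def truncate(w: np.ndarray, psi: float) -> np.ndarray:
    """Truncation trick: interpolate w toward w_avg.

    psi = 1.0 leaves samples untouched (full diversity);
    psi < 1.0 pulls them toward the average face (fewer artifacts).
    """
    return w_avg + psi * (w - w_avg)

z = rng.standard_normal((1, Z_DIM))
w = mapping(z)
w_trunc = truncate(w, psi=0.7)

# Truncated styles lie strictly closer to the mean than the originals.
print(np.linalg.norm(w - w_avg) > np.linalg.norm(w_trunc - w_avg))  # True
```

In practice this knob is exposed by generation scripts as a truncation parameter (often called psi); values around 0.5-0.7 are a common starting point when prioritizing image quality over diversity.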

Citation (APA)

de Meira, N. F. C., Silva, M. C., Bianchi, A. G. C., & Oliveira, R. A. R. (2023). Generating Synthetic Faces for Data Augmentation with StyleGAN2-ADA. In International Conference on Enterprise Information Systems, ICEIS - Proceedings (Vol. 1, pp. 649–655). Science and Technology Publications, Lda. https://doi.org/10.5220/0011994600003467
