Generative deep learning models based on autoencoders and Generative Adversarial Networks (GANs) have enabled increasingly realistic face swapping. One application is the generation of representative synthetic datasets. Such datasets need to encompass ethnic, racial, gender, and age diversity so that deep learning models trained on them do not reproduce the implicit biases of poorly constructed datasets and discriminate against certain groups of individuals. In this work, we implement StyleGAN2-ADA to generate representative synthetic data from the FFHQ dataset. This work constitutes the first step of a face-swap pipeline that uses synthetic facial data in videos for data augmentation in artificial intelligence models. We were able to generate synthetic facial data but found limitations due to the presence of artifacts in most of the images.
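As an illustration of the generation step described above, the following is a minimal sketch of sampling synthetic faces from a pretrained StyleGAN2-ADA generator. It assumes the NVlabs stylegan2-ada-pytorch repository is on the Python path and that a pretrained FFHQ network pickle (here called ffhq.pkl) is available locally; the file names and parameters are illustrative, not the authors' exact implementation.

```python
# Minimal sketch: sampling synthetic faces from a pretrained StyleGAN2-ADA
# generator. Assumes the NVlabs stylegan2-ada-pytorch repository is on
# PYTHONPATH (its dnnlib/torch_utils modules are needed to unpickle the
# network) and that a pretrained FFHQ pickle 'ffhq.pkl' is available locally.
# File names and parameters are illustrative, not the authors' exact setup.
import pickle

import numpy as np
import torch
from PIL import Image

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the exponential-moving-average generator weights from the pickle.
with open('ffhq.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].to(device)

for seed in range(4):
    # Sample a latent code z; FFHQ is unconditional, so the class label is None.
    z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)
    img = G(z, None, truncation_psi=0.7, noise_mode='const')  # NCHW, range [-1, 1]

    # Rescale to 8-bit RGB and save one synthetic face per seed.
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    Image.fromarray(img[0].cpu().numpy(), 'RGB').save(f'synthetic_{seed:02d}.png')
```

Lowering truncation_psi trades sample diversity for image quality, which is one common way to mitigate the kind of artifacts reported above.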
de Meira, N. F. C., Silva, M. C., Bianchi, A. G. C., & Oliveira, R. A. R. (2023). Generating Synthetic Faces for Data Augmentation with StyleGAN2-ADA. In International Conference on Enterprise Information Systems, ICEIS - Proceedings (Vol. 1, pp. 649–655). Science and Technology Publications, Lda. https://doi.org/10.5220/0011994600003467