Transformer-Based GAN for New Hairstyle Generative Networks


Abstract

Traditional GAN-based image generation networks cannot accurately and naturally fuse surrounding features in local image generation tasks, especially hairstyle generation. To this end, we propose a novel transformer-based GAN for new hairstyle generation. The framework comprises two modules: a Face segmentation (F) module and a Transformer Generative Hairstyle (TGH) module. The F module detects facial and hairstyle features and extracts global feature masks and facial feature maps. In the TGH module, we design a transformer-based GAN that generates hairstyles and refines the details where face and hairstyle are fused during new hairstyle generation. To verify the effectiveness of our model, the CelebA-HQ (Large-scale CelebFaces Attributes) and FFHQ (Flickr-Faces-HQ) datasets are adopted for training and testing. For evaluation, the FID, PSNR, and SSIM metrics are used to compare our model against other strong image generation networks. Our proposed model is more robust in both test scores and the realism of the generated images.
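Of the three evaluation metrics named above, PSNR is the simplest to state exactly: it is derived from the mean squared error between a reference image and a generated image. As a minimal illustration (not code from the paper; the toy pixel values are invented for demonstration), a pure-Python sketch of the PSNR computation:

```python
import math

def psnr(reference, generated, max_val=1.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat sequences of pixel intensities in [0, max_val].
    PSNR = 10 * log10(max_val^2 / MSE); higher means closer images."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 2x2 "images" flattened to lists (illustrative values only).
ref = [0.0, 0.5, 0.5, 1.0]
gen = [0.1, 0.5, 0.4, 1.0]
print(round(psnr(ref, gen), 2))  # → 23.01
```

SSIM additionally compares local luminance, contrast, and structure statistics, and FID compares feature distributions of real and generated image sets under an Inception network, so both are typically computed with library implementations rather than by hand.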

Citation (APA)

Man, Q., Cho, Y. I., Jang, S. G., & Lee, H. J. (2022). Transformer-Based GAN for New Hairstyle Generative Networks. Electronics (Switzerland), 11(13). https://doi.org/10.3390/electronics11132106
