From attribute-labels to faces: face generation using a conditional generative adversarial network

Citations: 2
Readers (Mendeley): 16

This article is free to access.

Abstract

Facial attributes are instrumental in semantically characterizing faces. The automated classification of such attributes (e.g., age, gender, ethnicity) has been a well-studied topic. Here we explore the inverse problem: given attribute labels, generating the associated faces. Interest in this topic is fueled by related applications in law enforcement and entertainment. In this work, we propose two models for attribute-label-based facial image and video generation, incorporating 2D and 3D deep conditional generative adversarial networks (DCGANs). The attribute labels serve as a tool to determine the specific representations of the generated images and videos. While these are early results, our findings indicate the methods' ability to generate realistic faces from attribute labels.
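The conditioning mechanism the abstract describes, attribute labels steering a DCGAN generator, can be sketched as follows. This is not the authors' implementation, only a minimal PyTorch illustration of a 2D conditional DCGAN generator; the specific sizes (100-dimensional noise, 40 binary attributes, 64x64 RGB output) are assumptions for the example, not values from the paper.

# A minimal sketch (not the paper's code) of a 2D conditional DCGAN
# generator: attribute labels are concatenated with the noise vector,
# so the labels determine which face the generator produces.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, n_attributes=40, feat=64):
        super().__init__()
        in_dim = noise_dim + n_attributes  # condition by concatenation
        self.net = nn.Sequential(
            # (in_dim) x 1 x 1 -> (feat*8) x 4 x 4
            nn.ConvTranspose2d(in_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(inplace=True),
            # -> (feat*4) x 8 x 8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(inplace=True),
            # -> (feat*2) x 16 x 16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(inplace=True),
            # -> (feat) x 32 x 32
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat),
            nn.ReLU(inplace=True),
            # -> 3 x 64 x 64 face image with values in [-1, 1]
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, noise, attributes):
        # noise: (batch, noise_dim); attributes: (batch, n_attributes) in {0, 1}
        z = torch.cat([noise, attributes], dim=1)
        return self.net(z.unsqueeze(-1).unsqueeze(-1))

# Usage: request faces matching a hypothetical attribute vector.
gen = ConditionalGenerator()
z = torch.randn(4, 100)
attrs = torch.zeros(4, 40)
attrs[:, 0] = 1.0  # toggle one attribute, e.g., a gender label
faces = gen(z, attrs)  # shape: (4, 3, 64, 64)

Concatenating the label vector with the noise input is the simplest conditional-GAN variant; the discriminator would receive the same labels so that generator and discriminator are judged against matching attribute conditions.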

Citation (APA)

Wang, Y., Dantcheva, A., & Bremond, F. (2019). From attribute-labels to faces: face generation using a conditional generative adversarial network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11132 LNCS, pp. 692–698). Springer Verlag. https://doi.org/10.1007/978-3-030-11018-5_59
