Attributes Consistent Faces Generation Under Arbitrary Poses


Abstract

Automatically generating visual images from semantic attribute descriptions helps build friendly human-machine interfaces, for example in criminal suspect depiction and customer-oriented product design. However, this is a hard task: large semantic and structural gaps separate attribute descriptions from visual images, which in turn cause great inference uncertainty when transforming an attribute description into a vivid visual image. We aim to reduce the complexity of the posterior distribution given attributes, P(I|A), by exploiting face structure knowledge and imposing a semantic-consistency constraint in an end-to-end learning fashion. Our contributions are three-fold: (1) we address the semantic and structural consistency problem in attribute-conditioned image generation; (2) the proposed method generates attribute-consistent face images with high quality in both detailed texture and clear structure; (3) we provide an interface for generating attribute-consistent images with diverse poses.
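The semantic-consistency idea in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' code): a toy attribute-conditioned generator G(z, a) produces an image, an attribute classifier C re-predicts the attributes from that image, and a cross-entropy term penalizes any mismatch with the conditioning attributes a. All dimensions, weights, and attribute names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(z, a, W_g):
    # Toy "generator": a linear map from [noise; attributes] to an image vector.
    return np.tanh(W_g @ np.concatenate([z, a]))

def predict_attributes(img, W_c):
    # Toy attribute classifier C(I) -> predicted attribute probabilities.
    return sigmoid(W_c @ img)

def consistency_loss(a, a_hat, eps=1e-7):
    # Binary cross-entropy between target attributes a and re-predicted a_hat;
    # minimizing this encourages the generated image to stay attribute-consistent.
    return -np.mean(a * np.log(a_hat + eps) + (1 - a) * np.log(1 - a_hat + eps))

# Illustrative sizes and random weights (in a real model these are learned end-to-end).
z_dim, a_dim, img_dim = 8, 4, 16
W_g = rng.normal(scale=0.1, size=(img_dim, z_dim + a_dim))
W_c = rng.normal(scale=0.1, size=(a_dim, img_dim))

z = rng.normal(size=z_dim)
a = np.array([1.0, 0.0, 1.0, 1.0])  # e.g. hypothetical binary attributes

img = generate(z, a, W_g)
a_hat = predict_attributes(img, W_c)
loss = consistency_loss(a, a_hat)
print(img.shape, float(loss) > 0.0)
```

In a full model, this consistency term would be added to the adversarial loss and backpropagated through both C and G, so the generator is explicitly penalized whenever its output drifts from the requested attributes.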

Citation (APA)

Song, F., Tang, J., Yang, M., Cai, W., & Yang, W. (2019). Attributes Consistent Faces Generation Under Arbitrary Poses. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11362 LNCS, pp. 83–97). Springer Verlag. https://doi.org/10.1007/978-3-030-20890-5_6
