Weight, sex, and facial expressions: On the manipulation of attributes in generative 3D face models

Abstract

Generative 3D face models are expressive models with applications in modelling and editing. They are learned from example faces and offer a compact representation of the continuous space of faces. While they have proven useful as strong priors in face reconstruction, they remain difficult to use in artistic editing tasks. We describe a way to navigate face space by changing meaningful parameters learned from the training data. This makes it possible to fix attributes such as height, weight, age, expression, or 'lack of sleep' while letting the infinity of unfixed attributes vary in a statistically meaningful way. We propose an inverse approach based on learning the distribution of faces in attribute space. Given a set of target attributes, we then find the face that has those attributes with high probability while remaining as similar as possible to the input face. © 2009 Springer-Verlag.
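The attribute-editing idea in the abstract can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's: the face is represented by a coefficient vector in a linear model, and the attributes are approximated by a learned linear predictor `A @ c + b` (the paper instead models the full distribution of faces in attribute space). Under those assumptions, "hit the target attributes while staying as close as possible to the input face" has a closed-form solution via a minimum-norm constrained least-squares step. The function name `edit_attributes` and all variable names are hypothetical.

```python
import numpy as np

def edit_attributes(c0, A, b, t):
    """Move model coefficients c0 the shortest Euclidean distance
    so that a linear attribute predictor A @ c + b exactly hits
    the target attribute vector t.

    Closed form: c = c0 + A^T (A A^T)^{-1} (t - b - A c0),
    i.e. the minimum-norm correction lying in the row space of A.
    """
    residual = t - b - A @ c0
    delta = A.T @ np.linalg.solve(A @ A.T, residual)
    return c0 + delta

# Toy example: 5-dim coefficient space, 2 attributes (say, weight and age).
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 5))   # hypothetical learned attribute predictor
b = np.zeros(2)
c0 = rng.normal(size=5)       # coefficients of the input face
t = np.array([1.0, -0.5])     # desired attribute values

c = edit_attributes(c0, A, b, t)
print(np.allclose(A @ c + b, t))  # prints True: edited face has the target attributes
```

In practice the hard equality constraint would be softened, as in the paper, by maximizing the probability of the target attributes under a learned distribution rather than matching them exactly.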

CITATION STYLE

APA

Amberg, B., Paysan, P., & Vetter, T. (2009). Weight, sex, and facial expressions: On the manipulation of attributes in generative 3D face models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5875 LNCS, pp. 875–885). https://doi.org/10.1007/978-3-642-10331-5_81
