Learning 3DMM deformation coefficients for rendering realistic expression images


Abstract

Analysis of facial expressions is a task of increasing interest in Computer Vision, with many potential applications. However, collecting images with labeled expressions for many subjects is a quite complicated operation. In this paper, we propose a solution that uses a particular 3D morphable model (3DMM) that, starting from a neutral image of a target subject, is capable of producing a realistic expressive face image of the same subject. This is possible because the 3DMM we use can effectively and efficiently fit to 2D images, and then deform itself under the action of deformation parameters that are learned expression-by-expression in a subject-independent manner. Ultimately, applying such deformation parameters to the neutral model of a subject allows the rendering of realistic expressive images of that subject. In the experiments, we demonstrate that these deformation parameters can be learned even from a small set of training data using simple statistical tools; despite this simplicity, we show that very realistic subject-dependent expression renderings can be obtained with our method. Furthermore, robustness in cross-dataset tests is also evidenced.
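The deformation step described in the abstract amounts to a linear 3DMM: a mean face plus a weighted combination of deformation components, where the expression-specific weights are added on top of the subject's fitted neutral coefficients. The paper's exact model and parameterization are not given here, so the following is a minimal sketch assuming a generic linear 3DMM; all names, array sizes, and the random stand-in data are hypothetical.

```python
import numpy as np

# Hypothetical model sizes; the paper's actual 3DMM dimensions are not given here.
n_vertices = 6704        # number of mesh vertices (assumption)
n_components = 300       # number of deformation components (assumption)

# Stand-in model data: a flattened mean (neutral) 3D face and a linear
# deformation basis. In practice these come from the learned 3DMM.
avg_shape = np.zeros(3 * n_vertices)
components = np.random.randn(3 * n_vertices, n_components)

def apply_expression(neutral_coeffs, expression_deltas):
    """Deform a fitted neutral model with subject-independent expression
    deltas and return the mesh as an (n_vertices, 3) array."""
    coeffs = neutral_coeffs + expression_deltas   # add learned per-expression offsets
    shape = avg_shape + components @ coeffs       # linear 3DMM deformation
    return shape.reshape(-1, 3)

# Usage sketch: fit coefficients to a subject's neutral 2D image (not shown),
# apply the learned deltas for a target expression, then render the deformed
# mesh with the subject's texture to obtain the expressive image.
neutral_coeffs = np.zeros(n_components)             # placeholder fitted coefficients
happy_deltas = 0.1 * np.random.randn(n_components)  # placeholder learned "happy" deltas
expressive_mesh = apply_expression(neutral_coeffs, happy_deltas)
```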

Citation (APA)

Ferrari, C., Berretti, S., Pala, P., & Del Bimbo, A. (2018). Learning 3DMM deformation coefficients for rendering realistic expression images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11010 LNCS, pp. 320–333). Springer Verlag. https://doi.org/10.1007/978-3-030-04375-9_27
