Generating Facial Expressions Associated with Text

Abstract

How will you react to the next post that you are going to read? In this paper we propose a learning system that is able to artificially alter the picture of a face in order to generate the emotion that is associated with a given input text. The face generation procedure is a function of additional information about the considered person, either given (topics of interest) or automatically estimated from the provided picture (age, sex). In particular, two Convolutional Networks are trained to predict age and sex, while two other Recurrent Neural Network-based models predict the topic and the dominant emotion of the input text. First Order Logic (FOL)-based functions are introduced to mix the outcomes of the four neural models and to decide which emotion to generate, following the theory of T-Norms. Finally, the same theory is exploited to build a neural generative model of facial expressions, which is used to create the final face. Experiments are performed to assess the quality of the information extraction process and to show the outcome of the generative network.
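As a rough illustration of the T-Norm-based mixing step described in the abstract, the sketch below (not the authors' code; the rule, function, and variable names are assumptions) shows how the outputs of the four predictors could be conjoined with a product T-norm to obtain the degree to which a target expression should be generated.

```python
# Hypothetical sketch of mixing the four predictors' outputs with a product T-norm.
# All names are illustrative assumptions, not the authors' implementation.

def product_t_norm(*truth_values):
    """Fuzzy conjunction (product T-norm) of truth degrees in [0, 1]."""
    result = 1.0
    for v in truth_values:
        result *= v
    return result

def target_expression_degree(p_young, p_female, p_topic_sport, p_text_joy):
    """Truth degree of a rule such as
       young(face) AND female(face) AND likes_sport(person) AND joy(text)
           -> joyful_expression(face),
    obtained by conjoining the premises with the product T-norm."""
    return product_t_norm(p_young, p_female, p_topic_sport, p_text_joy)

# Fake outputs of the two CNNs (age, sex) and the two RNNs (topic, emotion):
degree = target_expression_degree(0.9, 0.7, 0.8, 0.95)
print(degree)  # 0.4788: how strongly the generator should produce the target expression
```

In such a scheme, the same T-norm machinery would also make the rule differentiable, so the degree could act as a soft constraint when training the generative model of facial expressions.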

Citation (APA)

Graziani, L., Melacci, S., & Gori, M. (2020). Generating Facial Expressions Associated with Text. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 621–632). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_49
