Automatic facial expressions, gaze direction and head movements generation of a virtual agent


Abstract

In this article, we present two models that jointly and automatically generate the head, facial and gaze movements of a virtual agent from acoustic speech features. Two architectures are explored: a Generative Adversarial Network and an Adversarial Encoder-Decoder. Head movements and gaze orientation are generated as 3D coordinates, while facial expressions are generated as action units based on the Facial Action Coding System. A large corpus of almost 4 hours of video, involving 89 different speakers, is used to train our models. We extract the speech and visual features automatically from these videos using existing tools. The models are evaluated objectively, with measures such as density evaluation and a visualisation based on PCA reduction, as well as subjectively through a user perception study. Our proposed methodology shows that, on 15-second sequences, the encoder-decoder architecture substantially improves the perception of the generated behaviours on two criteria: coordination with speech and naturalness. Our code can be found at: https://github.com/aldelb/non-verbal-behaviours-generation.
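The objective evaluation mentioned above compares real and generated behaviour features after PCA reduction. As a minimal sketch of that idea (the feature dimensions and sample counts below are illustrative stand-ins, not the paper's actual data), one can fit principal axes on the real features and project both sets into the same 2D space for density comparison:

```python
import numpy as np

def fit_pca(features, n_components=2):
    """Return the mean and top principal axes of (n_samples, n_dims) data,
    computed via SVD of the centered matrix."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(features, mean, axes):
    """Project features onto previously fitted principal axes."""
    return (features - mean) @ axes.T

# Illustrative random stand-ins for behaviour features
# (e.g. action units plus head/gaze 3D coordinates per frame).
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 20))
generated = rng.normal(size=(150, 20))

# Fit on real data, project both sets into the same 2D space.
mean, axes = fit_pca(real)
real_2d = project(real, mean, axes)
gen_2d = project(generated, mean, axes)
# real_2d and gen_2d can then be scatter-plotted and their
# densities compared, as in the evaluation described above.
```

Fitting the axes on the real data only (rather than on the pooled set) keeps the reference frame fixed, so discrepancies show up as the generated cloud drifting away from the real one.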

APA

Delbosc, A., Ochs, M., & Ayache, S. (2022). Automatic facial expressions, gaze direction and head movements generation of a virtual agent. In ACM International Conference Proceeding Series (pp. 79–88). Association for Computing Machinery. https://doi.org/10.1145/3536220.3558806
