When and how to smile: Emotional expression for 3D conversational agents

Abstract

Conversational agents have become increasingly common in multimedia domains such as film, educational applications, e-business, and computer games. Many techniques have been developed to make these agents behave in a human-like manner: they are simulated with emotion and personality as well as communicative channels such as voice, head and eye movement, manipulators, and facial expression. Creating facial expressions from emotions has received much attention, but most of this work concentrates on producing static facial expressions. In this paper, we propose a scheme for displaying the continuous emotional states of a conversational agent on a 3D face. The main idea behind the scheme is that an emotional facial expression appears only when there is a significant change in the emotional state, and it remains on the face for just a few seconds. Because real facial expressions likewise stay on the face only briefly, this makes the agent's emotional expressions more realistic. © Springer-Verlag Berlin Heidelberg 2009.
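
The abstract only sketches the triggering rule, so the following Python snippet is a minimal illustrative sketch of that idea, not the authors' actual model: the class name, the scalar emotion representation, the change threshold, and the expression duration are all assumptions made for illustration.

```python
# Illustrative sketch only. Threshold, duration, and the scalar emotion
# value are assumed; the paper's scheme is richer than this.
EXPRESSION_DURATION = 3.0   # seconds an expression stays on the face (assumed)
CHANGE_THRESHOLD = 0.3      # minimum jump in emotional state to trigger (assumed)


class ExpressionController:
    """Show an emotional facial expression only when the continuous
    emotional state changes significantly, then let it fade after a
    few seconds, as described in the abstract."""

    def __init__(self):
        self.last_shown_state = 0.0   # emotional intensity last expressed
        self.expression_until = 0.0   # time at which the current expression ends

    def update(self, emotion_intensity: float, now: float) -> str:
        # Trigger a new expression only on a significant change of state.
        if abs(emotion_intensity - self.last_shown_state) >= CHANGE_THRESHOLD:
            self.last_shown_state = emotion_intensity
            self.expression_until = now + EXPRESSION_DURATION
        # Keep the expression only while its time window lasts.
        if now < self.expression_until:
            return "emotional expression"   # e.g. a smile blended onto the 3D face
        return "neutral"


# Example: a sudden jump in emotion triggers one brief expression,
# which fades even though the emotional state itself stays high.
controller = ExpressionController()
for t, intensity in enumerate([0.0, 0.0, 0.8, 0.8, 0.8, 0.8, 0.8]):
    print(t, intensity, controller.update(intensity, now=float(t)))
```

In this toy run the expression is triggered at the jump (t = 2) and disappears by t = 5, even though the emotional state remains high, which mirrors the abstract's point that expressions should be transient rather than frozen onto the face.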

Citation (APA)

Ngo, T. D., & Bui, T. D. (2009). When and how to smile: Emotional expression for 3D conversational agents. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5044 LNAI, pp. 349–358). https://doi.org/10.1007/978-3-642-01639-4_31
