Abstract
A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented as vectors over psychologically-defined abstract dimensions and the latter are coded with the Facial Action Coding System (FACS). To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, close to that of hand-made facial expressions. Copyright © 2008 The Institute of Electronics, Information and Communication Engineers.
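The abstract's core idea, a learned mapping from an abstract emotion vector to FACS Action Unit intensities, can be sketched as a small feed-forward network. This is a minimal illustration only: the number of emotion dimensions, hidden units, and Action Units are assumptions, and the weights below are random stand-ins rather than the values trained in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EMOTION_DIMS = 3   # assumed count of psychologically-defined dimensions
N_HIDDEN = 8         # assumed hidden-layer size
N_ACTION_UNITS = 10  # assumed number of FACS Action Units modeled

# Random stand-in weights; the paper trains these on parallel
# emotion-rating / facial-expression data.
W1 = rng.normal(size=(N_HIDDEN, N_EMOTION_DIMS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(size=(N_ACTION_UNITS, N_HIDDEN))
b2 = np.zeros(N_ACTION_UNITS)

def emotion_to_aus(emotion: np.ndarray) -> np.ndarray:
    """Forward pass: emotion vector -> Action Unit intensities in (0, 1)."""
    h = np.tanh(W1 @ emotion + b1)              # hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid output layer

# Example: map a hypothetical emotion vector to AU intensities.
aus = emotion_to_aus(np.array([0.8, 0.2, -0.1]))
```

In practice the network would be trained on the rated parallel corpus described in the paper, and the output intensities would drive the corresponding Action Units of a facial animation model.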
Mori, H., & Ohshima, K. (2008). Facial expression generation from speaker’s emotional states in daily conversation. IEICE Transactions on Information and Systems, E91-D(6), 1628–1633. https://doi.org/10.1093/ietisy/e91-d.6.1628