Facial displays are key to communicating emotion in face-to-face conversation and can be made simultaneously with speech. However, most collaborative virtual environments force the user to set avatar emotions explicitly after they have entered text or voice input. In this paper we present an intelligent system that infers emotions from textual input, parsing emotive expressions so that these emotions can be displayed automatically on the corresponding virtual avatar's appearance. Although our intelligent avatars have their emotions driven by text input, our technique could also be applied to fully autonomous agents.
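The core idea of inferring emotions from emotive expressions in text can be sketched as a lexicon lookup. The lexicon entries, emotion labels, and function names below are illustrative assumptions for the sake of a runnable example, not the paper's actual rule set:

```python
# Minimal sketch of lexicon-based emotion tagging for avatar display.
# The lexicon and labels are illustrative assumptions, not the paper's rules.
EMOTION_LEXICON = {
    "happy": "joy", ":)": "joy", "great": "joy",
    "sad": "sadness", ":(": "sadness",
    "angry": "anger", "furious": "anger",
    "wow": "surprise",
}

def infer_emotions(text: str) -> list:
    """Return an emotion label for each emotive token found in the text."""
    tokens = text.lower().split()
    return [EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON]

def dominant_emotion(text: str, default: str = "neutral") -> str:
    """Pick the first detected emotion to drive the avatar's facial display."""
    emotions = infer_emotions(text)
    return emotions[0] if emotions else default
```

An avatar controller could call `dominant_emotion` on each chat message and map the returned label to a facial animation, falling back to a neutral face when no emotive expression is found.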
Olveres, J., Billinghurst, M., Savage, J., & Holden, A. (1998). Intelligent, Expressive Avatars. In Proceedings of the First Workshop on Embodied Conversational Characters (WECC ’98) (pp. 47–55).