To build artificial conversational interfaces with credible and expressive behavior, we should endow them with the capability to recognize, adapt to, and render emotion. In this chapter, we explain how emotion recognition is managed within conversational interfaces, covering modeling and representation as well as emotion recognition from physiological signals, acoustics, text, facial expressions, and gestures, and how emotion synthesis is achieved through expressive speech and multimodal embodied agents. We also cover the main open tools and databases available to developers who wish to incorporate emotion into their conversational interfaces.
McTear, M., Callejas, Z., & Griol, D. (2016). Affective Conversational Interfaces. In The Conversational Interface (pp. 329–357). Springer International Publishing. https://doi.org/10.1007/978-3-319-32967-3_15