Abstract
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, we give an overview of emotion recognition studies that combine speech and facial expressions. We then identify difficulties concerning data collection, data fusion, system evaluation, and emotion annotation that researchers are likely to encounter in emotion recognition work. Further, we identify possible applications for emotion recognition, such as health monitoring and e-learning systems. Finally, we discuss the growing need for agreed standards in automatic emotion recognition research. © Springer-Verlag Berlin Heidelberg 2007.
Citation
Truong, K. P., Van Leeuwen, D. A., & Neerincx, M. A. (2007). Unobtrusive multimodal emotion detection in adaptive interfaces: Speech and facial expressions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4565 LNAI, pp. 354–363). Springer Verlag. https://doi.org/10.1007/978-3-540-73216-7_40