This paper demonstrates multimodal fusion of emotion sensory data in realistic scenarios involving relatively long human-machine interactions. The fusion of voice and facial expressions is enhanced with semantic information retrieved from Internet social networks, yielding more accurate detection of the conveyed emotion. © 2011 Springer-Verlag.
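The fusion described above can be illustrated with a minimal late-fusion sketch: each modality (voice, face, and crawled text semantics) produces an emotion score distribution, and the distributions are combined by weighted averaging. All names, weights, and scores here are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical late-fusion sketch: combine per-modality emotion
# distributions by weighted averaging. Weights and scores are assumed
# for illustration only.

EMOTIONS = ["anger", "joy", "sadness", "neutral"]

def fuse(modality_scores, weights):
    """Return the weighted average of per-modality emotion distributions."""
    total = sum(weights.values())
    fused = {e: 0.0 for e in EMOTIONS}
    for modality, scores in modality_scores.items():
        w = weights[modality] / total  # normalize modality weight
        for e in EMOTIONS:
            fused[e] += w * scores.get(e, 0.0)
    return fused

# Assumed per-modality outputs (probabilities over emotions):
scores = {
    "voice": {"anger": 0.6, "joy": 0.1, "sadness": 0.2, "neutral": 0.1},
    "face":  {"anger": 0.5, "joy": 0.2, "sadness": 0.1, "neutral": 0.2},
    "text":  {"anger": 0.7, "joy": 0.0, "sadness": 0.2, "neutral": 0.1},
}
weights = {"voice": 1.0, "face": 1.0, "text": 0.5}  # assumed reliabilities

fused = fuse(scores, weights)
label = max(fused, key=fused.get)  # → "anger" for these assumed inputs
```

The semantic (text) modality is given a smaller weight here to reflect that crawled social-network evidence would typically act as a complementary signal rather than a primary one; the actual weighting in the paper may differ.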
CITATION STYLE
Cueva, D. R., Gonçalves, R. A. M., Cozman, F., & Pereira-Barretto, M. R. (2011). Crawling to improve multimodal emotion detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7095 LNAI, pp. 343–350). https://doi.org/10.1007/978-3-642-25330-0_30