This paper investigates how an intelligent agent could be designed both to predict whether it is bonding with its user and to convey appropriate facial expression and body language responses that foster bonding. Video and Kinect recordings are collected from a series of naturalistic conversations, and a reliable measure of bonding is adapted and verified. A qualitative and quantitative analysis is conducted to determine the non-verbal cues that characterize both high- and low-bonding conversations. We then train a deep neural network classifier on one-minute segments of facial expression and body language data, and show that it can accurately predict bonding in novel conversations.
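To make the classification setup concrete, the following is a hypothetical sketch (not the authors' code) of training a small feed-forward classifier over one-minute "thin slices" of non-verbal behavior, assuming each slice has already been summarized as a fixed-length feature vector (e.g. mean facial-action-unit intensities and body-posture statistics). The feature dimensions, labeling rule, and network size here are all illustrative assumptions.

```python
# Hypothetical sketch: a tiny MLP that predicts a binary bonding label
# from a fixed-length feature vector summarizing one minute of facial
# expression and body language data. All details are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_slices(n, dim=16):
    """Synthetic stand-in for per-minute non-verbal feature vectors."""
    X = rng.normal(size=(n, dim))
    # Toy labeling rule: call a slice "bonding" when the first two
    # features (imagine smiling and mirrored posture) are jointly high.
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """One hidden layer, logistic output, plain full-batch gradient descent."""
    def __init__(self, dim, hidden=8, lr=0.1):
        self.W1 = rng.normal(scale=0.5, size=(dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.5, size=hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # hidden activations
        return sigmoid(self.h @ self.w2 + self.b2)

    def step(self, X, y):
        p = self.forward(X)
        d2 = (p - y) / len(y)                     # dLoss/dlogit for cross-entropy
        dh = np.outer(d2, self.w2) * (1 - self.h ** 2)
        self.w2 -= self.lr * (self.h.T @ d2)
        self.b2 -= self.lr * d2.sum()
        self.W1 -= self.lr * (X.T @ dh)
        self.b1 -= self.lr * dh.sum(axis=0)
        return p

X_train, y_train = make_slices(400)
X_test, y_test = make_slices(100)
model = TinyMLP(dim=X_train.shape[1])
for _ in range(500):
    model.step(X_train, y_train)
probs = model.forward(X_test)
acc = ((probs > 0.5) == y_test.astype(bool)).mean()
```

In the actual study a deeper network and real extracted features would replace this toy setup; the point is only the overall shape of the pipeline, i.e. one feature vector per one-minute slice mapped to a bonding probability.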
CITATION
Jaques, N., McDuff, D., Kim, Y. L., & Picard, R. (2016). Understanding and predicting bonding in conversations using thin slices of facial expressions and body language. In Lecture Notes in Computer Science (Vol. 10011 LNAI, pp. 64–74). Springer. https://doi.org/10.1007/978-3-319-47665-0_6