Comfortability Recognition from Visual Non-verbal Cues

Abstract

As social agents, we experience situations we enjoy being involved in and others we wish to withdraw from. Being aware of others' comfort towards an interaction helps us enhance our communication, making it a fundamental skill for any interactive agent (whether a robot or an Embodied Conversational Agent (ECA)). For this reason, the current paper considers Comfortability, the internal state capturing a person's desire to maintain or withdraw from an interaction, and explores whether it can be recognized from human non-verbal behaviour. To this aim, videos collected during real Human-Robot Interactions (HRI) were segmented, manually annotated and used to train four standard classifiers. Concretely, different combinations of facial and upper-body cues (i.e., Action Units, Head Pose, Upper-body Pose and Gaze) were fed to the following feature-based Machine Learning (ML) algorithms: Naive Bayes, Neural Networks, Random Forest and Support Vector Machines. The best model, a Random Forest trained on all the aforementioned cues together, reached a 75% recognition accuracy. These findings indicate, for the first time, that Comfortability can be automatically recognized, paving the way to its future integration into interactive agents.
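The pipeline described in the abstract — concatenating several groups of non-verbal features per video segment and feeding them to a feature-based classifier — can be sketched as follows. This is a minimal illustration with synthetic data: the feature names, dimensions, and hyperparameters are assumptions for demonstration, not the paper's actual configuration.

```python
# Illustrative sketch (NOT the paper's implementation): synthetic
# non-verbal features per video segment, combined and classified
# with a Random Forest, as in the best-performing cue combination.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_segments = 200  # hypothetical number of annotated video segments

# Hypothetical feature groups (dimensions are placeholders):
action_units = rng.random((n_segments, 17))  # facial AU intensities
head_pose = rng.random((n_segments, 3))      # pitch, yaw, roll
body_pose = rng.random((n_segments, 8))      # upper-body joint angles
gaze = rng.random((n_segments, 2))           # gaze direction angles

# Combine all cue groups into one feature vector per segment
X = np.hstack([action_units, head_pose, body_pose, gaze])
# Binary labels: comfortable (1) vs. uncomfortable (0)
y = rng.integers(0, 2, n_segments)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With random labels the cross-validated accuracy hovers near chance; the point of the sketch is only the shape of the pipeline (feature concatenation, then a standard classifier), not the reported 75% result.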

Cite

CITATION STYLE

APA

Lechuga Redondo, M. E., Niewiadomski, R., Rea, F., & Sciutti, A. (2022). Comfortability Recognition from Visual Non-verbal Cues. In ACM International Conference Proceeding Series (pp. 207–216). Association for Computing Machinery. https://doi.org/10.1145/3536221.3556631
