Multimodal User Feedback During Adaptive Robot-Human Presentations


Abstract

Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback in order to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans to a robot presenting a piece of art in a shared environment, similar to a museum setting. The data comprise video and audio recordings of 28 participants, richly annotated both in terms of multimodal cues (speech, gaze, head gestures, facial expressions, and body pose) and in terms of the polarity of any feedback (negative, positive, or neutral). We train statistical and machine learning models on the dataset and find that random forest models and multinomial regression models perform well at predicting the polarity of the participants' reactions. An analysis of the different modalities shows that most of the information is found in the participants' speech and head gestures, while much less is found in their facial expressions, body pose, and gaze. An analysis of the timing of the feedback shows that most feedback is given when the robot pauses (and thereby invites feedback), but that the exact timing of the feedback does not affect its meaning.
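The modelling setup described in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical sketch of how one might train a random forest and a multinomial regression model to predict feedback polarity from per-modality features; the feature names, synthetic data, and model settings are our own assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the annotated corpus: one row per feedback event,
# one aggregate feature per modality (all names and values invented here).
modalities = ["speech", "head_gesture", "facial_expression", "body_pose", "gaze"]
n = 500
X = rng.normal(size=(n, len(modalities)))
y = rng.integers(0, 3, size=n)  # 0 = negative, 1 = neutral, 2 = positive polarity

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    # LogisticRegression handles the three polarity classes multinomially
    # with its default lbfgs solver.
    "multinomial regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")

# Feature importances give a rough per-modality comparison, analogous in
# spirit to the paper's analysis of which modalities carry information.
rf = models["random forest"].fit(X, y)
for modality, importance in zip(modalities, rf.feature_importances_):
    print(f"{modality}: {importance:.2f}")
```

On random data the accuracy will hover around chance (about 0.33); with real annotated features, the relative importances would indicate which modalities carry the most predictive information.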



Citation

Axelsson, A., & Skantze, G. (2022). Multimodal User Feedback During Adaptive Robot-Human Presentations. Frontiers in Computer Science, 3. https://doi.org/10.3389/fcomp.2021.741148


Readers' Seniority

PhD / Post grad / Masters / Doc: 6 (55%)
Professor / Associate Prof.: 2 (18%)
Researcher: 2 (18%)
Lecturer / Post doc: 1 (9%)

Readers' Discipline

Computer Science: 7 (70%)
Chemistry: 1 (10%)
Engineering: 1 (10%)
Linguistics: 1 (10%)
