Modeling Feedback in Interaction With Conversational Agents—A Review


Abstract

Intelligent agents interacting with humans through conversation (such as a robot, embodied conversational agent, or chatbot) need to receive feedback from the human to make sure that their communicative acts have the intended consequences. At the same time, the human interacting with the agent will also seek feedback, in order to ensure that her communicative acts have the intended consequences. In this review article, we give an overview of past and current research on how intelligent agents should be able both to give meaningful feedback toward humans and to understand feedback given by users. The review covers feedback across different modalities (e.g., speech, head gestures, gaze, and facial expression), different forms of feedback (e.g., backchannels, clarification requests), and models for allowing the agent to assess the user's level of understanding and adapt its behavior accordingly. Finally, we analyze some shortcomings of current approaches to modeling feedback, and identify important directions for future research.

Citation (APA)
Axelsson, A., Buschmeier, H., & Skantze, G. (2022, March 15). Modeling Feedback in Interaction With Conversational Agents—A Review. Frontiers in Computer Science. Frontiers Media S.A. https://doi.org/10.3389/fcomp.2022.744574
