A crowd-based evaluation of abuse response strategies in conversational agents


Abstract

How should conversational agents respond to verbal abuse from the user? To answer this question, we conduct a large-scale crowdsourced evaluation of abuse response strategies employed by current state-of-the-art systems. Our results show that some strategies, such as “polite refusal”, score highly across the board, while for other strategies, demographic factors such as age, as well as the severity of the preceding abuse, influence the user’s perception of which response is appropriate. In addition, we find that most data-driven models lag behind rule-based or commercial systems in terms of their perceived appropriateness.

Citation (APA)

Curry, A. C., & Rieser, V. (2019). A crowd-based evaluation of abuse response strategies in conversational agents. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue (pp. 361–366). Association for Computational Linguistics. https://doi.org/10.18653/v1/W19-5942
