Abstract
Conversational AI systems are rapidly evolving from purely transactional systems into social chatbots that can respond to a wide variety of user requests. In this article, we examine how current state-of-the-art conversational systems react to inappropriate requests, such as bullying and sexual harassment on the part of the user, by collecting and analysing the novel #MeToo corpus. Our results show that commercial systems mainly avoid answering, while rule-based chatbots show a variety of behaviours and often deflect. Data-driven systems, on the other hand, are often non-coherent, but also run the risk of being interpreted as flirtatious and sometimes react with counter-aggression. This includes our own system, trained on "clean" data, which suggests that inappropriate system behaviour is not caused by data bias alone.
Citation
Curry, A. C., & Rieser, V. (2018). #MeToo: How conversational systems respond to sexual harassment. In Proceedings of the 2nd ACL Workshop on Ethics in Natural Language Processing, EthNLP 2018 at the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018 (pp. 7–15). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-0802