Detecting Bot-Generated Text by Characterizing Linguistic Accommodation in Human-Bot Interactions


Abstract

The democratization of language generation models benefits many domains, from answering health-related questions to providing AI-driven tutoring services in education. However, it also makes it easier to generate human-like text at scale for nefarious activities, from spreading misinformation to targeting specific groups with hate speech. It is therefore essential to understand how people interact with bots and to develop methods for detecting bot-generated text. This paper shows that bot-generated text detection methods are more robust across datasets and models when they use information about how people respond to bot-generated text rather than the bot's text directly. We also analyze linguistic alignment, providing insight into differences between human-human and human-bot conversations.
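The abstract's central idea, detecting a bot from the human's responses rather than from the bot's own text, can be sketched as a simple text classifier. This is an illustration only, not the authors' actual model or data: the toy replies, labels, and the TF-IDF/logistic-regression pipeline are assumptions for demonstration.

```python
# Illustration of response-based bot detection (hypothetical toy example,
# not the paper's model): classify whether a conversational turn came from
# a bot using only the *human reply* to that turn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: each example is a human reply to a preceding turn,
# labeled by whether that preceding turn was bot-generated (1) or human (0).
replies = [
    "haha yeah that makes sense, I agree",
    "what do you mean? that doesn't answer my question",
    "cool, thanks for the tip!",
    "you already said that... are you a bot?",
]
labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(replies, labels)

# Prediction uses the human reply alone -- no access to the bot's text.
pred = clf.predict(["sorry, that still doesn't make sense"])[0]
```

In practice the paper's point is that features of the human side of the conversation transfer better across datasets and generation models than features of the bot text itself.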

Citation (APA)

Bhatt, P., & Rios, A. (2021). Detecting Bot-Generated Text by Characterizing Linguistic Accommodation in Human-Bot Interactions. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 3235–3247). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.286
