Detecting Social Bots on Facebook in an Information Veracity Context

Citations: 24 · Mendeley readers: 52

Abstract

Misleading information is nothing new, yet its impacts seem only to grow. We investigate this phenomenon in the context of social bots: software agents that mimic humans and interact with them while supporting specific agendas. This work explores the effect of social bots on the spread of misinformation on Facebook during the fall of 2016 and prototypes a tool for their detection. Using a dataset of about two million user comments discussing the posts of public pages for nine verified news outlets, we first annotate a large dataset for social bots. We then develop and evaluate commercially implementable bot-detection software for public pages, achieving an overall F1 score of 0.71. Applying this software, we find that only a small percentage (0.06%) of the commenting user population are social bots, yet their activity is extremely disproportionate: they produce 3.5% of all comments, a rate more than fifty times their share of the population. Finally, we observe that social bot comments appear at a rate of roughly one in ten on mainstream-outlet and reliable-content news posts. In light of these findings, and to support page owners and their communities, we release prototype code and software to help moderate social bots on Facebook.
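For illustration only, the minimal Python sketch below shows how a per-user bot classifier might be trained and evaluated with an F1 score, the metric reported above. It is not the authors' released prototype: the features (per-user activity statistics) and the synthetic labels are assumptions made for the example.

    # Hypothetical sketch of bot-classifier evaluation; not the paper's code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for the annotated data: one row per commenting user, with
    # illustrative features such as comment rate, account age, and link ratio.
    X = rng.random((1000, 3))
    y = (X[:, 0] > 0.9).astype(int)  # synthetic "bot" labels for this sketch only

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y
    )

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # The paper reports an overall F1 of 0.71; this line computes the same
    # metric on held-out data, not that result.
    print("F1:", f1_score(y_test, clf.predict(X_test)))

Any classifier and feature set could stand in for the random forest here; the point is only the train/annotate/evaluate loop the abstract describes.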

Citation (APA)

Santia, G. C., Mujib, M. I., & Williams, J. R. (2019). Detecting social bots on Facebook in an information veracity context. In Proceedings of the 13th International Conference on Web and Social Media, ICWSM 2019 (pp. 463–472). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/icwsm.v13i01.3244
