Testing the Generalization of Neural Language Models for COVID-19 Misinformation Detection

Abstract

A drastic rise in potentially life-threatening misinformation has been a by-product of the COVID-19 pandemic. Computational support for identifying false information within the massive body of data on the topic is crucial to prevent harm. Researchers have proposed many methods for flagging online misinformation related to COVID-19. However, these methods predominantly target specific content types (e.g., news) or platforms (e.g., Twitter), and their ability to generalize has so far remained largely unclear. To fill this gap, we evaluate fifteen Transformer-based models on five COVID-19 misinformation datasets that include social media posts, news articles, and scientific papers. We show that tokenizers and models tailored to COVID-19 data do not provide a significant advantage over general-purpose ones. Our study provides a realistic assessment of models for detecting COVID-19 misinformation. We expect that evaluating a broad spectrum of datasets and models will benefit future research in developing misinformation detection systems.
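
The following is a minimal sketch of the kind of comparison the abstract describes: fine-tuning a general-purpose Transformer and a COVID-adapted one on a binary misinformation dataset and comparing their test scores. It is not the authors' code; the model identifiers, file names, and the "text"/"label" column layout are illustrative assumptions.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# One general-purpose and one COVID-adapted model, chosen for illustration.
MODELS = [
    "bert-base-uncased",
    "digitalepidemiologylab/covid-twitter-bert-v2",
]

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # Convert logits to class predictions and report F1.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=preds, references=labels)

# Hypothetical local CSV files with "text" and "label" (0/1) columns.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

for name in MODELS:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # Tokenize the text column; padding is handled per batch by the Trainer.
    tokenized = data.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
        batched=True,
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=f"out/{name.split('/')[-1]}",
            num_train_epochs=3,
            per_device_train_batch_size=16,
        ),
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["test"],
        tokenizer=tokenizer,
        compute_metrics=compute_metrics,
    )
    trainer.train()
    print(name, trainer.evaluate())
```

Running the same loop over several datasets (e.g., social media posts, news articles, scientific papers) would approximate the cross-dataset generalization test reported in the paper.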

Citation (APA)

Wahle, J. P., Ashok, N., Ruas, T., Meuschke, N., Ghosal, T., & Gipp, B. (2022). Testing the Generalization of Neural Language Models for COVID-19 Misinformation Detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13192 LNCS, pp. 381–392). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-96957-8_33
