Toward a better performance evaluation framework for fake news classification


Abstract

The rising prevalence of fake news and its alarming downstream impact have motivated both industry and academia to build a substantial number of fake news classification models, each with a unique architecture. Yet, the research community lacks a comprehensive evaluation framework that can provide multifaceted comparisons between these models beyond simple metrics such as accuracy or F1 score. In this work, we examine a representative subset of classifiers using a simple set of performance evaluation and error analysis steps. We demonstrate that model performance varies considerably based on i) dataset, ii) evaluation archetype, and iii) performance metric. Additionally, classifiers exhibit a potential bias against small and conservative-leaning credible news sites. Finally, model performance also varies with external events and article topics. In sum, our results highlight the need to move toward systematic benchmarking.
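To illustrate why a single metric can be misleading in this setting, here is a minimal sketch (not from the paper) that computes accuracy and F1 with plain Python. On the class-imbalanced data typical of fake news corpora, a classifier that never flags fake news can score high accuracy while its F1 for the fake class is zero.

```python
# Minimal sketch (illustrative, not the paper's evaluation code).
# Labels: 1 = fake, 0 = credible.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """F1 score for the positive (fake) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Imbalanced test set: 9 credible articles, 1 fake. A classifier that
# predicts "credible" for everything looks accurate but is useless.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
print(accuracy(y_true, y_pred))  # 0.9
print(f1(y_true, y_pred))        # 0.0
```

This is one reason the abstract argues for multifaceted comparisons: metric choice alone can reorder how models rank.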

Citation (APA)
Bozarth, L., & Budak, C. (2020). Toward a better performance evaluation framework for fake news classification. In Proceedings of the 14th International AAAI Conference on Web and Social Media, ICWSM 2020 (pp. 60–71). AAAI Press. https://doi.org/10.1609/icwsm.v14i1.7279
