Never guess what I heard... Rumor Detection in Finnish News: a Dataset and a Baseline


Abstract

This study presents a new dataset for rumor detection in Finnish-language news headlines. We evaluated two LSTM-based models and two BERT-based models and found substantial differences in their results. A fine-tuned FinBERT reaches the best overall accuracy of 94.3% and the best rumor-label accuracy of 96.0%. However, a model fine-tuned on Multilingual BERT reaches the best factual-label accuracy of 97.2%. Our results suggest that this performance difference stems from differences in the models' original training data. Furthermore, we find that a regular LSTM model works better than one initialized with pretrained word2vec embeddings. These findings suggest that more work is needed on pretrained models for Finnish, as they have been trained on small and biased corpora.
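The LSTM baseline described above can be sketched as a small binary classifier over headline tokens. This is a minimal illustration, not the authors' implementation: the vocabulary size, embedding and hidden dimensions, and two-label output (rumor vs. factual) are assumptions for demonstration purposes.

```python
# Hypothetical sketch of an LSTM baseline for rumor-vs-factual headline
# classification; hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

class HeadlineLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128):
        super().__init__()
        # Embeddings trained from scratch; the paper reports this variant
        # outperformed one initialized from pretrained word2vec vectors.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)  # two labels: rumor, factual

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)       # final hidden state of the LSTM
        return self.fc(h_n[-1])          # (batch, 2) classification logits

model = HeadlineLSTM()
batch = torch.randint(1, 10000, (4, 12))  # 4 headlines, 12 token ids each
logits = model(batch)
print(logits.shape)  # torch.Size([4, 2])
```

A cross-entropy loss over these logits would complete the training setup; swapping the model for a fine-tuned FinBERT sequence classifier follows the same labeling scheme.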

Cite

APA

Hämäläinen, M., Alnajjar, K., Partanen, N., & Rueter, J. (2021). Never guess what I heard... Rumor Detection in Finnish News: a Dataset and a Baseline. In NLP4IF 2021 - NLP for Internet Freedom: Censorship, Disinformation, and Propaganda, Proceedings of the 4th Workshop (pp. 39–44). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.nlp4if-1.6
