Evaluating Unsupervised Representation Learning for Detecting Stances of Fake News


Abstract

Our goal is to evaluate the usefulness of unsupervised representation learning techniques for detecting stances of Fake News. To this end, we examine several pretrained language models with respect to their performance on two Fake News related data sets, both consisting of instances with a headline, an associated news article, and the stance of the article towards the respective headline. Specifically, we aim to understand how much hyperparameter tuning is necessary when finetuning the pretrained architectures, how well transfer learning works in this specific case of stance detection, and how sensitive the models are to changes in hyperparameters such as batch size, learning rate (schedule), sequence length, and the freezing technique. The results indicate that the computationally more expensive autoregressive approach of XLNet (Yang et al., 2019) is outperformed by BERT-based models, notably by RoBERTa (Liu et al., 2019). While the learning rate appears to be the most important hyperparameter, experiments with different freezing techniques indicate that all evaluated architectures had already learned powerful language representations that provide a good starting point for finetuning.
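The two design choices the abstract highlights, which layers to freeze during finetuning and the learning-rate schedule, can be sketched as below. This is a minimal illustration and not the authors' code: the parameter names mimic the Hugging Face naming convention for BERT-style models, and the helper names (`is_frozen`, `lr_at_step`) are hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# freeze the embeddings plus the lowest encoder layers, and use a
# linear warmup/decay learning-rate schedule, as is common when
# finetuning BERT-style models.

def is_frozen(param_name, n_frozen_layers):
    """Freeze embeddings and the first n_frozen_layers encoder layers."""
    if param_name.startswith("embeddings"):
        return True
    parts = param_name.split(".")
    if "layer" in parts:
        # Parameter names look like "encoder.layer.<i>...."
        return int(parts[parts.index("layer") + 1]) < n_frozen_layers
    return False  # e.g. the classification head stays trainable

def lr_at_step(step, total_steps, peak_lr, warmup_steps):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# Toy example: a 4-layer encoder with a classification head.
params = (
    ["embeddings.word_embeddings.weight"]
    + [f"encoder.layer.{i}.attention.self.query.weight" for i in range(4)]
    + ["classifier.weight"]
)
trainable = [p for p in params if not is_frozen(p, n_frozen_layers=2)]
# trainable -> layers 2 and 3 plus the classifier head
```

In a real finetuning run one would set `param.requires_grad = False` for the frozen parameters and pass `lr_at_step` to the optimizer's scheduler; the cut-off layer and warmup length are exactly the kind of hyperparameters the paper varies.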

Citation (APA)
Guderlei, M., & Aßenmacher, M. (2020). Evaluating Unsupervised Representation Learning for Detecting Stances of Fake News. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 6339–6349). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.558
