POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection


Abstract

Ideology is at the core of political science research. Yet, there still do not exist general-purpose tools to characterize and predict ideology across different genres of text. To this end, we study Pretrained Language Models using novel ideology-driven pretraining objectives that rely on the comparison of articles on the same story written by media of different ideologies. We further collect a large-scale dataset, consisting of more than 3.6M political news articles, for pretraining. Our model POLITICS outperforms strong baselines and the previous state-of-the-art models on ideology prediction and stance detection tasks. Further analyses show that POLITICS is especially good at understanding long or formally written texts, and is also robust in few-shot learning scenarios.

Citation (APA)

Liu, Y., Zhang, X. F., Wegsman, D., Beauchamp, N., & Wang, L. (2022). POLITICS: Pretraining with Same-story Article Comparison for Ideology Prediction and Stance Detection. In Findings of the Association for Computational Linguistics: NAACL 2022 - Findings (pp. 1354–1374). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-naacl.101
