Using Social and Linguistic Information to Adapt Pretrained Representations for Political Perspective Identification

18 citations · 55 Mendeley readers

Abstract

Understanding the political perspective that shapes how events are discussed in the media is increasingly important given the dramatic changes in how news is distributed. As text classification models advance, the performance of political perspective detection is also improving rapidly. However, current deep-learning text models typically require a large amount of supervised training data, which can be very expensive to obtain for this task. Meanwhile, models pretrained on general-domain sources and tasks (e.g., BERT) lack the ability to focus on bias-related text spans. In this paper, we propose a novel framework that pretrains the text model on signals from the rich social and linguistic context that is readily available, including entity mentions, news sharing, and frame indicators. The pretrained models benefit from these bias-related auxiliary tasks and are therefore easier to train with bias labels. We demonstrate the effectiveness of the proposed framework through experiments on two news bias datasets. The pretrained models achieve significant performance improvements and are better at identifying bias-related text spans.
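The abstract only sketches the framework at a high level, so the following is a minimal, hypothetical illustration of the two-stage idea: pretrain an encoder on cheap auxiliary signals (entity mentions, news-sharing behavior, frame indicators), then fine-tune on scarce bias labels. It assumes a HuggingFace Transformers setup; the head definitions, label sets, and summed auxiliary loss are illustrative assumptions, not the authors' exact method.

```python
# A sketch of two-stage adaptation, assuming a HuggingFace-style setup.
# The auxiliary signals (entity mentions, news sharing, frame indicators)
# follow the abstract; everything else here is an illustrative assumption.
import torch.nn as nn
from transformers import AutoModel

class AdaptedEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 n_frames=15, n_bias_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Auxiliary heads for the distant-supervision pretraining stage
        # (hypothetical label spaces).
        self.entity_head = nn.Linear(hidden, 2)        # mentions a tracked entity?
        self.sharing_head = nn.Linear(hidden, 2)       # shared by left- or right-leaning users?
        self.frame_head = nn.Linear(hidden, n_frames)  # which policy frame the text evokes
        # Target head for the supervised fine-tuning stage.
        self.bias_head = nn.Linear(hidden, n_bias_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state[:, 0]  # [CLS] representation

def pretrain_step(model, batch, loss_fn=nn.CrossEntropyLoss()):
    """Stage 1: fit the cheap social/linguistic signals (unweighted sum assumed)."""
    cls = model(batch["input_ids"], batch["attention_mask"])
    return (loss_fn(model.entity_head(cls), batch["entity_label"])
            + loss_fn(model.sharing_head(cls), batch["sharing_label"])
            + loss_fn(model.frame_head(cls), batch["frame_label"]))

def finetune_step(model, batch, loss_fn=nn.CrossEntropyLoss()):
    """Stage 2: fit the expensive bias labels, reusing the adapted encoder."""
    cls = model(batch["input_ids"], batch["attention_mask"])
    return loss_fn(model.bias_head(cls), batch["bias_label"])
```

In this reading, stage 1 runs `pretrain_step` over large amounts of distantly labeled text, and stage 2 reuses the same encoder weights in `finetune_step` on the small bias-labeled dataset, which is where the sample-efficiency gains described in the abstract would come from.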

Citation (APA)

Li, C., & Goldwasser, D. (2021). Using Social and Linguistic Information to Adapt Pretrained Representations for Political Perspective Identification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4569–4579). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.401
