A Domain-adaptive Pre-training Approach for Language Bias Detection in News


Abstract

Media bias is a multi-faceted construct influencing individual behavior and collective decision-making. Slanted news reporting is the result of one-sided and polarized writing, which can occur in various forms. In this work, we focus on an important form of media bias, i.e., bias by word choice. Detecting biased word choices is a challenging task due to its linguistic complexity and the lack of representative gold-standard corpora. We present DA-RoBERTa, a new state-of-the-art transformer-based model adapted to the media bias domain, which identifies sentence-level bias with an F1 score of 0.814. In addition, we train DA-BERT and DA-BART, two more transformer models adapted to the bias domain. Our proposed domain-adapted models outperform prior bias detection approaches on the same data.
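For readers unfamiliar with the reported metric: sentence-level bias detection is evaluated here as binary classification, scored by F1, the harmonic mean of precision and recall. The sketch below illustrates the computation; the counts are invented for demonstration (chosen so the score lands near the paper's reported 0.814), not the paper's actual evaluation results.

```python
# Illustration of the sentence-level F1 metric used above.
# tp/fp/fn counts are hypothetical, not taken from the paper.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for binary classification."""
    precision = tp / (tp + fp)  # flagged sentences that are truly biased
    recall = tp / (tp + fn)     # biased sentences that were flagged
    return 2 * precision * recall / (precision + recall)

# Example: 83 biased sentences correctly flagged, 21 false alarms,
# 17 biased sentences missed.
print(round(f1_score(tp=83, fp=21, fn=17), 3))  # → 0.814
```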

Citation (APA)

Krieger, J. D., Spinde, T., Ruas, T., Kulshrestha, J., & Gipp, B. (2022). A domain-adaptive pre-training approach for language bias detection in news. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3529372.3530932
