fBERT: A Neural Transformer for Identifying Offensive Content


Abstract

Transformer-based models such as BERT, XLNet, and XLM-R have achieved state-of-the-art performance across various NLP tasks, including the identification of offensive language and hate speech, an important problem in social media. In this paper, we present fBERT, a BERT model retrained on SOLID, the largest English offensive language identification corpus available, with over 1.4 million offensive instances. We evaluate fBERT's performance on identifying offensive content on multiple English datasets, and we test several thresholds for selecting instances from SOLID. The fBERT model will be made freely available to the community.
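SOLID is a semi-supervised corpus in which each instance carries a model-assigned confidence score, so threshold-based instance selection of the kind the abstract describes reduces to a simple filter over (text, score) pairs. A minimal sketch of that selection step, assuming scores in [0, 1]; the field layout and values here are illustrative, not taken from the paper:

```python
def select_instances(corpus, threshold):
    """Keep instances whose offensive-confidence score meets the threshold.

    `corpus` is a list of (text, score) pairs; the scores are hypothetical
    stand-ins for SOLID's aggregated model confidences.
    """
    return [(text, score) for text, score in corpus if score >= threshold]


# Illustrative data only -- not actual SOLID instances.
corpus = [
    ("example tweet a", 0.95),
    ("example tweet b", 0.62),
    ("example tweet c", 0.30),
]

# A stricter threshold keeps fewer but higher-confidence instances,
# trading training-set size for label quality.
high_conf = select_instances(corpus, 0.9)  # keeps 1 instance
mid_conf = select_instances(corpus, 0.5)   # keeps 2 instances
```

Sweeping the threshold, as the paper does, lets one pick the point where the retraining data is both large and reliably offensive-labeled.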

Citation (APA)

Sarkar, D., Zampieri, M., Ranasinghe, T., & Ororbia, A. (2021). fBERT: A Neural Transformer for Identifying Offensive Content. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 1792–1798). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.154
