Transformer-based models such as BERT, XLNet, and XLM-R have achieved state-of-the-art performance across various NLP tasks, including the identification of offensive language and hate speech, an important problem in social media. In this paper, we present fBERT, a BERT model retrained on SOLID, the largest English offensive language identification corpus available, with over 1.4 million offensive instances. We evaluate fBERT's performance in identifying offensive content on multiple English datasets, and we test several thresholds for selecting instances from SOLID. The fBERT model will be made freely available to the community.
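Since fBERT is a retrained BERT checkpoint, it can in principle be used like any other sequence classification model in the Hugging Face transformers ecosystem. The sketch below assumes a released checkpoint under a hypothetical hub identifier (the placeholder name is an assumption, not the authors' confirmed release path) and shows how such a model could be loaded and applied to a single post.

```python
# Minimal sketch: loading a retrained offensive-language model with transformers.
# The model identifier below is a hypothetical placeholder; substitute the
# checkpoint actually released by the fBERT authors.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "path/to/fBERT-checkpoint"  # placeholder, not a confirmed hub name

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def classify(text: str) -> int:
    """Return the predicted class index (e.g., offensive vs. not offensive)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(torch.argmax(logits, dim=-1).item())

print(classify("example social media post"))
```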
Citation
Sarkar, D., Zampieri, M., Ranasinghe, T., & Ororbia, A. (2021). fBERT: A Neural Transformer for Identifying Offensive Content. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 1792–1798). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.154