UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection

Abstract

The real-world impact of polarization and toxicity in the online sphere marked the end of 2020 and the beginning of 2021 in a negative way. SemEval-2021 Task 5 - Toxic Spans Detection is based on a novel annotation of a subset of the Jigsaw Unintended Bias dataset and is the first toxicity detection task dedicated to identifying the spans that make a text toxic. Participants had to automatically detect the character spans in short comments that render the message toxic. Our approach applies Virtual Adversarial Training in a semi-supervised setting during the fine-tuning of several Transformer-based models (i.e., BERT and RoBERTa), in combination with Conditional Random Fields. This method yields performance improvements and more robust models, enabling us to achieve an F1-score of 65.73% in the official submission and an F1-score of 66.13% after further tuning during post-evaluation.
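In virtual adversarial training, the perturbation is computed in the model's embedding space and the training objective penalizes the divergence between predictions on clean and perturbed inputs; because this term requires no labels, unlabeled comments can also contribute, which is what enables the semi-supervised setup. The snippet below is a minimal sketch of such a VAT loss for token classification, assuming a HuggingFace-style model that accepts `inputs_embeds`; the function and hyperparameter names (`vat_loss`, `xi`, `epsilon`) are illustrative and not taken from the paper.

```python
# Minimal VAT-loss sketch for token classification (illustrative, not the
# authors' exact implementation).
import torch
import torch.nn.functional as F

def vat_loss(model, inputs_embeds, attention_mask, xi=1e-6, epsilon=1.0):
    """KL divergence between predictions on clean and perturbed embeddings."""
    with torch.no_grad():
        clean_logits = model(inputs_embeds=inputs_embeds,
                             attention_mask=attention_mask).logits

    # Random unit-norm direction, refined by one power-iteration step.
    d = xi * F.normalize(torch.randn_like(inputs_embeds), dim=-1)
    d.requires_grad_(True)

    adv_logits = model(inputs_embeds=inputs_embeds + d,
                       attention_mask=attention_mask).logits
    adv_kl = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                      F.softmax(clean_logits, dim=-1),
                      reduction="batchmean")
    grad, = torch.autograd.grad(adv_kl, d)

    # Adversarial perturbation of radius epsilon along the gradient direction.
    r_adv = epsilon * F.normalize(grad.detach(), dim=-1)
    adv_logits = model(inputs_embeds=inputs_embeds + r_adv,
                       attention_mask=attention_mask).logits
    return F.kl_div(F.log_softmax(adv_logits, dim=-1),
                    F.softmax(clean_logits, dim=-1),
                    reduction="batchmean")
```

During fine-tuning, a term like this would typically be added to the supervised loss (e.g., the CRF negative log-likelihood) with a weighting coefficient, while unlabeled batches contribute only the VAT term.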

Citation (APA)

Paraschiv, A., Cercel, D. C., & Dascalu, M. (2021). UPB at SemEval-2021 Task 5: Virtual Adversarial Training for Toxic Spans Detection. In SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 225–232). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.semeval-1.26
