LTL-UDE at SemEval-2019 task 6: BERT and two-vote classification for categorizing offensiveness


Abstract

This paper describes LTL-UDE's systems for SemEval-2019 Shared Task 6. We present results for Subtasks A and C. In Subtask A, we experiment with an embedding representation of postings and use a Multi-Layer Perceptron and BERT to categorize postings. Our best result, obtained with BERT, ranks 10th out of 103 submissions. In Subtask C, we applied a two-vote classification approach with minority fallback, which ranks 19th out of 65 submissions.
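As a rough illustration of the two-vote idea, the following Python sketch combines the predictions of two classifiers and, on disagreement, falls back to an assumed minority class. The abstract does not spell out the exact fallback rule, so the function, the label set, and the choice of minority label here are all assumptions for illustration only:

```python
def two_vote_classify(pred_a, pred_b, minority_label):
    """Two-vote combination with minority fallback (assumed rule):
    if both classifiers agree, take their shared label; otherwise
    fall back to the designated minority class."""
    return pred_a if pred_a == pred_b else minority_label

# Hypothetical target-type labels in the style of Subtask C
preds_a = ["IND", "GRP", "OTH", "IND"]
preds_b = ["IND", "OTH", "OTH", "GRP"]
minority = "OTH"  # assumed minority class for this sketch

combined = [two_vote_classify(a, b, minority) for a, b in zip(preds_a, preds_b)]
print(combined)  # ['IND', 'OTH', 'OTH', 'OTH']
```

The appeal of such a fallback is that disagreement between the two voters can be treated as a signal of uncertainty, which is then resolved in favor of the class that is otherwise hardest to recall.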

Citation (APA)

Aggarwal, P., Horsmann, T., Wojatzki, M., & Zesch, T. (2019). LTL-UDE at SemEval-2019 task 6: BERT and two-vote classification for categorizing offensiveness. In NAACL HLT 2019 - International Workshop on Semantic Evaluation, SemEval 2019, Proceedings of the 13th Workshop (pp. 678–682). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s19-2121
