JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques using BERT Pretrained Model


Abstract

This paper presents the JUST team's submission to SemEval-2020 Task 11, Detection of Propaganda Techniques in News Articles. Of the competition's two subtasks, we participated in the Technique Classification (TC) subtask, which aims to identify the propaganda technique used in a given propaganda fragment. We implemented and evaluated various models for detecting propaganda. Our proposed model is based on the uncased BERT pre-trained language model, which has achieved state-of-the-art performance on multiple NLP benchmarks. Our proposed model scored 0.55307 F1, outperforming the organizers' baseline model, which scored 0.2519 F1, and falling 0.07 short of the best-performing team. Compared with other participating systems, our submission ranked 15th out of 31 participants.
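The paper's code is not reproduced here; the following is a minimal sketch, assuming the Hugging Face transformers library, of how an uncased-BERT classifier over the TC subtask's 14 technique labels might be set up. The model name, example fragment, and sequence length are illustrative assumptions, not the authors' released configuration.

# Hypothetical sketch of a BERT-based technique classifier for the TC subtask.
# Hyperparameters and example text are illustrative, not the authors' setup.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_TECHNIQUES = 14  # the TC subtask defines 14 propaganda technique labels

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_TECHNIQUES
)

# A single propaganda fragment (made-up example text)
fragment = "They want to destroy everything we stand for!"
inputs = tokenizer(fragment, return_tensors="pt", truncation=True, max_length=128)

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, NUM_TECHNIQUES)
predicted_label = logits.argmax(dim=-1).item()
print(predicted_label)  # index into the 14 technique labels

Note that the classification head added by BertForSequenceClassification is randomly initialized, so in practice the model would first be fine-tuned on the TC training fragments before predictions like the one above are meaningful.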

Cite (APA)

Altiti, O., Abdullah, M., & Obiedat, R. (2020). JUST at SemEval-2020 Task 11: Detecting Propaganda Techniques using BERT Pretrained Model. In Proceedings of the 14th International Workshop on Semantic Evaluation, SemEval 2020, co-located with the 28th International Conference on Computational Linguistics, COLING 2020 (pp. 1749–1755). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.229
