This paper presents our approach to the shared task on Propaganda Detection in Arabic at the Seventh Arabic Natural Language Processing Workshop (WANLP 2022). We participated in Sub-task 1, in which the text of a tweet is given and the goal is to identify the propaganda techniques used in it, a multi-label classification problem. Our solution fine-tunes several transformer-based pre-trained language models. In our analysis, MARBERTv2 performed best among the models we considered, achieving a macro-F1 of 0.08175 and a micro-F1 of 0.61116. Our method ranked 4th in the testing phase of the challenge.
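As a rough illustration of the setup the abstract describes (fine-tuning a pre-trained Arabic language model for multi-label classification), the sketch below uses Hugging Face Transformers with MARBERTv2. It is not the authors' released code; the technique label list, the example tweet text, and the 0.5 decision threshold are illustrative assumptions.

```python
# Minimal sketch: multi-label fine-tuning setup for MARBERTv2 with
# Hugging Face Transformers. Labels and example text are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "UBC-NLP/MARBERTv2"                    # pre-trained Arabic LM
TECHNIQUES = ["Loaded Language", "Exaggeration", "Doubt"]  # placeholder label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(TECHNIQUES),
    problem_type="multi_label_classification",      # sigmoid outputs, BCE loss
)

# Encode one tweet and compute independent per-technique probabilities.
inputs = tokenizer("نص التغريدة هنا", truncation=True, max_length=128,
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# A technique is predicted when its probability exceeds the threshold (0.5 here).
predicted = [t for t, p in zip(TECHNIQUES, probs) if p > 0.5]
print(predicted)
```

In this multi-label formulation each technique gets its own sigmoid output, so a single tweet can be assigned zero, one, or several techniques; during training the same model would be optimized with binary cross-entropy over the label vector.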
Singh, G. (2022). AraProp at WANLP 2022 Shared Task: Leveraging Pre-Trained Language Models for Arabic Propaganda Detection. In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP 2022) (pp. 496–500). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.wanlp-1.56