ChavanKane at WANLP 2022 Shared Task: Large Language Models for Multi-label Propaganda Detection

Abstract

The spread of propaganda through the internet has increased drastically in recent years, and propaganda detection has gained importance because of its negative impact on society. In this work, we describe our approach to the WANLP 2022 shared task on propaganda detection in a multi-label setting. The task requires the model to label a given text with one or more of 21 propaganda techniques. We show that an ensemble of five models performs best on the task, achieving a micro-F1 of 59.73%. We also conduct comprehensive ablations and propose several future directions for this work.
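The micro-F1 reported above pools true positives, false positives, and false negatives across all labels before computing precision and recall. A minimal sketch of that metric for multi-label predictions, using hypothetical technique labels rather than the actual shared-task data:

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over multi-label predictions.

    gold, pred: lists of sets of label strings, one set per example.
    Counts are pooled across all examples and labels before
    computing precision and recall (micro-averaging).
    """
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # labels correctly predicted
        fp += len(p - g)   # labels predicted but not gold
        fn += len(g - p)   # gold labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy illustration (label names are examples only):
gold = [{"Loaded_Language", "Exaggeration"}, {"Doubt"}]
pred = [{"Loaded_Language"}, {"Doubt", "Smears"}]
print(round(micro_f1(gold, pred), 4))  # → 0.6667
```

Micro-averaging weights each (example, label) decision equally, so frequent techniques dominate the score; a macro-averaged F1 would instead weight each of the 21 techniques equally.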

Citation (APA)

Chavan, T., & Kane, A. (2022). ChavanKane at WANLP 2022 Shared Task: Large Language Models for Multi-label Propaganda Detection. In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP) (pp. 515–519). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.wanlp-1.60
