A Checkpoint on Multilingual Misogyny Identification


Abstract

We address the problem of identifying misogyny in tweets in monolingual and multilingual settings across three languages: English, Italian, and Spanish. We consider model variations with single and multiple languages, both in the pre-training of the transformer and in the fine-tuning on the downstream task, to assess the feasibility of detecting misogyny via transfer learning across languages. That is, we train monolingual transformers with monolingual data and multilingual transformers with both monolingual and multilingual data. Our models reach state-of-the-art performance in all three languages. The single-language BERT models perform best, closely followed by different configurations of multilingual BERT models. Performance drops in zero-shot classification across languages. Our error analysis shows that multilingual and monolingual models tend to make the same mistakes.
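The experimental design the abstract describes (monolingual transformers trained per language, plus a multilingual transformer fine-tuned on one or all languages, with zero-shot evaluation when the training and test languages differ) can be sketched as an experiment grid. This is a minimal illustrative sketch: the model identifiers are placeholders, not the paper's actual checkpoints, and the grid is an assumption inferred from the abstract.

```python
from itertools import product

# Illustrative placeholders, NOT the checkpoints used in the paper.
LANGUAGES = ["en", "it", "es"]
MONOLINGUAL_MODELS = {"en": "english-bert", "it": "italian-bert", "es": "spanish-bert"}

def experiments():
    """Enumerate the training/evaluation configurations sketched above."""
    runs = []
    # 1. Monolingual transformer, trained and tested on the same language.
    for lang in LANGUAGES:
        runs.append({"model": MONOLINGUAL_MODELS[lang],
                     "train": [lang], "test": lang, "zero_shot": False})
    # 2. Multilingual transformer fine-tuned on one language, tested on each;
    #    zero-shot whenever the train and test languages differ.
    for train, test in product(LANGUAGES, LANGUAGES):
        runs.append({"model": "multilingual-bert",
                     "train": [train], "test": test, "zero_shot": train != test})
    # 3. Multilingual transformer fine-tuned jointly on all three languages.
    for test in LANGUAGES:
        runs.append({"model": "multilingual-bert",
                     "train": list(LANGUAGES), "test": test, "zero_shot": False})
    return runs

if __name__ == "__main__":
    for run in experiments():
        print(run)
```

Enumerating the grid this way makes the zero-shot condition explicit: it is exactly the six multilingual-model runs where the fine-tuning language and the evaluation language differ.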

Citation (APA)

Muti, A., & Barrón-Cedeño, A. (2022). A Checkpoint on Multilingual Misogyny Identification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 454–460). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-srw.37
