A Benchmark Study of Contrastive Learning for Arabic Social Meaning

Abstract

Contrastive learning (CL) has brought significant progress to various NLP tasks. Despite this progress, CL has not been applied to Arabic NLP to date. Nor is it clear how much benefit it could bring to particular classes of tasks, such as those involved in Arabic social meaning (e.g., sentiment analysis, dialect identification, hate speech detection). In this work, we present a comprehensive benchmark study of state-of-the-art supervised CL methods on a wide array of Arabic social meaning tasks. Through extensive empirical analyses, we show that CL methods outperform vanilla finetuning on most tasks we consider. We also show that CL can be data efficient and quantify this efficiency. Overall, our work allows us to demonstrate the promise of CL methods, including in low-resource settings.
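The supervised CL objective the abstract refers to can be illustrated with a minimal sketch of the supervised contrastive (SupCon) loss: for each anchor example, all other batch examples sharing its label act as positives, and the loss pulls them together in embedding space while pushing the rest apart. This is a generic illustration in pure Python (the function name, toy embeddings, and temperature value are assumptions for the example, not taken from the paper):

```python
import math

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over one batch.

    For each anchor i, positives P(i) are the other examples with the same
    label; the loss is the mean over anchors of
        -1/|P(i)| * sum_{p in P(i)} log( exp(z_i.z_p / t) / sum_{k != i} exp(z_i.z_k / t) ).
    """
    # L2-normalize embeddings so dot products are cosine similarities.
    normed = []
    for z in embeddings:
        norm = math.sqrt(sum(x * x for x in z))
        normed.append([x / norm for x in z])

    n = len(normed)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no same-label partner contribute nothing
        # Denominator sums over every other example in the batch.
        denom = sum(
            math.exp(sum(a * b for a, b in zip(normed[i], normed[k])) / temperature)
            for k in range(n) if k != i
        )
        inner = 0.0
        for p in positives:
            sim = sum(a * b for a, b in zip(normed[i], normed[p])) / temperature
            inner += math.log(math.exp(sim) / denom)
        total += -inner / len(positives)
        anchors += 1
    return total / anchors
```

On a toy batch, the loss is lower when same-label embeddings sit close together than when they are scattered, which is the geometric effect the benchmarked CL methods exploit before or alongside classification finetuning.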

Citation (APA)

Khondaker, M. T. I., Nagoudi, E. M. B., Elmadany, A. R., Abdul-Mageed, M., & Lakshmanan, L. V. S. (2022). A Benchmark Study of Contrastive Learning for Arabic Social Meaning. In WANLP 2022 - 7th Arabic Natural Language Processing - Proceedings of the Workshop (pp. 63–75). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.wanlp-1.7
