Affect Analysis in Arabic Text: Further Pre-Training Language Models for Sentiment and Emotion

Citations: 5 · Mendeley readers: 24

Abstract

One of the main tasks in natural language processing (NLP) is the analysis of affective states (sentiment and emotion) in written text, and performance on this task has improved dramatically in recent years. However, studies on Arabic have relied on classical machine learning or deep learning algorithms for sentiment and emotion analysis more often than on current pre-trained language models. Moreover, further pre-training a language model on task-specific data (i.e., within-task and cross-task adaptation) has not yet been investigated for Arabic in general, or for the sentiment and emotion tasks in particular. In this paper, we adapt a BERT-based Arabic pre-trained language model to the sentiment and emotion tasks by further pre-training it on sentiment and emotion corpora. In doing so, we developed five new Arabic models: QST, QSR, QSRT, QE3, and QE6. Five sentiment and two emotion datasets, spanning both small- and large-resource settings, were used to evaluate the developed models. The adaptation approaches significantly improved performance on seven Arabic sentiment and emotion datasets, with gains ranging from 0.15% to 4.71%.
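Further pre-training of a BERT-based model, as described above, typically reuses BERT's masked-language-modelling (MLM) objective on the new domain corpus. The sketch below illustrates the standard 80/10/10 masking rule applied to token ids; the `[MASK]` id and vocabulary size are illustrative assumptions, not values taken from the paper or from any specific Arabic tokenizer.

```python
import random

MASK_ID = 103       # assumed [MASK] token id; varies by tokenizer
VOCAB_SIZE = 30000  # assumed vocabulary size; varies by model

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """BERT-style MLM masking for further pre-training.

    Selects ~mask_prob of positions; of those, 80% become [MASK],
    10% are replaced by a random token, and 10% stay unchanged.
    Returns (masked_ids, labels), where labels holds the original
    token at selected positions and -100 elsewhere (positions that
    do not contribute to the MLM loss).
    """
    rng = rng or random.Random()
    masked = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok
            r = rng.random()
            if r < 0.8:
                masked[i] = MASK_ID          # 80%: replace with [MASK]
            elif r < 0.9:
                masked[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token
            # else 10%: keep the original token
    return masked, labels
```

In practice this masking is applied on the fly at each epoch over the sentiment or emotion corpus, so the model sees different masked positions on every pass before being fine-tuned on the downstream task.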

Citation (APA):

Alshehri, W., Al-Twairesh, N., & Alothaim, A. (2023). Affect Analysis in Arabic Text: Further Pre-Training Language Models for Sentiment and Emotion. Applied Sciences (Switzerland), 13(9). https://doi.org/10.3390/app13095609
