Pre-trained Word Embeddings for Arabic Aspect-Based Sentiment Analysis of Airline Tweets

Abstract

Recently, the use of word embeddings has become one of the most significant advancements in natural language processing (NLP). In this paper, we compared two word embedding models for aspect-based sentiment analysis (ABSA) of Arabic tweets. The ABSA problem was formulated as a two-step process: aspect detection followed by sentiment polarity classification of the detected aspects. The compared embedding models were fastText Arabic Wikipedia and AraVec-Web, both available as pre-trained models. Our corpus consisted of 5K Arabic tweets related to airline services, manually labeled for ABSA with imbalanced aspect categories. For classification, we used a support vector machine (SVM) classifier for both aspect detection and sentiment polarity classification. Our results indicated that the fastText Arabic Wikipedia word embeddings performed slightly better than AraVec-Web.
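The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny embedding table, token lists, aspect labels ("punctuality", "comfort"), and training examples are all hypothetical stand-ins for the pre-trained fastText/AraVec vectors and the 5K labeled tweets. It only shows the general scheme of averaging word vectors into a tweet representation and feeding that to two SVM classifiers, one per step.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 4-dimensional "pre-trained" embeddings, standing in for
# the 300-dimensional fastText / AraVec vectors used in the paper.
EMB = {
    "delay":       np.array([1.0, 0.1, 0.0, 0.0]),
    "late":        np.array([0.9, 0.2, 0.0, 0.1]),
    "seat":        np.array([0.0, 1.0, 0.1, 0.0]),
    "comfortable": np.array([0.1, 0.9, 0.0, 0.8]),
    "great":       np.array([0.0, 0.1, 0.9, 0.9]),
    "bad":         np.array([0.8, 0.0, 0.1, -0.9]),
}
DIM = 4

def tweet_vector(tokens):
    """Average the embeddings of in-vocabulary tokens (zero vector if none)."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

# Step 1: aspect detection (toy aspects: "punctuality" vs. "comfort").
aspect_X = [tweet_vector(t) for t in
            [["delay", "late"], ["late", "bad"],
             ["seat", "comfortable"], ["seat", "great"]]]
aspect_y = ["punctuality", "punctuality", "comfort", "comfort"]
aspect_clf = SVC(kernel="linear").fit(aspect_X, aspect_y)

# Step 2: sentiment polarity classification of the detected aspect.
polarity_X = [tweet_vector(t) for t in
              [["great", "comfortable"], ["great"],
               ["bad", "delay"], ["bad", "late"]]]
polarity_y = ["positive", "positive", "negative", "negative"]
polarity_clf = SVC(kernel="linear").fit(polarity_X, polarity_y)

# Apply both steps to a new (toy) tweet.
tweet = ["seat", "comfortable", "great"]
v = tweet_vector(tweet).reshape(1, -1)
print(aspect_clf.predict(v)[0], polarity_clf.predict(v)[0])
```

In the paper's setting, the same averaged-embedding representation is used for both classifiers; only the label set changes between the aspect-detection and polarity steps.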

Citation (APA)

Ashi, M. M., Siddiqui, M. A., & Nadeem, F. (2019). Pre-trained Word Embeddings for Arabic Aspect-Based Sentiment Analysis of Airline Tweets. In Advances in Intelligent Systems and Computing (Vol. 845, pp. 241–251). Springer Verlag. https://doi.org/10.1007/978-3-319-99010-1_22
