Automated Arabic Long-Tweet Classification Using Transfer Learning with BERT

22 citations of this article
33 Mendeley readers

Abstract

Social media platforms such as Twitter are used by people with a wide range of activities, interests, and subjects, covering their everyday activities and plans as well as their opinions on religion, technology, or the products they use. In this paper, we present a bidirectional encoder representations from transformers (BERT)-based text classification model, ARABERT4TWC, for classifying users' Arabic tweets into different categories. This work aims to provide an enhanced deep-learning model that can automatically classify long Arabic tweets from different users. In the proposed work, a transformer-based text classification model is constructed from a pre-trained BERT model, provided by the Hugging Face Transformers library, with custom dense layers; a multi-class classification layer is built on top of the BERT encoder to categorize the tweets. First, data sanitation and preprocessing were performed on the raw Arabic corpus to improve the model's accuracy. Second, an Arabic-specific BERT model was built, and input embedding vectors were fed into it. Substantial experiments were executed on five publicly available datasets, and the fine-tuning technique was assessed in terms of tokenized vector length and learning rate. In addition, we compared the accuracy of various deep-learning models for classifying Arabic text.
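The abstract mentions a data sanitation and preprocessing step on the raw Arabic corpus but does not spell out its details. A minimal sketch of what such a step typically involves for Arabic tweets — stripping URLs, mentions, and hashtag marks, removing diacritics (tashkeel) and tatweel, and normalizing common letter variants — might look like the following; the exact rules used in the paper are an assumption here.

```python
import re

# Arabic diacritics (tashkeel) plus the tatweel/kashida character.
DIACRITICS = re.compile(r"[\u0617-\u061A\u064B-\u0652\u0640]")
# URLs, @mentions, and bare hashtag marks (hypothetical sanitation rules).
URL_MENTION = re.compile(r"(https?://\S+|www\.\S+|@\w+|#)")

def sanitize_tweet(text: str) -> str:
    """Sketch of a data-sanitation step for Arabic tweets (rules assumed)."""
    text = URL_MENTION.sub(" ", text)           # drop URLs, mentions, '#' marks
    text = DIACRITICS.sub("", text)             # remove tashkeel and tatweel
    text = re.sub("[إأآ]", "ا", text)           # normalize alef variants
    text = re.sub("ى", "ي", text)               # normalize alef maqsura to yaa
    text = re.sub("ة", "ه", text)               # normalize taa marbuta to haa
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    return text
```

After sanitation, the cleaned text would be tokenized into input embedding vectors for the BERT encoder; for example, `sanitize_tweet("@user مرحباً http://t.co/x")` yields the bare word `"مرحبا"` with the mention, URL, and tanween removed.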

Citation (APA)

Alruily, M., Manaf Fazal, A., Mostafa, A. M., & Ezz, M. (2023). Automated Arabic Long-Tweet Classification Using Transfer Learning with BERT. Applied Sciences (Switzerland), 13(6). https://doi.org/10.3390/app13063482
