Deep Learning for Arabic Image Captioning: A Comparative Study of Main Factors and Preprocessing Recommendations

Abstract

Image captioning has been a major research focus over the last decade, with most efforts aimed at English captioning. Given the limited work on Arabic, relying on translation as an alternative to generating Arabic captions directly accumulates errors from both translation and caption prediction. When working with Arabic datasets, preprocessing is crucial, and handling Arabic morphological features such as nunation requires additional steps. We tested 32 different combinations of the variables that affect caption generation, including preprocessing, deep learning techniques (LSTM and GRU), dropout, and image feature extraction (InceptionV3, VGG16). Our results on the only publicly available Arabic dataset outperform the best previously reported results, with BLEU-1 = 36.5, BLEU-2 = 21.4, BLEU-3 = 12, and BLEU-4 = 6.6. This study demonstrates that Arabic preprocessing and VGG16 image feature extraction improve Arabic caption quality, whereas we observed no measurable difference when using dropout or when using LSTM instead of GRU.
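As a rough illustration of the two factors the abstract highlights, the sketch below strips nunation and other diacritics from Arabic captions with a regular expression and extracts VGG16 fc2 features with Keras. The specific regex, the fc2 layer choice, and the function names are assumptions for illustration, not the authors' published pipeline.

```python
# Minimal sketch (assumed details): normalize Arabic caption text and
# extract VGG16 image features, as one plausible version of the
# preprocessing and feature-extraction steps compared in the paper.
import re
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

# Nunation marks (U+064B-U+064D) plus the remaining short-vowel diacritics,
# superscript alef, and tatweel; removing them normalizes the caption text.
DIACRITICS = re.compile(r'[\u064B-\u0652\u0670\u0640]')

def preprocess_arabic(caption: str) -> str:
    caption = DIACRITICS.sub('', caption)   # drop nunation and other diacritics
    return ' '.join(caption.split())        # collapse extra whitespace

# VGG16 truncated at the fc2 layer yields a 4096-d feature vector per image.
base = VGG16(weights='imagenet')
extractor = Model(inputs=base.input, outputs=base.get_layer('fc2').output)

def extract_features(img_path: str) -> np.ndarray:
    img = image.load_img(img_path, target_size=(224, 224))
    x = np.expand_dims(image.img_to_array(img), axis=0)
    return extractor.predict(preprocess_input(x))[0]
```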

Citation (APA)

Hejazi, H., & Shaalan, K. (2021). Deep Learning for Arabic Image Captioning: A Comparative Study of Main Factors and Preprocessing Recommendations. International Journal of Advanced Computer Science and Applications, 12(11), 37–44. https://doi.org/10.14569/IJACSA.2021.0121105
