Automatic Arabic diacritization is one of the most important and challenging problems in Arabic natural language processing (NLP). Recurrent neural networks (RNNs) have recently achieved state-of-the-art results on sequence transcription problems in general, and on Arabic diacritization in particular. In this work, we investigate the effect of varying the size of the training corpus on diacritization accuracy. We produce a cleaned corpus of approximately 550k sequences extracted from the full Tashkeela dataset and use subsets of this corpus in our training experiments. Our base model is a deep bidirectional long short-term memory (BiLSTM) RNN that transcribes undiacritized sequences of Arabic letters into fully diacritized sequences. Our experiments show that error rates improve as the size of the training corpus increases. Our best-performing model achieves average diacritic and word error rates of 1.45% and 3.89%, respectively. Compared with state-of-the-art diacritization systems, we reduce the word error rate by 12% over the best published results.
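To make the base architecture concrete, here is a minimal PyTorch sketch of a deep BiLSTM character tagger, assuming the common formulation in which diacritization is cast as per-character classification (input and output sequences align one-to-one, so transcription reduces to predicting a diacritic class for each letter). All names and hyperparameters below (BiLSTMDiacritizer, n_chars, n_diacritics, emb_dim, hidden_dim, n_layers) are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMDiacritizer(nn.Module):
    """Character-level BiLSTM tagger: maps each undiacritized Arabic
    letter to a diacritic class (hypothetical configuration)."""

    def __init__(self, n_chars=50, n_diacritics=16,
                 emb_dim=128, hidden_dim=256, n_layers=3, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb_dim, padding_idx=0)
        # Deep bidirectional LSTM: forward and backward context per letter.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=n_layers,
                              batch_first=True, bidirectional=True,
                              dropout=dropout)
        self.classify = nn.Linear(2 * hidden_dim, n_diacritics)

    def forward(self, char_ids):        # (batch, seq_len) of letter ids
        x = self.embed(char_ids)        # (batch, seq_len, emb_dim)
        h, _ = self.bilstm(x)           # (batch, seq_len, 2 * hidden_dim)
        return self.classify(h)         # per-character diacritic logits

# Usage sketch with dummy data: predict a diacritic for every character.
model = BiLSTMDiacritizer()
batch = torch.randint(1, 50, (8, 100))  # 8 sequences of 100 letter ids
logits = model(batch)                   # (8, 100, 16)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 16),
                             torch.randint(0, 16, (8 * 100,)))
```

Because input and output lengths match in this formulation, a plain cross-entropy loss over per-character classes suffices; alignment-free objectives such as CTC would only be needed if the output were modeled as a longer, fully diacritized character stream.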
Karim, A. A., & Abandah, G. (2021). On the training of deep neural networks for automatic Arabic-text diacritization. International Journal of Advanced Computer Science and Applications, 12(8), 276–286. https://doi.org/10.14569/IJACSA.2021.0120832