Multilingual models such as mBERT have been shown to exhibit impressive cross-lingual transfer for a number of languages. Despite this, performance drops for lower-resourced languages, especially when they are not part of the pre-training setup and when there are script differences. In this work we consider Maltese, a low-resource language of Arabic and Romance origins written in Latin script. Specifically, we investigate the impact of transliterating Maltese into Arabic script on a number of downstream tasks: Part-of-Speech Tagging, Dependency Parsing, and Sentiment Analysis. We compare multiple transliteration pipelines ranging from deterministic character maps to more sophisticated alternatives, including manually annotated word mappings and non-deterministic character mappings. For the latter, we show that selection techniques using n-gram language models of Tunisian Arabic, the dialect with the highest degree of mutual intelligibility with Maltese, yield better results on downstream tasks. Moreover, our experiments highlight that an Arabic pre-trained model paired with transliteration outperforms mBERT. Overall, our results show that transliterating Maltese is a viable option for improving its cross-lingual transfer capabilities.
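The contrast between the pipelines can be made concrete with a small sketch. The following is a minimal, hypothetical Python illustration of (1) a deterministic character map and (2) a non-deterministic map whose candidate outputs are ranked by a character n-gram language model; the character mappings, the Maltese example word, and the tiny stand-in for Tunisian Arabic training data are all illustrative assumptions, not the paper's actual resources or results.

```python
"""Minimal sketch (assumptions, not the paper's actual pipeline):
(1) a deterministic Maltese -> Arabic-script character map, and
(2) a non-deterministic map whose candidates are ranked with a
character-bigram language model. The maps and the tiny training
string are illustrative placeholders."""
from collections import Counter
from itertools import product
from math import log

# (1) Deterministic: each Maltese character has exactly one output
# (toy subset of the alphabet; a real table also covers digraphs like "għ").
DET_MAP = {"b": "ب", "t": "ت", "ħ": "ح", "d": "د", "r": "ر",
           "s": "س", "x": "ش", "q": "ق", "k": "ك", "l": "ل",
           "m": "م", "n": "ن", "a": "ا", "i": "ي", "u": "و"}

def transliterate_deterministic(word: str) -> str:
    return "".join(DET_MAP.get(ch, ch) for ch in word)

# (2) Non-deterministic: some characters admit several outputs, e.g. a
# vowel may surface as a letter or be dropped, as in Arabic orthography.
NONDET_MAP = {ch: [out] for ch, out in DET_MAP.items()}
NONDET_MAP.update({"a": ["ا", ""], "i": ["ي", ""], "u": ["و", ""]})

def candidates(word: str) -> list[str]:
    options = [NONDET_MAP.get(ch, [ch]) for ch in word]
    return ["".join(combo) for combo in product(*options)]

def train_bigram_lm(corpus: str):
    """Character-bigram LM with add-one smoothing; returns a scorer.
    Scores are unnormalized log-probabilities, so this toy version
    slightly favors shorter candidates."""
    padded = f"^{corpus}$"
    bigrams = Counter(zip(padded, padded[1:]))
    unigrams = Counter(padded)
    vocab = len(set(padded))
    def logprob(text: str) -> float:
        p = f"^{text}$"
        return sum(log((bigrams[a, b] + 1) / (unigrams[a] + vocab))
                   for a, b in zip(p, p[1:]))
    return logprob

# Stand-in for Tunisian Arabic training data (placeholder string).
score = train_bigram_lm("كتب مكتوب كاتب كتابة سلام دروس")

def transliterate_lm(word: str) -> str:
    return max(candidates(word), key=score)

word = "kitba"  # Maltese for "writing"
print(transliterate_deterministic(word))  # one fixed output
print(transliterate_lm(word))             # highest-scoring candidate
```

Ranking candidates with a language model of a closely related variety is what motivates the choice of Tunisian Arabic here: among the ambiguous outputs of the non-deterministic map, the model prefers strings that look most like attested Arabic text.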
Micallef, K., Eryani, F., Habash, N., Bouamor, H., & Borg, C. (2023). Exploring the Impact of Transliteration on NLP Performance: Treating Maltese as an Arabic Dialect. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 22–32). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.cawl-1.4