Classic machine translation (MT) evaluation methods, such as the bilingual evaluation understudy (BLEU) score, perform poorly when evaluating translations into morphologically rich languages such as Arabic. However, recent advances in word and sentence embeddings have opened new research avenues for low-resource languages. This paper proposes a novel linguistics-based method for evaluating English-to-Arabic machine translations. The proposed approach combines length- and position-based penalties with context-based schemes, namely part-of-speech (POS) tagging and multilingual sentence-BERT (SBERT) models, for machine translation evaluation. The proposed technique is assessed using Pearson correlation as the performance measure and compared with state-of-the-art techniques. The experimental results demonstrate that the proposed model outperforms other MT evaluation methods such as BLEU.
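As a rough illustration of the two building blocks the abstract mentions, the sketch below shows how a semantic similarity score between a reference and a candidate translation could be computed from sentence embeddings, and how Pearson correlation could then be used to meta-evaluate a metric against a set of quality ratings. The embeddings and score values here are toy placeholders, not outputs of the paper's actual SBERT pipeline or its penalty schemes.

```python
import numpy as np

def cosine_similarity(u, v):
    # Semantic similarity between two sentence embeddings
    # (e.g. vectors produced by a multilingual SBERT model).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def pearson_corr(x, y):
    # Pearson correlation between a metric's scores and quality ratings.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Toy example: hypothetical embeddings for a reference and a candidate.
ref = np.array([0.20, 0.70, 0.10])
hyp = np.array([0.25, 0.65, 0.05])
score = cosine_similarity(ref, hyp)

# Meta-evaluation: correlate metric scores with quality ratings (toy values).
metric_scores = [0.91, 0.42, 0.77, 0.60]
quality_ratings = [0.95, 0.30, 0.80, 0.55]
r = pearson_corr(metric_scores, quality_ratings)
```

A higher Pearson `r` indicates that the metric's rankings track the reference ratings more closely, which is the sense in which the paper compares its method against BLEU.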
CITATION STYLE
Beseiso, M., Tripathi, S., Al-Shboul, B., & Aljadid, R. (2022). Semantics-based English-Arabic machine translation evaluation. Indonesian Journal of Electrical Engineering and Computer Science, 27(1), 189–197. https://doi.org/10.11591/ijeecs.v27.i1.pp189-197