Semantics-based English-Arabic machine translation evaluation


Abstract

Classic machine translation (MT) evaluation methods, such as the bilingual evaluation understudy (BLEU) score, have notably underperformed when evaluating machine translations for morphologically rich languages such as Arabic. However, recent advances in word- and sentence-vector representations have opened new research avenues for low-resource languages. This paper proposes a novel linguistics-based evaluation method for English sentences translated into Arabic. The proposed approach combines length- and position-based penalties with context-based schemes such as part-of-speech (POS) tagging and multilingual sentence-BERT (SBERT) models for machine translation evaluation. The proposed technique is evaluated using the Pearson correlation as the performance measure and compared with state-of-the-art techniques. The experimental results demonstrate that the proposed model clearly outperforms other MT evaluation methods such as BLEU.
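The evaluation pipeline described in the abstract can be sketched in a few lines. The sketch below assumes sentence embeddings (e.g., from a multilingual SBERT model) are already available as vectors; the BLEU-style length penalty, the way it is combined with cosine similarity, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def length_penalty(hyp_len, ref_len):
    """BLEU-style brevity penalty: 1 if the hypothesis is at least as
    long as the reference, otherwise exp(1 - ref/hyp)."""
    return 1.0 if hyp_len >= ref_len else math.exp(1.0 - ref_len / hyp_len)

def semantic_score(hyp_emb, ref_emb, hyp_len, ref_len):
    """Hypothetical combined score: embedding similarity scaled by a
    length penalty (position/POS penalties omitted for brevity)."""
    return cosine(hyp_emb, ref_emb) * length_penalty(hyp_len, ref_len)

def pearson(xs, ys):
    """Pearson correlation between metric scores and reference scores,
    used as the performance measure for the evaluation method."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In practice, the hypothesis and reference embeddings would come from a multilingual SBERT encoder, and the resulting per-sentence scores would be correlated (via `pearson`) with human quality judgments to compare the metric against baselines such as BLEU.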

Citation (APA)

Beseiso, M., Tripathi, S., Al-Shboul, B., & Aljadid, R. (2022). Semantics-based English-Arabic machine translation evaluation. Indonesian Journal of Electrical Engineering and Computer Science, 27(1), 189–197. https://doi.org/10.11591/ijeecs.v27.i1.pp189-197
