A hybrid neural machine translation technique for translating low resource languages


Abstract

Neural machine translation (NMT) has produced very promising results on high resource languages that have sizeable parallel datasets. However, low resource languages that lack sufficient parallel data face challenges in the automated translation field. The core component of NMT is a recurrent neural network, which can process sequential data at the word and sentence levels, provided the sequences are not too long. Because of the large number of possible word and sequence combinations, a large parallel dataset is required, which unfortunately is not always available, particularly for low resource languages. We therefore adapted a character-level neural translation model based on a combined structure of a recurrent neural network and a convolutional neural network. This model was trained on the IWSLT 2016 Arabic–English and the IWSLT 2015 English–Vietnamese datasets. The model produced encouraging results, particularly on the Arabic dataset, as Arabic is a morphologically rich language.
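The hybrid structure the abstract describes can be illustrated with a minimal sketch: character embeddings are passed through a 1D convolution to form local character n-gram features, and those features are then consumed by a recurrent cell. This is an illustrative toy in numpy, not the authors' implementation; the vocabulary, dimensions, and the simple Elman-style recurrence are all assumptions made for the example.

```python
import numpy as np

# Toy character-level encoder: 1D convolution over character embeddings,
# followed by a simple Elman RNN over the convolved features. This mirrors
# the hybrid CNN + RNN idea at a high level only; all sizes are arbitrary.

rng = np.random.default_rng(0)

chars = "abcdefghijklmnopqrstuvwxyz "          # toy character vocabulary
char_to_id = {c: i for i, c in enumerate(chars)}

emb_dim, conv_dim, hidden_dim, kernel = 8, 16, 32, 3

E = rng.standard_normal((len(chars), emb_dim)) * 0.1       # char embeddings
W_conv = rng.standard_normal((kernel, emb_dim, conv_dim)) * 0.1
W_xh = rng.standard_normal((conv_dim, hidden_dim)) * 0.1   # input-to-hidden
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1 # hidden-to-hidden

def encode(text):
    """Return the final RNN hidden state for a character sequence."""
    x = E[[char_to_id[c] for c in text]]                   # (T, emb_dim)
    # 1D convolution over sliding character windows (valid padding)
    steps = len(text) - kernel + 1
    feats = np.stack([
        np.tanh(sum(x[t + k] @ W_conv[k] for k in range(kernel)))
        for t in range(steps)
    ])                                                     # (steps, conv_dim)
    # Recurrent pass over the convolved character features
    h = np.zeros(hidden_dim)
    for f in feats:
        h = np.tanh(f @ W_xh + h @ W_hh)
    return h

h = encode("low resource translation")
print(h.shape)  # (32,)
```

Operating on characters rather than words keeps the vocabulary tiny, which is the key advantage for morphologically rich languages such as Arabic, where a single root can surface in many inflected word forms.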

Citation (APA)

Almansor, E. H., & Al-Ani, A. (2018). A hybrid neural machine translation technique for translating low resource languages. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10935 LNAI, pp. 347–356). Springer Verlag. https://doi.org/10.1007/978-3-319-96133-0_26
