KNOWLEDGE GRAPHS EFFECTIVENESS IN NEURAL MACHINE TRANSLATION IMPROVEMENT


Abstract

Maintaining semantic relations between words during the translation process yields more accurate target-language output from Neural Machine Translation (NMT). Although this is difficult to achieve from training data alone, Knowledge Graphs (KGs) can be leveraged to retain source-language semantic relations in the corresponding target-language translation. The core idea is to use KG entity relations as embedding constraints that improve the mapping from source to target. This paper describes two such embedding constraints, both of which employ Entity Linking (EL), the assignment of a unique identity to entities, to associate words in training sentences with entities in the KG: (1) a monolingual embedding constraint that enriches the semantic representation of source words through access to relations between entities in the KG; and (2) a bilingual embedding constraint that forces entity relations in the source language to be carried over to the corresponding entities in the target-language translation. The method is evaluated on English-Spanish translation, using Freebase as the source of knowledge. Our experimental results demonstrate that exploiting KG information not only decreases the number of unknown words in the translation but also improves translation quality.
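To make the idea concrete, the sketch below shows one way the two constraints could be attached to the NMT training objective as auxiliary loss terms. The abstract does not give the exact formulation, so the function names, the use of squared distances, the 0.1 loss weights, and the PyTorch framing are all illustrative assumptions rather than the paper's actual method.

import torch
import torch.nn.functional as F


def monolingual_kg_constraint(src_emb, related_pairs):
    """Pull together source-word embeddings whose linked KG entities are related.

    src_emb:       (vocab_size, dim) source-side embedding matrix.
    related_pairs: list of (word_id_a, word_id_b) pairs obtained via entity
                   linking, where the two linked KG entities share a relation.
    """
    if not related_pairs:
        return src_emb.new_zeros(())
    a_idx = torch.tensor([a for a, _ in related_pairs])
    b_idx = torch.tensor([b for _, b in related_pairs])
    # Penalize the distance between embeddings of KG-related source words.
    return F.mse_loss(src_emb[a_idx], src_emb[b_idx])


def bilingual_kg_constraint(src_emb, tgt_emb, aligned_pairs):
    """Carry a source-side entity relation over to the aligned target entities.

    aligned_pairs: list of ((src_a, src_b), (tgt_a, tgt_b)) tuples, where
                   (src_a, src_b) are KG-related source entities and
                   (tgt_a, tgt_b) their aligned target-side counterparts.
    """
    if not aligned_pairs:
        return src_emb.new_zeros(())
    src_a = torch.tensor([s[0] for s, _ in aligned_pairs])
    src_b = torch.tensor([s[1] for s, _ in aligned_pairs])
    tgt_a = torch.tensor([t[0] for _, t in aligned_pairs])
    tgt_b = torch.tensor([t[1] for _, t in aligned_pairs])
    # Encourage the target-side relation vector to match the source-side one.
    return F.mse_loss(tgt_emb[tgt_a] - tgt_emb[tgt_b],
                      src_emb[src_a] - src_emb[src_b])


# Hypothetical training step: combine with the usual sequence-to-sequence loss.
# loss = nmt_loss \
#        + 0.1 * monolingual_kg_constraint(src_embedding.weight, related_pairs) \
#        + 0.1 * bilingual_kg_constraint(src_embedding.weight, tgt_embedding.weight, aligned_pairs)

In this framing, entity linking supplies the index pairs and the constraints only add penalty terms, so the underlying NMT architecture is left unchanged.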

Cite (APA)

Ahmadnia, B., Dorr, B. J., & Kordjamshidi, P. (2020). Knowledge graphs effectiveness in neural machine translation improvement. Computer Science, 21(3), 287–306. https://doi.org/10.7494/csci.2020.21.3.3701
