Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation

Abstract

In this paper, we present a two-stage neural quality estimation model that uses multilevel task learning for translation quality estimation (QE) at the sentence, word, and phrase levels. Our approach is based on an end-to-end stacked neural model named Predictor-Estimator, which has two stages consisting of a neural word prediction model and a neural QE model. To efficiently train the two-stage model, a stack propagation method is applied, enabling us to jointly learn the word prediction model and the QE model in a single learning mode. In addition, we deploy multilevel task learning with stack propagation, where the training examples available for all QE subtasks (i.e., sentence/word/phrase levels) are used to train a Predictor-Estimator for a specific subtask. All of our submissions to the QE task of WMT17 are ensembles that combine a set of neural models trained under different settings of varying dimensionalities and shuffled training examples, eventually achieving the best performance for all subtasks at the sentence, word, and phrase levels.
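The core idea of stack propagation described above can be illustrated with a toy sketch: a shared hidden feature vector feeds both the word prediction stage and the QE stage, and the two losses are summed so a single backward pass would train both models jointly. The dimensions, weights, and loss forms below are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not the paper's settings).
vocab, hidden = 6, 4

# Stage 1 (Predictor): hidden features -> target-word distribution.
W_pred = rng.normal(size=(hidden, vocab))
# Stage 2 (Estimator): the same hidden features -> sentence-level QE score.
w_est = rng.normal(size=hidden)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(h, target_word, hter):
    """Stack propagation sketch: one shared representation feeds both
    stages, and the losses are summed into a single training objective."""
    p = softmax(h @ W_pred)                # predictor: word distribution
    pred_loss = -np.log(p[target_word])    # word-prediction cross-entropy
    qe = 1.0 / (1.0 + np.exp(-(h @ w_est)))  # estimator: QE score in (0, 1)
    est_loss = (qe - hter) ** 2            # sentence-level regression loss
    return pred_loss + est_loss

h = rng.normal(size=hidden)  # stands in for the predictor's hidden features
loss = joint_loss(h, target_word=2, hter=0.3)
print(loss)
```

Because both loss terms share `h`, gradients from the QE loss would also flow into the predictor's parameters, which is the point of training the stacked model jointly rather than in two separate phases.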

Citation (APA)
Kim, H., Lee, J. H., & Na, S. H. (2017). Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In WMT 2017 - 2nd Conference on Machine Translation, Proceedings (pp. 562–568). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-4763
