Continual Quality Estimation with Online Bayesian Meta-Learning

Abstract

Most current quality estimation (QE) models for machine translation are trained and evaluated in a static setting, where training and test data are assumed to come from a fixed distribution. In real-life settings, however, the test data a deployed QE model is exposed to may differ from its training data. In particular, training samples are often labelled by one annotator or a small set of annotators, whose perceptions of translation quality and whose needs may differ substantially from those of the end users who will rely on the predictions in practice. To address this challenge, we propose an online Bayesian meta-learning framework for the continual training of QE models that adapts them to the needs of different users while remaining robust to distributional shifts between training and test data. Experiments on data with varying numbers of users and language characteristics validate the effectiveness of the proposed approach.
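To make the setting concrete, the sketch below illustrates one possible online meta-learning loop for a sentence-level QE regressor, where per-user annotation batches arrive as a stream and each batch triggers a fast adaptation step followed by a meta-update. It is not the authors' algorithm: a simple Reptile-style interpolation stands in for the Bayesian meta-learning procedure of the paper, and the model, feature dimensions, and hyperparameters are all illustrative assumptions.

```python
# Illustrative sketch only: a generic online meta-learning loop for QE,
# not the Bayesian meta-learning method described in the paper.
import torch
import torch.nn as nn


class QERegressor(nn.Module):
    """Toy regressor mapping a fixed-size sentence-pair feature vector to a quality score."""

    def __init__(self, feat_dim: int = 768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def online_meta_update(model, user_batch, inner_steps=3, inner_lr=1e-3, meta_lr=0.1):
    """One online meta-step: adapt a copy of the model to a single user's
    labelled examples, then move the meta-parameters toward the adapted ones."""
    feats, scores = user_batch
    adapted = QERegressor(feats.shape[-1])
    adapted.load_state_dict(model.state_dict())
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    loss_fn = nn.MSELoss()

    # Inner loop: adapt to this user's quality annotations.
    for _ in range(inner_steps):
        opt.zero_grad()
        loss_fn(adapted(feats), scores).backward()
        opt.step()

    # Reptile-style outer update: interpolate meta-parameters toward the adapted weights.
    with torch.no_grad():
        for p_meta, p_adapt in zip(model.parameters(), adapted.parameters()):
            p_meta.add_(meta_lr * (p_adapt - p_meta))


if __name__ == "__main__":
    torch.manual_seed(0)
    meta_model = QERegressor()
    # Simulated stream of per-user annotation batches arriving over time.
    for _ in range(5):
        batch = (torch.randn(16, 768), torch.rand(16))
        online_meta_update(meta_model, batch)
```

In a continual setting like the one the paper targets, the meta-parameters would be updated as each user's feedback arrives, so the model stays close to a good initialisation for new users rather than overfitting to any single annotator's preferences.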

Citation (APA)

Obamuyide, A., Fomicheva, M., & Specia, L. (2021). Continual Quality Estimation with Online Bayesian Meta-Learning. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 2, pp. 190–197). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-short.25
