Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation

42 Citations
111 Mendeley Readers

Abstract

Neural Machine Translation (NMT) currently exhibits biases such as producing translations that are too short and overgenerating frequent words, and shows poor robustness to copy noise in training data or domain shift. Recent work has tied these shortcomings to beam search - the de facto standard inference algorithm in NMT - and Eikema and Aziz (2020) propose to use Minimum Bayes Risk (MBR) decoding on unbiased samples instead. In this paper, we empirically investigate the properties of MBR decoding on a number of previously reported biases and failure cases of beam search. We find that MBR still exhibits a length and token frequency bias, owing to the MT metrics used as utility functions, but that MBR also increases robustness against copy noise in the training data and domain shift.
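To make the idea concrete, here is a minimal sketch of MBR decoding over a pool of model samples. The unigram-F1 utility below is a hypothetical stand-in for a real MT metric (the paper uses metrics such as ChrF/BLEU as utility functions); the hypothesis chosen is the one with the highest expected utility against the rest of the sample pool.

```python
# Minimal MBR decoding sketch. The unigram-F1 utility is a toy proxy
# for a real MT metric used as the utility function.
from collections import Counter


def utility(hyp: str, ref: str) -> float:
    """Unigram F1 between two strings (toy stand-in for an MT metric)."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)


def mbr_decode(samples: list[str]) -> str:
    """Return the sample with the highest expected utility,
    treating the other samples as pseudo-references."""
    return max(samples, key=lambda hyp: sum(utility(hyp, ref) for ref in samples))


samples = ["the cat sat", "the cat sat", "the cat sat down", "a dog ran"]
print(mbr_decode(samples))  # "the cat sat" agrees most with the pool
```

Because the selected output maximizes average similarity to the sample pool under the chosen metric, biases of that metric (e.g. toward certain lengths or frequent tokens) carry over into the decoded translation, which is the effect the paper investigates.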

Citation (APA)

Müller, M., & Sennrich, R. (2021). Understanding the properties of minimum Bayes risk decoding in neural machine translation. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 259–272). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.22
