On exposure bias, hallucination and domain shift in neural machine translation

Citations: 86 · Readers (Mendeley): 191

Abstract

The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this. However, the practical impact of exposure bias is under debate. In this paper, we link exposure bias to another well-known problem in NMT, namely the tendency to generate hallucinations under domain shift. In experiments on three datasets with multiple test domains, we show that exposure bias is partially to blame for hallucinations, and that training with Minimum Risk Training, which avoids exposure bias, can mitigate this. Our analysis explains why exposure bias is more problematic under domain shift, and also links exposure bias to the beam search problem, i.e. performance deterioration with increasing beam size. Our results provide a new justification for methods that reduce exposure bias: even if they do not increase performance on in-domain test sets, they can increase model robustness to domain shift.
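
For context, Minimum Risk Training (MRT) replaces the token-level maximum-likelihood objective with a sequence-level expected-risk objective computed over candidate translations sampled from the model itself, which is why it avoids exposure bias. The following is a minimal sketch of the standard MRT formulation from the NMT literature (Shen et al., 2016); the notation and any hyperparameters shown are assumptions based on that formulation, not details given on this page:

    % Expected risk over a sampled candidate set S(x) for each training pair (x, y)
    \mathcal{R}(\theta) = \sum_{(x, y) \in D} \; \sum_{\tilde{y} \in \mathcal{S}(x)} Q(\tilde{y} \mid x; \theta, \alpha) \, \Delta(\tilde{y}, y)

    % Renormalized, sharpened model distribution over the sampled candidates
    Q(\tilde{y} \mid x; \theta, \alpha) = \frac{P(\tilde{y} \mid x; \theta)^{\alpha}}{\sum_{y' \in \mathcal{S}(x)} P(y' \mid x; \theta)^{\alpha}}

Here S(x) is a set of translations sampled from the model, Δ(ỹ, y) is a sequence-level cost such as 1 − sentence-BLEU, and α is a sharpness hyperparameter. Because the loss is computed on the model's own samples rather than on gold prefixes, training conditions match inference more closely, which is the property the abstract links to reduced hallucination under domain shift.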

Cite (APA)

Wang, C., & Sennrich, R. (2020). On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3544–3552). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.326
