Conditional masked language model (CMLM) training has proven successful for non-autoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard “mask-predict” algorithm, and provide analyses of its behavior on machine translation tasks.
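The inference loop the abstract contrasts, standard mask-predict versus a confidence threshold, can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: `predict_fn`, the linear re-masking schedule, and the fixed-threshold re-masking rule are all assumptions made for the example.

```python
MASK = "<mask>"

def mask_predict(predict_fn, length, iterations=4, threshold=None):
    """Iteratively refine a fully masked target sequence.

    predict_fn(tokens) -> list of (token, probability), one per position
    (a stand-in for a trained CMLM). With threshold=None this follows the
    standard mask-predict schedule; with a threshold it re-masks every
    token whose confidence falls below that value.
    """
    tokens = [MASK] * length
    probs = [0.0] * length
    for t in range(iterations):
        # Predict all positions conditioned on the current partial
        # sequence; only still-masked positions are (re)filled.
        preds = predict_fn(tokens)
        for i, (tok, p) in enumerate(preds):
            if tokens[i] == MASK:
                tokens[i], probs[i] = tok, p
        if t == iterations - 1:
            break  # final pass: keep everything, return a full sequence
        if threshold is not None:
            # Thresholding strategy: re-mask low-confidence tokens.
            remask = [i for i, p in enumerate(probs) if p < threshold]
        else:
            # Mask-predict: re-mask the n least confident tokens,
            # with n decaying linearly over the iterations.
            n = int(length * (1 - (t + 1) / iterations))
            remask = sorted(range(length), key=lambda i: probs[i])[:n]
        for i in remask:
            tokens[i], probs[i] = MASK, 0.0
    return tokens
```

Note that under thresholding the number of re-masked tokens varies per iteration (it is driven by the model's confidences), whereas mask-predict always re-masks a fixed, linearly decaying count.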
CITATION STYLE
Kreutzer, J., Foster, G., & Cherry, C. (2020). Inference strategies for machine translation with conditional masking. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 5774–5782). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.465