Deriving Language Models from Masked Language Models

Abstract

Masked language models (MLMs) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model’s conditionals can even occasionally outperform the original MLM’s conditionals.
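The abstract describes deriving a joint by finding the distribution whose conditionals are closest to the MLM's. As a rough illustration of that idea (not the authors' implementation), the sketch below fits a pairwise joint over a toy vocabulary by minimizing the KL divergence between the joint's two conditionals and a pair of placeholder "MLM" conditionals; the vocabulary size, the random conditionals, the forward-KL objective, and the optimizer settings are all assumptions made for illustration.

```python
# Minimal sketch (assumptions throughout): fit a joint q(x1, x2) whose conditionals
# are close, in KL, to given conditionals p(x1 | x2) and p(x2 | x1) over a toy vocabulary.
import torch

V = 5  # toy vocabulary size (assumption)
torch.manual_seed(0)

# Placeholder "MLM" conditionals (random, for illustration only):
# p1[j, i] = p(x1 = i | x2 = j), p2[i, j] = p(x2 = j | x1 = i)
p1 = torch.softmax(torch.randn(V, V), dim=-1)
p2 = torch.softmax(torch.randn(V, V), dim=-1)

# Parameterize the joint with unconstrained logits over all V * V token pairs
logits = torch.zeros(V, V, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(2000):
    q = torch.softmax(logits.view(-1), dim=0).view(V, V)   # joint q(x1, x2)
    q1 = q / q.sum(dim=0, keepdim=True)                     # q(x1 | x2), columns sum to 1
    q2 = q / q.sum(dim=1, keepdim=True)                     # q(x2 | x1), rows sum to 1
    # Sum of KL(q(.|context) || p(.|context)) over both directions and all contexts
    loss = (q1.t() * (q1.t().log() - p1.log())).sum() \
         + (q2 * (q2.log() - p2.log())).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"conditional mismatch after fitting: {loss.item():.6f}")
```

In general no joint reproduces both conditionals exactly (the MLM's conditionals need not be consistent with any single joint), so the fitted loss typically remains positive; the sketch simply finds the best compromise under this objective.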

Citation (APA)

Hennigen, L. T., & Kim, Y. (2023). Deriving Language Models from Masked Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 1149–1159). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.99
