Training hybrid language models by marginalizing over segmentations

Abstract

In this paper, we study the problem of hybrid language modeling, that is, using models that can predict both characters and larger units such as character n-grams or words. With such models, multiple potential segmentations usually exist for a given string, for example one using words and one using characters only. Thus, the probability of a string is the sum of the probabilities of all its possible segmentations. Here, we show how to marginalize over the segmentations efficiently, in order to compute the true probability of a sequence. We apply our technique to three datasets, comprising seven languages, and show improvements over a strong character-level language model.
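The abstract describes computing a string's probability by summing over all of its segmentations. Below is a minimal sketch of how such a marginalization can be carried out with a forward-style dynamic program over segment end positions; the `segment_log_prob` callback, the `max_segment_len` cap, and the toy vocabulary are illustrative assumptions, not the paper's actual model or training procedure.

```python
import math
from typing import Callable, List


def logsumexp(xs: List[float]) -> float:
    """Numerically stable log(sum(exp(x) for x in xs)); returns -inf if all terms are -inf."""
    m = max(xs)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(x - m) for x in xs))


def sequence_log_prob(
    string: str,
    segment_log_prob: Callable[[str, str], float],
    max_segment_len: int = 4,
) -> float:
    """Log-probability of `string`, marginalized over every segmentation into
    segments of length 1..max_segment_len.

    Forward recursion:
        alpha[t] = logsumexp over k of ( alpha[t - k] + log p(string[t-k:t] | string[:t-k]) )
    so alpha[len(string)] is the marginal log-probability of the whole string.
    """
    n = len(string)
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0  # the empty prefix has probability 1
    for t in range(1, n + 1):
        terms = []
        for k in range(1, min(max_segment_len, t) + 1):
            prefix, segment = string[: t - k], string[t - k : t]
            terms.append(alpha[t - k] + segment_log_prob(prefix, segment))
        alpha[t] = logsumexp(terms)
    return alpha[n]


def toy_segment_log_prob(prefix: str, segment: str) -> float:
    """Hypothetical stand-in for a learned p(segment | prefix): uniform over a tiny vocabulary."""
    vocab = {"a", "b", "ab", "ba"}
    return math.log(1.0 / len(vocab)) if segment in vocab else float("-inf")


if __name__ == "__main__":
    # "abba" can be segmented as a|b|b|a, ab|b|a, a|b|ba, ab|ba, ...
    print(sequence_log_prob("abba", toy_segment_log_prob, max_segment_len=2))
```

Because each position only looks back at most `max_segment_len` steps, the recursion costs O(n x max_segment_len) segment evaluations, which is what makes exact marginalization tractable during training.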

Citation (APA)

Grave, E., Sukhbaatar, S., Bojanowski, P., & Joulin, A. (2019). Training hybrid language models by marginalizing over segmentations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1477–1482). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1143
