Examination and extension of strategies for improving personalized language modeling via interpolation

Abstract

In this paper, we detail novel strategies for interpolating personalized language models, together with methods for handling out-of-vocabulary (OOV) tokens, to improve personalized language modeling. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By tuning the back-off-to-uniform OOV penalty and the interpolation coefficient, we observe that over 80% of users receive a lift in perplexity, with an average perplexity lift of 5.2% per user. In doing this research we extend previous work on building natural language interfaces (NLIs) and improve the robustness of metrics for downstream tasks.
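
The interpolation described above amounts to a per-token mixture P(w | h) = λ·P_personal(w | h) + (1 - λ)·P_global(w | h), where a token unknown to a component model falls back to a uniform penalty. The following is a minimal illustrative Python sketch, not the authors' implementation: the model objects, their prob() interface, and the default values of lam and vocab_size are assumptions made for the example.

    import math

    def interpolated_log_prob(token, history, global_lm, personal_lm,
                              lam=0.2, vocab_size=50_000):
        """Linearly interpolate a user-personalized n-gram model with a
        global LSTM authoring model, backing off to a uniform OOV penalty.

        global_lm and personal_lm are hypothetical objects exposing
        prob(token, history); a return value of None (or 0.0) marks the
        token as out of vocabulary for that model.
        """
        def prob_or_uniform(model):
            p = model.prob(token, history)
            return p if p else 1.0 / vocab_size  # back-off-to-uniform OOV penalty

        p_personal = prob_or_uniform(personal_lm)
        p_global = prob_or_uniform(global_lm)
        # lam is the per-user interpolation coefficient tuned on held-out text
        return math.log(lam * p_personal + (1.0 - lam) * p_global)

Per-user perplexity is then the exponential of the negative mean of these log-probabilities over held-out text; the interpolation coefficient and OOV penalty are the quantities optimized to produce the reported lift.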

Cite (APA)

Shao, L., Mantravadi, S., Manzini, T., Buendia, A., Knoertzer, M., Srinivasan, S., & Quirk, C. (2020). Examination and extension of strategies for improving personalized language modeling via interpolation. In Proceedings of the First Workshop on Natural Language Interfaces (pp. 20–26). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.nli-1.3
