Text in many domains contains a significant number of named entities. Predicting entity names is often challenging for a language model because they appear less frequently in the training corpus. In this paper, we propose a novel and effective approach to building a discriminative language model that learns entity names by leveraging their entity type information. We also introduce two benchmark datasets, based on recipes and Java code, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2% better perplexity on recipe generation and 22.06% better perplexity on code generation than state-of-the-art language models.
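The core idea of conditioning on entity types can be illustrated with a minimal two-stage generation sketch: a language model first emits ordinary words plus entity-type placeholders, and a per-type entity model then rewrites each placeholder into a concrete name. The template, type names, vocabularies, and probabilities below are toy assumptions for illustration only, not the paper's actual model, data, or parameters.

```python
import random

# Stage 1 output (assumed, fixed here for illustration): a token sequence
# where entity slots are abstract TYPE placeholders, not surface names.
TYPE_TEMPLATE = ["add", "<INGREDIENT>", "to", "the", "<CONTAINER>"]

# Stage 2: per-type entity vocabularies with toy probabilities
# (hypothetical values chosen only to make the sketch runnable).
ENTITY_MODELS = {
    "<INGREDIENT>": {"flour": 0.5, "sugar": 0.3, "salt": 0.2},
    "<CONTAINER>": {"bowl": 0.7, "pan": 0.3},
}

def sample_entity(type_token, rng):
    """Sample a surface name for one entity-type placeholder."""
    names, probs = zip(*ENTITY_MODELS[type_token].items())
    return rng.choices(names, weights=probs, k=1)[0]

def realize(template, seed=0):
    """Replace each type placeholder with a sampled entity name."""
    rng = random.Random(seed)
    return [
        sample_entity(tok, rng) if tok in ENTITY_MODELS else tok
        for tok in template
    ]

print(" ".join(realize(TYPE_TEMPLATE)))
```

Because the type vocabulary is far smaller than the full set of entity names, the stage-1 model sees each type token much more often than any individual name, which is the intuition behind type-conditioned entity prediction.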
Parvez, M. R., Ray, B., Chakraborty, S., & Chang, K. W. (2018). Building language models for text with named entities. In ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 1, pp. 2373–2383). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p18-1221