Language modeling with sparse product of sememe experts

Abstract

Most language modeling methods rely on large-scale data to statistically learn the sequential patterns of words. In this paper, we argue that words are atomic language units but not necessarily atomic semantic units. Inspired by HowNet, we use sememes, the minimum semantic units in human languages, to represent the implicit semantics behind words, and propose the Sememe-Driven Language Model (SDLM). More specifically, to predict the next word, SDLM first estimates the sememe distribution given the textual context. It then regards each sememe as a distinct semantic expert, and these experts jointly identify the most probable senses and the corresponding word. In this way, SDLM moves language models beyond word-level manipulation to fine-grained sememe-level semantics, and offers more powerful tools for fine-tuning language models and improving their interpretability and robustness. Experiments on language modeling and the downstream task of headline generation demonstrate the effectiveness of SDLM. Source code and data used in the experiments are available at https://github.com/thunlp/SDLM-pytorch.
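
The abstract describes a two-step prediction pipeline: a sememe predictor estimates how relevant each sememe is to the next word, and each sememe then acts as an expert whose (sparse) scores over candidate words are combined into the final distribution. The PyTorch sketch below illustrates that idea only; the class name, shapes, the diagonal bilinear scoring, and the sememe_word_mask argument are assumptions made for this example, not the paper's actual parameterization (see the linked SDLM-pytorch repository for the real implementation).

    # Minimal, illustrative sketch of a sememe-expert output layer
    # (assumed names and shapes; not the paper's implementation).
    import torch
    import torch.nn as nn

    class SememeProductOfExperts(nn.Module):
        def __init__(self, hidden_size, num_sememes, vocab_size, sememe_word_mask):
            super().__init__()
            # Step 1: estimate how relevant each sememe is to the next word.
            self.sememe_predictor = nn.Linear(hidden_size, num_sememes)
            # Step 2: one "expert" vector per sememe, used to score words.
            self.experts = nn.Parameter(0.01 * torch.randn(num_sememes, hidden_size))
            self.word_emb = nn.Parameter(0.01 * torch.randn(vocab_size, hidden_size))
            # HowNet-style binary mask (num_sememes x vocab_size): expert k only
            # scores words annotated with sememe k, keeping the product sparse.
            self.register_buffer("mask", sememe_word_mask)

        def forward(self, context):
            # context: (batch, hidden_size), e.g. the hidden state of an RNN LM.
            sememe_prob = torch.sigmoid(self.sememe_predictor(context))   # (B, K)
            # Expert k's score for word v: sum_h context_h * expert_kh * emb_vh
            scores = torch.einsum("bh,kh,vh->bkv", context, self.experts, self.word_emb)
            scores = scores * self.mask                                   # sparsity
            # Weight each expert by its predicted relevance and sum in the log
            # domain, i.e. a weighted product of (unnormalized) experts.
            logits = (sememe_prob.unsqueeze(-1) * scores).sum(dim=1)      # (B, V)
            return torch.log_softmax(logits, dim=-1)

In this toy version, summing the mask-weighted expert scores before the softmax plays the role of the product of experts, and the HowNet-derived mask supplies the sparsity referred to in the title; the actual SDLM additionally scores at the sense level before aggregating senses into word probabilities.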

Cite (APA)

Gu, Y., Yan, J., Zhu, H., Liu, Z., Xie, R., Sun, M., … Lin, L. (2018). Language modeling with sparse product of sememe experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 4642–4651). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1493
