Integrating semantic knowledge into lexical embeddings based on information content measurement

Citations: 4 · Mendeley readers: 78

Abstract

Distributional word representations are widely used in NLP tasks. These representations are based on the assumption that words appearing in similar contexts tend to have similar meanings. To improve the quality of context-based embeddings, many studies have explored how to make full use of existing lexical resources. In this paper, we argue that when incorporating prior knowledge into context-based embeddings, words with different numbers of occurrences should be treated differently. We therefore propose to use a measurement of information content to control the degree to which prior knowledge is applied to context-based embeddings: different words receive different learning rates when their embeddings are adjusted. Our results demonstrate that the resulting embeddings achieve significant improvements on two tasks: Word Similarity and Analogical Reasoning.
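The abstract only sketches the mechanism, so below is a minimal illustration of the idea: a retrofitting-style update in which each word's per-word learning rate is scaled by its information content, IC(w) = -log p(w), estimated from corpus frequency. Everything here is an assumption for illustration, not the paper's actual method: the function names (retrofit_with_ic, information_content), the normalization by the maximum IC, and the interpolation update are all hypothetical, and the paper's exact weighting scheme may differ.

import math
import numpy as np

def information_content(freq, total):
    # IC(w) = -log p(w): rarer words carry more information.
    return -math.log(freq / total)

def retrofit_with_ic(embeddings, lexicon, word_freq, total_count,
                     iterations=10, base_lr=1.0):
    # Retrofitting-style update: pull each word's vector toward the mean
    # of its lexicon neighbors, with a per-word learning rate scaled by
    # normalized information content (a hypothetical weighting scheme).
    new_emb = {w: v.copy() for w, v in embeddings.items()}
    max_ic = max(information_content(f, total_count)
                 for f in word_freq.values())
    for _ in range(iterations):
        for w, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in new_emb]
            if w not in new_emb or not nbrs:
                continue
            ic = information_content(word_freq.get(w, 1), total_count)
            lr = min(1.0, base_lr * ic / max_ic)  # clamp to a valid mixing weight
            neighbor_mean = np.mean([new_emb[n] for n in nbrs], axis=0)
            # Interpolate between the context-based vector and the prior
            # knowledge; high-IC (rare) words move further toward the lexicon.
            new_emb[w] = (1.0 - lr) * embeddings[w] + lr * neighbor_mean
    return new_emb

Under this sketch, with something like WordNet synonyms as the lexicon, a rare word receives a larger learning rate than a frequent one and is pulled more strongly toward its lexical neighbors, reflecting the intuition that frequent words already have reliable context-based embeddings; whether the paper weights in exactly this direction is not stated in the abstract.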

Citation (APA)

Wang, H. Y., & Ma, W. Y. (2017). Integrating semantic knowledge into lexical embeddings based on information content measurement. In 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference (Vol. 2, pp. 509–515). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/e17-2082
