Measuring Word Semantic Similarity Based on Transferred Vectors


Abstract

Semantic similarity between words has become a popular research problem in the field of natural language processing (NLP). Word embeddings have recently demonstrated progress in measuring word similarity. However, because they rest on the distributional hypothesis, basic embedding methods have inherent drawbacks. One limitation is that word embeddings are usually trained by predicting a target word from its local context, so only limited information is captured. In this paper, we propose a novel transferred vectors approach to computing word semantic similarity. Transferred vectors are obtained via a reasonable combination of the source word and its nearest neighbors at the semantic level. We conduct experiments on popular English and Chinese benchmarks for measuring word similarity. The experimental results demonstrate that our method outperforms the previous state of the art by a large margin.
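The abstract does not specify how the source word and its neighbors are combined, so the sketch below is only one plausible reading: a transferred vector formed by mixing a word's embedding with the centroid of its k nearest neighbors under cosine similarity. The toy vocabulary, the mixing weight `alpha`, and the function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy embedding table -- illustrative values only, not from the paper.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.75, 0.2]),
    "man":   np.array([0.7, 0.3, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def transferred_vector(word, k=2, alpha=0.5):
    """One plausible 'transferred vector': mix the source word's vector
    with the centroid of its k nearest neighbors in embedding space.
    alpha controls how much of the original vector is retained."""
    source = embeddings[word]
    neighbors = sorted(
        (w for w in embeddings if w != word),
        key=lambda w: cosine(source, embeddings[w]),
        reverse=True,
    )[:k]
    centroid = np.mean([embeddings[w] for w in neighbors], axis=0)
    return alpha * source + (1 - alpha) * centroid

# Similarity is then measured between transferred vectors rather than
# the raw embeddings.
sim = cosine(transferred_vector("king"), transferred_vector("queen"))
```

On this toy vocabulary, the transferred vectors for "king" and "queen" remain far more similar to each other than either is to "apple", which is the qualitative behavior the approach targets.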

Citation (APA)

Li, C., Ma, T., Zhou, Y., Cheng, J., & Xu, B. (2017). Measuring Word Semantic Similarity Based on Transferred Vectors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10637 LNCS, pp. 326–335). Springer Verlag. https://doi.org/10.1007/978-3-319-70093-9_34
