SENSEMBED: Learning sense embeddings for word and relational similarity

240 citations · 396 Mendeley readers

Abstract

Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks, including semantic similarity measurement. However, notwithstanding their success, word embeddings are by their very nature unable to capture polysemy, as different meanings of a word are conflated into a single representation. In addition, their learning process usually relies solely on massive corpora, preventing them from taking advantage of structured knowledge. We address both issues by proposing a multifaceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement. We evaluate our approach on word similarity and relational similarity frameworks, reporting state-of-the-art performance on multiple datasets.
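To make the sense-level idea concrete, below is a minimal sketch of one common strategy for measuring word similarity with sense embeddings: score a word pair by the maximum cosine similarity over all pairs of their senses, so each comparison selects the most relevant meaning. The sense labels and vectors here are toy assumptions for illustration, not the paper's actual BabelNet-derived embeddings or its exact scoring scheme.

```python
import math
from itertools import product

# Hypothetical sense inventory with toy 3-dimensional vectors; real sense
# embeddings would be learned from a (sense-annotated) corpus.
SENSE_VECTORS = {
    "bank": {
        "bank_finance": [0.9, 0.1, 0.0],   # financial-institution sense
        "bank_river":   [0.0, 0.2, 0.9],   # riverside sense
    },
    "money": {
        "money_currency": [0.8, 0.3, 0.1],
    },
    "river": {
        "river_stream": [0.1, 0.1, 0.95],
    },
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def word_similarity(w1, w2):
    """Closest-senses strategy: the similarity of two words is the maximum
    cosine similarity over all pairs of their senses, so polysemy does not
    average distinct meanings into one score."""
    return max(
        cosine(s1, s2)
        for s1, s2 in product(SENSE_VECTORS[w1].values(),
                              SENSE_VECTORS[w2].values())
    )
```

With these toy vectors, "bank"/"money" is scored through the financial sense of "bank", while "bank"/"river" is scored through its riverside sense; a single word-level vector for "bank" would be forced to compromise between the two.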

Citation (APA)

Iacobacci, I., Pilehvar, M. T., & Navigli, R. (2015). SENSEMBED: Learning sense embeddings for word and relational similarity. In ACL-IJCNLP 2015 - 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 95–105). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p15-1010
