Exploiting synonymy and hypernymy to learn efficient meaning representations

Abstract

Word representation learning methods such as word2vec usually associate one vector with each word; however, to address polysemy, it is important to produce distributed representations for each meaning rather than for each surface form of a word. In this paper, we propose an extension of the existing AutoExtend model, an auto-encoder architecture that utilises synonymy relations to learn sense representations. We introduce a new layer in the architecture to exploit the hypernymy relations predominantly present in existing ontologies. We evaluate the quality of the obtained vectors on word-sense disambiguation tasks and show that using the hypernymy relation improves accuracy by 1.2% on the Senseval-3 and 0.8% on the Semeval-2007 English lexical sample tasks, compared to the original model.
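To make the idea concrete, the following is a minimal illustrative sketch of the kind of constraints involved: an AutoExtend-style condition that a word's vector is the sum of its sense vectors, plus a hypothetical hypernymy term pulling each sense vector toward its hypernym's vector. All names, toy data, the penalty weight, and the simple gradient loop are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Toy pre-trained word vector (e.g. from word2vec): "bank" is polysemous.
word_vecs = {"bank": rng.normal(size=dim)}

# Senses of each word, and a toy hypernym vector for each sense.
# (In the paper's setting these relations come from an ontology.)
senses = {"bank": ["bank.financial", "bank.river"]}
hypernyms = {
    "bank.financial": rng.normal(size=dim),  # stand-in for e.g. "institution"
    "bank.river": rng.normal(size=dim),      # stand-in for e.g. "slope"
}

# Fit sense vectors so that (1) the senses of a word sum to the word
# vector (the AutoExtend-style constraint) and (2) each sense vector
# stays close to its hypernym's vector (the added hypernymy term).
sense_vecs = {s: rng.normal(size=dim) for s in hypernyms}
lam = 0.01  # weight of the hypernymy penalty (assumed value)
lr = 0.1    # gradient step size

for _ in range(500):
    for word, word_senses in senses.items():
        # Gradient of 0.5*||sum_s v_s - w||^2 + 0.5*lam*sum_s ||v_s - h_s||^2
        residual = sum(sense_vecs[s] for s in word_senses) - word_vecs[word]
        for s in word_senses:
            grad = residual + lam * (sense_vecs[s] - hypernyms[s])
            sense_vecs[s] -= lr * grad

# The senses of "bank" now approximately sum to its word vector.
total = sum(sense_vecs[s] for s in senses["bank"])
```

The hypernymy weight trades off reconstruction of the word vector against proximity of each sense to its hypernym; with a small weight, the sum constraint dominates and the senses still differentiate through their distinct hypernyms.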

Citation (APA)

Perianin, T., Senuma, H., & Aizawa, A. (2016). Exploiting synonymy and hypernymy to learn efficient meaning representations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10075 LNCS, pp. 137–143). Springer Verlag. https://doi.org/10.1007/978-3-319-49304-6_17
