A unified model for word sense representation and disambiguation

245 Citations · 306 Readers (Mendeley)

Abstract

Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which assigns distinct representations to each word sense. The basic idea is that word sense representation (WSR) and word sense disambiguation (WSD) can benefit from each other: (1) high-quality WSR captures rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD provides reliable disambiguated corpora for learning better sense representations. Experimental results show that our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms state-of-the-art supervised methods on domain-specific WSD, and achieves competitive performance on coarse-grained all-words WSD.
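The core interaction the abstract describes, using sense representations to disambiguate, can be illustrated with a minimal sketch: given per-sense vectors for a target word, choose the sense whose vector is most similar to the averaged context vectors. This is a simplified, hypothetical illustration of the general idea, not the paper's actual model (which learns sense vectors jointly from knowledge bases and disambiguated corpora); all vectors and names below are toy assumptions.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two dense vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_vecs, sense_vecs):
    # Average the context word vectors, then pick the sense whose
    # vector is most similar to that average (a common WSD heuristic
    # once per-sense vectors are available).
    ctx = np.mean(context_vecs, axis=0)
    return int(np.argmax([cosine(ctx, s) for s in sense_vecs]))

# toy 2-d vectors: sense 0 ~ "river bank", sense 1 ~ "financial bank"
senses = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
context = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]  # river-like context
print(disambiguate(context, senses))  # → 0
```

In the paper's framing, the disambiguated occurrences produced by a step like this would then feed back into training, yielding better sense vectors, which in turn improve disambiguation.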

Citation (APA)

Chen, X., Liu, Z., & Sun, M. (2014). A unified model for word sense representation and disambiguation. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1025–1035). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/d14-1110
