Learning Topic-Sensitive Word Representations

12 citations · 130 Mendeley readers

Abstract

Distributed word representations are widely used for modeling words in NLP tasks. Most existing models generate a single representation per word and do not distinguish between the different meanings of a word. We present two approaches to learning multiple topic-sensitive representations per word using the Hierarchical Dirichlet Process. We observe that by modeling topics and integrating the topic distribution of each document, we obtain representations that can distinguish between different meanings of a given word. Our models yield statistically significant improvements on the lexical substitution task, indicating that commonly used single word representations, even when combined with contextual information, are insufficient for this task.
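To make the general idea concrete, here is a minimal sketch of topic-sensitive embeddings: an HDP topic model infers per-document topic distributions, each token is tagged with its document's dominant topic, and a skip-gram model then learns one vector per (word, topic) pair. This is not the authors' exact formulation; the toy corpus, the hard (most-probable) topic assignment, and the token-tagging scheme are simplifying assumptions, and gensim is assumed to be installed.

# Minimal sketch of topic-sensitive word vectors via HDP + skip-gram.
# NOT the paper's exact method; a hard-assignment simplification.
from gensim.corpora import Dictionary
from gensim.models import HdpModel, Word2Vec

# Toy corpus: "bank" appears in both river and finance contexts.
docs = [
    ["river", "bank", "water", "flood"],
    ["bank", "loan", "interest", "deposit"],
    ["water", "river", "fishing", "bank"],
    ["deposit", "bank", "account", "interest"],
]

dictionary = Dictionary(docs)
bows = [dictionary.doc2bow(doc) for doc in docs]

# HDP infers the number of topics nonparametrically.
hdp = HdpModel(bows, id2word=dictionary, random_state=0)

def dominant_topic(bow):
    """Most probable topic for a document under the fitted HDP."""
    dist = hdp[bow]  # list of (topic_id, probability) pairs
    return max(dist, key=lambda tp: tp[1])[0] if dist else 0

# Tag every token with its document's dominant topic, e.g. "bank_0"
# vs. "bank_3", so skip-gram learns one vector per (word, topic).
tagged_docs = [
    [f"{w}_{dominant_topic(bow)}" for w in doc]
    for doc, bow in zip(docs, bows)
]

w2v = Word2Vec(tagged_docs, vector_size=50, window=2,
               min_count=1, sg=1, epochs=50)

# Each topic-tagged form of "bank" now has its own embedding.
print([t for t in w2v.wv.index_to_key if t.startswith("bank_")])

The hard assignment above is the crudest possible integration of the document's topic distribution; a closer approximation of the abstract's description would weight each word's contexts by the full posterior over topics rather than collapsing to the single most probable one.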

Cite (APA)

Fadaee, M., Bisazza, A., & Monz, C. (2017). Learning topic-Sensitive word representations. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 2, pp. 441–447). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/P17-2070
