Using embedding masks for word categorization

2 citations · 74 Mendeley readers

Abstract

Word embeddings are now widely used for many NLP tasks. They reduce the dimensionality of the vocabulary space but, more importantly, they capture (part of) the meaning of words. The vector space used by the embeddings allows the computation of semantic distances between words, and some word embeddings also support simple vector operations (e.g. addition, subtraction) resembling analogical reasoning. This paper proposes a new operation on word embeddings aimed at capturing categorical information by first learning and then applying an embedding mask for each analyzed category. We conducted a series of experiments on the categorization of words based on their embeddings, comparing several classical approaches with the method introduced in this paper, which uses a different embedding mask learnt for each category.
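The two operations mentioned above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual method: the four-dimensional vectors and the binary category mask are hypothetical toy values, whereas real embeddings are learnt from corpora and have hundreds of dimensions, and the paper's masks are learnt per category rather than hand-set.

```python
import numpy as np

# Toy 4-dimensional "embeddings" (hypothetical values, for illustration only).
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.2, 0.1, 0.3]),
    "man":   np.array([0.1, 0.8, 0.0, 0.2]),
    "woman": np.array([0.1, 0.2, 0.0, 0.2]),
}

def cosine(u, v):
    """Cosine similarity, the usual proxy for semantic distance."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Analogy-style vector arithmetic: king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))

# Category mask: an elementwise weighting that keeps only the dimensions
# relevant to a category (here a hand-set binary mask suppressing dim 1).
mask = np.array([1.0, 0.0, 1.0, 1.0])
masked_sim = cosine(mask * emb["king"], mask * emb["queen"])
```

With the toy values, `best` is `"queen"`, and masking out the dimension on which "king" and "queen" differ drives their masked similarity to 1.0, which is the intuition behind category-specific masks.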

Citation (APA)

Ruseti, S., Rebedea, T., & Trausan-Matu, S. (2016). Using embedding masks for word categorization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 201–205). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-1623
