Gradual learning of matrix-space models of language for sentiment analysis


Abstract

Learning word representations that capture the semantics and compositionality of language has received much research interest in natural language processing. Beyond the popular vector space models, matrix representations for words have been proposed, since matrix multiplication can then serve as a natural composition operation. In this work, we investigate the problem of learning matrix representations of words. We present a learning approach for compositional matrix-space models for the task of sentiment analysis. We show that our approach, which learns the matrices gradually in two steps, outperforms other approaches and a gradient-descent baseline in terms of quality and computational cost.
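To illustrate the idea of a compositional matrix-space model described above, the following toy sketch (all matrices, read-out vectors, and values are illustrative assumptions, not the learned parameters from the paper) represents each word as a small square matrix, composes a phrase by ordered matrix multiplication, and reads out a scalar sentiment score:

```python
import numpy as np

# Toy compositional matrix-space model (CMSM) sketch:
# each word is a d x d matrix; a phrase is the ordered product
# W_1 W_2 ... W_n of its word matrices, and a scalar sentiment
# score is read out as u^T (W_1 ... W_n) v with fixed vectors u, v.
d = 2

# Hypothetical hand-crafted word matrices for illustration.
# "not" flips the sign of the score; "very" scales it.
lexicon = {
    "not":  np.array([[1.0, 0.0], [0.0, -1.0]]),
    "very": np.array([[1.0, 0.0], [0.0,  1.5]]),
    "good": np.array([[1.0, 0.0], [0.8,  1.0]]),
}

u = np.array([0.0, 1.0])  # assumed fixed read-out vectors
v = np.array([1.0, 0.0])


def score(phrase: str) -> float:
    """Sentiment score of a phrase via matrix-product composition."""
    m = np.eye(d)
    for word in phrase.split():
        m = m @ lexicon[word]
    return float(u @ m @ v)


print(score("good"))           # positive
print(score("very good"))      # amplified by "very"
print(score("not good"))       # sign flipped by "not"
print(score("not very good"))  # order-sensitive composition
```

Because matrix multiplication is non-commutative, word order matters here, which is the main appeal of matrix-space models over bag-of-words vector averaging.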

Citation (APA)

Asaadi, S., & Rudolph, S. (2017). Gradual learning of matrix-space models of language for sentiment analysis. In Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP 2017 at the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 (pp. 178–185). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-2621
