Learning sentiment-specific word embedding via global sentiment representation


Abstract

Context-based word embedding learning approaches can model rich semantic and syntactic information. However, they are problematic for sentiment analysis because words with similar contexts but opposite sentiment polarities, such as good and bad, are mapped to nearby vectors in the embedding space. Recently, several sentiment embedding learning methods have been proposed, but most are designed for sentence-level texts; directly applying them to document-level texts often yields unsatisfactory results. To address this issue, we present a sentiment-specific word embedding learning architecture that exploits local context information as well as a global sentiment representation, and that is applicable to both sentence-level and document-level texts. We take the global sentiment representation to be a simple average of the word embeddings in the text, and use a corruption strategy as a sentiment-dependent regularization. Extensive experiments on several benchmark datasets demonstrate that the proposed architecture outperforms state-of-the-art methods for sentiment classification.
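The abstract pins down two concrete ingredients: the global sentiment representation is a plain average of the text's word embeddings, and a corruption strategy acts as a sentiment-dependent regularizer. Below is a minimal sketch of that global branch only, assuming a Doc2VecC-style corruption (drop each word with probability q, rescale the survivors so the corrupted average is unbiased) and a logistic sentiment objective; the names (sentiment_step, q), sizes, and hyperparameters are illustrative, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the real model is trained on a full corpus.
V, D = 5000, 100                          # vocabulary size, embedding dim
E = rng.normal(scale=0.1, size=(V, D))    # word embedding matrix (learned)
w = np.zeros(D)                           # sentiment classifier weights

def sentiment_step(E, w, word_ids, label, q=0.5, lr=0.05):
    """One SGD step on the global sentiment objective (logistic loss).

    Corruption: each word is dropped with probability q; the sum of the
    survivors is rescaled by 1/((1-q)*n) so that, in expectation, it
    equals the clean average of all n word embeddings. This concrete
    (Doc2VecC-style) form of the corruption is an assumption; the paper
    describes it only as "a corruption strategy as a sentiment-dependent
    regularization"."""
    ids = np.asarray(word_ids)
    keep = rng.random(ids.size) >= q
    if not keep.any():                    # always keep at least one word
        keep[rng.integers(ids.size)] = True
    kept = ids[keep]
    g = E[kept].sum(axis=0) / ((1.0 - q) * ids.size)  # global representation

    p = 1.0 / (1.0 + np.exp(-w @ g))      # P(positive sentiment | text)
    err = p - label                       # d(log-loss)/d(logit)

    # Gradient reaches the classifier and every surviving word's embedding,
    # pulling same-polarity words toward similar regions of the space.
    grad_w = err * g
    grad_e = err * w / ((1.0 - q) * ids.size)
    w -= lr * grad_w
    np.subtract.at(E, kept, lr * grad_e)  # handles repeated word ids
    return -(label * np.log(p + 1e-12) + (1 - label) * np.log(1 - p + 1e-12))

# Toy usage: one "document" as a list of word ids with a positive label.
doc = [12, 7, 301, 42, 7, 999]
loss = sentiment_step(E, w, doc, label=1)

In the full architecture this global sentiment objective would be trained jointly with a local context objective (e.g., a CBOW-style prediction task), so the embeddings capture both semantics and sentiment; only the global branch is sketched here.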

Citation (APA)

Fu, P., Lin, Z., Yuan, F., Wang, W., & Meng, D. (2018). Learning sentiment-specific word embedding via global sentiment representation. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 4808–4815). AAAI press. https://doi.org/10.1609/aaai.v32i1.11916
