Sparsifying word representations for deep unordered sentence modeling

Citations: 1 · Mendeley readers: 69

Abstract

Sparsity often leads to efficient and interpretable representations of data. In this paper, we introduce an architecture that infers the appropriate sparsity pattern for word embeddings while learning sentence composition in a deep network. The proposed approach produces competitive results on sentiment and topic classification tasks with a high degree of sparsity, and it computes sparse word representations more cheaply than existing approaches. The imposed sparsity is controlled directly by the task at hand and improves interpretability.
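The abstract describes composing sentences from word embeddings in a deep unordered model while driving the embeddings toward sparsity. As a rough illustration (not the authors' implementation), the sketch below combines a deep-averaging-style forward pass with soft-thresholding, the proximal operator of the L1 penalty commonly used to induce sparsity; all sizes and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny setup: vocabulary of 10 words, 8-dim embeddings,
# one dense composition layer of width 4.
V, D, H = 10, 8, 4
embeddings = rng.normal(size=(V, D))
W = rng.normal(size=(D, H))

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrinks entries toward zero
    and sets those with magnitude below lam exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward(word_ids, emb):
    """Deep unordered composition: average the word vectors of a
    sentence, then apply a nonlinear dense layer."""
    avg = emb[word_ids].mean(axis=0)
    return np.tanh(avg @ W)

# Sparsify the embedding table (a stand-in for the task-driven
# sparsity pattern learned in the paper).
sparse_emb = soft_threshold(embeddings, 0.8)
sparsity = float((sparse_emb == 0).mean())

# Sentence representation for a toy sentence of word ids [1, 3, 5].
h = forward([1, 3, 5], sparse_emb)
```

In the paper this thresholding is learned jointly with the classifier rather than applied once with a fixed `lam`; the snippet only shows why soft-thresholding yields exact zeros, which is what makes the resulting representations cheap to store and easier to inspect.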

Cite

APA

Sattigeri, P., & Thiagarajan, J. J. (2016). Sparsifying word representations for deep unordered sentence modeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 206–214). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-1624
