Feature projection for improved text classification

Abstract

In classification, there are usually some good features that are indicative of class labels. For example, in sentiment classification, words like "good" and "nice" indicate the positive sentiment, while words like "bad" and "terrible" indicate the negative sentiment. However, there are also many common features (e.g., words such as "voice" and "screen") that are shared by both sentiment classes and are therefore not discriminative for classification. Although deep learning has made significant progress in generating discriminative features through its powerful representation learning, we believe there is still room for improvement. In this paper, we propose a novel angle to further improve this representation learning: feature projection. The method projects existing features into the orthogonal space of the common features. The resulting projection is thus perpendicular to the common features and more discriminative for classification. We apply this method to improve CNN-, RNN-, Transformer-, and BERT-based text classification and obtain markedly better results.
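The core operation the abstract describes is the orthogonal projection of a feature vector away from a "common feature" direction. The following is a minimal sketch of that operation only, not the paper's full FP-Net architecture; the names `features` and `common_features` are illustrative assumptions (the former standing in for an encoder output, the latter for a class-indiscriminative vector produced by an auxiliary network).

```python
import torch


def orthogonal_projection(features: torch.Tensor,
                          common_features: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    """Project `features` onto the orthogonal complement of `common_features`.

    Both tensors are assumed to have shape (batch, dim). The result is
    perpendicular to `common_features`, so the class-indiscriminative
    direction is removed from the representation.
    """
    # Component of `features` along `common_features`:
    #   proj_c(f) = (<f, c> / <c, c>) * c
    dot = (features * common_features).sum(dim=-1, keepdim=True)
    norm_sq = (common_features * common_features).sum(dim=-1, keepdim=True)
    parallel = dot / (norm_sq + eps) * common_features
    # Subtracting the parallel component leaves the orthogonal residue,
    # i.e., the more discriminative representation used for classification.
    return features - parallel


if __name__ == "__main__":
    f = torch.randn(4, 128)   # encoder output (assumed shape)
    c = torch.randn(4, 128)   # common-feature vector (assumed shape)
    f_perp = orthogonal_projection(f, c)
    # Sanity check: the projected features are numerically orthogonal to c.
    print((f_perp * c).sum(dim=-1))
```

In the paper's setting, such a projected representation would then be fed to the classifier in place of the raw encoder output; how the common-feature vector is learned is a separate component not shown here.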

Citation (APA)

Qin, Q., Hu, W., & Liu, B. (2020). Feature projection for improved text classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 8161–8171). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.726
