Text classification with negative supervision

Abstract

Advanced pre-trained models for text representation have achieved state-of-the-art performance on various text classification tasks. However, the discrepancy between the semantic similarity of texts and the labelling standard affects classifiers, i.e., it degrades performance in cases where classifiers must assign different labels to semantically similar texts. To address this problem, we propose a simple multitask learning model that uses negative supervision. Specifically, our model encourages texts with different labels to have distinct representations. Comprehensive experiments show that our model outperforms the state-of-the-art pre-trained model on both single- and multi-label classification, sentence and document classification, and classification in three different languages.
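To make the idea concrete, the following is a minimal sketch of one way such negative supervision could be implemented in PyTorch: a standard cross-entropy classification loss is combined with an auxiliary term that penalizes cosine similarity between encoder representations of differently-labeled texts in a batch. This is an illustrative formulation based only on the abstract, not the authors' exact method; the hinge at zero, the in-batch pairing, and the weight alpha are all assumptions.

import torch
import torch.nn.functional as F

def negative_supervision_loss(reps: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Penalize cosine similarity between representations of texts
    with different labels (hypothetical in-batch formulation)."""
    # Pairwise cosine similarities over L2-normalized representations.
    normed = F.normalize(reps, dim=-1)
    sim = normed @ normed.t()                              # (batch, batch)
    # Mask selecting pairs whose labels differ (self-pairs are excluded
    # automatically because a text always shares its own label).
    diff = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    # Hinge at zero: only positive similarity between differently-labeled
    # texts is penalized, pushing their representations apart.
    penalty = torch.clamp(sim, min=0.0) * diff
    return penalty.sum() / diff.sum().clamp(min=1.0)

def multitask_loss(logits: torch.Tensor, reps: torch.Tensor,
                   labels: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Main classification loss plus the auxiliary negative-supervision
    term; `alpha` is an assumed interpolation weight, not from the paper."""
    return F.cross_entropy(logits, labels) + alpha * negative_supervision_loss(reps, labels)

In this sketch the encoder output `reps` (e.g., a [CLS] vector from a pre-trained model) is shared between both tasks, which is what makes the setup multitask: the classification head and the representation-separation objective are trained jointly.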

Cite

APA:

Ohashi, S., Takayama, J., Kajiwara, T., Chu, C., & Arase, Y. (2020). Text classification with negative supervision. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 351–357). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.33
