Constructing Contrastive Samples via Summarization for Text Classification with Limited Annotations

Abstract

Contrastive learning has emerged as a powerful representation learning method and facilitates various downstream tasks, especially when supervised data is limited. How to construct efficient contrastive samples through data augmentation is key to its success. Unlike in vision tasks, data augmentation methods for contrastive learning have not been sufficiently investigated for language tasks. In this paper, we propose a novel approach to constructing contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to obtain better text representations, which greatly benefit text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization term, named Mixsum, in addition to the cross-entropy loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and Mixsum regularization.
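The abstract describes the core recipe: generate a summary of each training document, treat the (document, summary) pair as two views sharing one label, and train with a supervised contrastive loss alongside cross-entropy. Below is a minimal PyTorch sketch of such a supervised contrastive loss (in the style of Khosla et al., 2020); the batch construction, summarizer, and encoder are illustrative assumptions rather than the authors' exact implementation, and the Mixsum mixing/regularization term is omitted.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over L2-normalized embeddings.

    features: (N, D) batch embeddings where each document and its generated
              summary both appear, so N = 2 * batch_size.
    labels:   (N,) class labels; a summary inherits the label of its source.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature          # (N, N) similarity logits
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))    # drop self-similarity
    # Positives: samples with the same label (including a document's own summary),
    # excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -pos_log_prob / pos_counts
    has_pos = pos_mask.any(dim=1)                      # skip anchors with no positive
    return loss[has_pos].mean()

if __name__ == "__main__":
    # Tiny synthetic check: 4 "documents" plus their 4 "summaries" as random
    # vectors; the second half of the labels mirrors the first half because a
    # summary carries its source document's label.
    feats = torch.randn(8, 16, requires_grad=True)
    labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])
    print(sup_con_loss(feats, labels))
```

In practice the two views would come from a real pipeline, e.g. feeding each document through an abstractive summarizer (such as a BART-style model), encoding documents and summaries with the same text encoder, concatenating the embeddings into one batch, and duplicating the labels; those components are assumed here and not shown.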

Citation (APA)

Du, Y., Ma, T., Wu, L., Xu, F., Zhang, X., Long, B., & Ji, S. (2021). Constructing contrastive samples via summarization for text classification with limited annotations. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 1365–1376). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.118
