PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification


Abstract

We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text matching problem in which each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label matching, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, it reaches 98.5% accuracy without any labeled data, close to the fully-supervised result. Extensive experiments and analyses show that all components of PESCO are necessary for improving zero-shot text classification performance.
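The two ideas in the abstract can be illustrated with a small sketch. This is not the paper's implementation: PESCO uses a pretrained neural sentence encoder and a contrastive loss, whereas the `encode` function below is a hypothetical bag-of-words stand-in, and the prompt template and confidence threshold are illustrative assumptions.

```python
import math
from collections import Counter

def encode(text):
    # Hypothetical stand-in encoder (bag of words); PESCO uses a
    # pretrained neural sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(doc, labels, prompt="this text is about {}"):
    # (1) Prompt-enhanced label matching: each label is wrapped in a
    # prompt, and the document (the query) is matched to the most
    # similar prompted label.
    scores = {lab: cosine(encode(doc), encode(prompt.format(lab)))
              for lab in labels}
    best = max(scores, key=scores.get)
    return best, scores

def pseudo_label(docs, labels, threshold=0.1):
    # (2) Self-training: confident predictions become pseudo-labeled
    # (document, label) pairs, which in PESCO serve as positives for
    # the next round of contrastive training.
    pairs = []
    for d in docs:
        lab, scores = zero_shot_classify(d, labels)
        if scores[lab] >= threshold:
            pairs.append((d, lab))
    return pairs

labels = ["sports", "politics"]
pred, _ = zero_shot_classify("the team won the sports final", labels)
```

Here `pred` is the label whose prompted description is closest to the document; iterating classification and pseudo-labeling is what the abstract calls the self-training loop.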

Citation (APA)

Wang, Y. S., Chi, T. C., Zhang, R., & Yang, Y. (2023). PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 14897–14911). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.832
