Semantic-Oriented Unlabeled Priming for Large-Scale Language Models

Abstract

Due to the high cost of finetuning large language models, several recent works propose adapting them to specific tasks through in-context learning, without any parameter updates. Unfortunately, in-context learning currently offers no way to leverage unlabeled data, which is often much easier to obtain in large quantities than labeled examples. In this work, we therefore investigate ways of using unlabeled examples to improve the zero-shot performance of pretrained language models without any finetuning: we introduce Semantic-Oriented Unlabeled Priming (SOUP), a method that classifies an input by retrieving semantically similar unlabeled examples, assigning labels to these examples in a zero-shot fashion, and then using the resulting pseudo-labeled examples for in-context learning. We also propose bag-of-contexts priming, a new priming strategy that is better suited to this setting and makes it possible to use more examples than fit into the model's context window.
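The abstract compresses the method into one sentence; the sketch below spells out its three steps, namely semantic retrieval, zero-shot pseudo-labeling, and bag-of-contexts priming, for a text classification task. It is a minimal illustration, not the authors' released code: the sentence-transformers encoder, the review/sentiment prompt template, and the `lm_label_scores` helper (a zero-shot scorer over the frozen language model that returns a label distribution for a prompt) are all assumptions made for the example.

```python
from collections import Counter

import numpy as np
from sentence_transformers import SentenceTransformer


def soup_classify(query, unlabeled_pool, verbalizers, lm_label_scores,
                  k=8, contexts=4):
    """Classify `query` in the spirit of SOUP (sketch, not the paper's code).

    verbalizers: mapping from label to its verbalization,
        e.g. {"positive": "great", "negative": "terrible"}.
    lm_label_scores: assumed callable(prompt) -> {label: probability},
        scoring labels zero-shot with the frozen language model.
    """
    # 1) Retrieve the k unlabeled examples most similar to the query.
    #    (Re-encoding the pool per query is wasteful; a real system
    #    would precompute and index the pool embeddings.)
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
    pool_emb = encoder.encode(unlabeled_pool, normalize_embeddings=True)
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(pool_emb @ query_emb)[::-1][:k]
    neighbors = [unlabeled_pool[i] for i in top]

    # 2) Pseudo-label each retrieved neighbor zero-shot with the LM itself.
    pseudo_labeled = []
    for text in neighbors:
        scores = lm_label_scores(f"Review: {text}\nSentiment:")
        label = max(scores, key=scores.get)
        pseudo_labeled.append((text, verbalizers[label]))

    # 3) Bag-of-contexts priming: rather than packing all neighbors into
    #    one prompt, build several smaller contexts and average the label
    #    distributions they induce, so more examples can be used than fit
    #    into a single context window.
    per_context = k // contexts
    totals = Counter()
    for c in range(contexts):
        chunk = pseudo_labeled[c * per_context:(c + 1) * per_context]
        prompt = "".join(f"Review: {t}\nSentiment: {v}\n\n" for t, v in chunk)
        prompt += f"Review: {query}\nSentiment:"
        for label, p in lm_label_scores(prompt).items():
            totals[label] += p / contexts
    return max(totals, key=totals.get)
```

Averaging the per-context label distributions in step 3 is one natural way to aggregate; the key point the abstract makes is that splitting the retrieved examples across several contexts decouples how many examples can be used from the size of the context window.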

Citation (APA)

Liu, Y., Schick, T., & Schütze, H. (2023). Semantic-Oriented Unlabeled Priming for Large-Scale Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 32–38). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.sustainlp-1.2
