Few-Shot Learning with Siamese Networks and Label Tuning

Abstract

We study the problem of building text classifiers with little or no training data, commonly known as zero- and few-shot text classification. In recent years, approaches based on neural textual entailment models have been found to give strong results on a diverse range of tasks. In this work, we show that with proper pre-training, Siamese networks that embed texts and labels offer a competitive alternative. These models allow for a large reduction in inference cost: constant in the number of labels rather than linear, since each input text is encoded once and compared against precomputed label embeddings, whereas an entailment model requires one forward pass per label. Furthermore, we introduce label tuning, a simple and computationally efficient approach that adapts the model in a few-shot setup by changing only the label embeddings. While it gives lower performance than full model fine-tuning, this approach has the architectural advantage that a single frozen encoder can be shared by many different tasks.
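The label-tuning idea can be illustrated with a short sketch: a frozen Siamese encoder produces text embeddings, classification scores are cosine similarities against a matrix of label embeddings, and only that matrix is trained on the few-shot examples. The shapes, temperature value, and random stand-ins for encoder outputs below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Assumed few-shot setup: precomputed sentence embeddings from a frozen
# Siamese encoder (e.g. a sentence-transformer), plus gold labels.
num_labels, dim = 3, 384
text_emb = torch.randn(16, dim)            # embeddings of 16 support examples
gold = torch.randint(0, num_labels, (16,)) # their gold label indices

# Label embeddings would normally be initialized from the encoder's
# embeddings of the label names; random vectors stand in for them here.
label_emb = torch.randn(num_labels, dim, requires_grad=True)

# Label tuning: only the label embeddings are updated; the encoder stays frozen.
opt = torch.optim.Adam([label_emb], lr=1e-2)
for _ in range(100):
    # Score every example against every label by cosine similarity.
    scores = F.normalize(text_emb, dim=-1) @ F.normalize(label_emb, dim=-1).T
    loss = F.cross_entropy(scores / 0.1, gold)  # 0.1 = assumed temperature
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference cost is constant in the number of labels: the query text is
# embedded once, then scored against the tuned label-embedding matrix.
query = torch.randn(1, dim)  # stand-in for one encoded query text
pred = (F.normalize(query, dim=-1) @ F.normalize(label_emb, dim=-1).T).argmax(-1)
```

Because the encoder never changes, many tasks can share it and differ only in their small per-task label-embedding matrices, which is the architectural advantage the abstract refers to.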

Cite

Müller, T., Pérez-Torró, G., & Franco-Salvador, M. (2022). Few-Shot Learning with Siamese Networks and Label Tuning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 8532–8545). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.584
