Are Prompt-based Models Clueless?

Abstract

Finetuning large pre-trained language models with a task-specific head has advanced the state-of-the-art on many natural language understanding benchmarks. However, models with a task-specific head require a lot of training data, making them susceptible to learning and exploiting dataset-specific superficial cues that do not generalize to other datasets. Prompting has reduced the data requirement by reusing the language model head and formatting the task input to match the pre-training objective. Therefore, it is expected that few-shot prompt-based models do not exploit superficial cues. This paper presents an empirical examination of whether few-shot prompt-based models also exploit superficial cues. Analyzing few-shot prompt-based models on MNLI, SNLI, HANS, and COPA has revealed that prompt-based models also exploit superficial cues. While the models perform well on instances with superficial cues, they often underperform or only marginally outperform random accuracy on instances without superficial cues.
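
The abstract describes prompting as reusing the language model head and formatting the task input to match the pre-training (cloze) objective. As a rough illustration only, and not the authors' code, the sketch below scores verbalizer words at a masked position for a single NLI example; the checkpoint name, prompt template, and label words are assumptions chosen for clarity.

```python
# Minimal sketch of cloze-style prompting for NLI (illustrative only, not the
# paper's implementation). Assumes a masked LM ("roberta-large") and a
# hypothetical "premise ? <mask> , hypothesis" template with a Yes/Maybe/No verbalizer.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "roberta-large"  # assumed checkpoint; any masked LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Format the task input to match the pre-training objective, so the reused
# language-model head predicts the label word at the mask position.
prompt = f"{premise} ? {tokenizer.mask_token} , {hypothesis}"
inputs = tokenizer(prompt, return_tensors="pt")

# Verbalizer: label words mapped to NLI classes (an illustrative choice).
verbalizer = {"Yes": "entailment", "Maybe": "neutral", "No": "contradiction"}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
mask_logits = logits[0, mask_pos]

# Score each label word at the masked position and take the argmax.
scores = {}
for word, label in verbalizer.items():
    token_id = tokenizer(" " + word, add_special_tokens=False).input_ids[0]
    scores[label] = mask_logits[token_id].item()

print(max(scores, key=scores.get))  # predicted NLI label
```

In the few-shot setting the paper examines, a model prompted this way would additionally be finetuned on a handful of labeled examples; the sketch only illustrates the input formatting and the reuse of the language model head that the abstract refers to.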

Cite (APA)

Kavumba, P., Takahashi, R., & Oda, Y. (2022). Are prompt-based models clueless? In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (pp. 2333–2352). Association for Computational Linguistics.
