Prompting Language Models for Linguistic Structure

Abstract

Although pretrained language models (PLMs) can be prompted to perform a wide range of language tasks, it remains an open question how much this ability comes from generalizable linguistic understanding versus surface-level lexical patterns. To test this, we present a structured prompting approach for linguistic structured prediction tasks, allowing us to perform zero- and few-shot sequence tagging with autoregressive PLMs. We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking, demonstrating strong few-shot performance in all cases. We also find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels. These findings indicate that the in-context learning ability and linguistic knowledge of PLMs generalize beyond memorization of their training data.
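
The abstract only describes structured prompting at a high level. Below is a minimal sketch of the general idea: the prompt interleaves words with their tags in a few-shot demonstration, and the target sentence is then tagged one word at a time, with each predicted tag fed back into the context. The exact prompt format and the generate_tag wrapper around an autoregressive PLM are assumptions made for illustration, not the authors' implementation.

```python
from typing import Callable, Sequence


def tag_sentence(
    words: list[str],
    demos: Sequence[tuple[list[str], list[str]]],
    generate_tag: Callable[[str], str],
) -> list[str]:
    """Tag `words` one at a time via word-by-word structured prompting.

    `demos` holds few-shot (words, tags) pairs rendered as word_TAG
    sequences; `generate_tag` is a hypothetical wrapper that sends a
    prompt to an autoregressive PLM and returns its short completion.
    """
    # Few-shot demonstrations: each sentence as "word_TAG word_TAG ..."
    header = "".join(
        " ".join(f"{w}_{t}" for w, t in zip(ws, ts)) + "\n"
        for ws, ts in demos
    )

    prefix = header
    tags: list[str] = []
    for word in words:
        prompt = prefix + f"{word}_"          # ask the model to continue with a tag
        tag = generate_tag(prompt).strip().split()[0]
        tags.append(tag)
        prefix = prompt + tag + " "           # feed the prediction back into the context
    return tags


# Toy usage with a stub standing in for a real model call.
demos = [(["dogs", "bark"], ["NOUN", "VERB"])]
print(tag_sentence(["cats", "sleep"], demos, lambda prompt: "NOUN"))
```

Decoding tag by tag in this way keeps the model's output constrained to one label per input word, which is what lets an autoregressive PLM be used for sequence tagging without any task-specific head.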

Citation

Blevins, T., Gonen, H., & Zettlemoyer, L. (2023). Prompting Language Models for Linguistic Structure. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6649–6663). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.367
