Template-free Prompt Tuning for Few-shot NER

104 citations · 145 Mendeley readers

Abstract

Prompt-based methods have been successfully applied to sentence-level few-shot learning tasks, largely owing to the sophisticated design of templates and label words. However, when applied to token-level labeling tasks such as NER, enumerating template queries over all potential entity spans is time-consuming. In this work, we propose a more elegant method that reformulates NER as an LM problem without any templates. Specifically, we discard the template construction process while retaining the word-prediction paradigm of pre-trained models, predicting a class-related pivot word (or label word) at each entity position. We also explore principled ways to automatically search for label words that pre-trained models can easily adapt to. Besides avoiding the complicated template-based process, the proposed LM objective narrows the gap between the objectives used in pre-training and fine-tuning, and thus better benefits few-shot performance. Experimental results demonstrate the effectiveness of the proposed method over bert-tagger and the template-based method under few-shot settings. Moreover, decoding with the proposed method is up to 1930.12 times faster than with the template-based method.
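To make the reformulation concrete, the sketch below illustrates the template-free idea under stated assumptions; it is not the authors' released implementation. It uses the HuggingFace transformers API with bert-base-cased, and the label-word map and example sentence are hypothetical placeholders (the paper searches for label words automatically rather than fixing them by hand).

```python
# Minimal sketch of template-free prompt tuning for NER (illustrative only).
# The original sentence is fed in unmasked; the masked-LM head is trained to
# emit a class-related label word at entity positions and to reproduce the
# original token everywhere else, so fine-tuning keeps the same
# word-prediction objective as pre-training -- no template, no [MASK] query.
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")

# Hypothetical label words: one pivot word per entity class.
label_words = {"PER": "John", "LOC": "Paris", "ORG": "Google"}
label_ids = {c: tokenizer.convert_tokens_to_ids(w) for c, w in label_words.items()}

words = ["Obama", "visited", "Berlin", "yesterday"]   # toy example
tags  = ["PER",   "O",       "LOC",    "O"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# LM targets: the class's label word at entity positions, the input token
# everywhere else -- a plain MLM cross-entropy over the whole sequence.
targets = enc["input_ids"].clone()
for pos, wid in enumerate(enc.word_ids(0)):
    if wid is not None and tags[wid] != "O":
        targets[0, pos] = label_ids[tags[wid]]

loss = model(**enc, labels=targets).loss
loss.backward()

# Decoding is a single forward pass: a token is tagged with class c when its
# most probable output token is c's label word. This is the source of the
# large speed-up over scoring one template query per candidate entity span.
with torch.no_grad():
    pred_ids = model(**enc).logits.argmax(-1)[0]
```

Because every sentence is decoded in one forward pass rather than one template query per candidate span, inference cost is independent of the number of entity spans, which is consistent with the speed-up reported in the abstract.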

Citation (APA)

Ma, R., Zhou, X., Gui, T., Tan, Y., Li, L., Zhang, Q., & Huang, X. (2022). Template-free Prompt Tuning for Few-shot NER. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 5721–5732). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.420

Readers' Seniority

PhD / Post grad / Masters / Doc: 48 (80%)
Researcher: 7 (12%)
Professor / Associate Prof.: 3 (5%)
Lecturer / Post doc: 2 (3%)

Readers' Discipline

Computer Science: 57 (90%)
Linguistics: 3 (5%)
Engineering: 2 (3%)
Decision Sciences: 1 (2%)
