Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER

40 citations · 85 Mendeley readers

Abstract

Recent advances in prompt-based learning have shown strong results on few-shot text classification using cloze-style templates. Similar attempts have been made for named entity recognition (NER), manually designing templates to predict entity types for every text span in a sentence. However, such methods may suffer from error propagation induced by entity span detection, high cost due to the enumeration of all possible text spans, and omission of inter-dependencies among token labels in a sentence. Here we present a simple demonstration-based learning method for NER, which prefaces the input with task demonstrations for in-context learning. We perform a systematic study of demonstration strategy regarding what to include (entity examples, with or without surrounding context), how to select the examples, and which templates to use. Results on in-domain learning and domain adaptation show that the model's performance in low-resource settings can be largely improved with a suitable demonstration strategy (e.g., a 4–17% improvement with 25 training instances). We also find that good demonstrations can save many labeled examples, and that consistency in demonstrations contributes to better performance.
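The core idea above, concatenating the input sentence with rendered entity demonstrations before feeding it to the model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the template string, the `[SEP]` separator, and the helper names are assumptions chosen for clarity.

```python
# Hypothetical sketch of demonstration-based input construction for NER.
# The template format and separator token are illustrative assumptions.

def build_demonstration(entity_examples, template="{entity} is {etype}."):
    """Render (entity, type) examples into a demonstration string."""
    return " ".join(
        template.format(entity=entity, etype=etype)
        for entity, etype in entity_examples
    )

def augment_input(sentence, entity_examples, sep=" [SEP] "):
    """Concatenate the input sentence with task demonstrations,
    so the model can condition on labeled examples in-context."""
    return sentence + sep + build_demonstration(entity_examples)
```

For instance, `augment_input("He visited Paris.", [("Obama", "PER"), ("Paris", "LOC")])` yields `"He visited Paris. [SEP] Obama is PER. Paris is LOC."`, which a token classifier can then label while attending to the demonstrations.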

Cite

CITATION STYLE

APA

Lee, D. H., Kadakia, A., Tan, K., Agarwal, M., Feng, X., Shibuya, T., … Ren, X. (2022). Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 2687–2700). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.192
