Recently, a considerable literature has grown up around the theme of few-shot named entity recognition (NER), but little benchmark data has been published specifically for this practical and challenging task. Current approaches collect existing supervised NER datasets and reorganize them into a few-shot setting for empirical study. These strategies conventionally aim to recognize coarse-grained entity types with few examples, while in practice most unseen entity types are fine-grained. In this paper, we present FEW-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types. FEW-NERD consists of 188,238 sentences from Wikipedia comprising 4,601,160 words, each of which is annotated either as context or as part of a two-level entity type. To the best of our knowledge, this is the first few-shot NER dataset and the largest human-crafted NER dataset. We construct benchmark tasks with different emphases to comprehensively assess the generalization capability of models. Extensive empirical results and analysis show that FEW-NERD is challenging and that the problem requires further research. We make FEW-NERD public at https://ningding97.github.io/fewnerd/.
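To illustrate the two-level annotation scheme described in the abstract, the short Python sketch below shows one way such token-level labels might be represented and split into their coarse and fine parts. The specific tag strings, the list layout, and the split_tag helper are illustrative assumptions, not the dataset's official release format.

    # A minimal sketch (assumed format) of two-level token annotation:
    # each word is either context ("O") or carries a coarse-fine entity tag.
    example = [
        ("Hugh",    "person-actor"),  # coarse type: person, fine type: actor (illustrative tag)
        ("Jackman", "person-actor"),
        ("starred", "O"),             # "O" marks context (non-entity) tokens
        ("in",      "O"),
        ("Logan",   "art-film"),      # coarse type: art, fine type: film (illustrative tag)
        (".",       "O"),
    ]

    # Hypothetical helper: recover the coarse and fine levels from a tag string.
    def split_tag(tag):
        return tuple(tag.split("-", 1)) if tag != "O" else ("O", "O")

    for token, tag in example:
        coarse, fine = split_tag(tag)
        print(f"{token}\t{coarse}\t{fine}")

Under this assumed layout, a model evaluated on coarse-grained types only needs the first element of each split tag, while the fine-grained few-shot setting uses the full coarse-fine label.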
CITATION
Ding, N., Xu, G., Chen, Y., Wang, X., Han, X., Xie, P., … Liu, Z. (2021). FEW-NERD: A few-shot named entity recognition dataset. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 3198–3213). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.248