AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction

Abstract

Deep neural models for named entity recognition (NER) have shown impressive results in overcoming label scarcity and generalizing to unseen entities by leveraging distant supervision and auxiliary information such as explanations. However, the costs of acquiring such additional information are generally prohibitive. In this paper, we present a novel two-stage framework (AUTOTRIGGER) to improve NER performance by automatically generating and leveraging "entity triggers", human-readable cues in the text that help guide the model to make better decisions. Our framework uses post-hoc explanation to generate rationales and strengthens a model's prior knowledge using an embedding interpolation technique. This approach allows models to exploit triggers to infer entity boundaries and types instead of solely memorizing the entity words themselves. Through experiments on three well-studied NER datasets, AUTOTRIGGER shows strong label-efficiency, is capable of generalizing to unseen entities, and outperforms the RoBERTa-CRF baseline by nearly 0.5 F1 points on average.
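
The abstract only sketches the two stages at a high level. As a rough illustration (not the authors' code), the snippet below shows, under assumed interfaces, (a) an occlusion-style importance score that could surface candidate trigger phrases, and (b) interpolation of token embeddings with a pooled trigger embedding; the names TriggerInterpolator, occlusion_score, and score_fn, as well as the fixed mixing coefficient, are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two ideas mentioned in the abstract, not the authors' code.
# 1) Occlusion-style scoring: mask a candidate phrase and measure how much the model's
#    confidence in the labeled entity drops; high-drop phrases can serve as triggers.
# 2) Embedding interpolation: mix token embeddings with a pooled trigger embedding so the
#    encoder can rely on contextual cues rather than memorized entity strings.

class TriggerInterpolator(nn.Module):
    """Interpolates token embeddings with a pooled trigger embedding (sketch)."""
    def __init__(self, mix: float = 0.5):
        super().__init__()
        self.mix = mix  # fixed interpolation coefficient here; could be learned instead

    def forward(self, token_emb: torch.Tensor, trigger_emb: torch.Tensor) -> torch.Tensor:
        # token_emb: (batch, seq_len, dim); trigger_emb: (batch, n_trigger_tokens, dim)
        trigger_vec = trigger_emb.mean(dim=1, keepdim=True)            # pool trigger tokens
        return self.mix * token_emb + (1.0 - self.mix) * trigger_vec   # broadcast over seq_len


def occlusion_score(score_fn, tokens, entity_span, phrase_span):
    """Importance of a candidate phrase = confidence drop when the phrase is masked.

    score_fn(tokens, entity_span) stands in for any model that returns the probability
    assigned to the gold entity label for entity_span.
    """
    masked = list(tokens)
    for i in range(*phrase_span):
        masked[i] = "[MASK]"
    return score_fn(tokens, entity_span) - score_fn(masked, entity_span)


if __name__ == "__main__":
    # Toy usage with random embeddings and a dummy scorer.
    interp = TriggerInterpolator()
    mixed = interp(torch.randn(2, 12, 768), torch.randn(2, 3, 768))
    print(mixed.shape)  # torch.Size([2, 12, 768])

    dummy_scorer = lambda toks, span: 0.9 if "clinic" in toks else 0.4
    toks = ["Dana", "visited", "the", "clinic", "on", "Monday"]
    print(occlusion_score(dummy_scorer, toks, entity_span=(0, 1), phrase_span=(3, 4)))
```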

Citation (APA)

Lee, D. H., Selvam, R. K., Sarwar, S. M., Lin, B. Y., Morstatter, F., Pujara, J., … Ren, X. (2023). AutoTriggER: Label-Efficient and Robust Named Entity Recognition with Auxiliary Trigger Extraction. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3003–3017). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.219
