Leveraging document-level label consistency for named entity recognition


Abstract

Document-level label consistency is an effective indicator that different occurrences of a particular token sequence are very likely to have the same entity types. Previous work focused on better context representations and used conditional random fields (CRFs) for label decoding. However, CRF-based methods are inadequate for modeling document-level label consistency. This work introduces a novel two-stage label refinement approach to handle document-level label consistency: a key-value memory network first records the draft labels predicted by the base model, and a multi-channel Transformer then refines these draft predictions based on the explicit co-occurrence relationships derived from the memory network. In addition, to mitigate the side effects of incorrect draft labels, Bayesian neural networks are used to flag labels with a high probability of being wrong, which helps prevent the incorrect refinement of correct draft labels. Experimental results on three named entity recognition benchmarks demonstrate that the proposed method significantly outperforms state-of-the-art methods.
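The two-stage idea can be illustrated with a deliberately simplified sketch: a dictionary-based memory stands in for the key-value memory network, a majority vote over recorded draft labels stands in for the multi-channel Transformer refinement, and a per-token confidence score stands in for the Bayesian uncertainty estimate. All function names and the keying on single tokens (rather than token sequences) are illustrative assumptions, not the paper's actual architecture.

```python
from collections import defaultdict

def build_label_memory(sentences, draft_labels):
    """Stage 1 (toy): record every draft label observed for each token
    across the document. The paper uses a learned key-value memory over
    token sequences; a plain dict over single tokens is used here."""
    memory = defaultdict(list)
    for tokens, labels in zip(sentences, draft_labels):
        for token, label in zip(tokens, labels):
            memory[token].append(label)
    return memory

def refine_labels(sentences, draft_labels, confidences, memory, threshold=0.8):
    """Stage 2 (toy): only draft labels whose confidence falls below the
    threshold (the stand-in for a high Bayesian uncertainty) are replaced,
    using a majority vote over the labels recorded for the same token.
    High-confidence draft labels are left untouched, mirroring how the
    uncertainty estimate guards correct predictions from being refined."""
    refined = []
    for tokens, labels, confs in zip(sentences, draft_labels, confidences):
        out = []
        for token, label, conf in zip(tokens, labels, confs):
            if conf < threshold and memory[token]:
                votes = memory[token]
                label = max(set(votes), key=votes.count)  # majority draft label
            out.append(label)
        refined.append(out)
    return refined
```

For example, if "Apple" is confidently tagged `B-ORG` in two sentences but receives a low-confidence `O` in a third, the refinement step flips the uncertain occurrence to the document-majority label `B-ORG`.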

Citation (APA)

Gui, T., Ye, J., Zhang, Q., Zhou, Y., Gong, Y., & Huang, X. (2020). Leveraging document-level label consistency for named entity recognition. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI 2020) (pp. 3976–3982). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/550
