ENPAR: Enhancing entity and entity pair representations for joint entity relation extraction

Abstract

Current state-of-the-art systems for joint entity relation extraction (Luan et al., 2019; Wadden et al., 2019) usually adopt a multi-task learning framework. However, annotations for the auxiliary tasks, such as coreference resolution and event extraction, are often just as hard (or even harder) to obtain. In this work, we propose ENPAR, a pre-training method that improves joint extraction performance while requiring only additional entity annotations, which are much easier to collect. Unlike most existing work, which only incorporates entity information into the sentence encoder, we further exploit entity pair information. Specifically, we devise four novel objectives, i.e., masked entity typing, masked entity prediction, adversarial context discrimination, and permutation prediction, to pre-train an entity encoder and an entity pair encoder. Comprehensive experiments show that the proposed pre-training method achieves significant improvements over BERT on ACE05, SciERC, and NYT, and outperforms the current state of the art on ACE05.
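To make the first objective concrete, here is a minimal PyTorch sketch of what a "masked entity typing" pre-training head could look like. This is not the authors' released implementation; the module name, the input shapes, and the mean-pooling over the masked span are all illustrative assumptions. The idea it demonstrates matches the abstract: entity mentions are masked in the input, and a classifier over the span representation must recover the entity's type.

import torch
import torch.nn as nn

class MaskedEntityTyping(nn.Module):
    """Hypothetical masked-entity-typing head on top of a BERT-style encoder."""

    def __init__(self, encoder, hidden_size: int, num_entity_types: int):
        super().__init__()
        self.encoder = encoder  # e.g. a HuggingFace BertModel instance
        self.classifier = nn.Linear(hidden_size, num_entity_types)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, span_mask, type_labels):
        # input_ids:   (batch, seq_len), entity tokens replaced by [MASK]
        # span_mask:   (batch, seq_len), 1.0 over the masked entity span, else 0.0
        # type_labels: (batch,), gold entity-type index for each masked span
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Mean-pool the contextual hidden states over each masked entity span.
        span_lens = span_mask.sum(dim=1, keepdim=True).clamp(min=1)
        span_repr = (hidden * span_mask.unsqueeze(-1)).sum(dim=1) / span_lens
        logits = self.classifier(span_repr)  # (batch, num_entity_types)
        return self.loss_fn(logits, type_labels)

Because this objective needs only entity spans and their types, it can be trained on the cheap entity-only annotations the abstract describes, without the coreference or event labels that multi-task systems rely on. The other three objectives (masked entity prediction, adversarial context discrimination, permutation prediction) would attach analogous heads to the entity and entity pair encoders.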

Citation (APA)

Wang, Y., Sun, C., Wu, Y., Zhou, H., Li, L., & Yan, J. (2021). ENPAR: Enhancing entity and entity pair representations for joint entity relation extraction. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 2877–2887). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.251
