In the document-level event extraction (DEE) task, event arguments are often scattered across sentences (the cross-sentence issue), and a single document may contain multiple events (the multi-event issue). In this paper, we argue that relation information among event arguments is of great significance for addressing these two issues, and we propose a new DEE framework that models relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). This framework features a novel, tailored transformer named the Relation-augmented Attention Transformer (RAAT), which scales to capture argument relations of varying scale and number. To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning approach to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on two public datasets. Our code is available at https://github.com/TencentYoutuResearch/RAAT.
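To make the idea concrete, below is a minimal sketch of relation-augmented self-attention in PyTorch. It assumes one common way of injecting pairwise relation information: a relation-type matrix over token pairs contributes a learned bias to the attention logits, in the spirit of relative-position encodings. The `RelationAugmentedAttention` class, its `relation_ids` input, and the scalar-bias design are illustrative assumptions, not the paper's exact RAAT formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAugmentedAttention(nn.Module):
    """Single-head self-attention with a learned bias per argument-relation type.

    Hypothetical sketch: relation_ids[b, i, j] holds the relation type between
    tokens i and j (0 = no relation); each type adds a scalar bias to the
    attention logits before the softmax.
    """

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One scalar bias per relation type; index 0 acts as "no relation".
        self.rel_bias = nn.Embedding(num_relations, 1)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, relation_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); relation_ids: (batch, seq, seq), long
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        # Bias each token-pair logit by its relation type.
        logits = logits + self.rel_bias(relation_ids).squeeze(-1)
        return torch.matmul(F.softmax(logits, dim=-1), v)

# Toy usage: 3 relation types (including "none") over an 8-token "document".
attn = RelationAugmentedAttention(d_model=16, num_relations=3)
x = torch.randn(2, 8, 16)
rel = torch.randint(0, 3, (2, 8, 8))
out = attn(x, rel)  # (2, 8, 16)
```

A scalar bias per relation type is the simplest choice; richer variants (e.g., per-head biases or relation embeddings added to keys and values) fit the same interface.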
CITATION
Liang, Y., Jiang, Z., Yin, D., & Ren, B. (2022). RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2022) (pp. 4985–4997). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.naacl-main.367