RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction

27 citations · 48 Mendeley readers

Abstract

In the document-level event extraction (DEE) task, event arguments are often scattered across sentences (the across-sentence issue), and a single document may describe multiple events (the multi-event issue). In this paper, we argue that the relation information among event arguments is of great significance for addressing these two issues, and we propose a new DEE framework that models these relation dependencies, called Relation-augmented Document-level Event Extraction (ReDEE). This framework features a novel, tailored transformer, named the Relation-augmented Attention Transformer (RAAT), which scales to capture argument relations of varying scale and number. To further leverage relation information, we introduce a separate event relation prediction task and adopt multi-task learning to explicitly enhance event extraction performance. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on two public datasets. Our code is available at https://github.com/TencentYoutuResearch/RAAT.
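The core idea can be pictured as self-attention biased by pairwise argument relations. Below is a minimal PyTorch sketch of relation-augmented attention, assuming relation types enter as additive embeddings on the keys and values, indexed by a token-pair relation matrix (in the spirit of relative-position attention). The single-head layout and all names and shapes are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

import math
import torch
import torch.nn as nn


class RelationAugmentedAttention(nn.Module):
    """Single-head self-attention biased by pairwise relation embeddings.

    Illustrative sketch only: relations between tokens (e.g. argument-pair
    relations) are injected as additive key/value embeddings, similar to
    relative-position attention. Not the authors' implementation.
    """

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        self.d_model = d_model
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # One embedding per relation type (e.g. 0 = "no relation").
        self.rel_k = nn.Embedding(num_relations, d_model)
        self.rel_v = nn.Embedding(num_relations, d_model)

    def forward(self, x: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); rel_ids: (batch, seq, seq) relation type ids
        q = self.q_proj(x)        # (B, L, D)
        k = self.k_proj(x)        # (B, L, D)
        v = self.v_proj(x)        # (B, L, D)
        rk = self.rel_k(rel_ids)  # (B, L, L, D)
        rv = self.rel_v(rel_ids)  # (B, L, L, D)

        # Content-content scores plus a content-relation bias term.
        scores = torch.matmul(q, k.transpose(-1, -2))            # (B, L, L)
        scores = scores + torch.einsum("bld,blmd->blm", q, rk)   # relation bias
        attn = torch.softmax(scores / math.sqrt(self.d_model), dim=-1)

        # Mix values, adding the relation-specific value embeddings.
        out = torch.matmul(attn, v)                               # (B, L, D)
        out = out + torch.einsum("blm,blmd->bld", attn, rv)
        return out


if __name__ == "__main__":
    B, L, D, R = 2, 8, 64, 5
    layer = RelationAugmentedAttention(d_model=D, num_relations=R)
    x = torch.randn(B, L, D)
    rel_ids = torch.randint(0, R, (B, L, L))
    print(layer(x, rel_ids).shape)  # torch.Size([2, 8, 64])

Under this reading, attention weights depend on both token content and typed argument relations, which is what lets arguments scattered across sentences attend to each other directly rather than only through intervening tokens.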

Citation (APA)

Liang, Y., Jiang, Z., Yin, D., & Ren, B. (2022). RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2022) (pp. 4985–4997). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.naacl-main.367
