What type of information leads existing neural relation extraction (RE) models to make correct decisions is an important question. In this paper, we observe that entity types and triggers are the most indicative information for RE in each instance. Moreover, these indicative clues tend to co-occur with specific relations at the corpus level. Motivated by this, we propose a novel RAtionale Graph (RAG) to organize such co-occurrence constraints among entity types, triggers, and relations in a holistic graph view. By introducing two subtasks, entity type prediction and trigger labeling, we connect each instance to RAG and then leverage the relevant global co-occurrence knowledge stored in the graph to improve the performance of neural RE models. Extensive experimental results indicate that our method significantly outperforms strong baselines and achieves state-of-the-art performance on document-level and sentence-level RE benchmarks.
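The abstract describes organizing corpus-level co-occurrence constraints among entity types, triggers, and relations into a graph and querying it per instance. Below is a minimal illustrative sketch, not the authors' implementation: the node names, edge weights, and scoring heuristic are assumptions chosen to show how such a co-occurrence graph could be built from training instances and used as soft evidence alongside a neural RE model.

```python
# Hypothetical sketch of a co-occurrence "rationale graph"; details are
# illustrative assumptions, not taken from the paper.
from collections import defaultdict
from typing import Dict, List


class RationaleGraph:
    def __init__(self) -> None:
        # Edge weight = co-occurrence count between a clue (an entity type or
        # a trigger phrase) and a relation label, accumulated over the corpus.
        self.clue_to_relation: Dict[str, Dict[str, int]] = defaultdict(
            lambda: defaultdict(int)
        )

    def add_instance(self, clues: List[str], relation: str) -> None:
        """Record that these clues co-occurred with this relation in one training instance."""
        for clue in clues:
            self.clue_to_relation[clue][relation] += 1

    def relation_scores(self, clues: List[str]) -> Dict[str, float]:
        """Aggregate normalized co-occurrence evidence from the clues of a new instance."""
        scores: Dict[str, float] = defaultdict(float)
        for clue in clues:
            edges = self.clue_to_relation.get(clue, {})
            total = sum(edges.values())
            for relation, count in edges.items():
                # Relative frequency of the relation given this clue, used as soft evidence.
                scores[relation] += count / total
        return dict(scores)


# Toy usage: clues here stand in for the outputs of the two auxiliary subtasks
# (entity type prediction and trigger labeling).
graph = RationaleGraph()
graph.add_instance(["PER", "ORG", "founded"], "org:founded_by")
graph.add_instance(["PER", "ORG", "works for"], "per:employee_of")
graph.add_instance(["PER", "ORG", "founded"], "org:founded_by")

# At inference time, these graph-based scores could be combined with the
# relation logits produced by a neural RE model.
print(graph.relation_scores(["PER", "ORG", "founded"]))
```

In this sketch, combining the graph scores with the model's logits (e.g., by weighted addition) is one plausible way the global knowledge could bias instance-level predictions; the paper's actual integration mechanism may differ.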
CITATION STYLE
Zhang, Z., Yu, B., Shu, X., Xue, M., Liu, T., & Guo, L. (2021). From What to Why: Improving Relation Extraction with Rationale Graph. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 86–95). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.8