Integrating relation constraints with neural relation extractors

Abstract

Recent years have seen rapid progress in identifying predefined relations between entity pairs using neural networks (NNs). However, such models often make predictions for each entity pair individually and thus fail to resolve inconsistencies among different predictions, which can be characterized by discrete relation constraints. These constraints are often defined over combinations of entity-relation-entity triples, since explicit, well-defined type and cardinality requirements for the relations are usually lacking. In this paper, we propose a unified framework to integrate relation constraints with NNs by introducing a new loss term, Constraint Loss. In particular, we develop two efficient methods to capture how well the local predictions from multiple instance pairs satisfy the relation constraints. Experiments on both English and Chinese datasets show that our approach helps NNs learn from discrete relation constraints to reduce inconsistency among local predictions, and outperforms popular neural relation extraction (NRE) models, even when they are enhanced with extra post-processing.
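The abstract does not spell out the exact form of the Constraint Loss, so the following is only a minimal sketch of the general pattern it describes: a standard per-instance classification loss combined with a soft penalty that measures how strongly a pair of local predictions violates a discrete relation constraint. All names here (constraint_loss, training_loss, forbidden_pairs, the weight lam) are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def constraint_loss(probs_a, probs_b, forbidden_pairs):
    """Soft penalty for one illustrative constraint type: two instances that
    share an entity should not jointly be assigned a forbidden relation pair.

    probs_a, probs_b: (num_relations,) softmax outputs for the two instances.
    forbidden_pairs: list of (rel_i, rel_j) combinations ruled out by the constraints.
    """
    penalty = probs_a.new_zeros(())
    for i, j in forbidden_pairs:
        # Joint probability mass placed on a constraint-violating combination.
        penalty = penalty + probs_a[i] * probs_b[j]
    return penalty


def training_loss(logits_a, logits_b, label_a, label_b, forbidden_pairs, lam=0.1):
    """Base cross-entropy on each instance plus the weighted constraint term
    (lam is a hypothetical trade-off weight)."""
    base = (F.cross_entropy(logits_a.unsqueeze(0), label_a.unsqueeze(0))
            + F.cross_entropy(logits_b.unsqueeze(0), label_b.unsqueeze(0)))
    penalty = constraint_loss(F.softmax(logits_a, dim=-1),
                              F.softmax(logits_b, dim=-1),
                              forbidden_pairs)
    return base + lam * penalty


# Toy usage: two instances over 5 relations, where relations 1 and 3
# are assumed (hypothetically) to be mutually exclusive for a shared entity.
logits_a = torch.randn(5, requires_grad=True)
logits_b = torch.randn(5, requires_grad=True)
loss = training_loss(logits_a, logits_b,
                     torch.tensor(1), torch.tensor(3),
                     forbidden_pairs=[(1, 3)])
loss.backward()
```

Because the penalty is computed from softmax probabilities rather than hard predictions, it stays differentiable, so the constraints can shape the network during training instead of being enforced only by post-processing.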

Cite (APA)

Ye, Y., Feng, Y., Luo, B., Lai, Y., & Zhao, D. (2020). Integrating relation constraints with neural relation extractors. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 9442–9449). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6487
