Consistent Inference for Dialogue Relation Extraction


Abstract

Relation extraction is key to many downstream tasks. Dialogue relation extraction aims to discover entity relations in multi-turn dialogue scenarios. Utterance, topic, and relation discrepancies arise mainly from the presence of multiple speakers, utterances, and relations. In this paper, we propose a consistent learning and inference method to minimize the contradictions that can result from these distinctions. First, we design mask mechanisms that refine utterance-aware and speaker-aware representations, respectively, from the global dialogue representation to address the utterance distinction. Then a gate mechanism is proposed to aggregate these bi-grained representations. Next, a mutual attention mechanism is introduced to obtain entity representations for the various relation-specific topic structures. Finally, relational inference is performed through first-order logic constraints over the labeled data to reduce logically contradictory predicted relations. Experimental results on two benchmark datasets show that the proposed method improves F1 by at least 3.3% over the state of the art.
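
As a rough illustration of the gated aggregation step mentioned in the abstract, the sketch below shows one plausible way to fuse utterance-aware and speaker-aware representations with a learned sigmoid gate. The class name GatedAggregation, the tensor shapes, and the interpolation form are assumptions for exposition only, not the authors' published implementation.

import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Hypothetical sketch of a gate mechanism that fuses utterance-aware
    and speaker-aware token representations (not the paper's exact model)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Gate computed from the concatenation of both representation grains.
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, utter_repr: torch.Tensor, speaker_repr: torch.Tensor) -> torch.Tensor:
        # utter_repr, speaker_repr: (batch, seq_len, hidden_size)
        g = torch.sigmoid(self.gate(torch.cat([utter_repr, speaker_repr], dim=-1)))
        # Element-wise interpolation between the two grains of representation.
        return g * utter_repr + (1.0 - g) * speaker_repr

# Usage example with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    batch, seq_len, hidden = 2, 16, 768
    agg = GatedAggregation(hidden)
    fused = agg(torch.randn(batch, seq_len, hidden), torch.randn(batch, seq_len, hidden))
    print(fused.shape)  # torch.Size([2, 16, 768])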

Citation (APA)

Long, X., Niu, S., & Li, Y. (2021). Consistent Inference for Dialogue Relation Extraction. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3885–3891). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/535
