Cross-stitching Text and Knowledge Graph Encoders for Distantly Supervised Relation Extraction


Abstract

Bi-encoder architectures for distantly-supervised relation extraction are designed to make use of the complementary information found in text and knowledge graphs (KGs). However, current architectures suffer from two drawbacks: they either allow no sharing between the text encoder and the KG encoder at all, or, in the case of models with KG-to-text attention, share information in only one direction. Here, we introduce cross-stitch bi-encoders, which allow full interaction between the text encoder and the KG encoder via a cross-stitch mechanism. The cross-stitch mechanism allows sharing and updating representations between the two encoders at any layer, with the amount of sharing dynamically controlled via cross-attention-based gates. Experimental results on two relation extraction benchmarks from two different domains show that enabling full interaction between the two encoders yields strong improvements.
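To make the mechanism concrete, the following is a minimal NumPy sketch of one cross-stitch exchange in one direction (KG-to-text); the paper's mechanism is bidirectional and operates between encoder layers, and the exact gate parameterization here (a scalar sigmoid gate over concatenated states) is an illustrative assumption, not the authors' precise formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_stitch(text_h, kg_h, w_gate):
    """One cross-stitch exchange from a KG encoder layer into a text
    encoder layer (illustrative sketch, not the paper's exact equations).

    text_h: (n_tokens, d)   hidden states of the text encoder layer
    kg_h:   (n_entities, d) hidden states of the KG encoder layer
    w_gate: (2 * d,)        parameters of a scalar per-token gate
    """
    # Cross-attention: each text token attends over the KG entity states.
    scores = text_h @ kg_h.T / np.sqrt(text_h.shape[-1])
    attended_kg = softmax(scores) @ kg_h              # (n_tokens, d)

    # Gate: decides, per token, how much KG information flows in.
    gate_in = np.concatenate([text_h, attended_kg], axis=-1)
    g = 1.0 / (1.0 + np.exp(-(gate_in @ w_gate)))     # (n_tokens,)
    g = g[:, None]

    # Convex combination of the original and the attended representation.
    return g * text_h + (1.0 - g) * attended_kg

rng = np.random.default_rng(0)
text_h = rng.normal(size=(5, 8))   # 5 tokens, dim 8
kg_h = rng.normal(size=(3, 8))     # 3 KG entities, dim 8
w = rng.normal(size=(16,))
out = cross_stitch(text_h, kg_h, w)
print(out.shape)
```

In the full bi-encoder, a symmetric exchange (text-to-KG) updates the KG encoder's states as well, so information flows in both directions rather than only from KG to text.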

Citation (APA)

Dai, Q., Heinzerling, B., & Inui, K. (2022). Cross-stitching Text and Knowledge Graph Encoders for Distantly Supervised Relation Extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 6947–6958). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.467
