Distilling structured knowledge for text-based relational reasoning


Abstract

There is an increasing interest in developing text-based relational reasoning systems, which are capable of systematically reasoning about the relationships between entities mentioned in a text. However, there remains a substantial performance gap between NLP models for relational reasoning and models based on graph neural networks (GNNs), which have access to an underlying symbolic representation of the text. In this work, we investigate how the structured knowledge of a GNN can be distilled into various NLP models in order to improve their performance. We first pre-train a GNN on a reasoning task using structured inputs and then incorporate its knowledge into an NLP model (e.g., an LSTM) via knowledge distillation. To overcome the difficulty of cross-modal knowledge transfer, we also employ a contrastive learning-based module to align the latent representations of the NLP models and the GNN. We test our approach with two state-of-the-art NLP models on 12 different inductive reasoning datasets from the CLUTRR benchmark and obtain significant improvements.
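
To make the described objective concrete, the sketch below shows one way such a training loss could be assembled. It is not the authors' released code: the loss weights, temperatures, and tensor shapes are illustrative assumptions. It combines a standard supervised loss, a knowledge-distillation term against a frozen GNN teacher, and an InfoNCE-style contrastive term that aligns the student's latent representation of a text with the teacher's representation of the corresponding graph.

# Minimal sketch (not the authors' implementation) of a combined loss:
# supervised cross-entropy + distillation from a frozen GNN teacher
# + contrastive alignment of student and teacher latent representations.
# All hyperparameters (T, tau, alpha, beta) are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft-label KL divergence between temperature-scaled distributions.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def contrastive_alignment(student_repr, teacher_repr, tau=0.1):
    # InfoNCE-style loss: each student representation should be closest to
    # the teacher representation of the same example within the batch.
    s = F.normalize(student_repr, dim=-1)
    t = F.normalize(teacher_repr, dim=-1)
    logits = (s @ t.T) / tau  # (batch, batch) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)

def training_step(student_logits, student_repr,
                  teacher_logits, teacher_repr,
                  labels, alpha=0.5, beta=0.1):
    # Total objective: hard-label loss + distillation + representation alignment.
    ce = F.cross_entropy(student_logits, labels)
    kd = distillation_loss(student_logits, teacher_logits.detach())
    align = contrastive_alignment(student_repr, teacher_repr.detach())
    return ce + alpha * kd + beta * align

In a setup like this, the teacher GNN only needs the structured graph input at training time, so the student NLP model can be run on raw text alone at inference.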

Cite

APA: Dong, J., Rondeau, M. A., & Hamilton, W. L. (2020). Distilling structured knowledge for text-based relational reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 6782–6791). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.551
