Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder

Abstract

We introduce a method that transforms a rule-based relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains the RE classifier with a decoder that generates explanations for its extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and the joint learning improves the performance of both the classifier and the decoder.
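To make the joint training setup concrete, here is a minimal sketch of the kind of multi-task objective the abstract describes: a shared encoder feeds both a relation classifier and a decoder that generates the matching rule as an explanation, and the two cross-entropy losses are summed. All module choices, dimensions, and the equal loss weighting are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of jointly training a relation classifier with an
# explanation decoder over a shared encoder. Dimensions and modules are
# illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN, NUM_RELATIONS = 1000, 128, 42

class JointREModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.encoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.classifier = nn.Linear(HIDDEN, NUM_RELATIONS)
        self.decoder = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.generator = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tokens, explanation_in):
        # Shared sentence encoding used by both tasks.
        _, (h, c) = self.encoder(self.embed(tokens))
        rel_logits = self.classifier(h[-1])           # relation prediction
        # Decode the explanation, conditioned on the encoder state
        # (teacher forcing on the serialized rule).
        dec_out, _ = self.decoder(self.embed(explanation_in), (h, c))
        expl_logits = self.generator(dec_out)         # explanation tokens
        return rel_logits, expl_logits

model = JointREModel()
xent = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters())

# Dummy batch: sentence tokens, the gold relation label provided by the
# matching rule, and the rule itself as the target explanation sequence.
tokens = torch.randint(0, VOCAB_SIZE, (4, 20))
relation = torch.randint(0, NUM_RELATIONS, (4,))
expl_in = torch.randint(0, VOCAB_SIZE, (4, 10))
expl_out = torch.randint(0, VOCAB_SIZE, (4, 10))

rel_logits, expl_logits = model(tokens, expl_in)
# Joint objective: classification loss + explanation generation loss.
loss = xent(rel_logits, relation) + xent(
    expl_logits.reshape(-1, VOCAB_SIZE), expl_out.reshape(-1))
loss.backward()
opt.step()
```

The key design point reflected here is that supervision comes entirely from the rules: each rule match supplies both the relation label and the explanation target, so no manually annotated explanations are needed.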

Citation (APA)

Tang, Z., & Surdeanu, M. (2021). Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder. In TrustNLP 2021 - 1st Workshop on Trustworthy Natural Language Processing, Proceedings of the Workshop (pp. 1–7). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.trustnlp-1.1
