Towards structured NLP interpretation via graph explainers

Abstract

Natural language processing (NLP) models are increasingly deployed in real-world applications, and interpretation of textual data has recently attracted considerable attention. Most existing methods generate feature-importance interpretations, which indicate the contribution of each word to a specific model prediction. However, text is typically highly structured, and feature-importance explanations cannot fully reveal the rich information it contains. To bridge this gap, we propose generating structured interpretations for textual data. Specifically, we pre-process the original text with dependency parsing, which transforms the text from sequences into graphs. Graph neural networks (GNNs) are then used to classify the transformed graphs. In particular, we explore two kinds of structured interpretation for pre-trained GNNs: edge-level interpretation and subgraph-level interpretation. Experimental results on three text datasets demonstrate that structured interpretations better reveal the structured knowledge encoded in text. Further analysis indicates that the proposed interpretations faithfully reflect the decision-making process of the GNN model.
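
As a rough illustration of the pre-processing step described in the abstract, the sketch below converts a sentence into a graph whose nodes are tokens and whose edges are dependency relations. The use of spaCy and networkx here is an assumption for illustration only; the paper does not specify its parser or graph tooling, and the node/edge attributes are placeholders for whatever features the GNN consumes.

```python
# Minimal sketch: turn a sentence into a dependency graph.
# Assumes spaCy (with the en_core_web_sm model installed) and networkx;
# these are illustrative choices, not the authors' implementation.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

def text_to_graph(text: str) -> nx.Graph:
    """Build a token-level graph whose edges follow dependency arcs."""
    doc = nlp(text)
    graph = nx.Graph()
    for token in doc:
        # Each node is a token; the word is stored as a node attribute
        # (in practice this would be replaced by a word embedding).
        graph.add_node(token.i, word=token.text)
    for token in doc:
        if token.head.i != token.i:  # skip the root's self-arc
            # Edges are dependency relations, labeled with the relation type.
            graph.add_edge(token.head.i, token.i, dep=token.dep_)
    return graph

g = text_to_graph("The movie was surprisingly good.")
print(g.edges(data=True))
```

A graph built this way could then be fed to a graph classifier, and an edge-level explainer would score these dependency edges (or, for subgraph-level interpretation, connected groups of them) by their contribution to the prediction.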

Citation (APA)

Yuan, H., Yang, F., Du, M., Ji, S., & Hu, X. (2021, December 1). Towards structured NLP interpretation via graph explainers. Applied AI Letters. John Wiley and Sons Inc. https://doi.org/10.1002/ail2.58
