On Guaranteed Optimal Robust Explanations for NLP Models

Abstract

We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP). Our explanations comprise a subset of the words of the input text that satisfies two key features: optimality with respect to a user-defined cost function, such as the length of the explanation, and robustness, in that they ensure prediction invariance for any bounded perturbation in the embedding space of the left-out words. We present two solution algorithms, based respectively on implicit hitting sets and maximum universal subsets, introducing a number of algorithmic improvements to speed up convergence on hard instances. We show how our method can be configured with different perturbation sets in the embedding space and used to detect bias in predictions by enforcing include/exclude constraints on biased terms, as well as to enhance existing heuristic-based NLP explanation frameworks such as Anchors. We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from the SST, Twitter, and IMDB datasets, demonstrating the effectiveness of the derived explanations.
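
To make the implicit hitting-set scheme mentioned in the abstract concrete, here is a minimal Python sketch of the overall loop, under stated assumptions: is_robust stands in for the paper's robustness oracle (in practice a verification query over the bounded perturbation set in embedding space), the hitting sets are computed by brute force rather than by an exact solver, and all names (minimum_hitting_set, optimal_robust_explanation, toy_oracle) are illustrative rather than taken from the paper.

    from itertools import combinations

    def minimum_hitting_set(sets_to_hit, universe):
        # Smallest subset of `universe` that intersects every set in
        # `sets_to_hit`.  Brute force for illustration only; a practical
        # implementation would delegate this to an exact solver.
        for size in range(len(universe) + 1):
            for cand in combinations(universe, size):
                if all(set(cand) & s for s in sets_to_hit):
                    return set(cand)
        return set(universe)

    def optimal_robust_explanation(n_words, is_robust):
        # Implicit hitting-set loop.  The hypothetical oracle is_robust(E)
        # returns (True, None) if fixing the words indexed by E keeps the
        # prediction invariant under bounded perturbation of the remaining
        # words, and (False, C) otherwise, where C is a set of indices at
        # least one of which any valid explanation must contain.
        universe = list(range(n_words))
        to_hit = []                      # accumulated counterexample sets
        while True:
            candidate = minimum_hitting_set(to_hit, universe)
            robust, witness = is_robust(candidate)
            if robust:
                return candidate         # minimum-length robust explanation
            to_hit.append(set(witness))  # rules out this candidate

    # Toy oracle for demonstration: pretend words {1, 3} are jointly
    # necessary and sufficient for a robust prediction.
    def toy_oracle(explanation):
        missing = {1, 3} - set(explanation)
        return (True, None) if not missing else (False, missing)

    print(optimal_robust_explanation(5, toy_oracle))   # -> {1, 3}

The include/exclude constraints on biased terms mentioned in the abstract would correspond to forcing particular word indices into, or out of, the universe before the search starts; the cost function here is implicitly explanation length, since the search enumerates candidate hitting sets by increasing size.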

Cite (APA)

La Malfa, E., Michelmore, R., Zbrzezny, A. M., Paoletti, N., & Kwiatkowska, M. (2021). On Guaranteed Optimal Robust Explanations for NLP Models. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2658–2665). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/366
