Explaining NLP Models via Minimal Contrastive Editing (MICE)


Abstract

Humans have been shown to give contrastive explanations, which explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the influential role that contrastivity plays in how humans explain, this property is largely missing from current methods for explaining NLP models. We present MINIMAL CONTRASTIVE EDITING (MICE), a method for producing contrastive explanations of model predictions in the form of edits to inputs that change model outputs to the contrast case. Our experiments across three tasks (binary sentiment classification, topic classification, and multiple-choice question answering) show that MICE is able to produce edits that are not only contrastive, but also minimal and fluent, consistent with human contrastive edits. We demonstrate how MICE edits can be used for two use cases in NLP system development, debugging incorrect model outputs and uncovering dataset artifacts, and thereby illustrate that producing contrastive explanations is a promising research direction for model interpretability.
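To make the core idea concrete: a minimal contrastive edit is the smallest change to an input that flips a model's prediction to a chosen contrast label. The sketch below is a hypothetical toy illustration only; MICE itself uses a fine-tuned T5-based editor to propose fluent edits, whereas this greedy word-swap search over a tiny hand-written antonym lexicon merely demonstrates the objective.

```python
# Toy illustration (NOT the MICE method): greedily swap words until a
# toy classifier's prediction flips to the requested contrast label.

POS = {"great", "good", "wonderful", "enjoyable"}
NEG = {"terrible", "bad", "awful", "boring"}
ANTONYM = {"great": "terrible", "good": "bad",
           "wonderful": "awful", "enjoyable": "boring"}
ANTONYM.update({v: k for k, v in ANTONYM.items()})  # make swaps symmetric

def predict(tokens):
    """Toy sentiment classifier: net count of lexicon hits."""
    score = sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)
    return "positive" if score > 0 else "negative"

def minimal_contrastive_edit(text, contrast_label):
    """Swap one word at a time, stopping as soon as the label flips."""
    tokens = text.lower().split()
    edits = []
    for i, t in enumerate(tokens):
        if predict(tokens) == contrast_label:
            break
        if t in ANTONYM:
            tokens[i] = ANTONYM[t]
            edits.append((t, tokens[i]))
    return " ".join(tokens), edits

edited, edits = minimal_contrastive_edit(
    "a great and enjoyable film", "negative")
```

Here a single swap ("great" to "terrible") already flips the toy prediction, so the edit is both contrastive and minimal in this tiny example; the paper's contribution is achieving this with fluent, human-like edits on real models.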

Citation (APA)

Ross, A., Marasović, A., & Peters, M. E. (2021). Explaining NLP Models via Minimal Contrastive Editing (MICE). In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 3840–3852). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.336
