Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods

Abstract

A popular approach to unveiling the black box of neural NLP models is to leverage saliency methods, which assign scalar importance scores to each input component. A common practice for evaluating whether an interpretability method is faithful has been evaluation-by-agreement: if multiple methods agree on an explanation, its credibility increases. However, recent work has found that saliency methods exhibit weak rank correlations even when applied to the same model instance, and has advocated alternative diagnostic methods. In our work, we demonstrate that rank correlation is sensitive to small perturbations when evaluating agreement and argue that Pearson-r could be a better-suited alternative. We further show that regularization techniques that increase the faithfulness of attention explanations also increase agreement between saliency methods. By connecting our findings to instance categories based on training dynamics, we show that the agreement between saliency method explanations is very low for easy-to-learn instances. Finally, we connect the improvement in agreement across instance categories to local representation-space statistics of instances, paving the way for work on analyzing which intrinsic model properties improve a model's predisposition to interpretability methods.
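The contrast the abstract draws between rank correlation and Pearson-r can be illustrated with a toy sketch (not the paper's code or data): when many saliency scores are near-tied, a tiny perturbation can scramble the ranks, collapsing Spearman correlation while Pearson-r barely moves.

```python
# Illustrative sketch with hypothetical saliency scores: near-tied values
# make rank correlation (Spearman) fragile, while Pearson-r stays high.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman = Pearson on ranks (no ties in this toy example)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(x), rank(y))

# Two hypothetical saliency methods scoring the same six tokens:
# five near-tied tokens plus one clearly important one. Method 2
# perturbs the near-tied block by at most 0.004, reversing its ranks.
m1 = [0.300, 0.301, 0.302, 0.303, 0.304, 0.900]
m2 = [0.304, 0.303, 0.302, 0.301, 0.300, 0.900]

print(f"Pearson-r: {pearson(m1, m2):.3f}")   # ≈ 1.000
print(f"Spearman:  {spearman(m1, m2):.3f}")  # ≈ -0.143
```

The point of the sketch: the rank-based measure reports near-total disagreement driven entirely by noise-level differences among unimportant tokens, whereas Pearson-r reflects that both methods assign essentially the same importance profile.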

Cite


APA

Jukić, J., Tutek, M., & Šnajder, J. (2023). Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 9147–9162). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.582
