An Empirical Study on Explanations in Out-of-Domain Settings


Abstract

Recent work in Natural Language Processing has focused on developing approaches that extract faithful explanations, either by identifying the most important tokens in the input (i.e. post-hoc explanations) or by designing inherently faithful models that first select the most important tokens and then use them to predict the correct label (i.e. select-then-predict models). Currently, these approaches are largely evaluated on in-domain settings. Yet, little is known about how post-hoc explanations and inherently faithful models perform in out-of-domain settings. In this paper, we conduct an extensive empirical study that examines: (1) the out-of-domain faithfulness of post-hoc explanations generated by five feature attribution methods; and (2) the out-of-domain performance of two inherently faithful models over six datasets. Contrary to our expectations, results show that in many cases out-of-domain post-hoc explanation faithfulness, measured by sufficiency and comprehensiveness, is higher than in-domain. We find this misleading and suggest using a random baseline as a yardstick for evaluating post-hoc explanation faithfulness. Our findings also show that select-then-predict models demonstrate predictive performance in out-of-domain settings comparable to that of full-text trained models.
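The sufficiency and comprehensiveness metrics mentioned above follow the definitions popularized by the ERASER benchmark: sufficiency measures the drop in model confidence when keeping only the rationale tokens (lower is better), while comprehensiveness measures the drop when removing them (higher is better). The sketch below illustrates both; `predict_proba` is a hypothetical stand-in for any classifier returning class probabilities, implemented here as a toy cue-word scorer for illustration only.

```python
# Hedged sketch of ERASER-style faithfulness metrics.
# `predict_proba` is a hypothetical placeholder, NOT the paper's models.

def predict_proba(tokens):
    # Toy classifier: confidence in the positive class grows with the
    # number of positive cue words present in the input.
    cues = {"great", "excellent", "good"}
    score = sum(t in cues for t in tokens)
    return min(0.5 + 0.15 * score, 0.99)

def sufficiency(tokens, rationale):
    # Confidence drop when keeping ONLY the rationale tokens.
    # Lower is better: a faithful rationale alone should suffice.
    return predict_proba(tokens) - predict_proba(rationale)

def comprehensiveness(tokens, rationale):
    # Confidence drop when REMOVING the rationale tokens.
    # Higher is better: deleting a faithful rationale should hurt.
    remainder = [t for t in tokens if t not in set(rationale)]
    return predict_proba(tokens) - predict_proba(remainder)

text = ["the", "film", "was", "great", "and", "excellent"]
rationale = ["great", "excellent"]  # e.g. top tokens from an attribution method

print(round(sufficiency(text, rationale), 2))        # 0.0: rationale alone suffices
print(round(comprehensiveness(text, rationale), 2))  # 0.3: removing it hurts
```

The paper's point about a random baseline follows naturally: a randomly chosen rationale can also score well on these metrics out-of-domain, so scores should be judged relative to that baseline rather than in absolute terms.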

Citation (APA)

Chrysostomou, G., & Aletras, N. (2022). An Empirical Study on Explanations in Out-of-Domain Settings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 6920–6938). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.477
