Recently, state-of-the-art NLP models have gained an increasing syntactic and semantic understanding of language, and explanation methods are crucial for understanding their decisions. Occlusion is a well-established method that provides explanations on discrete language data, e.g., by removing a language unit from an input and measuring the impact on a model's decision. We argue that current occlusion-based methods often produce invalid or syntactically incorrect language data, neglecting the improved abilities of recent NLP models. Furthermore, gradient-based explanation methods disregard the discrete distribution of data in NLP. Thus, we propose OLM: a novel explanation method that combines occlusion and language models to sample valid and syntactically correct replacements with high likelihood, given the context of the original input. We lay out a theoretical foundation that alleviates these weaknesses of other explanation methods in NLP and provide results that underline the importance of considering data likelihood in occlusion-based explanation.
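The following is a minimal sketch of the OLM idea described above, not the authors' implementation: instead of simply deleting a token, the token of interest is resampled from a masked language model and the classifier's prediction on the original input is compared with the expected prediction over the sampled replacements. Model names, the whitespace tokenization, and the number of samples are illustrative assumptions using the Hugging Face `transformers` pipelines.

```python
# Hedged sketch of OLM-style occlusion with a masked language model.
# Assumptions: bert-base-uncased as the LM, the default sentiment-analysis
# pipeline as the classifier, and crude whitespace tokenization.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
classifier = pipeline("sentiment-analysis")


def score(text, target_label):
    """Probability the classifier assigns to target_label for this text."""
    out = classifier(text)[0]
    return out["score"] if out["label"] == target_label else 1.0 - out["score"]


def olm_relevance(tokens, position, target_label, num_samples=5):
    """Relevance of tokens[position]: classifier score on the original input
    minus the expected score when the token is resampled from the LM."""
    orig_score = score(" ".join(tokens), target_label)

    # Mask the token of interest and ask the LM for likely replacements.
    masked = tokens.copy()
    masked[position] = fill_mask.tokenizer.mask_token
    candidates = fill_mask(" ".join(masked), top_k=num_samples)

    # Expected classifier score under the LM's replacement distribution,
    # weighted by each candidate's LM probability.
    total_prob, expected = 0.0, 0.0
    for cand in candidates:
        replaced = tokens.copy()
        replaced[position] = cand["token_str"]
        expected += cand["score"] * score(" ".join(replaced), target_label)
        total_prob += cand["score"]

    return orig_score - expected / total_prob


# Example: how much does "great" contribute to the POSITIVE prediction?
print(olm_relevance("the movie was great".split(), 3, "POSITIVE"))
```

Because replacements are drawn from the language model conditioned on the surrounding context, the perturbed inputs remain likely, syntactically valid sentences rather than the truncated or ungrammatical inputs produced by plain token deletion.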
Harbecke, D., & Alt, C. (2020). Considering likelihood in NLP classification explanations with occlusion and language modeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop (pp. 111–117). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-srw.16