Interpreting Neural Networks with Nearest Neighbors

26 citations · 165 Mendeley readers

Abstract

Local model interpretation methods explain individual predictions by assigning an importance value to each input feature. This value is often determined by measuring the change in confidence when a feature is removed. However, the confidence of neural networks is not a robust measure of model uncertainty. This issue makes reliably judging the importance of the input features difficult. We address this by changing the test-time behavior of neural networks using Deep k-Nearest Neighbors. Without harming text classification accuracy, this algorithm provides a more robust uncertainty metric which we use to generate feature importance values. The resulting interpretations better align with human perception than baseline methods. Finally, we use our interpretation method to analyze model predictions on dataset annotation artifacts.
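The abstract describes a leave-one-out style interpretation: each word's importance is the drop in the model's uncertainty measure when that word is removed, with the softmax confidence replaced by a Deep k-Nearest Neighbors conformity score (roughly, the agreement between the predicted label and the labels of the nearest training examples in representation space). The following Python sketch illustrates that idea under stated assumptions; it is not the authors' implementation, and embed_fn, train_reps, and train_labels are hypothetical stand-ins for a trained classifier's hidden representations and training data.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def conformity(rep, predicted_label, nn_index, train_labels):
        # Fraction of the k nearest training examples (in representation
        # space) whose label matches the model's predicted label.
        _, idx = nn_index.kneighbors(rep.reshape(1, -1))
        return np.mean(train_labels[idx[0]] == predicted_label)

    def leave_one_out_importance(tokens, embed_fn, predicted_label,
                                 train_reps, train_labels, k=25):
        # Importance of token i = conformity(full input) minus conformity of
        # the input with token i removed; larger drops mean more important
        # words.
        nn_index = NearestNeighbors(n_neighbors=k).fit(train_reps)
        base = conformity(embed_fn(tokens), predicted_label,
                          nn_index, train_labels)
        return [
            base - conformity(embed_fn(tokens[:i] + tokens[i + 1:]),
                              predicted_label, nn_index, train_labels)
            for i in range(len(tokens))
        ]

The full Deep k-Nearest Neighbors algorithm aggregates neighbor agreement across several hidden layers; the single-representation version above is simplified for brevity.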

Citation (APA)

Wallace, E., Feng, S., & Boyd-Graber, J. (2018). Interpreting Neural Networks with Nearest Neighbors. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (pp. 136–144). Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-5416
