Local Interpretations for Explainable Natural Language Processing: A Survey

  • Luo S
  • Ivison H
  • Han S
  • et al.

Abstract

As the use of deep learning techniques has grown across various fields over the past decade, concerns about the opacity of these black-box models have also grown, leading to an increased focus on transparency in deep learning models. This work investigates various methods for improving the interpretability of deep neural networks on Natural Language Processing (NLP) tasks, including machine translation and sentiment analysis. We begin with a comprehensive discussion of the definition of the term interpretability and its various aspects. The methods collected and summarised in this survey cover only local interpretation and are divided into three categories: (1) interpreting the model’s predictions through related input features; (2) interpreting through natural language explanations; (3) probing the hidden states of models and word representations.
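To make category (1) concrete, below is a minimal, self-contained sketch of one common input-feature attribution technique, gradient-x-input saliency, applied to a toy sentiment classifier. The model, vocabulary, and resulting scores are illustrative assumptions for this sketch and are not taken from the survey itself.

```python
# A minimal sketch of input-feature attribution via gradient-x-input saliency.
# The toy vocabulary and classifier below are assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and a single-layer "sentiment" classifier (assumed setup).
vocab = {"<pad>": 0, "the": 1, "movie": 2, "was": 3, "great": 4, "boring": 5}
embed = nn.Embedding(len(vocab), 16)
classifier = nn.Linear(16, 2)  # two classes: negative / positive

tokens = ["the", "movie", "was", "great"]
ids = torch.tensor([[vocab[t] for t in tokens]])

# Forward pass, keeping gradients on the embedding output.
emb = embed(ids)                      # shape: (1, seq_len, 16)
emb.retain_grad()
logits = classifier(emb.mean(dim=1))  # mean-pool over tokens, then classify
logits[0, 1].backward()               # gradient of the "positive" logit

# Gradient-x-input saliency: one relevance score per input token.
saliency = (emb.grad * emb).sum(dim=-1).squeeze(0)
for tok, score in zip(tokens, saliency.tolist()):
    print(f"{tok:>8s}  {score:+.4f}")
```

Tokens with larger-magnitude scores are the input features the prediction is most sensitive to; surveys of this category compare such gradient-based scores with perturbation- and attention-based alternatives.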

Cite

CITATION STYLE

APA

Luo, S., Ivison, H., Han, S. C., & Poon, J. (2024). Local Interpretations for Explainable Natural Language Processing: A Survey. ACM Computing Surveys, 56(9), 1–36. https://doi.org/10.1145/3649450
