Explanation-based human debugging of NLP models: A survey

36 citations · 108 Mendeley readers

Abstract

Debugging a machine learning model is hard since the bug usually involves the training data and the learning process. This becomes even harder for an opaque deep learning model if we have no clue about how the model actually works. In this survey, we review papers that exploit explanations to enable humans to give feedback and debug NLP models. We call this problem explanation-based human debugging (EBHD). In particular, we categorize and discuss existing work along three dimensions of EBHD (the bug context, the workflow, and the experimental setting), compile findings on how EBHD components affect the feedback providers, and highlight open problems that could be future research directions.

Citation (APA)

Lertvittayakumjorn, P., & Toni, F. (2021). Explanation-based human debugging of NLP models: A survey. Transactions of the Association for Computational Linguistics, 9, 1508–1528. https://doi.org/10.1162/tacl_a_00440
