Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers

Citations: 35 · Readers (Mendeley): 110

Abstract

To build an interpretable neural text classifier, most prior work has focused on designing inherently interpretable models or finding faithful explanations. A new line of work on improving model interpretability has only recently started, and many existing methods require either prior information or human annotations as additional inputs during training. To address this limitation, we propose the variational word mask (VMASK) method, which automatically learns task-specific important words and filters out irrelevant information during classification, ultimately improving the interpretability of model predictions. The proposed method is evaluated with three neural text classifiers (CNN, LSTM, and BERT) on seven benchmark text classification datasets. Experiments show the effectiveness of VMASK in improving both model prediction accuracy and interpretability.

Citation (APA)

Chen, H., & Ji, Y. (2020). Learning variational word masks to improve the interpretability of neural text classifiers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 4236–4251). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.347
