Rationalizing neural predictions

408 citations
972 Mendeley readers

Abstract

Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications (rationales) that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms an attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task.
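The abstract only sketches the architecture. Below is a minimal PyTorch sketch of how the two components and the rationale regularizer could fit together, assuming the paper's independent Bernoulli selection over tokens, a squared-error prediction loss, and a REINFORCE-style gradient for the non-differentiable sampling step. All module sizes, the hyperparameters lam1 and lam2, and the training_step helper are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the generator-encoder rationale framework.
# Sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Scores each token, defining p(z_t = 1 | x), the rationale distribution."""
    def __init__(self, vocab_size, emb_dim=50, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        h, _ = self.rnn(self.emb(x))
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, seq_len)

class Encoder(nn.Module):
    """Predicts the target from the masked (rationale-only) input."""
    def __init__(self, vocab_size, emb_dim=50, hidden=64, n_outputs=1):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_outputs)

    def forward(self, x, z):                   # z: (batch, seq_len) binary mask
        e = self.emb(x) * z.unsqueeze(-1)      # zero out unselected tokens
        _, h = self.rnn(e)
        return self.out(h[-1])

def training_step(gen, enc, x, y, lam1=0.01, lam2=0.01):
    probs = gen(x)
    z = torch.bernoulli(probs).detach()        # sample a candidate rationale
    pred = enc(x, z).squeeze(-1)
    mse = (pred - y) ** 2                      # per-example prediction loss
    sparsity = z.sum(dim=1)                    # favor short rationales
    coherence = (z[:, 1:] - z[:, :-1]).abs().sum(dim=1)  # favor contiguous ones
    cost = mse + lam1 * sparsity + lam2 * coherence      # per-example cost
    # Encoder trains on the usual pathwise gradient; the generator gets a
    # REINFORCE-style gradient because sampling z is not differentiable.
    logp = (z * torch.log(probs + 1e-8)
            + (1 - z) * torch.log(1 - probs + 1e-8)).sum(dim=1)
    loss = (mse + cost.detach() * logp).mean()
    loss.backward()
    return cost.mean().item()

# Illustrative usage with random data (vocab size and shapes are assumptions):
gen, enc = Generator(1000), Encoder(1000)
opt = torch.optim.Adam(list(gen.parameters()) + list(enc.parameters()), lr=1e-3)
x = torch.randint(0, 1000, (8, 40))            # batch of token-id sequences
y = torch.rand(8)                              # sentiment scores in [0, 1]
opt.zero_grad()
avg_cost = training_step(gen, enc, x, y)
opt.step()
```

The design choice mirrored here is that the generator never sees rationale labels: the sampled mask is scored only by the downstream prediction cost plus the sparsity and coherence penalties, which is what lets rationales emerge without direct supervision.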

Citation (APA)

Lei, T., Barzilay, R., & Jaakkola, T. (2016). Rationalizing neural predictions. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 107–117). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1011
