Iterative Recursive Attention Model for Interpretable Sequence Classification


Abstract

Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models offer limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model, which constructs incremental representations of input data by reusing the results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while obtaining performance close to the state of the art.
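The core idea of reusing previously computed queries can be sketched as follows. This is a minimal, hedged illustration of iterative attention, not the paper's exact equations: at each step the model attends over the token representations with the current query, then folds the attended summary back into the query for the next step. All names (`iterative_attention`, the additive query update) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def iterative_attention(H, q0, steps=3):
    """Sketch of iterative attention (illustrative, not the paper's model).

    H:  (T, d) matrix of token representations.
    q0: (d,) initial query vector.
    At each step, the attended summary is folded back into the query,
    so later queries reuse the results of earlier ones.
    """
    q = q0
    weight_history = []  # per-step attention distributions, for inspection
    for _ in range(steps):
        scores = H @ q             # (T,) attention energies
        weights = softmax(scores)  # distribution over tokens
        summary = weights @ H      # (d,) weighted sum of token vectors
        q = q + summary            # reuse previous result as next query
        weight_history.append(weights)
    return q, weight_history
```

Inspecting `weight_history` step by step is what makes such a model interpretable: each iteration's attention distribution shows which input aspect was combined at that step.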

Citation (APA)

Tutek, M., & Šnajder, J. (2018). Iterative Recursive Attention Model for Interpretable Sequence Classification. In EMNLP 2018 - 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Proceedings of the 1st Workshop (pp. 249–257). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-5427
