Recognizing textual entailment with attentive reading and writing operations


Abstract

Inferring the entailment relations between natural language sentence pairs is fundamental to artificial intelligence. Recently, there has been rising interest in modeling this task with neural attentive models. However, existing models are limited in their ability to keep track of the attention history, because usually only a single vector is used to memorize past attention information. We argue that this history matters, based on our observation that potential alignment clues are not always centralized; instead, they may diverge substantially, which can cause long-range dependency problems. In this paper, we propose to complement the conventional attentive reading operations with two writing operations: forget and update. Instead of accumulating the attention history in a single vector, we write past attention information directly into the sentence representations, which yields a higher memory capacity for the attention history. Experiments on the Stanford Natural Language Inference (SNLI) corpus demonstrate the superior efficacy of our proposed architecture.
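To make the read/write idea concrete, below is a minimal PyTorch sketch of attentive reading combined with gated forget and update writes into the sentence memory. This is an illustrative reconstruction of the mechanism the abstract describes, not the authors' exact architecture: the gate parameterizations, the bilinear scoring function, and the weight-proportional write rule are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveReadWrite(nn.Module):
    """Attentive reading plus forget/update writes into the memory itself
    (a sketch; hypothetical parameterization, not the paper's exact model)."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn_score = nn.Bilinear(dim, dim, 1)   # reading: alignment scores
        self.forget_gate = nn.Linear(2 * dim, dim)   # writing: what to erase
        self.update_gate = nn.Linear(2 * dim, dim)   # writing: what to add

    def forward(self, memory: torch.Tensor, query: torch.Tensor):
        # memory: (batch, seq_len, dim)  premise token representations
        # query:  (batch, dim)           current hypothesis state
        seq_len = memory.size(1)
        q = query.unsqueeze(1).expand(-1, seq_len, -1)

        # --- attentive reading ---
        scores = self.attn_score(memory, q).squeeze(-1)             # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)                         # attention distribution
        read = torch.bmm(weights.unsqueeze(1), memory).squeeze(1)   # (batch, dim)

        # --- attentive writing: store attention history in the memory ---
        gate_in = torch.cat([memory, q], dim=-1)
        f = torch.sigmoid(self.forget_gate(gate_in))   # forget: how much to erase
        u = torch.tanh(self.update_gate(gate_in))      # update: new content to write
        # Positions are rewritten in proportion to their attention weight,
        # so past attention is recorded directly in the sentence representation
        # rather than squeezed into a single history vector.
        w = weights.unsqueeze(-1)                      # (batch, seq_len, 1)
        memory = (1 - w * f) * memory + w * f * u
        return read, memory
```

Calling the module repeatedly, once per hypothesis step, threads the rewritten memory through time, which is how diverging alignment clues could remain accessible without relying on one vector to summarize the entire attention history.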

Citation (APA)

Liu, L., Huo, H., Liu, X., Palade, V., Peng, D., & Chen, Q. (2018). Recognizing textual entailment with attentive reading and writing operations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10827 LNCS, pp. 847–860). Springer Verlag. https://doi.org/10.1007/978-3-319-91452-7_54
