Recognizing text entailment via bidirectional LSTM model with inner-attention


Abstract

In this paper, we propose a sentence encoding-based model for recognizing text entailment (RTE). In our approach, sentences are encoded in two stages. First, average pooling over the hidden states of a word-level bidirectional LSTM (biLSTM) produces a first-stage sentence representation. Second, an attention mechanism replaces average pooling over the same sentence to yield a better representation. Instead of using the target sentence to attend to words in the source sentence, we use a sentence's own first-stage representation to attend to the words within it, a mechanism we call "Inner-Attention". Experiments on the Stanford Natural Language Inference (SNLI) corpus demonstrate the effectiveness of the Inner-Attention mechanism: with fewer parameters, our model outperforms the best existing sentence encoding-based approach by a large margin.
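As a rough illustration of the two-stage encoder described in the abstract, the sketch below applies mean pooling over biLSTM hidden states and then lets the pooled vector attend over the sentence's own words. This is a minimal PyTorch sketch based only on the abstract, not the authors' released code; the layer sizes, the bilinear attention scoring, and all names are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InnerAttentionEncoder(nn.Module):
    """Two-stage sentence encoder: mean-pooled biLSTM, then inner (self) attention."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Bilinear-style scoring between the pooled sentence vector and each
        # hidden state; the exact parameterization in the paper may differ.
        self.attn = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)

    def forward(self, token_ids):                       # (batch, seq_len)
        h, _ = self.bilstm(self.embed(token_ids))       # (batch, seq_len, 2*hidden)
        # Stage 1: average pooling over the biLSTM hidden states.
        pooled = h.mean(dim=1)                          # (batch, 2*hidden)
        # Stage 2: the sentence's own pooled vector attends over its words.
        scores = torch.bmm(self.attn(h), pooled.unsqueeze(2))  # (batch, seq_len, 1)
        weights = F.softmax(scores, dim=1)
        return (h * weights).sum(dim=1)                 # (batch, 2*hidden)

For RTE, the premise and hypothesis would each be encoded this way and the two resulting vectors combined (e.g., concatenated) and fed to a three-way entailment classifier.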


Citation (APA)

Sun, C., Liu, Y., Jia, C., Liu, B., & Lin, L. (2017). Recognizing text entailment via bidirectional LSTM model with inner-attention. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10363 LNAI, pp. 448–457). Springer Verlag. https://doi.org/10.1007/978-3-319-63315-2_39
