Refining Raw Sentence Representations for Textual Entailment Recognition via Attention

18 citations · 86 Mendeley readers

Abstract

In this paper we present the model used by the team Rivercorners for the 2017 RepEval shared task. First, our model separately encodes a pair of sentences into variable-length representations using a bidirectional LSTM. It then creates fixed-length raw representations by means of simple aggregation functions, which are refined with an attention mechanism. Finally, it combines the refined representations of both sentences into a single vector used for classification. With this model we obtained test accuracies of 72.057% and 72.055% on the matched and mismatched evaluation tracks respectively, outperforming the LSTM baseline and achieving performance similar to that of a model relying on shared information between sentences (ESIM). With an ensemble, these accuracies increased to 72.247% and 72.827% respectively.
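The pipeline described in the abstract (encode, aggregate into a raw fixed-length vector, refine via attention, combine for classification) can be sketched in NumPy. This is a minimal illustration, not the paper's exact formulation: the hidden states stand in for BiLSTM outputs, mean pooling is one plausible aggregation function, and the dot-product attention and residual refinement are assumptions for the sake of the example.

```python
import numpy as np

def aggregate(H):
    # H: (timesteps, dim) stand-in for one sentence's BiLSTM hidden states.
    # Mean pooling gives a fixed-length "raw" representation.
    return H.mean(axis=0)

def refine(raw, H):
    # Attend over the sentence's own hidden states, using the raw
    # representation as the query (illustrative choice, not the paper's
    # exact attention equations).
    scores = H @ raw                      # (timesteps,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over timesteps
    context = weights @ H                 # (dim,) attention-weighted summary
    return raw + context                  # refined representation

def combine(u, v):
    # Common sentence-pair feature vector fed to the classifier.
    return np.concatenate([u, v, np.abs(u - v), u * v])

rng = np.random.default_rng(0)
H_premise = rng.normal(size=(7, 4))       # 7 tokens, dim 4
H_hypothesis = rng.normal(size=(5, 4))    # 5 tokens, dim 4

u = refine(aggregate(H_premise), H_premise)
v = refine(aggregate(H_hypothesis), H_hypothesis)
features = combine(u, v)                  # shape (4 * dim,)
```

In the full model, `features` would be passed through a feed-forward classifier to predict entailment, contradiction, or neutral.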

Citation (APA)

Balazs, J. A., Marrese-Taylor, E., Loyola, P., & Matsuo, Y. (2017). Refining Raw Sentence Representations for Textual Entailment Recognition via Attention. In RepEval 2017 - 2nd Workshop on Evaluating Vector-Space Representations for NLP, Proceedings of the Workshop (pp. 51–55). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-5310
