Shortcut-Stacked Sentence Encoders for Multi-Domain Inference


Abstract

We present a simple sequential sentence encoder for multi-domain natural language inference. Our encoder is based on stacked bidirectional LSTM-RNNs with shortcut connections and fine-tuning of word embeddings. The overall supervised model uses the above encoder to encode two input sentences into two vectors, and then uses a classifier over the vector combination to label the relationship between these two sentences as that of entailment, contradiction, or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements over existing encoders on matched and mismatched multi-domain natural language inference (top single-model result in the EMNLP RepEval 2017 Shared Task (Nangia et al., 2017)). Moreover, they achieve the new state-of-the-art encoding result on the original SNLI dataset (Bowman et al., 2015).
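The wiring described above can be sketched in plain Python. This is a toy illustration, not the authors' implementation: `toy_layer` stands in for a real BiLSTM layer, and the final `[v1; v2; |v1 - v2|; v1 * v2]` combination is a common NLI choice assumed here, since the abstract only says "vector combination". The key idea shown is the shortcut connection: every layer receives the original word embeddings concatenated with the outputs of all earlier layers.

```python
# Hedged sketch of shortcut-stacked sentence encoding.
# `toy_layer` is a hypothetical stand-in for a bidirectional LSTM layer.

def toy_layer(seq, dim_out):
    # Stand-in for a BiLSTM: maps each timestep vector to dim_out values.
    return [[sum(vec) + j for j in range(dim_out)] for vec in seq]

def shortcut_stacked_encode(embeddings, layer_dims):
    """Each layer's input is the concatenation of the word embeddings and
    ALL previous layers' outputs (the 'shortcut' connections)."""
    inputs, outputs = embeddings, []
    for dim in layer_dims:
        outputs.append(toy_layer(inputs, dim))
        # Shortcut: next layer sees embeddings + every earlier layer's output.
        inputs = [emb + sum((out[t] for out in outputs), [])
                  for t, emb in enumerate(embeddings)]
    # Row-wise max pooling over time yields the fixed-size sentence vector.
    last = outputs[-1]
    return [max(step[i] for step in last) for i in range(len(last[0]))]

def combine(v1, v2):
    # Assumed vector combination for the classifier (not stated in the
    # abstract): [v1; v2; |v1 - v2|; v1 * v2].
    return (v1 + v2
            + [abs(a - b) for a, b in zip(v1, v2)]
            + [a * b for a, b in zip(v1, v2)])
```

In the real model, the combined vector is then fed to an MLP classifier over the three labels (entailment, contradiction, neutral).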

Citation (APA)
Nie, Y., & Bansal, M. (2017). Shortcut-Stacked Sentence Encoders for Multi-Domain Inference. In RepEval 2017 - 2nd Workshop on Evaluating Vector-Space Representations for NLP, Proceedings of the Workshop (pp. 41–45). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-5308
