Syntax-Aware Attention for Natural Language Inference with Phrase-Level Matching

Abstract

Natural language inference (NLI) aims to predict whether a premise sentence entails a hypothesis sentence. Models based on tree structures have shown promising results on this task, but their performance still falls below that of sequential models. In this paper, we present a syntax-aware attention model for NLI that allows phrase-level matching between two sentences. We design a tree-structured semantic composition function that builds phrase representations according to syntactic trees. We then introduce cross-sentence attention to learn interaction information between the phrase-level representations of the two sentences. Moreover, we explore a self-attention mechanism that enhances the semantic representations by capturing context from the syntactic tree. Experimental results on the SNLI and SciTail datasets demonstrate that our model captures NLI more precisely and significantly improves performance.
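To make the two core ideas of the abstract concrete, the following is a minimal sketch (not the authors' implementation) of (1) composing phrase vectors bottom-up over a binary syntactic tree and (2) cross-sentence attention over the resulting phrase representations. The names TreeComposer and phrase_cross_attention, the gated composition function, and the toy trees are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreeComposer(nn.Module):
    """Composes a parent phrase vector from its two children in a binary parse tree."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def compose(self, left, right):
        # Simple non-linear composition; a TreeLSTM-style cell is a common alternative.
        return torch.tanh(self.proj(torch.cat([left, right], dim=-1)))

    def forward(self, word_vecs, tree):
        # `tree` is a nested tuple of word indices, e.g. ((0, 1), (2, 3)).
        # Returns all node (phrase) vectors, leaves included, root last.
        phrases = []
        def build(node):
            if isinstance(node, int):
                vec = word_vecs[node]
            else:
                vec = self.compose(build(node[0]), build(node[1]))
            phrases.append(vec)
            return vec
        build(tree)
        return torch.stack(phrases)  # (num_phrases, dim)

def phrase_cross_attention(premise_phrases, hypothesis_phrases):
    """Soft-aligns each premise phrase to hypothesis phrases, and vice versa."""
    scores = premise_phrases @ hypothesis_phrases.t()           # (P, H)
    p2h = F.softmax(scores, dim=1) @ hypothesis_phrases         # (P, dim)
    h2p = F.softmax(scores.t(), dim=1) @ premise_phrases        # (H, dim)
    return p2h, h2p

# Toy usage with random word vectors and tiny parse trees.
dim = 8
composer = TreeComposer(dim)
premise_words, hypothesis_words = torch.randn(4, dim), torch.randn(3, dim)
premise_phrases = composer(premise_words, ((0, 1), (2, 3)))
hypothesis_phrases = composer(hypothesis_words, (0, (1, 2)))
p2h, h2p = phrase_cross_attention(premise_phrases, hypothesis_phrases)
print(p2h.shape, h2p.shape)  # torch.Size([7, 8]) torch.Size([5, 8])
```

The attended vectors p2h and h2p would then typically be compared with the original phrase vectors and aggregated for the three-way entailment classification; those downstream steps are omitted here.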

Citation (APA)

Liu, M., Wang, Y., Zhang, Y., Xu, J., & Chen, Y. (2019). Syntax-Aware Attention for Natural Language Inference with Phrase-Level Matching. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11856 LNAI, pp. 156–168). Springer. https://doi.org/10.1007/978-3-030-32381-3_13
