Collaborative Attention Network for Natural Language Inference

Abstract

Attention mechanisms have recently shown promising performance on a variety of natural language processing tasks, including natural language inference. We propose a collaborative attention mechanism based on structured self-attention and decomposable attention, in which the two components mutually benefit each other and provide both dependent and independent information about the sentence pairs. The model performs well on natural language inference tasks while retaining a relatively lightweight structure. Experiments on the SNLI dataset indicate that the approach improves accuracy over previously proposed methods and over each of the two individual models, implying that it learns a better representation of textual semantics.
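To make the two building blocks concrete, below is a minimal NumPy sketch of the component mechanisms the abstract combines: structured self-attention (Lin et al., 2017), which summarizes one sentence independently, and decomposable attention (Parikh et al., 2016), which soft-aligns the two sentences against each other. The weight shapes and function names here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structured_self_attention(H, W1, W2):
    # H: (n, d) token representations of a single sentence.
    # A: (r, n) — r attention hops over the n tokens, each a distribution.
    # Returns an (r, d) sentence matrix, independent of the other sentence.
    A = softmax(W2 @ np.tanh(W1 @ H.T), axis=-1)
    return A @ H

def decomposable_attention(Ha, Hb):
    # Soft-align each token of sentence a with sentence b and vice versa,
    # yielding pair-dependent representations.
    E = Ha @ Hb.T                       # (na, nb) alignment scores
    beta = softmax(E, axis=1) @ Hb      # b-aware view of each token of a
    alpha = softmax(E.T, axis=1) @ Ha   # a-aware view of each token of b
    return beta, alpha

# Illustrative shapes only (d=8 hidden size, r=4 hops are assumptions).
rng = np.random.default_rng(0)
Ha, Hb = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
Ma = structured_self_attention(Ha, W1, W2)   # (4, 8) independent summary
beta, alpha = decomposable_attention(Ha, Hb) # pair-dependent views
```

A collaborative model in the spirit of the abstract would feed both the independent sentence matrices and the aligned pair-dependent views into a downstream classifier; how the paper fuses them is not specified in this abstract.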

APA

Zhang, S., Ma, Y., Li, S., & Sun, W. (2020). Collaborative Attention Network for Natural Language Inference. In Lecture Notes in Electrical Engineering (Vol. 571 LNEE, pp. 2343–2349). Springer. https://doi.org/10.1007/978-981-13-9409-6_284
