Attention mechanisms have recently shown promising performance on a variety of natural language processing tasks, including natural language inference. We propose a collaborative attention mechanism based on structured self-attention and decomposable attention, in which the two components mutually benefit each other and provide both dependent and independent information about the sentence pairs. The model performs well on natural language inference tasks while having a relatively lightweight structure. Experiments on the SNLI dataset indicate that the approach improves accuracy over previously proposed methods and over each of the two individual models, implying that it learns a better representation of textual semantics.
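The abstract does not give implementation details, but the two components it names are well known: structured self-attention produces a multi-row, intra-sentence ("independent") representation, while decomposable attention soft-aligns the two sentences against each other ("dependent" information). The sketch below illustrates both building blocks in NumPy; all dimensions, weight names, and the random inputs are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structured_self_attention(H, W1, W2):
    # H: (n, d) token states of one sentence.
    # A: (r, n) attention over tokens, one row per extracted aspect.
    # Returns an (r, d) sentence matrix (independent of the other sentence).
    A = softmax(W2 @ np.tanh(W1 @ H.T), axis=-1)
    return A @ H

def decomposable_cross_attention(Ha, Hb):
    # Soft-align each token of sentence a with sentence b and vice versa,
    # giving representations that depend on the paired sentence.
    E = Ha @ Hb.T                       # (na, nb) alignment scores
    beta = softmax(E, axis=1) @ Hb      # (na, d): b aligned to each token of a
    alpha = softmax(E, axis=0).T @ Ha   # (nb, d): a aligned to each token of b
    return beta, alpha

# Toy inputs (dimensions are assumptions for illustration only).
rng = np.random.default_rng(0)
d, da, r = 8, 4, 2
Ha = rng.normal(size=(5, d))            # 5 tokens in sentence a
Hb = rng.normal(size=(6, d))            # 6 tokens in sentence b
W1 = rng.normal(size=(da, d))
W2 = rng.normal(size=(r, da))

Ma = structured_self_attention(Ha, W1, W2)          # independent view of a
beta, alpha = decomposable_cross_attention(Ha, Hb)  # dependent views
print(Ma.shape, beta.shape, alpha.shape)            # (2, 8) (5, 8) (6, 8)
```

A collaborative model along the lines the abstract describes would combine both kinds of features (for example by concatenation) before the classifier; the exact fusion used by the authors is not specified here.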
Zhang, S., Ma, Y., Li, S., & Sun, W. (2020). Collaborative Attention Network for Natural Language Inference. In Lecture Notes in Electrical Engineering (Vol. 571 LNEE, pp. 2343–2349). Springer. https://doi.org/10.1007/978-981-13-9409-6_284