Mask Attention Networks: Rethinking and Strengthen Transformer

Abstract

Transformer is an attention-based neural network consisting of two sublayers, namely the Self-Attention Network (SAN) and the Feed-Forward Network (FFN). Existing research has explored enhancing the two sublayers separately to improve Transformer's capability for text representation. In this paper, we present a novel understanding of SAN and FFN as Mask Attention Networks (MANs) and show that they are two special cases of MANs with static mask matrices. However, these static mask matrices limit the capability for localness modeling in text representation learning. We therefore introduce a new layer named the Dynamic Mask Attention Network (DMAN), which uses a learnable mask matrix to model localness adaptively. To incorporate the advantages of DMAN, SAN, and FFN, we propose a sequential layered structure that combines the three types of layers. Extensive experiments on various tasks, including neural machine translation and text summarization, demonstrate that our model outperforms the original Transformer.
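To make the unified MAN view concrete, the following is a minimal sketch (not the authors' code) of attention whose weights are modulated by a mask matrix M, assuming the mask rescales the exponentiated scores before normalization. Under this reading, SAN corresponds to an all-ones mask, FFN to an identity mask (each token attends only to itself), and DMAN to a learnable mask. All names and shapes here are illustrative.

```python
# Hypothetical sketch of the Mask Attention Network (MAN) view described in the
# abstract; variable names, shapes, and the exact masking rule are assumptions.
import torch


def mask_attention(q, k, v, mask):
    """Attention whose weights are reweighted by a mask matrix.

    q, k, v: (batch, seq_len, d_model); mask: (seq_len, seq_len), entries in [0, 1].
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5               # (batch, L, L)
    scores = scores - scores.max(dim=-1, keepdim=True).values  # numerical stability
    weights = mask * torch.exp(scores)                         # reweight by the mask
    weights = weights / weights.sum(dim=-1, keepdim=True)      # renormalize rows
    return weights @ v


seq_len, d_model = 8, 64
q = k = v = torch.randn(2, seq_len, d_model)

san_mask = torch.ones(seq_len, seq_len)   # SAN: every token attends to every token
ffn_mask = torch.eye(seq_len)             # FFN: each token attends only to itself
# DMAN (illustrative): a learnable mask, here a free parameter squashed to [0, 1]
dman_mask = torch.sigmoid(torch.nn.Parameter(torch.zeros(seq_len, seq_len)))

out_san = mask_attention(q, k, v, san_mask)
out_ffn = mask_attention(q, k, v, ffn_mask)
out_dman = mask_attention(q, k, v, dman_mask)
```

In this sketch, the only difference between the three layer types is the mask matrix passed in, which is the point of the MAN framing; how the paper actually parameterizes the dynamic mask is not reproduced here.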

Citation (APA)
Fan, Z., Gong, Y., Liu, D., Wei, Z., Wang, S., Jiao, J., … Huang, X. (2021). Mask Attention Networks: Rethinking and Strengthen Transformer. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 1692–1701). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.135
