Multi-level head-wise match and aggregation in transformer for textual sequence matching

Abstract

The Transformer has been successfully applied to many natural language processing tasks. However, for textual sequence matching, simply matching the single vector representations of a pair of sequences may introduce unnecessary noise. In this paper, we propose a new approach to sequence-pair matching with the Transformer that learns head-wise matching representations at multiple levels. Experiments show that the proposed approach achieves new state-of-the-art performance on multiple tasks that rely only on pre-computed sequence-vector representations, such as SNLI, MNLI-match, MNLI-mismatch, QQP, and SQuAD-binary.
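The paper's exact formulation is not reproduced in this abstract, but the core idea, comparing the two sequences head by head at every layer rather than through one pooled vector each, can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration: the class name HeadwiseMatcher, mean-pooling over tokens, the [a; b; a-b; a*b] matching features, and mean aggregation over heads and levels are common generic choices, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class HeadwiseMatcher(nn.Module):
    """Hypothetical sketch of multi-level head-wise matching.

    For every encoder layer and every attention head, compare the
    per-head pooled vectors of the two sequences, then aggregate the
    resulting match features across heads and levels.
    """
    def __init__(self, num_heads, head_dim, hidden=128, num_labels=3):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = head_dim
        # small MLP turns the concatenated per-head match features
        # [a; b; a-b; a*b] into a fixed-size match vector
        self.match_mlp = nn.Sequential(
            nn.Linear(4 * head_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, states_a, states_b):
        # states_a / states_b: one tensor per layer,
        # each of shape [batch, seq_len, num_heads * head_dim]
        feats = []
        for ha, hb in zip(states_a, states_b):
            b = ha.shape[0]
            # split the model dimension into per-head slices,
            # then mean-pool over tokens (a simplification; the paper
            # may use a different aggregation)
            ha = ha.view(b, ha.shape[1], self.num_heads, self.head_dim).mean(dim=1)
            hb = hb.view(b, hb.shape[1], self.num_heads, self.head_dim).mean(dim=1)
            # classic matching features per head
            m = torch.cat([ha, hb, ha - hb, ha * hb], dim=-1)
            feats.append(self.match_mlp(m))            # [batch, heads, hidden]
        # stack levels, then pool over levels and heads
        agg = torch.stack(feats, dim=1).mean(dim=(1, 2))
        return self.classifier(agg)                    # [batch, num_labels]

# usage with random per-layer hidden states (e.g. 4 layers, 8 heads)
layers, heads, hd = 4, 8, 32
model = HeadwiseMatcher(heads, hd)
seq_a = [torch.randn(2, 11, heads * hd) for _ in range(layers)]
seq_b = [torch.randn(2, 9, heads * hd) for _ in range(layers)]
logits = model(seq_a, seq_b)  # -> shape [2, 3]
```

The point of the head-wise decomposition is that individual heads tend to attend to different linguistic aspects, so matching them separately before aggregating can filter out noise that a single whole-vector comparison would mix in.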

Citation (APA)

Wang, S., Lan, Y., Tay, Y., Jiang, J., & Liu, J. (2020). Multi-level head-wise match and aggregation in transformer for textual sequence matching. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 9209–9216). AAAI press. https://doi.org/10.1609/aaai.v34i05.6458
