Composition, Attention, or Both?

Abstract

In this paper, we propose a novel architecture called Composition Attention Grammars (CAGs) that recursively compose subtrees into a single vector representation with a composition function and selectively attend to previous structural information with a self-attention mechanism. We investigate whether these two components, the composition function and the self-attention mechanism, can both induce human-like syntactic generalization. Specifically, we train language models (LMs) with and without these two components, with model sizes carefully controlled, and evaluate their syntactic generalization performance against six test circuits on the SyntaxGym benchmark. The results demonstrated that the composition function and the self-attention mechanism both play an important role in making LMs more human-like, and closer inspection of grammatical phenomena implied that the composition function allowed syntactic features, but not semantic features, to percolate into subtree representations.
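The abstract names two ingredients: a composition function that collapses the children of a subtree into one vector, and a self-attention mechanism over previously built structural representations. The sketch below is a minimal PyTorch illustration of these two ideas, not the authors' CAG implementation; the bidirectional-LSTM composition (in the style of RNNGs), the module names, and the toy dimensions are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CompositionFunction(nn.Module):
    """Compose a variable-length sequence of child vectors into a single
    subtree vector (assumed here to be an RNNG-style bidirectional LSTM)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.bilstm = nn.LSTM(hidden_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, children: torch.Tensor) -> torch.Tensor:
        # children: (batch, n_children, hidden_dim)
        outputs, _ = self.bilstm(children)
        half = outputs.size(-1) // 2
        # Final forward state and initial backward state summarize the span
        fwd = outputs[:, -1, :half]
        bwd = outputs[:, 0, half:]
        return torch.tanh(self.proj(torch.cat([fwd, bwd], dim=-1)))

class StructuralSelfAttention(nn.Module):
    """Selectively attend over previously built structural representations
    (composed subtrees and shifted tokens), with a causal mask."""
    def __init__(self, hidden_dim: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, n_heads, batch_first=True)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, t, hidden_dim)
        t = history.size(1)
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        out, _ = self.attn(history, history, history, attn_mask=mask)
        return out[:, -1]  # context vector for predicting the next action/word

# Toy usage: compose three child vectors, then attend over the history
if __name__ == "__main__":
    dim = 64
    compose = CompositionFunction(dim)
    attend = StructuralSelfAttention(dim)
    children = torch.randn(1, 3, dim)                      # children of one subtree
    subtree = compose(children)                            # (1, dim)
    history = torch.randn(1, 5, dim)                       # earlier structural states
    history = torch.cat([history, subtree.unsqueeze(1)], dim=1)
    context = attend(history)                              # (1, dim)
    print(subtree.shape, context.shape)
```

Ablating either module from such a model (while holding parameter counts fixed) corresponds to the controlled comparison described in the abstract.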

Citation (APA)

Yoshida, R., & Oseki, Y. (2022). Composition, Attention, or Both? In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5851–5863). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.428
