Real-World Compositional Generalization with Disentangled Sequence-to-Sequence Learning

Abstract

Compositional generalization is a basic mechanism of human language learning that current neural networks struggle with. The recently proposed Disentangled sequence-to-sequence model (Dangle) shows promising generalization capability by learning specialized encodings for each decoding step. We introduce two key modifications to this model which encourage more disentangled representations and improve its compute and memory efficiency, allowing us to tackle compositional generalization in a more realistic setting. Specifically, instead of adaptively re-encoding source keys and values at each time step, we disentangle their representations and only re-encode keys periodically, at some interval. Our new architecture achieves better generalization performance across existing tasks and datasets, as well as on a new machine translation benchmark we create by detecting naturally occurring compositional patterns relative to a training set. We show this methodology better emulates real-world requirements than artificial challenges.
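The re-encoding schedule described above can be sketched in plain Python. This is a hypothetical illustration of the control flow only (all names are illustrative, not the authors' implementation): value representations are encoded once and reused, while key representations are refreshed only every `interval` decoding steps.

```python
# Illustrative sketch: periodic key re-encoding vs. one-time value encoding.
# `encode` is a stand-in for a real encoder; it tags each token with the
# decoding step at which it was (re-)encoded, so we can see the schedule.

def encode(source, step):
    return [(tok, step) for tok in source]

def decode_with_periodic_reencoding(source, num_steps, interval):
    values = encode(source, step=0)   # values: encoded once, reused throughout
    keys = encode(source, step=0)
    used = []
    for t in range(num_steps):
        if t % interval == 0:         # keys: re-encoded only at the interval
            keys = encode(source, step=t)
        # record which key/value encodings this decoding step attends over
        used.append((keys[0][1], values[0][1]))
    return used

schedule = decode_with_periodic_reencoding(["a", "b"], num_steps=6, interval=3)
# keys are refreshed at steps 0 and 3; values always come from step 0
```

Compared with Dangle's per-step adaptive re-encoding, this schedule cuts the number of encoder passes by roughly a factor of `interval`, which is the source of the compute and memory savings the abstract refers to.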

Citation (APA)

Zheng, H., & Lapata, M. (2023). Real-World Compositional Generalization with Disentangled Sequence-to-Sequence Learning. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 1711–1725). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.108
