Learning Algebraic Recombination for Compositional Generalization


Abstract

Neural sequence models exhibit limited compositional generalization ability in semantic parsing tasks. Compositional generalization requires algebraic recombination, i.e., dynamically recombining structured expressions in a recursive manner. However, most previous studies concentrate mainly on recombining lexical units, which is an important but insufficient part of algebraic recombination. In this paper, we propose LEAR, an end-to-end neural model that learns algebraic recombination for compositional generalization. The key insight is to model the semantic parsing task as a homomorphism between a latent syntactic algebra and a semantic algebra, thus encouraging algebraic recombination. Specifically, we jointly learn two modules: a Composer for producing latent syntax, and an Interpreter for assigning semantic operations. Experiments on two realistic and comprehensive compositional generalization benchmarks demonstrate the effectiveness of our model. The source code is publicly available at https://github.com/microsoft/ContextualSP.
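The homomorphism idea can be illustrated with a toy sketch (this is not the authors' implementation; the tree types, lexicon, and operation table below are hypothetical stand-ins for the learned Composer and Interpreter). The key property is that the meaning of a composed expression depends only on the meanings of its parts and the semantic operation assigned to the node, so learned operations recombine to interpret unseen combinations.

```python
# Toy illustration (not the LEAR code) of interpretation as a homomorphism:
# a Composer builds a latent syntax tree, and an Interpreter assigns each
# node a semantic operation, so interpret(Node(op, l, r)) =
# OP[op](interpret(l), interpret(r)).
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    token: str

@dataclass
class Node:
    label: str            # latent syntactic label chosen by the Composer
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

# Hypothetical lexicon and operation table standing in for the Interpreter.
LEXICON = {"two": 2, "three": 3, "double": lambda x: 2 * x}
OPS = {"APPLY": lambda f, x: f(x), "ADD": lambda x, y: x + y}

def interpret(t: Tree):
    """Interpret a tree homomorphically: a node's meaning is a function
    only of its children's meanings and its semantic operation."""
    if isinstance(t, Leaf):
        return LEXICON[t.token]
    return OPS[t.label](interpret(t.left), interpret(t.right))

# "double three" composed as APPLY(double, three):
print(interpret(Node("APPLY", Leaf("double"), Leaf("three"))))  # 6
```

Because interpretation is compositional, the same `APPLY` and `ADD` operations generalize to recombinations never seen as whole expressions, e.g. `ADD(two, APPLY(double, three))`.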

Citation (APA)

Liu, C., An, S., Lin, Z., Liu, Q., Chen, B., Lou, J. G., … Zhang, D. (2021). Learning Algebraic Recombination for Compositional Generalization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1129–1144). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.97
