Compositional Generalization without Trees using Multiset Tagging and Latent Permutations

Abstract

Seq2seq models have been shown to struggle with compositional generalization in semantic parsing, i.e. generalizing to unseen compositions of phenomena that the model handles correctly in isolation. We phrase semantic parsing as a two-step process: we first tag each input token with a multiset of output tokens. Then we arrange the tokens into an output sequence using a new way of parameterizing and predicting permutations. We formulate predicting a permutation as solving a regularized linear program and we backpropagate through the solver. In contrast to prior work, our approach does not place a priori restrictions on possible permutations, making it very expressive. Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples. We also outperform non-tree-based models on structural generalization on the COGS benchmark. For the first time, we show that a model without an inductive bias provided by trees achieves high accuracy on generalization to deeper recursion depth.
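
The abstract does not spell out either component, but both steps lend themselves to a short illustration. Below is a minimal PyTorch sketch, under stated assumptions: multiset tagging is approximated by a per-token count classifier over the output vocabulary, and the relaxed permutation is obtained with Sinkhorn iterations, a standard entropy-regularized linear program over doubly-stochastic matrices through which gradients flow. The names (MultisetTagger, sinkhorn, max_count) and the pairwise ordering scores are illustrative stand-ins, not the paper's actual parameterization or solver.

    import torch
    import torch.nn as nn

    def sinkhorn(log_scores: torch.Tensor, n_iters: int = 30) -> torch.Tensor:
        # Alternate row and column normalization in log space. This solves an
        # entropy-regularized linear program over doubly-stochastic matrices
        # (relaxed permutations); every step is differentiable.
        for _ in range(n_iters):
            log_scores = log_scores - torch.logsumexp(log_scores, dim=-1, keepdim=True)
            log_scores = log_scores - torch.logsumexp(log_scores, dim=-2, keepdim=True)
        return log_scores.exp()

    class MultisetTagger(nn.Module):
        # Step 1 (hypothetical head): tag each input token with a multiset of
        # output tokens, modeled as a count in 0..max_count per vocabulary item.
        def __init__(self, hidden_dim: int, vocab_size: int, max_count: int = 3):
            super().__init__()
            self.vocab_size, self.max_count = vocab_size, max_count
            self.head = nn.Linear(hidden_dim, vocab_size * (max_count + 1))

        def forward(self, encodings: torch.Tensor) -> torch.Tensor:
            # encodings: (seq_len, hidden) -> count logits: (seq_len, vocab, max_count + 1)
            return self.head(encodings).view(-1, self.vocab_size, self.max_count + 1)

    # Step 2 (toy usage): relax the permutation that arranges the emitted tokens.
    seq_len, hidden = 5, 16
    encodings = torch.randn(seq_len, hidden)
    counts = MultisetTagger(hidden, vocab_size=100)(encodings).argmax(-1)
    pair_scores = torch.randn(seq_len, seq_len)  # placeholder ordering scores
    soft_perm = sinkhorn(pair_scores)            # rows and columns each sum to ~1
    print(counts.shape, soft_perm.sum(0), soft_perm.sum(1))

Because every Sinkhorn step is a differentiable normalization, the relaxed permutation can be trained end to end, which is the essence of "backpropagating through the solver". The paper's regularized linear program and its exact solver differ in detail from this stand-in.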

Citation (APA)

Lindemann, M., Koller, A., & Titov, I. (2023). Compositional Generalization without Trees using Multiset Tagging and Latent Permutations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 14488–14506). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.810
