Grounded Graph Decoding Improves Compositional Generalization in Question Answering

Abstract

Question answering models struggle to generalize to novel compositions of training patterns, such as longer sequences or more complex test structures. Current end-to-end models learn a flat input embedding which can lose input syntax context. Prior approaches improve generalization by learning permutation invariant models, but these methods do not scale to more complex train-test splits. We propose Grounded Graph Decoding, a method to improve compositional generalization of language representations by grounding structured predictions with an attention mechanism. Grounding enables the model to retain syntax information from the input, thereby significantly improving generalization over complex inputs. By predicting a structured graph containing conjunctions of query clauses, we learn a group invariant representation without making assumptions on the target domain. Our model significantly outperforms state-of-the-art baselines on the Compositional Freebase Questions (CFQ) dataset, a challenging benchmark for compositional generalization in question answering. Moreover, we effectively solve the MCD1 split with 98% accuracy. All source is available at https://github.com/gaiyu0/cfq.
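The core idea described in the abstract, conditioning each structured prediction on an attention-weighted summary of the input, can be sketched as a single grounded decoding step. This is a minimal illustration of attention-based grounding in general, not the paper's actual architecture; all names (`grounded_decode_step`, the weight matrices `W_q`, `W_k`, `W_v`) are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def grounded_decode_step(decoder_state, encoder_states, W_q, W_k, W_v):
    """One decoding step grounded in the input via attention.

    The graph-element prediction at this step is conditioned on a
    context vector computed over the encoder states, so syntactic
    information from the input question is retained (illustrative
    sketch, not the authors' implementation).
    """
    q = decoder_state @ W_q                   # query from decoder state
    k = encoder_states @ W_k                  # keys from input tokens
    v = encoder_states @ W_v                  # values from input tokens
    scores = q @ k.T / np.sqrt(k.shape[-1])   # scaled dot-product scores
    attn = softmax(scores)                    # attention over input tokens
    context = attn @ v                        # grounded context vector
    # The concatenated state would feed the graph-element classifier.
    return np.concatenate([decoder_state, context]), attn
```

The attention weights sum to one over the input tokens, so the predicted graph element is always tied back to specific positions in the question rather than a single flat embedding.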

Citation (APA)
Gai, Y., Jain, P., Zhang, W., Gonzalez, J., Song, D., & Stoica, I. (2021). Grounded Graph Decoding Improves Compositional Generalization in Question Answering. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 1829–1838). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.157
