Abstract
Although neural sequence-to-sequence models have been successfully applied to semantic parsing, they fail at compositional generalization, i.e., they are unable to systematically generalize to unseen compositions of seen components. Motivated by traditional semantic parsing, where compositionality is explicitly accounted for by symbolic grammars, we propose a new decoding framework that preserves the expressivity and generality of sequence-to-sequence models while featuring lexicon-style alignments and disentangled information processing. Specifically, we decompose decoding into two phases: an input utterance is first tagged with semantic symbols representing the meanings of individual words, and a sequence-to-sequence model then predicts the final meaning representation, conditioning on the utterance and the predicted tag sequence. Experimental results on three semantic parsing datasets show that the proposed approach consistently improves compositional generalization across model architectures, domains, and semantic formalisms.
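To make the two-phase decomposition concrete, the sketch below factors parsing as p(y | x) ≈ p(t | x) · p(y | x, t), where t is the word-level semantic tag sequence predicted in phase one. This is only a minimal illustration under stated assumptions: the abstract does not specify architectures, so the LSTM modules, the additive fusion of word and tag embeddings, greedy decoding, and all names (SemanticTagger, TagConditionedParser, parse) are hypothetical choices, not the authors' implementation.

```python
# Hypothetical sketch of the two-phase decoding framework described above.
# Only the tag-then-parse decomposition comes from the abstract; every
# architectural choice here is an assumption made for illustration.

import torch
import torch.nn as nn


class SemanticTagger(nn.Module):
    """Phase 1: label each input token with a semantic symbol."""

    def __init__(self, vocab_size, num_tags, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.LSTM(d_model, d_model // 2,
                               batch_first=True, bidirectional=True)
        self.out = nn.Linear(d_model, num_tags)

    def forward(self, tokens):                   # tokens: (batch, src_len)
        h, _ = self.encoder(self.embed(tokens))
        return self.out(h)                       # (batch, src_len, num_tags)


class TagConditionedParser(nn.Module):
    """Phase 2: seq2seq over the utterance and its predicted tag sequence."""

    def __init__(self, vocab_size, num_tags, tgt_vocab_size, d_model=256):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.tag_embed = nn.Embedding(num_tags, d_model)
        self.tgt_embed = nn.Embedding(tgt_vocab_size, d_model)
        # Encoder input fuses word and tag embeddings position-wise
        # (additive fusion is an assumption; the paper may differ).
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.decoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab_size)

    def forward(self, tokens, tags, tgt_in):
        enc_in = self.tok_embed(tokens) + self.tag_embed(tags)
        _, state = self.encoder(enc_in)
        dec_out, _ = self.decoder(self.tgt_embed(tgt_in), state)
        return self.out(dec_out)                 # (batch, tgt_len, tgt_vocab)


@torch.no_grad()
def parse(tagger, parser, tokens, bos_id, eos_id, max_len=50):
    """Greedy two-phase decoding: tag first, then generate the meaning
    representation conditioned on both the utterance and its tags."""
    tags = tagger(tokens).argmax(-1)             # hard tag assignment
    ys = [bos_id]
    for _ in range(max_len):
        logits = parser(tokens, tags, torch.tensor([ys]))
        nxt = int(logits[0, -1].argmax())
        if nxt == eos_id:
            break
        ys.append(nxt)
    return ys[1:]
```

At training time the two models could be fit with separate cross-entropy losses against gold tag sequences and gold meaning representations. The hard argmax over tags mirrors the abstract's conditioning on "the predicted tag sequence", though the paper's actual tagger-parser interaction may differ.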
Citation
Zheng, H., & Lapata, M. (2021). Compositional Generalization via Semantic Tagging. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 1022–1032). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.88